Face Recognition Using Double Dimension Reduction

In this paper, a new approach to face recognition is presented that achieves double dimension reduction, making the system computationally efficient while improving recognition results. In pattern recognition, the discriminative information of an image increases with resolution only up to a point; consequently, face recognition results improve with increasing face image resolution and level off once a certain resolution is reached. In the proposed model, an image decimation algorithm is first applied to the face image to reduce its dimension to the resolution level that yields the best recognition results. The Discrete Cosine Transform (DCT) is then applied to the face image because of its computational speed and feature extraction capability. A subset of low- to mid-frequency DCT coefficients that represents the face adequately and provides the best recognition results is retained. A trade-off among the decimation factor, the number of retained DCT coefficients, and the recognition rate is obtained with minimal computation. Preprocessing of the image is carried out to increase robustness against variations in pose and illumination level. The new model has been tested on several databases, including the ORL database, the Yale database, and a color database, and it performs considerably better than other techniques. The significance of the model is twofold: (1) dimension reduction to an effective and suitable face image resolution, and (2) retention of appropriate DCT coefficients to achieve the best recognition results under varying pose, intensity, and illumination.
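
As a rough illustration of the two reduction steps described above, the sketch below decimates a grayscale face image and then keeps a small block of low- to mid-frequency DCT coefficients as the feature vector. It assumes NumPy, SciPy, and scikit-image are available; the decimation factor and the number of retained coefficients are placeholder values, not the tuned settings from the paper.

```python
import numpy as np
from scipy.fft import dctn
from skimage.transform import resize

def double_dimension_reduction(face, decimation_factor=4, keep=20):
    """Decimate a grayscale face image, then retain a low/mid-frequency DCT block."""
    h, w = face.shape
    # Step 1: decimation -- reduce the image to a lower resolution.
    small = resize(face, (h // decimation_factor, w // decimation_factor),
                   anti_aliasing=True)
    # Step 2: 2-D DCT of the decimated image.
    coeffs = dctn(small, norm="ortho")
    # Keep only a keep x keep block of low-to-mid frequency coefficients.
    return coeffs[:keep, :keep].ravel()
```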

A Patricia-Tree Approach for Frequent Closed Itemsets

In this paper, we propose an adaptation of the Patricia-Tree for sparse datasets to generate non-redundant association rules. Using this adaptation, we can generate frequent closed itemsets, which are more compact than the frequent itemsets used in the Apriori approach. The adaptation has been evaluated on a set of benchmark datasets.
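
For concreteness, the closedness criterion behind frequent closed itemsets can be expressed as a small filter: an itemset is closed when no proper superset has the same support. The sketch below is a naive Python illustration of that criterion and does not reflect the Patricia-Tree structure itself.

```python
def closed_itemsets(frequent):
    """Keep only closed itemsets from a dict mapping frozensets to support counts."""
    return {itemset: support for itemset, support in frequent.items()
            if not any(itemset < other and support == frequent[other]
                       for other in frequent)}

# {a} has the same support as its superset {a, b}, so {a} is not closed.
freq = {frozenset("a"): 3, frozenset("ab"): 3, frozenset("b"): 4}
print(closed_itemsets(freq))  # {frozenset({'a', 'b'}): 3, frozenset({'b'}): 4}
```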

Dempster-Shafer's Approach for Autonomous Virtual Agent Navigation in Virtual Environments

This paper presents a solution for the behavioural animation of autonomous virtual agent navigation in virtual environments. We focus on using Dempster-Shafer's Theory of Evidence to develop a visual sensor for the virtual agent. The role of the visual sensor is to capture information about the virtual environment and identify which part of an obstacle can be seen from the position of the virtual agent. This information is required for the virtual agent to coordinate navigation in the virtual environment. The virtual agent uses a fuzzy controller as its navigation system and a fuzzy α-level method for action selection. The results clearly demonstrate that the path produced is reasonably smooth, despite some sharp turns, and does not diverge far from the potential shortest path. This indicates the benefit of our method, which produces more reliable and accurate paths during the navigation task.
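
The evidence-combination step at the heart of such a visual sensor can be sketched with Dempster's rule. The example below combines two hypothetical mass functions about whether a region ahead of the agent is free or blocked; the hypothesis names and mass values are illustrative only.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (frozenset hypothesis -> mass) with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb              # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources cannot be combined")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

m1 = {frozenset({"free"}): 0.6, frozenset({"free", "blocked"}): 0.4}
m2 = {frozenset({"blocked"}): 0.3, frozenset({"free", "blocked"}): 0.7}
print(dempster_combine(m1, m2))
```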

Solving Partially Monotone Problems with Neural Networks

In many applications, it is known a priori that the target function should satisfy certain constraints imposed by, for example, economic theory or a human decision maker. Here we consider partially monotone problems, where the target variable depends monotonically on some of the predictor variables but not all. We propose an approach to build partially monotone models based on the convolution of monotone neural networks and kernel functions. The results from simulations and a real case study on house pricing show that our approach performs significantly better than partially monotone linear models. Furthermore, the incorporation of partial monotonicity constraints not only leads to models that are in accordance with the decision maker's expertise, but also considerably reduces the model variance in comparison to standard neural networks with weight decay.
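
One simple way to make a network monotone in a chosen subset of inputs is to keep the corresponding weights positive. The sketch below, assuming PyTorch, constrains the weights on the monotone inputs (and the hidden-to-output weights) through a softplus; it illustrates the general idea of partial monotonicity rather than the exact convolution-of-networks construction used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartiallyMonotoneNet(nn.Module):
    """Non-decreasing in x_mono, unconstrained in x_free."""
    def __init__(self, n_mono, n_free, hidden=16):
        super().__init__()
        self.w_mono = nn.Parameter(torch.randn(hidden, n_mono) * 0.1)
        self.lin_free = nn.Linear(n_free, hidden)   # free inputs: ordinary weights
        self.w_out = nn.Parameter(torch.randn(1, hidden) * 0.1)
        self.b_out = nn.Parameter(torch.zeros(1))

    def forward(self, x_mono, x_free):
        # softplus keeps the monotone-input and output weights positive, so the
        # non-decreasing activation preserves monotonicity end to end.
        h = torch.sigmoid(x_mono @ F.softplus(self.w_mono).T + self.lin_free(x_free))
        return h @ F.softplus(self.w_out).T + self.b_out

net = PartiallyMonotoneNet(n_mono=2, n_free=3)
y = net(torch.rand(8, 2), torch.rand(8, 3))
```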

Face Texture Reconstruction for Illumination Variant Face Recognition

In illumination variant face recognition, existing methods that extract the face albedo as a light-normalized image may lose extensive facial detail because the lighting template is discarded. To address this, a novel approach for realistic facial texture reconstruction that combines the original image and the albedo image is proposed. First, light subspaces of different identities are established from the given reference face images; then, by projecting the original and albedo images into each light subspace, texture reference images with the corresponding lighting are reconstructed and two texture subspaces are formed. From the projections in the texture subspaces, facial texture under normal lighting can be synthesized. Because the original image is combined with the face albedo, facial details are preserved. In addition, image partitioning is applied to improve synthesis performance. Experiments on the Yale B and CMU PIE databases demonstrate that the algorithm outperforms the others both in image representation and in face recognition.

Skew Detection Technique for Binary Document Images based on Hough Transform

Document image processing has become an increasingly important technology in the automation of office documentation tasks. During document scanning, skew is inevitably introduced into the incoming document image. Since layout analysis and character recognition algorithms are generally very sensitive to page skew, skew detection and correction in document images are critical steps before layout analysis. In this paper, a novel skew detection method is presented for binary document images. The method considers selected characters of the text, which are subjected to thinning and the Hough transform to estimate the skew angle accurately. Several experiments have been conducted on various types of documents, including English documents, journals, textbooks, documents in different languages, documents with different fonts, and documents with different resolutions, to demonstrate the robustness of the proposed method. The experimental results show that the proposed method is more accurate than well-known existing methods.
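
A conventional Hough-based skew estimate, which the proposed method refines with character selection and thinning, can be sketched as follows (assuming scikit-image; the angle window and sign convention depend on the scanner and image coordinate system):

```python
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

def estimate_skew_angle(binary_doc, max_skew_deg=15):
    """Estimate the skew angle (degrees) of a binary document image.

    Text pixels are assumed to be 1 on a 0 background, and text lines are
    assumed to be roughly horizontal.
    """
    # Search only near the horizontal text-line direction (theta ~ 90 degrees
    # in the rho = x*cos(theta) + y*sin(theta) parametrization).
    thetas = np.deg2rad(np.linspace(90 - max_skew_deg, 90 + max_skew_deg, 301))
    accumulator, theta, rho = hough_line(binary_doc, theta=thetas)
    _, peak_angles, _ = hough_line_peaks(accumulator, theta, rho, num_peaks=20)
    # The median dominant direction, offset by 90 degrees, is the page skew.
    return np.rad2deg(np.median(peak_angles)) - 90.0
```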

Estimating Shortest Circuit Path Length Complexity

When binary decision diagrams are formed from uniformly distributed Monte Carlo data for a large number of variables, the complexity of the decision diagrams exhibits a predictable relationship to the number of variables and minterms. In the present work, a neural network model has been used to analyze the pattern of shortest path length for larger numbers of Monte Carlo data points. The neural model shows strong descriptive power for the ISCAS benchmark data, with an RMS error of 0.102 for the shortest path length complexity. Therefore, the model can be considered a method of predicting path length complexities; this is expected to lead to minimum time complexity of very large-scale integrated circuits and the related computer-aided design tools that use binary decision diagrams.

Context for Simplicity: A Basis for Context-aware Systems Based on the 3GPP Generic User Profile

The paper focuses on the area of context modeling with respect to the specification of context-aware systems supporting ubiquitous applications. The proposed approach, followed within the SIMPLICITY IST project, uses a high-level system ontology to derive context models for system components, which are consequently mapped to the system's physical entities. For the definition of user- and device-related context models in particular, the paper suggests a standards-based process consisting of an analysis phase using the Common Information Model (CIM) methodology, followed by an implementation phase that defines 3GPP-based components. The benefits of this approach are further illustrated by preliminary examples of XML grammars defining profiles, components, and component instances, coupled with descriptions of the respective ubiquitous applications.

A Video-based Algorithm for Moving Object Detection at Signalized Intersections

Mixed-traffic data (e.g., pedestrians, bicycles, and vehicles) at an intersection is one of the essential inputs for intersection design and traffic control. However, some data, such as pedestrian volume, cannot be directly collected by common detectors (e.g., inductive loop, sonar, and microwave sensors). In this paper, a video-based detection algorithm is proposed for mixed-traffic data collection at intersections using surveillance cameras. The algorithm is derived from the Gaussian Mixture Model (GMM) and uses a mergence time adjustment scheme to improve the traditional algorithm. Real-world video data were selected to test the algorithm. The results show that the proposed algorithm achieves faster processing and higher accuracy than the traditional algorithm. This indicates that the improved algorithm can be applied to detect mixed traffic at signalized intersections, even when conflicts occur.
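
A baseline Gaussian-mixture background subtraction pipeline of the kind the paper builds on can be sketched with OpenCV's stock MOG2 subtractor; the parameter values are illustrative, and the paper's mergence time adjustment itself is not shown.

```python
import cv2

def detect_moving_objects(video_path, min_area=300):
    """Yield (frame, bounding_boxes) for moving objects in intersection video."""
    cap = cv2.VideoCapture(video_path)
    # Per-pixel Gaussian mixture background model.
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                    detectShadows=True)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle noise
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
        yield frame, boxes
    cap.release()
```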

Data Envelopment Analysis under Uncertainty and Risk

Data Envelopment Analysis (DEA) is one of the most widely used techniques for evaluating the relative efficiency of a set of homogeneous decision making units. Traditionally, it assumes that input and output variables are known in advance, ignoring the critical issue of data uncertainty. In this paper, we deal with the problem of efficiency evaluation under uncertain conditions by adopting the general framework of stochastic programming. We assume that output parameters are represented by discretely distributed random variables, and we propose two different models defined according to a risk-neutral and a risk-averse perspective. The models have been validated on a real case study concerning the evaluation of the technical efficiency of a sample of individual firms operating in the Italian leather manufacturing industry. Our findings show the validity of the proposed approach as an ex-ante evaluation technique, providing the decision maker with useful insights depending on his degree of risk aversion.
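
For reference, the deterministic building block that the stochastic models extend is the classical input-oriented CCR efficiency score, which can be computed with a linear program. This is a minimal sketch assuming SciPy; the paper's stochastic, risk-averse extensions are not shown.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of DMU k.

    X: (n_dmu, n_inputs) input matrix, Y: (n_dmu, n_outputs) output matrix.
    Minimise theta s.t. X' lambda <= theta * x_k, Y' lambda >= y_k, lambda >= 0.
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                      # decision vars: [theta, lambda_1..n]
    A_in = np.hstack([-X[k].reshape(m, 1), X.T])    # inputs:  X' lambda - theta x_k <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])     # outputs: -Y' lambda <= -y_k
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[k]]),
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun
```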

Information Fusion for Identity Verification

In this paper, we propose a novel approach for ascertaining human identity based on the fusion of profile face and gait biometric cues. The identification approach, based on feature learning in a PCA-LDA subspace and classification using multivariate Bayesian classifiers, allows a significant improvement in recognition accuracy for low-resolution surveillance video scenarios. The experimental evaluation of the proposed identification scheme on a publicly available database [2] showed that the fusion of face and gait cues in a joint PCA-LDA space is a powerful method for capturing the inherent multimodality of walking gait patterns while discriminating the person's identity.
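
A feature-level version of such a fusion pipeline can be sketched with scikit-learn: face and gait feature vectors are concatenated, projected into a PCA-LDA subspace, and classified with a Gaussian (Bayesian) classifier. The array names and dimensions are placeholders, not the paper's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

def build_fusion_classifier(face_feats, gait_feats, labels, n_pca=50):
    """face_feats: (n, d_face), gait_feats: (n, d_gait), labels: identities."""
    fused = np.hstack([face_feats, gait_feats])          # feature-level fusion
    model = make_pipeline(PCA(n_components=n_pca),       # PCA subspace
                          LinearDiscriminantAnalysis(),  # LDA projection
                          GaussianNB())                  # Bayesian classifier
    return model.fit(fused, labels)
```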

SIP Authentication Scheme using ECDH

SIP (Session Initiation Protocol), which uses simple and efficient text-based call control messaging, is increasingly being deployed in VoIP networks. For authentication and authorization purposes, there are many approaches and considerations for securing SIP in order to prevent forgery and protect the integrity of SIP messages. Elliptic Curve Cryptography (ECC) offers significant advantages over other Public Key Cryptography (PKC) systems, such as smaller key sizes and faster computations, which make data transmission more secure and efficient. In this work, a new approach is proposed for secure SIP authentication based on a public key exchange mechanism using ECC. By adopting the elliptic-curve-based key exchange mechanism, the total execution time and memory requirements of the proposed scheme are improved in comparison with non-elliptic approaches.
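
The key-agreement step can be illustrated with the Python cryptography package: each SIP endpoint derives the same shared secret from an ephemeral ECDH exchange, from which a session key for authenticating subsequent messages can be derived. How the public keys are carried inside the SIP dialogue is not shown here, and the curve and KDF parameters are illustrative assumptions rather than the paper's exact choices.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each endpoint (user agent and server) generates an ephemeral P-256 key pair.
ua_private = ec.generate_private_key(ec.SECP256R1())
server_private = ec.generate_private_key(ec.SECP256R1())

# After exchanging public keys, both sides compute the same shared secret.
ua_shared = ua_private.exchange(ec.ECDH(), server_private.public_key())
server_shared = server_private.exchange(ec.ECDH(), ua_private.public_key())
assert ua_shared == server_shared

# Derive a symmetric session key used to authenticate later SIP messages.
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"sip-auth").derive(ua_shared)
```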

Feature Point Detection by Combining Advantages of Intensity-based Approach and Edge-based Approach

In this paper, a novel corner detection method is presented that stably extracts geometrically important corners. Intensity-based corner detectors such as the Harris detector can detect corners in noisy environments, but they localize corners inaccurately and miss corners with obtuse angles. Edge-based corner detectors such as Curvature Scale Space can detect structural corners, but their detection is unstable because edge detection is incomplete in noisy environments. The proposed image-based direct curvature estimation overcomes both the inaccurate structural corner detection of the intensity-based Harris detector and the unstable corner detection of Curvature Scale Space caused by incomplete edge detection. Various experimental results validate the robustness of the proposed method.
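
For comparison, the intensity-based baseline mentioned above (the Harris detector) can be run in a few lines with OpenCV; the thresholds are illustrative, and the paper's direct curvature estimation is not reproduced here.

```python
import cv2
import numpy as np

def harris_corners(gray, block_size=2, ksize=3, k=0.04, thresh=0.01):
    """Return (row, col) positions of Harris corners in a grayscale image."""
    response = cv2.cornerHarris(np.float32(gray), block_size, ksize, k)
    response = cv2.dilate(response, None)          # crude non-maximum handling
    rows, cols = np.where(response > thresh * response.max())
    return list(zip(rows, cols))
```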

A Combinatorial Model for ECG Interpretation

A new combinatorial model for analyzing and interpreting an electrocardiogram (ECG) is presented. One application of the model is QRS peak detection, which is demonstrated with an online algorithm shown to be both space and time efficient. Experimental results on the MIT-BIH Arrhythmia database show that this novel approach is promising. Further uses of the approach are discussed, such as exploiting its small memory requirements and interpreting large amounts of pre-recorded ECG data.

Adaptive Group of Pictures Structure Based On the Positions of Video Cuts

In this paper, we propose a method that improves the efficiency of video coding. Our method combines an adaptive GOP (group of pictures) structure with shot cut detection. We analyzed different approaches to shot cut detection with the aim of choosing the most appropriate one. The next step is to place the N frames at the positions of the detected cuts during video encoding. Finally, the efficiency of the proposed method is confirmed by simulations, and the obtained results are compared with fixed GOP structures of sizes 4, 8, 12, 16, 32, 64, and 128, and with a GOP structure spanning the entire video. The proposed method achieved bit rate gains from 0.37% to 50.59%, while providing PSNR (Peak Signal-to-Noise Ratio) gains from 0.26% to 1.33% in comparison with the simulated fixed GOP structures.
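
A simple shot cut detector of the kind analyzed in the paper can be sketched by thresholding the histogram distance between consecutive frames (assuming OpenCV; the threshold and histogram sizes are illustrative, and the detected cut positions would then seed new GOPs):

```python
import cv2

def detect_shot_cuts(video_path, threshold=0.5):
    """Return frame indices whose HSV-histogram distance to the previous frame is large."""
    cap = cv2.VideoCapture(video_path)
    cuts, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None and \
           cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
            cuts.append(idx)                 # candidate shot cut
        prev_hist, idx = hist, idx + 1
    cap.release()
    return cuts
```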

Genetic and Simulated Annealing Based Algorithms for Solving the Flow Assignment Problem in Computer Networks

Selecting routes and assigning link flows in a computer communication network are extremely complex combinatorial optimization problems. Metaheuristics, such as genetic and simulated annealing algorithms, are widely applicable heuristic optimization strategies that have shown encouraging results on a large number of difficult combinatorial optimization problems. This paper considers the route selection and hence the flow assignment problem. A genetic algorithm and a simulated annealing algorithm are used to solve this problem. A new hybrid algorithm combining the genetic algorithm with simulated annealing is introduced, along with a modification of the genetic algorithm. Computational experiments with sample networks are reported. The results show that the proposed modified genetic algorithm is efficient in finding good solutions to the flow assignment problem compared with other techniques.
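
The simulated annealing component can be sketched as a generic acceptance loop; the cost and neighbour functions (for example, the average packet delay of a flow assignment and a single-route perturbation) are placeholders to be supplied by the user.

```python
import math
import random

def simulated_annealing(initial, cost, neighbour, t0=100.0, alpha=0.95, iters=5000):
    """Generic simulated annealing over flow assignments."""
    current, best = initial, initial
    t = t0
    for _ in range(iters):
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= alpha                            # geometric cooling schedule
    return best
```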

Journey on Image Clustering Based on Color Composition

Image clustering is the process of grouping images based on their similarity, usually using color, texture, edge, or shape components, or a mixture of them. This research explores image clustering using color composition. To perform such clustering, three main components must be considered: the color space, the image representation (feature extraction), and the clustering method itself. We aim to explore which combination of these factors produces the best clustering results by combining various techniques from the three components. The color spaces considered are RGB, HSV, and L*a*b*. The image representations are the histogram and the Gaussian Mixture Model (GMM), whereas the clustering methods are K-Means and the Agglomerative Hierarchical Clustering algorithm. The experimental results show that the GMM representation combines better with the RGB and L*a*b* color spaces, whereas the histogram combines better with HSV. The experiments also show that K-Means performs better than Agglomerative Hierarchical Clustering for image clustering.
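
One of the combinations examined (HSV histograms clustered with K-Means) can be sketched as follows, assuming OpenCV and scikit-learn; the bin counts and number of clusters are placeholders:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def cluster_images_by_color(image_paths, n_clusters=5, bins=(8, 8, 8)):
    """Group images by the similarity of their normalised HSV colour histograms."""
    features = []
    for path in image_paths:
        hsv = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins), [0, 180, 0, 256, 0, 256])
        features.append(cv2.normalize(hist, hist).flatten())
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(np.array(features))
    return dict(zip(image_paths, labels))
```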

Illumination Invariant Face Recognition using Supervised and Unsupervised Learning Algorithms

In this paper, a comparative study of supervised and unsupervised learning algorithms applied to illumination invariant face recognition has been carried out. The supervised learning has been carried out using an artificial neural network with one input layer, two hidden layers, and one output layer. The gradient descent with momentum and adaptive learning rate backpropagation algorithm has been used to implement the supervised learning, in which both the inputs and the corresponding outputs are provided during training; this yields inherent clustering and optimized learning of the weights, which provides efficient results. The unsupervised learning has been implemented with a modified Counterpropagation network, which involves clustering followed by the application of the Outstar rule to obtain the recognized face. The face recognition system has been developed to recognize faces with varying illumination intensities, where the database images vary in lighting with respect to the angle of illumination relative to the horizontal and vertical planes. The supervised and unsupervised learning algorithms have been implemented and tested exhaustively, with and without histogram equalization, to obtain efficient results.

Volterra Filter for Color Image Segmentation

Color image segmentation plays an important role in computer vision and image processing. In this paper, the features of the Volterra filter are utilized for color image segmentation. The discrete Volterra filter exhibits both linear and nonlinear characteristics. The linear part smooths the image features in uniform gray zones and is used to obtain a gross representation of the objects of interest. The nonlinear term compensates for the blurring due to the linear term and preserves the edges, which are mainly used to distinguish the various objects. The truncated quadratic Volterra filters are mainly used for edge preservation along with Gaussian noise cancellation. In our approach, the segmentation is based on the K-means clustering algorithm in HSI space. Both the hue and intensity components are fully utilized. For hue clustering, the cyclic property of the hue component is taken into consideration. The experimental results show that the proposed technique segments the color image while preserving significant features and removing noise effects.