Abstract: Fingerprint-based identification is one of the best-known biometric systems in the area of pattern recognition and has long been studied owing to its important role in forensic science, where it assists the government criminal justice community. In this paper, we propose a framework for identifying individuals by means of fingerprints. Unlike most conventional fingerprint identification frameworks, the extracted Geometrical Element Features (GEFs) go through a Discretization process. The intention of Discretization in this study is to obtain unique features that reflect individual variability, so that one person can be discriminated from another. Previously, Discretization has been shown to be particularly efficient for identification on English handwriting, with an accuracy of 99.9%, and for discrimination of twins' handwriting, with an accuracy of 98%. Due to its high discriminative power, this method is adopted into the framework as an independent method for assessing the accuracy of fingerprint identification. The experimental results show that the identification accuracy of the proposed system using Discretization is 100% for FVC2000, 93% for FVC2002 and 89.7% for FVC2004, much better than the conventional fingerprint identification system (72% for FVC2000, 26% for FVC2002 and 32.8% for FVC2004). The results indicate that the Discretization approach boosts classification effectively and is therefore likely to suit other biometric features besides handwriting and fingerprints.
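The abstract does not detail the Discretization step itself. A common form of discretization is equal-width binning, which maps each continuous feature value to a discrete interval index; the sketch below illustrates that general idea, not the authors' exact GEF procedure.

```python
# Minimal sketch of feature discretization by equal-width binning.
# Illustrates discretization in general, not the paper's exact GEF method.

def discretize(values, n_bins):
    """Map each continuous value to a bin index in [0, n_bins - 1]."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0   # avoid division by zero on constant input
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

features = [0.1, 0.4, 0.35, 0.95, 0.72]
print(discretize(features, 4))
```

After binning, two feature vectors can be compared by their discrete bin patterns rather than raw continuous values, which is what gives the approach its discriminative, per-individual signature.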
Abstract: This paper describes the recognition and classification of brain images as normal or abnormal based on PSO-SVM. Image classification is becoming increasingly important for the medical diagnosis process. In the medical area, classifying a patient's abnormality plays a great role in helping doctors diagnose the patient according to the severity of the disease. In the case of DICOM images, optimal recognition and early detection of disease are very difficult. Our work focuses on the recognition and classification of DICOM images based on a collective approach of digital image processing. For optimal recognition and classification, Particle Swarm Optimization (PSO), a Genetic Algorithm (GA) and a Support Vector Machine (SVM) are used. The collective PSO-SVM approach gives high approximation capability and much faster convergence.
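In a PSO-SVM pipeline, PSO typically searches for parameter values that minimize an SVM's validation error. The sketch below shows a minimal PSO loop; since the authors' objective function is not given, a simple quadratic stands in for the (hypothetical) SVM validation error.

```python
import random

# Minimal particle swarm optimization (PSO) sketch. A toy quadratic stands
# in for the SVM validation error the paper would minimize; inertia and
# acceleration constants are common textbook values, not the authors'.

def pso(objective, dim, n_particles=20, iters=100, seed=1):
    rnd = random.Random(seed)
    pos = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective with its minimum at (1, 2), standing in for validation error.
best, err = pso(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2, dim=2)
print(best, err)
```

In the actual system, `objective` would train and score an SVM for each candidate parameter vector, which is where the reported approximation capability and convergence speed come from.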
Abstract: Linearization of graph embedding has emerged as an effective dimensionality reduction technique in pattern recognition. However, due to its linear nature, it may not be optimal for nonlinearly distributed real-world data such as faces. Therefore, a kernelization of graph embedding is proposed as a dimensionality reduction technique for face recognition. To further boost the recognition capability of the proposed technique, Fisher's criterion is adopted in the objective function for better data discrimination. The proposed technique is able to characterize the underlying intra-class structure as well as the inter-class separability. Experimental results on the FRGC database validate the effectiveness of the proposed technique as a feature descriptor.
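The core of any kernelization is replacing inner products with kernel evaluations k(x, y), collected in a Gram matrix on which the embedding is then computed. The sketch below builds an RBF Gram matrix; the kernel choice and gamma value are illustrative assumptions, not necessarily the paper's.

```python
import math

# Sketch of the kernel trick underlying kernelized graph embedding:
# inner products are replaced by kernel evaluations, collected in a Gram
# matrix. An RBF kernel is assumed here for illustration.

def rbf_gram(X, gamma=0.5):
    def k(x, y):
        return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))
    return [[k(x, y) for y in X] for x in X]

X = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]]
K = rbf_gram(X)
```

The graph-embedding eigenproblem (with Fisher's criterion in the objective) would then be solved on K instead of on the raw data matrix.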
Abstract: In this paper we propose a simple and effective method to compress an image. We succeeded in reducing the size of an image without much compromising its quality. We use the Haar wavelet transform to transform the original image and, after quantization and thresholding of the DWT coefficients, run-length coding and Huffman coding schemes are used to encode the image. The DWT is the basis of the quite popular JPEG 2000 technique.
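The pipeline's core steps can be sketched on a 1-D signal: a one-level Haar transform (pairwise averages and differences), hard thresholding of small coefficients, and run-length coding of the result. Real image coding adds the 2-D transform, quantization tables and Huffman coding, which are omitted here.

```python
# Sketch of the compression pipeline's core on a 1-D signal: one-level Haar
# DWT, hard thresholding, run-length coding. 2-D transforms, quantization
# tables and Huffman coding are left out of this illustration.

def haar_1d(signal):
    """One level of the Haar DWT: pairwise averages then differences."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    diff = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg + diff

def threshold(coeffs, t):
    """Zero out coefficients with magnitude below t."""
    return [c if abs(c) >= t else 0 for c in coeffs]

def run_length(coeffs):
    """Encode as [value, run] pairs - effective once thresholding has
    produced long runs of zeros."""
    out = []
    for c in coeffs:
        if out and out[-1][0] == c:
            out[-1][1] += 1
        else:
            out.append([c, 1])
    return out

signal = [10, 10, 12, 12, 50, 52, 10, 10]
coeffs = threshold(haar_1d(signal), t=2)
print(run_length(coeffs))
```

Thresholding is where the quality/size trade-off lives: a larger threshold zeroes more detail coefficients, lengthening zero runs and shrinking the encoded output at the cost of fidelity.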
Abstract: The performance of any continuous speech recognition system is highly dependent on the performance of its acoustic models. Generally, the development of robust spoken language technology relies on the availability of large amounts of data. A common way to cope with limited data for training each state of the Markov models is tree-based state tying. This tying method applies contextual questions to tie states. The manual procedure for question generation suffers from human error and is time consuming. Various automatically generated questions are used instead to construct the decision tree. There are three approaches to generating questions for constructing HMMs based on a decision tree: one is based on misrecognized phonemes, another uses a feature table, and the third is based on state distributions corresponding to context-independent subword units. In this paper, all these methods of automatic question generation are applied to the decision tree on the FARSDAT corpus in the Persian language, and their results are compared with those of manually generated questions. The results show that automatically generated questions yield much better results and can replace manually generated questions in the Persian language.
Abstract: Our work is part of heterogeneous data integration, with the definition of a structural and semantic mediation model. Our aim is to propose an architecture for mediating metadata from heterogeneous sources represented by XML, RDF and RuleML models, providing metadata transparency to the user. This is achieved by accommodating data structures of fundamentally different natures and by allowing a query involving multiple sources to be decomposed into queries specific to those sources, after which the results are recomposed.
Abstract: Software organizations are constantly looking for better solutions when designing and using well-defined software processes for the development of their products and services. However, while the technical aspects are comparatively easy to arrange, many software development processes offer little support on project management issues. When adopting such processes, an organization needs to apply good project management skills along with the technical views provided by those models. This research proposes the definition of a new model that integrates the concepts of the PMBOK and those available in the OPEN metamodel, helping not only process integration but also building the steps towards a more comprehensive and automatable model.
Abstract: Although face recognition seems an easy task for humans, automatic face recognition is much more challenging due to variations in time, illumination and pose. In this paper, the influence of time-lapse on visible and thermal images is examined. Orthogonal moment invariants are used as a feature extractor to analyze the effect of time-lapse on thermal and visible images, and the results are compared with conventional Principal Component Analysis (PCA). A new triangle square ratio criterion is employed instead of the Euclidean distance to enhance the performance of the nearest neighbor classifier. The results of this study indicate that feature vectors with high discrimination power can be obtained thanks to the global characteristic of orthogonal moment invariants. Moreover, the effect of time-lapse is reduced and the accuracy of face recognition is enhanced considerably in comparison with PCA. Furthermore, our experimental results based on moment invariants and the triangle square ratio criterion show that the proposed approach achieves an on average 13.6% higher recognition rate than PCA.
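The classifier side of this approach is a nearest-neighbour rule with a replaceable distance criterion. The abstract does not define the triangle square ratio criterion, so the sketch below uses Euclidean distance as a placeholder; swapping in any other pairwise criterion is a one-argument change.

```python
import math

# Nearest-neighbour classification with a pluggable distance. The paper's
# "triangle square ratio" criterion is not defined in the abstract, so
# Euclidean distance stands in; any metric function can be passed instead.

def nn_classify(query, train, metric):
    """train: list of (feature_vector, label); returns the nearest label."""
    vec, label = min(train, key=lambda item: metric(query, item[0]))
    return label

def euclidean(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

# Toy moment-invariant feature vectors with identity labels (made up).
train = [([0.0, 0.0], "A"), ([5.0, 5.0], "B"), ([0.5, 1.0], "A")]
print(nn_classify([4.5, 4.0], train, euclidean))
```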
Abstract: This paper considers the problem of finding a low-cost chip set for a minimum-cost partitioning of large logic circuits. Chip sets are selected from a given library, in which each chip has a different price, area, and I/O pin count. We propose a low-cost chip set selection algorithm. The inputs to the algorithm are a netlist and the chip information in the library. The output is a list of chip sets satisfying the area and maximum partition count constraints, sorted by cost from minimum to maximum. We used MCNC benchmark circuits for the experiments. The experimental results show that all of the chip sets found satisfy the multiple partitioning constraints.
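The selection idea can be sketched as enumerating chip sets (multisets of library chips) whose combined area can hold the circuit, then sorting them by total cost. The real algorithm also checks I/O pin and partition-count constraints against a netlist; the library values below are invented for illustration.

```python
from itertools import combinations_with_replacement

# Illustrative sketch: enumerate chip sets that satisfy a required area and
# sort them from minimum to maximum cost. I/O pin and netlist partition
# checks from the actual algorithm are omitted; the library is made up.

library = {"c1": {"cost": 3, "area": 40},
           "c2": {"cost": 5, "area": 100},
           "c3": {"cost": 9, "area": 220}}

def chip_sets(required_area, max_chips):
    sets = []
    for n in range(1, max_chips + 1):
        for combo in combinations_with_replacement(library, n):
            if sum(library[c]["area"] for c in combo) >= required_area:
                sets.append((sum(library[c]["cost"] for c in combo), combo))
    return sorted(sets)                 # ordered from min to max cost

for cost, combo in chip_sets(required_area=200, max_chips=2):
    print(cost, combo)
```

Exhaustive enumeration is only viable for small libraries and set sizes; the paper's algorithm exists precisely to produce this sorted list efficiently for realistic instances.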
Abstract: In ad hoc networks, the main issue in protocol design is quality of service, whereas in wireless sensor networks the main constraint is the limited energy of the sensors. In fact, protocols that minimize the power consumption of sensors receive the most attention in wireless sensor networks. One approach to reducing energy consumption in wireless sensor networks is to reduce the number of packets transmitted in the network. Data collection techniques that combine related data and prevent the transmission of redundant packets can be effective in reducing the number of transmitted packets. Since processing information consumes less power than transmitting it, Data Aggregation is of great importance, and for this reason the technique is used in many protocols [5]. One Data Aggregation technique is the use of a Data Aggregation tree, but finding an optimal Data Aggregation tree to collect data in networks with one sink is an NP-hard problem. In the Data Aggregation technique, related information packets are combined in intermediate nodes to form one packet, so the number of packets transmitted in the network is reduced; therefore, less energy is consumed, which ultimately improves the longevity of the network. Heuristic methods are used to solve this NP-hard problem, and one such optimization method is Simulated Annealing. In this article, we propose a new method to build the data collection tree in wireless sensor networks using the Simulated Annealing algorithm, and we evaluate its efficiency against a Genetic Algorithm.
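The simulated-annealing idea can be sketched on a toy instance: nodes 1..n each pick a parent with a smaller index (node 0 is the sink), which keeps the structure a tree, and annealing perturbs parent choices while occasionally accepting uphill moves. The energy here is total squared link distance, a stand-in for the paper's unspecified energy model, and the node positions are random.

```python
import math, random

# Sketch of simulated annealing for a data-aggregation tree. Each node
# 1..n chooses a parent with a smaller index (node 0 is the sink), so the
# structure is always a tree. Total squared link distance stands in for
# the paper's energy model; positions and schedule are illustrative.

random.seed(7)
nodes = [(0.0, 0.0)] + [(random.uniform(0, 10), random.uniform(0, 10))
                        for _ in range(9)]

def cost(parent):
    return sum((nodes[i][0] - nodes[parent[i]][0]) ** 2 +
               (nodes[i][1] - nodes[parent[i]][1]) ** 2
               for i in range(1, len(nodes)))

parent = [None] + [0] * 9          # start: every node sends straight to the sink
cur = cost(parent)
T = 100.0
while T > 0.01:
    i = random.randrange(1, len(nodes))
    old = parent[i]
    parent[i] = random.randrange(0, i)          # candidate parent: lower index
    new = cost(parent)
    if new > cur and random.random() >= math.exp((cur - new) / T):
        parent[i] = old                         # reject uphill move
    else:
        cur = new                               # accept (downhill, or lucky uphill)
    T *= 0.995                                  # geometric cooling
print(cur)
```

The Metropolis acceptance rule (accepting some cost-increasing moves at high temperature) is what lets the search escape local minima that a greedy tree-building heuristic would get stuck in.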
Abstract: In this work a new offline signature recognition system based on the Radon transform, fractal dimension (FD) and a Support Vector Machine (SVM) is presented. In the first step, projections of the original signatures along four specified directions are computed using the Radon transform. Then the FDs of the four obtained vectors are calculated to construct a feature vector for each signature. These vectors are fed into an SVM classifier for recognition of the signatures. Several experiments were carried out to evaluate the effectiveness of the system. The offline signature database from the Signature Verification Competition (SVC) 2004 is used in all of the tests. The experimental results indicate that the proposed method achieves a high accuracy rate in signature recognition.
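One common way to estimate fractal dimension is box counting: cover the curve with boxes at several scales and fit the slope of log N(s) against log(1/s). The abstract does not say which FD estimator the authors use, so the sketch below shows box counting on a sampled curve as one plausible instance.

```python
import math

# Sketch of box-counting fractal dimension on a sampled 2-D curve. Whether
# the paper uses box counting or another FD estimator is not stated; this
# is one standard way to compute FD.

def box_count_dimension(points, scales=(0.1, 0.05, 0.025, 0.0125)):
    logs = []
    for s in scales:
        boxes = {(int(x / s), int(y / s)) for x, y in points}
        logs.append((math.log(1 / s), math.log(len(boxes))))
    # least-squares slope of log N(s) against log(1/s)
    n = len(logs)
    mx = sum(a for a, _ in logs) / n
    my = sum(b for _, b in logs) / n
    return (sum((a - mx) * (b - my) for a, b in logs)
            / sum((a - mx) ** 2 for a, _ in logs))

# Sanity check: a straight line should have dimension close to 1.
line = [(i / 100, i / 100) for i in range(101)]
print(box_count_dimension(line))
```

Applied to the four Radon projection vectors, four such FD values would form the signature's feature vector before it is passed to the SVM.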
Abstract: When programming in languages such as C or Java, it is difficult to reconstruct the programmer's ideas from the program code alone. This occurs mainly because much of the programmer's thinking behind the implementation is not recorded in the code during implementation. For example, physical aspects of computation such as spatial structures, activities, and the meaning of variables are not required as instructions to the computer and are often excluded. This makes the future reconstruction of the original ideas difficult. AIDA, a multimedia programming language based on the cyberFilm model, can solve these problems by allowing the ideas behind programs to be described using advanced annotation methods as a natural extension of programming. In this paper, a development environment that implements the AIDA language is presented with a focus on the annotation methods. In particular, an actual scientific numerical computation code is created and the effects of the annotation methods are analyzed.
Abstract: This paper presents a survey analysis of network bandwidth management based on papers published in the IEEE Xplore database over the three years from 2009 to 2011. Network bandwidth management is among today's issues for computer engineering applications and systems. A detailed comparison of the published papers is presented to look further into the critical IP-based network research area of bandwidth management. Important information is given, such as the network focus area, models of the IP-based network, and the filtering or scheduling used at the network applications layer. Much research on bandwidth management has been done in the broad network area, but less has been done on IP-based networks specifically at the applications layer. A few studies have contributed new schemes or enhanced models, but the issue of bandwidth management still arises at the applications layer. This survey serves as basic research towards the implementation of a network bandwidth management technique, a new framework model, and a scheduling scheme or algorithm in an IP-based network, focusing on a bandwidth control mechanism for prioritizing network traffic at the applications layer.
Abstract: This paper presents an idea to improve the efficiency of security checks in airports through the active tracking and monitoring of passengers and staff using the OFDM modulation technique and fingerprint authentication. The details of the passenger are multiplexed using OFDM. To authenticate the passenger, the fingerprint along with important identification information is collected. The details of the passenger can be transmitted after the necessary modulation and received using various transceivers placed within the premises of the airport, then checked at the appropriate checkpoints, thereby increasing the efficiency of checking. OFDM is employed for its spectral efficiency.
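At its core, OFDM places data symbols on orthogonal subcarriers via an inverse DFT, and the receiver recovers them with a forward DFT. The sketch below shows this roundtrip over an ideal channel using a naive O(n²) transform; real systems use an FFT plus a cyclic prefix, equalization, and channel coding, all omitted here.

```python
import cmath

# Core of OFDM modulation: symbols go onto orthogonal subcarriers via an
# inverse DFT; a forward DFT recovers them. Naive O(n^2) transforms keep
# the sketch dependency-free (real systems use an FFT, cyclic prefix, etc.).

def idft(symbols):
    n = len(symbols)
    return [sum(s * cmath.exp(2j * cmath.pi * k * t / n)
                for k, s in enumerate(symbols)) / n
            for t in range(n)]

def dft(samples):
    n = len(samples)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(samples))
            for k in range(n)]

tx = [1 + 0j, -1 + 0j, 1 + 0j, 1 + 0j]   # e.g. BPSK symbols on 4 subcarriers
rx = dft(idft(tx))                        # ideal channel: symbols recovered
print([round(s.real) for s in rx])
```

The passenger's identification data would be mapped to such symbol vectors, with each transceiver demodulating the subcarriers to recover the multiplexed details.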
Abstract: This paper describes an optimal approach for feature subset selection to classify leaves based on a Genetic Algorithm (GA) and Kernel-Based Principal Component Analysis (KPCA). Due to the high complexity of selecting the optimal features, classification has become a critical task in analysing leaf image data. Initially, shape, texture and colour features are extracted from the leaf images. These extracted features are optimized through the separate application of GA and KPCA. The approach then performs an intersection operation over the subsets obtained from the optimization process. Finally, the common matching subset is used to train the Support Vector Machine (SVM). Our experimental results show that the application of GA and KPCA for feature subset selection with an SVM as the classifier is computationally effective and improves the accuracy of the classifier.
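The intersection step reduces to keeping only the features that both selectors agree on. The sketch below shows it with set intersection; the feature names are illustrative, not taken from the paper.

```python
# Sketch of the intersection step: GA and KPCA each produce a candidate
# feature subset, and only features selected by both are forwarded to
# train the SVM. Feature names here are illustrative.

ga_subset = {"area", "eccentricity", "mean_hue", "contrast"}
kpca_subset = {"area", "mean_hue", "solidity", "contrast"}

common = ga_subset & kpca_subset        # features both methods agree on
print(sorted(common))
```

Requiring agreement between two independent selectors acts as a conservative filter: a feature survives only if it helps under both the wrapper (GA) and the kernel-projection (KPCA) views of the data.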
Abstract: We developed a vision interface framework for an immersive projection system, CAVE, in the virtual reality research field using hand gesture recognition with computer vision techniques. The background image was subtracted from the current image frame of a webcam, and the color space of the image was converted into HSV space. We then masked skin regions using a skin color range threshold and applied a noise reduction operation. We made blobs from the image, and gestures were recognized using these blobs. Using our hand gesture recognition, we could implement an effective interface without bothersome devices for CAVE.
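The skin-masking step amounts to converting each pixel from RGB to HSV and keeping those inside a hue/saturation/value range. The sketch below shows this per-pixel test; the threshold values are illustrative, since real skin ranges are tuned per camera and lighting.

```python
import colorsys

# Sketch of the skin-masking step: convert RGB pixels to HSV and keep those
# inside a hue/saturation/value range. Thresholds here are illustrative
# assumptions, not the system's tuned values.

def skin_mask(pixels, h_max=0.1, s_min=0.2, v_min=0.3):
    mask = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        mask.append(h <= h_max and s >= s_min and v >= v_min)
    return mask

pixels = [(220, 170, 140), (30, 30, 30), (90, 160, 60)]   # skin-ish, dark, green
print(skin_mask(pixels))
```

In the full pipeline, the resulting boolean mask (after noise reduction) is grouped into connected blobs whose shapes and positions drive the gesture recognition.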
Abstract: Nowadays, many manufacturing companies try to reinforce their competitiveness or find a breakthrough through collaboration. In Korea, more than 900 manufacturing companies are using web-based collaboration systems developed by the government-led project referred to as i-Manufacturing. The system supports functions similar to those of Product Data Management (PDM) as well as a Project Management System (PMS). A web-based collaboration system provides many useful functions for collaborative work. This system, however, does not support new linking services between buyers and suppliers. Therefore, in order to find new collaborative partners, this paper proposes a framework that creates new connections between buyers and suppliers to facilitate their collaboration, referred to as the Excellent Manufacturer Scouting System (EMSS). EMSS plays the role of a bridge between overseas buyers and suppliers. As part of the study on EMSS, we also propose a method of evaluating the manufacturability of potential partners based on six main factors. From the results of the evaluation, buyers get a good guideline for choosing new partners before entering negotiation processes with them.
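A six-factor evaluation of this kind is often realized as a weighted score per candidate supplier. The abstract does not enumerate the actual factors or weights, so both are invented below purely to illustrate the shape of such an evaluation.

```python
# Sketch of a six-factor manufacturability evaluation as a weighted score.
# Factor names and weights are invented for illustration; the abstract
# does not enumerate the paper's actual factors.

weights = {"quality": 0.25, "delivery": 0.20, "price": 0.20,
           "technology": 0.15, "capacity": 0.10, "credibility": 0.10}

def manufacturability(scores):
    """scores: factor -> rating in [0, 100]; returns the weighted total."""
    return sum(weights[f] * scores[f] for f in weights)

candidate = {"quality": 80, "delivery": 90, "price": 70,
             "technology": 60, "capacity": 85, "credibility": 75}
print(manufacturability(candidate))
```

Ranking candidates by such a score is what would give buyers the "good guideline" the abstract mentions before entering negotiations.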
Abstract: The massive proliferation of affordable computers, broadband Internet connectivity and rich educational content has created a global phenomenon in which information and communication technology (ICT) is being used to transform education. Therefore, there is a need to redesign the educational system to better meet these needs. The advent of computers with sophisticated software has made it possible to solve many complex problems very quickly and at a lower cost. This paper introduces the characteristics of current E-Learning, then analyses the concept of cloud computing and describes the architecture of a cloud computing platform combining the features of E-Learning. The authors introduce cloud computing to e-learning, build an e-learning cloud, and conduct active research and exploration of it from the following aspects: architecture, construction method and the external interface of the model.
Abstract: Two-stage compensator designs for linear systems are investigated in the framework of the factorization approach. First, we give a "full feedback" two-stage compensator design. Based on this result, various types of two-stage compensator designs with partial feedback are derived.
Abstract: This paper proposes a way of parallel processing of SURF and Optical Flow for moving object recognition and tracking. Object recognition and tracking is one of the most important tasks in computer vision; however, its disadvantage is that the many operations involved slow processing, preventing real-time recognition and tracking. The proposed method uses SURF, a typical feature extraction technique, together with Optical Flow for moving objects to reduce this disadvantage and achieve real-time moving object recognition and tracking, and uses parallel processing techniques for speed improvement. First, an image from the database and an image acquired through the camera are analysed using SURF and compared to recognize the same object; an ROI (Region of Interest) is then set for tracking the movement of feature points using Optical Flow. Second, multi-threading is used to improve processing speed and recognition through parallel processing. Finally, the performance is evaluated and the efficiency of the algorithm is verified through experiments.
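The parallel structure amounts to running the recognition and tracking stages concurrently instead of sequentially. The sketch below shows that multi-threaded shape with simple placeholder functions standing in for the heavy SURF matching and Optical Flow computations.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of the parallel structure: run a recognition task and a tracking
# task concurrently. The placeholder functions stand in for the heavy
# SURF matching and Optical Flow computations of the actual system.

def recognize(frame):
    return f"matched object in {frame}"       # placeholder for SURF matching

def track(frame):
    return f"flow vectors for {frame}"        # placeholder for Optical Flow

with ThreadPoolExecutor(max_workers=2) as pool:
    rec = pool.submit(recognize, "frame_42")
    trk = pool.submit(track, "frame_42")
    print(rec.result())
    print(trk.result())
```

In a real implementation the two workers would be CPU-bound native routines (e.g. via a vision library), so the speedup comes from overlapping their execution per frame rather than from Python-level threading alone.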