Abstract: Knowledge capabilities are increasingly important for
innovative technology enterprises seeking to enhance business
performance in terms of product competitiveness, innovation and
sales. Recognizing a company's capabilities through auditing allows
it to pursue further advancement and strategic planning, and hence
to gain competitive advantages. This paper develops an
Organizations' Knowledge Capabilities Assessment (OKCA) method
to assess the knowledge capabilities of technology companies. The
OKCA is a questionnaire-based assessment tool developed to
uncover the impact of various knowledge capabilities on different
aspects of organizational performance. The collected data are then
analyzed to identify the crucial elements for different technology
companies. Based on the results, innovative technology enterprises
can recognize directions for further improvement of business
performance and future development plans. External environmental
factors affecting organizational performance can be found through
further analysis of selected reference companies.
Abstract: On-line handwritten scripts are usually represented as pen-tip traces from pen-down to pen-up positions. The temporal evolution of the pen coordinates is also considered along with the trajectory information. However, the data obtained need extensive preprocessing, including filtering, smoothing, slant removal and size normalization, before the recognition process. Instead of such lengthy preprocessing, this paper presents a simple approach to extract the useful character information. This work evaluates the use of the counter-propagation neural network (CPN) and presents the feature extraction mechanism in full detail for on-line handwriting recognition. The obtained recognition rates were 60% to 94% using the CPN for different sets of character samples. This paper also describes a performance study in which a recognition mechanism with multiple thresholds is evaluated for the counter-propagation architecture. The results indicate that the application of multiple thresholds has a significant effect on the recognition mechanism. The method is applicable to off-line character recognition as well. The technique is tested on upper-case English alphabets in a number of different styles from different people.
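The multiple-threshold rejection idea can be sketched independently of the full CPN: a winner-take-all layer picks the closest stored class prototype, and a per-class threshold decides whether to accept the label or reject the sample. This is a minimal illustrative sketch, not the paper's network; the prototypes, the cosine similarity measure and the threshold values are assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def classify(x, prototypes, thresholds):
    """Winner-take-all classification with a per-class acceptance
    threshold; returns the winning label, or None if rejected."""
    label, best = max(
        ((lbl, cosine(x, p)) for lbl, p in prototypes.items()),
        key=lambda t: t[1],
    )
    return label if best >= thresholds[label] else None

# Toy prototypes for two "characters" (illustrative, not real CPN weights)
prototypes = {"A": [1.0, 0.0, 1.0], "B": [0.0, 1.0, 0.0]}
thresholds = {"A": 0.9, "B": 0.9}

print(classify([0.9, 0.1, 1.1], prototypes, thresholds))  # close to "A"
print(classify([0.5, 0.5, 0.5], prototypes, thresholds))  # ambiguous, rejected
```

Raising a class's threshold trades recognition rate for fewer false acceptances, which is the effect the performance study measures.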
Abstract: A realistic 3D face model represents the pose,
illumination, and expression of a face more precisely than a 2D face
model, so it can be usefully applied in areas such as face
recognition, games, avatars, and animation.
In this paper, we propose a 3D face modeling method based on 3D
dense morphable shape model. The proposed 3D modeling method
first constructs a 3D dense morphable shape model from 3D face scan
data obtained using a 3D scanner. Next, the proposed method extracts
and matches facial landmarks from a 2D image sequence containing the
face to be modeled, and then reconstructs the 3D vertex coordinates of
the landmarks using a factorization-based structure-from-motion (SfM)
technique. Then, the
proposed method obtains a 3D dense shape model of the face to be
modeled by fitting the constructed 3D dense morphable shape model
into the reconstructed 3D vertices. The proposed method also builds a
cylindrical texture map from the 2D face image sequence. Finally, it
generates a 3D face model by rendering the 3D dense face shape model
with the cylindrical texture map. The model-building process shows
that the proposed method is relatively easy, fast and precise.
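The factorization-based SfM step can be illustrated with the classic Tomasi-Kanade scheme: stack the tracked 2D landmark coordinates into a measurement matrix, subtract each row's mean to remove translation, and recover affine motion and shape from a rank-3 SVD. A minimal NumPy sketch on synthetic data; the landmark count, view count and orthographic camera model are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ground truth: P 3D landmark points seen in F orthographic views.
P, F = 12, 5
X = rng.normal(size=(3, P))                 # 3D landmark coordinates
W = np.zeros((2 * F, P))                    # measurement (tracking) matrix
for f in range(F):
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    W[2 * f:2 * f + 2] = Q[:2] @ X          # orthographic projection

# Tomasi-Kanade factorization: center each row, then rank-3 SVD.
Wc = W - W.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
M = U[:, :3] * np.sqrt(s[:3])               # camera (motion) matrix, 2F x 3
S = np.sqrt(s[:3])[:, None] * Vt[:3]        # 3D shape, up to an affine ambiguity

print(np.linalg.norm(Wc - M @ S))           # ~0: centered tracks are rank 3
```

The recovered shape is defined only up to an affine ambiguity; a metric upgrade (imposing orthonormality on the camera rows) would be the usual next step.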
Abstract: Electronic commerce is growing rapidly with on-line
sales already heading for hundreds of billion dollars per year. Due to
the huge amount of money transferred everyday, an increased
security level is required. In this work we present the architecture of
an intelligent speaker verification system, which is able to accurately
verify the registered users of an e-commerce service using only their
voices as input. According to the proposed architecture, a
transaction-based e-commerce application should be complemented
by a biometric server where each customer's unique set of speech
models (voiceprint) is stored. The verification procedure asks the
user to pronounce a personalized sequence of digits; speech is
captured and voice features are extracted at the client side, then
sent to the biometric server. The biometric server uses pattern
recognition to decide whether the received features match the stored
voiceprint of the customer the user claims to be, and grants
verification accordingly. The proposed architecture can provide e-commerce
applications with a higher degree of certainty regarding the identity
of a customer, and prevent impostors from executing fraudulent
transactions.
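The enrollment-and-verification flow described above can be sketched at the protocol level. This is a toy illustration, not the paper's system: the voiceprint here is simply the mean of enrollment feature vectors and the distance threshold is an assumption, standing in for the real speech models and pattern-recognition back end.

```python
import math

class BiometricServer:
    """Minimal sketch of the server side: store each customer's
    voiceprint and grant verification when features received from the
    client are close enough to the stored voiceprint."""

    def __init__(self, threshold=0.5):
        self.voiceprints = {}
        self.threshold = threshold

    def enroll(self, user_id, feature_vectors):
        # Voiceprint = mean of the enrollment feature vectors (toy model)
        dim = len(feature_vectors[0])
        self.voiceprints[user_id] = [
            sum(v[i] for v in feature_vectors) / len(feature_vectors)
            for i in range(dim)
        ]

    def verify(self, claimed_id, features):
        vp = self.voiceprints.get(claimed_id)
        if vp is None:
            return False
        return math.dist(features, vp) <= self.threshold

server = BiometricServer(threshold=0.5)
server.enroll("alice", [[1.0, 2.0], [1.2, 1.8]])  # client-side features
print(server.verify("alice", [1.1, 1.9]))         # genuine attempt
print(server.verify("alice", [5.0, 5.0]))         # impostor attempt
```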
Abstract: This paper presents a multimodal approach to biometric authentication based on multiple classifiers. The proposed solution uses a post-classification biometric fusion method in which the outputs of the biometric data classifiers are combined to improve overall biometric system performance by decreasing the classification error rates. The paper also shows how careful feature selection improves the biometric recognition task, since not all components of the feature vectors contribute to accuracy.
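A common post-classification fusion rule of the kind described is a weighted sum of the per-classifier match scores; the weights and decision threshold below are illustrative assumptions, not the paper's values.

```python
def fuse_scores(scores, weights):
    """Weighted-sum (post-classification) fusion of per-classifier match
    scores in [0, 1]; higher means more likely genuine."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * s for w, s in zip(weights, scores))

def decide(scores, weights, threshold=0.5):
    """Final accept/reject decision on the fused score."""
    return "genuine" if fuse_scores(scores, weights) >= threshold else "impostor"

# Two modalities disagree; fusion is weighted toward the more reliable one
print(decide([0.9, 0.3], weights=[0.7, 0.3]))  # fused 0.72 -> "genuine"
print(decide([0.4, 0.2], weights=[0.7, 0.3]))  # fused 0.34 -> "impostor"
```

Weighting by per-classifier reliability is one simple way the combined error rate can fall below that of any single modality.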
Abstract: The need for efficient information retrieval has increased
more than ever in recent years because of the frequent use of
digital information in our lives. Much work has been done on textual
information, but far less progress has been made on multimedia
information. For text-based information, technologies such as data
mining and data marts are now in use, building on the basic concepts
of databases developed in the 1960s.
In image search, and especially in image identification,
computerized systems are still at a very early stage. Even in image
search we cannot see as much progress as in text-based search
techniques. One main reason is the widespread roots of image search,
where many areas such as artificial intelligence, statistics, image
processing and pattern recognition play a role. Human psychology,
perception and cultural diversity also have their share in the
design of a good and efficient image recognition and retrieval
system.
A new object-based search technique is presented in this paper, in
which objects in the image are identified on the basis of their
geometrical shapes and other features such as color and texture, and
object correlation augments the search process. To stay focused on
object identification, simple images are selected for this work to
reduce the role of segmentation in the overall process; the same
technique, however, can also be applied to other images.
Abstract: Thrombosis can be life threatening and therefore necessitates immediate treatment. Hydergine, a nootropic agent, is used as a cognition enhancer in stroke patients, but relatively little is known about its anti-thrombotic effect. To investigate this aspect, in vivo and ex vivo experiments were designed and conducted. Three groups of rats were injected with 1.5 mg, 3.0 mg and 4.5 mg hydergine intraperitoneally, with and without prior exposure to fresh plasma. Positive and negative controls were run in parallel. Animals were sacrificed after 1.5 h, and BT, CT, PT, INR, APTT and plasma calcium levels were estimated. For ex vivo analyses, each 1 ml of aspirated blood was exposed to a 0.1 mg, 0.2 mg or 0.3 mg dose of hydergine, with parallel controls; the parameters analyzed were as above. Statistical analysis was by one-way ANOVA; Duncan's and Tukey's tests provided intra-group variance. BT, CT, PT, INR and APTT increased while calcium levels dropped significantly (P
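The one-way ANOVA used for the statistical analysis can be reproduced in a few lines: the F statistic is the between-group mean square over the within-group mean square. The dose-group readings below are hypothetical, for illustration only, not the paper's data.

```python
def one_way_anova_F(groups):
    """One-way ANOVA F statistic for k groups of (possibly unequal)
    sizes: between-group over within-group mean squares."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical bleeding-time readings for three dose groups (illustrative)
F = one_way_anova_F([[2.1, 2.3, 2.2], [3.0, 3.2, 3.1], [4.1, 4.0, 4.2]])
print(round(F, 1))  # large F: group means differ far more than within-group noise
```

The F statistic would then be compared to the F distribution with (k-1, n-k) degrees of freedom to obtain the p-value.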
Abstract: Active research is underway on virtual touch screens
that complement the physical limitations of conventional touch
screens. This paper discusses a virtual touch screen that uses a
multi-layer perceptron to recognize and control three-dimensional
(3D) depth information from a time-of-flight (TOF) camera. The
system extracts an object's area from the input image and compares
it with the trajectory of the object, learned in advance, to
recognize gestures. The system enables the manipulation of content in
virtual space by utilizing human actions.
Abstract: This study aimed to develop and initially validate an instrument that measures social competency among tertiary-level faculty members. A review of the extant literature on social competence was conducted, which led to the writing of the items in the initial instrument. This instrument was evaluated by 11 Subject Matter Experts (SMEs), who were either educators or psychologists. The results of the SMEs' evaluations served as the basis for creating the pre-try-out instrument used in the first trial run. Insights from the first trial-run participants led to the main try-out instrument used in the final test administration. One hundred forty-one participants from five private Higher Education Institutions (HEIs) in the National Capital Region (NCR) and five private HEIs in Central Luzon in the Philippines participated in the final test administration. The reliability of the instrument was evaluated using Cronbach's coefficient alpha formula, yielding a Cronbach's alpha of 0.92. Factor analysis was used to evaluate the validity of the instrument, and six factors were identified. The final instrument was developed based on the results of these reliability and validity evaluations. For purposes of recognition, the instrument was named the "Social Competency Inventory for Tertiary Level Faculty Members (SCI-TLFM)."
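Cronbach's coefficient alpha, used above to evaluate reliability, is k/(k-1) times one minus the ratio of the summed item variances to the variance of the respondents' total scores. A small sketch with hypothetical item scores (not the study's data):

```python
def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(item_scores):
    """Cronbach's coefficient alpha: item_scores is a list of columns,
    each column holding one item's scores across all respondents."""
    k = len(item_scores)
    totals = [sum(col[i] for col in item_scores)
              for i in range(len(item_scores[0]))]
    item_var = sum(variance(col) for col in item_scores)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical 3-item, 4-respondent example (illustrative only)
items = [[4, 3, 5, 2], [4, 2, 5, 3], [5, 3, 4, 2]]
print(round(cronbach_alpha(items), 2))
```

Items that co-vary strongly across respondents push alpha toward 1; uncorrelated items push it toward 0.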
Abstract: The current speech interfaces in many military
applications may be adequate for native speakers. However,
the recognition rate drops considerably for non-native speakers
(people with foreign accents), mainly because non-native
speakers exhibit large temporal and intra-phoneme
variations when they pronounce the same words. The
problem is further complicated by the presence of strong
environmental noise such as tank noise, helicopter noise, etc.
In this paper, we propose a novel continuous acoustic feature
adaptation algorithm for on-line accent and environmental
adaptation. Implemented by incremental singular value
decomposition (SVD), the algorithm captures local acoustic
variation and runs in real-time. This feature-based adaptation
method is then integrated with conventional model-based
maximum likelihood linear regression (MLLR) algorithm.
Extensive experiments have been performed on the NATO
non-native speech corpus with baseline acoustic model trained
on native American English. The proposed feature-based
adaptation algorithm improved the average recognition
accuracy by 15%, while MLLR model-based adaptation
achieved 11% improvement. The corresponding word error
rate (WER) reduction was 25.8% and 2.73%, as compared to
that without adaptation. The combined adaptation achieved
overall recognition accuracy improvement of 29.5%, and
WER reduction of 31.8%, as compared to that without
adaptation.
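The feature-space side of the adaptation can be sketched with a batch SVD standing in for the paper's incremental SVD update: project each incoming feature frame onto the top singular directions of a buffer of recent frames, so the representation tracks the local acoustic variation. The buffer contents, dimensionality and rank below are illustrative assumptions.

```python
import numpy as np

def adapt_frame(buffer, frame, k=3):
    """Sketch of feature-space adaptation: project an incoming feature
    frame onto the top-k singular directions of a sliding buffer of
    recent frames. (Batch SVD here; the paper updates the SVD
    incrementally in real time.)"""
    X = np.asarray(buffer, dtype=float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:k]                       # top-k local directions
    centered = np.asarray(frame, dtype=float) - mean
    return mean + basis.T @ (basis @ centered)

# Recent frames vary only in the first three dimensions; a component in
# any other direction is treated as noise and suppressed.
buf = [[1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0],
       [1, 1, 0, 0, 0, 0], [1, 1, 1, 0, 0, 0]]
print(np.round(adapt_frame(buf, [1, 0, 0, 0.5, 0, 0]), 6))
```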
Abstract: In this paper, perceptions of actors on changes in
crop productivity, quantity and quality of water, and determinants of
their perceptions are analyzed using descriptive statistics and an
ordered logit model. Data collected from 297 Ethiopian farmers and 103
agricultural professionals from December 2009 to January 2010 are
employed. Results show that the majority of farmers and
professionals recognized a decline in water resources, citing
climate change and soil erosion among the causes. However,
there is a variation in views on changes in productivity. Household
assets, education level, age and geographical position are found to
affect farmers' perception of changes in crop productivity.
However, the study underlines that there is no evidence that farmers'
economic status, age, or education level affects recognition of the
degradation of water resources. Thus, more focus should be given to
providing them with coping mechanisms and alternative
resource-conserving technologies than to educating them about the
problems.
Abstract: Information Retrieval studies models and systems that
allow a user to find the documents relevant to his or her
information need. Information search remains a difficult problem
because of the difficulty of representing and processing
natural-language phenomena such as polysemy. Intentional structures
promise to be a new paradigm for extending existing document
structures and enhancing the different phases of document
processing, such as creation, editing, search and retrieval.
Recognizing the intentions of the authors of texts can reduce the
scale of this problem. In this article, we present an intention
recognition system based on a semi-automatic method for extracting
intentional information from a corpus of text.
This system is also able to update the ontology of intentions to
enrich the knowledge base containing all possible intentions of a
domain. The approach relies on the construction of a semi-formal
ontology, considered as the conceptualization of the intentional
information contained in a text. Experiments on scientific
publications in the field of computer science were conducted to
validate this approach.
Abstract: Cognitive science appeared about 40 years ago, in the
wake of the challenge posed by Artificial Intelligence, as common
territory for several scientific disciplines: computer science,
mathematics, psychology, neurology, philosophy, sociology, and
linguistics. The newborn science was justified on one hand by the
complexity of the problems related to human knowledge, and on the
other by the fact that none of the above-mentioned sciences could
explain mental phenomena alone. Based on the data supplied by
experimental sciences such as psychology and neurology, cognitive
science builds models of the operation of the human mind. These
models are implemented in computer programs and/or electronic
circuits (specific to artificial intelligence), called cognitive
systems, whose competences and performances are compared to human
ones, leading to the reinterpretation of psychological and
neurological data and to the construction of new models. In these
processes, psychology provides the experimental basis, while
philosophy and mathematics provide the level of abstraction utterly
necessary for the interplay of the sciences mentioned.
The general problematic of the cognitive approach yields two
important types of approach: the computational one, starting from
the idea that mental phenomena can be reduced to calculus operations
on 1s and 0s, and the connectionist one, which considers the
products of thinking to be the result of interaction between all the
component systems. In psychology, measurements in the computational
register use classical questionnaires and psychometric tests,
generally based on calculus methods. Considering both sides of
cognitive science, we can notice a gap in the possibilities for
measuring psychological products from the connectionist perspective,
which requires a unitary understanding of the quality-quantity
whole. In such an approach, measurement by calculus proves
inefficient. Our research, carried out over more than 20 years,
leads to the conclusion that measurement by forms properly fits
connectionist laws and principles.
Abstract: This paper illustrates the use of a combined neural
network model for classification of electrocardiogram (ECG) beats.
We present a trainable neural network ensemble approach to develop a
customized electrocardiogram beat classifier in an effort to further
improve the performance of ECG processing and to offer
individualized health care.
We propose a three-stage technique for distinguishing premature
ventricular contractions (PVC) from normal beats and other heart
diseases. The method comprises denoising, feature extraction and
classification stages. First we investigate the application of the
stationary wavelet transform (SWT) for noise reduction of the
electrocardiogram (ECG) signals. The feature extraction module then
extracts 10 ECG morphological features and one timing interval
feature. Finally, a number of multilayer perceptron (MLP) neural
networks with different topologies are designed.
The performance of the different combination methods, as well as
the efficiency of the whole system, is presented. Among them,
stacked generalization, the proposed trainable combined neural
network model, achieves the highest recognition rate, around 95%,
and therefore proves to be a suitable candidate for ECG signal
diagnosis systems. ECG samples attributed to the different ECG beat
types were extracted from the MIT-BIH arrhythmia database for the
study.
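Stacked generalization itself can be shown on a toy problem: base classifiers produce outputs, and a trainable meta-learner is then fitted on those outputs. In the sketch below a "PVC" requires two cues at once, so each one-cue base classifier is only 75% accurate while the stacked combiner is exact; the data, the fixed base rules (standing in for the MLPs) and the least-squares meta-learner are all illustrative assumptions, not the paper's models.

```python
import numpy as np

# Toy beat descriptors: [wide QRS, premature] cues (illustrative);
# in this toy a "PVC" requires BOTH cues to be present.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]] * 10, dtype=float)
y = X[:, 0] * X[:, 1]                        # label: PVC iff both cues

# Two weak base classifiers, each looking at one cue only (75% accurate)
base_outputs = np.column_stack([X[:, 0], X[:, 1]])

# Stacked generalization: fit a linear meta-learner (least squares here)
# on the base-classifier outputs, plus an intercept column.
Z = np.column_stack([base_outputs, np.ones(len(X))])
w, *_ = np.linalg.lstsq(Z, y, rcond=None)
stacked = (Z @ w > 0.5).astype(float)

print("base-1 accuracy:", (base_outputs[:, 0] == y).mean())
print("stacked accuracy:", (stacked == y).mean())
```

The meta-learner effectively learns the AND of the base outputs, which no single base classifier can express; this is the mechanism by which stacking can beat every individual member.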
Abstract: This paper describes a complex energy signal model
that is isomorphic with digital human fingerprint images. By using
signal models, the problem of fingerprint matching is transformed
into the signal processing problem of finding a correlation between
two complex signals that differ by phase-rotation and time-scaling. A
technique for minutiae matching that is independent of image
translation, rotation and linear-scaling, and is resistant to missing
minutiae is proposed. The method was tested using random data
points. The results show that, for matching prints, the scaling
factor and rotation angle are closely estimated, and a stronger
match yields a higher correlation.
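The phase-rotation and scaling idea can be made concrete by treating minutiae as points in the complex plane: after centering removes translation, a single least-squares complex ratio recovers the scaling factor and rotation angle jointly. A NumPy sketch on synthetic minutiae, assuming an exact similarity transform and no missing points (the paper's method additionally handles missing minutiae):

```python
import numpy as np

rng = np.random.default_rng(2)

# Minutiae of one print as points in the complex plane (random toy data)
z1 = rng.normal(size=8) + 1j * rng.normal(size=8)

# Second impression: translated, rotated and linearly scaled version
s, theta, t = 1.3, 0.4, (2.0 + 1.0j)
z2 = s * np.exp(1j * theta) * z1 + t

# Centering removes translation; a least-squares complex ratio then
# recovers scale and rotation jointly as |ratio| and angle(ratio).
c1, c2 = z1 - z1.mean(), z2 - z2.mean()
ratio = np.sum(c2 * np.conj(c1)) / np.sum(np.abs(c1) ** 2)
print(round(float(abs(ratio)), 3), round(float(np.angle(ratio)), 3))
```

The magnitude of the same correlation also serves as a match score: misaligned or non-matching point sets yield a smaller |ratio| relative to the signal energies.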
Abstract: Images of human iris contain specular highlights due
to the reflective properties of the cornea. This corneal reflection
causes many errors not only in iris and pupil center estimation but
also in locating iris and pupil boundaries, especially for methods
that use active contours. A typical iris recognition system has four
steps: segmentation, normalization, encoding and matching. In order
to address the corneal reflection, a novel reflection removal method
is proposed in this paper. Comparative experiments against two
existing reflection removal methods are conducted on the CASIA iris
image database V3. The experimental results reveal that the
proposed algorithm provides higher performance in reflection
removal.
Abstract: There are two common methodologies to verify
signatures: the functional approach and the parametric approach. This
paper presents a new approach for dynamic handwritten signature
verification (HSV) using a neural network, with verification by a
Conjugate Gradient Neural Network (NN). It is yet another avenue for
HSV, found to produce excellent results when compared with other
dynamic verification methods. Experimental results show the system
is insensitive to the order of base classifiers and achieves a high
verification ratio.
Abstract: This paper presents an effective method for recognizing
traffic lights in the daytime. First, a Potential Traffic Lights
Detector (PTLD) uses the full color information of the YCbCr channel
image to produce binary images of green and red traffic lights.
After the PTLD step, a Shape Filter (SF) is used to remove noise
such as traffic signs, street trees, vehicles, and buildings. The
noise-removal criteria are based on blob properties of the binary
image: length, area, bounding-box area, etc. Finally, after an
intermediate association step whose goal is to define relevant
candidate regions from the previously detected traffic lights, an
Adaptive Multi-class Classifier (AMC) is executed. The
classification method uses Haar-like features and the AdaBoost
algorithm. The system was implemented on an Intel Core CPU at
2.80 GHz with 4 GB RAM and tested on urban and rural roads. In the
tests, our method was compared with standard object-recognition
learning processes and reached a detection rate of up to 94%, better
than the results achieved with cascade classifiers. The computation
time of the proposed method is 15 ms.
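The PTLD color step can be illustrated with a standard BT.601 RGB-to-YCbCr conversion followed by chroma thresholds for reddish and greenish pixels. The threshold values below are illustrative guesses, not the paper's:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def light_masks(rgb, cr_red=160.0, cb_green=100.0, cr_green=100.0):
    """Binary masks of candidate red and green light pixels from the
    chroma channels (threshold values are illustrative assumptions)."""
    ycc = rgb_to_ycbcr(rgb.astype(float))
    red = ycc[..., 2] > cr_red                        # strongly reddish: high Cr
    green = (ycc[..., 1] < cb_green) & (ycc[..., 2] < cr_green)  # low Cb and Cr
    return red, green

# 1x4 test image: pure red, pure green, pure blue, mid gray
img = np.array([[[255, 0, 0], [0, 255, 0], [0, 0, 255], [128, 128, 128]]])
red, green = light_masks(img)
print(red.astype(int))    # only the red pixel fires
print(green.astype(int))  # only the green pixel fires
```

The resulting binary masks are what the subsequent shape filter would prune using blob length, area and bounding-box criteria.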
Abstract: Heart failure is the most common cause of death
nowadays, but if medical help is given promptly, the patient's life
may be saved in many cases. Numerous heart diseases can be
detected by analyzing electrocardiograms (ECG). Artificial
Neural Networks (ANN) are computer-based expert systems that
have proved to be useful in pattern recognition tasks. ANN can be
used in different phases of the decision-making process, from
classification to diagnostic procedures. This work presents a
review followed by a novel method.
The purpose of the review is to assess the evidence of healthcare
benefits involving the application of artificial neural networks to the
clinical functions of diagnosis, prognosis and survival analysis, in
ECG signals. The developed method is based on a compound neural
network (CNN), to classify ECGs as normal or carrying an
AtrioVentricular heart Block (AVB). This method uses three
different feed-forward multilayer neural networks. A single output
unit encodes the probability of AVB occurrences. A value between 0
and 0.1 is the desired output for a normal ECG; a value between 0.1
and 1 would infer an occurrence of an AVB. The results show that
this compound network has a good performance in detecting AVBs,
with a sensitivity of 90.7% and a specificity of 86.05%. The accuracy
value is 87.9%.
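The output coding described above amounts to a simple threshold rule, and sensitivity and specificity follow from the resulting confusion counts. A sketch with hypothetical network outputs (not the paper's test data):

```python
def classify_avb(net_output, threshold=0.1):
    """Decision rule from the abstract: a single output unit encodes the
    probability of AVB; values up to 0.1 mean normal, above 0.1 mean AVB."""
    return "AVB" if net_output > threshold else "normal"

def sensitivity_specificity(pairs):
    """pairs: (true_label, predicted_label) tuples over a test set."""
    tp = sum(t == "AVB" and p == "AVB" for t, p in pairs)
    fn = sum(t == "AVB" and p == "normal" for t, p in pairs)
    tn = sum(t == "normal" and p == "normal" for t, p in pairs)
    fp = sum(t == "normal" and p == "AVB" for t, p in pairs)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical network outputs for six test ECGs (illustrative values)
truth   = ["AVB", "AVB", "AVB", "normal", "normal", "normal"]
outputs = [0.92, 0.35, 0.05, 0.02, 0.08, 0.40]
preds = [classify_avb(o) for o in outputs]
sens, spec = sensitivity_specificity(list(zip(truth, preds)))
print(round(sens, 2), round(spec, 2))
```

Moving the threshold trades sensitivity against specificity, which is how an operating point such as the reported 90.7%/86.05% would be chosen.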
Abstract: This paper presents a system overview of Mobile to Server Face Recognition, a face recognition application developed specifically for mobile phones. Images taken with mobile phone cameras lack quality due to the cameras' low resolution. Thus, a prototype was developed to experiment with the chosen method. This paper, however, reports results for the system backbone without the face recognition functionality. The results demonstrated here indicate that the interaction between mobile phones and the server works successfully; they were obtained before the database was completely ready. System testing is currently ongoing using real images and a mock-up database to test the functionality of the face recognition algorithm used in the system. An overview of the whole system, including screenshots and a system flow chart, is presented, along with the motivation and justification for developing this system.