Abstract: One of the most critical decision points in the design of a
face recognition system is the choice of an appropriate face representation.
Effective feature descriptors are expected to convey sufficient, invariant
and non-redundant facial information. In this work we propose a set of Hahn moments as a new approach to feature description. Hahn moments have been widely used in image analysis due to their invariance, non-redundancy, and ability to extract features both globally and locally.
To assess the applicability of Hahn moments to face recognition, we conduct two experiments on the Olivetti Research Laboratory (ORL) database and the University of Notre-Dame (UND) X1 biometric collection.
Fusion of the global features with features from local facial regions is used as input to a conventional k-NN classifier. The
method reaches an accuracy of 93% correctly recognized subjects for the ORL database and 94% for the UND database.
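The fusion-then-classify step described above can be sketched as follows. This is an illustrative sketch only: the feature extractor below is a stand-in (the paper uses Hahn moments, whose computation is not reproduced here), and the toy data, region boundaries, and array shapes are all assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def extract_features(image, regions):
    """Stand-in for a moment-based descriptor (the paper uses Hahn moments):
    global features from the whole face plus local features from each
    facial region, concatenated into a single fused vector."""
    global_feats = image.mean(axis=0)                       # toy global descriptor
    local_feats = [image[r0:r1].mean(axis=0) for r0, r1 in regions]
    return np.concatenate([global_feats] + local_feats)

rng = np.random.default_rng(0)
# Toy "face images": 4 subjects x 10 images, 32x32, class-dependent intensity.
labels = np.repeat(np.arange(4), 10)
images = [rng.normal(loc=y, size=(32, 32)) for y in labels]
regions = [(0, 10), (10, 22), (22, 32)]                     # e.g. eye/nose/mouth bands

X = np.stack([extract_features(im, regions) for im in images])
clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
accuracy = clf.score(X, labels)
```

In the actual system, the fused vector would contain global Hahn moments alongside moments computed over each local facial region.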
Abstract: This study is part of a larger empirical research project that examines second language (SL) learners' profiles and valid procedures for performing complete and diagnostic assessment in schools. 102 learners of Portuguese as a SL, aged between 7 and 17 years and speakers of distinct home languages, were assessed on several linguistic tasks. In this article, we focus on writing performance in
the specific task of narrative essay composition. The written outputs
were measured using the score in six components adapted from an
English SL assessment context (Alberta Education): linguistic
vocabulary, grammar, syntax, strategy, socio-linguistic, and
discourse. The writing processes and strategies in the Portuguese language used by different immigrant students were analysed to determine the features and diversity of deficits in authentic texts produced by SL writers. Differentiated performance was based on
the diversity of the following variables: grades, previous schooling,
home language, instruction in first language, and exposure to
Portuguese as a Second Language. Speakers of Indo-Aryan languages showed low writing scores compared to their peers, and the type of language and its associated cognitive mapping (as with Mandarin and Arabic) was the predictor, rather than linguistic distance. Home language
instruction should also be prominently considered in further research
to understand specificities of cognitive academic profile in a
Romance language learning context. Additionally, this study examined teachers' representations, which are addressed here to understand the educational implications of second language teaching for the psychological distress of different minorities in the schools of specific host countries.
Abstract: Speaker Identification (SI) is the task of establishing the identity of an individual based on his/her voice characteristics. The SI
task is typically achieved by two-stage signal processing: training and
testing. The training process calculates speaker specific feature
parameters from the speech and generates speaker models
accordingly. In the testing phase, speech samples from unknown
speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still room for improvement. In this paper, a Closed-Set Text-Independent Speaker
Identification System (CISI) based on a Multiple Classifier System
(MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of vector quantization (VQ) and Gaussian Mixture Models (GMM), together with the Expectation Maximization (EM) algorithm, for speaker
modeling. The use of Voice Activity Detector (VAD) with a hybrid
approach based on Short Time Energy (STE) and Statistical
Modeling of Background Noise in the pre-processing step of the
feature extraction yields a better and more robust automatic speaker
identification system. Investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initializing the GMM parameters in the EM step also improved the convergence rate and system performance. In addition, a relative index is used as a confidence measure when the GMM and VQ identification results contradict each other. Simulation results carried out on the voxforge.org
speech database using MATLAB highlight the efficacy of the
proposed method compared to earlier work.
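The train-then-score pipeline described above can be sketched as below. This is a simplified illustration, not the paper's system: the random feature vectors stand in for MFCC frames, and scikit-learn's GaussianMixture (which runs EM with a k-means-style initialization rather than LBG) is used for speaker modeling.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in "MFCC frames" for two enrolled speakers: (n_frames, n_coeffs)
# arrays drawn from speaker-specific distributions.
train = {
    "alice": rng.normal(loc=0.0, scale=1.0, size=(500, 13)),
    "bob": rng.normal(loc=2.0, scale=1.0, size=(500, 13)),
}

# Training phase: fit one GMM per enrolled speaker (EM under the hood).
models = {
    name: GaussianMixture(n_components=4, random_state=0).fit(frames)
    for name, frames in train.items()
}

def identify(frames):
    """Closed-set identification: pick the speaker model with the highest
    average log-likelihood over the test frames."""
    scores = {name: m.score(frames) for name, m in models.items()}
    return max(scores, key=scores.get)

test_frames = rng.normal(loc=2.0, scale=1.0, size=(200, 13))
predicted = identify(test_frames)
```

A real system would precede this with VAD-based pre-processing and MFCC extraction from the audio signal.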
Abstract: A large amount of software products offer a wide
range and number of features. This is called featuritis or creeping
featurism and tends to grow with each release of the product. Featuritis often adds unnecessary complexity to software, leading to longer learning curves, and overall confuses users and degrades their experience. We take a look at an emerging design trend, the so-called "What You Get is What You Need" concept, which argues that products should be very focused and simple, with minimalistic interfaces, in order to help users carry out their tasks in distraction-free environments. This is not as simple to implement as it might sound, and developers need to cut down on features. Our contribution illustrates and evaluates this design method
through a novel distraction-free diagramming tool named Delineato
Pro for Mac OS X in which the user is confronted with an empty
canvas when launching the software and where tools only show up
when really needed.
Abstract: Feature selection has been used in many fields such as
classification, data mining and object recognition and proven to be
effective for removing irrelevant and redundant features from the
original dataset. In this paper, a new design of a distributed intrusion detection system is proposed, using a combined feature selection model based on the Bees Algorithm and a decision tree. The Bees Algorithm is used as the search strategy to find the optimal subset of features, whereas the decision tree is used to judge the selected features. Both the produced features and the generated rules are used by a Decision Making Mobile Agent to decide whether there is an attack in the network. The Decision Making Mobile Agent migrates through the network, moving from one node to another; if it finds an attack on one of the nodes, it alerts the user through the User Interface Agent or takes action through the Action Mobile Agent. The KDD Cup 99
dataset is used to test the effectiveness of the proposed system. The
results show that even when only four features are used, the proposed system achieves better performance than that obtained using all 41 features.
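The wrapper idea of searching feature subsets and judging each subset with a decision tree can be sketched as follows. This is a deliberately simplified, bees-inspired neighborhood search on synthetic data, not the paper's Bees Algorithm or the KDD Cup 99 pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=4,
                           n_redundant=2, random_state=0)

def fitness(mask):
    """Judge a candidate feature subset with a decision tree (wrapper model)."""
    if not mask.any():
        return 0.0
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

# Bees-inspired local search: explore the neighborhood of the best site
# by flipping a few bits of the feature-selection mask.
best = rng.random(20) < 0.5
best_score = fitness(best)
for _ in range(30):
    neighbor = best.copy()
    flip = rng.integers(0, 20, size=2)
    neighbor[flip] = ~neighbor[flip]
    score = fitness(neighbor)
    if score > best_score:
        best, best_score = neighbor, score
```

The full Bees Algorithm maintains multiple scout and elite sites in parallel; only the subset-evaluation pattern is shown here.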
Abstract: File sharing in networks is generally achieved using
Peer-to-Peer (P2P) applications. Structured P2P approaches are widely used in ad-hoc networks due to their distributed and scalable nature. Efficient mechanisms are required to handle the huge amount of data distributed to all peers. The intrinsic characteristics of P2P systems make content distribution easier when compared to the client-server architecture. All the nodes in a P2P network act as both client and server; thus, distributing data takes less time than in the client-server method. Chord is a resource-routing protocol in which nodes and data items are structured into a one-dimensional ring. The structured lookup algorithm of Chord is advantageous for distributed P2P networking applications. However, although the structured approach improves lookup performance in high-bandwidth wired networks, it can introduce unnecessary overhead in overlay networks, leading to degradation of network performance.
In this paper, the performance of the existing Chord protocol on a Wireless Mesh Network (WMN), when nodes are static and dynamic, is investigated.
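Chord's identifier-ring lookup can be sketched as below: a minimal, single-process illustration of successor resolution and finger tables on a small identifier space. The 6-bit ring and the node identifiers are arbitrary choices, and no network I/O or node churn is modeled.

```python
# Minimal Chord-style lookup on a 2^m identifier ring (no networking).
M = 6                      # identifier bits -> ring of size 64 (illustrative)
RING = 2 ** M
nodes = sorted([1, 8, 14, 21, 32, 38, 42, 48, 51, 56])

def successor(ident):
    """First node clockwise from ident on the ring."""
    for n in nodes:
        if n >= ident % RING:
            return n
    return nodes[0]        # wrap around past the highest identifier

def finger_table(n):
    """Finger i of node n points at successor(n + 2^i), giving each node
    O(log N) shortcuts across the ring."""
    return [successor((n + 2 ** i) % RING) for i in range(M)]

# A key is stored at the successor of its identifier.
key_id = 26
owner = successor(key_id)
fingers = finger_table(8)
```

In a WMN deployment, each `successor` hop is a real overlay message, which is exactly where the overhead discussed above arises.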
Abstract: This paper introduces a boost converter with a new
active snubber cell. In this circuit, all of the semiconductor components in the converter turn on and turn off softly with the
help of the active snubber cell. Compared to the other converters, the
proposed converter has advantages of size, number of components
and cost. The main feature of the proposed converter is that no extra voltage stresses occur on the main switches and main diodes. Also, the current stress on the main switch is at an acceptable level. Moreover, the proposed converter can operate under light load conditions and over a wide input line voltage range. In this study, the operating
principle of the proposed converter is presented and its operation is
verified with the Proteus simulation software for a 1 kW and 100 kHz
model.
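As background for a simulated operating point like the one above, the ideal continuous-conduction boost relation V_out = V_in / (1 - D) can be checked numerically. The voltage and power values below are illustrative assumptions, not figures from the paper.

```python
# Ideal boost converter in continuous conduction mode (CCM):
#   Vout = Vin / (1 - D),  Iin = Pout / Vin  (lossless assumption)
def boost_vout(vin, duty):
    assert 0 <= duty < 1, "duty cycle must be in [0, 1)"
    return vin / (1.0 - duty)

def duty_for(vin, vout):
    """Duty cycle needed to reach vout from vin in ideal CCM."""
    return 1.0 - vin / vout

# Illustrative operating point (assumed, not from the paper):
vin, vout, pout = 200.0, 400.0, 1000.0   # volts, volts, watts (1 kW)
d = duty_for(vin, vout)                  # 0.5 for a 2:1 step-up
iin = pout / vin                         # average input current, amps
```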
Abstract: This paper presents the local mesh co-occurrence
patterns (LMCoP) using HSV color space for image retrieval system.
The HSV color space is used in this method to exploit the color, intensity, and brightness of images. Local mesh patterns are applied to capture the local information of an image, and gray-level co-occurrence is used to obtain the co-occurrence of LMeP pixels. The local mesh co-occurrence pattern extracts local directional information from the local mesh pattern and converts it into a well-structured feature vector using the gray-level co-occurrence matrix. The proposed method is tested on three
different databases called MIT VisTex, Corel, and STex. Also, this
algorithm is compared with existing methods, and results in terms of
precision and recall are shown in this paper.
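The gray-level co-occurrence step can be illustrated with a minimal GLCM computed from scratch. A tiny pre-quantized image and a single right-neighbor offset are assumed here for clarity, whereas the paper applies co-occurrence to LMeP-coded pixels over multiple directions.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy):
    counts how often gray level i occurs next to gray level j."""
    h, w = img.shape
    mat = np.zeros((levels, levels), dtype=int)
    for y in range(h - dy):
        for x in range(w - dx):
            mat[img[y, x], img[y + dy, x + dx]] += 1
    return mat

# Tiny 4-level image (values already quantized to 0..3).
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
m = glcm(img, levels=4)
# The normalized matrix (or statistics such as contrast and energy
# derived from it) becomes part of the final feature vector.
feature = m.flatten() / m.sum()
```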
Abstract: Phonocardiography is important in appraisal of
congenital heart disease and pulmonary hypertension as it reflects the
duration of right ventricular systoles. The systolic murmur in patients
with intra-cardiac shunt decreases as pulmonary hypertension
develops and may eventually disappear completely as the pulmonary
pressure reaches systemic level. Phonocardiography and auscultation
are non-invasive, low-cost, and accurate methods to assess heart
disease. In this work, an objective signal processing tool that extracts information from the phonocardiography signal using wavelets is proposed to classify murmurs as normal or abnormal. Since the feature vector is large, Binary Particle Swarm Optimization (PSO) with mutation is proposed for feature selection. The extracted features improve the classification accuracy and were tested across various classifiers, including Naïve Bayes, kNN, C4.5, and SVM.
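A bare-bones binary PSO with a bit-flip mutation step can be sketched as follows. The fitness function here simply rewards matching a hidden target mask, standing in for the classifier-accuracy fitness the paper would use; swarm size, coefficients, and mutation rate are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bits, n_particles = 16, 10
target = (rng.random(n_bits) < 0.5)        # stand-in for the "best" subset

def fitness(bits):
    return (bits == target).mean()         # proxy for classifier accuracy

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

pos = (rng.random((n_particles, n_bits)) < 0.5).astype(int)
vel = np.zeros((n_particles, n_bits))
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(60):
    r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = (rng.random(vel.shape) < sigmoid(vel)).astype(int)  # binary update
    mut = rng.random(pos.shape) < 0.01                        # mutation: bit flips
    pos = np.where(mut, 1 - pos, pos)
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmax()].copy()

best_fitness = fitness(gbest)
```

In the paper's setting, each bit would flag one wavelet-derived feature, and fitness would be the accuracy of a classifier trained on the selected subset.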
Abstract: Typical load-bearing biological materials, like bone, mineralized tendon, and shell, are biocomposites made from both organic (collagen) and inorganic (biomineral) materials. This remarkable class of materials, with intrinsically designed hierarchical structures, shows mechanical properties superior to those of the weak components from which they are formed. Extensive investigations concentrating on static loading conditions have been carried out to study the failure of biological materials. However, most of the damage and failure mechanisms in load-bearing biological materials occur when their structures are exposed to dynamic loading conditions. The main question to be
answered here is: What is the relation between the layout and
architecture of the load-bearing biological materials and their
dynamic behavior? In this work, a staggered model has been developed based on the structure of natural materials at the nanoscale, and
Finite Element Analysis (FEA) has been used to study the dynamic
behavior of the structure of load-bearing biological materials to
answer why the staggered arrangement has been selected by nature to
make the nanocomposite structure of most of the biological materials.
The results showed that staggered structures attenuate the stress wave more efficiently than the layered structure. Furthermore, such a staggered architecture effectively utilizes the capacity of the biostructure to resist both normal and shear loads. In this work, the geometrical parameters of the model, such as the thickness and aspect ratio of the mineral inclusions, were selected from the typical range of experimentally observed feature sizes and layout dimensions of biological materials such as bone and mineralized tendon. Furthermore, the numerical results were validated against existing theoretical solutions. The findings of the present work emphasize the significant effects of dynamic behavior on the natural evolution of load-bearing biological materials and can help scientists design bioinspired materials in the laboratory.
Abstract: The quantitative study of cell mechanics is of
paramount interest, since it regulates the behaviour of the living cells
in response to the myriad of extracellular and intracellular
mechanical stimuli. The novel experimental techniques together with
robust computational approaches have given rise to new theories and
models, which describe cell mechanics as a combination of biomechanical and biochemical processes. This review paper
encapsulates the existing continuum-based computational approaches
that have been developed for interpreting the mechanical responses of
living cells under different loading and boundary conditions. The
salient features and drawbacks of each model are discussed from both
structural and biological points of view. This discussion can
contribute to the development of even more precise and realistic
computational models of cell mechanics based on continuum
approaches or on their combination with microstructural approaches,
which in turn may provide a better understanding of
mechanotransduction in living cells.
Abstract: Artificial neural networks have gained a lot of interest
as empirical models for their powerful representational capacity,
and multi-input/output mapping characteristics. In fact, most feedforward networks with nonlinear nodal functions have been proved to be universal approximators. In this paper, we propose a new supervised method for color image classification based on self-organizing feature maps (SOFM). This algorithm is based on competitive learning. The method partitions the input space using self-organizing feature maps to introduce the concept of local neighborhoods. Our image classification system takes RGB images as input. Experiments with simulated data showed that the separability of classes increased with increasing training time. In addition, the results show that the proposed algorithm is effective for color image classification.
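The competitive-learning update at the heart of an SOFM can be sketched in a few lines. The map size, learning-rate schedule, and random RGB training data below are illustrative choices, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = 4                                  # 4x4 map (illustrative size)
w = rng.random((grid, grid, 3))           # one RGB weight vector per map node
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                              indexing="ij"), axis=-1)

def train_step(x, lr, sigma):
    """One competitive-learning step: find the best-matching unit (BMU),
    then pull it and its grid neighborhood toward the input color."""
    global w
    bmu = np.unravel_index(np.argmin(((w - x) ** 2).sum(-1)), (grid, grid))
    dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
    h = np.exp(-dist2 / (2 * sigma ** 2))        # neighborhood function
    w += lr * h[..., None] * (x - w)

data = rng.random((500, 3))               # simulated RGB pixels in [0, 1]
for t, x in enumerate(data):
    frac = t / len(data)                  # decay learning rate and radius
    train_step(x, lr=0.5 * (1 - frac), sigma=2.0 * (1 - frac) + 0.1)

# Quantization error: average distance from each sample to its BMU.
qe = np.mean([np.min(np.linalg.norm(w.reshape(-1, 3) - x, axis=1))
              for x in data])
```

The supervised stage described in the abstract would then assign class labels to map nodes from labeled training pixels.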
Abstract: In light of the technological development and its
introduction into the field of education, an online course was
designed in parallel to the 'conventional' course for teaching the
''Qualitative Research Methods''. This course aimed to characterize
learning-teaching processes in a 'Qualitative Research Methods'
course studied in two different frameworks. Moreover, its objective
was to explore the difference between the culture of a physical
learning environment and that of online learning. The research
monitored four learner groups, a total of 72 students, for two years,
two groups from the two course frameworks each year. The courses
were obligatory for M.Ed. students at an academic college of
education and were given by one female lecturer. The research was
conducted in the qualitative method as a case study in order to attain
insights about occurrences in the actual contexts and sites in which
they transpire. The research tools were an open-ended questionnaire and reflections in the form of vignettes (meaningful short pictures), administered to all students, as well as an interview with the lecturer. The tools facilitated
not only triangulation but also collecting data consisting of voices
and pictures of teaching and learning. The most prominent findings are differences between the two courses in the changing features of the learning environment culture for the acquisition of content and qualitative research tools. These were manifested in the teaching methods, illustration aids, the lecturer's profile, and the students' profiles.
Abstract: While choosing insulating oil, characteristic features
such as thermal cooling, endurance, efficiency and being
environment-friendly should be considered. Mineral oils are referred to as petroleum-based oils. In this study, vegetable oils are investigated as an alternative insulating liquid to mineral oil. The dissipation factor, breakdown voltage, relative dielectric constant, and resistivity of mineral, rapeseed, and nut oils were measured as functions of frequency and voltage. Experimental studies were performed according
to ASTM D924 and IEC 60156 standards.
Abstract: The literature on language teaching and second
language acquisition has been largely driven by monolingual
ideology with a common assumption that a second language (L2) is
best taught and learned in the L2 only. The current study challenges
this assumption by reporting learners' positive perceptions of tertiary
level teachers' code switching practices in Vietnam. The findings of
this study contribute to our understanding of code switching practices
in language classrooms from a learners' perspective.
Data were collected through focus group interviews from student participants who were working towards a Bachelor's degree in English within the English for Business Communication stream.
The literature has documented that this method of interviewing has a
number of distinct advantages over individual student interviews. For
instance, group interactions generated by focus groups create a more
natural environment than that of an individual interview because they
include a range of communicative processes in which each individual
may influence or be influenced by others - as they are in their real
life. The process of interaction provides the opportunity to obtain the
meanings and answers to a problem that are "socially constructed
rather than individually created" leading to the capture of real-life
data. The distinct feature of group interaction offered by this
technique makes it a powerful means of obtaining deeper and richer
data than those from individual interviews. The data generated
through this study were analysed using a constant comparative
approach. Overall, the students expressed positive views of this
practice indicating that it is a useful teaching strategy. Teacher code
switching was seen as a learning resource and a source supporting
language output. This practice was perceived to promote student
comprehension and to aid the learning of content and target language
knowledge. This practice was also believed to scaffold the students'
language production in different contexts. However, the students
indicated their preference for teacher code switching to be
constrained, as extensive use was believed to negatively impact on
their L2 learning and trigger cognitive reliance on the L1 for L2
learning. The students also perceived that when the L1 was used to a
great extent, their ability to develop as autonomous learners was
negatively impacted.
This study found that teacher code switching was supported by learners in certain contexts, suggesting that the widespread assumption behind the monolingual teaching approach needs to be reconsidered.
Abstract: Today, a large number of political transcripts are available on the Web to be mined and used for statistical analysis and product recommendations. As online political resources are used for various purposes, automatically determining the political orientation of these transcripts becomes crucial. The methodologies
used by machine learning algorithms to perform automatic classification are based on different features, classified under categories such as Linguistic and Personality. Considering the ideological
differences between Liberals and Conservatives, in this paper, the
effect of Personality traits on political orientation classification is
studied. The experiments in this study were based on the correlation
between LIWC features and the BIG Five Personality traits. Several
experiments were conducted using the Convote U.S. Congressional-Speech dataset with seven benchmark classification algorithms. The different methodologies were applied to several LIWC feature sets, consisting of 8 to 64 features correlated with the five personality traits. The experiments showed Neuroticism to be the most differentiating personality trait for the classification of political orientation. At the same time, it was observed that the personality-trait-based classification methodology gives results that are better than or comparable to those of related work.
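The idea of classifying on trait-correlated feature subsets can be sketched as follows. The synthetic data, the correlation threshold, and logistic regression (one of many possible benchmark classifiers) are all illustrative assumptions rather than the paper's actual setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, n_feats = 400, 30
label = rng.integers(0, 2, size=n)               # toy political-orientation label
trait = label + rng.normal(scale=0.5, size=n)    # toy personality-trait score

# Toy LIWC-style features: the first 8 carry trait signal, the rest are noise.
X = rng.normal(size=(n, n_feats))
X[:, :8] += trait[:, None] * 0.8

# Keep only features whose |correlation| with the trait passes a threshold.
corr = np.array([abs(np.corrcoef(X[:, j], trait)[0, 1]) for j in range(n_feats)])
subset = corr > 0.3
acc = cross_val_score(LogisticRegression(max_iter=1000),
                      X[:, subset], label, cv=5).mean()
```

In the study itself, the subsets would be LIWC features correlated with each Big Five trait, evaluated across the seven benchmark classifiers.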
Abstract: Background in music analysis: Traditionally, when we
think about a composer’s sketches, the chances are that we are
thinking in terms of the working out of detail, rather than the
evolution of an overall concept. Since music is a “time art,” it follows that questions of form cannot be entirely detached from considerations of time. One could say that composers tend to regard time either as a space filled gradually and partly intuitively, or as something to be occupied according to a specific strategy. One thing that sheds light on Stockhausen’s compositional thinking is his frequent use of “form schemas”: often a single-page representation of the entire structure of a piece.
Background in music technology: Sonic Visualiser is a program
used to study a musical recording. It is an open source application for
viewing, analyzing, and annotating music audio files. It contains a
number of visualisation tools, which are designed with useful default
parameters for musical analysis. Additionally, the Vamp plugin format of SV supports analyses such as structural segmentation.
Aims: The aim of this paper is to show how SV may be used to obtain a better understanding of a specific musical work, and how the compositional strategy impacts musical structures and musical surfaces. It is known that “traditional” music-analytic methods do not allow one to indicate the interrelationships between the musical surface (which is perceived) and the underlying musical/acoustical structure.
Main Contribution: Stockhausen dealt with the most diverse musical problems by the most varied methods. A characteristic he never ceased to place at the center of his thought and work was the quest for a new balance founded upon an acute connection between speculation and intuition. In the case of Mikrophonie I (1964) for tam-tam and 6 players, Stockhausen makes a distinction between the “connection scheme,” which indicates the ground rules underlying all versions, and the form scheme, which is associated with a particular version. The preface to the published score includes both the connection scheme and a single instance of a “form scheme,” which is what one can hear on the CD recording. In the current study, insight into the compositional strategy chosen by Stockhausen is compared with the auditory image, that is, with the perceived musical surface. Stockhausen’s musical work is analyzed in terms of both melodic/voice and timbre evolution.
Implications: The current study shows how musical structures determine the musical surface. The general assumption is that, while listening to music, we can extract basic kinds of musical information from musical surfaces. It is shown that interactive strategies of musical structure analysis can offer a very fruitful way of looking directly into certain structural features of music.
Abstract: In the field of fashion design, 3D Mannequin is a kind
of assisting tool which could rapidly realize the design concepts.
When the concept of the 3D Mannequin is applied to computer-aided fashion design, it connects with the development and application of design platforms and systems. It is therefore critical to develop a 3D Mannequin module that corresponds to the needs of fashion design. This research proposes a concrete plan for developing and constructing a 3D Mannequin system with the Kinect. Ergonomic measurements of a subject's human features can be attained in real time through the Kinect depth camera, and mesh morphing can then be implemented by transforming the locations of the control points on the model according to those ergonomic data, yielding an individualized 3D mannequin model. In the proposed methodology, after the scanned points from the
Kinect are corrected for accuracy and smoothed, a complete human figure is reconstructed by the ICP algorithm together with image processing methods. The subject's human features can also be recognized, analyzed, and measured. Furthermore, the ergonomic measurement data can be applied to shape morphing for the subdivisions of the 3D Mannequin reconstructed from feature curves. Since a standardized and customer-oriented 3D Mannequin can be generated through subdivision, this research can be applied to fashion design or to the presentation and display of 3D virtual clothes. To examine the practicality of the proposed structure, a 3D Mannequin system is constructed in JAVA in this study, and its practicability is demonstrated through iterative experiments.
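The rigid alignment step at the core of ICP can be sketched with the Kabsch (SVD) method below; a full ICP loop would alternate this step with nearest-neighbor matching of the scanned points. The toy point set and transform are illustrative, not Kinect data.

```python
import numpy as np

def rigid_align(src, dst):
    """Kabsch step used inside ICP: least-squares rotation R and
    translation t mapping corresponding points src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

rng = np.random.default_rng(0)
scan = rng.random((50, 3))                 # toy "scanned" point cloud
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
target = scan @ R_true.T + np.array([0.5, -0.2, 1.0])

R_est, t_est = rigid_align(scan, target)
residual = np.linalg.norm(scan @ R_est.T + t_est - target)
```

With noisy, partially overlapping Kinect scans, the correspondences are unknown, which is why ICP re-estimates them with a nearest-neighbor search before each alignment step.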
Abstract: This paper describes the main features of a knowledge-based system evaluation method. System evaluation is placed in the context of a hybrid legal decision-support system, Advisory Support for Home Settlement in Divorce (ASHSD). Legal knowledge for ASHSD is represented in two forms, as rules and as previously decided cases. Besides distinguishing the two different forms of knowledge representation, the paper outlines the actual use of these forms in a computational framework that is designed to generate a plausible solution for a given case by using rule-based reasoning (RBR) and case-based reasoning (CBR) in an integrated environment. The suitability assessment of a solution has been treated as a multiple-criteria decision-making process in the ASHSD evaluation. The evaluation was performed through a combination of discussions and questionnaires with different user groups. The answers to the questionnaires used in this evaluation method were measured as fuzzy linguistic terms. The findings suggest that fuzzy linguistic evaluation is practical and meaningful for knowledge-based system development.
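One common way to operationalize fuzzy linguistic evaluation, sketched below, is to map each linguistic answer to a triangular fuzzy number and aggregate by averaging. The term set and the questionnaire answers are invented for illustration and are not taken from the ASHSD study.

```python
# Triangular fuzzy numbers (low, mode, high) for a 5-term linguistic scale.
TERMS = {
    "very poor": (0.0, 0.0, 0.25),
    "poor":      (0.0, 0.25, 0.5),
    "fair":      (0.25, 0.5, 0.75),
    "good":      (0.5, 0.75, 1.0),
    "very good": (0.75, 1.0, 1.0),
}

def aggregate(answers):
    """Average the fuzzy numbers component-wise, then defuzzify by the
    centroid of the resulting triangle."""
    n = len(answers)
    lo = sum(TERMS[a][0] for a in answers) / n
    md = sum(TERMS[a][1] for a in answers) / n
    hi = sum(TERMS[a][2] for a in answers) / n
    return (lo, md, hi), (lo + md + hi) / 3.0

# Hypothetical questionnaire answers for one evaluation criterion.
answers = ["good", "very good", "fair", "good"]
fuzzy, crisp = aggregate(answers)
```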
Abstract: In addition to environmental parameters such as rain and temperature, crop disease is a major factor affecting the quality and quantity of crop yield. Hence, disease management is a key issue in agriculture. For the management of disease, it needs to be detected at an early stage, so that it can be treated properly and its spread controlled. Nowadays, it is possible to use images of a diseased leaf to detect the type of disease using image
images of diseased leaf to detect the type of disease by using image
processing techniques. This can be achieved by extracting features
from the images which can be further used with classification
algorithms or content-based image retrieval systems. In this paper, a color image is used to extract features such as the mean and standard deviation after region cropping. The selected features are taken from the cropped image at different image sizes. The extracted features are then taken into account for classification using a Fuzzy Inference System (FIS).
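The crop-then-summarize feature step can be sketched as follows. The synthetic leaf image and the crop region are illustrative assumptions; per-channel mean and standard deviation features like these would then be fed to a classifier such as an FIS.

```python
import numpy as np

def region_features(image, top, left, height, width):
    """Crop a region from an RGB image and return per-channel mean and
    standard deviation as a 6-element feature vector."""
    crop = image[top:top + height, left:left + width]
    return np.concatenate([crop.mean(axis=(0, 1)), crop.std(axis=(0, 1))])

# Synthetic 64x64 RGB "leaf" with a brighter square standing in for a lesion.
img = np.zeros((64, 64, 3))
img[:, :, 1] = 0.6                       # green background
img[20:40, 20:40] = [0.8, 0.7, 0.2]      # uniform "diseased" patch

feats = region_features(img, 20, 20, 20, 20)
```

Because the cropped patch here is uniform, its per-channel standard deviations are zero; on real leaf images both statistics carry disease-related texture and color information.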