Abstract: OPEN_EmoRec_II is an open multimodal corpus with
experimentally induced emotions. In the first half of the experiment,
emotions were induced with standardized picture material and in the
second half during a human-computer interaction (HCI), realized
with a wizard-of-oz design. The induced emotions are based on the
dimensional theory of emotions (valence, arousal and dominance).
These emotional sequences, recorded as multimodal data (facial
reactions, speech, audio and physiological reactions) in a
naturalistic HCI environment, can be used to improve classification
methods on a multimodal level.
This database is the result of an HCI experiment for which 30
subjects in total agreed to a publication of their data, including the
video material, for research purposes*. The now available open
corpus contains the following sensory signals: video, audio, physiology
(SCL, respiration, BVP, EMG corrugator supercilii, EMG zygomaticus
major) and facial-reaction annotations.
Abstract: With the growth of computers and networks, digital
data can be spread anywhere in the world quickly. In addition,
digital data can also be copied or tampered with easily, so security
has become an important topic in the protection of digital data.
A digital watermark is a method to protect the ownership of digital data.
Embedding a watermark inevitably affects image quality. In this
paper, Vector Quantization (VQ) is used to embed the watermark into
the image to fulfill the goal of data hiding. This kind of watermarking
is invisible, meaning that users will not notice the embedded
watermark even though the watermarked image differs slightly
from the original image. Meanwhile, VQ carries a heavy computational
burden, so we adopt a fast VQ encoding scheme based on
partial distortion searching (PDS) and a mean approximation scheme to
speed up the data hiding process.
The watermarks we hide in the image can be gray-scale, bi-level or
color images; text can also be embedded as a watermark.
To test the robustness of the system, we use Photoshop to
apply sharpening, cropping and alteration and check whether the
extracted watermark is still recognizable. Experimental results
demonstrate that the proposed system can resist these three kinds of
tampering in general cases.
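The PDS speed-up mentioned in this abstract can be sketched as follows: while searching for the nearest codeword, a candidate is abandoned as soon as its partial squared distance already exceeds the best distance found so far, so most codewords never need a full distance computation. The toy codebook and input vector below are made-up illustrations, not the paper's data.

```python
def pds_nearest(vector, codebook):
    """Return the index of the nearest codeword, abandoning a candidate
    as soon as its partial squared distance exceeds the best so far."""
    best_idx, best_dist = 0, float("inf")
    for idx, codeword in enumerate(codebook):
        partial = 0.0
        for v, c in zip(vector, codeword):
            partial += (v - c) ** 2
            if partial >= best_dist:   # early rejection: skip remaining terms
                break
        else:                          # loop finished: full distance computed
            best_dist, best_idx = partial, idx
    return best_idx

codebook = [[0, 0, 0, 0], [8, 8, 8, 8], [16, 16, 16, 16]]
print(pds_nearest([7, 9, 8, 7], codebook))  # prints 1 (nearest codeword)
```

The early-rejection test is what makes PDS faster than full search while returning exactly the same winner.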
Abstract: The development of adaptive user interfaces (UI)
has long been an important research area in which
researchers attempt to call upon the full resources and skills of several
disciplines. The adaptive UI community holds thorough knowledge
regarding the adaptation of UIs to users and to contexts of use.
Several solutions, models, formalisms, techniques and mechanisms
have been proposed to develop adaptive UIs. In this paper, we propose an
approach based on fuzzy set theory for modeling the
appropriateness of different UI adaptation solutions to the
different situations to which interactive systems have to adapt their
UIs.
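One way the fuzzy-set modeling might look is sketched below: the appropriateness of an adaptation solution for a situation is a membership degree in [0, 1], built from standard fuzzy tools (triangular membership functions combined with min as the fuzzy AND). The context variables, set names and breakpoints are hypothetical illustrations, not the paper's actual model.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def appropriateness(screen_inches, ambient_lux):
    # hypothetical "large-screen" and "bright-environment" fuzzy sets
    large = triangular(screen_inches, 4, 10, 16)
    bright = triangular(ambient_lux, 100, 500, 900)
    return min(large, bright)   # min as the fuzzy AND (t-norm)

print(round(appropriateness(10, 500), 2))  # prints 1.0: fully appropriate
```

Different adaptation solutions would each get such a degree, and the system would pick the solution with the highest membership for the current situation.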
Abstract: This paper presents the development of a robot car
that can track the motion of an object by detecting its color through
an Android device. The employed computer vision algorithm uses the
OpenCV library, which is embedded into an Android application of a
smartphone, for manipulating the captured image of the object. The
captured image of the object is subjected to color conversion and is
transformed to a binary image for further processing after color
filtering. The desired object is clearly determined after removing
pixel noise by applying image morphology operations and contour
definition. Finally, the area and the center of the object are
determined so that the object's motion can be tracked. The smartphone
application has been placed on a robot car and transmits motion
directives over Bluetooth to an Arduino assembly so that the car follows
objects of a specified color. The experimental evaluation of the
proposed algorithm shows reliable color detection and smooth
tracking characteristics.
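The paper's pipeline uses OpenCV (color conversion, morphology, contour definition) on camera frames; the pure-Python sketch below keeps only the two steps that ultimately drive the car, color filtering into a binary mask and area/centroid computation, on a tiny hand-made RGB "frame". The function name and the RGB range for "red" are assumptions for illustration.

```python
def track_red(image, lo=(150, 0, 0), hi=(255, 80, 80)):
    """Mask pixels inside the target color range, then return the
    object's area (pixel count) and center of mass."""
    area, sx, sy = 0, 0, 0
    for y, row in enumerate(image):
        for x, (r, g, b) in enumerate(row):
            if lo[0] <= r <= hi[0] and lo[1] <= g <= hi[1] and lo[2] <= b <= hi[2]:
                area += 1
                sx += x
                sy += y
    if area == 0:
        return 0, None              # object not in view
    return area, (sx / area, sy / area)   # center drives the steering

frame = [[(0, 0, 0)] * 5 for _ in range(5)]
frame[2][3] = frame[2][4] = (200, 10, 10)   # a small red object
print(track_red(frame))  # prints (2, (3.5, 2.0))
```

The horizontal offset of the returned center from the image middle is what a steering command would be derived from.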
Abstract: In this paper we propose a novel methodology for
extracting a road network and its nodes from satellite images of
Algeria.
The developed technique extends our previous research work. It
combines information theory and mathematical morphology to
extract and link road segments into a road network with its nodes.
We therefore have to define objects as sets of pixels and to study
the shape of these objects and the relations that exist between them.
In this approach, geometric and radiometric features of roads are
integrated by a cost function and a set of selected points of a crossing
road. Its performance was tested on satellite images of Algeria.
Abstract: In the Hierarchical Temporal Memory (HTM) paradigm
the effect of overlap between inputs on the activation of columns in
the spatial pooler is studied. Numerical results suggest that similar
inputs are represented by similar sets of columns and dissimilar inputs
are represented by dissimilar sets of columns. It is shown that the
spatial pooler produces these results under certain conditions for
the connectivity and proximal thresholds. Following the discussion
of the initialization of parameters for the thresholds, corresponding
qualitative arguments about the learning dynamics of the spatial
pooler are discussed.
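The overlap-based activation the abstract studies can be illustrated with a toy spatial pooler: each column connects to a subset of input bits, a column's overlap is the count of its connected bits that are active, and the K columns with the highest overlap become active. The sizes, the deterministic sliding-window connectivity and the winner count below are illustrative choices, not HTM reference parameters.

```python
N_INPUT, N_COLS, FANOUT, K = 32, 16, 12, 4

# column i connects to a window of FANOUT input bits starting at 2*i
columns = [{(2 * i + j) % N_INPUT for j in range(FANOUT)} for i in range(N_COLS)]

def active_columns(input_bits):
    """Overlap = |connected bits ∩ active bits|; top-K columns win."""
    overlaps = sorted(((len(col & input_bits), i) for i, col in enumerate(columns)),
                      reverse=True)
    return {i for _, i in overlaps[:K]}

a = set(range(0, 16))    # input pattern A
b = set(range(2, 18))    # similar to A: 14 shared bits
c = set(range(16, 32))   # disjoint from A

print(len(active_columns(a) & active_columns(b)),   # columns shared with similar input
      len(active_columns(a) & active_columns(c)))   # ... with dissimilar input
# prints: 2 0
```

Even in this toy setting, the similar input pair shares active columns while the disjoint pair shares none, mirroring the paper's numerical observation.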
Abstract: The centuries-long growth in the volume of text data,
such as books and articles in libraries, has made it necessary to establish
effective mechanisms to locate them. Early techniques such as
abstracting, indexing and the use of classification categories
marked the birth of a new field of research called "Information
Retrieval". Information Retrieval (IR) can be defined as the task of
defining models and systems whose purpose is to facilitate access to
a set of documents in electronic form (a corpus) so that a user can find
the relevant ones, that is, the content that matches the user's
information needs. This paper presents a new
semantic indexing approach for a documentary corpus. The indexing
process starts with a term weighting phase to determine the
importance of the terms in the documents. Then the use of a
thesaurus such as WordNet makes it possible to move to the conceptual level.
Each candidate concept is evaluated by determining its level of
representation of the document, that is, the importance of the
concept in relation to the other concepts of the document. Finally, the
semantic index is constructed by attaching to each concept of the
ontology the documents of the corpus in which it is found.
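The term weighting phase can be sketched with the classical tf-idf scheme, which weights a term by its frequency in a document discounted by how many documents contain it; the three-sentence corpus is made up, and the paper's actual weighting formula may differ.

```python
import math

corpus = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "birds fly over the river",
]
docs = [doc.split() for doc in corpus]

def tf_idf(term, doc_index):
    tf = docs[doc_index].count(term) / len(docs[doc_index])
    df = sum(1 for d in docs if term in d)      # document frequency
    idf = math.log(len(docs) / df)              # rarer terms weigh more
    return tf * idf

print(round(tf_idf("cat", 0), 3))   # prints 0.068: informative term
print(round(tf_idf("the", 0), 3))   # prints 0.0: appears everywhere
```

Terms that survive this weighting with a high score are the candidates that a thesaurus then lifts to the conceptual level.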
Abstract: In this paper the problem of the application of
temporal reasoning and case-based reasoning in intelligent decision
support systems is considered. The method of case-based reasoning
with temporal dependences for the solution of problems of real-time
diagnostics and forecasting in intelligent decision support systems is
described. This paper demonstrates how the temporal case-based
reasoning system can be used in an intelligent decision support system
for car access control. This work was supported by RFBR.
Abstract: The margin-based principle was proposed long ago,
and it has been proved, both theoretically and practically, that this
principle can reduce structural risk and improve performance.
Meanwhile, the feed-forward neural network is
a traditional classifier that is currently attracting renewed attention
with deeper architectures. However, the training algorithm of the
feed-forward neural network is derived from the Widrow-Hoff
principle, which minimizes the squared error. In this paper, we propose
a new training algorithm for feed-forward neural networks based
on the margin-based principle, which can effectively improve the
accuracy and generalization ability of neural network classifiers
with fewer labelled samples and a flexible network. We have conducted
experiments on four UCI open datasets and achieved good results
as expected. In conclusion, our model can handle sparsely labelled,
high-dimensional datasets with high accuracy, while migrating from
the conventional ANN method to our method is easy and requires
almost no work.
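The contrast between the two principles can be made concrete on a single linear unit: instead of minimizing squared error (Widrow-Hoff), the unit is trained with the hinge loss max(0, 1 - y·f(x)) by subgradient descent, so only samples that violate the margin trigger an update. The paper applies the idea to full feed-forward networks; this toy version with made-up 2-D data illustrates only the loss.

```python
data = [((2.0, 1.0), 1), ((1.5, 2.0), 1), ((-1.0, -0.5), -1), ((-2.0, -1.5), -1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(100):                         # epochs
    for (x1, x2), y in data:
        margin = y * (w[0] * x1 + w[1] * x2 + b)
        if margin < 1:                       # hinge loss active: margin violator
            w[0] += lr * y * x1
            w[1] += lr * y * x2
            b += lr * y

correct = sum(1 for (x1, x2), y in data
              if y * (w[0] * x1 + w[1] * x2 + b) > 0)
print(correct)  # prints 4: every toy sample ends up on the correct side
```

Once all samples sit beyond the unit margin, updates stop, which is the structural-risk intuition: the learned boundary keeps a safety gap rather than merely fitting targets.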
Abstract: An edge is a variation of brightness in an image. Edge
detection is useful in many application areas, such as finding forests
and rivers in a satellite image or detecting a broken bone in a medical
image. This paper discusses finding the edges of multiple aerial
images in parallel. The proposed work was tested on 38 images: 37
color and one monochrome. The time taken to process N
images in parallel is equivalent to the time taken to process one image
sequentially. Message Passing Interface (MPI) and Open Computing
Language (OpenCL) are used to achieve task-level and pixel-level
parallelism, respectively.
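The per-image kernel being parallelized can be sketched with the Sobel operator, a common edge-detection choice (the abstract does not name the operator, so this is an assumption): two 3×3 convolutions estimate the horizontal and vertical gradients, and their magnitude marks edges. In the paper's setup, images would be distributed across MPI processes and pixels across OpenCL work-items; this sequential version shows only the math.

```python
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel

def sobel(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5   # gradient magnitude
    return out

# vertical step edge: dark left half, bright right half
img = [[0, 0, 10, 10]] * 4
edges = sobel(img)
print(edges[1])  # prints [0, 40.0, 40.0, 0]: response peaks at the step
```

Because each output pixel depends only on its 3×3 neighborhood, the inner loops map naturally onto independent OpenCL work-items.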
Abstract: This paper describes an argumentation approach to
the problem of inductive concept formation. It is
proposed to use argumentation, based on defeasible reasoning with
justification degrees, to improve the quality of classification models
obtained by generalization algorithms. Experimental results on
both clean and noisy data are also presented.
Abstract: In this research article on modeling Underwater
Wireless Sensor Network (UWSN) simulators, we provide a comprehensive
overview of the various simulators currently used in UWSN
modeling. In this work, we compare their working environments,
software platforms, simulation languages, key features, limitations and
corresponding applications. Based on extensive experimentation and
performance analysis, we assess their efficiency for specific
applications. We have also provided guidelines for developing
protocols in different layers of the protocol stack, and finally these
parameters are also compared and tabulated. This analysis is
significant for researchers and designers to find the right simulator
for their research activities.
Abstract: Fabric textures are very common in our daily life.
However, the representation of fabric textures has never been explored
from a neuroscience point of view. Theoretical studies suggest that primary
visual cortex (V1) uses a sparse code to efficiently represent natural
images; however, how the simple cells in V1 encode artificial
textures is still a mystery. Here we take fabric textures as
stimuli to study the responses of an independent component analysis model
established to describe the receptive fields of simple cells in V1. We
choose 140 types of fabric to obtain classical fabric textures as
materials. Experimental results indicate that the receptive fields of
simple cells have obvious selectivity in orientation, frequency and
phase when drifting gratings are used to determine their tuning
properties. Additionally, the distribution of optimal orientation and
frequency shows that the patch size selected from each original fabric
image has a significant effect on the frequency selectivity.
Abstract: This research paper presents a highly optimized barrel
shifter at the 22 nm Hi-K metal gate strained-Si technology node. This
barrel shifter has a unique combination of static and dynamic
body bias, which gives the lowest power delay product. This power delay
product is compared with the same circuit at the same technology node
with static forward biasing at 'supply/2' and also with normal reverse
substrate biasing, and is still found to be the lowest. The power delay
product of this barrel shifter is 0.39362×10^-17 J, which is lower by
approximately 78% than the reference barrel shifter at the 32 nm bulk
CMOS technology node. The power delay product of the barrel shifter at
the 22 nm Hi-K metal gate technology node with normal reverse substrate
bias is 2.97186933×10^-17 J, which can be compared with this design's
PDP of 0.39362×10^-17 J. This design uses both static and dynamic
substrate biasing and also has an approximately 96% lower power delay
product compared to forward body biasing alone at half of the supply
voltage. The NMOS models used are the predictive technology models of
Arizona State University, and the simulations were carried out using
the HSPICE simulator.
Abstract: This research study aims to present a retrospective
study about speech recognition systems and artificial intelligence.
Speech recognition has become one of the widely used technologies,
as it offers great opportunity to interact and communicate with
automated machines. Precisely, it can be affirmed that speech
recognition facilitates its users and helps them to perform their daily
routine tasks, in a more convenient and effective manner. This
research intends to present the illustration of recent technological
advancements, which are associated with artificial intelligence.
Recent research has revealed that accurate decoding of speech is the
foremost issue in speech recognition. In
order to overcome these issues, different statistical models were
developed by the researchers. Some of the most prominent statistical
models include acoustic model (AM), language model (LM), lexicon
model, and hidden Markov models (HMM). The research will help in
understanding all of these statistical models of speech recognition.
Researchers have also formulated different decoding methods, which
are being utilized for realistic decoding tasks and constrained
artificial languages. These decoding methods include pattern
recognition, acoustic phonetic, and artificial intelligence. Artificial
intelligence has been recognized as the most efficient and reliable of
the methods used in speech recognition.
Abstract: A new steganographic method via the use of numeric
data on public websites with a self-authentication capability is
proposed. The proposed technique transforms a secret message into
partial shares by Shamir’s (k, n)-threshold secret sharing scheme with
n = k + 1. The generated k+1 partial shares then are embedded into the
numeric items to be disguised as part of the website’s numeric content,
yielding the stego numeric content. Afterward, a receiver links to the
website and extracts every k shares among the k+1 ones from the stego
numeric content to compute k+1 copies of the secret, and the
phenomenon of value consistency among the computed k+1 copies is taken
as evidence to determine whether the extracted message is authentic,
attaining the goal of self-authentication of the extracted secret
message. Experimental results and discussions are provided to show
the feasibility and effectiveness of the proposed method.
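The self-authentication idea can be sketched with Shamir's scheme directly: a secret is split into k+1 shares of a degree-(k-1) polynomial, every k-subset of shares recovers one copy of the secret by Lagrange interpolation, and the message is accepted only if all copies agree. The prime modulus and secret below are toy values, and the paper's step of disguising shares as website numeric items is omitted.

```python
import random
from itertools import combinations

P = 2087                      # toy prime modulus; must exceed the secret

def make_shares(secret, k):
    """Degree-(k-1) polynomial with the secret as constant term,
    evaluated at x = 1 .. k+1 to yield n = k + 1 shares."""
    coeffs = [secret] + [random.randrange(1, P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P)
            for x in range(1, k + 2)]

def recover(shares):
    """Lagrange interpolation at x = 0 over the field GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

random.seed(1)
shares = make_shares(1234, k=3)
copies = {recover(list(sub)) for sub in combinations(shares, 3)}
print(copies == {1234})       # prints True: all k-subsets agree
```

If any share had been tampered with, the k-subsets containing it would yield a different value, the copies would disagree, and the receiver would reject the message.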
Abstract: Thousands of organisations store important and
confidential information related to them, their customers, and their
business partners in databases all across the world. The stored data
ranges from less sensitive (e.g. first name, last name, date of birth) to
more sensitive data (e.g. password, pin code, and credit card
information). Losing data, disclosing confidential information and
even changing the value of data are among the severe damages that a
Structured Query Language injection (SQLi) attack can cause on a
given database. SQLi is a code injection technique in which malicious
SQL statements are inserted into a given SQL database simply by using
a web browser. In this paper, we propose an effective pattern
recognition neural network model for detection and classification of
SQLi attacks. The proposed model is built from three main elements
of: a Uniform Resource Locator (URL) generator in order to generate
thousands of malicious and benign URLs, a URL classifier in order
to 1) classify each generated URL as either benign or
malicious and 2) classify the malicious URLs into different
SQLi attack categories, and a neural network (NN) model to 1) detect
whether a given URL is malicious or benign and 2) identify the
type of SQLi attack for each malicious URL. The model is first
trained and then evaluated by employing thousands of benign and
malicious URLs. The results of the experiments are presented in
order to demonstrate the effectiveness of the proposed approach.
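To make the URL-classification step concrete, here is a trivial rule-based stand-in for the trained classifier: a few regular expressions recognize well-known SQLi payload shapes in a URL and assign a category. The pattern list and category names are illustrative only and do not reflect the paper's taxonomy or its neural network.

```python
import re

PATTERNS = [
    ("tautology", re.compile(r"'\s*or\s*'?1'?\s*=\s*'?1", re.I)),
    ("union-based", re.compile(r"union\s+select", re.I)),
    ("piggy-backed", re.compile(r";\s*(drop|delete|insert)\b", re.I)),
]

def classify_url(url):
    """Return the first matching SQLi category, or 'benign'."""
    for label, pattern in PATTERNS:
        if pattern.search(url):
            return label
    return "benign"

print(classify_url("http://site.test/item?id=1' or '1'='1"))        # tautology
print(classify_url("http://site.test/item?id=1 union select pwd"))  # union-based
print(classify_url("http://site.test/item?id=1"))                   # benign
```

A learned model replaces these brittle rules precisely because attackers obfuscate payloads (encoding, comments, case tricks) that fixed patterns miss.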
Abstract: This paper investigates simple implicit force control
algorithms realizable with industrial robots. Many previously
published approaches are difficult to implement in commercial robot
controllers, because access to the robot joint torques is necessary
or the complete dynamic model of the manipulator is used. In
the past we already dealt with explicit force control of a
position-controlled robot. Well-known schemes of implicit force control
are stiffness control, damping control and impedance control. With such
algorithms the contact force cannot be set directly. Instead, it
results from the controller impedance, the environment impedance and
the commanded robot motion/position. The relationships of these
properties are worked out in this paper in detail for the chosen
implicit approaches. They have been adapted to be implementable
on a position controlled robot. The behaviors of stiffness control
and damping control are verified by practical experiments. For this
purpose a suitable test bed was configured. Using the full mechanical
impedance within the controller structure is not practical when the
robot is in physical contact with the environment. This fact is
verified by simulation.
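A small numeric sketch of why the contact force cannot be set directly under stiffness control: the controller stiffness and the environment stiffness act like two springs in series, so the force follows from both stiffnesses and the commanded penetration. The values (gains, positions) are made up for illustration and are not from the paper's test bed.

```python
def contact_force(k_ctrl, k_env, x_cmd, x_surface):
    """Two springs in series: effective stiffness maps the commanded
    penetration past the surface to a contact force."""
    if x_cmd <= x_surface:
        return 0.0                          # no contact, no force
    k_eff = k_ctrl * k_env / (k_ctrl + k_env)
    return k_eff * (x_cmd - x_surface)

# same controller gain and the same 1 mm commanded penetration, but a
# stiff (steel-like) vs. a compliant (foam-like) environment:
print(contact_force(k_ctrl=1e4, k_env=1e6, x_cmd=0.101, x_surface=0.1))
print(contact_force(k_ctrl=1e4, k_env=1e4, x_cmd=0.101, x_surface=0.1))
```

The two printed forces differ even though the command is identical, which is exactly the coupling between controller impedance, environment impedance and commanded position that the paper works out.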
Abstract: Software fault prediction models are created by using
the source code, processed metrics from the same or previous version
of code and related fault data. Some companies do not store and keep
track of all the artifacts required for software fault prediction.
To construct a fault prediction model for such a company, training
data from other projects can be one potential solution. The earlier a
fault is predicted, the less it costs to correct. The training
data consists of metrics data and related fault data at function/module
level. This paper investigates fault predictions at early stage using the
cross-project data focusing on the design metrics. In this study,
empirical analysis is carried out to validate design metrics for cross
project fault prediction. The machine learning technique used for
evaluation is Naïve Bayes. The design-phase metrics of other projects
can be used as initial guideline for the projects where no previous
fault data is available. We analyze seven datasets from NASA
Metrics Data Program which offer design as well as code metrics.
Overall, the results of cross-project prediction are comparable to
those of within-company learning.
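The cross-project setup can be sketched with a tiny Gaussian Naïve Bayes implemented from scratch: the model is fitted on one project's design metrics and then applied to modules of another project. The metric names, values and fault labels below are invented toy data, not the NASA Metrics Data Program datasets.

```python
import math

def fit(X, y):
    """Per-class, per-feature mean and variance (Gaussian Naïve Bayes)."""
    stats = {}
    for label in set(y):
        cols = list(zip(*[x for x, lab in zip(X, y) if lab == label]))
        stats[label] = [(sum(c) / len(c),
                         max(sum((v - sum(c) / len(c)) ** 2 for v in c) / len(c),
                             1e-9))
                        for c in cols]
    return stats

def predict(stats, x):
    def log_like(label):
        return sum(-((v - m) ** 2) / (2 * var) - 0.5 * math.log(2 * math.pi * var)
                   for v, (m, var) in zip(x, stats[label]))
    return max(stats, key=log_like)

# project A: [fan-in, cyclomatic complexity] per module; 1 = faulty
X_a = [[2, 3], [1, 2], [9, 12], [8, 10]]
y_a = [0, 0, 1, 1]
model = fit(X_a, y_a)

# project B module with high design complexity, scored by project A's model
print(predict(model, [10, 11]))  # prints 1: predicted faulty
```

This is the appeal of the cross-project approach: a project with no fault history of its own can still get an initial risk ranking of its modules from another project's model.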