Abstract: This paper presents a very simple and efficient algorithm for codebook search, which substantially reduces computation compared to a full codebook search. The algorithm is based on a sorting and centroid technique. The tabulated results show the effectiveness of the proposed algorithm in terms of computational complexity. In this paper we also introduce a new performance parameter named Average Fractional Change in Pixel Value, as we feel it gives a better understanding of the closeness of the image since it is related to perception. This new performance parameter takes into consideration the average fractional change in each pixel value.
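The abstract does not give the metric's exact formula; the short sketch below (Python, with illustrative names) shows one plausible reading of the average fractional change in pixel value, namely the mean of the per-pixel relative error between the original and the codebook-reconstructed image.

import numpy as np

def average_fractional_change(original, reconstructed, eps=1e-8):
    # One plausible reading, assumed rather than taken from the paper:
    # the mean of |x - x_hat| / x over all pixels, with eps guarding
    # against division by zero for black pixels.
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    return float(np.mean(np.abs(original - reconstructed) / (original + eps)))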
Abstract: In this work we numerically examine structures which could confine light in nanometer-scale areas. A system consisting of two silicon disks with an in-plane separation of a few tens of nanometers has been studied first. The normalized unitless effective mode volume, Veff, has been calculated for the two lowest whispering gallery mode resonances. The effective mode volume is reduced significantly as the gap between the disks decreases. In addition, the effect of the substrate is also studied. In that case, a Veff of approximately the same value as in the non-substrate case for a similar two-disk system can be obtained by using disks almost twice as thick. We also numerically examine a structure consisting of a circular slot waveguide formed into a silicon disk resonator. We show that the proposed structure can have high-Q resonances, suggesting that it is a very promising candidate for optical interconnect applications.
The study includes several numerical calculations for all the geometric parameters of the structure. It also includes numerical simulations of the coupling between a waveguide and the proposed
disk resonator leading to a very promising conclusion about its applicability.
Abstract: Nowadays, Gene Ontology has been used widely by many researchers for biological data mining and information retrieval, integration of biological databases, finding genes, and incorporating knowledge in the Gene Ontology for gene clustering. However, the increase in size of the Gene Ontology has caused problems in maintaining and processing it. One way to improve its accessibility is to cluster it into fragmented groups. Clustering the Gene Ontology is a difficult combinatorial problem and can be modeled as a graph partitioning problem. Additionally, deciding the number k of clusters to use is not straightforward and is a hard algorithmic problem. Therefore, an approach for automatic clustering of the Gene Ontology is proposed by incorporating a cohesion-and-coupling metric into a hybrid algorithm consisting of a genetic algorithm and a split-and-merge algorithm. Experimental results and an example of the modularized Gene Ontology in RDF/XML format are given to illustrate the effectiveness of the algorithm.
Abstract: In this paper, we propose two novel plasmonic demultiplexing structures based on metal-insulator-metal surfaces which, besides their compact size, have a very good transmission spectrum. The impact of the key internal parameters on the transmission spectrum is numerically analyzed using the two-dimensional (2D) finite-difference time-domain (FDTD) method. The proposed structures could be used to develop ultra-compact photonic wavelength demultiplexing devices for large-scale photonic integration.
Abstract: In its attempt to offer new paths to autonomy for a large population of disabled people, assistive technology has largely been inspired by robotics engineering. Recent human-like robots carry new hopes that, it seems to us, need to be analyzed by means of a specific theory of anthropomorphism. We propose to distinguish a functional anthropomorphism, which is that of current wheelchairs, from a structural anthropomorphism based on mimicking human physiological systems. While functional anthropomorphism offers the main advantage of eliminating the issue of physiological systems interdependence, the close link between robots for disabled people and their human-built environment would favor the structural anthropomorphic way in the future. In this future framework, we highlight a general interdependence principle: any partial or local structural anthropomorphism generates new anthropomorphic needs due to the interdependence of physiological systems, whose effects can be evaluated by means of specific anthropomorphic criteria derived from a set-theory-based approach to physiological systems.
Abstract: Physical education (PE) is still neglected in schools
despite its academic, social, psychological, and health benefits.
Based on the assumption that Information and Communication
Technologies (ICTs) can contribute to the development of PE in
schools, this study aims to design a model of the factors affecting the
adoption of ICTs for PE in schools. The proposed model is based on
a sound theoretical framework. It was designed following a literature
review of technology adoption theories and of ICT adoption factors
for physical education. The technology adoption model that best fitted all ICT adoption factors was then chosen as the basis for the
proposed model. It was found that the Unified Theory of Acceptance
and Use of Technology (UTAUT) is the most adequate theoretical
framework for the modeling of ICT adoption factors for physical
education.
Abstract: Chlorine is one of the most abundant elements in nature and undergoes a complex biogeochemical cycle. Chlorine
bound in some substances is partly responsible for atmospheric ozone
depletion and contamination of some ecosystems. As the anthropogenic burden of volatile organochlorines (VOCls) in the atmosphere decreases due to international regulations, natural sources (plants, soil, abiotic formation) are expected to dominate VOCl production in the near future. Examples of plant VOCl production are methyl chloride and bromide emission from (sub)tropical ferns, and chloroform, 1,1,1-trichloroethane and tetrachloromethane emission from temperate forest ferns and mosses. Temperate forests are found to emit, in addition to the previous compounds, tetrachloroethene and brominated volatile compounds. VOCls can be taken up and further
metabolized in plants. The aim of this work is to identify and
quantitatively analyze the formed VOCls in temperate forest
ecosystems by a cryofocusing/GC-ECD detection method, thereby filling a gap in the knowledge of the biogeochemical cycle of chlorine.
Abstract: In this paper, we present a new algorithm for clustering data in large datasets using image processing approaches. First, the dataset is mapped onto a binary image plane. The synthesized image is then processed using efficient image processing techniques to cluster the data in the dataset. Hence, the algorithm avoids an exhaustive search to identify clusters. The algorithm considers only a small subset of the data that contains the critical boundary information sufficient to identify the contained clusters. Compared to available data clustering techniques, the proposed algorithm produces results of similar quality and outperforms them in execution time and storage requirements.
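The abstract leaves the image-processing steps unspecified; the sketch below (Python with NumPy/SciPy, illustrative only) conveys the general idea under the assumption that 2-D data are rasterised onto a binary grid and clusters are read off as connected components, which may differ from the authors' actual pipeline.

import numpy as np
from scipy import ndimage  # connected-component labelling

def cluster_via_binary_image(points, grid_size=128):
    # Illustrative sketch: rasterise 2-D points onto a binary grid and treat
    # connected components of occupied cells as clusters.
    points = np.asarray(points, dtype=float)
    mins, maxs = points.min(axis=0), points.max(axis=0)
    cells = ((points - mins) / (maxs - mins + 1e-12) * (grid_size - 1)).astype(int)
    image = np.zeros((grid_size, grid_size), dtype=bool)
    image[cells[:, 0], cells[:, 1]] = True
    labels, n_clusters = ndimage.label(image)       # label occupied regions
    return labels[cells[:, 0], cells[:, 1]], n_clusters

data = np.vstack([np.random.randn(200, 2), np.random.randn(200, 2) + 8.0])
point_labels, k = cluster_via_binary_image(data)
print("clusters found:", k)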
Abstract: The one-class support vector machine “support vector
data description” (SVDD) is an ideal approach for anomaly or outlier
detection. However, for the applicability of SVDD in real-world
applications, ease of use is crucial. The results of SVDD are strongly determined by the choice of the regularisation parameter C and the kernel parameter of the widely used RBF kernel. While for
two-class SVMs the parameters can be tuned using cross-validation
based on the confusion matrix, for a one-class SVM this is not
possible, because only true positives and false negatives can occur
during training. This paper proposes an approach to find the optimal
set of parameters for SVDD solely based on a training set from
one class and without any user parameterisation. Results on artificial
and real data sets are presented, underpinning the usefulness of the
approach.
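The parameter-selection procedure itself is not described in the abstract; the sketch below (Python, scikit-learn) only sets up the baseline one-class SVM with an RBF kernel, where nu plays the role of the regularisation parameter C and, together with gamma, is exactly the pair of values the proposed approach would choose automatically instead of the hand-picked ones used here.

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))                       # one-class training data
X_test = np.vstack([rng.normal(size=(50, 2)),
                    rng.normal(loc=5.0, size=(5, 2))])    # mostly normal plus outliers

# Hand-picked parameters for illustration; the paper's method would select them.
model = OneClassSVM(kernel="rbf", nu=0.05, gamma=0.5).fit(X_train)
predictions = model.predict(X_test)                       # +1 = inlier, -1 = anomaly
print("detected anomalies:", int(np.sum(predictions == -1)))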
Abstract: Much research into handwritten Thai character recognition has been proposed, such as comparing heads of characters, fuzzy logic, and structure trees. This paper presents a system for handwritten Thai character recognition based on the Ant-miner algorithm (data mining based on Ant colony optimization). Zoning is initially applied to each character. Then three distinct features (also called attributes) of each character in each zone are extracted. The attributes are Head zone, End point, and Feature code. All attributes are used to construct the classification rules with the Ant-miner algorithm in order to classify 112 Thai characters. For this experiment, the Ant-miner algorithm is adapted with a small change to increase the recognition rate. The result of this experiment is a 97% recognition rate on the training set (11200 characters) and an 82.7% recognition rate on unseen test data (22400 characters).
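The zoning step can be pictured with a small sketch (Python, illustrative only): the character image is split into a grid of zones and a per-zone value is computed. The simple density feature below is an assumed stand-in; the paper's actual attributes (Head zone, End point, Feature code) are more elaborate.

import numpy as np

def zone_features(char_image, rows=3, cols=3):
    # Split a binary character image into rows x cols zones and return the
    # fraction of foreground pixels per zone (an illustrative feature only).
    img = np.asarray(char_image, dtype=float)
    h, w = img.shape
    features = []
    for r in range(rows):
        for c in range(cols):
            zone = img[r * h // rows:(r + 1) * h // rows,
                       c * w // cols:(c + 1) * w // cols]
            features.append(zone.mean() if zone.size else 0.0)
    return np.array(features)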
Abstract: The new framework in which Higher Education is immersed involves a complete change in the way lecturers must
teach and students must learn. Whereas the lecturer was the main
character in traditional education, the essential goal now is to
increase the students' participation in the process. Thus, one of the
main tasks of lecturers in this new context is to design activities of
different nature in order to encourage such participation. Seminars
are one of the activities included in this environment. They are active
sessions that enable going in depth into specific topics in support of
other activities. They are characterized by some features such as
favoring interaction between students and lecturers or improving
their communication skills. Hence, planning and organizing strategic seminars aimed at the acquisition of knowledge and abilities is indeed a great challenge for lecturers. This paper proposes a method
using Artificial Intelligence techniques to obtain student profiles
from their marks and preferences. The goal of building such profiles
is twofold. First, it facilitates the task of splitting the students into
different groups, each group with similar preferences and learning
difficulties. Second, it makes it easy to select adequate candidate topics for the seminars. The results obtained can either confirm what the lecturers observe during the development of the course or provide a clue for reconsidering methodological strategies in certain topics.
Abstract: The performance of adaptive beamforming degrades
substantially in the presence of steering vector mismatches. This
degradation is especially severe in the near-field, since the 3-dimensional source location is more difficult to estimate than the 2-dimensional direction of arrival in far-field cases. As a solution, a
novel approach of near-field robust adaptive beamforming (RABF) is
proposed in this paper. It is a natural extension of the traditional
far-field RABF and belongs to the class of diagonal loading
approaches, with the loading level determined based on worst-case
performance optimization. However, unlike methods that solve for the optimal loading iteratively, a simple closed-form solution is suggested here after some approximations, and consequently,
the optimal weight vector can be expressed in a closed form. Besides
simplicity and low computational cost, the proposed approach reveals
how different factors affect the optimal loading as well as the weight
vector. Its excellent performance in the near-field is confirmed via a
number of numerical examples.
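The closed-form loading level itself is not reproduced in the abstract; the sketch below (Python, with illustrative names) only shows the generic diagonally loaded beamformer into which such a loading level would be substituted.

import numpy as np

def diagonally_loaded_weights(R, steering_vector, loading_level):
    # Generic diagonal-loading form: w = (R + loading*I)^{-1} a, normalised for a
    # distortionless response towards the (near-field) steering vector. The
    # paper's contribution is a closed-form choice of loading_level, which is
    # simply passed in here.
    R_loaded = R + loading_level * np.eye(R.shape[0])
    w = np.linalg.solve(R_loaded, steering_vector)
    return w / (steering_vector.conj() @ w)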
Abstract: Every day, human life encounters new equipment that is more automated and more capable, so the need for faster processors does not seem to end. Despite new architectures and higher frequencies, a single processor is not adequate for many applications. Parallel processing and networks are earlier solutions to this problem. The newer solution of putting a network of resources on a chip is called NOC (network on a chip). The most common topology for NOC is the mesh. There are several routing algorithms suitable for this topology, such as XY, fully adaptive, etc. In this paper we suggest a new algorithm named Intermittent X,Y (IX/Y). We have implemented the new algorithm in a simulation environment to compare its delay and power consumption with those of older algorithms.
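The IX/Y algorithm is not specified in the abstract; for reference, the sketch below (Python) shows the baseline deterministic XY routing on a 2-D mesh, the kind of algorithm such proposals are typically compared against.

def xy_route(src, dst):
    # Deterministic XY routing: move along the X dimension first, then along Y.
    x, y = src
    dx, dy = dst
    hops = []
    while x != dx:                      # route in X first
        x += 1 if dx > x else -1
        hops.append((x, y))
    while y != dy:                      # then route in Y
        y += 1 if dy > y else -1
        hops.append((x, y))
    return hops

print(xy_route((0, 0), (2, 3)))         # path from router (0,0) to router (2,3)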
Abstract: This paper presents a novel neural network controller with a composite adaptation law to improve the trajectory tracking of biped robots compared with classical controllers. The biped model has 5 links and 6 degrees of freedom and is actuated by Pleated Pneumatic Artificial Muscles, which have a very high power-to-weight ratio and a large stroke compared to similar actuators. The proposed controller employs a stable neural network to approximate unknown nonlinear functions in the robot dynamics, thereby overcoming some limitations of conventional controllers such as PD or adaptive controllers and guaranteeing good performance. This NN controller significantly improves tracking accuracy by retaining the basic PD/PID loop while adding an inner adaptive loop that allows the controller to learn unknown parameters such as the friction coefficient, therefore improving tracking accuracy. Simulation results, together with a graphical simulation in virtual reality, show that the NN controller's tracking performance is considerably better than that of the PD controller.
Abstract: An end-member selection method for spectral unmixing that is based on Particle Swarm Optimization (PSO) is developed in this paper. The algorithm uses the K-means clustering algorithm and a method of dynamic selection of end-members subsets to find the appropriate set of end-members for a given set of multispectral images. The proposed algorithm has been successfully applied to test image sets from various platforms such as LANDSAT 5 MSS and NOAA's AVHRR. The experimental results of the proposed algorithm are encouraging. The influence of different values of the algorithm control parameters on performance is studied. Furthermore, the performance of different versions of PSO is also investigated.
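The PSO-driven dynamic subset selection is not detailed in the abstract; the sketch below (Python, scikit-learn, synthetic data standing in for the multispectral images) only illustrates the K-means step of obtaining candidate end-members from pixel spectra.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
pixels = rng.random((10_000, 6))                  # stand-in: 10,000 pixels, 6 bands

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(pixels)
candidate_endmembers = kmeans.cluster_centers_    # 8 candidate end-member spectra
print(candidate_endmembers.shape)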
Abstract: In the automotive industry, test drives are conducted during the development of new vehicle models or as part of the quality assurance of series-production vehicles. The communication
on the in-vehicle network, data from external sensors, or internal
data from the electronic control units is recorded by automotive
data loggers during the test drives. The recordings are used for fault
analysis. Since the resulting data volume is tremendous, manually
analysing each recording in great detail is not feasible.
This paper proposes to use machine learning to support domain experts by sparing them from contemplating irrelevant data and instead pointing them to the relevant parts of the recordings. The
underlying idea is to learn the normal behaviour from available
recordings, i.e. a training set, and then to autonomously detect
unexpected deviations and report them as anomalies.
The one-class support vector machine “support vector data description”
is utilised to calculate distances between feature vectors. SVDDSUBSEQ is proposed as a novel approach that classifies subsequences in multivariate time series data. The approach makes it possible to detect unexpected faults without modelling effort, as is shown with experimental results on recordings from test drives.
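SVDDSUBSEQ itself is not specified in the abstract; the sketch below (Python, scikit-learn) is a generic stand-in for the underlying idea, assuming flattened sliding windows over the multivariate signals and a one-class SVM in place of a dedicated SVDD implementation.

import numpy as np
from sklearn.svm import OneClassSVM

def sliding_windows(series, window):
    # Stack overlapping subsequences of a (T x d) series into flat feature vectors.
    return np.array([series[i:i + window].ravel()
                     for i in range(len(series) - window + 1)])

rng = np.random.default_rng(2)
normal = rng.normal(size=(500, 3))                         # fault-free recording, 3 signals
test = np.vstack([rng.normal(size=(200, 3)),
                  rng.normal(loc=4.0, size=(20, 3))])      # recording with a deviation

model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(sliding_windows(normal, 20))
scores = model.decision_function(sliding_windows(test, 20))
print("subsequences flagged as anomalous:", int(np.sum(scores < 0)))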
Abstract: The aim of this paper is to adopt a compromise ratio (CR) methodology for fuzzy multi-attribute single-expert decision making problems. In this paper, the rating of each alternative is described by linguistic terms, which can be expressed as triangular fuzzy numbers. The compromise ratio method for fuzzy multi-attribute single-expert decision making is considered here by taking a ranking index based on the concept that the chosen alternative should be as close as possible to the ideal solution and as far away as possible from the negative-ideal solution simultaneously. From a logical point of view, the distance between two triangular fuzzy numbers is also a fuzzy number, not a crisp value. Therefore a fuzzy distance measure, which is itself a fuzzy number, is used here to calculate the difference between two triangular fuzzy numbers. With the help of this fuzzy distance measure, it is shown that the compromise ratio is a fuzzy number, which eases the decision maker's task of making a decision. The computation principle and the procedure of the compromise ratio method are described in detail in this paper. A comparative analysis of the previously proposed compromise ratio method [1] and the newly adopted method is illustrated with two numerical examples.
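The paper's specific fuzzy distance measure is not given in the abstract; the sketch below (Python, illustrative only) merely shows, via standard triangular-fuzzy-number arithmetic, why an arithmetic difference between two such numbers is itself a triangular fuzzy number rather than a crisp value.

from dataclasses import dataclass

@dataclass
class TFN:
    # Triangular fuzzy number (left, mode, right).
    l: float
    m: float
    r: float

def fuzzy_difference(a: TFN, b: TFN) -> TFN:
    # Standard fuzzy subtraction of triangular fuzzy numbers; the paper's own
    # fuzzy distance measure may be defined differently.
    return TFN(a.l - b.r, a.m - b.m, a.r - b.l)

# Example: two linguistic ratings expressed as TFNs on a 0-10 scale.
print(fuzzy_difference(TFN(5, 7, 9), TFN(3, 5, 7)))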
Abstract: Previous algorithms for generating and mapping 3D model textures from multi-view images have issues in texture chart generation, namely self-intersection and concentration of the texture in texture space. They may also suffer from problems due to occluded areas, such as the inner parts of the thighs. In this paper we propose a texture mapping technique for 3D models that uses multi-view images on the GPU. We perform texture mapping per pixel directly in the GPU fragment shader without generating a texture map, and we resolve occluded areas using the 3D model's depth information. Our method needs more computation on the GPU than previous works, but it shows real-time performance and the previously mentioned problems do not occur.
Abstract: A key to the success of high-quality software development is to define a valid and feasible requirements specification. We have
proposed a method of model-driven requirements analysis using
Unified Modeling Language (UML). The main feature of our method
is to automatically generate a Web user interface mock-up from the UML requirements analysis model so that we can confirm the validity of input/output data for each page and of page transitions on the system by directly operating the mock-up. This paper proposes a support method to check the validity of the data life cycle by using the model checking tool “UPPAAL”, focusing on CRUD (Create, Read, Update and Delete) operations. Exhaustive checking improves the quality of the requirements analysis model, which is validated by the customers through the automatically generated mock-up. The effectiveness of our method is discussed through a case study of requirements modeling for two small projects: a library management system and a supportive textbook sales system for a university.
Abstract: In ad hoc networks, the main issue in protocol design is quality of service, whereas in wireless sensor networks the main constraint in protocol design is the limited energy of the sensors. In fact, protocols which minimize the power consumption of sensors receive more attention in wireless sensor networks. One approach to reducing energy consumption in wireless sensor networks is to reduce the number of packets that are transmitted in the network. Data aggregation, which combines related data and prevents the transmission of redundant packets, can be effective in reducing the number of transmitted packets. Since information processing consumes less power than information transmission, data aggregation is of great importance, and because of this the technique is used in many protocols [5]. One of the data aggregation techniques is the use of a data aggregation tree, but finding an optimal data aggregation tree to collect data in a network with one sink is an NP-hard problem. In the data aggregation technique, related information packets are combined in intermediate nodes to form one packet, so the number of packets transmitted in the network decreases; therefore less energy is consumed, which ultimately improves the longevity of the network. Heuristic methods are used to solve this NP-hard problem, and one of these optimization methods is Simulated Annealing. In this article, we propose a new method to build the data aggregation tree in wireless sensor networks using the Simulated Annealing algorithm, and we evaluate its efficiency against a Genetic Algorithm.
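The annealing formulation is not detailed in the abstract; the sketch below (Python, illustrative names, with squared link length standing in for transmission energy) shows one minimal way to anneal over parent assignments of a data aggregation tree rooted at the sink.

import math
import random

def descendants(parent, node):
    # Nodes in the subtree rooted at node (used to avoid creating cycles).
    children = {}
    for c, p in parent.items():
        children.setdefault(p, []).append(c)
    stack, seen = [node], {node}
    while stack:
        for c in children.get(stack.pop(), []):
            if c not in seen:
                seen.add(c)
                stack.append(c)
    return seen

def sa_aggregation_tree(coords, sink=0, iters=5000, t0=1.0, cooling=0.999):
    # Each non-sink node picks a parent; the tree cost is the sum of squared
    # link lengths; a random parent reassignment is accepted by the Metropolis rule.
    dist2 = lambda i, j: sum((a - b) ** 2 for a, b in zip(coords[i], coords[j]))
    parent = {i: sink for i in range(len(coords)) if i != sink}   # initial star topology
    cost = sum(dist2(i, p) for i, p in parent.items())
    temp = t0
    for _ in range(iters):
        node = random.choice(list(parent))
        new_parent = random.randrange(len(coords))
        if new_parent == node or new_parent in descendants(parent, node):
            continue                                              # would create a cycle
        delta = dist2(node, new_parent) - dist2(node, parent[node])
        if delta < 0 or random.random() < math.exp(-delta / temp):
            parent[node] = new_parent
            cost += delta
        temp *= cooling
    return parent, cost

nodes = [(random.random(), random.random()) for _ in range(30)]  # node 0 is the sink
tree, energy = sa_aggregation_tree(nodes)
print("tree cost:", round(energy, 3))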