Abstract: Motor imagery classification provides an important basis for designing Brain Machine Interfaces (BMI). A BMI captures and decodes brain EEG signals and transforms human thought into action. The ability of individuals to control their EEG through imaginary mental tasks enables them to control devices through the BMI. This paper presents a method to design a four-state BMI using EEG signals recorded from the C3 and C4 locations. Principal features extracted through principal component analysis of the segmented EEG are analyzed using two novel classification algorithms based on the Elman recurrent neural network and the functional link neural network. The performance of both classifiers is evaluated using a particle swarm optimization (PSO) training algorithm; results are also compared with the conventional back propagation (BP) training algorithm. EEG motor imagery recorded from two subjects is used in the offline analysis. The overall classification performance shows that the BP algorithm has a higher average classification accuracy of 93.5%, while the PSO algorithm has better training time and maximum classification accuracy. The proposed method promises to provide a useful alternative general procedure for motor imagery classification.
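The principal-feature extraction step can be sketched as generic principal component analysis over segmented EEG windows; the segment length, component count, and random stand-in data below are illustrative assumptions, not the paper's recordings:

```python
import numpy as np

def pca_features(segments, n_components=4):
    """Project EEG segments onto their leading principal components.

    segments: (n_segments, n_samples) array, one row per EEG window
    from, e.g., the C3 or C4 channel.  Returns the scores on the top
    components -- the "principal features" fed to the classifiers.
    """
    X = segments - segments.mean(axis=0)            # centre the data
    cov = X.T @ X / (len(X) - 1)                    # sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending order
    top = np.argsort(eigvals)[::-1][:n_components]  # largest first
    return X @ eigvecs[:, top]                      # component scores

# illustrative use with random stand-in data
rng = np.random.default_rng(0)
segments = rng.normal(size=(40, 16))    # 40 windows of 16 samples each
features = pca_features(segments)
```

The resulting low-dimensional feature vectors are what a classifier such as an Elman or functional link network would be trained on.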
Abstract: Convex hull algorithms have been studied extensively in the literature, principally because of their wide range of applications in different areas. This article presents an efficient algorithm to construct an approximate convex hull from a set of n points in the plane in O(n + k) time, where k is the approximation error control parameter. The proposed algorithm is suitable for applications that favor reduced computation time over exact accuracy, such as animation and interaction in computer graphics, where rapid and real-time graphics rendering is indispensable.
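As a concrete illustration of this time/accuracy trade-off, here is a minimal sketch of one classic O(n + k) approximate-hull construction in the style of Bentley, Faust, and Preparata (not necessarily the paper's algorithm): split the x-range into k strips, keep only each strip's extreme points, then run an exact hull on the survivors.

```python
def cross(o, a, b):
    """z-component of the cross product of vectors OA and OB."""
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(pts):
    """Exact hull via Andrew's monotone chain, O(n log n)."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def approx_hull(points, k):
    """Approximate hull: keep each of k x-strips' lowest and highest
    points (plus the global x-extremes), then hull the survivors.
    Larger k -> smaller approximation error but more work."""
    xs = [p[0] for p in points]
    xmin, xmax = min(xs), max(xs)
    width = (xmax - xmin) / k or 1.0
    lo, hi = {}, {}
    for p in points:
        i = min(int((p[0] - xmin) / width), k - 1)
        if i not in lo or p[1] < lo[i][1]: lo[i] = p
        if i not in hi or p[1] > hi[i][1]: hi[i] = p
    cand = set(lo.values()) | set(hi.values()) | {min(points), max(points)}
    return convex_hull(cand)
```

Any true hull vertex missed by this scheme lies within one strip width of the approximate hull, which is how k controls the error.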
Abstract: Artificial atoms are a growing field of interest due to their physical and optoelectronic applications. The absorption spectra of the proposed artificial atom in the presence of a terahertz field are investigated theoretically. We use non-perturbative Floquet theory and the finite difference method to study the electronic structure of the artificial atom. The effect of a static electric field on the energy levels of the artificial atom is studied. The effect of the orientation of the static electric field on the energy levels and dipole matrix elements is also highlighted.
Abstract: This paper presents a novel CMOS four-transistor
SRAM cell for very high density and low power embedded SRAM
applications as well as for stand-alone SRAM applications. The cell
retains its data through leakage current and positive feedback,
without a refresh cycle. The new cell is 20% smaller than a
conventional six-transistor cell using the same design rules. The
proposed cell also uses two word-lines and one pair of bit-lines.
Read operations are performed from one side of the cell and write
operations from the other side, and the swing voltage on the
word-lines is reduced, so dynamic power during read/write
operations is reduced. The fabrication process is fully compatible
with high-performance CMOS logic technologies, because there is no
need to integrate a poly-Si resistor or a TFT load. HSPICE
simulation in a standard 0.25μm CMOS technology confirms all the
results reported in this paper.
Abstract: System testing exercises the entire system against the
Functional Requirement Specification and/or the System Requirement
Specification. Moreover, it is an investigatory testing phase,
where the focus is to have an almost destructive attitude and to
test not only the design, but also the behavior and even the
believed expectations of the customer. It is also intended to test
up to and beyond the bounds defined in the software/hardware
requirements specifications. At Motorola®, Automated Testing is one
of the testing methodologies used by GSG-iSGT (Global Software
Group - iDEN™ Subscriber Group-Test) to increase the testing volume
and productivity and to reduce test cycle-time in iDEN™ phone
testing. Testing is thereby able to produce more robust products
before release to the market. In this paper, iHopper is proposed as
a tool to perform stress tests on iDEN™ phones. We discuss the
value that automation has brought to iDEN™ phone testing, such as
improving software quality, together with some metrics. We also
look into the advantages of the proposed system and discuss future
work.
Abstract: In this paper, we are concerned with the design and
simulation of a modified extremum seeking control for nonlinear
systems. The standard extremum seeking control has a simple
structure, but it takes a long time to reach an optimal operating
point. We consider a modification of the standard extremum seeking
control that aims to reach the optimal operating point more quickly
than the standard one. In the modification, a PD acceleration term
is added before the integrator forming the principal control, which
enables the controlled object to be regulated to the optimal point
smoothly. The proposed method is applied to the Monod and
Williams-Otto models to investigate its effectiveness. Numerical
simulation results show that the modified method reaches the
optimal operating point more quickly than the standard one.
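A minimal discrete-time sketch of the idea (the toy objective, gains, and exact placement of the PD path are illustrative assumptions, not the paper's Monod or Williams-Otto setups): the gradient of the unknown objective is estimated by sinusoidal dither plus synchronous demodulation, and the integrator driving the input is fed either the demodulated signal alone (standard scheme) or that signal augmented with a derivative "acceleration" term.

```python
import math

def grad_estimate(J, u, a=0.2, w=5.0, n=64):
    """Dither-and-demodulate gradient estimate: averaging
    (2/a) * J(u + a sin(wt)) * sin(wt) over one full dither period
    approximates dJ/du at u."""
    dt = 2.0 * math.pi / (w * n)
    s = sum(J(u + a * math.sin(w * k * dt)) * math.sin(w * k * dt)
            for k in range(n))
    return (2.0 / a) * s / n

def extremum_seek(J, u0=0.0, gain=0.2, kd=0.0, iters=60):
    """Extremum seeking: integrate the gradient estimate.  kd > 0
    adds a derivative 'acceleration' path in front of the integrator
    (kd = 0 recovers the standard scheme)."""
    u, g_prev = u0, 0.0
    for _ in range(iters):
        g = grad_estimate(J, u)
        u += gain * (g + kd * (g - g_prev))   # integrator with PD input
        g_prev = g
    return u

# drive a toy objective whose maximum sits at u = 2
u_std = extremum_seek(lambda u: -(u - 2.0) ** 2)
```

Both variants climb to the maximizer of J; the derivative path changes the transient shape of the seeking loop.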
Abstract: The fine structure of supercavitation in the wake of a
symmetrical cylinder is studied with high-speed video cameras. The
flow is observed in a cavitation tunnel at a speed of 8 m/sec, when
the sidewall and the wake are partially filled with massive
cavitation bubbles. The experiment shows that a two-dimensional
ripple wave with a wavelength of 0.3 mm propagates in the
downstream direction and then abruptly thickens into a
three-dimensional layer. IR photography recorded that the wakes
originated from the horseshoe vortices alongside the cylinder. The
wake developed into the dead water zone, which absorbed the bubbly
wake propelled from the separated vortices at the center of the
cylinder. A remote sensing classification technique (maximum
likelihood) determined that the surface porosity was 0.2 and the
mean speed in the mixed wake was 7 m/sec. To confirm the existence
of two-dimensional wave motions at the interface, experiments were
also conducted at a very low frequency and showed similar gravity
waves in both the upper and lower interfaces.
Abstract: A new deployment of two multiple criteria decision
making (MCDM) techniques, the Simple Additive Weighting (SAW)
method and the Technique for Order Preference by Similarity to
Ideal Solution (TOPSIS), for portfolio allocation is demonstrated
in this paper. Rather than referring exclusively to mean and
variance as in the traditional mean-variance method, the criteria
used in this demonstration are the first four moments of the
portfolio distribution. Each asset is evaluated based on its
marginal impacts on the portfolio's higher moments, which are
characterized by trapezoidal fuzzy numbers. Centroid-based
defuzzification is then applied to convert the fuzzy numbers into
crisp numbers, on which SAW and TOPSIS can operate. Experimental
results suggest that these MCDM approaches are similarly efficient
in selecting dominant assets for an optimal portfolio under higher
moments. The proposed approaches allow investors to flexibly adjust
their risk preferences regarding higher moments via different
schemes, accommodating various kinds of investors, from
conservative to risky. The other significant advantage is that,
compared to mean-variance analysis, the portfolio weights obtained
by SAW and TOPSIS are consistently well-diversified.
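To make the defuzzification and scoring steps concrete, here is a sketch of the centroid of a trapezoidal fuzzy number followed by a SAW ranking of the resulting crisp scores. The criteria are treated as benefit criteria with max-normalisation, and the weights and numbers are illustrative, not the paper's portfolio data:

```python
def centroid(a, b, c, d):
    """Centroid (x-coordinate) of the trapezoidal fuzzy number
    (a, b, c, d) with a <= b <= c <= d."""
    den = d + c - a - b
    if den == 0:                        # degenerate: a single point
        return a
    return (d*d + c*c + d*c - a*a - b*b - a*b) / (3.0 * den)

def saw_rank(scores, weights):
    """Simple Additive Weighting: max-normalise each (benefit)
    criterion column, weight-sum the rows, and rank best first."""
    n_crit = len(scores[0])
    col_max = [max(row[j] for row in scores) for j in range(n_crit)]
    totals = [sum(w * row[j] / col_max[j]
                  for j, w in enumerate(weights))
              for row in scores]
    return sorted(range(len(scores)), key=lambda i: -totals[i])

# two assets scored on two defuzzified criteria (illustrative numbers)
crisp = [[centroid(0, 1, 2, 3), centroid(1, 2, 2, 3)],
         [centroid(2, 3, 3, 4), centroid(0, 0, 1, 1)]]
```

The same crisp matrix could equally be fed to TOPSIS; only the aggregation step differs.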
Abstract: Person-to-person information sharing is easily realized
by P2P networks, in which servers are not essential. Leakage of
information caused by malicious access to P2P networks has become a
new social issue. To prevent information leakage, it is necessary
to detect and block the traffic of P2P software. Since some P2P
software can spoof port numbers, it is difficult to detect P2P
traffic by port numbers alone. Devising effective countermeasures
is even more difficult because the protocols of such software are
not public.
In this paper, a method for discriminating network applications
based on the communication characteristics of application messages,
without using port numbers, is proposed. The method is based on the
assumption that there are rules governing the time intervals at
which application-layer messages are transmitted and the number of
packets needed to send one message. By extracting these rules from
network traffic, the proposed method can discriminate applications
without port numbers.
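The assumed regularities, message timing and packets per message, can be sketched as a feature extractor over packet timestamps. The function, the gap threshold, and the burst structure below are illustrative assumptions, not the paper's discrimination rule:

```python
def message_features(timestamps, gap=0.05):
    """Group packets into messages using an inter-packet gap
    threshold, then return the two characteristics a discrimination
    rule could be built on: the mean interval between messages and
    the mean number of packets per message."""
    messages = [[timestamps[0]]]
    for t in timestamps[1:]:
        if t - messages[-1][-1] > gap:
            messages.append([t])       # a new message starts
        else:
            messages[-1].append(t)     # same message continues
    starts = [msg[0] for msg in messages]
    intervals = [b - a for a, b in zip(starts, starts[1:])] or [0.0]
    return (sum(intervals) / len(intervals),
            sum(len(msg) for msg in messages) / len(messages))
```

An application whose protocol sends, say, three-packet keep-alive messages once a second would map to a characteristic point in this feature space regardless of the port it uses.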
Abstract: The self-organizing map (SOM) is a well-known neural network model with widespread applications. The main characteristics of the SOM are two-fold, namely dimension reduction and topology preservation. Using a SOM, a high-dimensional data space is mapped to a low-dimensional space while the topological relations among the data are preserved. With such characteristics, the SOM has usually been applied to data clustering and visualization tasks. However, the SOM has the main disadvantage that the number and structure of its neurons, which are difficult to determine, must be known prior to training. Several schemes have been proposed to tackle this deficiency; examples are the growing/expandable SOM, the hierarchical SOM, and the growing hierarchical SOM. These schemes can dynamically expand the map, and even generate hierarchical maps, during training, and encouraging results have been reported. Basically, these schemes adapt the size and structure of the map according to the distribution of the training data; that is, they are data-driven or data-oriented SOM schemes. In this work, a topic-oriented SOM scheme suitable for document clustering and organization is developed. The proposed SOM automatically adapts both the number and the structure of the map according to identified topics. Unlike other data-oriented SOMs, our approach expands the map and generates the hierarchies according to both the topics and the characteristics of the neurons. Preliminary experiments give promising results and demonstrate the plausibility of the method.
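As a baseline against which the dynamic variants above can be understood, a minimal fixed-size SOM trainer can be sketched as follows; the map size, decay schedules, and toy two-cluster data are illustrative choices, not the proposed topic-oriented scheme:

```python
import math, random

def train_som(data, rows=3, cols=3, iters=1500, lr0=0.5, sigma0=1.5):
    """Classic SOM training: pick a random sample, find its
    best-matching unit (BMU), then pull the BMU and its map
    neighbours toward the sample, shrinking the learning rate and
    neighbourhood radius over time."""
    dim = len(data[0])
    rng = random.Random(0)
    w = [[[rng.random() for _ in range(dim)] for _ in range(cols)]
         for _ in range(rows)]
    for t in range(iters):
        x = data[rng.randrange(len(data))]
        frac = t / iters
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 0.3
        br, bc = bmu(w, x)
        for r in range(rows):
            for c in range(cols):
                h = math.exp(-((r - br) ** 2 + (c - bc) ** 2)
                             / (2.0 * sigma * sigma))
                for k in range(dim):
                    w[r][c][k] += lr * h * (x[k] - w[r][c][k])
    return w

def bmu(w, x):
    """Grid position of the unit whose weight vector is closest to x."""
    best, pos = float("inf"), (0, 0)
    for r in range(len(w)):
        for c in range(len(w[0])):
            d = sum((w[r][c][k] - x[k]) ** 2 for k in range(len(x)))
            if d < best:
                best, pos = d, (r, c)
    return pos

# two well-separated 2-D clusters (a toy stand-in for document vectors)
data = [(0.10, 0.10), (0.12, 0.08), (0.09, 0.11),
        (0.90, 0.90), (0.88, 0.92), (0.91, 0.89)]
som = train_som(data)
```

The growing and hierarchical schemes discussed in the abstract differ precisely in that `rows` and `cols` are not fixed in advance but expanded during training.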
Abstract: An adaptive spatial Gaussian mixture model is proposed for clustering-based color image segmentation. A new clustering objective function that incorporates spatial information is introduced in the Bayesian framework. The weighting parameter controlling the importance of the spatial information is made adaptive to the image content, so as to augment smoothness towards piecewise-homogeneous regions and diminish the edge-blurring effect; hence the name adaptive spatial finite mixture model. The proposed approach is compared with the spatially variant finite mixture model for pixel labeling. Experimental results on synthetic images and the Berkeley dataset demonstrate that the proposed method is effective in improving segmentation and can be employed in various practical image content understanding applications.
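A spatially regularized EM loop of this general kind can be sketched as below. This is a crude stand-in with a fixed blending weight `beta`, whereas the proposed model adapts that weight to the image content; the two-class grey-level synthetic image is likewise illustrative:

```python
import numpy as np

def segment_two_class(img, iters=15, beta=0.5):
    """EM for a 2-component Gaussian mixture over pixel intensities,
    with a simple spatial term: responsibilities are blended with the
    average responsibilities of the 4-neighbours (weight beta)."""
    mu = np.array([img.min(), img.max()], dtype=float)
    var = np.array([img.var() + 1e-6] * 2)
    for _ in range(iters):
        lik = np.stack(
            [np.exp(-(img - mu[k]) ** 2 / (2 * var[k])) / np.sqrt(var[k])
             for k in range(2)], axis=-1) + 1e-12
        r = lik / lik.sum(axis=-1, keepdims=True)          # E-step
        pad = np.pad(r, ((1, 1), (1, 1), (0, 0)), mode="edge")
        nbr = (pad[:-2, 1:-1] + pad[2:, 1:-1]
               + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
        r = (1 - beta) * r + beta * nbr                    # spatial blending
        for k in range(2):                                 # M-step
            w = r[..., k]
            mu[k] = (w * img).sum() / w.sum()
            var[k] = (w * (img - mu[k]) ** 2).sum() / w.sum() + 1e-6
    return np.argmax(r, axis=-1)

# synthetic two-region image with additive noise
rng = np.random.default_rng(1)
img = np.zeros((16, 16))
img[:, 8:] = 1.0
img = img + rng.normal(0.0, 0.05, img.shape)
labels = segment_two_class(img)
```

The spatial blending is what suppresses isolated noisy labels inside homogeneous regions, at the cost of some smoothing near edges, which is exactly the trade-off the adaptive weighting in the paper is designed to manage.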
Abstract: The steady-state operation of maintaining voltage
stability is achieved by switching various controllers scattered
over the power network. When a contingency occurs, whether forced
or unforced, the dispatcher must alleviate the problem with minimum
time, cost, and effort. A persistent problem may lead to a
blackout. The dispatcher must apply the appropriate switching of
controllers, in terms of type, location, and size, to remove the
contingency and maintain voltage stability. Wrong switching may
worsen the problem, which may also lead to a blackout. This work
proposes and uses Fuzzy C-Means Clustering (FCMC) to assist the
dispatcher in this decision making. FCMC is used in static voltage
stability analysis to map a contingency instantaneously to a set of
controllers, from which the types, locations, and amounts of
switching are induced.
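The clustering engine behind such a mapping is standard Fuzzy C-Means; here is a minimal sketch with toy 2-D data standing in for contingency feature vectors (the fuzzifier, iteration count, and center seeding are illustrative choices):

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50):
    """Standard Fuzzy C-Means.  U[i, k] is the membership of sample i
    in cluster k; centers are membership-weighted means.  Centers are
    seeded with c samples spread across the data (a simple choice)."""
    idx = np.linspace(0, len(X) - 1, c).astype(int)
    centers = X[idx].astype(float).copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)          # memberships
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]    # centroids
    return centers, U

# toy feature vectors forming two well-separated groups
X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
              [5.0, 5.0], [5.1, 4.9], [4.8, 5.2]])
centers, U = fcm(X)
```

In the dispatcher setting, each cluster would be associated with a set of controllers, and a new contingency's fuzzy memberships would indicate which switching actions to induce.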
Abstract: This paper proposes a method that predicts attractive
evaluation objects. In the learning phase, the method inductively
acquires trend rules from complex sequential data. The data
comprises two types: numerical sequential data, where each
evaluation object has its own numerical sequence, and text
sequential data, where each evaluation object is described in
texts. The trend rules represent changes in numerical values
related to the evaluation objects. In the prediction phase, the
method applies new text sequential data to the trend rules and
evaluates which evaluation objects are attractive. This paper
verifies the effectiveness of the proposed method using stock price
sequences and news headline sequences, in which each stock brand
corresponds to an evaluation object. The paper discusses the
validity of the predicted attractive evaluation objects, the
processing time of each phase, and possible application tasks.
Abstract: The main aim of Supply Chain Management (SCM) is to
produce, distribute, and deliver goods and equipment to the right
location, at the right time, and in the right amount to satisfy
customers with minimum waste of time and cost. Implementing
techniques that reduce project time and cost and improve
productivity and performance is therefore very important. Emerging
technologies such as Radio Frequency Identification (RFID) now make
it possible to automate supply chains in real time, making them
more efficient than the simple supply chains of the past at tracing
and monitoring goods and products and at capturing data on the
movement of goods and other events. This paper considers the
concepts, components, and characteristics of RFID technology, with
a focus on warehouse and inventory management. Additionally, the
use of RFID to improve information management in the supply chain
is discussed. Finally, the practicalities of installing this
technology and its results for warehouse and inventory management
and business development are presented.
Abstract: This paper presents preliminary results on the modeling
and control of a quadrotor UAV. Based on aerodynamic concepts, a
mathematical model is first proposed to describe the dynamics of
the quadrotor UAV. The parameters of this model are identified
experimentally with the MATLAB System Identification Toolbox. A
group of PID controllers is then designed based on the developed
model. To verify the developed model and controllers, simulations
and experiments for altitude control, position control, and
trajectory tracking are carried out. The results show that the
quadrotor UAV follows the reference commands well, which clearly
demonstrates the effectiveness of the proposed approach.
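As an illustration of the control part, here is a toy PID altitude loop closed around an assumed double-integrator stand-in for the identified quadrotor model; the mass, gains, and dynamics are illustrative, not the identified parameters:

```python
class PID:
    """Textbook PID controller with Euler integration."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, None

    def step(self, err):
        self.integral += err * self.dt
        deriv = (0.0 if self.prev_err is None
                 else (err - self.prev_err) / self.dt)
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# toy vertical dynamics m*z'' = u - m*g (assumed stand-in model)
m, g, dt = 1.0, 9.81, 0.01
z, vz = 0.0, 0.0
pid = PID(kp=8.0, ki=2.0, kd=5.0, dt=dt)
target = 1.0                         # altitude command in metres
for _ in range(3000):                # 30 s of simulated flight
    u = m * g + pid.step(target - z)     # gravity feed-forward + PID
    vz += (u - m * g) / m * dt
    z += vz * dt
```

Position control and trajectory tracking in the paper use the same pattern, with separate PID loops per axis cascaded through the attitude dynamics.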
Abstract: Recently, data mining has been applied to scientific bibliographic databases to analyze the pathways of knowledge or the core scientific relevance of a laureate or a country. This specific case of data mining has been named citation mining: the integration of citation bibliometrics and text mining. In this paper we present an improved web implementation of statistical physics algorithms that performs the text mining component of citation mining. In particular, we use an entropy-like distance between compressed texts as an indicator of their similarity. Finally, we have included the recently proposed h-index to characterize scientific production. We have used this web implementation to identify the users, applications, and impact of the Mexican scientific institutions located in the state of Morelos.
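An entropy-like, compression-based text distance of the kind described can be sketched as the normalized compression distance; the use of zlib and the sample strings are illustrative stand-ins for the paper's compressor and corpus:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: a parameter-free,
    entropy-like dissimilarity between two texts, close to 0 for
    near-identical texts and near 1 for unrelated ones."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

s1 = b"the quick brown fox jumps over the lazy dog " * 40
s2 = b"pack my box with five dozen liquor jugs now " * 40
```

The intuition: if x and y share structure, compressing their concatenation costs little more than compressing either alone, so the distance is small.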
Abstract: User-based collaborative filtering (CF), one of the most
prevalent and efficient recommendation techniques, provides
personalized recommendations to users based on the opinions of
other users. Although the CF technique has been successfully
applied in various applications, it suffers from serious sparsity
problems. The cloud-model approach addresses the sparsity problem
by constructing the user's global preference, represented by a
cloud eigenvector. The user-based CF approach works well with dense
datasets, while the cloud-model CF approach performs better when
the dataset is sparse. In this paper, we present a hybrid approach
that integrates the predictions from both the user-based CF and the
cloud-model CF approaches. The experimental results show that the
proposed hybrid approach can ameliorate the sparsity problem and
provide improved prediction quality.
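One simple way such an integration could work is a linear blend whose weight follows data density; the switching rule and saturation point `n0` below are hypothetical illustrations, not the paper's combination scheme:

```python
def hybrid(p_user, p_cloud, n_corated, n0=20):
    """Blend the two CF predictions linearly.  The weight on the
    user-based prediction grows with the number of co-rated items,
    so sparse users lean on the cloud-model prediction.
    (Hypothetical rule; n0 is an assumed saturation point.)"""
    lam = min(1.0, n_corated / n0)
    return lam * p_user + (1.0 - lam) * p_cloud
```

At zero co-ratings the blend returns the cloud-model prediction unchanged, and at high density it returns the user-based prediction, matching the regimes in which each approach performs best.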
Abstract: This paper presents the design and implementation of
WebGD, a CORBA-based document classification and retrieval system
on the Internet. WebGD makes use of techniques such as the Web,
CORBA, Java, NLP, fuzzy techniques, knowledge-based processing, and
database technology. Its main features include a unified
classification and retrieval model, classification and retrieval
with a single reasoning engine, and flexible working-mode
configuration. The architecture of WebGD, the unified
classification and retrieval model, the components of the WebGD
server, and the fuzzy inference engine are discussed in detail.
Abstract: The aim of this research is to design a collaborative
framework that integrates risk analysis activities into the
geospatial database design (GDD) process. Risk analysis is rarely
undertaken iteratively as part of present GDD methods, in
conformance with requirement engineering (RE) guidelines and risk
standards. Accordingly, when risk analysis is performed during GDD,
some foreseeable risks may be overlooked and never reach the output
specifications, especially when user intentions are not
systematically collected. This may lead to ill-defined requirements
and ultimately to higher risks of geospatial data misuse. The
adopted approach consists of 1) reviewing the risk analysis process
within the scope of RE and GDD, 2) analyzing the challenges of risk
analysis within the context of GDD, and 3) presenting the
components of a risk-based collaborative framework that improves
the collection of the intended/forbidden usages of the data and
helps geo-IT experts discover implicit requirements and risks.
Abstract: In order to develop forest management strategies for
tropical forests in Malaysia, surveying the forest resources and
monitoring the forest area affected by logging activities are
essential. Tremendous effort has gone into the classification of
land cover related to forest resource management in this country,
as it is a priority in all aspects of forest mapping using remote
sensing and related technology such as GIS. In fact, classification
is a compulsory step in any remote sensing research. The main
objective of this paper is therefore to assess the classification
accuracy of a classified forest map derived from Landsat TM data
using two different amounts of reference data (200 and 388
reference points). The comparison was made between an observation
approach (200 reference points) and a combined interpretation and
observation approach (388 reference points). Five land cover
classes, namely primary forest, logged-over forest, water bodies,
bare land, and agricultural crop/mixed horticulture, can be
identified by their differences in spectral wavelength. Results
showed that the overall accuracy from 200 reference points was
83.5% (kappa value 0.7502459; kappa variance 0.002871), which is
considered acceptable or good for optical data. However, when the
200 reference points were increased to 388 in the confusion matrix,
the accuracy improved from 83.5% to 89.17%, with the kappa
statistic increasing from 0.7502459 to 0.8026135. The accuracy of
this classification suggests that the strategy for the selection of
training areas, the interpretation approach, and the number of
reference points used are important for achieving a better
classification result.
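The accuracy figures quoted above are derived from a confusion matrix; as a sketch, the overall accuracy and Cohen's kappa statistic can be computed as follows (the matrix here is illustrative, not the paper's 200- or 388-point matrix):

```python
def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion
    matrix (rows = reference data, columns = classified map)."""
    n = sum(sum(row) for row in cm)
    observed = sum(cm[i][i] for i in range(len(cm))) / n
    # chance agreement from the row and column marginals
    expected = sum(sum(row) * sum(col)
                   for row, col in zip(cm, zip(*cm))) / (n * n)
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa

# illustrative two-class matrix: 85/100 pixels on the diagonal
acc, kappa = accuracy_and_kappa([[40, 10], [5, 45]])
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside the raw overall accuracy in the abstract.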