Abstract: The self-organizing map (SOM) is a well-known neural network model with a wide range of applications. Its main characteristics are two-fold, namely dimension reduction and topology preservation: using a SOM, a high-dimensional data space is mapped to a low-dimensional space while the topological relations among the data are preserved. With these characteristics, the SOM has usually been applied to data clustering and visualization tasks. However, its main disadvantage is that the number and structure of the neurons must be known prior to training, and these are difficult to determine. Several schemes have been proposed to tackle this deficiency; examples are growing/expandable SOMs, hierarchical SOMs, and growing hierarchical SOMs. These schemes can dynamically expand the map, and even generate hierarchical maps, during training, and encouraging results have been reported. Basically, these schemes adapt the size and structure of the map according to the distribution of the training data; that is, they are data-driven or data-oriented SOM schemes. In this work, a topic-oriented SOM scheme suitable for document clustering and organization is developed. The proposed SOM automatically adapts both the size and the structure of the map according to identified topics. Unlike other data-oriented SOMs, our approach expands the map and generates the hierarchies according to both the topics and the characteristics of the neurons. Preliminary experiments give promising results and demonstrate the plausibility of the method.
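The classic SOM training loop that this abstract builds on can be sketched as follows. This is a minimal fixed-size SOM with a hypothetical grid size and decay schedule, not the topic-oriented growing variant the paper proposes:

```python
import numpy as np

def train_som(data, grid_h=4, grid_w=4, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal fixed-size SOM: map high-dimensional data onto a 2-D neuron grid
    while preserving topology (nearby inputs activate nearby neurons)."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((grid_h, grid_w, dim))      # neuron codebook vectors
    # grid coordinates, used to measure neighborhood distance on the map
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    coords = np.stack([ys, xs], axis=-1).astype(float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                  # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 1e-3     # shrinking neighborhood
        for x in data:
            # best-matching unit: neuron closest to the input sample
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood around the BMU on the 2-D grid
            grid_d2 = ((coords - coords[bmu]) ** 2).sum(axis=-1)
            h = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)        # pull neighbors toward x
    return weights

def bmu_of(weights, x):
    """Grid position of the best-matching unit for input x."""
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)
```

After training, well-separated clusters map to different regions of the grid, which is the property the clustering and visualization applications rely on.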
Abstract: A new deployment of the multiple criteria decision
making (MCDM) techniques: the Simple Additive Weighting
(SAW), and the Technique for Order Preference by Similarity to
Ideal Solution (TOPSIS) for portfolio allocation, is demonstrated in
this paper. Rather than exclusive reference to mean and variance as in
the traditional mean-variance method, the criteria used in this
demonstration are the first four moments of the portfolio distribution.
Each asset is evaluated based on its marginal impacts to portfolio
higher moments that are characterized by trapezoidal fuzzy numbers.
Then centroid-based defuzzification is applied to convert fuzzy
numbers to the crisp numbers by which SAW and TOPSIS can be
deployed. Experimental results suggest that these MCDM approaches are similarly efficient at selecting dominant assets for an optimal portfolio under higher moments. The proposed approaches allow investors to flexibly adjust their risk preferences regarding higher moments via different schemes, adapting to various kinds of investors, from conservative to risky. The other significant
advantage is that, compared to the mean-variance analysis, the
portfolio weights obtained by SAW and TOPSIS are consistently
well-diversified.
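The centroid defuzzification and SAW steps described above can be illustrated with a small sketch. The centroid formula for a trapezoidal fuzzy number is standard; the criteria weights and the benefit/cost normalization scheme below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def centroid(a, b, c, d):
    """Center-of-gravity defuzzification of a trapezoidal fuzzy
    number (a <= b <= c <= d) into a crisp value."""
    return (d**2 + c**2 + c*d - a**2 - b**2 - a*b) / (3.0 * (d + c - a - b))

def saw_rank(fuzzy_matrix, weights, benefit):
    """Rank alternatives by Simple Additive Weighting.
    fuzzy_matrix[i][j] is the trapezoidal fuzzy rating (a, b, c, d) of
    alternative i on criterion j; benefit[j] marks larger-is-better criteria.
    The linear normalization used here is one common SAW convention."""
    crisp = np.array([[centroid(*cell) for cell in row] for row in fuzzy_matrix])
    norm = np.empty_like(crisp)
    for j in range(crisp.shape[1]):
        col = crisp[:, j]
        # benefit criteria: v / max; cost criteria: min / v (assumes positive values)
        norm[:, j] = col / col.max() if benefit[j] else col.min() / col
    scores = norm @ np.asarray(weights)
    return scores, np.argsort(-scores)   # higher SAW score ranks first
```

For example, with two assets rated on a benefit criterion (e.g. mean) and a cost criterion (e.g. variance), the asset that dominates on both receives the higher SAW score.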
Abstract: System testing is performed on the entire system against the Functional Requirement Specification and/or the System Requirement Specification. Moreover, it is an investigatory testing phase, where the focus is to adopt an almost destructive attitude and test not only the design, but also the behavior and even the believed expectations of the customer. It is also intended to test up to and beyond the bounds defined in the software/hardware requirements specifications. At Motorola®, automated testing is one of the testing methodologies used by GSG-iSGT (Global Software Group - iDEN™ Subscriber Group-Test) to increase testing volume and productivity and to reduce test cycle-time in iDEN™ phone testing. Testing is thereby able to produce more robust products before release to the market. In this paper, iHopper is proposed as a tool to perform stress tests on iDEN™ phones. We will discuss the value that automation has brought to iDEN™ phone testing, such as improving software quality in the iDEN™ phone, together with some metrics. We will also look into the advantages of the proposed system and discuss future work.
Abstract: Motor imagery classification provides an important basis for designing Brain Machine Interfaces (BMI). A BMI captures and decodes brain EEG signals and transforms human thought into actions. The ability of an individual to control his EEG through imaginary mental tasks enables him to control devices through the BMI. This paper presents a method to design a four-state BMI using EEG signals recorded from the C3 and C4 locations. Principal features extracted through principal component analysis of the segmented EEG are analyzed using two novel classification algorithms based on the Elman recurrent neural network and the functional link neural network. The performance of both classifiers is evaluated using a particle swarm optimization (PSO) training algorithm; results are also compared with the conventional back propagation (BP) training algorithm. EEG motor imagery recorded from two subjects is used in the offline analysis. From the overall classification performance it is observed that the BP algorithm has a higher average classification rate of 93.5%, while the PSO algorithm has better training time and maximum classification. The proposed method promises to provide a useful alternative general procedure for motor imagery classification.
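The principal component feature extraction mentioned above can be sketched generically; the segment shape and component count here are illustrative, not the paper's actual preprocessing parameters:

```python
import numpy as np

def pca_features(segments, n_components=2):
    """Project EEG segments onto their principal components.
    `segments` is an (n_segments, n_samples) array; this is a generic PCA
    sketch, not the exact pipeline of the paper."""
    X = segments - segments.mean(axis=0)          # center each sample dimension
    cov = np.cov(X, rowvar=False)                 # covariance across segments
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_components]
    components = eigvecs[:, order]                # top principal directions
    return X @ components                         # principal features per segment
```

The resulting low-dimensional feature vectors are what a downstream classifier (here, the Elman or functional link network) would be trained on.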
Abstract: Cellular communication is widely used all over the world, and the number of handset users keeps increasing due to demand driven by the marketing sector. The important aspect addressed in this paper is the security system of cellular communication: it is important to provide users with a secure channel for communication. A brief description of the new GSM cellular network architecture will be provided. Limitations of cellular networks, their security issues, and the different types of attacks will be discussed. The paper will go over some new security mechanisms that have been proposed by researchers. Overall, this paper clarifies the security system and services of cellular communication using GSM. Three Malaysian communication companies are taken as case studies in this paper.
Abstract: As the number of mobile service subscribers increases, mobile content services are becoming more and more varied. Mobile content development therefore needs not only content design but also guidelines specific to mobile platforms. When mobile content is developed, it is important to overcome the limits and restrictions of mobile devices: small browsers and screen sizes, limited download sizes, and uncomfortable navigation. Guidelines for each type of mobile content are therefore presented, aimed at user usability, ease of development, and consistency of rules. This paper proposes a methodology built around these content-specific mobile guidelines, and a mobile web site is developed following the proposed guidelines.
Abstract: In this paper a new method is suggested for distributed data mining using probability patterns. These patterns use decision trees and decision graphs, and care is taken that they are valid, novel, useful, and understandable. Considering a set of objective functions, the system converges to a good pattern or better objectives. Using the suggested method, we are able to extract useful information from massive, multi-relational databases.
Abstract: Because of its excellent properties, the SPIHT algorithm, which is based on traditional wavelet transformation theory, has attracted much attention, but it also has shortcomings. Combining recent progress in the wavelet domain with the characteristics of human vision, we propose an improved SPIHT algorithm based on human visual characteristics, built on an analysis of the original SPIHT algorithm. Experiments indicate that coding speed and quality are considerably enhanced compared with the original SPIHT algorithm, and the quality at transmission cut-off is improved as well.
Abstract: In text categorization, the most widely used method for document representation is based on word frequency vectors, called the Vector Space Model (VSM). This representation is based only on the words in the documents and thus loses any "word context" information found in a document. In this article we compare the classical method of document representation with a method called the Suffix Tree Document Model (STDM), which represents documents in suffix tree format. For the STDM model we propose a new approach to document representation and a new formula for computing the similarity between two documents: we build the suffix tree only for two documents at a time. This approach is faster, has lower memory consumption, and uses the entire document representation without requiring methods for discarding nodes. The proposed similarity formula substantially improves clustering quality. The representation method was validated using HAC (Hierarchical Agglomerative Clustering). In this context we also examine the influence of stemming in the document preprocessing step and highlight the difference that similarity and dissimilarity measures make in finding "closer" documents.
Abstract: An embedded system for SEU (single event upset) testing must be designed to prevent system failure caused by high-energy particles while SEUs are being measured. An SEU is a phenomenon in which data in a semiconductor device is changed temporarily by high-energy particles. In this paper, we present an embedded system for SRAM (static random access memory) SEU testing. The SRAMs are on the DUT (device under test) board, which is separated from the control board that manages the DUT and measures the occurrence of SEUs. Care must be taken to prevent system failure while managing the DUT and making accurate measurements of SEUs. We measure the occurrence of SEUs in five different SRAMs at three different cyclotron beam energies: 30, 35, and 40 MeV. The number of SEUs in the SRAMs ranges from 3.75 to 261.00 on average.
Abstract: The captured gel electrophoresis image represents the output of a DNA computing algorithm. Before this image is captured, DNA computing involves parallel overlap assembly (POA) and polymerase chain reaction (PCR), which are the main stages of the computing algorithm. However, the design of the DNA oligonucleotides to represent a problem is quite complicated and prone to errors. In order to reduce these errors during the design stage, before the actual in-vitro experiment is carried out, simulation software capable of simulating the POA and PCR processes is developed. The capability of the simulation software is unlimited in that a problem of any size and complexity can be simulated, thus saving the cost of possible errors during the design process. Information about the DNA sequences during the computing process, as well as the computing output, can be extracted at the same time using the simulation software.
Abstract: Image interpolation is a common problem in imaging applications. However, most existing interpolation algorithms suffer visually, to some extent, from blurred edges and jagged artifacts in the image. This paper presents an adaptive feature-preserving bidirectional flow process, where an inverse diffusion is performed to sharpen edges along the directions normal to the isophote lines (edges), while a normal diffusion is done to remove artifacts ("jaggies") along the tangent directions. In order to preserve image features such as edges, corners, and textures, the nonlinear diffusion coefficients are locally adjusted according to the directional derivatives of the image. Experimental results on synthetic and natural images demonstrate that our interpolation algorithm substantially improves the subjective quality of the interpolated images over conventional interpolation methods.
Abstract: Decisions are regularly made during a project or in daily life. Some decisions are critical and have a direct impact on project or human success. Formal evaluation is thus required, especially for crucial decisions, to arrive at the optimal solution among the alternatives. According to microeconomic theory, all of a person's decisions can be modeled as indifference curves. The proposed approach supports formal analysis and decision-making by constructing an indifference curve model from previous experts' decision criteria. The knowledge embedded in the system can be reused, helping naïve users select an alternative solution for a similar problem. Moreover, the method is flexible enough to cope with an unlimited number of factors influencing the decision-making. In preliminary experiments, the selected alternatives accurately match the experts' decisions.
Abstract: Extracting in-play scenes in sport videos is essential for
quantitative analysis and effective video browsing of the sport
activities. Game analysis of badminton, as with the other racket sports, requires detecting the start and end of each rally period in an automated manner. This paper describes an automatic serve scene detection method employing cubic higher-order local auto-correlation (CHLAC) and multiple regression analysis (MRA). CHLAC can extract features of the postures and motions of multiple persons without segmenting and tracking each person, by virtue of its shift-invariance and additivity, and requires no prior knowledge. The specific scenes, such as serves, are then detected by MRA from
the CHLAC features. To demonstrate the effectiveness of our method,
the experiment was conducted on video sequences of five badminton
matches captured by a single ceiling camera. The averaged precision
and recall rates for the serve scene detection were 95.1% and 96.3%,
respectively.
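The final regression step, mapping frame features to a serve/non-serve indicator, can be sketched as plain least-squares multiple regression with a decision threshold; the feature values, labels, and 0.5 threshold below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def fit_mra(features, labels):
    """Least-squares multiple regression from feature vectors (e.g. CHLAC
    features per frame) to a 0/1 target scene indicator."""
    X = np.hstack([features, np.ones((len(features), 1))])  # add intercept column
    coef, *_ = np.linalg.lstsq(X, labels, rcond=None)
    return coef

def predict_serve(coef, features, threshold=0.5):
    """Flag frames whose regression response exceeds the threshold."""
    X = np.hstack([features, np.ones((len(features), 1))])
    return (X @ coef) >= threshold
```

With linearly separable feature clusters, thresholding the fitted response recovers the scene labels, which is the detection mechanism the abstract describes.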
Abstract: The general idea behind the filter is to average a pixel using other pixel values from its neighborhood, while simultaneously taking care of important image structures such as edges. The main concern of the proposed filter is to distinguish between variations of the captured digital image due to noise and those due to image structure. Edges give an image its appearance of depth and sharpness; a loss of edges makes the image appear blurred or unfocused. However, noise smoothing and edge enhancement are traditionally conflicting tasks: since most noise filtering behaves like a low-pass filter, the blurring of edges and loss of detail seem a natural consequence, and techniques that remedy this inherent conflict often generate new noise during enhancement.
In this work a new fuzzy filter is presented for the noise reduction of images corrupted with additive noise. The filter consists of three stages: (1) fuzzy sets are defined in the input space to compute a fuzzy derivative for eight different directions; (2) a set of IF-THEN rules is constructed to perform fuzzy smoothing according to the contributions of neighboring pixel values; and (3) fuzzy sets are defined in the output space to obtain the filtered image with its edges preserved.
Experimental results show the feasibility of the proposed approach on two-dimensional objects.
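The three-stage idea can be illustrated with a simplified one-pass sketch: a neighbor contributes to the average only to the degree that its directional derivative is "small", so large derivatives (edges) are left untouched. The triangular membership width k is a hypothetical parameter, and this rule base is far simpler than the paper's:

```python
import numpy as np

# 8 neighbor offsets: the eight derivative directions of stage (1)
DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def mu_small(x, k=30.0):
    """Fuzzy set 'derivative is small' (triangular membership, width k)."""
    return max(0.0, 1.0 - abs(x) / k)

def fuzzy_smooth(img):
    """One pass of a simplified fuzzy noise filter: each neighbor's weight is
    the membership of its directional derivative in 'small', so averaging
    happens within flat regions but not across edges."""
    img = img.astype(float)
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            num, den = 0.0, 0.0
            for dy, dx in DIRS:
                d = img[y + dy, x + dx] - img[y, x]  # directional derivative
                m = mu_small(d)                      # stage (2): rule firing strength
                num += m * img[y + dy, x + dx]
                den += m
            # stage (3): defuzzified output, center pixel always included
            out[y, x] = (num + img[y, x]) / (den + 1)
    return out
```

On a step edge the cross-edge derivatives exceed k, their memberships drop to zero, and the edge survives intact while in-region noise is averaged away.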
Abstract: We propose a novel graphical technique (SVision) for
intrusion detection, which pictures the network as a community of
hosts independently roaming in a 3D space defined by the set of
services that they use. The aim of SVision is to graphically cluster
the hosts into normal and abnormal ones, highlighting only the ones
that are considered a threat to the network. Our experimental results using the DARPA 1999 and 2000 intrusion detection evaluation datasets show that the proposed technique is a good candidate for detecting various threats to the network, such as vertical and horizontal scanning, Denial of Service (DoS), and Distributed DoS (DDoS) attacks.
Abstract: A wireless sensor network is formed from a combination of sensor nodes and sink nodes. Wireless sensor networks have recently attracted the attention of the research community. A main application of wireless sensor networks is providing security against different attacks, both for the general public and for the military. However, securing these networks is itself a critical issue due to constraints such as limited energy, computational power, and memory. Researchers working in this area have proposed a number of security techniques for this purpose; still, more work needs to be done. In this paper we provide a detailed discussion of security in wireless sensor networks. The paper helps identify the different obstacles and requirements for securing wireless sensor networks, and highlights the weaknesses of existing techniques.
Abstract: Discourse pronominal anaphora resolution must be part of any efficient information processing system, since the reference of a pronoun depends on an antecedent located in the discourse. Contrary to knowledge-poor approaches, this paper shows that syntactic-semantic relations are basic to pronominal anaphora resolution. The identification of quantified expressions to which pronouns can be anaphorically related provides further evidence that pronominal anaphora is based on domains of interpretation where asymmetric agreement holds.
Abstract: Wireless sensor networks (WSNs) consist of a number of tiny, low-cost, and low-power sensor nodes that monitor some physical phenomenon. The major limitation in these networks is the use of non-rechargeable batteries with a limited power supply, and the main cause of energy consumption is the communication subsystem. This paper presents an energy-efficient Cluster Cooperative Caching at Sensor (C3S) scheme based upon grid-type clustering. Sensor nodes belonging to the same cluster/grid form a cooperative cache system for each node, since the cost of communicating with them is low in terms of both energy consumption and message exchanges. The proposed scheme uses cache admission control and a utility-based data replacement policy to ensure that more useful data is retained in the local cache of a node. Simulation results demonstrate that the C3S scheme performs better on various performance metrics than NICoCa, an existing cooperative caching protocol for WSNs.
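The admission-control and utility-based replacement ideas can be sketched as follows; the utility function (access count divided by item size) and the neighbor-duplicate admission rule are illustrative assumptions, not necessarily the actual C3S policy:

```python
class CooperativeCache:
    """Sketch of a node-local cache in a cooperative cluster: admission
    control rejects items already cached by a cluster neighbor (they are
    cheap to fetch anyway), and when space runs out the lowest-utility
    item is evicted so more useful data is retained."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}            # key -> (size, access_count)

    def utility(self, key):
        size, hits = self.items[key]
        return hits / size         # hypothetical utility: hit density per unit size

    def admit(self, key, size, cached_by_neighbor=False):
        if cached_by_neighbor:     # admission control: avoid cluster duplicates
            return False
        while sum(s for s, _ in self.items.values()) + size > self.capacity:
            if not self.items:
                return False       # item larger than the whole cache
            victim = min(self.items, key=self.utility)
            del self.items[victim]
        self.items[key] = (size, 1)
        return True

    def access(self, key):
        if key in self.items:
            size, hits = self.items[key]
            self.items[key] = (size, hits + 1)
            return True
        return False
```

Under this policy a frequently accessed item survives eviction rounds while a cold item of the same size is replaced first.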
Abstract: This paper presents a novel approach for representing
the spatio-temporal topology of the camera network with overlapping
and non-overlapping fields of view (FOVs). The topology is
determined by tracking moving objects and establishing object
correspondence across multiple cameras. To track people successfully
in multiple camera views, we used the Merge-Split (MS) approach to handle object occlusion in a single camera and a grid-based approach to extract accurate object features. In addition, we considered the
appearance of people and the transition time between entry and exit
zones for tracking objects across blind regions of multiple cameras
with non-overlapping FOVs. The main contribution of this paper is to
estimate transition times between various entry and exit zones, and to
graphically represent the camera topology as an undirected weighted
graph using the transition probabilities.
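Estimating transition probabilities and representing the topology as an undirected weighted graph can be sketched as follows; the observation tuple format is a hypothetical input, not the paper's data structure:

```python
from collections import defaultdict

def build_topology(observations):
    """Build an undirected weighted camera-topology graph from observed
    transitions. `observations` is a list of (zone_a, zone_b, transit_seconds)
    tuples, one per object handover between entry/exit zones. Edge weight is
    the transition probability estimated from counts; the mean transit time
    across the blind region is kept alongside."""
    counts = defaultdict(int)
    times = defaultdict(list)
    for a, b, t in observations:
        edge = tuple(sorted((a, b)))        # undirected: merge both directions
        counts[edge] += 1
        times[edge].append(t)
    total = sum(counts.values())
    return {edge: {"probability": c / total,
                   "mean_transit": sum(times[edge]) / c}
            for edge, c in counts.items()}
```

The resulting dictionary maps each zone pair to its estimated transition probability and mean transit time, which is exactly the information an undirected weighted topology graph needs to carry.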