Abstract: Because of its excellent properties, the SPIHT algorithm,
which is based on traditional wavelet transform theory, has attracted
much attention, but it also has its shortcomings. Combining recent
progress in the wavelet domain with the characteristics of human
vision, we propose an improved SPIHT algorithm based on human
visual characteristics, building on an analysis of the original SPIHT
algorithm. Experiments indicate that both coding speed and quality
are well enhanced compared to the original SPIHT algorithm, and that
image quality when transmission is cut off is also improved.
Abstract: In this paper a new method is suggested for
distributed data mining using probability patterns. These patterns
use decision trees and decision graphs, and care is taken that they
be valid, novel, useful, and understandable. Considering a set of
objective functions, the system reaches a good pattern or better
objectives. Using the suggested method, we are able to extract
useful information from massive and multi-relational databases.
Abstract: In this paper we present a new approach to
image segmentation. The fact that a single segmentation result does
not generally allow a higher-level process to take into account all the
elements included in the image has motivated the consideration of
image segmentation as a multiobjective optimization problem. The
proposed algorithm adopts a split/merge strategy that uses the result
of the k-means algorithm as input to a quantum evolutionary
algorithm in order to establish a set of non-dominated solutions. The
evaluation is made simultaneously according to two distinct criteria:
intra-region homogeneity and inter-region heterogeneity.
Experiments with the new approach on natural images have proved
its efficiency and usefulness.
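As a minimal illustration of the non-dominated selection step mentioned in the abstract above, the Pareto filter over the two criteria can be sketched as follows. This is only a sketch: the candidate encoding and the quantum evolutionary machinery are omitted, and both scores are assumed to be maximized.

```python
def pareto_front(solutions):
    """Keep candidates that are non-dominated on two objectives.

    Each candidate is a tuple (intra_homogeneity, inter_heterogeneity),
    both treated as scores to maximize. A candidate is dominated if some
    other candidate is at least as good on both objectives and strictly
    better on at least one.
    """
    front = []
    for s in solutions:
        dominated = any(
            o != s
            and o[0] >= s[0] and o[1] >= s[1]
            and (o[0] > s[0] or o[1] > s[1])
            for o in solutions
        )
        if not dominated:
            front.append(s)
    return front
```

In a full algorithm this filter would be applied to each generation of segmentations produced by the evolutionary search.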
Abstract: As the number of mobile service subscribers increases,
mobile content services are becoming more and more varied, so
mobile content development requires not only content design but also
guidelines specific to mobile platforms. When mobile content is
developed, it is important to overcome the limits and restrictions of
mobile devices: a small browser and screen size, a limited download
size, and awkward navigation. Guidelines are therefore presented for
each type of mobile content, aimed at user usability, ease of
development, and consistency of rules. This paper proposes a
methodology consisting of these per-content mobile guidelines, and a
mobile web site is developed according to the proposed guidelines.
Abstract: This paper describes an effective solution to the task
of remote monitoring of super-extended objects (oil and gas
pipelines, railways, national frontiers). The suggested solution is based
on the principle of simultaneous monitoring of the seismoacoustic and
optical/infrared physical fields. The principle of simultaneous
monitoring of these fields is not new, but in contrast to the known
solutions the suggested approach allows control of super-extended
objects at very limited operational cost. So-called C-OTDR
(Coherent Optical Time Domain Reflectometer) systems are used to
monitor the seismoacoustic field. Far-CCTV systems are used to
monitor the optical/infrared field. Simultaneous processing of the data
provided by both systems allows effective detection and classification
of target activities that appear in the vicinity of the monitored objects.
The results of practical use have shown the high effectiveness of the
suggested approach.
Abstract: Cellular communication is widely used all
over the world, and the number of handset users keeps increasing
due to demand from the marketing sector. The important aspect
addressed in this paper is the security system of cellular
communication, as it is important to provide users with a secure
channel for communication. A brief description of the GSM cellular
network architecture is provided. The limitations of cellular
networks, their security issues, and the different types of attacks are
discussed. The paper then goes over some new security mechanisms
that have been proposed by researchers. Overall, this paper clarifies
the security systems and services of cellular communication using
GSM. Three Malaysian communication companies are taken as
case studies in this paper.
Abstract: Motor imagery classification provides an important basis for designing Brain Machine Interfaces (BMI). A BMI captures and decodes brain EEG signals and transforms human thought into actions. The ability of an individual to control his EEG through imaginary mental tasks enables him to control devices through the BMI. This paper presents a method to design a four-state BMI using EEG signals recorded from the C3 and C4 locations. Principal features extracted through principal component analysis of the segmented EEG are analyzed using two novel classification algorithms based on the Elman recurrent neural network and the functional link neural network. The performance of both classifiers is evaluated using a particle swarm optimization (PSO) training algorithm; the results are also compared with the conventional back propagation (BP) training algorithm. EEG motor imagery recorded from two subjects is used in the offline analysis. From the overall classification performance it is observed that the BP algorithm has a higher average classification accuracy of 93.5%, while the PSO algorithm has better training time and maximum classification accuracy. The proposed method promises to provide a useful alternative general procedure for motor imagery classification.
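The principal component step in the abstract above can be illustrated with a generic SVD-based sketch. This is not the authors' exact pipeline: the EEG data here are synthetic and the segment and feature sizes are placeholders.

```python
import numpy as np

def pca_features(segments, n_components=2):
    """Project EEG segments (samples x features) onto principal axes.

    Generic PCA via SVD: center the data, take the leading
    right-singular vectors, and return the component scores.
    """
    X = np.asarray(segments, dtype=float)
    X = X - X.mean(axis=0)             # remove the per-feature mean
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T     # scores on the leading components

rng = np.random.default_rng(0)
eeg = rng.normal(size=(50, 8))         # 50 synthetic segments, 8 features
scores = pca_features(eeg, n_components=3)
```

The resulting scores would then be fed to the classifier in place of the raw segments.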
Abstract: System testing is done on the entire system
against the Functional Requirement Specification and/or the System
Requirement Specification. Moreover, it is an investigatory testing
phase, where the focus is to adopt an almost destructive attitude and
test not only the design, but also the behavior and even the believed
expectations of the customer. It is also intended to test up to and
beyond the bounds defined in the software/hardware requirements
specifications. In Motorola®, automated testing is one of the testing
methodologies used by GSG-iSGT (Global Software Group - iDEN™
Subscriber Group-Test) to increase testing volume and productivity
and to reduce test cycle time in iDEN™ phone testing. Such testing
is able to produce more robust products before release to the market.
In this paper, iHopper is proposed as a tool to perform stress tests on
iDEN™ phones. We discuss the value that automation has brought to
iDEN™ phone testing, such as improving software quality in the
iDEN™ phone, together with some metrics. We also look into
the advantages of the proposed system and provide some discussion
of future work.
Abstract: Optimal load shedding (LS) design as an emergency plan is one of the main control challenges posed by emerging new uncertainties and numerous distributed generators, including renewable energy sources, in a modern power system. This paper presents an overview of the key issues and new challenges in optimal LS synthesis concerning the integration of wind turbine units into power systems. Following a brief survey of the existing LS methods, the impact of the power fluctuations produced by wind power on system frequency and voltage performance is presented. Most LS schemes proposed so far use the voltage or frequency parameter via under-frequency or under-voltage LS schemes. Here, the necessity of considering both voltage and frequency indices to achieve a more effective and comprehensive LS strategy is emphasized, and it is then clarified that this problem becomes more pronounced in the presence of wind turbines.
Abstract: A new deployment of two multiple criteria decision
making (MCDM) techniques, the Simple Additive Weighting
(SAW) and the Technique for Order Preference by Similarity to
Ideal Solution (TOPSIS), for portfolio allocation is demonstrated in
this paper. Rather than exclusive reference to mean and variance as in
the traditional mean-variance method, the criteria used in this
demonstration are the first four moments of the portfolio distribution.
Each asset is evaluated based on its marginal impacts to portfolio
higher moments that are characterized by trapezoidal fuzzy numbers.
Then centroid-based defuzzification is applied to convert the fuzzy
numbers into crisp numbers, by which SAW and TOPSIS can be
deployed. Experimental results suggest that these MCDM approaches
are similarly efficient in selecting dominant assets for an optimal
portfolio under higher moments. The proposed approaches allow
investors to flexibly adjust their risk preferences regarding higher
moments via different schemes adapted to various kinds of investors,
from conservative to risky. Another significant advantage is that,
compared to mean-variance analysis, the portfolio weights obtained
by SAW and TOPSIS are consistently well-diversified.
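The centroid-based defuzzification step mentioned in the abstract above has a standard closed form for trapezoidal fuzzy numbers. A sketch, assuming the common (a, b, c, d) parameterization with support [a, d] and core [b, c]:

```python
def trapezoid_centroid(a, b, c, d):
    """Centroid (x coordinate) of a trapezoidal fuzzy number (a, b, c, d).

    Membership rises linearly on [a, b], equals 1 on [b, c], and falls
    linearly on [c, d]. The closed form below is the standard centroid
    of that trapezoid; degenerate (triangular) cases with b == c also
    work.
    """
    if a == b == c == d:               # crisp number: centroid is itself
        return float(a)
    num = (c * c + d * d + c * d) - (a * a + b * b + a * b)
    den = 3.0 * ((c + d) - (a + b))
    return num / den
```

The crisp value returned for each asset's fuzzy moment impact is what SAW and TOPSIS would then rank.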
Abstract: The self-organizing map (SOM) is a well-known neural network model with a wide range of applications. The main characteristics of the SOM are two-fold, namely dimension reduction and topology preservation: using a SOM, a high-dimensional data space is mapped to a low-dimensional space while the topological relations among the data are preserved. With such characteristics, the SOM is usually applied to data clustering and visualization tasks. However, the SOM has the main disadvantage that the number and structure of the neurons must be known prior to training, and these are difficult to determine. Several schemes have been proposed to tackle this deficiency, for example growing/expandable SOMs, hierarchical SOMs, and growing hierarchical SOMs. These schemes can dynamically expand the map, and even generate hierarchical maps, during training, and encouraging results have been reported. Basically, these schemes adapt the size and structure of the map according to the distribution of the training data; that is, they are data-driven or data-oriented SOM schemes. In this work, a topic-oriented SOM scheme suitable for document clustering and organization is developed. The proposed SOM automatically adapts the number as well as the structure of the map according to the identified topics. Unlike other data-oriented SOMs, our approach expands the map and generates hierarchies according both to the topics and to the characteristics of the neurons. Preliminary experiments give promising results and demonstrate the plausibility of the method.
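The classical fixed-topology SOM update that the growing and topic-oriented variants described above build on can be sketched as follows (a 1-D map with illustrative learning rate and neighborhood width, not the paper's document-clustering configuration):

```python
import numpy as np

def som_step(weights, x, lr=0.5, sigma=1.0):
    """One classical SOM update on a 1-D map (neurons x features).

    Find the best-matching unit (BMU) for input x, then pull every
    neuron toward x, weighted by a Gaussian neighborhood factor
    centered on the winner.
    """
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    grid = np.arange(len(weights))
    h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))  # neighborhood
    return weights + lr * h[:, None] * (x - weights), bmu

w = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
w2, winner = som_step(w, np.array([1.1, 0.9]))
```

Growing variants add neurons when quantization error is high instead of fixing the grid size in advance.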
Abstract: This article describes Uruk, the virtual museum of
Iraq that we developed for visual exploration and retrieval of image
collections. The system largely exploits the loosely-structured
hierarchy of XML documents that provides a useful representation
method for storing semi-structured or unstructured data, which does not
easily fit into existing databases. The system offers users the
capability to mine and manage the XML-based image collections
through a web-based Graphical User Interface (GUI). Typically, in an
interactive session with the system, the user can browse a visual
structural summary of the XML database in order to select interesting
elements. Using this intermediate result, queries combining structure
and textual references can be composed and presented to the system.
After query evaluation, the full set of answers is presented in a visual
and structured way.
Abstract: A chord of a simple polygon P is a line segment [xy]
that intersects the boundary of P only at both endpoints x and y. A
chord of P is called an interior chord provided the interior of [xy] lies
in the interior of P. P is weakly visible from [xy] if for every point v
in P there exists a point w in [xy] such that [vw] lies in P. In this
paper star-shaped, L-convex, and convex polygons are characterized
in terms of weak visibility properties from internal chords and
star-shaped subsets of P. A new Krasnoselskii-type characterization of
isothetic star-shaped polygons is also presented.
Abstract: This paper proposes a method that predicts attractive
evaluation objects. In the learning phase, the method inductively
acquires trend rules from complex sequential data. The data comprise
two types: numerical sequential data, of which each evaluation object
has its own sequence, and text sequential data, in which each
evaluation object is described in texts. The trend rules represent
changes in numerical values related
to evaluation objects. In the prediction phase, the method applies
new text sequential data to the trend rules and evaluates which
evaluation objects are attractive. This paper verifies the effect of the
proposed method by using stock price sequences and news headline
sequences. In these sequences, each stock brand corresponds to an
evaluation object. This paper discusses the validity of the predicted
attractive evaluation objects, the processing time of each phase, and
possible application tasks.
Abstract: The purpose of this paper is to detect humans in images.
The paper proposes a method for extracting human body feature
descriptors consisting of projected edge component series. The
feature descriptor can express the appearance and shape of humans
through the local and global distribution of edges. Our method was
evaluated with a linear SVM classifier on the Daimler-Chrysler
pedestrian dataset and tested with various sub-region sizes. The
results show that the accuracy of the proposed method is similar to
that of the Histogram of Oriented Gradients (HOG) feature
descriptor, while its feature extraction process is simpler and faster
than existing methods.
Abstract: Recently, data mining has been applied to scientific bibliographic databases to analyze the pathways of knowledge or the core scientific relevance of a Nobel laureate or a country. This specific case of data mining has been named citation mining, and it is the integration of citation bibliometrics and text mining. In this paper we present an improved web implementation of statistical physics algorithms to perform the text mining component of citation mining. In particular, we use an entropy-like distance between compressed texts as an indicator of the similarity between them. Finally, we have included the recently proposed h index to characterize scientific production. We have used this web implementation to identify the users, applications, and impact of the Mexican scientific institutions located in the State of Morelos.
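The compression-based similarity mentioned above can be illustrated with the standard normalized compression distance, here using zlib as the compressor. This is a stand-in sketch, not necessarily the authors' exact entropic estimator, and the two sample texts are invented.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings.

    Texts that compress well together (share structure) score close
    to 0; unrelated texts score close to 1.
    """
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"statistical physics of citation networks " * 20
b = b"pedestrian detection with edge histograms " * 20
```

Pairwise distances of this kind can then feed the clustering or ranking steps of citation mining.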
Abstract: User-based collaborative filtering (CF), one of the
most prevalent and efficient recommendation techniques, provides
personalized recommendations to users based on the opinions of other
users. Although the CF technique has been successfully applied in
various applications, it suffers from serious sparsity problems. The
cloud-model approach addresses the sparsity problems by
constructing the user's global preference, represented by a cloud
eigenvector. The user-based CF approach works well with dense
datasets, while the cloud-model CF approach performs better when
the dataset is sparse. In this paper, we present a
hybrid approach that integrates the predictions from both the
user-based CF and the cloud-model CF approaches. The experimental
results show that the proposed hybrid approach can ameliorate the
sparsity problem and provide an improved prediction quality.
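The abstract does not specify how the two predictions are combined; one simple, purely illustrative scheme is a density-dependent linear blend, where the `threshold` parameter and the weighting rule are assumptions, not the authors' method.

```python
def hybrid_predict(user_pred, cloud_pred, density, threshold=0.1):
    """Blend user-based CF and cloud-model CF predictions (illustrative).

    Per the abstract, user-based CF suits dense data and the cloud
    model suits sparse data, so the sparser the data, the more weight
    the cloud-model prediction gets here.
    """
    w = min(density / threshold, 1.0)   # w -> 1 when the data are dense
    return w * user_pred + (1.0 - w) * cloud_pred
```

Any monotone weighting of density would serve the same illustrative purpose.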
Abstract: The spiral development model has been used
successfully in many commercial systems and in a good number of
defense systems. This success stems from its cost-effective
incremental commitment of funds (often explained via an analogy
between the spiral model and stud poker) and from the fact that it
can be used to develop hardware or to integrate software, hardware,
and systems. To support adaptive, semantic collaboration between
domain experts and knowledge engineers, a new knowledge
engineering process, called Spiral_OWL, is proposed. This model is
based on the idea of iterative refinement, annotation, and structuring
of the knowledge base. The Spiral_OWL model is derived from the
spiral model and knowledge engineering methodology, and its
central paradigm is a risk-driven determination of the knowledge
engineering process. The collaboration aspect comes into play during
the knowledge acquisition and knowledge validation phases. The
design rationale for the Spiral_OWL model is an easy-to-implement,
well-organized, and iterative development cycle that unfolds as an
expanding spiral.
Abstract: In mobile environments, unspecified numbers of transactions
arrive in continuous streams. To prove the correctness of their
concurrent execution, a method of modelling an infinite number of
transactions is needed. Standard database techniques model fixed
finite schedules of transactions. Lately, techniques based on temporal
logic have been proposed as suitable for modelling infinite schedules.
The drawback of these techniques is that proving the basic
serializability correctness condition is impractical, as encoding (the
absence of) conflict cyclicity within large sets of transactions results
in prohibitively large temporal logic formulae. In this paper, we show
that, under certain common assumptions on the graph structure of
data items accessed by the transactions, conflict cyclicity need only
be checked within all possible pairs of transactions. This results in
formulae of considerably reduced size in any temporal-logic-based
approach to proving serializability, and scales to arbitrary numbers
of transactions.
Abstract: This paper presents the design and implementation of
the WebGD, a CORBA-based document classification and retrieval
system on the Internet. WebGD makes use of techniques such as the Web,
CORBA, Java, NLP, fuzzy techniques, knowledge-based processing,
and database technology. A unified classification and retrieval model,
classification and retrieval with a single reasoning engine, and flexible
working-mode configuration are some of its main features. The
architecture of WebGD, the unified classification and retrieval model,
the components of the WebGD server and the fuzzy inference engine
are discussed in this paper in detail.