Abstract: This research focuses on the development of an intrusion detection system (IDS) using an artificial immune system (AIS) with population-based incremental learning (PBIL). The AIS has a powerful capability to distinguish and eliminate antigens when they intrude into the human body. PBIL adjusts new learning based on past learning experience. We therefore propose an intrusion detection system called PBIL-AIS, which combines the two approaches of PBIL and AIS in an evolutionary computing framework. In the AIS part, we design three mechanisms (clonal selection, negative selection, and antibody level) to intensify AIS performance. Experimental results show that our PBIL-AIS IDS achieves high accuracy when an intrusion connection attacks.
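As an illustration of the PBIL component this abstract refers to, the following is a minimal generic PBIL loop on a toy bit-string fitness function. It is not the paper's IDS implementation; the bit length, population size, learning rate, and OneMax fitness are illustrative assumptions.

```python
import random

def pbil(fitness, n_bits=16, pop_size=20, lr=0.1, generations=60, seed=1):
    """Minimal population-based incremental learning (PBIL) sketch.

    A probability vector (one probability per bit) is sampled to create a
    population; the best sample then pulls the vector toward itself, so new
    learning is adjusted by past experience.
    """
    rng = random.Random(seed)
    prob = [0.5] * n_bits                       # start unbiased
    best = None
    for _ in range(generations):
        pop = [[1 if rng.random() < p else 0 for p in prob]
               for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        elite = pop[0]
        if best is None or fitness(elite) > fitness(best):
            best = elite
        # shift each probability toward the elite's bit value
        prob = [(1 - lr) * p + lr * b for p, b in zip(prob, elite)]
    return best

# toy fitness: count of ones (OneMax); a real IDS would score detectors
solution = pbil(sum)
```

In an IDS setting the fitness function would instead score how well a candidate detector separates normal from intrusive connections.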
Abstract: In this work, we present an automatic vehicle detection
system for airborne videos using combined features. We propose a
pixel-wise classification method for vehicle detection using Dynamic
Bayesian Networks. Although the classification is performed pixel-wise,
relations among neighboring pixels in a region are preserved in the
feature extraction process. The main novelty of the detection scheme is
that the extracted combined features comprise not only pixel-level
information but also region-level information. Afterwards, tracking is
performed on the detected vehicles using an
efficient Kalman filter with dynamic particle sampling. Experiments
were conducted on a wide variety of airborne videos. We do not
assume prior information of camera heights, orientation, and target
object sizes in the proposed framework. The results demonstrate
flexibility and good generalization abilities of the proposed method on
a challenging dataset.
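The tracking step mentioned in this abstract can be sketched with a minimal constant-velocity Kalman filter over 2-D detections. The state layout and noise values below are illustrative assumptions, and the paper's dynamic particle sampling step is omitted.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-2, r=1.0):
    """Constant-velocity Kalman filter over 2-D position measurements.

    State x = [px, py, vx, vy]; only the position is observed.
    """
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)   # motion model
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # observe position only
    Q = q * np.eye(4)                           # process noise
    R = r * np.eye(2)                           # measurement noise
    x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
    P = np.eye(4)
    track = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new detection
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z, dtype=float) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        track.append(x[:2].copy())
    return track

# a vehicle moving right at roughly 1 px/frame with noisy detections
zs = [(0.1, 0.0), (1.0, 0.1), (2.1, -0.1), (2.9, 0.0), (4.0, 0.1)]
est = kalman_track(zs)
```

The smoothed positions in `est` follow the noisy detections while estimating velocity, which is what makes frame-to-frame association of detected vehicles robust.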
Abstract: Decision Support Systems (DSSs) are interactive
software systems that are built to assist the management of an
organization in the decision-making process when faced with non-routine
problems in a specific application domain. Non-functional
requirements (NFRs) for a DSS deal with the desirable qualities and
restrictions that the DSS functionalities must satisfy. Unlike the
functional requirements, which are tangible functionalities provided
by the DSS, NFRs are often hidden from DSS users but
affect the quality of the provided functionalities. NFRs are often
overlooked or added later to the system in an ad hoc manner, leading
to a poor overall quality of the system. In this paper, we discuss the
development of NFRs as part of the requirements engineering phase
of the system development life cycle of DSSs. To help elicit
NFRs, we provide a comprehensive taxonomy of NFRs for DSSs.
Abstract: Researchers have been applying artificial and computational intelligence (AI/CI) methods to computer games. In this research field, further studies are required to compare AI/CI
methods with respect to each game application. In this paper, we present
our experimental results on the comparison of three evolutionary algorithms (evolution strategy, genetic algorithm, and their hybrid)
applied to evolving controller agents for the CIG 2007 Simulated Car Racing competition. Our experimental results show that premature
convergence of solutions was observed in the case of ES, and GA outperformed ES in the latter half of the generations. Moreover, a hybrid
that uses GA first and ES next evolved the best solution among all solutions generated. This result shows the ability of GA in
globally searching promising areas in the early stage and the ability of ES in locally searching the focused area (fine-tuning solutions).
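The GA-first, ES-next hybrid described above can be sketched as follows. The operators, parameters, and the sphere fitness function are illustrative stand-ins, not the competition's controller-evolution setup.

```python
import random

def hybrid_ga_es(fitness, dim=5, pop_size=20, generations=40, seed=3):
    """GA for the first half of the generations (global search), then
    ES-style Gaussian mutation for the second half (fine-tuning).
    All operators and parameters here are illustrative choices."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    half = generations // 2
    for gen in range(generations):
        pop.sort(key=fitness)                  # minimization
        if gen < half:
            # GA phase: truncation selection + one-point crossover
            parents = pop[:pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = rng.sample(parents, 2)
                cut = rng.randrange(1, dim)
                children.append(a[:cut] + b[cut:])
            pop = parents + children
        else:
            # ES phase: (mu + lambda) with small Gaussian mutations
            parents = pop[:pop_size // 4]
            children = [[g + rng.gauss(0, 0.1) for g in rng.choice(parents)]
                        for _ in range(pop_size - len(parents))]
            pop = parents + children
    return min(pop, key=fitness)

def sphere(x):            # stand-in for a car-racing controller score
    return sum(g * g for g in x)

best = hybrid_ga_es(sphere)
```

The switch point mirrors the abstract's finding: recombination explores promising regions early, while small mutations fine-tune the focused area later.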
Abstract: In this paper, we propose a robust scheme for face alignment and recognition under various influences. For face representation, illumination and variable expressions are important factors that affect the accuracy of facial localization and face recognition. To overcome these problems, we propose a robust approach consisting of two phases. The first phase preprocesses the face images by means of the proposed illumination normalization method; the facial features can then be located more efficiently and quickly based on the proposed image blending. Furthermore, based on template matching, we improve the active shape model (called IASM) to locate the face shape more precisely, which raises the recognition rate in the next phase. The second phase performs feature extraction using principal component analysis and face recognition using support vector machine classifiers. The results show that the proposed method achieves good facial localization and face recognition under varied illumination and local distortion.
Abstract: This paper examines the problem of designing robust H∞ controllers for an HIV/AIDS infection system with dual drug dosages described by a Takagi-Sugeno (T-S) fuzzy model. Based on a linear matrix inequality (LMI) approach, we develop an H∞ controller which guarantees that the L2-gain of the mapping from the exogenous input noise to the regulated output is less than some prescribed value for the system. A sufficient condition for the controller of this system is given in terms of LMIs. The effectiveness of the proposed controller design methodology is demonstrated through simulation results, which show that anti-HIV vaccines are critically important in reducing the number of infected cells.
Abstract: Cyclic delay diversity (CDD) is a simple technique to
intentionally increase frequency selectivity of channels for orthogonal
frequency division multiplexing (OFDM). This paper proposes a residual
carrier frequency offset (RFO) estimation scheme for an OFDM-based
broadcasting system using CDD. In order to improve the RFO
estimation, this paper addresses a decision scheme of the amount of
cyclic delay and pilot pattern used to estimate the RFO. By computer
simulation, the proposed estimator is shown to benefit from a properly
chosen delay parameter and to perform robustly.
Abstract: This research focuses on developing a new segmentation method for improving forecasting models, called the trend-based segmentation method (TBSM). In general, piece-wise linear representation (PLR) can find pairs of trading points well for time series data, but in the complicated stock environment it does not work well for stock forecasting because stock prices exhibit multiple trading trends. Considering the trading trends in the stock price when generating trading signals improves the precision of the forecasting model. Therefore, TBSM with an SVR model is used to detect the trading points for various Taiwanese and American stocks under different trend tendencies. The experimental results show that our trading system is more profitable and can be implemented in the real-time stock market.
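The PLR baseline this abstract contrasts with can be sketched as a top-down segmentation: split a price series at the point farthest from the chord between the segment endpoints until every segment fits within an error bound. The error threshold and toy price data are illustrative assumptions, not the paper's setup.

```python
def plr_segment(series, max_error):
    """Top-down piece-wise linear representation (PLR) sketch.

    Returns the indices of segment boundaries, which can serve as
    candidate trading points (local trend changes).
    """
    def deviation(lo, hi):
        # worst vertical distance of interior points from the chord lo->hi
        y0, y1 = series[lo], series[hi]
        worst, worst_i = 0.0, None
        for i in range(lo + 1, hi):
            y = y0 + (y1 - y0) * (i - lo) / (hi - lo)
            d = abs(series[i] - y)
            if d > worst:
                worst, worst_i = d, i
        return worst, worst_i

    def split(lo, hi):
        if hi - lo < 2:
            return []
        worst, i = deviation(lo, hi)
        if worst <= max_error:
            return []
        return split(lo, i) + [i] + split(i, hi)

    return [0] + split(0, len(series) - 1) + [len(series) - 1]

# toy price series with a peak at index 3 and a trough at index 6
prices = [10, 11, 12, 13, 9, 8, 7, 8, 9, 10]
breaks = plr_segment(prices, max_error=0.5)
```

The breakpoints recovered here include the peak and trough, which is exactly the kind of trading-point candidate that TBSM then refines by taking the trend of each segment into account.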
Abstract: Data migration is currently a very topical area. Current tools for data migration between relational databases have several disadvantages, which are presented in this paper. We propose a methodology for migrating database tables and their data between various types of relational database management systems (RDBMSs). The proposed methodology includes an expert system whose knowledge base is composed of IF-THEN rules and which, based on the input data, suggests appropriate data types for the columns of the database tables. The proposed tool also supports optimizing the data types in the target RDBMS tables based on the processed data of the source RDBMS tables. The proposed expert system is demonstrated on the migration of a selected database from the source RDBMS to the target RDBMS.
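The kind of IF-THEN knowledge base described above can be sketched as an ordered rule list that maps source column metadata to a suggested target type. The rule conditions and type names below are hypothetical examples, not rules taken from the paper.

```python
# Hypothetical IF-THEN rules: (condition on column metadata, target type).
# Rules are tried in order; the first match wins.
RULES = [
    (lambda c: c["type"] == "VARCHAR" and c["max_len"] <= 255, "VARCHAR2(255)"),
    (lambda c: c["type"] == "VARCHAR", "CLOB"),
    (lambda c: c["type"] == "INT" and c["max_val"] < 2**31, "NUMBER(10)"),
    (lambda c: c["type"] == "INT", "NUMBER(19)"),
]

def suggest_type(column):
    """Fire the first matching rule; fall back to the source type."""
    for condition, target in RULES:
        if condition(column):
            return target
    return column["type"]

col = {"type": "VARCHAR", "max_len": 80, "max_val": None}
print(suggest_type(col))   # VARCHAR2(255)
```

Using observed metadata such as `max_len` and `max_val` is what allows the suggested target types to be optimized against the actual data in the source tables, as the abstract describes.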
Abstract: In this paper, we propose a high-capacity image hiding
technique based on pixel prediction and the modified histogram of differences. The approach uses pixel prediction and the
modified difference histogram to calculate the best embedding point.
It improves the prediction accuracy and increases the pixel differences to raise the hiding capacity. We also use
histogram modification to prevent overflow and underflow. Experimental results demonstrate that, at the
same average hiding capacity, our proposed method still maintains high image quality and low distortion.
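The general histogram-shifting technique this abstract builds on can be sketched on a single pixel row with a left-neighbor predictor. This is a generic reversible-embedding sketch, not the paper's predictor or embedding-point selection, and the overflow handling mentioned in the abstract is omitted.

```python
def embed(pixels, bits):
    """Reversible histogram-shifting embedding on one pixel row (sketch).

    Each pixel is predicted by its left neighbor. A prediction error of 0
    carries one payload bit; positive errors are shifted up by 1 so the
    histogram bins stay separable.
    """
    out = [pixels[0]]                     # first pixel is the predictor seed
    it = iter(bits)
    for i in range(1, len(pixels)):
        err = pixels[i] - pixels[i - 1]
        if err == 0:
            err = next(it, 0)             # embed one bit in the zero bin
        elif err > 0:
            err += 1                      # shift to make room
        out.append(pixels[i - 1] + err)
    return out

def extract(marked):
    """Recover the payload and losslessly restore the original row."""
    orig = [marked[0]]
    bits = []
    for i in range(1, len(marked)):
        err = marked[i] - orig[i - 1]
        if err in (0, 1):
            bits.append(err)              # this position carried a bit
            err = 0
        elif err > 1:
            err -= 1                      # undo the shift
        orig.append(orig[i - 1] + err)
    return bits, orig

row = [10, 10, 11, 11, 11, 13]
marked = embed(row, [1, 0, 1])
payload, restored = extract(marked)
```

Sharper prediction concentrates more errors in the zero bin, which is why improving predictive accuracy directly raises the hiding capacity.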
Abstract: This paper applies fuzzy AHP to evaluate the service
quality of online auction. Service quality is a composition of various
criteria. Among them many intangible attributes are difficult to
measure. This characteristic creates obstacles for respondents
replying to the survey. To overcome this problem, we
introduce fuzzy set theory into the measurement of performance and use AHP
to obtain the criteria. We found that the dimension of service
quality of greatest concern is Transaction Safety Mechanism and the least is Charge Item.
Other criteria, such as information security and accuracy,
are also vital.
Abstract: In the era of great competition, understanding and satisfying
customers' requirements are critical tasks for a company
to make a profit. Customer relationship management (CRM) thus
becomes an important business issue at present. With the help of
the data mining techniques, the manager can explore and analyze
from a large quantity of data to discover meaningful patterns and
rules. Among all methods, the well-known association rule is the most
commonly used. This paper is based on the Apriori algorithm and uses
genetic algorithms combined with a data mining method to discover fuzzy
classification rules. The mined results can be applied in CRM to
help decision makers make correct business decisions for marketing
strategies.
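The Apriori algorithm this abstract builds on can be sketched as a level-wise search: candidate k-itemsets are generated from frequent (k-1)-itemsets and pruned by minimum support. The transaction data below is a toy example, not the paper's dataset.

```python
def apriori(transactions, min_support):
    """Classic Apriori frequent-itemset mining (sketch)."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    items = {i for t in transactions for i in t}
    frequent = {frozenset([i]) for i in items
                if support(frozenset([i])) >= min_support}
    result = set(frequent)
    k = 2
    while frequent:
        # join step: combine frequent (k-1)-itemsets into k-candidates
        candidates = {a | b for a in frequent for b in frequent
                      if len(a | b) == k}
        # prune step: keep only candidates meeting minimum support
        frequent = {c for c in candidates if support(c) >= min_support}
        result |= frequent
        k += 1
    return result

txns = [{"milk", "bread"}, {"milk", "beer"},
        {"milk", "bread", "beer"}, {"bread"}]
freq = apriori(txns, min_support=0.5)
```

In the paper's setting, a genetic algorithm then evolves fuzzy membership functions over such frequent patterns to form fuzzy classification rules for CRM.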
Abstract: Social, mobility and information aggregation inside
the business environment need to converge to reach the next step of
collaboration to enhance interaction and innovation. The following
article is based on the “Assemblage” concept, seen as a framework to
formalize new user interfaces and applications. The area of research
is the Energy Social Business Environment, especially the Energy
Smart Grids, which are considered as functional and technical
foundations of the revolution of the Energy Sector of tomorrow. The
assemblages are modeled by means of mereology and simplicial
complexes. The objective is to offer end-users new tools for
focusing attention and decision-making.
Abstract: The rapid improvement of microprocessors and networks has made it possible for PC clusters to compete with conventional supercomputers. Many high-throughput applications can be served by current desktop PCs, especially those in PC classrooms, leaving the supercomputers for the demands of large-scale high-performance parallel computations. This paper presents our development of an automated deployment mechanism for cluster computing that utilizes the computing power of PCs such as those residing in PC classrooms. After deployment, these PCs can be transformed into a pre-configured cluster computing resource immediately without touching the existing education/training environment installed on them. Thus, training activities are not affected by this additional activity to harvest idle computing cycles. The time and manpower required to build and manage a computing platform in geographically distributed PC classrooms can also be reduced by this development.
Abstract: The paper provides a basic overview of simulation optimization. Its practical use is demonstrated on a real example in the Witness simulator. Simulation optimization is presented as a good tool for solving many problems in real practice, especially in production systems. The authors also describe their own experiences and mention the strengths and weaknesses of simulation optimization.
Abstract: A new and highly efficient architecture for elliptic curve scalar point multiplication, optimized for a binary field recommended by NIST and well-suited for elliptic curve cryptographic (ECC) applications, is presented. To achieve the maximum architectural and timing improvements, we have reorganized and reordered the critical path of the Lopez-Dahab scalar point multiplication architecture such that logic structures are implemented in parallel and operations in the critical path are diverted to non-critical paths. With G = 41, the proposed design is capable of performing a field multiplication over the extension field with degree 163 in 11.92 μs at the maximum achievable frequency of 251 MHz on a Xilinx Virtex-4 (XC4VLX200) while occupying 22% of the chip area, where G is the digit size of the underlying digit-serial finite field multiplier.
Abstract: The balanced Hamiltonian cycle problem is a quite new topic in graph theory. Given a graph G = (V, E) whose edge set can be partitioned into k dimensions for a positive integer k, and a Hamiltonian cycle C on G, the set of all i-dimensional edges of C, which is a subset of E(C), is denoted by Ei(C).
Abstract: This paper suggests an improved integer frequency
offset (IFO) estimation scheme using the P1 symbol for orthogonal
frequency division multiplexing (OFDM) based second-generation
terrestrial digital video broadcasting (DVB-T2) systems. The proposed
IFO estimator is designed as a low-complexity blind IFO estimation
scheme, which is implemented with complex additions. Also, we
propose an active carrier (AC) selection scheme in order to prevent
performance degradation in blind IFO estimation. The simulation
results show that, under the AWGN and TU6 channels, the proposed
method has lower complexity than the conventional method with almost
identical performance.
Abstract: The fair share objective has recently been included in the goal-oriented
parallel computer job scheduling policy. However,
previous work only presented the overall scheduling performance, so an
evaluation of the per-user performance of the policy is still lacking. In this
work, the details of per-user fair share performance under the
Tradeoff-fs(Tx:avgX) policy will be further evaluated. A basic fair
share priority backfill policy, namely RelShare(1d), is also studied.
The performance of all policies is collected using an event-driven
simulator with three real job traces as input. The experimental results
show that high-demand users usually benefit under most
policies because their jobs are large or they have many jobs. In the
large-job case, one executed job may result in over-share during that
period. In the other case, the jobs may be backfilled for better
performance. However, users with a mixture of jobs may suffer
because, while the smaller jobs are executing, the priority of the remaining
jobs from the same user will be lowered. Further analysis does not show
any significant impact for users with many jobs or users with a large
runtime approximation error.