Abstract: This paper presents an idea to improve the efficiency
of security checks in airports through the active tracking and
monitoring of passengers and staff using the OFDM modulation
technique and fingerprint authentication. The details of the
passenger are multiplexed using OFDM. To authenticate the
passenger, the fingerprint along with important identification
information is collected. The details of the passenger can be
transmitted after necessary modulation, and received using various
transceivers placed within the premises of the airport, and checked at
the appropriate check points, thereby increasing the efficiency of
checking. OFDM has been employed for spectral efficiency.
Abstract: This research focuses on the use of a recommender
system in decision support by means of a used car dealer case study
in Bangkok Metropolitan. The goal is to develop an effective used car
purchasing system for dealers based on the above premise. The
underlying principle rests on content-based recommendation from a
set of usability surveys. A prototype was developed to conduct a
buyers' survey of 5 experts and 95 members of the general public.
The responses were analyzed to determine the mean and standard
deviation of buyers' preferences. The results revealed that both groups
were in favor of using the proposed system to assist their buying
decision. This indicates that the proposed system is of merit to
used car dealers.
Abstract: The paper proposes an approach to the design of modular
systems based on an original technique for modeling and formulating
combinatorial optimization problems. The proposed approach is
illustrated with the example of personal computer configuration design.
It takes into account the existing compatibility restrictions between
the modules and can be extended and modified to reflect different
functional and users' requirements. The developed design modeling
technique is used to formulate single-objective nonlinear
mixed-integer optimization tasks. The practical applicability of the
developed approach is numerically tested on the basis of real modules
data. Solutions of the formulated optimization tasks define the
optimal configuration of the system that satisfies all compatibility
restrictions and user requirements.
Abstract: In this paper we introduce three watermarking methods that can be used to count the number of times that a user has played some content. The proposed methods are tested with audio content in our experimental system using the most common signal processing attacks. The test results show that the watermarking methods used enable the watermark to be extracted under the most common attacks with a low bit error rate.
Abstract: Standards for learning objects focus primarily on
content presentation. They have already been extended to support
automatic evaluation, but this support is limited to exercises with a
predefined set of answers. The existing standards lack the metadata
required by specialized evaluators to handle types of exercises with
an indefinite set of solutions. To address this issue, existing
learning object standards were extended to meet the particular
requirements of a
specialized domain. A definition of programming problems as learning objects, compatible both with Learning Management Systems and with systems performing automatic evaluation of
programs, is presented in this paper. The proposed definition includes
metadata that cannot be conveniently represented using existing standards, such as: the type of automatic evaluation; the requirements
of the evaluation engine; and the roles of different assets - test cases, program solutions, etc. The EduJudge project and its main services
are also presented as a case study on the use of the proposed definition of programming problems as learning objects.
Abstract: In this paper we propose a framework for
multisensor intrusion detection called Fuzzy Agent-Based Intrusion
Detection System. A unique feature of this model is that the agent
uses data from multiple sensors and fuzzy logic to process log
files. Use of this feature reduces the overhead in a distributed
intrusion detection system. We have developed an agent
communication architecture and provide a prototype
implementation. This paper also discusses the issues of combining
intelligent agent technology with the intrusion detection domain.
Abstract: We demonstrate through a sample application, E-banking,
that the Web Service Modelling Language Ontology component
can be used as a very powerful object-oriented database design
language with logic capabilities. Its conceptual syntax allows the
definition of class hierarchies, and logic syntax allows the definition
of constraints in the database. Relations, which are available for
modelling relations of three or more concepts, can be connected to
logical expressions, allowing the implicit specification of database
content. Using a reasoning tool, logic queries can also be made
against the database in simulation mode.
Abstract: In general, class complexity is measured on the basis of
factors such as Lines of Code (LOC), Function Points (FP), Number of
Methods (NOM), Number of Attributes (NOA), and so on. Researchers
have developed several new techniques, methods, and metrics that use
different factors to calculate the complexity of a class in Object
Oriented (OO) software. Earlier, Arockiam et al. proposed a
complexity measure named Extended Weighted Class Complexity (EWCC),
an extension of the Weighted Class Complexity proposed by Mishra et
al. EWCC is the sum of the cognitive weights of the attributes and
methods of a class and those of its derived classes. In EWCC, the
cognitive weight of each attribute is taken to be 1. The main
limitation of the EWCC metric is that every attribute thus holds the
same weight, whereas in general the cognitive load of understanding
different types of attributes is not the same. We therefore propose
a new metric, Attribute Weighted Class Complexity (AWCC), in which
cognitive weights are assigned to attributes according to the effort
needed to understand their data types. Case studies and experiments
show that the proposed metric is a better measure of the complexity
of a class with attributes.
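The AWCC idea described above can be sketched as follows. The abstract does not give the authors' published weight table, so the per-type weights below are hypothetical, chosen only to illustrate that harder-to-understand data types receive larger weights:

```python
# Hypothetical sketch of AWCC: attributes are weighted by the cognitive
# effort of understanding their data types, then summed with the method
# weights. The weight table is illustrative, not the authors' values.
TYPE_WEIGHTS = {
    "int": 1,        # primitive types: easiest to understand
    "float": 1,
    "str": 1,
    "list": 2,       # container types: moderate effort
    "dict": 2,
    "object": 3,     # user-defined/reference types: highest effort
}

def awcc(attribute_types, method_weights):
    """Attribute Weighted Class Complexity for one class (sketch).

    attribute_types: list of type names of the class attributes.
    method_weights:  list of cognitive weights of the class methods
                     (e.g. derived from their control structures).
    """
    attr_part = sum(TYPE_WEIGHTS.get(t, 3) for t in attribute_types)
    return attr_part + sum(method_weights)

# Example: two primitive attributes, one dict, methods of weight 2 and 4.
complexity = awcc(["int", "str", "dict"], [2, 4])
```

Under these assumed weights the attribute part is 1 + 1 + 2 = 4, so the class complexity is 10; EWCC would instead count every attribute as 1.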
Abstract: Interactive public displays are an innovative medium
for promoting enhanced communication between people and
information. However, digital public displays are subject
to a few constraints, such as content presentation. Content
presentation needs to be developed to be more interesting to attract
people’s attention and motivate people to interact with the display. In
this paper, we propose an idea for implementing content with interaction
elements for vision-based digital public display. Vision-based
techniques are applied as a sensor to detect passers-by and theme
contents are suggested to attract their attention for encouraging them
to interact with the announcement content. Virtual object, gesture
detection and projection installation are applied for attracting
attention from passers-by. A preliminary study showed positive
feedback on the interactive content design for the public display.
This new trend could be a valuable innovation, as delivering
announcement content and communicating information through this
medium proves more engaging.
Abstract: This paper describes a method to improve the robustness of a face recognition system based on the combination of two compensating classifiers. The face images are preprocessed by appearance-based statistical approaches such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The LDA features of the face image are taken as the input of a Radial Basis Function Network (RBFN). The proposed approach has been tested on the ORL database. The experimental results show that the LDA+RBFN algorithm achieves a recognition rate of 93.5%.
Abstract: It is a challenge to provide a wide range of queries to
database query systems for small mobile devices, such as PDAs
and cell phones. Currently, due to the physical and resource
limitations of these devices, most reported database querying systems
developed for them offer only a small set of pre-determined
queries that users can pose. This can be resolved by
allowing free-form queries to be entered on the devices. Hence, a
query language that does not restrict the combination of query terms
entered by users is proposed. This paper presents the free-form query
language and the method used in translating free-form queries to
their equivalent SQL statements.
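The abstract does not give the paper's grammar or translation rules, so the following is only a minimal sketch of the general idea: unordered free-form terms are matched against a known schema and mapped onto an equivalent SQL statement. The `flight` table and its columns are hypothetical:

```python
# Hypothetical sketch of translating a free-form query into SQL.
# Terms that name a schema column are paired with the value that
# follows them; unrecognized terms are ignored.
SCHEMA = {"flight": ["airline", "origin", "destination", "price"]}

def to_sql(free_form, table="flight"):
    """Turn terms like 'destination Tokyo airline ANA' into a SELECT."""
    tokens = free_form.split()
    conditions = []
    i = 0
    while i < len(tokens):
        if tokens[i].lower() in SCHEMA[table] and i + 1 < len(tokens):
            conditions.append(f"{tokens[i].lower()} = '{tokens[i + 1]}'")
            i += 2        # consume the column name and its value
        else:
            i += 1        # skip terms that match no column
    where = " AND ".join(conditions) or "1 = 1"
    return f"SELECT * FROM {table} WHERE {where}"

sql = to_sql("destination Tokyo airline ANA")
# "SELECT * FROM flight WHERE destination = 'Tokyo' AND airline = 'ANA'"
```

Because the terms carry their own column names, the user may enter them in any order, which is the freedom a fixed menu of pre-determined queries cannot offer.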
Abstract: Gradual patterns have been studied for many years as
they contain precious information. They have been integrated in
many expert systems and rule-based systems, for instance to reason
on knowledge such as “the greater the number of turns, the greater
the number of car crashes”. In many cases, this knowledge has been
considered as a rule “the greater the number of turns → the greater
the number of car crashes”. Historically, works have thus been
focused on the representation of such rules, studying how implication
could be defined, especially fuzzy implication. These rules were
defined by experts who were in charge of describing the systems they
were working on so as to make them operate automatically. More
recently, approaches have been proposed in order to mine databases
for automatically discovering such knowledge. Several approaches
have been studied, the main scientific topics being: how to determine
what a relevant gradual pattern is, and how to discover such patterns as
efficiently as possible (in terms of both memory and CPU usage).
However, in some cases, end-users are not interested in raw level
knowledge, and are rather interested in trends. Moreover, it may be
the case that no relevant pattern can be discovered at a low level of
granularity (e.g. city), whereas some can be discovered at a higher
level (e.g. county). In this paper, we thus extend gradual pattern
approaches in order to consider multiple level gradual patterns. For
this purpose, we consider two aggregation policies, namely
horizontal and vertical.
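A common support measure for gradual patterns such as "the greater the number of turns, the greater the number of car crashes" (not necessarily the exact definition used in this paper) counts the fraction of object pairs ordered consistently with the pattern. A minimal sketch under that pair-counting assumption, with made-up data:

```python
from itertools import combinations

# Sketch of a pair-based support measure for the gradual pattern
# {attr_a increasing, attr_b increasing}: the fraction of object pairs
# whose two attribute values vary in the same direction.
def gradual_support(rows, attr_a, attr_b):
    """rows: list of dicts; returns support of the gradual pattern."""
    pairs = list(combinations(rows, 2))
    concordant = sum(
        1 for x, y in pairs
        if (x[attr_a] - y[attr_a]) * (x[attr_b] - y[attr_b]) > 0
    )
    return concordant / len(pairs)

# Illustrative (fabricated) road records.
roads = [
    {"turns": 2, "crashes": 1},
    {"turns": 5, "crashes": 3},
    {"turns": 9, "crashes": 7},
    {"turns": 4, "crashes": 6},
]
support = gradual_support(roads, "turns", "crashes")  # 5 of 6 pairs agree
```

The multi-level extension described in the abstract would evaluate such supports after aggregating the rows to a coarser granularity (e.g. from city to county), horizontally or vertically.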
Abstract: This paper presents the development techniques
for a complete autonomous design model of an advanced train
control system and gives a new approach for the
implementation of multi-agents based system. This research
work proposes a novel control system to enhance
the efficiency of vehicles under various operating
constraints, and contributes to stability and controllability,
considering the relevant safety and operational
requirements, with command-and-control communication and
various sensors to avoid accidents. An approach to speed
scheduling, management, and control in local and distributed
environments is given to meet the needs of modern practice
and to advance the automation of vehicle control systems. These
techniques build on state-of-the-art microelectronic
technology with accuracy and stability as their foremost goals.
Abstract: Color categorization is shared among members in a
society. This allows communication of color, especially when using
natural language such as English. Hence a sociable robot, to
coexist with humans in human society, must also share this
color categorization. To achieve this, much work has been done
relying on models of human color perception and considerable
mathematical complexity. In contrast, in this work the computer,
as the brain of the robot, learns color categorization through
interaction with humans without much mathematical complexity.
Abstract: Named Entity Recognition (NER) aims to classify each word of a document into predefined target named entity classes and is nowadays considered fundamental for many Natural Language Processing (NLP) tasks such as information retrieval, machine translation, information extraction, question answering systems, and others. This paper reports on the development of a NER system for Bengali and Hindi using Support Vector Machines (SVM). Though this state-of-the-art machine learning technique has been widely applied to NER in several well-studied languages, its use for Indian languages (ILs) is very new. The system makes use of the different contextual information of the words along with a variety of features that are helpful in predicting the four named entity (NE) classes: Person name, Location name, Organization name, and Miscellaneous name. We have used annotated corpora of 122,467 tokens of Bengali and 502,974 tokens of Hindi, tagged with the twelve NE classes defined as part of the IJCNLP-08 NER Shared Task for South and South East Asian Languages (SSEAL). In addition, we have manually annotated 150K wordforms of the Bengali news corpus, developed from the web archive of a leading Bengali newspaper. We have also developed an unsupervised algorithm to generate lexical context patterns from a part of the unlabeled Bengali news corpus. The lexical patterns have been used as features of the SVM in order to improve system performance. The NER system has been tested with gold standard test sets of 35K and 60K tokens for Bengali and Hindi, respectively. Evaluation yields recall, precision, and f-score values of 88.61%, 80.12%, and 84.15%, respectively, for Bengali and 80.23%, 74.34%, and 77.17%, respectively, for Hindi. Results show an improvement in the f-score of 5.13% with the use of context patterns.
A statistical analysis (ANOVA) is also performed to compare the performance of the proposed NER system with that of the existing HMM-based system for both languages.
Abstract: In this article an evolutionary technique has been used
for the solution of nonlinear Riccati differential equations of
fractional order. In this method, a genetic algorithm is used as a
competent global search method, hybridized with an active-set
algorithm for efficient local search. The proposed method has been
successfully applied to solve different forms of Riccati
differential equations. The strength of the proposed method lies in
its equal applicability to the integer-order case as well as the
fractional-order case. Comparison of the method has been made with standard
numerical techniques as well as the analytic solutions. It is found
that the designed method can provide the solution to the equation
with better accuracy than its deterministic counterparts.
Another advantage of the given approach is that it provides results
on the entire finite continuous domain, unlike other numerical
methods, which provide solutions only on a discrete grid of points.
Abstract: Surface metrology with image processing is a challenging task with wide applications in industry. Surface roughness can be evaluated using a texture classification approach. An important aspect here is the appropriate selection of features that characterize the surface. We propose an effective combination of features for multi-scale and multi-directional analysis of engineering surfaces. The features include the standard deviation, kurtosis, and the Canny edge detector. We apply the method by analyzing the surfaces with the Discrete Wavelet Transform (DWT) and the Dual-Tree Complex Wavelet Transform (DT-CWT). We use the Canberra distance metric for similarity comparison between the surface classes. Our database includes surface textures manufactured by three machining processes, namely Milling, Casting, and Shaping. The comparative study shows that the DT-CWT outperforms the DWT, giving a correct classification rate of 91.27% with the Canberra distance metric.
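The Canberra distance used above for comparing surface classes has a standard definition that can be sketched directly; the feature values below are made up for illustration, and the paper's actual feature vectors come from wavelet subband statistics:

```python
# Sketch of the Canberra distance for comparing surface-texture feature
# vectors (e.g. subband standard deviation and kurtosis values).
def canberra(u, v):
    """Canberra distance: sum over i of |u_i - v_i| / (|u_i| + |v_i|)."""
    total = 0.0
    for a, b in zip(u, v):
        denom = abs(a) + abs(b)
        if denom > 0:          # skip terms where both components are zero
            total += abs(a - b) / denom
    return total

# Hypothetical [std, kurtosis, edge density] features for two surfaces.
milling_features = [0.8, 3.1, 0.42]
query_features   = [0.7, 3.0, 0.40]
distance = canberra(query_features, milling_features)
```

Each term is normalized by the magnitude of its two components, so the Canberra distance is sensitive to small differences in small-valued features, which makes it a reasonable choice for comparing statistics that differ in scale.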
Abstract: This paper proposes a copyright protection scheme for color images using secret sharing and wavelet transform. The scheme contains two phases: the share image generation phase and the watermark retrieval phase. In the generation phase, the proposed scheme first converts the image into the YCbCr color space and creates a special sampling plane from the color space. Next, the scheme extracts the features from the sampling plane using the discrete wavelet transform. Then, the scheme employs the features and the watermark to generate a principal share image. In the retrieval phase, an expanded watermark is first reconstructed using the features of the suspect image and the principal share image. Next, the scheme reduces the additional noise to obtain the recovered watermark, which is then verified against the original watermark to examine the copyright. The experimental results show that the proposed scheme can resist several attacks such as JPEG compression, blurring, sharpening, noise addition, and cropping. The accuracy rates are all higher than 97%.
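The two phases described above can be illustrated with a minimal XOR-style sketch of the secret-sharing idea; the paper's exact share construction is not given in the abstract, so this is only the general principle of binding a watermark to image features:

```python
# Minimal (2, 2) XOR sketch of a secret-sharing watermark scheme:
# the principal share binds the watermark to features of the host image,
# and the watermark is recovered by combining the suspect image's
# features with the stored share. This is illustrative, not the paper's
# published construction.
def make_share(feature_bits, watermark_bits):
    """Generation phase: principal share = features XOR watermark."""
    return [f ^ w for f, w in zip(feature_bits, watermark_bits)]

def recover_watermark(suspect_feature_bits, share):
    """Retrieval phase: watermark = suspect features XOR share."""
    return [f ^ s for f, s in zip(suspect_feature_bits, share)]

features  = [1, 0, 1, 1, 0, 0, 1, 0]   # e.g. thresholded DWT coefficients
watermark = [0, 1, 1, 0, 1, 0, 0, 1]
share = make_share(features, watermark)

# An unmodified image reproduces the same features, so the watermark
# is recovered exactly; a mildly attacked image flips only a few
# feature bits, yielding a noisy watermark to be denoised and verified.
recovered = recover_watermark(features, share)
```

Robustness in such schemes comes from extracting features (here, from the DWT of the YCbCr sampling plane) that survive JPEG compression, blurring, and the other attacks listed above.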
Abstract: A new paradigm for software design and development models software by its business process, translates the model into a process execution language, and has it run by a supporting execution engine. This process-oriented paradigm promotes modeling of software by less technical users or business analysts as well as rapid development. Since business process models may be shared by different organizations and sometimes even by different business domains, it is interesting to apply a technique used in traditional software component technology to design reusable business processes. This paper discusses an approach to apply a technique for software component fabrication to the design of process-oriented software units, called process components. These process components result from decomposing a business process of a particular application domain into subprocesses with an aim that the process components can be reusable in different process-based software models. The approach is quantitative because the quality of process component design is measured from technical features of the process components. The approach is also strategic because the measured quality is determined against business-oriented component management goals. A software tool has been developed to measure how good a process component design is, according to the required managerial goals and compared with other designs. We also discuss how we benefit from reusable process components.
Abstract: The recent developments in computing and
communication technology permit users to access multimedia
documents with a variety of devices (PCs, PDAs, mobile phones...)
having heterogeneous capabilities. This diversification of devices
has created the need to adapt multimedia documents according to
their execution contexts. A semantic framework for multimedia
document adaptation based on conceptual neighborhood graphs was
previously proposed. In this framework, adaptation consists in
finding another specification that satisfies the target constraints
and is as close as possible to the initial document. In this paper,
we propose a new way of building the conceptual neighborhood graphs
to best preserve the proximity between the adapted and the original
documents and to deal with more elaborate relation models, by
integrating relation relaxation graphs that make it possible to
handle the delays and the distances defined within the relations.