Abstract: This research proposes a methodology for patent-citation-based technology input-output analysis, applying patent information to the input-output analysis originally developed to model dependencies among industries. For this analysis, a technology relationship matrix and its components, as well as input and technology inducement coefficients, are constructed from patent information. A technology inducement coefficient is then calculated by normalizing the degree of citation from one IPC (International Patent Classification) code to different or identical IPCs. Finally, we construct a Dependency Structure Matrix (DSM) based on the technology inducement coefficients to suggest a useful application of this methodology.
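As a minimal illustration of the normalization step described in this abstract, the following Python sketch derives technology inducement coefficients from a hypothetical IPC-to-IPC citation count matrix. The row-share normalization and the toy data are assumptions for illustration, not the paper's exact formulation.

```python
def inducement_coefficients(citations):
    """Normalize each row of an IPC citation-count matrix so that
    entry [i][j] gives the share of IPC i's citations going to IPC j."""
    coeffs = []
    for row in citations:
        total = sum(row)
        coeffs.append([c / total if total else 0.0 for c in row])
    return coeffs

# Toy citation counts among three hypothetical IPC classes.
citations = [
    [4, 2, 2],   # IPC A cites A 4 times, B 2 times, C 2 times
    [1, 3, 0],
    [0, 0, 5],
]
coeffs = inducement_coefficients(citations)
print(coeffs[0])  # [0.5, 0.25, 0.25]
```

A DSM could then be built by thresholding these coefficients to mark which technology classes depend on which.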
Abstract: Recent developments in computing and communication technology allow users to access multimedia documents with a variety of devices (PCs, PDAs, mobile phones, etc.) that have heterogeneous capabilities. This diversification of devices has created the need to adapt multimedia documents to their execution contexts. A semantic framework for multimedia document adaptation based on conceptual neighborhood graphs was previously proposed. In this framework, adaptation consists of finding another specification that satisfies the target constraints while remaining as close as possible to the initial document. In this paper, we propose a new way of building the conceptual neighborhood graphs that better preserves the proximity between the adapted and original documents and deals with more elaborate relation models by integrating relation relaxation graphs, which handle the delays and distances defined within the relations.
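The proximity notion in such a framework can be sketched with a small conceptual neighborhood graph over a subset of temporal relations, where the cost of relaxing one relation into another is the shortest-path distance in the graph. The graph below is a simplified illustration, not the paper's actual construction.

```python
from collections import deque

# Simplified conceptual neighborhood graph over a few Allen-style
# temporal relations (illustrative assumption, not the paper's graph).
NEIGHBORS = {
    "before":   ["meets"],
    "meets":    ["before", "overlaps"],
    "overlaps": ["meets", "starts"],
    "starts":   ["overlaps", "during"],
    "during":   ["starts"],
}

def relaxation_distance(src, dst):
    """BFS distance in the neighborhood graph = cost of relaxing src to dst."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        rel, d = queue.popleft()
        if rel == dst:
            return d
        for nxt in NEIGHBORS[rel]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None

print(relaxation_distance("before", "during"))  # 4
```

An adaptation engine would prefer the candidate specification whose relations have the smallest total relaxation distance from the original document.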
Abstract: This paper proposes a copyright protection scheme for color images using secret sharing and the wavelet transform. The scheme comprises two phases: the share image generation phase and the watermark retrieval phase. In the generation phase, the proposed scheme first converts the image into the YCbCr color space and creates a special sampling plane from that color space. Next, the scheme extracts features from the sampling plane using the discrete wavelet transform. Then, the scheme employs the features and the watermark to generate a principal share image. In the retrieval phase, an expanded watermark is first reconstructed using the features of the suspect image and the principal share image. Next, the scheme reduces the additional noise to obtain the recovered watermark, which is then verified against the original watermark to establish the copyright. The experimental results show that the proposed scheme can resist several attacks such as JPEG compression, blurring, sharpening, noise addition, and cropping, with accuracy rates all higher than 97%.
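The first step of the generation phase, RGB-to-YCbCr conversion, can be sketched with the standard ITU-R BT.601 (JPEG) formulas. The exact sampling plane the scheme derives from the color space is not specified here, so this is illustration only.

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one RGB pixel to YCbCr (BT.601 full-range convention)."""
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

# Pure red maps to high luma contribution on Y and a strong Cr component.
y, cb, cr = rgb_to_ycbcr(255, 0, 0)
```

Feature extraction would then apply a discrete wavelet transform to the chosen plane; a library such as PyWavelets provides that step in practice.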
Abstract: Surface metrology with image processing is a challenging task with wide applications in industry. Surface roughness can be evaluated using a texture classification approach, in which an important aspect is the appropriate selection of features that characterize the surface. We propose an effective combination of features for multi-scale and multi-directional analysis of engineering surfaces: standard deviation, kurtosis, and the Canny edge detector. We apply the method by analyzing the surfaces with the Discrete Wavelet Transform (DWT) and the Dual-Tree Complex Wavelet Transform (DT-CWT), and use the Canberra distance metric for similarity comparison between the surface classes. Our database includes surface textures manufactured by three machining processes, namely milling, casting, and shaping. The comparative study shows that DT-CWT outperforms DWT, giving a correct classification rate of 91.27% with the Canberra distance metric.
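Two of the statistical features named in this abstract (standard deviation and kurtosis) and the Canberra distance used for class comparison can be sketched directly; in the actual method these statistics would be computed per wavelet subband after DWT/DT-CWT decomposition, which is omitted here.

```python
import math

def std_dev(xs):
    """Population standard deviation."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def kurtosis(xs):
    """Fourth standardized moment (non-excess kurtosis)."""
    m = sum(xs) / len(xs)
    s = std_dev(xs)
    return sum((x - m) ** 4 for x in xs) / (len(xs) * s ** 4)

def canberra(u, v):
    """Canberra distance between two feature vectors."""
    return sum(abs(a - b) / (abs(a) + abs(b))
               for a, b in zip(u, v) if abs(a) + abs(b) > 0)

print(canberra([1.0, 2.0], [1.0, 4.0]))  # 0.3333... (only the second term contributes)
```

Classification then assigns a test surface to the class whose feature vector has the smallest Canberra distance.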
Abstract: In this article an evolutionary technique is used to solve nonlinear Riccati differential equations of fractional order. In this method, a genetic algorithm serves as the global search tool and is hybridized with an active-set algorithm for efficient local search. The proposed method has been successfully applied to several forms of Riccati differential equations. A strength of the method is its equal applicability to the integer-order as well as the fractional-order case. The method has been compared with standard numerical techniques as well as analytic solutions, and it is found to provide solutions with better accuracy than its deterministic counterparts. Another advantage of the approach is that it provides results on the entire finite continuous domain, unlike numerical methods that provide solutions only on a discrete grid of points.
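A minimal sketch of the evolutionary idea, assuming a polynomial trial solution and the integer-order Riccati equation y'(t) = 1 - y(t)^2 with y(0) = 0 (exact solution tanh t): a genetic algorithm minimizes the squared residual over collocation points. The hybrid active-set local search and the fractional-order operator are omitted, and all GA parameters are illustrative.

```python
import random

random.seed(0)
GRID = [i / 20 for i in range(21)]            # collocation points on [0, 1]

def fitness(c):
    """Squared residual of y' - (1 - y^2) on GRID plus the y(0)=0 penalty."""
    def y(t):  return sum(ck * t ** k for k, ck in enumerate(c))
    def dy(t): return sum(k * ck * t ** (k - 1) for k, ck in enumerate(c) if k)
    return sum((dy(t) - (1 - y(t) ** 2)) ** 2 for t in GRID) + y(0) ** 2

def ga(dim=4, pop_size=30, gens=200):
    # Seed the population with the zero polynomial plus random candidates.
    pop = [[0.0] * dim] + [[random.uniform(-1, 1) for _ in range(dim)]
                           for _ in range(pop_size - 1)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[:10]                       # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            children.append([(u + v) / 2 + random.gauss(0, 0.05)  # crossover + mutation
                             for u, v in zip(a, b)])
        pop = elite + children
    return min(pop, key=fitness)

best = ga()   # coefficients of an approximate polynomial solution
```

Because the solution is a closed-form polynomial, it can be evaluated anywhere on the domain, which mirrors the advantage over grid-based numerical methods noted above.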
Abstract: Color categorization is shared among the members of a society, which allows colors to be communicated, especially in natural language such as English. Hence a sociable robot, in order to coexist with humans in human society, must also acquire this shared color categorization. Much prior work toward this goal relies on models of human color perception and considerable mathematical complexity. In contrast, in this work the computer acting as the robot's brain learns color categorization through interaction with humans, without much mathematical complexity.
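One simple way such interactive learning can work, sketched under the assumption of a nearest-neighbor memory (the paper only describes the learning as interactive, so this instantiation is illustrative): the robot stores color samples labeled by a human and names new colors by the closest stored sample, with no explicit perceptual model.

```python
memory = []  # (rgb, label) pairs taught by a human

def teach(rgb, label):
    """A human shows the robot a color and names it."""
    memory.append((rgb, label))

def name_color(rgb):
    """Name an unseen color by its nearest taught sample in RGB space."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(memory, key=lambda item: dist2(item[0], rgb))[1]

teach((255, 0, 0), "red")
teach((0, 0, 255), "blue")
print(name_color((200, 30, 30)))  # red
```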
Abstract: This paper presents development techniques for a complete autonomous design model of an advanced train control system and gives a new approach to the implementation of multi-agent-based systems. This research proposes a novel control system that enhances the efficiency of vehicles under various operating constraints and contributes to stability and controllability, considering relevant safety and operational requirements, with command-control communication and various sensors used to avoid accidents. An approach to speed scheduling, management, and control in local and distributed environments is given to meet the needs of modern, automated vehicle control systems. These techniques employ state-of-the-art microelectronic technology, with accuracy and stability as forefront goals.
Abstract: Gradual patterns have been studied for many years, as they contain precious information. They have been integrated into many expert systems and rule-based systems, for instance to reason on knowledge such as “the greater the number of turns, the greater the number of car crashes”. In many cases, this knowledge has been treated as a rule: “the greater the number of turns → the greater the number of car crashes”. Historically, work has thus focused on the representation of such rules, studying how implication, especially fuzzy implication, could be defined. These rules were defined by experts who were in charge of describing the systems they worked on so that those systems could operate automatically. More recently, approaches have been proposed for mining databases to discover such knowledge automatically. Several approaches have been studied, the main scientific topics being how to determine what a relevant gradual pattern is, and how to discover such patterns as efficiently as possible (in terms of both memory and CPU usage). However, in some cases end-users are not interested in raw low-level knowledge, but rather in trends. Moreover, it may be the case that no relevant pattern can be discovered at a low level of granularity (e.g. city), whereas some can be discovered at a higher level (e.g. county). In this paper, we thus extend gradual pattern approaches to consider multiple-level gradual patterns. For this purpose, we consider two aggregation policies, namely horizontal and vertical.
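One common way to evaluate a gradual pattern such as “the greater the number of turns, the greater the number of car crashes” is the fraction of object pairs that vary concordantly on both attributes (a Kendall-style support). The exact support definition used by the paper is not specified here, so this is one plausible instantiation.

```python
from itertools import combinations

def gradual_support(rows, attr_a, attr_b):
    """Fraction of row pairs concordant on both attributes."""
    pairs = list(combinations(rows, 2))
    concordant = sum(
        1 for r1, r2 in pairs
        if (r1[attr_a] - r2[attr_a]) * (r1[attr_b] - r2[attr_b]) > 0
    )
    return concordant / len(pairs)

roads = [
    {"turns": 2, "crashes": 1},
    {"turns": 5, "crashes": 4},
    {"turns": 9, "crashes": 7},
    {"turns": 3, "crashes": 5},
]
print(gradual_support(roads, "turns", "crashes"))  # 0.8333... (5 of 6 pairs concordant)
```

Multi-level mining would aggregate rows (e.g. cities into counties) before computing the same support, which is where the horizontal and vertical aggregation policies come in.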
Abstract: It is a challenge to provide a wide range of queries in database query systems for small mobile devices such as PDAs and cell phones. Currently, due to the physical and resource limitations of these devices, most reported database querying systems developed for them offer only a small set of predetermined queries that users can pose. This limitation can be removed by allowing free-form queries to be entered on the devices. Hence, a query language that does not restrict the combination of query terms entered by users is proposed. This paper presents the free-form query language and the method used to translate free-form queries into their equivalent SQL statements.
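A naive version of such a translation can be sketched as follows; the vocabulary, table schema, and matching rules are all illustrative assumptions, and the paper's actual grammar and translation method are richer.

```python
SCHEMA = {"employee": ["name", "salary", "department"]}  # hypothetical table

def translate(free_form):
    """Bare column names become SELECT columns; col=value terms become WHERE conditions."""
    tokens = free_form.lower().split()
    columns, conditions = [], []
    for tok in tokens:
        if "=" in tok:
            col, val = tok.split("=", 1)
            if col in SCHEMA["employee"]:
                conditions.append(f"{col} = '{val}'")
        elif tok in SCHEMA["employee"]:
            columns.append(tok)
    sql = f"SELECT {', '.join(columns) or '*'} FROM employee"
    if conditions:
        sql += " WHERE " + " AND ".join(conditions)
    return sql

print(translate("name salary department=sales"))
# SELECT name, salary FROM employee WHERE department = 'sales'
```

Because the terms may appear in any order, the user is not restricted to a fixed set of predetermined query forms.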
Abstract: This paper describes a method to improve the robustness of a face recognition system based on the combination of two compensating classifiers. The face images are preprocessed by appearance-based statistical approaches, namely Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The LDA features of the face image are taken as the input of a Radial Basis Function Network (RBFN). The proposed approach has been tested on the ORL database. The experimental results show that the LDA+RBFN algorithm achieves a recognition rate of 93.5%.
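The forward pass of the RBFN classifier fed with LDA features can be sketched as Gaussian hidden units followed by a linear output layer. Centers, widths, and weights here are toy values; in the paper they would be learned from the ORL face data.

```python
import math

def rbfn_forward(x, centers, sigma, weights):
    """Gaussian RBF hidden layer, then one linear output score per class."""
    hidden = [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                       / (2 * sigma ** 2)) for c in centers]
    return [sum(w * h for w, h in zip(col, hidden)) for col in weights]

centers = [[0.0, 0.0], [1.0, 1.0]]   # toy RBF centers in LDA feature space
weights = [[1.0, 0.0], [0.0, 1.0]]   # output weights, one row per class
scores = rbfn_forward([1.0, 1.0], centers, 1.0, weights)
```

The recognized identity is the class with the highest output score.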
Abstract: In general, class complexity is measured based on one of several factors such as Lines of Code (LOC), Function Points (FP), Number of Methods (NOM), and Number of Attributes (NOA). Several new techniques, methods, and metrics based on different factors have been developed by researchers for calculating the complexity of a class in Object-Oriented (OO) software. Earlier, Arockiam et al. proposed a complexity measure named Extended Weighted Class Complexity (EWCC), an extension of the Weighted Class Complexity proposed by Mishra et al. EWCC is the sum of the cognitive weights of the attributes and methods of the class and of its derived classes. In EWCC, the cognitive weight of each attribute is taken to be 1. The main problem with the EWCC metric is that every attribute holds the same weight, although in general the cognitive load of understanding different types of attributes is not the same. We therefore propose a new metric named Attribute Weighted Class Complexity (AWCC), in which cognitive weights are assigned to attributes according to the effort needed to understand their data types. Through case studies and experiments, the proposed metric has been shown to be a better measure of the complexity of classes with attributes.
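The AWCC computation as described can be sketched directly: sum the type-dependent cognitive weights of the attributes, the cognitive weights of the methods, and the complexity of derived classes. The weight table below is a hypothetical assumption; the paper derives its own values from the effort needed to understand each data type.

```python
# Hypothetical cognitive weights per attribute data type (illustrative only).
TYPE_WEIGHTS = {"int": 1, "float": 1, "string": 2, "array": 3, "object": 4}

def awcc(attribute_types, method_weights, derived_awcc=0):
    """AWCC = type-weighted attribute sum + method cognitive weights + derived classes."""
    attr = sum(TYPE_WEIGHTS[t] for t in attribute_types)
    return attr + sum(method_weights) + derived_awcc

# A class with an int, a string, and an array attribute, two methods of
# cognitive weight 2 and 3, and no derived classes.
print(awcc(["int", "string", "array"], [2, 3]))  # 11
```

Under EWCC the same class's attributes would all contribute weight 1, masking the extra effort of understanding the array attribute.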
Abstract: We demonstrate through a sample application, E-banking, that the Web Service Modelling Language Ontology component can be used as a very powerful object-oriented database design language with logic capabilities. Its conceptual syntax allows the definition of class hierarchies, and its logic syntax allows the definition of constraints in the database. Relations, which are available for modelling relationships among three or more concepts, can be connected to logical expressions, allowing the implicit specification of database content. Using a reasoning tool, logic queries can also be made against the database in simulation mode.
Abstract: In this paper we propose a framework for multisensor intrusion detection called the Fuzzy Agent-Based Intrusion Detection System. A unique feature of this model is that the agent uses data from multiple sensors and fuzzy logic to process log files, which reduces the overhead of a distributed intrusion detection system. We have developed an agent communication architecture and a prototype implementation. This paper also discusses the issues of combining intelligent agent technology with the intrusion detection domain.
Abstract: Standards for learning objects focus primarily on content presentation. They have already been extended to support automatic evaluation, but only for exercises with a predefined set of answers. The existing standards lack the metadata required by specialized evaluators to handle types of exercises with an indefinite set of solutions. To address this issue, existing learning object standards were extended to the particular requirements of a specialized domain. This paper presents a definition of programming problems as learning objects that is compatible both with Learning Management Systems and with systems performing automatic evaluation of programs. The proposed definition includes metadata that cannot be conveniently represented using existing standards, such as the type of automatic evaluation, the requirements of the evaluation engine, and the roles of the different assets (test cases, program solutions, etc.). The EduJudge project and its main services are also presented as a case study on the use of the proposed definition of programming problems as learning objects.
Abstract: In this paper we introduce three watermarking methods that can be used to count the number of times that a user has played some content. The proposed methods are tested with audio content in our experimental system using the most common signal processing attacks. The test results show that the watermarking methods used enable the watermark to be extracted under the most common attacks with a low bit error rate.
Abstract: This paper proposes an approach to the design of modular systems based on an original technique for modeling and formulating combinatorial optimization problems. The proposed approach is illustrated with the example of personal computer configuration design. It takes into account the existing compatibility restrictions between the modules and can be extended and modified to reflect different functional and user requirements. The developed design modeling technique is used to formulate single-objective nonlinear mixed-integer optimization tasks. The practical applicability of the developed approach is numerically tested on the basis of real module data. Solutions of the formulated optimization tasks define the optimal configuration of the system that satisfies all compatibility restrictions and user requirements.
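The essence of the configuration problem can be sketched as a search over module combinations subject to pairwise compatibility restrictions. The module data and the single price objective are toy assumptions; the paper instead formulates the problem as a nonlinear mixed-integer optimization task and solves it with an appropriate solver.

```python
from itertools import product

cpus   = [("cpu_a", 200, "s1"), ("cpu_b", 150, "s2")]   # (name, price, socket)
boards = [("mb_x", 120, "s1"), ("mb_y", 90, "s2")]
rams   = [("ram_8", 40), ("ram_16", 70)]

def best_configuration():
    """Exhaustively find the cheapest compatible CPU/board/RAM combination."""
    best = None
    for cpu, mb, ram in product(cpus, boards, rams):
        if cpu[2] != mb[2]:          # compatibility restriction: socket must match
            continue
        price = cpu[1] + mb[1] + ram[1]
        if best is None or price < best[0]:
            best = (price, cpu[0], mb[0], ram[0])
    return best

print(best_configuration())  # (280, 'cpu_b', 'mb_y', 'ram_8')
```

Brute force works for a toy catalog; with real module data the mixed-integer formulation keeps the search tractable.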
Abstract: This research focuses on the use of a recommender system for decision support, by means of a case study of used car dealers in the Bangkok Metropolitan area. The goal is to develop an effective used-car purchasing system for dealers based on this premise. The underlying principle rests on content-based recommendation derived from a set of usability surveys. A prototype was developed to conduct a buyers' survey of 5 experts and 95 members of the general public. The responses were analyzed to determine the mean and standard deviation of buyers' preferences. The results revealed that both groups were in favor of using the proposed system to assist their buying decisions, indicating that the proposed system is of merit to used car dealers.
Abstract: In recent years a lot of effort has been made in the field of face detection. The human face contains important features that can be used by vision-based automated systems to identify and recognize individuals. Face location, the primary step of such systems, finds the face area in the input image, and an accurate location of the face is still a challenging task. The Viola-Jones framework has been widely used by researchers to detect the location of faces and objects in a given image, and face detection classifiers are shared by public communities such as OpenCV. An evaluation of these classifiers will help researchers choose the best classifier for their particular needs. This work focuses on the evaluation of face detection classifiers with respect to facial landmarks.
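A landmark-based evaluation of this kind can be sketched as follows: a detection is counted correct when every annotated landmark (eyes, nose, mouth) falls inside the detected rectangle. The paper's exact criterion may differ, so treat this as one plausible instantiation.

```python
def landmarks_inside(box, landmarks):
    """box = (x, y, w, h); landmarks = [(x, y), ...]; True if all fall inside."""
    x, y, w, h = box
    return all(x <= lx <= x + w and y <= ly <= y + h for lx, ly in landmarks)

def detection_rate(detections, landmark_sets):
    """Fraction of detected boxes that contain all of their face's landmarks."""
    hits = sum(1 for box, lms in zip(detections, landmark_sets)
               if landmarks_inside(box, lms))
    return hits / len(detections)

boxes = [(10, 10, 100, 100), (0, 0, 20, 20)]
lms   = [[(40, 50), (80, 50), (60, 90)],   # all inside the first box
         [(50, 50), (60, 60), (55, 70)]]   # outside the second box
print(detection_rate(boxes, lms))  # 0.5
```

In practice the boxes would come from an OpenCV cascade classifier and the landmarks from a manually annotated dataset.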
Abstract: With advances in wireless network technology, a variety of mobile applications have made wireless sensor networks a popular research area in recent years. Because wireless sensor network nodes move arbitrarily and the topology changes rapidly, mobile nodes often confront the void problem, which causes packet loss, retransmission, rerouting, additional transmission cost, and power consumption. When transmitting packets, the occurrence of a void cannot be predicted in advance. Thus, how to improve geographic routing with void avoidance in wireless networks is an important issue. In this paper, we propose a greedy geographic void routing algorithm to solve the void problem in wireless sensor networks. We use the positions of the source node and the void area to draw two tangents that form a fan-shaped range around the void, within which void-avoiding messages can be announced. We then use the source and destination nodes to draw a line at an angle within the fan range to select the next forwarding neighbor node for routing. In a dynamic wireless sensor network environment, the proposed greedy void-avoiding algorithm forwards packets in a more time-saving and efficient manner and mitigates the geographic void problem of wireless sensor networks.
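The fan-shaped range around the void can be sketched geometrically: given a source node S and a circular void with center C and radius r, the two tangent lines from S to the circle span a half-angle of asin(r / d), where d is the distance from S to C. The circular void model and the routing/announcement logic around it are simplifying assumptions here.

```python
import math

def void_fan(source, center, radius):
    """Return the (low, high) bearing angles of the tangent fan from source."""
    dx, dy = center[0] - source[0], center[1] - source[1]
    d = math.hypot(dx, dy)
    if d <= radius:
        raise ValueError("source lies inside the void")
    bearing = math.atan2(dy, dx)           # direction from source to void center
    half_angle = math.asin(radius / d)     # tangent half-angle of the fan
    return bearing - half_angle, bearing + half_angle

lo, hi = void_fan((0.0, 0.0), (10.0, 0.0), 5.0)
print(math.degrees(hi - lo))  # 60.0 (full fan opening: 2 * asin(0.5))
```

A greedy forwarder would then prefer neighbor nodes whose bearing lies outside this fan, steering packets around the void instead of into it.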
Abstract: This paper presents a semi-supervised learning algorithm called Iterative Cross-Training (ICT) to solve Web page classification problems. We apply Inductive Logic Programming (ILP) as a strong learner in ICT. The objective of this research is to evaluate the potential of the strong learner to boost the performance of the weak learner in ICT. We compare the results with supervised Naive Bayes, a well-known algorithm for text classification. The performance of our learning algorithm is also compared with other semi-supervised learning algorithms, namely Co-Training and EM. The experimental results show that the ICT algorithm outperforms those algorithms and that the performance of the weak learner can be enhanced by the ILP system.