Abstract: The paper presents an approach for handling uncertain
information in deductive databases using multivalued logics. Uncertainty
means that database facts may be assigned logical values other
than the conventional ones - true and false. The logical values represent
various degrees of truth, which may be combined and propagated
by applying the database rules. A corresponding multivalued database
semantics is defined. We show that it extends successful conventional
semantics, such as the well-founded semantics, and has polynomial-time
data complexity.
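As a rough sketch of the propagation mechanism described above (the abstract does not fix the exact truth-value algebra; min for rule bodies and max over alternative derivations is a common choice, and the facts and rules below are invented for the example), a bottom-up evaluation might look like this:

```python
# A minimal sketch (not the paper's exact semantics): bottom-up evaluation
# of Datalog-style rules over truth degrees in [0, 1], combining body
# literals with min (conjunction) and alternative derivations with max.
facts = {"edge(a,b)": 0.9, "edge(b,c)": 0.6}

# rule: head <- body literals; derived degree = min of body degrees
rules = [
    ("path(a,b)", ["edge(a,b)"]),
    ("path(b,c)", ["edge(b,c)"]),
    ("path(a,c)", ["path(a,b)", "path(b,c)"]),
]

def evaluate(facts, rules):
    """Iterate to a fixpoint; each pass is polynomial in the database size."""
    degrees = dict(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            derived = min(degrees.get(lit, 0.0) for lit in body)
            if derived > degrees.get(head, 0.0):   # max over derivations
                degrees[head] = derived
                changed = True
    return degrees

print(evaluate(facts, rules))  # path(a,c) gets degree min(0.9, 0.6) = 0.6
```

The fixpoint terminates because degrees only increase and are drawn from a finite set of derived values, which is consistent with the polynomial data complexity claimed above.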
Abstract: RFID (Radio Frequency IDentification) systems are widely
used in everyday life, in applications such as transport systems,
passports, automotive systems, animal tracking, human implants, and
libraries. However, the authentication protocols between RF (Radio
Frequency) tags and RF readers give rise to various privacy problems,
such as loss of tag anonymity, tracking, and eavesdropping. Many
researchers have proposed solutions to these problems, but the
proposed protocols still have weaknesses, such as lack of location
privacy and mutual authentication. In this paper, we show the problems
of the previous protocols, and then we propose a more secure and
efficient RFID authentication protocol.
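The abstract does not describe the proposed protocol itself; the sketch below only illustrates the general class of hash-based challenge-response schemes such work builds on, with invented identifiers. The tag never sends its identifier in the clear, which is one common way to resist tracking:

```python
# Illustrative sketch of a generic hash-based challenge-response exchange,
# NOT the paper's proposed protocol.
import hashlib, hmac, secrets

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

secret_id = b"tag-0042-secret"          # shared between tag and back-end DB

# Reader -> tag: fresh challenge
r_reader = secrets.token_bytes(16)
# Tag -> reader: fresh nonce plus a response binding both nonces to the secret
r_tag = secrets.token_bytes(16)
response = h(secret_id, r_reader, r_tag)

# Back-end: search known tags for one whose secret reproduces the response
known_tags = {b"tag-0042-secret", b"tag-0077-secret"}
authenticated = any(hmac.compare_digest(h(s, r_reader, r_tag), response)
                    for s in known_tags)
print("tag authenticated:", authenticated)
```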
Abstract: In this paper, the implementation of a rule-based
intuitive reasoner is presented. The implementation included two
parts: the rule induction module and the intuitive reasoner. A large
weather database was acquired as the data source. Twelve weather
variables from those data were chosen as the "target variables"
whose values were predicted by the intuitive reasoner. A "complex"
situation was simulated by making only subsets of the data available
to the rule induction module. As a result, the rules induced were
based on incomplete information with variable levels of certainty.
The certainty level was modeled by a metric called "Strength of
Belief", which was assigned to each rule or datum as ancillary
information about the confidence in its accuracy. Two techniques
were employed to induce rules from the data subsets: decision tree
and multi-polynomial regression, respectively for the discrete and the
continuous type of target variables. The intuitive reasoner was tested
for its ability to use the induced rules to predict the classes of the
discrete target variables and the values of the continuous target
variables. The intuitive reasoner implemented two types of
reasoning: fast and broad, where, by analogy to human thought, the
former corresponds to fast decision making and the latter to deeper
contemplation. For reference, a weather data analysis approach
that had been applied to similar tasks was adopted to analyze the
complete database and create predictive models for the same 12
target variables. The values predicted by the intuitive reasoner and
the reference approach were compared with actual data. The intuitive
reasoner reached near-100% accuracy for two continuous target
variables. For the discrete target variables, the intuitive reasoner
predicted at least 70% as accurately as the reference reasoner. Since
the intuitive reasoner operated on rules derived from only about 10%
of the total data, it demonstrated potential advantages over
conventional methods in dealing with sparse data sets.
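As a minimal sketch of the rule-induction setup described above, with synthetic data standing in for the weather database and the decision tree's leaf-class probability standing in for the "Strength of Belief" metric:

```python
# A decision tree trained on only ~10% of the data; the leaf class
# probability is used as a rough confidence attached to the fired rule.
# Dataset and variable meanings are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))                  # e.g. 4 weather variables
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # a discrete target variable

subset = rng.choice(len(X), size=len(X) // 10, replace=False)  # ~10% of data
tree = DecisionTreeClassifier(max_depth=4).fit(X[subset], y[subset])

for cls, p in zip(tree.predict(X[:5]), tree.predict_proba(X[:5])):
    # leaf class probability ~ "Strength of Belief" in the induced rule
    print(f"predicted class {cls}, strength of belief ~ {p.max():.2f}")
```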
Abstract: This paper focuses on testing the database of an existing
information system. At the beginning we describe the basic problems
of implemented databases, such as data redundancy, poor design of
the database logical structure, or inappropriate data types in columns
of database tables. These problems are often the result of an
incorrect understanding of the primary requirements for a database of
an information system. We then propose an algorithm to compare the
conceptual model created from the vague requirements for a database
with a conceptual model reconstructed from the implemented database.
The algorithm also suggests steps leading to optimization of the
implemented database. The proposed algorithm is verified by an
implemented prototype. The paper also describes a fuzzy system
that works with the vague requirements for a database of an
information system, a procedure for creating a conceptual model from
vague requirements, and an algorithm for reconstructing a conceptual
model from the implemented database.
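The comparison step might be pictured as follows; the sketch reduces each conceptual model to entities with attribute sets (the entity and attribute names are invented, and the paper's actual algorithm and fuzzy handling of vagueness are richer than this):

```python
# Compare a required conceptual model against one reconstructed from the
# implemented database and suggest optimization steps from the differences.
required = {   # model built from the (vague) requirements
    "Customer": {"id", "name", "email"},
    "Order":    {"id", "customer_id", "created_at"},
}
implemented = {  # model reconstructed from the implemented database
    "Customer": {"id", "name", "email", "email_copy"},   # redundant column
    "Order":    {"id", "customer_id"},
}

for entity in required.keys() | implemented.keys():
    req, impl = required.get(entity, set()), implemented.get(entity, set())
    for attr in impl - req:
        print(f"{entity}.{attr}: not required -> candidate for removal")
    for attr in req - impl:
        print(f"{entity}.{attr}: required but missing -> add column")
```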
Abstract: In this paper, we present a user pattern learning algorithm
based MDSS (Medical Decision Support System) for a ubiquitous
environment. Most research focuses on hardware systems, hospital
management, and the overall concept of the ubiquitous environment,
even though these are hard to implement. The objective of this paper
is to design an MDSS framework that helps patients with medical
treatment and supports prevention for high-risk patients (COPD, heart
disease, diabetes). The framework consists of a database, a CAD
(Computer-Aided Diagnosis support system), and a CAP (computer-aided
user vital sign prediction system). It can be applied to develop a
user pattern learning algorithm based MDSS for homecare and
silver-town services. In particular, the CAD has decision-making
competency: it compares current vital signs with the user's
normal-condition pattern data. In addition, the CAP predicts the
user's vital signs from the patient's past data. The novelty of the
approach lies in combining a neural network method, wireless vital
sign acquisition devices, and a personal computer database system. An
intelligent-agent-based MDSS will help elderly people and high-risk
patients prevent sudden death and disease, give physicians online
access to patients' data, and support planning of medication service
priority (e.g., emergency cases).
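The CAP idea, predicting the next vital-sign reading from past data with a neural network, can be sketched roughly as follows; the synthetic heart-rate series and window size are illustrative assumptions, not the paper's setup:

```python
# A small neural network predicting the next vital-sign reading from a
# sliding window of the patient's past readings.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
t = np.arange(2000, dtype=float)
heart_rate = 70 + 5 * np.sin(t / 50) + rng.normal(0, 0.5, t.size)

W = 10  # use the last 10 readings to predict the next one
X = np.lib.stride_tricks.sliding_window_view(heart_rate[:-1], W)
y = heart_rate[W:]

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:-100], y[:-100])                 # train on the patient's history
print("predicted next HR:", model.predict(X[-1:])[0])
```

A prediction far outside the user's normal-condition pattern would then be the trigger for the CAD's decision-making step.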
Abstract: System MEMORI automatically detects and recognizes rotated and/or rescaled versions of the objects of a database within digital color images with cluttered background. This task is accomplished by means of a region grouping algorithm guided by heuristic rules, whose parameters concern some geometrical properties and the recognition score of the database objects. This paper focuses on the strategies implemented in MEMORI for the estimation of the heuristic rule parameters. This estimation, being automatic, makes the system a highly user-friendly tool.
Abstract: Photovoltaic power generation forecasting is an
important task in renewable energy power system planning and
operation. This paper explores the application of neural networks
(NN) to the design of photovoltaic power generation forecasting
systems for one week ahead, using weather databases that include
the global irradiance and temperature of Ghardaia city (southern
Algeria), collected with a data acquisition system. Simulations were
run and the results are discussed, showing that the neural network
technique is capable of decreasing the photovoltaic power generation
forecasting error.
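A minimal sketch of this kind of forecasting system follows: a neural network mapping daily global irradiance and temperature to PV power output. The synthetic data and network size are illustrative assumptions, not the Ghardaia database:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
irradiance = rng.uniform(200, 1000, 365)        # W/m^2
temperature = rng.uniform(5, 45, 365)           # deg C
power = 0.15 * irradiance * (1 - 0.004 * (temperature - 25))  # toy PV model

X = np.column_stack([irradiance, temperature])
nn = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
nn.fit(X[:300], power[:300])                    # train on historical days

pred = nn.predict(X[300:307])                   # one week ahead
mae = np.mean(np.abs(pred - power[300:307]))
print("one-week forecast MAE:", round(mae, 2), "W")
```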
Abstract: Nowadays, the Gene Ontology has been used widely by many researchers for biological data mining and information retrieval, integration of biological databases, finding genes, and incorporating knowledge in the Gene Ontology for gene clustering. However, the increase in size of the Gene Ontology has caused problems in maintaining and processing it. One way to keep it accessible is to cluster it into fragmented groups. Clustering the Gene Ontology is a difficult combinatorial problem and can be modeled as a graph partitioning problem. Additionally, deciding the number k of clusters to use is not easily perceived and is a hard algorithmic problem. Therefore, an approach for solving the automatic clustering of the Gene Ontology is proposed by incorporating a cohesion-and-coupling metric into a hybrid algorithm consisting of a genetic algorithm and a split-and-merge algorithm. Experimental results and an example of the modularized Gene Ontology in RDF/XML format are given to illustrate the effectiveness of the algorithm.
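As a rough sketch of the genetic-algorithm side of such a hybrid, the toy below searches for a partition of a small graph that scores well on a cohesion-minus-coupling fitness; the paper's exact metric, the split-and-merge step, and the automatic choice of k are omitted, and the graph is invented:

```python
import random
random.seed(0)

edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
N, K = 6, 2                      # 6 nodes, partition into K clusters

def fitness(assign):
    within = sum(1 for u, v in edges if assign[u] == assign[v])   # cohesion
    across = len(edges) - within                                  # coupling
    return within - across

pop = [[random.randrange(K) for _ in range(N)] for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                            # elitist selection
    children = []
    for _ in range(20):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, N)
        child = a[:cut] + b[cut:]                 # one-point crossover
        if random.random() < 0.2:                 # mutation
            child[random.randrange(N)] = random.randrange(K)
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best partition:", best, "fitness:", fitness(best))
```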
Abstract: This paper investigates the implementation of a security
mechanism in an object-oriented database system. Formal methods
play an essential role in computer security due to their powerful
expressiveness and concise syntax and semantics. In this paper, both
specification and implementation issues in a database security
environment are considered, and database security is achieved through
the development of an efficient implementation of the specification
without compromising its originality and expressiveness.
Abstract: Electrocardiogram (ECG) segmentation is necessary to help reduce the time-consuming task of manually annotating ECGs. Several algorithms have been developed to segment the ECG automatically. We first review several such methods, and then present a new single-lead segmentation method based on adaptive piecewise constant approximation (APCA) and piecewise derivative dynamic time warping (PDDTW). The results are tested on the QT database. We compared our results to Laguna's two-lead method. Our proposed approach has a comparable mean error, but yields a slightly higher standard deviation than Laguna's method.
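Derivative dynamic time warping of the kind used in PDDTW-style methods can be sketched as follows: take first differences of the two sequences, then align them with classic DTW. The signals below are synthetic, not QT-database ECGs:

```python
import numpy as np

def dtw(a, b):
    """Classic O(n*m) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 1, 100)
beat1 = np.sin(2 * np.pi * t)            # stand-ins for two ECG segments
beat2 = np.sin(2 * np.pi * (t ** 1.2))   # time-warped version

d1, d2 = np.diff(beat1), np.diff(beat2)  # derivative step of DDTW
print("DTW distance on derivatives:", round(dtw(d1, d2), 4))
```

Matching on derivatives rather than raw amplitudes makes the alignment sensitive to waveform shape, which is what segment boundary detection needs.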
Abstract: Rule discovery is an important technique for mining knowledge from large databases. The use of objective measures for discovering interesting rules leads to another data mining problem, although one of reduced complexity. Data mining researchers have studied subjective measures of interestingness to reduce the volume of discovered rules and ultimately improve the overall efficiency of the KDD process. In this paper we study the novelty of the discovered rules as a subjective measure of interestingness. We propose a hybrid approach that uses objective and subjective measures to quantify the novelty of the discovered rules in terms of their deviations from the known rules. We analyze the types of deviation that can arise between two rules and categorize the discovered rules according to a user-specified threshold. We implement the proposed framework and experiment with some public datasets. The experimental results are quite promising.
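One way to picture the deviation idea: score a discovered rule against each known rule by how much its antecedent and consequent item sets differ, then label it novel if the score exceeds a user-specified threshold. The Jaccard distance and equal weighting below are assumed stand-ins for the paper's measures:

```python
def jaccard_distance(a: set, b: set) -> float:
    return 1 - len(a & b) / len(a | b) if a | b else 0.0

def novelty(rule, known_rules):
    """Deviation from the closest known rule, in [0, 1]."""
    return min(0.5 * jaccard_distance(rule[0], k[0]) +
               0.5 * jaccard_distance(rule[1], k[1]) for k in known_rules)

known = [({"bread", "butter"}, {"milk"}), ({"beer"}, {"chips"})]
discovered = ({"bread", "jam"}, {"milk"})   # rule = (antecedent, consequent)

score, threshold = novelty(discovered, known), 0.3
print("deviation:", round(score, 2),
      "->", "novel" if score > threshold else "conforming")
```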
Abstract: Since the one-to-one word translator does not have the
facility to translate pragmatic aspects of Javanese, the parallel text
alignment model described uses a phrase pair combination. The
algorithm aligns the parallel text automatically from the beginning to
the end of each sentence. Even though the results of the phrase pair
combination outperform the previous algorithm, it is still inefficient.
Recording all possible combinations consume more space in the
database and time consuming. The original algorithm is modified by
applying the edit distance coefficient to improve the data-storage
efficiency. As a result, the data-storage consumption is 90% reduced
as well as its learning period (42s).
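The storage-reduction idea can be sketched as follows: before recording a new phrase, compute a normalized edit-distance coefficient against already stored entries and skip near-duplicates. The coefficient definition, threshold, and sample phrases are illustrative assumptions, not the paper's exact formulation:

```python
def levenshtein(a: str, b: str) -> int:
    """Standard edit distance via a rolling one-row DP table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def coefficient(a: str, b: str) -> float:
    return levenshtein(a, b) / max(len(a), len(b), 1)  # normalized to [0, 1]

stored = ["kula mangan sega", "kula mangan"]
candidate = "kula mangan segane"
if all(coefficient(candidate, s) > 0.2 for s in stored):
    stored.append(candidate)      # only sufficiently different entries kept
print(stored)                     # near-duplicate is not stored
```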
Abstract: According to statistics, the prevalence of congenital hearing loss in Taiwan is approximately six per thousand; furthermore, one per thousand of infants have severe hearing impairment. Hearing ability during infancy has a significant impact on the future development of children's oral expression, language maturity, cognitive performance, educational ability, and social behavior. Although most children born with hearing impairment have sensorineural hearing loss, almost every child more or less retains some residual hearing. If provided with a hearing aid or cochlear implant (a bionic ear) in time, in addition to hearing and speech training, even severely hearing-impaired children can still learn to talk. On the other hand, those who fail to be diagnosed, and are thus unable to begin hearing and speech rehabilitation in a timely manner, may lose an important opportunity to live a complete and healthy life. Eventually, the lack of hearing and speaking ability will affect the development of both mental and physical functions, intelligence, and social adaptability. Not only will this problem result in a lifelong, irreparable regret for the hearing-impaired child, but it will also create a heavy burden for the family and society. Therefore, it is necessary to establish a computer-assisted predictive model that can accurately detect and help diagnose newborn hearing loss so that early interventions can be provided in time and waste of medical resources avoided. This study uses information from the neonatal database of the case hospital as the subjects, adopting two different analysis methods: using a support vector machine (SVM) for model prediction, and using logistic regression to conduct factor screening prior to SVM model prediction, to examine the results. The results indicate that prediction accuracy is as high as 96.43% when the factors are screened and selected through logistic regression. Hence, the model constructed in this study will provide real help in clinical diagnosis for physicians and will be of actual benefit to early interventions for newborn hearing impairment.
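The two-stage setup, logistic regression screening the candidate risk factors followed by an SVM trained on the retained ones, has roughly this shape; the synthetic data stands in for the hospital's neonatal database:

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 20))                 # 20 candidate risk factors
y = (X[:, 0] - 2 * X[:, 3] + rng.normal(0, 0.5, 500) > 0).astype(int)

model = make_pipeline(
    SelectFromModel(LogisticRegression(max_iter=1000)),  # factor screening
    SVC(kernel="rbf"),                                   # final prediction
)
acc = cross_val_score(model, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2%}")
```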
Abstract: One of the main research directions in the CAD/CAM
machining area is the reduction of machining time.
Feedrate scheduling is one of the advanced techniques that allows
keeping the uncut chip area constant and, consequently, keeping the
main cutting force constant. There are two main ways of feedrate
optimization. The first consists of cutting force monitoring, which
requires complex equipment for force measurement and then sets the
feedrate according to the cutting force variation. The second way is
to optimize the feedrate by keeping the material removal rate
constant with respect to the cutting conditions.
In this paper a new approach is proposed, using an extended
database that replaces the system model.
The feedrate schedule is determined based on the identification
of the reconfigurable machine tool, with the feed value determined
from the uncut chip section area, the contact length between tool
and blank, and the geometrical roughness.
The first stage consists of monitoring the blank and the tool to
determine their actual profiles. The next stage is the determination
of the programmed tool path that yields the target profile of the
piece.
The graphic representation environment models the tool and blank
regions; the tool model is then positioned relative to the blank
model according to the programmed tool path. For each of these
positions the geometrical roughness value, the uncut chip area, and
the contact length between tool and blank are calculated. Each of
these parameters is compared with its admissible value, and the feed
value is set accordingly.
This approach has the following advantages: the prediction of
cutting force is possible for complex cutting processes; the real
cutting profile, which deviates from the theoretical profile, is
taken into account; the blank-tool contact length can be limited;
and the programmed tool path can be corrected so that the target
profile is obtained.
Applying this method yields data sets that allow feedrate
scheduling such that the uncut chip area, and as a result the cutting
force, is constant, which allows more efficient use of the machine
tool and a reduction of machining time.
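The feed-setting step admits a very simple sketch: for each programmed tool position, the computed uncut chip area and contact length are compared with admissible values and the feed is scaled back until both are within limits. All numbers, and the simplified chip-area model, are illustrative assumptions:

```python
A_ADM = 0.50   # admissible uncut chip area, mm^2
L_ADM = 4.0    # admissible tool-blank contact length product, mm

def chip_area(feed_mm_rev, depth_mm):
    return feed_mm_rev * depth_mm          # simplified uncut chip section

def set_feed(depth_mm, contact_len_mm, feed=0.4):
    # back the feed off until both parameters are admissible
    while chip_area(feed, depth_mm) > A_ADM or contact_len_mm * feed > L_ADM:
        feed *= 0.9
    return feed

# tool positions along the programmed path: (depth of cut, contact length)
for depth, contact in [(1.0, 3.0), (2.5, 6.0), (0.8, 2.0)]:
    print(f"depth={depth} mm -> feed={set_feed(depth, contact):.3f} mm/rev")
```

Deeper cuts get proportionally lower feeds, which is what keeps the chip area, and hence the cutting force, roughly constant along the path.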
Abstract: Currently, there are no databases or local norms for the
physical performance of Malaysian rugby players. Such a database or
norms are vital for Malaysia's sports development, as programs can
be set up to improve the current status. This pilot study was
conducted to evaluate the status of our semi-professional rugby
players. The rugby players were randomly selected from the
Malaysian national team and several clubs in the Klang Valley, Kuala
Lumpur, Malaysia. 54 male rugby players (age: 24.41 ± 4.06 years)
were selected for this pilot study. Height, bodyweight, percentage
body fat, and body mass index (BMI) were measured, and several other
physical tests were performed. Results from the BLEEP test revealed
an average of level 9, shuttle 2 for the players. Interestingly,
forwards were taller, heavier, and had lower maximal aerobic power
than backs in the same team. In conclusion, the physical
characteristics of the rugby players were much lower than those of
international players from other countries. This pilot study
indicates that the physical performance of the Malaysian team must
be improved in order to further develop the sport.
Abstract: The paper shows the necessity of increasing the security
level for paper management in the cadastral field by using specific
graphical watermarks. Using graphical watermarking will increase
the security of cadastral content management; furthermore, the
originality of any document can be validated afterwards by checking
its graphic watermark. If, for any reason, the document is altered
for counterfeiting, it is invalidated and identified as an illegal
copy by the graphic check of the watermark, a check made at pixel
level.
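A pixel-level watermark check of the kind described can be pictured with a simple least-significant-bit scheme, which is an assumed stand-in for the paper's graphic watermark: the embedded bits are re-extracted and compared bit-by-bit, so any alteration breaks the match:

```python
import numpy as np

rng = np.random.default_rng(4)
page = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # document scan
mark = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)    # watermark bits

stamped = (page & 0xFE) | mark            # embed mark in the LSB plane

def is_authentic(img, expected_mark):
    # pixel-level check: every LSB must match the registered watermark
    return bool(np.array_equal(img & 1, expected_mark))

print("original valid:", is_authentic(stamped, mark))
forged = stamped.copy()
forged[10, 10] ^= 1                       # tamper with a single pixel
print("altered valid:", is_authentic(forged, mark))
```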
Abstract: This paper presents data annotation models at five levels of granularity (database, relation, column, tuple, and cell) of relational data to address the unsuitability of most relational databases for expressing annotations. These models do not require any structural or schematic changes to the underlying database. They are also flexible, extensible, customizable, database-neutral, and platform-independent. The paper also presents an SQL-like query language, named Annotation Query Language (AnQL), to query annotation documents. AnQL is simple to understand and exploits the wide existing knowledge and skill set of SQL.
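To make the idea concrete, a query over cell-level annotations might look as follows; note that the AnQL string shown is hypothetical syntax (the paper defines the actual grammar), and the Python below just hand-evaluates what such a query would do over annotation records:

```python
annotations = [
    {"relation": "Employee", "column": "salary", "tuple": 17,
     "note": "value imputed from 2009 payroll"},
    {"relation": "Employee", "column": "name", "tuple": 17,
     "note": "verified against HR records"},
]

# hypothetical AnQL-style query (illustrative syntax only)
anql = ('SELECT note FROM ANNOTATIONS '
        'WHERE relation = "Employee" AND column = "salary"')

# hand-evaluated equivalent of the query above
for a in annotations:
    if a["relation"] == "Employee" and a["column"] == "salary":
        print(a["note"])
```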
Abstract: Organization of video databases is becoming a difficult
task as the amount of video content increases. Video classification
based on the content of videos can significantly increase the speed of
tasks such as browsing and searching for a particular video in a
database. In this paper, a content-based video classification system
for the classes indoor and outdoor is presented. The system is
intended to be used on a mobile platform with modest resources. The
algorithm makes use of the temporal redundancy in videos, which
allows using an uncomplicated classification model while still
achieving reasonable accuracy. The training and evaluation were done
on a video database of 443 videos downloaded from a video sharing
service. A total accuracy of 87.36% was achieved.
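The temporal-redundancy idea amounts to something like the following: classify only every N-th frame with a cheap per-frame model and take a majority vote over the whole video. The color-based frame classifier is an illustrative stand-in for the paper's model:

```python
import numpy as np

def classify_frame(frame):
    """Toy indoor/outdoor cue: outdoor frames tend to have more blue sky."""
    blue_ratio = frame[..., 2].mean() / max(frame.mean(), 1e-6)
    return "outdoor" if blue_ratio > 1.1 else "indoor"

def classify_video(frames, step=30):
    # temporal redundancy: sparse sampling plus a majority vote is enough
    votes = [classify_frame(f) for f in frames[::step]]
    return max(set(votes), key=votes.count)

rng = np.random.default_rng(5)
video = rng.integers(0, 256, size=(900, 48, 64, 3)).astype(float)
video[..., 2] *= 1.3                       # bias the blue channel upward
print(classify_video(video))               # -> "outdoor"
```

Sampling one frame per second of a 30 fps video cuts the per-video work by a factor of 30, which is what makes an uncomplicated model viable on a modest mobile platform.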
Abstract: Large volumes of fingerprints are collected and stored
every day in a wide range of applications, including forensics and
access control. This is evident from the database of the Federal
Bureau of Investigation (FBI), which contains more than 70 million
fingerprints. Compression of this database is very important because
of its high volume. The performance of existing image coding
standards generally degrades at low bit-rates because of the
underlying block-based Discrete Cosine Transform (DCT) scheme. Over
the past decade, the success of wavelets in solving many different
problems has contributed to their unprecedented popularity. Due to
implementation constraints, scalar wavelets do not possess all the
properties needed for better compression performance. A new class of
wavelets called 'multiwavelets', which possess more than one scaling
filter, overcomes this problem. The objective of this paper is to
develop an efficient compression scheme that obtains better quality
and a higher compression ratio through the multiwavelet transform
and embedded coding of multiwavelet coefficients with the Set
Partitioning In Hierarchical Trees (SPIHT) algorithm. A comparison
of the best-known multiwavelets is made to the best-known scalar
wavelets. Both quantitative and qualitative measures of performance
are examined for fingerprints.
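The overall transform-coding shape can be sketched with a scalar wavelet (PyWavelets; multiwavelet filter banks and the full SPIHT coder are beyond this sketch) followed by coefficient thresholding, which is a crude analogue of SPIHT's progressive significance coding. The random image stands in for a fingerprint scan:

```python
import numpy as np
import pywt

rng = np.random.default_rng(6)
image = rng.integers(0, 256, size=(128, 128)).astype(float)

coeffs = pywt.wavedec2(image, "db4", level=3)    # multi-level 2D transform
arr, slices = pywt.coeffs_to_array(coeffs)

threshold = np.quantile(np.abs(arr), 0.90)       # keep top 10% of coefficients
arr[np.abs(arr) < threshold] = 0.0               # the rest code to (near) zero

rec = pywt.waverec2(
    pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), "db4")
psnr = 10 * np.log10(255**2 / np.mean((image - rec[:128, :128])**2))
print(f"kept 10% of coefficients, PSNR = {psnr:.1f} dB")
```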
Abstract: Identity verification of authentic persons by their multiview faces is a real-world problem in machine vision. Multiview faces are difficult to handle due to their non-linear representation in the feature space. This paper illustrates the usability of the generalization of LDA, in the form of canonical covariates, for the recognition of multiview faces. In the proposed work, a Gabor filter bank is used to extract facial features characterized by spatial frequency, spatial locality, and orientation. The Gabor face representation captures a substantial amount of the variation in face instances that often occurs due to illumination, pose, and facial expression changes. Convolving the Gabor filter bank with face images of rotated profile views produces Gabor faces with high-dimensional feature vectors. Canonical covariates are then applied to the Gabor faces to reduce the high-dimensional feature space to low-dimensional subspaces. Finally, support vector machines are trained on the canonical subspaces, which contain a reduced set of features, and perform the recognition task. The proposed system is evaluated on the UMIST face database. The experimental results demonstrate the efficiency and robustness of the proposed system, with high recognition rates.
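The pipeline's overall shape, Gabor filtering for features, a discriminant projection (LDA here, which the canonical-covariate step generalizes) for dimensionality reduction, and an SVM for recognition, can be sketched as follows. Random images stand in for the UMIST faces, and the one-scale, four-orientation Gabor bank is an illustrative simplification:

```python
import numpy as np
from scipy.signal import fftconvolve
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def gabor_kernel(theta, sigma=3.0, lam=6.0, size=15):
    """Real Gabor kernel at orientation theta."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.exp(-(x**2 + y**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lam))

rng = np.random.default_rng(7)
images = rng.normal(size=(40, 32, 32))           # stand-ins for face crops
labels = np.repeat(np.arange(4), 10)             # 4 subjects, 10 views each

bank = [gabor_kernel(t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
feats = np.array([[np.abs(fftconvolve(im, k, mode="same")).mean()
                   for k in bank] for im in images])  # pooled Gabor responses

low = LinearDiscriminantAnalysis(n_components=3).fit_transform(feats, labels)
clf = SVC(kernel="rbf").fit(low, labels)         # recognition in the subspace
print("training accuracy:", clf.score(low, labels))
```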