Abstract: In this paper, we present a new and effective image indexing technique that extracts features directly from the DCT domain. The proposed approach is an object-based image indexing method. For each 8×8 block in the DCT domain, a feature vector is extracted. The feature vectors of all blocks of the image are then clustered into groups using the k-means algorithm. Each cluster represents a distinct object of the image. We then select the clusters with the largest membership after clustering. The centroids of the selected clusters are taken as the image feature vectors and indexed into the database. We also propose an approach for applying the proposed indexing method to automatic image classification. Experimental results on a database of 800 images from 8 semantic groups in automatic image classification are reported.
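The clustering-and-selection step described above can be sketched in pure Python. The toy 2-D vectors below stand in for the per-block DCT feature vectors; the deterministic initialisation and all parameter values are illustrative assumptions, not the paper's implementation.

```python
def kmeans(vectors, k, iters=20):
    """Basic k-means: returns (centroids, labels). A toy stand-in for the
    clustering of per-block DCT feature vectors described in the abstract."""
    centroids = [vectors[i] for i in range(k)]  # deterministic init (assumption)
    labels = [0] * len(vectors)
    for _ in range(iters):
        # assign each vector to its nearest centroid (squared Euclidean)
        for i, v in enumerate(vectors):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(v, centroids[c])))
        # recompute each centroid as the mean of its members
        for c in range(k):
            members = [vectors[i] for i in range(len(vectors)) if labels[i] == c]
            if members:
                centroids[c] = tuple(sum(col) / len(members)
                                     for col in zip(*members))
    return centroids, labels

def index_vectors(vectors, k, n_selected):
    """Keep the centroids of the n_selected most populous clusters as the
    image signature, mirroring the selection step in the abstract."""
    centroids, labels = kmeans(vectors, k)
    counts = sorted(((labels.count(c), c) for c in range(k)), reverse=True)
    return [centroids[c] for _, c in counts[:n_selected]]

# toy block features: one tight three-member group and one two-member group
blocks = [(0.0, 0.1), (0.1, 0.0), (0.05, 0.05), (5.0, 5.1), (5.1, 5.0)]
signature = index_vectors(blocks, k=2, n_selected=1)
```

With this toy data, the signature is the centroid of the three-member group, i.e. the "largest object" of the image.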
Abstract: Business rules and data warehouses are concepts and
technologies that impact a wide variety of organizational tasks. In
general, each area has evolved independently, impacting application
development and decision-making. Generating knowledge from data
warehouse is a complex process. This paper outlines an approach to
ease the import of information and knowledge from a data warehouse
star schema through an inference class of business rules. The paper
utilizes the Oracle database for illustrating the working of the
concepts. The star schema structure and the business rules are stored
within a relational database. The approach is explained through a
prototype in Oracle's PL/SQL Server Pages.
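As a rough illustration of inferring knowledge from star-schema fact rows through stored business rules (the paper does this in Oracle PL/SQL Server Pages), here is a Python sketch; the table, rule, and attribute names are hypothetical, not from the paper.

```python
# Hypothetical star-schema fact table rows (product and region would be
# foreign keys into dimension tables in a real star schema).
fact_sales = [
    {"product": "P1", "region": "north", "amount": 1200},
    {"product": "P2", "region": "south", "amount": 300},
]

# Inference-class business rules stored as data:
# a condition on a fact row implies a derived attribute.
rules = [
    {"if": lambda r: r["amount"] >= 1000, "then": ("tier", "high")},
    {"if": lambda r: r["amount"] < 1000,  "then": ("tier", "low")},
]

def apply_rules(facts, rules):
    """Forward chaining over fact rows: each rule whose condition holds
    adds its conclusion as derived knowledge on a copy of the row."""
    derived = []
    for row in facts:
        enriched = dict(row)
        for rule in rules:
            if rule["if"](enriched):
                key, value = rule["then"]
                enriched[key] = value
        derived.append(enriched)
    return derived

enriched = apply_rules(fact_sales, rules)
```

In the paper's setting both the schema structure and the rules live in relational tables; here the rules are Python literals purely for brevity.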
Abstract: The main mission of Ezilla is to provide a friendly
interface to access the virtual machine and quickly deploy the high
performance computing environment. Ezilla has been developed by
Pervasive Computing Team at National Center for High-performance
Computing (NCHC). Ezilla integrates the Cloud middleware,
virtualization technology, and Web-based Operating System (WebOS)
to form a virtual computer in a distributed computing environment. In
order to scale up the dataset and speed up processing, we propose a
sensor observation system that handles a huge amount of data in the
Cassandra database. The sensor observation system builds on Ezilla
to store raw sensor data in a distributed database. We adopt the
Ezilla Cloud service to create virtual machines and log in to them to
deploy the sensor observation system. Integrating the sensor
observation system with Ezilla enables rapid deployment of the
experimental environment and access to huge amounts of data in a
distributed database whose replication mechanism protects the data.
Abstract: The draw solute separation process in Forward
Osmosis desalination was simulated in Aspen Plus chemical process
modeling software, to estimate the energy consumption and compare
it with other desalination processes, mainly the Reverse Osmosis
process, which is currently the most prevalent. The electrolytic chemistry
for the system was retrieved using the Elec-NRTL property method
in the Aspen Plus database. Electrical equivalent of energy required
in the Forward Osmosis desalination technique was estimated and
compared with the prevalent desalination techniques.
Abstract: In this paper, we present a new method for
incorporating global shift invariance in support vector machines.
Unlike other approaches which incorporate a feature extraction stage,
we first scale the image and then classify it by using the modified
support vector machines classifier. Shift invariance is achieved by
replacing dot products between patterns used by the SVM classifier
with the maximum cross-correlation value between them. Unlike the
normal approach, in which the patterns are treated as vectors, in our
approach the patterns are treated as matrices (or images). Cross-correlation
is computed by using computationally efficient
techniques such as the fast Fourier transform. The method has been
tested on the ORL face database. The tests indicate that this method
can improve the recognition rate of an SVM classifier.
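The kernel substitution described above can be sketched as follows. This toy version computes the maximum cross-correlation directly for clarity (the paper uses the FFT for efficiency), and the 2×2 patterns are hypothetical examples, not ORL face images.

```python
def cross_corr_kernel(a, b):
    """Maximum cross-correlation between two equal-size patterns a and b
    (lists of lists). Slides b over a with zero padding and returns the
    largest correlation value; this replaces the plain dot product in the
    SVM kernel. Direct computation for clarity; the paper uses the FFT."""
    h, w = len(a), len(a[0])
    best = float("-inf")
    for dy in range(-h + 1, h):          # all vertical shifts
        for dx in range(-w + 1, w):      # all horizontal shifts
            s = 0.0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        s += a[yy][xx] * b[y][x]
            best = max(best, s)
    return best

# a shifted copy of a pattern correlates as strongly as the pattern itself
p = [[0, 1], [0, 0]]
q = [[1, 0], [0, 0]]   # p shifted by one pixel
```

Substituting this maximum for the dot product gives the shift invariance the abstract describes; whether the resulting function remains a valid Mercer kernel is a separate question not addressed here.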
Abstract: The network traffic data provided for the design of
intrusion detection are typically large, contain much ineffective
information, and carry only limited and ambiguous information about
users' activities. We study these problems and propose a two-phase
approach in our intrusion detection design. In the first phase, we develop a
correlation-based feature selection algorithm to remove the worthless
information from the original high dimensional database. Next, we
design an intrusion detection method to solve the problems of
uncertainty caused by limited and ambiguous information. In the
experiments, we choose six UCI databases and DARPA KDD99
intrusion detection data set as our evaluation tools. Empirical studies
indicate that our feature selection algorithm is capable of reducing the
size of the data set. Our intrusion detection method achieves better
performance than the participating intrusion detectors.
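A minimal sketch of the first-phase idea, assuming a simple Pearson-correlation ranking criterion (the paper's exact correlation-based criterion may differ); the data and the number of kept features are illustrative.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def select_features(rows, labels, n_keep):
    """Rank features by |correlation with the class label| and keep the
    top n_keep, discarding the rest as worthless for detection."""
    scores = []
    for j in range(len(rows[0])):
        col = [row[j] for row in rows]
        scores.append((abs(pearson(col, labels)), j))
    scores.sort(reverse=True)
    return sorted(j for _, j in scores[:n_keep])

# feature 0 tracks the label perfectly, feature 1 is mostly noise
rows = [(0, 5), (1, 3), (0, 4), (1, 4)]
labels = [0, 1, 0, 1]
kept = select_features(rows, labels, n_keep=1)
```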
Abstract: Road maps are used in numerous daily activities, but
constructing and updating a road map whenever changes occur is
laborious. At Universiti Malaysia Sarawak, research on Automatic
Road Extraction (ARE) was explored to solve the difficulties in
updating road map. The research started with using Satellite Image
(SI), or in short, the ARE-SI project. A Hybrid Simple Colour Space
Segmentation & Edge Detection (Hybrid SCSS-EDGE) algorithm
was developed to extract roads automatically from satellite-taken
images. In order to extract the road network accurately, the satellite
image must be analyzed prior to the extraction process. The
characteristics of these elements are analyzed and consequently the
relationships among them are determined. In this study, the road
regions are extracted based on colour space elements and edge details
of roads. In addition, an edge detection method is applied to further filter
out the non-road regions. The extracted road regions are validated by
using a segmentation method. These results are valuable for building
road map and detecting the changes of the existing road database.
The proposed Hybrid Simple Colour Space Segmentation and Edge
Detection (Hybrid SCSS-EDGE) algorithm performs these tasks
fully automatically: the user only needs to input a high-resolution
satellite image and wait for the result. Moreover, the system can
work on complex road networks and generate the extraction result in
seconds.
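A hypothetical miniature of the two-stage idea above: colour-space segmentation followed by an edge-based filter. The greyness test, thresholds, and toy image are assumptions for illustration, not the actual Hybrid SCSS-EDGE algorithm.

```python
def road_mask(rgb, gray):
    """Sketch of a two-stage road extraction (illustrative assumptions):
    (1) colour-space segmentation keeps near-grey pixels (road-like),
    (2) an edge test removes pixels with a strong intensity gradient."""
    h, w = len(gray), len(gray[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            r, g, b = rgb[y][x]
            greyish = max(r, g, b) - min(r, g, b) < 30   # low saturation
            # crude horizontal/vertical gradient magnitude
            gx = gray[y][min(x + 1, w - 1)] - gray[y][max(x - 1, 0)]
            gy = gray[min(y + 1, h - 1)][x] - gray[max(y - 1, 0)][x]
            flat = abs(gx) + abs(gy) < 40                # weak edge response
            mask[y][x] = 1 if greyish and flat else 0
    return mask

# toy 2x2 scene: three grey road pixels, one green vegetation pixel
rgb = [[(100, 100, 100), (100, 100, 100)],
       [(30, 150, 30), (100, 100, 100)]]
gray = [[100, 100], [100, 100]]
mask = road_mask(rgb, gray)
```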
Abstract: In this paper, we propose an architecture for easily
constructing a robot controller. The architecture is a multi-agent
system which has eight agents: the Man-machine interface, Task
planner, Task teaching editor, Motion planner, Arm controller,
Vehicle controller, Vision system and CG display. The controller has
three databases: the Task knowledge database, the Robot database and
the Environment database. Based on this controller architecture, we
are constructing an experimental power distribution line maintenance
robot system and are doing the experiment for the maintenance tasks,
for example, the "bolt insertion" task.
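The eight-agent architecture with its three shared databases can be outlined as a skeleton. Only the agent and database names come from the abstract; the registration scheme and message format are assumptions for illustration.

```python
class Agent:
    """Minimal agent: records messages addressed to its name."""
    def __init__(self, name, controller):
        self.name = name
        self.log = []
        controller.register(self)

    def receive(self, message):
        self.log.append(message)

class Controller:
    """Holds the eight agents and the three shared databases
    (Task knowledge, Robot, Environment) named in the abstract."""
    def __init__(self):
        self.agents = {}
        self.databases = {"task_knowledge": {}, "robot": {}, "environment": {}}

    def register(self, agent):
        self.agents[agent.name] = agent

    def send(self, to, message):
        self.agents[to].receive(message)

ctl = Controller()
for name in ["man_machine_interface", "task_planner", "task_teaching_editor",
             "motion_planner", "arm_controller", "vehicle_controller",
             "vision_system", "cg_display"]:
    Agent(name, ctl)

# hypothetical interaction: plan the bolt insertion task
ctl.databases["task_knowledge"]["bolt_insertion"] = ["approach", "insert", "fasten"]
ctl.send("motion_planner", ("plan", "bolt_insertion"))
```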
Abstract: Mining sequential patterns in large databases has become
an important data mining task with broad applications; it describes
potential ordering relationships among items in a database. Many
different algorithms have been introduced for this task. Conventional algorithms
can find the exact optimal Sequential Pattern rule but it takes a
long time, particularly when they are applied on large databases.
Nowadays, some evolutionary algorithms, such as Particle Swarm
Optimization and Genetic Algorithm, were proposed and have been
applied to solve this problem. This paper will introduce a new kind
of hybrid evolutionary algorithm that combines Genetic Algorithm
(GA) with Particle Swarm Optimization (PSO) to mine sequential
patterns, in order to improve the convergence speed of evolutionary
algorithms. The algorithm is referred to as SP-GAPSO.
Abstract: In this paper, we present a system for content-based
retrieval of large database of classified satellite images, based on
user's relevance feedback (RF). Through our proposed system, we
divide each satellite image scene into small subimages, which are stored
in the database. The modified radial basis function neural network
plays an important role in clustering the subimages of the database
according to the Euclidean distance between the query feature vector
and the feature vectors of the other subimages. The advantage of using the RF
technique in such queries is demonstrated by analyzing the database
retrieval results.
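A minimal stand-in for the retrieval loop above: Euclidean-distance ranking of stored subimage feature vectors plus a Rocchio-style relevance-feedback update of the query (the paper's RBF-network clustering is omitted). All names, vectors, and weights are illustrative assumptions.

```python
def retrieve(query, database, top_k):
    """Rank stored subimage feature vectors by Euclidean distance to the
    query vector and return the top_k keys."""
    def dist(v):
        return sum((a - b) ** 2 for a, b in zip(query, v)) ** 0.5
    return sorted(database, key=lambda k: dist(database[k]))[:top_k]

def refine_query(query, relevant):
    """Rocchio-style relevance feedback (an illustrative assumption):
    move the query halfway toward the mean of the vectors the user
    marked relevant."""
    mean = [sum(v[d] for v in relevant) / len(relevant)
            for d in range(len(query))]
    return tuple(0.5 * q + 0.5 * m for q, m in zip(query, mean))

# hypothetical subimage feature database
subimages = {"tile_a": (0.9, 0.1), "tile_b": (0.2, 0.8), "tile_c": (0.85, 0.15)}
hits = retrieve((1.0, 0.0), subimages, top_k=2)
refined = refine_query((1.0, 0.0), [subimages[h] for h in hits])
```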
Abstract: We demonstrate through a sample application, E-banking,
that the Web Service Modelling Language Ontology component
can be used as a very powerful object-oriented database design
language with logic capabilities. Its conceptual syntax allows the
definition of class hierarchies, and its logic syntax allows the definition
of constraints in the database. Relations, which are available for
modelling relations of three or more concepts, can be connected to
logical expressions, allowing the implicit specification of database
content. Using a reasoning tool, logic queries can also be made
against the database in simulation mode.
Abstract: This paper describes a method to improve the robustness of a face recognition system based on the combination of two compensating classifiers. The face images are preprocessed by appearance-based statistical approaches such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The LDA features of the face image are taken as the input of a Radial Basis Function Network (RBFN). The proposed approach has been tested on the ORL database. The experimental results show that the LDA+RBFN algorithm achieves a recognition rate of 93.5%.
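A much-simplified RBFN-style classifier over hypothetical LDA-space class centres: one Gaussian unit per class, predicting the class with the strongest response. The paper's trained network is more elaborate; this only illustrates the idea of classifying LDA features with radial basis functions.

```python
import math

def rbf_classify(x, prototypes, sigma=1.0):
    """Minimal RBF-network-style classifier: one Gaussian unit per class
    prototype; predict the class whose unit responds most strongly.
    A simplified stand-in for the trained RBFN in the abstract."""
    def activation(c):
        d2 = sum((a - b) ** 2 for a, b in zip(x, c))
        return math.exp(-d2 / (2 * sigma ** 2))
    return max(prototypes, key=lambda label: activation(prototypes[label]))

# hypothetical class centres in a 2-D LDA feature space
prototypes = {"subject_1": (0.0, 0.0), "subject_2": (3.0, 3.0)}
pred = rbf_classify((0.2, -0.1), prototypes)
```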
Abstract: Mining sequential patterns from large customer transaction databases has been recognized as a key research topic in database systems. However, previous work focused mainly on mining sequential patterns at a single concept level. In this study, we introduce concept hierarchies into this problem and present several algorithms for discovering multiple-level sequential patterns based on the hierarchies. An experiment was conducted to assess the performance of the proposed algorithms. The performance of the algorithms was measured by the relative time spent on completing the mining tasks on two different datasets. The experimental results showed that the performance depends on the characteristics of the datasets and the pre-defined threshold of minimal support for each level of the concept hierarchy. Based on the experimental results, some suggestions are also given on how to select an appropriate algorithm for a given dataset.
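A toy multi-level miner illustrating why concept hierarchies help: ordered item pairs that are infrequent at the raw item level can become frequent once items are lifted to their parent concepts. The hierarchy, data, and restriction to length-2 patterns are illustrative assumptions, not the paper's algorithms.

```python
from itertools import combinations

# toy concept hierarchy: item -> parent concept (hypothetical data)
hierarchy = {"cola": "drink", "juice": "drink", "chips": "snack", "nuts": "snack"}

def generalize(sequence, level):
    """Level 0 keeps raw items; level 1 lifts items to parent concepts."""
    return [hierarchy[i] if level == 1 else i for i in sequence]

def frequent_pairs(sequences, level, min_support):
    """Count ordered item pairs (a before b) at the chosen hierarchy level
    and keep those meeting min_support: a toy multi-level pattern miner."""
    counts = {}
    for seq in sequences:
        seq = generalize(seq, level)
        seen = set()
        for a, b in combinations(seq, 2):  # combinations preserves order
            seen.add((a, b))
        for pair in seen:                  # count each pair once per sequence
            counts[pair] = counts.get(pair, 0) + 1
    return {p for p, c in counts.items() if c >= min_support}

transactions = [["cola", "chips"], ["juice", "nuts"], ["cola", "nuts"]]
raw = frequent_pairs(transactions, level=0, min_support=2)      # nothing frequent
lifted = frequent_pairs(transactions, level=1, min_support=2)   # (drink, snack)
```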
Abstract: The proposed system identifies the species of wood
using the textural features present in its bark. Each wood species
has unique patterns in its bark, which enables the proposed
system to identify it accurately. Automatic wood recognition systems
are not yet well established, mainly due to the lack of research in this
area and the difficulty of obtaining a wood database. In our work, a
wood recognition system has been designed based on pre-processing
techniques, feature extraction and by correlating the features of those
wood species for their classification. Texture classification is a problem
that has been studied and tested using different methods due to its
valuable usage in various pattern recognition problems, such as wood
recognition and rock classification. The most popular technique used
for textural classification is the Gray-Level Co-occurrence Matrix
(GLCM). The features extracted from the enhanced images using the
GLCM are then correlated to determine the classification of the
various wood species. The results obtained show a high recognition
accuracy, proving that the techniques used are suitable for
commercial implementation.
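The GLCM computation and one derived texture feature (contrast) can be sketched directly; the offset, number of gray levels, and toy image below are assumptions for illustration.

```python
def glcm(image, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one offset: counts how often
    gray level i occurs next to gray level j, normalised to probabilities."""
    h, w = len(image), len(image[0])
    m = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:
                m[image[y][x]][image[yy][xx]] += 1
                total += 1
    return [[v / total for v in row] for row in m]

def contrast(m):
    """GLCM contrast feature: sum over i, j of p(i, j) * (i - j)^2."""
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m)))

# toy 3-level bark texture patch
img = [[0, 0, 1], [0, 0, 1], [2, 2, 2]]
p = glcm(img, levels=3)
```

In a classification pipeline, several such features (contrast, energy, homogeneity, ...) computed at multiple offsets would form the feature vector that is compared across species.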
Abstract: The main challenges of data-centric and open-source
projects are the large number of developers and changes to the core
framework. The Model-View-Controller (MVC) design pattern has
significantly improved the development and adjustment of complex
projects. The Entity Framework, as the Model layer in the MVC
architecture, has simplified communication with the database. How
often are these new technologies used, and do they have the potential
for designing a more efficient Enterprise Resource Planning (ERP)
system that is better suited to accountants?
Abstract: Different methods based on biometric algorithms are
presented for eigenface detection, including face recognition,
identification, and verification. The theme of this research is to
manage the critical processing stages (accuracy, speed, security, and
monitoring) of face activities, with the flexibility of searching and
editing a secure authorized database. In this paper we implement
different techniques, such as eigenface vector reduction using texture
and shape vectors to remove complexity, while a density matching
score with Face Boundary Fixation (FBF) extracts the most likely
characteristics in the media processing content. We examine the
development and performance efficiency of the database by applying
our algorithms in both the recognition and detection phases. Our
results show gains in accuracy and security, with better achievement
than a number of previous approaches in all the above processes.
Abstract: In this paper we explore the application of a formal proof system to verification problems in cryptography. Cryptographic properties concerning the correctness or security of some cryptographic algorithms are of great interest. Besides some basic lemmata, we explore an implementation of a complex function that is used in cryptography. More precisely, we describe formal properties of this implementation that we computer-prove. We describe formalized probability distributions (σ-algebras, probability spaces and conditional probabilities). These are given in the formal language of the formal proof system Isabelle/HOL. Moreover, we computer-prove Bayes' formula. Besides this, we describe an application of the presented formalized probability distributions to cryptography. Furthermore, this paper shows that computer proofs of complex cryptographic functions are possible by presenting an implementation of the Miller-Rabin primality test that admits formal verification. Our achievements are a step towards computer verification of cryptographic primitives. They describe a basis for computer verification in cryptography. Computer verification can be applied to further problems in cryptographic research, if the corresponding basic mathematical knowledge is available in a database.
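For readers unfamiliar with it, the Miller-Rabin test mentioned above in an (unverified) Python sketch; the paper's contribution is its formal verification in Isabelle/HOL, not this implementation, and the round count and seed here are arbitrary choices.

```python
import random

def miller_rabin(n, rounds=20, seed=42):
    """Probabilistic Miller-Rabin primality test: returns False if n is
    definitely composite, True if n is probably prime."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # write n - 1 = 2^r * d with d odd
    r, d = 0, n - 1
    while d % 2 == 0:
        r += 1
        d //= 2
    rng = random.Random(seed)
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)
        x = pow(a, d, n)          # modular exponentiation a^d mod n
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a witnesses that n is composite
    return True                   # no witness found: probably prime
```

Each failed round can only miss a composite with probability at most 1/4, so 20 rounds leave an error probability below 4^-20.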
Abstract: Object Relational Databases (ORDB) are more complex in
nature than traditional relational databases because they combine the
characteristics of both object oriented concepts and relational
features of conventional databases. Design of an ORDB demands
efficient and quality schema considering the structural, functional
and componential traits. This internal quality of the schema is
assured by metrics that measure the relevant attributes. This is
extended to substantiate the understandability, usability and
reliability of the schema, thus assuring external quality of the
schema. This work institutes a formalization of ORDB metrics:
metric definition, evaluation methodology, and calibration of the
metrics. Three ORDB schemas were used to conduct the evaluation
and the formalization of the metrics. The metrics are calibrated using
content and criteria related validity based on the measurability,
consistency and reliability of the metrics. Nominal and summative
scales are derived based on the evaluated metric values and are
standardized. Future work pertaining to ORDB metrics forms the
concluding note.
Abstract: An automatic method for the extraction of feature points for face-based applications is proposed. The system is based upon volumetric feature descriptors, which in this paper have been extended to incorporate scale space. The method is robust to noise and has the ability to extract local and holistic features simultaneously from faces stored in a database. Extracted features are stable over a range of faces, with results indicating that, in terms of intra-ID variability, the technique has the ability to outperform manual landmarking.
Abstract: There is an urgent need to develop novel
Mycobacterium tuberculosis (Mtb) drugs that are active against drug
resistant bacteria but, more importantly, kill persistent bacteria. Our
study is structured around an integrated analysis of metabolic
pathways, small-molecule screening, and similarity search in the
PubChem database. Metabolic analysis approaches based on unified
weighting were used for potent target selection. Our results suggest
pantothenate synthetase (panC) and 3-methyl-2-oxobutanoate
hydroxymethyltransferase (panB) as appropriate drug targets. In our
study, we used pantothenate synthetase because inhibitors of it
already exist. We report the discovery of new antitubercular
compounds through ligand-based approaches using computational tools.