Abstract: Twelve lactating Etawah Crossbred goats were used in this
study. The goat feed consisted of Calliandra calothyrsus,
Pennisetum purpureum, wheat bran and dried fermented cassava
peel. The cassava peels were fermented with a traditional culture
called “ragi tape” (a mixed culture of Saccharomyces cerevisiae,
Aspergillus sp., Candida, Hansenula and Acetobacter). The goats
were divided into two groups (Control and Treated) of six does
each. The experimental diet of the Control group consisted of 70%
roughage (fresh Calliandra calothyrsus and Pennisetum purpureum,
60:40) and 30% wheat bran on a dry matter (DM) basis. In the
Treated group, the 30% of wheat bran was replaced with dried
fermented cassava peels. Data were statistically analyzed using
analysis of variance in the SPSS program. The concentration of HCN
in the fermented cassava peel decreased to a non-toxic level. The
nutrient composition of the dried fermented cassava peel was
85.75% dry matter, 5.80% crude protein and 82.51% total digestible
nutrients (TDN). Substituting 30% of the wheat bran with dried
fermented cassava peel in the diet had no effect on dry matter and
organic matter intake, but significantly (P < 0.05) decreased
crude protein and TDN consumption, as well as milk yield and milk
composition. The study recommends reducing the level of
substitution to less than 30% of the concentrate in the diet in
order to avoid low nutrient intake and milk production in goats.
Abstract: The network traffic data provided for the design of
intrusion detection systems are typically large, contain much
ineffective information, and enclose only limited and ambiguous
information about users' activities. We study these problems and
propose a two-phase approach in our intrusion detection design. In
the first phase, we develop a correlation-based feature selection
algorithm to remove the worthless information from the original
high-dimensional database. Next, we design an intrusion detection
method to solve the problems of uncertainty caused by limited and
ambiguous information. In the experiments, we choose six UCI
databases and the DARPA KDD99 intrusion detection data set as our
evaluation tools. Empirical studies indicate that our feature
selection algorithm is capable of reducing the size of the data
set. Our intrusion detection method achieves a better performance
than those of the participating intrusion detectors.
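The abstract does not give the details of the correlation-based filter, but the general idea, scoring each feature by its correlation with the class label and keeping only the strongly correlated ones, can be sketched as follows. This is a minimal Pearson-correlation filter, not the paper's algorithm; the 0.3 threshold and all names are illustrative assumptions.

```python
import math

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric lists (0.0 if degenerate)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_features(rows, labels, threshold=0.3):
    """Keep the indices of features whose |correlation| with the label
    is at least `threshold`; everything else is treated as worthless."""
    kept = []
    for j in range(len(rows[0])):
        column = [row[j] for row in rows]
        if abs(pearson(column, labels)) >= threshold:
            kept.append(j)
    return kept

# toy data: feature 0 tracks the label, feature 1 is constant,
# feature 2 is perfectly anti-correlated (still informative)
rows = [[1, 0, 5], [2, 0, 3], [3, 0, 1], [4, 0, -1]]
labels = [1, 2, 3, 4]
kept = select_features(rows, labels)   # -> [0, 2]
```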
Abstract: Road maps are used in numerous daily activities, but it
is a hassle to construct and update a road map whenever there are
changes. At Universiti Malaysia Sarawak, research on Automatic
Road Extraction (ARE) was carried out to overcome the difficulties
of updating road maps. The research started with satellite images
(SI), in short the ARE-SI project. A Hybrid Simple Colour Space
Segmentation & Edge Detection (Hybrid SCSS-EDGE) algorithm was
developed to extract roads automatically from satellite images. In
order to extract the road network accurately, the satellite image
must be analyzed prior to the extraction process. The
characteristics of these elements are analyzed and, consequently,
the relationships among them are determined. In this study, the
road regions are extracted based on colour space elements and the
edge details of roads. In addition, an edge detection method is
applied to further filter out the non-road regions. The extracted
road regions are validated using a segmentation method. These
results are valuable for building road maps and detecting changes
in an existing road database. The proposed Hybrid SCSS-EDGE
algorithm performs the tasks fully automatically: the user only
needs to input a high-resolution satellite image and wait for the
result. Moreover, the system can work on complex road networks and
generate the extraction result in seconds.
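The Hybrid SCSS-EDGE algorithm itself is not specified in the abstract. As an illustration of the edge-detection stage only, a plain Sobel gradient magnitude, a standard edge operator assumed here rather than taken from the paper, can be sketched over a grayscale image stored as a list of rows:

```python
def sobel_magnitude(img):
    """Gradient magnitude of a 2-D grayscale image (list of rows); border left 0."""
    h, w = len(img), len(img[0])
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-change kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-change kernel
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[i][j] * img[y + i - 1][x + j - 1]
                     for i in range(3) for j in range(3))
            gy = sum(gy_k[i][j] * img[y + i - 1][x + j - 1]
                     for i in range(3) for j in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# a vertical brightness step produces strong responses along the edge
step = [[0, 0, 10, 10] for _ in range(4)]
edges = sobel_magnitude(step)
```

Regions whose pixels show little gradient response would then be candidates for the non-road filtering step the abstract mentions.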
Abstract: Machine-understandable data, when strongly interlinked,
constitutes the basis for the Semantic Web. Annotating web
documents is one of the major techniques for creating metadata on
the Web. Annotating websites defines their data in a form suitable
for interpretation by machines. In this paper, we present an
improved approach over previous work [1] to annotate the texts of
websites based on a knowledge base.
Abstract: The original idea for a feature film may come from a
writer, a director or a producer. The director is the person
responsible for the creative aspects, both interpretive and
technical, of a motion picture production. The director may be
shown discussing the project with his or her co-writers, members
of the production staff and the producer, or selecting locales and
constructing sets. All these activities provide, of course, ways
of externalizing the director's ideas about the film. A director
sometimes pushes both the film image and the techniques of
narration to new artistic limits, but the director's main
responsibility is to lead the spectator to an original opinion
through his or her philosophical approach. The director tries to
find an artistic angle in every scene, turns the screenplay into
an effective story and sets the film on a spiritual and
philosophical base.
Abstract: In this paper, we propose an architecture for easily
constructing a robot controller. The architecture is a multi-agent
system which has eight agents: the Man-machine interface, Task
planner, Task teaching editor, Motion planner, Arm controller,
Vehicle controller, Vision system and CG display. The controller has
three databases: the Task knowledge database, the Robot database and
the Environment database. Based on this controller architecture,
we are constructing an experimental power distribution line
maintenance robot system and are conducting experiments on
maintenance tasks, for example the “bolt insertion task”.
Abstract: Mining sequential patterns in large databases has become
an important data mining task with broad applications; it
describes potentially sequenced relationships among items in a
database. Many different algorithms have been introduced for this
task. Conventional algorithms can find the exact optimal
sequential pattern rule, but they take a long time, particularly
when applied to large databases. Recently, evolutionary algorithms
such as Particle Swarm Optimization and the Genetic Algorithm have
been proposed and applied to this problem. This paper introduces a
new kind of hybrid evolutionary algorithm that combines the
Genetic Algorithm (GA) with Particle Swarm Optimization (PSO) to
mine sequential patterns, in order to improve the convergence
speed of evolutionary algorithms. The algorithm is referred to as
SP-GAPSO.
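SP-GAPSO's exact operators are not given in the abstract. A generic GA+PSO hybrid of the kind described, where each generation applies PSO velocity updates and then a GA crossover/mutation step, might be sketched as follows on a toy continuous fitness function (all parameter values and the replacement scheme are illustrative assumptions, not the paper's design):

```python
import random

def hybrid_ga_pso(fitness, dim, pop_size=20, iters=100, seed=0):
    """Toy GA+PSO hybrid: each generation does a PSO velocity/position update,
    then one GA crossover+mutation whose child replaces the worst particle."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    vel = [[0.0] * dim for _ in range(pop_size)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=fitness)[:]
    for _ in range(iters):
        # PSO step: pull each particle toward its personal and the global best
        for i, p in enumerate(pos):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - p[d])
                             + 1.5 * rng.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if fitness(p) > fitness(pbest[i]):
                pbest[i] = p[:]
                if fitness(p) > fitness(gbest):
                    gbest = p[:]
        # GA step: one-point crossover of two personal bests, small mutation,
        # child replaces the currently worst particle
        a, b = rng.sample(pbest, 2)
        cut = rng.randrange(dim + 1)
        child = a[:cut] + b[cut:]
        if rng.random() < 0.2:
            child[rng.randrange(dim)] += rng.gauss(0, 0.5)
        worst = min(range(pop_size), key=lambda i: fitness(pos[i]))
        pos[worst] = child[:]
        vel[worst] = [0.0] * dim
    return gbest

# toy objective: maximum at (2, 2)
best = hybrid_ga_pso(lambda x: -sum((xi - 2) ** 2 for xi in x), dim=2)
```

Mining sequential patterns would replace the continuous position vectors with encoded candidate sequences and the toy objective with a support-based fitness.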
Abstract: A neurofuzzy approach for a given set of input-output training data is proposed in two phases. First, the data set is partitioned automatically into a set of clusters, and a fuzzy if-then rule is extracted from each cluster to form a fuzzy rule base. Second, a fuzzy neural network is constructed accordingly and its parameters are tuned to increase the precision of the fuzzy rule base. The network is able to learn and optimize the rule base of a Sugeno-type fuzzy inference system using a hybrid learning algorithm that combines gradient descent and the least mean squares algorithm. The proposed neurofuzzy system has the advantage of determining the number of rules automatically; it also reduces the number of rules, decreases computational time, learns faster and consumes less memory. The authors also investigate how neurofuzzy techniques can be applied in control theory to design a fuzzy controller for modelling linear and nonlinear dynamic systems from a set of input/output data. A simulation analysis is carried out on a wide range of processes, including the online identification of nonlinear components in a control system and a benchmark problem involving the prediction of a chaotic time series. Furthermore, well-known examples of linear and nonlinear systems are simulated in the Matlab/Simulink environment. The combination is also illustrated by modeling the relationship between automobile trips and demographic factors.
Abstract: In this paper, we present a system for content-based
retrieval from a large database of classified satellite images,
based on user relevance feedback (RF). In the proposed system,
each satellite image scene is divided into small subimages, which
are stored in the database. A modified radial basis function
neural network plays an important role in clustering the subimages
of the database according to the Euclidean distance between the
query feature vector and the feature vectors of the other
subimages. The advantage of using the RF technique in such queries
is demonstrated by analyzing the database retrieval results.
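The Euclidean-distance ranking at the core of such a retrieval query can be sketched as follows. The identifiers and two-dimensional feature vectors are purely illustrative; in the system described, the vectors would be the subimage features produced by the modified RBF network.

```python
import math

def retrieve(query_vec, database, top_k=3):
    """Return the ids of the top_k entries closest to the query in
    Euclidean distance; database holds (id, feature_vector) pairs."""
    def dist(vec):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(query_vec, vec)))
    ranked = sorted(database, key=lambda item: dist(item[1]))
    return [item[0] for item in ranked[:top_k]]

# hypothetical subimage feature vectors (2-D here only for readability)
db = [("sub_a", [0.0, 0.0]), ("sub_b", [1.0, 1.0]), ("sub_c", [5.0, 5.0])]
hits = retrieve([0.9, 1.1], db, top_k=2)   # -> ["sub_b", "sub_a"]
```

Relevance feedback would then reweight or re-query based on which of the returned subimages the user marks as relevant.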
Abstract: We demonstrate, through a sample application,
E-banking, that the Web Service Modelling Language Ontology
component can be used as a very powerful object-oriented database
design language with logic capabilities. Its conceptual syntax
allows the definition of class hierarchies, and its logic syntax
allows the definition of constraints in the database. Relations,
which are available for modelling relationships among three or
more concepts, can be connected to logical expressions, allowing
the implicit specification of database content. Using a reasoning
tool, logic queries can also be made against the database in
simulation mode.
Abstract: This paper describes a method to improve the robustness of a face recognition system based on the combination of two compensating classifiers. The face images are preprocessed by appearance-based statistical approaches, namely Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The LDA features of the face image are taken as the input of a Radial Basis Function Network (RBFN). The proposed approach has been tested on the ORL database. The experimental results show that the LDA+RBFN algorithm achieves a recognition rate of 93.5%.
Abstract: Mining sequential patterns from large customer transaction databases has been recognized as a key research topic in database systems. However, previous work focused mainly on mining sequential patterns at a single concept level. In this study, we introduce concept hierarchies into the problem and present several algorithms for discovering multiple-level sequential patterns based on these hierarchies. An experiment was conducted to assess the performance of the proposed algorithms, measured by the relative time spent on completing the mining tasks on two different datasets. The experimental results show that performance depends on the characteristics of the datasets and the pre-defined threshold of minimal support for each level of the concept hierarchy. Based on the experimental results, suggestions are also given on how to select an appropriate algorithm for a given dataset.
Abstract: The proposed system identifies the species of a wood
from the textural features present in its bark. Each wood species
has its own unique bark patterns, which enables the proposed
system to identify it accurately. Automatic wood recognition
systems are not yet well established, mainly due to the lack of
research in this area and the difficulty of obtaining wood
databases. In our work, a wood recognition system has been
designed based on pre-processing techniques and feature
extraction, and by correlating the features of the wood species
for their classification. Texture classification is a problem that
has been studied and tested using different methods due to its
valuable use in various pattern recognition problems, such as wood
recognition and rock classification. The most popular technique
for textural classification is the Gray-Level Co-occurrence Matrix
(GLCM). The features extracted from the enhanced images using the
GLCM are correlated, which determines the classification between
the various wood species. The results obtained show a high
recognition accuracy, indicating that the techniques used are
suitable to be implemented for commercial purposes.
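As an illustration of the GLCM technique named above, a minimal co-occurrence matrix with two common Haralick features (contrast and energy) can be computed like this; the right-neighbour offset and the 4-level toy image are illustrative assumptions, not the paper's configuration.

```python
def glcm_features(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for offset (dx, dy),
    plus two common Haralick features: contrast and energy."""
    h, w = len(img), len(img[0])
    counts = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                counts[img[y][x]][img[y2][x2]] += 1   # co-occurring pair
    total = sum(sum(row) for row in counts) or 1
    p = [[c / total for c in row] for row in counts]  # normalize to probabilities
    contrast = sum(p[i][j] * (i - j) ** 2
                   for i in range(levels) for j in range(levels))
    energy = sum(c * c for row in p for c in row)
    return p, contrast, energy

# toy 4-level image; each pixel is compared with its right-hand neighbour
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
p, contrast, energy = glcm_features(img, levels=4)
```

Feature vectors of contrast, energy and similar statistics, one per bark image, are what the classification stage would then compare across species.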
Abstract: The main problems of data-centric and open-source
projects are the large number of developers and changes to the
core framework. The Model-View-Controller (MVC) design pattern has
significantly improved the development and adjustment of complex
projects. Entity Framework, as the Model layer in an MVC
architecture, has simplified communication with the database. How
often are these new technologies used, and do they have the
potential for designing a more efficient Enterprise Resource
Planning (ERP) system that is better suited to accountants?
Abstract: Different methods containing biometric algorithms are
presented for eigenface-based detection, including face
recognition, identification and verification. The aim of this
research is to manage the critical processing stages (accuracy,
speed, security and monitoring) of face activities with the
flexibility of searching and editing a secure authorized database.
In this paper we implement different techniques, such as eigenface
vector reduction using texture and shape vector phenomena for
complexity removal, while density matching scores with Face
Boundary Fixation (FBF) extract the most likely characteristics in
this media processing content. We examine the development and
performance efficiency of the database by applying our algorithms
in both the recognition and detection phases. Our results show
encouraging gains in accuracy and security, with better
achievement than a number of previous approaches in all the above
processes.
Abstract: In this paper we explore the application of a formal proof system to verification problems in cryptography. Cryptographic properties concerning the correctness or security of cryptographic algorithms are of great interest. Besides some basic lemmata, we explore an implementation of a complex function that is used in cryptography. More precisely, we describe formal properties of this implementation that we computer-prove. We describe formalized probability distributions (σ-algebras, probability spaces and conditional probabilities), given in the formal language of the formal proof system Isabelle/HOL. Moreover, we computer-prove Bayes' formula. We also describe an application of the presented formalized probability distributions to cryptography. Furthermore, this paper shows that computer proofs of complex cryptographic functions are possible by presenting an implementation of the Miller-Rabin primality test that admits formal verification. Our achievements are a step towards the computer verification of cryptographic primitives and describe a basis for computer verification in cryptography. Computer verification can be applied to further problems in cryptographic research if the corresponding basic mathematical knowledge is available in a database.
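The Miller-Rabin test mentioned above is standard; an ordinary (unverified) Python implementation looks like the following, for comparison with the formally verified Isabelle/HOL version the paper describes. The small-prime filter and round count are conventional choices, not taken from the paper.

```python
import random

def miller_rabin(n, rounds=20, seed=1):
    """Probabilistic primality test: False means definitely composite,
    True means prime with error probability at most 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):       # quick trial division
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2**r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    rng = random.Random(seed)            # fixed seed only for reproducibility
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)
        x = pow(a, d, n)                 # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # a is a witness: n is composite
    return True
```

The formal-verification effort the paper reports is precisely about proving that such a procedure meets its probabilistic specification.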
Abstract: Object-Relational Databases (ORDBs) are more complex in
nature than traditional relational databases because they combine
the characteristics of object-oriented concepts with the
relational features of conventional databases. The design of an
ORDB demands an efficient, quality schema that considers
structural, functional and componential traits. The internal
quality of the schema is assured by metrics that measure the
relevant attributes. This is extended to substantiate the
understandability, usability and reliability of the schema, thus
assuring its external quality. This work institutes a
formalization of ORDB metrics: metric definition, evaluation
methodology and calibration of the metrics. Three ORDB schemas
were used to conduct the evaluation and the formalization of the
metrics. The metrics are calibrated using content- and
criteria-related validity based on their measurability,
consistency and reliability. Nominal and summative scales are
derived from the evaluated metric values and are standardized.
Future work pertaining to ORDB metrics forms the concluding note.
Abstract: The third phase of the Web, the Semantic Web, requires many web pages that are annotated with metadata. A crucial question is therefore where to acquire this metadata. In this paper we propose a semi-automatic method to annotate the texts of documents and web pages, employing a quite comprehensive knowledge base to categorize instances with regard to an ontology. The approach is evaluated against manual annotations and against one of the most popular annotation tools, which works in the same way as our tool. The approach, an annotation tool for the Semantic Web, is implemented in the .NET framework and uses WordNet as its knowledge base.
Abstract: Dual motor drives fed by a single inverter are
purposely designed to reduce size and cost with respect to
single motor drives fed by a single inverter. Previous research
on dual motor drives has focused only on modulation and
averaging techniques; only a few studies examine the
performance of the drives with speed controllers other than the
Proportional-Integral (PI) controller. This paper presents a
detailed comparative study of fuzzy rule bases in Fuzzy Logic
speed Controllers (FLCs) for dual Permanent Magnet Synchronous
Motor (PMSM) drives. Two fuzzy speed controllers, a standard
and a simplified one, are designed, and the results are
compared and evaluated. The standard fuzzy controller consists
of 49 rules, while the proposed controller consists of 9 rules
determined by selecting only the most dominant rules. Both
designs are compared over a wide range of speeds, and the
robustness of both controllers to load disturbance changes is
tested to demonstrate the effectiveness of the
simplified/reduced rule base.
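The abstract does not list the 9 dominant rules. As an illustration of a reduced fuzzy rule base of that style, a 3x3 (9-rule) speed controller over speed error and change of error, with triangular memberships and weighted-singleton (zero-order Sugeno) defuzzification, might be sketched as follows; the membership parameters and the rule table are illustrative assumptions, not the paper's design.

```python
def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed_controller(error, d_error):
    """9-rule fuzzy controller: sets N/Z/P on each input, singleton outputs,
    weighted-average (zero-order Sugeno) defuzzification."""
    sets = {"N": (-2.0, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 2.0)}
    out = {"N": -1.0, "Z": 0.0, "P": 1.0}   # output singletons
    rules = {("N", "N"): "N", ("N", "Z"): "N", ("N", "P"): "Z",
             ("Z", "N"): "N", ("Z", "Z"): "Z", ("Z", "P"): "P",
             ("P", "N"): "Z", ("P", "Z"): "P", ("P", "P"): "P"}
    num = den = 0.0
    for (e_set, de_set), o_set in rules.items():
        w = min(tri(error, *sets[e_set]), tri(d_error, *sets[de_set]))
        num += w * out[o_set]
        den += w
    return num / den if den else 0.0
```

A 49-rule controller would use seven linguistic sets per input in the same structure; the 9-rule version trades that resolution for a much smaller table.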
Abstract: An automatic method for the extraction of feature points for face-based applications is proposed. The system is based upon volumetric feature descriptors, which in this paper have been extended to incorporate scale space. The method is robust to noise and has the ability to extract local and holistic features simultaneously from faces stored in a database. The extracted features are stable over a range of faces, with results indicating that, in terms of intra-ID variability, the technique has the ability to outperform manual landmarking.