The Application of Adaptive Tabu Search Algorithm and Averaging Model to the Optimal Controller Design of Buck Converters

This paper presents the application of an artificial intelligence technique, adaptive tabu search, to the controller design of a buck converter. The averaging model derived from the DQ and generalized state-space averaging methods is used to simulate the system during the search process. Simulations using this averaging model require far less computational time than the full-topology model from software packages, which makes it well suited to this work, in which repeated simulation is needed while searching for the best solution. The results show that the proposed design technique provides better output waveforms than those obtained from the classical design method.
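The paper's adaptive variant and averaging model are not reproduced here; the sketch below is only a generic tabu-search loop for tuning a pair of controller gains against a stand-in cost function, with the neighbourhood step, tabu length, and toy cost all assumed for illustration.

```python
import random

def tabu_search(cost, initial, step=0.1, iters=200, tabu_len=20):
    """Generic tabu search over a vector of controller gains (illustrative only)."""
    best = current = tuple(initial)
    best_cost = cost(best)
    tabu = []                                   # short-term memory of visited solutions
    for _ in range(iters):
        # perturb the current gains to generate neighbouring candidates
        neighbours = [tuple(g + random.uniform(-step, step) for g in current)
                      for _ in range(10)]
        # keep moves that are not tabu, or that beat the best so far (aspiration)
        candidates = [n for n in neighbours if n not in tabu or cost(n) < best_cost]
        if not candidates:
            continue
        current = min(candidates, key=cost)
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)                         # expire the oldest tabu entry
        if cost(current) < best_cost:
            best, best_cost = current, cost(current)
    return best, best_cost

if __name__ == "__main__":
    # toy cost: squared distance of (Kp, Ki) from an arbitrary "ideal" pair
    ideal = (2.0, 0.5)
    cost = lambda gains: sum((g - i) ** 2 for g, i in zip(gains, ideal))
    print(tabu_search(cost, initial=(0.0, 0.0)))
```

In the paper, each cost evaluation would instead run the averaging-model simulation of the converter, which is exactly why a fast model matters for the repeated calls inside this loop.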

Decision Maturity Framework: Introducing Maturity In Heuristic Search

Heuristic search methodologies explore a space of possible solutions toward a "satisfactory" solution, guided by "hints" estimated from problem-specific knowledge, and different research communities use different such methodologies. Unfortunately, these hints are often immature and can hinder the search through premature convergence, caused by a loss of diversity in the search space that leads to a total implosion and ultimately to fitness stagnation of the population. In this paper, a novel Decision Maturity Framework (DMF) is introduced as a solution to this problem. The framework improves the decision on the direction of the search by letting hints mature sufficiently before using them. Ideas from this framework are injected into the particle swarm optimization methodology, and results were obtained under both static and dynamic environments. The results show that decision maturity prevents premature convergence to a high degree.
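The DMF itself is not specified here; the sketch below shows only a standard particle swarm update plus a hypothetical "maturity" gate that delays adoption of a new global best until it has persisted for several iterations, which is one plausible reading of letting hints mature before acting on them. All parameters and the gating rule are assumptions, not the paper's framework.

```python
import random

def pso_with_maturity(cost, dim=2, swarm=20, iters=100, maturity=3):
    """Standard PSO plus a hypothetical maturity gate on the global best."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest, g_cost = min(((p[:], cost(p)) for p in pos), key=lambda t: t[1])
    candidate, seen = None, 0          # candidate global best and how long it has persisted
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        best_p = min(pbest, key=cost)
        if cost(best_p) < g_cost:
            # do not adopt a new best immediately; require it to persist ("mature")
            if candidate == best_p:
                seen += 1
            else:
                candidate, seen = best_p[:], 1
            if seen >= maturity:
                gbest, g_cost = best_p[:], cost(best_p)
    return gbest, g_cost

if __name__ == "__main__":
    sphere = lambda x: sum(v * v for v in x)   # toy static benchmark
    print(pso_with_maturity(sphere))
```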

An Experimental Multi-Agent Robot System for Operating in Hazardous Environments

In this paper, a multi-agent robot system consisting of four robots is presented. The robots can automatically enter and patrol a hazardous environment, such as a building contaminated with a virus or a factory leaking hazardous gas. Each robot can also avoid obstacles and search for victims. Several operation modes are designed, including remote control, obstacle avoidance, and automatic searching.

Utilizing Ontologies Using Ontology Editor for Creating Initial Unified Modeling Language (UML) Object Model

One problem in object-oriented software development is the difficulty of finding appropriate and suitable objects with which to start the system. In this work, ontologies support object discovery in the initial stage of object-oriented software development. Many studies have tried to demonstrate the strong connection between object models and ontologies. Constructing an ontology from an object model, known as ontology engineering, is feasible; this research aims to show that the reverse direction, building an object model from ontologies, is also promising and practical. Ontology classes are available online for many specific domains and can be located with semantic search engines. Many supporting tools exist; this research uses the Protégé ontology editor and Visual Paradigm, which work well together. The approach is demonstrated on a real case study using ontology classes in the travel/tourism domain, where classes, properties, and relationships from more than two ontologies must be combined to generate the object model. The paper presents a simple methodological framework that explains the process of discovering objects. The results show that the framework is valuable and leaves room for expansion. Reusing existing ontologies is much cheaper than building new ones from scratch; more ontologies are becoming available on the web, online ontology libraries for storing and indexing them are growing in number and demand, and semantic and ontology search engines have started to appear to facilitate their search and retrieval.
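The Protégé/Visual Paradigm workflow described above is manual and tool-driven; purely as an illustration of the kind of ontology-to-object-model mapping involved, the sketch below uses rdflib to list OWL classes and their datatype properties as candidate UML classes and attributes. The file name, the RDF/XML format assumption, and the simple class/attribute mapping are all assumptions, not the paper's method.

```python
from rdflib import Graph, URIRef, RDF, RDFS, OWL

def local_name(uri):
    """Crude local-name extraction from a URI (illustrative only)."""
    return str(uri).rsplit("#", 1)[-1].rsplit("/", 1)[-1]

def ontology_to_object_model(path):
    """Map OWL classes and datatype properties to candidate UML classes/attributes."""
    g = Graph()
    g.parse(path, format="xml")                  # assumes an RDF/XML ontology file
    model = {local_name(c): [] for c in g.subjects(RDF.type, OWL.Class)
             if isinstance(c, URIRef)}           # skip anonymous class expressions
    for prop in g.subjects(RDF.type, OWL.DatatypeProperty):
        for domain in g.objects(prop, RDFS.domain):
            if local_name(domain) in model:
                model[local_name(domain)].append(local_name(prop))
    return model                                 # {class name: [attribute names]}

if __name__ == "__main__":
    # "travel.owl" is a placeholder for a downloaded travel/tourism ontology
    for cls, attrs in ontology_to_object_model("travel.owl").items():
        print(cls, attrs)
```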

On the Verification of Power Nap Associated with Stage 2 Sleep and Its Application

One of the most important causes of accidents is driver fatigue. To reduce the accident rate, a driver who feels sleepy needs a quick nap, so determining the minimum effective nap duration is a challenging problem. The purpose of this paper is twofold: to investigate the shortest effective nap duration and its relationship with stage 2 sleep, and to develop an automatic stage 2 sleep detection and alarm device. An experiment with 21 subjects shows that waking subjects after 3-5 minutes of stage 2 sleep effectively reduces sleepiness. Furthermore, the automatic stage 2 sleep detection and alarm device achieves a real-time detection accuracy of approximately 85%, which is comparable with a commercial sleep-lab system.

Dynamic Inverted Index Maintenance

The majority of today's IR systems base the IR task on two main processes: indexing and searching. There is a special group of dynamic IR systems in which both processes happen simultaneously; such a system discards obsolete information and inserts new information while still answering user queries. In these dynamic, time-critical text document databases, it is often important to modify index structures quickly as documents arrive. This paper presents a dynamization method that may be used for this task. Experimental results show that the dynamization process is feasible and that it guarantees the response time for both query operations and index updates.
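The paper's dynamization method is not reproduced here; the sketch below only illustrates the maintenance problem it addresses, namely an inverted index that stays queryable while documents are inserted and deleted. The in-memory structure and whitespace tokenizer are simplifying assumptions.

```python
from collections import defaultdict

class DynamicInvertedIndex:
    """Toy inverted index supporting insertion, deletion, and querying in one structure."""

    def __init__(self):
        self.postings = defaultdict(set)   # term -> set of document ids
        self.docs = {}                     # doc id -> original text

    def insert(self, doc_id, text):
        self.docs[doc_id] = text
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def delete(self, doc_id):
        text = self.docs.pop(doc_id, "")
        for term in text.lower().split():
            self.postings[term].discard(doc_id)
            if not self.postings[term]:
                del self.postings[term]    # drop empty posting lists

    def search(self, query):
        terms = query.lower().split()
        if not terms:
            return set()
        result = set(self.postings.get(terms[0], set()))
        for term in terms[1:]:
            result &= self.postings.get(term, set())
        return result

if __name__ == "__main__":
    idx = DynamicInvertedIndex()
    idx.insert(1, "dynamic inverted index maintenance")
    idx.insert(2, "static index construction")
    idx.delete(2)
    print(idx.search("index"))             # {1}
```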

An Evaluation Model for Semantic Enablement of Virtual Research Environments

The Tropical Data Hub (TDH) is a virtual research environment that provides researchers with an e-research infrastructure to congregate significant tropical data sets for data reuse, integration, searching, and correlation. However, researchers often require data and metadata synthesis across disciplines for cross-domain analyses and knowledge discovery. A triplestore offers a semantic layer that supports these synthesis requirements by automating latent linkages in the data and metadata, enabling a more intelligent method of search. At present, the benchmarks available to decide which triplestore is best suited to an application environment like the TDH are limited to performance. This paper describes a new evaluation tool developed to analyze both features and performance. The tool comprises a weighted decision matrix that evaluates the interoperability, functionality, performance, and support availability of a range of integrated and native triplestores and ranks them according to the requirements of the TDH.
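The TDH's actual criteria and weights are not listed in the abstract; the snippet below only illustrates the weighted-decision-matrix computation itself, with made-up criteria weights, scores, and triplestore labels.

```python
def weighted_scores(weights, scores):
    """Rank alternatives by the weighted sum of their per-criterion scores."""
    ranked = {name: sum(weights[c] * s for c, s in per_criterion.items())
              for name, per_criterion in scores.items()}
    return sorted(ranked.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical criterion weights and 1-5 scores for two unnamed triplestores.
weights = {"interoperability": 0.30, "functionality": 0.30,
           "performance": 0.25, "support": 0.15}
scores = {
    "triplestore A": {"interoperability": 4, "functionality": 5, "performance": 3, "support": 4},
    "triplestore B": {"interoperability": 5, "functionality": 3, "performance": 4, "support": 3},
}
print(weighted_scores(weights, scores))   # highest weighted total first
```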

Equivalent Transformation for Heterogeneous Traffic Cellular Automata

Understanding driving behavior is a complicated research topic. Accurately describing the speed, flow, and density of multiclass traffic requires an adequate model. In this study, we propose the concept of the standard passenger car equivalent (SPCE), instead of the passenger car equivalent (PCE), to estimate the influence of heavy vehicles and slow cars. A traffic cellular automata model is employed to calibrate and validate the results. According to the simulation results, the SPCE transformations achieve good accuracy.

Exploiting Query Feedback for Efficient Query Routing in Unstructured Peer-to-peer Networks

Unstructured peer-to-peer networks are popular due to their robustness and scalability. Query schemes used in unstructured peer-to-peer networks, such as flooding and interest-based shortcuts, suffer from problems such as large communication overhead and long response delays. Routing indices are a popular approach to peer-to-peer query routing; they help the routing process learn routes from the feedback collected. In an unstructured network with no global information available, an efficient, low-cost routing approach is needed. In this paper, we propose a novel mechanism for query-feedback-oriented routing indices that achieves routing efficiency in unstructured networks at minimal cost. The approach also applies information retrieval techniques so that routing is based not only on query hits but also on query content. Experiments show that the proposed mechanism is more efficient than flood-based routing.
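The paper's exact update rule is not given in the abstract; the sketch below shows one simple feedback-driven routing index in which a peer tracks per-neighbour hit counts per topic and forwards queries to the neighbour with the best observed hit ratio. The topic granularity, optimistic default, and scoring rule are assumptions.

```python
from collections import defaultdict

class RoutingIndex:
    """Per-neighbour statistics used to pick where to forward a query (illustrative)."""

    def __init__(self, neighbours):
        self.hits = {n: defaultdict(int) for n in neighbours}   # neighbour -> topic -> hits
        self.sent = {n: defaultdict(int) for n in neighbours}   # neighbour -> topic -> queries sent

    def choose(self, topic):
        # prefer the neighbour with the highest observed hit ratio for this topic
        def ratio(n):
            sent = self.sent[n][topic]
            return self.hits[n][topic] / sent if sent else 0.5  # optimistic default for unexplored
        return max(self.hits, key=ratio)

    def feedback(self, neighbour, topic, hit):
        self.sent[neighbour][topic] += 1
        if hit:
            self.hits[neighbour][topic] += 1

if __name__ == "__main__":
    ri = RoutingIndex(["peerA", "peerB"])
    ri.feedback("peerA", "music", hit=True)
    ri.feedback("peerB", "music", hit=False)
    print(ri.choose("music"))   # peerA
```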

A Proposed Information Extraction Technique in Engineering Drawing for Reuse Design

A large number of engineering drawings are referred to during the planning process, and changes to them produce good engineering designs that meet the demand for new models. The advantage of reusing engineering designs is that it allows continuous product development, further improves the quality of product development, and thus reduces development costs. However, retrieving existing engineering drawings is time-consuming, complex, and error-prone. An engineering drawing file searching system is proposed to solve this problem; engineers and designers need a medium that enables them to search for drawings in the most effective way. This paper lays out the proposed research project in the area of information extraction from engineering drawings.

A Novel Genetic Algorithm Designed for Hardware Implementation

A new genetic algorithm, termed the 'optimum individual monogenetic genetic algorithm' (OIMGA), is presented whose properties have been deliberately designed to be well suited to hardware implementation. Specific design criteria were to ensure fast access to the individuals in the population, to keep the required silicon area for hardware implementation to a minimum and to incorporate flexibility in the structure for the targeting of a range of applications. The first two criteria are met by retaining only the current optimum individual, thereby guaranteeing a small memory requirement that can easily be stored in fast on-chip memory. Also, OIMGA can be easily reconfigured to allow the investigation of problems that normally warrant either large GA populations or individuals many genes in length. Local convergence is achieved in OIMGA by retaining elite individuals, while population diversity is ensured by continually searching for the best individuals in fresh regions of the search space. The results given in this paper demonstrate that both the performance of OIMGA and its convergence time are superior to those of a range of existing hardware GA implementations.
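OIMGA's hardware-level details are beyond a short sketch, but its core software behaviour as described above (retain only the current optimum individual, mutate around it, and restart in fresh regions of the search space to preserve diversity) can be approximated as follows; the gene length, mutation rate, and restart schedule are illustrative assumptions.

```python
import random

def oimga_like(fitness, genes=16, local_iters=200, restarts=10, rate=0.1):
    """Keep only one elite individual; mutate around it and restart in fresh regions."""
    best, best_fit = None, float("-inf")
    for _ in range(restarts):
        # start each restart from a fresh random region of the search space
        elite = [random.randint(0, 1) for _ in range(genes)]
        elite_fit = fitness(elite)
        for _ in range(local_iters):
            child = [g ^ 1 if random.random() < rate else g for g in elite]
            child_fit = fitness(child)
            if child_fit > elite_fit:            # retain only the better individual
                elite, elite_fit = child, child_fit
        if elite_fit > best_fit:
            best, best_fit = elite, elite_fit
    return best, best_fit

if __name__ == "__main__":
    onemax = lambda bits: sum(bits)              # toy objective: maximise the number of 1s
    print(oimga_like(onemax))
```

Because only the elite individual is stored at any time, the memory footprint stays small regardless of genome length, mirroring the on-chip memory argument made in the abstract.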

Simulation of Enhanced Biomass Gasification for Hydrogen Production using iCON

Due to the environmental and price issues of the current energy crisis, scientists and technologists around the globe are searching intensively for new, lower-impact forms of clean energy that will reduce the heavy dependency on fossil fuels. In particular, hydrogen can be produced economically from biomass via thermochemical processes, including pyrolysis and gasification, and the yield can be further enhanced through in-situ carbon dioxide removal using calcium oxide. This work focuses on the synthesis and development of the flowsheet for the enhanced biomass gasification process in PETRONAS's iCON process simulation software. The hydrogen prediction model is run at operating temperatures between 600 and 1000 °C at atmospheric pressure. The effects of temperature, steam-to-biomass ratio, and adsorbent-to-biomass ratio were studied, and a hydrogen mole fraction of 0.85 is predicted in the product gas. The results are also compared with experimental data from the literature. The preliminary economic potential of the developed system is RM 12.57 × 10^6 (equivalent to USD 3.77 × 10^6) annually, which shows the economic viability of the process.

The Negative Effect of Traditional Loops Style on the Performance of Algorithms

A new algorithm, Character-Comparison to Character-Access (CCCA), is developed to test the effect of two factors on the performance of the checking operation in string searching: 1) converting character comparison and number comparison into character access, and 2) the starting point of the checking. An experiment is performed using both English text and DNA text of different sizes. The results are compared with five algorithms, namely Naive, BM, Inf_Suf_Pref, Raita, and Cycle. The results suggest that CCCA improves the average number of total comparisons by up to 35%, and that it improves the clock time relative to the other algorithms by between 22.13% and 42.33%.

Analyzing the Relation of Community Group for Research Paper Bookmarking by Using Association Rule

Searching the internet is currently very popular, especially in the academic field, and the huge amount of educational information, such as research papers, overloads users. Community-based web sites have therefore been developed to help users find information more easily by customizing a site to the needs of each specific user or set of users. This paper proposes using association rules to analyze community groups in research paper bookmarking. A set of design goals for community group frameworks is developed and discussed. Additionally, the initial relations between the antecedents and consequents of rules discovered in user groups are analyzed to generate ideas for improving the ranking of search results and for developing a recommender system.
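Association-rule discovery over bookmark data can be illustrated with a small support/confidence computation; the transaction format, thresholds, and toy bookmark groups below are assumptions, not the paper's dataset or tool.

```python
from itertools import combinations
from collections import Counter

def association_rules(transactions, min_support=0.4, min_confidence=0.6):
    """Enumerate simple A -> B rules from item pairs meeting support/confidence thresholds."""
    n = len(transactions)
    items = Counter(item for t in transactions for item in set(t))
    pairs = Counter(frozenset(p) for t in transactions for p in combinations(set(t), 2))
    rules = []
    for pair, count in pairs.items():
        if count / n < min_support:
            continue
        a, b = tuple(pair)
        for ante, cons in ((a, b), (b, a)):
            confidence = count / items[ante]
            if confidence >= min_confidence:
                rules.append((ante, cons, count / n, confidence))
    return rules

# toy bookmark groups: each transaction is the set of papers one user bookmarked
bookmarks = [{"p1", "p2"}, {"p1", "p2", "p3"}, {"p2", "p3"}, {"p1", "p2"}]
for ante, cons, sup, conf in association_rules(bookmarks):
    print(f"{ante} -> {cons}  support={sup:.2f}  confidence={conf:.2f}")
```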

Image Search by Features of Sorted Gray level Histogram Polynomial Curve

Image searching has always been a problem, especially when images are not properly managed or are distributed over different locations. Different techniques are currently used for image search. At one extreme, many features of each image are captured and stored to obtain better results, but storing and managing such features is itself time-consuming. At the other extreme, if fewer features are stored, the accuracy is not satisfactory. The same image stored with different visual properties can reduce accuracy further. In this paper we present a new concept: using polynomials fitted to the sorted gray-level histogram of the image. This approach needs less overhead and can cope with differences in the visual properties of an image.
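The exact feature construction is described only at a high level; the sketch below shows one plausible reading using NumPy and Pillow: compute the gray-level histogram, sort it, fit a low-degree polynomial to the sorted curve, and compare images by the distance between coefficient vectors. The polynomial degree, normalisation, and file names are assumptions, not the authors' implementation.

```python
import numpy as np
from PIL import Image

def sorted_histogram_features(path, degree=5):
    """Polynomial coefficients fitted to the sorted gray-level histogram of an image."""
    gray = np.asarray(Image.open(path).convert("L"))
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    hist = np.sort(hist / hist.sum())            # normalised, sorted histogram curve
    x = np.linspace(0.0, 1.0, len(hist))
    return np.polyfit(x, hist, degree)           # compact feature vector

def feature_distance(f1, f2):
    return float(np.linalg.norm(f1 - f2))

if __name__ == "__main__":
    # "query.jpg" and "candidate.jpg" are placeholder file names
    fa = sorted_histogram_features("query.jpg")
    fb = sorted_histogram_features("candidate.jpg")
    print(feature_distance(fa, fb))              # smaller distance = more similar histograms
```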

Fast Database Indexing for Large Protein Sequence Collections Using Parallel N-Gram Transformation Algorithm

With the rapid development of the life sciences and the flood of genomic information, the need for faster and more scalable searching methods has become urgent. One of the approaches investigated is indexing. Indexing methods fall into three categories: length-based index algorithms, transformation-based algorithms, and mixed-technique algorithms. In this research, we focus on the transformation-based methods. We embed the N-gram method into the transformation-based method to build an inverted index table, and then apply parallel methods to speed up index building and to reduce the overall retrieval time when querying the genomic database. Our experiments show that the N-gram transformation algorithm is an economical solution that saves both time and space; the index is smaller than the dataset when the N-gram size is 5 or 6. The results of the parallel N-gram transformation algorithm indicate that parallel programming over large datasets is promising and can be improved further.
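The sketch below illustrates the general idea of building an N-gram inverted index in parallel over chunks of sequences with Python's multiprocessing; the value of N, the pool size, the chunking, and the merge step are assumptions rather than the authors' algorithm.

```python
from collections import defaultdict
from multiprocessing import Pool

N = 5   # n-gram length; the paper reports compact indexes for N of 5 and 6

def index_chunk(chunk):
    """Build a partial inverted index (n-gram -> sequence ids) for a chunk of sequences."""
    partial = defaultdict(set)
    for seq_id, seq in chunk:
        for i in range(len(seq) - N + 1):
            partial[seq[i:i + N]].add(seq_id)
    return partial

def build_index(sequences, workers=4):
    """Index chunks in parallel, then merge the partial posting lists."""
    chunks = [list(sequences.items())[i::workers] for i in range(workers)]
    merged = defaultdict(set)
    with Pool(workers) as pool:
        for partial in pool.map(index_chunk, chunks):
            for gram, ids in partial.items():
                merged[gram] |= ids
    return merged

if __name__ == "__main__":
    proteins = {1: "MKTAYIAKQR", 2: "MKTAYLAKQD"}   # toy protein sequences
    index = build_index(proteins, workers=2)
    print(index["MKTAY"])                           # {1, 2}
```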

An Improved Data Mining Method Applied to the Search of Relationship between Metabolic Syndrome and Lifestyles

A data cutting and sorting method (DCSM) is proposed to optimize the performance of data mining. DCSM reduces the calculation time by getting rid of redundant data during the data mining process. In addition, DCSM minimizes the computational units by splitting the database and by sorting data with support counts. In the process of searching for the relationship between metabolic syndrome and lifestyles with the health examination database of an electronics manufacturing company, DCSM demonstrates higher search efficiency than the traditional Apriori algorithm in tests with different support counts.

A Semantic Assistant Agent for Digital Libraries

In this paper we present the Semantic Assistant Agent (SAA), an open-source digital library agent that takes user queries for finding information in the digital library and stores resources' metadata semantically. SAA uses Semantic Web technologies to improve browsing and searching for resources in the digital library. All metadata stored in the library are available in RDF format for querying and processing by SemanSreach, which is part of the SAA architecture. The architecture includes a generic RDF-based model that represents relationships among objects and their components. Queries against these relationships are supported by an RDF triple store.
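SemanSreach and the SAA architecture are not reproduced here; the snippet below only shows the underlying mechanism the abstract relies on: storing resource metadata as RDF triples and querying the relationships with SPARQL via rdflib. The namespace, vocabulary choice, and sample data are illustrative assumptions.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DC

EX = Namespace("http://example.org/library/")   # hypothetical namespace for illustration

g = Graph()
book = EX["item42"]
g.add((book, DC.title, Literal("Digital Libraries and the Semantic Web")))
g.add((book, DC.creator, Literal("A. Author")))
g.add((book, EX.partOf, EX["collection7"]))

# SPARQL query over the stored metadata: titles of items in a given collection
results = g.query(
    """
    SELECT ?title WHERE {
        ?item <http://purl.org/dc/elements/1.1/title> ?title ;
              <http://example.org/library/partOf> <http://example.org/library/collection7> .
    }
    """
)
for row in results:
    print(row.title)
```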

Supply Chain Management: After Business Process Re-Engineering

This paper reviews how an automotive manufacturer, ISUZU HICOM Malaysia Co. Ltd., sustained its supply chain management after business process re-engineering in 2007. One of the authors is currently on industrial attachment and has spent almost six months researching the company's production and operations management system; this study was carried out as part of the attachment program. The results show that delivery lateness and outsourcing are the main barriers affecting productivity. From the gap analysis, the authors found that the new business process operation had improved suppliers' delivery performance.

Optimizing Spatial Trend Detection By Artificial Immune Systems

Spatial trends are among the valuable patterns in geographic databases and play an important role in data analysis and knowledge discovery from spatial data. A spatial trend is a regular change of one or more non-spatial attributes when moving spatially away from a start object. Spatial trend detection is a graph-search problem, so heuristic methods are a good fit. The artificial immune system (AIS) is an evolutionary search and optimization paradigm inspired by the biological immune system. Models based on immune-system principles, such as clonal selection theory, the immune network model, and the negative selection algorithm, are finding increasing application in science and engineering. In this paper, we develop a novel immunological algorithm based on the clonal selection algorithm (CSA) for spatial trend detection. We create a neighborhood graph and neighborhood paths, then select high-affinity spatial trends as antibodies. In an evolutionary process driven by the artificial immune algorithm, the affinity of low-affinity trends is increased through mutation until the stopping condition is satisfied.
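The neighborhood-graph construction for spatial trends is specific to the paper, but the clonal selection loop it builds on can be sketched generically: clone high-affinity candidates, hypermutate them in inverse proportion to their affinity, and reselect the best while injecting fresh random antibodies for diversity. The encoding, clone counts, and toy affinity function below are assumptions, not the paper's algorithm.

```python
import random

def clonal_selection(affinity, dim=2, pop=20, iters=100, clones=5):
    """Generic CLONALG-style loop: clone, hypermutate, and reselect by affinity."""
    population = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        population.sort(key=affinity, reverse=True)
        new_population = []
        for rank, antibody in enumerate(population):
            # higher-affinity antibodies get more clones and smaller mutation steps
            n_clones = max(1, clones - rank // 4)
            step = 0.1 * (rank + 1)
            mutants = [[g + random.gauss(0, step) for g in antibody] for _ in range(n_clones)]
            new_population.append(max(mutants + [antibody], key=affinity))
        # replace the worst antibodies with random newcomers to keep diversity
        new_population.sort(key=affinity, reverse=True)
        population = new_population[:pop - 2] + \
                     [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(2)]
    return max(population, key=affinity)

if __name__ == "__main__":
    affinity = lambda x: -sum(v * v for v in x)     # toy: maximise closeness to the origin
    print(clonal_selection(affinity))
```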