Abstract: Medical services are usually provided in hospitals; however, in developing countries, rural residents have fewer opportunities to access healthcare services because of limited transportation. In Thailand, therefore, charitable organizations provide medical treatment to these people by bringing medical services to operation sites; this is commonly known as a mobile medical service. Operation routing is important for such an organization to reduce its transportation cost so that it can focus more on other important activities, for instance the development of medical apparatus. The vehicle routing problem (VRP) is applied to reduce the studied organization's high transportation cost, using the savings algorithm to find the operation route of minimum total distance while satisfying the available-time constraints of volunteer medical staff.
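The abstract names the savings algorithm without detail; the sketch below shows the classic Clarke-Wright savings computation that such route construction typically starts from (the distance matrix and greedy ordering are illustrative assumptions, not the organization's data):

```python
# Minimal sketch of the Clarke-Wright savings heuristic for route merging.
# The distance matrix below is illustrative, not the studied organization's data.
from itertools import combinations

def savings(dist, depot=0):
    """Savings s(i, j) = d(depot, i) + d(depot, j) - d(i, j) for all site pairs."""
    sites = [i for i in range(len(dist)) if i != depot]
    s = {(i, j): dist[depot][i] + dist[depot][j] - dist[i][j]
         for i, j in combinations(sites, 2)}
    # Candidate merges are considered in order of decreasing savings.
    return sorted(s.items(), key=lambda kv: -kv[1])

dist = [
    [0, 4, 5, 6],   # row 0: depot to each operation site
    [4, 0, 2, 7],
    [5, 2, 0, 3],
    [6, 7, 3, 0],
]
print(savings(dist)[0])  # the pair of sites with the largest saving
```

Routes serving the pair with the largest saving are merged first, subject to feasibility checks such as the staff's available-time constraints.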
Abstract: This paper proposes a method that combines an artificial neural network with particle swarm optimization (PSO) to implement maximum power point tracking (MPPT) by controlling the rotor speed of a wind generator. With measurements of the wind speed, the rotor speed of the wind generator, and the output power, the artificial neural network can be trained and the wind speed estimated. The proposed control system provides a means of searching for the maximum output power of the wind generator even under varying wind speed and load impedance.
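As a rough illustration of the search component, here is a minimal PSO maximizer applied to a hypothetical quadratic power-versus-rotor-speed curve; the curve, bounds, and parameter values are illustrative assumptions, not the paper's turbine model:

```python
import random

def pso_maximize(f, lo, hi, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Basic particle swarm search for the maximum of f on [lo, hi]."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]                        # each particle's best position so far
    pval = [f(x) for x in xs]
    gi = max(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[gi], pval[gi]    # swarm-wide best position
    for _ in range(iters):
        for i in range(n):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))   # clamp to the search range
            v = f(xs[i])
            if v > pval[i]:
                pbest[i], pval[i] = xs[i], v
                if v > gval:
                    gbest, gval = xs[i], v
    return gbest, gval

# Hypothetical power curve: peak output 50 at rotor speed 7.0 (illustrative only).
power = lambda speed: -(speed - 7.0) ** 2 + 50.0
best_speed, best_power = pso_maximize(power, 0.0, 15.0)
```

In the paper's setting the objective would instead be the measured generator output, evaluated online as the rotor speed command changes.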
Abstract: Quantitative research on the degree of incidence between the logistics industry and relevant macroscopic system elements is the basis of reasonable and scientific industrial development policy. At the macro level, the logistics industry system consists of multiple macroscopic agents such as the macro-economy, infrastructure, the social environment, market demand, traditional industry, the industry life cycle, policy and institutional systems, and so on. This paper studies the grey incidence among the macroscopic agents in the logistics industry system. It is demonstrated that the release of logistics services by outsourcing enterprises determines the growth of the logistics industry's size. Although information and communication technology can promote the formation of the modern logistics industry to some extent, the development of the modern logistics industry depends more on the development of the national economy and on investment in the capital assets of the logistics industry.
Abstract: The problem of frequent pattern discovery is defined
as the process of searching for patterns such as sets of features or items that appear in data frequently. Finding such frequent patterns
has become an important data mining task because it reveals associations, correlations, and many other interesting relationships
hidden in a database. Most of the proposed frequent pattern mining
algorithms have been implemented with imperative programming
languages. Such a paradigm is inefficient when the set of patterns is
large and the frequent patterns are long. We suggest a high-level
declarative style of programming applied to the problem of frequent
pattern discovery. We consider two languages: Haskell and Prolog. Our
intuitive idea is that the problem of finding frequent patterns should
be efficiently and concisely implemented via a declarative paradigm
since pattern matching is a fundamental feature supported by most
functional languages and Prolog. Our frequent pattern mining
implementation using the Haskell and Prolog languages confirms our
hypothesis about the conciseness of the programs. Comparative
performance studies of declarative versus imperative programming, in
terms of lines of code, speed, and memory usage, are reported in this
paper.
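As a sketch of the task itself (not the paper's Haskell or Prolog code), a brute-force frequent-itemset miner can be written in a compact declarative style even in Python; the toy transaction database and support threshold are illustrative:

```python
from itertools import combinations

def frequent_patterns(transactions, minsup):
    """All itemsets appearing in at least `minsup` transactions (brute force)."""
    items = sorted({i for t in transactions for i in t})
    support = lambda p: sum(1 for t in transactions if set(p) <= t)
    # A single comprehension plays the role of generate-and-test:
    return {p: support(p)
            for k in range(1, len(items) + 1)
            for p in combinations(items, k)
            if support(p) >= minsup}

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
print(frequent_patterns(db, 3))
```

Real miners avoid the exponential enumeration via the Apriori property or pattern-growth; the point here is only how naturally the specification maps onto a declarative formulation.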
Abstract: This paper investigates the use of mobile phones and
tablets for learning purposes among university students in Saudi
Arabia. For this purpose, an extended Technology Acceptance Model
(TAM) is proposed to analyze the adoption of mobile devices and
smart phones by Saudi university students for accessing course
materials, searching the web for information related to their
discipline, sharing knowledge, completing assignments, and so on.
Abstract: This paper presents an approach for the design of
fuzzy logic power system stabilizers using genetic algorithms. In the
proposed fuzzy expert system, speed deviation and its derivative
have been selected as fuzzy inputs. In this approach the parameters of
the fuzzy logic controllers have been tuned using genetic algorithm.
Incorporating the GA in the design of the fuzzy logic power system
stabilizer adds an intelligent dimension to the stabilizer and
significantly reduces the computational time of the design process. It is
shown in this paper that the system dynamic performance can be
improved significantly by incorporating a genetic-based searching
mechanism. To demonstrate the robustness of the genetic based
fuzzy logic power system stabilizer (GFLPSS), simulation studies on
a multimachine system subjected to small perturbations and a three-phase
fault have been carried out. Simulation results show the superiority
and robustness of the GA-based power system stabilizer compared with a
conventionally tuned controller in enhancing system dynamic
performance over a wide range of operating conditions.
Abstract: Neural processors have shown good results for
detecting a certain character in a given input matrix. In this paper, a
new idea to speed up the operation of neural processors for character
detection is presented. Such processors are designed based on cross
correlation in the frequency domain between the input matrix and the
weights of neural networks. This approach is developed to reduce the
computation steps required by these faster neural networks for the
searching process. The principle of divide and conquer strategy is
applied through image decomposition. Each image is divided into
small sub-images, and each one is then tested separately by
a single faster neural processor. Furthermore, faster character
detection is obtained by using parallel processing techniques to test the
resulting sub-images at the same time using the same number of faster
neural networks. In contrast to using only faster neural processors, the
speed up ratio is increased with the size of the input image when using
faster neural processors and image decomposition. Moreover, the
problem of local subimage normalization in the frequency domain is
solved. The effect of image normalization on the speed up ratio of
character detection is discussed. Simulation results show that local
subimage normalization through weight normalization is faster than
subimage normalization in the spatial domain. The overall speed-up
ratio of the detection process is further increased because the
normalization of weights is done offline.
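The frequency-domain cross-correlation the abstract builds on can be sketched as follows; the naive DFT and toy one-dimensional signals are illustrative assumptions used only to check the correlation theorem, not the paper's neural-processor design:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(N^2), for illustration only)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT matching dft() above."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def xcorr_freq(a, b):
    """Circular cross-correlation of real sequences via the frequency domain:
    r = IDFT(conj(DFT(a)) * DFT(b))."""
    A, B = dft(a), dft(b)
    return [c.real for c in idft([x.conjugate() * y for x, y in zip(A, B)])]

def xcorr_direct(a, b):
    """The same correlation computed directly in the spatial domain."""
    N = len(a)
    return [sum(a[n] * b[(n + m) % N] for n in range(N)) for m in range(N)]

signal  = [1.0, 2.0, 3.0, 4.0]   # toy input row
pattern = [0.0, 1.0, 0.0, 0.0]   # toy weight row
freq_result   = xcorr_freq(pattern, signal)
direct_result = xcorr_direct(pattern, signal)
```

The speed advantage comes from replacing the naive DFT with an FFT, so the correlation costs O(N log N) instead of O(N^2) per window.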
Abstract: In this paper, we propose an efficient hierarchical DNA
sequence search method to improve search speed while keeping the
accuracy constant. For a given query DNA sequence,
firstly, a fast local search method using histogram features is used as a
filtering mechanism before scanning the sequences in the database.
An overlapping scheme is newly added to improve the robustness
of the algorithm. A large number of DNA sequences with low
similarity are thus excluded from the later search. The Smith-Waterman
algorithm is then applied to each remaining sequence. Experimental
results using GenBank sequence data show that the proposed method,
combining histogram information with the Smith-Waterman algorithm, is
more efficient for DNA sequence search.
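The Smith-Waterman step admits a compact dynamic-programming sketch; the match, mismatch, and gap scores below are illustrative, since the paper's parameter choices are not given in the abstract:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]     # DP matrix, zero-initialized
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            # Local alignment: scores never drop below zero.
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACGTT", "ACG"))
```

Because each cell costs O(1), a query of length m against a database sequence of length n costs O(mn), which is exactly why the histogram filter is used to shrink the set of sequences that ever reach this stage.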
Abstract: This paper studies a methodology for building the knowledge needed to plan adequate punches in order to complete the task of strip layout for shearing processes using progressive dies. The proposed methodology uses die design rules and the characteristics of different types of punches to classify them into five groups: prior use (the punches must be used first), posterior use (must be used last), compatible use (may be used together), sequential use (certain punches must precede some others), and simultaneous use (must be used together). With these five groups of punches, the search space of feasible designs is greatly reduced, and superimposition becomes a more effective method of punch layout. Since the superimposition scheme generates many feasible solutions, an evaluation function based on the number of stages, moment balancing, and strip stability is developed to help designers find better solutions.
Abstract: This paper proposes a new model to support user
queries on postgraduate research information at Universiti Tenaga
Nasional. The ontology to be developed will contribute towards
shareable and reusable domain knowledge that makes knowledge
assets intelligently accessible to both people and software. This work
adapts a methodology for ontology development based on the
framework proposed by Uschold and King. The concepts and
relations in this domain are represented in a class diagram using the
Protégé software. The ontology will be used to support a menu-driven
query system that assists students in searching for
information related to postgraduate research at the university.
Abstract: This paper proposes a unified model for multimedia data retrieval that includes data representatives, content representatives, an index structure, and search algorithms. The multimedia data are defined as k-dimensional signals indexed in a multidimensional k-tree structure. The benefits of the k-tree unified model were demonstrated by running the data retrieval application on a six-node networked test-bed cluster. The tests were performed with two retrieval algorithms: one that allows parallel searching using a single feature, and a second that performs a weighted cascade search for multi-feature querying. The experiments show a significant reduction of retrieval time while maintaining the quality of the results.
Abstract: The number of documents being created increases at an
accelerating pace, while most of them cover already known topics and
few introduce new concepts. This fact has started a new era in the
information retrieval discipline, with its own special requirements:
digging into topics and concepts and finding subtopics or relations
between topics. Until now, IR research has been interested in
retrieving documents about a general topic or clustering documents
under generic subjects. However, these conventional approaches cannot
go deep into the content of documents, which makes it difficult for
people to reach the right documents they are searching for. We
therefore need new ways of mining document sets where the critical
point is to know much about the contents of the documents. As a
solution, we propose to enhance LSI, one of the proven IR techniques,
by supporting its vector space with n-gram forms of words. The
positive results we obtained are shown in two different application
areas of the IR domain: querying a document database and clustering
documents in the document database.
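The n-gram side of the idea can be sketched as follows; the trigram size, boundary padding, and example words are illustrative assumptions, and the sketch shows only the feature extraction and similarity, not the full LSI decomposition:

```python
from collections import Counter
from math import sqrt

def ngrams(word, n=3):
    """Character n-grams of a word, padded to capture word boundaries."""
    w = f"#{word}#"
    return Counter(w[i:i+n] for i in range(len(w) - n + 1))

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    norm = lambda c: sqrt(sum(x * x for x in c.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0

# Morphological variants share most of their trigrams, so they score high
# even though they are distinct terms in a plain word-level vector space.
sim_related   = cosine(ngrams("searching"), ngrams("searched"))
sim_unrelated = cosine(ngrams("searching"), ngrams("cluster"))
```

Feeding such n-gram dimensions into the term-document matrix before the SVD is one plausible reading of "supporting the vector space with n-gram forms of words".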
Abstract: Application of Information Technology (IT) has
revolutionized the functioning of business all over the world. Its
impact has been felt most among information-dependent industries, of
which tourism is one. The conceptual framework in this study
represents an innovative travel-information searching system on
mobile devices, used as a tool to deliver travel information (such as
hotels, restaurants, tourist attractions, and souvenir shops) to each
user. Travelers are segmented using data mining techniques that
identify tourists' behaviour patterns, which are then matched with
tourism products and services. The system is designed for incremental
knowledge learning. It is a marketing strategy to support businesses
in responding to travelers' demand effectively.
Abstract: This paper sets forth the possibility and importance of applying data mining to Web log mining and points out some problems in conventional search engines. It then offers an improved algorithm based on the original AprioriAll algorithm, which has been widely used in Web log mining. The new algorithm adds the User ID property at every step of producing the candidate set and every step of scanning the database, using it to decide whether an item in the candidate set should be put into the large set from which the next candidate set is produced. Meanwhile, in order to reduce the number of database scans, the new algorithm uses the Apriori property to limit the size of the candidate set as soon as it is produced. Test results show that the improved algorithm has lower time and space complexity, suppresses noise better, and fits memory capacity better.
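The abstract does not fully specify the User-ID extension, so the sketch below shows only the standard Apriori pruning property that the improved algorithm relies on to limit candidate-set size; the toy frequent sets are illustrative:

```python
from itertools import combinations

def apriori_prune(candidates, frequent_prev):
    """Keep only candidates whose every (k-1)-subset is already frequent:
    the Apriori property used to limit the candidate set before a DB scan."""
    keep = []
    for c in candidates:
        if all(frozenset(s) in frequent_prev
               for s in combinations(c, len(c) - 1)):
            keep.append(c)
    return keep

# Illustrative frequent 2-itemsets and 3-item candidates (not the paper's data).
freq2 = {frozenset(p) for p in [("a", "b"), ("a", "c"), ("b", "c"), ("b", "d")]}
cands = [("a", "b", "c"), ("a", "b", "d"), ("b", "c", "d")]
print(apriori_prune(cands, freq2))
```

Each candidate discarded here is one fewer itemset to count during the next database scan, which is the mechanism behind the claimed reduction in scanning cost.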
Abstract: The latest Geographic Information System (GIS)
technology makes it possible to administer the spatial components of
daily “business objects” in the corporate database and to apply
suitable geographic analysis efficiently in a desktop-focused
application. We can use wireless Internet technology to transfer
spatial data from server to client or vice versa. However, wireless
Internet suffers from system bottlenecks that can make data transfer
inefficient, because of the large volume of spatial data. Optimizing
the process of transferring and retrieving data is therefore an
essential issue. An appropriate choice between the R-tree and
Quadtree spatial data indexing methods can optimize the process.
With the rapid proliferation of
these databases in the past decade, extensive research has been
conducted on the design of efficient data structures to enable fast
spatial searching. Commercial database vendors like Oracle have also
started implementing these spatial indexes to cater to large and
diverse GIS applications. This paper focuses on the choice between
R-tree and Quadtree spatial indexing using an Oracle spatial database
in a mobile GIS application. Under our experimental conditions, using
the Quadtree and R-tree spatial data indexing methods in a single
spatial database reduced retrieval time by up to 42.5%.
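For intuition about what a quadtree index buys, here is a minimal point quadtree with a rectangular range query; this is a didactic sketch, not Oracle Spatial's implementation, and the node capacity and sample points are assumptions:

```python
class Quadtree:
    """Minimal point quadtree over a square region (illustrative sketch)."""
    def __init__(self, x=0.0, y=0.0, size=1.0, cap=4):
        self.x, self.y, self.size, self.cap = x, y, size, cap
        self.points, self.children = [], None

    def insert(self, p):
        if self.children is not None:
            return self._child(p).insert(p)
        self.points.append(p)
        if len(self.points) > self.cap:       # overflow: split into 4 quadrants
            self._split()

    def _split(self):
        h = self.size / 2
        self.children = [Quadtree(self.x + dx, self.y + dy, h, self.cap)
                         for dx in (0, h) for dy in (0, h)]
        pts, self.points = self.points, []
        for p in pts:
            self._child(p).insert(p)

    def _child(self, p):
        h = self.size / 2
        i = (2 if p[0] >= self.x + h else 0) + (1 if p[1] >= self.y + h else 0)
        return self.children[i]

    def query(self, x0, y0, x1, y1):
        """All stored points inside the rectangle [x0, x1) x [y0, y1)."""
        if x1 <= self.x or y1 <= self.y or \
           x0 >= self.x + self.size or y0 >= self.y + self.size:
            return []   # rectangle misses this cell: whole subtree skipped
        hits = [p for p in self.points if x0 <= p[0] < x1 and y0 <= p[1] < y1]
        for c in self.children or []:
            hits += c.query(x0, y0, x1, y1)
        return hits

qt = Quadtree()
for p in [(0.1, 0.1), (0.2, 0.8), (0.9, 0.9), (0.6, 0.4)]:
    qt.insert(p)
print(sorted(qt.query(0.0, 0.0, 0.5, 0.5)))  # points in the lower-left quarter
```

The early rejection in `query` is what turns a full scan into a localized search; an R-tree achieves the same effect with data-driven bounding rectangles instead of a fixed space partition.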
Abstract: Various kinds of intelligence and inspiration have been
adopted into iterative searching processes called meta-heuristics.
They intelligently perform the exploration and exploitation in the
solution domain space aiming to efficiently seek near optimal
solutions. In this work, the bee algorithm, inspired by the natural
foraging behaviour of honey bees, was adapted to find the near
optimal solutions of the transportation management system, dynamic
multi-zone dispatching. This problem addresses uncertain and changing
customer demand. In striving to remain competitive, a transportation
system should therefore be flexible in order to cope with changes in
customer demand in terms of in-bound and out-bound goods and with
technological innovations. To maintain a higher service level at a
lower management cost via a minimal-imbalance scenario, rearrangement
penalties for the areas in each zone, across time periods, are also
included. However, the performance of the algorithm depends on
appropriate parameter settings, which need to be determined and
analysed before implementation. The BEE parameters were determined
through linear constrained response surface optimisation (LCRSOM) and
the weighted centroid modified simplex method (WCMSM). Experimental
results were analysed in terms of the best solutions found, the mean
and standard deviation of the imbalance values, and the convergence
of the solutions obtained. The results obtained with the LCRSOM were
better than those using the WCMSM; however, the average execution
time of an experimental run using the LCRSOM was longer. Finally, a
recommendation of proper settings of the BEE parameters for selected
problem sizes is given as a guideline for future applications.
Abstract: In order to provide high expertise, the computer-aided
design of mechanical systems involves specific activities focused on
processing two types of information: knowledge and data. Expert
rule-based knowledge generally processes qualitative information and
involves searching for proper solutions and combining them into a
synthetic variant. Data processing is based on computational models
and is supposed to be inter-related with the reasoning in the
knowledge processing. In this paper an Intelligent Integrated System
is proposed with the objective of choosing an adequate material. The
software is developed in the Prolog-based Flex software and takes
into account various constraints that appear in the accurate
operation of gears.
Abstract: Knowledge bases are basic components of expert
systems and intelligent computational programs. Knowledge bases
provide the knowledge and facts that support deduction, computation,
and control. Therefore, the research and development of models for
knowledge representation play an important role in computer science,
especially in artificial intelligence and intelligent educational
software. In this paper, an extensive deduction computational model
is proposed for designing knowledge bases whose attributes may be
real values or functional values. The system can also solve problems
based on these knowledge bases. Moreover, the models and algorithms
are applied to produce educational software for automatically solving
alternating-current problems or sets of equations.
Abstract: This paper presents a novel genetic algorithm, termed
the Optimum Individual Monogenetic Algorithm (OIMGA) and
describes its hardware implementation. As the monogenetic strategy
retains only the optimum individual, the memory requirement is
dramatically reduced and no crossover circuitry is needed, thereby
ensuring the requisite silicon area is kept to a minimum.
Consequently, depending on application requirements, OIMGA
allows the investigation of solutions that warrant either larger GA
populations or individuals of greater length. The results given in this
paper demonstrate that both the performance of OIMGA and its
convergence time are superior to those of existing hardware GA
implementations. Local convergence is achieved in OIMGA by
retaining elite individuals, while population diversity is ensured by
continually searching for the best individuals in fresh regions of the
search space.
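In software terms, the monogenetic strategy the abstract describes (no crossover, a single retained elite, fresh-region restarts for diversity) can be sketched as below; the fitness function, restart and mutation counts, and bit-flip scheme are illustrative assumptions, not the paper's hardware design:

```python
import random

def monogenetic_search(fitness, n_bits, restarts=20, mutations=200, seed=3):
    """Retain only a single elite individual (no crossover circuitry needed);
    explore by bit-flip mutation, restarting in fresh random regions of the
    search space to preserve diversity."""
    rng = random.Random(seed)
    best, best_fit = None, float("-inf")
    for _ in range(restarts):
        elite = [rng.randint(0, 1) for _ in range(n_bits)]  # fresh region
        elite_fit = fitness(elite)
        for _ in range(mutations):
            child = elite[:]
            child[rng.randrange(n_bits)] ^= 1               # single bit flip
            child_fit = fitness(child)
            if child_fit > elite_fit:                       # keep only the best
                elite, elite_fit = child, child_fit
        if elite_fit > best_fit:
            best, best_fit = elite, elite_fit
    return best, best_fit

one_max = lambda bits: sum(bits)       # toy fitness: count of 1-bits
best, fit = monogenetic_search(one_max, n_bits=16)
```

Since only the elite individual is stored, memory grows with the chromosome length rather than with a population size, which mirrors the silicon-area argument made for the hardware version.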
Abstract: One of the essential requirements for human beings is a
house to live in. It is necessary to make contemporary houses
satisfying places for their residents by attending to their culture.
This article reviews the relevant theoretical literature on cultural
symbols, using architectural semiotics to construct houses as better
places for living. In fact, making a place for everyday life by
turning a house into a home is one of the most challenging subjects
for architects around the world. The aim of this article is to
identify the cultural symbols of Cypriot houses that can assist
architects in designing and building contemporary houses that better
satisfy their residents according to the Cypriot lifestyle and
culture. Researching the effect of cultural symbols on housing would
require various types of methods; this study focuses on two,
quantitative and qualitative. The purpose of the case-specific study
is to find the symbols used in contemporary houses, with attention to
the Cypriot cultural symbols in Famagusta houses.