Abstract: Short-Term Load Forecasting (STLF) plays an important role in the economic and secure operation of power systems. In this paper, a Continuous Genetic Algorithm (CGA) is employed to evolve the optimal structure and connection weights of large neural networks for the one-day-ahead electric load forecasting problem. This study describes the process of developing three-layer feed-forward large neural networks for load forecasting and then presents a heuristic search algorithm for an important task in this process: optimal network structure design. The proposed method is applied to STLF for a local utility. Data are clustered according to differences in their characteristics, and special days are extracted from the normal training sets and handled separately. In this way, a solution is provided for all load types, including working days, weekends, and special days. The large neural networks show good performance, and the proposed methodology consistently yields lower percentage errors. It can therefore be applied to automatically design an optimal load forecaster from historical data.
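The abstract does not detail the CGA's operators, so as a hedged illustration only, a continuous (real-coded) GA of the kind described can be sketched as follows. Elitist selection, blend (arithmetic) crossover, and Gaussian mutation stand in for the paper's actual operators, and a toy sphere function stands in for the network's forecasting-error surface; all names and parameter values are illustrative:

```python
import random

def continuous_ga(fitness, dim, bounds, pop_size=30, generations=200,
                  mutation_rate=0.1, mutation_scale=0.1):
    """Minimise `fitness` over a real-valued search space with a simple
    continuous GA: elitist selection, blend crossover, Gaussian mutation."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)
        elite = scored[: pop_size // 5]          # keep the best 20% unchanged
        children = list(elite)
        while len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            alpha = random.random()              # blend (arithmetic) crossover
            child = [alpha * a + (1 - alpha) * b for a, b in zip(p1, p2)]
            for i in range(dim):                 # Gaussian mutation, clamped
                if random.random() < mutation_rate:
                    child[i] += random.gauss(0, mutation_scale * (hi - lo))
                    child[i] = min(hi, max(lo, child[i]))
            children.append(child)
        pop = children
    return min(pop, key=fitness)

random.seed(0)
# Toy stand-in for the forecasting-error surface: a sphere function.
best = continuous_ga(lambda w: sum(x * x for x in w), dim=5, bounds=(-5, 5))
```

Because the elites are copied into the next generation unmodified, the best individual never gets worse, which is the usual safeguard when GA weights feed a forecaster.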
Abstract: In this paper, a new robust digital image watermarking algorithm based on the Complex Wavelet Transform is proposed. The technique embeds different parts of a watermark into different blocks of an image in the complex wavelet domain. To increase the security of the method, two chaotic maps are employed: one determines the blocks of the host image used for watermark embedding, and the other encrypts the watermark image. Simulation results are presented to demonstrate the effectiveness of the proposed algorithm.
Abstract: The Genetic Algorithm (GA) is one of the most important methods used to solve many combinatorial optimization problems, and many researchers have tried to improve the GA with different methods and operations in order to find optimal solutions within reasonable time. This paper proposes an improved GA (IGA) that applies a new crossover operation, a population reformulation operation, a multi-mutation operation, a partial local optimal mutation operation, and a rearrangement operation to solve the Traveling Salesman Problem. The proposed IGA was compared with three GAs that use different crossover and mutation operations. The results of this comparison show that the IGA achieves better solutions in less time.
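The abstract does not specify the new crossover itself; as a point of reference, the classic ordered crossover (OX) that many TSP-oriented GAs build on can be sketched as follows (the tours and city labels are illustrative):

```python
import random

def ordered_crossover(p1, p2):
    """Classic ordered crossover (OX) for permutation-encoded tours:
    copy a random slice of p1 into the child, then fill the remaining
    slots with p2's cities in the order they appear in p2."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    # Cities not yet placed, in parent-2 order (computed before filling).
    fill = [city for city in p2 if city not in child]
    idx = 0
    for i in range(n):
        if child[i] is None:
            child[i] = fill[idx]
            idx += 1
    return child

random.seed(1)
parent1 = [0, 1, 2, 3, 4, 5, 6, 7]
parent2 = [3, 7, 0, 4, 6, 2, 5, 1]
child = ordered_crossover(parent1, parent2)
```

OX guarantees the child is again a valid tour (a permutation of the cities), which is the property any TSP crossover, including the paper's, must preserve.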
Abstract: This paper presents an efficient approach to feeder reconfiguration for power loss reduction and voltage profile improvement in unbalanced radial distribution systems (URDS). A Genetic Algorithm (GA) is used to obtain the reconfiguration of radial distribution systems that minimizes losses, and a forward/backward sweep algorithm is used to calculate load flows in unbalanced distribution systems. By simulating survival of the fittest among strings, the optimum string is sought through randomized information exchange between strings via crossover and mutation. Results show that the proposed algorithm has advantages over previous algorithms. The method is effectively tested on 19-node and 25-node unbalanced radial distribution systems.
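The forward/backward sweep load flow mentioned above can be illustrated for a simplified balanced, single-phase chain feeder (the paper itself handles the unbalanced three-phase case); all impedances and loads below are illustrative per-unit values:

```python
def backward_forward_sweep(z_branch, s_load, v_source=1.0 + 0j, iters=20):
    """Backward/forward sweep on a radial feeder modelled as a chain:
    z_branch[k] is the impedance of the branch feeding node k+1 (p.u.),
    s_load[k] the complex power drawn at node k+1 (p.u.)."""
    n = len(s_load)
    v = [v_source] * (n + 1)
    for _ in range(iters):
        # Load currents from the latest voltage estimates: I = (S / V)*
        i_load = [(s_load[k] / v[k + 1]).conjugate() for k in range(n)]
        # Backward sweep: branch current = own load + everything downstream.
        i_branch = [0j] * n
        acc = 0j
        for k in range(n - 1, -1, -1):
            acc += i_load[k]
            i_branch[k] = acc
        # Forward sweep: apply the voltage drop along each branch.
        for k in range(n):
            v[k + 1] = v[k] - z_branch[k] * i_branch[k]
    return v

# Three-branch chain feeder with equal branches and loads (illustrative).
v = backward_forward_sweep(z_branch=[0.02 + 0.04j] * 3, s_load=[0.1 + 0.05j] * 3)
```

Each GA candidate (a switch configuration) would be evaluated by running such a load flow and summing the branch losses; the sweep converges quickly on radial networks because no matrix factorization is needed.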
Abstract: The quality of short-term load forecasting can improve the efficiency of planning and operation of electric utilities. Artificial Neural Networks (ANNs) are employed for nonlinear short-term load forecasting owing to their powerful nonlinear mapping capabilities. At present, there is no systematic methodology for the optimal design and training of an artificial neural network; one often has to resort to trial and error. This paper describes the process of developing three-layer feed-forward large neural networks for short-term load forecasting and then presents a heuristic search algorithm for an important task in this process: optimal network structure design. Particle Swarm Optimization (PSO), a random optimization method based on swarm intelligence with strong global optimization ability, is used to develop the optimal structure and connection weights of large neural networks for the one-day-ahead electric load forecasting problem. Employing PSO in the design and training of ANNs allows the ANN architecture and parameters to be optimized easily. The proposed method is applied to STLF for a local utility. Data are clustered according to differences in their characteristics, and special days are extracted from the normal training sets and handled separately. In this way, a solution is provided for all load types, including working days, weekends, and special days. The experimental results show that the PSO-optimized method speeds up network learning and improves forecasting precision compared with the conventional Back Propagation (BP) method. It is simple to compute, practical, and effective; it provides greater accuracy in many cases and consistently yields lower percentage errors for the STLF problem than the BP method. It can therefore be applied to automatically design an optimal load forecaster from historical data.
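The PSO update that drives such training can be sketched in its standard form (inertia plus cognitive and social terms); the swarm sizes, coefficients, and the toy cost function below are illustrative stand-ins for the network's training error:

```python
import random

def pso(cost, dim, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm: each particle tracks its personal best and
    the swarm tracks a global best; velocities blend inertia, cognitive,
    and social pulls."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

random.seed(0)
# Toy stand-in for the network's training-error surface.
best, best_cost = pso(lambda x: sum(v * v for v in x), dim=4, bounds=(-5, 5))
```

When used for ANN training, each particle's position vector would encode the network's connection weights (and, for structure design, architecture choices), and `cost` would be the forecasting error on the training set.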
Abstract: This paper presents a comparative study of Ant Colony and Genetic Algorithms for VLSI circuit bi-partitioning. Ant Colony Optimization is an optimization method based on the behaviour of social insects [27], whereas the Genetic Algorithm is an evolutionary optimization technique based on the Darwinian theory of natural evolution and its concept of survival of the fittest [19]. Both methods are stochastic in nature and have been successfully applied to many NP-hard problems. The results obtained show that Genetic Algorithms outperform the Ant Colony Optimization technique when tested on the VLSI circuit bi-partitioning problem.
Abstract: E-Learning systems are used by many learners and teachers, but their developers cannot construct a system that satisfies all users' demands. We discuss a method of constructing e-Learning systems in which learners and teachers can design, try out, and share the extended system functions that they want to use, functions which may finally be added to the system by system managers.
Abstract: An ontology is a data model that represents a set of
concepts in a given field and the relationships among those concepts.
As the emphasis on achieving a semantic web continues to escalate,
ontologies for all types of domains increasingly will be developed.
These ontologies may become large and complex, and as their size
and complexity grows, so will the need for multi-user interfaces for
ontology curation. Herein a functionally comprehensive, generic
approach to maintaining an ontology as a relational database is
presented. Unlike many other ontology editors that utilize a database, this approach is entirely domain-generic and fully supports Web-based, collaborative editing, including the designation of different levels of authorization for users.
Abstract: This paper presents an implementation of eigenfaces and the Karhunen-Loève algorithm for face recognition. The program assigns a unique identification number to each face under trial. These faces are kept in a database from which any particular face can be matched and retrieved among the available test faces. The Karhunen-Loève algorithm is used to find the face with matching features for a given input test image carrying a unique identification number; the procedure uses eigenfaces for the recognition of faces.
Abstract: Today, building automation is advancing from simple monitoring and control of lighting and heating towards more
and more complex applications that require a dynamic perception
and interpretation of different scenes occurring in a building. Current
approaches cannot handle these newly upcoming demands. In this
article, a bionically inspired approach for multimodal, dynamic scene
perception and interpretation is presented, which is based on neuroscientific
and neuro-psychological research findings about the perceptual
system of the human brain. The approach is based on data from diverse
sensory modalities being processed in a so-called neuro-symbolic
network. With its parallel structure and with its basic elements being
information processing and storing units at the same time, a very
efficient method for scene perception is provided overcoming the
problems and bottlenecks of classical dynamic scene interpretation
systems.
Abstract: This paper presents a method developed to assess rating points of objects with qualitative indexes. The novelty of the method lies in the authors' use of linguistic scales that allow the values of the indexes to be formalized with fuzzy sets. As a result, it is possible to operate correctly with dissimilar indexes on a unified basis and to obtain stable final results. The resulting rating points are used in decision making based on fuzzy expert opinions.
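The idea of formalizing a linguistic scale with fuzzy sets can be sketched as follows; the scale, the triangular fuzzy numbers, and the centroid defuzzification are all illustrative assumptions, not the paper's actual construction:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic scale mapped to triangular fuzzy numbers on a 0-10 axis.
SCALE = {
    "poor":    (0.0, 0.0, 5.0),
    "average": (2.5, 5.0, 7.5),
    "good":    (5.0, 10.0, 10.0),
}

def rating_point(labels, weights):
    """Aggregate expert opinions given on the linguistic scale into one
    crisp rating point: defuzzify each term by its centroid (mean of the
    triangle's vertices) and take the weighted average."""
    score = sum(w * sum(SCALE[lab]) / 3.0 for lab, w in zip(labels, weights))
    return score / sum(weights)

# Degree to which a raw index value of 4.0 matches the "average" term:
mu_avg = tri(4.0, *SCALE["average"])
```

Mapping every qualitative index onto one such scale is what lets dissimilar indexes be combined on a unified basis.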
Abstract: Development of levels of service in municipal context
is a flexible vehicle to assist in performing quality-cost trade-off
analysis for municipal services. This trade-off depends on the
willingness of a community to pay as well as on the condition of the
assets. Community perspective of the performance of an asset from
service point of view may be quite different from the municipality
perspective of the performance of the same asset from condition
point of view. This paper presents a three-phase, level-of-service based methodology for water mains that consists of: 1) development of an Analytical Hierarchy model of level of service; 2) development of a Fuzzy Weighted Sum model of a water main condition index; and 3) derivation of a fuzzy-logic-based function that maps level of service to the asset condition index. This mapping will assist asset managers in quantifying the condition improvement required to meet service goals and in making more informed decisions on interventions and related priorities.
Abstract: Clustering unstructured text documents is an important problem in the data mining community and has a number of applications, such as document archive filtering, document organization, topic detection, and subject tracing. In the real world, some already clustered documents may lose importance while new documents of greater significance evolve; most work done so far on clustering unstructured text documents overlooks this aspect. This paper addresses the issue by using a Fading Function: the unstructured text documents are clustered, and for each cluster a statistics structure called a Cluster Profile (CP) is maintained. The cluster profile incorporates the Fading Function, which keeps account of the time-dependent importance of the cluster. The work proposes a novel algorithm, the Clustering n-ary Merge Algorithm (CnMA), for unstructured text documents that uses Cluster Profiles and the Fading Function. Experimental results illustrating the effectiveness of the proposed technique are also included.
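A cluster profile with a fading function can be sketched as follows; the exponential decay form and the decay rate `lam` are illustrative assumptions, as the abstract does not fix the function's shape:

```python
class ClusterProfile:
    """Per-cluster statistics structure with an exponential fading function:
    a document that joined at time t contributes 2 ** (-lam * (now - t))
    to the cluster's weight, so a cluster fades over time unless it is
    refreshed by new documents."""

    def __init__(self, lam=0.5):
        self.lam = lam          # illustrative decay rate
        self.doc_times = []

    def add_document(self, t):
        self.doc_times.append(t)

    def weight(self, now):
        return sum(2.0 ** (-self.lam * (now - t)) for t in self.doc_times)

cp = ClusterProfile(lam=1.0)
cp.add_document(t=0)   # an old document: its weight halves every time unit
cp.add_document(t=2)   # a fresh document: full weight at now = 2
```

A clustering algorithm can then drop or merge clusters whose faded weight falls below a threshold, which is exactly the time-dependent importance the abstract describes.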
Abstract: An approach is offered for more precise detection of baseline borders in cursive handwritten text, and general problems of handwritten text segmentation are also analyzed. The proposed method addresses problems arising in the recognition of handwriting with a specific slant, i.e., where the letters of a word do not sit on the same line. As informative features, some recognition systems use the ascending and descending parts of letters, found after detecting the word's baseline; in such systems, errors in baseline detection degrade the quality of recognition and decrease the recognition rate. Unlike other methods, here the borders are found from small pieces containing segmentation elements and are defined as a set of linear functions, with separate borders found for the top and bottom baselines. Finally, Azerbaijani cursive handwritten texts written in the Latin alphabet by different authors are analyzed.
Abstract: The study of proteomics has reached unexpected levels of interest as a direct consequence of its discovered influence over complex biological phenomena, such as problematic diseases like cancer. This paper presents the authors' latest achievements regarding the analysis of protein networks (interactome networks) by computing the betweenness centrality measure more efficiently. The paper introduces the concept of betweenness centrality and then describes how betweenness computation can help interactome network analysis. Current sequential implementations of the betweenness computation do not perform satisfactorily in terms of execution time. The paper's main contribution is a speedup technique for the betweenness computation, based on modified shortest-path algorithms for sparse graphs. Three optimized generic algorithms for betweenness computation are described and implemented, and their performance is tested against real biological data from the IntAct dataset.
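The baseline such speedup techniques improve on is Brandes' algorithm, the standard fast method for betweenness on sparse graphs; a minimal sketch for unweighted, undirected graphs follows (the path graph at the end is an illustrative input):

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm for unweighted, undirected graphs: one BFS per
    source counts shortest paths (sigma), then a reverse pass accumulates
    each node's dependency (delta) on those paths."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack = []
        preds = {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        queue = deque([s])
        while queue:                      # BFS from s
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                      # dependency accumulation
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Each undirected pair was counted from both endpoints.
    return {v: c / 2 for v, c in bc.items()}

# Path graph 0-1-2-3: only the two inner nodes lie between other pairs.
bc = betweenness({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]})
```

This runs in O(VE) for unweighted graphs, which is why modified shortest-path algorithms for sparse graphs are the natural place to look for further speedups on interactome-scale data.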
Abstract: Biometric measures of one kind or another have been used to identify people since ancient times, with handwritten signatures, facial features, and fingerprints being the traditional methods. Of late, systems have been built that automate the task of recognition, using these methods and newer ones such as hand geometry, voiceprints, and iris patterns. These systems have different strengths and weaknesses. This work is a two-part composition. The first part presents an analytical and comparative study of common biometric techniques, with the performance of each reviewed and tabulated. The second part covers the actual implementation of the techniques under consideration, carried out in MATLAB, which helps portray the corresponding results and effects effectively.
Abstract: Pattern matching based on regular tree grammars has been widely used in many areas of computer science. In this paper, we propose a pattern matcher within the framework of code generation, based on a generic, formalized approach. According to this approach, parsers for regular tree grammars are adapted to a general pattern matching solution, rather than adapting the pattern matching to their parsing behavior. Hence, we first formalize the construction of the pattern matches with respect to input trees drawn from a regular tree grammar in the form of so-called match trees. Then, we adopt a recently developed generic parser and tightly couple its parsing behavior with this construction. In addition to its generality, the resulting pattern matcher is characterized by its soundness and efficient implementation, as demonstrated by the proposed theory and by the algorithms derived for its implementation. A comparison with similar and well-known approaches, such as those based on tree automata and LR parsers, has shown that our pattern matcher can be applied to a broader class of grammars and achieves a better approximation of pattern matches in one pass. Furthermore, its use as a machine-code selector incurs minimal overhead, owing to the balanced distribution of the cost computations into static ones, performed at parser generation time, and dynamic ones, performed at parsing time.
Abstract: Deciding the numerous parameters involved in
designing a competent artificial neural network is a complicated task.
The existence of several options for selecting an appropriate
architecture for neural network adds to this complexity, especially
when different applications of heterogeneous natures are concerned.
Two completely different applications in engineering and medical science were selected in the present study: prediction of workpiece surface roughness in ultrasonic-vibration-assisted turning, and prediction of papillomavirus oncogenicity. Several neural network
architectures with different parameters were developed for each
application and the results were compared. It was illustrated in this
paper that some applications such as the first one mentioned above
are apt to be modeled by a single network with sufficient accuracy,
whereas others such as the second application can be best modeled
by different expert networks for different ranges of output.
Development of knowledge about the essentials of neural networks
for different applications is regarded as the cornerstone of
multidisciplinary network design programs to be developed as a
means of reducing inconsistencies and the burden of the user
intervention.
Abstract: The purpose of this paper is to propose a framework for constructing correct parallel processing programs based on the Equivalent Transformation Framework (ETF). In this framework, a problem's domain knowledge and a query are described as definite clauses, and computation is regarded as transformation of the definite clauses. The meaning of the computation is defined by a model of the set of definite clauses, and the transformation rules generated must preserve this meaning. We previously proposed a parallel processing method based on "specialization", a part of the transformation operations that resembles substitution in logic programming; that method requires a "Memo-tree", a history of specializations, to maintain correctness. In this paper we propose a new method for specialization-based parallel processing that does not require the Memo-tree.
Abstract: Fuzzy logic can be used when knowledge is incomplete or when data are ambiguous. The purpose of this paper is to propose a proactive fuzzy-set-based model for reacting to the risk inherent in investment activities within a complete view of portfolio management. Fuzzy rules are
given where, depending on the antecedents, the portfolio size
may be slightly or significantly decreased or increased. The
decision maker considers acceptable bounds on the proportion
of acceptable risk and return. The Fuzzy Controller model
allows learning to be achieved as 1) the firing strength of each
rule is measured, 2) fuzzy output allows rules to be updated,
and 3) new actions are recommended as the system continues
to loop. An extension is given to the fuzzy controller that
evaluates potential financial loss before adjusting the
portfolio. An application is presented that illustrates the
algorithm and extension developed in the paper.
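The firing-strength step of such a controller can be sketched as follows; the rule base, membership functions, and percentage adjustments are illustrative assumptions (a Sugeno-style weighted average stands in for the paper's actual inference and update loop):

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical rule base over normalized [0, 1] risk/return readings:
# (risk fuzzy set, return fuzzy set, % change in portfolio size).
RULES = [
    ((-0.5, 0.0, 0.5), (0.5, 1.0, 1.5), +10.0),  # low risk, high return -> increase
    ((0.5, 1.0, 1.5), (-0.5, 0.0, 0.5), -10.0),  # high risk, low return -> decrease
]

def adjust(risk, ret):
    """One pass of a Sugeno-style controller: measure each rule's firing
    strength (min over its antecedent memberships), then return the
    strength-weighted average of the rule consequents."""
    num = den = 0.0
    for risk_set, ret_set, action in RULES:
        strength = min(tri(risk, *risk_set), tri(ret, *ret_set))
        num += strength * action
        den += strength
    return num / den if den else 0.0
```

Measuring the firing strength of each rule on every loop is what lets the controller learn: rules whose recommendations repeatedly fire on poor outcomes can have their fuzzy sets or consequents updated before the next adjustment.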