Abstract: This paper presents a complete discrete statistical framework based on a novel vector quantization (VQ) front-end. The new VQ approach performs an optimal distribution of VQ codebook components over HMM states. This technique, which we call the distributed vector quantization (DVQ) of hidden Markov models, unifies the acoustic micro-structure and the phonetic macro-structure during the estimation of HMM parameters. The DVQ technique is implemented in two variants: the first uses the K-means algorithm (K-means-DVQ) to optimize the VQ, while the second exploits the classification behavior of neural networks (NN-DVQ) for the same purpose. The proposed variants are compared with an HMM-based baseline system in experiments on the recognition of specific Arabic consonants. The results show that the distributed vector quantization technique increases the performance of the discrete HMM system.
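The K-means-DVQ variant builds its codebook with standard K-means (Lloyd's algorithm). A minimal sketch of that VQ step, assuming acoustic feature vectors stored as rows of a NumPy array; the function name, deterministic initialization and parameter values are illustrative, not the paper's implementation:

```python
import numpy as np

def kmeans_codebook(features, k, iters=20):
    """Build a VQ codebook with plain K-means (Lloyd's algorithm)."""
    # Deterministic init (illustrative): codewords spread over the inputs.
    codebook = features[np.linspace(0, len(features) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # VQ step: assign each feature vector to its nearest codeword.
        dists = np.linalg.norm(features[:, None] - codebook[None], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each codeword becomes the centroid of its cell.
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = features[labels == j].mean(axis=0)
    return codebook, labels
```

Here `labels` is the discrete observation sequence a discrete HMM is trained on; the DVQ contribution of distributing codewords over HMM states is not reproduced in this sketch.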
Abstract: There are various approaches to implement quality
improvements. Organizations aim for a management standard which
is capable of providing customers with quality assurance on their
product/service via continuous process improvement. Carefully
planned steps are necessary to ensure the right quality improvement
methodology (QIM) and business operations are consistent, reliable
and truly meet the customers' needs. This paper traces the evolution
of QIM in Malaysia's Information Technology (IT) industry across the past, present and future; highlights some of the thoughts of researchers who contributed to the science and practice of quality; and identifies leading methodologies in use today. Some of the
misconceptions and mistakes leading to quality system failures will
also be examined and discussed. This paper aims to provide a general
overview of different types of QIMs available for IT businesses in
maximizing business advantages, enhancing product quality,
improving process routines and increasing performance earnings.
Abstract: This paper presents recent work on improving a robotics vision-based control strategy for an underwater pipeline tracking system. The study focuses on developing image processing algorithms and a fuzzy inference system for the analysis of the terrain. The main goal is to implement a supervisory fuzzy learning control technique to reduce navigation decision errors caused by the pipeline occlusion problem. The developed system is capable of interpreting underwater images containing occluded pipeline, seabed and other unwanted noise. The algorithm proposed in previous work did not exploit the cooperation between fuzzy controllers, knowledge and learnt data to improve the outputs for underwater pipeline tracking. Computer and prototype simulations demonstrate the effectiveness of this approach. The accuracy level of the system is also discussed.
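The kind of inference such a fuzzy controller performs can be sketched as follows: a normalized pipeline offset in the image is mapped to a steering correction through triangular membership functions and weighted-average defuzzification. The membership functions, rule set and names below are hypothetical, not taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steer(offset):
    """Map a pipeline offset in the image (-1..1, hypothetical scale) to a
    steering correction via three rules and weighted-average defuzzification."""
    # Rule strengths: how 'left', 'centered' and 'right' the offset is.
    left = tri(offset, -2.0, -1.0, 0.0)
    center = tri(offset, -1.0, 0.0, 1.0)
    right = tri(offset, 0.0, 1.0, 2.0)
    # Each rule's crisp output: steer opposite to the offset.
    strengths = [left, center, right]
    outputs = [1.0, 0.0, -1.0]
    total = sum(strengths)
    return sum(s * o for s, o in zip(strengths, outputs)) / total if total else 0.0
```

A supervisory learning layer of the kind proposed here would tune such memberships from knowledge and learnt data rather than keep them fixed.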
Abstract: Early supplier involvement (ESI) benefits new product development projects in several ways. Nevertheless, many cast-user companies do not know the advantages of ESI and therefore do not utilize it. This paper presents reasons to utilize ESI in the casting industry and how that can be done. Further, it presents the advantages of and challenges related to ESI in the casting industry, and introduces a Casting-Network Collaboration Model. The model presents practices that help companies build advantageous collaborative relationships. In more detail, it describes three levels of company-network relationships in the casting industry, with different degrees of collaboration and requirements for operating at each level. In our research, ESI was found to influence, for example, project time, component cost, and quality. In addition, challenges related to ESI were found, such as a lack of mutual trust and unawareness of the advantages. Our research approach was a case study comprising four cases.
Abstract: Through the course of this paper we define Business Case Management and its characteristics, and highlight its link to knowledge workers. Business Case Management combines knowledge and process effectively, supports the ad hoc and unpredictable nature of cases, and coordinates a range of other technologies to appropriately support knowledge-intensive processes. We emphasize the growing importance of knowledge workers and the current poor support for knowledge work automation. We also discuss the challenges in supporting this kind of knowledge work and propose a novel approach to overcome these challenges.
Abstract: Economic dispatch (ED) is considered one of the key functions in electric power system operation and can help to build effective generation management plans. The practical ED problem has a non-smooth cost function with nonlinear constraints, which makes it difficult to solve effectively. This paper presents a novel, efficient heuristic optimization approach based on the new Bat algorithm (BA) to solve the practical non-smooth economic dispatch problem. The proposed algorithm easily handles the different constraints. In addition, two newly introduced modifications are developed to improve the diversity of the bat population while simultaneously increasing the convergence speed. The simulation results obtained by the proposed algorithms are compared with results obtained using other recently developed methods available in the literature.
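For readers unfamiliar with BA, the following is a minimal sketch of the basic Bat algorithm on a generic cost function. All parameter values are illustrative, and the paper's two modifications and the ED constraint handling are omitted:

```python
import numpy as np

def bat_minimize(cost, dim, bounds, n_bats=20, iters=200, seed=0):
    """Minimal Bat algorithm sketch: frequency-tuned velocity updates
    plus a local random walk around the current best bat."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_bats, dim))   # bat positions
    v = np.zeros((n_bats, dim))              # bat velocities
    fitness = np.array([cost(b) for b in x])
    best = x[fitness.argmin()].copy()
    loudness, pulse_rate = 0.9, 0.5          # fixed here; often annealed
    for _ in range(iters):
        for i in range(n_bats):
            freq = rng.uniform(0, 2)         # random echolocation frequency
            v[i] += (x[i] - best) * freq
            cand = np.clip(x[i] + v[i], lo, hi)
            if rng.random() > pulse_rate:
                # Local search: small random walk around the current best.
                cand = np.clip(best + 0.01 * rng.normal(size=dim), lo, hi)
            f = cost(cand)
            if f < fitness[i] and rng.random() < loudness:
                x[i], fitness[i] = cand, f   # accept the improved solution
            if f < cost(best):
                best = cand.copy()           # track the global best
    return best, cost(best)
```

On the real ED problem, `cost` would be the non-smooth fuel cost, and the generator constraints would be handled by repair or penalty terms, which this sketch omits.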
Abstract: This article addresses feature selection for breast cancer diagnosis. The proposed process is a wrapper approach based on a Genetic Algorithm (GA) and case-based reasoning (CBR). The GA searches the problem space for candidate subsets of features, and CBR is employed to estimate the evaluation score of each subset. Experimental results show that the proposed model is comparable to other models on the Wisconsin Diagnostic Breast Cancer (WDBC) dataset.
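A wrapper of this kind can be sketched as follows. For a self-contained example, the CBR evaluation is replaced by a leave-one-out 1-NN accuracy as a stand-in; all names and parameter values are our assumptions, not the paper's:

```python
import numpy as np

def nn_accuracy(X, y, mask):
    """Leave-one-out 1-NN accuracy using only the features in `mask`
    (a stand-in for the paper's CBR-based subset evaluation)."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    d = np.linalg.norm(Xs[:, None] - Xs[None], axis=2)
    np.fill_diagonal(d, np.inf)          # exclude the query point itself
    return float((y[d.argmin(axis=1)] == y).mean())

def ga_select(X, y, pop=20, gens=30, seed=0):
    """GA wrapper: evolve feature bitmasks, fitness = wrapped accuracy."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    masks = rng.random((pop, n)) < 0.5       # random initial subsets
    for _ in range(gens):
        fit = np.array([nn_accuracy(X, y, m) for m in masks])
        masks = masks[fit.argsort()[::-1]]   # rank by fitness (elitist)
        for i in range(pop // 2, pop):       # replace the worst half
            p1, p2 = masks[rng.integers(0, pop // 2, 2)]
            child = np.where(rng.random(n) < 0.5, p1, p2)  # uniform crossover
            child ^= rng.random(n) < 0.05                  # bit-flip mutation
            masks[i] = child
    fit = np.array([nn_accuracy(X, y, m) for m in masks])
    return masks[fit.argmax()]
```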
Abstract: With the rapid growth in business size, today's businesses are orienting toward electronic technologies; Amazon.com and eBay.com are some of the major stakeholders in this regard. Unfortunately, the enormous amount of largely unstructured data on the web, even for a single commodity, has become a cause of ambiguity for consumers. Extracting valuable information from such ever-increasing data is an extremely tedious task and is fast becoming critical to the success of businesses. Web content mining can play a major role in solving these issues. It involves using efficient algorithmic techniques to search and retrieve the desired information from the seemingly unsearchable, unstructured data on the Internet. Applications of web content mining can be very encouraging in the areas of customer relations modeling, billing records, logistics investigations, product cataloguing and quality management. In this paper we present a review of some very interesting, efficient yet implementable techniques from the field of web content mining and study their impact in areas specific to business user needs, focusing on both the customer and the producer. The techniques reviewed include mining by developing a knowledge-base repository of the domain, iterative refinement of user queries for personalized search, a graph-based approach to the development of a web crawler, and filtering information for personalized search using website captions. These techniques have been analyzed and compared on the basis of their execution time and the relevance of the results they produced for a particular search.
Abstract: This paper describes an approach to predicting incoming and outgoing data rates in a network system by using association rule discovery, one of the data mining techniques. The incoming and outgoing data volumes over time and the network bandwidth are network performance parameters that must be considered in traffic problems, since congestion and data loss are important network problems. The results of this technique can predict future network traffic. In addition, this research is useful for network routing selection and network performance improvement.
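The association rule discovery step can be illustrated with a brute-force support/confidence miner over labelled traffic observations; the transaction encoding, item names and thresholds below are hypothetical, and real miners use Apriori-style pruning instead of exhaustive counting:

```python
from itertools import combinations
from collections import Counter

def association_rules(transactions, min_support=0.4, min_conf=0.7):
    """Mine single-consequent rules A -> b meeting support and confidence
    thresholds, by exhaustive itemset counting (fine for small data)."""
    n = len(transactions)
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        # Count every itemset contained in this transaction.
        for size in range(1, len(items) + 1):
            for itemset in combinations(items, size):
                counts[itemset] += 1
    rules = []
    for itemset, c in counts.items():
        if len(itemset) < 2 or c / n < min_support:
            continue
        for b in itemset:
            antecedent = tuple(i for i in itemset if i != b)
            conf = c / counts[antecedent]    # P(b | antecedent)
            if conf >= min_conf:
                rules.append((antecedent, b, c / n, conf))
    return rules
```

A mined high-confidence rule such as "peak hours -> congestion" is then what supports predicting future traffic behavior from current observations.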
Abstract: The importance of the machining process in today's industry requires the establishment of more practical approaches to clearly represent the intimate and severe contact at the tool-chip-workpiece interfaces. Mathematical models are developed using the measured force signals to relate each of the tool-chip friction components on the rake face to the operating cutting parameters in rough turning operations using multilayer-coated carbide inserts. Nonlinear modeling proved to have a high capability to detect the nonlinear functional variability embedded in the experimental data. While feed rate is found to be the most influential parameter on the friction coefficient and its related force components, both cutting speed and depth of cut are found to have only a slight influence. A greater deformed chip thickness is found to lower the value of the friction coefficient as the sliding length on the tool-chip interface is reduced.
Abstract: PAX6, a transcription factor, is essential for the morphogenesis of the eyes, brain, pituitary and pancreatic islets. In rodents, the loss of Pax6 function leads to central nervous system defects, anophthalmia, and nasal hypoplasia. Haplo-insufficiency of Pax6 causes microphthalmia, aggression and other behavioral abnormalities. It is also required in brain patterning and neuronal plasticity. In humans, heterozygous mutation of PAX6 causes loss of the iris (aniridia), mental retardation and glucose intolerance, and 3' deletions in PAX6 lead to autism and aniridia. The phenotypes are variable in penetrance and expressivity. However, the mechanisms of PAX6 function and its interactions with other proteins during development and in associated disease are not clear. We intend to explore interactors of PAX6 to elucidate the biology of PAX6 function in the tissues where it is expressed and in the central regulatory pathway. This report describes in silico approaches to explore interacting proteins of PAX6. The models show several proteins possibly interacting with PAX6, such as MITF, SIX3, SOX2, SOX3, IPO13, TRIM, and OGT. Since PAX6 is a critical transcriptional regulator and a master control gene of eye and brain development, it might interact with other proteins involved in morphogenesis (TGIF, TGF, Ras, etc.). It is also presumed that matricellular proteins (SPARC, thrombospondin-1, osteonectin, etc.) are likely to interact during the transport and processing of PAX6 and lie somewhere in its cascade. Proteins involved in cell survival and cell proliferation also cannot be ignored.
Abstract: An application framework provides a reusable design
and implementation for a family of software systems. If the
framework contains defects, the defects will be passed on to the
applications developed from the framework. Framework defects are
hard to discover at the time the framework is instantiated. Therefore,
it is important to remove all defects before instantiating the
framework. In this paper, two measures for the adequacy of an
object-oriented system-based testing technique are introduced. The
measures assess the usefulness and uniqueness of the testing
technique. The two measures are applied to experimentally compare
the adequacy of two testing techniques introduced to test object-oriented frameworks at the system level. The two considered testing
techniques are the New Framework Test Approach and Testing
Frameworks Through Hooks (TFTH). The techniques are also
compared analytically in terms of their coverage power of object-oriented aspects. The comparison study results show that the TFTH
technique is better than the New Framework Test Approach in terms
of usefulness degree, uniqueness degree, and coverage power.
Abstract: Software complexity metrics are used to predict critical information about the reliability and maintainability of software systems. Object-oriented software development requires a different approach to software complexity metrics. Object-oriented software metrics can be broadly classified into static and dynamic metrics: static metrics give information at the code level, whereas dynamic metrics provide information about actual runtime behavior. In this paper we discuss the various complexity metrics and compare static and dynamic complexity.
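As a concrete example of a static metric computed at the code level, cyclomatic complexity can be approximated from a program's syntax tree without running it (Python's `ast` module is used here for illustration only; the paper does not prescribe a language or tool):

```python
import ast

def cyclomatic_complexity(source):
    """Static metric sketch: McCabe-style complexity = decision points + 1,
    counted from the AST without executing the program."""
    tree = ast.parse(source)
    decisions = sum(isinstance(n, (ast.If, ast.For, ast.While,
                                   ast.BoolOp, ast.ExceptHandler))
                    for n in ast.walk(tree))
    return decisions + 1
```

A dynamic counterpart would instead count the branches actually taken at runtime, for example via execution tracing, which is what distinguishes the two metric families.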
Abstract: Chemical industry project management involves complex decision-making situations that require discerning abilities and methods to make sound decisions. Project managers are faced with complex decision environments and problems in projects. In this work, the case study is Research and Development (R&D) project selection. R&D is an ongoing process for forward-thinking, technology-based chemical industries, and R&D project selection is an important task for organizations with R&D project management. It is a multi-criteria problem which includes both tangible and intangible factors, and the ability to make sound decisions is very important to the success of R&D projects. Multiple-criteria decision making (MCDM) approaches are major parts of decision theory and analysis. This paper presents MCDM approaches for use in R&D project selection. It is hoped that this work will provide a ready reference on MCDM and encourage the application of MCDM by chemical engineering management.
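One widely used MCDM method that such a reference would cover is TOPSIS. A minimal sketch for benefit criteria only; the decision matrix and weights in the usage are illustrative, not project data from the paper:

```python
import numpy as np

def topsis(scores, weights):
    """TOPSIS: rank alternatives by relative closeness to the ideal
    solution (benefit criteria only, for brevity)."""
    M = np.asarray(scores, float)       # rows = alternatives, cols = criteria
    w = np.asarray(weights, float)
    # Vector-normalize each criterion column, then apply criterion weights.
    V = w * M / np.linalg.norm(M, axis=0)
    ideal, anti = V.max(axis=0), V.min(axis=0)   # best/worst per criterion
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)               # higher = better alternative
```

For R&D project selection, rows would be candidate projects and columns the tangible and intangible criteria, with weights elicited from management.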
Abstract: In this paper we propose mixtures of two different distributions, namely Exponential-Gamma, Exponential-Weibull and Gamma-Weibull, to model heterogeneous survival data. Various properties of the proposed mixtures of two different distributions are discussed. Maximum likelihood estimates of the parameters are obtained by using the EM algorithm. An illustrative example based on real data is also given.
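The E and M steps have the following shape. For brevity this sketch uses a two-component exponential mixture, whose M-step is closed-form; the paper's Exponential-Gamma and Weibull components would require numerical maximization in the M-step. Initialization and iteration count are illustrative:

```python
import numpy as np

def em_exp_mixture(x, iters=200):
    """EM for a two-component exponential mixture f(x) = w r1 e^{-r1 x}
    + (1-w) r2 e^{-r2 x} (a simplified stand-in for the paper's mixtures)."""
    x = np.asarray(x, float)
    # Crude initialization from the sample quartiles.
    w, r1, r2 = 0.5, 1.0 / np.percentile(x, 25), 1.0 / np.percentile(x, 75)
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point.
        p1 = w * r1 * np.exp(-r1 * x)
        p2 = (1 - w) * r2 * np.exp(-r2 * x)
        g = p1 / (p1 + p2)
        # M-step: closed-form weighted MLEs for the exponential rates.
        w = g.mean()
        r1 = g.sum() / (g * x).sum()
        r2 = (1 - g).sum() / ((1 - g) * x).sum()
    return w, r1, r2
```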
Abstract: The literature reports a large number of approaches for
measuring the similarity between protein sequences. Most of these
approaches estimate this similarity using alignment-based techniques
that do not necessarily yield biologically plausible results, for two
reasons.
First, for the case of non-alignable (i.e., not yet definitively aligned
and biologically approved) sequences such as multi-domain, circular
permutation and tandem repeat protein sequences, alignment-based
approaches do not succeed in producing biologically plausible results.
This is due to the nature of the alignment, which is based on the
matching of subsequences in equivalent positions, while non-alignable
proteins often have similar and conserved domains in non-equivalent
positions.
Second, the alignment-based approaches lead to similarity measures
that depend heavily on the parameters set by the user for the alignment
(e.g., gap penalties and substitution matrices). For easily alignable protein sequences, it is possible to supply a suitable combination of input parameters that allows such an approach to yield biologically plausible results. However, for difficult-to-align protein sequences,
supplying different combinations of input parameters yields different
results. Such variable results create ambiguities and complicate the
similarity measurement task.
To overcome these drawbacks, this paper describes a novel and
effective approach for measuring the similarity between protein
sequences, called SAF for Substitution and Alignment Free. Without
resorting either to the alignment of protein sequences or to substitution
relations between amino acids, SAF is able to efficiently detect the
significant subsequences that best represent the intrinsic properties of
protein sequences, those underlying the chronological dependencies of
structural features and biochemical activities of protein sequences.
Moreover, by using a new efficient subsequence matching scheme,
SAF more efficiently handles protein sequences that contain similar
structural features with significant meaning in chronologically
non-equivalent positions. To show the effectiveness of SAF, extensive
experiments were performed on protein datasets from different
databases, and the results were compared with those obtained by
several mainstream algorithms.
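As a simple illustration of the alignment-free, substitution-free idea, sequences can be compared through their subsequence composition rather than positional matching. The following is a generic k-mer cosine measure, not SAF's subsequence-matching scheme, which detects significant subsequences rather than counting all of them:

```python
from collections import Counter
from math import sqrt

def kmer_cosine(seq1, seq2, k=3):
    """Alignment-free similarity: cosine of the two k-mer count vectors.
    Position-independent, so it tolerates e.g. circular permutations."""
    def profile(s):
        return Counter(s[i:i + k] for i in range(len(s) - k + 1))
    a, b = profile(seq1), profile(seq2)
    dot = sum(a[m] * b[m] for m in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Because the measure ignores where a k-mer occurs, conserved domains in non-equivalent positions still contribute to the similarity, which is the failure mode of alignment described above.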
Abstract: One of the important topics in information security today is authentication. There are several alternatives to text-based authentication, among them the Graphical Password (GP), or Graphical User Authentication (GUA). These methods stem from the fact that humans recognize and remember images better than alphanumeric text characters. This paper focuses on the security aspects of GP algorithms and on what most researchers have been working on in trying to define these security features and attributes. The goal of this study is to develop a fuzzy decision model that allows automatic selection of available GP algorithms by taking into consideration the subjective judgments of the decision makers, who are more than 50 postgraduate students of computer science. The proposed approach is based on the Fuzzy Analytic Hierarchy Process (FAHP), which determines the criteria weights as a linear formula.
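The crisp core that FAHP fuzzifies can be sketched as geometric-mean AHP weighting of a pairwise comparison matrix. In FAHP each judgment below would be a triangular fuzzy number that is aggregated and then defuzzified; the matrix values here are illustrative, not the students' judgments:

```python
import numpy as np

def ahp_weights(pairwise):
    """Criteria weights from a pairwise comparison matrix by the
    geometric-mean (logarithmic least squares) method."""
    A = np.asarray(pairwise, float)
    gm = A.prod(axis=1) ** (1.0 / A.shape[0])   # row geometric means
    return gm / gm.sum()                        # normalize to sum to 1
```

A GP algorithm's overall score is then the weighted sum of its ratings on the security criteria, which is the "linear formula" form the abstract refers to.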
Abstract: Palestinian cities face the challenges of land scarcity,
high population growth rates, rapid urbanization, uneven
development and territorial fragmentation. Due to geopolitical constraints and the absence of an effective Palestinian planning
institution, urban development in Palestinian cities has not followed
any discernable planning scheme. This has led to a number of
internal contradictions in the structure of cities, and adversely
affected land use, the provision of urban services, and the quality of
the living environment.
This paper explores these challenges, and the potential that exists
for introducing a more sustainable urban development pattern in
Palestinian cities. It assesses alternative development approaches
with a particular focus on sustainable development, promoting eco-development imperatives, limiting random urbanization, and meeting
present and future challenges, including fulfilling the needs of the
people and conserving the scarce land and limited natural resources.
This paper concludes by offering conceptual proposals and guidelines
for promoting sustainable physical development in Palestinian cities.
Abstract: Power loss reduction is one of the main targets in the power industry, so in this paper the problem of finding the optimal configuration of a radial distribution system for loss reduction is considered. Optimal reconfiguration involves the selection of the best set of branches to be opened, one from each loop, to reduce resistive line losses and relieve overloads on feeders by shifting load to adjacent feeders. However, since there are many candidate switching combinations in the system, feeder reconfiguration is a complicated problem. In this paper a new approach is proposed based on a simple optimum loss calculation that determines optimal trees of the given network. From graph theory, a distribution network can be represented by a graph consisting of a set of nodes and branches; the problem can thus be viewed as determining an optimal tree of the graph that simultaneously ensures the radial structure of each candidate topology. In this method a refined genetic algorithm is also set up, and some improvements to the algorithm are made in the chromosome coding. An implementation of the algorithm presented in [7], modified in the load flow program, is also applied, and that method is compared with the proposed one. In [7], an algorithm is proposed in which the choice of the switches to be opened is based on simple heuristic rules; it reduces the number of load flow runs, reduces the switching combinations to a fewer number, and gives the optimum solution. To demonstrate the validity of these methods, computer simulations with the PSAT and MATLAB programs are carried out on a 33-bus test system. The results show that the performance of the proposed method is better than the method of [7] and other methods.
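Screening a candidate topology for radiality amounts to a spanning-tree test: the closed branches must be exactly n-1 in number and loop-free. A sketch with a union-find structure; the encoding of branches as node-index pairs is our assumption:

```python
def is_radial(n_nodes, closed_branches):
    """Check that the closed branches form a spanning tree of the network
    (n-1 edges, no loops), i.e. a radial configuration."""
    if len(closed_branches) != n_nodes - 1:
        return False
    parent = list(range(n_nodes))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a
    for u, v in closed_branches:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False    # closing this branch would create a loop
        parent[ru] = rv     # union the two components
    return True
```

In a GA over switch configurations, a check like this filters out infeasible chromosomes before any load flow is run.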
Abstract: Network reconfiguration in a distribution system is realized by changing the status of sectionalizing switches to reduce the power loss in the system. This paper presents a new method which applies an artificial bee colony (ABC) algorithm to determine the sectionalizing switches to be operated in order to solve the distribution system loss minimization problem. The ABC algorithm is a new population-based metaheuristic approach inspired by the intelligent foraging behavior of honeybee swarms. One advantage of the ABC algorithm is that it does not require external parameters such as the crossover rate and mutation rate, as in the case of genetic algorithms and differential evolution, where these parameters are hard to determine a priori. The other advantage is that the global search ability of the algorithm is implemented by introducing a neighborhood source production mechanism, which is similar to the mutation process. To demonstrate the validity of the proposed algorithm, computer simulations are carried out on 14-, 33-, and 119-bus systems and compared with different approaches available in the literature. The proposed method outperformed the other methods in terms of solution quality and computational efficiency.
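A minimal continuous-domain ABC sketch on a generic cost function follows; the parameter values are illustrative, and the paper's discrete switch-selection encoding is not reproduced:

```python
import numpy as np

def abc_minimize(cost, dim, bounds, n_food=15, iters=100, limit=20, seed=0):
    """Minimal artificial bee colony: employed and onlooker bees perturb
    food sources one coordinate at a time; scouts replace stalled sources."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    food = rng.uniform(lo, hi, (n_food, dim))    # candidate solutions
    fit = np.array([cost(s) for s in food])
    trials = np.zeros(n_food, int)               # stagnation counters
    for _ in range(iters):
        for phase in range(2):                   # 0 = employed, 1 = onlooker
            for i in range(n_food):
                if phase == 1:
                    # Onlookers pick a source biased toward low cost.
                    probs = fit.max() - fit + 1e-12
                    i = rng.choice(n_food, p=probs / probs.sum())
                # Neighborhood source production: shift one coordinate
                # relative to a random partner (the mutation-like step).
                k = rng.integers(n_food)
                d = rng.integers(dim)
                cand = food[i].copy()
                cand[d] += rng.uniform(-1, 1) * (food[i][d] - food[k][d])
                cand[d] = np.clip(cand[d], lo, hi)
                f = cost(cand)
                if f < fit[i]:
                    food[i], fit[i], trials[i] = cand, f, 0   # greedy accept
                else:
                    trials[i] += 1
        # Scout phase: abandon sources that stopped improving.
        worn = trials > limit
        food[worn] = rng.uniform(lo, hi, (int(worn.sum()), dim))
        fit[worn] = [cost(s) for s in food[worn]]
        trials[worn] = 0
    return food[fit.argmin()], fit.min()
```

Note the absence of crossover and mutation rates: the only tunables are the colony size and the abandonment `limit`, which is the parameter-economy advantage the abstract describes.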