Abstract: In this paper, mathematical models for permutation flow shop scheduling and job shop scheduling problems are proposed. The first problem is based on a mixed integer programming model. As the problem is NP-complete, this model can only be used for smaller instances where an optimal solution can be computed. For large instances, another model is proposed which is suitable for solving the problem by stochastic heuristic methods. For the job shop scheduling problem, a mathematical model and its main representation schemes are presented.
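For concreteness, the permutation flow shop objective can be written via the standard completion-time recurrence (a textbook formulation, not necessarily the exact model proposed in the paper): with jobs processed in permutation order $\pi$ and $p_{j,k}$ the processing time of job $j$ on machine $k$,

$$C_{\pi(j),k} = \max\bigl(C_{\pi(j-1),k},\; C_{\pi(j),k-1}\bigr) + p_{\pi(j),k}, \qquad C_{\pi(0),k} = C_{\pi(j),0} = 0,$$

and the makespan $C_{\max} = C_{\pi(n),m}$ is minimized over all permutations $\pi$ of the $n$ jobs on $m$ machines.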
Abstract: Industrial robots become useless without end-effectors, which in many instances take the form of friction grippers. Commonly, friction grippers apply frictional forces to different objects on the basis of programmers' experience. This limits the effectiveness of the gripping force and may result in damage to the object. This paper describes the various stages of design and development of a low-cost sensor-based robotic gripper that facilitates the task of applying the right gripping force to different objects. The gripper is also equipped with range sensors in order to avoid collisions of the gripper with objects. It is a fully functional automated pick-and-place gripper which can be used in many industrial applications, yet it can also be altered or further developed to suit a larger number of industrial activities. The current design could lead to completely automated robot grippers able to improve the efficiency and productivity of industrial robots.
Abstract: This paper presents a perturbation-based search method to solve the unconstrained binary quadratic programming problem. The proposed algorithm was tested on some of the standard test problems, and results are reported for 10 instances each of 50-, 100-, 250-, and 500-variable problems. The performance of the proposed algorithm is compared with that of other heuristics and of optimization software. Based on the results, the proposed algorithm is computationally inexpensive, and the solutions obtained match the best known solutions for smaller problems. For larger instances, the algorithm finds solutions within 0.11% of the best known solution. Apart from being used as a stand-alone method, this algorithm could also be combined with other heuristics to find better solutions.
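As an illustration of the general idea only (the paper's exact perturbation scheme and parameters are not specified here), a minimal sketch of a perturbation-based search for UBQP might alternate greedy 1-flip local search with random multi-bit perturbations of the incumbent; the objective is f(x) = x^T Q x over binary vectors, and all parameter values below are assumptions:

```python
import numpy as np

def qval(x, Q):
    """Objective f(x) = x^T Q x for a binary vector x."""
    return x @ Q @ x

def local_search(x, Q):
    """Greedy 1-flip local search: keep flipping single bits while f improves."""
    best = qval(x, Q)
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            x[i] ^= 1                      # tentatively flip bit i
            v = qval(x, Q)
            if v > best:
                best, improved = v, True   # keep the improving flip
            else:
                x[i] ^= 1                  # revert a non-improving flip
    return x, best

def perturbation_search(Q, iters=200, strength=0.1, seed=0):
    """Repeat: perturb the best solution by flipping a random fraction of
    bits, re-run local search, and keep the best solution found."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    x, best = local_search(rng.integers(0, 2, n), Q)
    best_x = x.copy()
    for _ in range(iters):
        y = best_x.copy()
        flips = rng.choice(n, max(1, int(strength * n)), replace=False)
        y[flips] ^= 1                      # perturbation step
        y, v = local_search(y, Q)
        if v > best:
            best, best_x = v, y.copy()
    return best_x, best
```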
Abstract: Knowledge management (KM) is generally
considered to be a positive process in an organisation, facilitating
opportunities to achieve competitive advantage via better quality
information handling, compilation of expert know-how and rapid
response to fluctuations in the business environment. The KM
paradigm as portrayed in the literature informs the processes that can
increase intangible assets so that corporate knowledge is preserved.
However, in some instances, knowledge management exists in a
universe of dynamic tension among the conflicting needs to respect
privacy and intellectual property (IP), to guard against data theft, to
protect national security and to stay within the law. While the knowledge management literature focuses on the bright side of the paradigm, there is also a darker side in which knowledge is distorted, suppressed or misappropriated due to personal or organisational motives (the paradox). This paper describes the ethical paradoxes that occur within the taxonomy and deontology of knowledge management and suggests that recognising both the promises and pitfalls of KM requires wisdom.
Abstract: This paper presents the development of a Bayesian
belief network classifier for prediction of graft status and survival
period in renal transplantation using the patient profile information
prior to the transplantation. The objective was to explore feasibility
of developing a decision making tool for identifying the most suitable
recipient among the candidate pool members. The dataset was
compiled from the University of Toledo Medical Center Hospital
patients as reported to the United Network for Organ Sharing (UNOS), and contained 1228 patient records covering 1987 through 2009. The
Bayes net classifiers were developed using the Weka machine
learning software workbench. Two separate classifiers were induced
from the data set, one to predict the status of the graft as either failed
or living, and a second classifier to predict the graft survival period.
The classifier for graft status prediction performed very well with a
prediction accuracy of 97.8% and true positive rates of 0.967 and
0.988 for the living and failed classes, respectively. The second
classifier to predict the graft survival period yielded a prediction
accuracy of 68.2% and a true positive rate of 0.85 for the class
representing those instances with kidneys failing during the first year
following transplantation. Simulation results indicated that it is
feasible to develop a successful Bayesian belief network classifier for
prediction of graft status, but not the graft survival period, using the
information in the UNOS database.
Abstract: Artificial Immune Systems have been applied as heuristic algorithms for decades. Nevertheless, many of these applications exploited the algorithm's strengths but seldom proposed approaches for enhancing its efficiency. In this paper, a self-evolving Artificial Immune System is proposed that develops the T and B cells of the immune system and builds a self-evolving mechanism adapted to the complexity of different problems. This research focuses on enhancing the efficiency of clonal selection, which is responsible for producing high-affinity antibodies to resist invading antigens. T and B cells are the main mechanisms by which clonal selection produces different combinations of antibodies; their development therefore influences the efficiency of clonal selection in searching for better solutions. Furthermore, for better cooperation between the two cells, a co-evolutionary strategy is applied to coordinate more effective production of antibodies. Finally, flow-shop scheduling instances from the OR-Library are used to validate the proposed algorithm.
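For orientation, a minimal sketch of the clonal selection principle the abstract builds on (a standard CLONALG-style loop, not the paper's self-evolving T/B-cell mechanism; all parameter values are illustrative assumptions): high-ranked antibodies receive more clones, and clones of lower-ranked antibodies are hypermutated more strongly.

```python
import random

def clonal_selection(affinity, init_pop, n_gen=100, n_select=5,
                     clone_factor=3, mut_rate=0.1):
    """Standard clonal-selection loop over bit-string antibodies.
    `affinity` maps an antibody (list of 0/1 bits) to a score to maximize."""
    pop = [p[:] for p in init_pop]
    for _ in range(n_gen):
        pop.sort(key=affinity, reverse=True)
        clones = []
        for rank, ab in enumerate(pop[:n_select], start=1):
            n_clones = max(1, clone_factor * n_select // rank)  # better rank, more clones
            for _ in range(n_clones):
                c = ab[:]
                rate = mut_rate * rank / n_select   # hypermutation: worse rank mutates more
                for i in range(len(c)):
                    if random.random() < rate:
                        c[i] ^= 1
                clones.append(c)
        # survivor selection: best of parents and clones, population size kept
        pop = sorted(pop + clones, key=affinity, reverse=True)[:len(init_pop)]
    return pop[0]
```

With, e.g., `affinity=sum`, this maximizes a one-max bit string; a scheduling application would replace the affinity function with a (negated) makespan evaluation.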
Abstract: The selection of a particular type of mustard plant for plantation depends on its productivity (pod yield) at the stage of maturity. The growth of a mustard plant depends on parameters of the plant such as shoot length, number of leaves, number of roots and root length. As the plant grows, some leaves may fall and new leaves may appear, so the leaf count gives no reliable basis for a relationship with the seed weight at the mature stage of the plant. It is also not possible to measure the number of roots and the root length of a growing mustard plant, as the roots go ever deeper into the soil and such measurement would harm the plant. Only the shoot length, which increases over time, can be measured at different time instances. Weather parameters such as maximum and minimum humidity, rainfall, and maximum and minimum temperature may affect the growth of the plant. Pollution, water, soil, distance and crop management may also be dominant factors in the growth of the plant and its productivity. Considering all these parameters, the growth of the plant is very uncertain, so a fuzzy environment can be adopted for predicting the shoot length at maturity. Fuzzification of the data is based on certain membership functions. Here an effort has been made to fuzzify the original data using the Gaussian, triangular, S-, trapezoidal and L-functions. All fuzzified data are then defuzzified to recover the normal form. Finally, an error analysis (calculation of the forecasting error and average error) indicates which membership function is appropriate for fuzzifying the data and predicting the shoot length at maturity. The result is also verified using residual analysis (absolute residual, maximum of absolute residual, mean absolute residual, mean of mean absolute residual, median of absolute residual and standard deviation).
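The membership functions named above are standard; a minimal sketch of three of them (Gaussian, triangular, trapezoidal), with parameter values chosen purely for illustration rather than taken from the paper:

```python
import math

def gaussian_mf(x, c, sigma):
    """Gaussian membership: peaks at centre c with spread sigma."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def triangular_mf(x, a, b, c):
    """Triangular membership with feet a, c and peak b (a < b < c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trapezoidal_mf(x, a, b, c, d):
    """Trapezoidal membership: rises over a..b, flat over b..c, falls over c..d."""
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

# e.g. fuzzify a hypothetical shoot-length reading of 42 cm against a "tall" set
print(gaussian_mf(42.0, c=50.0, sigma=10.0))
```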
Abstract: This paper presents a new heuristic algorithm for the classical symmetric traveling salesman problem (TSP). The idea of the algorithm is to cut a TSP tour into overlapping blocks and then improve each block separately. It is conjectured that the chance of improving a good solution by moving a node to a position far away from its original one is small. By doing an intensive search in each block, it is possible to further improve a TSP tour that cannot be improved by other local search methods. To test the performance of the proposed algorithm, computational experiments are carried out on benchmark problem instances. The computational results show that the algorithm proposed in this paper is efficient for solving TSPs.
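A minimal sketch of the block-decomposition idea (2-opt is used here as the within-block improver; the paper's intensive block search may differ, and block/overlap sizes are assumptions): the tour is scanned with an overlapping window, and each window is improved independently while its endpoints stay fixed.

```python
def two_opt_block(tour, d, lo, hi):
    """2-opt restricted to positions lo..hi of the tour (segment endpoints fixed);
    d is a symmetric distance matrix, tour a list of node indices."""
    improved = True
    while improved:
        improved = False
        for i in range(lo, hi - 1):
            for j in range(i + 1, hi):
                a, b = tour[i - 1], tour[i]
                c, e = tour[j], tour[j + 1]
                if d[a][c] + d[b][e] < d[a][b] + d[c][e]:
                    tour[i:j + 1] = reversed(tour[i:j + 1])  # reverse improving segment
                    improved = True
    return tour

def block_improve(tour, d, block=20, overlap=10):
    """Slide an overlapping window over the tour and 2-opt each block."""
    n = len(tour)
    for start in range(1, n - 1, block - overlap):
        end = min(start + block, n - 2)
        two_opt_block(tour, d, start, end)
    return tour
```

Because each move only rearranges nodes within a block, no node travels far from its original position, matching the conjecture stated in the abstract.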
Abstract: In many applications there is a broad variety of
information relevant to a focal “object” of interest, and the fusion of such heterogeneous data types is desirable for classification and
categorization. While these various data types can sometimes be treated as orthogonal (such as the hull number, superstructure color,
and speed of an oil tanker), there are instances where the inference and the correlation between quantities can provide improved fusion
capabilities (such as the height, weight, and gender of a person). A
service-oriented architecture has been designed and prototyped to
support the fusion of information for such “object-centric” situations.
It is modular, scalable, and flexible, and designed to support new data sources, fusion algorithms, and computational resources without affecting existing services. The architecture is designed to simplify
the incorporation of legacy systems, support exact and probabilistic entity disambiguation, recognize and utilize multiple types of
uncertainties, and minimize network bandwidth requirements.
Abstract: Manufacturing industries face a crucial change as products and processes are required to be easily and efficiently reconfigurable and reusable. In order to stay competitive and flexible, situations also demand the global distribution of enterprises, which requires the implementation of efficient communication strategies. A prototype system called the “Broadcaster” has been developed under the assumption that the control environment description has been engineered using the component-based system paradigm. This prototype distributes information to a number of globally distributed partners via an adoption of the circular-based data processing mechanism. The work highlighted in this paper includes the implementation of this mechanism in the domain of the manufacturing industry. The proposed solution enables real-time remote propagation of machine information to a number of distributed supply chain client resources, such as an HMI, VRML-based 3D views and remote client instances, regardless of their distribution nature and/or their mechanisms. This approach is presented together with a set of evaluation results. The authors' main concentration surrounds the reliability and the performance metrics of the adopted approach. Performance evaluation is carried out in terms of the response times taken to process the data in this domain, compared with an alternative data processing implementation such as the linear queue mechanism. Based on the evaluation results obtained, the authors justify the benefits achieved from this implementation and highlight further research work to be carried out.
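The circular mechanism referred to above is, in essence, a ring buffer; a minimal generic sketch (a fixed-capacity circular queue, not the Broadcaster's actual implementation) shows the contrast with a linear queue: slots are reused via modular indexing instead of shifting or reallocating elements.

```python
class RingBuffer:
    """Fixed-capacity circular queue: O(1) put/get, slots reused modulo capacity."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0            # index of the oldest element
        self.count = 0           # number of elements currently stored

    def put(self, item):
        if self.count == len(self.buf):
            raise OverflowError("buffer full")
        self.buf[(self.head + self.count) % len(self.buf)] = item
        self.count += 1

    def get(self):
        if self.count == 0:
            raise IndexError("buffer empty")
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        self.count -= 1
        return item
```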
Abstract: This paper presents an overview of the multiobjective shortest path problem (MSPP) and a review of essential and recent issues regarding methods for its solution. The paper further explores a multiobjective evolutionary algorithm as applied to the MSPP and describes its behavior in terms of diversity of solutions, computational complexity, and optimality of solutions. Results show that the evolutionary algorithm can find diverse solutions to the MSPP in polynomial time (based on several network instances) and can be an alternative when other methods are trapped by the tractability problem.
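In the multiobjective setting described above, solution quality is judged by Pareto dominance; a minimal sketch of the dominance test and a non-dominated filter (objective vectors assumed to be minimized; the example values are illustrative only):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all <=, at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated objective vectors."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# e.g. bi-objective path costs (travel time, toll): (3, 6) is dominated by (3, 5)
print(pareto_front([(3, 5), (2, 7), (4, 4), (3, 6)]))
```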
Abstract: In this paper, we study the knapsack sharing problem, a variant of the well-known NP-hard single knapsack problem. We investigate the use of a tree search for optimally solving the problem. The method combines two complementary phases: an interval reduction search phase and a branch-and-bound procedure. First, the reduction phase applies a polynomial reduction strategy that decomposes the problem into a series of knapsack problems. Second, the tree search procedure is applied in order to attain a set of optimal capacities characterizing the knapsack problems. Finally, the performance of the proposed optimal algorithm is evaluated on a set of instances from the literature and its runtime is compared to that of the best exact algorithm in the literature.
Abstract: This paper presents an online method that learns the
corresponding points of an object from un-annotated grayscale images
containing instances of the object. In the first image processed, an ensemble of node points is automatically selected and then matched in the subsequent images. A Bayesian posterior
distribution for the locations of the nodes in the images is formed.
The likelihood is formed from Gabor responses and the prior assumes
the mean shape of the node ensemble to be similar in a translation
and scale free space. An association model is applied for separating
the object nodes and background nodes. The posterior distribution is
sampled with a sequential Monte Carlo method. The matched object
nodes are inferred to be the corresponding points of the object
instances. The results show that our system matches the object nodes
as accurately as other methods that train the model with annotated
training images.
Abstract: In this paper the multi-mode resource-constrained project scheduling problem with discounted cash flows is considered. Minimizing the makespan and maximizing the net present value (NPV) are the two common objectives that have been investigated in the literature. We apply an evolutionary algorithm, multi-objective particle swarm optimization (MOPSO), to find Pareto-front solutions. We use standard sets of instances from the project scheduling problem library (PSPLIB). The results are compared computationally with respect to different metrics taken from the literature on evolutionary multi-objective optimization.
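For reference, the canonical particle swarm update underlying MOPSO is sketched below; MOPSO variants additionally maintain an external archive of non-dominated solutions from which the leader is drawn, and the parameter values shown are common defaults, not the paper's settings.

```python
import random

def pso_step(x, v, pbest, leader, w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update for a single particle (lists of floats):
    inertia w, cognitive pull toward pbest, social pull toward the leader."""
    r1, r2 = random.random(), random.random()
    v = [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (ld - xi)
         for vi, xi, pb, ld in zip(v, x, pbest, leader)]
    x = [xi + vi for xi, vi in zip(x, v)]
    return x, v
```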
Abstract: This paper presents a new approach for the protection of Thyristor-Controlled Series Compensator (TCSC) lines using a Support Vector Machine (SVM). One SVM is trained for fault classification and another for section identification. This method uses three-phase current measurements, which results in better speed and accuracy than other SVM-based methods that use single-phase current measurements; this makes it suitable for real-time protection.
The method was tested on 10,000 data instances with a very wide
variation in system conditions such as compensation level, source
impedance, location of fault, fault inception angle, load angle at
source bus and fault resistance. The proposed method requires only
local current measurement.
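A minimal sketch of the two-SVM arrangement described above, using scikit-learn; the feature layout (values derived from the three phase currents) and all labels and numbers are assumptions for illustration, not the paper's actual preprocessing or data.

```python
import numpy as np
from sklearn.svm import SVC

# X: one row per disturbance; columns are hypothetical features derived from
# the three phase currents (e.g. magnitudes of Ia, Ib, Ic). Labels are invented.
X_train = np.array([[1.2, 0.3, 0.3], [1.1, 1.0, 0.2], [0.4, 0.4, 0.4]])
y_fault = ["AG", "AB", "ABC"]                             # fault-type labels
y_section = ["before_TCSC", "after_TCSC", "after_TCSC"]   # section labels

fault_clf = SVC(kernel="rbf").fit(X_train, y_fault)       # SVM 1: fault classification
section_clf = SVC(kernel="rbf").fit(X_train, y_section)   # SVM 2: section identification

x_new = np.array([[1.15, 0.35, 0.28]])
print(fault_clf.predict(x_new), section_clf.predict(x_new))
```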
Abstract: This paper presents a simple and effective method for approximate indexing of instances for instance-based learning. The method uses an interval tree to determine a good starting search point for the nearest neighbor. The search stops when an early stopping criterion is met. The method proved to be very effective, especially when only the first nearest neighbor is required.
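One way to realize the idea sketched above (an index supplies a good starting point, then the search stops early): order the instances by a 1-D key, seed the search at the nearest key, and scan outwards until the key gap alone exceeds the best distance found. This is a generic sketch under that interpretation, using a sorted array rather than the paper's interval tree.

```python
import bisect, math

def build_index(points):
    """Sort points (tuples of floats) by their first coordinate, the search key."""
    return sorted(points, key=lambda p: p[0])

def approx_nn(index, q):
    """Seed at the nearest key to q[0], expand outwards, and stop early when
    the key gap on both sides already exceeds the best distance found."""
    keys = [p[0] for p in index]
    mid = bisect.bisect_left(keys, q[0])
    best, best_d = None, math.inf
    lo, hi = mid - 1, mid
    while lo >= 0 or hi < len(index):
        lo_gap = q[0] - keys[lo] if lo >= 0 else math.inf
        hi_gap = keys[hi] - q[0] if hi < len(index) else math.inf
        if min(lo_gap, hi_gap) > best_d:
            break                          # early stop: key gap bounds true distance
        if hi_gap <= lo_gap:
            cand, hi = index[hi], hi + 1   # take the next candidate on the right
        else:
            cand, lo = index[lo], lo - 1   # take the next candidate on the left
        d = math.dist(cand, q)
        if d < best_d:
            best, best_d = cand, d
    return best
```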
Abstract: The paper focuses on the area of context modeling with respect to the specification of context-aware systems supporting ubiquitous applications. The proposed approach, followed within the SIMPLICITY IST project, uses a high-level system ontology to derive context models for system components, which are consequently mapped to the system's physical entities. For the definition of user- and device-related context models in particular, the paper suggests a standards-based process consisting of an analysis phase using the Common Information Model (CIM) methodology, followed by an implementation phase that defines 3GPP-based components. The benefits of this approach are further depicted by preliminary examples of XML grammars defining profiles, components and component instances, coupled with descriptions of respective ubiquitous applications.
Abstract: This paper presents a hybrid algorithm for solving a timetabling problem commonly encountered in many universities. The problem combines the teacher assignment and course scheduling problems simultaneously, and is presented as a mathematical programming model. However, the problem becomes intractable, and it is unlikely that a proven optimal solution can be obtained by an integer programming approach, especially for large problem instances. A hybrid algorithm that collaboratively combines an integer programming approach, a greedy heuristic and a modified simulated annealing algorithm is proposed to solve the problem. Several randomly generated data sets, of sizes comparable to those of an institution in Indonesia, are solved using the proposed algorithm. Computational results indicate that the algorithm can overcome the difficulties of large problem sizes encountered in previous related work.
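Of the three combined components, the simulated annealing step is the most generic; a minimal skeleton of the acceptance rule is sketched below. The neighbourhood move and the geometric cooling schedule are assumptions for illustration, not the paper's modified variant.

```python
import math, random

def simulated_annealing(cost, neighbor, init, t0=100.0, alpha=0.95, iters=10000):
    """Accept worse timetables with probability exp(-delta/T); T cools geometrically."""
    cur, cur_c = init, cost(init)
    best, best_c = cur, cur_c
    t = t0
    for _ in range(iters):
        nxt = neighbor(cur)                # e.g. swap two course-teacher assignments
        delta = cost(nxt) - cur_c
        if delta <= 0 or random.random() < math.exp(-delta / t):
            cur, cur_c = nxt, cur_c + delta   # accept the move
        if cur_c < best_c:
            best, best_c = cur, cur_c
        t *= alpha                         # geometric cooling
    return best, best_c
```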
Abstract: The Generalized Center String (GCS) problem generalizes the Common Approximate Substring and Common Substring problems. GCS is known to be NP-hard; the difficulty lies in the explosion of potential candidates, since the longest center string must be found without knowing in advance which sequences contain the motifs in any particular biological gene process. GCS can be solved by frequent pattern-mining techniques and is known to be fixed-parameter tractable with respect to the input sequence length and symbol set size. Efficient methods known as the Bpriori algorithms can solve GCS with reasonable time/space complexity; the Bpriori 2 and Bpriori 3-2 algorithms have been proposed to find center strings of any length together with the positions of all their instances in the input sequences. In this paper, we reduce the time/space complexity of the Bpriori algorithm with a Constraint-Based Frequent Pattern mining (CBFP) technique that integrates the ideas of constraint-based mining and FP-tree mining. The CBFP mining technique solves the GCS problem not only for center strings of any length, but also for the positions of all their mutated copies in the input sequences. It constructs a TRIE-like FP-tree to represent the mutated copies of center strings of any length, along with constraints to restrain the growth of the consensus tree. The complexity of the CBFP mining technique and of the Bpriori algorithm is analysed for the worst case and the average case. The correctness of the algorithm is demonstrated by comparison with the Bpriori algorithm on artificial data.
Abstract: The paper proposes and validates a new method of solving instances of the vehicle routing problem (VRP). The approach is based on a multiple agent system paradigm. The paper contains the VRP formulation, an overview of the multiple agent environment used and a description of the proposed implementation. The approach is validated experimentally. The experiment plan and the discussion of experiment results follow.