Abstract: We address the balancing problem of transfer lines, seeking the line balance that minimizes non-productive time. We focus on the tool change time and the face orientation change time, both of which influence the makespan. We consider machine capacity limitations and the technological constraints associated with the manufacturing process of automotive cylinder heads. The problem is represented by a mixed-integer programming model that distributes the design features among workstations and sequences the machining operations at minimum non-productive time. The proposed model is solved by an algorithm built on linearization schemes and a Benders decomposition approach. Experiments show that the algorithm efficiently reaches the exact solution of small and medium problem instances in reasonable time.
Abstract: The conservation programme of the Royal Thai Navy Sea Turtle Nursery, Phang-nga Province, Thailand faces a high mortality rate among juvenile green sea turtles (Chelonia mydas) during the nursing period. Therefore, from May to October 2012, postmortem examinations of juvenile green sea turtles were performed to determine the causes of death. Fresh postmortem tissues of 15 juvenile green sea turtles (1-3 months old) were investigated using the paraffin section technique. The results showed normal ultrastructure of all tissue organs. These cases reflect the health and stability of the environments in which the juvenile green sea turtles live and inform concern for their survival rate. The present article also provides guidance for a review of the biology, guidelines for appropriate postmortem tissue handling, normal histology, and sample collection procedures. The data also support the conservation of this endangered species by informing and encouraging people to protect the animals and their natural habitats.
Abstract: As air traffic increases at a hub airport, some flights cannot land or depart at their preferred target time, because the airport runways become occupied to near their capacity. This results in extra costs for both passengers and airlines: missed connecting flights, longer waits, higher fuel consumption, crew rescheduling, and so on. Hence, devising an appropriate scheduling method that determines a suitable runway and time for each flight, so as to use the hub capacity efficiently and minimize the related costs, is of great importance. In this paper, we present a mixed-integer zero-one model for scheduling a set of mixed landing and departing flights (whereas most previous studies considered only landings). Since the flight cost is strongly affected by the airline's level, we consider different airline categories in our model. The model minimizes a single objective, the sum of three terms: 1) the weighted deviation from target times, 2) the scheduled time of the last flight (i.e., the makespan), and 3) the workload imbalance across runways. We solve 10 simulated instances of different sizes, with up to 30 flights and 4 runways. Optimal solutions are obtained in reasonable time and compare favorably with the traditional First-Come-First-Serve (FCFS) rule, which is far from optimal in most cases.
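The FCFS baseline the abstract measures against can be sketched in a few lines. The flight tuples, the fixed separation gap, and the weights below are illustrative assumptions, not data or parameters from the paper:

```python
# Minimal sketch of a First-Come-First-Serve runway scheduler.
# SEPARATION is an assumed minimum time gap between consecutive
# flights on the same runway (illustrative value).
SEPARATION = 2

def fcfs_schedule(flights, n_runways):
    """Assign flights to runways in target-time order.

    flights: list of (flight_id, target_time, weight) tuples.
    Returns {flight_id: (runway, scheduled_time)}.
    """
    free_at = [0] * n_runways           # next free time of each runway
    schedule = {}
    for fid, target, _w in sorted(flights, key=lambda f: f[1]):
        r = min(range(n_runways), key=lambda i: free_at[i])  # earliest-free runway
        t = max(target, free_at[r])     # not before target, not while occupied
        schedule[fid] = (r, t)
        free_at[r] = t + SEPARATION
    return schedule

def weighted_deviation(flights, schedule):
    """First term of the abstract's objective: weighted deviation from targets."""
    return sum(w * abs(schedule[fid][1] - target) for fid, target, w in flights)
```

An exact model would trade flights across runways to shrink this deviation; FCFS simply commits to arrival order, which is why it drifts far from optimality on congested instances.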
Abstract: Renewable and non-renewable resource constraints have been extensively studied in the theoretical project scheduling literature. However, although cumulative resources are widespread in practical cases, the literature on project scheduling problems subject to these resources is scant. To study this type of resource further, in this paper we use the framework of a resource-constrained project scheduling problem (RCPSP) with finish-start precedence relations between activities, subject to cumulative resources in addition to the renewable resources. We develop a branch-and-bound algorithm for this problem by customizing the precedence tree algorithm of the RCPSP. We perform extensive experimental analysis of the algorithm to check its effectiveness and performance in solving different instances of the problem.
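The building block a precedence tree explores is a schedule for one activity ordering. A minimal serial schedule-generation sketch for finish-start precedence and a single renewable resource (the activity data and single-resource simplification are illustrative, not the paper's setting, which also has cumulative resources):

```python
# Serial schedule generation: place each activity, in index order, at
# the earliest time that respects precedence and resource capacity.
def serial_schedule(durations, demands, capacity, preds):
    """durations/demands: per-activity lists; preds: {activity: [predecessors]}.
    Returns {activity: start_time}."""
    horizon = sum(durations)
    usage = [0] * (horizon + max(durations))   # resource usage per period
    start = {}
    for a in range(len(durations)):
        # earliest start = latest finish among predecessors
        est = max((start[p] + durations[p] for p in preds.get(a, [])), default=0)
        t = est
        while any(usage[t + k] + demands[a] > capacity for k in range(durations[a])):
            t += 1                              # slide right until feasible
        start[a] = t
        for k in range(durations[a]):
            usage[t + k] += demands[a]
    return start
```

A branch-and-bound over the precedence tree would enumerate the feasible activity orderings fed to a generator like this, pruning branches whose bound cannot beat the incumbent makespan.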
Abstract: An ontology is widely used in many kinds of applications as a knowledge representation tool for domain knowledge. However, even when an ontology schema is well prepared by domain experts, adding instances to the ontology is tedious and cost-intensive. The most reliable and trustworthy way to add instances to the ontology is to gather them from tables in related Web pages. When populating instances automatically, the primary task is to find the most appropriate concept among all possible concepts in the ontology for a given table. This paper proposes a novel method for this problem that defines the similarity between the table and the concept using the overlap of their properties. In a series of experiments, the proposed method achieves 76.98% accuracy. This implies that the proposed method is a plausible way to automatically populate an ontology from Web tables.
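One plain reading of "overlap of their properties" is a set overlap between table column headers and concept properties; the sketch below uses Jaccard similarity under that assumption (the concept and column names are made-up examples, and the paper's actual similarity may be defined differently):

```python
# Match a Web table to the ontology concept whose properties
# overlap its column headers the most (Jaccard overlap, assumed).
def overlap_similarity(table_columns, concept_properties):
    cols = {c.lower() for c in table_columns}
    props = {p.lower() for p in concept_properties}
    if not cols or not props:
        return 0.0
    return len(cols & props) / len(cols | props)

def best_concept(table_columns, ontology):
    """ontology: {concept_name: [property, ...]}. Returns best concept name."""
    return max(ontology, key=lambda c: overlap_similarity(table_columns, ontology[c]))
```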
Abstract: This paper investigates the issue of building decision trees from data with imprecise class values, where imprecision is encoded in the form of possibility distributions. The Information Affinity similarity measure is introduced into the well-known gain ratio criterion in order to assess the homogeneity of a set of possibility distributions representing instances' classes belonging to a given training partition. For the experimental study, we propose an information affinity based performance criterion, which we use to show the performance of the approach on well-known benchmarks.
Abstract: Ontology Matching is a task needed in various applications, for example for comparison or merging purposes. In the literature, many algorithms solving the matching problem can be found, but most of them do not consider instances at all. Mappings are determined by calculating the string similarity of labels, by recognizing linguistic word relations (synonyms, subsumptions, etc.) or by analyzing the (graph) structure. Since instances are often modeled within the ontology, and since the set of instances describes the meaning of the concepts better than their meta information, instances should definitely be incorporated into the matching process. In this paper several novel instance-based matching algorithms are presented which enhance the quality of matching results obtained with common concept-based methods. Different kinds of formalisms are used to classify concepts on account of their instances and finally to compare the concepts directly.
Keywords: Instances, Ontology Matching, Semantic Web
Abstract: Classification is one of the primary themes in computational biology. The accuracy of classification strongly depends on the quality of a dataset, so we need a method to evaluate this quality. In this paper, we propose a new graphical analysis method using the 'Membership-Deviation Graph (MDG)' for analyzing the quality of a dataset. The MDG represents the degree of membership and the deviations for instances of a class in the dataset. The result of the MDG analysis is used to understand specific features and to select the best feature for classification.
Abstract: This study considers the problem of determining operation and maintenance schedules for a containership equipped with components while it sails according to a pre-determined navigation schedule. The operation schedule, which specifies the working time of each component, determines the due-date of each maintenance activity, and the maintenance schedule specifies the actual start time of each maintenance activity. The main constraints are component requirements, workforce availability, working time limitations, and inter-maintenance times. To represent the problem mathematically, a mixed integer programming model is developed. Then, due to the problem's complexity, we suggest a heuristic for the objective of minimizing the sum of earliness and tardiness between the due-date and the starting time of each maintenance activity. Computational experiments were conducted on various test instances and the results are reported.
Abstract: Ants are fascinating creatures that demonstrate the ability to find food and bring it back to their nest. Their ability, as a colony, to find paths to food sources has inspired the development of algorithms known as Ant Colony Systems (ACS). The principle of cooperation forms the backbone of such algorithms, which are commonly used to find solutions to problems such as the Traveling Salesman Problem (TSP). Ants communicate with each other through chemical substances called pheromones. Modeling individual ants' ability to manipulate this substance can help an ACS find the best solution. This paper introduces a Dynamic Ant Colony System with three-level updates (DACS3) that enhances an existing ACS. Experiments were conducted to observe single-ant behavior in a colony of Malaysian house red ants, and this behavior was incorporated into the DACS3 algorithm. We benchmark the performance of DACS3 against DACS on TSP instances ranging from 14 to 100 cities. The results show that DACS3 achieves shorter distances in most cases and also runs considerably faster than DACS.
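The standard ACS pheromone machinery that DACS3 builds its three-level updates on can be sketched as follows. The parameter values are conventional ACS defaults, not the DACS3 settings, and the paper's third update level is not shown:

```python
# Two classic ACS pheromone updates: a local evaporation applied as
# each ant crosses an edge, and a global reinforcement applied only
# to the best tour found so far. Edges are undirected (frozenset keys).
RHO, XI, TAU0 = 0.1, 0.1, 0.01   # evaporation rates and initial pheromone

def local_update(tau, edge):
    """Evaporate pheromone on an edge an ant has just crossed."""
    tau[edge] = (1 - XI) * tau[edge] + XI * TAU0

def global_update(tau, best_tour, best_len):
    """Reinforce only the edges of the best tour, by 1/length."""
    deposit = 1.0 / best_len
    for i in range(len(best_tour)):
        edge = frozenset((best_tour[i], best_tour[(i + 1) % len(best_tour)]))
        tau[edge] = (1 - RHO) * tau[edge] + RHO * deposit
```

The local update discourages ants from repeating the path of the ant just ahead of them, while the global update concentrates search around the incumbent best tour.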
Abstract: This paper considers the problem of scheduling maintenance actions for identical aircraft gas turbine engines. Each turbine consists of parts which frequently require replacement. A finite inventory of spare parts is available and all parts are ready for replacement at any time. The inventory consists of both new and refurbished parts; hence, these parts have different field lives. The goal is to find a replacement part sequencing that maximizes the time that the aircraft will keep functioning before the inventory is replenished. The problem is formulated as an identical parallel machine scheduling problem in which the minimum completion time has to be maximized. Two models have been developed. The first is an optimization model based on a 0-1 linear programming formulation, while the second is an approximate procedure which consists of decomposing the problem into several two-machine subproblems, each of which is optimally solved using the first model. Both models have been implemented using Lingo and tested on two sets of randomly generated data with up to 150 parts and 10 turbines. Experimental results show that the optimization model is able to solve only instances with no more than 4 turbines, while the decomposition procedure often provides near-optimal solutions within a maximum CPU time of 3 seconds.
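The max-min structure of this problem has a simple greedy baseline worth keeping in mind: always give the longest remaining part life to the turbine with the smallest accumulated life. This is only an illustrative heuristic with made-up data, not the paper's 0-1 model or decomposition procedure:

```python
# Greedy longest-first assignment for maximizing the minimum total
# field life across identical turbines.
def greedy_assign(part_lives, n_turbines):
    """Returns (per-turbine totals, per-turbine assigned lives)."""
    totals = [0] * n_turbines
    assignment = [[] for _ in range(n_turbines)]
    for life in sorted(part_lives, reverse=True):
        i = min(range(n_turbines), key=lambda t: totals[t])  # most-starved turbine
        totals[i] += life
        assignment[i].append(life)
    return totals, assignment
```

The exact 0-1 model can beat this greedy on instances where balancing requires swapping parts between turbines, which is where the two-machine decomposition earns its near-optimality.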
Abstract: In recent times, the problem of Unsolicited Bulk Email (UBE), commonly known as Spam Email, has grown at a tremendous rate. We present a survey-based analysis of UBE classifications in various research works. There are many research instances of classification between spam and non-spam emails, but very few are available for the classification of spam emails per se. This paper does not intend to assert that some UBE classification is better than the others, nor does it propose any new classification, but it bemoans the lack of harmony on the number and definition of categories proposed by different researchers. The paper also elaborates on factors like the intent of the spammer, the content of UBE, and the ambiguity among the different categories proposed in related research works on UBE classification.
Abstract: Research into the classification of sonar signals has been taken up as a challenging task for neural networks. This paper investigates the design of an optimal classifier using a Multilayer Perceptron Neural Network (MLP NN) and Support Vector Machines (SVM). Results obtained on sonar data sets suggest that the SVM classifier performs well in comparison with the well-known MLP NN classifier. On the test instances, an average classification accuracy of 91.974% is achieved with the SVM classifier and 90.3609% with the MLP NN classifier. The area under the Receiver Operating Characteristics (ROC) curve for the proposed SVM classifier on the test data set is 0.981183, which is very close to unity and clearly confirms the excellent quality of the proposed classifier. The SVM classifier employed in this paper, implemented using the kernel Adatron algorithm, is seen to be robust and relatively insensitive to parameter initialization in comparison to the MLP NN.
Abstract: Speedups from mapping four real-life DSP applications on an embedded system-on-chip that couples coarse-grained reconfigurable logic with an instruction-set processor are presented. The reconfigurable logic is realized by a 2-dimensional array of Processing Elements. A design flow for improving application performance is proposed. Critical software parts, called kernels, are accelerated on the Coarse-Grained Reconfigurable Array. The kernels are detected by profiling the source code. For mapping the detected kernels on the reconfigurable logic, a priority-based mapping algorithm has been developed. Two 4x4 array architectures, which differ in their interconnection structure among the Processing Elements, are considered. The experiments on eight different instances of a generic system show significant overall application speedups for the four applications. The performance improvements range from 1.86 to 3.67, with an average value of 2.53, compared with an all-software execution. These speedups are quite close to the maximum theoretical speedups imposed by Amdahl's law.
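The Amdahl's-law ceiling the abstract refers to is a one-line formula: if a fraction f of execution time is accelerated by factor s, the overall speedup cannot exceed 1 / ((1 - f) + f / s). The numbers below are illustrative, not the paper's profiling results:

```python
# Amdahl's-law bound on overall speedup when a fraction f of the
# runtime is accelerated by factor s (the rest runs unchanged).
def amdahl_speedup(f, s):
    return 1.0 / ((1.0 - f) + f / s)

# Example: kernels covering 80% of runtime, accelerated 10x,
# bound the overall speedup at 1 / (0.2 + 0.08) ~= 3.57.
```

Reported speedups of 1.86-3.67 being "quite close" to this bound means the profiled kernels dominate the runtime and the array accelerates them strongly.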
Abstract: The heuristic decision rules used for project scheduling vary depending on the project's size, complexity, duration, personnel, and owner requirements. The concept of project complexity has received little detailed attention. The need to differentiate between easy and hard problem instances, and the interest in isolating the fundamental factors that determine the computing effort required by these procedures, inspired a number of researchers to develop various complexity measures.
In this study, the most common measures of project complexity are presented and a new measure of project complexity is developed. The main advantage of the proposed measure is that it considers size, shape and logic characteristics, time characteristics, and resource demand and availability characteristics, as well as the number of critical activities and critical paths. The sensitivity of the proposed measure to the complexity of project networks has been tested and evaluated against the other complexity measures on the fifty project networks considered in this study. The developed measure showed more sensitivity to changes in the network data and gives accurate quantified results when comparing the complexities of networks.
Abstract: Col is a classic combinatorial game played on graphs, and solving a general instance is a PSPACE-complete problem. However, winning strategies can be found for some specific graph instances. In this paper, the solution of Col on complete k-ary trees is presented.
Abstract: The behavior of Radial Basis Function (RBF) networks greatly depends on how the center points of the basis functions are selected. In this work we investigate the use of instance reduction techniques, originally developed to reduce the storage requirements of instance-based learners, for this purpose. Five instance-based reduction techniques were used to determine the set of center points, and RBF networks were trained using these sets of centers. The performance of the RBF networks is studied in terms of classification accuracy and training time. The results obtained were compared with two reference networks: RBF networks that use all instances of the training set as center points (RBF-ALL) and Probabilistic Neural Networks (PNN). The former achieves high classification accuracy and the latter requires smaller training time. Results showed that RBF networks trained using sets of centers located by noise-filtering techniques (ALLKNN and ENN), rather than pure reduction techniques, produce the best results in terms of classification accuracy; these networks require less training time than RBF-ALL and achieve higher classification accuracy than PNN. Thus, using ALLKNN and ENN to select center points gives a better combination of classification accuracy and training time. Our experiments also show that using the reduced sets to train the networks is beneficial, especially in the presence of noise in the original training sets.
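The ENN noise filter the abstract credits with the best centers is simple to sketch: drop every instance whose class disagrees with the majority of its k nearest neighbors. The data points in the test are illustrative, and plain squared Euclidean distance is an assumption:

```python
# Edited Nearest Neighbor (ENN) filtering: keep only instances whose
# label matches the majority label of their k nearest neighbors.
def enn_filter(points, labels, k=3):
    """Returns the indices of instances kept by ENN."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    kept = []
    for i, (p, y) in enumerate(zip(points, labels)):
        neighbors = sorted((j for j in range(len(points)) if j != i),
                           key=lambda j: dist2(p, points[j]))[:k]
        votes = [labels[j] for j in neighbors]
        if votes.count(y) > len(votes) / 2:   # majority of neighbors agree
            kept.append(i)
    return kept
```

The surviving instances would then serve as RBF center points; mislabeled or borderline points, which make poor centers, are exactly the ones the majority vote removes.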
Abstract: The third phase of the web, the semantic web, requires many web pages annotated with metadata, so a crucial question is where to acquire these metadata. In this paper we propose a semi-automatic method to annotate the texts of documents and web pages, employing a quite comprehensive knowledge base to categorize instances with regard to an ontology. The approach is evaluated against manual annotations and against one of the most popular annotation tools, which works in the same way as ours. The approach is implemented in the .NET framework as an annotation tool for the Semantic Web and uses WordNet as its knowledge base.
Abstract: The vehicle routing problem (VRP) is a famous combinatorial optimization problem. Because of its well-known difficulty, metaheuristics are the most appropriate methods to tackle large and realistic instances. The goal of this paper is to highlight the key ideas for designing VRP metaheuristics according to the following criteria: efficiency, speed, robustness, and ability to take advantage of the problem structure. Such elements can obviously be used to build solution methods for other combinatorial optimization problems, at least in the deterministic field.
Abstract: The production of a plant can be measured in terms of seeds, and the generation of seeds plays a critical role in our social and daily life. The fruit production that generates seeds depends on various parameters of the plant, such as shoot length, leaf number, root length, root number, etc. While the plant is growing, some leaves may be lost and some new leaves may appear, so it is very difficult to use the number of leaves to calculate the growth of the plant. It is also cumbersome to measure the number of roots and the growth in root length continuously at several time instances after a certain initial period, because the roots grow deeper and deeper underground over time. In contrast, the shoot length of the plant grows over time and can be measured at different time instances. So the growth of the plant can be measured using shoot length data recorded at different time instances after plantation. Environmental parameters like temperature, rainfall, humidity and pollution also play a role in the production of yield, and soil, crop and spacing management are applied to produce the maximum yield. The growth data of the shoot length of some mustard plants at the initial stage (7, 14, 21 and 28 days after plantation) are available from a statistical survey by a group of scientists under the supervision of Prof. Dilip De. In this paper, the initial shoot length of Ken (one type of mustard plant) has been used as the initial data. Statistical models and methods of fuzzy logic and neural networks have been tested on this mustard plant, and based on error analysis (calculation of average error) the model with minimum error has been selected for the assessment of shoot length at maturity. Finally, all these methods have been tested on the other types of mustard plants, and the soft computing model with the minimum error across all types has been selected for calculating the predicted growth of shoot length. The shoot length at maturity of all types of mustard plants has been calculated using the statistical method on the predicted shoot length data.
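The select-by-average-error idea in the abstract can be sketched with a least-squares line as one candidate model. The readings below are made-up numbers, not the survey values, and the candidate set is only illustrative (the paper compares statistical, fuzzy, and neural models):

```python
# Fit candidate growth models to early shoot-length readings and keep
# the one with the smallest average absolute error.
def avg_error(model, days, lengths):
    return sum(abs(model(d) - y) for d, y in zip(days, lengths)) / len(days)

def fit_linear(days, lengths):
    """Ordinary least-squares line y = a + b * d."""
    n = len(days)
    mx, my = sum(days) / n, sum(lengths) / n
    b = sum((d - mx) * (y - my) for d, y in zip(days, lengths)) / \
        sum((d - mx) ** 2 for d in days)
    a = my - b * mx
    return lambda d: a + b * d

def select_model(candidates, days, lengths):
    """candidates: {name: model}. Returns the (name, model) with least error."""
    return min(candidates.items(), key=lambda kv: avg_error(kv[1], days, lengths))
```

Once selected on the day 7-28 readings, the winning model would be evaluated at the maturity date to give the predicted shoot length.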