Abstract: The growing outsourcing of logistics services, driven by
the ongoing trend in firms towards cost reduction and increased
efficiency, means that it is becoming more and more important for
the companies doing the outsourcing to carry out a proper
evaluation.
The multiple definitions and measures of logistics service
performance found in research on the topic create a certain degree of
confusion and do not clear the way towards the proper measurement
of this performance. Do a model and a specific set of indicators exist
that can be considered appropriate for measuring the performance of
logistics services outsourcing in industrial environments? Are said
indicators in keeping with the objectives pursued by outsourcing? We
aim to answer these and other research questions in the study we have
initiated in the field within the framework of the international High
Performance Manufacturing (HPM) project of which this paper
forms part.
As the first stage of this research, this paper reviews articles
dealing with the topic published in the last 15 years with the aim of
detecting the models most used to make this measurement and
determining which performance indicators are proposed as part of
said models and which are most used. The first steps are also taken in
determining whether these indicators, financial and operational, cover
the aims that are being pursued when outsourcing logistics services.
The findings show there is a wide variety of both models and
indicators used. This would seem to testify to the need to continue
with our research in order to try to propose a model and a set of
indicators for measuring the performance of logistics services
outsourcing in industrial environments.
Abstract: This work addresses the problem of optimizing a
completely batch water-using network with multiple contaminants,
where the flow change caused by mass transfer is taken into
consideration for the first time. A mathematical technique for
optimizing the water-using network is proposed based on a
source-tank-sink superstructure. The task is to obtain the freshwater
usage, the recycle assignments among water-using units, the
wastewater discharge and a steady water-using network configuration
by the following steps. Firstly, the operating sequences of the
water-using units are determined by time constraints. Next, the
superstructure is simplified by eliminating reuse and recycle from
water-using units with the maximum concentration of key
contaminants. Then, the non-linear programming model is solved with
GAMS (General Algebraic Modeling System) for minimum freshwater
usage, maximum water recycle and minimum wastewater discharge.
Finally, the number of operating periods is calculated to acquire the
steady network configuration. A case study is solved to illustrate the
applicability of the proposed approach.
Abstract: This study proposes a new recommender system based on collaborative folksonomy. The purpose of the proposed system is to recommend Internet resources (such as books, articles, documents, pictures, audio and video) to users. The proposed method includes four steps: creating the user profile based on tags, grouping similar users into clusters using agglomerative hierarchical clustering, finding similar resources based on the user's past collections by using content-based filtering, and recommending similar items to the target user. This study examines the system's performance on a dataset collected from "del.icio.us", a well-known social bookmarking website. Experimental results show that the proposed tag-based collaborative and content-based hybrid recommender system is promising and effective for folksonomy-based bookmarking websites.
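As a rough illustration, the profile-building and content-based recommendation steps might be sketched as follows (a minimal bag-of-tags model with cosine similarity; the clustering step is omitted, and all names here are illustrative assumptions, not the authors' implementation):

```python
from collections import Counter
from math import sqrt

def tag_profile(tagged_items):
    """Step 1: build a user profile as a bag of tags over collected items."""
    profile = Counter()
    for tags in tagged_items.values():
        profile.update(tags)
    return profile

def cosine(a, b):
    """Cosine similarity between two tag-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user_items, candidate_items, top_n=2):
    """Steps 3-4: rank unseen resources by tag similarity to the profile."""
    profile = tag_profile(user_items)
    scored = [(cosine(profile, Counter(tags)), item)
              for item, tags in candidate_items.items()
              if item not in user_items]
    return [item for score, item in sorted(scored, reverse=True)[:top_n]]
```

For example, a user who has collected resources tagged "python" and "ml" would be recommended candidate resources sharing those tags ahead of unrelated ones.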
Abstract: The usual correctness condition for a schedule of
concurrent database transactions is some form of serializability of
the transactions. For general forms, the problem of deciding whether
a schedule is serializable is NP-complete. In those cases other approaches
to proving correctness, using proof rules that allow the steps
of the proof of serializability to be guided manually, are desirable.
Such an approach is possible in the case of conflict serializability
which is proved algebraically by deriving serial schedules using
commutativity of non-conflicting operations. However, conflict serializability
can be an unnecessarily strong form of serializability restricting
concurrency and thereby reducing performance. In practice,
weaker, more general, forms of serializability for extended models of
transactions are used. Currently, there are no known methods using
proof rules for proving those general forms of serializability. In this
paper, we define serializability for an extended model of partitioned
transactions, which we show to be as expressive as serializability
for general partitioned transactions. An algebraic method for proving
general serializability is obtained by giving an initial-algebra specification
of serializable schedules of concurrent transactions in the
model. This demonstrates that it is possible to conduct algebraic
proofs of correctness of concurrent transactions in general cases.
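The classical conflict-serializability test that the paper generalizes can be sketched directly: build the precedence graph of conflicting operations and check it for cycles (a minimal illustration of the standard test, not the paper's algebraic method):

```python
def precedence_graph(schedule):
    """Edge Ti -> Tj when an op of Ti conflicts with a later op of Tj.
    Two operations conflict if they are from different transactions,
    touch the same data item, and at least one of them is a write."""
    edges = set()
    for i, (t1, op1, x1) in enumerate(schedule):
        for t2, op2, x2 in schedule[i + 1:]:
            if t1 != t2 and x1 == x2 and 'w' in (op1, op2):
                edges.add((t1, t2))
    return edges

def is_conflict_serializable(schedule):
    """A schedule is conflict serializable iff its precedence graph is acyclic."""
    edges = precedence_graph(schedule)
    nodes = {t for t, _, _ in schedule}
    # Kahn-style elimination: repeatedly remove nodes with no incoming edge.
    while nodes:
        free = {n for n in nodes if not any(v == n for _, v in edges)}
        if not free:
            return False  # a cycle remains
        nodes -= free
        edges = {(u, v) for u, v in edges if u in nodes and v in nodes}
    return True
```

A schedule is given as a list of (transaction, operation, item) triples, e.g. `("T1", "r", "x")`.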
Abstract: Text mining applies knowledge discovery techniques to
unstructured text; it is also termed knowledge discovery in text
(KDT) or text data mining. The decision tree approach is most useful
in classification problems. With this technique, a tree is constructed
to model the classification process. There are two basic steps in the
technique: building the tree and applying the tree to the database.
This paper describes a proposed C5.0 classifier that adds rulesets,
cross-validation and boosting to the original C5.0 in order to reduce
the error rate. The feasibility and benefits of the proposed approach
are demonstrated on a medical data set (hypothyroid). It is shown
that the performance of a classifier on the training cases from which
it was constructed gives a poor estimate of its accuracy; a more
reliable estimate is obtained by sampling or by using a separate test
file, so that, either way, the classifier is evaluated on cases that
were not used to build it. If the cases in hypothyroid.data and
hypothyroid.test were shuffled and divided into a new 2772-case
training set and a 1000-case test set, C5.0 might construct a different
classifier with a lower or higher error rate on the test cases. An
important feature of See5 is its ability to generate classifiers called
rulesets. The ruleset has an error rate of 0.5% on the test cases. The
standard errors of the means provide an estimate of the variability of
the results. One way to get a more reliable estimate of predictive
accuracy is f-fold cross-validation: the error rate of a classifier
produced from all the cases is estimated as the ratio of the total
number of errors on the hold-out cases to the total number of cases.
The Boost option with x trials instructs See5 to construct up to
x classifiers in this manner. Trials over numerous datasets, large and
small, show that on average 10-classifier boosting reduces the error
rate for test cases by about 25%.
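The f-fold cross-validation estimate described above can be sketched as follows (a trivial majority-class predictor stands in for C5.0, which is a commercial tool; all names here are illustrative):

```python
def majority_class(train):
    """Trivial stand-in classifier: predict the most frequent training label."""
    labels = [label for _, label in train]
    return max(set(labels), key=labels.count)

def cross_validation_error(cases, f=10):
    """f-fold CV: total errors on all hold-out folds / total number of cases."""
    errors = 0
    folds = [cases[i::f] for i in range(f)]
    for i, hold_out in enumerate(folds):
        # Train on every fold except the hold-out fold.
        train = [c for j, fold in enumerate(folds) if j != i for c in fold]
        predicted = majority_class(train)
        errors += sum(1 for _, label in hold_out if label != predicted)
    return errors / len(cases)
```

Each case is a (features, label) pair; the returned value is the pooled hold-out error rate.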
Abstract: Many factors affect the success of Machine Learning
(ML) on a given task. The representation and quality of the instance
data is first and foremost. If there is much irrelevant and redundant
information present or noisy and unreliable data, then knowledge
discovery during the training phase is more difficult. It is well known
that data preparation and filtering steps take a considerable amount of
processing time in ML problems. Data pre-processing includes data
cleaning, normalization, transformation, feature extraction and
selection, etc. The product of data pre-processing is the final training
set. It would be convenient if a single sequence of data pre-processing
algorithms gave the best performance for every data set, but this does
not happen. Thus, we present the best-known algorithms for each
step of data pre-processing so that one can achieve the best
performance for a given data set.
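Two of the pre-processing steps named above, cleaning and normalization, can be sketched minimally as follows (assuming numeric rows with None marking missing values; an illustrative pipeline, not a survey of the algorithms the paper reviews):

```python
def clean(rows):
    """Data cleaning: drop rows containing missing values (None)."""
    return [row for row in rows if None not in row]

def min_max_normalize(rows):
    """Normalization: rescale each feature column to the [0, 1] range."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) if h != l else 0.0
             for v, l, h in zip(row, lo, hi)]
            for row in rows]

def preprocess(rows):
    """A minimal pipeline: cleaning followed by normalization."""
    return min_max_normalize(clean(rows))
```

The output of such a pipeline is the final training set the abstract refers to.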
Abstract: Lean manufacturing is a production philosophy made
popular by Toyota Motor Corporation (TMC). It is globally known as
the Toyota Production System (TPS) and has the ultimate aim of
reducing cost by thoroughly eliminating wastes, or muda. TPS
embraces Just-in-Time (JIT) manufacturing, achieving cost
reduction through lead-time reduction. JIT manufacturing can be
achieved by implementing a Pull system in production.
Furthermore, TPS aims to improve productivity and to create
continuous flow in production by arranging the machines and
processes in cellular configurations. This is called Cellular
Manufacturing Systems (CMS). This paper studies the integration of
CMS with the Pull system to establish a Big Island-Pull system
production for High Mix Low Volume (HMLV) products in an
automotive component industry. The paper uses the built-in JIT
system steps adapted from TMC to create the Pull system production
and also to create a shojinka line which, according to takt time, has
the flexibility to adapt to demand changes simply by adding and
removing manpower. This leads to optimization in production.
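The takt-time arithmetic underlying such a shojinka line can be sketched as follows (the function names and the simple sizing rule are illustrative assumptions, not the paper's method):

```python
from math import ceil

def takt_time(available_seconds, demand_units):
    """Takt time: available production time divided by customer demand."""
    return available_seconds / demand_units

def operators_needed(total_cycle_time, takt):
    """Shojinka sizing: manpower scales with the total manual work content
    divided by the takt time, rounded up to whole operators."""
    return ceil(total_cycle_time / takt)
```

When demand rises, takt time falls and the required headcount grows; when demand falls, operators can be removed from the cell.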
Abstract: It has been shown that a load discontinuity at the end of
an impulse will result in an extra impulse and hence an extra amplitude
distortion if a step-by-step integration method is employed to yield the
shock response. In order to overcome this difficulty, three remedies
are proposed to reduce the extra amplitude distortion. The first remedy
is to solve the momentum equation of motion instead of the force
equation of motion in the step-by-step solution of the shock response,
where an external momentum is used in the solution of the momentum
equation of motion. Since the external momentum is a resultant of the
time integration of external force, the problem of load discontinuity
will automatically disappear. The second remedy is to perform a single
small time step immediately upon termination of the applied impulse
while the other time steps can still be conducted by using the time step
determined from general considerations. This is because the extra
impulse caused by a load discontinuity at the end of an impulse is
almost linearly proportional to the step size. Finally, the third remedy
is to use the average value of the two different values at the integration
point of the load discontinuity to replace the use of one of them for
loading input. The basic motivation of this remedy originates from the
concept of no loading input error associated with the integration point
of load discontinuity. The feasibility of the three remedies is
analytically explained and numerically illustrated.
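The third remedy can be illustrated with a minimal sketch: an undamped single-degree-of-freedom system integrated by the explicit central-difference scheme (an assumed integrator, since the abstract does not fix one), where the load input at the integration point that falls on the discontinuity is the average of the two load values:

```python
def rectangular_impulse(t, f0, td):
    """Rectangular impulse: F = f0 for t < td, 0 afterwards."""
    return f0 if t < td else 0.0

def load_input(t, f0, td):
    """Remedy 3: when an integration point falls exactly on the
    discontinuity, use the average of the two load values (f0 and 0)
    instead of either one alone."""
    if abs(t - td) < 1e-12:
        return 0.5 * (f0 + 0.0)
    return rectangular_impulse(t, f0, td)

def central_difference(m, k, f0, td, dt, steps):
    """Explicit central-difference integration of m*x'' + k*x = F(t)."""
    x_prev, x = 0.0, 0.0  # at-rest initial conditions
    history = [x]
    for n in range(steps):
        t = n * dt
        f = load_input(t, f0, td)
        x_next = 2 * x - x_prev + dt ** 2 * (f - k * x) / m
        x_prev, x = x, x_next
        history.append(x)
    return history
```

The averaging line is the whole point of the remedy; everything else is a standard shock-response time march.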
Abstract: In this paper, a new face recognition method based on
PCA (Principal Component Analysis), LDA (Linear Discriminant
Analysis) and neural networks is proposed. This method consists of
four steps: i) Preprocessing, ii) Dimension reduction using PCA, iii)
feature extraction using LDA and iv) classification using neural
network. The combination of PCA and LDA is used to improve the
capability of LDA when only a few image samples are available, and
the neural classifier is used to reduce the number of misclassifications
caused by non-linearly separable classes. The proposed method was
tested on the Yale face database. Experimental results on this database
demonstrated the effectiveness of the proposed method for face
recognition with less misclassification in comparison with previous
methods.
Abstract: This research aims to create a suitable model of distance training for community leaders in the upper northeastern region of Thailand. The research process is divided into four steps: the first step is to analyze relevant documents; the second step consists of in-depth interviews with experts; the third step is concerned with constructing a model; and the fourth step involves model validation by expert assessment. The findings reveal two important components for constructing an appropriate model of distance training for community leaders in the upper northeastern region. The first component consists of the context of technology management, e.g., principles, policy and goals. The second component can be viewed in two ways: firstly, there are elements comprising input, process, output and feedback; secondly, the sub-components include the steps and process in training. The expert assessments indicate that the constructed model is consistent, suitable and, overall, highly appropriate.
Abstract: In this paper, an automatic determination algorithm for nuclear magnetic resonance (NMR) spectra of the metabolites in the living body by magnetic resonance spectroscopy (MRS), without human intervention or complicated calculations, is presented. In this method, the problem of NMR spectrum determination is transformed into the determination of the parameters of a mathematical model of the NMR signal. To calculate these parameters efficiently, a new model called the modified Hopfield neural network is designed. The main achievement of this paper over the work in the literature [30] is that the speed of the modified Hopfield neural network is accelerated. This is done by applying cross correlation in the frequency domain between the input values and the input weights. The modified Hopfield neural network can process complex-valued signals perfectly without any additional computation steps. This is a valuable advantage, as NMR signals are complex-valued. In addition, a technique called "modified sequential extension of section (MSES)" that takes into account the damping rate of the NMR signal is developed to be faster than that presented in [30]. Simulation results show that the calculation precision of the spectrum improves when MSES is used along with the neural network. Furthermore, MSES is found to reduce the local minimum problem in Hopfield neural networks. Moreover, the performance of the proposed method is evaluated, and there is no effect on the performance of calculations when using the modified Hopfield neural networks.
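The frequency-domain cross correlation used to accelerate the network can be illustrated with a minimal stdlib sketch (a naive O(n²) DFT stands in for the FFT, and this is only an illustration of the identity, not the paper's implementation):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(n^2), for illustration only)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]

def cross_correlation_freq(x, w):
    """Circular cross-correlation computed in the frequency domain:
    corr = IDFT(DFT(x) * conj(DFT(w))) -- one complex multiply per bin
    instead of a full sliding dot product in the time domain."""
    X, W = dft(x), dft(w)
    return idft([a * b.conjugate() for a, b in zip(X, W)])
```

Correlating a signal against a unit impulse returns the signal itself, which makes the identity easy to verify.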
Abstract: As a vital activity for companies, new product
development (NPD) is also a very risky process due to the high
uncertainty degree encountered at every development stage and the
inevitable dependence on how previous steps are successfully
accomplished. Hence, there is an apparent need to evaluate new
product initiatives systematically and make accurate decisions under
uncertainty. Another major concern is the time pressure to launch a
significant number of new products to preserve and increase the
competitive power of the company. In this work, we propose an
integrated decision-making framework based on neural networks and
fuzzy logic to make appropriate decisions and accelerate the
evaluation process. We are especially interested in the two initial
stages where new product ideas are selected (go/no go decision) and
the implementation order of the corresponding projects is
determined. We show that this two-staged intelligent approach allows
practitioners to roughly and quickly separate good and bad product
ideas by making use of previous experiences, and then, analyze a
more shortened list rigorously.
Abstract: In the self-stabilizing algorithmic paradigm, each node has only a local view of the system, yet in a finite amount of time the system converges to a global state with the desired property. In a graph G =
(V, E), a subset S ⊆ V is a 2-packing if ∀i ∈ V: |N[i] ∩ S| ≤ 1.
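The 2-packing condition can be checked directly; a minimal sketch, with the graph given as an adjacency dictionary (illustrative only, not the self-stabilizing algorithm itself):

```python
def closed_neighborhood(adj, i):
    """N[i]: the node itself plus its neighbors."""
    return {i} | set(adj[i])

def is_2_packing(adj, s):
    """S is a 2-packing iff every closed neighborhood contains at most
    one member of S (equivalently, nodes of S are pairwise more than
    distance 2 apart)."""
    return all(len(closed_neighborhood(adj, i) & s) <= 1 for i in adj)
```

On a path 0-1-2-3-4, {0, 3} is a 2-packing but {0, 2} is not, since node 1's closed neighborhood contains both 0 and 2.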
Abstract: As there are also graph methods of circuit analysis in
addition to algebraic methods, it is, in theory, clearly possible to
carry out an analysis of a whole switched circuit in two-phase
switching exclusively by the graph method as well. This article deals
with two methods of full-graph solving of switched circuits: by
transformation graphs and by two-graphs, covering both
switched-capacitor and switched-current circuits. All methods are
presented in equally detailed steps so that they can be compared.
Abstract: Medical image registration is the key technology in image-guided radiation therapy (IGRT) systems. On the basis of previous work on our IGRT prototype with a biorthogonal x-ray imaging system, we describe in this paper a method for 2D/2D rigid-body registration using multiresolution-pyramid-based mutual information. Three key steps were involved in the method: firstly, four 2D images were obtained, including two x-ray projection images and two digitally reconstructed radiographs (DRRs), as the input for the registration; secondly, each pair of corresponding x-ray and DRR images was matched using multiresolution-pyramid-based mutual information under the ITK registration framework; thirdly, the final couch offset was obtained through a coordinate transformation by calculating the translations acquired from the two pairs of images. A simulation example of a parotid gland tumor case and a clinical example of an anthropomorphic head phantom were employed in the verification tests. In addition, the influence of different CT slice thicknesses was tested. The simulation results showed that the positioning errors were 0.068±0.070, 0.072±0.098 and 0.154±0.176 mm along the lateral, longitudinal and vertical axes. The clinical test indicated that the positioning errors of the planned isocenter were 0.066, 0.07 and 2.06 mm on average with a CT slice thickness of 2.5 mm. It can be concluded that our method, with its verified accuracy and robustness, can be effectively used in IGRT systems for patient setup.
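The mutual-information metric at the heart of the matching step can be sketched with a coarse joint-histogram estimate (a toy stdlib version; the actual work uses ITK's multiresolution registration framework, whose estimator differs):

```python
from collections import Counter
from math import log

def mutual_information(img_a, img_b, bins=8):
    """Mutual information between two equally sized grayscale images
    (flat lists of 0-255 intensities), estimated from a coarse joint
    intensity histogram. Higher values mean better alignment."""
    pairs = [(a * bins // 256, b * bins // 256)
             for a, b in zip(img_a, img_b)]
    n = len(pairs)
    joint = Counter(pairs)                 # p(a, b)
    pa = Counter(a for a, _ in pairs)      # p(a)
    pb = Counter(b for _, b in pairs)      # p(b)
    return sum((c / n) * log((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in joint.items())
```

An image is maximally informative about itself, while a constant image carries no information about anything, so MI drops toward zero as alignment degrades.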
Abstract: The full-search block matching algorithm is widely used for hardware implementation of motion estimators in video compression algorithms. In this paper we propose a new architecture, which consists of a 2D parallel processing unit and a 1D unit, both working in parallel. The proposed architecture reduces both data-access power and computational power, which are the main causes of power consumption in integer motion estimation. It also completes the operations in nearly the same number of clock cycles as a 2D systolic array architecture. In this work the sum of absolute differences (SAD), the most repeated operation in block matching, is calculated in two steps. The first step is to calculate the SAD for alternate rows by the 2D parallel unit. If the SAD calculated by the parallel unit is less than the stored minimum SAD, the SAD of the remaining rows is calculated by the 1D unit. Early termination, which stops avoidable computations, has been achieved with the help of the alternate-rows method proposed in this paper and by finding a low initial SAD value based on motion vector prediction. Data reuse has been applied to the reference blocks in the same search area, which significantly reduces memory access.
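The two-step SAD with early termination can be sketched in software as follows (blocks as lists of rows; the paper describes a hardware architecture, so this is only a behavioral illustration):

```python
def sad_rows(cur, ref, rows):
    """Sum of absolute differences over the given row indices."""
    return sum(abs(c - r)
               for i in rows
               for c, r in zip(cur[i], ref[i]))

def two_step_sad(cur, ref, best_so_far):
    """Step 1: SAD over alternate (even) rows only, as by the 2D unit.
    Step 2: add the remaining odd rows only if the partial SAD is still
    below the stored minimum; otherwise terminate early and skip the
    avoidable computation."""
    n = len(cur)
    partial = sad_rows(cur, ref, range(0, n, 2))
    if partial >= best_so_far:
        return None  # early termination: cannot beat the current minimum
    return partial + sad_rows(cur, ref, range(1, n, 2))
```

Since SAD only grows as rows are added, a partial SAD already at or above the minimum proves the candidate cannot win.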
Abstract: Image retrieval is a topic of currently high scientific interest. The important steps in an image retrieval system are the extraction of discriminative features and a feasible similarity metric for retrieving the database images that are similar in content to the search image. Gabor filtering is a widely adopted technique for feature extraction from texture images. The recently proposed sparsity-promoting l1-norm minimization technique finds the sparsest solution of an under-determined system of linear equations. In the present paper, the l1-norm minimization technique is used as a similarity metric in image retrieval. It is demonstrated through simulation results that the l1-norm minimization technique provides a promising alternative to existing similarity metrics. In particular, the cases where the l1-norm minimization technique works better than the Euclidean distance metric are singled out.
Abstract: In the gas refineries of Iran's South Pars Gas
Complex, the Sulfrex demercaptanization process is used to remove
volatile and corrosive mercaptans from liquefied petroleum gases by
caustic solution. This process consists of two steps: removing
low-molecular-weight mercaptans, and regenerating the exhausted
caustic. Parameters such as LPG feed temperature, caustic
concentration and feed mercaptan content in the extraction step, and
sodium mercaptide content in the caustic, catalyst concentration,
caustic temperature and air injection rate in the regeneration step,
are the effective factors. This paper focuses on the temperature
factor, which plays a key role in mercaptan extraction and caustic
regeneration. The experimental results demonstrated that, by
optimizing the temperature, the sodium mercaptide content in the
caustic was minimized owing to good oxidation, and the sulfur
impurities in the product were reduced.
Abstract: This paper presents a robust vehicle detection approach using Haar-like features. A strong edge feature can be obtained from a Haar-like feature, so it is very effective at removing the shadow of a vehicle on the road, and the boundary of vehicles can be detected accurately. In the paper, the vehicle detection algorithm is divided into two main steps: hypothesis generation and hypothesis verification. In the first step, vehicle candidates are determined using features such as shadow, intensity and vertical edges. In the second step, whether a candidate is a vehicle or not is determined by using the symmetry of vehicle edge features. In this research, we achieve detection at over 15 frames per second on our embedded system.
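Haar-like edge features are conventionally evaluated in constant time over an integral image; a minimal sketch of the two-rectangle vertical-edge feature (illustrative only, not the paper's detector):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img over rows < y, cols < x."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def box_sum(ii, x, y, w, h):
    """Sum of any rectangle in constant time using the integral image."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def vertical_edge_feature(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: left half minus right half.
    A large magnitude indicates a vertical edge, e.g. a vehicle boundary."""
    half = w // 2
    return box_sum(ii, x, y, half, h) - box_sum(ii, x + half, y, half, h)
```

The integral image is built once per frame, after which every feature evaluation costs only a handful of lookups regardless of window size.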
Abstract: Nejad and Mashinchi (2011) proposed a revision for ranking fuzzy numbers based on the areas of the left and right sides of a fuzzy number. However, this method still has some shortcomings, such as a lack of discriminative power when ranking similar fuzzy numbers and no guarantee of consistency between the ranking of fuzzy numbers and the ranking of their images. To overcome these drawbacks, we propose an epsilon-deviation degree method based on the left area and the right area of a fuzzy number and the concept of the centroid point. The main advantage of the new approach is the development of an innovative index value which can be used to consistently evaluate and rank fuzzy numbers. Numerical examples are presented to illustrate the efficiency and superiority of the proposed method.
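The ingredients named above, the left and right areas and the centroid, reduce to simple closed forms for a triangular fuzzy number (a, b, c); a minimal sketch (the paper's epsilon-deviation index itself is not reproduced here):

```python
def left_area(a, b, c):
    """Area under the left branch of the membership function of a
    triangular fuzzy number (a, b, c): a triangle of base b - a and
    height 1, i.e. (b - a) / 2."""
    return (b - a) / 2

def right_area(a, b, c):
    """Right-side counterpart: (c - b) / 2."""
    return (c - b) / 2

def centroid_x(a, b, c):
    """x-coordinate of the centroid of a triangular fuzzy number."""
    return (a + b + c) / 3
```

Ranking indices in this family combine such area and centroid quantities into a single comparable value per fuzzy number.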