Abstract: Lean manufacturing is a production philosophy made
popular by Toyota Motor Corporation (TMC). Globally known as the
Toyota Production System (TPS), it has the ultimate aim of
reducing cost by thoroughly eliminating waste, or muda. TPS
embraces Just-in-Time (JIT) manufacturing, achieving cost
reduction through lead-time reduction. JIT manufacturing can be
achieved by implementing a pull system in production.
Furthermore, TPS aims to improve productivity and create
continuous flow in production by arranging machines and
processes in cellular configurations, known as Cellular
Manufacturing Systems (CMS). This paper studies the integration
of CMS with the pull system to establish a Big Island-Pull system
production for High Mix Low Volume (HMLV) products in an
automotive component industry. The paper uses the built-in JIT
system steps adapted from TMC to create the pull system production
and also creates a shojinka line that, according to takt time, has
the flexibility to adapt to demand changes simply by adding and
removing manpower, leading to an optimized production.
Abstract: Recent research on human wayfinding has focused
mainly on mental representations rather than on the processes of
wayfinding. The objective of this paper is to demonstrate the
rationale behind applying the multi-agent simulation paradigm to
the modeling of rescue-team wayfinding, in order to develop a
computational theory of perceptual wayfinding in crisis situations
using image schemata and affordances, which explains how people
find a specific destination in an unfamiliar building such as a
hospital. The hypothesis of this paper is that successful navigation
is possible if the agents are able to make correct decisions through
well-defined cues in critical cases; accordingly, the design of the
building signage is evaluated through multi-agent-based simulation.
In addition, a special case of wayfinding in a building, finding
one's way through three hospitals, is used to demonstrate the model.
Thereby, the total rescue time for a rescue operation during a
building fire is computed. This paper discusses the computed rescue
times for various signage placements and provides experimental
results for the optimization of building signage design. The most
appropriate signage design resulted in the shortest total rescue
time in various situations.
Abstract: This paper addresses a stock-cutting problem with rotation of items and without the guillotine cutting constraint. In order to solve the large-scale problem effectively and efficiently, we propose a simple but fast heuristic algorithm. It is shown that this heuristic outperforms the latest published algorithms for large-scale problem instances.
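The abstract does not specify the heuristic itself; purely as a hedged illustration of the simplest kind of fast placement heuristic for this problem class, a first-fit-decreasing shelf packer with 90° rotation can be sketched as follows (the function name and details are hypothetical, and real non-guillotine packers are considerably more involved):

```python
def pack_shelves(items, strip_width):
    """First-fit-decreasing shelf packing with 90-degree rotation.

    items: list of (w, h) rectangles; returns total strip height used.
    A toy sketch, not the paper's heuristic; assumes every item fits
    the strip width after rotation.
    """
    # Rotate each item so its longer side is horizontal (flatter items
    # make lower shelves), then sort by decreasing height.
    rects = sorted(((max(w, h), min(w, h)) for w, h in items),
                   key=lambda r: r[1], reverse=True)
    shelves = []  # each shelf: [remaining_width, shelf_height]
    for w, h in rects:
        for shelf in shelves:
            if shelf[0] >= w:          # first shelf with enough room
                shelf[0] -= w
                break
        else:                          # no shelf fits: open a new one
            shelves.append([strip_width - w, h])
    return sum(h for _, h in shelves)
```

Because items are sorted by decreasing height, each new shelf is opened at the height of its first (tallest) item, so later items never overflow it vertically.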
Abstract: In this paper, a design methodology to implement a low-power, high-speed second-order recursive digital Infinite Impulse Response (IIR) filter is proposed. Since IIR filters suffer from a large number of constant multiplications, the proposed method replaces the constant multiplications with addition/subtraction and shift operations. The proposed new 6T adder cell is used as the Carry-Save Adder (CSA) to implement the addition/subtraction operations in the recursive section of the IIR filter, reducing the propagation delay. Furthermore, high-level algorithms designed to optimize the number of CSA blocks are used to reduce the complexity of the IIR filter. The DSCH3 tool is used to generate the schematic of the proposed 6T CSA-based shift-adds architecture, which is analyzed using the Microwind CAD tool to synthesize low-complexity, high-speed IIR filters. The proposed design outperforms the MUX-12T and MCIT-7T based CSA adder filter designs in terms of power, propagation delay, area and throughput. The experimental results show that the proposed 6T-based design method can find better IIR filter designs in terms of power and delay than those obtained by using efficient general multipliers.
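The core idea of replacing constant multiplications with shifts and additions/subtractions can be shown in software; the coefficients 10 and 7 below are arbitrary examples, not the filter's actual coefficients:

```python
def mul_by_10(x):
    # 10 = 8 + 2, so x*10 = (x << 3) + (x << 1): two shifts, one add.
    return (x << 3) + (x << 1)

def mul_by_7(x):
    # 7 = 8 - 1, so x*7 = (x << 3) - x: one shift and one subtraction,
    # cheaper than the two adds needed for 7 = 4 + 2 + 1.
    return (x << 3) - x
```

In hardware, each shift is free wiring and each add/subtract is one adder stage, which is why minimizing the number of CSA blocks directly reduces area and delay.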
Abstract: In Algeria, oil pumping plants are currently fed with electric power by independent local sources. This type of feeding has many advantages (little climatic influence, independent operation), but it requires a qualified maintenance staff, a rather high frequency of maintenance and repair, and additional fuel costs. Given the increasing development of the national electric supply network (Sonelgaz), a real possibility appears of transferring from the local sources to centralized sources. The latter can be not only more economical but also more reliable than the independent local sources. In order to carry out this transfer, it is necessary to work out an optimal strategy for rebuilding these networks, taking into account the economic parameters and the reliability indices.
Abstract: The Sum of Absolute Differences (SAD) algorithm is
heavily used in motion estimation, which is a computationally
demanding process in motion-picture encoding. To enhance the
performance of motion-picture encoding on a VLIW processor, an
efficient implementation of the SAD algorithm on that processor is
essential. The SAD algorithm is programmed as a nested loop with a
conditional branch. On VLIW processors, loops are usually optimized
by software pipelining, but research on the optimal scheduling of
software pipelining for nested loops, especially nested loops with
conditional branches, is rare. In this paper, we propose an optimal
scheduling and implementation of the SAD algorithm with a
conditional branch on a VLIW DSP processor. The proposed scheduling
first transforms the nested loop with a conditional branch into a
single loop with a conditional branch, fully utilizing the ILP
capability of the VLIW processor and enabling an early escape from
the loop. Next, it applies a modulo scheduling technique developed
for single loops. Based on this scheduling strategy, an optimal
implementation of the SAD algorithm on the TMS320C67x, a VLIW DSP,
is presented. Experiments on a TMS320C6713 DSK show that an H.263
encoder with the proposed SAD implementation outperforms H.263
encoders with other SAD implementations, and that the code size of
the optimal SAD implementation is small enough to be appropriate
for embedded environments.
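In plain form (Python rather than VLIW assembly), the SAD kernel with the conditional early escape that the loop transformation targets can be sketched as:

```python
def sad(block, ref, best_so_far):
    """Sum of absolute differences between two equally sized blocks.

    Returns early as soon as the running sum exceeds best_so_far; this
    conditional branch inside the nested loop is what the scheduling
    described above must accommodate.
    """
    total = 0
    for row_b, row_r in zip(block, ref):
        for a, b in zip(row_b, row_r):
            total += abs(a - b)
        if total > best_so_far:   # early escape from the loop
            return total
    return total
```

Checking the escape condition once per row, as here, is a common compromise between escaping as early as possible and keeping the inner loop branch-free.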
Abstract: With the aim of improving the nutritional profile and antioxidant capacity of gluten-free cookies, blueberry pomace, a by-product of juice production, was processed into a new food ingredient by drying and grinding and used in a gluten-free cookie formulation. Since the quality of a baked product is highly influenced by the baking conditions, the objective of this work was to optimize the baking time and the thickness of the dough pieces by applying Response Surface Methodology (RSM) in order to obtain the best technological quality of the cookies. The experiments were carried out according to a Central Composite Design (CCD), with dough thickness and baking time as independent variables, while hardness, color parameters (L*, a* and b* values), water activity, diameter and short/long ratio were the response variables. According to the results of the RSM analysis, a baking time of 13.74 min and a dough thickness of 4.08 mm were found to be optimal for a baking temperature of 170 °C. As similar optimal parameters were obtained in a previously conducted experiment based on sensory analysis, RSM can be considered a suitable approach for optimizing the baking process.
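For reference, the coded runs of a two-factor Central Composite Design such as the one above (factorial, axial, and center points) can be generated as follows; the rotatable axial distance α = √2 is a standard textbook choice, not necessarily the one used in the study:

```python
from itertools import product

def ccd_points(alpha=2 ** 0.5, n_center=1):
    """Coded (x1, x2) runs of a two-factor Central Composite Design."""
    factorial = list(product((-1.0, 1.0), repeat=2))            # 4 corner runs
    axial = [(-alpha, 0.0), (alpha, 0.0), (0.0, -alpha), (0.0, alpha)]
    center = [(0.0, 0.0)] * n_center                            # replicated center
    return factorial + axial + center
```

Each coded coordinate is then mapped linearly onto the real factor range, e.g. x1 = -1 and +1 onto the low and high baking times.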
Abstract: Proteins or genes that have similar sequences are likely to perform the same function. One of the most widely used techniques for sequence comparison is sequence alignment, which allows mismatches and insertions/deletions that represent biological mutations. Sequence alignment is usually performed on only two sequences; multiple sequence alignment is its natural extension, where the emphasis is on finding an optimal alignment for a group of sequences. Several applicable techniques were examined in this research, from traditional methods such as dynamic programming to widely used stochastic optimization methods such as Genetic Algorithms (GAs) and Simulated Annealing. A framework combining a Genetic Algorithm with Simulated Annealing is presented to solve the Multiple Sequence Alignment problem. The Genetic Algorithm phase tries to find new regions of the solution space, while Simulated Annealing acts as an alignment improver for any near-optimal solution produced by the GA.
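A minimal sketch of the Simulated Annealing "alignment improver" idea, reduced to a toy pairwise case (the actual framework operates on multiple sequences with a richer scoring function; everything below is illustrative):

```python
import math
import random

def sa_improve(s1, s2, steps=2000, t0=2.0, seed=1):
    """Refine a gapped copy of s2 against s1 by relocating gaps.

    Toy simulated annealing: the state is the gapped sequence, the
    score is the number of matching columns, and a neighbor move
    deletes one gap and reinserts it elsewhere (residue order kept).
    """
    rng = random.Random(seed)
    n_gaps = len(s1) - len(s2)
    state = list(s2) + ['-'] * n_gaps            # initial: gaps at the end

    def score(aln):
        return sum(a == b for a, b in zip(s1, aln))

    if n_gaps == 0:
        return ''.join(state), score(state)
    best, best_score = state[:], score(state)
    cur_score = best_score
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9       # linear cooling schedule
        g = rng.choice([k for k, c in enumerate(state) if c == '-'])
        del state[g]                             # move one gap...
        p = rng.randrange(len(state) + 1)
        state.insert(p, '-')                     # ...to a random position
        new_score = score(state)
        delta = new_score - cur_score
        if delta >= 0 or rng.random() < math.exp(delta / t):
            cur_score = new_score                # accept (possibly uphill)
            if cur_score > best_score:
                best, best_score = state[:], cur_score
        else:
            del state[p]                         # reject: undo the move
            state.insert(g, '-')
    return ''.join(best), best_score
```

The acceptance rule exp(Δ/T) lets the improver escape local optima early on, while the cooling schedule makes it increasingly greedy, which is exactly the role it plays behind the GA here.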
Abstract: A new fast correlation algorithm for calibrating the
wavelength of Optical Spectrum Analyzers (OSAs) was introduced
in [1]. The minima of acetylene gas spectra were measured and
correlated with saved theoretical data [2], making it possible to
find the correct wavelength calibration data using a noisy
reference spectrum. First tests showed good algorithmic performance
for gas line spectra with high noise. In this article, extensive
performance tests were carried out to validate the noise resistance
of this algorithm. The filter and correlation parameters of the
algorithm were optimized for improved noise performance. With these
parameters, the performance of this wavelength calibration was
simulated to predict the resulting wavelength error in real OSA
systems. Long-term simulations were made to evaluate the
performance of the algorithm over the lifetime of a real OSA.
Abstract: This paper presents a comparative study of Ant Colony and Genetic Algorithms for VLSI circuit bi-partitioning. Ant colony optimization is an optimization method based on the behaviour of social insects [27], whereas the Genetic algorithm is an evolutionary optimization technique based on the Darwinian theory of natural evolution and its concept of survival of the fittest [19]. Both methods are stochastic in nature and have been successfully applied to solve many NP-hard problems. The results obtained show that Genetic algorithms outperform the Ant Colony optimization technique when tested on the VLSI circuit bi-partitioning problem.
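Neither algorithm's encoding is detailed in the abstract; as a hedged sketch of how a GA is typically set up for balanced bi-partitioning (one bit per node, fitness penalizing cut edges and imbalance), a toy version might look like this, with all parameters chosen arbitrarily:

```python
import random

def ga_bipartition(edges, n, pop_size=30, gens=60, seed=0):
    """Toy GA for balanced min-cut bi-partitioning of an n-node graph.

    A chromosome is one bit per node (0/1 = partition side); the cost
    adds the cut size and the partition imbalance. A sketch only, not
    the paper's implementation.
    """
    rng = random.Random(seed)

    def cost(chrom):
        cut = sum(chrom[u] != chrom[v] for u, v in edges)
        imbalance = abs(2 * sum(chrom) - n)
        return cut + imbalance            # lower is better

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        survivors = pop[:pop_size // 2]   # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cx = rng.randrange(1, n)      # one-point crossover
            child = p1[:cx] + p2[cx:]
            child[rng.randrange(n)] ^= 1  # single-bit mutation
            children.append(child)
        pop = survivors + children
    best = min(pop, key=cost)
    return best, cost(best)
```

An ACO formulation of the same problem instead builds assignments node by node from pheromone trails, which is what makes the two methods interesting to compare on identical netlists.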
Abstract: Honeycomb sandwich panels are increasingly used in the construction of space vehicles because of their outstanding strength, stiffness and lightweight properties. However, the use of honeycomb sandwich plates brings difficulties to the design process as a result of the large number of design variables involved, including composite material design, shape and geometry. Hence, this work presents an optimal design of hexagonal honeycomb sandwich structures subjected to the space environment. The optimization process is performed using a set of algorithms including the Gravitational Search Algorithm (GSA). Numerical results are obtained and presented for this set of algorithms, and the results obtained by the GSA are markedly better than those of the other algorithms used in this study.
Abstract: Response Surface Methodology (RSM) is a powerful
and efficient mathematical approach widely applied in the
optimization of cultivation processes. Cellulase enzyme production
by Trichoderma reesei RutC30 using the agricultural wastes rice
straw and banana fiber as carbon sources was investigated. In this
work, a sequential optimization strategy based on statistical
design was employed to enhance the production of cellulase through
submerged cultivation. A fractional factorial design (2^(6-2)) was
applied to elucidate the process parameters that significantly
affect cellulase production. Temperature, substrate concentration,
inducer concentration, pH, inoculum age and agitation speed were
identified as important process parameters affecting cellulase
enzyme synthesis, with the concentrations of lignocellulose and
lactose (the inducer) in the cultivation medium found to be the
most significant factors. The steepest-ascent method was used to
locate the optimal domain, and a Central Composite Design (CCD)
was used to estimate the quadratic response surface from which the
factor levels for maximum cellulase production were determined.
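For illustration, a 2^(6-2) design like the one above can be generated from a full 2^4 factorial by defining the two extra factors through generators; E = ABC and F = BCD below are common textbook generators and only an assumption about the study's actual design:

```python
from itertools import product

def fractional_factorial_2_6_2():
    """16-run 2^(6-2) design: full factorial in A-D, with E = ABC
    and F = BCD as generators (coded levels -1 / +1)."""
    runs = []
    for a, b, c, d in product((-1, 1), repeat=4):
        e = a * b * c          # generator E = ABC
        f = b * c * d          # generator F = BCD
        runs.append((a, b, c, d, e, f))
    return runs
```

The design screens six factors in 16 runs instead of the 64 a full factorial would need, at the cost of confounding some interactions, which is acceptable at the screening stage.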
Abstract: Over the past decade, biclustering has become a popular data mining technique, not only in biological data analysis but also in other applications with high-dimensional two-way datasets, such as text mining and market data analysis. Biclustering clusters both rows and columns of a dataset simultaneously, as opposed to traditional clustering, which clusters either rows or columns. It retrieves subgroups of objects that are similar in one subgroup of variables and different in the remaining variables. The Firefly Algorithm (FA) is a recently proposed metaheuristic inspired by the collective behavior of fireflies. This paper provides a preliminary assessment of a discrete version of the FA (DFA) on the task of mining coherent, large-volume biclusters from web usage data. The experiments were conducted on two web usage datasets from a public dataset repository, and the performance of the DFA was compared with that of another population-based metaheuristic, binary Particle Swarm Optimization (PSO). The results achieved demonstrate the usefulness of the DFA in tackling the biclustering problem.
Abstract: This paper deals with the optimal design of two-channel recursive parallelogram quadrature mirror filter (PQMF) banks. The analysis and synthesis filters of the PQMF bank are composed of two-dimensional (2-D) recursive digital all-pass filters (DAFs) with a nonsymmetric half-plane (NSHP) support region. The design problem can be facilitated by using the 2-D doubly complementary half-band (DC-HB) property possessed by the analysis and synthesis filters. To find the coefficients of the 2-D recursive NSHP DAFs, we formulate the design problem as an optimization problem that can be solved by a weighted least-squares (WLS) algorithm in the minimax (L∞) optimal sense. The designed 2-D recursive PQMF bank achieves a perfect magnitude response and possesses a satisfactory phase response without requiring an extra phase equalizer. Simulation results are provided for illustration and comparison.
Abstract: A crash energy absorption structure adopted by vehicle-simulator crash-testing equipment based on mechanical energy
storage was studied. Dynamic explicit finite element simulations were performed for the thin-walled tube structure under different
conditions of section shape, thickness and inducement-groove style, and the crash energy absorption properties of the structure were
obtained. After optimization, a reasonable structure was obtained that meets current vehicle crash regulations, and the optimized
structure can be adopted in the vehicle simulator, increasing the
practicability of the testing equipment.
Abstract: With the prevalence of computers and the development of information technology, Geographic Information Systems (GIS) have long been used for a variety of applications in electrical engineering. GIS are designed to support the analysis, management, manipulation and mapping of spatial data. This paper presents several uses of GIS in power utilities, such as automated route selection for the construction of new power lines, which uses a dynamic programming model for route optimization; load forecasting; and optimized planning of substations' location and capacity with a comprehensive algorithm that involves an accurate small-area electric load forecasting procedure and simulates the different cost functions of substations.
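The dynamic-programming route selection mentioned above can be illustrated on a raster of per-cell construction costs; this toy version allows only rightward and downward moves and is purely a sketch of the idea, not the paper's model:

```python
def min_route_cost(cost):
    """Minimum cumulative cost from top-left to bottom-right of a grid.

    Classic dynamic programming: each cell's best cost is its own cost
    plus the cheaper of the cell above or the cell to the left.
    """
    rows, cols = len(cost), len(cost[0])
    dp = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if i == 0 and j == 0:
                dp[i][j] = cost[0][0]
            else:
                best_prev = min(
                    dp[i - 1][j] if i > 0 else float('inf'),
                    dp[i][j - 1] if j > 0 else float('inf'),
                )
                dp[i][j] = cost[i][j] + best_prev
    return dp[-1][-1]
```

In a GIS setting, each cell's cost would encode terrain, land-use and construction factors derived from the spatial layers.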
Abstract: This contribution deals with current and potential approaches to the modeling and optimization of tactical activities. This issue has taken on importance in recent times, particularly with the increasing trend toward a digitized battlefield, the development of C4ISR systems, and the intention to streamline the command and control process at the lowest levels of command. From a fundamental and philosophical point of view, these new approaches seek to significantly upgrade and enhance the decision-making process of tactical commanders.
Abstract: This paper proposes an optimization of neural network
weights and topology using genetic evolution and the
backpropagation training algorithm. The proposed crossover and
mutation operators aim to adapt the networks' architectures and
weights during the evolution process. Through a specific
inheritance procedure, the weights are transmitted from the parents
to their offspring, which allows re-exploitation of the already
trained networks and hence accelerates the global convergence of
the algorithm. In the preprocessing phase, a new feature extraction
method is proposed based on Legendre moments with the Maximum
Entropy Principle (MEP) as a selection criterion, allowing a global
reduction of the search space in the design of the networks. The
proposed method has been applied and tested on the well-known
MNIST database of handwritten digits.
Abstract: In recent years, there have been attempts to store
natural gas in adsorbed form, known as adsorbed natural gas, or
ANG. The problem with this technology is the low sorption capacity;
the goal is to achieve the compressed natural gas (CNG) capacity of
230 V/V, and further research is required to reach this target.
Several studies have pursued it through either the modification or
development of new sorbents or the optimization of the sorption
operation itself. In this work, the storage of methane on molecular
sieves 5A and 13X was studied on a dry basis and, to a certain
extent, on a wet basis. The temperature and pressure dynamics were
investigated. The results indicated that, regardless of the charge
pressure, the time to the peak temperature during the methane
charge process is always the same; this can be used as a
characteristic of the adsorbent. The total deliveries achieved
using molecular sieves were much lower than those of activated
carbons: 53.0 V/V for the 13X molecular sieves and 43 V/V for the
5A molecular sieves, both at 2 °C and 4 MPa (580 psi).
Investigation of the charge pressure dynamics using wet molecular
sieves at 2 °C and a mass ratio of 0.5 revealed that the process
is slow and exhibits unexpected behavior.
Abstract: Recently, genetic algorithm (GA) and particle swarm optimization (PSO) techniques have attracted considerable attention among modern heuristic optimization techniques. Since the two approaches seek a solution to the same objective function but employ different strategies and computational effort, it is appropriate to compare their performance. This paper presents the application and performance comparison of the PSO and GA optimization techniques for the design of a Thyristor Controlled Series Compensator (TCSC)-based controller, with the design objective of enhancing power system stability. The design problem of the FACTS-based controller is formulated as an optimization problem, and both PSO and GA are employed to search for the optimal controller parameters. The performance of the two techniques is compared in terms of computational time and convergence rate. Further, the optimized controllers are tested on a weakly connected power system subjected to different disturbances, and their performance is compared with that of the conventional power system stabilizer (CPSS). Eigenvalue analysis and non-linear simulation results are presented and compared to show the effectiveness of both techniques in designing a TCSC-based controller to enhance power system stability.
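The skeletal form of the PSO update used in such parameter searches is shown below on a toy objective; the inertia and acceleration constants are typical textbook values, not the paper's settings:

```python
import random

def pso_minimize(f, dim, bounds, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimizer.

    Each particle's velocity blends inertia (w), a cognitive pull
    toward its personal best (c1) and a social pull toward the
    swarm's global best (c2). A sketch, not a tuned implementation.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    w, c1, c2 = 0.7, 1.5, 1.5
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_val = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            val = f(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val
```

In the controller-design setting, f would be the stability-based objective (e.g. a damping-ratio criterion from the eigenvalue analysis) and each dimension one controller parameter.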