Abstract: Stair climbing is one of the critical issues for field robots seeking to widen their applicable areas. This paper presents the optimal design of the kinematic parameters of a new stair-climbing robotic platform. The platform climbs various stairs through body-flip locomotion with a caterpillar-type main platform. Kinematic parameters such as platform length, platform height, and caterpillar rotation speed are optimized to maximize stair-climbing stability. Three types of stairs are used to simulate typical user conditions. The optimal design process is based on the Taguchi methodology, and the resulting parameters with the optimized objective function are presented. In the near future, a prototype will be assembled for real-environment testing.
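As an illustration of the Taguchi step described above, the following sketch computes larger-the-better signal-to-noise (S/N) ratios for two candidate settings of one kinematic parameter. The parameter names, stability scores, and repetition counts are invented for the example, not taken from the paper.

```python
import math

def sn_larger_the_better(ys):
    """Taguchi S/N ratio (larger-the-better), in dB:
    S/N = -10 * log10(mean(1 / y^2))."""
    return -10.0 * math.log10(sum(1.0 / y**2 for y in ys) / len(ys))

# Hypothetical stability scores for two settings of one kinematic
# parameter (e.g. platform length), three noise repetitions each.
trials = {
    "length_short": [0.62, 0.58, 0.65],
    "length_long":  [0.81, 0.79, 0.84],
}
sn = {name: sn_larger_the_better(ys) for name, ys in trials.items()}
best = max(sn, key=sn.get)   # the setting with the highest S/N ratio wins
```

In a full Taguchi study the same computation would be repeated for every factor column of an orthogonal array, picking each factor's best level independently.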
Abstract: This study aims to screen and optimize the major nutrients for maximum carotenoid production and antioxidation characteristics of Rhodotorula rubra. It was found that supplementing the medium with 10 g/l glucose as the carbon source, 1 g/l ammonium sulfate as the nitrogen source, and 1 g/l yeast extract as the growth factor provided the best yield, a carotenoid content of 30.39 μg/g cell dry weight; the antioxidation of Rhodotorula rubra measured by the DPPH, ABTS, and MDA methods was 1.463%, 34.21%, and 34.09 μmol/l, respectively.
Abstract: Environmental awareness and the recent
environmental policies have forced many electric utilities to
restructure their operational practices to account for their emission
impacts. One way to accomplish this is by reformulating the
traditional economic dispatch problem such that emission effects are
included in the mathematical model. This paper presents a Particle
Swarm Optimization (PSO) algorithm to solve the Economic-
Emission Dispatch problem (EED) which gained recent attention due
to the deregulation of the power industry and strict environmental
regulations. The problem is formulated as a multi-objective one with
two competing functions, namely economic cost and emission
functions, subject to different constraints. The inequality constraints
considered are the generating unit capacity limits while the equality
constraint is generation-demand balance. A novel equality constraint
handling mechanism is proposed in this paper. The PSO algorithm is tested on a 30-bus standard test system. The results show that the PSO algorithm has great potential for handling multi-objective optimization problems and is capable of capturing the Pareto-optimal solution set under different loading conditions.
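The structure described above can be sketched in a minimal PSO with quadratic cost and emission curves. All coefficients, the weighting, and the 3-unit system are invented for the example, and the equality constraint is handled here by letting one unit absorb the demand mismatch; the paper's own novel mechanism may differ.

```python
import random
random.seed(1)

# Hypothetical 3-unit system: cost = a + b*P + c*P^2 ($/h),
# emission = alpha + beta*P + gamma*P^2 (kg/h). Coefficients invented.
cost_c = [(100, 2.0, 0.010), (120, 1.8, 0.012), (80, 2.2, 0.008)]
emis_c = [(10, 0.20, 0.0012), (12, 0.15, 0.0015), (9, 0.25, 0.0010)]
pmin, pmax, demand = 10.0, 100.0, 210.0
w = 0.5  # weight combining the two competing objectives into one fitness

def quad(coef, p):
    a, b, c = coef
    return a + b * p + c * p * p

def fitness(x):
    # Equality handling: the last unit absorbs the demand mismatch, so
    # generation-demand balance holds by construction; limit violations
    # of that unit are penalised.
    p = list(x) + [demand - sum(x)]
    pen = sum(max(0.0, pmin - pi) + max(0.0, pi - pmax) for pi in p)
    cost = sum(quad(cost_c[i], p[i]) for i in range(3))
    emis = sum(quad(emis_c[i], p[i]) for i in range(3))
    return w * cost + (1 - w) * emis + 1e4 * pen

n, dim, iters = 20, 2, 300
pos = [[random.uniform(pmin, pmax) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [p[:] for p in pos]
pbest_f = [fitness(p) for p in pos]
gi = min(range(n), key=lambda i: pbest_f[i])
gbest, gbest_f = pbest[gi][:], pbest_f[gi]

for _ in range(iters):
    for i in range(n):
        for d in range(dim):
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                         + 1.5 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] = min(pmax, max(pmin, pos[i][d] + vel[i][d]))
        f = fitness(pos[i])
        if f < pbest_f[i]:
            pbest[i], pbest_f[i] = pos[i][:], f
            if f < gbest_f:
                gbest, gbest_f = pos[i][:], f

dispatch = gbest + [demand - sum(gbest)]   # satisfies the demand exactly
```

Sweeping the weight w from 0 to 1 would trace out an approximation of the Pareto front between cost and emission.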
Abstract: A semi-active control strategy for suspension
systems of passenger cars is presented, employing Magnetorheological (MR) dampers. The vehicle is modeled with seven DOFs, including the roll, pitch, and bounce of the car body and
the vertical motion of the four tires. In order to design an optimal
controller based on the actuator constraints, a Linear-Quadratic
Regulator (LQR) is designed. The design procedure of the LQR
consists of selecting two weighting matrices to minimize the energy
of the control system. This paper presents a hybrid optimization
procedure which is a combination of gradient-based and
evolutionary algorithms to choose the weighting matrices with regard to the actuator constraints. The optimization algorithm is defined based on maximum comfort and actuator constraints. It is
noted that utilizing the present control algorithm may significantly
reduce the vibration response of the passenger car, thus providing a comfortable ride.
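The LQR design step above amounts to solving a Riccati equation for chosen weighting matrices. The sketch below does this for a scalar (1-DOF) stand-in for one body mode; the numbers a = 0.95, b = 0.1 and the weights are invented for illustration, not taken from the 7-DOF model.

```python
# Discrete-time scalar LQR computed by backward Riccati iteration:
# minimise sum(q*x^2 + r*u^2) subject to x[k+1] = a*x[k] + b*u[k].
def dlqr_scalar(a, b, q, r, iters=500):
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * p * b)   # optimal feedback gain u = -k*x
        p = q + a * p * a - a * p * b * k   # Riccati recursion
    return k

# Hypothetical lightly damped scalar mode (invented numbers).
a, b, q, r = 0.95, 0.1, 1.0, 0.01
k = dlqr_scalar(a, b, q, r)

a_cl = a - b * k                 # closed-loop pole, inside the unit circle
x = 1.0                          # initial disturbance
for _ in range(100):
    x = a_cl * x                 # regulated response decays quickly
```

In the full problem the same recursion runs with matrices Q and R, and the hybrid gradient/evolutionary search of the paper tunes the entries of Q and R rather than solving for them directly.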
Abstract: Support vector machines (SVMs) are considered to be
the best machine learning algorithms for minimizing the predictive
probability of misclassification. However, their drawback is that, for large data sets, computing the optimal decision boundary is a time-consuming function of the size of the training set. Hence several
methods have been proposed to speed up the SVM algorithm. Here
three methods used to speed up the computation of the SVM
classifiers are compared experimentally using a musical genre
classification problem. The simplest method pre-selects a random
sample of the data before the application of the SVM algorithm. Two
additional methods use proximity graphs to pre-select data that are
near the decision boundary. One uses k-Nearest Neighbor graphs and the other uses Relative Neighborhood Graphs to accomplish the task.
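The neighbour-graph idea above can be sketched as follows: keep only the points whose k nearest neighbours include an opposite-class point, since those points lie near the decision boundary. The toy 1-D-style data and the brute-force distance computation are illustrative stand-ins, not the paper's music-genre features.

```python
# Pre-select training points near the decision boundary: a point is kept
# if any of its k nearest neighbours belongs to the other class.
def near_boundary(points, labels, k=3):
    keep = []
    for i, p in enumerate(points):
        # brute-force squared distances to every other point
        order = sorted(range(len(points)),
                       key=lambda j: (p[0] - points[j][0]) ** 2
                                     + (p[1] - points[j][1]) ** 2)
        neighbours = [j for j in order if j != i][:k]
        if any(labels[j] != labels[i] for j in neighbours):
            keep.append(i)
    return keep

# Two separated clusters along x: only the facing edge points survive.
pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0),
       (5.0, 0.0), (6.0, 0.0), (7.0, 0.0), (8.0, 0.0)]
lab = [0, 0, 0, 0, 1, 1, 1, 1]
kept = near_boundary(pts, lab, k=3)
```

Training the SVM on the kept subset alone approximates the boundary learned from the full set at a fraction of the cost, which is exactly the trade-off the experimental comparison measures.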
Abstract: Voltage collapse is an instability of heavily loaded electric power systems that causes declining voltages and blackouts. Power systems are predicted to become more heavily loaded in the coming decade as the demand for electric power rises while economic and environmental concerns limit the construction of new transmission and generation capacity. Heavily loaded power systems are closer to their stability limits, and voltage collapse blackouts will occur if suitable monitoring and control measures are not taken. FACTS devices can be used to control transmission lines.
In this paper, the Harmony Search Algorithm (HSA) and the Genetic Algorithm (GA) are applied to determine the optimal location of FACTS devices in a power system to improve its stability. Three types of FACTS devices (TCPAT, UPFC, and SVC) are introduced. Bus undervoltage is resolved by controlling the reactive power of a shunt compensator, and a combined series-shunt compensator is also used to control transmission power flow and bus voltage simultaneously.
Different scenarios are considered. First, TCPAT, UPFC, and SVC are placed individually in transmission lines, and the indices are calculated. Then, two of the above controller types are combined and placed randomly to improve the parameters. The last scenario improves the voltage stability index and the losses by implementing all three controller types simultaneously. These scenarios are executed on a typical 34-bus test system and yield improvements in the voltage profile and reductions in power losses; they may also permit an increase in power transfer capacity, maximum loading, and voltage stability margin.
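The harmony search step above can be sketched on a stand-in objective. The sphere function below is only a placeholder for the voltage-stability/loss index evaluated at candidate FACTS settings; the memory size and the HMCR, PAR, and bandwidth values are typical defaults, not the paper's.

```python
import random
random.seed(0)

def harmony_search(f, lo, hi, dim=2, hms=10, hmcr=0.9, par=0.3,
                   bw=0.1, iters=2000):
    # harmony memory: hms random candidate vectors and their scores
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:            # memory consideration
                x = random.choice(memory)[d]
                if random.random() < par:         # pitch adjustment
                    x += random.uniform(-bw, bw)
            else:                                 # random selection
                x = random.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        fn = f(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if fn < scores[worst]:                    # replace the worst harmony
            memory[worst], scores[worst] = new, fn
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Placeholder objective: sphere function as a stand-in for the index.
sol, val = harmony_search(lambda x: sum(v * v for v in x), -5.0, 5.0)
```

In the actual placement problem, each harmony would encode a candidate device location and setting, and f would run a load-flow to evaluate the stability index and losses.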
Abstract: This applied research presents the improvement of production quality using six sigma solutions and an analysis of the benefit-cost ratio. The case of interest is the production of concrete tile. This production had faced the problem of a high rate of nonconforming products from inappropriate surface coating and had low process capability with respect to the strength of the tile. Surface coating and tile strength are the characteristics most critical to the quality of this product. The improvements followed the five stages of six sigma. After the improvement, the production yield rose to the target of 80%, and the defective products from the coating process were remarkably reduced from 29.40% to 4.09%. The process capability based on strength quality increased from 0.87 to 1.08, as required by customers. The improvement saved material losses of 3.24 million baht (0.11 million dollars). The benefits of the improvement were analyzed from (1) the reduction in the number of nonconforming tiles, valued at factory price, due to the improved surface coating and (2) the materials saved from the increased process capability. The benefit-cost ratio of the overall improvement was as high as 7.03. The investment yielded no return during the define, measure, and analyze stages and the beginning of the improve stage; after that, the ratio kept increasing. This is because the define, measure, and analyze stages of six sigma mainly determine the cause of the problem and its effects rather than improve the process, so they produce no direct benefits. The benefit-cost ratio first appears in the improve stage and grows thereafter. Within each stage, the individual benefit-cost ratio was much higher than the cumulative one, since costs accumulate from the first stage of six sigma. Considering the benefit-cost ratio during an improvement project helps in making cost-saving decisions for similar activities during the improvement and for new projects. In conclusion, determining the behavior of the benefit-cost ratio throughout the six sigma implementation period provides useful data for managing quality improvement with optimal effectiveness. This is an additional outcome beyond the regular six sigma procedure.
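The stage-by-stage behavior described above can be reproduced with a small cumulative computation. The per-stage cost and benefit figures below are illustrative inventions chosen only so that the overall ratio lands near the reported 7.03; they are not the paper's data.

```python
# Cumulative benefit-cost ratio across the five DMAIC stages.
# Figures are hypothetical (million baht); benefits appear only from
# the "improve" stage onward, as the abstract explains.
stages  = ["define", "measure", "analyze", "improve", "control"]
cost    = [0.05, 0.08, 0.07, 0.20, 0.06]
benefit = [0.00, 0.00, 0.00, 1.80, 1.44]

cum_cost, cum_benefit, cum_bcr = 0.0, 0.0, []
for c, b in zip(cost, benefit):
    cum_cost += c
    cum_benefit += b
    cum_bcr.append(cum_benefit / cum_cost)   # ratio accumulated so far
overall = cum_bcr[-1]
```

The ratio stays at zero through define, measure, and analyze, first appears in the improve stage, and keeps rising, matching the qualitative pattern reported in the abstract.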
Abstract: Text categorization is the problem of classifying text
documents into a set of predefined classes. In this paper, we
investigate three approaches to building a meta-classifier in order to increase classification accuracy. The basic idea is to learn a meta-classifier that optimally selects the best component classifier for each
data point. The experimental results show that combining classifiers
can significantly improve the accuracy of classification and that our
meta-classification strategy gives better results than each individual
classifier. For 7083 Reuters text documents, we obtained classification accuracies of up to 92.04%.
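The selection idea above can be sketched with two deliberately weak component classifiers and a nearest-neighbour meta-classifier that learns, per training point, which component to trust. The 1-D data and constant classifiers are invented to make the effect visible; the paper's components and features differ.

```python
# Meta-classification sketch: record which component classifier is correct
# on each training point, then route a new point to the component chosen
# by its nearest training neighbour (a 1-NN meta-classifier).
def clf_a(x):   # weak component: always predicts class 0
    return 0

def clf_b(x):   # weak component: always predicts class 1
    return 1

def true_label(x):
    return 0 if x < 0.5 else 1

components = [clf_a, clf_b]
train = [i / 10.0 for i in range(10)]          # 0.0 .. 0.9
best = [max(range(2), key=lambda c: components[c](x) == true_label(x))
        for x in train]                        # index of the right component

def meta_predict(x):
    nearest = min(range(len(train)), key=lambda i: abs(train[i] - x))
    return components[best[nearest]](x)

test_pts = [0.05, 0.30, 0.55, 0.95]
acc_meta = sum(meta_predict(x) == true_label(x) for x in test_pts) / 4
acc_a = sum(clf_a(x) == true_label(x) for x in test_pts) / 4
acc_b = sum(clf_b(x) == true_label(x) for x in test_pts) / 4
```

Each component alone is right only half the time, yet the meta-classifier that picks the right one per point is perfect on this toy set, which is the effect the abstract reports at document scale.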
Abstract: A conventional binding method for low power in high-level synthesis mainly focuses on finding an optimal binding for assumed input data and obtains only one binding table. In this paper, we show that a binding method that uses multiple binding tables obtains better solutions than conventional methods that use a single binding table, and we propose a dynamic bus binding
scheme for low power using multiple binding tables. The proposed
method finds multiple binding tables for the proper partitions of an
input data, and switches binding tables dynamically to produce the
minimum total switching activity. Experimental results show that the
proposed method obtains a binding solution having 12.6-28.9%
smaller total switching activity compared with the conventional
methods.
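The multiple-table advantage can be sketched numerically: switching activity is the Hamming distance between consecutive bus words, and partitioning the trace lets each partition use the table that minimises its own activity. The 3-bit codes, the trace, and the two tables below are invented, and the transition at the table-switch boundary is ignored for simplicity.

```python
# Switching activity = sum of Hamming distances between consecutive words.
def hamming(a, b):
    return bin(a ^ b).count("1")

def activity(trace):
    return sum(hamming(x, y) for x, y in zip(trace, trace[1:]))

def apply_binding(trace, table):
    return [table[v] for v in trace]

# Hypothetical value trace and two candidate binding tables (value -> code).
trace = [0, 1, 0, 1, 2, 3, 2, 3]
tables = [{0: 0b000, 1: 0b001, 2: 0b011, 3: 0b100},   # good for values 0,1
          {0: 0b000, 1: 0b111, 2: 0b010, 3: 0b011}]   # good for values 2,3

# Conventional: one table chosen for the whole trace.
single = min(activity(apply_binding(trace, t)) for t in tables)

# Proposed idea: partition the trace and pick the best table per partition
# (the boundary transition between partitions is ignored in this sketch).
parts = [trace[:4], trace[4:]]
multi = sum(min(activity(apply_binding(p, t)) for t in tables) for p in parts)
```

Here the two partitions choose different tables and the total activity drops from 13 to 6 transitions, the same kind of reduction the abstract reports (12.6-28.9%) on real benchmarks.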
Abstract: In this paper we compare the response of linear and
nonlinear neural network-based prediction schemes in prediction of
received Signal-to-Interference Power Ratio (SIR) in Direct
Sequence Code Division Multiple Access (DS/CDMA) systems. The
nonlinear predictor is a Multilayer Perceptron (MLP) and the linear
predictor is an Adaptive Linear (Adaline) predictor. We solve the
problem of complexity by using the Minimum Mean Squared Error
(MMSE) principle to select the optimal predictors. The optimized
Adaline predictor is compared to optimized MLP by employing
noisy Rayleigh fading signals with a 1.8 GHz carrier frequency in an
urban environment. The results show that the Adaline predictor can estimate the SIR with the same error as the MLP when the user moves at 5 km/h or 60 km/h; however, as the velocity increases to 120 km/h, the mean squared error of the MLP becomes twice that of the Adaline predictor. This makes the Adaline predictor (with its lower complexity) more suitable than the MLP for closed-loop power control, where efficient and accurate identification of the time-varying inverse dynamics of the multipath fading channel is required.
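An Adaline one-step-ahead predictor is just a tapped-delay line trained with the LMS rule. The sketch below predicts a deterministic sinusoid standing in for slow fading dynamics; the filter order, step size, and test signal are invented for the example, not the paper's SIR data.

```python
import math

# Adaline (adaptive linear) one-step-ahead predictor trained with LMS.
order, mu = 4, 0.05
w = [0.0] * order
signal = [math.sin(0.2 * n) for n in range(500)]   # stand-in for SIR samples

errors = []
for n in range(order, len(signal)):
    x = signal[n - order:n]                        # last `order` samples
    y = sum(wi * xi for wi, xi in zip(w, x))       # linear prediction
    e = signal[n] - y                              # prediction error
    errors.append(e * e)
    w = [wi + mu * e * xi for wi, xi in zip(w, x)] # LMS weight update

early = sum(errors[:50]) / 50
late = sum(errors[-50:]) / 50    # should be near zero after convergence
```

The squared error collapses as the weights converge, which is the behavior that makes the Adaline competitive with the MLP at low mobile velocities while remaining far cheaper to run.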
Abstract: Environmental pollution has become a main global concern in all fields, including the economy, society, and culture, in the 21st century. Beginning with the Kyoto Protocol, the reduction of emissions of greenhouse gases such as CO2 and SOx has been a principal challenge of our day. Unlike durable goods in other industries, most buildings have a characteristically long life cycle; they consume energy in quantity and emit much CO2. Thus, for green building construction, more research is needed to reduce CO2 emissions at each stage of the life cycle. However, recent studies have focused on the use and maintenance phase, and there is a lack of research on the initial design stage, especially the structural design. Therefore, in this study, we propose an optimal design plan that considers CO2 emissions and cost in composite buildings simultaneously, applying it to the structural design of an actual building.
Abstract: Rapid economic development and population growth in Malaysia have accelerated the generation of solid waste. This issue puts pressure on Malaysia to manage municipal solid waste (MSW) effectively, given the increasing cost of landfill. This paper discusses optimal planning of waste-to-energy (WTE) using a combined simulation and optimization model based on a mixed integer linear programming (MILP) approach. The proposed multi-period model is tested on Iskandar Malaysia (IM) as a case study over a 12-year period (2011-2025) to illustrate the economic potential and the tradeoffs involved. Three scenarios are used to demonstrate the applicability of the model: (1) an incineration scenario, (2) a landfill scenario, and (3) an optimal scenario. The model revealed that the minimum cost of electricity generation from 9,995,855 tonnes of MSW is estimated at USD 387 million, with a total electricity generation of 50 MW/yr in the optimal scenario.
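The flavor of the multi-period allocation can be shown with a deliberately tiny model solved by enumeration instead of MILP. All quantities, costs, the capacity, and the three-period horizon below are invented; a real instance would use a MILP solver with many more constraints.

```python
from itertools import product

# Tiny multi-period waste allocation: each period, split waste between
# incineration (with electricity revenue) and landfill, subject to an
# incinerator capacity. Solved by brute-force enumeration as a stand-in
# for the MILP search. All figures are hypothetical.
waste = [800, 900, 1000]        # kilotonnes of MSW per period
cap = 600                       # incinerator capacity per period (kt)
c_inc, c_lf = 40.0, 25.0        # treatment cost per kt
rev_e = 30.0                    # electricity revenue per incinerated kt

def cost(plan):                 # plan[t] = kt incinerated in period t
    total = 0.0
    for w, x in zip(waste, plan):
        if x > min(w, cap):
            return float("inf")                 # infeasible plan
        total += x * (c_inc - rev_e) + (w - x) * c_lf
    return total

# Enumerate incineration levels in 100 kt steps over all three periods.
best = min(product(range(0, 700, 100), repeat=3), key=cost)
best_cost = cost(best)
```

Because the net incineration cost (40 - 30 = 10 per kt) undercuts landfill (25 per kt), the optimum fills the incinerator to capacity every period, mirroring how the paper's "optimal scenario" trades off the two routes.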
Abstract: Land degradation is of concern in many countries. People more and more must address the problems associated with the degradation of soil properties due to man. Increasingly, organic soil amendments, such as compost are being examined for their potential use in soil restoration and for preventing soil erosion. In the Czech Republic, compost is the most used to improve soil structure and increase the content of soil organic matter. Land reclamation / restoration is one of the ways to evaluate industrially produced compost because Czech farmers are not willing to use compost as organic fertilizer. The most common use of reclamation substrates in the Czech Republic is for the rehabilitation of landfills and contaminated sites.
This paper deals with the influence of reclamation substrates (RS) with different proportions of compost and sand on selected soil properties–chemical characteristics, nitrogen bioavailability, leaching of mineral nitrogen, respiration activity and plant biomass production. Chemical properties vary proportionally with addition of compost and sand to the control variant (topsoil). The highest differences between the variants were recorded in leaching of mineral nitrogen (varies from 1.36mg dm-3 in C to 9.09mg dm-3). Addition of compost to soil improves conditions for plant growth in comparison with soil alone. However, too high addition of compost may have adverse effects on plant growth. In addition, high proportion of compost increases leaching of mineral N. Therefore, mixture of 70% of soil with 10% of compost and 20% of sand may be recommended as optimal composition of RS.
Abstract: This paper presents the averaging model of a buck
converter derived from the generalized state-space averaging method.
The sliding mode control is used to regulate the output voltage of the
converter and is taken into account in the model. The proposed model requires less computational time than the full topology model. Intensive time-domain simulations of the exact topology model serve as the benchmark. The results show that good agreement between the proposed model and the switching model is achieved in both transient and steady-state responses. The reported model is suitable for optimal controller design using artificial intelligence techniques.
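The dc terms of a state-space averaged buck model can be simulated in a few lines. The sketch below integrates the averaged equations at a fixed duty cycle with forward Euler; the component values are invented, and the sliding-mode control loop of the paper is omitted so that only the averaging step is checked.

```python
# Averaged buck converter model (dc terms of the generalized state-space
# averaging):  L di/dt = d*Vin - v ,   C dv/dt = i - v/R
# Component values and operating point are hypothetical.
L, C, R, Vin, duty = 1e-3, 1e-4, 10.0, 12.0, 0.5
dt, i, v = 1e-6, 0.0, 0.0            # forward-Euler step and initial state

for _ in range(50000):               # 50 ms of simulated time
    di = (duty * Vin - v) / L        # averaged inductor equation
    dv = (i - v / R) / C             # averaged capacitor equation
    i += di * dt
    v += dv * dt
# steady state of the averaged model: v -> duty * Vin = 6 V, i -> v / R
```

Because the switching ripple is averaged out, this model can take much larger effective time steps than a full-topology switching simulation, which is the computational advantage the abstract claims.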
Abstract: Problems concerning algebraic polynomials appear in many fields of mathematics and computer science. In particular, the task of determining the roots of polynomials has been frequently investigated. Nonetheless, locating the zeros of complex polynomials remains challenging. In this paper we deal with the location of zeros of univariate complex polynomials. We prove some novel upper bounds for the moduli of the zeros of complex polynomials; that is, we provide disks in the complex plane in which all zeros of a complex polynomial are situated. Such bounds are extremely useful for obtaining a priori assertions regarding the location of the zeros of polynomials. Based on the proven bounds and a test set of polynomials, we present an experimental study to examine which bound is optimal.
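A classical baseline that an experimental comparison of zero bounds would include is Cauchy's bound: every zero z of a monic polynomial satisfies |z| <= 1 + max |a_k|. The sketch below checks it on a polynomial with known roots; the paper's own novel bounds are not reproduced here.

```python
# Cauchy's bound for the moduli of the zeros of a monic polynomial
# p(x) = x^n + a_{n-1} x^{n-1} + ... + a_0 :  |z| <= 1 + max_k |a_k|.
def cauchy_bound(coeffs):
    """coeffs = [a_0, a_1, ..., a_{n-1}] of a monic polynomial."""
    return 1.0 + max(abs(a) for a in coeffs)

# p(x) = x^2 - 3x + 2 = (x - 1)(x - 2): zeros 1 and 2, bound 1 + 3 = 4.
b = cauchy_bound([2.0, -3.0])
roots = [1.0, 2.0]
```

An experimental study like the one described would evaluate several such formulas on a test set of polynomials and report which disk is tightest on average.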
Abstract: Distant-talking voice-based HCI system suffers from
performance degradation due to mismatch between the acoustic
speech (runtime) and the acoustic model (training). Mismatch is
caused by the change in the power of the speech signal as observed at
the microphones. This change is greatly influenced by the change in
distance, affecting speech dynamics inside the room before reaching
the microphones. Moreover, as the speech signal is reflected, its
acoustical characteristic is also altered by the room properties. In
general, power mismatch due to distance is a complex problem. This
paper presents a novel approach in dealing with distance-induced
mismatch by intelligently sensing instantaneous voice power variation
and compensating model parameters. First, the distant-talking speech
signal is processed through microphone array processing, and the
corresponding distance information is extracted. Distance-sensitive
Gaussian Mixture Models (GMMs), pre-trained to capture both
speech power and room properties, are used to predict the optimal distance of the speech source. Consequently, pre-computed statistical priors corresponding to the optimal distance are selected to correct the statistics of the generic model, which was frozen during training. Thus, the model parameters are post-conditioned to match the power of the instantaneous speech acoustics at runtime. This results in an improved likelihood of predicting the correct speech command at
farther distances. We experiment using real data recorded inside two
rooms. Experimental evaluation shows voice recognition performance
using our method is more robust to the change in distance compared
to the conventional approach. In our experiment, under the most
acoustically challenging environment (i.e., Room 2: 2.5 meters), our
method achieved 24.2% improvement in recognition performance
against the best-performing conventional method.
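The distance-selection step above can be sketched with one Gaussian per candidate distance as a single-component stand-in for the pre-trained GMMs. The power statistics below are invented, not measured values from the two rooms.

```python
import math

# Distance selection sketch: per-distance Gaussian over observed speech
# power; the candidate whose model gives the highest likelihood for the
# instantaneous power wins. (A single Gaussian stands in for each GMM.)
def log_gauss(x, mean, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

# Hypothetical pre-trained power statistics (mean dB, variance) per
# source distance in metres.
models = {0.5: (-20.0, 4.0), 1.5: (-28.0, 5.0), 2.5: (-33.0, 6.0)}

def predict_distance(power_db):
    return max(models, key=lambda d: log_gauss(power_db, *models[d]))

d_hat = predict_distance(-32.0)   # a weak observation -> far distance
```

Once the distance is selected, the corresponding pre-computed priors would be applied to post-condition the frozen generic acoustic model, as the abstract describes.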
Abstract: This paper presents an efficient method for obtaining straight-line motion in the tool configuration space of an articulated robot between two specified points. The simulation and implementation results show the effectiveness of the method.
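The core of a straight-line tool-space motion is linear interpolation of the tool pose between the two specified points; in a real system each sample would then be passed through the robot's inverse kinematics. The 3-component pose and the endpoints below are invented for illustration.

```python
# Straight-line motion in the tool configuration space: sample the segment
# between poses p0 and p1 uniformly. (Inverse kinematics per sample is
# omitted; poses here are simple (x, y, z) tuples for illustration.)
def straight_line(p0, p1, steps):
    path = []
    for k in range(steps + 1):
        t = k / steps
        path.append(tuple(a + t * (b - a) for a, b in zip(p0, p1)))
    return path

path = straight_line((0.0, 0.0, 0.1), (0.3, 0.4, 0.1), steps=4)
```

Denser sampling (more steps) trades computation for a smaller deviation of the actual joint-interpolated motion from the ideal straight line.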
Abstract: In this paper we address the balancing problem of transfer lines, seeking the optimal line balance that minimizes non-productive time. We focus on the tool change time and the face orientation change time, both of which influence the makespan. We
consider machine capacity limitations and technological constraints
associated with the manufacturing process of auto cylinder heads.
The problem is represented by a mixed integer programming model
that aims at distributing the design features to workstations and
sequencing the machining processes at a minimum non-productive
time. The proposed model is solved by an algorithm built on linearization schemes and a Benders decomposition approach. The experiments show the efficiency of the algorithm in reaching the exact solution of small and medium problem instances in reasonable time.
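A miniature version of the balancing decision can be enumerated directly: assign features to stations so that within-station tool switches (the non-productive time) are minimised. The features, tools, and change time below are invented, and capacity and precedence constraints are omitted; the paper solves the full model via linearised MILP with Benders decomposition.

```python
from itertools import product

# Toy line-balancing instance: 4 design features, 2 workstations, and a
# fixed penalty per tool switch inside a station. Data are hypothetical.
features = ["drill_A", "drill_B", "mill_A", "mill_C"]
tool = {"drill_A": "drill", "drill_B": "drill",
        "mill_A": "mill", "mill_C": "mill"}
t_change = 3.0                   # time lost per tool switch within a station

def nonproductive(assign):       # assign[i] = station of features[i]
    total = 0.0
    for s in (0, 1):
        seq = [tool[f] for f, a in zip(features, assign) if a == s]
        total += sum(t_change for x, y in zip(seq, seq[1:]) if x != y)
    return total

best = min(product((0, 1), repeat=4), key=nonproductive)
best_time = nonproductive(best)
```

The optimum groups like-tooled features on the same station, driving the changeover time to zero here; the real cylinder-head instances add capacity and technological constraints that make exhaustive enumeration impossible, hence the decomposition approach.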
Abstract: Optimization is often a critical issue for most system
design problems. Evolutionary Algorithms are population-based,
stochastic search techniques, widely used as efficient global
optimizers. However, finding optimal solutions to complex, high-dimensional, multimodal problems often requires highly computationally expensive function evaluations and hence is practically prohibitive. The Dynamic Approximate Fitness based
Hybrid EA (DAFHEA) model presented in our earlier work [14]
reduced computation time by controlled use of meta-models to
partially replace the actual function evaluation by approximate
function evaluation. However, the underlying assumption in
DAFHEA is that the training samples for the meta-model are
generated from a single uniform model. Situations like model
formation involving variable input dimensions and noisy data
certainly cannot be covered by this assumption. In this paper we
present an enhanced version of DAFHEA that incorporates a
multiple-model based learning approach for the SVM approximator.
DAFHEA-II (the enhanced version of the DAFHEA framework) also
overcomes the high computational expense involved with additional
clustering requirements of the original DAFHEA framework. The
proposed framework has been tested on several benchmark functions
and the empirical results illustrate the advantages of the proposed
technique.
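The surrogate idea above can be sketched with a deliberately simple meta-model: offspring are pre-screened by a nearest-neighbour approximation built from an archive of exact evaluations, and only the most promising one receives an expensive call. The 1-NN model here is only a stand-in for the SVM approximator, and the sphere function, population sizes, and step size are invented.

```python
import random
random.seed(3)

def expensive(x):                 # sphere function as the costly fitness
    return sum(v * v for v in x)

archive = []                      # (point, exact fitness) pairs seen so far

def surrogate(x):                 # cheap 1-NN approximation of the fitness
    if not archive:
        return 0.0
    p, f = min(archive,
               key=lambda e: sum((a - b) ** 2 for a, b in zip(e[0], x)))
    return f

parent = [random.uniform(-5, 5) for _ in range(3)]
parent_f = expensive(parent)
init_f = parent_f
archive.append((parent, parent_f))
exact_calls = 1

for _ in range(100):
    # generate 5 offspring, screen them with the meta-model, and spend
    # only ONE exact evaluation per generation on the best-looking one
    offspring = [[p + random.gauss(0, 0.5) for p in parent] for _ in range(5)]
    cand = min(offspring, key=surrogate)
    f = expensive(cand)
    exact_calls += 1
    archive.append((cand, f))
    if f < parent_f:              # elitist acceptance
        parent, parent_f = cand, f
```

Without the surrogate, the same (1+5) scheme would spend five exact evaluations per generation; the pre-screening cuts that to one, which is the kind of saving the DAFHEA family of methods targets.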
Abstract: The use of electronic sensors in the electronics
industry has become increasingly popular over the past few years,
and it has become a highly competitive product. The frequency adjustment process is regarded as one of the most important processes in electronic sensor manufacturing. Inaccuracies in the frequency adjustment process can cause up to 80% waste due to rework; therefore, this study aims to provide a
preliminary understanding of the role of the parameters used in the frequency adjustment process and to make suggestions for further improving performance. Four parameters are considered in this
study: air pressure, dispensing time, vacuum force, and the distance
between the needle tip and the product. A 2^k full factorial design of experiments was used to determine the parameters that significantly affect the accuracy of the frequency adjustment process, where the deviation between the frequency after adjustment and the target frequency is expected to be 0 kHz. The experiment was conducted at two levels, with two replications and five added center points.
In total, 37 experiments were carried out. The results reveal that air
pressure and dispensing time significantly affect the frequency
adjustment process. The mathematical relationship between these
two parameters was formulated, and the optimal parameters for air
pressure and dispensing time were found to be 0.45 MPa and 458 ms,
respectively. The optimal parameters were examined by carrying out
a confirmation experiment in which an average deviation of 0.082
kHz was achieved.
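The main-effect calculation behind such a factorial analysis is short enough to sketch. The response values below are hypothetical mean deviations for a reduced 2^2 design on the two significant parameters (air pressure A, dispensing time B); the real study used a 2^4 design with two replications and five centre points.

```python
# Main effects from a 2^2 full factorial: effect = mean(response at +1)
# minus mean(response at -1). Data are hypothetical deviations in kHz.
runs = [  # (A level, B level, mean deviation)
    (-1, -1, 0.40),
    (+1, -1, 0.18),
    (-1, +1, 0.26),
    (+1, +1, 0.06),
]

def effect(factor):            # factor: 0 for A, 1 for B
    hi = [y for *lv, y in runs if lv[factor] == +1]
    lo = [y for *lv, y in runs if lv[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effect_A, effect_B = effect(0), effect(1)   # both negative here: raising
# either factor lowers the deviation in this invented data set
```

Effects whose magnitude exceeds the noise estimate from the replications and centre points would be declared significant, which is how the study singled out air pressure and dispensing time.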