Abstract: Intelligent planning in the Graphplan framework is currently a focus of artificial intelligence research. Creating or Destroying Objects Planning (CDOP) is one of the unsolved, and difficult, problems in this field. In this paper, we study this planning problem and put forward the idea of transforming objects into propositions, on the basis of which we propose an algorithm, Creating or Destroying Objects in the Graphplan framework (CDOGP). Compared to Graphplan, the new algorithm can solve not only all of the problems that Graphplan can, but also a part of CDOP. We introduce the idea of object-propositions for the first time, and we focus the discussion on representations of operators that create or destroy objects and on an algorithm in the Graphplan framework. In addition, we analyze the complexity of this algorithm.
Abstract: Collective action can be an effective means for local development as well as an important strategy for enhancing livelihoods, especially among rural people. This article explores the level of collective action among members of the Fishermen's Wives Group (KUNITA) in Malaysia. KUNITA was established by the Malaysian Fishery Development Authority (LKIM) with the objective of raising the socio-economic status of fishermen's families. The members, who are mostly the wives and daughters of fishermen, are strongly encouraged by LKIM to venture into entrepreneurship activities. The objective of this research was to assess the level of collective action among members of KUNITA groups in the state of Selangor. The findings show that the high level of collective action among KUNITA members is strongly based on volunteerism. However, the level of cooperation among members within the group is relatively low. These findings present significant challenges for the group in maintaining the sustainability of the KUNITA organization.
Abstract: Because optimization of business processes is a crucial
requirement for navigating, surviving and even thriving in today's
volatile business environment, this paper presents a framework for
selecting a best-fit optimization package for solving complex
business problems. The complexity level of the problem and/or the
use of incorrect optimization software can lead to biased solutions
of the optimization problem. Accordingly, the proposed framework
identifies a number of relevant factors (e.g. decision variables,
objective functions, and modeling approach) to be considered
during the evaluation and selection process. The application
domain, the problem specifications, and the available accredited
optimization approaches are also taken into account. The output of
the framework is a recommendation of one or two optimization
packages believed to provide the best results for the underlying
problem. In addition, a set of guidelines and recommendations on
how managers can conduct an effective optimization exercise is
discussed.
Abstract: The geometric errors in the manufacturing process can
be reduced by optimal positioning of the fixture elements in the
fixture to make the workpiece stiff. In this paper we propose a
new fixture layout optimization method, N-3-2-1, for large metal
sheets that combines a genetic algorithm with finite element
analysis. The objective function in this method is to minimize the
sum of the nodal deflections normal to the surface of the
workpiece. Two different kinds of case studies are presented, and
the optimal positions of the fixture elements are obtained for each
case.
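The coupling of a genetic algorithm with the deflection objective can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: a toy deflection model (deflection growing with distance to the nearest support) stands in for the finite element solve, and all names (`total_deflection`, `genetic_search`, etc.) are hypothetical.

```python
import random

random.seed(1)

# Toy workpiece: a grid of FE nodes on a 10 x 10 sheet. In the paper this
# information would come from the finite element mesh.
NODES = [(x, y) for x in range(0, 11, 2) for y in range(0, 11, 2)]

def total_deflection(layout):
    """Objective: sum of nodal deflections normal to the sheet surface.
    Stand-in model: deflection ~ squared distance to the nearest fixture
    element (a real run would call the FEA solver here)."""
    return sum(min((nx - fx) ** 2 + (ny - fy) ** 2 for fx, fy in layout)
               for nx, ny in NODES)

def random_layout(n=4):
    return [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(n)]

def mutate(layout, step=0.5):
    return [(x + random.uniform(-step, step), y + random.uniform(-step, step))
            for x, y in layout]

def genetic_search(pop_size=30, generations=60):
    """Simple (mu + lambda)-style GA: keep the stiffest half, refill by
    mutating survivors."""
    pop = [random_layout() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=total_deflection)       # lower deflection = fitter
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=total_deflection)

best = genetic_search()
```

The optimized layout spreads the supports over the sheet, which is the qualitative behavior the N-3-2-1 method seeks for large sheet metal parts.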
Abstract: The objective of the present work is to simulate the
machining of material by electrical discharge machining (EDM) in
order to study the effect of input parameters such as discharge
current (Ip), pulse on time (Ton) and pulse off time (Toff) on the
output parameter, i.e. the material removal rate (MRR).
Experimental data were gathered from a die-sinking EDM process
using a copper electrode and medium carbon steel (AISI 1040) as
the workpiece. The rules of the membership functions (MF) and
the degree of closeness to the optimum value of the MRR lie
within the upper and lower ranges of the process parameters. It
was found that the proposed fuzzy model is in close agreement
with the experimental results. Intelligent, model-based design and
control of the EDM process parameters in this study will help to
dramatically decrease product and process development cycle
times.
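A fuzzy model of this kind can be sketched in miniature as follows. This is only an illustration of the mechanism (triangular membership functions plus weighted-average defuzzification over rules); the actual membership ranges, rule base, and inference style of the paper's model are not reproduced, and all numeric values here are hypothetical.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzy sets for discharge current Ip (A); the paper's actual
# ranges for Ip, Ton and Toff are not given here.
ip_low  = lambda ip: tri(ip, 0.0, 5.0, 10.0)
ip_high = lambda ip: tri(ip, 5.0, 10.0, 15.0)

def mrr_estimate(ip):
    """Weighted-average (Sugeno-style) defuzzification over two toy rules:
        IF Ip is low  THEN MRR = 2 mm^3/min
        IF Ip is high THEN MRR = 8 mm^3/min
    (rule outputs are illustrative, not experimental values)."""
    w_low, w_high = ip_low(ip), ip_high(ip)
    return (w_low * 2.0 + w_high * 8.0) / (w_low + w_high)
```

For example, an input midway between the two sets fires both rules at degree 0.5 and yields the midpoint of the two rule outputs.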
Abstract: Research on two-wheeled inverted pendulum (TWIP) mobile robots, commonly known as balancing robots, has gained momentum over the last decade in a number of robotics laboratories around the world. This paper describes the hardware design of such a robot. The objective of the design is to develop a TWIP mobile robot, together with a MATLAB interfacing configuration, to be used as a flexible platform comprising an embedded unstable linear plant intended for research and teaching purposes. Issues such as the selection of actuators and sensors, signal processing units, MATLAB Real-Time Workshop coding, modeling and the control scheme are addressed and discussed. The system is then tested using a well-known state feedback controller to verify its functionality.
Abstract: Thermo-chemical treatment (TCT) such as pyrolysis is
gaining recognition as a valid route for (i) the recovery of
materials, valuable products and petrochemicals; (ii) waste
recycling; and (iii) elemental characterization. Pyrolysis is also
receiving renewed attention for its operational, economic and
environmental advantages. In this study, samples of polyethylene
terephthalate (PET) and polystyrene (PS) were pyrolysed in a
micro-thermobalance reactor (using a thermogravimetric TGA
setup). Both polymers were prepared and conditioned prior to
experimentation. The main objective was to determine the kinetic
parameters of the depolymerization reactions that occur within the
thermal degradation process. Overall kinetic rate constants (ko)
and activation energies (Eo) were determined using the general
kinetics theory (GKT) method previously used by a number of
authors. Fitted correlations were found and validated using the
GKT; errors were within ±5%. This study represents a
fundamental step towards the development of scaling relationships
for the investigation of larger-scale reactors relevant to industry.
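The extraction of an overall rate constant and activation energy from rate data typically rests on the linearized Arrhenius form ln k = ln k0 − E0/(RT). The sketch below fits that line by least squares to synthetic data; it illustrates the standard linearization only, not the specifics of the GKT method, and the numeric values are illustrative, not the paper's results.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def fit_arrhenius(temps_K, rate_consts):
    """Least-squares fit of ln k = ln k0 - E0/(R*T); returns (k0, E0)."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(k) for k in rate_consts]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    slope = sxy / sxx                 # slope = -E0 / R
    intercept = ybar - slope * xbar   # intercept = ln k0
    return math.exp(intercept), -slope * R

# Synthetic check: data generated from a known k0 and E0 are recovered.
k0_true, E0_true = 1.0e12, 200_000.0   # 1/s, J/mol (illustrative values)
T = [600.0, 650.0, 700.0, 750.0]
k = [k0_true * math.exp(-E0_true / (R * t)) for t in T]
k0_fit, E0_fit = fit_arrhenius(T, k)
```

Plotting ln k against 1/T (an Arrhenius plot) gives the same fit graphically: the slope yields E0 and the intercept yields k0.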
Abstract: The objective of this study is to investigate fire
behavior, experimentally and numerically, in a scaled version of
an underground station. The effect of ventilation velocity on the
fire is examined. Fire experiments are simulated by burning 10 ml
of isopropyl alcohol fuel in a fire pool with dimensions 5 cm x
10 cm x 4 mm at the center of a 1/100 scaled underground station
model. The commercial CFD program FLUENT was used in the
numerical simulations, with the k-ω SST turbulence model for the
air flow simulations and the non-premixed combustion model for
the combustion simulation. This study showed that, as the
ventilation velocity is increased from 1 m/s to 3 m/s, the
maximum temperature in the station rises; it is found to be lowest
for a ventilation velocity of 1 m/s. The reason for this
experimental result lies in the relative dominance of the oxygen
supply effect over the cooling effect. Without the piston effect,
the maximum temperature occurs above the fuel pool. However,
when the ventilation velocity is increased, the flame is tilted in
the direction of ventilation and the location of the maximum
temperature moves along the flow direction. The velocities
measured experimentally at different locations in the station are
well matched by the CFD simulation results. The predicted
general flow pattern agrees satisfactorily with the smoke
visualization tests. Backlayering is also well predicted by the
CFD simulation. However, throughout the station, the CFD
simulations predicted higher temperatures than the experimental
measurements.
Abstract: Low temperature (LT) is one of the most important
abiotic stresses causing loss of yield in wheat (Triticum aestivum
L.). Four major genes in wheat, with the dominant alleles
designated Vrn-A1, Vrn-B1, Vrn-D1 and Vrn4, are known to have
large effects on the vernalization response, but their effects on
cold hardiness are ambiguous. Poor cold tolerance has restricted
winter wheat production in regions of high winter stress [9]. It is
known that nearly all wheat chromosomes [5], or at least 10 of
the 21 chromosome pairs, are important in winter hardiness [15].
The objective of the present study was to clarify the role of each
chromosome in cold tolerance. For this purpose we used 20
isogenic lines of wheat; in each of these lines a single
chromosome from the 'Bezostaya' variety (a winter habit cultivar)
was substituted into the 'Cappelle Desprez' variety. The plant
material was grown under controlled conditions at 20 °C with a
16 h day length in the moderately cold areas of Iran at the Karaj
Agricultural Research Station in 2006-07, and the acclimation
period was completed over about 4 weeks in a cold room at 4 °C.
The cold hardiness of these isogenic lines was measured by LT50
(the temperature at which 50% of the plants are killed by freezing
stress). The experimental design was a randomized complete
block design (RCBD) with three replicates. The results showed
that chromosome 5A had a major effect on freezing tolerance,
while chromosomes 1A and 4A had lesser effects on this trait.
Further studies are essential for understanding the importance of
each chromosome in controlling cold hardiness in wheat.
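LT50 is conventionally read off a survival-versus-temperature curve: the temperature at which the interpolated survival drops to 50%. A minimal sketch of that interpolation, with hypothetical freezing-test data (not values from the study):

```python
def lt50(temps_C, survival_pct):
    """Linearly interpolate the temperature at which survival crosses 50%
    (LT50). Assumes temperatures are listed from warmest to coldest with
    survival decreasing as temperature falls."""
    points = list(zip(temps_C, survival_pct))
    for (t1, s1), (t2, s2) in zip(points, points[1:]):
        if s1 >= 50 >= s2:
            # linear interpolation between the two bracketing points
            return t1 + (50 - s1) * (t2 - t1) / (s2 - s1)
    raise ValueError("50% survival not bracketed by the data")

# Hypothetical freezing-test data for one isogenic line.
temps = [-6.0, -9.0, -12.0, -15.0, -18.0]
surv  = [100.0, 90.0, 60.0, 30.0, 5.0]
```

For these illustrative numbers, survival crosses 50% between −12 °C (60% survival) and −15 °C (30%), so the interpolated LT50 is −13 °C. In practice a probit or logistic fit is often used instead of straight-line interpolation.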
Abstract: A recent quasi-experimental evaluation of the Canadian Active Labour Market Policies (ALMP) by Human Resources and Skills Development Canada (HRSDC) has provided an opportunity to examine alternative methods of estimating the incremental effects of Employment Benefits and Support Measures (EBSMs) on program participants. The focus of this paper is to assess the efficiency and robustness of inverse probability weighting (IPW) relative to kernel matching (KM) in the estimation of program effects. To accomplish this objective, the authors compare 1,080 pairs of estimates, along with their associated standard errors, to assess which type of estimate is generally more efficient and robust. In the interest of practicality, the authors also document the computational time it took to produce the IPW and KM estimates, respectively.
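The IPW estimator of the effect of a program on its participants can be sketched as follows: the mean outcome of participants minus an odds-weighted mean of non-participant outcomes. This is the textbook form of the estimator, not the HRSDC implementation, and it assumes propensity scores `e` have already been estimated (e.g. by logistic regression).

```python
def ipw_att(y, t, e):
    """Inverse-probability-weighting estimate of the average treatment
    effect on the treated (ATT).
    y: outcomes, t: participation indicator (0/1), e: propensity scores.
    Non-participants are weighted by the odds e/(1-e) so that their
    covariate distribution mimics that of participants."""
    treated = [yi for yi, ti in zip(y, t) if ti == 1]
    ctrl = [(yi, ei / (1 - ei)) for yi, ti, ei in zip(y, t, e) if ti == 0]
    w_sum = sum(w for _, w in ctrl)
    return (sum(treated) / len(treated)
            - sum(yi * w for yi, w in ctrl) / w_sum)
```

When the propensity score is constant across units, the weights are equal and the estimator reduces to a simple difference in group means, which is a useful sanity check.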
Abstract: This paper presents a comparative emission study of a
gasoline/LPG bi-fuel automotive engine newly introduced in the
Indian market. Emissions were tested as per the LPG Bharat
Stage III driving cycle. Emission tests were carried out for the
urban cycle and the extra-urban cycle; the total time for the urban
and extra-urban cycles was 1180 s. The engine was run in LPG
mode using a conversion system. Emissions were measured as
per the standard procedure and then compared. Corrected
emissions were computed by deducting the ambient reading from
the sample reading. The paper describes the detailed emission test
procedure and the results obtained. CO emissions were in the
range of 38.9 to 111.3 ppm, HC emissions in the range of 18.2 to
62.6 ppm, NOx emissions from 0.8 to 3.9 ppm, and CO2
emissions from 6719.2 to 8051 ppm. The paper throws light on
the emission results of LPG vehicles recently introduced in the
Indian automobile market. The objectives of this experimental
study were to measure the emissions of the engine in gasoline
and LPG modes and to compare them.
Abstract: This paper argues that a product development exercise
involves, in addition to the conventional stages, several decisions
regarding other aspects. These aspects should be addressed
simultaneously in order to develop a product that responds to
customer needs and helps realize the objectives of the
stakeholders in terms of profitability, market share and the like.
We present a framework that encompasses these different
development dimensions. The framework shows that a product
development methodology such as Quality Function Deployment
(QFD) is the basic tool that allows definition of the target
specifications of a new product. Creativity is the first dimension
that enables the development exercise to proceed and conclude
successfully, and a number of group processes need to be
followed by the development team in order to ensure sufficient
creativity and innovation. Secondly, packaging is considered to
be an important extension of the product. Branding strategies,
quality and standardization requirements, identification
technologies, design technologies, production technologies, and
costing and pricing are also integral parts of the development
exercise. These dimensions constitute the proposed framework.
The paper also presents a mathematical model used to calculate
the design targets based on the target costing principle. The
framework is used to study a case of new product development in
the telecommunications services sector.
Abstract: The objective of this research is to investigate the
advantages of using large-diameter 0.7 inch prestressing strands
in pretensioned applications. The advantages of large-diameter
strands are realized mainly in heavy construction applications.
Bridges and tunnels are subjected to higher daily traffic, with an
exponential increase in the ultimate weight of trucks, which raises
the demand for higher structural capacity of bridges and tunnels.
In this research, precast prestressed I-girders were considered as a
case study. The flexural capacities of girders fabricated using 0.7
inch strands and different concrete strengths were calculated and
compared to the capacities of girders fabricated using 0.6 inch
strands and equivalent concrete strengths. The effect of the bridge
deck concrete strength on the capacity of the composite
deck-girder section was investigated because of its possible effect
on the final section capacity. Finally, a comparison was made
between bridge cross-sections with girders designed using regular
0.6 inch strands and those using the large-diameter 0.7 inch
strands. The research findings showed that the structural
advantages of 0.7 inch strands allow for fewer bridge girders,
reduced material quantities, and lighter-weight members. The
structural advantages of 0.7 inch strands are maximized when
high strength concrete (HSC) is used in girder fabrication and
concrete of at least 5 ksi compressive strength is used in pouring
the bridge decks. The use of 0.7 inch strands in the bridge
industry can contribute to the improvement of bridge conditions,
minimize construction cost, and reduce the construction duration
of projects.
Abstract: Nanotechnology is the science of creating, using and
manipulating objects that have at least one dimension in the range
of 0.1 to 100 nanometers. In other words, nanotechnology
reconstructs a substance from its individual atoms, arranging
them in a way that is desirable for our purpose.
The main reason that nanotechnology has been attracting
attention is the unique properties that objects exhibit when they
are formed at the nano-scale. The characteristics that nano-scale
materials show, compared to their naturally occurring forms, are
both useful for creating high-quality products and dangerous
when the materials come into contact with the body or spread
into the environment.
In order to control and lower the risk of such nano-scale
particles, the following three points should be considered:
1) First, these materials can cause long-term diseases that may
show their effects on the body years after penetrating human
organs; and since this science has only recently been developed at
an industrial scale, not enough information is available about its
hazards to the body.
2) Second, these particles can easily spread into the
environment and remain in air, soil or water for a very long time,
in addition to having a high ability to penetrate body skin and
cause new kinds of diseases.
3) Third, to protect the body and the environment against the
danger of these particles, the protective barriers must be finer
than these small objects, and such defenses are hard to
accomplish.
This paper reviews, discusses and assesses the risks that
humans and the environment face as this new science develops at
a high rate.
Abstract: In this presentation, we discuss the use of information technologies in the area of special education for teaching individuals with learning disabilities. Application software which was developed for this purpose is used to demonstrate the applicability of a database integrated information processing system to alleviate the burden of educators. The software allows the preparation of individualized education programs based on the predefined objectives, goals and behaviors.
Abstract: This work presents a new algorithm based on a combination of fuzzy logic (FUZ), Dynamic Programming (DP), and a Genetic Algorithm (GA) for capacitor allocation in distribution feeders. The problem formulation considers two distinct objectives related to the total cost of power loss and the total cost of capacitors, including purchase and installation costs. The novel formulation is a multi-objective and non-differentiable optimization problem. The proposed method uses fuzzy reasoning for the siting of capacitors in radial distribution feeders, DP for sizing, and finally the GA for finding the optimum shapes of the membership functions used in the fuzzy reasoning stage. The proposed method has been implemented in a software package, and its effectiveness has been verified on a 9-bus radial distribution feeder to support the conclusions. A comparison between the proposed method and similar methods from other research works shows the effectiveness of the proposed method for solving the optimum capacitor planning problem.
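The two cost objectives can be combined into a single planning cost of the usual form found in the capacitor placement literature. The sketch below is illustrative only: the coefficient values and the function name are hypothetical, and the paper's actual formulation (and its loss calculation, which requires a load-flow solution) is not reproduced.

```python
def total_cost(peak_loss_kW, caps_kvar, ke=168.0, cf=1000.0, kc=5.0):
    """Sketch of the planning objective: cost of power loss plus the
    purchase and installation cost of the placed capacitor banks.
    ke: $ per kW of peak power loss (annualized)
    cf: fixed installation cost per capacitor bank, $
    kc: purchase cost per kvar of capacitance, $
    (all coefficient values are illustrative, not from the paper)."""
    loss_cost = ke * peak_loss_kW
    cap_cost = sum(cf + kc * q for q in caps_kvar)
    return loss_cost + cap_cost
```

For example, a feeder with 100 kW of peak loss and two banks of 300 and 600 kvar would be scored as 168·100 + (1000 + 5·300) + (1000 + 5·600) = 23,300 under these illustrative coefficients; the fuzzy/DP/GA stages then search for the siting and sizing that minimize this score.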
Abstract: The main objective of this research was to investigate
the biosorption capacity of biofilms of sulphate-reducing bacteria
(SRB) for removing heavy metals such as Zn, Pb and Cd from
rainwater, using laboratory-scale reactors containing mixed
support media. The evidence showed that biosorption contributed
to the removal of heavy metals, including Zn, Pb and Cd, in the
presence of SRB, and SRB were also found in the aqueous
samples from the reactors. However, the SRB and specific
families (Desulfobacteriaceae and Desulfovibrionaceae) were
found mainly in the biomass samples taken from all reactors at
the end of the experiment. EDX analysis of the reactor solids at
the end of the experiment showed that the heavy metals Zn, Pb
and Cd had also accumulated in these precipitates.
Abstract: Knowledge of the factors that influence stress and its
distribution is of key importance to the successful production of
durable restorations. One of these factors is the marginal
geometry. The objective of this study was to evaluate, by finite
element analysis (FEA), the influence of different marginal
designs on the stress distribution in teeth prepared for cast metal
crowns. Five margin designs were taken into consideration:
shoulderless, chamfer, shoulder, sloped shoulder and shoulder
with bevel. For each kind of preparation, three-dimensional finite
element analyses were carried out. Maximal equivalent stresses
were calculated and stress patterns were plotted in order to
compare the marginal designs. Within the limitations of this
study, the shoulder and beveled shoulder margin preparations of
the teeth are preferred for cast metal crowns from a
biomechanical point of view.
Abstract: In this paper, all variables are assumed to be integer
and positive. In this method, the objective function may be
maximized or minimized, but the constraints are always expressed
as less-than-or-equal inequalities. The method proceeds by
choosing a suitable combination of the inequalities and
eliminating one of the variables; continuing in this way finally
yields a single inequality in (n - m + 1) unknowns, where m is
the number of constraints and n the number of decision variables.
Abstract: This paper deals with underactuated dynamic systems, such as a spring-mass-damper system, in which the number of control variables is less than the number of state variables. In order to apply optimal control, controllability must first be checked. There are many objective functions that can be selected as the goal of the optimal control, such as minimum energy, maximum energy and minimum jerk. Since the objective function is the first priority, a second goal may not fit into the objective-function format; to address this while avoiding a vector-valued cost, this paper illustrates the problem for underactuated dynamic systems by comparing the minimum-energy and minimum-jerk objectives.
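The two objectives being compared can be written, in discretized form, as the functionals J = ∫ u(t)² dt (minimum energy) and J = ∫ (d³x/dt³)² dt (minimum jerk). The sketch below evaluates both on sampled trajectories; the helper names are hypothetical and this illustrates only the cost functionals, not the paper's optimal control solution.

```python
def energy_cost(u, dt):
    """Discretized minimum-energy objective  J = integral of u(t)^2 dt,
    evaluated from control samples u by a rectangle rule."""
    return sum(ui * ui for ui in u) * dt

def jerk_cost(x, dt):
    """Discretized minimum-jerk objective  J = integral of (x''')^2 dt,
    with the jerk approximated by third finite differences of the
    position samples x."""
    jerk = [(x[i + 3] - 3 * x[i + 2] + 3 * x[i + 1] - x[i]) / dt ** 3
            for i in range(len(x) - 3)]
    return sum(j * j for j in jerk) * dt
```

As a sanity check, a trajectory whose position is a quadratic function of time has zero third derivative, so its discretized jerk cost vanishes exactly, while any nonzero control sequence has a strictly positive energy cost.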