Abstract: In most popular implementations of parallel GAs, the whole population is divided into a set of subpopulations; each subpopulation executes a GA independently, and some individuals migrate at fixed intervals over a ring topology. In these studies, the migrations usually occur synchronously among subpopulations. Therefore, CPUs are not used efficiently, and communication is not efficient either. A few studies have tried asynchronous migration, but it is hard to implement and setting proper parameter values is difficult. The aim of our research is to develop a migration method that is easy to implement, whose parameter values are easy to set, and that reduces communication traffic. In this paper, we propose a traffic-reduction method for the asynchronous parallel distributed GA based on migration of elites only. The method uses a server-client model: every client executes a GA on a subpopulation and sends its elite information to the server; the server manages the elite information of each client, and migrations occur according to the evolution of the subpopulation in a client. This facilitates the reduction of communication traffic. To evaluate the proposed model, we apply it to many function optimization problems. We confirm that the proposed method performs as well as current methods, generates less communication traffic, and makes parameter setting much easier.
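The elite-only, server-client migration scheme described above can be illustrated with a rough Python sketch. All names here (EliteServer, run_client) and the toy GA itself are invented for illustration, not taken from the paper; the key point is that a client contacts the server only when its elite improves, which is what keeps traffic low.

```python
import random

class EliteServer:
    """Holds the best elite reported by each client (illustrative name)."""
    def __init__(self):
        self.elites = {}   # client_id -> (fitness, individual)
        self.messages = 0  # communication-traffic counter

    def report(self, client_id, fitness, individual):
        self.messages += 1
        self.elites[client_id] = (fitness, individual)

def sphere(x):
    """Toy function-optimization target (minimize sum of squares)."""
    return sum(v * v for v in x)

def run_client(client_id, server, rng, generations=50, pop_size=20, dim=3):
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    best_sent = float("inf")
    for _ in range(generations):
        pop.sort(key=sphere)
        # keep the better half, refill with mutated copies (toy GA step)
        pop = pop[:pop_size // 2] + [
            [v + rng.gauss(0, 0.1) for v in ind] for ind in pop[:pop_size // 2]
        ]
        elite_fit = sphere(pop[0])
        if elite_fit < best_sent:   # migrate only when the elite improves
            best_sent = elite_fit
            server.report(client_id, elite_fit, pop[0])

rng = random.Random(0)
server = EliteServer()
for cid in range(4):                # four "clients" run sequentially here
    run_client(cid, server, rng)
best_fit, _ = min(server.elites.values())
```

In a real deployment the clients would run asynchronously on separate machines; here they run sequentially only to keep the sketch self-contained.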
Abstract: This paper considers the problem of scheduling maintenance actions for identical aircraft gas turbine engines. Each turbine consists of parts which frequently require replacement. A finite inventory of spare parts is available, and all parts are ready for replacement at any time. The inventory consists of both new and refurbished parts; hence, these parts have different field lives. The goal is to find a replacement-part sequence that maximizes the time the aircraft will keep functioning before the inventory is replenished. The problem is formulated as an identical parallel machine scheduling problem in which the minimum completion time has to be maximized. Two models have been developed. The first is an optimization model based on a 0-1 linear programming formulation, while the second is an approximate procedure which consists of decomposing the problem into several two-machine subproblems. Each subproblem is optimally solved using the first model. Both models have been implemented using Lingo and tested on two sets of randomly generated data with up to 150 parts and 10 turbines. Experimental results show that the optimization model is able to solve only instances with no more than 4 turbines, while the decomposition procedure often provides near-optimal solutions within a maximum CPU time of 3 seconds.
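The max-min objective above (maximize the minimum completion time over identical machines) is solved in the paper by a 0-1 LP and a decomposition procedure, neither of which is reproduced here. As a hedged illustration of the objective itself, a common greedy LPT-style heuristic for the same problem (function name invented) can be sketched:

```python
import heapq

def assign_parts(lives, n_machines):
    """Greedy LPT-style assignment: give each remaining longest-life part
    to the machine (turbine) with the smallest accumulated life so far.
    This tends to balance loads and thus raise the minimum completion time."""
    heap = [(0.0, m) for m in range(n_machines)]  # (load, machine)
    heapq.heapify(heap)
    schedule = [[] for _ in range(n_machines)]
    for life in sorted(lives, reverse=True):
        load, m = heapq.heappop(heap)
        schedule[m].append(life)
        heapq.heappush(heap, (load + life, m))
    return schedule, min(sum(s) for s in schedule)

# 8 spare parts with hypothetical field lives, 3 turbines
sched, min_load = assign_parts([8, 7, 6, 5, 4, 3, 2, 1], 3)
```

This is a heuristic, not the paper's exact method; the paper's decomposition procedure instead solves two-machine subproblems optimally.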
Abstract: This paper concerns a formal model to help the
simulation of agent societies where institutional roles and
institutional links can be specified operationally. That is, this paper
concerns institutional roles that can be specified in terms of a minimal behavioral capability that an agent should have in order to
enact that role and, thus, to perform the set of institutional functions that role is responsible for. Correspondingly, the paper concerns
institutional links that can be specified in terms of a minimal
interactional capability that two agents should have in order to, while
enacting the two institutional roles that are linked by that institutional
link, perform for each other the institutional functions supported by
that institutional link. The paper proposes a cognitive architecture
approach to institutional roles and institutional links, that is, an approach in which an institutional role is seen as an abstract cognitive
architecture that should be implemented by any concrete agent (or set of concrete agents) that enacts the institutional role, and in which
institutional links are seen as interactions between the two abstract
cognitive agents that model the two linked institutional roles. We
introduce a cognitive architecture for such purpose, called the
Institutional BCC (IBCC) model, which lifts Yoav Shoham's BCC
(Beliefs-Capabilities-Commitments) agent architecture to social
contexts. We show how the resulting model can be taken as a means
for a cognitive architecture account of institutional roles and
institutional links of agent societies. Finally, we present an example
of a generic scheme for certain fragments of the social organization
of agent societies, where institutional roles and institutional links are
given in terms of the model.
Abstract: This paper focuses on creating a component model of an information system under uncertainty. The paper identifies problems in the current approach to component modeling and proposes a fuzzy tool which works with vague customer requirements and proposes components of the resulting component model. The proposed tool is verified on a specific information system, and the results are presented in the paper. After finding suitable sub-components of the resulting component model, the component model is visualised by the tool.
Abstract: This empirical research examines how marketing managers evaluate their firms' performances and decide to innovate. Managers use several standards, namely the firm's past performance, the firm's target performance, competitor performance, and the industry's average performance, to compare and evaluate the firms' performances. It is hypothesized that marketing managers and owners of the firm compare the firm's current performance with these four standards at the same time when deciding when to innovate in any aspect of the firm, whether management style or products. The relationship between the comparison of the firm's performance with these standards and innovation is examined in the same regression model. The results of the regression analysis are discussed, and some recommendations are made for future studies and practitioners.
Abstract: The paper presents a one-dimensional transient
mathematical model of compressible thermal multi-component gas
mixture flows in pipes. The set of mass, momentum and enthalpy conservation equations for the gas phase is solved. Thermo-physical
properties of multi-component gas mixture are calculated by solving
the Equation of State (EOS) model. The Soave-Redlich-Kwong
(SRK-EOS) model is chosen. Gas mixture viscosity is calculated on
the basis of the Lee-Gonzales-Eakin (LGE) correlation. Numerical
analysis on rapid decompression in conventional dry gases is
performed by using the proposed mathematical model. The model is
validated on measured values of the decompression wave speed in
dry natural gas mixtures. All predictions show excellent agreement
with the experimental data at high and low pressure. The presented
model predicts the decompression in dry natural gas mixtures much
better than GASDECOM and OLGA codes, which are the most
frequently-used codes in oil and gas pipeline transport service.
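For readers unfamiliar with the SRK equation of state named above, a single-component sketch of solving the SRK cubic for the compressibility factor Z may help. The methane critical constants below come from standard tables; the paper itself handles multi-component mixtures with mixing rules, which this toy omits.

```python
R = 8.314  # universal gas constant, J/(mol*K)

def srk_z(T, P, Tc, Pc, omega):
    """Compressibility factor Z from the SRK cubic
    Z^3 - Z^2 + (A - B - B^2) Z - A B = 0, solved by Newton's method
    starting from the ideal-gas root Z = 1."""
    m = 0.480 + 1.574 * omega - 0.176 * omega ** 2
    alpha = (1 + m * (1 - (T / Tc) ** 0.5)) ** 2
    a = 0.42748 * R ** 2 * Tc ** 2 / Pc
    b = 0.08664 * R * Tc / Pc
    A = a * alpha * P / (R * T) ** 2
    B = b * P / (R * T)
    z = 1.0
    for _ in range(50):
        f = z ** 3 - z ** 2 + (A - B - B ** 2) * z - A * B
        df = 3 * z ** 2 - 2 * z + (A - B - B ** 2)
        z -= f / df
    return z

# Methane at 300 K and 1 bar: nearly ideal, so Z sits just below 1
z = srk_z(300.0, 1e5, Tc=190.56, Pc=4.599e6, omega=0.011)
```

A full decompression model would couple this EOS to the conservation equations along the pipe; this fragment only shows the thermodynamic closure step.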
Abstract: The present paper reports the removal of Cd(II) and Zn(II) ions using synthetic Zeolite NaA. The adsorption capacity of the sorbent (Zeolite NaA) strongly depends on whether Cd(II) and Zn(II) are present in the sorbate simultaneously (concurrently) or separately. When Cd(II) and Zn(II) are present simultaneously in the sorbate, Zn(II) ions are sorbed at a higher rate. Equilibrium data fitted the Langmuir, Freundlich and Tempkin isotherms well. The applicability of each isotherm equation to describe the adsorption process was judged by the correlation coefficient R2. The Langmuir model yielded the best fit, with R2 values equal to or higher than 0.970, as compared to the Freundlich and Tempkin models. The fact that 1/n values range from 0.322 to 0.755 indicates that the adsorption of Cd(II) and Zn(II) ions from aqueous solutions is also favored by the Freundlich model.
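The Langmuir fitting reported above can be illustrated with a minimal sketch: the isotherm q = q_max*K*C/(1 + K*C) is fitted through its linear form C/q = 1/(q_max*K) + C/q_max. The data below are synthetic, generated from assumed parameters, not the paper's measurements.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares line fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# synthetic equilibrium data from an assumed Langmuir isotherm
q_max_true, K_true = 4.0, 0.5
C = [0.5, 1, 2, 4, 8, 16]                       # equilibrium concentrations
q = [q_max_true * K_true * c / (1 + K_true * c) for c in C]

# linearized Langmuir: C/q = 1/(q_max*K) + C/q_max
slope, intercept = linear_fit(C, [c / qi for c, qi in zip(C, q)])
q_max = 1 / slope          # recovered capacity
K = slope / intercept      # recovered affinity constant
```

On noiseless synthetic data the parameters are recovered exactly; with real data the R2 of this regression is what the abstract uses to rank the isotherms.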
Abstract: This paper presents a method of model selection and
identification of Hammerstein systems by hybridization of the genetic
algorithm (GA) and particle swarm optimization (PSO). An unknown
nonlinear static part to be estimated is approximately represented
by an automatic choosing function (ACF) model. The weighting
parameters of the ACF and the system parameters of the linear
dynamic part are estimated by the linear least-squares method. On
the other hand, the adjusting parameters of the ACF model structure
are properly selected by the hybrid algorithm of the GA and PSO,
where the Akaike information criterion is utilized as the evaluation
value function. Simulation results are shown to demonstrate the
effectiveness of the proposed hybrid algorithm.
Abstract: This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly eliminate the burden of having to specify completely the structure of the model. Two different types of neural networks were used for the application to the Pulping of Sugar Maple problem. Three-layer feed-forward neural networks trained with Preconditioned Conjugate Gradient (PCG) methods were used in this investigation. Preconditioning is a method to improve convergence by lowering the condition number and increasing the clustering of the eigenvalues: instead of Ax = b, the modified problem M^-1 A x = M^-1 b is solved, where M is a positive-definite preconditioner that is closely related to A. We mainly focused on PCG-based training methods which originated from optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves update (PCGF), Preconditioned Conjugate Gradient with Polak-Ribiere update (PCGP) and Preconditioned Conjugate Gradient with Powell-Beale restarts (PCGB). The behavior of the PCG methods in the simulations proved to be robust against phenomena such as oscillations due to large step sizes.
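The preconditioning idea summarized above can be sketched with a minimal preconditioned conjugate gradient using a Jacobi preconditioner M = diag(A). This is a generic PCG for a small linear system, shown only to make the M^-1 A x = M^-1 b construction concrete; it is not the paper's network-training code.

```python
def pcg(A, b, tol=1e-10, max_iter=100):
    """Preconditioned conjugate gradient for SPD A, with M = diag(A)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                    # r = b - A x with x = 0
    M_inv = [1.0 / A[i][i] for i in range(n)]   # Jacobi preconditioner
    z = [M_inv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [M_inv[i] * r[i] for i in range(n)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = pcg(A, b)   # solves A x = b; exact solution is (1/11, 7/11)
```

The Fletcher-Reeves, Polak-Ribiere and Powell-Beale variants named in the abstract differ only in how the search-direction update (the beta factor above) is computed for the nonlinear training objective.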
Abstract: The deviation between the target state variable and the actual state variable should be used to form the state-tending factor of a complex system, which can reflect the process by which the complex system tends toward rationalization. For the system of basic equations of complete factor synergetics, consisting of twenty nonlinear stochastic differential equations, two new models are constructed, called respectively the rationalizing tendency model and the non-rationalizing tendency model. We can therefore extend the theory of programming, with an objective function and constraint conditions suitable only for the realm of man's activities, into a new analysis with a tendency function and constraint conditions suitable for all fields of complex systems.
Abstract: In this paper, we explore the applicability of the Sinc-
Collocation method to a three-dimensional (3D) oceanography model.
The model describes a wind-driven current with depth-dependent
eddy viscosity in the complex-velocity system. In general, the
Sinc-based methods excel over traditional numerical methods due to their exponentially decaying errors, rapid convergence, and ability to handle problems with singularities at the endpoints.
Together with these advantages, the Sinc-Collocation approach that
we utilize exploits first derivative interpolation, whose integration
is much less sensitive to numerical errors. We bring up several
model problems to prove the accuracy, stability, and computational
efficiency of the method. The approximate solutions determined by
the Sinc-Collocation technique are compared to exact solutions and
those obtained by the Sinc-Galerkin approach in earlier studies. Our
findings indicate that the Sinc-Collocation method outperforms other
Sinc-based methods in past studies.
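The cardinal (sinc) expansion that all Sinc-based methods build on can be made concrete with a short sketch: f(x) is approximated by sum_k f(k*h) * sinc((x - k*h)/h), which converges exponentially for smooth, rapidly decaying functions. The test function below is illustrative; the paper's collocation scheme for the 3D ocean model is not reproduced.

```python
import math

def sinc(t):
    """Normalized sinc, sin(pi t)/(pi t), with the removable value at 0."""
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def sinc_approx(f, x, h, N):
    """Truncated cardinal expansion of f at point x with step h."""
    return sum(f(k * h) * sinc((x - k * h) / h) for k in range(-N, N + 1))

f = lambda x: math.exp(-x * x)          # rapidly decaying test function
approx = sinc_approx(f, 0.5, h=0.25, N=40)
err = abs(approx - f(0.5))              # exponentially small for this f
```

Collocation methods then enforce the governing equations at the sinc nodes k*h, turning the differential problem into a linear system in the samples f(k*h).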
Abstract: In this paper, a subtractive-clustering-based fuzzy inference system approach is used for early detection of faults in function-oriented software systems. The approach has been tested on real defect datasets of the NASA software projects PC1 and CM1. Both the code-based model and the joined model (a combination of the requirement-based and code-based metrics) of the datasets are used for training and testing of the proposed approach. The performance of the models is recorded in terms of accuracy, MAE and RMSE values; the proposed approach performs better with the joined model. As evidenced by the results obtained, it can be concluded that clustering and fuzzy logic together provide a simple yet powerful means to model early detection of faults in function-oriented software systems.
Abstract: A model is presented to find the optimal design of a mixed renewable warranty policy for non-repairable Weibull-life products. The optimal design considers the conflict of interests between the customer and the manufacturer: the customer's interests are a longer full-rebate coverage period and a longer total warranty coverage period, while the manufacturer's interests are lower warranty cost and lower risk. The design factors are the full-rebate and total warranty coverage periods. Results showed that the mixed policy is better than the full-rebate policy in terms of risk and total warranty coverage period in all three bathtub regions. In addition, results showed that the linear policy is better than the mixed policy in the infant-mortality and constant-failure regions, while the mixed policy is better than the linear policy in the ageing region of the model. Furthermore, the results showed that using a burn-in period for infant-mortality products reduces warranty cost and risk.
Abstract: The zero truncated model is usually used in modeling count data without zeros; it is the opposite of the zero inflated model. Zero truncated Poisson and zero truncated negative binomial models have been discussed and used by some researchers in analyzing the abundance of rare species and lengths of hospital stay. Zero truncated models are used as the base in developing hurdle models. In this study, we developed a new model, the zero truncated strict arcsine model, which can be used as an alternative model for count data without zeros and with extra variation. Two simulated data sets and one real-life data set are fitted to the developed model. The results show that the model provides a good fit to the data. The maximum likelihood estimation method is used to estimate the parameters.
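As a concrete member of the zero-truncated family discussed above, the zero-truncated Poisson has pmf P(X = k) = lambda^k / ((e^lambda - 1) * k!) for k >= 1, obtained by renormalizing the Poisson after removing the zero class; its mean is lambda / (1 - e^(-lambda)). The strict arcsine model itself is not reproduced here.

```python
import math

def zt_poisson_pmf(k, lam):
    """P(X = k | X > 0) for a Poisson(lam) variable, defined for k >= 1."""
    if k < 1:
        return 0.0
    return lam ** k / ((math.exp(lam) - 1) * math.factorial(k))

lam = 2.0
# the pmf over k = 1, 2, ... sums to 1 (truncating the tail at k = 49)
total = sum(zt_poisson_pmf(k, lam) for k in range(1, 50))
# the mean matches the closed form lam / (1 - exp(-lam))
mean = sum(k * zt_poisson_pmf(k, lam) for k in range(1, 50))
```

Fitting such a model by maximum likelihood, as the abstract describes, amounts to maximizing the sum of log pmf values over the observed (all-positive) counts.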
Abstract: This study analyzes how tower grounding resistance affects the back-flashover voltage across insulator strings in a transmission system. The paper studies the 500 kV transmission lines from Mae Moh, Lampang to Nong Chok, Bangkok, Thailand, which are double circuits on the same steel tower with two overhead ground wires. The factors of this study include the magnitude of the lightning stroke and the front time of the lightning stroke. The steel tower is represented by a multistory tower model. The study assumes return stroke currents ranging from 1 to 200 kA and front times of the lightning stroke between 1 μs and 3 μs. The simulations study the effect of varying the tower grounding resistance on the lightning current, and the simulation results are analyzed for the lightning overvoltage that causes back flashover at the insulator strings. This study helps identify the causes of back flashover in the transmission line system and also serves as a guideline for solving the problem for 500 kV transmission line systems.
Abstract: The mitigation of crop loss due to damaging freezes requires accurate air temperature prediction models. An improved model for temperature prediction in Georgia was developed by including information on seasonality and modifying parameters of an existing artificial neural network model. Alternative models were compared by instantiating and training multiple networks for each model. The inclusion of up to 24 hours of prior weather information and inputs reflecting the day of year were among improvements that reduced average four-hour prediction error by 0.18°C compared to the prior model. Results strongly suggest model developers should instantiate and train multiple networks with different initial weights to establish appropriate model parameters.
Abstract: Wavelet transform has been extensively used in
machine fault diagnosis and prognosis owing to its strength in dealing with non-stationary signals. Existing wavelet-transform-based schemes for fault diagnosis employ wavelet decomposition of the entire vibration frequency range, which not only involves huge computational overhead in extracting the features but also increases the dimensionality of the feature vector. This increase in
dimensionality has the tendency to 'over-fit' the training data and
could mislead the fault diagnostic model. In this paper a novel
technique, envelope wavelet packet transform (EWPT) is proposed in
which features are extracted based on wavelet packet transform of the
filtered envelope signal rather than the overall vibration signal. It not
only reduces the computational overhead in terms of reduced number
of wavelet decomposition levels and features but also improves the
fault detection accuracy. Analytical expressions are provided for the
optimal frequency resolution and decomposition level selection in
EWPT. Experimental results with both actual and simulated machine
fault data demonstrate significant gain in fault detection ability by
EWPT at reduced complexity compared to existing techniques.
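The envelope-extraction step that EWPT builds on can be illustrated with an analytic-signal (Hilbert transform) sketch. A deliberately naive O(N^2) DFT is used so the example needs no external libraries, and the subsequent wavelet packet stage of EWPT is omitted; for real signals one would use an FFT-based routine instead.

```python
import cmath
import math

def dft(x, inverse=False):
    """Naive O(N^2) discrete Fourier transform (toy; use an FFT in practice)."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(s * 2j * math.pi * i * k / n)
               for k in range(n)) for i in range(n)]
    return [v / n for v in out] if inverse else out

def envelope(x):
    """Envelope |analytic signal|: zero the negative-frequency half of the
    spectrum, double the positive half, and inverse-transform."""
    n = len(x)
    X = dft([complex(v) for v in x])
    H = [0.0] * n
    H[0] = 1.0
    if n % 2 == 0:
        H[n // 2] = 1.0
    for k in range(1, (n + 1) // 2):
        H[k] = 2.0
    analytic = dft([X[k] * H[k] for k in range(n)], inverse=True)
    return [abs(v) for v in analytic]

# amplitude-modulated test signal: carrier at bin 16, envelope 1 + 0.5*cos(t)
n = 64
t = [2 * math.pi * i / n for i in range(n)]
sig = [(1 + 0.5 * math.cos(ti)) * math.cos(16 * ti) for ti in t]
env = envelope(sig)   # recovers 1 + 0.5*cos(t) up to roundoff
```

EWPT, as described in the abstract, would then apply a wavelet packet transform to this envelope rather than to the raw vibration signal, which is what reduces the required decomposition levels.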
Abstract: An artificial neural network (ANN) model is
presented for the prediction of kinematic viscosity of binary mixtures
of poly (ethylene glycol) (PEG) in water as a function of temperature,
number-average molecular weight and mass fraction. Kinematic viscosity data of aqueous PEG solutions (0.55419×10⁻⁶ to 9.875×10⁻⁶ m²/s) were obtained from the literature for a wide range
of temperatures (277.15 - 338.15 K), number-average molecular
weight (200 -10000), and mass fraction (0.0 – 1.0). A three layer
feed-forward artificial neural network was employed. This model
predicts the kinematic viscosity with a mean square error (MSE) of
0.281 and the coefficient of determination (R2) of 0.983. The results
show that the kinematic viscosity of binary mixture of PEG in water
could be successfully predicted using an artificial neural network
model.
Abstract: Automatic extraction of event information from social text streams (emails, social network sites, blogs, etc.) is a vital requirement for many applications such as event planning and management systems and security applications. The key information components needed from event-related text are the event title, location, participants, date and time. Emails are distinct from other social text streams in layout, format and conversation style, and are the most commonly used communication channel for broadcasting and planning events. Therefore we have chosen emails as our dataset. In our work, we have employed two statistical NLP methods, namely Finite State Machines (FSM) and the Hidden Markov Model (HMM), for the extraction of event-related contextual information. An application has been developed that provides a comparison between the two methods on the event extraction task. It comprises two modules, one for each method, and works for both bulk and direct user input. The results are evaluated using precision, recall and F-score. Experiments show that both methods achieve high performance and accuracy; however, HMM performed better for title extraction, while FSM proved better for venue, date and time.
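The HMM tagging view described above can be illustrated with a toy Viterbi decoder that tags tokens as TITLE or OTHER. All probabilities below are invented for illustration, not trained on the email dataset, and a real extractor would use more states (LOCATION, DATE, TIME, etc.).

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence for an observation sequence under an HMM."""
    V = [{s: start_p[s] * emit_p[s].get(obs[0], 1e-6) for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s].get(obs[t], 1e-6), p)
                for p in states)
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

states = ("TITLE", "OTHER")
start_p = {"TITLE": 0.5, "OTHER": 0.5}
trans_p = {"TITLE": {"TITLE": 0.7, "OTHER": 0.3},
           "OTHER": {"TITLE": 0.1, "OTHER": 0.9}}
emit_p = {"TITLE": {"meeting": 0.4, "review": 0.4, "the": 0.01},
          "OTHER": {"meeting": 0.05, "review": 0.05, "the": 0.4}}
tags = viterbi(["the", "review", "meeting"], states, start_p, trans_p, emit_p)
```

The decoder prefers to keep consecutive title words together (high TITLE-to-TITLE transition probability), which is the property that makes HMMs attractive for title extraction.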
Abstract: The mechanism of abiotic stress tolerance is crucial for plants to survive in harsh conditions, and knowledge of this mechanism can be used to address the problem of declining productivity of plants and crops around the world. However, an in-depth description is still lacking, and it is argued in particular that there is a relationship between high-salinity tolerance and the ability to tolerate high-light conditions. In this study, Dunaliella salina, which can withstand high salt, was used as a model. A chlorophyll fluorometer was used for non-photochemical quenching (NPQ) measurement, and high-performance liquid chromatography was used for pigment determination. The results show that the NPQ value and the amount of pigment increased along with the level of salinity. This suggests a relationship between high-salt and high-light tolerance, but further study to confirm and optimize the findings mentioned above is still required.