Abstract: The present models and simulation algorithms of intracellular stochastic kinetics are usually based on the premise that diffusion is so fast that the concentrations of all the involved species are homogeneous in space. However, recent experimental measurements of intracellular diffusion constants indicate that the assumption of a homogeneous well-stirred cytosol is not necessarily valid even for small prokaryotic cells. In this work a mathematical treatment of diffusion that can be incorporated in a stochastic algorithm simulating the dynamics of a reaction-diffusion system is presented. The movement of a molecule A from a region i to a region j of the space is represented as a first-order reaction A_i -> A_j, where the rate constant k depends on the diffusion coefficient. The diffusion coefficients are modeled as functions of the local concentration of the solutes, their intrinsic viscosities, their frictional coefficients and the temperature of the system. The stochastic time evolution of the system is given by the occurrence of diffusion events and chemical reaction events. At each time step an event (reaction or diffusion) is selected from a probability distribution of waiting times determined by the intrinsic reaction kinetics and diffusion dynamics. To demonstrate the method, simulation results for the reaction-diffusion system of chaperone-assisted protein folding in cytoplasm are shown.
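The event-selection scheme described in this abstract, where diffusion A_i -> A_j is a first-order event drawn from the same waiting-time distribution as the chemical reactions, can be sketched with Gillespie's direct method. The following is an illustrative toy with two regions, one species and assumed rate constants (e.g. k = D / h^2 for a region of size h); the paper's concentration-dependent diffusion coefficients are not modeled here.

```python
import math
import random

def gillespie_rd(n0=(100, 0), k_diff=1.0, k_deg=0.1, t_end=5.0, seed=1):
    """Direct-method SSA for one species A in two regions.  Diffusion
    A_1 -> A_2 (and back) is treated as a first-order reaction whose
    rate constant k_diff would come from the diffusion coefficient
    (an assumption for this sketch); a first-order degradation
    A_1 -> 0 stands in for the chemistry."""
    rng = random.Random(seed)
    n = list(n0)
    t = 0.0
    while t < t_end:
        # propensities: A_1 -> A_2, A_2 -> A_1, A_1 -> 0
        a = [k_diff * n[0], k_diff * n[1], k_deg * n[0]]
        a0 = sum(a)
        if a0 == 0.0:
            break                                     # nothing left to happen
        t += -math.log(1.0 - rng.random()) / a0       # exponential waiting time
        r, acc, event = rng.random() * a0, 0.0, 0
        for event, ai in enumerate(a):
            acc += ai
            if r < acc:
                break
        if event == 0:
            n[0] -= 1; n[1] += 1                      # diffusion region 1 -> 2
        elif event == 1:
            n[1] -= 1; n[0] += 1                      # diffusion region 2 -> 1
        else:
            n[0] -= 1                                 # degradation in region 1
    return n
```

With more regions and species, each diffusion edge simply contributes one more first-order propensity to the same list.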
Abstract: The deep and radical social reforms of the 1990s in many Eastern European countries caused changes in the Information Technology (IT) field. Inefficient information technologies were rapidly replaced with forefront IT solutions; for example, Eastern European countries now have a high penetration of high-quality, high-speed Internet. The authors took part in the introduction of those changes at Latvia's leading IT research institute. Based on this experience, the authors offer in this paper an IT-services-based model for analyzing these change and development processes in the higher education and research fields, i.e., for the development of research e-infrastructure. Compared to international practice, such services were developed in Eastern Europe in an untraditional way, which enabled swift and positive technological changes.
Abstract: Evolutionary Algorithms are population-based,
stochastic search techniques, widely used as efficient global
optimizers. However, many real-life optimization problems require finding optimal solutions to complex, high-dimensional, multimodal problems involving computationally very expensive
fitness function evaluations. Use of evolutionary algorithms in such
problem domains is thus practically prohibitive. An attractive
alternative is to build meta models or use an approximation of the
actual fitness functions to be evaluated. These meta-models are orders of magnitude cheaper to evaluate than the actual function. Many regression and interpolation tools are available to
build such meta models. This paper briefly discusses the
architectures and use of such meta-modeling tools in an evolutionary
optimization context. We further present two evolutionary algorithm
frameworks which involve use of meta models for fitness function
evaluation. The first framework, the Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model [14], reduces computation time through controlled use of meta-models (in this case, approximate models generated by support vector machine regression) that partially replace actual function evaluations with approximate ones. However, the underlying assumption in DAFHEA is that the training samples for the meta-model are generated from a single uniform model, which does not account for uncertain scenarios involving noisy fitness functions.
The second model, DAFHEA-II, an enhanced version of the original
DAFHEA framework, incorporates a multiple-model based learning
approach for the support vector machine approximator to handle
noisy functions [15]. Empirical results obtained by evaluating the frameworks on several benchmark functions demonstrate their efficiency.
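The controlled-evaluation idea behind DAFHEA can be sketched as follows. This is a minimal illustration, not the published implementation: an inverse-distance-weighted interpolator stands in for the SVM regression meta-model, and a toy sphere function stands in for the expensive fitness.

```python
import random

def sphere(x):                                 # stand-in for an expensive fitness
    return sum(xi * xi for xi in x)

def idw_surrogate(archive, x, p=2.0):
    """Inverse-distance-weighted interpolation: a deliberately simple
    stand-in for the SVM-regression meta-model used in DAFHEA."""
    num = den = 0.0
    for xs, ys in archive:
        d2 = sum((a - b) ** 2 for a, b in zip(xs, x))
        if d2 == 0.0:
            return ys
        w = d2 ** (-p / 2.0)
        num += w * ys
        den += w
    return num / den

def surrogate_ea(dim=3, pop=20, gens=30, true_frac=0.3, seed=0):
    """Meta-model-assisted EA sketch: each generation only a controlled
    fraction of offspring get exact (expensive) evaluations, which also
    refresh the meta-model's archive; the rest are scored cheaply."""
    rng = random.Random(seed)
    P = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop)]
    archive = [(x[:], sphere(x)) for x in P]           # seed the meta-model
    for _ in range(gens):
        # two Gaussian mutants per parent
        C = [[xi + rng.gauss(0.0, 0.3) for xi in x] for x in P for _ in range(2)]
        rng.shuffle(C)
        n_true = int(true_frac * len(C))
        scored = []
        for i, x in enumerate(C):
            if i < n_true:                             # exact evaluation
                y = sphere(x)
                archive.append((x[:], y))
            else:                                      # surrogate evaluation
                y = idw_surrogate(archive, x)
            scored.append((y, x))
        scored.sort(key=lambda t: t[0])
        P = [x for _, x in scored[:pop]]               # truncation selection
    return min(sphere(x) for x in P)
```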
Abstract: In this paper, we present an efficient numerical algorithm, namely block homotopy perturbation method, for solving fuzzy linear systems based on homotopy perturbation method. Some numerical examples are given to show the efficiency of the algorithm.
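For a crisp (non-fuzzy) linear system, the homotopy perturbation idea reduces to a convergent series built from the splitting A = I - (I - A). The sketch below omits the fuzzy right-hand side and the block structure of the paper's method; it only illustrates the series construction.

```python
def hpm_solve(A, b, terms=50):
    """Homotopy-perturbation-style series solution of A x = b.
    With the splitting A = I - (I - A), the series
        x = b + (I-A) b + (I-A)^2 b + ...
    converges when the spectral radius of (I - A) is below 1.
    Crisp-system sketch only; the paper's block method for fuzzy
    right-hand sides is not reproduced here."""
    n = len(b)
    # M = I - A
    M = [[(1.0 if i == j else 0.0) - A[i][j] for j in range(n)]
         for i in range(n)]
    x = b[:]            # running partial sum
    term = b[:]         # current term (I-A)^k b
    for _ in range(terms):
        term = [sum(M[i][j] * term[j] for j in range(n)) for i in range(n)]
        x = [xi + ti for xi, ti in zip(x, term)]
    return x
```

For A = [[1.0, 0.2], [0.1, 1.0]] and b = [1.2, 1.1] the exact solution is x = (1, 1), and the series recovers it to machine precision.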
Abstract: This paper presents a novel two-phase hybrid optimization algorithm with hybrid genetic operators to solve the optimal control problem of a single-stage hybrid manufacturing system. The proposed hybrid real-coded genetic algorithm (HRCGA) is developed so that a simple real-coded GA acts as a base-level search, making a quick decision to direct the search towards the optimal region, and a local search method is then employed for fine tuning. The hybrid genetic operators involved in the proposed algorithm improve both the quality of the solution and the convergence speed. Phase 1 uses a conventional real-coded genetic algorithm (RCGA), while phase 2 employs optimization by direct search with systematic reduction of the size of the search region. A typical numerical example of an optimal control problem, with the number of jobs varying from 10 to 50, is included to illustrate the efficacy of the proposed algorithm. Several statistical analyses compare the proposed algorithm with the conventional RCGA and PSO techniques, and a hypothesis t-test and analysis of variance (ANOVA) validate its effectiveness. The results clearly demonstrate that the proposed algorithm not only improves solution quality but also converges to the optimal value faster, outperforming both the conventional real-coded GA (RCGA) and an efficient particle swarm optimization (PSO) algorithm in the quality of the optimal solution and in convergence to the actual optimum.
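The two-phase structure (a real-coded GA for coarse search, then direct search with systematic step-size reduction for fine tuning) can be sketched as follows. All operators and parameters here are simplified stand-ins for the paper's hybrid operators, applied to a toy objective.

```python
import random

def two_phase_search(f, dim=2, seed=0):
    """Phase 1: a tiny real-coded GA (blend crossover + Gaussian
    mutation) moves toward the optimal region.  Phase 2: compass
    direct search with systematic halving of the step size does the
    fine tuning.  Toy sketch of the two-phase idea only."""
    rng = random.Random(seed)
    # --- phase 1: real-coded GA ---
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(20)]
    for _ in range(30):
        pop.sort(key=f)
        parents = pop[:10]
        children = []
        for _ in range(10):
            a, b = rng.sample(parents, 2)
            w = rng.random()
            children.append([w * ai + (1.0 - w) * bi + rng.gauss(0.0, 0.1)
                             for ai, bi in zip(a, b)])
        pop = parents + children                  # elitist replacement
    x = min(pop, key=f)
    # --- phase 2: direct search with shrinking step (search region) ---
    step = 0.5
    while step > 1e-6:
        improved = False
        for i in range(dim):
            for d in (step, -step):
                y = x[:]
                y[i] += d
                if f(y) < f(x):
                    x, improved = y, True
        if not improved:
            step *= 0.5                           # systematic region reduction
    return x
```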
Abstract: Heat powered solid sorption is a feasible alternative to
electrical vapor compression refrigeration systems. In this paper,
activated carbon (powder type Maxsorb and fiber type ACF-A10)-CO2 based adsorption cooling cycles are studied using the pressure-temperature-concentration (P-T-W) diagram. The specific cooling
effect (SCE) and the coefficient of performance (COP) of these two
cooling systems are simulated for the driving heat source
temperatures ranging from 30 ºC to 90 ºC in terms of different
cooling load temperatures with a cooling source temperature of 25
ºC. It is found from the present analysis that the Maxsorb-CO2 couple shows the higher cooling capacity and COP. The maximum COPs of
Maxsorb-CO2 and ACF(A10)-CO2 based cooling systems are found
to be 0.15 and 0.083, respectively. The main innovative feature of
this cooling cycle is the ability to utilize low temperature waste heat
or solar energy using CO2 as the refrigerant, which is one of the best alternatives for applications where flammability and toxicity are not acceptable.
Abstract: This paper evaluates performances of an adaptive noise
cancelling (ANC) based target detection algorithm on a set of real test
data supported by the Defense Evaluation Research Agency (DERA
UK) for multi-target wideband active sonar echolocation system. The
hybrid algorithm proposed is a combination of an adaptive ANC neuro-fuzzy scheme in the first instance, followed by an iterative optimum target motion estimation (TME) scheme. The neuro-fuzzy
scheme is based on the adaptive noise cancelling concept with the
core processor of ANFIS (adaptive neuro-fuzzy inference system) to
provide an effective fine tuned signal. The resultant output is then
sent as an input to the optimum TME scheme, composed of two-gauge trimmed-mean (TM) levelization, discrete wavelet denoising (WDeN), and optimal continuous wavelet transform (CWT) for further denoising and target identification. Its aim is to recover the
contact signals in an effective and efficient manner and then determine
the Doppler motion (radial range, velocity and acceleration) at very
low signal-to-noise ratio (SNR). Quantitative results have shown that the hybrid algorithm has excellent performance in predicting targets' Doppler motion across various target strengths, with a maximum false-detection rate of 1.5%.
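The adaptive noise cancelling concept at the core of the scheme, in which a reference input is adaptively filtered to estimate and subtract the noise in the primary channel, can be sketched with a plain LMS filter. The LMS update is a stand-in for the ANFIS core used in the paper.

```python
def lms_anc(primary, reference, taps=4, mu=0.02):
    """Adaptive noise canceller: the reference (noise-correlated) input
    is filtered to estimate the noise component of the primary input;
    the error output e[n] is the recovered signal.  A plain LMS
    adaptive filter stands in for the paper's ANFIS processor."""
    w = [0.0] * taps
    out = []
    for n in range(len(primary)):
        # current reference tap vector (zero-padded at the start)
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))    # noise estimate
        e = primary[n] - y                          # cleaned output
        w = [wi + 2.0 * mu * e * xi for wi, xi in zip(w, x)]
        out.append(e)
    return out
```

After convergence the residual in the output is far below the original noise power, which is exactly the "fine tuned signal" role the abstract assigns to the ANC stage.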
Abstract: Control chart pattern recognition is one of the most important tools for identifying the process state in statistical process control. An abnormal process state can be identified by recognizing the unnatural patterns that arise from assignable causes. In this study, a wavelet-based neural network approach is proposed for the recognition of control chart patterns with various characteristics. The proposed control chart pattern recognizer comprises three stages. First, multi-resolution wavelet analysis is used to generate time-shape and time-frequency coefficients that carry detailed information about the patterns. Second, distance-based features are extracted by a bi-directional Kohonen network to obtain reduced and robust feature information. Third, a back-propagation network classifier is trained on these features. The accuracy of the proposed method is shown by performance evaluation with numerical results.
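The first stage, multi-resolution wavelet analysis, can be sketched with a Haar DWT; this is an illustrative choice of wavelet, not necessarily the one used in the study.

```python
import math

def haar_dwt(signal, levels=3):
    """Multi-resolution Haar DWT: returns the detail coefficients per
    level plus the final approximation -- the kind of time-shape /
    time-frequency coefficients fed to the feature-extraction stage.
    The signal length should be divisible by 2**levels."""
    a = list(signal)
    details = []
    for _ in range(levels):
        approx = [(a[2*i] + a[2*i+1]) / math.sqrt(2.0)
                  for i in range(len(a) // 2)]
        detail = [(a[2*i] - a[2*i+1]) / math.sqrt(2.0)
                  for i in range(len(a) // 2)]
        details.append(detail)
        a = approx
    return details, a
```

Because the Haar basis is orthonormal, the coefficients preserve the signal energy, so no pattern information is lost before feature extraction.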
Abstract: This paper introduces new algorithms (a fuzzy relative of the CLARANS algorithm, FCLARANS, and fuzzy c-medoids based on randomized search, FCMRANS) for fuzzy clustering of relational data. Unlike the existing fuzzy c-medoids algorithm (FCMdd)
in which the within cluster dissimilarity of each cluster is minimized
in each iteration by recomputing new medoids given current
memberships, FCLARANS minimizes the same objective function
minimized by FCMdd by changing the current medoids in such a way that the sum of the within-cluster dissimilarities is minimized. Computing new medoids may be affected by noise because outliers
may join the computation of medoids while the choice of medoids in
FCLARANS is dictated by the location of a predominant fraction of
points inside a cluster and, therefore, it is less sensitive to the
presence of outliers. In FCMRANS the step of computing new
medoids in FCMdd is modified to be based on randomized search.
Furthermore, a new initialization procedure is developed that adds randomness to the initialization procedure used with FCMdd. Both
FCLARANS and FCMRANS are compared with the robust and
linearized version of fuzzy c-medoids (RFCMdd). Experimental
results with different samples of the Reuters-21578, Newsgroups
(20NG) and generated datasets with noise show that FCLARANS is
more robust than both RFCMdd and FCMRANS. Finally, both
FCMRANS and FCLARANS are more efficient, and their outputs are almost the same as those of RFCMdd in terms of classification rate.
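For reference, the FCMdd baseline that both new algorithms modify alternates fuzzy-membership updates with medoid recomputation. Below is a minimal sketch on a relational distance matrix, with an assumed farthest-first initialization; the randomized-search medoid step of FCLARANS/FCMRANS is not shown.

```python
def fcmdd(dist, c=2, m=2.0, iters=10):
    """Minimal fuzzy c-medoids (FCMdd) on relational data given as a
    distance matrix `dist`.  Each iteration: (1) fuzzy memberships
    from distances to the current medoids, (2) each medoid is
    recomputed to minimize its fuzzy within-cluster dissimilarity.
    FCLARANS/FCMRANS replace step (2) with randomized medoid swaps."""
    n = len(dist)
    medoids = [0]                      # farthest-first initialization (assumed)
    while len(medoids) < c:
        medoids.append(max(range(n),
                           key=lambda j: min(dist[o][j] for o in medoids)))
    u = [[0.0] * n for _ in range(c)]
    for _ in range(iters):
        for j in range(n):             # membership update
            d = [dist[medoids[i]][j] for i in range(c)]
            if 0.0 in d:               # object coincides with a medoid
                for i in range(c):
                    u[i][j] = 1.0 if d[i] == 0.0 else 0.0
            else:
                for i in range(c):
                    u[i][j] = 1.0 / sum((d[i] / dk) ** (1.0 / (m - 1.0))
                                        for dk in d)
        # medoid update: minimize the fuzzy within-cluster cost
        medoids = [min(range(n),
                       key=lambda o: sum(u[i][j] ** m * dist[o][j]
                                         for j in range(n)))
                   for i in range(c)]
    return medoids, u
```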
Abstract: The purpose of this study was to evaluate and
compare new indices based on the discrete wavelet transform
with other spectral parameters proposed in the literature, such as mean average voltage, median frequency and ratios between
spectral moments applied to estimate acute exercise-induced
changes in power output, i.e., to assess peripheral muscle
fatigue during a dynamic fatiguing protocol. Fifteen trained subjects performed 5 sets of 10 leg presses, with 2
minutes rest between sets. Surface electromyography was
recorded from vastus medialis (VM) muscle. Several surface
electromyographic parameters were compared to detect
peripheral muscle fatigue. These were: mean average voltage
(MAV), median spectral frequency (Fmed), Dimitrov spectral
index of muscle fatigue (FInsm5), as well as five other
parameters obtained from the discrete wavelet transform
(DWT) as ratios between different scales. The new wavelet
indices achieved the best results in Pearson correlation
coefficients with power output changes during acute dynamic
contractions. Their regressions were significantly different
from MAV and Fmed. Moreover, they showed the highest robustness in the presence of additive white Gaussian noise at different signal-to-noise ratios (SNRs). Therefore,
peripheral impairments assessed by sEMG wavelet indices
may be a relevant factor involved in the loss of power output
after dynamic high-loading fatiguing tasks.
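Several of the compared parameters can be stated directly from a windowed power spectrum; the Dimitrov index FInsm5 is the ratio of the spectral moments of order -1 and 5. The plain-DFT sketch below is illustrative: window lengths, normalization and the exact wavelet-ratio indices of the study are not reproduced.

```python
import math

def fatigue_indices(x, fs):
    """Mean average voltage (MAV), median spectral frequency (Fmed)
    and the Dimitrov index FInsm5 = M(-1) / M(5), where M(k) is the
    k-th spectral moment of the one-sided power spectrum.  The
    spectrum is computed with a direct DFT (fine for short windows)."""
    n = len(x)
    mav = sum(abs(v) for v in x) / n
    ps, freqs = [], []
    for k in range(1, n // 2):                 # skip DC and Nyquist
        re = sum(x[t] * math.cos(2.0 * math.pi * k * t / n) for t in range(n))
        im = -sum(x[t] * math.sin(2.0 * math.pi * k * t / n) for t in range(n))
        ps.append(re * re + im * im)
        freqs.append(k * fs / n)
    total = sum(ps)
    acc, fmed = 0.0, freqs[-1]
    for f, p in zip(freqs, ps):                # median frequency: half power
        acc += p
        if acc >= total / 2.0:
            fmed = f
            break
    m_neg1 = sum(p / f for f, p in zip(freqs, ps))
    m_5 = sum(p * f ** 5 for f, p in zip(freqs, ps))
    return mav, fmed, m_neg1 / m_5
```

As fatigue shifts spectral power toward low frequencies, Fmed drops and FInsm5 rises, which is why moment-ratio indices track power-output loss.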
Abstract: This study investigated the number of Aedes larvae,
the key breeding sites of Aedes sp., and the relationship between
climatic factors and the incidence of DHF in the Samui Islands. We conducted our questionnaire and larval surveys of 105 randomly selected households in the Samui Islands in July-September 2006.
Pearson's correlation coefficient was used to explore the primary
association between the DHF incidence and all climatic factors.
Multiple stepwise regression technique was then used to fit the
statistical model. The results showed that the positive indoor
containers were small jars, cement tanks, and plastic tanks. The
positive outdoor containers were small jars, cement tanks, plastic
tanks, used cans, tires, plastic bottles, discarded objects, pot saucers,
plant pots, and areca husks. All Ae. albopictus larval indices (i.e., CI,
HI, and BI) were higher than Ae. aegypti larval indices in this area.
These larval indices were higher than the WHO standard, indicating a high risk of DHF transmission in the Samui Islands. The multiple stepwise regression model was y = -288.80 + 11.024 x_mean_temp. The
mean temperature was positively associated with the DHF incidence
in this area.
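The statistical tools used above are simple to state explicitly: Pearson's r for the primary associations, and the fitted stepwise model reported in the abstract.

```python
import math

def pearson_r(x, y):
    """Pearson's correlation coefficient, as used for the primary
    association between DHF incidence and each climatic factor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def dhf_model(mean_temp_c):
    """Fitted stepwise-regression model reported in the study:
    y = -288.80 + 11.024 * x_mean_temp (temperature in deg C)."""
    return -288.80 + 11.024 * mean_temp_c
```

The positive slope (11.024) is what the abstract summarizes as the positive association between mean temperature and DHF incidence.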
Abstract: Energy-efficient protocol design is the aim of current research in the area of sensor networks, where limited power resources impose energy conservation considerations. In this paper we focus on Medium Access Control (MAC) protocols; after an extensive literature review, two adaptive schemes are discussed. Of
them, adaptive-rate MACs which were introduced for throughput
enhancement show the potency to save energy, even more than
adaptive-power schemes. We then propose an allocation algorithm to obtain accurate and reliable results. Through a simulation study
we validated our claim and showed the power saving of adaptive-rate
protocols.
Abstract: This study presents a new approach to automatic
data clustering and classification problems in large and complex
databases and, at the same time, derives specific types of explicit rules
describing each cluster. The method works well in both sparse and
dense multidimensional data spaces. The members of the data space
can be of the same nature or represent different classes. A number
of N-dimensional ellipsoids are used for enclosing the data clouds.
Due to the geometry of an ellipsoid and its free rotation in space
the detection of clusters becomes very efficient. The method is based
on genetic algorithms that are used for the optimization of location,
orientation and geometric characteristics of the hyper-ellipsoids. The
proposed approach can serve as a basis for the development of
general knowledge systems for discovering hidden knowledge and
unexpected patterns and rules in various large databases.
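The core geometric test, whether a point falls inside a freely rotated ellipsoid, is what the GA's fitness evaluation relies on when checking how well an ellipsoid encloses a data cloud. A 2-D sketch follows; the GA itself, which tunes center, axes and orientation, is not shown.

```python
import math

def in_ellipsoid(x, center, axes, angle):
    """2-D instance of the cluster-enclosure test: a point is inside a
    rotated ellipsoid iff (u/a)^2 + (v/b)^2 <= 1, where (u, v) are the
    point's coordinates in the ellipsoid's own frame (semi-axes a, b,
    rotation `angle`).  A GA would optimize center, axes and angle."""
    dx, dy = x[0] - center[0], x[1] - center[1]
    c, s = math.cos(angle), math.sin(angle)
    u = c * dx + s * dy        # rotate into the ellipsoid frame
    v = -s * dx + c * dy
    return (u / axes[0]) ** 2 + (v / axes[1]) ** 2 <= 1.0
```

Free rotation is what makes elongated, obliquely oriented clusters cheap to enclose; an axis-aligned box or sphere would need many more primitives for the same cloud.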
Abstract: Market competition and a desire to gain advantage in a globalized market drive companies towards innovation efforts. Project overload is an unpleasant phenomenon experienced by employees inside organizations trying to make the most efficient use of their resources to be innovative. But what are the impacts of project overload on an organization's innovation capabilities? Advanced engineering (AE) teams inside a major heavy-equipment manufacturer are suffering from project overload in their quest for innovation. In this paper, agent-based modeling (ABM) is used to examine the current reality of the company context and of the AE team, where the opportunities and challenges for reducing the risk of project overload and moving towards innovation were identified. Project overload is likely to stifle innovation and creativity inside teams. On the other hand, motivation through properly challenging goals is more likely to help individuals alleviate the negative aspects of a low level of project overload.
Abstract: Predicting software quality during the development life cycle of a software project helps the development organization make efficient use of available resources to produce a product of the highest quality. A "whether a module is faulty or not" approach can be used to predict the quality of a software module. A number of software quality prediction models described in the literature are based upon genetic algorithms, artificial neural networks and other data mining algorithms. One of the promising approaches to quality prediction is based on clustering techniques. Most quality prediction models based on clustering techniques use the K-means, Mixture-of-Gaussians, Self-Organizing Map, Neural Gas or fuzzy K-means algorithm for prediction. All these techniques require a predefined structure; that is, the number of neurons or clusters must be known before the clustering process starts. In the case of Growing Neural Gas, however, there is no need to predetermine the number of neurons or the topology of the structure: it starts with a minimal neuron structure that grows during training until it reaches a user-defined maximum number of clusters. Hence, in this work we use Growing Neural Gas as the underlying clustering algorithm; it produces an initial set of labeled clusters from the training data set, and this set of clusters is then used to predict the quality of a test data set of software modules. The best testing results show 80% accuracy in evaluating the quality of software modules. Hence, the proposed technique can be used by programmers to evaluate the quality of modules during software development.
Abstract: We report on the development of a model to
understand why the range of experience with respect to HIV
infection is so diverse, especially with respect to the latency period.
To investigate this, an agent-based approach is used to extract high-level behaviour which cannot be described analytically from the set
of interaction rules at the cellular level. A network of independent
matrices mimics the chain of lymph nodes. Dealing with massively
multi-agent systems requires major computational effort. However,
parallelisation methods are a natural consequence and advantage of
the multi-agent approach and, using the MPI library, are here
implemented, tested and optimized. Our current focus is on the
various implementations of the data transfer across the network.
Three communications strategies are proposed and tested, showing
that the most efficient approach is communication based on the
natural lymph-network connectivity.
Abstract: The Improved Generalized Diversity Index (IGDI)
has been proposed as a tool that can be used to identify areas that
have high conservation value and measure the ecological condition of
an area. IGDI is based on the species' relative abundances. This paper compares diversity indices, with particular attention given to the MacArthur model of species abundances. The properties and performance of various species indices were assessed.
Both IGDI and species richness increased with sampling area
according to a power function. IGDI was also found to be an acceptable ecological indicator of condition and consistently outperformed coefficient-of-conservatism indices.
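IGDI's exact formula is not reproduced here, but a generalized diversity index built on species relative abundances can be illustrated with Hill numbers, which share the ingredients the abstract describes (relative abundances, a family of indices spanning richness-like and dominance-like behaviour).

```python
import math

def hill_diversity(abundances, q=2.0):
    """Hill-number diversity D_q = (sum p_i^q)^(1/(1-q)) computed from
    relative abundances p_i.  Illustrative generalized diversity
    index only; it is not the IGDI formula from the paper.  q -> 1
    gives the Shannon limit exp(H)."""
    total = sum(abundances)
    p = [a / total for a in abundances if a > 0]
    if q == 1.0:
        return math.exp(-sum(pi * math.log(pi) for pi in p))
    return sum(pi ** q for pi in p) ** (1.0 / (1.0 - q))
```

For a perfectly even community of S species, D_q = S for every q; uneven abundances pull the index below S, which is the "effective number of species" reading of such indices.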
Abstract: Contour filter strips planted with perennial vegetation
can be used to improve surface and ground water quality by reducing
pollutants, such as NO3-N, and sediment outflow from cropland to a river or lake. Meanwhile, filter strips of perennial grass with biofuel potential also offer the economic benefit of producing ethanol. In this study, the Soil and Water Assessment Tool (SWAT) model was
applied to the Walnut Creek Watershed to examine the effectiveness
of contour strips in reducing NO3-N outflows from crop fields to the
river or lake. Required input data include watershed topography,
slope, soil type, land-use, management practices in the watershed and
climate parameters (precipitation, maximum/minimum air
temperature, solar radiation, wind speed and relative humidity).
Numerical experiments were conducted to identify potential
subbasins in the watershed that have high water quality impact, and
to examine the effects of strip size and location on NO3-N reduction
in the subbasins under various meteorological conditions (dry,
average and wet). Variable sizes of contour strips (10%, 20%, 30%
and 50%, respectively, of a subbasin area) planted with perennial
switchgrass were selected for simulating the effects of strip size and
location on stream water quality. Simulation results showed that a
filter strip having 10%-50% of the subbasin area could lead to 55%-
90% NO3-N reduction in the subbasin during an average rainfall
year. Strips occupying 10-20% of the subbasin area were found to be more efficient in reducing NO3-N when placed along the contour than when placed along the river. The results of this study can
assist in cost-benefit analysis and decision-making in best water
resources management practices for environmental protection.
Abstract: The energy consumption of home femto base stations
(BSs) can be reduced by turning off the Wi-Fi radio interface when
there is no mobile station (MS) under the coverage of the BSs or
MSs do not transmit or receive data packets for a long time, especially late at night. In the energy-efficient home femto BSs, if an MS has a data packet to transmit while the Wi-Fi radio interface is in the off state, the MS wakes up the Wi-Fi radio interface of the home femto BS
by using an additional low-power radio interface. In this paper, we analyze the performance of the energy-efficient home femto BSs in terms of energy consumption and cumulative average delay, and show the effect of various parameters on both. From the results, the tradeoff relationship between
energy consumption and cumulative average delay is shown; thus, an appropriate operating policy is needed to balance the tradeoff.
Abstract: In this paper, an efficient structural approach for
recognizing on-line handwritten digits is proposed. After reading
the digit from the user, the slope is estimated and normalized for
adjacent nodes. Based on changes in the sign of the slope values,
the primitives are identified and extracted. The names of these
primitives are represented by strings, and then a finite state
machine, which contains the grammars of the digits, is traced to
identify the digit. Finally, if there is any ambiguity, it will be
resolved. Experiments showed that this technique is flexible and
can achieve high recognition accuracy for the shapes of the digits
represented in this work.
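The structural pipeline (slope signs, primitive string, finite state machine trace) can be sketched as follows. The primitive alphabet and the grammar below are hypothetical stand-ins for illustration, not the primitives and digit grammars defined in the paper.

```python
def slope_primitives(points):
    """Turn an on-line stroke (list of (x, y) nodes) into a string of
    slope-sign primitives: 'U' (up), 'D' (down), 'F' (flat), with
    consecutive repeats compressed.  Hypothetical primitive set; the
    paper's primitives are richer."""
    prims = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dy = y1 - y0
        p = 'F' if abs(dy) < 1e-9 else ('U' if dy > 0 else 'D')
        if not prims or prims[-1] != p:        # compress repeats
            prims.append(p)
    return ''.join(prims)

def fsm_match(s, transitions, start, accept):
    """Trace a finite state machine (the digit grammar) over the
    primitive string; any missing transition rejects the input."""
    state = start
    for ch in s:
        state = transitions.get((state, ch))
        if state is None:
            return False
    return state in accept

# hypothetical grammar for a V-shaped sub-stroke: down then up
V_TRANS = {('q0', 'D'): 'q1', ('q1', 'U'): 'q2'}
```

In the full recognizer, one such FSM per digit grammar would be traced over the primitive string, with the ambiguity-resolution step choosing among multiple accepting machines.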