Abstract: In this paper numerical studies have been carried out
to examine the pre-ignition flow features of high-performance solid
propellant rocket motors with two different port geometries but with
the same propellant loading density. Numerical computations have been
carried out using a validated 3D, unsteady, 2nd-order implicit, SST k-
ω turbulence model. In the numerical study, a fully implicit finite
volume scheme of the compressible, Reynolds-Averaged, Navier-
Stokes equations is employed. We have observed from the numerical
results that, in solid rocket motors with highly loaded propellants
having a divergent port geometry, the hot igniter gases can create
pre-ignition pressure oscillations leading to thrust oscillations due to
flow unsteadiness and recirculation. We have also observed that the
igniter temperature fluctuations diminish rapidly, thereby reaching
the steady-state value faster, in solid propellant rocket motors with a
convergent port than with a divergent port, irrespective of the igniter
total pressure. We have concluded that the
prudent selection of the port geometry, without altering the propellant
loading density, for damping the total temperature fluctuations within
the motor is a meaningful objective for the suppression and control of
instability and/or thrust oscillations often observed in solid propellant
rocket motors with non-uniform port geometry.
Abstract: The aim of the current work was to employ the finite
element method to model a slab, with a small hole across its width,
undergoing plastic plane strain deformation. The computational
model had, however, to be validated by comparing its results with
those obtained experimentally. Since the two were in good agreement,
the finite element method can be considered a reliable tool
that can help gain better understanding of the mechanism of ductile
failure in structural members having stress raisers. The finite element
software used was ANSYS, and the PLANE183 element was utilized.
It is a higher order 2-D, 8-node or 6-node element with quadratic
displacement behavior. A bilinear stress-strain relationship was used
to define the material properties, with constants similar to those of the
material used in the experimental study. The model was run for
several tensile loads in order to observe the progression of the plastic
deformation region, and the stress concentration factor was
determined in each case. The experimental study involved employing the visioplasticity
technique, where a circular mesh (each circle was 0.5 mm in
diameter, with 0.05 mm line thickness) was initially printed on the
side of an aluminum slab having a small hole across its width.
Tensile loading was then applied to produce a small increment of
plastic deformation. Circles in the plastic region became ellipses,
where the directions of the principal strains and stresses coincided
with the major and minor axes of the ellipses. Next, we were able to
determine the directions of the maximum and minimum shear
stresses at the center of each ellipse, and the slip-line field was then
constructed. We were then able to determine the stress at any point in
the plastic deformation zone, and hence the stress concentration
factor. The experimental results were found to be in good agreement
with the analytical ones.
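The elastic limit of such a configuration is captured by the classical Kirsch solution for an infinite plate with a circular hole, which gives a stress concentration factor of 3 at the hole edge. As a hedged illustration (the load value below is invented, and this elastic baseline is not the abstract's plastic analysis), a minimal sketch:

```python
import math

def kirsch_hoop_stress(sigma, a, r, theta):
    """Hoop stress around a circular hole of radius a in an infinite
    plate under remote uniaxial tension sigma (Kirsch solution).
    theta is measured from the loading direction."""
    k = (a / r) ** 2
    return (sigma / 2.0) * (1 + k) - (sigma / 2.0) * (1 + 3 * k * k) * math.cos(2 * theta)

# Stress concentration factor at the hole edge (r = a, theta = 90 deg):
sigma_remote = 100.0  # MPa, illustrative value only
kt = kirsch_hoop_stress(sigma_remote, 1.0, 1.0, math.pi / 2) / sigma_remote
```

At the hole edge on the loading axis (theta = 0) the same formula gives a compressive stress of -sigma, the other classical feature of this solution.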
Abstract: Facility location is a complex real-world problem
which requires strategic management decisions. This paper provides a
general review of studies, efforts and developments in Facility
Location Problems (FLPs), which are classical optimization problems
with widespread applications in various areas such as transportation,
distribution, production, supply chain decisions and
telecommunications. Our goal is not to review all variants of different
studies in FLPs or to describe very detailed computational techniques
and solution approaches, but rather to provide a broad overview of
major location problems that have been studied, indicating how they
are formulated and what approaches researchers have proposed to
tackle them. A brief, elucidative table based on a grouping according to
“General Problem Type” and “Methods Proposed” used in the studies
is also presented at the end of the work.
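As an illustration of how such problems are typically formulated, the sketch below solves the uncapacitated facility location problem by brute force; the instance data are invented, and real studies use MIP solvers or heuristics rather than enumeration:

```python
from itertools import combinations

def solve_uflp(open_cost, assign_cost):
    """Brute-force uncapacitated facility location: pick a subset of
    facilities to open minimizing opening costs plus each customer's
    cheapest assignment cost. assign_cost[i][j] is the cost of serving
    customer j from facility i. Exponential -- toy sizes only."""
    n = len(open_cost)             # candidate facility sites
    m = len(assign_cost[0])        # customers
    best = None
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            cost = sum(open_cost[i] for i in subset)
            cost += sum(min(assign_cost[i][j] for i in subset) for j in range(m))
            if best is None or cost < best[0]:
                best = (cost, subset)
    return best

# Tiny invented instance: facility 0 is dearer to open but cheaper to serve from.
cost, opened = solve_uflp([4, 3], [[1, 1], [2, 2]])
```

Opening only facility 0 costs 4 + 1 + 1 = 6, which beats opening facility 1 (3 + 2 + 2 = 7) or both (9).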
Abstract: Batch production plants provide a wide range of
scheduling problems. In pharmaceutical industries a batch process
is usually described by a recipe, consisting of an ordering of tasks
to produce the desired product. In this research work we focused
on pharmaceutical production processes requiring the culture of
a microorganism population (i.e. bacteria, yeasts or antibiotics).
Several sources of uncertainty may influence the yield of the culture
processes, including (i) low performance and quality of the cultured
microorganism population or (ii) microbial contamination. For
these reasons, robustness is a valuable property for the considered
application context. In particular, a robust schedule will not collapse
immediately when a cell of microorganisms has to be thrown away
due to a microbial contamination. Indeed, a robust schedule should
change locally and in small proportions, and the overall performance
measure (i.e. makespan, lateness) should change little, if at all.
In this research work we formulated a constraint optimization
problem (COP) model for the robust planning of antibiotics
production. We developed a discrete-time model with a multi-criteria
objective, ordering the different criteria and performing a
lexicographic optimization. A feasible solution of the proposed
COP model is a schedule of a given set of tasks onto available
resources. The schedule has to satisfy tasks precedence constraints,
resource capacity constraints and time constraints. In particular, the
time constraints model task due dates and resource availability
time windows. To improve the schedule robustness, we
modeled the concept of (a, b) super-solutions, where (a, b) are input
parameters of the COP model. An (a, b) super-solution is one in
which, if a variables (i.e. the completion times of a culture tasks)
lose their values (i.e. cultures are contaminated), the solution can be
repaired by assigning new values to these variables (i.e. the
completion times of backup culture tasks) and to at most b other
variables (i.e. delaying the completion of at most b other tasks).
The efficiency and applicability of the proposed model is
demonstrated by solving instances taken from a real-life
pharmaceutical company. Computational results showed that
the determined super-solutions are near-optimal.
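The (a, b) super-solution idea can be illustrated on a toy single-machine schedule; this sketch is not the paper's COP model — the repair rule (re-run a lost task as a backup at the end) and the data are assumptions for illustration:

```python
def completion_times(durations):
    """Completion times of tasks run back-to-back on one machine."""
    out, t = [], 0
    for d in durations:
        t += d
        out.append(t)
    return out

def is_1b_super(durations, b):
    """Toy (1, b) super-solution check: losing any single task (a = 1)
    and re-running it as a backup at the end of the schedule must change
    at most b other completion times."""
    base = completion_times(durations)
    for i in range(len(durations)):
        # repair: drop task i, append a backup of the same duration
        repaired = durations[:i] + durations[i + 1:] + [durations[i]]
        new = completion_times(repaired)
        survivors = new[:len(durations) - 1]          # surviving tasks, in order
        old_survivors = base[:i] + base[i + 1:]
        changed = sum(1 for x, y in zip(old_survivors, survivors) if x != y)
        if changed > b:
            return False
    return True
```

For durations [2, 3, 4], losing the first task shifts the two later completion times, so the schedule is a (1, 2) but not a (1, 1) super-solution under this repair rule.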
Abstract: In this article, we deal with a variant of the classical
course timetabling problem that has a practical application in many
areas of education. In particular, in this paper we are interested in
high schools remedial courses. The purpose of such courses is to
provide under-prepared students with the skills necessary to succeed
in their studies. In particular, a student might be under prepared in
an entire course, or only in a part of it. The limited availability
of funds, as well as the limited amount of time and teachers at
disposal, often requires schools to choose which courses and/or which
teaching units to activate. Thus, schools need to model the training
offer and the related timetabling, with the goal of ensuring the
highest possible teaching quality, by meeting the above-mentioned
financial, time and resources constraints. Moreover, there are some
prerequisites between the teaching units that must be satisfied. We
first present a Mixed-Integer Programming (MIP) model to solve
this problem to optimality. However, the presence of many peculiar
constraints inevitably increases the complexity of
the mathematical model. Thus, solving it through a general-purpose
solver is feasible for small instances only, while solving
real-life-sized instances of such a model requires specific techniques
or heuristic approaches. For this purpose, we also propose a heuristic
approach, in which we make use of a fast constructive procedure
to obtain a feasible solution. To assess our exact and heuristic
approaches, we perform extensive computational experiments on both
real-life instances (obtained from a high school in Lecce, Italy) and
randomly generated instances. Our tests show that the MIP model is
never solved to optimality, with an average optimality gap of 57%.
On the other hand, the heuristic algorithm is much faster (in about
50% of the considered instances it converges within approximately half
of the time limit) and in many cases allows achieving an improvement
on the objective function value obtained by the MIP model. Such an
improvement ranges between 18% and 66%.
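The abstract does not detail the constructive procedure; the sketch below shows one generic possibility — a prerequisite-respecting topological order with first-fit slot/teacher assignment — with invented unit, teacher and slot names:

```python
from collections import deque

def greedy_timetable(units, prereqs, teachers, slots):
    """Hypothetical constructive heuristic: place teaching units in a
    topological order of their prerequisites, first-fit into (slot,
    teacher) pairs so a prerequisite always precedes its dependents."""
    # Kahn's algorithm for a prerequisite-respecting order
    indeg = {u: 0 for u in units}
    succ = {u: [] for u in units}
    for before, after in prereqs:
        indeg[after] += 1
        succ[before].append(after)
    queue = deque(u for u in units if indeg[u] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # first-fit: earliest slot strictly after all prerequisites
    slot_of = {}
    busy = {(s, t): False for s in range(slots) for t in teachers}
    for u in order:
        earliest = 1 + max([slot_of[p] for p, q in prereqs if q == u] or [-1])
        for s in range(earliest, slots):
            free = next((t for t in teachers if not busy[(s, t)]), None)
            if free is not None:
                slot_of[u] = s
                busy[(s, free)] = True
                break
    return slot_of

# With one teacher and a prerequisite A -> B, all units land in distinct slots.
tt = greedy_timetable(["A", "B", "C"], [("A", "B")], ["t1"], 3)
```

This ignores the funding and quality objectives from the abstract; it only produces a feasible starting point of the kind a constructive phase typically supplies.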
Abstract: This paper presents a methodology for probabilistic
assessment of bearing capacity and prediction of failure mechanism
of masonry vaults at the ultimate state with consideration of the
natural variability of Young’s modulus of stones. First, the
computation model is explained. The failure mode corresponds to the
four-hinge mechanism. Based on this consideration, the study of a
vault composed of 16 segments is presented. The Young’s modulus of
the segments is considered as a random variable defined by a mean
value and a coefficient of variation. A relationship linking the vault
bearing capacity to the voussoirs modulus variation is proposed. The
most probable failure mechanisms, in addition to that observed in the
deterministic case, are identified for each variability level as well as
their probability of occurrence. The results show that the mechanism
observed in the deterministic case has a decreasing probability of
occurrence as the variability increases, while the number of other
mechanisms and their probabilities of occurrence increase with the
coefficient of variation of Young’s modulus. This means that if a
significant change in the Young’s modulus of the segments is proven,
taking it into account in computations becomes mandatory, both for
determining the vault bearing capacity and for predicting its failure
mechanism.
Abstract: This paper presents the results obtained by numerical
simulation, using the ANSYS CFX-CFD software, of the atmospheric
dispersion of air pollutants coming from the evacuation of
combustion gases produced by fuel combustion in an electric
thermal power plant. The model uses the Navier-Stokes equations to
simulate the dispersion of pollutants in the atmosphere. The
simulation considers as important factors the atmospheric conditions
(pressure, temperature, wind speed, wind direction), the exhaust
velocity of the combustion gases, the chimney height and the
obstacles (buildings). The concentrations of the main pollutants (SO2,
NOx and PM) were measured using air quality monitoring stations.
The pollutants were monitored over a period of 3 months, after which
the average concentrations used by the software were calculated. The
concentrations are: 8.915 μg/m3 (NOx), 9.587 μg/m3 (SO2) and
42 μg/m3 (PM). A comparison of test data with simulation results
demonstrated that CFX was able to describe the dispersion of the
pollutants as well as their concentrations in the atmosphere.
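A common analytic counterpart to such CFD dispersion studies is the Gaussian plume model; the sketch below is not the paper's method, and the power-law dispersion coefficients are illustrative defaults, not site-specific values:

```python
import math

def gaussian_plume(Q, u, H, x, y, z, a=0.08, b=0.894, c=0.06, d=0.894):
    """Ground-reflected Gaussian plume concentration (g/m^3) for a point
    source of strength Q (g/s), wind speed u (m/s) and effective stack
    height H (m), at downwind distance x, crosswind offset y and height
    z (all in m). sigma_y = a*x**b and sigma_z = c*x**d are illustrative
    power-law dispersion coefficients."""
    sy = a * x ** b
    sz = c * x ** d
    lateral = math.exp(-y ** 2 / (2 * sy ** 2))
    vertical = (math.exp(-(z - H) ** 2 / (2 * sz ** 2))
                + math.exp(-(z + H) ** 2 / (2 * sz ** 2)))  # ground reflection
    return Q / (2 * math.pi * u * sy * sz) * lateral * vertical
```

The ground-level concentration is largest on the plume centreline and symmetric in the crosswind direction, which is a quick sanity check for any dispersion computation.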
Abstract: In this paper comprehensive studies have been carried
out for the design optimization of a waste heat recovery system for
effectively utilizing the domestic air conditioner heat energy for
producing hot water. Numerical studies have been carried out for the
geometry optimization of a waste heat recovery system for domestic
air conditioners. Numerical computations have been carried out using
a validated 2D pressure-based, unsteady, 2nd-order implicit, SST k-ω
turbulence model. In the numerical study, a fully implicit finite
volume scheme of the compressible, Reynolds-Averaged, Navier-
Stokes equations is employed. At identical inflow and boundary
conditions, various geometries were tried and efforts were made to
propose the best design criteria. Several combinations of pipeline
shapes, viz. straight and spiral with different numbers of coils for the
radiator, have been attempted, and accordingly the design criteria
have been proposed for the waste heat recovery system design. We have
concluded that, within the given envelope, the geometry optimization
is a meaningful objective for obtaining better performance of waste
heat recovery systems for air conditioners.
Abstract: Singular value decomposition based optimisation of
geometric design parameters of a 5-speed gearbox is studied. During
the optimisation, a four-degree-of-freedom torsional vibration model
of the pinion gear-wheel gear system is obtained and the minimum
singular value of the transfer matrix is considered as the objective
function. The computational cost of the associated singular value
problems is quite low for the objective function, because it is only
necessary to compute the largest and smallest singular values (μmax
and μmin) that can be achieved by using selective eigenvalue solvers;
the other singular values are not needed. The design parameters are
optimised under several constraints that include bending stress,
contact stress and constant distance between gear centres. Thus, by
optimising the geometric parameters of the gearbox, such as the
module, number of teeth and face width, it is possible to obtain a
lightweight gearbox structure. It is concluded that all the optimised
geometric design parameters also satisfy all constraints.
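The minimum-singular-value objective can be sketched as follows; the 4-DOF inertias, stiffnesses and damping below are invented values, not the paper's gearbox data:

```python
import numpy as np

def min_singular_value(M, C, K, omega):
    """Smallest singular value of the receptance (transfer) matrix
    H(omega) = (K - omega^2 M + i*omega*C)^-1 of a torsional model."""
    D = K - omega ** 2 * M + 1j * omega * C        # dynamic stiffness matrix
    H = np.linalg.inv(D)                           # transfer matrix
    return np.linalg.svd(H, compute_uv=False)[-1]  # values sorted descending

# Illustrative 4-DOF torsional chain; inertias, stiffnesses and damping
# are made-up values, not the paper's gearbox data.
M = np.diag([2.0, 1.5, 1.5, 1.0])
K = np.zeros((4, 4))
for i, ki in enumerate([5e4, 8e4, 6e4]):
    K[i, i] += ki
    K[i + 1, i + 1] += ki
    K[i, i + 1] -= ki
    K[i + 1, i] -= ki
C = 1e-4 * K                                       # proportional damping
smin = min_singular_value(M, C, K, omega=100.0)
```

As the abstract notes, only the extreme singular values are needed for the objective, so a full decomposition (done here for clarity) can be replaced by selective solvers in an optimisation loop.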
Abstract: Particle size distribution, the most important
characteristic of aerosols, is obtained through electrical
characterization techniques. The dynamics of charged nanoparticles
under the influence of an electric field in an Electrical Mobility
Spectrometer (EMS) reveals the size distribution of these particles.
The accuracy of this measurement is influenced by flow conditions,
geometry, electric field and particle charging process, therefore by
the transfer function (transfer matrix) of the instrument. In this work,
a wire-cylinder corona charger was designed and the combined
field-diffusion charging process of injected poly-disperse aerosol
particles was numerically simulated as a prerequisite for the study of a
multichannel EMS. The result, a cloud of particles with a non-uniform
charge distribution, was introduced to the EMS. The flow pattern and
electric field in the EMS were simulated using Computational Fluid
Dynamics (CFD) to obtain particle trajectories in the device and
therefore to calculate the reported signal by each electrometer.
According to the output signals (resulting from the bombardment of
particles and the transfer of their charges as currents), we proposed a
modification to the size of detecting rings (which are connected to
electrometers) in order to evaluate particle size distributions more
accurately. Based on the capability of the system to transfer
information content about the size distribution of the injected particles,
we proposed a benchmark for the assessment of the optimality of the
design. This method applies the concept of Von Neumann entropy
and borrows the definition of entropy from information theory
(Shannon entropy) to measure optimality. Entropy, in the Shannon
sense, is the "average amount of information contained in an event,
sample or character extracted from a data stream".
Evaluating the responses (signals) which were obtained via various
configurations of detecting rings, the best configuration which gave
the best predictions about the size distributions of injected particles,
was the modified configuration. It was also the one that had the
maximum amount of entropy. A reasonable consistency was also
observed between the accuracy of the predictions and the entropy
content of each configuration. In this method, entropy is extracted
from the transfer matrix of the instrument for each configuration.
Ultimately, various clouds of particles were introduced to the
simulations and predicted size distributions were compared to the
exact size distributions.
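The abstract does not give the exact entropy construction; one plausible reading — normalize each column of the transfer matrix (the spread of one particle size over the detecting rings) into a distribution and average the Shannon entropies — can be sketched as:

```python
import math

def shannon_entropy(p):
    """Shannon entropy (bits) of a discrete probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def transfer_matrix_entropy(T):
    """Average column entropy of a non-negative transfer matrix T
    (rows = detecting rings, columns = particle sizes). Each column is
    normalized into a distribution first. A hypothetical reading of the
    paper's benchmark, not its exact definition."""
    cols = list(zip(*T))
    ent = []
    for col in cols:
        s = sum(col)
        ent.append(shannon_entropy([x / s for x in col]) if s > 0 else 0.0)
    return sum(ent) / len(ent)
```

Under this reading, an instrument that maps each size to a single ring scores zero, while one that spreads each size uniformly over n rings scores log2(n) bits; the abstract's observation is that higher entropy accompanied the configuration with the best size-distribution predictions.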
Abstract: In this paper, the secure BioSemantic Scheme is
presented to bridge biological/biomedical research problems and
computational solutions via semantic computing. Due to the diversity
of problems in various research fields, the semantic capability
description language (SCDL) plays an important role as a common
language and generic form for problem formalization. SCDL is
expected to be essential for future semantic and logical computing in
the biosemantic field. We show several examples of biomedical
problems in this paper. Moreover, in the coming age of cloud
computing, the security problem is considered a crucial issue, and we
present a practical scheme to cope with it.
Abstract: In this paper, we provide a literature survey on the
artificial stock market (ASM). The paper begins by exploring the
complexity of the stock market and the need for ASMs. ASMs
aim to investigate the link between individual behaviors (micro
level) and financial market dynamics (macro level). The variety of
patterns at the macro level is a function of the ASM's complexity. The
financial market system is a complex system in which the relationship
between the micro and macro levels cannot be captured analytically.
Computational approaches, such as simulation, are expected to
capture this connection. Agent-based simulation is a simulation
technique commonly used to build ASMs. The paper proceeds by
discussing the components of the ASM. We consider the roles
of behavioral finance (BF) alongside the traditional risk-aversion
assumption in the construction of agents' attributes. Also, the
influence of social networks on the development of agent interactions
is addressed. Network topologies such as small-world, distance-based,
and scale-free networks may be utilized to outline economic
collaborations. In addition, the primary methods for developing
agents' learning and adaptive abilities have been summarized.
These include approaches such as Genetic Algorithms, Genetic
Programming, Artificial Neural Networks and Reinforcement Learning.
In addition, the most common statistical properties (the stylized facts)
of stocks that are used for the calibration and validation of ASMs are
discussed. Besides, we have reviewed the major related previous
studies and categorized the approaches utilized in them. Finally,
research directions and potential research questions are discussed.
The research directions of ASMs may focus on the macro
level by analyzing the market dynamic or on the micro level by
investigating the wealth distributions of the agents.
Abstract: The quantitative study of cell mechanics is of
paramount interest, since it regulates the behaviour of the living cells
in response to the myriad of extracellular and intracellular
mechanical stimuli. The novel experimental techniques together with
robust computational approaches have given rise to new theories and
models, which describe cell mechanics as a combination of
biomechanical and biochemical processes. This review paper
encapsulates the existing continuum-based computational approaches
that have been developed for interpreting the mechanical responses of
living cells under different loading and boundary conditions. The
salient features and drawbacks of each model are discussed from both
structural and biological points of view. This discussion can
contribute to the development of even more precise and realistic
computational models of cell mechanics based on continuum
approaches or on their combination with microstructural approaches,
which in turn may provide a better understanding of
mechanotransduction in living cells.
Abstract: DNA barcodes provide good sources of the information
needed to classify living species. The classification problem has
to be supported with reliable methods and algorithms. To analyze
species regions or entire genomes, it becomes necessary to use
sequence similarity methods. A large set of sequences can be
simultaneously compared using Multiple Sequence Alignment which
is known to be NP-complete. However, all the methods in use are still
computationally very expensive and require significant computational
infrastructure. Our goal is to build predictive models that are highly
accurate and interpretable. In fact, our method avoids the
complex problem of form and structure in different classes of
organisms. The empirical data and their classification performances
are compared with those of other methods. In this study, we present
our system, which consists of three phases. The first phase, called
transformation, is composed of three sub-steps: Electron-Ion
Interaction Pseudopotential (EIIP) for the codification of DNA
barcodes, Fourier Transform and Power Spectrum Signal Processing.
The second phase is an approximation; it is
empowered by the use of Multi Library Wavelet Neural Networks
(MLWNN). Finally, the third phase, the classification of DNA
barcodes, is realized by applying a hierarchical classification
algorithm.
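The transformation phase can be sketched with the standard EIIP values from genomic signal processing (the wavelet-network and classification phases are not reproduced here):

```python
import cmath

# Standard EIIP (electron-ion interaction pseudopotential) values used in
# genomic signal processing to map nucleotides to numbers.
EIIP = {'A': 0.1260, 'C': 0.1340, 'G': 0.0806, 'T': 0.1335}

def power_spectrum(seq):
    """EIIP codification of a DNA barcode followed by a DFT power
    spectrum -- the 'transformation' phase sketched in the abstract."""
    x = [EIIP[b] for b in seq]
    n = len(x)
    spectrum = []
    for k in range(n):
        X = sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
        spectrum.append(abs(X) ** 2)
    return spectrum

ps = power_spectrum("ATGCGATACG")  # toy barcode fragment
```

In practice the DFT would be computed with an FFT; the explicit sum is kept here only to make the transform visible.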
Abstract: In this paper, the specific sound Transmission Loss
(TL) of the Laminated Composite Plate (LCP) with different material
properties in each layer is investigated. The numerical method to
obtain the TL of the LCP is proposed by using elastic plate theory. The
transfer matrix approach is presented as a novel means of achieving
computational efficiency in solving the dynamic stiffness matrix
(D-matrix) of the LCP across its numerous layers. Besides the numerical simulations for
calculating the TL of the LCP, the material properties inverse method
is presented for the design of a laminated composite plate analogous to
a metallic plate with a specified TL. As a result, it is demonstrated that
the proposed computational algorithm exhibits high efficiency with a
small number of iterations for achieving the goal. This method can be
effectively employed to design and develop tailor-made materials for
various applications.
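A simple analytic reference for such TL computations is the normal-incidence mass law for a single panel; this is a baseline check, not the paper's transfer matrix method:

```python
import math

RHO0, C0 = 1.21, 343.0  # air density (kg/m^3) and speed of sound (m/s)

def mass_law_tl(surface_mass, freq):
    """Normal-incidence mass-law transmission loss (dB) of a single
    limp panel of surface mass (kg/m^2) at frequency freq (Hz) -- a
    baseline against which layered-plate TL predictions are often
    checked."""
    z = math.pi * freq * surface_mass / (RHO0 * C0)
    return 10 * math.log10(1 + z ** 2)
```

The characteristic behaviour — roughly 6 dB gained per doubling of either frequency or surface mass — is the benchmark a laminated design must beat or match to be "analogous to a metallic plate with a specified TL".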
Abstract: In this study, nuclear magnetic resonance
spectroscopy and nuclear quadrupole resonance spectroscopy
parameters of 14N (Nitrogen in imidazole ring) in N–H…O hydrogen
bonding for Histidine hydrochloride monohydrate were calculated via
density functional theory. We considered a five-molecule model
system of Histidine hydrochloride monohydrate. We also examined
the trends of environmental effects on hydrogen bonds as well as
cooperativity. The functional used in this research is M06-2X, and
the results obtained with it have shown good agreement with
experimental data. This functional was applied to
calculate the NMR and NQR parameters. Some correlations among
NBO parameters, NMR and NQR parameters have been studied
which have shown the existence of strong correlations among them.
Furthermore, the geometry optimization has been performed using
M06-2X/6-31++G(d,p) method. In addition, in order to study
cooperativity and changes in structural parameters with increasing
cluster size, natural bond orbitals have been employed.
Abstract: Background subtraction and temporal difference are
often used for moving object detection in video. Both approaches are
computationally simple and easy to deploy in real-time image
processing. However, while the background subtraction is highly
sensitive to dynamic background and illumination changes, the
temporal difference approach is poor at extracting relevant pixels of
the moving object and at detecting the stopped or slowly moving
objects in the scene. In this paper, we propose a simple moving object
detection scheme based on adaptive background subtraction and
temporal difference exploiting dynamic background updates. The
proposed technique consists of histogram equalization, a linear
combination of background and temporal difference, followed by the
novel frame-based and pixel-based background updating techniques.
Finally, morphological operations are applied to the output images.
Experimental results show that the proposed algorithm overcomes the
drawbacks of both the background subtraction and temporal difference
methods and provides better performance than either method alone.
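The core of the proposed combination can be sketched on small grayscale arrays; histogram equalization and the morphological post-processing are omitted, and the weights and threshold are illustrative, not the paper's tuned values:

```python
def detect_moving(frame, prev_frame, background, alpha=0.5, beta=0.05, thresh=30):
    """Sketch of the combined scheme on grayscale frames (lists of
    lists): a linear combination of background subtraction and temporal
    difference, followed by a pixel-based background update applied only
    on non-foreground pixels. alpha/beta/thresh are illustrative."""
    h, w = len(frame), len(frame[0])
    mask = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            bg_diff = abs(frame[i][j] - background[i][j])   # background subtraction
            td_diff = abs(frame[i][j] - prev_frame[i][j])   # temporal difference
            score = alpha * bg_diff + (1 - alpha) * td_diff
            if score > thresh:
                mask[i][j] = 1                              # foreground pixel
            else:
                # adaptive background update on background pixels only
                background[i][j] = (1 - beta) * background[i][j] + beta * frame[i][j]
    return mask

# A single bright pixel appearing against a flat background is detected.
bg = [[10.0] * 3 for _ in range(3)]
frame = [[10] * 3 for _ in range(3)]
frame[1][1] = 200
mask = detect_moving(frame, [[10] * 3 for _ in range(3)], bg)
```

Updating the background only where no foreground was detected is what lets the scheme absorb gradual illumination changes without "eating" stopped or slowly moving objects.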
Abstract: In this paper, the energy saving and human thermal
comfort in a typical office room are investigated. The impact of a
combined system of exhaust inlet air with light slots located at the
ceiling level in a room served by a displacement ventilation system is
numerically modelled. Previous experimental data are used to
validate the Computational Fluid Dynamic (CFD) model. A case
study of a simulated office room includes two seated occupants, two
computers, two data loggers and four lamps. The combined system is
located at the ceiling level above the heat sources. A new method of
calculation for the cooling coil load in Stratified Air Distribution
(STRAD) system is used in this study. The results show that a 47.4%
energy saving in the space cooling load can be achieved by combining
the exhaust inlet air with light slots at the ceiling level above the heat
sources.
Abstract: In this paper, the cable model of dendrites has been
considered. The dendrites are cylindrical cables of various segments
having variable lengths and a radius that decreases from the start
point at the synapse to the end points. For a particular event signal
received by a neuron, only some dendrites are active at a particular
instance. Initial current signals with different current flows in the
dendrites are assumed. Due to the overlapping and coupling of active
dendrites, they induce currents in each other's segments at a
particular instance. However, how these currents are induced in the
various segments of active dendrites due to the coupling between
them has not been presented in the literature. This paper presents a
model for the currents induced in active dendrite segments due to
mutual coupling at the starting instance of an activity in a dendrite.
The model is discussed in detail in the paper.
Abstract: Model predictive control is a kind of optimal feedback
control in which control performance over a finite future is optimized
with a performance index that has a moving initial time and a moving
terminal time. This paper examines the stability of model predictive
control for linear discrete-time systems with additive stochastic
disturbances. A sufficient condition for the stability of the closed-loop
system with model predictive control is derived by means of a linear
matrix inequality. The objective of this paper is to show the results
of computational simulations in order to verify the effectiveness of
the obtained stability condition.
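For a linear MPC law u = Kx, closed-loop stability reduces to Schur stability of A + BK, which a Lyapunov matrix certifies; the sketch below is a plain Lyapunov check with an invented gain, not the paper's LMI condition:

```python
import numpy as np

def lyapunov_certificate(A_cl, Q=None, terms=500):
    """Build P solving A_cl^T P A_cl - P = -Q via the convergent series
    P = sum_k (A_cl^T)^k Q (A_cl)^k. A finite, positive definite P
    certifies Schur stability of the closed loop. This is a plain
    Lyapunov check, not the paper's LMI condition for the stochastic
    disturbance case."""
    n = A_cl.shape[0]
    Q = np.eye(n) if Q is None else Q
    P = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(terms):
        P += Ak.T @ Q @ Ak
        Ak = Ak @ A_cl
    return P

# Illustrative double integrator with a stabilizing feedback u = K x
# (K is a made-up gain, not computed from an MPC optimization here):
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
K = np.array([[-0.4, -1.2]])
A_cl = A + B @ K
P = lyapunov_certificate(A_cl)
```

The series converges because the closed-loop spectral radius is below one; LMI-based conditions like the paper's generalize exactly this certificate to systems with additive stochastic disturbances.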