Abstract: A mammal's body can be seen as a blood-vessel network with
complex tunnels. When the heart pumps blood periodically, blood runs
through the vessels and rebounds from the vessel walls, so blood
pressure signals can be measured with complex but periodic
patterns. When an artery is clamped during a surgical operation, the
spectrum of the blood pressure signal differs from that of the
normal situation. In this investigation, intestinal artery clamping
operations were performed on a pig to simulate intestinal
blockage during a surgical operation. Similarity theory is a
convenient and easy tool to show that the blood pressure patterns
with the intestinal artery blocked and unblocked are indeed
different, and the Hilbert-Huang Transform (HHT) can be
applied to extract characteristic parameters of the blood pressure
pattern. In conclusion, the blood pressure patterns of the two
situations, intestinal artery blocked and unblocked, can be
distinguished by the characteristic parameters defined in this paper.
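The abstract names the Hilbert-Huang Transform but does not define its characteristic parameters. As a minimal, self-contained sketch of the underlying building block (hypothetical code, not the authors'), the following computes the analytic signal of a periodic pressure-like signal via the frequency-domain Hilbert transform, giving the instantaneous amplitude and frequency from which such parameters are typically derived:

```python
import cmath
import math

def dft(x, inverse=False):
    """Naive discrete Fourier transform (fine for short illustrative signals)."""
    n = len(x)
    sign = 1j if inverse else -1j
    out = [sum(x[k] * cmath.exp(sign * 2 * math.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def analytic_signal(x):
    """Analytic signal via the frequency-domain Hilbert transform:
    zero the negative frequencies, double the positive ones (even-length x)."""
    n = len(x)
    X = dft([complex(v) for v in x])
    H = [1.0] + [2.0] * (n // 2 - 1) + [1.0] + [0.0] * (n // 2 - 1)
    return dft([a * b for a, b in zip(X, H)], inverse=True)

# toy "blood pressure" component: 1 Hz (60 beats/min), 2 s at 100 Hz sampling
fs = 100.0
n = 200
x = [math.cos(2 * math.pi * 1.0 * k / fs) for k in range(n)]
z = analytic_signal(x)
envelope = [abs(v) for v in z]                      # instantaneous amplitude
phase = [cmath.phase(v) for v in z]
inst_f = ((phase[101] - phase[100]) % (2 * math.pi)) * fs / (2 * math.pi)
```

A full HHT would first decompose the signal into intrinsic mode functions by empirical mode decomposition and apply this analytic-signal step to each mode.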
Abstract: Numerical design optimization is a powerful tool that
can be used by engineers during any stage of the design process.
There are many different applications for structural optimization. A
specific application, discussed in this paper, is
experimental data matching. Data obtained through tests on a physical
structure will be matched with data from a numerical model of that
same structure. The data of interest will be the dynamic characteristics
of an antenna structure focusing on the mode shapes and modal
frequencies. The structure used was a scaled and simplified model of
the Karoo Array Telescope-7 (KAT-7) antenna structure.
This kind of data matching is a complex and difficult task. This
paper discusses how optimization can assist an engineer during the
process of correlating a finite element model with vibration test data.
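The abstract does not state which correlation metric is used; a standard choice for comparing measured and finite element mode shapes is the Modal Assurance Criterion (MAC). The sketch below, with made-up mode-shape vectors, is purely illustrative:

```python
def mac(phi_a, phi_x):
    """Modal Assurance Criterion between two real mode-shape vectors:
    1.0 means identical shapes (up to scaling), 0.0 means orthogonal."""
    dot_ax = sum(a * b for a, b in zip(phi_a, phi_x))
    dot_aa = sum(a * a for a in phi_a)
    dot_xx = sum(b * b for b in phi_x)
    return dot_ax ** 2 / (dot_aa * dot_xx)

# illustrative vectors: a "measured" first bending mode vs two FE candidates
test_mode = [0.0, 0.31, 0.59, 0.81, 0.95, 1.0]
fe_mode_1 = [0.0, 0.30, 0.60, 0.80, 0.95, 1.0]     # similar shape
fe_mode_2 = [0.0, 0.59, 0.95, 0.95, 0.0, -0.95]    # different shape
```

Pairing each test mode with the FE mode of highest MAC, and then comparing the corresponding modal frequencies, is a common first step in the correlation process the paper describes.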
Abstract: Signalized intersections on high-volume arterials are
often congested during peak hours, causing a decrease in through
movement efficiency on the arterial. Much of the vehicle delay
incurred at conventional intersections is caused by high left-turn
demand. Unconventional intersection designs attempt to reduce
intersection delay and travel time by rerouting left turns away from
the main intersection and replacing them with a right turn followed by a U-turn.
The proposed new type of U-turn intersection is geometrically
designed with a raised island which provides a protected U-turn
movement. In this study, several scenarios based on different
distances between the U-turn and the main intersection, traffic
volumes of the major/minor approaches, and percentages of left-turn
volume were simulated using AIMSUN, a traffic microsimulation
package. Subsequently, models are proposed to compute the travel
time of each movement. Finally, by calibrating these equations
against field data collected at several implemented U-turn facilities,
the reliability of the proposed models is confirmed.
With these models it would be possible to calculate travel time of
each movement under any kind of geometric and traffic condition. By
comparing travel time of a conventional signalized intersection with
U-turn intersection travel time, it would be possible to decide on
converting signalized intersections into this new kind of U-turn
facility or not. However, comparison of travel times is beyond the
scope of this research; in this paper, only the travel time of this
innovative U-turn facility is predicted. Before-and-after studies of
the traffic performance of several executed U-turn facilities show
that this new type of U-turn facility commonly produces lower travel
times. Thus, the use of this type of unconventional intersection
deserves serious evaluation.
Abstract: In this manuscript, the lattice Boltzmann method (LBM) is applied to simulate mixed convection in a lid-driven cavity with an open side. The horizontal walls of the cavity are insulated, while the lid-driven west wall is maintained at a uniform temperature higher than the ambient. The Prandtl number (Pr) is fixed at 0.71 (air), while the Reynolds number (Re), Richardson number (Ri), and aspect ratio (A) of the cavity are varied in the ranges 50-150, 0.1-10, and 1-4, respectively. The numerical code is validated against the standard square cavity, and then results for the open-ended cavity are presented. The results show that increasing the aspect ratio decreases the average Nusselt number (Nu) on the lid-driven wall, and that at the same Reynolds number (Re), as the aspect ratio (A) increases, the Richardson number plays a more important role in the heat transfer rate.
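The abstract gives no implementation details, so as a generic sketch (not the authors' code) here is the core loop of any LBM solver: a minimal D2Q9 BGK collide-and-stream step with periodic boundaries. The thermal coupling and lid-driven boundary conditions of the paper are omitted:

```python
# D2Q9 lattice: velocity set and weights
C = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]
W = [4/9] + [1/9] * 4 + [1/36] * 4

def feq(rho, ux, uy):
    """Discrete equilibrium distribution at one lattice node."""
    usq = ux * ux + uy * uy
    out = []
    for (cx, cy), w in zip(C, W):
        cu = cx * ux + cy * uy
        out.append(w * rho * (1 + 3*cu + 4.5*cu*cu - 1.5*usq))
    return out

def bgk_step(f, nx, ny, tau):
    """One BGK collision + periodic streaming step.
    f[y][x] holds the 9 populations at node (x, y)."""
    post = [[None] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            pops = f[y][x]
            rho = sum(pops)
            ux = sum(p * c[0] for p, c in zip(pops, C)) / rho
            uy = sum(p * c[1] for p, c in zip(pops, C)) / rho
            eq = feq(rho, ux, uy)
            post[y][x] = [p - (p - e) / tau for p, e in zip(pops, eq)]
    new = [[[0.0] * 9 for _ in range(nx)] for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            for i, (cx, cy) in enumerate(C):
                new[(y + cy) % ny][(x + cx) % nx][i] = post[y][x][i]
    return new

# small periodic domain with a density bump in the centre
nx = ny = 8
f = [[feq(1.1 if (x, y) == (4, 4) else 1.0, 0.0, 0.0)
      for x in range(nx)] for y in range(ny)]
for _ in range(10):
    f = bgk_step(f, nx, ny, tau=0.8)
mass = sum(p for row in f for node in row for p in node)
```

BGK collision and periodic streaming both conserve mass exactly, which is a useful first sanity check on any LBM implementation.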
Abstract: Dietary macro- and micronutrients, in their respective proportions and fractions, offer a practical tool for tailoring milk constituents, since the cells of the lactating mammary gland obtain about 80% of the nutrients for milk synthesis from the blood, reflecting an isotonic equilibrium between blood and milk. Steering milk biosynthesis through the manipulation of nutrients, so that milk serves not only as a natural food but also as a food that prevents or dilutes the adverse effects of some diseases (such as the cardiovascular problems associated with saturated milk fat intake), has been an area of interest over the last decade. Nutritional modification/supplementation has been reported to enhance conjugated linoleic acid, fatty acid type and concentration, essential fatty acid concentration, vitamins B12 and C, and Se, Cu, I, and Fe, all of which help counter threats to human health and well-being. Synchronizing dietary nutrients to modify rumen dynamics towards the synthesis of nutrients, or their precursors, and directing these towards formulated milk constituents is a practical option. Formulating dietary constituents to design milk constituents will let farmers, consumers, and investors know the real potential and profit margins associated with this enterprise. This article briefly recapitulates the ways and means of modifying milk constituents with an eye on human health and well-being, allowing milk to serve as more than a food item.
Abstract: Understanding how airborne pathogens are
transported through hospital wards is essential for determining the
infection risk to patients and healthcare workers. This study utilizes
Computational Fluid Dynamics (CFD) simulations to explore
possible pathogen transport within a six-bed, partitioned, Nightingale-style
hospital ward.
Grid independence of a ward model was addressed using the Grid
Convergence Index (GCI) from solutions obtained using three fully-structured
grids. Pathogens were simulated using source terms in
conjunction with a scalar transport equation and a RANS turbulence
model. Errors were found to be less than 4% in the calculation of air
velocities but an average of 13% was seen in the scalar field.
A parametric study of variations in the pathogen release point
illustrated that its distribution is strongly influenced by the local
velocity field and the degree of air mixing present.
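The Grid Convergence Index mentioned above follows a standard recipe: estimate the observed order of accuracy from three systematically refined grids, then report a safety-factored relative error. A small generic sketch (standard GCI formulas, not tied to the paper's data):

```python
import math

def gci(f1, f2, f3, r, Fs=1.25):
    """Observed order p and fine-grid GCI from three grid solutions.

    f1, f2, f3: a scalar result on the fine, medium and coarse grids;
    r: constant grid refinement ratio; Fs: safety factor.
    """
    p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)   # observed order
    eps21 = (f2 - f1) / f1                              # relative error
    gci_fine = Fs * abs(eps21) / (r ** p - 1)
    return p, gci_fine

# manufactured example: f = f_exact + C*h^2 with f_exact = 1.0, C = 0.1
p, g = gci(f1=1.1, f2=1.4, f3=2.6, r=2.0)
```

With the manufactured second-order data above, the observed order recovers p = 2, which is how such a calculation is typically verified before being applied to real CFD results.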
Abstract: Various models have been derived by studying a large number of completed software projects from various organizations and applications, in order to explore how project size maps onto project effort. However, there is still a need to improve the prediction accuracy of these models. A neuro-fuzzy system is able to approximate non-linear functions with greater precision, so it is used here as a soft-computing approach to generate a model by formulating the relationship based on its training. In this paper, the neuro-fuzzy technique is used for software estimation modeling on NASA software project data, and the performance of the developed models is compared with the Halstead, Walston-Felix, Bailey-Basili, and Doty models reported in the literature.
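The comparison models named above have commonly cited closed forms; the coefficients below are those usually reported in the effort-estimation literature (effort E in person-months, size in KLOC) and should be verified against the original sources:

```python
def halstead(kloc):
    return 0.7 * kloc ** 1.50

def walston_felix(kloc):
    return 5.2 * kloc ** 0.91

def bailey_basili(kloc):
    return 5.5 + 0.73 * kloc ** 1.16

def doty(kloc):
    # form commonly quoted for KLOC > 9
    return 5.288 * kloc ** 1.047

for kloc in (10, 20, 50):
    print(kloc, halstead(kloc), walston_felix(kloc),
          bailey_basili(kloc), doty(kloc))
```

A neuro-fuzzy model is evaluated against such baselines by computing an error measure (e.g. MMRE) of each model's predictions on the same project data.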
Abstract: This paper describes a segmentation algorithm based
on the cooperation of an optical flow estimation method with edge
detection and region growing procedures.
The proposed method has been developed as a pre-processing
stage to be used in methodologies and tools for video/image indexing
and retrieval by content. The problem addressed consists of
extracting whole objects from the background in order to produce
images of single, complete objects from videos or photos. The extracted images
are used for calculating the object visual features necessary for both
indexing and retrieval processes.
The first task of the algorithm exploits the cues from motion
analysis for moving area detection. Objects and background are then
refined using edge detection and region growing procedures,
respectively. These tasks are iteratively performed until objects and
background are completely resolved.
The developed method has been applied to a variety of indoor and
outdoor scenes where objects of different types and shapes are
represented on variously textured backgrounds.
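As an illustration of the region-growing step, the sketch below uses a generic intensity-based homogeneity criterion; the paper's actual criterion is not specified in the abstract:

```python
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from seed, adding 4-neighbors whose intensity is
    within tol of the seed intensity (a simplified homogeneity test)."""
    h, w = len(img), len(img[0])
    sy, sx = seed
    ref = img[sy][sx]
    region = {(sy, sx)}
    q = deque([(sy, sx)])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(img[ny][nx] - ref) <= tol):
                region.add((ny, nx))
                q.append((ny, nx))
    return region

# toy image: a bright 2x2 "object" on a dark background
img = [[10, 10, 10, 10],
       [10, 200, 210, 10],
       [10, 205, 215, 10],
       [10, 10, 10, 10]]
obj = region_grow(img, seed=(1, 1), tol=30)
```

In the cooperative scheme described above, the seed would come from the motion-analysis stage, and the grown region would then be reconciled against the detected edges.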
Abstract: Most of the losses in a power system arise in the
distribution sector, which has therefore always received attention.
Among the important factors contributing to increased losses
in the distribution system is the presence of reactive power flows.
The most common way to compensate reactive power in the
system is to use shunt (parallel) capacitors. In addition to
reducing the losses, the advantages of capacitor placement include
the release of network capacity at peak load and improvement of
the voltage profile. The key point to consider in capacitor
placement is the optimal location and sizing of the capacitors
in order to maximize these advantages.
In this paper, a new technique is offered for the placement and
sizing of fixed capacitors in a radial distribution network on the
basis of a Genetic Algorithm (GA). Most existing optimization
methods for capacitor placement reduce the losses and improve the
voltage profile simultaneously, but the capacitor cost and load
variations have not been considered as influential terms in the
objective function. In this article, a holistic approach is taken to
find the optimal solution to this problem, one that includes the
relevant parameters of the distribution network: cost, voltage
profile, and load variations. This requires a vast search over all
possible solutions, so we use a Genetic Algorithm (GA) as a
powerful method for such an optimal search.
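A minimal sketch of how a GA can search over discrete capacitor sizes at each bus. The objective function below is a made-up stand-in for the paper's loss-plus-cost function, not a real load-flow calculation, and all numbers are illustrative:

```python
import random

random.seed(42)

# hypothetical reactive demand (kvar) at each bus, and candidate capacitor sizes
q_demand = [120, 40, 260, 90, 180]
sizes = [0, 100, 200, 300]
kvar_cost = 0.05

def objective(x):
    """Illustrative cost: residual reactive mismatch (a stand-in for losses)
    plus capacitor purchase cost."""
    mismatch = sum((q - s) ** 2 for q, s in zip(q_demand, x))
    return mismatch * 1e-3 + kvar_cost * sum(x)

def ga(pop_size=30, gens=60, pm=0.1):
    pop = [[random.choice(sizes) for _ in q_demand] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=objective)
        parents = pop[:pop_size // 2]                    # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(q_demand))     # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < pm:                     # mutation
                child[random.randrange(len(child))] = random.choice(sizes)
            children.append(child)
        pop = parents + children                         # elitist replacement
    return min(pop, key=objective)

best = ga()
```

In the real problem, the mismatch term would be replaced by losses from a radial load-flow solution evaluated over several load levels, which is exactly where a GA's black-box search pays off.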
Abstract: In this paper we propose a new traffic simulation
package, TDMSim, which supports both macroscopic and
microscopic simulation on free-flowing and regulated traffic systems.
Both simulators are based on travel demands, which specify the
numbers of vehicles departing from origins to arrive at different
destinations. The microscopic simulator implements the car-following
model given the pre-defined routes of the vehicles, but also
supports the rerouting of vehicles. We also propose a macroscopic
simulator which is built in integration with the microscopic simulator
to allow the simulation to be scaled for larger networks without
sacrificing the precision achievable through the microscopic
simulator. The macroscopic simulator also enables the reuse of
previous simulation results when simulating traffic on the same
networks at a later time. Validations have been conducted to show the
correctness of both simulators.
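TDMSim's actual car-following model is not specified in the abstract; the sketch below only shows the general shape of such an update rule, using a hypothetical gap-limited speed choice rather than the package's real model:

```python
def step(positions, speeds, dt=1.0, vmax=30.0, accel=2.0, min_gap=5.0):
    """One update of a simple car-following rule (lead vehicle first in list):
    accelerate toward vmax, but never move closer than min_gap to the leader."""
    new_pos, new_spd = [], []
    for i, (x, v) in enumerate(zip(positions, speeds)):
        desired = min(v + accel * dt, vmax)
        if i > 0:  # follower: cap speed by the gap to the (already moved) leader
            gap = new_pos[i - 1] - x - min_gap
            desired = max(0.0, min(desired, gap / dt))
        new_spd.append(desired)
        new_pos.append(x + desired * dt)
    return new_pos, new_spd

# three vehicles on one lane: leader at 100 m, followers behind
pos, spd = [100.0, 80.0, 50.0], [10.0, 15.0, 20.0]
for _ in range(30):
    pos, spd = step(pos, spd)
```

The invariant worth checking in any such model is that vehicle ordering and the minimum gap are preserved at every step, i.e. no collisions occur.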
Abstract: Spatial and mobile computing are evolving. This paper
describes a smart modeling platform called “GeoSEMA". The
approach aims to model multidimensional GeoSpatial Evolutionary
and Mobile Agents. Beyond 3D and location-based issues, there
are other dimensions that may characterize spatial agents, e.g.
discrete versus continuous time and agent behaviors. GeoSEMA is
seen as a dedicated design pattern motivating temporal,
geographic-based applications; it is a firm foundation for
multipurpose and multidimensional spatial-based applications. It
deals with multipurpose smart objects (buildings, shapes, missiles,
etc.) by simulating geospatial agents.
Formally, GeoSEMA refers to geospatial, spatio-evolutive and
mobile space constituents; a conceptual geospatial space model
is given in this paper. In addition to modeling and categorizing
geospatial agents, the model incorporates the concept of inter-agent
event-based protocols. Finally, a rapid software-architecture
prototype of the GeoSEMA platform is also given; it will be
implemented and validated in the next phase of our work.
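As a purely hypothetical illustration of what an inter-agent event-based protocol can look like (the platform's real interfaces are not described in the abstract, so every name below is invented):

```python
from dataclasses import dataclass, field

@dataclass
class GeoAgent:
    """Hypothetical geospatial agent: a named position with an event inbox."""
    name: str
    x: float
    y: float
    inbox: list = field(default_factory=list)

    def on_event(self, event):
        self.inbox.append(event)

class EventBus:
    """Minimal inter-agent protocol: publish/subscribe by topic."""
    def __init__(self):
        self.subs = {}

    def subscribe(self, topic, agent):
        self.subs.setdefault(topic, []).append(agent)

    def publish(self, topic, event):
        for agent in self.subs.get(topic, []):
            agent.on_event(event)

bus = EventBus()
a = GeoAgent("vehicle-1", 0.0, 0.0)
b = GeoAgent("building-7", 10.0, 5.0)
bus.subscribe("position-update", b)
bus.publish("position-update", {"from": a.name, "x": 1.0, "y": 0.5, "t": 0.1})
```

Time-stamped events of this kind are one way to accommodate both discrete and continuous time, since agents react to events rather than to a fixed global clock tick.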
Abstract: In a competitive production environment, critical
decisions are based on data obtained by random sampling of
product units. The efficiency of these decisions depends on data
quality and reliability, which leads to the necessity of a reliable
measurement system. The process of identifying and analysing
measurement errors is accordingly known as Measurement System
Analysis (MSA). The aim of this research is to establish the
necessity of extensive development in the analysis of measurement
systems, particularly through the use of Gage Repeatability and
Reproducibility (GR&R) studies to improve physical measurements.
Although repeatability and reproducibility gages are now well
established in manufacturing industries, they are not applied as
widely as other measurement system analysis methods. To introduce
this method and provide feedback for improving measurement
systems, this survey focuses on the ANOVA method as the most
widespread way of calculating Repeatability and Reproducibility
(R&R).
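The ANOVA approach to GR&R estimates variance components from a crossed parts × operators × trials study. The following is a self-contained sketch of the standard two-way ANOVA decomposition (generic textbook formulas; the data shown are synthetic, not the paper's):

```python
def grr_anova(data):
    """Two-way crossed ANOVA variance components for a Gage R&R study.

    data[i][j] is the list of n repeated measurements of part i by operator j.
    Returns (repeatability, reproducibility, part_variation) as variances.
    """
    p, o, n = len(data), len(data[0]), len(data[0][0])
    grand = sum(y for part in data for op in part for y in op) / (p * o * n)
    part_m = [sum(y for op in part for y in op) / (o * n) for part in data]
    op_m = [sum(data[i][j][k] for i in range(p) for k in range(n)) / (p * n)
            for j in range(o)]
    cell_m = [[sum(data[i][j]) / n for j in range(o)] for i in range(p)]

    # mean squares: parts, operators, part-operator interaction, error
    ms_p = o * n * sum((m - grand) ** 2 for m in part_m) / (p - 1)
    ms_o = p * n * sum((m - grand) ** 2 for m in op_m) / (o - 1)
    ms_po = n * sum((cell_m[i][j] - part_m[i] - op_m[j] + grand) ** 2
                    for i in range(p) for j in range(o)) / ((p - 1) * (o - 1))
    ms_e = sum((y - cell_m[i][j]) ** 2
               for i in range(p) for j in range(o)
               for y in data[i][j]) / (p * o * (n - 1))

    repeatability = ms_e
    var_po = max(0.0, (ms_po - ms_e) / n)
    var_o = max(0.0, (ms_o - ms_po) / (p * n))
    var_p = max(0.0, (ms_p - ms_po) / (o * n))
    return repeatability, var_o + var_po, var_p

# synthetic check: identical operators, part-to-part differences only,
# so reproducibility (and repeatability) should come out as zero
data = [[[10.0, 10.0], [10.0, 10.0]],
        [[12.0, 12.0], [12.0, 12.0]],
        [[14.0, 14.0], [14.0, 14.0]]]
repeat, reprod, part_var = grr_anova(data)
```

Total gage R&R is the sum of repeatability and reproducibility, and it is usually reported as a percentage of the total observed variation.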
Abstract: This paper reports an analysis of the outdoor air pollution of the urban centre of the city of Messina. The variations in the concentrations of the most critical pollutants (PM10, O3, CO, C6H6) and their trends with respect to climatic parameters and vehicular traffic have been studied. Linear regressions were performed to represent the relations among the pollutants, and the differences between weekend and weekday pollutant concentrations were also analyzed. In order to evaluate air pollution and its effects on human health, a method for calculating a pollution index was implemented and applied in the urban centre of the city. This index is based on the weighted mean of the concentrations of the most detrimental air pollutants relative to their limit values for the protection of human health. The pollution data analyzed were collected by the Assessorship of the Environment of the Regional Province of Messina in the year 2004. A statistical analysis of the air quality index trends is also reported.
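An index of this kind can be sketched as follows; the limit values and weights below are illustrative assumptions, not the thresholds or weighting actually used in the paper:

```python
def pollution_index(concentrations, limits, weights=None):
    """Weighted mean of pollutant concentrations normalized by their
    limit values; a value above 1 means limits are exceeded on average."""
    if weights is None:
        weights = [1.0] * len(concentrations)
    total_w = sum(weights)
    return sum(w * c / l
               for w, c, l in zip(weights, concentrations, limits)) / total_w

# illustrative daily values for PM10, O3, CO, C6H6 (units matching each limit)
conc = [45.0, 95.0, 4.0, 3.0]
lim = [50.0, 120.0, 10.0, 5.0]    # assumed limit values, for illustration only
idx = pollution_index(conc, lim)
```

Normalizing each pollutant by its own limit value puts heterogeneous units (µg/m³, mg/m³) on a common dimensionless scale before averaging, which is what makes a single index meaningful.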
Abstract: Wind catchers have long served as a cooling system, providing acceptable ventilation by means of the renewable energy of the wind. In the present study, the city of Yazd, in an arid climate, is selected as the case study. From the architectural point of view, wind catchers are studied here by means of field surveys; cases were selected at random and examined analytically. A typology of wind catchers, and an account of the relationships governing their architecture, are developed here for the first time; 53 wind catchers were analyzed. The typology is based on physical analysis of the patterns and common concepts incorporated in the wind catchers, and the thermal behavior of archetypes of the selected wind catchers is analyzed to show how their architecture influences their operation. Computational fluid dynamics, the Fluent software, and numerical analysis are used in this study as the most accurate analytical approach. The results obtained from these analyses identify the formal specifications of wind catchers with optimum operation in Yazd. The knowledge obtained from the optimum model could be used for the design and construction of wind catchers with improved operation.
Abstract: This research paper is based upon the simulation of the
gradient of mathematical functions and scalar fields using MATLAB.
Scalar fields, their gradient, contours and mesh/surfaces are
simulated using different related MATLAB tools and commands for
convenient presentation and understanding. Different mathematical
functions and scalar fields are examined here by taking their
gradient, visualizing results in 3D with different color shadings and
using other necessary relevant commands. In this way, the outputs of
the required functions help us analyze and understand the gradient
better than a purely theoretical study would.
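The same computation can be sketched in a language-neutral way using central differences, the scheme MATLAB's gradient function applies at interior points (written here in Python for self-containment; the example field is illustrative):

```python
def gradient2d(f, h):
    """Central-difference gradient of a 2-D scalar field sampled on a
    uniform grid with spacing h (interior points only; MATLAB's gradient
    additionally uses one-sided differences at the edges)."""
    ny, nx = len(f), len(f[0])
    gx = [[(f[y][x + 1] - f[y][x - 1]) / (2 * h) for x in range(1, nx - 1)]
          for y in range(1, ny - 1)]
    gy = [[(f[y + 1][x] - f[y - 1][x]) / (2 * h) for x in range(1, nx - 1)]
          for y in range(1, ny - 1)]
    return gx, gy

# sample f(x, y) = x^2 + y^2 on a uniform grid over [-2, 2] x [-2, 2]
h = 0.1
xs = [-2.0 + i * h for i in range(41)]
F = [[x * x + y * y for x in xs] for y in xs]
gx, gy = gradient2d(F, h)
```

Analytically grad f = (2x, 2y); for a quadratic, central differences reproduce this exactly at interior points, which makes such a field a convenient check when experimenting with gradient visualizations.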
Abstract: One astonishing capability of humans is to recognize thousands of different objects visually, and to learn the semantic association between those objects and the words referring to them. This work is an attempt to build a computational model of this capacity, simulating the process by which infants learn to recognize objects and words through exposure to visual stimuli and vocal sounds. One of the main facts shaping the brain of a newborn is that lights and colors come from entities in the world. Gradually the visual system learns which light sensations belong to the same entities, despite large changes in appearance. This experience is common to humans and several other mammals, such as non-human primates. But only humans can recognize a huge variety of objects, many of them manufactured by humans themselves, and make use of sounds to identify and categorize them. The aim of this model is to reproduce these processes in a biologically plausible way, by reconstructing the essential hierarchy of cortical circuits along the visual and auditory neural paths.
Abstract: Computer modeling has played a unique role in
understanding electrocardiography. Modeling and simulating cardiac
action potential propagation is suitable for studying normal and
pathological cardiac activation. This paper presents a 2-D Cellular
Automata model for simulating action potential propagation in
cardiac tissue. We demonstrate a novel algorithm that uses a
minimal neighborhood, based on the summation of the excitability
attributes of excited neighboring cells. We eliminate flat edges in
the resulting patterns by introducing probability into the model, and
we preserve the real shape of the action potential by using linear
curve fitting of one well-known electrophysiological model.
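A minimal excitable-media cellular automaton in the Greenberg-Hastings style illustrates the rest/excited/refractory cycle and the flat-edged (diamond-shaped) wavefronts referred to above. This is a generic sketch, not the authors' model, which adds probabilistic firing and summed excitability:

```python
REST, EXCITED, REFRACTORY = 0, 1, 2

def ca_step(grid):
    """One Greenberg-Hastings-style update: excited -> refractory -> rest;
    a resting cell fires if any of its 4-neighbors is excited."""
    h, w = len(grid), len(grid[0])
    new = [[REST] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = grid[y][x]
            if s == EXCITED:
                new[y][x] = REFRACTORY
            elif s == REFRACTORY:
                new[y][x] = REST
            else:
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == EXCITED:
                        new[y][x] = EXCITED
                        break
    return new

# point stimulus in the middle of a 9x9 "tissue" sheet
grid = [[REST] * 9 for _ in range(9)]
grid[4][4] = EXCITED
for _ in range(3):
    grid = ca_step(grid)
```

After three steps the excited cells form a diamond at Manhattan distance 3 from the stimulus; those flat diagonal edges are exactly the artifact a probabilistic firing rule is introduced to smooth out.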
Abstract: This paper introduces a new method, called ARPDC (Advanced Robust Parallel Distributed Compensation), for the automatic control of nonlinear systems. The method improves the quality of robust control by interpolating between a robust and an optimal controller. The weight of each controller is determined by an original criterion function for model validity and disturbance appreciation. The ARPDC method is based on nonlinear Takagi-Sugeno (T-S) fuzzy systems and the Parallel Distributed Compensation (PDC) control scheme. Relaxed stability conditions for ARPDC control of a nominal system have been derived. The advantages of the presented method are demonstrated on the inverted pendulum benchmark problem. A comparison between three different controllers (robust, optimal, and ARPDC) shows that ARPDC control is almost optimal, with robustness close to that of the robust controller. The results indicate that the ARPDC algorithm can be a good alternative not only to robust control but, in some cases, also to adaptive control of nonlinear systems.
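The PDC scheme underlying ARPDC blends local state-feedback gains with normalized membership weights, u = -Σ hᵢ(z)·Fᵢ·x. A two-rule sketch with hypothetical gains (the paper's actual gains and criterion function are not given in the abstract):

```python
def pdc_control(x, z, z_min, z_max, F1, F2):
    """Parallel Distributed Compensation with two T-S rules:
    u = -(h1(z)*F1 + h2(z)*F2) . x, with memberships h1 + h2 = 1."""
    h2 = (z - z_min) / (z_max - z_min)     # membership of rule 2
    h1 = 1.0 - h2                          # membership of rule 1
    gains = [h1 * f1 + h2 * f2 for f1, f2 in zip(F1, F2)]
    return -sum(g * xi for g, xi in zip(gains, x))

# hypothetical state-feedback gains for the two local linear models
F1 = [120.0, 25.0]          # gains designed near the upright equilibrium
F2 = [180.0, 40.0]          # gains designed for larger pendulum angles
x = [0.1, 0.0]              # state: [angle (rad), angular velocity]
u_low = pdc_control(x, z=0.0, z_min=0.0, z_max=1.0, F1=F1, F2=F2)
u_high = pdc_control(x, z=1.0, z_min=0.0, z_max=1.0, F1=F1, F2=F2)
```

ARPDC, as described above, adds a further interpolation layer on top of this: the blended gain itself is weighted between a robust design and an optimal design according to the criterion function for model validity.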
Abstract: This paper studies the application of a variety of
sawdust materials in the production of lightweight insulating bricks.
First, the mineralogical and chemical composition of clays was determined. Next, ceramic bricks were fabricated with different
quantities of materials (3–6 and 9 wt. % for sawdust, 65 wt. % for grey clay, 24–27 and 30 wt. % for yellow clay and 2 wt% of tuff).
These bricks were fired at 800 and 950 °C. The effect of adding this sawdust on the technological behaviour of the brick was assessed by
drying and firing shrinkage, water absorption, porosity, bulk density
and compressive strength. The results have shown that the optimum
sintering temperature is 950 °C. Below this temperature,
increased open porosity was observed, which decreased the compressive strength of the bricks. Based on the results obtained, the
optimum amounts of waste were 9 wt. % eucalyptus sawdust, 24 wt. % shaping moisture and a particle size diameter of 1.6. These percentages produced bricks whose mechanical properties were
suitable for use as secondary raw materials in ceramic brick
production.
Abstract: This paper is focused on issues of nonlinear dynamic process modeling and model-based predictive control of a fed-batch sugar crystallization process, applying the concept of artificial neural networks as computational tools. The control objective is to force the operation to follow an optimal supersaturation trajectory. This is achieved by manipulating the feed flow rate of sugar liquor/syrup, considered as the control input. A feed-forward neural network (FFNN) model of the process is first built as part of the controller structure to predict the process response over a specified (prediction) horizon. The predictions are supplied to an optimization procedure that determines the values of the control action, over a specified (control) horizon, that minimize a predefined performance index. The control task is rather challenging due to the strong nonlinearity of the process dynamics and variations in the crystallization kinetics. However, the simulation results demonstrate smooth behavior of the control actions and satisfactory reference tracking.
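A toy sketch of the predict-then-optimize loop described above: a tiny FFNN with made-up weights stands in for the trained process model, and a simple grid search stands in for the paper's optimization procedure. All numbers are illustrative assumptions:

```python
import math

# hypothetical one-hidden-layer FFNN process model: predicts the next
# supersaturation from current supersaturation s and feed rate u
W_IN = [(1.2, 0.8), (-0.7, 1.5)]     # illustrative "trained" weights
B_H = [0.1, -0.2]
W_OUT = [0.6, 0.5]
B_OUT = 0.05

def ffnn_next(s, u):
    hidden = [math.tanh(w_s * s + w_u * u + b)
              for (w_s, w_u), b in zip(W_IN, B_H)]
    return sum(w_o * h for w_o, h in zip(W_OUT, hidden)) + B_OUT

def predict_horizon(s, u, steps):
    """Roll the model forward with a constant feed rate u."""
    traj = []
    for _ in range(steps):
        s = ffnn_next(s, u)
        traj.append(s)
    return traj

def mpc_step(s, reference, horizon=5, candidates=None):
    """Pick the feed rate minimizing the squared tracking error over the
    prediction horizon (grid search instead of a gradient-based optimizer)."""
    if candidates is None:
        candidates = [i / 10 for i in range(11)]      # u in [0, 1]
    def cost(u):
        return sum((sp - reference) ** 2
                   for sp in predict_horizon(s, u, horizon))
    return min(candidates, key=cost)

u_star = mpc_step(s=0.2, reference=0.6)
```

In a receding-horizon controller, only this first control value would be applied, the new state measured, and the optimization repeated at the next sampling instant.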