Abstract: CFD simulations are carried out in arterial stenoses with 48% areal occlusion. A non-Newtonian fluid model is selected for the blood flow, since the same problem has previously been solved with a Newtonian fluid model. Flow resistance in the presence of surface irregularities is studied, and the pressure drop is investigated at various Reynolds numbers. The present study reveals that the pressure drop across a stenosed artery is practically unaffected by surface irregularities at low Reynolds numbers, while distinctive flow features are observed and discussed at higher Reynolds numbers.
Abstract: In this work, we examine fluid mixing in a full three-stream mixing channel with longitudinal vortex generators (LVGs) built on the channel bottom, by numerical simulation and experiment. The effects of the asymmetrical arrangement and the attack angle of the LVGs on fluid mixing are investigated. The results show that the micromixer with LVGs at a small asymmetry index (defined as the distance between the center plane of the gap between the winglets and the center plane of the main channel, divided by the width of the main channel) is superior to the micromixer with symmetric LVGs and to that with LVGs at a large asymmetry index. The micromixer using five mixing modules of the LVGs with an attack angle between 16.5 degrees and 22.5 degrees can achieve excellent mixing over a wide range of Reynolds numbers. Here, we call a section of channel with two pairs of staggered asymmetrical LVGs a mixing module. Moreover, the micromixer with LVGs at a small attack angle is more efficient than that with a larger attack angle when pressure losses are taken into account.
Abstract: Context awareness is a capability whereby mobile
computing devices can sense their physical environment and adapt
their behavior accordingly. The term context-awareness, in
ubiquitous computing, was introduced by Schilit in 1994 and has
become one of the most exciting concepts in early 21st-century
computing, fueled by recent developments in pervasive computing
(i.e. mobile and ubiquitous computing). These include computing
devices worn by users, embedded devices, smart appliances, sensors
surrounding users and a variety of wireless networking technologies.
Context-aware applications use context information to adapt
interfaces, tailor the set of application-relevant data, increase the
precision of information retrieval, discover services, make the user
interaction implicit, or build smart environments. For example, a
context-aware mobile phone may know that the user is currently in a
meeting room and reject any unimportant calls. One of the major
challenges in providing users with context-aware services lies in
continuously monitoring their contexts based on numerous sensors
connected to the context-aware system through wireless
communication. A number of context-aware frameworks based on
sensors have been proposed, but many of them have neglected the
fact that monitoring with sensors imposes a heavy workload on
ubiquitous devices with limited computing power and battery life. In this
paper, we present CALEEF, a lightweight and energy-efficient
context-aware framework for resource-limited ubiquitous devices.
Abstract: The current study describes a multi-objective optimization technique for the positioning of houses in a residential neighborhood. The main task is the placement of residential houses in a favorable configuration satisfying a number of objectives. Solving the house layout problem is a challenging task. It requires an iterative approach to satisfy design requirements (e.g. energy efficiency, sky view, daylight, road network, visual privacy, and clear access to favorite views). These design requirements vary from one project to another based on location and client preferences. In the Gulf region, the most important socio-cultural factor is visual privacy in indoor space. Hence, most residential houses in this region are surrounded by high fences to provide privacy, which has a direct impact on other requirements (e.g. daylight and direction to favorite views). This investigation introduces a novel technique to optimally locate and orient residential buildings to satisfy a set of design requirements. The developed technique explores the search space for possible solutions. This study considers two-dimensional house-planning problems; however, the technique can be extended to solve three-dimensional cases.
Abstract: In this paper the General Game problem is described, in which the dilemma between competition and cooperation appears as the two basic types of strategies. The strategy possibilities have been analyzed to find a winning strategy in uncertain situations (no information about the number of players or their strategy types). No universal winning strategy exists, but a good solution can be found by simulation, varying the ratio of the two strategy types. This new method has been used in a real contest with human players, where the strategies created by simulation achieved very good rankings. This construction can be applied to other real social games as well.
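A minimal sketch of the simulation idea, assuming a standard prisoner's-dilemma payoff matrix and random pairing (the paper's actual game rules and payoffs are not given in the abstract):

```python
import random

# Assumed standard prisoner's dilemma payoffs: T=5, R=3, P=1, S=0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def average_payoff(coop_ratio, n_players=100, n_rounds=200, seed=0):
    """Mean per-player payoff when a fraction of players cooperate."""
    rng = random.Random(seed)
    n_coop = int(coop_ratio * n_players)
    strategies = ["C"] * n_coop + ["D"] * (n_players - n_coop)
    totals = [0.0] * n_players
    for _ in range(n_rounds):
        i, j = rng.sample(range(n_players), 2)   # random pairing each round
        pi, pj = PAYOFF[(strategies[i], strategies[j])]
        totals[i] += pi
        totals[j] += pj
    return sum(totals) / n_players

# Scan the ratio of the two strategy types for the best population mix.
best_ratio = max((r / 10 for r in range(11)), key=average_payoff)
```

Scanning `coop_ratio` over a grid mimics "varying the ratio of the two types of strategies" described above; a real study would replace the payoff table and pairing rule with the contest's actual game.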
Abstract: Finger spelling is an art of communicating by signs made with the fingers, and has been introduced into sign language to serve as a bridge between sign language and verbal language. Previous approaches to finger spelling recognition are classified into two categories: glove-based and vision-based approaches. The glove-based approach is simpler and more accurate at recognizing hand postures than the vision-based one, yet its interfaces require the user to wear a cumbersome device and carry a load of cables connecting the device to a computer. In contrast, vision-based approaches provide an attractive alternative to the cumbersome interface and promise more natural and unobtrusive human-computer interaction. Vision-based approaches generally consist of two steps, hand extraction and recognition, and the two steps are processed independently. This paper proposes a real-time vision-based Korean finger spelling recognition system that integrates hand extraction into recognition. First, we tentatively detect the hand region using the CAMShift algorithm. Then the fill factor and aspect ratio, estimated from the width and height given by CAMShift, are used to choose candidates from the database, which reduces the number of matches in the recognition step. To recognize the finger spelling, we use DTW (dynamic time warping) based on modified chain codes, to be robust to scale and orientation variations. In this procedure, since accurate hand regions, without holes and noise, should be extracted to improve precision, we use the graph cuts algorithm, which globally minimizes an energy function elegantly expressed by Markov random fields (MRFs). In the experiments, the computational times are less than 130 ms, and they do not depend on the number of finger spelling templates in the database, since candidate templates are selected in the extraction step.
Abstract: This paper considers a multi-criteria cell formation problem in a Cellular Manufacturing System (CMS). The two proposed objective functions are to simultaneously minimize the number of voids and the number of exceptional elements in cells. According to the literature this problem is NP-hard, and therefore the optimal solution cannot be found by an exact method. In this paper we develop two ant algorithms, Ant Colony Optimization (ACO) and Max-Min Ant System (MMAS), based on Data Envelopment Analysis (DEA). Both of them try to find efficient solutions based on the efficiency concept in DEA. Each artificial ant is considered a Decision Making Unit (DMU). For each DMU we consider two inputs, the values of the two objective functions, and one output, with the value of one for all DMUs. In order to evaluate the performance of the proposed methods we provide an experimental design with empirical problems of three different sizes: small, medium, and large. We define three different criteria that show which algorithm has the best performance.
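With two minimized inputs and a constant unit output, the DEA efficiency concept essentially rewards solutions on the non-dominated frontier. As an illustrative sketch (a Pareto filter, not the full DEA linear program), the efficient set of ants could be extracted like this:

```python
def pareto_efficient(solutions):
    """Return the non-dominated solutions for two minimization objectives
    (here: number of voids, number of exceptional elements)."""
    def dominates(p, q):
        # p dominates q if it is no worse in both objectives and better in one.
        return all(a <= b for a, b in zip(p, q)) and \
               any(a < b for a, b in zip(p, q))
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]
```

Each tuple plays the role of one DMU's two inputs; the surviving tuples are the candidates a DEA model would score as efficient.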
Abstract: Many algorithms are available for sorting unordered elements, the most important of them being Bubble sort, Heap sort, Insertion sort, and Shell sort. These algorithms have their own pros and cons. Shell sort, an enhanced version of insertion sort, reduces the number of swaps of the elements being sorted, minimizing complexity and time compared with insertion sort. Shell sort improves the efficiency of insertion sort by quickly shifting values to their destination. Average sort time is O(n^1.25), while worst-case time is O(n^1.5). It performs a number of iterations; in each iteration it swaps some elements of the array in such a way that in the last iteration, when the value of h is one, the number of swaps is reduced. Donald L. Shell invented a formula to calculate the value of 'h'. This work focuses on identifying improvements in the conventional Shell sort algorithm. The 'Enhanced Shell Sort algorithm' is an improvement in the way the value of 'h' is calculated. It has been observed that by applying this algorithm, the number of swaps can be reduced by up to 60 percent compared with the existing algorithm. In some other cases this enhancement was found to be faster than the existing algorithms available.
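The conventional algorithm that the enhancement builds on can be sketched with Shell's original halving gap sequence (the abstract does not give the enhanced formula for h, so only the baseline is shown):

```python
def shell_sort(arr):
    """Classic Shell sort using Shell's original gap sequence h = n//2, h//2, ..., 1."""
    a = list(arr)
    h = len(a) // 2
    while h > 0:
        # Gapped insertion sort: every h-th subsequence is insertion-sorted.
        for i in range(h, len(a)):
            key = a[i]
            j = i
            while j >= h and a[j - h] > key:
                a[j] = a[j - h]   # shift element h positions to the right
                j -= h
            a[j] = key
        h //= 2                   # the final pass runs with h == 1
    return a
```

The early large-gap passes move elements close to their destinations, which is why the final h = 1 pass (plain insertion sort) needs far fewer swaps.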
Abstract: This study presents the numerical simulation of an optimum pin-fin heat sink with air impingement cooling using the Taguchi method. An L9 (3^4) orthogonal array is selected as the plan for the four design parameters with three levels. The governing equations are discretized using the control-volume-based finite-difference method with a power-law scheme on a non-uniform staggered grid. We solve the coupling of the velocity and pressure terms of the momentum equations using the SIMPLEC algorithm, and employ the k-ε two-equation turbulence model to describe the turbulent behavior. The parameters studied include fin height H (35 mm-45 mm), inter-fin spacings a, b, and c (2 mm-6.4 mm), and Reynolds number (Re = 10000-25000). The objective of this study is to examine the effects of the fin spacings and fin height on the thermal resistance and to find the optimum combination using the Taguchi method. We found that the optimum fin spacings gradually extend from the center to the edge of the heat sink, and that the taller the fins, the better the results. The optimum combination is H3 a1 b2 c3. In addition, the effects of the parameters, ranked by importance, are a, H, c, and b.
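The Taguchi analysis behind an optimum combination such as H3 a1 b2 c3 can be sketched with the standard smaller-the-better signal-to-noise ratio (appropriate for thermal resistance) and per-level main effects; the study's actual responses are not given in the abstract, so the functions below are generic:

```python
import math

def sn_smaller_the_better(values):
    """Taguchi S/N ratio for a smaller-the-better response, e.g. thermal
    resistance: S/N = -10 * log10(mean(y^2)); larger S/N is better."""
    return -10.0 * math.log10(sum(y * y for y in values) / len(values))

def main_effects(levels, sn):
    """Average S/N per level of one factor across the orthogonal-array runs.
    `levels` is the factor's level in each run, `sn` the run's S/N ratio;
    the level with the largest average S/N is chosen for the optimum."""
    out = {}
    for lv in set(levels):
        vals = [s for l, s in zip(levels, sn) if l == lv]
        out[lv] = sum(vals) / len(vals)
    return out
```

Applying `main_effects` once per factor over the nine L9 runs yields the best level of each factor, and the spread of the level averages ranks the factors by importance.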
Abstract: This paper investigates the solutions of two-point fuzzy boundary value problems of the form x″(t) = f(t, x(t)), x(0) = A, x(l) = B, where A and B are fuzzy numbers. There are four different solutions for the problems when the lateral type of H-derivative is employed to solve them. When f(t, x) is a monotone function of x, these four solutions reduce to two different solutions. For f(t, x(t)) = λx(t) and f(t, x(t)) = -λx(t), solutions and several comparison results are presented to indicate the advantages of each solution.
Abstract: In this study, an optimization of a supersonic air-to-air ejector is carried out by a recently developed single-objective genetic algorithm based on adaptation of the sequence of individuals. Adaptation of the sequence is based on the shape-based distance of individuals and an embedded micro-genetic algorithm. The optimal sequence found defines the succession of CFD-aimed objective calculations within each generation of the regular micro-genetic algorithm. A spring-based deformation mutates the computational grid, starting from the generation's initial individual, via the adapted population in the optimized sequence. Selection of a generation's initial individual is knowledge-based. A direct comparison of the newly defined and the standard micro-genetic algorithm is carried out for the supersonic air-to-air ejector. The only objective is to minimize the loss of total stagnation pressure in the ejector. The result is that the sequence-adapted micro-genetic algorithm can provide results comparable to the standard algorithm, but in a significantly lower number of overall CFD iteration steps.
Abstract: Discharges in hydrogen, ignited by wire explosion, with current amplitudes up to 1.5 MA were investigated. Channel diameter oscillations were observed on the photostreaks, and the voltage and current curves correlated with them. At initial gas pressures of 5-35 MPa the oscillation period was proportional to the square root of the atomic number of the initiating wire material. These oscillations were associated with the alignment of the magnetic and gas-kinetic pressures. At initial pressures of 80-160 MPa, acoustic pressure fluctuations on the discharge chamber wall increased up to 150 MPa, and voltage fluctuations across the discharge gap simultaneously grew up to 3 kV. In some experiments an abrupt increase in the oscillation amplitude was observed, which can be caused by resonance between the acoustic oscillations in the discharge chamber volume and the oscillations connected with the alignment of the gas-kinetic and magnetic pressures, since the frequencies of these oscillations are close to each other according to the estimates and the experimental data. Resonance of the different types of oscillations can increase the energy density in the discharge channel. Thus, appropriate initial conditions in the experiment allow the energy density in the discharge channel to be increased.
Abstract: Owing to the fact that optimization of business processes is a crucial requirement to navigate, survive, and even thrive in today's volatile business environment, this paper presents a framework for selecting a best-fit optimization package for solving complex business problems. The complexity level of the problem and/or the use of incorrect optimization software can lead to biased solutions of the optimization problem. Accordingly, the proposed framework identifies a number of relevant factors (e.g. decision variables, objective functions, and modeling approach) to be considered during the evaluation and selection process. The application domain, problem specifications, and available accredited optimization approaches are also to be regarded. The output of the framework is a recommendation of one or two optimization software packages believed to provide the best results for the underlying problem. In addition, a set of guidelines and recommendations on how managers can conduct an effective optimization exercise is discussed.
Abstract: In molecular biology, microarray technology is widely and successfully utilized to efficiently measure gene activity. When working with less studied organisms, methods to design custom-made microarray probes are available. One design criterion is to select probes with minimal melting temperature variance, thus ensuring similar hybridization properties. If the microarray application focuses on the investigation of metabolic pathways, it is not necessary to cover the whole genome; it is more efficient to cover each metabolic pathway with a limited number of genes. Firstly, an approach is presented that minimizes the overall melting temperature variance of the selected probes for all genes of interest. Secondly, the approach is extended to include the additional constraint of covering all pathways with a limited number of genes while minimizing the overall variance. The new optimization problem is solved by a bottom-up programming approach that reduces the complexity to make it computationally feasible. As an example, the new method is applied to the selection of microarray probes covering all fungal secondary metabolite gene clusters of Aspergillus terreus.
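The paper's bottom-up programming approach is not detailed in the abstract; as a rough illustration of the objective only, a greedy heuristic that pulls each gene's probe toward the overall median melting temperature could look like this (gene names and the candidate structure are hypothetical):

```python
from statistics import median

def select_probes(candidates):
    """Greedy sketch: for each gene pick the candidate probe whose melting
    temperature (Tm, in deg C) is closest to the overall median Tm,
    approximately reducing the Tm variance of the selected set."""
    all_tm = [tm for probes in candidates.values() for tm in probes]
    target = median(all_tm)
    return {gene: min(probes, key=lambda tm: abs(tm - target))
            for gene, probes in candidates.items()}
```

A greedy pass like this gives no optimality guarantee, which is exactly why the abstract's exact optimization formulation (and its complexity-reducing solver) is needed.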
Abstract: The scientific achievements coming from molecular biology depend greatly on the capability of computational applications to analyze laboratory results. A comprehensive analysis of an experiment typically requires studying the obtained dataset together with data available in several distinct public databases. Nevertheless, developing centralized access to these distributed databases raises a set of challenges, such as: what is the best integration strategy, how to resolve nomenclature clashes, how to handle data overlapping between databases, and how to deal with huge datasets. In this paper we present GeNS, a system that uses a simple yet innovative approach to address several biological data integration issues. Compared with existing systems, the main advantages of GeNS are its maintenance simplicity and its coverage and scalability in terms of the number of supported databases and data types. To support our claims we present two concrete applications that currently use GeNS. GeNS currently contains more than 140 million biological relations, and it can be publicly downloaded or remotely accessed through SOAP web services.
Abstract: In this paper, a novel associative memory model is proposed and applied to memory retrieval, based on the conventional continuous-time model. The conventional model has very low memory capacity, and its retrieval process easily converges to equilibrium states that are very different from the stored patterns. Genetic Algorithms are well known for their capability of global search, escaping local optima on the way to a global optimum. Based on this well-known idea, this work proposes a heuristic rule that applies a mutation when the state of the network is trapped in a spurious memory. The proposed heuristic associative memory shows that the storage capacity does not depend on the number of stored patterns and that the retrieval ability reaches approximately 1.
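The continuous-time model itself is not specified in the abstract; the following discrete Hopfield-style sketch only illustrates the proposed mutation rule, flipping a random neuron whenever the network settles on a spurious (non-stored) state:

```python
import random

def train(patterns):
    """Hebbian weights for a Hopfield-style associative memory (+1/-1 patterns)."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W

def recall(W, state, patterns, max_steps=50, seed=0):
    """Asynchronous updates; if the net settles on a state that is not a
    stored pattern, flip one random neuron -- the GA-style mutation."""
    rng = random.Random(seed)
    s = list(state)
    n = len(s)
    for _ in range(max_steps):
        for i in rng.sample(range(n), n):        # one full asynchronous sweep
            h = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
        if s in patterns:
            return s
        s[rng.randrange(n)] *= -1                # mutation: escape the spurious attractor
    return s
```

The mutation plays the same role as in a genetic algorithm: a small random perturbation that lets the dynamics leave a local (spurious) equilibrium and try a different basin of attraction.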
Abstract: Research on two-wheeled inverted pendulum (TWIP) mobile robots, commonly known as balancing robots, has gained momentum over the last decade in a number of robotics laboratories around the world. This paper describes the hardware design of such a robot. The objective of the design is to develop a TWIP mobile robot, together with a MATLAB interfacing configuration, to be used as a flexible platform comprising an embedded unstable linear plant intended for research and teaching purposes. Issues such as the selection of actuators and sensors, signal processing units, MATLAB Real-Time Workshop coding, modeling, and the control scheme are addressed and discussed. The system is then tested using a well-known state feedback controller to verify its functionality.
Abstract: Forest fires in Thailand are an annual occurrence and a cause of air pollution. This study estimates the emissions from forest fires during 2005-2009 using the MODerate-resolution Imaging Spectro-radiometer (MODIS) sensor aboard the Terra and Aqua satellites, experimental data, and statistical data. The forest fire emissions are estimated using the equation established by Seiler and Crutzen in 1982. The spatial and temporal variation of the emissions is analyzed and displayed in the form of a grid density map. The satellite data analysis suggests that between 2005 and 2009 there were 86,877 fire hotspots, with the significant majority (more than 80%) occurring in deciduous forest. The peak period of forest fires is from January to May. The estimated emissions from forest fires during 2005 to 2009 indicate that the amounts of CO, CO2, CH4, and N2O were about 3,133,845 tons, 47,610.337 tons, 204,905 tons, and 6,027 tons, respectively, or about 6,171,264 tons of CO2eq; the fires also emitted 256,132 tons of PM10. The year 2007 was found to have the largest emissions, and March is annually the month with the maximum forest fire emissions. The areas with a high density of forest fire emissions were the forests situated in the northern, western, and upper northeastern parts of the country.
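The Seiler and Crutzen (1982) estimate multiplies burned area, biomass density, the above-ground biomass fraction, and burning efficiency, then applies a species-specific emission factor. A minimal sketch with illustrative parameter values (not the study's actual data):

```python
def biomass_burned(area_ha, biomass_t_per_ha, frac_aboveground, burn_eff):
    """Seiler & Crutzen (1982): M = A * B * alpha * beta,
    in tons of dry matter burned."""
    return area_ha * biomass_t_per_ha * frac_aboveground * burn_eff

def emission(mass_burned_t, emission_factor_g_per_kg):
    """Species emission in tons: M (t) times EF (g of species emitted
    per kg of dry matter burned), converted from g/kg to t/t."""
    return mass_burned_t * emission_factor_g_per_kg / 1000.0
```

Running these two steps per grid cell and per species (CO, CO2, CH4, N2O, PM10), then summing over cells and months, produces the kind of grid density map and annual totals reported above.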
Abstract: Thermo-chemical treatment (TCT) such as pyrolysis is increasingly recognized as a valid route for (i) the recovery of materials, valuable products, and petrochemicals; (ii) waste recycling; and (iii) elemental characterization. Pyrolysis is also receiving renewed attention for its operational, economic, and environmental advantages. In this study, samples of polyethylene terephthalate (PET) and polystyrene (PS) were pyrolysed in a micro-thermobalance reactor using a thermogravimetric analysis (TGA) setup. Both polymers were prepared and conditioned prior to experimentation.
The main objective was to determine the kinetic parameters of the
depolymerization reactions that occur within the thermal degradation
process. Overall kinetic rate constants (ko) and activation energies
(Eo) were determined using the general kinetics theory (GKT)
method previously used by a number of authors. Fitted correlations
were found and validated using the GKT; errors were within ±5%.
This study represents a fundamental step towards the development of
scaling relationships for the investigation of larger-scale reactors
relevant to industry.
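The GKT fitting procedure is not described in the abstract, but the reported quantities ko and Eo enter kinetics through the standard Arrhenius form, sketched here for reference:

```python
import math

R = 8.314  # J/(mol K), universal gas constant

def arrhenius_k(k0, Ea, T):
    """Arrhenius rate constant k = k0 * exp(-Ea / (R T)).
    k0: pre-exponential factor; Ea: activation energy in J/mol; T in K."""
    return k0 * math.exp(-Ea / (R * T))
```

Fitting measured mass-loss rates at several temperatures to this form is what yields an overall rate constant and activation energy for each polymer's depolymerization.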
Abstract: The purpose of Grid computing is to utilize the computational power of idle resources distributed across different areas. Given the grid's dynamism and its decentralized resources, there is a need for an efficient scheduler for scheduling applications. Since task scheduling is among the NP-hard problems, various studies have focused on heuristic algorithms, especially genetic ones. However, since the genetic algorithm searches the problem space globally and lacks the efficiency required for local search, combining it with local search algorithms can compensate for this shortcoming. The aim of this paper is to combine the genetic algorithm with GELS (GA-GELS) as a method to solve the scheduling problem, paying attention simultaneously to two factors: time and the number of missed tasks. Results show that the proposed algorithm can decrease the makespan while minimizing the number of missed tasks, compared with traditional methods.
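GELS itself is not described in the abstract; the following toy memetic scheduler only stands in for the GA-plus-local-search combination, pairing a GA's global search with a greedy local move (relocating one task off the busiest machine) and minimizing makespan alone:

```python
import random

def makespan(assign, times, n_machines):
    """Completion time of the busiest machine for a task->machine assignment."""
    loads = [0.0] * n_machines
    for task, m in enumerate(assign):
        loads[m] += times[task]
    return max(loads)

def ga_with_local_search(times, n_machines, pop=20, gens=40, seed=0):
    """Toy memetic scheduler: GA global search plus a greedy local move."""
    rng = random.Random(seed)
    n = len(times)
    P = [[rng.randrange(n_machines) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda a: makespan(a, times, n_machines))
        parents = P[:pop // 2]                        # elitist selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]                 # one-point crossover
            if rng.random() < 0.3:                    # random mutation
                child[rng.randrange(n)] = rng.randrange(n_machines)
            # local search: move one task from the busiest to the lightest machine
            loads = [0.0] * n_machines
            for t, m in enumerate(child):
                loads[m] += times[t]
            busy = loads.index(max(loads))
            for t, m in enumerate(child):
                if m == busy:
                    child[t] = loads.index(min(loads))
                    break
            children.append(child)
        P = parents + children
    return min(P, key=lambda a: makespan(a, times, n_machines))
```

The local-move step inside the loop is the part a GELS-style search would replace; a full GA-GELS scheduler would also track deadline misses as the second factor.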