Abstract: Hybrid algorithms are a hot topic in Computational Intelligence (CI) research. Building on an in-depth discussion of the Simulation Mechanism Based (SMB) classification method and composite patterns, this paper presents the Mamdani-model-based Adaptive Neural Fuzzy Inference System (M-ANFIS) and its weight-updating formula, taking into account the qualitative representation of the inference consequents in fuzzy neural networks. The M-ANFIS model adopts the Mamdani fuzzy inference system, which has advantages in the consequent part. Experimental results from applying M-ANFIS to evaluate traffic level of service show that M-ANFIS, as a new hybrid algorithm in computational intelligence, has clear advantages in non-linear modeling, membership functions in consequent parts, the scale of training data required, and the number of adjustable parameters.
Abstract: This paper presents an improved image segmentation model with edge-preserving regularization based on the piecewise-smooth Mumford-Shah functional. A level set formulation is considered for the minimization of the Mumford-Shah functional in segmentation, and the corresponding partial differential equations are solved by backward Euler discretization. To encourage edge-preserving regularization, a new edge indicator function is introduced within the level set framework, in which all grid points used to locate the level set curve are considered in order to avoid blurring the edges, and a nonlinear smoothness-constraint function is applied as the regularization term to smooth the image in the isophote direction instead of the gradient direction. In the implementation, strategies such as a new scheme for extending the computation of u+ and u- to the grid points and a speedup of the convergence are studied to improve the efficacy of the algorithm. The resulting algorithm has been implemented, compared with previous methods, and shown to be efficient in several test cases.
Abstract: In this paper, processes involving large deformations of a rubber with hyperelastic material behavior are simulated by the Reproducing Kernel Particle Method (RKPM). Because meshless shape functions lack the Kronecker delta property, the imposition of essential boundary conditions consumes significant CPU time in mesh-free computations. In this work, the transformation method is used to impose essential boundary conditions, and an RKPM material shape function is used in the analysis. The support of the material shape functions covers the same set of particles throughout the material deformation, so the transformation matrix is formed only once, at the initial stage. A computer program in MATLAB is developed for the simulations.
Abstract: In real-field applications, the correct determination of voice segments greatly improves overall system accuracy and minimises the total computation time. This paper presents reliable measures of speech compression obtained by detecting the end points of the speech signals prior to compressing them. The two compression schemes used are the Global threshold and the Level-Dependent threshold techniques. The performance of the proposed method is tested with the Signal-to-Noise Ratio, Peak Signal-to-Noise Ratio and Normalized Root Mean Square Error measures.
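The three quality measures named above have standard definitions; a minimal sketch of them in Python follows (the abstract does not give its exact formulas, so the conventional definitions are assumed, with PSNR referenced to the peak of the original signal).

```python
import numpy as np

def snr_db(original, reconstructed):
    """Signal-to-Noise Ratio in dB: signal energy over error energy."""
    noise = original - reconstructed
    return 10 * np.log10(np.sum(original**2) / np.sum(noise**2))

def psnr_db(original, reconstructed):
    """Peak Signal-to-Noise Ratio in dB, using the peak of the original."""
    mse = np.mean((original - reconstructed)**2)
    return 10 * np.log10(np.max(np.abs(original))**2 / mse)

def nrmse(original, reconstructed):
    """Root-mean-square error normalized by the signal's RMS value."""
    return np.sqrt(np.mean((original - reconstructed)**2)
                   / np.mean(original**2))
```

Because PSNR references peak power while SNR references mean power, PSNR is always at least as large as SNR for the same pair of signals.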
Abstract: Parallel programming models exist as an abstraction of hardware and memory architectures. Several parallel programming models are in common use: the shared memory model, thread model, message passing model, data parallel model, hybrid model, Flynn's models, the embarrassingly parallel computations model, and the pipelined computations model. These models are not specific to a particular type of machine or memory architecture. This paper presents a model program for a concurrent approach to the data parallel model using Java programming.
Abstract: This paper presents a simplified version of Data Envelopment Analysis (DEA), a conventional approach to evaluating the performance and ranking of competitive objects characterized by two groups of factors acting in opposite directions: inputs and outputs. DEA with a Perfect Object (DEA PO) augments the group of actual objects with a virtual Perfect Object, the one having the greatest outputs and smallest inputs. This allows an explicit analytical solution to be obtained and takes a step toward absolute efficiency. This paper develops the approach further and introduces a DEA model with Partially Perfect Objects. DEA PPO consecutively eliminates the smallest relative inputs or greatest relative outputs and applies DEA PO to the reduced collections of indicators. The partial efficiency scores are combined to obtain a weighted efficiency score. The computational scheme remains as simple as that of DEA PO, but DEA PPO has the advantage of taking into account all of the inputs and outputs of each actual object. Firm evaluation is considered as an example.
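The perfect-object idea can be illustrated in a few lines. The sketch below constructs the virtual Perfect Object (smallest observed inputs, greatest observed outputs) and scores each actual object by its closeness to it; the equal-weight scoring formula here is a deliberate simplification for illustration, not the paper's exact DEA PO solution.

```python
import numpy as np

def dea_po_scores(inputs, outputs):
    """Illustrative efficiency relative to a virtual Perfect Object that
    has the smallest observed inputs and the largest observed outputs.
    inputs: (n_objects, n_inputs); outputs: (n_objects, n_outputs).
    NOTE: simplified equal-weight scoring, not the paper's exact model."""
    x_min = inputs.min(axis=0)                   # perfect object's inputs
    y_max = outputs.max(axis=0)                  # perfect object's outputs
    out_ratio = (outputs / y_max).mean(axis=1)   # closeness to perfect outputs
    in_ratio = (x_min / inputs).mean(axis=1)     # closeness to perfect inputs
    return out_ratio * in_ratio                  # 1.0 only if perfect on both

# Three hypothetical firms, two inputs and two outputs each.
X = np.array([[2.0, 4.0], [3.0, 2.0], [2.0, 2.0]])
Y = np.array([[10.0, 5.0], [8.0, 6.0], [10.0, 6.0]])
scores = dea_po_scores(X, Y)
```

The third firm here happens to dominate on every indicator, so it coincides with the Perfect Object and scores 1.0, while the others score strictly below it.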
Abstract: The blood pulse is an important human physiological signal commonly used to understand an individual's physical health. Current methods of non-invasive blood pulse sensing require direct contact with, or access to, the human skin. As such, the performance of these devices tends to vary with time and is affected by human body fluids (e.g. blood, perspiration and skin oil) and environmental contaminants (e.g. mud, water, etc.). This paper proposes a simulation model for a novel method of non-invasive acquisition of the blood pulse using the disturbance created by blood flowing through a localized magnetic field. The simulation model geometry represents a blood vessel, a permanent magnet, a magnetic sensor, surrounding tissues and air in two dimensions. In this model, the velocity and pressure fields in the blood stream are described by the Navier-Stokes equations, and the walls of the blood vessel are assumed to satisfy the no-slip condition. The blood flow is assumed to have a parabolic profile, corresponding to laminar flow in a major artery near the skin, and the inlet velocity follows a sinusoidal equation. This allows the computational software to compute the interactions between the magnetic vector potential generated by the permanent magnet and the magnetic nanoparticles in the blood. These interactions are simulated based on the Maxwell equations at the location where the magnetic sensor is placed. The simulated magnetic field at the sensor location is found to have sinusoidal waveform characteristics similar to those of the inlet velocity of the blood. The amplitudes of the simulated waveforms at the sensor location are compared with physical measurements on human subjects and found to be highly correlated.
Abstract: This paper introduces a temporal epistemic logic, CBCTL, based on computation tree logic (CTL), that updates agents' belief states through communications among them. In practical environments, communication channels between agents may not be secure, and in the worst cases agents might suffer blackouts. In this study, we provide an inform* protocol based on the FIPA ACL and declare the presence of secure channels between two agents as time-dependent. Thus, the belief state of each agent is updated as time progresses. We present a prover, i.e., a reasoning system for a given formula in a given situation of an agent; if the formula is directly provable, or if it can be validated through chains of communications, the system returns the proof.
Abstract: The design of a modern aircraft rests on three pillars: theoretical results, experimental tests and computational simulations. As a result, Computational Fluid Dynamics (CFD) solvers are widely used in the aeronautical field. These solvers require the correct selection of many parameters in order to obtain successful results, and the computational time spent in a simulation depends on the proper choice of these parameters. In this paper we create an expert system capable of making an accurate prediction of the number of iterations and the time required for the convergence of a CFD solver. An artificial neural network (ANN) has been used to design the expert system, and the developed system is shown to predict both quantities accurately.
Abstract: This research proposes an algorithm for the simulation of time-periodic unsteady problems via the solution of the unsteady Euler and Navier-Stokes equations. This algorithm, called the Time Spectral method, uses a Fourier representation in time and hence solves for the periodic state directly, without resolving the transients (which consume most of the resources in a time-accurate scheme). The mathematical tools used here are discrete Fourier transforms. By enforcing periodicity and using a Fourier representation in time, leading to spectral accuracy, the method has shown tremendous potential for reducing the computational cost compared with conventional time-accurate methods. The accuracy and efficiency of this technique are verified by Euler and Navier-Stokes calculations for pitching airfoils. Because of the turbulent nature of the flow, the Baldwin-Lomax turbulence model has been used in the viscous flow analysis. The results of the Time Spectral method are compared with experimental data and verify that only a small number of time intervals per pitching cycle is required to capture the flow physics.
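The core mechanism, a spectrally accurate time derivative of a periodic signal via the discrete Fourier transform, can be sketched in a few lines (an illustration of the Fourier-in-time idea only, not the paper's full Euler/Navier-Stokes solver):

```python
import numpy as np

def spectral_time_derivative(u, period):
    """Time derivative of a periodic signal sampled at N equispaced
    instants over one period, computed via FFT (spectral accuracy):
    differentiate by multiplying each Fourier mode by 2*pi*i*k."""
    n = len(u)
    k = np.fft.fftfreq(n, d=period / n)   # discrete frequencies (cycles/time)
    return np.real(np.fft.ifft(2j * np.pi * k * np.fft.fft(u)))

# With only 16 samples per period, the derivative of sin(t) is recovered
# to machine precision -- the property the Time Spectral method exploits.
t = np.linspace(0, 2 * np.pi, 16, endpoint=False)
du = spectral_time_derivative(np.sin(t), period=2 * np.pi)
```

This is why so few time intervals per pitching cycle suffice: for smooth periodic flows the Fourier representation converges far faster than a finite-difference time march.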
Abstract: The main aim of this work is to establish the capability of new green buildings to achieve off-grid electricity generation based on the integration of wind turbines in the conceptual model of a rotating tower [2] in Dubai. An in-depth performance analysis of the WinWind 3.0 MW [3] wind turbine is performed. Data from the Dubai Meteorological Services is collected and analyzed in conjunction with the performance analysis of this wind turbine. The mathematical model is compared with Computational Fluid Dynamics (CFD) results based on a conceptual rotating tower design model. The comparison results are further validated and verified for accuracy by conducting experiments on a scaled prototype of the tower design. The study concluded that integrating wind turbines inside a rotating tower can generate enough electricity to meet the power consumption of the building, equivalent to a wind farm of 9 horizontal-axis wind turbines occupying an area of approximately 3,237,485 m² [14].
Abstract: A combination of image fusion and the quad-tree decomposition method is used for detecting the sunspot trajectories in each month and computing the latitudes of these trajectories in each solar hemisphere. Daily solar images taken with the SOHO satellite are fused for each month, and the fused image is decomposed with the quad-tree decomposition method in order to classify the sunspot trajectories and obtain precise information about their latitudes. The fusion also allows us to deduce some remarkable physical conclusions about the behavior of the Sun's magnetic fields. Using quad-tree decomposition, we obtain information about the region on the Sun's surface and the solid angle through which tremendous flares and hot plasma gases permeate interplanetary space and strike satellites and human technical systems. Here, sunspot images from June, July and August 2001 are used to demonstrate a method for computing the latitude of the sunspot trajectories in each month.
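Quad-tree decomposition itself is a standard recursive split; a minimal sketch follows (the homogeneity test here is a simple max-min intensity threshold, an assumption for illustration, since the abstract does not specify the paper's exact split criterion):

```python
import numpy as np

def quadtree(block, threshold, x=0, y=0, min_size=2):
    """Recursively split a square image block into four quadrants until
    each leaf is homogeneous (max-min intensity <= threshold) or minimal.
    Returns a list of leaves as (row, col, size) tuples."""
    size = block.shape[0]
    if size <= min_size or block.max() - block.min() <= threshold:
        return [(x, y, size)]
    h = size // 2
    return (quadtree(block[:h, :h], threshold, x, y, min_size)
            + quadtree(block[:h, h:], threshold, x, y + h, min_size)
            + quadtree(block[h:, :h], threshold, x + h, y, min_size)
            + quadtree(block[h:, h:], threshold, x + h, y + h, min_size))

# A bright "sunspot" in one corner of a dark 8x8 image: the quadrant
# containing it is split further while homogeneous regions stay whole.
img = np.zeros((8, 8))
img[0:2, 0:2] = 1.0
leaves = quadtree(img, threshold=0.5)
```

The small leaves cluster around the active region, which is what makes the decomposition useful for localizing sunspot trajectories in the fused image.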
Abstract: This paper presents an averaging model of a buck converter derived from the generalized state-space averaging method. Sliding mode control is used to regulate the output voltage of the converter and is taken into account in the model. The proposed model requires far less computational time than the full topology model. Intensive time-domain simulations via the exact topology model are used for comparison. The results show that good agreement between the proposed model and the switching model is achieved in both transient and steady-state responses. The reported model is suitable for optimal controller design using artificial intelligence techniques.
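The speedup of averaging comes from replacing the switching waveform with its duty-cycle average. A sketch of the textbook averaged buck model follows (illustrative only: open-loop with a fixed duty cycle and hypothetical component values, without the paper's sliding-mode controller):

```python
def simulate_averaged_buck(vin=12.0, duty=0.5, L=1e-3, C=100e-6, R=10.0,
                           dt=1e-6, steps=50000):
    """Forward-Euler simulation of the state-space averaged buck converter:
        L di/dt = d*Vin - v,    C dv/dt = i - v/R
    where d is the duty cycle. No switching events are resolved, which is
    why the averaged model is so much cheaper than the full topology."""
    i, v = 0.0, 0.0  # inductor current, capacitor (output) voltage
    for _ in range(steps):
        di = (duty * vin - v) / L
        dv = (i - v / R) / C
        i += dt * di
        v += dt * dv
    return v

vout = simulate_averaged_buck()  # steady state approaches duty * Vin
```

In steady state the averaged model settles at v = d·Vin (6 V here), matching the ideal buck conversion ratio.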
Abstract: In this paper, we propose a novel frequency offset
estimation scheme for orthogonal frequency division multiplexing
(OFDM) systems. By correlating the OFDM signals within the coherence
phase bandwidth and employing a threshold in the frequency
offset estimation process, the proposed scheme is not only robust to
the timing offset but also has a reduced complexity compared with
that of the conventional scheme. Moreover, a timing offset estimation
scheme is also proposed as the next stage of the proposed frequency
offset estimation. Numerical results show that the proposed scheme
can estimate frequency offset with lower computational complexity
and does not require additional memory while maintaining the same
level of estimation performance.
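The abstract does not spell out its estimator, but correlation-based CFO estimation from a repeated training symbol is the standard building block (Moose/Schmidl-Cox style); a sketch under that assumption:

```python
import numpy as np

def estimate_cfo(rx, n):
    """Estimate the normalized carrier frequency offset from a received
    block whose transmitted form has two identical halves of length n.
    A CFO of eps (in subcarrier spacings) rotates the second half by
    exp(j*2*pi*eps) relative to the first, so the angle of their
    correlation recovers eps (unambiguous for |eps| < 0.5)."""
    corr = np.sum(np.conj(rx[:n]) * rx[n:2 * n])
    return np.angle(corr) / (2 * np.pi)

# Noiseless illustration: a repeated random half-symbol with a known CFO.
n = 64
rng = np.random.default_rng(1)
half = rng.normal(size=n) + 1j * rng.normal(size=n)
sym = np.concatenate([half, half])
eps_true = 0.12
rx = sym * np.exp(2j * np.pi * eps_true * np.arange(2 * n) / n)
```

The estimator costs one length-n complex correlation and one arctangent, which is the kind of low, memory-free complexity the abstract claims for its scheme.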
Abstract: A numerical study of a plane jet occurring in a vertical heated channel is carried out. The aim is to explore the influence of the forced flow, issued from a flat nozzle located in the entry section of the channel, on the up-going fluid along the channel walls. The Reynolds number, based on the nozzle width and the jet velocity, ranges between 3×10^3 and 2×10^4, whereas the Grashof number, based on the channel length and the wall temperature difference, is 2.57×10^10. Computations are carried out for a symmetrically heated channel and various nozzle positions. The system of governing equations is solved with a finite volume method. The results show that the jet-wall interactions enhance the heat transfer and that varying the nozzle position modifies the heat transfer, especially at low Reynolds numbers: the heat transfer is enhanced at the adjacent wall but decreased at the opposite one. The numerical velocity and temperature fields are post-processed to compute quantities of engineering interest, such as the induced mass flow rate and the Nusselt number along the plates.
Abstract: Recent satellite projects and programs make extensive use of real-time embedded systems. 16-bit processors that meet the Mil-Std-1750 standard architecture have been used in on-board systems, and most space applications have been written in Ada. From a futuristic point of view, 32-bit and 64-bit processors are needed in the area of spacecraft computing, and therefore an effort is desirable in the study and survey of 64-bit architectures for space applications. This will also result in significant technology development in terms of VLSI and software tools for Ada (as the legacy code is in Ada).
There are several basic requirements for a special processor for this purpose. They include radiation-hardened (RadHard) devices, very low power dissipation, compatibility with existing operational systems, scalable architectures for higher computational needs, reliability, higher memory and I/O bandwidth, predictability, a real-time operating system, and manufacturability of such processors. Further considerations include the selection of FPGA devices, selection of EDA tool chains, design flow, partitioning of the design, pin count, performance evaluation, timing analysis, etc.
This project comprises a brief study of 32-bit and 64-bit processors readily available in the market and the design and fabrication of a 64-bit RISC processor, named RISC MicroProcessor, with the added functionalities of an extended double-precision floating point unit and a 32-bit signal processing unit acting as co-processors. In this paper, we emphasize the ease and importance of using an open core (the OpenSparc T1 Verilog RTL) and open-source EDA tools such as Icarus to develop FPGA-based prototypes quickly. Commercial tools such as Xilinx ISE are also used for synthesis when appropriate.
Abstract: Optimization is often a critical issue in system design problems. Evolutionary Algorithms (EAs) are population-based, stochastic search techniques widely used as efficient global optimizers. However, finding the optimal solution to complex, high-dimensional, multimodal problems often requires highly computationally expensive function evaluations and is hence practically prohibitive. The Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model presented in our earlier work [14] reduced computation time through the controlled use of meta-models that partially replace actual function evaluations with approximate ones. However, the underlying assumption in DAFHEA is that the training samples for the meta-model are generated from a single uniform model. Situations such as model formation involving variable input dimensions and noisy data certainly cannot be covered by this assumption. In this paper we present an enhanced version of DAFHEA that incorporates a multiple-model based learning approach for the SVM approximator. DAFHEA-II (the enhanced version of the DAFHEA framework) also overcomes the high computational expense of the additional clustering required by the original DAFHEA framework. The proposed framework has been tested on several benchmark functions, and the empirical results illustrate the advantages of the proposed technique.
Abstract: This study comprehensively simulates the use of the k-ε model for predicting flow and heat transfer, comparing against measured flow-field data in a stationary duct and elucidating the detailed physics encountered in the fully developed flow region and the sharp 180° bend region. Among the major flow features predicted with accuracy are the flow transition at the entrance of the duct; the distribution of mean and turbulent quantities in the developing, fully developed, and sharp 180° bend regions; the development of secondary flows in the duct cross-section and the sharp 180° bend; and the heat transfer augmentation. Turbulence intensities in the sharp 180° bend are found to reach high values, and local heat transfer comparisons show that the heat transfer augmentation shifts towards the wall and along the duct. Understanding the unsteady heat transfer in sharp 180° bends is therefore important. The design and simulation draw on concepts from fluid mechanics, heat transfer and thermodynamics. A simulation study has been conducted on the response of turbulent flow in a rectangular duct in order to evaluate the heat transfer rate along a small-scale multiple rectangular duct.
Abstract: In this paper, a new algorithm for generating a codebook for vector quantization (VQ) in image coding is proposed. The significant features of the training image vectors are extracted using the proposed Orthogonal Polynomials based transformation. We propose to generate the codebook by partitioning these feature vectors into a binary tree. Each feature vector at a non-terminal node of the binary tree is directed to one of its two descendants by comparing a single feature associated with that node to a threshold. The binary tree codebook is used for encoding and decoding the feature vectors. In the decoding process, the feature vectors are subjected to an inverse transformation with the help of the basis functions of the proposed Orthogonal Polynomials based transformation to recover approximations of the input image training vectors. The results of the proposed coding are compared with VQ using the Discrete Cosine Transform (DCT) and the Pairwise Nearest Neighbor (PNN) algorithm. The new algorithm achieves a considerable reduction in computation time and provides better reconstructed picture quality.
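The one-comparison-per-node structure described above can be sketched as follows. The split rule here (threshold = mean of the highest-variance feature at the node) is an assumption for illustration; the paper derives its features from the Orthogonal Polynomials transform and does not specify this rule.

```python
import numpy as np

def build_tree(vectors, depth):
    """Binary-tree codebook: each internal node stores a (feature index,
    threshold) pair; each leaf stores the centroid of the training vectors
    that reach it. Split rule is illustrative: threshold at the mean of
    the highest-variance feature among the node's vectors."""
    if depth == 0 or len(vectors) <= 1:
        return {"leaf": vectors.mean(axis=0)}
    f = int(np.argmax(vectors.var(axis=0)))    # the single feature compared
    thr = float(vectors[:, f].mean())
    left = vectors[vectors[:, f] <= thr]
    right = vectors[vectors[:, f] > thr]
    if len(left) == 0 or len(right) == 0:      # degenerate split: stop
        return {"leaf": vectors.mean(axis=0)}
    return {"feature": f, "thr": thr,
            "left": build_tree(left, depth - 1),
            "right": build_tree(right, depth - 1)}

def encode(tree, v):
    """Descend with one scalar comparison per level; return leaf centroid."""
    while "leaf" not in tree:
        tree = tree["left"] if v[tree["feature"]] <= tree["thr"] else tree["right"]
    return tree["leaf"]

# Two well-separated clusters of 2-D feature vectors (toy data).
data = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
tree = build_tree(data, depth=1)
```

Encoding costs one scalar comparison per tree level rather than a full-codebook nearest-neighbor search, which is where the computation-time reduction comes from.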
Abstract: Three-dimensional reconstruction of small objects has
been one of the most challenging problems over the last decade.
Computer graphics researchers and photography professionals have
been working on improving 3D reconstruction algorithms to fit the
high demands of various real life applications. Medical sciences,
animation industry, virtual reality, pattern recognition, tourism
industry, and reverse engineering are common fields where 3D
reconstruction of objects plays a vital role. Lack of accuracy and high computational cost are the major challenges facing successful 3D reconstruction. Fringe projection has emerged as a promising 3D reconstruction direction that combines low computational cost with both high precision and high resolution. It employs digital projection, structured light systems and phase analysis of fringed pictures. Research studies have shown that the system has acceptable performance and, moreover, is insensitive to ambient light.
This paper presents an overview of fringe projection approaches. It also presents an experimental study and implementation of a simple fringe projection system. We tested our system on two objects with different materials and levels of detail. Experimental results show that, while our system is simple, it produces acceptable results.
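The phase-analysis step mentioned above is commonly done with the standard four-step phase-shifting formula; a minimal sketch follows (this is the generic textbook method, which may differ from the specific phase analysis the paper implements):

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images with pi/2 phase shifts:
        I_k = A + B*cos(phi + (k-1)*pi/2),  k = 1..4
    which gives  phi = atan2(I4 - I2, I1 - I3), wrapped to (-pi, pi].
    The bias A and modulation B cancel out of the ratio."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic fringes over a known phase ramp recover that ramp exactly.
phi = np.linspace(-1.0, 1.0, 100)     # true phase (radians), no wrapping
a, b = 0.5, 0.4                       # arbitrary bias and modulation
imgs = [a + b * np.cos(phi + k * np.pi / 2) for k in range(4)]
```

Because the bias and modulation cancel, the recovered phase is largely independent of ambient illumination, which is one reason fringe projection systems tolerate ambient light well.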