Abstract: Morphogenesis is the process that underpins the self-organised development and regeneration of biological systems. The ability to mimic morphogenesis in artificial systems has great potential for many engineering applications, including the production of biological tissue, the design of robust electronic systems and the co-ordination of parallel computing. Previous attempts to mimic these complex dynamics within artificial systems have relied upon evolutionary algorithms, which have limited their size and complexity. This paper presents some insight into the underlying dynamics of morphogenesis, and then shows how to design, without the assistance of evolutionary algorithms, cellular architectures that converge to complex patterns.
Abstract: The dynamics of a Bertrand duopoly game are analyzed, in which the players use different production methods and choose their prices with bounded rationality. The equilibria of the corresponding discrete dynamical system are investigated. The stability conditions of the Nash equilibrium under a local adjustment process are studied. As some parameters of the model are varied, the Nash equilibrium loses stability, giving rise to complex dynamics such as cycles of higher order and chaos. On this basis, we find that an increase in the adjustment speed of the boundedly rational player can drive the Bertrand market into a chaotic state. Finally, the complex dynamics, bifurcations and chaos are illustrated by numerical simulation.
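A minimal simulation sketch of this kind of model is given below, using a common boundedly rational (myopic gradient) price adjustment rule p_i(t+1) = p_i(t) + alpha_i p_i(t) dPi_i/dp_i. The linear demand, the asymmetric costs c1, c2 and the adjustment speeds alpha1, alpha2 are illustrative assumptions, not the paper's exact specification; raising the adjustment speeds past the stability threshold of the Nash equilibrium is what produces the cycles and chaos described above.

import numpy as np

# Illustrative Bertrand duopoly with boundedly rational price adjustment:
#   p_i(t+1) = p_i(t) + alpha_i * p_i(t) * dPi_i/dp_i
# Demand q_i = a - b*p_i + d*p_j and asymmetric costs c1, c2 are assumed
# for illustration only.
a, b, d = 10.0, 1.0, 0.5        # demand parameters (assumed)
c1, c2 = 1.0, 2.0               # asymmetric marginal costs (assumed)
alpha1, alpha2 = 0.08, 0.07     # adjustment speeds (kept below the stability threshold)

def marginal_profit(p_own, p_other, c_own):
    """d/dp_own of (p_own - c_own) * (a - b*p_own + d*p_other)."""
    return a - 2.0 * b * p_own + d * p_other + b * c_own

p1, p2 = 3.0, 3.0               # initial prices
trajectory = []
for t in range(200):
    trajectory.append((p1, p2))
    p1_next = p1 + alpha1 * p1 * marginal_profit(p1, p2, c1)
    p2_next = p2 + alpha2 * p2 * marginal_profit(p2, p1, c2)
    p1, p2 = p1_next, p2_next

# With small alpha_i the prices settle at the Nash equilibrium; increasing
# alpha_i destabilises it through period doubling toward chaos.
print("last few states:", trajectory[-3:])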
Abstract: Medical image segmentation based on image smoothing followed by edge detection is of great importance in the field of image processing. In this regard, this paper proposes a novel algorithm for medical image segmentation based on vigorous smoothing, achieved by identifying the type of noise, followed by edge detection, an approach that is a boon for medical image diagnosis. The main objective of this algorithm is to take a particular medical image as input, preprocess it to remove the noise content by employing a suitable filter after identifying the type of noise, and finally carry out edge detection for image segmentation. The algorithm consists of three parts. First, the type of noise present in the medical image is identified as additive, multiplicative or impulsive by analysis of local histograms, and the image is denoised by employing a Median, Gaussian or Frost filter accordingly. Second, edge detection of the filtered medical image is carried out using the Canny edge detection technique. The third part is the segmentation of the edge-detected medical image by the method of Normalized Cut eigenvectors. The method is validated through experiments on real images. The proposed algorithm has been simulated on the MATLAB platform. The simulation results show that the proposed algorithm is very effective and can deal with low-quality or marginally vague images with high spatial redundancy, low contrast and considerable noise, and has potential for practical use in medical image diagnosis.
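The pipeline can be sketched as follows in Python (a stand-in for the paper's MATLAB implementation); the noise-type test is a simplified mean/variance heuristic rather than the paper's local-histogram analysis, the Frost filter and the impulsive-noise branch are only indicated by comments, and the Normalized Cut stage is omitted.

import numpy as np
from scipy.ndimage import median_filter
from skimage import data, img_as_float
from skimage.filters import gaussian
from skimage.feature import canny

def classify_noise(image, patch=16):
    """Crude heuristic: relate local variance to local mean over patches."""
    h, w = image.shape
    means, variances = [], []
    for i in range(0, h - patch, patch):
        for j in range(0, w - patch, patch):
            block = image[i:i + patch, j:j + patch]
            means.append(block.mean())
            variances.append(block.var())
    means, variances = np.array(means), np.array(variances)
    # Variance roughly constant -> additive; variance tracking mean^2 -> multiplicative.
    corr = np.corrcoef(means ** 2, variances)[0, 1]
    return "multiplicative" if corr > 0.5 else "additive"

image = img_as_float(data.camera())          # placeholder for a medical image
noise_type = classify_noise(image)

if noise_type == "additive":
    denoised = gaussian(image, sigma=1.0)    # Gaussian filter for additive noise
else:
    denoised = median_filter(image, size=3)  # median here; a Frost filter would be
                                             # used for speckle in the full pipeline

edges = canny(denoised, sigma=1.5)           # Canny edge detection
# The final stage would partition the resulting feature graph with Normalized Cuts.
print(noise_type, int(edges.sum()), "edge pixels")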
Abstract: As a method of expanding higher-order tensor data into tensor products of vectors, we have proposed the Third-order Orthogonal Tensor Product Expansion (3OTPE), which performs an expansion similar to the Higher-Order Singular Value Decomposition (HOSVD). In this paper we provide a computation algorithm that improves our previous method, in which the SVD is applied to the matrix constituted by the contraction of the original tensor data with one of the expansion vectors obtained. The residual of the improved method is smaller than that of the previous method when the expansion is truncated to the same number of terms. Moreover, the residual is smaller than that of HOSVD when applied to color image data. It is confirmed that the computing time of the improved method is the same as that of the previous method and considerably better than that of HOSVD.
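The core "contract with one expansion vector, then apply the SVD" step can be illustrated with the following sketch for a single rank-one term; this is a generic alternating scheme written for illustration, not a reproduction of the authors' exact 3OTPE algorithm.

import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((8, 9, 10))          # toy third-order tensor

c = rng.standard_normal(T.shape[2])
c /= np.linalg.norm(c)

for _ in range(20):
    # Contract the tensor with c along the third mode -> an 8x9 matrix.
    M = np.tensordot(T, c, axes=([2], [0]))
    # The leading singular vectors of M give the first two expansion vectors.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    a, b = U[:, 0], Vt[0]
    # Update c by contracting T with a and b, then renormalise.
    c = np.einsum('ijk,i,j->k', T, a, b)
    sigma = np.linalg.norm(c)
    c = c / sigma

rank1 = sigma * np.einsum('i,j,k->ijk', a, b, c)
print("relative residual:", np.linalg.norm(T - rank1) / np.linalg.norm(T))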
Abstract: AAM has been successfully applied to face alignment, but its performance is very sensitive to initial values. If the initial values are far from the global optimum, there is a good chance that AAM-based face alignment will converge to a local minimum. In this paper, we propose a progressive AAM-based face alignment algorithm which first finds the feature parameter vector fitting the inner facial feature points of the face and then localizes the feature points of the whole face using this information. The proposed progressive AAM-based face alignment algorithm exploits the fact that the feature points of the inner part of the face are less variable and less affected by the background surrounding the face than those of the outer part (such as the chin contour). The proposed algorithm consists of two stages: a modeling and relation derivation stage and a fitting stage. The modeling and relation derivation stage first constructs two AAM models, the inner face AAM model and the whole face AAM model, and then derives the relation matrix between the inner face AAM parameter vector and the whole face AAM model parameter vector. In the fitting stage, the proposed algorithm aligns the face progressively through two phases. In the first phase, the algorithm finds the feature parameter vector fitting the inner face AAM model to a new input face image; in the second phase, it localizes the whole facial feature points of the new input face image based on the whole face AAM model, using the initial parameter vector estimated from the inner feature parameter vector obtained in the first phase and the relation matrix obtained in the first stage. Through experiments, it is verified that the proposed progressive AAM-based face alignment algorithm is more robust with respect to pose, illumination, and face background than the conventional basic AAM-based face alignment algorithm.
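The relation-matrix step can be illustrated with the following sketch, in which a linear least-squares mapping from inner-face to whole-face AAM parameters is assumed; the paper does not spell out the derivation, so both the fitting criterion and the toy data are assumptions.

import numpy as np

rng = np.random.default_rng(1)
n_train, d_inner, d_whole = 200, 12, 20

# Inner-face and whole-face AAM parameter vectors for the training images
# (random placeholders with a roughly linear relationship).
P_inner = rng.standard_normal((n_train, d_inner))
P_whole = P_inner @ rng.standard_normal((d_inner, d_whole)) \
          + 0.01 * rng.standard_normal((n_train, d_whole))

# Relation matrix: least-squares solution of P_inner @ R ~= P_whole.
R, *_ = np.linalg.lstsq(P_inner, P_whole, rcond=None)

# Fitting stage, phase 2: estimate initial whole-face parameters for a new
# image from the inner-face parameters found in phase 1.
p_inner_new = rng.standard_normal(d_inner)     # stand-in for the phase-1 result
p_whole_init = p_inner_new @ R                 # initial vector for the whole-face AAM fit
print(p_whole_init.shape)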
Abstract: In this paper, we present symbolic recognition models to extract knowledge characterized by document structures. Focussing on the extraction and the meticulous exploitation of the semantic structure of documents, we obtain a meaningful contextual tagging corresponding to different unit types (title, chapter, section, enumeration, etc.).
Abstract: An experimental study is presented on the effect of microstructural change on the Portevin-Le Chatelier (PLC) effect in an Al-2.5%Mg alloy. Tensile tests are performed on as-received and heat-treated (at 400 °C for 16 hours) samples over a wide range of strain rates. The serrations observed in the stress-time curves are investigated from a statistical point of view. The microstructures of the samples are characterized by optical metallography and X-ray diffraction. It is found that the excess vacancies generated by the heat treatment lead to a decrease in the strain rate sensitivity and an increase in the number of stress drops per unit time during the PLC effect. Microstructural parameters such as domain size and dislocation density have no appreciable effect on the PLC effect as far as the statistical behavior of the serrations is concerned.
Abstract: Today, design requirements are extending more and more from electronic (analogue and digital) design to multidisciplinary design. These needs imply the implementation of methodologies that make the CAD product reliable in order to improve time to market, study costs, and the reusability and reliability of the design process. This paper proposes a high-level design approach applied to the characterization and optimization of switched-current Sigma-Delta modulators. It uses the new hardware description language VHDL-AMS to help designers optimize the characteristics of the modulator at a high level, with considerably reduced CPU time, before passing to transistor-level characterization.
Abstract: One of the most important power quality issues is voltage flicker, and it now affects power systems all over the world as ever more, and larger, wind generators are installed. Under unstable wind conditions, variations in output current and voltage cause voltage flicker. Hence, the major purpose of this study is to analyze the impact of wind generators on the voltage flicker of a power system. First, digital simulation and analysis are carried out for a wind generator operating under various system short-circuit capacities, impedance angles, loadings, and load power factors. The simulation results have been confirmed by field measurements.
Abstract: The history of technology and banking is examined as it relates to risk and technological determinism. It is proposed that the services that banks offer are determined by technology and that banks must adopt new technologies to be competitive. The adoption of technologies paradoxically forces the adoption of other new technologies to protect the bank from the increased risk brought by technology. This cycle will lead bank examiners and regulators to focus on human behavior, not on the ever-changing technology.
Abstract: In this study, the theoretical relationship between pressure and density was investigated for cylindrical hollow fuel briquettes produced from a mixture of fibrous biomass material using a screw press without any chemical binder. The fuel briquettes were made of biomass and other waste materials such as spent coffee beans, mielie husks, sawdust and coal fines under pressures of 0.878-2.2 MPa. The material was densified into briquettes with an outer diameter of 100 mm, an inner diameter of 35 mm and a length of 50 mm. It was observed that the manual screw compression action produces briquettes of relatively low density compared with those made using hydraulic compression. The pressure-density relationship was obtained in the form of a power law and compares well with that of other cylindrical solid briquettes made using hydraulic compression. The produced briquettes have a dry density of 989 kg/m3 and contain 26.30% fixed carbon, 39.34% volatile matter, 10.9% moisture and 10.46% ash as per dry proximate analysis. Bomb calorimeter tests have shown the briquettes yielding a gross calorific value of 18.9 MJ/kg.
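The power-law fit mentioned above can be illustrated as follows; the data points are placeholders rather than the measured briquette values, and only the log-log fitting procedure is shown.

import numpy as np

# Fit the pressure-density power law rho = a * P^b by linear regression in
# log-log space: ln(rho) = ln(a) + b * ln(P). The sample points are assumed.
pressure = np.array([0.878, 1.2, 1.6, 2.0, 2.2])            # MPa (placeholder)
density = np.array([820.0, 880.0, 930.0, 970.0, 989.0])     # kg/m^3 (placeholder)

b, ln_a = np.polyfit(np.log(pressure), np.log(density), 1)
a = np.exp(ln_a)
print(f"rho ~= {a:.1f} * P^{b:.3f}")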
Abstract: Script identification is one of the challenging steps in the development of optical character recognition systems for bilingual or multilingual documents. In this paper an attempt is made to identify English numerals at the word level in Punjabi documents using Gabor features. A support vector machine (SVM) classifier with five-fold cross-validation is used to classify the word images. The results obtained are quite encouraging: average accuracy with RBF, polynomial and linear kernel functions is greater than 99%.
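A sketch of the classification set-up (Gabor feature extraction followed by an SVM with five-fold cross-validation) is given below; the filter bank, the feature statistics and the placeholder word images are illustrative assumptions, not the paper's exact configuration.

import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def gabor_features(image):
    """Mean and standard deviation of Gabor magnitude responses (4 orientations, 2 frequencies)."""
    feats = []
    for frequency in (0.2, 0.4):
        for theta in np.arange(0, np.pi, np.pi / 4):
            real, imag = gabor(image, frequency=frequency, theta=theta)
            magnitude = np.hypot(real, imag)
            feats.extend([magnitude.mean(), magnitude.std()])
    return feats

rng = np.random.default_rng(0)
word_images = rng.random((40, 32, 64))            # placeholder word images
labels = rng.integers(0, 2, size=40)              # 0 = Punjabi word, 1 = English numeral (placeholder)

X = np.array([gabor_features(img) for img in word_images])
scores = cross_val_score(SVC(kernel="rbf", gamma="scale"), X, labels, cv=5)
print("five-fold accuracy:", scores.mean())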
Abstract: This paper presents a novel method for inferring odors from neural activities observed in rats' main olfactory bulbs. Multi-channel extracellular single-unit recordings were made with micro-wire electrodes (tungsten, 50 μm, 32 channels) implanted in the mitral/tufted cell layers of the main olfactory bulb of anesthetized rats to obtain neural responses to various odors. The neural response used as the key feature was measured by subtracting the neural firing rate before the stimulus from that after the stimulus. For odor inference, we developed a decoding method based on maximum likelihood (ML) estimation. The results show that the average decoding accuracy is about 100.0%, 96.0%, 84.0%, and 100.0% for the four rats, respectively. This work has profound implications for a novel brain-machine interface system for odor inference.
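A hedged sketch of maximum-likelihood decoding from per-channel neural responses is given below; a Poisson model of the channel counts is assumed purely for illustration, since the abstract does not specify the likelihood used, and the data are simulated.

import numpy as np

rng = np.random.default_rng(2)
n_odors, n_channels, n_trials = 4, 32, 30

# Simulated training data: trial x channel spike counts for each odor.
true_rates = rng.uniform(2.0, 20.0, size=(n_odors, n_channels))
train = rng.poisson(true_rates[:, None, :], size=(n_odors, n_trials, n_channels))

# ML "training": estimate each odor's mean rate per channel.
rate_hat = train.mean(axis=1) + 1e-6                     # avoid log(0)

def decode(counts):
    """Return the odor maximizing the Poisson log-likelihood of the counts
    (the log(counts!) term is dropped because it is constant across odors)."""
    log_like = (counts * np.log(rate_hat) - rate_hat).sum(axis=1)
    return int(np.argmax(log_like))

# Evaluate on fresh simulated trials.
correct = 0
for odor in range(n_odors):
    for _ in range(20):
        trial = rng.poisson(true_rates[odor])
        correct += decode(trial) == odor
print("decoding accuracy:", correct / (n_odors * 20))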
Abstract: Image compression plays a vital role in today's communication. The limitation in allocated bandwidth leads to slower communication; to increase the rate of transmission within the limited bandwidth, image data must be compressed before transmission. There are basically two types of compression: lossy and lossless. Lossy compression gives more compression than lossless compression, but the accuracy of retrieval is lower for lossy compression than for lossless compression. The JPEG and JPEG 2000 image compression systems use Huffman coding for image compression. The JPEG 2000 coding system uses the wavelet transform, which decomposes the image into different levels, where the coefficients in each sub-band are uncorrelated with the coefficients of the other sub-bands. Embedded Zerotree Wavelet (EZW) coding exploits the multi-resolution properties of the wavelet transform to give a computationally simple algorithm with better performance than existing wavelet-transform-based methods. For further improvement of compression, other coding methods have recently been suggested; an ANN-based approach is one such method. Artificial neural networks have been applied to many problems in image processing and have demonstrated their superiority over classical methods when dealing with noisy or incomplete data in image compression applications. A performance analysis on different images is presented, comparing the EZW coding system with an error backpropagation algorithm. The implementation and analysis show approximately 30% higher accuracy in the retrieved image compared with the existing EZW coding system.
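The ANN side of this comparison can be sketched as a small block autoencoder trained by error backpropagation, as below; the architecture, block size and placeholder data are assumptions, and the EZW stage is not reproduced.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
blocks = rng.random((500, 64))                     # placeholder 8x8 image blocks, flattened

# 64 -> 16 -> 64 autoencoder: the narrow hidden layer acts as a compressed
# code (roughly 4:1), and training uses error backpropagation internally.
autoencoder = MLPRegressor(hidden_layer_sizes=(16,), activation="logistic",
                           max_iter=2000, random_state=0)
autoencoder.fit(blocks, blocks)

reconstructed = autoencoder.predict(blocks)
mse = np.mean((blocks - reconstructed) ** 2)
print("reconstruction MSE:", mse)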
Abstract: A method of gait identification based on nearest neighbor classification, with motion similarity assessed by dynamic time warping, is proposed. Model-based kinematic motion data, represented by joint rotations encoded as Euler angles and unit quaternions, are used. Different pose distance functions in the Euler angle and quaternion spaces are considered. To evaluate the individual character of the movements of particular joints during the gait cycle, joint selection is carried out. To examine the proposed approach, a database containing 353 gaits of 25 humans collected in a motion capture laboratory is used. The obtained results are promising: classification that takes all joints into consideration has an accuracy of over 91%, and analysis of hip joint movements alone allows gaits to be correctly identified with almost 80% precision.
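The classification scheme (dynamic time warping as the similarity measure inside a nearest-neighbour classifier) can be sketched as follows; the pose sequences are random placeholders for the Euler-angle/quaternion data, and a plain Euclidean per-frame distance is assumed instead of the paper's pose distance functions.

import numpy as np

def dtw_distance(seq_a, seq_b):
    """DTW cost between two (frames x features) pose sequences."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])   # per-frame pose distance
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def nearest_neighbour(query, gallery_seqs, gallery_ids):
    distances = [dtw_distance(query, ref) for ref in gallery_seqs]
    return gallery_ids[int(np.argmin(distances))]

rng = np.random.default_rng(4)
gallery_seqs = [rng.random((60 + 5 * k, 12)) for k in range(5)]   # 5 reference gaits (placeholders)
gallery_ids = ["subject_%d" % k for k in range(5)]
query = gallery_seqs[2] + 0.05 * rng.random(gallery_seqs[2].shape)
print(nearest_neighbour(query, gallery_seqs, gallery_ids))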
Abstract: The performance of high-resolution schemes is investigated for unsteady, inviscid and compressible multiphase flows. An Eulerian diffuse interface approach has been chosen for the simulation of multicomponent flow problems. The reduced five-equation and seven-equation models are used with the HLL and HLLC approximations. The authors demonstrate the advantages and disadvantages of both the seven-equation and five-equation models by studying their performance with the HLL and HLLC algorithms on simple test cases. The seven-equation model is based on the two-pressure, two-velocity concept of Baer–Nunziato [10], while the five-equation model is based on a mixture velocity and pressure. The numerical evaluations of the two variants of Riemann solvers have been conducted for the classical one-dimensional air-water shock tube and compared with the analytical solution for error analysis.
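For reference, the HLL intercell flux used by these schemes has the standard form sketched below; the wave-speed estimates and the state/flux vectors of the five- and seven-equation models would come from the full solver, and the toy values shown are Euler-like placeholders. The HLLC variant additionally restores the contact wave between S_L and S_R.

import numpy as np

def hll_flux(U_L, U_R, F_L, F_R, S_L, S_R):
    """HLL intercell flux from left/right states, physical fluxes and wave-speed estimates."""
    if S_L >= 0.0:
        return F_L
    if S_R <= 0.0:
        return F_R
    return (S_R * F_L - S_L * F_R + S_L * S_R * (U_R - U_L)) / (S_R - S_L)

# Toy example with 1D Euler-like state vectors (rho, rho*u, E) as placeholders.
U_L = np.array([1.0, 0.0, 2.5]);   F_L = np.array([0.0, 1.0, 0.0])
U_R = np.array([0.125, 0.0, 0.25]); F_R = np.array([0.0, 0.1, 0.0])
print(hll_flux(U_L, U_R, F_L, F_R, S_L=-1.2, S_R=1.2))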
Abstract: Lighvan cheese is basically made from sheep milk in the area of the Sahand mountainside, which is located in the north-west of Iran. The main objective of this study was to investigate the effect of enterococci isolated from traditional Lighvan cheese on the quality of Iranian UF white cheese during ripening. The experimental design was a split plot based on randomized complete blocks; the main plots were four types of starters and the subplots were different ripening durations.
Addition of Enterococcus spp. did not significantly (P
Abstract: Freeze concentration freezes or crystallises the water molecules out as ice crystals and leaves behind a highly concentrated solution. In conventional suspension freeze concentration, where ice crystals form as a suspension in the mother liquor, separation of the ice is difficult. The size of the ice crystals is very limited, which requires the use of scraped-surface heat exchangers; these are very expensive and account for approximately 30% of the capital cost. This research uses a newer method of freeze concentration, progressive freeze concentration, in which ice crystals are formed as a layer on a designed heat exchanger surface. In this particular research, a helically structured copper crystallisation chamber was designed and fabricated. The effect of two operating conditions on the performance of the newly designed crystallisation chamber was investigated, namely circulation flow rate and coolant temperature. The performance of the design was evaluated by the effective partition constant, K, calculated from the volume and concentration of the solid and liquid phases. The system was also monitored by a data acquisition tool in order to follow the temperature profile throughout the process. On completing the experimental work, it was found that a higher flow rate resulted in a lower K, which translates into higher efficiency; the efficiency is highest at 1000 ml/min. It was also found that the process gives the highest efficiency at a coolant temperature of -6 °C.
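The effective partition constant can be computed from the measured volumes and concentrations; the sketch below uses one common mass-balance form, ln(C_L/C_0) = (K - 1) ln(V_L/V_0), which may differ in detail from the expression used in the paper, and the numbers are placeholders.

import numpy as np

def effective_partition_constant(V0, VL, C0, CL):
    """K from a solute mass balance over the remaining liquid phase:
    ln(CL/C0) = (K - 1) * ln(VL/V0)."""
    return 1.0 + np.log(CL / C0) / np.log(VL / V0)

# Placeholder run: 1.0 L of a 10% solution concentrated to 0.6 L at 14%.
K = effective_partition_constant(V0=1.0, VL=0.6, C0=10.0, CL=14.0)
print(f"K = {K:.3f}  (lower K -> more solute rejected from the ice -> higher efficiency)")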
Abstract: This work proposes an equivalent CMOS model, presents the algorithm created for it, and evaluates the performance of the proposition. In this context, another commonly used model, the ZSTT (Zero Switching Time Transient) model, is chosen for comparison of all the vital features, and the results for the proposed equivalent CMOS model are promising. Excerpts of the created algorithm are also included.
Abstract: Because of their high ductility, aluminum alloys have been widely used as an important base material in metal forming industries. However, the main weak point of these alloys is their low strength, so forming them with conventional methods such as deep drawing and hydroforming has always faced problems such as fracture during the forming process. For this reason, the explosive forming method has recently been recommended for forming these plates. In this paper, free explosive forming of A2024 aluminum alloy is numerically simulated, and the explosion wave propagation process is studied. The results of this simulation can be effective in predicting the quality of the product. These results are compared with an experimental test and show the superiority of this method over similar methods such as hydroforming and deep drawing.