Abstract: Mixture formation prior to ignition plays a key role in diesel combustion. Parametric studies of mixture formation and the ignition process under various injection parameters have received considerable attention for their potential to reduce emissions. The purpose of this study is to clarify the effects of injection pressure on mixture formation and ignition, especially during the ignition delay period, which significantly influences the subsequent combustion process and exhaust emissions. This study investigated the effects of injection pressure on diesel combustion fundamentally using a rapid compression machine. The detailed behavior of mixture formation during the ignition delay period was investigated using a schlieren photography system with a high-speed camera. This method can clearly capture spray evaporation, spray interference, mixture formation, and flame development in real images. The ignition process and flame development were investigated by direct photography using a light-sensitive high-speed color digital video camera. Injection pressure and air motion are important variables that strongly affect fuel evaporation and the endothermic and pyrolysis processes during the ignition delay. An increased injection pressure lengthens spray tip penetration and promotes a greater amount of fuel-air mixing during the ignition delay. The greater quantity of fuel prepared during the ignition delay period thus promotes more rapid heat release.
Abstract: The evaluation and measurement of human body dimensions are achieved by physical anthropometry. This research was conducted in view of the importance of anthropometric indices of the face in forensic medicine, surgery, and medical imaging. The main goal of this research is to optimize facial feature points by establishing a mathematical relationship among facial features and to use the optimized feature points for age classification. Since the selected facial feature points are located in the mouth, nose, eye, and eyebrow regions of the facial images, all desired facial feature points are extracted accurately. In the proposed method, sixteen Euclidean distances are calculated from the eighteen selected facial feature points, both vertically and horizontally, and the mathematical relationships among the horizontal and vertical distances are established. Moreover, it is found that these facial feature distances follow a constant ratio during age progression: the distances between the specified feature points increase as a person ages from childhood, but the ratio of the distances does not change (d = 1.618). Finally, according to the proposed mathematical relationship, four independent feature distances related to eight feature points are selected from the sixteen distances and eighteen feature points, respectively. These four feature distances are used for age classification with a Support Vector Machine (SVM) trained by the Sequential Minimal Optimization (SMO) algorithm, achieving around 96% accuracy. Experimental results show that the proposed system is effective and accurate for age classification.
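As an illustration of this distance-then-classify step, the following Python sketch computes Euclidean distances between labeled facial feature points and feeds them to an SVM. The point names, coordinates, distance pairs, and labels are hypothetical placeholders, not the paper's actual feature set or data.

```python
# Minimal sketch of distance-based age classification, assuming facial
# feature points are already available as (x, y) coordinates.
# All names, coordinates, and labels below are hypothetical.
import numpy as np
from sklearn.svm import SVC  # libsvm's SMO-style solver is used internally

def euclidean(p, q):
    """Euclidean distance between two feature points."""
    return float(np.hypot(p[0] - q[0], p[1] - q[1]))

def feature_vector(points):
    """Build four feature distances from eight feature points.
    The specific point pairs are illustrative, not the paper's selection."""
    pairs = [("eye_l", "eye_r"), ("nose_top", "nose_bottom"),
             ("mouth_l", "mouth_r"), ("eyebrow_l", "eyebrow_r")]
    return [euclidean(points[a], points[b]) for a, b in pairs]

# Hypothetical training data: feature-point dictionaries and age-group labels.
faces = [
    {"eye_l": (30, 40), "eye_r": (70, 40), "nose_top": (50, 45),
     "nose_bottom": (50, 65), "mouth_l": (38, 80), "mouth_r": (62, 80),
     "eyebrow_l": (28, 30), "eyebrow_r": (72, 30)},
    {"eye_l": (25, 45), "eye_r": (80, 45), "nose_top": (52, 50),
     "nose_bottom": (52, 78), "mouth_l": (35, 95), "mouth_r": (70, 95),
     "eyebrow_l": (22, 33), "eyebrow_r": (83, 33)},
]
labels = ["child", "adult"]

X = np.array([feature_vector(f) for f in faces])
clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict(X))
```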
Abstract: Nickel and gold nanoclusters as supported catalysts were analyzed by XAS, XRD, and XPS in order to determine their local, global, and electronic structure. The present study points out a strong deformation of the local structure of the metal due to its interaction with the oxide supports. The average particle size, the mean squares of the microstrain, and the particle size distribution and microstrain functions of the supported Ni and Au catalysts were determined by XRD using the Generalized Fermi Function to approximate the X-ray line profiles. Based on EXAFS analysis, we consider that the local structure of the investigated systems is strongly distorted with respect to the numbers of atomic pairs. The metal-support interaction is confirmed by the shape changes of the probability densities of electron transitions: Ni K edge (1s → continuum and 2p) and Au LIII edge (2p3/2 → continuum, 6s, 6d5/2, and 6d3/2). XPS investigations confirm the metal-support interaction at their interface.
Abstract: Optimal capacitor allocation in distribution systems has been studied for a long time. It is an optimization problem whose objective is to determine the optimal sizes and locations of the capacitors to be installed. In this work, an overview of the capacitor placement problem in distribution systems is briefly introduced. The objective functions and constraints of the problem are listed, and the methodologies for solving the problem are summarized.
Abstract: With the rapid popularization of internet services, it is apparent that the next generation of terrestrial communication systems must be capable of supporting various applications such as voice, video, and data. This paper presents a performance evaluation of turbo-coded mobile terrestrial communication systems, which are capable of providing high-quality services for delay-sensitive (voice or video) and delay-tolerant (text transmission) multimedia applications in urban and suburban areas. Different types of multimedia information require different service qualities, which are generally expressed in terms of a maximum acceptable bit-error rate (BER) and a maximum tolerable latency. The breakthrough discovery of turbo codes allows us to significantly reduce the probability of bit errors with feasible latency. In a turbo-coded system, a trade-off between latency and BER results from the choice of convolutional component codes, interleaver type and size, decoding algorithm, and number of decoding iterations. This trade-off can be exploited for multimedia applications by using optimal and suboptimal combinations of performance parameters to achieve different service qualities. We therefore propose an adaptive framework for turbo-coded wireless multimedia communications that incorporates a set of performance parameters achieving an appropriate set of service qualities, depending on the application's requirements.
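To illustrate the adaptive-selection idea in this abstract, the following sketch picks a turbo-decoder configuration that satisfies an application's BER and latency targets. The configurations, BER figures, and latencies are invented placeholders, not measured results from the paper.

```python
# Illustrative sketch: choose a turbo configuration (interleaver size,
# iterations, algorithm) meeting given QoS targets. Numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class TurboConfig:
    interleaver_size: int     # bits per block
    iterations: int           # decoding iterations
    algorithm: str            # e.g. "log-MAP" or "max-log-MAP"
    est_ber: float            # assumed achievable BER
    est_latency_ms: float     # assumed end-to-end latency

CONFIGS = [
    TurboConfig(256,  4, "max-log-MAP", 1e-3, 5.0),    # low latency (voice-like)
    TurboConfig(1024, 8, "log-MAP",     1e-5, 40.0),   # intermediate
    TurboConfig(4096, 10, "log-MAP",    1e-7, 200.0),  # delay-tolerant data
]

def select_config(max_ber, max_latency_ms):
    """Return the first configuration meeting both QoS constraints, if any."""
    for cfg in CONFIGS:
        if cfg.est_ber <= max_ber and cfg.est_latency_ms <= max_latency_ms:
            return cfg
    return None

print(select_config(max_ber=1e-3, max_latency_ms=10))   # voice-like target
print(select_config(max_ber=1e-6, max_latency_ms=500))  # file-transfer target
```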
Abstract: Effective estimation of software development effort is an important issue during project planning. This study provides a model to predict development effort based on software size estimated with function points. We generalize the average amount of effort spent on each phase of development and give estimates for the effort used in software building, testing, and implementation. Finally, this paper finds a strong correlation between software defects and software size. As the size of software constantly increases, quality remains a matter of major concern.
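A minimal sketch of a function-point-driven effort estimate of the kind described is given below. The productivity rate and the phase split are hypothetical placeholders; the paper's actual coefficients are not reproduced here.

```python
# Sketch of an effort model driven by function points. Coefficients and the
# phase breakdown below are assumed values for illustration only.
def estimate_effort(function_points, hours_per_fp=8.0):
    """Total effort in person-hours, assuming a linear productivity rate."""
    return function_points * hours_per_fp

# Assumed distribution of effort across development phases (illustrative only).
PHASE_SPLIT = {"building": 0.50, "testing": 0.30, "implementation": 0.20}

def effort_by_phase(function_points):
    total = estimate_effort(function_points)
    return {phase: total * share for phase, share in PHASE_SPLIT.items()}

print(effort_by_phase(350))  # e.g. a 350-FP project
```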
Abstract: Some of the students' problems in writing skill stem from inadequate preparation for the writing assignment. Students should be taught how to write well when they arrive in language classes. Having selected a topic, the students examine and explore the theme from as large a variety of viewpoints as their background and imagination make possible. Another strategy is that the students prepare an outline before writing the paper. The comparison between the two mentioned thought-provoking techniques was carried out between two class groups, students of the Islamic Azad University of Dezful who were studying "Writing 2" as their main course. Each class group was assigned to write five compositions separately at different periods of time. A t-test for each pair of exams between the two class groups then showed that the t-observed in each pair was greater than the t-critical. Consequently, the first hypothesis, which states that those who utilize brainstorming as a thought-provoking technique in the prewriting phase are more successful than those who outline their papers before writing, was verified.
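For readers unfamiliar with the reported comparison, the following sketch shows an independent-samples t-test between two groups' composition scores. The scores are invented for demonstration; they are not the study's data.

```python
# Illustrative t-test between two class groups' composition scores.
# The score lists below are hypothetical, not the study's measurements.
from scipy import stats

brainstorming_scores = [15.5, 16.0, 14.8, 17.2, 16.5, 15.9, 16.8]
outlining_scores     = [13.9, 14.5, 15.0, 13.2, 14.1, 14.8, 13.7]

t_observed, p_value = stats.ttest_ind(brainstorming_scores, outlining_scores)
print(f"t = {t_observed:.2f}, p = {p_value:.4f}")
# The hypothesis is supported when t-observed exceeds the critical t value
# (equivalently, when p falls below the chosen significance level).
```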
Abstract: Combustion of sprays is of technological importance, but its flame behavior is not fully understood. Furthermore, the multiplicity of dependent variables such as pressure, temperature, equivalence ratio, and droplet size complicates the study of spray combustion. Fundamental studies on the influence of the presence of liquid droplets have revealed that laminar flames within aerosol mixtures become unstable more readily than gaseous ones, and this increases the practical burning rate. However, fundamental studies on turbulent flames of aerosol mixtures are limited, particularly under near mono-dispersed droplet conditions. In the present work, centrally ignited expanding flames at near-atmospheric pressures are employed to quantify the burning rates in gaseous and aerosol flames. Iso-octane-air aerosols are generated by expansion of the gaseous pre-mixture to produce a homogeneously distributed suspension of fuel droplets. The effects of the presence of droplets and of turbulence velocity on the burning rates of the flame are also investigated.
Abstract: Corner detection and optical flow are common techniques for feature-based video stabilization. However, these algorithms are computationally expensive and must be performed at a reasonable rate. This paper presents an algorithm for discarding irrelevant feature points and maintaining the rest for future use so as to reduce the computational cost. The algorithm starts by initializing a maintained set. The feature points in the maintained set are examined for their accuracy for modeling, and corner detection is required only when the maintained feature points are insufficiently accurate for further modeling. Then, optical flows are computed from the maintained feature points toward the consecutive frame. After that, a motion model is estimated based on the simplified affine motion model and the least-squares method, with outliers belonging to moving objects present; studentized residuals are used to eliminate such outliers. The model estimation and elimination processes repeat until no more outliers are identified. Finally, the entire algorithm repeats along the video sequence, with the points remaining from the previous iteration used as the maintained set. As a practical application, efficient video stabilization can be achieved by exploiting the computed motion models. Our study shows that the number of times corner detection needs to be performed is greatly reduced, thus significantly reducing the computational cost. Moreover, optical flow vectors are computed only for the maintained feature points, not for outliers, further reducing the computational cost. In addition, the feature points remaining after reduction are sufficient for tracking background objects, as demonstrated in a simple video stabilizer based on our proposed algorithm.
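The per-frame loop described above can be sketched with standard OpenCV primitives, as below. The re-detection threshold is an assumption, and RANSAC is used here in place of the paper's studentized-residual outlier test; this is an illustrative simplification, not the authors' implementation.

```python
# Simplified sketch of the feature-maintenance idea using OpenCV.
# MIN_POINTS and the use of RANSAC for outlier rejection are assumptions.
import cv2

MIN_POINTS = 50  # assumed threshold below which corners are re-detected

def stabilize_step(prev_gray, curr_gray, maintained_pts):
    # Re-run corner detection only when too few maintained points remain.
    if maintained_pts is None or len(maintained_pts) < MIN_POINTS:
        maintained_pts = cv2.goodFeaturesToTrack(
            prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=10)
    # Track maintained points into the next frame with sparse optical flow.
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, maintained_pts, None)
    good_prev = maintained_pts[status.ravel() == 1]
    good_next = next_pts[status.ravel() == 1]
    # Estimate a simplified affine motion model; the inlier mask discards
    # outliers on moving objects (RANSAC instead of studentized residuals).
    model, inliers = cv2.estimateAffinePartial2D(good_prev, good_next)
    kept = good_next[inliers.ravel() == 1].reshape(-1, 1, 2)
    return model, kept  # 'kept' becomes the maintained set for the next frame
```

The returned model can then be smoothed over time and inverted to warp each frame, which is how a stabilizer would consume these motion estimates.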
Abstract: Dynamic spectrum allocation solutions such as cognitive radio networks have been proposed as a key technology to exploit frequency segments that are spectrally underutilized. Cognitive radio users work as secondary users who need to constantly and rapidly sense the presence of primary users or licensees in order to utilize their frequency bands when they are inactive. Secondary users should run short sensing cycles to achieve higher throughput rates as well as to keep interference to the primary users low by immediately vacating their channels once the primary users are detected. In this paper, the throughput-sensing time relationship in local and cooperative spectrum sensing is investigated under two distinct scenarios, namely constant primary user protection (CPUP) and constant secondary user spectrum usability (CSUSU). The simulation results show that the design of the sensing slot duration is critical and depends on the number of cooperating users under the CPUP scenario, whereas under CSUSU, adding more cooperating users has no effect once the sensing time exceeds 5% of the total frame duration.
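The throughput-sensing time relationship for the CPUP case (detection probability held constant) can be illustrated with the standard energy-detection false-alarm expression, as in the sketch below. All parameter values (frame duration, sampling rate, SNR, idle probability, rate) are assumptions for illustration and are not the paper's simulation settings.

```python
# Illustrative sensing-throughput trade-off under constant primary-user
# protection, using the standard energy-detection false-alarm formula.
import numpy as np
from scipy.stats import norm

T = 100e-3            # frame duration (s), assumed
fs = 6e6              # sampling rate (Hz), assumed
snr = 10**(-15/10)    # primary-user SNR at the detector (-15 dB), assumed
Pd = 0.9              # constant primary-user protection target
C0 = 6.66             # achievable rate when the channel is idle (bit/s/Hz), assumed
P_idle = 0.8          # probability the primary user is inactive, assumed

def throughput(tau):
    """Secondary-user throughput as a function of sensing time tau (s)."""
    n = tau * fs      # number of sensing samples
    pfa = norm.sf(np.sqrt(2*snr + 1) * norm.isf(Pd) + np.sqrt(n) * snr)
    return (T - tau) / T * C0 * P_idle * (1 - pfa)

for tau_ms in (0.5, 1, 2, 5, 10):
    print(f"tau = {tau_ms:4.1f} ms  ->  R = {throughput(tau_ms*1e-3):.3f} bit/s/Hz")
```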
Abstract: Xanthan gum is one of the major commercial biopolymers. Due to its excellent rheological properties, xanthan gum is used in many applications, mainly in the food industry. Commercial production of xanthan gum uses glucose as the carbon substrate; consequently, the price of xanthan production is high. One way to decrease the price of xanthan is to use cheaper substrates such as agricultural wastes. Iran is one of the biggest date-producing countries, yet approximately 50% of its date production is wasted annually. The goal of this study is to produce xanthan gum from waste dates using Xanthomonas campestris PTCC1473 by submerged fermentation. The effects of three variables, phosphorus amount, nitrogen amount, and agitation rate, each at three levels, were studied using response surface methodology (RSM). Statistical analysis with Design Expert 7.0.0 software showed that xanthan production increased with an increasing level of phosphorus, while a low level of nitrogen led to higher xanthan production. Increasing agitation also had a positive influence on the xanthan amount. The statistical model identified the optimum conditions for xanthan production as a nitrogen amount of 3.15 g/l, a phosphorus amount of 5.03 g/l, and agitation of 394.8 rpm. To validate the model, experiments at the optimum conditions were carried out; the mean result for xanthan was 6.72 ± 0.26, close to the value predicted by RSM.
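The RSM workflow of fitting a second-order model and locating its optimum can be sketched as below. The design points and responses are invented placeholders (the study's data are not reproduced), and the quadratic fit is only a generic stand-in for the Design Expert analysis.

```python
# Sketch of the response-surface idea: fit a second-order model to
# (nitrogen, phosphorus, agitation) -> xanthan data and locate its maximum.
# All data values below are hypothetical.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

# Hypothetical design points: [nitrogen g/l, phosphorus g/l, agitation rpm]
X = np.array([[1, 2, 200], [1, 6, 200], [5, 2, 200], [5, 6, 200],
              [1, 2, 500], [1, 6, 500], [5, 2, 500], [5, 6, 500],
              [3, 4, 350]])
y = np.array([3.1, 4.0, 2.8, 3.5, 3.9, 5.2, 3.4, 4.6, 5.8])  # xanthan response

poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)

# Maximize the fitted surface (minimize its negative) within the design ranges.
res = minimize(lambda v: -model.predict(poly.transform([v]))[0],
               x0=[3, 4, 350], bounds=[(1, 5), (2, 6), (200, 500)])
print("optimum (N, P, rpm):", res.x, " predicted xanthan:", -res.fun)
```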
Abstract: Titanium gels doped with a water-soluble cationic porphyrin were synthesized by the sol–gel polymerization of Ti(OC4H9)4. In this work we investigate the spectroscopic properties, along with SEM images, of tetra carboxyl phenyl porphyrin when incorporated into a porous matrix produced by the sol–gel technique.
Abstract: In this paper, we propose a block-wise watermarking scheme for color image authentication to resist malicious tampering of digital media. A thresholding technique is incorporated into the scheme so that the tampered region of the color image can be recovered with high quality while the proofing result is obtained. The watermark for each block consists of its dual authentication data and the corresponding feature information, where the feature information for recovery is computed by the thresholding technique. In the proofing process, we propose a dual-option parity check method to verify the validity of image blocks. In the recovery process, the feature information of each block embedded into the color image is rebuilt for high-quality recovery. The simulation results show that the proposed watermarking scheme can effectively detect the tampered region with a high detection rate and can recover the tampered region with high quality.
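To make the block-wise detection idea concrete, the following heavily simplified sketch stores one parity bit per 8x8 block in the block's least significant bits and flags blocks whose stored parity no longer matches. This is not the paper's dual-option parity-check or recovery scheme; the block size, embedding rule, and test image are assumptions.

```python
# Highly simplified block-wise tamper detection (illustrative only).
import numpy as np

BLOCK = 8

def embed(channel):
    """Embed a per-block parity bit of the non-LSB content into each block."""
    wm = channel.copy()
    for r in range(0, wm.shape[0], BLOCK):
        for c in range(0, wm.shape[1], BLOCK):
            blk = wm[r:r+BLOCK, c:c+BLOCK]
            parity = int((blk >> 1).sum() % 2)   # parity of non-LSB content
            blk &= 0xFE                          # clear LSBs
            blk[0, 0] |= parity                  # store parity in one pixel
    return wm

def verify(channel):
    """Return a boolean map of blocks whose stored parity no longer matches."""
    tampered = []
    for r in range(0, channel.shape[0], BLOCK):
        row = []
        for c in range(0, channel.shape[1], BLOCK):
            blk = channel[r:r+BLOCK, c:c+BLOCK]
            row.append(int((blk >> 1).sum() % 2) != int(blk[0, 0] & 1))
        tampered.append(row)
    return np.array(tampered)

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # placeholder channel
marked = embed(img)
marked[20, 20] ^= 0x02              # flip one non-LSB bit to simulate tampering
print(np.argwhere(verify(marked)))  # block indices flagged as tampered
```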
Abstract: Leptospirosis is recognized as an important zoonosis in tropical regions as well as an important animal disease causing substantial losses in production. In this study, a model for the transmission of leptospirosis to the human population is discussed. The model describes the vector population dynamics and the transmission of leptospirosis to the human population. A local analysis of the equilibria is given, and the results are confirmed numerically.
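A generic vector-host compartmental model of the kind described can be solved numerically as sketched below. The compartments, parameters, and initial values are assumptions for illustration and are not the paper's actual model or estimates.

```python
# Sketch of a generic vector-host transmission model solved with SciPy.
# All parameter values and initial conditions are hypothetical.
from scipy.integrate import solve_ivp

beta_vh, beta_hv = 0.0003, 0.0002   # vector-to-human / human-to-vector rates (assumed)
gamma = 0.1                          # human recovery rate (assumed)
mu_v, b_v = 0.05, 0.05               # vector death and birth rates (assumed)

def rhs(t, y):
    Sh, Ih, Rh, Sv, Iv = y
    dSh = -beta_vh * Sh * Iv
    dIh = beta_vh * Sh * Iv - gamma * Ih
    dRh = gamma * Ih
    dSv = b_v * (Sv + Iv) - beta_hv * Sv * Ih - mu_v * Sv
    dIv = beta_hv * Sv * Ih - mu_v * Iv
    return [dSh, dIh, dRh, dSv, dIv]

y0 = [990, 10, 0, 480, 20]           # initial human and vector populations
sol = solve_ivp(rhs, (0, 200), y0)
print("infected humans at t = 200:", sol.y[1, -1])
```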
Abstract: Large volumes of fingerprints are collected and stored every day in a wide range of applications, including forensics and access control. This is evident from the database of the Federal Bureau of Investigation (FBI), which contains more than 70 million fingerprints. Compression of this database is very important because of its high volume. The performance of existing image coding standards generally degrades at low bit rates because of the underlying block-based Discrete Cosine Transform (DCT) scheme. Over the past decade, the success of wavelets in solving many different problems has contributed to their unprecedented popularity. Due to implementation constraints, scalar wavelets do not possess all the properties needed for better compression performance. A new class of wavelets called multiwavelets, which possess more than one scaling filter, overcomes this problem. The objective of this paper is to develop an efficient compression scheme and to obtain better quality and a higher compression ratio through the multiwavelet transform and embedded coding of the multiwavelet coefficients with the Set Partitioning In Hierarchical Trees (SPIHT) algorithm. A comparison of the best-known multiwavelets is made against the best-known scalar wavelets. Both quantitative and qualitative measures of performance are examined for fingerprints.
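As a simplified stand-in for the transform-domain coding pipeline, the sketch below applies a scalar wavelet transform and keeps only the largest coefficients. It does not implement the paper's multiwavelet transform or the SPIHT coder; the wavelet choice, decomposition level, retention ratio, and test image are assumptions.

```python
# Simplified scalar-wavelet thresholding as a stand-in for multiwavelet/SPIHT coding.
import numpy as np
import pywt

def compress(image, wavelet="db4", level=3, keep_ratio=0.05):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    # Keep only the largest-magnitude coefficients (crude rate control).
    thresh = np.quantile(np.abs(arr), 1 - keep_ratio)
    arr[np.abs(arr) < thresh] = 0
    return arr, slices, wavelet

def decompress(arr, slices, wavelet):
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)

img = np.random.rand(128, 128)                      # placeholder fingerprint image
recon = decompress(*compress(img))[:128, :128]
print("PSNR:", 10 * np.log10(1.0 / np.mean((img - recon) ** 2)))
```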
Abstract: Identity verification of authentic persons from their multiview faces is a challenging real-world problem in machine vision, since multiview faces have a non-linear representation in the feature space. This paper illustrates the usability of a generalization of LDA, in the form of canonical covariates, for multiview face recognition. In the proposed work, a Gabor filter bank is used to extract facial features characterized by spatial frequency, spatial locality, and orientation. The Gabor face representation captures a substantial amount of the variation among face instances that often arises from changes in illumination, pose, and facial expression. Convolution of the Gabor filter bank with face images of rotated profile views produces Gabor faces with high-dimensional feature vectors. Canonical covariates are then applied to the Gabor faces to reduce the high-dimensional feature space to low-dimensional subspaces. Finally, support vector machines are trained on the canonical subspaces, which contain the reduced set of features, and perform the recognition task. The proposed system is evaluated on the UMIST face database. The experimental results demonstrate the efficiency and robustness of the proposed system, with high recognition rates.
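The filter-then-reduce-then-classify pipeline can be sketched as below. The filter-bank parameters and data are illustrative assumptions, and scikit-learn's LinearDiscriminantAnalysis is used here as a stand-in for the canonical covariate step.

```python
# Minimal sketch: Gabor filter-bank features, LDA-style reduction, SVM classifier.
import cv2
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def gabor_features(img, scales=(4, 8), orientations=4):
    """Concatenate downsampled magnitude responses of a small Gabor bank."""
    feats = []
    for lam in scales:
        for k in range(orientations):
            kern = cv2.getGaborKernel((21, 21), sigma=4.0,
                                      theta=np.pi * k / orientations,
                                      lambd=lam, gamma=0.5)
            resp = cv2.filter2D(img.astype(np.float32), cv2.CV_32F, kern)
            feats.append(cv2.resize(np.abs(resp), (8, 8)).ravel())
    return np.concatenate(feats)

# Hypothetical data: random images standing in for two subjects' face views.
rng = np.random.default_rng(0)
images = rng.random((20, 64, 64))
labels = np.repeat([0, 1], 10)

X = np.array([gabor_features(im) for im in images])
X_red = LinearDiscriminantAnalysis(n_components=1).fit_transform(X, labels)
clf = SVC(kernel="rbf").fit(X_red, labels)
print("training accuracy:", clf.score(X_red, labels))
```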
Abstract: Fifty-three college students answered questions regarding the circumstances in which they first heard the news of the Wenchuan earthquake or the news of their acceptance to college, both of which took place approximately one year earlier, and answered the same questions again two years later. The number of details recalled about their circumstances for both events was high and did not decline two years later. However, consistency in the reported details over the two years was low. Participants were more likely to construct central information (e.g., Where were you?) than peripheral information (e.g., What were you wearing?), and their confidence in the central information was higher than in the peripheral information, which indicates that they constructed more when they were more confident.
Abstract: This paper deals with the implementation of face recognition on low-resolution images using a neural network as the recognition classifier. The proposed system contains two parts, preprocessing and face classification. The preprocessing part converts the original images into blurry images using an average filter and equalizes their histograms (lighting normalization). A bi-cubic interpolation function is then applied to the equalized image to obtain a resized image. The resized image is a low-resolution image, which provides faster processing for training and testing. The preprocessed image becomes the input to the neural network classifier, which uses the back-propagation algorithm to recognize familiar faces. The crux of the proposed algorithm is its use of a single neural network as the classifier, which yields a straightforward approach to face recognition. The single neural network consists of three layers with log-sigmoid, hyperbolic tangent sigmoid, and linear transfer functions, respectively. The training function incorporated in our work is gradient descent with momentum and adaptive learning rate back-propagation. The proposed algorithm was trained on the ORL (Olivetti Research Laboratory) database with 5 training images. The empirical results show accuracies of 94.50%, 93.00%, and 90.25% for 20, 30, and 40 subjects respectively, with a time delay of 0.0934 s per image.
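The preprocessing chain described here (average blur, histogram equalization, bi-cubic down-sampling) can be sketched with OpenCV as below. The kernel size, target resolution, and input image are assumptions for illustration.

```python
# Sketch of the described preprocessing: blur, equalize, bi-cubic resize,
# then flatten to a low-resolution input vector for the classifier.
import cv2
import numpy as np

def preprocess(face_gray, out_size=(32, 32)):
    blurred = cv2.blur(face_gray, (3, 3))                 # average filter
    equalized = cv2.equalizeHist(blurred)                 # lighting normalization
    small = cv2.resize(equalized, out_size,
                       interpolation=cv2.INTER_CUBIC)     # bi-cubic resize
    return small.astype(np.float32).ravel() / 255.0       # flattened NN input

img = np.random.randint(0, 256, (112, 92), dtype=np.uint8)  # ORL-sized placeholder
print(preprocess(img).shape)                                 # (1024,)
```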
Abstract: X-ray mammography is the most effective method for the early detection of breast diseases. However, typical diagnostic signs such as microcalcifications and masses are difficult to detect because mammograms are low-contrast and noisy. In this paper, a new algorithm for image denoising and enhancement based on the Orthogonal Polynomials Transformation (OPT) is proposed to help radiologists screen mammograms. In this method, a set of OPT edge coefficients is scaled to a new set by a scale factor called the OPT scale factor. The new set of coefficients is then inverse transformed, resulting in a contrast-improved image. Applications of the proposed method to mammograms with subtle lesions are shown. To validate the effectiveness of the proposed method, we compare the results with those obtained by the Histogram Equalization (HE) and Unsharp Masking (UM) methods. Our preliminary results strongly suggest that the proposed method offers considerably improved enhancement capability over the HE and UM methods.
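The general scale-and-inverse-transform idea can be illustrated with a 2-D DCT used as a stand-in for the OPT, as in the sketch below: detail coefficients are amplified by a scale factor and the result is inverse transformed. The transform, cutoff, scale factor, and image patch are assumptions; this is not the paper's OPT method.

```python
# Generic transform-domain contrast enhancement (DCT stand-in for OPT).
import numpy as np
from scipy.fft import dctn, idctn

def enhance(img, scale=2.0, cutoff=8):
    coeffs = dctn(img.astype(float), norm="ortho")
    mask = np.ones_like(coeffs)
    mask[cutoff:, :] = scale          # amplify higher-order (detail) rows
    mask[:, cutoff:] = scale          # and columns
    return idctn(coeffs * mask, norm="ortho")

patch = np.random.rand(64, 64)        # placeholder mammogram patch
print(enhance(patch).shape)
```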
Abstract: A new fuzzy filter is presented for noise reduction in images corrupted by additive noise. The filter consists of two stages. In the first stage, all the pixels of the image are processed to determine the noisy pixels; for this, a fuzzy rule-based system associates a degree with each pixel. The degree of a pixel is a real number in the range [0, 1], which denotes the probability that the pixel is not a noisy pixel. In the second stage, another fuzzy rule-based system is employed. It uses the output of the previous fuzzy system to perform fuzzy smoothing by weighting the contributions of neighboring pixel values. Experimental results are presented to show the feasibility of the proposed filter. These results are also compared to other filters by numerical measures and visual inspection.
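A much-simplified version of this two-stage idea is sketched below: each pixel gets a "noise-free" degree from its deviation from the local median, and smoothing then weights neighbours by those degrees. The membership function, window size, and spread are illustrative assumptions, not the paper's fuzzy rule base.

```python
# Simplified two-stage fuzzy-style filter (illustrative, not the paper's rules).
import numpy as np
from scipy.ndimage import median_filter

def noise_free_degree(img, window=3, spread=30.0):
    """Stage 1: degree in [0, 1] that a pixel is NOT noisy."""
    deviation = np.abs(img.astype(float) - median_filter(img.astype(float), window))
    return np.clip(1.0 - deviation / spread, 0.0, 1.0)

def fuzzy_smooth(img, degree, window=3):
    """Stage 2: weighted average of neighbours using their degrees as weights."""
    img = img.astype(float)
    pad = window // 2
    padded_img = np.pad(img, pad, mode="reflect")
    padded_deg = np.pad(degree, pad, mode="reflect")
    out = np.zeros_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            patch = padded_img[r:r+window, c:c+window]
            w = padded_deg[r:r+window, c:c+window]
            out[r, c] = (patch * w).sum() / max(w.sum(), 1e-9)
    return out

noisy = np.clip(np.random.rand(64, 64) * 255 + np.random.normal(0, 20, (64, 64)), 0, 255)
restored = fuzzy_smooth(noisy, noise_free_degree(noisy))
print(restored.shape)
```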