Abstract: The main objective of this paper is to present a tool that we have developed to characterize and model indoor radio channel propagation at millimeter waves. The tool is based on the ray tracing technique (RTT). In a realistic environment, the significant impact of human body shadowing and of other objects in motion on the indoor 60 GHz propagation channel cannot be neglected; hence, our proposed model allows the simulation of propagation in a dynamic indoor environment. First, we describe a model of the human body. Second, the RTT combined with this model is used to simulate the propagation of millimeter waves in the presence of persons in motion. The simulations show that this tool gives results in agreement with those reported in the literature, especially regarding the effects of people's motion on the temporal properties of the channel.
Abstract: An original Direct Numerical Simulation (DNS) method to tackle the problem of particulate flows at moderate to high concentration and finite Reynolds number is presented. Our method is built on the framework established by Glowinski and his coworkers [1] in the sense that we use their Distributed Lagrange Multiplier/Fictitious Domain (DLM/FD) formulation and their operator-splitting idea, but differs in the treatment of particle collisions. The novelty of our contribution lies in replacing the simple artificial-repulsive-force collision model usually employed in the literature by an efficient Discrete Element Method (DEM) granular solver. The use of our DEM solver enables us to consider particles of arbitrary (at least convex) shape and to account for actual contacts, in the sense that particles actually touch each other, in contrast with the simple repulsive-force collision model. We recently upgraded our serial code, GRIFF 1 [2], to full MPI capabilities. Our new code, PeliGRIFF 2, is developed under the framework of the full MPI open source platform PELICANS [3]. The new MPI capabilities of PeliGRIFF open new perspectives in the study of particulate flows and significantly increase the number of particles that can be considered in a full DNS approach: O(100000) in 2D and O(10000) in 3D. Results on the 2D/3D sedimentation/fluidization of isometric polygonal/polyhedral particles with collisions are presented.
Abstract: A parametric study of a shrouded contra-rotating rotor was carried out in this paper based on 2D axisymmetric simulations. The calculations used an actuator disk as the double-rotor model. The study aims to explore and quantify the effects of different shroud geometry parameters, mainly through the power loading (PL), which evaluates the capability of the whole propulsion system to generate 5 N of total thrust for the hover requirement. The numerical results show that an increase of the nozzle radius is desirable but is limited by flow separation; the optimal design is around 1.15 times the rotor radius. Viscosity effects greatly constrain the influence of the nozzle shape, and a divergent angle of around 10.5° performs best for the chosen nozzle length. The inlet parameters, such as leading-edge curvature, radius, and internal shape, do not affect the thrust greatly but play an important role in the pressure distribution, which produces most of the shroud thrust; they should be chosen to reduce adverse pressure gradients and thus the risk of boundary-layer separation.
Abstract: The evaluation and measurement of human body dimensions are achieved by physical anthropometry. This research was conducted in view of the importance of anthropometric indices of the face in forensic medicine, surgery, and medical imaging. The main goal of this research is to optimize the set of facial feature points by establishing mathematical relationships among facial features and to use the optimized feature points for age classification. Since the selected facial feature points are located in the areas of the mouth, nose, eyes, and eyebrows on facial images, all desired facial feature points can be extracted accurately. According to the proposed method, sixteen Euclidean distances, vertical as well as horizontal, are calculated from the eighteen selected facial feature points. The mathematical relationships among the horizontal and vertical distances are established. Moreover, it is also discovered that the facial feature distances follow a constant ratio with age progression: the distances between the specified feature points increase with the age of a person from childhood onward, but the ratio of the distances does not change (d = 1.618). Finally, according to the proposed mathematical relationship, four independent feature distances related to eight feature points are selected from the sixteen distances and eighteen feature points, respectively. These four feature distances are used for age classification with a Support Vector Machine (SVM)-Sequential Minimal Optimization (SMO) algorithm and show around 96% accuracy. The experimental results show that the proposed system is effective and accurate for age classification.
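As a sketch of the ratio computation the abstract describes, the snippet below measures one horizontal and one vertical inter-feature distance and their ratio. The landmark coordinates are hypothetical placeholders (chosen so the ratio illustrates the reported value), not values from the paper.

```python
import math

# Hypothetical (x, y) pixel coordinates for two pairs of facial feature
# points; the paper's actual eighteen landmarks are not listed in the
# abstract, so these are illustrative only.
left_eye, right_eye = (120.0, 140.0), (200.0, 140.0)
nose_tip, chin = (160.0, 180.0), (160.0, 229.444)

def euclidean(p, q):
    """Euclidean distance between two 2-D feature points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Two of the sixteen distances: one horizontal, one vertical.
d_horizontal = euclidean(left_eye, right_eye)
d_vertical = euclidean(nose_tip, chin)

# The abstract reports that such ratios stay near d = 1.618 as a face
# ages, even though the individual distances themselves grow.
ratio = d_horizontal / d_vertical
print(round(ratio, 3))
```

Because the classifier uses only ratios of distances, the feature is scale-invariant, which is consistent with its stability under age progression as claimed above.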
Abstract: Effective evaluation of software development effort is an important issue during project planning. This study provides a model to predict development effort based on the software size estimated with function points. We generalize the average amount of effort spent on each phase of development and give estimates for the effort used in software building, testing, and implementation. Finally, this paper finds a strong correlation between software defects and software size. As the size of software constantly increases, quality remains a matter of major concern.
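The phase-wise estimate the abstract describes can be sketched as follows. The productivity rate and the phase shares below are hypothetical placeholders, since the paper's fitted values are not given in the abstract.

```python
# Illustrative sketch of function-point-based effort estimation; the
# productivity rate and phase percentages are assumed values, not the
# figures derived in the paper.
HOURS_PER_FUNCTION_POINT = 8.0  # assumed average productivity

PHASE_SHARE = {  # assumed distribution of total effort over phases
    "building": 0.50,
    "testing": 0.30,
    "implementation": 0.20,
}

def estimate_effort(function_points: float) -> dict:
    """Total effort (person-hours) from size, split by development phase."""
    total = function_points * HOURS_PER_FUNCTION_POINT
    return {phase: total * share for phase, share in PHASE_SHARE.items()}

breakdown = estimate_effort(250)  # a hypothetical 250-FP project
print(breakdown)
```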
Abstract: Some of the students' problems in writing skill stem from inadequate preparation for the writing assignment. Students should be taught how to write well when they arrive in language classes. Having selected a topic, the students examine and explore the theme from as large a variety of viewpoints as their background and imagination make possible. Another strategy is for the students to prepare an outline before writing the paper. The comparison between the two aforementioned thought-provoking techniques was carried out between two class groups, students of the Islamic Azad University of Dezful who were studying "Writing 2" as their main course. Each class group was assigned to write five compositions separately over different periods of time. A t-test for each pair of exams between the two class groups then showed that the t-observed in each pair was greater than the t-critical. Consequently, the first hypothesis, which states that those who utilize brainstorming as a thought-provoking technique in the prewriting phase are more successful than those who outline their papers before writing, was verified.
Abstract: With the development of the Internet, e-commerce has become very popular among organizations. E-commerce means buying and selling products and services over the Internet. One of the challenging issues in e-commerce is how to attract customers and how to satisfy them. Therefore, it is important to keep a good relationship with the customers. This paper proposes a new model to increase customer satisfaction by introducing a live operator. The live operator is a system which is involved with both the customers and the organization. In this system, the customers feel that they receive the service directly from the organization. This model decreases the response time and the customer loss. Moreover, it increases customer trust and the capability of organizations.
Abstract: Corner detection and optical flow are common techniques for feature-based video stabilization. However, these algorithms are computationally expensive and must be performed at a reasonable rate. This paper presents an algorithm for discarding irrelevant feature points and maintaining the rest for future use so as to reduce the computational cost. The algorithm starts by initializing a maintained set. The feature points in the maintained set are examined for their accuracy in modeling. Corner detection is required only when the feature points are insufficiently accurate for future modeling. Then, optical flows are computed from the maintained feature points toward the consecutive frame. After that, a motion model is estimated based on the simplified affine motion model and the least squares method, with outliers belonging to moving objects present. Studentized residuals are used to eliminate such outliers. The model estimation and elimination processes repeat until no more outliers are identified. Finally, the entire algorithm repeats along the video sequence, with the points remaining from the previous iteration used as the maintained set. As a practical application, efficient video stabilization can be achieved by exploiting the computed motion models. Our study shows that the number of times corner detection needs to be performed is greatly reduced, thus significantly reducing the computational cost. Moreover, optical flow vectors are computed only for the maintained feature points, not for outliers, further reducing the computational cost. In addition, the feature points remaining after reduction are sufficient for tracking background objects, as demonstrated in the simple video stabilizer based on our proposed algorithm.
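The estimation-and-elimination loop described above can be sketched as follows. The 4-parameter similarity form of the simplified affine model and the 2.5 cutoff are assumptions for illustration, and the residuals here are only approximately studentized (scaled by their standard deviation rather than leverage-corrected), so this is a sketch of the pipeline, not the authors' exact formulation.

```python
import numpy as np

def estimate_motion(src, dst, threshold=2.5, max_iter=10):
    """Fit a simplified affine model [a, b, tx, ty] by least squares,
    repeatedly discarding points whose (approximately) studentized
    residual exceeds `threshold`, as in the outlier-elimination loop."""
    keep = np.ones(len(src), dtype=bool)
    p = None
    for _ in range(max_iter):
        x, y = src[keep, 0], src[keep, 1]
        ones, zeros = np.ones_like(x), np.zeros_like(x)
        # Model: x' = a*x - b*y + tx,  y' = b*x + a*y + ty
        A = np.concatenate([
            np.stack([x, -y, ones, zeros], axis=1),
            np.stack([y, x, zeros, ones], axis=1),
        ])
        b = np.concatenate([dst[keep, 0], dst[keep, 1]])
        p, *_ = np.linalg.lstsq(A, b, rcond=None)
        r = b - A @ p
        n = keep.sum()
        res = np.hypot(r[:n], r[n:])    # per-point residual magnitude
        s = res.std()
        if s < 1e-9:                    # essentially perfect fit
            break
        outliers = res / s > threshold  # approximate studentization
        if not outliers.any():
            break
        idx = np.flatnonzero(keep)
        keep[idx[outliers]] = False     # drop moving-object points
    return p, keep
```

The surviving `keep` mask plays the role of the maintained set carried into the next frame, and `p` is the background motion used for stabilization.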
Abstract: The Information and Communication Technologies (ICTs) and the World Wide Web (WWW) have fundamentally altered the practice of teaching and learning worldwide. Many universities, organizations, colleges, and schools are trying to apply the benefits of the emerging ICT. In the early nineties the term learning object was introduced into the instructional technology vernacular, the idea being that educational resources could be broken into modular components for later combination by instructors, learners, and eventually computers into larger structures that would support learning [1]. However, in many developing countries, the use of ICT is still in its infancy and the concept of the learning object is quite new. This paper outlines learning object design considerations for developing countries depending on the learning environment.
Abstract: In this article, we aim to discuss the formulation of two explicit group iterative finite difference methods for the time-dependent two-dimensional Burgers' problem on a variable mesh. For the non-linear problems, the discretization leads to a non-linear system whose Jacobian is a tridiagonal matrix. We discuss Newton's explicit group iterative methods for a general Burgers' equation. The proposed explicit group methods are derived from the standard point and rotated point Crank-Nicolson finite difference schemes. Their computational complexity analysis is discussed. Numerical results are given to justify the feasibility of these two proposed iterative methods.
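For reference, the scalar two-dimensional Burgers' equation in a standard form (the paper's exact formulation, coefficients, and boundary conditions are not given in the abstract) reads:

```latex
\frac{\partial u}{\partial t}
  + u\,\frac{\partial u}{\partial x}
  + u\,\frac{\partial u}{\partial y}
  = \nu\left(\frac{\partial^{2} u}{\partial x^{2}}
  + \frac{\partial^{2} u}{\partial y^{2}}\right)
```

where \nu is the viscosity; applying a Crank-Nicolson-type discretization to this equation produces the non-linear algebraic system mentioned in the abstract, to which the Newton and explicit group iterations are applied.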
Abstract: Leptospirosis is recognized as an important zoonosis in tropical regions as well as an important animal disease causing substantial losses in production. In this study, a model for the transmission of Leptospirosis to the human population is discussed. The model describes the dynamics of the vector population and the transmission of Leptospirosis to the human population. A local analysis of the equilibria is given, and the results are confirmed numerically.
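A minimal host-vector transmission sketch in the spirit of the abstract is given below. The compartments, rates, and initial values are illustrative assumptions, not the model analyzed in the paper, which is not specified in the abstract.

```python
# Forward-Euler integration of an assumed SIR-human / SI-vector model.
def simulate(days=100, dt=0.01,
             beta_hv=0.0005,   # vector-to-human transmission rate (assumed)
             beta_vh=0.0004,   # human-to-vector transmission rate (assumed)
             gamma=0.1):       # human recovery rate (assumed)
    S_h, I_h, R_h = 990.0, 10.0, 0.0   # susceptible/infected/recovered humans
    S_v, I_v = 495.0, 5.0              # susceptible/infected vectors
    steps = round(days / dt)
    for _ in range(steps):
        new_h = beta_hv * S_h * I_v    # humans infected by vectors
        new_v = beta_vh * S_v * I_h    # vectors infected by humans
        rec = gamma * I_h              # recovering humans
        S_h += dt * (-new_h)
        I_h += dt * (new_h - rec)
        R_h += dt * rec
        S_v += dt * (-new_v)
        I_v += dt * new_v
    return S_h, I_h, R_h, S_v, I_v

print(simulate())
```

With no births or deaths included, the host and vector totals are conserved, which provides a simple sanity check on the integration.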
Abstract: Large volumes of fingerprints are collected and stored every day in a wide range of applications, including forensics and access control. This is evident from the database of the Federal Bureau of Investigation (FBI), which contains more than 70 million fingerprints. Compression of this database is very important because of its high volume. The performance of existing image coding standards generally degrades at low bit rates because of the underlying block-based Discrete Cosine Transform (DCT) scheme. Over the past decade, the success of wavelets in solving many different problems has contributed to their unprecedented popularity. Due to implementation constraints, scalar wavelets do not possess all the properties needed for better compression performance. A new class of wavelets called multiwavelets, which possess more than one scaling filter, overcomes this problem. The objective of this paper is to develop an efficient compression scheme that obtains better quality and a higher compression ratio through the multiwavelet transform and embedded coding of the multiwavelet coefficients with the Set Partitioning In Hierarchical Trees (SPIHT) algorithm. The best known multiwavelets are compared with the best known scalar wavelets. Both quantitative and qualitative measures of performance are examined for fingerprints.
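To give intuition for transform-based coding, the sketch below performs a single-level 2-D Haar wavelet decomposition and its exact inverse. This is a scalar-wavelet illustration only; the paper itself uses multiwavelets (multiple scaling filters) with SPIHT coefficient coding, which is considerably more involved.

```python
import numpy as np

def haar2d(block):
    """One analysis level of the 2-D Haar transform:
    returns the (LL, LH, HL, HH) subbands."""
    a = (block[0::2, :] + block[1::2, :]) / 2.0   # row averages
    d = (block[0::2, :] - block[1::2, :]) / 2.0   # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d (perfect reconstruction)."""
    a = np.empty((LL.shape[0], 2 * LL.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.empty((2 * a.shape[0], a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out
```

Compression then follows from quantizing or discarding small detail coefficients (LH, HL, HH) before entropy coding; embedded coders such as SPIHT order the coefficients by significance so the bitstream can be truncated at any rate.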
Abstract: Identity verification of authentic persons from their multiview faces is a real-world problem in machine vision. Multiview faces are difficult to recognize due to their non-linear representation in the feature space. This paper illustrates the applicability of the generalization of LDA, in the form of the canonical covariate, to multiview face recognition. In the proposed work, a Gabor filter bank is used to extract facial features characterized by spatial frequency, spatial locality, and orientation. The Gabor face representation captures a substantial amount of the variation across face instances that often occurs due to illumination, pose, and facial expression changes. Convolution of the Gabor filter bank with face images of rotated profile views produces Gabor faces with high-dimensional feature vectors. The canonical covariate is then applied to the Gabor faces to reduce the high-dimensional feature spaces to low-dimensional subspaces. Finally, support vector machines are trained on the canonical subspaces, which contain a reduced set of features, and perform the recognition task. The proposed system is evaluated on the UMIST face database. The experimental results demonstrate the efficiency and robustness of the proposed system, with high recognition rates.
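A single Gabor kernel of the kind used in such filter banks can be sketched as below. The bank sizes, spatial frequencies, and orientations the paper uses are not specified in the abstract, so the values here are assumed for illustration.

```python
import numpy as np

def gabor_kernel(size=31, sigma=4.0, theta=0.0, lam=10.0, gamma=0.5, psi=0.0):
    """Real part of a Gabor filter: a Gaussian envelope modulating an
    oriented cosine carrier, tuned by orientation (theta), wavelength
    (lam), envelope width (sigma), and aspect ratio (gamma)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / lam + psi)
    return envelope * carrier

# A small assumed bank: four orientations at two spatial frequencies.
bank = [gabor_kernel(theta=t, lam=l)
        for t in np.linspace(0, np.pi, 4, endpoint=False)
        for l in (8.0, 16.0)]
```

Convolving a face image with every kernel in the bank and concatenating the responses yields the high-dimensional Gabor face vectors that the canonical covariate then projects into a low-dimensional subspace.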
Abstract: Fifty-three college students answered questions regarding the circumstances in which they first heard the news of the Wenchuan earthquake, or the news of their acceptance to college, events which had taken place approximately one year earlier, and answered the questions again two years later. The number of details recalled about their circumstances for both events was high and did not decline two years later. However, consistency in the reported details over the two years was low. Participants were more likely to construct central information (e.g., Where were you?) than peripheral information (e.g., What were you wearing?), and their confidence in the central information was higher than in the peripheral information, which indicates that they constructed more when they were more confident.
Abstract: When acid is pumped into damaged reservoirs for damage removal/stimulation, distorted inflow of acid into the formation occurs, caused by the acid preferentially traveling into highly permeable regions rather than low-permeability regions, or, in general, into the path of least resistance. This can lead to poor zonal coverage and hence warrants diversion to achieve an effective placement of the acid. Diversion is, desirably, a reversible technique of temporarily reducing the permeability of high-permeability zones, thereby forcing the acid into lower-permeability zones.
The uniqueness of each reservoir can pose several challenges to
engineers attempting to devise optimum and effective diversion
strategies. Diversion techniques include mechanical placement and/or
chemical diversion of treatment fluids, further sub-classified into ball
sealers, bridge plugs, packers, particulate diverters, viscous gels,
crosslinked gels, relative permeability modifiers (RPMs), foams,
and/or the use of placement techniques, such as coiled tubing (CT)
and the maximum pressure difference and injection rate (MAPDIR)
methodology.
It is not always realized that the effectiveness of diverters greatly
depends on reservoir properties, such as formation type, temperature,
reservoir permeability, heterogeneity, and physical well
characteristics (e.g., completion type, well deviation, length of
treatment interval, multiple intervals, etc.). This paper reviews the
mechanisms by which each variety of diverter functions and
discusses the effect of various reservoir properties on the efficiency
of diversion techniques. Guidelines are recommended to help
enhance productivity from zones of interest by choosing the best
methods of diversion while pumping an optimized amount of
treatment fluid. The success of an overall acid treatment often
depends on the effectiveness of the diverting agents.
Abstract: This paper presents new STAKCERT KDD processes for worm detection. The enhancement introduced in the data pre-processing resulted in the formation of a new STAKCERT model for worm detection. In this paper, we explain in detail how all the processes involved in the STAKCERT KDD processes are applied within the STAKCERT model for worm detection. In the experiment conducted, the STAKCERT model yielded a 98.13% accuracy rate for worm detection by integrating the STAKCERT KDD processes.
Abstract: This research paper deals with the implementation of face recognition using a neural network (recognition classifier) on low-resolution images. The proposed system contains two parts: preprocessing and face classification. The preprocessing part converts original images into blurred images using an average filter and equalizes the histogram of those images (lighting normalization). A bi-cubic interpolation function is applied to the equalized images to obtain resized images. The resized image is a low-resolution image, providing faster processing for training and testing. The preprocessed image becomes the input to the neural network classifier, which uses the back-propagation algorithm to recognize familiar faces. The crux of the proposed algorithm is its use of a single neural network as the classifier, which yields a straightforward approach to face recognition. The single neural network consists of three layers with log-sigmoid, hyperbolic tangent sigmoid, and linear transfer functions, respectively. The training function incorporated in our work is gradient descent with momentum and adaptive learning rate back-propagation. The proposed algorithm was trained on the ORL (Olivetti Research Laboratory) database with 5 training images. The empirical results give accuracies of 94.50%, 93.00%, and 90.25% for 20, 30, and 40 subjects, respectively, with a time delay of 0.0934 s per image.
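The first two pre-processing steps described above (average-filter blur and histogram equalization) can be sketched as follows; the bi-cubic resize step is omitted here for brevity, and the 3x3 window size is an assumption, as the abstract does not specify the filter size.

```python
import numpy as np

def average_filter(img, k=3):
    """k x k mean filter with edge replication (the blurring step)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def equalize_histogram(img):
    """Lighting normalization: map 8-bit intensities through the
    normalized cumulative histogram."""
    img = img.astype(np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) * 255.0 / max(cdf.max() - cdf.min(), 1)
    return cdf[img].astype(np.uint8)
```

The equalized, blurred, and downscaled image is then flattened into the input vector for the three-layer network.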
Abstract: In this paper, the solubility of CO2 in AMP solution has been measured over the temperature range (293, 303, 313, and 323) K. The amine concentrations studied are (2.0, 2.8, and 3.4) M. A solubility apparatus was used to measure the solubility of CO2 in AMP solution in samples of flue gases from the Thermal and Central Power Plants of the Esfahan Steel Company. The modified Kent-Eisenberg model was used to correlate and predict the vapor-liquid equilibria of the (CO2 + AMP + H2O) system. The model predictions are in good agreement with the experimental vapor-liquid equilibrium measurements.
Abstract: A new fuzzy filter is presented for noise reduction in images corrupted with additive noise. The filter consists of two stages. In the first stage, all pixels of the image are processed to identify noisy pixels. For this, a fuzzy rule-based system associates a degree with each pixel. The degree of a pixel is a real number in the range [0, 1] that denotes the probability that the pixel is not noisy. In the second stage, another fuzzy rule-based system is employed. It uses the output of the first fuzzy system to perform fuzzy smoothing by weighting the contributions of neighboring pixel values. Experimental results are presented to show the feasibility of the proposed filter. These results are also compared to other filters by numerical measures and visual inspection.
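The two-stage structure described above can be sketched as follows. The membership function (an exponential decay of the deviation from the local median) and the window size are simple assumed forms for illustration, not the paper's actual rule bases.

```python
import numpy as np

def fuzzy_filter(img, window=3, scale=30.0):
    """Stage 1: assign each pixel a degree in [0, 1] of NOT being noisy.
    Stage 2: smooth by averaging neighbours weighted by their degrees."""
    pad = window // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    # Stage 1: large deviation from the local median -> low degree.
    degree = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + window, j:j + window]
            degree[i, j] = np.exp(-abs(img[i, j] - np.median(patch)) / scale)
    # Stage 2: fuzzy smoothing; noisy pixels get little say in the output.
    deg_pad = np.pad(degree, pad, mode="edge")
    out = np.empty_like(degree)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + window, j:j + window]
            wts = deg_pad[i:i + window, j:j + window]
            out[i, j] = (wts * patch).sum() / wts.sum()
    return out
```

An isolated impulse receives a near-zero degree in stage 1, so stage 2 replaces it almost entirely with its (trusted) neighbours, while uniform regions pass through unchanged.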
Abstract: To understand the complex living system, an effort has been made by mechanical engineers and dentists to deliver prompt products and services to patients concerned about their aesthetic look. For two decades, various bracket systems have been designed using techniques such as milling and injection molding, which are not technically flexible for customized dental product development. The aim of this paper is to design and develop a customized system which is economical and mainly emphasizes expert design and the integration of the engineering and dental fields. A custom-made self-adjustable lingual bracket and customized implants are designed and developed using computer-aided design (CAD) and rapid prototyping technology (RPT) to improve smiles and to overcome the difficulties associated with conventional ones. Lengthy orthodontic treatment is usually not accepted by patients because patient compliance is lost. Patient compliance can be improved by facilitating faster tooth movement by designing a localized dental vibrator using advanced engineering principles.