Abstract: The habitat studied here is productive in terms of nutrient
quality; such habitats may perform several useful functions but are
also threatened in their existence. Hence, the present work adds
substantial new information on the biodiversity of macrophytes in
drains and their embankments. All species encountered at the three
selected sites (I, II and III) were identified at their different
stages of growth. The number of species occurring at each site was
grouped seasonally (summer, rainy and winter seasons), and the species
were further recorded for the study of phytosociology.
Phytosociological characters such as frequency, density and abundance
were influenced by the climatic, anthropogenic and biotic stresses
prevailing at the three study sites. All species present at the study
sites showed maximum values of frequency, density and abundance in the
rainy season compared with the summer and winter seasons.
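The phytosociological characters named above have standard quantitative definitions (the Curtis and McIntosh style measures; that these exact formulas were used in the study is an assumption). A minimal sketch, with an illustrative quadrat layout:

```python
def phytosociology(quadrats, species):
    """Standard phytosociological measures:
      frequency (%) = occupied quadrats / total quadrats * 100
      density       = total individuals / total quadrats
      abundance     = total individuals / occupied quadrats
    `quadrats` maps a quadrat id to a {species: count} dict."""
    total = len(quadrats)
    occupied = sum(1 for q in quadrats.values() if q.get(species, 0) > 0)
    individuals = sum(q.get(species, 0) for q in quadrats.values())
    frequency = 100.0 * occupied / total
    density = individuals / total
    abundance = individuals / occupied if occupied else 0.0
    return frequency, density, abundance

# Illustrative data: species 'A' occurs in 2 of 4 quadrats, 6 individuals
quadrats = {1: {'A': 4}, 2: {'A': 2}, 3: {}, 4: {'B': 5}}
freq, dens, abund = phytosociology(quadrats, 'A')
```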
Abstract: In this paper we analyze the core issues affecting
software architecture in enterprise projects where a large number of
people from different backgrounds are involved and complex business,
management and technical problems exist. We first give general
features of typical enterprise projects and then present foundations of
software architectures. The detailed analysis of core issues affecting
software architecture in software development phases is given. We
focus on three main areas in each development phase: people,
process, and management related issues, structural (product) issues,
and technology related issues. After we point out core issues and
problems in these main areas, we give recommendations for
designing good architecture. We have observed these core issues, and
the importance of following best software development practices, in
many large enterprise commercial and military projects over about ten
years of experience, during which we also developed some novel
practices.
Abstract: An electrocardiogram (ECG) feature extraction system based
on the calculation of complex resonance frequencies using Prony's
method is developed. Prony's method is applied to five classes of ECG
arrhythmia signals, modeling each as a finite sum of exponentials
determined by the signal's poles and complex resonance frequencies.
These poles and resonance frequencies are evaluated over a large
number of examples of each arrhythmia. Lead II (ML II) ECG signals
were taken from the MIT-BIH database for five types: ventricular
couplet (VC), ventricular tachycardia (VT), ventricular bigeminy
(VB), ventricular fibrillation (VF) and normal rhythm (NR). This
method can be extended to any number of arrhythmias. Different
classification techniques were tried: neural networks (NN), K-nearest
neighbor (KNN), linear discriminant analysis (LDA) and multi-class
support vector machines (MC-SVM).
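Prony's method as described, fitting a signal as a finite sum of exponentials and reading off the poles and complex resonance frequencies, can be sketched as follows (the model order, synthetic data and function names are illustrative, not the paper's ECG pipeline):

```python
import numpy as np

def prony(x, p, dt=1.0):
    """Fit x[n] ~ sum_i A_i * z_i**n with p exponentials (Prony's method)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Linear-prediction step: x[n] = -sum_{k=1..p} a_k x[n-k], n = p..N-1
    T = np.array([[x[p + i - 1 - k] for k in range(p)] for i in range(N - p)])
    a, *_ = np.linalg.lstsq(T, -x[p:], rcond=None)
    # Signal poles are the roots of the characteristic polynomial
    z = np.roots(np.concatenate(([1.0], a)))
    # Complex resonance frequencies from the poles
    s = np.log(z.astype(complex)) / dt
    # Amplitudes via Vandermonde least squares
    V = np.vander(z.astype(complex), N, increasing=True).T   # shape (N, p)
    A, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
    return z, s, A

# Synthetic two-exponential signal: poles 0.9 and 0.5
n = np.arange(40)
x = 2.0 * 0.9**n + 1.0 * 0.5**n
z, s, A = prony(x, 2)
```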
Abstract: The dramatic increase in sea-freight container
transportation, and the growing trend toward using containers in
multimodal handling systems across sea, rail and road in today's
market, confront general managers of container terminals with
challenges such as rising demand, competition, new investments and
expansion into new activities, and with the need for new methods to
achieve effective operations both along the quayside and within the
yard. Among these issues, minimizing vessel turnaround time is
considered the primary aim of every container port system. Given the
complex structure of container ports, this
paper presents a simulation model that calculates the number of
trucks needed in the Iranian Shahid Rajaee Container Port for
handling containers between the berth and the yard. In this research,
some important criteria such as vessel turnaround time, gantry crane
utilization and truck utilization have been considered. By analyzing
the results of the model, it has been shown that increasing the number
of trucks to 66 units has a significant effect on the performance
indices of the port and can increase the capacity of loading and
unloading up to 10.8%.
Abstract: Multi-layer perceptron (MLP) neural networks have been very
successful in a number of signal processing applications. In this
work we study the possibilities, and the difficulties encountered, in
applying MLP neural networks to the prediction of daily solar
radiation data. We used the Polak-Ribière algorithm to train the
neural networks. A comparison, in terms of statistical indicators,
with a linear model widely used in the literature is also performed;
the results obtained show that the neural networks are more efficient
and give the best results.
Abstract: The paper is devoted to the stochastic analysis of a
finite-dimensional difference equation whose increments depend on an
ergodic Markov chain and are proportional to a small parameter ε. A
point-form solution of this difference equation may be represented as
the vertices of a time-dependent continuous broken line on the
segment [0,1], with ε-dependent scaling of the intervals between
vertices. Letting ε tend to zero, one may apply stochastic averaging
and diffusion approximation procedures to construct a continuous
approximation of the initial stochastic iterations as an ordinary or
stochastic Itô differential equation. The paper proves that for
sufficiently small ε these equations may be successfully applied not
only to approximate a finite number of iterations but also for
asymptotic analysis as the number of iterations tends to infinity.
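The scheme described above can be written out compactly; the symbols f, the driving chain y_k, the averaged drift and the stationary distribution μ are introduced here for illustration only, since the abstract does not fix notation:

```latex
x_{k+1} = x_k + \varepsilon\, f(x_k, y_k), \quad y_k \text{ an ergodic Markov chain},
\qquad\Longrightarrow\qquad
\mathrm{d}X_t = \bar f(X_t)\,\mathrm{d}t + \sqrt{\varepsilon}\,\sigma(X_t)\,\mathrm{d}W_t,
\quad \bar f(x) = \int f(x,y)\,\mu(\mathrm{d}y).
```

Here the broken line interpolates the iterates x_k on [0,1] with steps of length proportional to ε, and the right-hand equation is the averaged/diffusion approximation valid as ε tends to zero.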
Abstract: The speech signal conveys information about the
identity of the speaker. The area of speaker identification is
concerned with extracting the identity of the person speaking the
utterance. As speech interaction with computers becomes more
pervasive in activities such as the telephone, financial transactions
and information retrieval from speech databases, the utility of
automatically identifying a speaker is based solely on vocal
characteristic. This paper emphasizes on text dependent speaker
identification, which deals with detecting a particular speaker from a
known population. The system prompts the user to provide speech
utterance. System identifies the user by comparing the codebook of
speech utterance with those of the stored in the database and lists,
which contain the most likely speakers, could have given that speech
utterance. The speech signal is recorded for N speakers further the
features are extracted. Feature extraction is done by means of LPC
coefficients, calculating AMDF, and DFT. The neural network is
trained by applying these features as input parameters. The features
are stored in templates for further comparison. The features for the
speaker who has to be identified are extracted and compared with the
stored templates using Back Propogation Algorithm. Here, the
trained network corresponds to the output; the input is the extracted
features of the speaker to be identified. The network does the weight
adjustment and the best match is found to identify the speaker. The
number of epochs required to get the target decides the network
performance.
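Of the features listed above, the AMDF has a compact standard definition, AMDF(k) = (1/(N−k)) Σ |s(n) − s(n+k)|; the exact normalization used in the paper is an assumption. A minimal sketch:

```python
import numpy as np

def amdf(frame, max_lag):
    """Average Magnitude Difference Function over lags 1..max_lag.
    Dips toward zero at lags equal to the pitch period."""
    frame = np.asarray(frame, dtype=float)
    N = len(frame)
    return np.array([np.mean(np.abs(frame[:N - k] - frame[k:]))
                     for k in range(1, max_lag + 1)])

# Illustrative periodic frame: a sine with a period of 50 samples
n = np.arange(400)
frame = np.sin(2 * np.pi * n / 50.0)
d = amdf(frame, 80)
lag = int(np.argmin(d)) + 1   # estimated pitch period in samples
```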
Abstract: Wheat gluten hydrolyzates (WGHs) and anchovy fine
powder hydrolyzates (AFPHs) were produced at 300 MPa using
combinations of Flavourzyme 500MG (F), Alcalase 2.4L (A),
Marugoto E (M) and Protamex (P), and were then compared with those
produced at ambient pressure in terms of soluble solid (SS) content,
soluble nitrogen content and electrophoretic profiles. The contents of SS
in the WGHs and AFPHs increased up to 87.2% according to the
increase in enzyme number both at high and ambient pressure. Based
on SS content, the optimum enzyme combinations for one-, two-,
three- and four-enzyme hydrolysis were determined as F, FA, FAM
and FAMP, respectively. Similar trends were found for the contents of
total soluble nitrogen (TSN) and TCA-soluble nitrogen (TCASN). The
contents of SS, TSN and TCASN in the hydrolyzates, together with the
electrophoretic mobility maps, indicate that the high-pressure
treatment used in this study accelerated protein hydrolysis compared
with ambient-pressure treatment.
Abstract: The need to update the inputs of numerical models, because of geometrical and resistive variations in rivers subject to sediment transport, requires detailed control and monitoring activities. The human and financial resources these activities demand push research toward the development of expeditious methodologies able to evaluate discharges through the measurement of more easily acquired quantities. Recent studies have highlighted the dependence of the entropic parameter on the kinematic and geometric flow conditions, showing meaningful variability with section shape, dimension and slope. Such dependences, even if not yet well defined, could reduce the difficulty of field activities and the data-elaboration time. On the basis of this evidence, the relationships between the entropic parameter and the geometric and resistive quantities, obtained through a large and detailed laboratory campaign on steady free-surface flows under macro and intermediate homogeneous roughness conditions, are analyzed and discussed.
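For context: in open-channel hydraulics the "entropic parameter" usually denotes M in Chiu's entropy-based velocity distribution; that this is the parameter meant here is an assumption, since the abstract does not define it. In that formulation,

```latex
\frac{u}{u_{\max}} = \frac{1}{M}\,\ln\!\left[\,1 + \left(e^{M}-1\right)\frac{\xi}{\xi_{\max}}\right],
\qquad
\frac{\bar{u}}{u_{\max}} = \frac{e^{M}}{e^{M}-1} - \frac{1}{M},
```

where ξ is the isovel coordinate and ū the cross-sectional mean velocity; the second relation is what ties M to the kinematic flow conditions mentioned above.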
Abstract: The turbulent mixing of coolant streams of different
temperature and density can cause severe temperature fluctuations in
piping systems in nuclear reactors. In certain periodic contraction
cycles these conditions lead to thermal fatigue. The resulting aging
effect prompts investigation in how the mixing of flows over a sharp
temperature/density interface evolves. To study the fundamental
turbulent mixing phenomena in the presence of density gradients,
isokinetic (shear-free) mixing experiments are performed in a square
channel with Reynolds numbers ranging from 2,500 to 60,000.
Sucrose is used to create the density difference. A Wire Mesh Sensor
(WMS) is used to determine the concentration map of the flow in the
cross section. The mean interface width, as a function of velocity,
density difference and distance from the mixing point, is analyzed
using traditional methods drawn from atmospheric/oceanic
stratification analyses. A definition of the
mixing layer thickness more appropriate to thermal fatigue and based
on mixedness is devised. This definition shows that the thermal
fatigue risk assessed using simple mixing layer growth can be
misleading and why an approach that separates the effects of large
scale (turbulent) and small scale (molecular) mixing is necessary.
Abstract: Saudi Arabia is an arid country which depends on
costly desalination plants to satisfy the growing residential water
demand. Prediction of water demand is usually a challenging task
because the forecast model should consider variations in economic
progress, climate conditions and population growth. The task is
further complicated by the fact that the city of Mecca is regularly
visited by large numbers of people during specific months of the year
for religious occasions. In this paper, a neural network model is
proposed to
handle the prediction of the monthly and yearly water demand for
Mecca, Saudi Arabia. The proposed model is developed from historic
records of water production and the estimated visitors' distribution.
The driving variables for the model include annually-varying
variables such as household income, household density and city
population, and monthly-varying variables such as the expected number
of visitors each month and the maximum monthly temperature.
Abstract: Numerical simulations are performed for laminar
continuous and pulsed jets impinging on a surface in order to
investigate the effects of pulsing frequency on the heat transfer
characteristics. The time-averaged Nusselt number of pulsed jets is
larger in the impinging jet region as compared to the continuous jet,
while it is smaller in the outer wall jet region. At the stagnation point,
the mean and RMS Nusselt numbers become larger and smaller,
respectively, as the pulsing frequency increases. Unsteady behaviors
of vortical fluid motions and temperature field are also investigated to
understand the underlying mechanisms of heat transfer enhancement.
Abstract: The increased number of automobiles in recent years has
resulted in great demand for fossil fuel, which has led to the
development of automobiles running on alternative fuels, including
gaseous fuels, biofuels and vegetable oils. Energy from biomass, and
more specifically biodiesel, is one of the opportunities that could
cover future demand in the face of fossil-fuel shortage. Biomass in
the form of cashew nut shells represents a new and abundant energy
source in India. The biofuel derived from cashew nut shell oil and
its blends with diesel are promising alternative fuels for diesel
engines. In this work, pyrolysed Cashew Nut Shell Liquid
(CNSL)-diesel blends (CDB) were used to run a Direct Injection (DI)
diesel engine. The experiments were conducted with various blends of
CNSL and diesel, namely B20, B40, B60, B80 and B100, and the results
are compared with neat diesel operation. The brake thermal efficiency
decreased for all CNSL-diesel blends except the lowest blend, B20,
whose brake thermal efficiency is close to that of diesel fuel. The
emission levels of all CNSL-diesel blends were higher than those of
neat diesel: the higher viscosity and lower volatility of CNSL lead
to poor mixture formation and hence lower brake thermal efficiency
and higher emission levels. The higher emission levels can be reduced
by adding suitable additives and oxygenates to the CNSL-diesel
blends.
Abstract: Network layer multicast, i.e. IP multicast, even after
many years of research, development and standardization, is not
deployed in large scale due to both technical (e.g. upgrading of
routers) and political (e.g. policy making and negotiation) issues.
Researchers looked for alternatives and proposed application/overlay
multicast where multicast functions are handled by end hosts, not
network layer routers. Member hosts wishing to receive multicast
data form a multicast delivery tree. The intermediate hosts in the tree
act as routers also, i.e. they forward data to the lower hosts in the
tree. Unlike IP multicast, where a router cannot leave the tree until all
members below it leave, in overlay multicast any member can leave
the tree at any time thus disjoining the tree and disrupting the data
dissemination. All the disrupted hosts have to rejoin the tree. This
characteristic of overlay multicast makes the multicast tree unstable
and causes data loss and rejoin overhead. In this paper, we propose
that each node sets its leaving time from the tree and sends a join
request to a number of nodes in the tree. A node in the tree rejects
the request if its leaving time is earlier than that of the
requesting node; otherwise it accepts the request. The requesting
node can then join at one of the accepting nodes. This makes the tree
more stable, as nodes join the tree according to their leaving times,
with the earliest-leaving nodes at the leaves of the tree. Some
intermediate nodes may nevertheless leave earlier than their declared
leaving time, disrupting the tree. For this case, we propose a
proactive recovery mechanism so that disrupted nodes can rejoin the
tree at predetermined nodes immediately. We have shown by simulation
that there is less overhead when joining the multicast tree and that
the recovery time of disrupted nodes is much less than in previous
works.
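The leaving-time join rule described above can be sketched as follows (the class and field names are illustrative, not the paper's protocol messages):

```python
class Node:
    """A member of the overlay multicast tree (sketch)."""
    def __init__(self, name, leave_time):
        self.name = name
        self.leave_time = leave_time   # time the node announces it will leave
        self.children = []

    def handle_join(self, requester):
        # Accept only if this node stays at least as long as the requester,
        # so that no child outlives its parent in the delivery tree.
        if self.leave_time >= requester.leave_time:
            self.children.append(requester)
            return True
        return False

root = Node("root", leave_time=100)
early = Node("early", leave_time=40)
late = Node("late", leave_time=150)
ok1 = root.handle_join(early)   # accepted: root leaves later than 'early'
ok2 = root.handle_join(late)    # rejected: root would leave before 'late'
```

With this rule, a node that announces an early departure can only attach low in the tree, so its leaving disrupts few or no descendants.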
Abstract: This paper presents the significant factors and offers
suggestions that should be known before designing a grounding system,
as a guide to the first steps for someone who intends to carry out
such a design before studying the details. A grounding system
protects against damage from faults; it can save human lives and
power-system equipment. Unsafe conditions fall into three cases.
Case 1: the maximum touch voltage exceeds the safety criteria. Here,
the conductor compression ratio of the ground grid should first be
adjusted to obtain optimal spacing of the ground-grid conductors; if
the voltage is still over the limit, the earth resistivity should be
considered next. Case 2: the maximum step voltage exceeds the safety
criteria. Here, increasing the number of ground-grid conductors
around the boundary can solve the problem. Case 3: both the maximum
touch and step voltages exceed the safety criteria. Here, the
solutions of cases 1 and 2 should be followed. As a further
suggestion, the depth of the ground grid can be varied until the
maximum step and touch voltages no longer exceed the safety criteria.
Abstract: In this paper, a tooth shape optimization method for
cogging torque reduction in Permanent Magnet (PM) motors is
developed by using the Reduced Basis Technique (RBT) coupled by
Finite Element Analysis (FEA) and Design of Experiments (DOE)
methods. The primary objective of the method is to reduce the
enormous number of design variables required to define the tooth
shape. RBT is a weighted combination of several basis shapes. The
aim of the method is to find the best combination using the weights
for each tooth shape as the design variables. A multi-level design
process is developed to find suitable basis shapes or trial shapes at
each level that can be used in the reduced basis technique. Each level
is treated as a separate optimization problem until the required
objective – minimum cogging torque – is achieved. The process is
started with geometrically simple basis shapes that are defined by
their shape co-ordinates. The experimental design of Taguchi method
is used to build the approximation model and to perform
optimization. This method is demonstrated on the tooth shape
optimization of an 8-pole/12-slot PM motor.
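The reduced basis idea above, a candidate tooth shape expressed as a weighted combination of a few basis shapes, can be sketched as follows (the two toy basis profiles and names are illustrative assumptions, not the paper's actual geometry):

```python
import numpy as np

def reduced_basis_shape(weights, basis_shapes):
    """Candidate shape = weighted combination of basis shapes.
    Each basis shape is an array of boundary co-ordinates; the design
    variables are just the weights, not the co-ordinates themselves,
    which is what shrinks the optimization problem."""
    basis = np.asarray(basis_shapes, dtype=float)   # (n_basis, n_points)
    w = np.asarray(weights, dtype=float)            # (n_basis,)
    return w @ basis                                # (n_points,)

# Two toy basis profiles for a tooth edge (illustrative only)
flat = np.ones(5)
ramp = np.linspace(0.0, 1.0, 5)
shape = reduced_basis_shape([0.5, 0.5], [flat, ramp])
```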
Abstract: In this paper a new approach to face recognition is
presented that achieves a double dimension reduction, making the
system computationally efficient with better recognition results, and
outperforming the common DCT technique of face recognition. In pattern
recognition techniques, the discriminative information of an image
increases with resolution only up to a point; consequently, face
recognition results change with face image resolution and are optimal
at a certain resolution level. In the proposed model, an image
decimation algorithm is first applied to the face image to reduce its
dimension to the resolution level that provides the best recognition
results. The Discrete Cosine Transform (DCT) is then applied to the
face image, owing to its computational speed and feature extraction
potential. A subset of DCT coefficients from low to mid frequencies
that represents the face adequately and provides the best recognition
results is retained. A tradeoff between decimation factor,
number of DCT coefficients retained and recognition rate with
minimum computation is obtained. Preprocessing of the image is
carried out to increase its robustness against variations in poses and
illumination level. This new model has been tested on different
databases, including the ORL, Yale and EME color databases.
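The decimation-plus-DCT feature pipeline described above can be sketched as follows (the block sizes, decimation factor and square low-frequency block are illustrative simplifications; the paper retains a low-to-mid-frequency subset chosen for best recognition):

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II built from the DCT matrix (NumPy only)."""
    N = block.shape[0]
    n, k = np.meshgrid(np.arange(N), np.arange(N))   # n: column, k: row
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)
    return C @ block @ C.T

def dct_features(image, decimate=2, keep=4):
    """Decimate the (square) image, apply the 2-D DCT, and keep a
    low-frequency keep x keep block of coefficients as the feature
    vector."""
    small = image[::decimate, ::decimate]            # simple decimation
    coeffs = dct2(small.astype(float))
    return coeffs[:keep, :keep].ravel()

# Illustrative "image": a constant 8x8 block
img = np.ones((8, 8))
f = dct_features(img, decimate=2, keep=4)
```

For a constant image all of the signal's energy lands in the DC coefficient, which is the energy-compaction property that makes a small low-frequency subset an adequate face representation.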
Abstract: Finite impulse response (FIR) filters have the advantages of linear phase, guaranteed stability, fewer finite-precision errors, and efficient implementation. Their major disadvantage is that they need a higher order (more coefficients) than an IIR counterpart of comparable performance. The high order demands more hardware, arithmetic operations, area and power when the filter is designed and fabricated; minimizing or reducing these parameters is therefore a major goal in the digital filter design task. This paper presents an algorithm for modifying the values and the number of non-zero coefficients used to represent the FIR digital pulse-shaping filter response. With this algorithm, the FIR filter's frequency and phase response can be represented with a minimum number of non-zero coefficients, reducing the arithmetic complexity needed to compute the filter output and, consequently, the system's power consumption, area usage and processing time. The proposed algorithm is most powerful when integrated with multiplierless techniques such as distributed arithmetic (DA) in the design of high-order digital FIR filters: DA eliminates the multipliers in the multiply-and-accumulate (MAC) unit, while the proposed algorithm reduces the number of adders and addition operations by minimizing the non-zero coefficients needed to compute the filter output.
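The coefficient-reduction idea can be illustrated with a much simpler stand-in: zeroing the smallest-magnitude taps of a prototype filter (the paper's algorithm also re-adjusts the surviving values, which is omitted here; the prototype filter and ratios are illustrative):

```python
import numpy as np

def sparsify_taps(h, keep_ratio=0.5):
    """Keep only the largest-magnitude taps, zeroing the rest, to cut
    the number of additions needed per output sample."""
    h = np.asarray(h, dtype=float)
    n_keep = max(1, int(round(keep_ratio * h.size)))
    keep = np.argsort(np.abs(h))[::-1][:n_keep]   # indices of largest |h|
    hs = np.zeros_like(h)
    hs[keep] = h[keep]
    return hs

# Windowed-sinc lowpass prototype: 41 taps, cutoff 0.2 (normalized)
M = 40
n = np.arange(M + 1)
h = 0.2 * np.sinc(0.2 * (n - M / 2)) * np.hamming(M + 1)
hs = sparsify_taps(h, keep_ratio=0.6)

# Worst-case frequency-response deviation caused by the dropped taps
err = np.max(np.abs(np.fft.rfft(h, 512) - np.fft.rfft(hs, 512)))
```

By linearity of the DFT, the deviation is bounded by the summed magnitude of the dropped taps, so dropping the smallest taps first keeps the response error small.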
Abstract: The overall penumbra is usually defined as the distance,
p20–80, separating the 20% and 80% dose levels on the beam axis at
the depth of interest. This overall penumbra also accounts for the
fact that some photons emitted by the distal parts of the source are
only partially attenuated by the collimator. Medulloblastoma is the
most common type of childhood brain tumor and often spreads to the
spine. Current guidelines call for surgery to remove as much of the
tumor as possible, followed by radiation of the brain and spinal
cord, and finally treatment with chemotherapy. The purpose of this
paper is to present results on the uniformity of dose distribution in
radiation fields surrounding the spine, using film dosimetry, and a
comparison with 3D treatment planning software.
Abstract: In this paper, we propose an algorithm to compute
initial cluster centers for K-means clustering. The data in a cell are
partitioned using a cutting plane that divides the cell into two
smaller cells. The plane is perpendicular to the data axis with the
highest variance and is chosen to reduce the sum of squared errors of
the two cells as much as possible while at the same time keeping the
two cells as far apart as possible. Cells are partitioned one at a
time until the number of cells equals the predefined number of
clusters, K. The centers of the K cells become the initial cluster
centers for K-means. The experimental results suggest that the
proposed algorithm is effective, converging to better clustering
results than the random initialization method. The research also
indicates that the proposed algorithm greatly improves the likelihood
that every cluster contains some data.