Abstract: The purpose of this study was to analyze the relationship of gender, BMI, and lifestyle with the bone mineral density (BMD) of adolescents in urban areas. The study was conducted at Jakarta State University, Indonesia, with a sample of 200 people: 100 men and 100 women. BMD was measured using quantitative ultrasound bone densitometry, while a questionnaire was used to collect data on age, gender, and lifestyle (calcium intake, smoking habits, consumption of alcohol, tea, and coffee, exercise, and sun exposure). The mean age of the men and women was 20.7 ± 2.18 years and 21 ± 1.61 years, respectively. The mean BMD of the men was 1.084 ± 0.11 g/cm² and that of the women 0.976 ± 0.10 g/cm². Normal BMD was found in 46.7% of the men and 16.7% of the women; osteopenia in 50% of the men and 80% of the women; and osteoporosis in 3.3% of both men and women. The mean BMI was 21.4 ± 2.07 kg/m² for men and 20.9 ± 2.06 kg/m² for women, and the mean lifestyle score was 71.9 ± 5.84 and 70.1 ± 5.67, respectively (maximum score 100). Based on Spearman and Pearson correlation tests, there was a significant relationship of gender and lifestyle with BMD.
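As an illustrative sketch (not the authors' analysis), the two correlation tests named above can be computed directly; the lifestyle and BMD values below are hypothetical, and the Spearman coefficient is obtained as the Pearson correlation of the ranks:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient."""
    return float(np.corrcoef(x, y)[0, 1])

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return pearson(rx, ry)

# Hypothetical data: lifestyle scores and BMD values for six subjects.
lifestyle = np.array([62.0, 68.0, 70.0, 72.0, 75.0, 80.0])
bmd = np.array([0.91, 0.95, 0.97, 1.00, 1.05, 1.10])

print(round(pearson(lifestyle, bmd), 3))
print(round(spearman(lifestyle, bmd), 3))
```

Because the hypothetical data are strictly monotone, the Spearman coefficient is exactly 1, while the Pearson coefficient is slightly below 1.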
Abstract: Response surface methodology (RSM) is a very efficient tool for gaining practical insight when developing new processes and optimizing them. It helps engineers build a mathematical model that represents the behavior of a system as a function of the process parameters. In this paper, the sequential nature of RSM is surveyed for process engineers, and its relationship to design of experiments (DOE), regression analysis, and robust design is reviewed. The proposed four-step procedure, carried out in two phases, can help a system analyst resolve parameter design problems involving responses. To check the accuracy of the fitted model, residual analysis and the prediction error sum of squares (PRESS) are described.

It is believed that the proposed procedure can resolve a complex parameter design problem with one or more responses. It can be applied in areas with large data sets where a number of responses must be optimized simultaneously. In addition, the procedure is relatively simple and can be implemented easily using ready-made standard statistical packages.
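A minimal sketch of the model-checking step the abstract mentions, under assumed synthetic data: a second-order response surface is fitted by ordinary least squares, and PRESS is computed from the hat-matrix diagonal rather than by refitting the model once per observation:

```python
import numpy as np

# Synthetic single-factor experiment: response with a quadratic trend plus noise.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 9)
y = 5 + 2 * x - 3 * x**2 + rng.normal(0, 0.1, x.size)

# Second-order model y = b0 + b1*x + b2*x^2 fitted by ordinary least squares.
X = np.column_stack([np.ones_like(x), x, x**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# PRESS: each ordinary residual is scaled by (1 - h_ii), where h_ii are the
# diagonal entries of the hat matrix H = X (X'X)^-1 X'. This yields the
# leave-one-out prediction errors without refitting the model n times.
H = X @ np.linalg.inv(X.T @ X) @ X.T
e = y - X @ beta
press = float(np.sum((e / (1 - np.diag(H))) ** 2))
print(beta, press)
```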
Abstract: Earth reinforcing techniques have become useful and economical for solving problems related to difficult ground conditions and for providing satisfactory foundation performance. In this context, this paper uses a radial basis function neural network (RBFNN) to predict the bearing pressure of a strip footing on a reinforced granular bed overlying weak soil. The inputs for the neural network models included plate width, thickness of the granular bed, number of reinforcement layers, settlement ratio, water content, dry density, cohesion and angle of friction. The results indicated that the RBFNN model exhibited more than 84% prediction accuracy, thereby demonstrating its applicability to a geotechnical problem.
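The abstract does not specify the network architecture; a minimal Gaussian RBF network sketch, with hypothetical scaled inputs (e.g. footing width and granular bed thickness) and scaled bearing pressures, could look like this:

```python
import numpy as np

class RBFNN:
    """Minimal Gaussian radial basis function network: the training points
    act as centers, and the output weights are found by ridge regression."""

    def __init__(self, gamma=5.0, ridge=1e-8):
        self.gamma = gamma
        self.ridge = ridge

    def _phi(self, X):
        # Gaussian kernel between inputs X and the stored centers.
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def fit(self, X, y):
        self.centers = X
        P = self._phi(X)
        self.w = np.linalg.solve(P + self.ridge * np.eye(len(X)), y)
        return self

    def predict(self, X):
        return self._phi(X) @ self.w

# Hypothetical training data: two scaled inputs mapped to a scaled pressure.
X = np.array([[0.1, 0.2], [0.4, 0.8], [0.5, 0.1], [0.9, 0.6], [0.3, 0.5]])
y = np.array([0.30, 0.95, 0.55, 1.20, 0.70])

model = RBFNN().fit(X, y)
print(model.predict(X))
```

With a small ridge term, the network interpolates the training data almost exactly; on real data one would tune `gamma` and `ridge` on held-out samples.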
Abstract: Selective harmonic elimination pulse-width modulation (SHE-PWM) techniques offer tight control of the harmonic spectrum of a given voltage waveform generated by a power electronic converter, along with a low number of switching transitions. Traditional optimization methods suffer from various drawbacks, such as prolonged and tedious computational steps and convergence to local optima; thus, the greater the number of harmonics to be eliminated, the larger the computational complexity and time. This paper presents a novel method for output voltage harmonic elimination and voltage control of PWM AC/AC voltage converters based on a hybrid real-coded genetic algorithm and pattern search (RGA-PS) method. RGA is the primary optimizer, exploiting its global search capabilities; PS is then employed to fine-tune the best solution provided by RGA in each evolution. The proposed method enables linear control of the fundamental component of the output voltage and complete elimination of its harmonic content up to a specified order. Theoretical studies have been carried out to show the effectiveness and robustness of the proposed method of selective harmonic elimination, and the theoretical results are validated through simulation studies using the PSIM software package.
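A toy sketch of the hybrid idea only, with a standard multimodal test function standing in for the actual SHE equations (the paper's real objective involves transcendental harmonic equations): a real-coded GA locates a promising region, and a pattern search fine-tunes its best solution:

```python
import random

random.seed(1)

def f(p):
    """Toy objective standing in for the SHE equations
    (Himmelblau's function; its global minima have value 0)."""
    x, y = p
    return (x * x + y - 11) ** 2 + (x + y * y - 7) ** 2

def real_coded_ga(pop_size=40, gens=80, lo=-5.0, hi=5.0):
    """Elitist real-coded GA with blend crossover and Gaussian mutation."""
    pop = [[random.uniform(lo, hi), random.uniform(lo, hi)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        elite = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            w = random.random()                      # blend crossover
            child = [w * a[i] + (1 - w) * b[i] for i in range(2)]
            if random.random() < 0.2:                # Gaussian mutation
                j = random.randrange(2)
                child[j] += random.gauss(0, 0.3)
            children.append(child)
        pop = elite + children
    return min(pop, key=f)

def pattern_search(p, step=0.5, tol=1e-9):
    """Coordinate pattern search: try +/- step moves, halve the mesh on failure."""
    p = list(p)
    while step > tol:
        improved = False
        for i in range(2):
            for d in (step, -step):
                q = list(p)
                q[i] += d
                if f(q) < f(p):
                    p, improved = q, True
        if not improved:
            step /= 2.0
    return p

best = pattern_search(real_coded_ga())   # GA result fine-tuned by PS
print(best, f(best))
```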
Abstract: Nowadays, HPC, Grid and Cloud systems are evolving very rapidly. However, the development of infrastructure solutions related to HPC is lagging behind. While the existing infrastructure is sufficient for simple cases, many computational problems have more complex requirements. Such computational experiments use different resources simultaneously to start a large number of computational jobs. These resources are heterogeneous: they have different purposes, architectures, performance, and software. Users need a convenient tool that allows them to describe and run complex computational experiments in an HPC environment. This paper introduces a modular workflow system called SEGL, which makes it possible to run complex computational experiments under the conditions of a real HPC organization. The system can be used by a great number of organizations that provide HPC power. Key requirements for the system are high efficiency and interoperability with the organization's existing HPC infrastructure without any changes.
Abstract: This paper proposes stochastic tabu search (STS) for improving the measurement scheme for power system state estimation. If the original measurement scheme is not observable, a minimum number of additional measurements are added to the system by STS so that no critical measurement pair remains. Random bit-flipping and bit-exchanging perturbations are used to generate the neighborhood solutions in STS. The Pδ observability concept is used to determine the network observability. Test results for a 10-bus system and the IEEE 14- and 30-bus systems show that STS can improve the original measurement scheme so that it becomes observable without critical measurement pairs. Moreover, the results of STS are superior to those of deterministic tabu search (DTS) in terms of best-solution hits.
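The paper's observability test is not reproduced here; as a toy stand-in, a tabu search with the bit-flipping neighborhood described above can be sketched on a small covering problem, where infeasible selections are penalized much like an unobservable measurement scheme:

```python
import random

random.seed(0)

# Toy stand-in for measurement placement: pick a minimum set of candidate
# measurements (bits) that covers every constraint.
COVERS = [
    {0, 1},        # candidate 0 covers constraints 0 and 1
    {1, 2},
    {2, 3},
    {0, 3},
    {0, 1, 2, 3},  # candidate 4 alone covers everything
]
N_CONSTRAINTS = 4

def cost(sol):
    """Selected-candidate count plus a heavy penalty per uncovered constraint."""
    covered = set()
    for i, bit in enumerate(sol):
        if bit:
            covered |= COVERS[i]
    return sum(sol) + 10 * (N_CONSTRAINTS - len(covered))

def tabu_search(iters=200, tenure=3):
    sol = [random.randint(0, 1) for _ in COVERS]
    best = sol[:]
    tabu = {}                                  # flipped index -> expiry iteration
    for t in range(iters):
        neighbors = []
        for i in range(len(sol)):
            cand = sol[:]
            cand[i] ^= 1                       # bit-flip perturbation
            # Tabu moves are allowed only if they beat the best (aspiration).
            if tabu.get(i, -1) < t or cost(cand) < cost(best):
                neighbors.append((cost(cand), i, cand))
        c, i, sol = min(neighbors)
        tabu[i] = t + tenure
        if c < cost(best):
            best = sol[:]
    return best

best = tabu_search()
print(best, cost(best))
```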
Abstract: To extract the important physiological factors related to diabetes from an oral glucose tolerance test (OGTT) by mathematical modeling, highly informative yet convenient protocols are required. Current models require a large number of samples and an extended period of testing, which is not practical for daily use. The purpose of this study is to make model assessment possible even from a reduced number of samples taken over a relatively short period. For this purpose, test values were extrapolated using a support vector machine. A good correlation was found between the reference and extrapolated values in the 741 OGTTs evaluated. This result indicates that a reduction in the number of clinical tests is possible through a computational approach.
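The authors' SVM configuration is not given in the abstract; as a simple stand-in for support vector regression (same kernel idea, but with a closed-form solution), a Gaussian kernel ridge regressor on hypothetical, synthetic OGTT samples illustrates the extrapolation idea:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical training set: scaled glucose values at three early OGTT time
# points (inputs) and the later value we want to extrapolate (target).
X = rng.uniform(0.7, 1.8, size=(40, 3))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.4 * X[:, 2] + rng.normal(0, 0.02, 40)

def gauss_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression: solve (K + lam*I) alpha = y once, then predict
# new points as weighted sums of kernel similarities to the training set.
lam = 1e-3
K = gauss_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

X_new = np.array([[1.0, 1.2, 1.1]])
pred = gauss_kernel(X_new, X) @ alpha
true = 0.5 * 1.0 + 0.3 * 1.2 + 0.4 * 1.1
print(float(pred[0]), true)
```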
Abstract: Array signal processing involves signal enumeration and source localization. It is centered on the ability to fuse temporal and spatial information, captured by sampling signals emitted from a number of sources at the sensors of an array, in order to carry out a specific estimation task: estimating source characteristics (mainly localization of the sources) and/or array characteristics (mainly array geometry). It uses sensors organized in patterns, or arrays, to detect signals and to determine information about them. Beamforming is a general signal processing technique used to control the directionality of the reception or transmission of a signal, directing the majority of the received signal energy toward a chosen direction. Multiple signal classification (MUSIC) is a highly popular eigenstructure-based, high-resolution method for estimating the direction of arrival (DOA). This paper examines the effect of missing sensors on DOA estimation. The accuracy of MUSIC-based DOA estimation is degraded significantly both by missing sensors among the receiving array elements and by unequal channel gains and phase errors in the receiver.
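A minimal MUSIC sketch for a complete uniform linear array (the paper studies the degraded case with missing sensors), with simulated sources at assumed angles, shows the estimation pipeline the abstract refers to:

```python
import numpy as np

rng = np.random.default_rng(0)

M, d, snapshots = 8, 0.5, 400      # sensors, spacing (wavelengths), samples
true_doas = [-20.0, 35.0]          # assumed directions of arrival, degrees

def steering(theta_deg):
    """Steering vector of a uniform linear array for one arrival angle."""
    k = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(M))

# Simulate two uncorrelated narrowband sources in white noise.
A = np.column_stack([steering(t) for t in true_doas])
S = rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))
noise = 0.1 * (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
X = A @ S + noise

# MUSIC: eigendecompose the sample covariance and scan the pseudospectrum
# built from the noise subspace (eigenvectors of the M-2 smallest eigenvalues).
R = X @ X.conj().T / snapshots
_, V = np.linalg.eigh(R)           # eigenvalues in ascending order
En = V[:, :M - 2]                  # noise subspace
grid = np.arange(-90.0, 90.0, 0.1)
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(g)) ** 2 for g in grid])

# The two largest local maxima of the pseudospectrum are the DOA estimates.
maxima = [(p[i], grid[i]) for i in range(1, len(grid) - 1)
          if p[i] > p[i - 1] and p[i] > p[i + 1]]
est = sorted(g for _, g in sorted(maxima, reverse=True)[:2])
print(est)
```

Removing rows of the array (missing sensors) or perturbing channel gains in this simulation broadens and biases the pseudospectrum peaks, which is the degradation the paper quantifies.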
Abstract: The main problems of data-centric and open-source projects are the large number of developers and changes to the core framework. The Model-View-Controller (MVC) design pattern has significantly improved the development and adjustment of complex projects. Entity Framework, as the Model layer in an MVC architecture, has simplified communication with the database. How often are these new technologies used, and do they have the potential for designing a more efficient Enterprise Resource Planning (ERP) system better suited to accountants?
Abstract: Yield and crop water productivity are crucial issues in sustainable agriculture, especially for resource-demanding crops such as sweet corn. This study investigated agronomic responses, such as plant growth and yield, and soil parameters (EC and nitrate accumulation) under several deficit irrigation treatments (100, 75, 50, 25 and 0% of ETm) applied during the vegetative growth stage; a rainfed treatment was also tested. The findings indicate that applying 75% of ETm during the vegetative growth stage increased fresh ear yield by 19.4%, dry grain yield by 9.4%, the number of ears per plant by 10.5%, the 1000-grain weight by 11.5% and crop water productivity by 19% compared with the fully irrigated treatment. These parameters, together with root and shoot growth and plant height, were adversely affected when the water stress during the vegetative growth stage exceeded 50% of ETm.
Abstract: Air bending is one of the important metal forming processes because of its simplicity and wide field of application. The accuracy of the analytical and empirical models reported for the analysis of bending processes is limited by simplifying assumptions, and these models do not consider the effect of dynamic parameters. A number of studies report finite element analysis (FEA) of V-bending, U-bending, and air V-bending processes. FEA of bending is found to be very sensitive to many physical and numerical parameters, and FE models must be computationally efficient for practical use. Reported work shows 3D FEA of the air bending process using Hyperform LS-DYNA and its comparison with published 3D FEA results of air bending in Ansys LS-DYNA and with experimental results. Observing the planar symmetry and assuming plane strain conditions, the air bending problem was modeled in 2D with a symmetric boundary condition across the width. Stress-strain results of the 2D FEA were compared with 3D FEA results and experiments. Simplifying the air bending problem from 3D to 2D resulted in a tremendous reduction in solution time with only a marginal effect on the stress-strain results. FE model simplification by studying the problem symmetry is a more efficient and practical approach for solving complex, large-dimension, slow forming processes.
Abstract: Different methods containing biometric algorithms are presented for eigenface-based face detection, including face recognition, identification and verification. The theme of this research is to manage the critical processing stages (accuracy, speed, security and monitoring) of face activities with the flexibility of searching and editing a secure authorized database. In this paper we implement different techniques, such as eigenface vector reduction using texture and shape vectors to remove complexity, while density matching scores with Face Boundary Fixation (FBF) extract the most likely characteristics in this media processing content. We examine the development and performance efficiency of the database by applying our algorithms in both the recognition and detection phases. Our results show gains in accuracy and security, with better performance than a number of previous approaches across all the above processes.
Abstract: The effect of the number of quantum dot (QD) layers
on the saturated gain of doped QD semiconductor optical amplifiers
(SOAs) has been studied using multi-population coupled rate
equations. The developed model takes into account the effect of
carrier coupling between adjacent layers. It has been found that
increasing the number of QD layers (K) increases the unsaturated
optical gain for K
Abstract: The behavior of a turbulent jet depends on jet, environmental and geometric parameters. In this research, the effect of the nozzle's internal angle on the maximum effective length and the centerline velocity is studied experimentally. To this end, four internal angles (30, 45, 60 and 90 degrees) were considered in a flume 600 cm long, 100 cm high and 150 cm wide. Various discharges were used to evaluate the effective length over a wide range of densimetric Froude numbers F0, defined at the nozzle, from 17.9 to 39.4. The results reveal that both the centerline velocity and the effective length decrease as the nozzle angle is reduced from 90° to 30°. Over the whole range of F0, the Um/U0 ratio on the centerline for the nozzle with α = 90° is 20%-27% higher than for the nozzle with α = 30°, which has the lowest centerline velocity of all the nozzles.
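The densimetric Froude number mentioned above is conventionally defined at the nozzle using the reduced gravity; a quick calculation with hypothetical jet parameters (the abstract does not give the actual nozzle data):

```python
import math

def densimetric_froude(U0, D, rho_jet, rho_ambient, g=9.81):
    """Densimetric Froude number at the nozzle:
    F0 = U0 / sqrt(g' * D), with reduced gravity g' = g * |drho| / rho_ambient."""
    g_prime = g * abs(rho_jet - rho_ambient) / rho_ambient
    return U0 / math.sqrt(g_prime * D)

# Hypothetical values: a 2 cm nozzle discharging slightly denser fluid at 1.5 m/s.
F0 = densimetric_froude(U0=1.5, D=0.02, rho_jet=1025.0, rho_ambient=998.0)
print(round(F0, 1))   # → 20.6, within the 17.9-39.4 range studied
```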
Abstract: The purpose of the present study is to analyze the effect of the target plate's curvature on heat transfer in laminar confined impinging jet flows. Numerical results from a two-dimensional compressible finite volume solver are compared for three different impinging plate shapes: flat, concave and convex. The notable result of this study is that, in the laminar range of Reynolds numbers based on the slot width, the stagnation Nusselt number is maximal for the convex surface and minimal for the concave plate. These results contradict previous reports in the literature stating that the stagnation Nusselt number is greater for a concave surface than for the flat plate configuration.
Abstract: In this paper, a recursive algorithm for the computation of the 2-D DCT using Ramanujan numbers is proposed. With this algorithm, floating-point multiplication is completely eliminated, and hence the multiplierless algorithm can be implemented using shifts and additions only. The orthogonality of the recursive kernel is well maintained through matrix factorization to reduce the computational complexity. The inherent parallel structure yields simpler programming and hardware implementation, and the algorithm requires (3/2)N² log₂ N − N + 1 additions and (N²/2) log₂ N shifts, which is much less complex than other recent multiplierless algorithms.
Abstract: In this paper we explore the application of a formal proof system to verification problems in cryptography. Cryptographic properties concerning the correctness or security of some cryptographic algorithms are of great interest. Besides some basic lemmata, we explore an implementation of a complex function that is used in cryptography. More precisely, we describe formal properties of this implementation that we computer-prove. We describe formalized probability distributions (σ-algebras, probability spaces and conditional probabilities). These are given in the formal language of the formal proof system Isabelle/HOL. Moreover, we computer-prove Bayes' formula. We also describe an application of the presented formalized probability distributions to cryptography. Furthermore, this paper shows that computer proofs of complex cryptographic functions are possible by presenting an implementation of the Miller-Rabin primality test that admits formal verification. Our achievements are a step towards computer verification of cryptographic primitives, and they describe a basis for computer verification in cryptography. Computer verification can be applied to further problems in cryptographic research if the corresponding basic mathematical knowledge is available in a database.
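For orientation, an ordinary executable version of the Miller-Rabin primality test looks like this (the paper's formally verified implementation is in Isabelle/HOL; this sketch carries no correctness proof):

```python
import random

def miller_rabin(n, rounds=20):
    """Probabilistic Miller-Rabin primality test: returns False if n is
    composite, True if n is (very probably) prime."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):   # quick trial division by small primes
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False             # a is a witness: n is composite
    return True

print(miller_rabin(97), miller_rabin(561))   # → True False (561 is a Carmichael number)
```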
Abstract: Bus network design is an important problem in public transportation. The main step in this design is determining the number of required terminals and their locations. This is a special type of facility location problem, a large-scale combinatorial optimization problem that requires a long time to solve. The genetic algorithm (GA) is a search and optimization technique that works on the evolutionary principles of natural genetics. Specifically, the evolution of chromosomes through crossover, mutation and natural selection, following Darwin's survival-of-the-fittest principle, is artificially simulated to constitute a robust search and optimization procedure. In this paper, we first state the problem as a mixed integer programming (MIP) problem. Then we design new crossover and mutation operators for the bus terminal location problem (BTLP). We tested the different parameters of the genetic algorithm on a sample problem and obtained the optimal parameters for solving the BTLP by numerical trial and error.
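The paper's MIP formulation and operators are not reproduced in the abstract; a toy GA sketch for a tiny terminal-location instance, with hypothetical coordinates and simple set-mixing crossover and replacement mutation, could look like this:

```python
import random

random.seed(3)

# Toy instance: demand points and candidate terminal sites on a plane.
demand = [(0, 0), (1, 1), (0, 2), (8, 8), (9, 7), (8, 9)]
sites = [(0, 1), (5, 5), (8, 8), (2, 2), (9, 9)]
K = 2                                   # number of terminals to open

def cost(chrom):
    """Total distance from each demand point to its nearest open terminal."""
    total = 0.0
    for dx, dy in demand:
        total += min(((dx - sites[s][0]) ** 2 + (dy - sites[s][1]) ** 2) ** 0.5
                     for s in chrom)
    return total

def crossover(a, b):
    """Mix the parents' opened sites, repairing short children randomly."""
    pool = list(set(a) | set(b))
    child = random.sample(pool, min(K, len(pool)))
    while len(child) < K:
        child.append(random.randrange(len(sites)))
    return child

def mutate(chrom, rate=0.2):
    """Occasionally replace one opened site with a random alternative."""
    if random.random() < rate:
        chrom[random.randrange(K)] = random.randrange(len(sites))
    return chrom

pop = [random.sample(range(len(sites)), K) for _ in range(20)]
for _ in range(50):
    pop.sort(key=cost)
    elite = pop[:10]                    # survival of the fittest
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(10)]

best = min(pop, key=cost)
print(sorted(best), round(cost(best), 2))
```

On this instance the GA opens the site near each demand cluster; real BTLP instances add capacity and routing constraints that this sketch omits.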
Abstract: One of the main image representations in mathematical morphology is the 3D Morphological Shape Decomposition representation, useful for image compression and representation, and for pattern recognition. The 3D Morphological Shape Decomposition representation can be generalized a number of times to extend the scope of its algebraic characteristics as much as possible; with these generalizations, its role as an efficient image decomposition tool is extended to grayscale images. This work follows the above line and develops it further. A new evolutionary branch is added to the 3D Morphological Shape Decomposition's development by the introduction of a 3D multi-structuring-element Morphological Shape Decomposition, which permits 3D Morphological Shape Decomposition of 3D binary images (grayscale images) into "multi-parameter" families of elements; originally, 3D Morphological Shape Decomposition representations were based only on "1-parameter" families of elements for image decomposition. This paper addresses grayscale inter-frame interpolation by means of mathematical morphology: the new inter-frame interpolation method is based on the generalized morphological 3D Shape Decomposition. The article presents the theoretical background of the morphological inter-frame interpolation, deduces the new representation and shows some application examples; computer simulations illustrate the results.
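The decomposition machinery above builds on elementary binary morphology; a minimal sketch of erosion and dilation with a single 3×3 structuring element (not the authors' multi-structuring-element scheme) fixes the basic operations:

```python
import numpy as np

def dilate(img, se):
    """Binary dilation of img by structuring element se (both 0/1 arrays)."""
    H, W = img.shape
    h, w = se.shape
    out = np.zeros_like(img)
    pad = np.pad(img, ((h // 2,) * 2, (w // 2,) * 2))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.any(pad[i:i + h, j:j + w] & se)
    return out

def erode(img, se):
    """Binary erosion: the structuring element must fit entirely inside."""
    H, W = img.shape
    h, w = se.shape
    out = np.zeros_like(img)
    pad = np.pad(img, ((h // 2,) * 2, (w // 2,) * 2))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.all(pad[i:i + h, j:j + w][se == 1])
    return out

# A 5x5 square object and a 3x3 square structuring element.
img = np.zeros((7, 7), dtype=np.uint8)
img[1:6, 1:6] = 1
se = np.ones((3, 3), dtype=np.uint8)

eroded = erode(img, se)
print(int(eroded.sum()))               # the 5x5 square erodes to a 3x3 core (9 pixels)
print(int(dilate(eroded, se).sum()))   # dilating back gives the 5x5 opening (25 pixels)
```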
Abstract: In general, the images used for compression are of different types, such as dark images and high-intensity images. When these images are compressed using a counter-propagation neural network, it takes a long time to converge. The reason is that a given image may contain a number of distinct gray levels with narrow differences from their neighborhood pixels. If the gray levels of the pixels in an image and their neighbors are mapped in such a way that the difference in the gray levels between each pixel and its neighbors is minimized, then the compression ratio as well as the convergence of the network can be improved. To achieve this, a cumulative distribution function is estimated for the image and used to map the image pixels. When the mapped image pixels are used, the counter-propagation neural network yields a high compression ratio and converges quickly.
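The CDF-based mapping can be sketched as standard histogram equalization; the dark, narrow-band test image below is synthetic, and the network itself is not reproduced:

```python
import numpy as np

def cdf_remap(img, levels=256):
    """Remap gray levels through the image's cumulative distribution
    function (histogram equalization), spreading narrow gray ranges."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / img.size
    lut = np.round((levels - 1) * cdf).astype(np.uint8)   # monotone lookup table
    return lut[img]

# A dark synthetic image whose gray levels sit in the narrow band 40..59.
rng = np.random.default_rng(0)
img = rng.integers(40, 60, size=(32, 32)).astype(np.uint8)

out = cdf_remap(img)
print(img.min(), img.max(), "->", out.min(), out.max())
```

After the remap the gray levels span nearly the full 0-255 range, which is the neighborhood-spreading effect the abstract relies on before feeding the network.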