Abstract: This paper considers the development of a two-point
predictor-corrector block method for solving delay differential
equations. The formulae are represented in divided-difference form
and the algorithm is implemented using a variable-stepsize,
variable-order technique. The block method produces two new values at
a single integration step. Numerical results are compared with
existing methods, and it is evident that the block method performs
very well. Stability regions of the block method are also
investigated.
Abstract: This study examined the effects of eight weeks of
whole-body vibration training (WBVT) on vertical and decuple jump
performance in handball athletes. Sixteen collegiate Level I handball
athletes volunteered for this study. They were divided equally into a
control group and an experimental group (EG). During the period of the
study, all athletes underwent the same handball-specific training, but
the EG received additional WBVT (amplitude: 2 mm, frequency: 20-40 Hz)
three times per week for eight consecutive weeks. The vertical
jump performance was evaluated according to the maximum height of
squat jump (SJ) and countermovement jump (CMJ). Single factor
ANCOVA was used to examine the differences in each parameter
between the groups after training with the pretest values as a covariate.
Statistical significance was set at p < .05. After eight weeks of WBVT,
the EG had significantly improved the maximal height of SJ (40.92 ±
2.96 cm vs. 48.40 ± 4.70 cm, F = 5.14, p < .05) and the maximal height
of CMJ (47.25 ± 7.48 cm vs. 52.20 ± 6.25 cm, F = 5.31, p < .05). Eight
weeks of additional WBVT could improve the vertical and decuple jump
performance in handball athletes. Enhanced motor unit
synchronization and firing rates, facilitated muscular contraction
stretch-shortening cycle, and improved lower extremity
neuromuscular coordination could account for these enhancements.
Abstract: It is observed that the weighted least-squares (WLS)
technique, including its modifications, results in an equiripple
error curve. The resultant error, as a percentage of the ideal value,
is highly non-uniformly distributed over the range of frequencies for
which the differentiator is designed. The present paper proposes a
modification to the technique so that the optimization procedure
results in a lower maximum error relative to the ideal values.
Simulation results for first-order as well as higher-order
differentiators are given to illustrate the excellent performance of
the proposed method.
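The abstract does not reproduce the modified weighting, so as a hedged illustration only: in a standard WLS design of an antisymmetric (Type III) FIR differentiator, weighting the squared error by 1/ω² penalises error relative to the ideal response ω rather than absolute error. The tap count, grid density and design band below are arbitrary choices.

```python
import numpy as np

def wls_differentiator(n_taps=31, n_grid=400,
                       band=(0.05 * np.pi, 0.75 * np.pi), relative=True):
    """Weighted least-squares design of an odd-length (Type III) FIR
    differentiator.  With relative=True the squared error is weighted by
    1/omega**2, emphasising *relative* error -- a simple stand-in for the
    kind of modification the paper proposes."""
    M = (n_taps - 1) // 2
    w = np.linspace(band[0], band[1], n_grid)   # design grid (rad/sample)
    # amplitude response of an antisymmetric filter: A(w) = sum_k c_k sin(k w)
    S = np.sin(np.outer(w, np.arange(1, M + 1)))
    target = w                                  # ideal differentiator: |H| = omega
    weight = 1.0 / w**2 if relative else np.ones_like(w)
    sw = np.sqrt(weight)
    c, *_ = np.linalg.lstsq(S * sw[:, None], target * sw, rcond=None)
    # impulse response: h[M-k] = c_k/2, h[M+k] = -c_k/2 (antisymmetric)
    h = np.zeros(n_taps)
    k = np.arange(1, M + 1)
    h[M - k] = c / 2
    h[M + k] = -c / 2
    max_rel_err = np.max(np.abs(S @ c - target) / target)
    return h, max_rel_err

h, errmax = wls_differentiator()
```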
Abstract: This paper discusses an active-power generator scheduling method intended to raise the steady-state stability limit of power systems. Several generator optimization methods, namely Lagrange, PLN (Indonesian electricity company) operation, and the proposed Z-Thevenin-based method, are studied and compared with respect to their steady-state aspects. The method proposed in this paper is built upon the Thevenin equivalent impedance between each load and each generator. The steady-state stability index is obtained with the REI DIMO method. This research reviews the 500 kV Jawa-Bali interconnection system. The simulation results show that the proposed method has the highest steady-state stability limit compared with the other optimization techniques (Lagrange and PLN operation). Thus, the proposed method can be used to establish the steady-state stability limit of the system, especially under peak-load conditions.
Abstract: Since the majority of faults are found in a few of a
system's modules, there is a need to investigate the modules that are
severely affected compared with the other modules, and proper
maintenance needs to be done in time, especially for critical
applications. Neural networks, which have already been applied in
software engineering to build reliability growth models and to
predict gross change or reusability metrics, are sophisticated
non-linear modeling techniques able to model complex functions.
Neural network techniques are used when the exact nature of the
inputs and outputs is not known; a key feature is that they learn the
relationship between input and output through training. In the
present work, various neural-network-based techniques are explored
and a comparative analysis is performed for predicting the level of
maintenance needed, by predicting the severity level of the faults
present in NASA's public-domain defect dataset. The comparison of the
different algorithms is made on the basis of Mean Absolute Error,
Root Mean Square Error and accuracy values. It is concluded that the
Generalized Regression Neural Network is the best algorithm for
classifying the software components into different levels of
fault-impact severity. The algorithm can be used to develop a model
for identifying the modules that are heavily affected by faults.
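A Generalized Regression Neural Network reduces to Nadaraya-Watson kernel regression, which can be sketched in a few lines. The code-metric features, severity targets and smoothing factor below are toy values, not the NASA defect data used in the paper.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Minimal Generalized Regression Neural Network (GRNN), i.e.
    Nadaraya-Watson kernel regression: each prediction is a Gaussian-kernel
    weighted average of the training targets.  Illustrative sketch only."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2 * sigma ** 2))    # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)  # summation layer / normalisation

# toy usage: two hypothetical code metrics -> fault-severity level (1..3)
X = np.array([[1.0, 2.0], [1.1, 2.1], [5.0, 6.0], [5.2, 6.1]])
y = np.array([1.0, 1.0, 3.0, 3.0])
pred = grnn_predict(X, y, np.array([[1.05, 2.05], [5.1, 6.05]]))
mae = np.abs(pred - np.array([1.0, 3.0])).mean()  # Mean Absolute Error
```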
Abstract: Due to the high percentage of induction motors in the industrial market, there exists a large opportunity for energy savings. Replacement of working induction motors with more efficient ones can be an important resource for energy savings. A calculation of energy savings and payback periods resulting from such a replacement, based on nameplate motor efficiency or manufacturer's data, can lead to large errors [1]. The efficiency of induction motors (IMs) can be extracted using procedures that rely on no-load test results; when the efficiency must be estimated on-line, some of these procedures are not practical. In other cases the efficiency is estimated using the rated values of the motor, but such procedures can have errors due to the differing working conditions of the motor. In this paper, the efficiency of an IM is estimated using a genetic algorithm, and the results are compared with the measured values of torque and power. The results show smaller errors for this procedure than for the conventional classical procedures; hence the cost of the equipment is reduced and on-line estimation of the efficiency becomes possible.
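As a hedged sketch of the approach, a bare-bones real-coded genetic algorithm can minimise the mismatch between model output and measured values. The motor quantities below (a rotor resistance and a magnetising reactance, with hypothetical "measured" targets) stand in for the paper's actual torque/power fitness.

```python
import random

def genetic_minimize(fitness, bounds, pop_size=40, generations=60,
                     p_cross=0.8, p_mut=0.1):
    """Bare-bones real-coded genetic algorithm: truncation selection,
    arithmetic crossover and clipped Gaussian mutation.  A sketch of the
    kind of search the paper applies, not its exact implementation."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2 if random.random() < p_cross else x
                     for x, y in zip(a, b)]       # arithmetic crossover
            child = [min(hi, max(lo, g + random.gauss(0, 0.05 * (hi - lo))))
                     if random.random() < p_mut else g
                     for g, (lo, hi) in zip(child, bounds)]  # mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# toy target: recover a rotor resistance of 0.4 ohm and a magnetising
# reactance of 26 ohm (hypothetical "measured" values)
random.seed(0)
fit = lambda p: (p[0] - 0.4) ** 2 + ((p[1] - 26.0) / 40.0) ** 2
best = genetic_minimize(fit, [(0.1, 1.0), (10.0, 50.0)])
```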
Abstract: Dental composites are preferred as filling materials due to
their esthetic appearance. Nevertheless, one of the major problems
during the application of dental composites is a shape change, termed
"polymerisation shrinkage", that affects the clinical success of the
dental restoration during photo-polymerisation. Polymerisation
shrinkage of composites arises basically from the monomer-to-polymer
transformation of the organic matrix phase. This study sought to
detect and evaluate the structural polymerisation shrinkage of
prepared dental composites, in order to assess the effects of the
various fillers included in hydroxyapatite (HA)-reinforced dental
composites and hence to find a means to modify the properties of
these dental composites prepared with defined parameters. As a
result, the shrinkage of the experimental dental composites decreased
with increasing filler content, and the composition of the different
fillers used affected the shrinkage of the prepared composite
systems.
Abstract: Traffic Density provides an indication of the level of
service being provided to the road users. Hence, there is a need to
study the traffic flow characteristics with specific reference to
density in detail. When the length and speed of the vehicles in a
traffic stream vary significantly, the concept of occupancy, rather
than density, is more appropriate to describe traffic concentration.
When the concept of occupancy is applied to heterogeneous traffic
condition, it is necessary to consider the area of the road space and
the area of the vehicles as the bases. Hence, a new concept, named
'area-occupancy', is proposed here. It has been found that the
estimated area-occupancy gives consistent values irrespective of
change in traffic composition.
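The abstract does not state the formula, so the following is an assumption based on how area-occupancy is commonly defined: each vehicle contributes the product of its plan area and the time it occupies the detection zone, normalised by the road-section area and the observation time. All numbers in the example are hypothetical.

```python
def area_occupancy(vehicles, road_area_m2, obs_time_s):
    """Area-occupancy for a heterogeneous stream (assumed form; the
    abstract names the bases -- road area and vehicle area -- but not
    the exact expression)."""
    return sum(t * a for t, a in vehicles) / (road_area_m2 * obs_time_s)

# hypothetical mixed stream over 60 s on a 3.5 m x 30 m road section:
# (occupancy time in s, vehicle plan area in m^2) -- cars, a truck, bikes
stream = [(2.0, 5.8), (2.5, 5.8), (4.0, 16.0), (1.2, 1.2), (1.1, 1.2)]
occ = area_occupancy(stream, road_area_m2=3.5 * 30.0, obs_time_s=60.0)
```

Being dimensionless and based on areas rather than vehicle counts, the quantity stays comparable when the traffic composition changes, which is the consistency property the abstract reports.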
Abstract: The modeling of the inelastic behavior of plastic materials
requires measurements providing information on the material response
under different multiaxial loading conditions. Different triaxiality
conditions and values of the Lode parameter have to be covered for a
complete description of the material's plastic behavior. Sample
geometries providing the material's plastic behaviour over the range
of interest are proposed with the use of FEM analysis. Round samples
with three different notches and a smooth surface are used together
with butterfly-type samples tested at angles ranging from 0 to 90°.
Identification of the ductile damage parameters is carried out on the
basis of the experimental data obtained for an austenitic stainless
steel. The obtained plastic damage parameters are subsequently
applied to an FEM simulation of the notched CT samples normally used
for fracture mechanics testing, and the simulation results are
compared with real tests.
Abstract: In this paper, we present a novel objective non-reference performance assessment algorithm for image fusion. It takes into account local measurements to estimate how well the important information in the source images is represented by the fused image. The metric is based on the Universal Image Quality Index and uses the similarity between blocks of pixels in the input images and the fused image as the weighting factor for the metric. Experimental results confirm that the values of the proposed metric correlate well with the subjective quality of the fused images, giving a significant improvement over standard measures based on mean squared error and mutual information.
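The Universal Image Quality Index (Wang & Bovik) has a closed form, and a block-wise fusion-quality sketch can be built on it. The saliency weighting below (local block variance) is an illustrative assumption; the paper derives its weights from block similarity.

```python
import numpy as np

def uiqi(x, y, eps=1e-12):
    """Universal Image Quality Index between two equal-size blocks:
    Q = 4*cov(x,y)*mean(x)*mean(y) / ((var(x)+var(y))*(mean(x)^2+mean(y)^2))."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return 4 * cxy * mx * my / ((vx + vy) * (mx ** 2 + my ** 2) + eps)

def fusion_quality(a, b, fused, block=8):
    """Sketch of a non-reference fusion metric in the spirit of the paper:
    per-block UIQI of each source against the fused image, combined with
    local-saliency weights (variance here; an assumption on our part)."""
    H, W = a.shape
    scores = []
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            pa = a[i:i + block, j:j + block]
            pb = b[i:i + block, j:j + block]
            pf = fused[i:i + block, j:j + block]
            wa, wb = pa.var() + 1e-12, pb.var() + 1e-12
            lam = wa / (wa + wb)          # saliency weight for source a
            scores.append(lam * uiqi(pa, pf) + (1 - lam) * uiqi(pb, pf))
    return float(np.mean(scores))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
q_perfect = fusion_quality(img, img, img)   # fused image equals both sources
```

By construction the score approaches 1 when the fused image preserves each source block and drops toward 0 as the fused content decorrelates from the sources.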
Abstract: We have developed a computer program consisting of six
subtests assessing children's hand dexterity, applicable in
rehabilitation medicine. We have carried out a normative study on a
representative sample of 285 children aged from 7 to 15 (mean age
11.3) and have proposed clinical standards for three age groups
(7-9, 9-11, 12-15 years). We have shown the statistical significance
of the differences among the corresponding mean values of the task
completion time. We have also found a strong correlation between the
task completion time and the age of the subjects, and we have
performed test-retest reliability checks in a sample of 84 children,
giving high values of the Pearson coefficients for the dominant and
non-dominant hand, in the ranges 0.74-0.97 and 0.62-0.93,
respectively.
A new MATLAB-based programming tool aimed at the analysis of
cardiologic RR intervals and blood pressure descriptors has also been
worked out. For each set of data, ten different parameters are
extracted: 2 in the time domain, 4 in the frequency domain and 4 from
Poincaré plot analysis. In addition, twelve different parameters of
baroreflex
sensitivity are calculated. All these data sets can be visualized in time
domain together with their power spectra and Poincaré plots. If
available, the respiratory oscillation curves can be also plotted for
comparison. Another application processes biological data obtained
from BLAST analysis.
Abstract: In this paper, the usefulness of a quasi-Newton iteration
procedure for parameter estimation of the conditional variance
equation within the BHHH algorithm is presented. Analytical
maximization of the likelihood function using first and second
derivatives is too complex when the variance is time-varying. The
advantage of the BHHH algorithm over other optimization algorithms is
that it requires no third derivatives while still having assured
convergence. To simplify the optimization procedure, the BHHH
algorithm uses an approximation of the matrix of second derivatives
based on the information identity. However, parameter estimation in a
symmetric or asymmetric GARCH(1,1) model assuming normally
distributed returns is not that simple, i.e., it is difficult to
solve analytically. The maximum of the likelihood function can be
found by an iteration procedure that continues until no further
increase is obtained. Because the solutions of the numerical
optimization are very sensitive to the initial values, starting
parameters for the GARCH(1,1) model are defined. The number of
iterations can be reduced by using starting values close to the
global maximum. The optimization procedure is illustrated in the
framework of modeling the daily volatility of the most liquid stocks
on the Croatian capital market: Podravka (food industry), Petrokemija
(fertilizer industry) and Ericsson Nikola Tesla (information and
communications industry).
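As an illustration of the estimation problem, the GARCH(1,1) Gaussian log-likelihood can be written down and maximised numerically. The sketch below uses a generic derivative-free SciPy optimiser as a stand-in for the BHHH update, and simulated returns rather than the Croatian stock data.

```python
import numpy as np
from scipy.optimize import minimize

def garch_neg_loglik(params, r):
    """Negative Gaussian log-likelihood of a GARCH(1,1):
       sigma2[t] = omega + alpha*r[t-1]**2 + beta*sigma2[t-1]."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf                     # outside the stationarity region
    s2 = np.empty_like(r)
    s2[0] = r.var()                       # a common choice of starting variance
    for t in range(1, len(r)):
        s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * s2) + r ** 2 / s2)

# simulate returns from known parameters, then re-estimate them
rng = np.random.default_rng(0)
omega0, alpha0, beta0, n = 0.1, 0.1, 0.8, 4000
s2 = np.empty(n)
r = np.empty(n)
s2[0] = omega0 / (1 - alpha0 - beta0)     # unconditional variance
r[0] = np.sqrt(s2[0]) * rng.standard_normal()
for t in range(1, n):
    s2[t] = omega0 + alpha0 * r[t - 1] ** 2 + beta0 * s2[t - 1]
    r[t] = np.sqrt(s2[t]) * rng.standard_normal()

res = minimize(garch_neg_loglik, x0=[0.05, 0.05, 0.9], args=(r,),
               method="Nelder-Mead",      # derivative-free stand-in for BHHH
               options={"maxiter": 2000})
omega_hat, alpha_hat, beta_hat = res.x
```

Starting the iteration near the optimum (here x0 close to the true parameters) reduces the number of iterations, which is the point the abstract makes about starting values.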
Abstract: In this study, some physical and mechanical properties of
jujube fruits were measured and compared at a constant moisture
content of 15.5% w.b. The results showed that the mean length, width
and thickness of jujube fruits were 18.88, 16.79 and 15.9 mm,
respectively. The mean projected areas of jujube perpendicular to
length, width, and thickness were 147.01, 224.08 and 274.60 mm2,
respectively. The mean mass and volume were 1.51 g and 2672.80
mm3, respectively. The arithmetic mean diameter, geometric mean
diameter and equivalent diameter varied from 14.53 to 20 mm, 14.5
to 19.94 mm, and 14.52 to 19.97 mm, respectively. The sphericity,
aspect ratio and surface area of jujube fruits were 0.91, 0.89 and
926.28 mm2, respectively. Whole fruit density, bulk density and
porosity of jujube fruits were measured and found to be 1.52 g/cm3,
0.3 g/cm3 and 79.3%, respectively. The angle of repose of jujube fruit
was 14.66° (±0.58°). The static coefficient of friction on galvanized
iron steel was higher than that on plywood and lower than that on
glass surface. The values of rupture force, deformation, hardness
and energy absorbed were found to be in the ranges 11.13-19.91 N,
2.53-4.82 mm, 3.06-5.81 N/mm and 20.13-39.08 N·mm, respectively.
Abstract: A nucleotide sequence can be expressed as a numerical sequence when each nucleotide is assigned its proton number. The resulting gene numerical sequence can be investigated for its fractal dimension in terms of evolution and chemical properties for comparative studies. We have investigated such nucleotide fluctuation in the 16S rRNA gene of archaeal thermophiles. The studied archaeal thermophiles were Archaeoglobus fulgidus, Methanothermobacter thermautotrophicus, Methanocaldococcus jannaschii, Pyrococcus horikoshii, and Thermoplasma acidophilum. These five archaea-euryarchaeota thermophiles have fractal dimension values ranging from 1.93 to 1.97. Computer simulation shows that random sequences would have an average of about 2 with a standard deviation of about 0.015. The fractal dimension was found to correlate negatively with the thermophile's optimal growth temperature, with an R2 value of 0.90 (N = 5). The inclusion of two archaea-crenarchaeota thermophiles reduces the R2 value to 0.66 (N = 7). Further inclusion of two bacterial thermophiles reduces the R2 value to 0.50 (N = 9). The fractal dimension correlates positively with the sequence GC content, with an R2 value of 0.89 for the five archaea-euryarchaeota thermophiles (and 0.74 for the entire set of N = 9), although computer simulation shows little correlation. The highest (positive) correlation was found between the fractal dimension and the di-nucleotide Shannon entropy. However, Shannon entropy and sequence GC content were observed to correlate with optimal growth temperature, with R2 values of 0.8 (negative) and 0.88 (positive), respectively, for the entire set of 9 thermophiles; thus the correlation lacks species specificity.
Together with another correlation study of bacterial radiation dosage with RecA repair gene sequence fractal dimension, it is postulated that fractal dimension analysis is a sensitive tool for studying the relationship between genotype and phenotype among closely related sequences.
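As a sketch of the pipeline, nucleotides can be mapped to the proton numbers of the free bases and a fractal dimension estimated from the resulting series. The mapping below and the Higuchi estimator are assumptions: the abstract does not state the exact assignment or FD method used.

```python
import numpy as np

# proton numbers of the free DNA bases (assumed mapping: adenine C5H5N5 = 70,
# guanine C5H5N5O = 78, cytosine C4H5N3O = 58, thymine C5H6N2O2 = 66)
PROTONS = {"A": 70, "G": 78, "C": 58, "T": 66}

def higuchi_fd(x, kmax=8):
    """Higuchi estimator of the fractal dimension of a 1-D series -- one
    common FD estimator, used here for illustration."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_len, log_inv_k = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                      # k interleaved sub-series
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            d = np.abs(np.diff(x[idx])).sum()
            # Higuchi normalisation of the curve length at scale k
            lengths.append(d * (n - 1) / ((len(idx) - 1) * k) / k)
        log_len.append(np.log(np.mean(lengths)))
        log_inv_k.append(np.log(1.0 / k))
    return np.polyfit(log_inv_k, log_len, 1)[0]  # slope = fractal dimension

# a random sequence should give a dimension near 2, as the abstract notes
rng = np.random.default_rng(1)
seq = "".join(rng.choice(list("ACGT"), 2000))
fd = higuchi_fd([PROTONS[b] for b in seq])
```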
Abstract: In this paper, we propose a method to reduce quantization
error. In H.264/AVC, low-pass filtering is applied to the neighboring
samples of the current block in order to reduce quantization error.
This has the weakness, however, that the low-pass filtering is
performed regardless of the prediction direction; because it does not
consider the prediction direction, it may not reduce quantization
error effectively. The proposed method takes the prediction direction
into account for the low-pass filtering and uses a threshold
condition to reduce flag bits. Compared with the conventional method
in H.264/AVC, the proposed method achieves an average bit-rate
reduction of 1.534%, with reductions between 0.580% and 3.567%
observed across the experiments.
Abstract: This paper presents a numerical analysis of the
performance of a three-bladed Darrieus vertical-axis wind turbine
based on the DU91-W2-250 airfoil. A complete campaign of 2-D
simulations, based on unsteady RANS calculations and carried out for
several values of the tip speed ratio, has been performed to obtain
the rotor torque and power curves. Rotor performance has been
compared with the results of a previous work based on the use of the
NACA 0021 airfoil. Both the power coefficient and the torque
coefficient have been determined as a function of the tip speed ratio.
The flow field around the rotor blades has also been analyzed. As a
final result, the performance of the rotor based on the DU airfoil
appears to be lower than that of the rotor based on the NACA 0021
blade section. This behavior could be due to the higher stall
characteristics of the NACA profile, with the separation zone at the
trailing edge being more extended for the DU airfoil.
Abstract: The paper provides a numerical investigation of the
entropy generation analysis due to natural convection in an inclined
square porous cavity. The coupled equations of mass, momentum,
energy and species conservation are solved using the Control Volume
Finite-Element Method. The effects of medium permeability and
inclination angle on entropy generation are analysed. It was found
that, depending on the Darcy number and the porous thermal Rayleigh
number, the entropy generation could be due mainly to heat transfer
or to fluid friction irreversibility, and that it reaches extremum
values at specific inclination angles.
Abstract: With optimized bandwidth and latency discrepancy ratios, Node Gain Scores (NGSs) are determined and used as the basis for shaping a max-heap overlay. The NGSs, determined as the respective bandwidth-latency products, govern the construction of max-heap-form overlays. Each NGS is earned as a synergy of the discrepancy ratio of the bandwidth requested with respect to the estimated available bandwidth, and the latency discrepancy ratio between the node and the source node. The tree leads to enhanced-delivery overlay multicasting, increasing packet delivery that could otherwise be hindered by the induced packet loss occurring in schemes that do not consider the synergy of these parameters when placing nodes on the overlays. The NGS is a function of four main parameters: the estimated available bandwidth, Ba; the individual node's requested bandwidth, Br; the proposed node latency to its prospective parent, Lp; and the suggested best latency as advised by the source node, Lb. The bandwidth discrepancy ratio (BDR) and latency discrepancy ratio (LDR) carry weights of α and (1,000 − α), respectively, with α arbitrarily chosen between 0 and 1,000 to ensure that the NGS values, used as node IDs, maintain a good possibility of uniqueness and a balance between the more critical of the BDR and the LDR. A max-heap-form tree is constructed under the assumption that all nodes possess an NGS less than that of the source node. To maintain load balance, the children of each level's siblings are evenly distributed, such that a node cannot accept a second child until all of its siblings able to do so have already acquired the same number of children; this is done logically from left to right in a conceptual overlay tree. Records of the pair-wise approximate available bandwidths, as measured by a pathChirp scheme at individual nodes, are maintained.
Evaluation against other schemes – Bandwidth Aware multicaSt architecturE (BASE), Tree Building Control Protocol (TBCP), and Host Multicast Tree Protocol (HMTP) – has been conducted. The new scheme generally performs better in terms of the trade-off among packet delivery ratio, link stress, control overhead, and end-to-end delay.
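A minimal sketch of the NGS computation and max-heap ranking follows. The abstract names the quantities each discrepancy ratio compares but not the exact formulas, so the simple ratios below are our assumption; the node figures are hypothetical.

```python
import heapq

def node_gain_score(Ba, Br, Lp, Lb, alpha=500.0):
    """Node Gain Score as a weighted synergy of the bandwidth discrepancy
    ratio (BDR, weight alpha) and the latency discrepancy ratio (LDR,
    weight 1000 - alpha).  The ratio definitions are assumptions."""
    bdr = Br / Ba          # requested vs. estimated available bandwidth
    ldr = Lb / Lp          # source-advised best latency vs. proposed latency
    return alpha * bdr + (1000.0 - alpha) * ldr

# rank three candidate nodes, each described by (Ba, Br, Lp, Lb)
nodes = {"n1": (10.0, 4.0, 30.0, 20.0),
         "n2": (8.0, 6.0, 25.0, 20.0),
         "n3": (12.0, 5.0, 40.0, 20.0)}
heap = [(-node_gain_score(*v), name) for name, v in nodes.items()]
heapq.heapify(heap)        # heapq is a min-heap, so scores are negated
best = heap[0][1]          # node with the highest NGS
```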
Abstract: In this work, statistical experimental design was applied
for the optimization of medium constituents for gentamicin production
by Micromonospora echinospora subsp. pallida (MTCC 708) in a batch
reactor, and the results were compared with the ANN-predicted values.
Using a central composite design, 50 experiments were carried out for
five test variables: starch, soya bean meal, K2HPO4, CaCO3 and FeSO4.
The optimum condition was found to be: starch (8.9 g/L), soya bean
meal (3.3 g/L), K2HPO4 (0.8 g/L), CaCO3 (4 g/L) and FeSO4 (0.03 g/L).
At these optimized conditions, the yield
of gentamicin was found to be 1020 mg/L. The R2 values were found
to be 1 for ANN training set, 0.9953 for ANN test set, and 0.9286 for
RSM.
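The 50-run count is consistent with a five-factor central composite design: 2⁵ = 32 factorial points, 2·5 = 10 axial points, and 8 centre replicates. A sketch generating such a design in coded units (the axial distance and centre count are illustrative choices, not taken from the paper):

```python
from itertools import product

def central_composite(k, n_center=8, alpha=2.0):
    """Central composite design in coded units: 2**k factorial points,
    2k axial points at +/-alpha, and replicated centre points.  For k = 5
    with 8 centre replicates this gives 32 + 10 + 8 = 50 runs."""
    factorial = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(pt)
    centre = [[0.0] * k for _ in range(n_center)]
    return factorial + axial + centre

design = central_composite(5)   # 50 experimental runs for 5 factors
```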
Abstract: Solar sunspot rotation and latitudinal bands are studied based on intelligent computation methods. A combination of an image fusion method with quad-tree decomposition is used to obtain quantitative values for the latitudes of the trajectories on the sun's surface around which sunspots rotate. Daily solar images taken with the SOlar and Heliospheric Observatory (SOHO) satellite are fused for each month separately. The fused image is then decomposed with the quad-tree decomposition method in order to obtain precise information about the latitudes of the sunspot trajectories. Such analysis is useful for gathering information about the regions of the sun's surface, and the coordinates in space, that are more exposed to solar geomagnetic storms, tremendous flares and hot plasma gases permeating interplanetary space, and it can help protect technical systems. Here, sunspot images from September, October and November 2001 are used to study the magnetic behavior of the sun.