Abstract: Finding the shortest path between two positions is a
fundamental problem in transportation, routing, and communications
applications. In robot motion planning, the robot must pass around the obstacles without touching any of them, i.e. the goal is to find a collision-free path from a starting position to a target position. This task has many specific formulations depending on the shape of the obstacles, the allowable directions of movement, the knowledge of the scene, etc. Research on path planning has yielded many fundamentally different approaches to its solution, mainly based on various decomposition and roadmap methods. In this paper, we show a possible use of
visibility graphs in point-to-point motion planning in the Euclidean
plane and an alternative approach using Voronoi diagrams that
decreases the probability of collisions with obstacles. The second
application area investigated here focuses on problems of finding
minimal networks connecting a set of given points in the plane using
either only straight connections between pairs of points (minimum
spanning tree) or allowing the addition of auxiliary points to the set
to obtain shorter spanning networks (minimum Steiner tree).
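To make the second problem concrete, the sketch below computes a Euclidean minimum spanning tree for a small point set with Prim's algorithm. This is only an illustrative baseline under assumed inputs (a list of planar coordinates), not the construction used in the paper; the O(n²) formulation is chosen for readability rather than speed.

```python
import math

def euclidean_mst(points):
    """Prim's algorithm for the Euclidean minimum spanning tree.

    points: list of (x, y) tuples; returns the tree as a list of edges (i, j).
    Illustrative O(n^2) sketch for small point sets.
    """
    n = len(points)
    in_tree = [False] * n
    best_dist = [math.inf] * n      # cheapest known connection to the tree
    best_edge = [-1] * n
    in_tree[0] = True
    for j in range(1, n):
        best_dist[j] = math.dist(points[0], points[j])
        best_edge[j] = 0
    edges = []
    for _ in range(n - 1):
        # attach the cheapest point not yet in the tree
        v = min((j for j in range(n) if not in_tree[j]), key=lambda j: best_dist[j])
        in_tree[v] = True
        edges.append((best_edge[v], v))
        for j in range(n):          # update candidate connections through v
            if not in_tree[j]:
                d = math.dist(points[v], points[j])
                if d < best_dist[j]:
                    best_dist[j], best_edge[j] = d, v
    return edges

# Example: four corners of a unit square plus its centre
print(euclidean_mst([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]))
```

For the four corners of a unit square alone, the minimum spanning tree has length 3, while the minimum Steiner tree adds two auxiliary junction points and has length 1 + √3 ≈ 2.73, illustrating why allowing extra points can only shorten the network.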
Abstract: The aim of this work was to compare the efficacy of two methods for loading proteins onto polymeric nanocarriers: adsorption and encapsulation. Preliminary protein-loading studies were carried out using Bovine Serum Albumin (BSA) as a model protein. Nanocarriers were prepared starting from poly(lactic-co-glycolic acid) (PLGA) polymer; the production methods used were two different variants of the emulsion-evaporation method. The nanoparticles obtained were analyzed in terms of size by Dynamic Light Scattering and of BSA loading efficiency by the Bradford assay. Loaded nanoparticles were then subjected to an in-vitro protein dissolution test in order to study the effect of the delivery system on the release rate of the protein.
Abstract: Twenty-four New Zealand White rabbits (12 does and 12 bucks) and twenty-four Flanders rabbits (12 does and 12 bucks) were allotted to two feeding regimes (6 of each breed, 3 males and 3 females): the first group was fed a commercial ration and the second was fed the commercial diet plus sodium butyrate (300 g/ton). The results showed that at the end of the 8-week experimental period New Zealand White rabbits had a heavier body weight than Flanders rabbits (1934.55±39.05 vs. 1802.5±30.99 g), a significantly higher body weight gain during the experimental period, especially during the 8th week (136.1±3.5 vs. 126.8±1.8 g/week), a better feed conversion ratio during all weeks of the experiment, from the first week (3.07±0.16 vs. 3.12±0.10) to the 8th week (5.54±0.16 vs. 5.76±0.07), and a significantly higher dressing percentage (0.54±0.01 vs. 0.52±0.01). All carcass cuts were also significantly heavier in New Zealand White rabbits than in Flanders. Female rabbits (at the same age) had a lower body weight than males from the start of the experiment (941.1±39.8 vs. 972.1±33.5 g) to its end (1833.64±37.69 vs. 1903.41±36.93 g), gained less during all weeks of the experiment except the 8th week (132.1±2.3 vs. 130.9±3.4 g/week), and had a lower dressing percentage (0.52±0.01 vs. 0.53±0.01) and lighter carcass cuts than males; however, they had a better feed conversion ratio during the 1st, 7th and 8th weeks of the experiment. Addition of 300 g sodium butyrate per ton of rabbit diet increased the body weight of rabbits at the end of the experimental period (1882.71±26.45 vs. 1851.5±49.82 g), improved body weight gain in the 3rd, 4th, 5th, 6th and 7th weeks of the experiment, and significantly improved the feed conversion ratio during all weeks of the experiment, from the 1st week (2.85±0.07 vs. 3.30±0.15) to the 8th week (5.51±0.12 vs. 5.77±0.12). The dressing percentage was also higher in the sodium butyrate-fed groups than in the control group (0.53±0.01 vs. 0.52±0.01), and the most important result of feeding sodium butyrate was the reduction of mortality during the 8-week experiment to zero, compared with 16% in the control group.
Abstract: This paper starts with a critical view of the beautiful female images in the mass media, which are frequently generated by a stereotypical Korean concept of beauty. Several female beauty myths have evolved in Korea during the present decade. Nearly all of them have formed due to a deeply ingrained androcentric ideology which objectifies women. The mass media cause the public to hold a distorted concept of female beauty, and there is a huge gap between women in reality and the representative women shown in the mass media. It is essential to have an unbiased perception of the female images presented in the mass media. Because cosmetics advertisements project contemporary images of female beauty to promote products, cosmetics imagery will be examined with regard to the female beauty myths portrayed by the mass media. This paper will analyze the features of female beauty myths in Korea and their intrinsic characteristics.
Abstract: In this paper, a self-starting two-step continuous block hybrid formula (CBHF) with four off-step points is developed using
collocation and interpolation procedures. The CBHF is then used to
produce multiple numerical integrators which are of uniform order
and are assembled into a single block matrix equation. These
equations are simultaneously applied to provide the approximate
solution of stiff ordinary differential equations. The order of accuracy and the stability of the block method are discussed, and its accuracy is established numerically.
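As a much simplified illustration of the block idea (not the CBHF itself, which is derived by collocation with four off-step points and is of higher order), the sketch below assembles two backward Euler formulas into one 2×2 block matrix equation and applies it to an assumed stiff test problem y' = λy with λ = -1000, advancing two points per solve.

```python
import numpy as np

# Toy two-point block built from backward Euler formulas, applied to the
# assumed stiff test equation y' = lam*y.  This only illustrates assembling
# several implicit formulas into a single block matrix equation that is
# solved once per block; it is not the paper's CBHF.
lam, h, y0, t_end = -1000.0, 0.01, 1.0, 0.1

t, y = 0.0, y0
while t < t_end - 1e-12:
    # Block equations for y1 = y(t+h) and y2 = y(t+2h):
    #   y1 - h*lam*y1 = y0
    #   y2 - h*lam*y2 = y1
    A = np.array([[1.0 - h * lam, 0.0],
                  [-1.0,          1.0 - h * lam]])
    b = np.array([y, 0.0])
    y1, y2 = np.linalg.solve(A, b)
    y, t = y2, t + 2 * h

print(f"y({t:.2f}) = {y:.3e} (exact {np.exp(lam * t):.3e}); stable despite lam*h = {lam * h}")
```

Solving both block points simultaneously is also what keeps such schemes self-starting: unlike a conventional multistep method, no separate one-step method is needed to generate additional starting values.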
Abstract: With the advancement of wireless sensor network technology, its practical utilization is becoming an important challenge. This paper gives an overview of our past environmental monitoring project and discusses the process of starting such monitoring by classifying it into four steps. The steps needed to start environmental monitoring can be complicated, but they are not well discussed by researchers of wireless sensor network technology. This paper describes our activities and challenges in each of the four steps to ease the process, and discusses future challenges in enabling a quick start of environmental monitoring.
Abstract: In this paper, biodiesel production from fish oil catalyzed by a naturally immobilized lipase, Carica papaya lipase, was studied. Refined fish oil, extracted from discarded parts of fish, was used as the starting material for biodiesel production. The effects of the oil-to-methanol molar ratio, lipase dosage, initial water activity of the lipase, temperature and solvent were investigated. It was found that Carica papaya lipase was suitable for the methanolysis of fish oil to produce methyl ester. The maximum yield of methyl ester reached 83% under the optimal reaction conditions: oil-to-methanol molar ratio of 1:4, 20% (based on oil) of lipase, initial water activity of the lipase of 0.23 and 20% (based on oil) of tert-butanol at 40°C after 18 h of reaction time. There was negligible loss in lipase activity even after repeated use for 30 cycles.
Abstract: Using Dynamic Bayesian Networks (DBN) to model genetic regulatory networks from gene expression data is one of the major paradigms for inferring the interactions among genes. Averaging over a collection of models when predicting the network is preferable to relying on a single high-scoring model. In this paper, two kinds of model-searching approaches are compared: Greedy hill-climbing Search with Restarts (GSR) and Markov Chain Monte Carlo (MCMC) methods. GSR is preferred in many papers, but there has been no comparative study of which approach is better for DBN models. Different types of experiments have been carried out to benchmark these approaches. Our experimental results demonstrate that on average the MCMC methods outperform GSR in the accuracy of the predicted network while having comparable time efficiency. By proposing different variations of MCMC and employing a simulated annealing strategy, the MCMC methods become more efficient and stable. Apart from the comparison between these approaches, another objective of this study is to investigate the feasibility of using DBN modeling approaches to infer gene networks from a few snapshots of high-dimensional gene profiles. Through both synthetic and systematic data experiments, the results reveal how the performance of these approaches is influenced as the target gene network varies in network size, data size, and system complexity.
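A minimal sketch of the first of the two search strategies is given below: greedy hill-climbing with restarts over the set of directed edges of a DBN (parent at time t-1, child at time t). The scoring function, the toy score in the example, and all parameter values are placeholders for whatever network score (e.g. BIC or BDe computed from expression data) an actual study would use.

```python
import itertools
import random

def hill_climb_with_restarts(n_genes, score, n_restarts=10, max_steps=200, seed=0):
    """Greedy hill-climbing Search with Restarts (GSR) over DBN edge sets.

    A structure is a set of directed edges (parent at t-1 -> child at t);
    `score` is any user-supplied network score.  Illustrative sketch only.
    """
    rng = random.Random(seed)
    candidates = list(itertools.product(range(n_genes), repeat=2))
    best_edges, best_score = None, float("-inf")
    for _ in range(n_restarts):
        edges = {e for e in candidates if rng.random() < 0.2}   # random start
        current = score(edges)
        for _ in range(max_steps):
            improved = False
            for e in candidates:
                trial = edges ^ {e}          # toggle one edge (add or delete)
                s = score(trial)
                if s > current:              # first improving neighbour
                    edges, current, improved = trial, s, True
                    break
            if not improved:
                break                        # local optimum reached
        if current > best_score:
            best_edges, best_score = set(edges), current
    return best_edges, best_score

# Toy score: prefer a small network that contains the edge (0, 1)
toy = lambda E: (2.0 if (0, 1) in E else 0.0) - 0.1 * len(E)
print(hill_climb_with_restarts(n_genes=3, score=toy))
```

An MCMC alternative would propose the same single-edge toggles but accept a score-decreasing move with probability exp(Δscore / T), which is what lets it escape the local optima that the greedy search can only restart away from.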
Abstract: Recently, grid computing has attracted wide attention in the science, industry, and business fields, which require a vast amount of computing. Grid computing provides an environment in which many nodes (i.e., many computers) are connected with each other through a local/global network and made available to many users. In this environment, to carry out data processing among nodes for any application, each node performs mutual authentication using certificates issued by the Certificate Authority (CA for short). However, if a failure or fault occurs in the CA, no new certificates can be issued, and as a result a new node cannot participate in the grid environment. In this paper, an off-the-shelf scheme for dependable grid systems using virtualization techniques is proposed and its implementation is verified. The proposed approach uses virtualization techniques to restart an application, e.g., the CA, if it has failed, so the system can tolerate a failure or fault occurring in the CA. Since the proposed scheme is easily implemented at the application level, its implementation cost for the system builder is small compared with other methods. Simulation results show that the CA in the system can recover from its failure or fault.
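The restart idea can be sketched at the application level as a simple watchdog loop: if the monitored process (standing in for the CA) exits, it is started again after a short back-off. The command name below is hypothetical, and the actual proposal restarts the CA inside a virtual machine, which this sketch does not model.

```python
import subprocess
import time

# Minimal application-level watchdog illustrating the restart idea only.
# "ca_service.py" is a hypothetical entry point standing in for the CA;
# the paper's scheme additionally relies on virtualization, not shown here.
CA_COMMAND = ["python", "ca_service.py"]

while True:
    proc = subprocess.Popen(CA_COMMAND)     # start (or restart) the service
    proc.wait()                             # block until it exits or crashes
    print("CA process terminated; restarting in 5 s ...")
    time.sleep(5)                           # back-off before restarting
```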
Abstract: The protection of the contents of digital products is referred to as content authentication. In some applications, being able to authenticate a digital product can be essential; for example, if a digital product is used as a piece of evidence in court, its integrity could mean life or death for the accused. Generally, the problem of content authentication can be solved using semi-fragile digital watermarking techniques. Recently, many authors have proposed Computer Generated Hologram Watermarking (CGH-Watermarking) techniques. Starting from these studies, in this paper a semi-fragile Computer Generated Hologram coding technique is proposed, which is able to detect malicious tampering while tolerating some incidental distortions. The proposed technique uses an encrypted image as the watermark and is well suited to digital image authentication.
Abstract: In this study, Li4SiO4 powder was successfully synthesized via the sol-gel method followed by drying at 150°C. Lithium oxide (Li2O) and silicon oxide (SiO2) were used as the starting materials, with citric acid as the chelating agent. The obtained powder was then sintered at various temperatures. Crystallographic phase, morphology and ionic conductivity were investigated systematically, employing X-ray diffraction, Fourier Transform Infrared spectroscopy, Scanning Electron Microscopy and AC impedance spectroscopy. The XRD results showed the formation of a pure monoclinic Li4SiO4 crystal structure with lattice parameters a = 5.140 Å, b = 6.094 Å, c = 5.293 Å and β = 90° in the sample sintered at 750°C. This observation was confirmed by FTIR analysis. The bulk conductivity of this sample at room temperature was 3.35 × 10^-6 S cm^-1, and the highest bulk conductivity of 1.16 × 10^-4 S cm^-1 was obtained at 100°C. The results indicate that the Li4SiO4 compound has potential to be used as a host for a LISICON-structured solid electrolyte for low-temperature applications.
Abstract: As reported in the literature, about 70% of improvement initiatives fail, and a significant number do not even get started. This paper analyses the problem of failing initiatives in Software Process Improvement (SPI) and proposes good practices, supported by motivational tools, that can help minimize failures. It elaborates on the hypothesis that human factors are poorly addressed by deployers, especially because implementation guides usually emphasize only technical factors. This research was conducted with SPI deployers and analyses 32 SPI initiatives. The results indicate that although human factors are not commonly highlighted in guidelines, the successful initiatives usually address human factors implicitly. This research shows that practices based on human factors indeed play a crucial role in successful implementations of SPI, proposes change management as a theoretical framework to introduce those practices in the SPI context, and suggests some motivational tools, based on SPI deployers' experience, to support it.
Abstract: The building sector is the largest energy consumer and
CO2 emitter in the European Union (EU) and therefore the active
reduction of energy consumption and elimination of energy wastage
are among its main goals. Healthy housing and energy
efficiency are affected by many factors which set challenges to
monitoring, control and research of indoor air quality (IAQ) and
energy consumption, especially in old buildings. These challenges
include measurement and equipment costs, for example.
Additionally, the measurement results are difficult to interpret and
their usage in the ventilation control is also limited when taking into
account the energy efficiency of housing at the same time. The main
goal of this study is to develop a cost-effective building monitoring
and control system, especially for old buildings. The starting point, and the keyword, of the development process is a wireless system; otherwise the installation costs would become too high. As its main result, this paper describes the concept of a wireless building monitoring and control
system. The first prototype of the system has been installed in 10
residential buildings and in 10 school buildings located in the City of
Kuopio, Finland.
Abstract: In this paper, the usefulness of a quasi-Newton iterative procedure for parameter estimation of the conditional variance equation within the BHHH algorithm is presented. Analytical maximization of the likelihood function using first and second derivatives is too complex when the variance is time-varying. The advantage of the BHHH algorithm over other optimization algorithms is that it requires no third derivatives and has assured convergence. To simplify the optimization procedure, the BHHH algorithm uses an approximation of the matrix of second derivatives based on the information identity. However, parameter estimation in the (a)symmetric GARCH(1,1) model assuming normally distributed returns is not that simple, i.e. it is difficult to solve analytically. The maximum of the likelihood function can be found by an iterative procedure that continues until no further increase is achieved. Because the solutions of the numerical optimization are very sensitive to the initial values, starting parameters for the GARCH(1,1) model are defined. The number of iterations can be reduced by using starting values close to the global maximum. The optimization procedure is illustrated in the framework of modelling the daily volatility of the most liquid stocks on the Croatian capital market: Podravka (food industry), Petrokemija (fertilizer industry) and Ericsson Nikola Tesla (information and communications industry).
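To make the estimation step concrete, the sketch below evaluates the per-observation Gaussian log-likelihood of a GARCH(1,1) and performs one damped BHHH iteration, replacing the Hessian by the outer product of numerically estimated per-observation scores (the information identity). The simulated return series, starting values, step length and difference step are assumptions standing in for the Croatian stock data and the tuning an actual estimation would require.

```python
import numpy as np

def garch11_loglik_terms(theta, r):
    """Per-observation log-likelihoods of a GARCH(1,1) with normal errors.
    theta = (omega, alpha, beta); r is a demeaned return series.
    Sketch only: positivity/stationarity constraints are not enforced."""
    omega, alpha, beta = theta
    sigma2 = np.empty(len(r))
    sigma2[0] = np.var(r)                               # common starting value
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return -0.5 * (np.log(2 * np.pi) + np.log(sigma2) + r ** 2 / sigma2)

def bhhh_step(theta, r, lam=0.2, eps=1e-7):
    """One damped BHHH iteration: Hessian ~ outer product of the scores."""
    theta = np.asarray(theta, dtype=float)
    base = garch11_loglik_terms(theta, r)
    G = np.empty((len(r), len(theta)))                  # per-observation scores
    for i in range(len(theta)):
        bumped = theta.copy()
        bumped[i] += eps
        G[:, i] = (garch11_loglik_terms(bumped, r) - base) / eps
    direction = np.linalg.solve(G.T @ G, G.sum(axis=0))
    return theta + lam * direction

# Simulated GARCH(1,1) returns as a stand-in for the daily stock returns
rng = np.random.default_rng(0)
omega, alpha, beta = 1e-5, 0.05, 0.90
r, s2 = np.empty(1000), omega / (1 - alpha - beta)
for t in range(1000):
    r[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega + alpha * r[t] ** 2 + beta * s2

theta0 = np.array([2e-5, 0.10, 0.80])                   # assumed starting values
print("one BHHH step:", theta0, "->", bhhh_step(theta0, r))
# in practice the iteration is repeated until the likelihood stops increasing
```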
Abstract: Mobile marketing through mobile messaging services has shown highly impressive growth, as it enables e-business firms to communicate with their customers effectively. Educational institutions have hence started using this service to enhance communication with their students. Previous studies, however, provide limited understanding of applying mobile messaging services in education. This study proposes a theoretical model to understand the drivers of students' intentions to use the university's mobile messaging service. The model indicates that social influence, perceived control and attitudes affect students' intention to use the university's mobile messaging service. It also provides five antecedents of students' attitudes: perceived utility (information utility, entertainment utility, and social utility), innovativeness, information seeking, transaction specificity (content specificity, sender specificity, and time specificity) and privacy concern. The proposed model enables universities to understand what concerns students have about the use of a mobile messaging service in universities and to handle the service more effectively. The paper discusses the model development and concludes with the limitations and implications of the proposed model.
Abstract: This paper analyses the unsteady, two-dimensional
stagnation point flow of an incompressible viscous fluid over a flat
sheet when the flow is started impulsively from rest and at the same
time, the sheet is suddenly stretched in its own plane with a velocity
proportional to the distance from the stagnation point. The partial
differential equations governing the laminar boundary layer forced
convection flow are non-dimensionalised using semi-similar
transformations and then solved numerically using an implicit finite-difference scheme known as the Keller-box method. Results
pertaining to the flow and heat transfer characteristics are computed
for all dimensionless time, uniformly valid in the whole spatial region
without any numerical difficulties. Analytical solutions are also
obtained for both small and large times, respectively representing the
initial unsteady and final steady state flow and heat transfer.
Numerical results indicate that the velocity ratio parameter is found
to have a significant effect on skin friction and heat transfer rate at
the surface. Furthermore, it is shown that there is a smooth
transition from the initial unsteady state flow (small time solution) to
the final steady state (large time solution).
Abstract: This paper presents an information retrieval model on
XML documents based on tree matching. Queries and documents are
represented by extended trees. An extended tree is built starting from the original tree by adding weighted virtual links between each node and its indirect descendants, allowing each descendant to be reached directly. Therefore only one level separates each node from its indirect descendants. This allows the user query and the document to be compared flexibly while respecting the structural constraints of the query. The content of each node is very important for deciding whether a document element is relevant or not, so the content should be taken into account in the retrieval process. We separate the structure-based and the content-based retrieval processes. The content-based score of each node is commonly based on the well-known Tf × Idf criterion. In this paper, we compare this criterion with another one that we call Tf × Ief. The comparison is based on experiments on a dataset provided by INEX, showing the effectiveness of our approach on the one hand and that of both weighting functions on the other.
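The content-based part can be illustrated with a tiny Tf × Idf computation over element contents, shown below. The element paths, texts and query are made-up, and treating every element as its own "document" is an assumption for illustration; the Tf × Ief variant studied in the paper differs in the collection statistics used, which are not reproduced here.

```python
import math
from collections import Counter

# Toy illustration of the content-based score: each element is a bag of
# words, and a query term is weighted by its frequency in the element (Tf)
# times its rarity across the collection of elements (Idf).
elements = {
    "article/title":    "xml retrieval model".split(),
    "article/abstract": "tree matching for xml documents".split(),
    "article/body/sec": "weighted virtual links between nodes".split(),
}

def tf_idf(query_terms, element_id):
    words = elements[element_id]
    tf = Counter(words)
    n = len(elements)
    score = 0.0
    for term in query_terms:
        df = sum(1 for w in elements.values() if term in w)  # elements containing the term
        if df and tf[term]:
            score += (tf[term] / len(words)) * math.log(n / df)
    return score

for eid in elements:
    print(eid, round(tf_idf(["xml", "tree"], eid), 3))
```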
Abstract: This paper presents a heuristic approach to solve the Generalized Assignment Problem (GAP), which is NP-hard. It is worth mentioning that many researchers have developed algorithms for identifying the redundant constraints and variables in linear programming models. Some of these algorithms use the intercept matrix of the constraints to identify redundant constraints and variables prior to the start of the solution process. Here, a new heuristic approach based on the dominance property of the intercept matrix is proposed to find an optimal or near-optimal solution of the GAP. In this heuristic, redundant variables of the GAP are identified by repeatedly applying the dominance property of the intercept matrix. This heuristic approach is tested on 90 benchmark problems of sizes up to 4000, taken from the OR-Library, and the results are compared with the optimum solutions. The computational complexity of solving the GAP with this approach is shown to be O(mn²). The performance of our heuristic is compared with the best state-of-the-art heuristic algorithms with respect to the quality of the solutions obtained. The encouraging results, especially for relatively large test problems, indicate that this heuristic approach can successfully be used for finding good solutions to highly constrained NP-hard problems.
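For readers unfamiliar with the GAP, the sketch below shows a generic regret-based greedy heuristic on a tiny made-up instance. It is a baseline for intuition only and is not the intercept-matrix dominance heuristic proposed in the paper, whose details (and the OR-Library instances) are not reproduced here.

```python
def greedy_gap(cost, resource, capacity):
    """Regret-based greedy heuristic for the minimisation GAP: every job is
    assigned to exactly one agent without exceeding agent capacities.
    cost[i][j], resource[i][j]: cost / resource use of job j on agent i;
    capacity[i]: capacity of agent i.  Returns (assignment, total_cost) or
    None if the greedy pass fails to find a feasible assignment."""
    m, n = len(cost), len(cost[0])
    remaining = list(capacity)
    assignment = [None] * n

    def regret(j):                            # gap between best and second-best agent
        c = sorted(cost[i][j] for i in range(m))
        return c[1] - c[0] if m > 1 else c[0]

    for j in sorted(range(n), key=regret, reverse=True):
        feasible = [i for i in range(m) if resource[i][j] <= remaining[i]]
        if not feasible:
            return None
        i = min(feasible, key=lambda a: cost[a][j])   # cheapest feasible agent
        assignment[j] = i
        remaining[i] -= resource[i][j]
    total = sum(cost[assignment[j]][j] for j in range(n))
    return assignment, total

# Two agents, four jobs
cost     = [[6, 2, 5, 7], [3, 8, 4, 4]]
resource = [[3, 1, 3, 2], [2, 4, 2, 3]]
print(greedy_gap(cost, resource, capacity=[5, 6]))
```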
Abstract: The noise level has critical effects on the diagnostic performance of the signal-averaged electrocardiogram (SAECG), because the true starting and end points of the QRS complex can be masked by residual noise and are sensitive to the noise level. Several studies and commercial machines have used a fixed number of heart beats (typically between 200 and 600 beats) or a predefined noise level (typically between 0.3 and 1.0 μV) in each of the X, Y and Z leads to perform SAECG analysis. However, the different criteria or methods used to perform SAECG cause discrepancies in the noise levels among study subjects. According to the recommendations of the 1991 ESC, AHA and ACC Task Force Consensus Document for the use of SAECG, the determination of onset and offset is closely related to the mean and standard deviation of the noise sample. Hence, this study performs SAECG using consistent root-mean-square (RMS) noise levels among study subjects and analyzes the effects of the noise level on SAECG. It also evaluates the differences between normal subjects and chronic renal failure (CRF) patients in the time-domain SAECG parameters.
The study subjects comprised 50 normal Taiwanese subjects and 20 CRF patients. During signal-averaged processing, different RMS noise levels were set to evaluate their effects on three time-domain parameters: (1) the filtered total QRS duration (fQRSD), (2) the RMS voltage of the last 40 ms of the QRS (RMS40), and (3) the duration of low-amplitude signals below 40 μV (LAS40). The results demonstrated that reducing the RMS noise level increases fQRSD and LAS40, decreases RMS40, and further increases the differences in fQRSD and RMS40 between normal subjects and CRF patients. The SAECG may also become abnormal due to the reduction of the RMS noise level. In conclusion, it is essential to establish diagnostic criteria for SAECG using consistent RMS noise levels in order to reduce the effects of the noise level.
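The two basic operations the analysis rests on, beat averaging and measurement of the residual RMS noise, can be sketched as below. The synthetic "beats", the sampling rate, the noise amplitude and the choice of the last 40 ms as a signal-free window are all assumptions for illustration; real SAECG processing also requires beat detection, alignment, lead combination and bad-beat rejection.

```python
import numpy as np

# Sketch of signal averaging and of the RMS noise measured in a window
# assumed to be free of cardiac signal.  Amplitudes are in microvolts.
fs = 1000                                             # sampling rate, Hz (assumed)
n_beats, beat_len = 300, 600
t = np.arange(beat_len)
template = 1000.0 * np.exp(-((t - 150) / 15.0) ** 2)  # toy QRS bump, flat tail
rng = np.random.default_rng(1)
beats = template + 10.0 * rng.standard_normal((n_beats, beat_len))  # 10 uV noise

averaged = beats.mean(axis=0)                         # signal averaging
tail = averaged[-int(0.04 * fs):]                     # last 40 ms, assumed signal-free
rms_noise = np.sqrt(np.mean((tail - tail.mean()) ** 2))
print(f"RMS noise after averaging {n_beats} beats: {rms_noise:.2f} uV")
# expected roughly 10 / sqrt(300) ~ 0.58 uV, inside the 0.3-1.0 uV range cited above
```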
Abstract: Austenite and martensite denote phases of solids undergoing phase transformation, which we usually associate with materials and not with living organisms. This article provides an overview of bacterial proteins and structures that undergo phase transformation and suggests its probable effect on mechanical behavior. The context is mainly the role of phase transformations occurring in the flagellum of bacteria. Current knowledge of the molecular mechanisms leading to phase variation in living organisms is reviewed. Since each bacterial flagellum is driven by a separate motor, a similarity to the differential drive of four-wheeled vehicles is suggested. The article also suggests applying the mechanism by which bacteria change their direction of movement to facilitate single-point turning of a multi-wheeled vehicle. Finally, examples are presented to illustrate that motion due to the phase transformation of flagella in bacteria can open up a whole new line of research on motion mechanisms.