Abstract: Explosive forming is one of the unconventional techniques in which, most commonly, water is used as the pressure-transmission medium. One of the newest methods in explosive forming is gas detonation forming, which uses the normal shock wave derived from a gas detonation to form sheet metals. For this purpose, a detonation is developed from the reaction of an H2+O2 mixture in a long cylindrical detonation tube. The detonation wave travels through the tube and acts as a blast load on the steel blank, forming it. Experimental results are compared with a finite element model; the comparison of the experimental and numerical results covers strain, thickness variation and deformed geometry. Numerical and experimental results showed approximately 75-90% agreement in the formability of the desired shape. The optimum gas mixture was obtained at 68% H2 and 32% O2.
Abstract: This paper presents preliminary results on the modeling and control of a quadrotor UAV. Based on aerodynamic concepts, a mathematical model is first proposed to describe the dynamics of the quadrotor UAV. The parameters of this model are identified experimentally with the MATLAB System Identification Toolbox. A group of PID controllers is then designed based on the developed model. To verify the developed model and controllers, simulations and experiments for altitude control, position control and trajectory tracking are carried out. The results show that the quadrotor UAV follows the reference commands well, which clearly demonstrates the effectiveness of the proposed approach.
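As a minimal Python sketch of one loop in such a group of PID controllers, assuming illustrative gains and setpoints (none of the numbers come from the paper):

```python
# Minimal sketch of a discrete PID altitude controller; gains and the
# thrust mapping are illustrative assumptions, not identified parameters.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: altitude loop at 100 Hz with assumed gains.
altitude_pid = PID(kp=2.0, ki=0.5, kd=1.2, dt=0.01)
thrust_command = altitude_pid.update(setpoint=1.5, measurement=1.2)
```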
Abstract: Both minimum energy consumption and smoothness, which is quantified as a function of jerk, are generally needed in many dynamic systems, such as the automobile and the pick-and-place robot manipulator that handles fragile equipment. Nevertheless, many researchers concentrate solely on either the minimum energy consumption or the minimum jerk trajectory. This paper proposes a simple yet very interesting approach that combines the minimum energy and minimum jerk (via indirect jerk approaches) in designing the time-dependent system, yielding an alternative optimal solution. Extremal solutions for the cost functions of the minimum energy, the minimum jerk, and their combination are found using dynamic optimization methods together with numerical approximation. This allows us to simulate and compare, visually and statistically, the time history of the state inputs employed by the combined minimum energy and jerk designs. The numerical solutions of the minimum direct jerk and energy problems are exactly the same; the solutions of the minimum energy problem alone are similar, especially in terms of tendency.
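In the notation assumed here (the abstract does not give the paper's symbols), a combined cost functional of this kind weights squared control effort (energy) against squared jerk over the horizon $[0, T]$:

$$J = \int_{0}^{T} \Big( w_{1}\, u(t)^{2} + w_{2}\, \dddot{x}(t)^{2} \Big)\, dt, \qquad w_{1}, w_{2} \ge 0,$$

whose extremals follow from the Euler-Lagrange (or Pontryagin) conditions; setting $w_{2} = 0$ or $w_{1} = 0$ recovers the pure minimum-energy and minimum-jerk problems, respectively.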
Abstract: User-based collaborative filtering (CF), one of the most prevalent and efficient recommendation techniques, provides
personalized recommendations to users based on the opinions of other
users. Although the CF technique has been successfully applied in
various applications, it suffers from serious sparsity problems. The cloud-model approach addresses the sparsity problem by constructing the user's global preference, represented by a cloud eigenvector. The user-based CF approach works well with dense datasets, while the cloud-model CF approach performs better when the dataset is sparse. In this paper, we present a
hybrid approach that integrates the predictions from both the
user-based CF and the cloud-model CF approaches. The experimental
results show that the proposed hybrid approach can ameliorate the
sparsity problem and provide an improved prediction quality.
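As a minimal sketch of how two such predictors could be blended (the abstract does not specify the integration rule; the density-based weighting below is an assumption):

```python
import numpy as np

def hybrid_predict(p_user_cf, p_cloud_cf, density):
    """Blend user-based CF and cloud-model CF rating predictions.

    The linear weighting by rating-matrix density is an assumed
    integration scheme for illustration; the paper's actual
    combination rule may differ.
    """
    lam = np.clip(density, 0.0, 1.0)  # dense data -> trust user-based CF more
    return lam * p_user_cf + (1.0 - lam) * p_cloud_cf

# Example: a sparse dataset (density 0.05) leans on the cloud-model prediction.
print(hybrid_predict(p_user_cf=4.2, p_cloud_cf=3.6, density=0.05))
```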
Abstract: The Kumamoto area, Kyushu, Japan, covers 1,041 km2 and has a population of about 1 million. It is the largest area in Japan that depends on groundwater for all of its drinking water. Local groundwater use is about 200 MCM per year. The main groundwater recharge area is understood to lie in the rice-field zone in the middle of the Shira River Basin, where the infiltration rate of the irrigated water exceeds 100 mm/day. However, as the paddy-rice planting area has decreased through urbanization and an acreage-reduction policy, the groundwater balance has deteriorated. Since 2004, Kumamoto City and four companies have therefore provided financial support to increase groundwater recharge by ponding water in the fields. In this paper, the author reports on the recovery of groundwater due to this recharge and estimates the efficiency of the recharge by statistical methods.
Abstract: This paper describes an automatic algorithm to restore
the shape of three-dimensional (3D) left ventricle (LV) models created
from magnetic resonance imaging (MRI) data using a geometry-driven
optimization approach. Our basic premise is to restore the LV shape
such that the LV epicardial surface is smooth after the restoration. A
geometrical measure known as the Minimum Principal Curvature (κ2)
is used to assess the smoothness of the LV. This measure is used to
construct the objective function of a two-step optimization process.
The objective of the optimization is to achieve a smooth epicardial
shape by iterative in-plane translation of the MRI slices.
Quantitatively, this yields a minimum sum of the magnitudes of κ2 where κ2 is negative. A limited-memory quasi-Newton algorithm,
L-BFGS-B, is used to solve the optimization problem. We tested our
algorithm on an in vitro theoretical LV model and 10 in vivo
patient-specific models which contain significant motion artifacts. The
results show that our method is able to automatically restore the shape
of LV models back to smoothness without altering the general shape of
the model. The magnitudes of in-plane translations are also consistent
with existing registration techniques and experimental findings.
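As a minimal Python sketch of the optimization step, using SciPy's L-BFGS-B; the curvature computation is replaced by a toy surrogate, since rebuilding the epicardial surface from MRI slices is beyond a sketch:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_slices = 10
true_shift = rng.normal(0.0, 2.0, size=(n_slices, 2))  # synthetic motion artifact

def kappa2(offsets):
    """Toy surrogate for the minimum principal curvature field: the real
    pipeline would rebuild the epicardial surface after applying the
    per-slice in-plane translations and evaluate kappa2 on it."""
    residual = offsets.reshape(n_slices, 2) - true_shift
    return -np.abs(residual).ravel()  # misalignment -> negative curvature

def objective(offsets):
    k2 = kappa2(offsets)
    return np.sum(np.abs(k2[k2 < 0]))  # sum of |kappa2| where kappa2 < 0

# One (dx, dy) pair per slice, flattened; +/- 10 mm bounds are assumed.
x0 = np.zeros(2 * n_slices)
res = minimize(objective, x0, method="L-BFGS-B",
               bounds=[(-10.0, 10.0)] * (2 * n_slices))
print(res.x.reshape(n_slices, 2))  # recovered in-plane translations
```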
Abstract: The aim of this research is to design a collaborative
framework that integrates risk analysis activities into the geospatial
database design (GDD) process. Risk analysis is rarely undertaken iteratively as part of present GDD methods, in conformance with requirement engineering (RE) guidelines and risk standards. Accordingly, when risk analysis is performed during GDD, some foreseeable risks may be overlooked and may not reach the output specifications, especially when user intentions are not systematically collected. This may lead to ill-defined requirements and ultimately to higher risks of geospatial data misuse. The adopted approach consists
of 1) reviewing risk analysis process within the scope of RE and
GDD, 2) analyzing the challenges of risk analysis within the context
of GDD, and 3) presenting the components of a risk-based
collaborative framework that improves the collection of the
intended/forbidden usages of the data and helps geo-IT experts to
discover implicit requirements and risks.
Abstract: In order to develop forest management strategies in
tropical forest in Malaysia, surveying the forest resources and
monitoring the forest area affected by logging activities are essential. Tremendous effort has been devoted to classifying land cover related to forest resource management in this country, as it is a priority in all aspects of forest mapping using remote sensing and related technology such as GIS. In fact, classification is a compulsory step in any remote sensing research. Therefore, the main
objective of this paper is to assess the classification accuracy of a forest map classified from Landsat TM data using different numbers of reference data (200 and 388 reference points). The comparison was made through an observation approach (200 reference points) and a combined interpretation and observation approach (388 reference points). Five land cover classes, namely primary forest, logged-over forest, water bodies, bare land and agricultural crop/mixed horticulture, can be identified by differences in spectral response. Results showed that the overall accuracy with 200 reference points was 83.5% (kappa value 0.7502459; kappa variance 0.002871), which is considered acceptable or good for optical data. When the reference data were increased from 200 to 388 in the confusion matrix, the accuracy improved slightly from 83.5% to 89.17%, and the Kappa statistic increased from 0.7502459 to 0.8026135. These accuracies suggest that the strategy used for selecting training areas, the interpretation approach and the number of reference data are important for obtaining better classification results.
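For reference, the Kappa statistic cited above is the standard chance-corrected agreement measure computed from the confusion matrix:

$$\kappa = \frac{p_o - p_e}{1 - p_e}, \qquad p_o = \frac{1}{N}\sum_i n_{ii}, \qquad p_e = \frac{1}{N^2}\sum_i n_{i+}\, n_{+i},$$

where $n_{ii}$ are the diagonal (correctly classified) counts, $n_{i+}$ and $n_{+i}$ the row and column totals, and $N$ the total number of reference points.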
Abstract: Co-culturing a biohydrogen producer with dehalogenating microorganisms is a useful method for decomposing organochlorides by bioremediation. In this study, we combined these two characteristics from a biohydrogen producer, Rhodopseudomonas palustris, and a dehalogenating microorganism, Pseudomonas putida, to enhance halorespiration in R. palustris. The genes encoding the cytochrome P450cam system (camC, camA, and camB) from P. putida were expressed in R. palustris with a designated expression plasmid. All tested strains were cultured to log phase, and pentachloroethane (PCA) was then added to the media. The vector control strain degraded about 78% of the PCA after 16 hours, whereas the strain expressing the cytochrome P450cam system, CGA-camCAB, completely degraded the PCA within 12 hours. When grown on the chlorinated aromatic 3-chlorobenzoate as the sole carbon source, or with benzoate present as a co-substrate, CGA-camCAB showed a faster growth rate than the vector control strain.
Abstract: There are many approaches proposed for solving
Sudoku puzzles. One of them is by modelling the puzzles as block
world problems. Three models for Sudoku solvers based on this approach have been proposed. Each model expresses a Sudoku solver as a parameterized multi-agent system. In this work, we propose a
new model which is an improvement over the existing models. This
paper presents the development of a Sudoku solver that implements
all the proposed models. Some experiments have been conducted to
determine the performance of each model.
Abstract: This paper proposes the use of Bayesian belief
networks (BBN) as a higher-level health risk assessment for a dumping site of a lead battery smelter factory. On the basis of the
epidemiological studies, the actual hospital attendance records and
expert experiences, the BBN is capable of capturing the probabilistic
relationships between the hazardous substances and their adverse
health effects, and accordingly inferring the morbidity of the adverse
health effects. The provision of the morbidity rates of the related
diseases is more informative and can alleviate the drawbacks of
conventional methods.
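As a minimal numeric sketch of the kind of inference a BBN performs here, with a single exposure-disease link and assumed illustrative probabilities (not the study's parameters):

```python
# Two-node belief network: exposure -> disease, with assumed values.
p_exposed = 0.3                  # P(exposure to hazardous substance)
p_disease_given_exposed = 0.15   # P(disease | exposed)
p_disease_given_unexposed = 0.02 # P(disease | not exposed)

# Predictive inference: marginal morbidity of the adverse health effect.
p_disease = (p_disease_given_exposed * p_exposed
             + p_disease_given_unexposed * (1 - p_exposed))

# Diagnostic inference via Bayes' rule: P(exposed | disease).
p_exposed_given_disease = p_disease_given_exposed * p_exposed / p_disease
print(p_disease, p_exposed_given_disease)
```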
Abstract: In this work, we develop the concept of supercompression, i.e., compression beyond the compression standard used. In this context, the two compression rates are multiplied. Supercompression is based on super-resolution: that is, it is a data compression technique that superposes spatial image compression on top of bit-per-pixel compression to achieve very high compression ratios. If the compression ratio is very high, a convolutive mask is used inside the decoder to restore the edges and eliminate the blur. Finally, both the encoder and the complete decoder are implemented on General-Purpose computation on Graphics Processing Units (GPGPU) cards. Specifically, the mentioned mask is coded inside the texture memory of a GPGPU.
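As a minimal sketch of an edge-restoring convolutive mask applied after decoding; the 3x3 sharpening kernel is a common choice and an assumption here, since the abstract does not give the paper's mask:

```python
import numpy as np
from scipy.ndimage import convolve

# Unsharp-style sharpening kernel (assumed); the paper stores its mask
# in GPGPU texture memory, whereas this runs on the CPU for illustration.
kernel = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], dtype=np.float64)

def restore_edges(decoded_image):
    """Sharpen a blurred, decompressed image to restore its edges."""
    return np.clip(convolve(decoded_image, kernel, mode="nearest"), 0, 255)
```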
Abstract: Recently, various services such as television and the Internet have come to be received through various terminals. Greater convenience could be gained by receiving these services through a cellular phone terminal while away from home and then continuing to receive the same services through a large-screen digital television after coming home. At present, however, it is necessary to go through the same authentication processing again when using the TV at home. In this study, we have developed an authentication method that enables users to switch terminals in environments in which the user receives a service from a server through a terminal. Specifically, the method simplifies the server-side authentication when switching from one terminal to another by using previously authenticated information.
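As a minimal sketch of a handoff based on previously authenticated information; the one-time token scheme and names are illustrative assumptions, not the paper's protocol:

```python
import secrets

# The server issues a one-time handoff token to the already-authenticated
# terminal; the new terminal redeems it instead of re-authenticating fully.
sessions = {}        # session_id -> user
handoff_tokens = {}  # token -> session_id

def issue_handoff_token(session_id):
    token = secrets.token_urlsafe(16)
    handoff_tokens[token] = session_id
    return token

def redeem_handoff_token(token):
    session_id = handoff_tokens.pop(token, None)  # one-time use
    if session_id is None:
        raise PermissionError("invalid or already-used handoff token")
    return sessions[session_id]  # new terminal inherits the session

sessions["s1"] = "alice"           # cellular terminal already authenticated
t = issue_handoff_token("s1")      # handed to the TV out of band
print(redeem_handoff_token(t))     # TV joins the session as "alice"
```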
Abstract: The competitive learning is an adaptive process in
which the neurons in a neural network gradually become sensitive to
different input pattern clusters. The basic idea behind Kohonen's Self-Organizing Feature Maps (SOFM) is competitive learning. SOFM can generate mappings from high-dimensional signal spaces to lower-dimensional topological structures. The main features of such mappings are topology preservation, feature mapping and approximation of the probability distribution of the input patterns. To overcome
some limitations of SOFM, e.g., a fixed number of neural units and a
topology of fixed dimensionality, Growing Self-Organizing Neural
Network (GSONN) can be used. GSONN can change its topological
structure during learning. It grows by learning and shrinks by
forgetting. To speed up training and convergence, a new variant of GSONN, twin growing cell structures (TGCS), is presented here.
This paper first gives an introduction to competitive learning, SOFM
and its variants. Then, we discuss some GSONNs with fixed dimensionality, including growing cell structures, their variants and the author's model, TGCS. The paper ends with a comparison of test results and conclusions.
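As a minimal Python sketch of the competitive-learning update underlying SOFM; the unit count and learning rate are illustrative, and the neighborhood function of a full SOFM is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.random((16, 3))  # 16 neural units, 3-dimensional inputs

def train_step(x, lr=0.1):
    # Competition: the unit nearest the input wins.
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Adaptation: the winner becomes more sensitive to this input cluster.
    weights[winner] += lr * (x - weights[winner])
    return winner

for _ in range(1000):
    train_step(rng.random(3))
```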
Abstract: The development of entrepreneurial competences of
farmers has been pointed out as a necessary condition for the
modernization of the land in the face of globalization. However, the educational processes involved in such development have been little studied, especially in emerging economies. This
research aims to enlighten some of the critical issues behind the early
stages of the transformation of farmers into entrepreneurs, through in-depth interviews with farmers, entrepreneurial promoters and public
officials participating in a public pilot project in Mexico. Although
major impacts were expected only in the long run, important positive changes in the mindset of farmers and other participants were found in the early stages of the intervention. Apparently, the farmers started a process of becoming more conscious of the importance of preserving aquifer resources, as well as becoming more market- and entrepreneurially oriented.
Abstract: Lightning surges cause traveling waves and temporary overvoltages in the transmission line system. Lightning is the most harmful of these events, destroying transmission lines and installed devices, so it is necessary to study and analyze the temporary overvoltage in order to design and place the surge arrester. This analysis describes the shape of the lightning wave in a transmission line at the 115 kV voltage level in Thailand, using the ATP/EMTP program to model the transmission line and the lightning surge. Because of the limits of this program, the geometry of the transmission line and the surge parameters must be calculated manually, following the manual, to obtain the closest parameter values. Furthermore, to study the effects on the surge protector when lightning strikes, the surge arrester model must be correct and standardized according to the Metropolitan Electricity Authority's standard. The calculated results were also compared with real data. The results of the analysis show that the temporary overvoltage rises to 326.59 kV on the struck line when no surge arrester is installed in the system, whereas it is 182.83 kV on the struck line when a surge arrester is installed, and the duration of the traveling wave is also reduced. The surge arrester must be placed as near to the transformer as possible. Moreover, it is necessary to know the right placement distance and size of the surge arrester to prevent the temporary overvoltage effectively.
Abstract: A computer model of Quantum Theory (QT) has been
developed by the author. The major goal of the computer model was to support and demonstrate as large a scope of QT as possible.
This includes simulations for the major QT (Gedanken-) experiments
such as, for example, the famous double-slit experiment.
Besides the anticipated difficulties with (1) transforming exacting
mathematics into a computer program, two further types of problems
showed up, namely (2) areas where QT provides a complete mathematical
formalism, but when it comes to concrete applications the
equations are not solvable at all, or only with extremely high effort;
(3) QT rules which are formulated in natural language and which do
not seem to be translatable to precise mathematical expressions, nor
to a computer program.
The paper lists problems in all three categories and also describes the possible solutions or circumventions developed for the computer
model.
Abstract: Face recognition is a technique to automatically
identify or verify individuals. It receives great attention in
identification, authentication, security and many more applications.
Diverse methods have been proposed for this purpose, and many comparative studies have been performed. However, researchers have not reached a unified conclusion. In this paper, we report an extensive quantitative accuracy analysis of four of the most widely used face
recognition algorithms: Principal Component Analysis (PCA),
Independent Component Analysis (ICA), Linear Discriminant
Analysis (LDA) and Support Vector Machine (SVM) using AT&T,
Sheffield and Bangladeshi people face databases under diverse
situations such as illumination, alignment and pose variations.
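As a minimal sketch of the PCA (eigenfaces) stage common to such comparisons, in Python; the image representation and component count are illustrative assumptions:

```python
import numpy as np

def pca_features(faces, n_components=40):
    """Project mean-centered face images onto the top principal components.

    faces: (n_samples, n_pixels) matrix of flattened face images.
    Returns the low-dimensional features used for matching, plus the
    eigenfaces and the mean face needed to project new images.
    """
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data yields the principal axes (eigenfaces).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:n_components]
    return centered @ eigenfaces.T, eigenfaces, mean
```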
Abstract: Unanticipated brittle fractures of connections in steel moment-resisting frames (SMRF) occurred in the 1994 Northridge earthquake. Since then, research on the vulnerability of connections in existing SMRFs and on the rehabilitation of those buildings has been conducted. This paper suggests a performance-based optimal seismic retrofit technique using connection upgrades. For the optimal design, a multi-objective genetic algorithm (NSGA-II) is used. One of the two objective functions is to minimize the initial cost, and the other is to minimize the lifetime seismic damage cost. The optimal algorithm proposed in this paper is performed while satisfying specified performance objectives based on FEMA 356. Nonlinear static analysis is performed for structural seismic performance evaluation. A numerical example of the SAC benchmark SMRF is provided using the performance-based optimal seismic retrofit technique proposed in this paper.
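In the notation assumed here (the abstract does not fix the paper's symbols), the retrofit design problem handed to NSGA-II is the bi-objective minimization

$$\min_{\mathbf{x}} \; \big( C_{\mathrm{initial}}(\mathbf{x}),\; C_{\mathrm{damage}}(\mathbf{x}) \big) \quad \text{subject to the FEMA 356 performance constraints},$$

where $\mathbf{x}$ encodes the connection-upgrade decisions and NSGA-II returns a Pareto front trading initial cost against lifetime seismic damage cost.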
Abstract: In this paper, we improve the quasilinearization method by barycentric Lagrange interpolation, chosen for its numerical stability and computation speed, to achieve a stable semi-analytical solution. We then apply the improved method to the fin problem, a nonlinear equation that occurs in heat transfer. In the quasilinearization approach, the nonlinear differential equation is treated by approximating the nonlinear terms by a sequence of linear expressions. The modified QLM is iterative but not perturbative and gives stable semi-analytical solutions to nonlinear problems without depending on the existence of a smallness parameter. Comparison with some numerical solutions shows that the present solution is applicable.
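For reference, the standard quasilinearization step replaces the nonlinear right-hand side of a problem $L u = f(u)$ by its first-order expansion about the previous iterate (generic notation, not the paper's):

$$L u_{k+1} = f(u_k) + f_u(u_k)\,\big(u_{k+1} - u_k\big), \qquad k = 0, 1, 2, \ldots,$$

so each iteration solves a linear problem; under suitable conditions the sequence $u_k$ converges quadratically to the solution of the nonlinear equation.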