Abstract: It is a challenge to provide a wide range of queries in
database query systems for small mobile devices such as PDAs and
cell phones. Currently, owing to the physical and resource
limitations of these devices, most reported database querying
systems developed for them offer only a small set of predetermined
queries that users can pose. This limitation can be overcome by
allowing free-form queries to be entered on the devices. Hence, a
query language that does not restrict the combination of query terms
entered by users is proposed. This paper presents the free-form
query language and the method used to translate free-form queries
into their equivalent SQL statements.
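As a minimal sketch of such a translation step, the snippet below maps unordered attribute:value query terms to a SQL WHERE clause; the term syntax, the table name, and the schema mapping are invented here, since the paper's actual grammar is not reproduced in the abstract.

```python
# Hypothetical sketch: translate free-form "attribute:value" terms into
# an equivalent SQL statement. The real grammar and schema mapping used
# by the proposed query language are not specified in this abstract.

def to_sql(terms, table="catalog"):
    """Build a SQL SELECT from unordered attribute:value query terms."""
    clauses = []
    for term in terms:
        attr, _, value = term.partition(":")
        clauses.append(f"{attr} = '{value}'")
    if not clauses:
        return f"SELECT * FROM {table};"
    return f"SELECT * FROM {table} WHERE " + " AND ".join(clauses) + ";"

print(to_sql(["author:Smith", "year:2004"]))
```

A production translator would of course validate attributes against the schema and use parameterized queries rather than string interpolation.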
Abstract: From a planning point of view, mode-choice analysis is
essential, given the massive costs incurred in transportation
systems. Intercity travellers in Libya have distinct features
compared with travellers from other countries, including cultural
and socioeconomic factors. Consequently, the goal of this study is
to characterize intercity travel behaviour using disaggregate
models, for projecting the demand for nation-level intercity travel
in Libya. A multinomial logit model covering all intercity trips was
formulated to examine national-level intercity transportation in
Libya. The model was calibrated using nationwide revealed-preference
(RP) and stated-preference (SP) surveys, and was developed for the
different purposes of intercity trips (work, social, and
recreational). The model parameters were estimated using the
maximum likelihood method. The data needed for model development
were obtained from all major intercity corridors in Libya; the
final sample consisted of 1300 interviews. About two-thirds of
these data were used for model calibration, and the remainder was
used for model validation. This study, the first of its kind in
Libya, investigates intercity travellers' mode-choice behaviour.
The intercity travel mode-choice model was successfully calibrated
and validated. The outcomes indicate that the overall model is
effective and yields high estimation precision. The proposed model
is useful because it is sensitive to many variables and can be
employed to determine the impact of changes in various
characteristics on the demand for different travel modes. The model
estimates may also be of value to planners, who can estimate mode
shares and determine the impact of specific policy changes on the
demand for intercity travel.
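The multinomial logit choice probabilities underlying such a model can be sketched as follows; the utility coefficients and attribute values below are invented for illustration, not the calibrated Libyan estimates.

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit: P(m) = exp(V_m) / sum_k exp(V_k)."""
    exps = {m: math.exp(v) for m, v in utilities.items()}
    total = sum(exps.values())
    return {m: e / total for m, e in exps.items()}

# Hypothetical systematic utilities V = b_time*time + b_cost*cost; the
# coefficients (-0.05, -0.01) and the time/cost attributes are made up.
V = {"car": -0.05 * 120 - 0.01 * 30,
     "bus": -0.05 * 180 - 0.01 * 10,
     "air": -0.05 * 60 - 0.01 * 90}
probs = mnl_probabilities(V)
```

In calibration, the coefficients are chosen to maximize the likelihood of the observed RP/SP choices under these probabilities.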
Abstract: Today, among magnetic nanoparticles, biogenic magnetite
nanoparticles have attracted particular attention because of their
magnetic characteristics and potential applications in various
fields such as therapeutics and diagnostics. A well-known example of
these biogenic nanoparticles is the magnetosomes of magnetotactic
bacteria. In this research, we used two different techniques for the
isolation and purification of magnetosome nanoparticles from
isolated magnetotactic bacterial cells: heat-alkaline treatment and
sonication. We also evaluated the pyrogen content and sterility of
the isolated individual magnetosomes by the Limulus Amoebocyte
Lysate test and a direct impedimetric method, respectively.
Abstract: The objective of positioning the fixture elements in the
fixture is to make the workpiece stiff, so that geometric errors in
the manufacturing process can be reduced. Most previous work on
optimal fixture layout minimized the sum of the nodal deflections
normal to the surface as the objective function; deflections in all
other directions were neglected. In this paper we propose a new
method for fixture layout optimization that uses the element strain
energy, so that deformations in all directions are taken into
account. The objective in this method is to minimize the sum of the
squares of the element strain energies; strain energy and stiffness
are inversely related. The optimization problem is solved by the
sequential quadratic programming method. Three different case
studies are presented, and the results are compared with those of
the method using nodal deflections as the objective function, to
verify the proposed method.
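The proposed objective itself is simple to state; the sketch below evaluates it for two candidate layouts. The element strain energies would come from a finite-element solve for each layout (here they are placeholder numbers), and in the paper this evaluation would sit inside the sequential quadratic programming loop.

```python
def objective(element_strain_energies):
    """Sum of squares of the element strain energies (to be minimized)."""
    return sum(u * u for u in element_strain_energies)

# Hypothetical energies from an FE solve for two candidate layouts: a
# layout with uniformly low energies beats one with a single compliant
# (high-energy) region, which the squared sum penalizes strongly.
layout_a = [0.12, 0.08, 0.05]
layout_b = [0.20, 0.02, 0.01]
better = "A" if objective(layout_a) < objective(layout_b) else "B"
```

Squaring the element energies penalizes localized compliance more heavily than a plain sum would, which is consistent with the goal of an overall stiff workpiece.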
Abstract: To satisfy the need for outfield tests of star sensors, a
method is put forward to construct a reference attitude benchmark.
First, its basic principle is introduced. Then, the separate
conversion matrices are derived: the matrix for the transformation
from the Earth-centered inertial frame i to the Earth-centered
Earth-fixed frame w according to the time of an atomic clock, the
matrix from frame w to the geographic frame t, and the matrix from
frame t to the platform frame p; the attitude matrix of the
benchmark platform relative to frame i is then obtained as the
product of these three matrices. Next, once the mounting matrix from
frame p to the star sensor frame s is calibrated, the attitude
matrix of the star sensor relative to frame i is obtained, and the
reference attitude angles for star sensor outfield tests can be
calculated from the transformation from frame i to frame s. Finally,
a computer program is implemented to solve for the reference
attitudes, and error curves are drawn for the three-axis attitude
angles, whose maximum absolute error is only 0.25″. The analysis of
each loop and the final simulation results show that the method of
acquiring the absolute reference attitude by precise timing is
feasible for star sensor outfield tests.
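The chaining of the conversion matrices can be sketched as below. Each rotation here is a single-axis example (the i-to-w transformation really is dominated by the Earth rotation angle from the clock time); the w-to-t and t-to-p rotations are left as placeholders, since their full derivation is the subject of the paper.

```python
import math

def rot_z(theta):
    """Frame-rotation matrix about the z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

OMEGA_E = 7.2921150e-5          # Earth rotation rate, rad/s
t = 3600.0                      # elapsed atomic-clock time, s (example)
C_i_w = rot_z(OMEGA_E * t)      # frame i -> frame w (Earth rotation angle)
C_w_t = rot_z(0.0)              # frame w -> frame t (placeholder rotation)
C_t_p = rot_z(0.0)              # frame t -> frame p (placeholder rotation)

# Platform attitude relative to the inertial frame: the product of the
# three conversion matrices, applied in sequence.
C_i_p = matmul(C_t_p, matmul(C_w_t, C_i_w))
```

With the calibrated mounting matrix C_p_s, one further product gives the star sensor attitude relative to frame i.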
Abstract: The analysis of the acoustic emission (AE) signal
generated by metal cutting processes has often been approached
statistically. This is due to the stochastic nature of the emission
signal, a result of the factors affecting the signal from its
generation through transmission and sensing. Different techniques
are applied in this manner, each of which is suitable for certain
processes. In metal cutting, where the emission generated by the
deformation process is rather continuous, a method of analysing the
AE signal based on its root mean square (RMS) is often used and is
suitable for conventional signal processing systems. The aim of this
paper is to set out a strategy for tool failure detection in turning
processes via statistical analysis of the AE generated at the
cutting zone. The strategy is based on investigating the moments of
the distribution of the AE signal at predetermined sampling
intervals; the skewness and kurtosis of these distributions are the
key elements in the detection. A normal (Gaussian) distribution was
first considered but was rejected as insufficient. The Beta
distribution was then adopted with an assumed β density function,
and it has given promising results with regard to chipping and tool
breakage detection.
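The moment features driving the detection can be computed per sampling window as below; the signals are synthetic stand-ins, with a short burst imitating a chipping-like transient.

```python
import math

def skew_kurtosis(samples):
    """Third and fourth standardized moments of one sampling window."""
    n = len(samples)
    mean = sum(samples) / n
    m2 = sum((x - mean) ** 2 for x in samples) / n
    m3 = sum((x - mean) ** 3 for x in samples) / n
    m4 = sum((x - mean) ** 4 for x in samples) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2   # skewness, kurtosis

# A smooth, continuous-type window versus the same window with a short
# burst: the burst drives the kurtosis far above the Gaussian reference
# value of 3, which is the kind of departure the strategy looks for.
steady = [math.sin(0.1 * i) for i in range(1000)]
burst = steady[:]
burst[500] += 25.0
```

Monitoring these two numbers window by window is cheap enough for online use alongside the conventional RMS measure.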
Abstract: The present study examines the effect of aggregate
gradation on moisture damage in bituminous mixes. Three types of
aggregate gradation and two types of binder, VG-30 and polymer
modified bitumen (PMB-40), are used. Moisture susceptibility tests
such as retained stability, the tensile strength ratio (TSR), and
the static creep test are conducted on Marshall specimens. The creep
test was also conducted on conditioned and unconditioned specimens
to observe the effect of moisture on creep behaviour. The results
indicate that the Marshall stability value is higher in PMB-40 mixes
than in VG-30 mixes, and that the moisture susceptibility of PMB-40
mixes is lower than that of mixes using VG-30. The reduction in
retained stability and indirect tensile strength and the increase in
creep are evaluated for finer, coarser, and normal aggregate
gradations to observe the effect of gradation on the moisture
susceptibility of the mixes. The retained stability is the least
affected among the moisture susceptibility parameters.
Abstract: A one-step conservative level set method, combined with a global mass correction method, is developed in this study to simulate incompressible two-phase flows. The present framework does not need to solve the conservative level set scheme in two separate steps, and the global mass is exactly conserved; the present method is therefore more efficient than the two-step conservative level set scheme. Dispersion-relation-preserving schemes are utilized for the advection terms. The pressure Poisson equation solver is ported to GPU computation using the pCDR library developed by the National Center for High-Performance Computing, Taiwan, and SMP parallelization is used to accelerate the remaining calculations. Three benchmark problems were solved for performance evaluation, and good agreement with the reference solutions is demonstrated for all investigated problems.
Abstract: Artificial intelligent controllers play a most important
role in many power system applications, such as system operation and
control, especially load frequency control (LFC). The main objective
of LFC is to keep the frequency and tie-line power close to their
acceptable bounds in case of disturbance. In this paper, an adaptive
fuzzy PI controller in parallel with a conventional PD technique for
the load frequency control system is proposed. The particle swarm
optimization (PSO) method is used to optimize both the scaling
factors of the fuzzy PI controller and the tuning of the PD
controller. Two equal interconnected power system areas were used as
the test system. Simulation results show the effectiveness of the
proposed controller compared with different PID and classical fuzzy
PI controllers in terms of response speed and frequency damping.
Abstract: This paper presents a novel stereo correspondence
algorithm based on the rank transform. In this algorithm, a genetic
algorithm is used to obtain an accurate disparity map. Genetic
algorithms are efficient search methods based on principles of
population genetics, i.e., mating, chromosome crossover, gene
mutation, and natural selection. Finally, morphological operations
are employed to remove errors and discontinuities.
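The rank transform that serves as the matching measure can be sketched directly: each pixel is replaced by the count of neighbours in a window that are darker than it, which makes the subsequent matching robust to gain and bias differences between the two cameras. The toy image below is illustrative only.

```python
def rank_transform(img, r=1):
    """Replace each pixel by the number of darker neighbours in a
    (2r+1)x(2r+1) window; border pixels are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            centre = img[y][x]
            out[y][x] = sum(1
                            for dy in range(-r, r + 1)
                            for dx in range(-r, r + 1)
                            if img[y + dy][x + dx] < centre)
    return out

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
ranks = rank_transform(img)
# The centre pixel (50) has four darker neighbours: 10, 20, 30 and 40.
```

Disparity candidates are then scored by comparing rank values between the left and right transformed images, which is the search space the genetic algorithm explores.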
Abstract: Literature reveals that many investors rely on technical trading rules when making investment decisions. If stock markets are efficient, one cannot achieve superior results by using these trading rules. However, if market inefficiencies are present, profitable opportunities may arise. The aim of this study is to investigate the effectiveness of technical trading rules in 34 emerging stock markets. The performance of the rules is evaluated by utilizing White's Reality Check and Hansen's Superior Predictive Ability test, along with an adjustment for transaction costs. These tests evaluate whether the best model performs better than a buy-and-hold benchmark, and they address data snooping problems, which is essential for obtaining unbiased outcomes. Based on our results, we conclude that technical trading rules are not able to outperform a naïve buy-and-hold benchmark on a consistent basis. However, we do find significant trading rule profits in 4 of the 34 investigated markets. We also present evidence that technical analysis is more profitable in crisis situations, although this result is relatively weak.
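One common member of the technical rule universe such studies evaluate is the moving-average crossover; the sketch below generates its buy/sell signals on an invented price series. The study itself tests a large family of rules and then applies the Reality Check / SPA tests to guard against data snooping.

```python
def sma(prices, n):
    """Simple moving average; None until n observations are available."""
    return [sum(prices[i - n + 1:i + 1]) / n if i >= n - 1 else None
            for i in range(len(prices))]

def signals(prices, short=3, long=5):
    """Buy when the short SMA is above the long SMA, else sell."""
    s, l = sma(prices, short), sma(prices, long)
    return ["buy" if s[i] > l[i] else "sell"
            for i in range(len(prices)) if l[i] is not None]

prices = [10, 10, 10, 10, 10, 11, 12, 13, 14, 15]   # toy price path
print(signals(prices))
```

Evaluating one such rule in isolation is exactly the data-snooping trap: picking the best of many rules ex post inflates apparent profitability, which is why the multiple-testing corrections are central to the study.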
Abstract: In areas where high-quality water is not available,
unconventional water sources are used for irrigation. Household
leachate is one such source, used in dry and semi-dry areas to water
trees and plants. It meets the plants' needs and also has some
effects on the soil, but at the same time it may cause some
problems. This study evaluates the effect of using compost leachate
on the concentration of soil iron, using a split-plot statistical
design with two main treatments, one subsidiary treatment, and three
replications over a three-month period. The main treatment N was
irrigation using well water, as a control, and the main treatment I
was irrigation using leachate and well water concurrently. The
subsidiary treatments were drip irrigation (DI) and subsurface drip
irrigation (SDI). In the established plots, 36 two-year-old pine and
cypress shrubs were randomly planted, and the treatments began two
months later. The results revealed a significant difference between
the main treatment and the control regarding the pH decline in the
soil, which was related to the amount of leachate injected into the
soil: after some time of using leachate, the pH level fell by as
much as 0.46, and it rose again with large amounts of leachate.
Subsurface drip irrigation gave better results than surface drip
irrigation, since it keeps the soil texture intact.
Abstract: This paper provides an in-depth study of a wireless sensor
network (WSN) application to monitor and control the swiftlet
habitat. A complete system is designed and developed, including the
hardware design of the nodes, the graphical user interface (GUI)
software, the sensor network, and the interconnectivity for remote
data access and management. A system architecture is proposed to
address the requirements of habitat monitoring, and this
application-driven design identifies important areas of further work
in data sampling, communications, and networking. For this
monitoring system, MTS400 sensor boards, IRIS and MICAz radio
transceivers, and a USB-interfaced gateway base station from
Crossbow (Xbow) Technology are employed. The GUI of the monitoring
system is written in Laboratory Virtual Instrumentation Engineering
Workbench (LabVIEW), together with the Xbow Technology drivers
provided by National Instruments. As a result, the monitoring system
is capable of collecting data and presenting it in both tables and
waveform charts for further analysis. The system is also able to
send notification messages by email, provided Internet connectivity
is available, whenever changes in the habitat at remote sites
(swiftlet farms) occur. Other functions implemented in the system
are a database for record-keeping and management purposes and remote
access through the Internet using the LogMeIn software. Finally,
this research concludes that a WSN for monitoring the swiftlet
habitat can be effectively used to monitor and manage the swiftlet
farming industry in Sarawak.
Abstract: This study presents a new approach based on Tanaka's fuzzy
linear regression (FLR) algorithm to solve the well-known power
system economic load dispatch (ELD) problem. Tanaka's fuzzy linear
regression formulation is employed to compute the optimal solution
of the optimization problem after linearization. The unknowns are
expressed as fuzzy numbers with triangular membership functions,
each having a middle and a spread value. The proposed fuzzy model is
formulated as a linear optimization problem whose objective is to
minimize the sum of the spreads of the unknowns, subject to double
inequality constraints. A linear programming technique is employed
to obtain the middle and the symmetric spread for every unknown
(power generation level). Simulation results of the proposed
approach are compared with those reported in the literature.
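The double inequality constraints mentioned above correspond to Tanaka's standard h-level formulation; the notation below (middles $a_j$, spreads $c_j$, h-level $h$) is the textbook form of that linear program, not necessarily the paper's own symbols:

```latex
\begin{aligned}
\min_{a,\,c}\quad & \sum_{i=1}^{n}\sum_{j=1}^{m} c_j\,\lvert x_{ij}\rvert \\
\text{s.t.}\quad
& y_i \le \sum_{j=1}^{m} a_j x_{ij} + (1-h)\sum_{j=1}^{m} c_j\,\lvert x_{ij}\rvert,
\qquad i = 1,\dots,n, \\
& y_i \ge \sum_{j=1}^{m} a_j x_{ij} - (1-h)\sum_{j=1}^{m} c_j\,\lvert x_{ij}\rvert,
\qquad i = 1,\dots,n, \\
& c_j \ge 0, \qquad j = 1,\dots,m .
\end{aligned}
\]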
Abstract: The effect of wheat flour extraction rates on flour
composition, farinographic characteristics, and the quality of
sourdough naans was investigated. The results indicated that with
increasing extraction rate, the amounts of protein, fiber, fat, and
ash increased, whereas the moisture content decreased. Farinographic
characteristics such as water absorption and dough development time
increased with flour extraction rate, but dough stability and the
tolerance index decreased. The titratable acidity of both the
sourdough and the sourdough naans also increased with flour
extraction rate. The study showed that the overall quality of the
sourdough naans was affected by both the flour extraction rate and
the starter culture used. Sensory analysis of the sourdough naans
revealed that the most desirable extraction rate for sourdough naan
was 76%.
Abstract: Recent studies in the area of supply chain networks (SCN)
have focused on disruption issues in distribution systems. This
paper extends the previous literature by providing a new
bi-objective model for cost minimization in designing a three-echelon
SCN across normal and failure scenarios, considering multiple
capacity options for manufacturers and distribution centers.
Moreover, in order to solve the problem by means of the LINGO
software, the novel model is reformulated through a branch of the
LP-metric method called the Min-Max approach.
Abstract: Electromyography (EMG) signal processing has been investigated extensively for various applications, such as rehabilitation systems. In particular, the wavelet transform has served as a powerful technique for scrutinizing EMG signals, since it is consistent with the nature of EMG as a non-stationary signal. In this paper, the efficiency of the wavelet transform in surface EMG feature extraction is investigated over four levels of wavelet decomposition, and a comparative study between different mother wavelets is presented. To identify the best function and level of wavelet analysis, two evaluation criteria, the scatter plot and the RES index, are employed. Four wavelet families, namely Daubechies, Coiflets, Symlets, and Biorthogonal, are studied in the wavelet decomposition stage. The results show that only features from the first and second levels of wavelet decomposition yield good performance, and that some functions of the various wavelet families can improve the class separability of different hand movements.
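A multilevel decomposition with per-level features can be sketched as below. For self-containment the Haar wavelet stands in for the Daubechies/Coiflets/Symlets/Biorthogonal families compared in the paper, the feature is a simple per-level RMS of the detail coefficients (not the paper's RES index), and the signal is synthetic.

```python
import math

def haar_step(signal):
    """One Haar DWT level: approximation and detail coefficients."""
    half = len(signal) // 2
    approx = [(signal[2 * i] + signal[2 * i + 1]) / math.sqrt(2)
              for i in range(half)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / math.sqrt(2)
              for i in range(half)]
    return approx, detail

def wavelet_features(signal, levels=4):
    """RMS of the detail coefficients at each decomposition level."""
    feats, a = [], signal
    for _ in range(levels):
        a, d = haar_step(a)
        feats.append(math.sqrt(sum(x * x for x in d) / len(d)))
    return feats

emg = [math.sin(0.5 * i) for i in range(64)]   # synthetic stand-in signal
feats = wavelet_features(emg)                  # one feature per level
```

Feature vectors of this kind, computed per movement class, are what the scatter plot and separability criteria then evaluate.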
Abstract: Modular multiplication is the basic operation
in most public key cryptosystems, such as RSA, DSA, ECC,
and DH key exchange. Unfortunately, very large operands
(on the order of 1024 or 2048 bits) must be used to provide
sufficient security strength. The use of such big numbers
dramatically slows down the whole cipher system, especially
when running on embedded processors.
So far, customized hardware accelerators, developed on
FPGAs or ASICs, have been the best choice for accelerating
modular multiplication in embedded environments. On the
other hand, many algorithms have been developed to speed
up such operations. Examples are the Montgomery modular
multiplication and the interleaved modular multiplication
algorithms. Combining customized hardware with
an efficient algorithm is expected to provide a much faster
cipher system.
This paper introduces an enhanced architecture for computing
the modular multiplication of two large numbers X
and Y modulo a given modulus M. The proposed design is
compared with three previous architectures that depend on
carry-save adders and look-up tables; the look-up tables must
be loaded with a set of pre-computed values. Our proposed
architecture uses the same carry-save addition but replaces
both the look-up tables and the pre-computations with an enhanced
version of sign detection techniques. The proposed architecture
supports higher frequencies than other architectures.
It also has a better overall absolute time for a single operation.
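The interleaved modular multiplication algorithm named above can be sketched in software as follows: the product is built bit by bit and reduced after every step, so intermediate values never grow far beyond the modulus width (which is what makes it attractive in hardware).

```python
def interleaved_modmul(x, y, m):
    """Compute (x * y) mod m without forming the full double-width
    product. Requires y < m; x may be any non-negative integer."""
    p = 0
    for bit in bin(x)[2:]:      # scan x from its most significant bit
        p <<= 1                 # shift: p = 2p
        if bit == "1":
            p += y              # conditionally add the multiplicand
        if p >= m:              # since p < 3m here, at most two
            p -= m              # subtractions restore p < m
        if p >= m:
            p -= m
    return p
```

In a hardware realization, the additions and conditional subtractions in the loop body are exactly where carry-save adders and (in the compared designs) look-up tables or, here, sign detection come into play.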
Abstract: This paper discusses a curriculum approach that emphasizes
the practical portions of teaching network security subjects in
information and communication technology courses. As we are well
aware, the need for a practice- and application-oriented approach in
education is paramount. Research on active learning and cooperative
groups has shown that students grasp more and are more likely to
acquire and exercise soft skills such as leadership, communication,
and teamwork, as opposed to the more traditional theory- and
exam-based teaching and learning. While this teaching and learning
paradigm is relatively new in Malaysia, it has been practiced widely
in the West. This paper examines an approach whereby students
learning wireless security are divided into small, manageable
groups, each with two teams: a black-hat team and a white-hat team.
The former tries to find and expose vulnerabilities in a wireless
network, while the latter tries its best to prevent such attacks on
its wireless network using hardware, software, design, and the
enforcement of security policies. This paper aims to show that the
approach taken, together with the use of relevant and up-to-date
software and hardware in a suitable environment, can lead students
to a more fruitful outcome in terms of their understanding of
concepts and theories and their motivation to learn.
Abstract: In this paper, the concepts of dichotomous logistic
regression (DLR) with leave-one-out (L-O-O) are discussed. To
illustrate this, L-O-O was run to determine the importance of the
simulation conditions for robust tests of spread procedures with
good Type I error rates. The resultant model was then evaluated. The
discussion includes 1) assessment of the accuracy of the model and
2) the parameter estimates. These are presented and illustrated by
modeling the relationship between the dichotomous dependent variable
(Type I error rates) and a set of independent variables (the
simulation conditions). The base SAS software, containing the PROC
LOGISTIC and DATA step functions, can be used to perform the DLR
analysis.
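The leave-one-out evaluation of a dichotomous logistic model can be sketched as follows. The paper's analysis uses SAS PROC LOGISTIC; here a one-predictor model is fitted by plain gradient ascent instead, and the tiny data set (one invented simulation condition against a 0/1 Type-I-error outcome) is purely illustrative.

```python
import math

def fit_logistic(xs, ys, steps=2000, lr=0.5):
    """Fit P(y=1|x) = 1/(1+exp(-(b0+b1*x))) by gradient ascent."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (y - p)          # log-likelihood gradient wrt b0
            g1 += (y - p) * x      # log-likelihood gradient wrt b1
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

def loo_accuracy(xs, ys):
    """Leave-one-out: refit without each case, predict the held-out case."""
    hits = 0
    for i in range(len(xs)):
        b0, b1 = fit_logistic(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xs[i])))
        hits += int((p >= 0.5) == (ys[i] == 1))
    return hits / len(xs)

xs = [0.1, 0.3, 0.4, 0.6, 0.7, 0.9]   # invented simulation condition
ys = [0, 0, 0, 1, 1, 1]               # 1 = good Type I error rate
acc = loo_accuracy(xs, ys)
```

The LOO accuracy plays the role of the model-assessment step described above, while the fitted coefficients correspond to the parameter estimates.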