Abstract: IEEE 802.16 is a new wireless technology standard that offers several advantages, including wider coverage, higher bandwidth, and QoS support. As the new wireless technology for the last-mile solution, the IEEE 802.16 standard defines two models: PMP (point-to-multipoint) and Mesh. In this paper we focus only on the IEEE 802.16 Mesh model. According to the IEEE 802.16 standard, the Mesh model has two scheduling modes, centralized and distributed. Considering the pros and cons of the two scheduling modes, we present a combined scheduling QoS framework in which the BS (Base Station) controls time-frame scheduling and directly selects the shortest path from source to destination. In addition, we propose the Expedited Queue (EQ) mechanism to cut down the transmission time. The EQ mechanism substantially reduces end-to-end delay in our QoS framework. A simulation study has shown that the average delay is smaller than that of the contrasted schemes, and our proposed scheme also achieves higher performance.
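The centralized route selection attributed to the BS above can be sketched with a plain shortest-path search; the topology, node names, and unit link costs below are hypothetical illustrations, not taken from the paper:

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm: one way a BS could pick routes (illustrative)."""
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Walk the predecessor chain back from the destination
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# Hypothetical mesh topology: (neighbor, link cost) lists
mesh = {
    "BS": [("A", 1), ("B", 1)],
    "A": [("C", 1)],
    "B": [("C", 2)],
    "C": [("D", 1)],
}
path, cost = shortest_path(mesh, "BS", "D")
```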
Abstract: In this paper, the periodic surveillance scheme has
been proposed for any convex region using mobile wireless sensor
nodes. A sensor network typically consists of a fixed number of sensor nodes which report the measurements of sensed data, such as temperature, pressure, and humidity, of their immediate proximity
(the area within their sensing range). To sense an area of interest, an adequate number of fixed sensor nodes is required to cover the entire region of interest. This implies that the number of fixed sensor nodes required to cover a given area depends on the sensing range of the sensors as well as the deployment strategy employed. We assume the sensors to be mobile within the region of surveillance; they can be mounted on moving bodies such as robots or vehicles. Therefore, in our
scheme, the surveillance time period determines the number of
sensor nodes required to be deployed in the region of interest.
The proposed scheme comprises three algorithms, namely Hexagonalization, Clustering, and Scheduling. The first algorithm
partitions the coverage area into fixed sized hexagons that
approximate the sensing range (cell) of individual sensor node.
The clustering algorithm groups the cells into clusters, each of
which will be covered by a single sensor node. The latter determines a schedule for each sensor to serve its respective cluster.
Each sensor node traverses all the cells belonging to the cluster
assigned to it by oscillating between the first and the last cell for
the duration of its lifetime. Simulation results show that our scheme provides full coverage within a given period of time using fewer sensors with minimal movement, lower power consumption, and relatively low infrastructure cost.
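A back-of-the-envelope sketch of the sizing logic above, assuming hexagonal cells inscribed in the sensing circle and a fixed per-cell visit time; both the formulas' use here and the numbers are illustrative, not the paper's actual algorithms:

```python
import math

def deployment_estimate(region_area, sensing_range, period, visit_time):
    """Rough sizing sketch (illustrative assumptions, not the paper's method).

    A regular hexagon inscribed in the sensing circle of radius r has
    area (3*sqrt(3)/2) * r^2; the region is partitioned into such cells.
    A sensor serving a cluster must revisit every cell within the
    surveillance period, so cluster size is bounded by period / visit_time.
    """
    cell_area = (3 * math.sqrt(3) / 2) * sensing_range ** 2
    num_cells = math.ceil(region_area / cell_area)
    cells_per_cluster = max(1, period // visit_time)
    num_sensors = math.ceil(num_cells / cells_per_cluster)
    return num_cells, num_sensors

# Hypothetical values: 100 m x 100 m region, 10 m sensing range,
# 60 s period, 5 s to cover one cell
cells, sensors = deployment_estimate(region_area=10_000, sensing_range=10,
                                     period=60, visit_time=5)
```

The key trade-off the abstract describes falls out directly: a longer surveillance period allows larger clusters, so fewer mobile sensors are needed.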
Abstract: The design and implementation of a novel B-ACOSD CFAR algorithm is presented in this paper. It is proposed for detecting radar targets in a log-normally distributed environment. The B-ACOSD detector is capable of automatically detecting the number of interfering targets in the reference cells and detecting the real target with an adaptive threshold. The detector is implemented as a System on Chip on an FPGA (Altera Stratix II) using parallelism and pipelining techniques. For a reference window of length 16 cells, the experimental results showed that the processor works properly at a processing speed of up to 115.13 MHz and a processing time of 0.29 µs, thus meeting the real-time requirement of a typical radar system.
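For orientation, a basic cell-averaging CFAR over a 16-cell reference window can be sketched as below; the B-ACOSD detector itself additionally censors interfering targets in the window, which this minimal sketch omits, and the scale factor is an arbitrary illustrative choice:

```python
import numpy as np

def ca_cfar(x, num_ref=16, num_guard=2, scale=4.0):
    """Basic cell-averaging CFAR (illustrative only; B-ACOSD also censors
    interfering targets in the reference cells before thresholding)."""
    n = len(x)
    half = num_ref // 2
    detections = []
    for i in range(half + num_guard, n - half - num_guard):
        lead = x[i - num_guard - half : i - num_guard]   # leading window
        lag = x[i + num_guard + 1 : i + num_guard + 1 + half]  # lagging window
        noise = (lead.sum() + lag.sum()) / num_ref       # adaptive noise level
        if x[i] > scale * noise:                         # adaptive threshold
            detections.append(i)
    return detections

rng = np.random.default_rng(0)
clutter = rng.exponential(1.0, 200)   # synthetic clutter background
clutter[100] += 50.0                  # inject a strong target at cell 100
hits = ca_cfar(clutter)
```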
Abstract: In this paper, we propose an improvement of pattern
growth-based PrefixSpan algorithm, called I-PrefixSpan. The general idea of I-PrefixSpan is to use an efficient data structure for the Seq-Tree framework and a separator database to reduce the execution time and memory usage. Thus, with I-PrefixSpan, no in-memory database is stored after the index set is constructed. The experimental results show that, using Java 2, this method improves the speed of PrefixSpan by up to almost two orders of magnitude and reduces memory usage by more than one order of magnitude.
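The underlying pattern-growth idea (project the database on each frequent prefix and recurse) can be sketched as below; this is plain PrefixSpan over single-item sequences, not I-PrefixSpan's index-set/Seq-Tree machinery:

```python
from collections import defaultdict

def prefixspan(db, min_support):
    """Minimal pattern-growth miner in the PrefixSpan style (illustrative
    simplification: sequences of single items, no itemset elements)."""
    patterns = []

    def mine(prefix, projected):
        # Count each item once per sequence in the projected database
        counts = defaultdict(int)
        for seq in projected:
            for item in set(seq):
                counts[item] += 1
        for item, sup in sorted(counts.items()):
            if sup < min_support:
                continue
            new_prefix = prefix + [item]
            patterns.append((new_prefix, sup))
            # Project: keep the suffix after the first occurrence of `item`
            new_db = [seq[seq.index(item) + 1:] for seq in projected
                      if item in seq]
            mine(new_prefix, new_db)

    mine([], db)
    return patterns

db = [["a", "b", "c"], ["a", "c"], ["b", "c"]]
result = prefixspan(db, min_support=2)
```

Each recursive call works only on suffixes, which is the projection cost that I-PrefixSpan's index set is designed to reduce.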
Abstract: It is important to remove manganese from water because of its effects on humans and the environment. Human activities are one of the biggest contributors to excessive manganese concentrations in the environment. The proposed method removes manganese from aqueous solution by adsorption onto carbon nanotubes (CNTs) at different parameters: CNT dosage, pH, agitation speed, and contact time. The pH values are pH
6.0, pH 6.5, pH 7.0, pH 7.5 and pH 8.0, CNT dosages are 5mg,
6.25mg, 7.5mg, 8.75mg or 10mg, contact times are 10 min, 32.5 min,
55 min, 87.5 min and 120 min while the agitation speeds are 100rpm,
150rpm, 200rpm, 250rpm and 300rpm. The parameters chosen for
experiments are based on experimental design done by using Central
Composite Design, Design Expert 6.0 with 4 parameters, 5 levels and
2 replications. Based on the results, the condition set at pH 7.0, an agitation speed of 300 rpm, a CNT dosage of 7.5 mg, and a contact time of 55 minutes gives the highest removal, 75.5%. ANOVA analysis in Design Expert 6.0 shows that the residual concentration is strongly affected by pH and CNT dosage. The initial manganese concentration is 1.2 mg/L, while the lowest residual concentration achieved is 0.294 mg/L, which almost satisfies the DOE Malaysia Standard B requirement.
Therefore, further experiments must be done to remove manganese
from model water to the required standard (0.2 mg/L) with the initial
concentration set to 0.294 mg/L.
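The reported removal figure can be checked directly from the concentrations stated in the abstract:

```python
def removal_efficiency(initial, residual):
    """Percentage removal: (C0 - C) / C0 * 100."""
    return (initial - residual) / initial * 100

# Values from the abstract: 1.2 mg/L initial, 0.294 mg/L lowest residual
eff = removal_efficiency(1.2, 0.294)
```

Indeed (1.2 − 0.294) / 1.2 ≈ 75.5%, matching the highest removal reported.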
Abstract: In this paper, the SFQ (Start-time Fair Queuing) algorithm is analyzed when applied in computer networks, to understand how network traffic behaves when different data sources are managed by the scheduler. The computer networks were simulated using the NS2 software to obtain graphs showing the performance of the scheduler. Different traffic sources were introduced in the scripts to approximate a realistic scenario. The results show that, depending on the data source, the traffic can be affected to different degrees: when Constant Bit Rate is applied, the scheduler ensures a constant level of data sent and received, although in practice it is impossible to ensure a level that withstands changes in workload.
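A minimal sketch of the SFQ tagging rule (start tag = max of the virtual time and the flow's last finish tag; serve the smallest start tag) with hypothetical packet sizes, not the NS2 setup of the paper:

```python
import heapq

class SFQ:
    """Minimal Start-time Fair Queuing sketch (illustrative)."""
    def __init__(self):
        self.v = 0.0       # virtual time
        self.finish = {}   # last finish tag per flow
        self.heap = []     # (start_tag, arrival_seq, flow)
        self.seq = 0       # tie-breaker preserving arrival order

    def enqueue(self, flow, length, weight):
        start = max(self.v, self.finish.get(flow, 0.0))
        self.finish[flow] = start + length / weight
        heapq.heappush(self.heap, (start, self.seq, flow))
        self.seq += 1

    def dequeue(self):
        start, _, flow = heapq.heappop(self.heap)
        self.v = start     # virtual time = start tag of packet in service
        return flow

q = SFQ()
for _ in range(3):
    q.enqueue("cbr", length=100, weight=1)   # small constant-bit-rate packets
    q.enqueue("ftp", length=300, weight=1)   # larger bulk-transfer packets
order = [q.dequeue() for _ in range(6)]
```

Because finish tags grow with packet length, the small CBR packets accumulate start tags more slowly and are interleaved ahead of the bulk flow's larger packets, which is the smoothing behavior the abstract observes for CBR traffic.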
Abstract: The objective of the current study is to investigate the differences between winning and losing teams in terms of goal scoring and passing sequences. A total of 31 matches from UEFA EURO 2012 were analyzed; 5 matches were excluded from the analysis because they ended in a draw. Two groups of variables were used in the study: (i) goal scoring variables and (ii) passing sequence variables. Data were analyzed using the Wilcoxon matched-pairs signed-rank test with the significance level set at p < 0.05. The current study found that the number of goals scored was significantly higher for the winning team in the 1st half (Z=-3.416, p=.001) and the 2nd half (Z=-3.252, p=.001). The scoring frequency was also found to increase as time progressed, and the last 15 minutes of the game was the interval in which the most goals were scored. The indicators that differed significantly between winning and losing teams were goals scored (Z=-4.578, p=.000), the head (Z=-2.500, p=.012), the right foot (Z=-3.788, p=.000), corners (Z=-2.126, p=.033), open play (Z=-3.744, p=.000), inside the penalty box (Z=-4.174, p=.000), attackers (Z=-2.976, p=.003), and midfielders (Z=-3.400, p=.001). Regarding the passing sequences, there was a significant difference between the teams in short passing sequences (Z=-4.141, p=.000), while for long passing sequences there was no significant difference (Z=-1.795, p=.073). The data gathered in the present study can be used by coaches to construct detailed training programs based on their objectives.
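The Wilcoxon matched-pairs test used in the study can be illustrated as below; the paired per-match counts are hypothetical, not the study's data, and the implementation uses the normal approximation without tie or zero corrections:

```python
import math

def wilcoxon_signed_rank(x, y):
    """Wilcoxon matched-pairs signed-rank test, normal approximation
    (illustrative sketch; no tie-variance correction)."""
    diffs = [a - b for a, b in zip(x, y) if a != b]   # drop zero differences
    n = len(diffs)
    # Rank absolute differences, averaging ranks within tied groups
    ordered = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[ordered[j + 1]]) == abs(diffs[ordered[i]]):
            j += 1
        avg = (i + j) / 2 + 1          # average rank of the tied group
        for k in range(i, j + 1):
            ranks[ordered[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    w = min(w_plus, w_minus)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w - mu) / sigma
    p = math.erfc(-z / math.sqrt(2))   # two-sided p (z is <= 0 here)
    return z, p

# Hypothetical paired counts (winning vs. losing team, one pair per match)
winning = [3, 2, 2, 1, 3, 2, 4, 1, 2, 3]
losing  = [1, 0, 1, 0, 1, 1, 2, 0, 1, 1]
z, p = wilcoxon_signed_rank(winning, losing)
```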
Abstract: Mobile ad hoc network is a collection of mobile
nodes communicating through wireless channels without any existing
network infrastructure or centralized administration. Because of the
limited transmission range of wireless network interfaces, multiple
"hops" may be needed to exchange data across the network. In order
to facilitate communication within the network, a routing protocol is
used to discover routes between nodes. The primary goal of such an
ad hoc network routing protocol is correct and efficient route
establishment between a pair of nodes so that messages may be
delivered in a timely manner. Route construction should be done
with a minimum of overhead and bandwidth consumption. This paper examines two routing protocols for mobile ad hoc networks: the Destination-Sequenced Distance Vector (DSDV) protocol, a table-driven protocol, and the Ad hoc On-Demand Distance Vector (AODV) protocol, an on-demand protocol. It evaluates both protocols based on packet delivery fraction, normalized routing load, average delay, and throughput while varying the number of nodes, speed, and pause time.
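The evaluation metrics named above are computed from raw simulation counters; the counter values below are hypothetical:

```python
def routing_metrics(data_sent, data_received, routing_packets, delays):
    """Standard MANET evaluation metrics used to compare DSDV and AODV."""
    pdf = data_received / data_sent           # packet delivery fraction
    nrl = routing_packets / data_received     # normalized routing load
    avg_delay = sum(delays) / len(delays)     # average end-to-end delay (s)
    return pdf, nrl, avg_delay

# Hypothetical counters from one simulation run
pdf, nrl, delay = routing_metrics(data_sent=1000, data_received=950,
                                  routing_packets=1900,
                                  delays=[0.02, 0.05, 0.03])
```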
Abstract: In this paper we propose and examine an Adaptive Neuro-Fuzzy Inference System (ANFIS) in Smooth Transition Autoregressive (STAR) modeling. Because STAR models follow a fuzzy-logic approach, fuzzy rules can be incorporated in the non-linear part, or other training and computational methods, such as the error backpropagation algorithm, can be applied instead of nonlinear least squares. Furthermore, additional fuzzy membership functions can be examined besides the logistic and exponential ones, such as the triangular, Gaussian, and Generalized Bell functions, among others. We examine two macroeconomic variables of the US economy: the inflation rate and the six-month Treasury bill interest rate.
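The two transition functions mentioned above (logistic for LSTAR, exponential for ESTAR) have standard forms, sketched here with an arbitrary slope γ and threshold c:

```python
import math

def logistic_transition(s, gamma, c):
    """First-order logistic transition G(s) = 1 / (1 + exp(-gamma*(s - c)))
    of an LSTAR model; s is the transition variable."""
    return 1.0 / (1.0 + math.exp(-gamma * (s - c)))

def exponential_transition(s, gamma, c):
    """Exponential transition G(s) = 1 - exp(-gamma*(s - c)^2) of an
    ESTAR model; symmetric around the threshold c."""
    return 1.0 - math.exp(-gamma * (s - c) ** 2)
```

At the threshold s = c the logistic function equals 0.5 (the model sits halfway between regimes) while the exponential function equals 0; it is these fixed shapes that the ANFIS membership functions generalize.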
Abstract: Hybridising genetic algorithms with heuristics has been shown to be an effective way to improve their performance. In this work, a genetic algorithm hybridised with four heuristics, including a new heuristic called neighbourhood improvement, was investigated on the classical travelling salesman problem. The experimental results showed that the proposed heuristic outperformed the other heuristics in terms of both solution quality and computational time.
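Since the paper's neighbourhood-improvement heuristic is not described in the abstract, a classical 2-opt local improvement serves here as an illustrative stand-in for the kind of heuristic hybridised with the GA:

```python
import math

def tour_length(tour, pts):
    """Total length of a closed tour over 2-D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """2-opt local improvement: repeatedly reverse a segment whenever that
    shortens the tour (illustrative; not the paper's new heuristic)."""
    best = tour[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 1, len(best)):
                cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                if tour_length(cand, pts) < tour_length(best, pts) - 1e-12:
                    best, improved = cand, True
    return best

pts = [(0, 0), (0, 1), (1, 1), (1, 0)]   # unit square
bad = [0, 2, 1, 3]                        # self-crossing tour
good = two_opt(bad, pts)                  # uncrossed tour of length 4
```

In the hybrid scheme, such a local-improvement step is typically applied to offspring produced by crossover and mutation before they re-enter the population.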
Abstract: In this paper we propose a novel method for human
face segmentation using the elliptical structure of the human head. It
makes use of the information present in the edge map of the image.
In this approach we use the fact that the eigenvalues of the covariance matrix represent the elliptical structure: the large and small eigenvalues of the covariance matrix are associated with the major and minor axial lengths of an ellipse. The other elliptical parameters are
used to identify the centre and orientation of the face. Since an
Elliptical Hough Transform requires 5D Hough Space, the Circular
Hough Transform (CHT) is used to evaluate the elliptical parameters.
A sparse matrix technique is used to perform the CHT, as it squeezes out zero elements and stores only a small number of non-zero elements, thereby having the advantage of less storage space and computational time. A neighborhood suppression scheme is used to identify the valid
Hough peaks. The accurate position of the circumference pixels for
occluded and distorted ellipses is identified using Bresenham's Raster Scan Algorithm, which uses geometrical symmetry
properties. This method does not require the evaluation of tangents
for curvature contours, which are very sensitive to noise. The method
has been evaluated on several images with different face orientations.
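The covariance-eigenvalue idea can be checked on a synthetic contour: for points sampled uniformly in the ellipse parameter, the covariance eigenvalues are a²/2 and b²/2, so the semi-axes and orientation are recoverable (synthetic data only, not the paper's edge-map pipeline):

```python
import numpy as np

# Synthetic contour: ellipse with semi-axes a=5, b=2, rotated by 30 degrees
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
a, b, theta = 5.0, 2.0, np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
pts = R @ np.vstack([a * np.cos(t), b * np.sin(t)])   # shape (2, 400)

cov = np.cov(pts)                        # 2x2 covariance of contour points
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
# For points uniform in t, the variances along the axes are a^2/2 and b^2/2,
# so the semi-axial lengths are sqrt(2 * eigenvalue)
minor, major = np.sqrt(2 * eigvals)
# The eigenvector of the large eigenvalue points along the major axis
orientation = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])
```

This recovers a ≈ 5, b ≈ 2, and the 30° orientation (modulo 180°), mirroring how the large and small eigenvalues map to the major and minor axial lengths.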
Abstract: This project aims to investigate the potential of
torrefaction to improve the properties of Malaysian palm kernel shell
(PKS) as a solid fuel. A study towards torrefaction of PKS was
performed under various temperatures (240, 260, and 280°C) and residence times (30, 60, and 90 minutes). The torrefied
PKS was characterized in terms of the mass yield, energy yield,
elemental composition analysis, calorific value analysis, moisture and
volatile matter contents, and ash and fixed carbon contents. The mass and energy yield changes in the torrefied PKS show that temperature has a greater effect than residence time in the torrefaction process. The C content of PKS increases while the H and O contents decrease after torrefaction, resulting in a heating value 5 to 16% higher. Meanwhile, torrefaction caused the
ash and fixed carbon content of PKS to increase, and the moisture
and volatile matter to decrease.
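The mass and energy yields mentioned above follow standard torrefaction definitions; the masses and heating values below are hypothetical, chosen so the heating-value rise (about 11%) falls inside the reported 5-16% range:

```python
def yields(m_raw, m_torr, hhv_raw, hhv_torr):
    """Standard torrefaction yield definitions (dry basis):
    mass yield = m_torr/m_raw, energy yield = mass yield * HHV ratio."""
    mass_yield = m_torr / m_raw * 100
    energy_yield = mass_yield * (hhv_torr / hhv_raw)
    return mass_yield, energy_yield

# Hypothetical run: 100 g PKS torrefied to 80 g, HHV 18.0 -> 20.0 MJ/kg
my, ey = yields(100.0, 80.0, 18.0, 20.0)
```

The energy yield exceeding the mass yield reflects the densification effect: the solid loses more mass than energy during torrefaction.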
Abstract: The Czech Republic is a country whose economy has
undergone a transformation since 1989. Since joining the EU it has
been striving to reduce the differences in its economic standard and
the quality of its institutional environment in comparison with
developed countries. According to an assessment carried out by the
World Bank, the Czech Republic was long classed as a country
whose institutional development was seen as problematic. For many
years one of the things it was rated most poorly on was its bankruptcy
law. The new Insolvency Act, which is a modern law in terms of its
treatment of bankruptcy, was first adopted in the Czech Republic in
2006. This law, together with other regulatory measures, offers debt-ridden Czech economic subjects legal instruments which are well established and in common practice in developed market economies.
Since then, analyses performed by the World Bank and the London
EBRD have shown that there have been significant steps forward in
the quality of Czech bankruptcy law. The Czech Republic still lacks
an analytical apparatus which can offer a structured characterisation
of the general and specific conditions of Czech company and
household debt which is subject to current changes in the global
economy. This area has so far not been given the attention it
deserves. The lack of research is particularly clear as regards analysis of household debt and householders' ability to settle their debts in a reasonable manner using legal and other state means of regulation.
We assume that Czech households have recourse to a modern
insolvency law, yet the effective application of this law is hampered
by the inconsistencies in the formal and informal institutions
involved in resolving debt. This in turn is based on the assumption
that this lack of consistency is more marked in cases of personal
bankruptcy. Our aim is to identify the symptoms which indicate that
for some time the effective application of bankruptcy law in the
Czech Republic will be hindered by factors originating in householders' relative inability to identify the risks of falling into debt.
Abstract: In this paper, the problem of stability criteria for neural networks (NNs) with two additive time-varying delay components is investigated. The relationship between the time-varying delay and its lower and upper bounds is taken into account when estimating the upper bound of the derivative of the Lyapunov functional. As a result, some improved delay-dependent stability criteria for NNs with two additive time-varying delay components are proposed. Finally, a numerical example is given to illustrate the effectiveness of the proposed method.
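The abstract states no equations; a standard formulation of a neural network with two additive time-varying delay components (assumed here for context, not quoted from the paper) is:

```latex
% Standard two-additive-delay NN model (assumed formulation)
\dot{x}(t) = -C x(t) + A g(x(t)) + B g\bigl(x(t - d_1(t) - d_2(t))\bigr) + u,
\qquad d(t) = d_1(t) + d_2(t),
```

where each delay component satisfies $0 \le d_i(t) \le \bar{d}_i$ and $\dot{d}_i(t) \le \mu_i$, $i = 1, 2$; the criteria exploit the bounds on $d_1(t)$ and $d_2(t)$ separately rather than bounding only their sum $d(t)$.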
Abstract: Ice cover has a significant impact on rivers, as it affects ice melting capacity, which can result in flooding; it also restricts navigation and modifies the ecosystem and microclimate. River ice is made up of different ice types with varying thickness, so surveillance of river ice plays an important role. River ice types are captured using an infrared imaging camera, which captures images even at night. In this paper the river-ice infrared texture images are analysed using first-order statistical methods and second-order statistical methods. The second-order statistical methods considered are the spatial gray-level dependence method, the gray-level run-length method, and the gray-level difference method. The performance of the feature extraction methods is evaluated using a Probabilistic Neural Network classifier, and it is found that the first-order and second-order statistical methods alone yield low accuracy. The features extracted from the first-order and second-order statistical methods are therefore combined, and it is observed that these combined features (first-order statistical method + gray-level run-length method) provide higher accuracy compared with the features from either method alone.
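The first-order statistical features are simple functions of the gray-level histogram; below is a minimal sketch on a synthetic image (illustrative only, not the paper's data or its full feature set):

```python
import numpy as np

def first_order_features(img, levels=256):
    """First-order texture features from the gray-level histogram
    (the paper combines such features with run-length features)."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                 # gray-level probabilities
    g = np.arange(levels)
    mean = (g * p).sum()
    variance = ((g - mean) ** 2 * p).sum()
    skewness = (((g - mean) ** 3) * p).sum() / variance ** 1.5
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return mean, variance, skewness, entropy

# Synthetic 64x64 image with uniformly distributed gray levels
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
mean, var, skew, ent = first_order_features(img)
```

For a near-uniform histogram the entropy approaches its 8-bit maximum of 8; textured regions with concentrated histograms score lower, which is what makes these features usable as classifier inputs.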
Abstract: The major source of allergy in the home is the house dust mite (Dermatophagoides farinae, Dermatophagoides pteronyssinus), causing allergic symptoms including atopic dermatitis, asthma, perennial rhinitis, and even infant death syndrome.
Control of these mite species depends on the use of chemical methods such as fumigation treatments with methyl bromide, spraying with organophosphates such as pirimiphos-methyl, or treatments with repellents such as DEET and benzyl benzoate.
Although effective, their repeated use for decades has sometimes
resulted in development of resistance and fostered environmental and
human health concerns. Both decomposing animal parts and the protein that surrounds mite fecal pellets cause mite allergy, so it is more effective to repel the mites than to kill them, because the allergen is not the living house dust mite but its dead body and fecal particles. It is important to find natural repellent materials against house dust mites to control them and reduce allergic reactions. Plants may be an alternative source for dust mite control because they contain a range of bioactive chemicals.
The research objectives of this paper were to verify the acaricidal and repellent effects of cinnamon essential oil and to find its most effective concentrations. We found that cinnamon bark essential oil is a very effective material for controlling the house dust mite. Furthermore, it could reduce chemical resistance and danger to human health.
Abstract: Single crystals of Magnesium alloys such as pure Mg,
Mg-1Zn-0.5Y, Mg-0.1Y, and Mg-0.1Ce alloys were successfully
fabricated in this study by employing the modified Bridgman method.
To determine the exact orientation of crystals, pole figure
measurements using X-ray diffraction were carried out on each single crystal. Hardness and compression tests were conducted, followed by subsequent recrystallization annealing. The recrystallization kinetics of
Mg alloy single crystals has been investigated. Fabricated single
crystals were cut into rectangular-shaped specimens, solution treated at 400°C for 24 hrs, and then deformed in compression mode by a 30% reduction. Annealing treatment for recrystallization was conducted on these deformed specimens at a temperature of 300°C for various times from 1 to 20 mins. The microstructure observation and
hardness measurement conducted on the recrystallized specimens
revealed that static recrystallization of ternary alloy single crystal was
very slow, while recrystallization behavior of binary alloy single
crystals appeared to be very fast.
Abstract: Flash memory has become an important storage device
in many embedded systems because of its high performance, low
power consumption and shock resistance. Multi-level cell (MLC) is
developed as an effective solution for reducing the cost and increasing
the storage density in recent years. However, most flash file systems cannot handle error correction sufficiently. To correct more errors for MLC, we implement a Reed-Solomon (RS) code in YAFFS, which is widely used as a flash-based file system. The RS code has a longer computing time, but its correcting ability is much higher than that of the Hamming code.
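The correction-capability gap is easy to quantify: an RS(n, k) code corrects t = ⌊(n − k)/2⌋ symbol errors per codeword, whereas a Hamming code corrects a single bit error per block. The RS(255, 223) parameters below are a common example over GF(2^8), not necessarily the paper's configuration:

```python
def rs_capability(n, k):
    """Symbol-error correction capability of an RS(n, k) code: t = (n-k)//2.
    Each corrected symbol may contain multiple bit errors, which is why RS
    suits the multi-bit error patterns of MLC NAND better than Hamming."""
    return (n - k) // 2

# Common example: RS(255, 223) with 8-bit symbols
t = rs_capability(255, 223)
```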
Abstract: As the data to be stored in storage subsystems increases tremendously, data protection techniques have become more important than ever for providing data availability and reliability. In this
paper, we present the file system-based data protection (WOWSnap)
that has been implemented using WORM (Write-Once-Read-Many)
scheme. In the WOWSnap, once WORM files have been created, only
the privileged read requests to them are allowed to protect data against
any intentional/accidental intrusions. Furthermore, all WORM files
are related to their protection cycle that is a time period during which
WORM files should securely be protected. Once their protection cycle
is expired, the WORM files are automatically moved to the
general-purpose data section without any user interference. This
prevents the WORM data section from being consumed by
unnecessary files. We evaluated the performance of WOWSnap on a Linux cluster.
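The protection-cycle rule described above can be sketched as a simple expiry check; the function and parameter names are hypothetical, not WOWSnap's actual interface:

```python
import time

def worm_writable(create_time, protection_cycle, now=None):
    """Sketch of the protection-cycle rule: a WORM file rejects writes until
    its protection cycle (in seconds) has expired, after which it can be
    demoted to the general-purpose data section (names hypothetical)."""
    now = time.time() if now is None else now
    return now >= create_time + protection_cycle

# A file created at t=1000 with a 500-second protection cycle
still_protected = not worm_writable(1000, 500, now=1200)
expired = worm_writable(1000, 500, now=1600)
```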
Abstract: The motion planning technique described in this paper has been developed to eliminate or reduce the residual vibrations of belt-driven rotary platforms while keeping the motion time and the total angular displacement of the platform unchanged. The proposed approach is based on a suitable choice of the motion command given to the servomotor that drives the mechanical device; this command is defined by numerical coefficients which determine the shape of the displacement, velocity, and acceleration profiles. Using a numerical optimization technique, these coefficients can be changed without altering the continuity conditions imposed on the displacement and its time derivatives at the initial and final time instants. The proposed technique can be easily and quickly implemented on an actual device, since it requires only a simple modification of the motion command profile stored in the memory of the electronic motion controller.
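One classical command shape satisfying zero velocity and acceleration at both ends is the quintic rest-to-rest profile; the paper optimizes more general coefficient sets, so this is only an illustrative baseline, and the 90°-in-2-s values are hypothetical:

```python
def quintic_profile(delta, T, t):
    """Quintic rest-to-rest profile s(t) = delta*(10*tau^3 - 15*tau^4 + 6*tau^5),
    tau = t/T: displacement, velocity, and acceleration are continuous and
    vanish at both endpoints (illustrative baseline, not the paper's
    optimized command)."""
    tau = t / T
    s = delta * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
    v = delta / T * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)
    a = delta / T**2 * (60 * tau - 180 * tau**2 + 120 * tau**3)
    return s, v, a

# Hypothetical move: 90-degree rotation in 2 seconds
s0, v0, a0 = quintic_profile(90.0, 2.0, 0.0)   # start of motion
sT, vT, aT = quintic_profile(90.0, 2.0, 2.0)   # end of motion
```

Smoothing the acceleration profile in this way limits the excitation of the belt's elastic modes, which is the mechanism behind the residual-vibration reduction the abstract describes.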