Abstract: Since wireless sensor networks are energy-constrained, the energy efficiency of sensor nodes is the main design issue. Clustering of nodes is an energy-efficient approach: it prolongs the lifetime of a wireless sensor network by avoiding long-distance communication. Clustering algorithms operate in rounds, and their performance depends on the round time. A large round time consumes more energy at the cluster heads, while a small round time causes frequent re-clustering. Existing clustering algorithms therefore apply a trade-off and calculate the round time from the initial parameters of the network. However, it is not appropriate to use a round-time value based on initial parameters throughout the network lifetime, because wireless sensor networks are dynamic in nature (nodes can be added to the network, and some nodes run out of energy). In this paper, a variable round-time approach is proposed that calculates the round time from the number of active nodes remaining in the field, making the clustering algorithm adaptive to network dynamics. For simulation, the approach was implemented with LEACH in NS-2; the results show a 6% increase in network lifetime, a 7% increase in the 50% node-death time and a 5% improvement in the number of data units gathered at the base station.
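The idea of recomputing the round time from the surviving population can be sketched as follows; the linear scaling rule, the function names and the constants are illustrative assumptions, not the formula from the paper.

```python
# Hypothetical sketch of a variable round-time rule: the round time is
# recomputed each round from the number of nodes still alive, instead of
# being fixed once from the initial network parameters.

def initial_round_time(n_initial, t_setup=1.0, t_slot=0.5):
    """Round time computed once from initial parameters (fixed-round baseline)."""
    return t_setup + t_slot * n_initial

def variable_round_time(n_active, n_initial, t_setup=1.0, t_slot=0.5):
    """Round time recomputed from the nodes currently active in the field."""
    return t_setup + t_slot * n_active

# As nodes die, the variable rule shortens the round accordingly, while the
# fixed rule keeps charging cluster heads for the full initial population.
fixed = initial_round_time(100)        # 51.0 for the whole lifetime
adaptive = variable_round_time(60, 100)  # 31.0 once 40 nodes have died
print(fixed, adaptive)
```

The point of the sketch is only the dependence on `n_active`: any monotone function of the live-node count would make the clustering round adapt to network dynamics.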
Abstract: Deprivation indices are widely used in public health studies. These indices are also referred to as indices of inequality or disadvantage. Even though many indices have been built before, it is considered inappropriate to apply existing indices to other countries or areas with different socio-economic conditions and geographical characteristics. The objective of this study is to construct an index based on the geographical and socio-economic factors of Peninsular Malaysia, defined as a weighted household-based deprivation index. The study employed variables based on household items, household facilities, school attendance and education level obtained from the Malaysia 2000 census report. Factor analysis is used to extract the latent variables from the indicators, i.e. to reduce the observable variables to a smaller number of components or factors. Based on the factor analysis, two extracted factors were selected, named the Basic Household Amenities factor and the Middle-Class Household Items factor. It is observed that districts with lower index values are located in the less developed states such as Kelantan, Terengganu and Kedah, while areas with high index values are located in developed states such as Pulau Pinang, W.P. Kuala Lumpur and Selangor.
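A minimal sketch of how a weighted index can be composed from extracted factor scores: each district's index is the variance-weighted sum of its scores on the retained factors. The factor names, weights and scores below are illustrative placeholders, not values from the Malaysia 2000 census analysis.

```python
# Hypothetical composition of a weighted deprivation index from factor
# analysis output: weight each retained factor by the share of variance
# it explains (weights normalised to sum to 1).

def weighted_index(factor_scores, variance_explained):
    """Variance-weighted combination of a district's factor scores."""
    total = sum(variance_explained)
    weights = [v / total for v in variance_explained]
    return sum(w * s for w, s in zip(weights, factor_scores))

# Two retained factors, e.g. "Basic Household Amenities" and
# "Middle-Class Household Items", assumed to explain 40% and 20%
# of the variance; the scores 0.8 and -0.2 are made up for illustration.
score = weighted_index([0.8, -0.2], [0.40, 0.20])
print(round(score, 4))  # 0.4667
```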
Abstract: Deep cold rolling (DCR) and low plasticity burnishing (LPB) are cold-working processes that readily produce a smooth, work-hardened surface by plastic deformation of surface irregularities. The present study focuses on the surface roughness and surface hardness of AISI 4140 work material, using a fractional factorial design of experiments. The surface integrity aspects of the work material were assessed in order to identify the predominant factors among the selected parameters. These were then ranked in order of significance, and the levels of the factors were set to minimize surface roughness and/or maximize surface hardness. In the present work, the influence of the main process parameters (force, feed rate, number of tool passes/overruns, initial roughness of the workpiece, ball material, ball diameter and lubricant used) on the surface roughness and hardness of AISI 4140 steel was studied for both the LPB and DCR processes, and the results are compared. It was observed that the LPB process improved surface hardness by 167%, while the DCR process improved it by 442%. It was also found that the force, ball diameter, number of tool passes and initial roughness of the workpiece are the most pronounced parameters, having a significant effect on the workpiece surface during deep cold rolling and low plasticity burnishing.
Abstract: The objective of this study is to evaluate the occurrence of fungi in aerobic and anoxic activated sludge from membrane bioreactors (MBRs). Thirty-six samples of both aerobic and anoxic activated sludge were taken from two MBRs treating domestic wastewater; over a period of eight months, two samples were taken from each plant per month. The samples were prepared for counting and identification of fungi. The data obtained show that sixty species belonging to 27 genera were collected from the activated sludge samples under aerobic and anoxic conditions. Regarding fungal identification, under aerobic conditions Geotrichum was found (8.8%), followed by Penicillium (75.0%), yeasts (65.7%) and Trichoderma (55.5%), while yeasts (77.1%), Geotrichum candidum and Penicillium (61.1%) were the most prevalent species in anoxic activated sludge. The results indicate that activated sludge is a habitat for the growth and sporulation of different groups of fungi, both saprophytic and pathogenic.
Abstract: Computer programming is considered a very difficult course by many computer science students. The reasons for the difficulties include the cognitive load involved in programming, the different learning styles of students, the instructional methodology and the choice of programming language. To reduce the difficulties, approaches such as pair programming, program visualization and accommodating different learning styles have been tried; however, these efforts have produced limited success. This paper reviews the problem and proposes a framework to help students overcome the difficulties involved.
Abstract: ICA, generally used for the blind source separation problem, has been tested for feature extraction in a speech recognition system as a replacement for the phoneme-based MFCC approach. Applying the generated cepstral coefficients to ICA as preprocessing yields a new signal processing approach that gives much better results than MFCC and ICA separately, for both word and speaker recognition. The mixing matrix A is different before and after MFCC, as expected, since Mel is a nonlinear scale. Cepstra generated from linear predictive coefficients, being independent, prove to be the right candidates for ICA. Matlab was the tool used for all comparisons, and the database used was samples from ISOLET.
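The core operation, ICA as a linear unmixing of observed feature vectors, can be illustrated with a deliberately small self-contained sketch: a two-channel toy example using whitening plus a grid-searched kurtosis contrast. This is not the FastICA variant or the MFCC pipeline the abstract evaluates; the signals, mixing matrix and contrast are illustrative assumptions.

```python
import math

def kurtosis(v):
    """Excess kurtosis of a sequence."""
    n = len(v)
    m = sum(v) / n
    var = sum((a - m) ** 2 for a in v) / n
    return sum((a - m) ** 4 for a in v) / (n * var * var) - 3.0

def corr(u, v):
    """Pearson correlation of two sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (su * sv)

def ica_2x2(x1, x2):
    """Toy two-channel ICA: centre, whiten via the 2x2 covariance
    eigendecomposition, then grid-search the rotation that maximises
    total |kurtosis| (a standard non-Gaussianity contrast)."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    x1 = [a - m1 for a in x1]
    x2 = [a - m2 for a in x2]
    c11 = sum(a * a for a in x1) / n
    c22 = sum(a * a for a in x2) / n
    c12 = sum(a * b for a, b in zip(x1, x2)) / n
    tr, det = c11 + c22, c11 * c22 - c12 * c12
    root = math.sqrt(tr * tr - 4.0 * det)
    l1, l2 = (tr + root) / 2.0, (tr - root) / 2.0
    norm = math.hypot(c12, l1 - c11)        # eigenvector for l1 (c12 != 0)
    e1 = (c12 / norm, (l1 - c11) / norm)
    e2 = (-e1[1], e1[0])                    # orthogonal eigenvector for l2
    z1 = [(e1[0] * a + e1[1] * b) / math.sqrt(l1) for a, b in zip(x1, x2)]
    z2 = [(e2[0] * a + e2[1] * b) / math.sqrt(l2) for a, b in zip(x1, x2)]
    best, best_s = -1.0, (z1, z2)
    for k in range(180):                    # rotations 0..90 deg in 0.5 deg steps
        t = math.pi * k / 360.0
        s1 = [math.cos(t) * a + math.sin(t) * b for a, b in zip(z1, z2)]
        s2 = [-math.sin(t) * a + math.cos(t) * b for a, b in zip(z1, z2)]
        score = abs(kurtosis(s1)) + abs(kurtosis(s2))
        if score > best:
            best, best_s = score, (s1, s2)
    return best_s

# two independent "source" signals and their linear mixtures
n = 2000
s1 = [math.sin(0.3 * i) for i in range(n)]
s2 = [((0.7 * i) % 2.0) - 1.0 for i in range(n)]
x1 = [0.7 * a + 0.3 * b for a, b in zip(s1, s2)]
x2 = [0.4 * a + 0.6 * b for a, b in zip(s1, s2)]
y1, y2 = ica_2x2(x1, x2)

# each recovered signal should match one source up to sign and scale
m1 = max(abs(corr(y1, s1)), abs(corr(y1, s2)))
m2 = max(abs(corr(y2, s1)), abs(corr(y2, s2)))
print(round(m1, 2), round(m2, 2))
```

In the MFCC/LPC setting described by the abstract, the same unmixing idea is applied to frames of cepstral coefficients rather than to raw two-channel signals.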
Abstract: In this paper, a clustering algorithm named K-Harmonic Means (KHM) was employed in the training of Radial Basis Function Networks (RBFNs). KHM organizes the data into clusters and determines the centres of the basis functions. The popular clustering algorithms K-means (KM) and Fuzzy c-means (FCM) are highly dependent on the initial identification of elements that represent the clusters well; in KHM, this problem is avoided, leading to improved classification performance compared to the other clustering algorithms. A comparison of classification accuracy was performed between KM, FCM and KHM on the benchmark data sets Iris Plant, Diabetes and Breast Cancer. RBFN training with the KHM algorithm shows better accuracy on these classification problems.
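A minimal one-dimensional sketch of the KHM centre update (the original harmonic-mean weighting, p = 2) illustrates the insensitivity to initialization; the data, the initial centres and the tolerances are illustrative choices, not values from the paper, which trains RBFNs on real benchmark data sets.

```python
# Toy 1-D K-Harmonic Means: each centre update is a weighted mean of all
# points, with per-point weight d_k^(-p-2) / (sum_j d_j^(-p))^2, which
# combines soft membership and harmonic-mean weighting. Points already
# well covered by some centre contribute little, so no single centre
# "captures" the data the way a bad K-means initialization can.

def khm_centres(points, centres, p=2.0, iters=50, eps=1e-9):
    for _ in range(iters):
        new = []
        for k in range(len(centres)):
            num = den = 0.0
            for x in points:
                d = [max(abs(x - c), eps) for c in centres]
                q = d[k] ** (-p - 2) / sum(dj ** (-p) for dj in d) ** 2
                num += q * x
                den += q
            new.append(num / den)
        centres = new
    return sorted(centres)

# two well-separated 1-D clusters; both initial centres are placed near
# the first cluster, a deliberately poor starting position
data = [0.8, 1.0, 1.2, 4.8, 5.0, 5.2]
c = khm_centres(data, [0.0, 0.5])
print([round(v, 2) for v in c])  # centres settle near the two cluster means
```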
Abstract: We consider the development of an eighth-order Adams-type method with the A-stability property, obtained by expressing it as a one-step method in higher dimension. This makes it suitable for solving a variety of initial-value problems. The main method and additional methods are obtained from the same continuous scheme, derived via interpolation and collocation procedures. The methods are then applied in block form as simultaneous numerical integrators over non-overlapping intervals. Numerical results obtained using the proposed block form reveal that it is highly competitive with existing methods in the literature.
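For orientation, a generic Adams-type k-step method has the linear multistep form below; the paper's eighth-order method is an instance of this family with coefficients obtained via interpolation and collocation (the specific coefficients are not reproduced here).

```latex
% Generic Adams-type (linear multistep) form for y' = f(x, y);
% the \beta_j are the method's coefficients.
y_{n+k} = y_{n+k-1} + h \sum_{j=0}^{k} \beta_j \, f(x_{n+j}, y_{n+j})
```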
Abstract: With the advance of wireless network technology, a variety of mobile applications have made wireless sensor networks a popular research area in recent years. Because wireless sensor network nodes move arbitrarily and the topology changes rapidly, mobile nodes are often confronted with the void problem, which causes packet loss, retransmission, rerouting, additional transmission cost and power consumption. The void problem cannot be predicted in advance when transmitting packets, so improving geographic routing with void avoidance in wireless networks is an important issue. In this paper, we propose a greedy geographical void routing algorithm to solve the void problem for wireless sensor networks. We use the information of the source node and the void area to draw two tangents that form a fan range around the existing void, within which void-avoiding messages can be announced. We then use the source and destination nodes to draw a line at an angle to the fan range in order to select the next forwarding neighbor node for routing. In a dynamic wireless sensor network environment, the proposed greedy void-avoiding algorithm forwards packets in a more time-saving and efficient way, improving on the current geographical void problem of wireless sensor networks.
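The tangent construction can be sketched geometrically: modelling the void as a circle, the two tangents from the source span a fan of half-angle arcsin(r/d), and candidate next hops inside that fan head into the void. The circular-void model and all coordinates below are illustrative assumptions, not the paper's exact construction.

```python
import math

def void_fan(source, centre, radius):
    """Bearing of the void centre and half-angle of the fan formed by the
    two tangents drawn from the source to a circular void."""
    dx, dy = centre[0] - source[0], centre[1] - source[1]
    dist = math.hypot(dx, dy)
    if dist <= radius:
        raise ValueError("source lies inside the void")
    return math.atan2(dy, dx), math.asin(radius / dist)

def in_fan(source, node, bearing, half_angle):
    """True if a candidate next hop falls inside the void fan."""
    b = math.atan2(node[1] - source[1], node[0] - source[0])
    diff = abs((b - bearing + math.pi) % (2 * math.pi) - math.pi)
    return diff <= half_angle

# void of radius 5 centred 10 units away: the tangent fan half-angle is 30 deg
bearing, half = void_fan((0.0, 0.0), (10.0, 0.0), 5.0)
print(round(math.degrees(half), 1))                   # 30.0
hit = in_fan((0.0, 0.0), (8.0, 1.0), bearing, half)    # heads into the void
clear = in_fan((0.0, 0.0), (5.0, 8.0), bearing, half)  # clear of the fan
print(hit, clear)
```

A greedy forwarder would then prefer neighbors for which `in_fan` is false, routing around the announced void region.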
Abstract: In past years, a lot of effort has been made in the field of face detection. The human face contains important features that can be used by vision-based automated systems to identify and recognize individuals. Face location, the primary step of such systems, finds the face area in the input image; an accurate location of the face is still a challenging task. The Viola-Jones framework has been widely used by researchers to detect the location of faces and objects in a given image. Face detection classifiers are shared by public communities such as OpenCV, and an evaluation of these classifiers will help researchers choose the best classifier for their particular needs. This work focuses on the evaluation of face detection classifiers with respect to facial landmarks.
Abstract: Linear stability of wake-shear layers in two-phase shallow flows is analyzed in the present paper. The stability analysis is based on the two-dimensional shallow water equations. It is assumed that the fluid contains uniformly distributed solid particles, with no dynamic interaction between the carrier fluid and the particles at the initial moment. Linear stability curves are obtained for different values of the particle loading parameter, the velocity ratio and the velocity deficit. It is shown that an increase in the velocity ratio destabilizes the flow, while the particle loading parameter has a stabilizing effect. The role of the velocity deficit is also destabilizing: an increase in the velocity deficit leads to a less stable flow.
Abstract: The objective of this study is to determine the role of media in influencing the values, attitudes and behaviors of Thai youths. Analytical qualitative research techniques were used for this purpose: data were collected through individual interviews and focus group discussions with journalists, samples of high school and university students, and parents. The results show that "social media" is still the most popular media for Thai youths; it also remains in the hands of the marketing business and can motivate Thai youths to do many things. The main reasons for media exposure are to find the quality information they want quickly, to gain satisfaction, and to use social media for excitement and to build communities. They believe that the needed media and information literacy skills are defined by making judgments, personal integrity, family upbringing and the behavior of close friends.
Abstract: Standards for learning objects focus primarily on content presentation. They have already been extended to support automatic evaluation, but this is limited to exercises with a predefined set of answers; the existing standards lack the metadata required by specialized evaluators to handle types of exercises with an indefinite set of solutions. To address this issue, existing learning object standards were extended to the particular requirements of a specialized domain. This paper presents a definition of programming problems as learning objects, compatible both with Learning Management Systems and with systems performing automatic evaluation of programs. The proposed definition includes metadata that cannot be conveniently represented using existing standards, such as the type of automatic evaluation, the requirements of the evaluation engine, and the roles of the different assets (test cases, program solutions, etc.). The EduJudge project and its main services are also presented as a case study on the use of the proposed definition of programming problems as learning objects.
Abstract: This paper proposes an efficient lattice-reduction-aided detection (LRD) scheme to improve the detection performance of MIMO-OFDM systems. In the proposed scheme, V candidate symbols are considered at the first layer, and V probable streams are detected with the LRD scheme according to the V candidate symbols detected first. The most probable stream is then selected through an ML test. Since the proposed scheme detects the initial symbol more accurately and reduces error propagation to the remaining symbols, it shows improved performance over conventional LRD at very low complexity.
Abstract: We demonstrate through a sample application, E-banking, that the Web Service Modelling Language Ontology component can be used as a very powerful object-oriented database design language with logic capabilities. Its conceptual syntax allows the definition of class hierarchies, and its logic syntax allows the definition of constraints in the database. Relations, which are available for modelling relations of three or more concepts, can be connected to logical expressions, allowing the implicit specification of database content. Using a reasoning tool, logic queries can also be made against the database in simulation mode.
Abstract: In general, class complexity is measured based on factors such as Lines of Code (LOC), Function Points (FP), Number of Methods (NOM), Number of Attributes (NOA) and so on. Several new techniques, methods and metrics with different factors have been developed by researchers for calculating the complexity of classes in object-oriented (OO) software. Earlier, Arockiam et al. proposed a new complexity measure named Extended Weighted Class Complexity (EWCC), an extension of the Weighted Class Complexity proposed by Mishra et al. EWCC is the sum of the cognitive weights of the attributes and methods of the class and of the derived classes. In EWCC, the cognitive weight of each attribute is taken to be 1. The main problem with the EWCC metric is that every attribute holds the same value, whereas in general the cognitive load of understanding different types of attributes is not the same. We therefore propose a new metric named Attribute Weighted Class Complexity (AWCC), in which cognitive weights are assigned to the attributes according to the effort needed to understand their data types. Through case studies and experiments, the proposed metric has proved to be a better measure of the complexity of classes with attributes.
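The difference between the two metrics can be sketched in a few lines: EWCC counts every attribute as 1, while AWCC draws each attribute's weight from its data type. The weight table below is a hypothetical example for illustration, not the calibration used in the paper.

```python
# Illustrative sketch of Attribute Weighted Class Complexity (AWCC):
# attributes contribute a cognitive weight derived from how hard their
# data type is to understand, instead of a flat weight of 1 as in EWCC.
# The TYPE_WEIGHT table is a made-up example calibration.

TYPE_WEIGHT = {
    "int": 1, "float": 1, "bool": 1,   # primitive types: easiest
    "str": 2, "list": 3, "dict": 4,    # structured types: harder
    "object": 5,                       # user-defined types: hardest
}

def awcc(attribute_types, method_weights):
    """AWCC = sum of type-derived attribute weights
            + sum of the cognitive weights of the class's methods."""
    return (sum(TYPE_WEIGHT.get(t, 5) for t in attribute_types)
            + sum(method_weights))

def ewcc_attributes(attribute_types, method_weights):
    """EWCC-style count for comparison: every attribute weighs 1."""
    return len(attribute_types) + sum(method_weights)

# a class with four attributes and two methods of cognitive weight 2 and 3
attrs = ["int", "str", "dict", "object"]
print(awcc(attrs, [2, 3]))             # 1+2+4+5 + (2+3) = 17
print(ewcc_attributes(attrs, [2, 3]))  # 4 + (2+3) = 9
```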
Abstract: A study was carried out at the Rice Research Institute of Iran (RRII) to investigate the effect of the differential peripheral speed of the rollers of a commercial rubber roll husker and of paddy moisture content on the husking index and the percentage of broken rice. The experiment was conducted at six levels of roller differential speed (1.5, 2.2, 2.9, 3.6, 4.3 and 5 m/s) and three levels of paddy moisture content (8-9, 10-11 and 12-13% w.b.). Two common paddy varieties, namely Binam and Khazer, were selected for this study. Results revealed that the effect of roller differential speed and moisture content significantly (P
Abstract: This paper describes a method to improve the robustness of a face recognition system based on the combination of two compensating classifiers. The face images are preprocessed by appearance-based statistical approaches, namely Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The LDA features of the face image are taken as the input of a Radial Basis Function Network (RBFN). The proposed approach has been tested on the ORL database, and the experimental results show that the LDA+RBFN algorithm achieves a recognition rate of 93.5%.
Abstract: A full six-degrees-of-freedom (6-DOF) flight dynamics model is proposed for the accurate prediction of short- and long-range trajectories of high-spin and fin-stabilized projectiles through atmospheric flight to the final impact point. The projectile is assumed to be rigid (non-flexible) and rotationally symmetric about its spin axis, launched at low and high pitch angles. The mathematical model is based on the full equations of motion set up in the no-roll body reference frame and is integrated numerically from given initial conditions at the firing site. The projectile's maneuvering motion depends on the most significant force and moment variations, in addition to wind and gravity. The computational flight analysis takes into consideration Mach number and total angle-of-attack effects by means of variable aerodynamic coefficients; for the purposes of the present work, linear interpolation has been applied to the tabulated database of McCoy's book. The developed computational method gives satisfactory agreement with published data from verified experiments and computational codes on atmospheric projectile trajectory analysis for various initial firing flight conditions.
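The numerical integration step can be illustrated with a heavily simplified point-mass sketch (quadratic drag plus gravity only); it omits the spin, moments, wind and variable aerodynamic coefficients of the full 6-DOF model, and the drag constant and launch values are illustrative assumptions.

```python
import math

def trajectory_range(v0, pitch_deg, drag_k=0.001, g=9.81, dt=0.01):
    """Explicit-Euler integration of a point mass under gravity and
    quadratic drag, a = -k*|v|*v - g*j_hat, until impact (y < 0).
    drag_k stands in for rho*Cd*A/(2m); the value here is made up."""
    vx = v0 * math.cos(math.radians(pitch_deg))
    vy = v0 * math.sin(math.radians(pitch_deg))
    x = y = 0.0
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        ax = -drag_k * speed * vx
        ay = -drag_k * speed * vy - g
        x += vx * dt
        y += vy * dt
        vx += ax * dt
        vy += ay * dt
    return x  # horizontal range to the impact point

rng = trajectory_range(300.0, 45.0)
vacuum = 300.0 ** 2 * math.sin(math.radians(90.0)) / 9.81  # drag-free range
print(round(rng), "<", round(vacuum))
```

The full model replaces these two scalar accelerations with the complete force and moment equations in the no-roll body frame and interpolates the aerodynamic coefficients at each step.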
Abstract: High Order Statistics (HOS) analysis is expected to provide many candidate features that can be selected for pattern recognition. More candidate features can be extracted by simple manipulation of the signal through a specific mathematical function prior to the HOS analysis. A feature extraction method using HOS analysis combined with a difference-to-the-Nth-power manipulation has been examined in an Automatic Modulation Recognition (AMR) application, recognizing three digital modulation schemes (QPSK, 16QAM and 64QAM) over an AWGN transmission channel. Simulation results are reported for HOS analysis up to order 12 and the difference-to-the-Nth-power manipulation up to N = 4. The obtained accuracy of AMR using the Simple Decision classifier is 90% at SNR > 10 dB, while the Voted Decision method reaches 96% at SNR > 2 dB.
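The feature-generation idea can be sketched as follows: apply a difference-to-the-Nth-power manipulation to the signal, then take higher-order statistics of each manipulated version as candidate features. Plain central moments stand in here for the paper's full HOS analysis, and the toy signal and parameter choices are illustrative.

```python
# Sketch of candidate-feature generation for AMR: for n = 1..n_max, take
# the first difference of the signal raised element-wise to the nth power,
# then compute its central moments up to max_order as features.

def nth_power_difference(signal, n):
    """First difference of the signal, raised element-wise to the nth power."""
    return [(b - a) ** n for a, b in zip(signal, signal[1:])]

def central_moment(v, order):
    m = sum(v) / len(v)
    return sum((x - m) ** order for x in v) / len(v)

def hos_features(signal, n_max=4, max_order=12):
    """Candidate feature vector: central moments of orders 2..max_order of
    each nth-power-differenced version of the signal (n = 1..n_max)."""
    feats = []
    for n in range(1, n_max + 1):
        d = nth_power_difference(signal, n)
        feats.extend(central_moment(d, k) for k in range(2, max_order + 1))
    return feats

sig = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
f = hos_features(sig)
print(len(f))  # 4 manipulations x 11 moment orders = 44 candidate features
```

A classifier (Simple Decision or Voted Decision in the abstract's terms) would then select from these candidates the statistics that best discriminate the modulation schemes.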