Abstract: Wireless sensor networks are energy-constrained, so the
energy efficiency of sensor nodes is the main design issue.
Clustering of nodes is an energy-efficient approach: it prolongs the
lifetime of wireless sensor networks by avoiding long-distance
communication. Clustering algorithms operate in rounds, and their
performance depends on the round time. A large round time consumes
more energy of the cluster heads, while a small round time causes
frequent re-clustering. Existing clustering algorithms therefore
apply a trade-off and calculate the round time from the initial
parameters of the network. However, a round time based on initial
parameters is not appropriate throughout the network lifetime,
because wireless sensor networks are dynamic in nature (nodes can be
added to the network, or nodes run out of energy). In this paper, a
variable round-time approach is proposed that calculates the round
time from the number of active nodes remaining in the field, making
the clustering algorithm adaptive to network dynamics. For
simulation, the approach was implemented with LEACH in NS-2; the
results show a 6% increase in network lifetime, a 7% increase in the
50% node-death time, and a 5% improvement in the data units gathered
at the base station.
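The abstract does not give the exact round-time formula, so the sketch below only illustrates the idea: scale an initial round time by the fraction of nodes still alive, with a floor to avoid degenerate rounds. The linear scaling and the constants `t_init`/`t_min` are assumptions, not the paper's method.

```python
# Hypothetical sketch: round time shrinks as nodes die, so re-clustering
# happens sooner in a depleted network. Constants are illustrative only.

def round_time(active_nodes, initial_nodes, t_init=20.0, t_min=5.0):
    """Round duration proportional to the fraction of alive nodes."""
    if initial_nodes <= 0:
        raise ValueError("initial_nodes must be positive")
    fraction_alive = active_nodes / initial_nodes
    return max(t_min, t_init * fraction_alive)

# With all 100 nodes alive the full initial round time is used;
# with 40 alive the round shortens proportionally.
print(round_time(100, 100))  # 20.0
print(round_time(40, 100))   # 8.0
```

A fixed-round LEACH would instead return `t_init` regardless of `active_nodes`, which is exactly the behaviour the paper argues against.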
Abstract: Deprivation indices are widely used in public health
studies; they are also referred to as indices of inequality or
disadvantage. Although many such indices have been built before,
existing indices are considered less appropriate for other countries
or areas with different socio-economic conditions and geographical
characteristics. The objective of this study is to construct an
index based on the geographical and socio-economic factors of
Peninsular Malaysia, defined as the weighted household-based
deprivation index. The study employs variables based on household
items, household facilities, school attendance and education level
obtained from the Malaysia 2000 census report. Factor analysis is
used to extract the latent variables from the indicators, i.e. to
reduce the observable variables to a smaller number of components or
factors. Based on the factor analysis, two extracted factors were
selected, named the Basic Household Amenities factor and the
Middle-Class Household Items factor. Districts with lower index
values are located in the less developed states such as Kelantan,
Terengganu and Kedah, while areas with high index values are located
in developed states such as Pulau Pinang, W.P. Kuala Lumpur and
Selangor.
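A weighted index of this kind is typically computed by standardizing each indicator and combining the standardized values with factor loadings as weights. The sketch below shows that mechanism only; the district names, indicators, and loadings are invented for illustration and are not taken from the Malaysia 2000 census analysis.

```python
# Illustrative weighted household-based index: z-score each indicator,
# then sum weighted by (hypothetical) factor loadings.
import math

def zscores(values):
    mean = sum(values) / len(values)
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return [(v - mean) / sd for v in values]

def weighted_index(districts, loadings):
    """districts: {name: {indicator: value}}; loadings: {indicator: weight}."""
    names = list(districts)
    index = {n: 0.0 for n in names}
    for ind, w in loadings.items():
        zs = zscores([districts[n][ind] for n in names])
        for n, z in zip(names, zs):
            index[n] += w * z
    return index

districts = {
    "A": {"piped_water": 0.95, "car_ownership": 0.60},
    "B": {"piped_water": 0.70, "car_ownership": 0.30},
    "C": {"piped_water": 0.50, "car_ownership": 0.20},
}
loadings = {"piped_water": 0.8, "car_ownership": 0.6}  # hypothetical weights
idx = weighted_index(districts, loadings)
print(sorted(idx, key=idx.get))  # districts ordered from lowest index (most deprived) up
```

In the study itself the weights would come from the factor analysis loadings rather than being chosen by hand.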
Abstract: Deep cold rolling (DCR) and low plasticity burnishing
(LPB) are cold working processes that readily produce a smooth,
work-hardened surface by plastic deformation of surface
irregularities. The present study focuses on the surface roughness
and surface hardness of AISI 4140 work material, using a fractional
factorial design of experiments. The surface integrity aspects of
the work material were assessed in order to identify the predominant
factors among the selected parameters; these were then ranked in
order of significance, and the factor levels were set to minimize
surface roughness and/or maximize surface hardness. The influence of
the main process parameters (force, feed rate, number of tool
passes/overruns, initial roughness of the workpiece, ball material,
ball diameter and lubricant) on the surface roughness and hardness
of AISI 4140 steel was studied for both the LPB and DCR processes,
and the results are compared. The LPB process improved surface
hardness by 167%, while the DCR process improved it by 442%. It was
also found that force, ball diameter, number of tool passes and
initial roughness of the workpiece are the most pronounced
parameters, having a significant effect on the workpiece surface
during deep cold rolling and low plasticity burnishing.
Abstract: Compensating physiological motion in the context of
minimally invasive cardiac surgery has become an attractive issue,
since such surgery outperforms traditional cardiac procedures and
offers remarkable benefits. Owing to space restrictions, computer
vision techniques have proven to be the most practical and suitable
solution. However, the lack of robustness and efficiency of existing
methods makes physiological motion compensation an open and
challenging problem. This work focuses on increasing robustness and
efficiency by exploring the classes of l1- and l2-regularized
optimization, emphasizing the use of explicit regularization. Both
approaches are based on natural features of the heart and use
intensity information. The results point to the l1-regularized
optimization class as the best, since it offered the lowest
computational cost and the smallest average error, and it proved to
work even under complex deformations.
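One concrete way explicit l1/l2 regularization enters an optimization loop is through the regularizers' proximal (shrinkage) operators. The sketch below is a generic scalar illustration of that difference, not the paper's tracking algorithm: l1 zeroes out small values (sparsity), while l2 only scales them down.

```python
# Proximal operators of the two regularizers on a scalar coefficient.
# Generic illustration; the heart-tracking formulation is not reproduced.

def prox_l1(x, lam):
    """Soft-thresholding: proximal operator of lam * |x| (sparsifying)."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def prox_l2(x, lam):
    """Proximal operator of (lam / 2) * x**2: uniform shrinkage toward 0."""
    return x / (1.0 + lam)

# l1 kills a small coefficient outright; l2 merely shrinks it.
print(prox_l1(0.3, 0.5))  # 0.0
print(prox_l2(0.3, 0.5))  # 0.2
```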
Abstract: The objective of this study is to evaluate the occurrence
of fungi in aerobic and anoxic activated sludge from membrane
bioreactors (MBRs). Thirty-six samples of aerobic and anoxic
activated sludge were taken from two MBRs treating domestic
wastewater; over a period of eight months, two samples were taken
from each plant per month. The samples were prepared for counting
and identification of fungi. The data show that sixty species
belonging to 27 genera were collected from the activated sludge
samples under aerobic and anoxic conditions. Regarding
identification, under aerobic conditions Geotrichum was found at
8.8%, followed by Penicillium (75.0%), yeasts (65.7%) and
Trichoderma (55.5%), while yeasts (77.1%), Geotrichum candidum and
Penicillium (61.1%) were the most prevalent species in anoxic
activated sludge. The results indicate that activated sludge is a
habitat for the growth and sporulation of different groups of fungi,
both saprophytic and pathogenic.
Abstract: Gasoline octane number is the standard measure of the
anti-knock properties of a fuel. In platforming, one of the
important unit operations in oil refineries, the octane number can
be determined by online measurement or with CFR (Cooperative Fuel
Research) engines. Online measurement using direct octane number
analyzers is expensive, so a feasible alternative analyzer is
needed, such as an ANFIS estimator.
ANFIS is a system in which a neural network is incorporated into a
fuzzy system, so that the fuzzy system is tuned automatically from
data by the learning algorithms of neural networks. ANFIS constructs
an input-output mapping based both on human knowledge and on
generated input-output data pairs.
In this research, 31 industrial data sets were used (21 for training
and the rest for generalization). The simulation results show that
the hybrid training algorithm in ANFIS gives good agreement between
the industrial data and the simulated results.
Abstract: One-way functions are functions that are easy to
compute but hard to invert. Their existence is an open conjecture; it
would imply the existence of intractable problems (i.e. NP-problems
which are not in the P complexity class).
If true, the existence of one-way functions would have an impact
on the theoretical framework of physics, in particular quantum
mechanics. This aspect of one-way functions has never been shown
before.
In the present work, we put forward the following. We can calculate
the microscopic state (say, the particle spin in the z direction) of
a macroscopic system (a measuring apparatus registering the
particle's z-spin) from the system's macroscopic state (the
apparatus output); let us call this association the function F. The
question is: can we compute the function F in the inverse direction?
In other words, can we compute the macroscopic state of the system
from its microscopic state (the preimage F^-1)?
In the paper, we assume that the function F is a one-way function.
This assumption implies that at the macroscopic level the
Schrödinger equation becomes unfeasible to compute; this
unfeasibility acts as a limit on the validity of the linear
Schrödinger equation.
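The easy/hard asymmetry the abstract relies on can be illustrated with the textbook toy example of a candidate one-way function: multiplying two primes is fast, while recovering them by trial division takes time that grows with the smaller factor. (A genuine one-way function, if one exists, would make inversion infeasible, not merely slower.)

```python
# Toy easy-to-compute / hard-to-invert pair: multiplication vs factoring.

def forward(p, q):
    return p * q  # easy direction: one multiplication

def invert(n):
    """Brute-force inversion: find a nontrivial factorization of n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime (or 1): no nontrivial factors

n = forward(101, 103)
print(n)          # 10403
print(invert(n))  # (101, 103)
```

For cryptographic-size inputs the forward direction stays cheap while this inversion loop becomes astronomically long, which is the asymmetry the conjecture formalizes.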
Abstract: In recent years, environmental regulations have been
forcing manufacturers to consider the recovery of end-of-life and/or
returned products for refurbishing, recycling, remanufacturing/
repair and disposal in supply chain management. In this paper, a
mathematical model is formulated for a single-product
production-inventory system that considers remanufacturing/reuse of
returned products, where the rate of returned products follows a
demand-like function dependent on purchasing price and acceptance
quality level. The model is useful for deciding whether to
remanufacture or dispose of returned products, alongside newly
produced products, to satisfy a stationary demand. In addition, a
modified genetic algorithm inspired by the particle swarm
optimization method is proposed. A numerical analysis of a case
study is carried out to validate the model.
Abstract: The continued interest in distributed generation in recent
years is leading to growth in the number of distributed generators
connected to distribution networks. Steady-state voltage rise
resulting from the connection of these generators can be a major
obstacle to their connection at lower voltage levels. The present
electric distribution network is designed to keep the customer
voltage within tolerance limits, which may require a reduction in
connectable generation capacity and under-utilization of appropriate
generation sites. Distribution network operators therefore need a
proper voltage regulation method to allow significant integration of
distributed generation into the existing network. In this work, the
voltage rise problem in a typical distribution system is studied,
and a method is developed for voltage regulation of a distribution
system with multiple DG units through the coordinated operation of
distributed generators, capacitors and OLTCs. A sensitivity-based
analysis is carried out to determine the priority of individual
generators in a multiple-DG environment. The effectiveness of the
developed method is evaluated for various cases through simulation
results.
Abstract: This study describes a chitosan membrane platform modified
with a nanostructure pattern fabricated using nanotechnology. The
cell-substrate interaction between neuro-2a neuroblast cell lines
and chitosan membranes (flat, nanostructured and
nanostructure-patterned types) was investigated. The adhered
morphology of neuro-2a cells depends on the topography of the
chitosan surface: neuro-2a cells showed different morphogenesis when
adhering to flat versus nanostructured chitosan membranes, and the
projected area of neuro-2a cells on the flat chitosan membrane was
larger than on the nanostructured membrane. In addition, neuro-2a
cells preferred to adhere, immobilize and differentiate on the flat
chitosan surface region rather than on the nanostructured membrane.
The experiments suggest that surface topography can be used as a
mechanism to confine groups of neuro-2a cells to a particular
rectangular area on the chitosan membrane. Our findings provide a
platform for future patch-clamp recording of the electrophysiological
behavior of neurons in vitro.
Abstract: Two freshwater fishes, Rasbora sumatrana (Cyprinidae) and
Poecilia reticulata (guppy) (Poeciliidae), were exposed for a
four-day period under laboratory conditions to a range of copper
(Cu) and cadmium (Cd) concentrations. Mortality was assessed and
median lethal concentrations (LC50) were calculated. LC50 values
decreased with increasing exposure time for both metals. For R.
sumatrana, the LC50s at 24, 48, 72 and 96 hours were 54.2, 30.3,
18.9 and 5.6 μg/L for Cu, and 1440.2, 459.3, 392.3 and 101.6 μg/L
for Cd, respectively. For P. reticulata, the LC50s at 24, 48, 72 and
96 hours were 348.9, 145.4, 61.3 and 37.9 μg/L for Cu, and 8205.6,
2827.1, 405.8 and 168.1 μg/L for Cd, respectively. The results
indicate that Cu was more toxic than Cd to both fishes (Cu > Cd) and
that R. sumatrana was more sensitive than P. reticulata to the
metals.
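The abstract does not state how the LC50s were computed, so the sketch below shows one common textbook approach: linear interpolation of mortality against log10(concentration) between the two doses that bracket 50% mortality. The bioassay data used here are invented for illustration.

```python
# Estimate an LC50 by log-linear interpolation between bracketing doses.
# Hypothetical data; probit/logit regression is the more rigorous choice.
import math

def lc50(concentrations, mortality_fractions):
    pairs = sorted(zip(concentrations, mortality_fractions))
    for (c_lo, m_lo), (c_hi, m_hi) in zip(pairs, pairs[1:]):
        if m_lo <= 0.5 <= m_hi:
            frac = (0.5 - m_lo) / (m_hi - m_lo)
            log_c = math.log10(c_lo) + frac * (math.log10(c_hi) - math.log10(c_lo))
            return 10 ** log_c
    raise ValueError("50% mortality not bracketed by the tested doses")

# Hypothetical 96-h Cu bioassay: mortality crosses 50% between 30 and 60 ug/L.
print(round(lc50([10, 30, 60, 120], [0.1, 0.3, 0.7, 1.0]), 1))
```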
Abstract: This paper develops driver reaction-time models for
car-following analysis based on human factors. Reaction time was
classified as brake-reaction time (BRT) and acceleration/deceleration
reaction time (ADRT). The BRT occurs when the lead vehicle is
braking and its brake light is on, while the ADRT occurs when the
driver reacts to adjust his/her speed using the gas pedal only. The
study evaluates the effect of driver characteristics and traffic
kinematic conditions on driver reaction time in a car-following
environment. The kinematic conditions introduced urgency and
expectancy based on the braking behaviour of the lead vehicle at
different speeds and spacings; the conditions used for evaluating
the BRT are classified as normal, surprised, and stationary. Data
were collected on a driving simulator integrated into a real car and
included the BRT and ADRT (as dependent variables) and the driver's
age, gender, driving experience, driving intensity (driving hours
per week), vehicle speed, and spacing (as independent variables).
The results showed a significant difference in the BRT across the
normal, surprised, and stationary scenarios, supporting the
hypothesis that both urgency and expectancy have significant effects
on the BRT. The driver's age, gender, speed, and spacing were found
to be significant variables for the BRT in all scenarios, while the
driver's age and gender were significant variables for the ADRT. The
research presented in this paper is part of a larger project to
develop a driver-sensitive in-vehicle rear-end collision warning
system.
Abstract: Fuzzy set theory has been applied in many fields, such as
operations research, control theory, and the management sciences. In
particular, one application of this theory in decision-making
problems is linear programming with fuzzy numbers. In this study, we
present a new method for solving fuzzy number linear programming
problems by use of a linear ranking function. In fact, our method is
similar to the simplex method previously used for solving linear
programming problems in a crisp environment.
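The abstract does not specify which linear ranking function is used, so the sketch below takes a standard example from the fuzzy-LP literature for triangular fuzzy numbers (a, b, c): R(A) = (a + 2b + c) / 4. A simplex-style method can then compare fuzzy costs by comparing their ranks.

```python
# A common linear ranking function for triangular fuzzy numbers.
# Illustrative choice; the paper's exact ranking function is not stated.

def rank(tri):
    """Linear rank of a triangular fuzzy number (left, peak, right)."""
    a, b, c = tri
    return (a + 2 * b + c) / 4.0

# The fuzzy cost (1, 2, 3) ranks below (2, 3, 6), so a ranking-based
# simplex variant would prefer the first in a minimization pivot.
print(rank((1, 2, 3)))  # 2.0
print(rank((2, 3, 6)))  # 3.5
```

Because the ranking is linear, comparisons made this way are consistent under the additions and scalar multiples performed during simplex pivoting.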
Abstract: To calculate the flexural strength of normal-strength
concrete (NSC) beams, the nonlinear actual concrete stress
distribution within the compression zone is normally replaced by an
equivalent rectangular stress block, with two coefficients, α and β,
regulating the intensity and depth of the equivalent stress
respectively. For NSC beam design, α and β are usually assumed
constant, as 0.85 and 0.80, in reinforced concrete (RC) codes. An
earlier investigation by the authors showed that α is not a constant
but is significantly affected by the flexural strain gradient,
increasing with strain gradient up to a maximum value. This
indicates that a larger concrete stress can be developed in flexure
than that stipulated by design codes. As an extension and
application of the authors' previous study, the modified equivalent
concrete stress block is used here to produce a series of design
charts showing the maximum design limits of flexural strength and
ductility of singly and doubly reinforced NSC beams, through which
both strength and ductility design limits are improved by taking the
strain gradient effect into account.
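For reference, the conventional rectangular-stress-block calculation that the abstract's modified α feeds into can be sketched as follows for a singly reinforced rectangular section. The formula (a = As·fy / (α·f'c·b), Mn = As·fy·(d − a/2)) is the standard code expression; the section dimensions below are hypothetical.

```python
# Code-style flexural strength with the equivalent rectangular stress
# block and the conventional alpha = 0.85 (the coefficient the paper
# argues is actually strain-gradient dependent). Units: N, mm, MPa.

def moment_capacity(As, fy, fc, b, d, alpha=0.85):
    """Nominal moment: a = As*fy / (alpha*fc*b); Mn = As*fy*(d - a/2)."""
    a = As * fy / (alpha * fc * b)  # depth of the equivalent stress block
    return As * fy * (d - a / 2.0)

# Hypothetical section: As = 942 mm^2 (3 x 20 mm bars), fy = 460 MPa,
# f'c = 30 MPa, b = 300 mm, d = 450 mm.
Mn = moment_capacity(As=942.0, fy=460.0, fc=30.0, b=300.0, d=450.0)
print(round(Mn / 1e6, 1), "kN*m")  # 182.7 kN*m
```

A strain-gradient-dependent α larger than 0.85 would reduce the block depth a and slightly raise Mn, which is the effect the design charts quantify.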
Abstract: Solar water heating (SWH) systems are gaining popularity
in ASEAN amid a growing affluent population and environmental
concerns over a seemingly unchanged reliance on fossil-based fuels.
The penetration of these systems and technologies into ASEAN markets
is a welcome development; however, a method for assessing their
thermal performance is needed. This paper discusses the reasons for
this need and a suitable method for the thermal performance
evaluation of SWH systems in ASEAN. The paper also calls for
research to focus on establishing reliable data to be entered into
performance rating software. The establishment of accredited solar
system testing facilities can help boost the competitiveness of the
ASEAN solar industry.
Abstract: This paper identifies five key design characteristics of
production scheduling software systems in printed circuit board
(PCB) manufacturing. The authors consider that, in addition to an
effective scheduling engine, a scheduling system should be able to
process a preventative maintenance calendar, give the user the
flexibility to handle data from a variety of electronic sources, run
simulations to support decision-making, and have simple and
customisable graphical user interfaces. These design considerations
resulted from a review of the academic literature, an evaluation of
commercial applications and a compilation of the requirements of a
PCB manufacturer. It was found that, of the systems evaluated, those
that effectively addressed all five characteristics outlined in this
paper were the most robust and could be used in PCB manufacturing.
Abstract: In this paper, a clustering algorithm named K-Harmonic
Means (KHM) is employed in the training of Radial Basis Function
Networks (RBFNs). KHM organizes the data into clusters and
determines the centres of the basis functions. The popular
clustering algorithms K-means (KM) and Fuzzy c-means (FCM) are
highly dependent on the initial choice of elements that represent
the clusters well; in KHM this problem is avoided, leading to
improved classification performance compared to the other clustering
algorithms. A comparison of classification accuracy was performed
between KM, FCM and KHM on the benchmark data sets Iris Plant,
Diabetes and Breast Cancer. RBFN training with the KHM algorithm
shows better accuracy on these classification problems.
Abstract: The application of wood in rural construction has been
widespread around the world since remote times. However, its
inclusion in structural design requires strong support from broad
knowledge of the material's properties. The pertinent literature
reports the application of optical methods for determining the
full-field displacement of bodies exhibiting regular as well as
irregular surfaces. The use of moiré techniques in experimental
mechanics consists of analyzing the patterns generated on the body
surface before and after deformation. The objective of this research
is to study the qualitative deformation behavior of wooden test
specimens under specific loading situations. The experimental setup
follows the literature description of shadow moiré methods. The
results indicate a strong influence of anisotropy on the generated
displacement field. Important qualitative as well as quantitative
stress and strain distributions were obtained for wooden members
applicable to rural construction.
Abstract: The spin (ms) and orbital (mo) magnetic moments of
antiferromagnetic NiO and MnO have been studied in the local spin
density approximation (LSDA+U) within the full-potential linear
muffin-tin orbital (FP-LMTO) method, with the Coulomb interaction U
varying from 0 to 10 eV, the exchange interaction J from 0 to
1.0 eV, and the volume compression VC in the range of 0 to 80%. Our
calculated results show that the spin and orbital magnetic moments
increase linearly with increasing U and J. Interesting behaviour
appears when the volume compression exceeds 70% for NiO and 50% for
MnO, at which point ms collapses. A further increase of the volume
compression to 80% leads to the disappearance of both magnetic
moments.
Abstract: De novo genome assembly is always fragmented.
Fragmentation is more serious with the popular next-generation
sequencing (NGS) data because NGS sequences are shorter than
traditional Sanger sequences. As the data throughput of NGS is high,
the fragmentation in assemblies is usually not the result of missing
data; on the contrary, the assembled sequences, called contigs, are
often connected to more than one other contig in a complicated
manner, leading to the fragmentation. In such a network of
connections between contigs, called a contig graph, false
connections are inevitable because of repeats and sequencing/
assembly errors. Simplifying a contig graph by removing false
connections directly improves genome assembly. In this work, we have
developed a tool, SIMGraph, to resolve ambiguous connections between
contigs using NGS data. Applying SIMGraph to the assembly of a
fungus and a fish genome, we resolved 27.6% and 60.3% of ambiguous
contig connections, respectively. These results can reduce the
experimental effort in resolving contig connections.
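The contig-graph simplification idea can be sketched minimally as below: contigs are nodes, candidate joins are edges carrying a support count (e.g. the number of linking read pairs), and poorly supported edges are discarded as likely false connections. The abstract does not describe SIMGraph's actual evidence or rules, so the threshold criterion and data here are purely illustrative.

```python
# Toy contig-graph cleanup: keep only well-supported contig joins.
# Edge supports and the threshold are hypothetical, not SIMGraph's method.

def simplify(edges, min_support=3):
    """edges: {(contig_a, contig_b): support}; drop weakly supported joins."""
    return {pair: s for pair, s in edges.items() if s >= min_support}

edges = {
    ("ctg1", "ctg2"): 12,  # strong join
    ("ctg1", "ctg3"): 1,   # likely repeat-induced false connection
    ("ctg2", "ctg4"): 7,
}
kept = simplify(edges)
print(sorted(kept))  # only the well-supported connections remain
```

After such pruning, contigs left with a single unambiguous neighbour can be joined, which is how removing false edges translates into longer assembled sequences.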