Abstract: In this study, we examine the data loss tolerance of a Support Vector Machine (SVM) based activity recognition model and its multi-activity classification performance when data are received over a lossy wireless sensor network. First, the classification algorithm is evaluated for resilience to random data loss using 3D acceleration sensor data for sitting, lying, walking and standing actions. The results show that the proposed classification method recognizes these activities successfully despite high data loss. Second, the effect of differentiated quality of service on activity recognition success is measured with activity data acquired from a multi-hop wireless sensor network, which introduces high data loss. The effect of the number of nodes on reliability and multi-activity classification success is demonstrated in a simulation environment. To the best of our knowledge, the effect of data loss in a wireless sensor network on the activity detection success rate of an SVM-based classification algorithm has not been studied before.
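The random-loss experiment described above can be sketched in a few lines. This is a minimal illustration rather than the paper's setup: the synthetic 3D-acceleration windows, the per-axis mean/std features, and the RBF kernel are all assumptions standing in for details the abstract does not give.

```python
# Sketch: SVM activity recognition under random data loss (synthetic data).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical mean accelerations (x, y, z) for four activities.
centres = [[0.0, 0.0, 1.0],   # sitting
           [0.0, 1.0, 0.0],   # lying
           [0.5, 0.5, 0.7],   # walking
           [0.0, 0.2, 1.0]]   # standing

def window_features(raw):
    # Summarize each 50-sample window by per-axis mean and std.
    return np.hstack([raw.mean(axis=1), raw.std(axis=1)])

def make_raw(n_per_class):
    raw = np.stack([rng.normal(loc=c, scale=0.3, size=(50, 3))
                    for c in centres for _ in range(n_per_class)])
    labels = np.repeat(np.arange(4), n_per_class)
    return raw, labels

train_raw, train_y = make_raw(100)
clf = SVC(kernel="rbf", gamma="scale").fit(window_features(train_raw), train_y)

def accuracy_under_loss(loss_rate):
    # Random data loss: only a fraction of each window's samples survives.
    raw, labels = make_raw(50)
    keep = max(1, int(50 * (1 - loss_rate)))
    feats = window_features(raw[:, :keep, :])
    return (clf.predict(feats) == labels).mean()

for rate in (0.0, 0.5, 0.8):
    print(f"loss={rate:.0%}  accuracy={accuracy_under_loss(rate):.2f}")
```

Because the features are window statistics, accuracy degrades gracefully as samples are dropped, which mirrors the resilience the study reports.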
Abstract: This paper presents the scaling laws that provide the
criteria of geometry and dynamic similitude between the full-size
rotor-shaft system and its scale model, and can be used to predict the
torsional vibration characteristics of the full-size rotor-shaft system by
manipulating the corresponding data of its scale model. The scaling
factors, which play fundamental roles in predicting the geometry and
dynamic relationships between the full-size rotor-shaft system and its
scale model, for torsional free vibration problems between scale and
full-size rotor-shaft systems, are first obtained from the equation of
motion of torsional free vibration. Then, the scaling factor of the external
force (i.e., torque) required for torsional forced vibration problems
is determined from Newton's second law. Numerical results
show that the torsional free and forced vibration characteristics of a
full-size rotor-shaft system can be accurately predicted from those of
its scale models by using the foregoing scaling factors. It is therefore
believed that the presented approach will be useful for investigating
relevant phenomena in scale-model tests.
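For the simplest case, such scaling factors can be sketched directly. This assumes a geometrically similar scale model (length scale factor λ = L_m/L_f) made of the same material as the full-size system (shared G and ρ); the paper's more general factors need not make these assumptions.

```latex
% Torsional free vibration of a uniform shaft element:
%   \rho I_p \,\ddot{\theta} = G I_p \,\partial^2\theta/\partial x^2
% With length scale factor \lambda and identical material (G, \rho shared):
\begin{align*}
  I_{p,m} &= \lambda^{4} I_{p,f}, \qquad
  k_{t,m} = \frac{G\, I_{p,m}}{L_m} = \lambda^{3} k_{t,f}, \qquad
  J_m = \lambda^{5} J_f \\
  \omega_m &= \sqrt{k_{t,m}/J_m} = \lambda^{-1}\,\omega_f
  \quad\text{(free vibration frequency)} \\
  T_m &= J_m \ddot{\theta}_m \sim \lambda^{5}\,\lambda^{-2}\, T_f
       = \lambda^{3} T_f
  \quad\text{(torque, from Newton's second law)}
\end{align*}
```

The frequency factor λ⁻¹ follows from the free vibration equation alone, while the torque factor λ³ requires Newton's second law, matching the two-step structure of the abstract.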
Abstract: Future floods can be predicted using the probable
maximum flood (PMF). The PMF is calculated from historical
discharge or rainfall data, assuming that the other climatic parameters
remain stationary. However, the climate is changing globally, and
key climatic variables such as temperature, evaporation, rainfall and sea
level are likely to change. To develop scenarios at a basin or
catchment scale, these important climatic variables should be
considered. Nowadays, scenarios based on climatic variables are more
suitable than the PMF. Six such scenarios were developed for the large Fitzroy
basin and are presented in this paper.
Abstract: In-memory database systems are becoming popular
due to the availability and affordability of sufficiently large RAM and
processors in modern high-end servers with the capacity to manage
large in-memory database transactions. While fast and reliable in-memory
systems are still being developed to overcome cache misses,
CPU/IO bottlenecks and distributed transaction costs, disk-based data
stores still serve as the primary persistence layer. In addition, with the
recent growth in multi-tenancy cloud applications and associated
security concerns, many organisations consider the trade-offs and
continue to require the fast and reliable transaction processing of disk-based
database systems as an available choice. For these
organizations, the only way of increasing throughput is by improving
the performance of disk-based concurrency control. This warrants a
hybrid database system with the ability to selectively apply
enhanced disk-based data management within the context of in-memory
systems, which would help improve overall throughput.
The general view is that in-memory systems substantially
outperform disk-based systems. We question this assumption and
examine how a modified variation of access invariance, which we call
enhanced memory access (EMA), can be used to allow very high
levels of concurrency in the pre-fetching of data in disk-based
systems. We demonstrate how this prefetching in disk-based systems
can yield close to in-memory performance, which paves the way for
improved hybrid database systems. This paper proposes a novel EMA
technique and presents a comparative study between disk-based EMA
systems and in-memory systems running on hardware configurations
of equivalent power in terms of the number of processors and their
speeds. The results of the experiments conducted clearly substantiate
that when used in conjunction with all concurrency control
mechanisms, EMA can increase the throughput of disk-based systems
to levels quite close to those achieved by in-memory systems. The
promising results of this work show that enhanced disk-based
systems can facilitate improved hybrid data management within the
broader context of in-memory systems.
Abstract: The purpose of this study is to examine the possible
link between employee and customer satisfaction. The service
provided by employees helps to build a good relationship with
customers and can help increase their loyalty. Published data on
job satisfaction and customer service indicators of banks were
gathered from relevant published works, which included data from
five different countries. The customer and employee satisfaction
scores of the different published works were transformed and
normalized to a scale of 1 to 100. The data were analyzed, and a
regression analysis of the two parameters was used to describe the
link between employee satisfaction and customer satisfaction.
Assuming that employee satisfaction has a significant influence on
customer service and the resulting customer satisfaction, the
reviewed data indicate that employee satisfaction contributes
significantly to the level of customer satisfaction in the banking
sector. There was a significant correlation between the two
parameters (Pearson correlation R2=0.52 P
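The normalization and regression steps can be sketched as follows; the satisfaction scores below are made-up placeholders, not the published data used in the study.

```python
# Sketch: rescale published satisfaction scores to 1-100, then fit a line.
import numpy as np

def to_percent_scale(scores, lo, hi):
    """Rescale raw scores from [lo, hi] to the 1-100 scale used in the study."""
    scores = np.asarray(scores, dtype=float)
    return 1 + 99 * (scores - lo) / (hi - lo)

# Hypothetical per-country pairs: employee (1-5 survey) and customer (0-100).
employee = to_percent_scale([3.1, 3.8, 4.2, 2.9, 3.5], lo=1, hi=5)
customer = to_percent_scale([62, 74, 81, 58, 70], lo=0, hi=100)

# Ordinary least squares: customer = slope * employee + intercept.
slope, intercept = np.polyfit(employee, customer, deg=1)
r = np.corrcoef(employee, customer)[0, 1]
print(f"slope={slope:.2f}  intercept={intercept:.2f}  R^2={r**2:.2f}")
```

Rescaling is linear, so it leaves the Pearson correlation unchanged; it only makes scores from different surveys comparable on one axis.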
Abstract: In this paper, we describe the Levenberg-Marquardt
(LM) algorithm for the identification and equalization of CDMA
signals received by an antenna array over communication channels.
The synthesis explains the digital separation and equalization of
signals after propagation through multipath channels generating intersymbol
interference (ISI). Exploiting the discrete transmitted data and three
diversities induced at the reception, the problem can be formulated
as the Block Component Decomposition (BCD) of a third-order
tensor, a new tensor decomposition generalizing the
PARAFAC decomposition. Optimizing the BCD by the
Levenberg-Marquardt method gives encouraging results compared to
the classical alternating least squares (ALS) algorithm. In the equalization
part, we use the Minimum Mean Square Error (MMSE) criterion to
complete the presented method. The simulation results obtained using
the LM algorithm are promising.
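The Levenberg-Marquardt step itself can be reproduced with SciPy on any least-squares residual. The exponential model below is a generic stand-in, since the abstract does not spell out the BCD factor structure.

```python
# Sketch: Levenberg-Marquardt fit of a nonlinear model (generic example,
# not the tensor Block Component Decomposition of the paper).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
t = np.linspace(0, 4, 40)
true_params = np.array([2.0, 1.3])    # amplitude, decay rate
y = true_params[0] * np.exp(-true_params[1] * t) + 0.01 * rng.normal(size=t.size)

def residuals(p):
    return p[0] * np.exp(-p[1] * t) - y

# method="lm" selects MINPACK's Levenberg-Marquardt implementation.
fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")
print(fit.x)   # close to [2.0, 1.3]
```

Unlike alternating least squares, LM updates all parameters jointly using a damped Gauss-Newton step, which is what typically buys its faster convergence on such problems.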
Abstract: Cost of governance in Nigeria has become a challenge
to development and concern to practitioners and scholars alike in the
field of business and social science research. In the 2010 national
budget of NGN4.6 trillion or USD28.75 billion, for instance, only a
paltry sum of NGN1.8 trillion or USD11.15 billion was earmarked for
capital expenditure. Similarly, in 2013, out of a total national budget
of NGN4.92 trillion or USD30.75 billion, only the sum of
NGN1.50 trillion or USD9.38 billion was voted for capital expenditure.
Therefore, based on data sourced from the Nigerian Office of
Statistics, the Central Bank of Nigeria Statistical Bulletin, and
the United Nations Development Programme, this study examined
the causes of the high cost of governance in Nigeria. It found that the
high cost of governance in the country is in the interest of the ruling
class, arising from their unethical behaviour – corrupt practices and
the poor management of public resources. As a result, the study
recommends intensifying the war against corruption and the
mismanagement of public resources by government officials as a
possible solution to the high cost of governance in Nigeria.
This could be achieved by strengthening the constitutional powers of
the various anti-corruption agencies in the area of arrest, investigation
and prosecution of offenders without the interference of the executive
arm of government either at the local, state or federal level.
Abstract: Validity, integrity, and impacts of the IT systems of
the US federal courts have been studied as part of the Human Rights
Alert-NGO (HRA) submission for the 2015 Universal Periodic
Review (UPR) of human rights in the United States by the Human
Rights Council (HRC) of the United Nations (UN). The current
report includes an overview of IT system analysis, data-mining and case
studies. System analysis and data-mining show: Development and
implementation with no lawful authority, servers of unverified
identity, invalidity in implementation of electronic signatures,
authentication instruments and procedures, authorities and
permissions; discrimination in access against the public and
unrepresented (pro se) parties and in favor of attorneys; widespread
publication of invalid judicial records and dockets, leading to their
false representation and false enforcement. A series of case studies
documents the impacts on individuals' human rights, on banking
regulation, and on international matters. Significance is discussed in
the context of various media and expert reports, which opine that
corruption of the US justice system today is unprecedented, and which
question whether the US Constitution was in fact suspended. Similar
findings were previously reported in IT systems of the State of
California and the State of Israel, which were incorporated, subject to
professional HRC staff review, into the UN UPR reports (2010 and
2013). Solutions are proposed, based on the principles of publicity of
the law and the separation of powers: reliance on US IT and legal
experts accountable to the legislative branch, enhanced
transparency, and ongoing vigilance by human rights and internet
activists. IT experts should assume more prominent civic duties in
safeguarding civil society in our era.
Abstract: In this study, performance analyses of the twenty-five
Coal-Fired Power Plants (CFPPs) used for electricity generation
are carried out through various Data Envelopment Analysis (DEA)
models. Three efficiency indices are defined and pursued. During the
calculation of the operational performance, energy and non-energy
variables are used as input, and net electricity produced is used as
desired output (Model-1). CO2 emitted to the environment is used as
the undesired output (Model-2) in the computation of the pure
environmental performance, while in Model-3 CO2 emissions are
considered as a detrimental input in the calculation of combined
operational and environmental performance. Empirical results show
that most of the plants operate in the increasing returns to scale region
and that the Mettur plant is efficient with regard to energy use and the environment.
The result also indicates that the undesirable output effect is
insignificant in the research sample. The present study will provide
clues to plant operators towards raising the operational and
environmental performance of CFPPs.
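A standard input-oriented CCR envelopment model, of the kind such DEA studies build on, can be written as one linear program per plant. The plant data below are invented placeholders, not the twenty-five CFPPs of the study.

```python
# Sketch: input-oriented CCR DEA efficiency scores via linear programming.
import numpy as np
from scipy.optimize import linprog

# Rows = plants (DMUs); columns = inputs (e.g. coal use, capacity); Y = output.
X = np.array([[100., 50.], [120., 60.], [90., 55.], [130., 40.]])  # inputs
Y = np.array([[80.], [85.], [88.], [70.]])                         # net electricity

def ccr_efficiency(k):
    n, m = X.shape          # n plants, m inputs
    s = Y.shape[1]          # number of outputs
    # Decision variables: [theta, lambda_1 .. lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  sum_j lambda_j * x_ij <= theta * x_ik
    A_in = np.hstack([-X[k].reshape(-1, 1), X.T])
    # Outputs: sum_j lambda_j * y_rj >= y_rk
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[k]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun          # efficiency score theta in (0, 1]

scores = [round(ccr_efficiency(k), 3) for k in range(len(X))]
print(scores)               # efficient plants score 1.0
```

Undesired outputs such as CO2 can then be handled either as an extra constrained quantity (Model-2) or folded in as a detrimental input column of X (Model-3), which is exactly the modelling choice the abstract contrasts.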
Abstract: One of the major difficulties introduced with wind
power penetration is the inherent uncertainty in production originating
from uncertain wind conditions. This uncertainty impacts many
different aspects of power system operation, especially the balancing
power requirements. For this reason, in power system development
planning, it is necessary to evaluate the potential uncertainty in future
wind power generation. For this purpose, simulation models are
required that reproduce the performance of wind power forecasts.
This paper presents wind power forecast error simulation models
based on stochastic process simulation. The proposed
models capture the most important statistical parameters recognized
in wind power forecast error time series. Furthermore, two distinct
models are presented based on data availability. The first model uses
wind speed measurements at potential or existing wind power plant
locations, while the second model uses the statistical distribution of wind
speeds.
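A minimal stochastic-process error simulator of the kind referred to above can be sketched with a first-order autoregressive process; the AR(1) form and the parameter values are our assumptions, not the paper's fitted model.

```python
# Sketch: autocorrelated wind power forecast error as an AR(1) process.
import numpy as np

def simulate_forecast_error(n_steps, phi=0.95, sigma=0.05, seed=0):
    """Simulate a wind power forecast error time series.

    phi   -- lag-1 autocorrelation of the hourly error (assumed)
    sigma -- stationary error standard deviation, per-unit of capacity (assumed)
    """
    rng = np.random.default_rng(seed)
    # Innovation variance chosen so the stationary std equals sigma.
    eps = rng.normal(scale=sigma * np.sqrt(1 - phi**2), size=n_steps)
    err = np.empty(n_steps)
    err[0] = rng.normal(scale=sigma)
    for t in range(1, n_steps):
        err[t] = phi * err[t - 1] + eps[t]
    return err

e = simulate_forecast_error(10_000)
print(f"std={e.std():.3f}  lag-1 autocorr={np.corrcoef(e[:-1], e[1:])[0, 1]:.2f}")
```

The two statistics printed (stationary spread and lag-1 persistence) are the kind of parameters such models are calibrated to reproduce from observed forecast error series.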
Abstract: It is difficult to study the effect of various variables on
cycle fitting through actual experiment. To overcome such difficulty,
the forward dynamics of a musculoskeletal model was applied to cycle
fitting in this study. The measured EMG data were compared with the
muscle activities of the musculoskeletal model through forward
dynamics. EMG data were measured from five cyclists with no
musculoskeletal diseases during three minutes of pedaling with a
constant load (150 W) and cadence (90 RPM). The muscles used for
the analysis were the Vastus Lateralis (VL), Tibialis Anterior (TA),
Bicep Femoris (BF), and Gastrocnemius Medial (GM). Pearson's
correlation coefficients of the muscle activity patterns, the peak timing
of the maximum muscle activities, and the total muscle activities were
calculated and compared. BIKE3D model of AnyBody (Anybodytech,
Denmark) was used for the musculoskeletal model simulation. The
comparisons of the actual experiments with the simulation results
showed significant correlations in the muscle activity patterns (VL:
0.789, TA: 0.503, BF: 0.468, GM: 0.670). The peak timings of the
maximum muscle activities were distributed at particular phases. The
total muscle activities were compared with the normalized muscle
activities, and the comparison showed about 10% difference in the VL
(+10%), TA (+9.7%), and BF (+10%), excluding the GM (+29.4%).
Thus, it can be concluded that the muscle activities of the model and
the experiment showed similar results. The results of this study indicate
that the simulation of a further improved musculoskeletal model could
be applied to cycle fitting.
Abstract: Alkylated silicon nanocrystals (C11-SiNCs) were
prepared successfully by galvanostatic etching of p-Si(100) wafers
followed by a thermal hydrosilation reaction of 1-undecene in
refluxing toluene in order to extract C11-SiNCs from porous silicon.
Erbium trichloride was added to alkylated SiNCs using a simple
mixing chemical route. To the best of our knowledge, this is the first
investigation on mixing SiNCs with erbium ions (III) by this
chemical method. The chemical characterization of C11-SiNCs and
their mixtures with Er3+(Er/C11-SiNCs) were carried out using X-ray
photoemission spectroscopy (XPS). The optical properties of C11-
SiNCs and their mixtures with Er3+ were investigated using Raman
spectroscopy and photoluminescence (PL). The erbium-mixed
alkylated SiNCs show an orange PL emission peak at around 595
nm that originates from the radiative recombination of Si. The Er/C11-SiNCs
mixture also exhibits a weak PL emission peak at 1536 nm that
originates from the intra-4f transition in erbium ions (Er3+). The PL
peak of Si in the Er/C11-SiNCs mixture increases in intensity by up to
three times compared to pure C11-SiNCs. The collected data
suggest that this chemical mixing route leads to a transfer of
energy from the erbium ions to the alkylated SiNCs, rather than the reverse.
Abstract: A simulation-based VLSI implementation of the
FELICS (Fast Efficient Lossless Image Compression System)
algorithm is proposed to provide lossless image compression,
implemented in simulation-oriented VLSI (Very Large Scale
Integration) design. The aims are to analyze the performance of
lossless image compression, to reduce the image size without losing
image quality, and to implement the FELICS algorithm in VLSI.
The FELICS algorithm employs a simplified adjusted binary code for
image compression; the compressed image is converted into
pixels and then implemented in the VLSI domain. These parameters
are used to achieve high processing speed and to minimize area and
power. The simplified adjusted binary code reduces the number of
arithmetic operations and achieves high processing speed. A color
difference preprocessing step is also proposed to improve coding
efficiency with simple arithmetic operations. The VLSI-based FELICS
algorithm provides an effective hardware architecture with a regular
pipelined data flow exhibiting four-stage parallelism.
With two-level parallelism, consecutive pixels can be classified into
even and odd samples, and an individual hardware engine is
dedicated to each. This method can be further enhanced by
multilevel parallelism.
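The simplified adjusted binary code is essentially truncated binary with the shorter codewords assigned to the middle of the range, where in-range prediction residuals are most probable in FELICS. The sketch below illustrates that idea; the exact rotation offset is our simplifying assumption, not taken from the FELICS specification.

```python
# Sketch: FELICS-style "adjusted binary" coding (truncated binary with the
# short codewords rotated onto the middle of the range).
def adjusted_binary(x, n):
    """Return the codeword for x in [0, n-1] (n >= 2) as a bit string."""
    assert n >= 2 and 0 <= x < n
    k = (n - 1).bit_length()      # ceil(log2 n): longest codeword length
    u = (1 << k) - n              # number of shorter, (k-1)-bit codewords
    shift = (n - u) // 2          # assumed rotation: central values go short
    x = (x - shift) % n
    if x < u:
        return format(x, f"0{k - 1}b")   # short codeword
    return format(x + u, f"0{k}b")       # long codeword

# For n = 5: three 2-bit codes for central values, two 3-bit codes outside.
print([adjusted_binary(x, 5) for x in range(5)])
# -> ['111', '00', '01', '10', '110']
```

The resulting code is prefix-free, and avoiding per-symbol arithmetic beyond shifts and comparisons is what makes it attractive for the pipelined hardware described above.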
Abstract: This paper describes a novel application of Fiber
Bragg Grating (FBG) sensors in the assessment of human postural
stability and balance on an unstable platform. In this work, FBG
sensor Stability Analyzing Device (FBGSAD) is developed for
measurement of plantar strain to assess the postural stability of
subjects on unstable platforms during different stances in eyes open
and eyes closed conditions on a rocker board. The studies are
validated by comparing the Centre of Gravity (CG) variations
measured on the lumbar vertebra of subjects using a commercial
accelerometer. The results obtained from the developed FBGSAD
show qualitative similarity with the data recorded by the commercial
accelerometer. The advantage of the FBGSAD is that it simultaneously
measures the plantar strain distribution and the postural stability of the
subject, along with its inherent benefits such as no requirement for an
energizing voltage at the sensor, electromagnetic immunity and a
simple design, which suit its applicability in biomechanical
applications. The developed FBGSAD can serve as a tool/yardstick to
mitigate space motion sickness, identify individuals who are
susceptible to falls and to qualify subjects for balance and stability,
which are important factors in the selection of certain unique
professionals such as aircraft pilots, astronauts, cosmonauts etc.
Abstract: Strong anion exchange resins bearing QN+OH- groups have the
potential to be developed and employed as heterogeneous catalysts for
transesterification, as they are chemically stable to leaching of the
functional group. Nine different SIERs (SIER1-9) with QN+OH- were
prepared by suspension polymerization of vinylbenzyl chloride-divinylbenzene
(VBC-DVB) copolymers in the presence of n-heptane
(pore-forming agent). The amine group was successfully grafted into
the polymeric resin beads through functionalization with
trimethylamine. These SIERs were then used as a catalyst for the
transesterification of triacetin with methanol. A set of differential
equations that represents the Langmuir-Hinshelwood-Hougen-
Watson (LHHW) and Eley-Rideal (ER) models for the
transesterification reaction was developed. These kinetic models of
LHHW and ER were fitted to the experimental data. Overall, the
synthesized ion exchange resin-catalyzed reaction was better described
by the Eley-Rideal model than by the LHHW model,
with sums of squared errors (SSE) of 0.742 and 0.996, respectively.
Abstract: Currently, thorium fuel has attracted particular attention
because of its greater proliferation resistance compared with long half-life
alpha-emitting minor actinides, its breeding capability in fast and
thermal neutron fluxes, and its mono-isotopic natural abundance. In
recent years, the efficiency of minor actinide burn-up in PWRs has been
investigated. Hence, a minor actinide-containing thorium-based fuel
matrix can serve both proliferation resistance and nuclear waste
depletion aims. In the present work, the minor actinide depletion rate
in a CANDU-type nuclear core modeled using the MCNP code has been
investigated. The effects of the minor actinide load, mixed into the
thorium fuel matrix, on the core neutronics have been studied by
comparing the presence and absence of the minor actinide component
in the fuel matrix. The depletion rate of minor actinides in the
MA-containing fuel has been calculated for different power loads.
According to the computational data, minor actinide loading in the
modeled core results in more negative reactivity coefficients. The
MA-containing fuel also achieves a lower radial peaking factor in the
modeled core. The computational results showed that 140 kg of the
464 kg initial minor actinide load was depleted during a 6-year burn-up
at 10 MW power.
Abstract: Factors affecting construction unit cost vary
depending on a country’s political, economic, social and
technological inclinations. Factors affecting construction costs have
been studied from various perspectives. Analysis of cost factors
requires an appreciation of a country’s practices. Identified cost
factors provide an indication of a country’s construction economic
strata. The purpose of this paper is to identify the essential factors
that affect unit cost estimation and their breakdown using artificial
neural networks. Twenty-five (25) identified cost factors in road
construction were subjected to a questionnaire survey and, employing
SPSS factor analysis, the factors were reduced to eight. The eight factors
were analysed using a neural network (NN) to determine the
proportionate breakdown of the cost factors in a given construction
unit rate. The NN predicted that the political environment accounted
for 44% of the unit rate, followed by contractor capacity at 22%, and
financial delays, project feasibility and overhead & profit each at 11%. Project
location, material availability and corruption perception index had
minimal impact on the unit cost from the training data provided.
Quantified cost factors can be incorporated in unit cost estimation
models (UCEM) to produce more accurate estimates. This can create
improvements in the cost estimation of infrastructure projects and
establish a benchmark standard to assist the process of aligning
work practices and training new staff, permitting the ongoing
development of best practices in cost estimation to become more
effective.
Abstract: In this study, an attempt has been made to investigate the
relationship, specifically the causal relation, between the fund unit prices
of Islamic equity unit trust funds, as measured by fund NAV, and
selected macroeconomic variables of the Malaysian economy, using the
VECM causality test and the Granger causality test. Monthly data
from January 2006 to December 2012 have been used for all variables. The
findings of the study show that the industrial production index,
political elections and the financial crisis are the only variables having a
unidirectional causal relationship with the fund unit price, while the
global oil price has a bidirectional causality with fund NAV.
Thus, it is concluded that the equity unit trust fund industry in
Malaysia is an inefficient market with respect to the industrial
production index, global oil prices, political elections and the financial
crisis. However, the market is approaching informational
efficiency at least with respect to four macroeconomic variables: the
treasury bill rate, money supply, foreign exchange rate, and
corruption index.
Abstract: The thermal conductivity of a fluid can be
significantly enhanced by dispersing nano-sized particles in it, and
the resultant fluid is termed as "nanofluid". A theoretical model for
estimating the thermal conductivity of a nanofluid has been proposed
here. It is based on the mechanism that evenly dispersed
nanoparticles within a nanofluid undergo Brownian motion in course
of which the nanoparticles repeatedly collide with the heat source.
During each collision a rapid heat transfer occurs owing to the solid-solid
contact. Molecular dynamics (MD) simulation of the collision
of nanoparticles with the heat source has shown that there is a pulse-like
pick-up of heat by the nanoparticles within 20-100 ps, the extent
of which depends not only on thermal conductivity of the
nanoparticles, but also on the elastic and other physical properties of
the nanoparticle. After the collision the nanoparticles undergo
Brownian motion in the base fluid and release the excess heat to the
surrounding base fluid within 2-10 ms. The Brownian motion and
associated temperature variation of the nanoparticles have been
modeled by stochastic analysis. Repeated occurrence of these events
by the suspended nanoparticles significantly contributes to the
characteristic thermal conductivity of the nanofluids, which has been
estimated by the present model for an ethylene glycol based nanofluid
containing Cu nanoparticles of size ranging from 8 to 20 nm, with a
Gaussian size distribution. The prediction of the present model has
shown a reasonable agreement with the experimental data available
in literature.
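The pickup-and-release cycle described above can be caricatured in a few lines: a particle takes a heat pulse at each collision with the source and sheds it exponentially while diffusing. Every parameter value here is an illustrative assumption, not one of the paper's MD-derived numbers.

```python
# Toy sketch of the collision heat pickup and Brownian release cycle.
import numpy as np

rng = np.random.default_rng(3)

tau_release = 5e-3            # excess-heat relaxation time, s (order 2-10 ms)
dt = 1e-4                     # time step, s
steps = 200_000               # 20 s of simulated time
collision_prob = dt / 0.05    # assumed mean time between source collisions: 50 ms
pulse = 1.0                   # excess temperature per collision (arbitrary units)

excess = 0.0
history = np.empty(steps)
for i in range(steps):
    if rng.random() < collision_prob:
        excess += pulse                      # pulse-like pickup at the heat source
    excess *= np.exp(-dt / tau_release)      # exponential release to the base fluid
    history[i] = excess

# Time-averaged excess temperature carried by the particle; in this toy model
# it tracks the Brownian-collision contribution to heat transport.
print(f"mean excess = {history.mean():.3f}")
```

The steady mean is roughly (collision rate) x (pulse) x (relaxation time), which shows why both the picosecond pickup and the millisecond release enter the model's conductivity estimate.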
Abstract: Developing young people’s employability is a key
policy issue for ensuring their successful transition to the labour
market and their access to career oriented employment. The youths of
today irrespective of their gender need to acquire the knowledge,
skills and attitudes that will enable them to create or find jobs as well
as cope with unpredictable labour market changes throughout their
working lives. In a study carried out to determine the influence of
gender on job-competencies requirements of chemical-based
industries and undergraduate-competencies acquisition by chemists
working in the industries, all chemistry graduates working in twenty
(20) chemical-based industries that were randomly selected from six
sectors of chemical-based industries in Lagos and Ogun States of
Nigeria were administered a job-competencies-required and
undergraduate-competencies-acquired assessment questionnaire. The
data were analysed using means and independent sample t-test. The
findings revealed that the population of female chemists working in
chemical-based industries is low compared with the number of male
chemists; furthermore, job-competencies requirements were found not
to be gender biased, while there is no significant difference in
undergraduate-competencies acquisition of male and female
chemists. This suggests that females should be given the same
opportunity of employment in chemical-based industries as their male
counterparts. The study also revealed the level of acquisition of
undergraduate competencies as related to the needs of chemical-based
industries.