Abstract: In general, class complexity is measured based on one of several factors such as Lines of Code (LOC), Function Points (FP), Number of Methods (NOM), and Number of Attributes (NOA). Researchers have developed several new techniques, methods, and metrics based on different factors for calculating the complexity of a class in Object-Oriented (OO) software. Earlier, Arockiam et al. proposed a new complexity measure named Extended Weighted Class Complexity (EWCC), an extension of the Weighted Class Complexity proposed by Mishra et al. EWCC is the sum of the cognitive weights of the attributes and methods of a class and of its derived classes. In EWCC, the cognitive weight of each attribute is taken to be 1. The main problem with the EWCC metric is that every attribute carries the same weight, whereas, in general, the cognitive load in understanding different types of attributes is not the same. We therefore propose a new metric named Attribute Weighted Class Complexity (AWCC). In AWCC, cognitive weights are assigned to attributes based on the effort needed to understand their data types. Case studies and experiments show that the proposed metric is a better measure of the complexity of classes with attributes.
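The AWCC-style computation described above can be sketched as follows. The weight table and the example values below are hypothetical placeholders for illustration, not the weights proposed in the paper:

```python
# Assumed cognitive weights per attribute data type: primitive types
# are cheapest to understand, composite/user-defined types cost more.
# (Illustrative values only, not the paper's.)
ATTRIBUTE_WEIGHTS = {"int": 1, "float": 1, "string": 2, "array": 3, "object": 4}

def awcc(attribute_types, method_weights, derived_class_complexities=()):
    """Sum attribute weights, method cognitive weights, and the
    complexities of derived classes."""
    attr_cost = sum(ATTRIBUTE_WEIGHTS[t] for t in attribute_types)
    return attr_cost + sum(method_weights) + sum(derived_class_complexities)

# A class with two ints and one array attribute, plus two methods of
# cognitive weights 2 and 3: 1 + 1 + 3 + 2 + 3 = 10
print(awcc(["int", "int", "array"], [2, 3]))  # -> 10
```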
Abstract: The recurring problem of rural and urban poverty in Nigeria, resulting from the lack of sustainable livelihood activities caused by a non-diversified economy, necessitated this study. One hundred snail farmers were randomly selected in the Akure North and Akure South Local Government areas of Ondo State, Southwest Nigeria, where snail farming is widely practised. Data were collected through questionnaire administration and on-site observation of farms. The data obtained were subjected to descriptive statistics, Student's t-test, and regression analysis. The cost-benefit ratio (CBR) and rate of return on investment (RORI) were calculated to determine the poverty alleviation potential of snail farming in the study areas. Although snail farming was profitable and viable, its returns were below the poverty line. With time, greater knowledge of farming practices, and more people taking up snail production, its poverty alleviation and reduction potential will increase.
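The two profitability measures in the study can be computed with their standard definitions, sketched below. The revenue and cost figures are hypothetical, not the study's data:

```python
def cost_benefit_ratio(total_revenue, total_cost):
    # CBR > 1 indicates the enterprise returns more than it costs.
    return total_revenue / total_cost

def rate_of_return_on_investment(total_revenue, total_cost):
    # RORI: net return per unit of investment, expressed in percent.
    return (total_revenue - total_cost) / total_cost * 100

# Hypothetical figures for one farming cycle (illustration only):
revenue, cost = 150_000.0, 100_000.0
print(cost_benefit_ratio(revenue, cost))            # -> 1.5
print(rate_of_return_on_investment(revenue, cost))  # -> 50.0
```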
Abstract: Zero-inflated models are commonly used in modeling count data with excess zeros, where the excess zeros may be structural or may occur by chance. Such data are found in various disciplines such as finance, insurance, biomedicine, econometrics, ecology, and the health sciences, including sexual health and dental epidemiology. The most popular zero-inflated models used by researchers are the zero-inflated Poisson and zero-inflated negative binomial models. In addition, zero-inflated generalized Poisson and zero-inflated double Poisson models are also discussed in the literature. Recently, the zero-inflated inverse trinomial and zero-inflated strict arcsine models have been advocated and shown to serve as alternative models for overdispersed count data caused by excessive zeros and unobserved heterogeneity. The purpose of this paper is to review the related literature and provide a variety of examples from different disciplines of the application of zero-inflated models. Different model selection methods used in model comparison are also discussed.
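For reference, the simplest model in this family, the zero-inflated Poisson (ZIP), mixes a point mass at zero (the structural zeros) with an ordinary Poisson count. A minimal sketch of its probability mass function:

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson: with probability pi the count is a
    structural zero; otherwise it follows Poisson(lam)."""
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    if k == 0:
        return pi + (1 - pi) * poisson       # structural + chance zeros
    return (1 - pi) * poisson

# P(0) exceeds the plain Poisson probability because of inflation:
print(round(zip_pmf(0, 2.0, 0.3), 4))  # -> 0.3947
```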
Abstract: Commercial nano-silver nanocomposite food packaging containers were characterised using scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX). The presence of nanoparticles consistent with the incorporation of 1% nano-silver (Ag) and 0.1% titanium dioxide (TiO2) nanoparticles into the polymeric material of the food containers was confirmed. Both nanomaterials used in this type of packaging appear to be embedded in a layered configuration within the bulk polymer. The dimensions of the incorporated nanoparticles were investigated using X-ray diffraction (XRD) and determined by calculation using the Scherrer formula; they were consistent with spherical Ag and TiO2 nanoparticles in the size range 20-70 nm. Antimicrobial assessment of the nanocomposite containers was also performed, and the results confirm the antimicrobial activity of the Ag and TiO2 nanoparticles in food packaging containers. Migration assessments were performed in a wide range of food matrices to determine the migration of nanoparticles from the packages. The analysis was based upon the relevant European safety Directives and involved the application of inductively coupled plasma mass spectrometry (ICP-MS) to identify the range of migration risk. The data indicate insignificant levels of migration of Ag and TiO2 nanoparticles into the selected food matrices.
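The Scherrer formula used to estimate crystallite size from the XRD peak broadening is D = K·λ / (β·cos θ), with shape factor K (≈0.9), wavelength λ, peak FWHM β in radians, and Bragg angle θ. A minimal sketch with a hypothetical peak (not the paper's measured data):

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)); beta is
    the peak FWHM and theta is half the diffraction angle 2theta."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2)
    return K * wavelength_nm / (beta * math.cos(theta))

# Hypothetical Ag peak: Cu K-alpha radiation (0.15406 nm), 0.2 deg
# FWHM at 2theta = 38.1 deg; result falls in the reported 20-70 nm range.
print(round(scherrer_size(0.15406, 0.2, 38.1), 1))
```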
Abstract: This paper presents a review of past studies concerning mathematical models for rescheduling passenger railway services as part of delay management during railway disruptions. Most of the mathematical models highlighted aim to minimize the service delays experienced by passengers during disruptions. Integer programming (IP) and mixed-integer programming (MIP) models are critically discussed, focusing on the modeling approach, decision variables, sets, and parameters. Some of the models have been tested on real-life data from railway companies worldwide, while a few have been validated on fictitious data. Based on the selected literature on train rescheduling, this paper assists researchers in model formulation by providing a comprehensive analysis of model building. This analysis can support the development of new rescheduling strategies or enhance existing rescheduling models, making them more powerful or more applicable with shorter computing times.
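The core decision in many of the reviewed delay-management models is binary: should a connecting train wait for a delayed feeder or depart on time? The toy sketch below enumerates those binary decisions to minimize total passenger delay; the passenger counts, delays, and headways are hypothetical, and real IP/MIP models add many more constraints:

```python
from itertools import product

# Each connecting train either waits for a delayed feeder (all its
# passengers then depart late) or departs on time (transfer passengers
# miss it and wait a full headway for the next service).
trains = [
    # (onboard passengers, transfer passengers, feeder delay, headway)
    (200, 30, 5, 30),
    (150, 50, 5, 25),
]

def total_delay(decisions):
    cost = 0
    for wait, (onboard, transfer, delay, headway) in zip(decisions, trains):
        if wait:
            cost += (onboard + transfer) * delay  # everyone departs late
        else:
            cost += transfer * headway            # transfers miss the train
    return cost

# Brute-force search over all wait/depart combinations:
best = min(product([0, 1], repeat=len(trains)), key=total_delay)
print(best, total_delay(best))  # -> (0, 1) 1900
```

Real instances replace this enumeration with an IP/MIP solver, since the decision space grows exponentially with the number of trains.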
Abstract: With the proliferation of mobile computing technology, mobile learning (m-learning) will play a vital role in the rapidly growing electronic learning market. However, acceptance of m-learning by individuals is critical to the successful implementation of m-learning systems. Thus, there is a need to research the factors that affect users' intention to use m-learning. Based on an updated information system (IS) success model, data collected from 350 respondents in Taiwan were tested against the research model using the structural equation modeling approach. The data collected by questionnaire were analyzed to check the validity of the constructs. Then hypotheses describing the relationships between the identified constructs and users' satisfaction were formulated and tested.
Abstract: This paper describes a method to improve the robustness of a face recognition system based on the combination of two compensating classifiers. The face images are preprocessed by appearance-based statistical approaches, namely Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The LDA features of the face image are taken as the input of a Radial Basis Function Network (RBFN). The proposed approach has been tested on the ORL database. The experimental results show that the LDA+RBFN algorithm achieved a recognition rate of 93.5%.
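For orientation, an RBF network scores an input by a weighted sum of Gaussian basis functions centred on prototypes. The sketch below uses hypothetical centres, width, and readout weights; in the paper the inputs would be LDA features of a face image:

```python
import math

def rbf_output(x, centres, sigma, weights):
    """One Gaussian activation per centre, then a weighted sum per
    output class: score_c = sum_j w[c][j] * exp(-||x - c_j||^2 / 2s^2)."""
    activations = [
        math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2 * sigma**2))
        for c in centres
    ]
    return [sum(w[j] * activations[j] for j in range(len(activations)))
            for w in weights]

centres = [[0.0, 0.0], [1.0, 1.0]]   # one prototype per class (toy)
weights = [[1.0, 0.0], [0.0, 1.0]]   # identity readout: one unit per class
scores = rbf_output([0.1, 0.0], centres, sigma=0.5, weights=weights)
print(scores.index(max(scores)))  # -> 0 (input is closer to the first centre)
```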
Abstract: A full six-degrees-of-freedom (6-DOF) flight dynamics model is proposed for the accurate prediction of short- and long-range trajectories of high-spin and fin-stabilized projectiles through atmospheric flight to the final impact point. The projectile is assumed to be rigid (non-flexible) and rotationally symmetric about its spin axis, launched at low and high pitch angles. The mathematical model is based on the full equations of motion set up in the no-roll body reference frame and is integrated numerically from given initial conditions at the firing site. The projectile's maneuvering motion depends on the most significant force and moment variations, in addition to wind and gravity. The computational flight analysis takes into consideration Mach number and total angle of attack effects by means of variable aerodynamic coefficients. For the purposes of the present work, linear interpolation has been applied to the tabulated database of McCoy's book. The developed computational method gives satisfactory agreement with published data from verified experiments and computational codes on atmospheric projectile trajectory analysis for various initial firing conditions.
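The aerodynamic coefficient lookup described above reduces to piecewise-linear interpolation over a table indexed by Mach number. A minimal sketch, with a made-up drag-coefficient table standing in for the tabulated data from McCoy's book:

```python
from bisect import bisect_right

# Hypothetical drag-coefficient table versus Mach number (values are
# illustrative only; the paper interpolates McCoy's tabulated data).
MACH = [0.5, 0.8, 1.0, 1.2, 2.0]
CD   = [0.12, 0.14, 0.38, 0.34, 0.26]

def interp_cd(mach):
    """Piecewise-linear interpolation, clamped at the table ends."""
    if mach <= MACH[0]:
        return CD[0]
    if mach >= MACH[-1]:
        return CD[-1]
    i = bisect_right(MACH, mach) - 1
    t = (mach - MACH[i]) / (MACH[i + 1] - MACH[i])
    return CD[i] + t * (CD[i + 1] - CD[i])

print(round(interp_cd(0.9), 3))  # -> 0.26 (halfway between 0.14 and 0.38)
```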
Abstract: An electrocardiogram (ECG) data compression algorithm is needed that reduces the amount of data to be transmitted, stored, and analyzed without losing the clinical information content. A wavelet ECG data codec based on the Set Partitioning In Hierarchical Trees (SPIHT) compression algorithm is proposed in this paper. The SPIHT algorithm has achieved notable success in still image coding. We modified the algorithm for the one-dimensional (1-D) case and applied it to the compression of ECG data. With this compression method, a small percentage root-mean-square difference (PRD) and a high compression ratio are achieved with low implementation complexity. Experiments on selected records from the MIT-BIH arrhythmia database revealed that the proposed codec is significantly more efficient in compression and computation than previously proposed ECG compression schemes. Compression ratios of up to 48:1 for ECG signals yield results acceptable for visual inspection.
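The two figures of merit above are easy to state precisely. One common PRD definition (variants that subtract the signal mean also exist) and the compression ratio can be sketched as follows, on a toy signal rather than an MIT-BIH record:

```python
import math

def prd(original, reconstructed):
    """Percentage root-mean-square difference between the original
    and reconstructed signals (no mean subtraction in this variant)."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100.0 * math.sqrt(num / den)

def compression_ratio(original_bits, compressed_bits):
    return original_bits / compressed_bits

x  = [1.0, 2.0, 3.0, 4.0]   # toy "original" samples
xr = [1.1, 1.9, 3.0, 4.0]   # toy "reconstructed" samples
print(round(prd(x, xr), 2))
print(compression_ratio(11 * 3600, 825))  # -> 48.0, i.e. 48:1
```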
Abstract: It is a challenge to provide a wide range of queries in database query systems for small mobile devices such as PDAs and cell phones. Currently, due to the physical and resource limitations of these devices, most reported database querying systems developed for them offer only a small set of predetermined queries that users can pose. This limitation can be resolved by allowing free-form queries to be entered on the devices. Hence, a query language that does not restrict the combination of query terms entered by users is proposed. This paper presents the free-form query language and the method used to translate free-form queries into their equivalent SQL statements.
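The abstract does not give the translation method itself, but the idea of mapping unrestricted combinations of query terms to SQL can be sketched minimally. The table name, term dictionary, and pairing strategy below are hypothetical, not the paper's:

```python
# Hypothetical schema vocabulary: free-form term -> column name.
COLUMN_TERMS = {"name": "name", "price": "price", "city": "city"}
TABLE = "products"

def to_sql(free_form):
    """Map 'column value' term pairs, in any order and combination,
    to a SQL WHERE clause."""
    tokens = free_form.lower().split()
    conditions, i = [], 0
    while i < len(tokens):
        if tokens[i] in COLUMN_TERMS and i + 1 < len(tokens):
            conditions.append(f"{COLUMN_TERMS[tokens[i]]} = '{tokens[i+1]}'")
            i += 2
        else:
            i += 1  # skip terms that match no known column
    where = " AND ".join(conditions)
    return f"SELECT * FROM {TABLE}" + (f" WHERE {where}" if where else "")

print(to_sql("city Akure price 100"))
# SELECT * FROM products WHERE city = 'akure' AND price = '100'
```

A production translator would also need quoting/escaping (or parameterized queries) to avoid SQL injection.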
Abstract: Gradual patterns have been studied for many years, as they contain precious information. They have been integrated into many expert systems and rule-based systems, for instance to reason on knowledge such as “the greater the number of turns, the greater the number of car crashes”. In many cases, this knowledge has been treated as a rule: “the greater the number of turns → the greater the number of car crashes”. Historically, work has thus focused on the representation of such rules, studying how implication could be defined, especially fuzzy implication. These rules were defined by experts who were in charge of describing the systems they were working on so that they could operate automatically. More recently, approaches have been proposed to mine databases for the automatic discovery of such knowledge. Several approaches have been studied, the main scientific questions being how to determine what a relevant gradual pattern is, and how to discover such patterns as efficiently as possible (in terms of both memory and CPU usage). However, in some cases end-users are not interested in raw, low-level knowledge, but rather in trends. Moreover, it may be the case that no relevant pattern can be discovered at a low level of granularity (e.g. city), whereas some can be discovered at a higher level (e.g. county). In this paper, we thus extend gradual pattern approaches to consider multi-level gradual patterns. For this purpose, we consider two aggregation policies, namely horizontal and vertical.
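One common way to measure how well a dataset supports a pattern like “the greater A, the greater B” is the fraction of concordant object pairs; other support definitions exist in the gradual-pattern literature, and the data below are hypothetical:

```python
from itertools import combinations

def gradual_support(values_a, values_b):
    """Fraction of object pairs ordered the same way on both
    attributes (concordant pairs)."""
    pairs = list(combinations(range(len(values_a)), 2))
    concordant = sum(
        1 for i, j in pairs
        if (values_a[i] - values_a[j]) * (values_b[i] - values_b[j]) > 0
    )
    return concordant / len(pairs)

# Hypothetical city-level data: number of turns vs. number of crashes.
turns   = [10, 20, 30, 40]
crashes = [2, 5, 4, 9]
print(round(gradual_support(turns, crashes), 3))  # -> 0.833 (5 of 6 pairs)
```

Aggregating the rows to a coarser level (e.g. from city to county) before computing this support is what the multi-level extension above is about.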
Abstract: From a planning point of view, modeling mode choice is essential, owing to the massive costs incurred in transportation systems. Intercity travellers in Libya have distinct features compared with travellers from other countries, including cultural and socioeconomic factors. Consequently, the goal of this study is to characterize intercity travel behavior using disaggregate models, in order to project the demand for nation-level intercity travel in Libya. A Multinomial Logit Model covering all intercity trips was formulated to examine national-level intercity transportation in Libya. The model was calibrated using nationwide revealed preference (RP) and stated preference (SP) surveys, and was developed for different purposes of intercity trips (work, social, and recreational). The model parameters were estimated using the maximum likelihood method. The data needed for model development were obtained from all major intercity corridors in Libya, with a final sample size of 1300 interviews. About two-thirds of these data were used for model calibration, and the remainder for model validation. This study, the first of its kind in Libya, investigates intercity travelers' mode-choice behavior. The intercity travel mode-choice model was successfully calibrated and validated. The outcomes indicate that the overall model is effective and yields high estimation precision. The proposed model is beneficial because it is responsive to many variables and can be employed to determine the impact of changes in various characteristics on the demand for travel modes. The model estimates may also be of value to planners, who can estimate choice probabilities for the various modes and determine the impact of specific policy changes on the demand for intercity travel.
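The Multinomial Logit Model assigns each mode i a choice probability P_i = exp(V_i) / Σ_j exp(V_j), where V_i is the systematic utility of mode i. A minimal sketch with hypothetical utilities (the study's estimated coefficients are not given in the abstract):

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit choice probabilities:
    P_i = exp(V_i) / sum_j exp(V_j)."""
    exps = [math.exp(v) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical systematic utilities for car, bus, and air on one
# intercity corridor (illustrative values only):
probs = mnl_probabilities([1.2, 0.8, -0.5])
print([round(p, 3) for p in probs])
```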
Abstract: Sediment formation and transport along a river course are important hydraulic considerations in river engineering. Their impact on river morphology on the one hand, and their importance in the design and construction of hydraulic structures on the other, have attracted the attention of experts in arid and semi-arid regions. Under certain conditions, where the momentum energy of the flow reaches a specific rate, sediment materials start to be transported with the flow. This is usually analyzed in two categories: suspended load and bed load materials. Sedimentation along the waterways and the conveyance of vast volumes of material into canal networks can influence water abstraction at the intake structures, posing a serious threat to operational sustainability and water delivery performance in the canal networks. The situation is serious where ineffective watershed management (poor vegetation cover in the water basin) is the underlying cause of the soil erosion that feeds material into the waterways, which in turn necessitates comprehensive study. The present paper provides an analytical investigation of the sedimentation process in the waterways on the one hand, and an estimation of the sediment load transported into the lined canals using the SHARC software on the other. To this end, the paper focuses on a comparative analysis of the hydraulic behavior of the Sabilli main canal, which feeds the pumping station, and that of the Western canal in the Greater Dezful region, in order to identify the factors affecting sedimentation and ways of mitigating their impact on water abstraction in the canal systems. The method involved the use of observational data available at the Dezful Dastmashoon hydrometric station along a 6 km waterway of the Sabilli main canal, with the SHARC software used to estimate the suspended load concentration and bed load materials.
Results showed the transport of a significant volume of sediment load from the waterways into the canal system, which is assumed to have arisen from the absence of a settling basin on the one hand and the gravity flow on the other, and which has caused serious challenges. This is contrary to what occurs in the Sabilli canal, where the design feature incorporating a settling basin just before the pumping station is the major reason for the reduced sediment load transported into the canal system. The results also showed that modifying the present design by constructing a settling basin just upstream of the Western intake structure can considerably reduce the entry of sediment material into the canal system. Not only can this contribute to the sustainability of the hydraulic structures, it can also improve the operational performance of the water conveyance and distribution system, all of which are prerequisites for a reliable and equitable water delivery regime for the command area.
Abstract: Rapid prototyping (RP) techniques are a group of advanced manufacturing processes that can produce custom-made objects directly from computer data such as Computer-Aided Design (CAD), Computed Tomography (CT), and Magnetic Resonance Imaging (MRI) data. Using RP fabrication techniques, constructs with controllable and complex internal architecture and appropriate mechanical properties can be achieved. One attractive and promising application of RP techniques is the fabrication of tissue engineering (TE) scaffolds. A tissue engineering scaffold is a 3D construct that acts as a template for tissue regeneration. Although several conventional techniques, such as solvent casting and gas foaming, are used in scaffold fabrication, these processes yield scaffolds with poor interconnectivity and uncontrollable porosity. RP techniques have therefore become the best alternative methods for fabricating TE scaffolds. This paper reviews the current state of the art in tissue engineering scaffold fabrication using advanced RP processes, as well as the current limitations and future trends of RP scaffold fabrication techniques.
Abstract: This paper examines the impact of information and communication technology (ICT) usage, internal relationships, supplier-retailer relationships, logistics services, and inventory management on convenience store suppliers' performance. Data were collected from 275 convenience store managers in Malaysia using a questionnaire. The multiple linear regression results indicate that inventory management, the supplier-retailer relationship, logistics services, and the internal relationship are predictors of supplier performance as perceived by convenience store managers. However, ICT usage is not a predictor of supplier performance. The study focuses only on convenience stores and petrol station convenience stores, and concentrates only on managers. The results provide insights to suppliers who serve convenience stores, and possibly similar retail formats, on factors to consider in improving their service to retailers. The results also provide insights to government, in its aspiration to improve the business operations of convenience stores, on ways to enhance the adoption of ICT by retailers and suppliers.
Abstract: A wide spectrum of systems requires reliable personal recognition schemes to either confirm or determine the identity of an individual. This paper considers multimodal biometric systems and their applicability to access control, authentication, and security applications. Strategies for feature extraction and sensor fusion are considered and contrasted. Issues related to performance assessment, deployment, and standardization are discussed. Finally, future directions of biometric systems development are outlined.
Abstract: In this article, a clustering-based technique is developed and implemented for short-term load forecasting. The formulation uses the Mean Absolute Percentage Error (MAPE) as the objective function, with the data matrix and cluster size as optimization variables. The designed model uses two temperature variables. It is compared with a six-input Radial Basis Function Neural Network (RBFNN) and a Fuzzy Inference Neural Network (FINN) on data from the same system over the same time period. The fuzzy inference system has the network structure and training procedure of a neural network, and initially creates a rule base from existing historical load data. It is observed that the proposed clustering-based model gives better forecasting accuracy than the other two methods. Test results also indicate that the RBFNN can forecast future loads with accuracy comparable to that of the proposed method, whereas the training time required by the FINN is much less.
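The MAPE objective used in the formulation above has a standard definition, sketched below on toy hourly loads (hypothetical values, not the system data used in the article):

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent:
    (100 / n) * sum(|a_i - f_i| / |a_i|)."""
    return 100.0 / len(actual) * sum(
        abs(a - f) / abs(a) for a, f in zip(actual, forecast)
    )

# Toy hourly loads (MW) and corresponding forecasts:
actual   = [100.0, 120.0, 110.0]
forecast = [ 98.0, 126.0, 110.0]
print(round(mape(actual, forecast), 3))  # -> 2.333
```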
Abstract: An ECG analysis method was developed using ROC analysis of premature ventricular contraction (PVC) detection algorithms. ECG signals from the MIT-BIH arrhythmia database were analyzed in MATLAB. First, the baseline was removed by a median filter to preprocess the ECG signal. R peaks were detected for the ECG analysis method, and a normal VCG was extracted for the VCG analysis method. Four PVC detection algorithms were analyzed by ROC curves, with the following parameters: maximum amplitude of the QRS complex, width of the QRS complex, R-R interval, and geometric mean of the VCG. To set cut-off values for the parameters, ROC curves were estimated from the true-positive rate (sensitivity) and false-positive rate. Sensitivity and specificity were calculated from the ROC curves, and the ECG was analyzed using the cut-off values estimated from the ROC curves. As a result, the PVC detection algorithm based on the geometric mean of the VCG proved highly useful, and PVCs could be detected more accurately using the amplitude and width of the QRS complex.
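Each point on such a ROC curve comes from thresholding a detector score at a cut-off and counting the resulting true/false decisions. A minimal sketch with toy scores and labels (not MIT-BIH data):

```python
def sensitivity_specificity(scores, labels, cutoff):
    """Classify a beat as PVC when its score exceeds the cutoff, then
    compute sensitivity (true-positive rate) and specificity
    (true-negative rate)."""
    tp = sum(1 for s, y in zip(scores, labels) if s > cutoff and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s <= cutoff and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s <= cutoff and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s > cutoff and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Toy QRS-amplitude scores with PVC labels (1 = PVC beat):
scores = [0.2, 0.9, 0.4, 0.8, 0.3, 0.7]
labels = [0,   1,   0,   1,   0,   1  ]
sens, spec = sensitivity_specificity(scores, labels, cutoff=0.5)
print(sens, spec)  # -> 1.0 1.0
```

Sweeping the cutoff over the score range and plotting sensitivity against (1 − specificity) traces the ROC curve from which the operating cut-off is chosen.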
Abstract: Surface metrology with image processing is a challenging task with wide applications in industry. Surface roughness can be evaluated using a texture classification approach, where an important aspect is the appropriate selection of features that characterize the surface. We propose an effective combination of features for multi-scale and multi-directional analysis of engineering surfaces; the features include the standard deviation, kurtosis, and the Canny edge detector. We apply the method by analyzing surfaces with the Discrete Wavelet Transform (DWT) and the Dual-Tree Complex Wavelet Transform (DT-CWT), using the Canberra distance metric for similarity comparison between surface classes. Our database includes surface textures manufactured by three machining processes, namely milling, casting, and shaping. The comparative study shows that the DT-CWT outperforms the DWT, giving a correct classification performance of 91.27% with the Canberra distance metric.
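The Canberra distance used for the similarity comparison has a standard definition: the sum over coordinates of |u_i − v_i| / (|u_i| + |v_i|). A minimal sketch with hypothetical feature vectors (illustrative values, not the database's):

```python
def canberra(u, v):
    """Canberra distance: sum |u_i - v_i| / (|u_i| + |v_i|), skipping
    terms where both coordinates are zero."""
    return sum(
        abs(a - b) / (abs(a) + abs(b))
        for a, b in zip(u, v)
        if a != 0 or b != 0
    )

# Hypothetical 3-element feature vectors (e.g. std. dev., kurtosis,
# edge density) from two surface texture images:
f1 = [1.0, 3.0, 0.2]
f2 = [1.5, 2.0, 0.2]
print(round(canberra(f1, f2), 4))  # -> 0.4
```

The per-coordinate normalization makes the metric sensitive to relative rather than absolute differences, which suits feature vectors whose components have very different scales.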
Abstract: This research compares the percentage of correct classification of the Empirical Bayes (EB) method with that of the classical method when data are constructed as near-normal, short-tailed and long-tailed symmetric, and short-tailed and long-tailed asymmetric. The study is performed using a conjugate prior for the normal distribution with known mean and unknown variance. The estimated hyperparameters obtained from the EB method are substituted into the posterior predictive probability and used to predict new observations. Data are generated, consisting of a training set and a test set, with sample sizes of 100, 200, and 500 for the binary classification. The results show that the EB method exhibits improved performance over the classical method in all situations under study.