Abstract: In general, class complexity is measured using factors such as
Lines of Code (LOC), Function Points (FP), Number of Methods (NOM), and
Number of Attributes (NOA). Researchers have developed several
techniques, methods, and metrics based on different factors for
calculating the complexity of a class in Object-Oriented (OO) software.
Earlier, Arockiam et al. proposed a complexity measure named Extended
Weighted Class Complexity (EWCC), an extension of the Weighted Class
Complexity proposed by Mishra et al. EWCC is the sum of the cognitive
weights of the attributes and methods of a class and of its derived
classes. In EWCC, the cognitive weight of each attribute is taken to
be 1. The main problem with the EWCC metric is that every attribute
receives the same value, whereas the cognitive load of understanding
different types of attributes is, in general, not the same. We
therefore propose a new metric named Attribute Weighted Class
Complexity (AWCC), in which cognitive weights are assigned to
attributes according to the effort needed to understand their data
types. Case studies and experiments show that the proposed metric is a
better measure of the complexity of a class with attributes.
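As a rough illustration of how an AWCC-style value could be computed, the sketch below sums per-attribute weights chosen by data type together with method weights and the complexities of derived classes. The weight table is hypothetical, chosen for this example only; it is not the weight assignment proposed in the paper.

```python
# Hypothetical cognitive weights per attribute data type (illustrative
# values, not the weights defined in the AWCC paper).
TYPE_WEIGHTS = {"int": 1, "float": 1, "string": 2, "array": 3, "object": 4}

def awcc(attribute_types, method_weights, derived_class_complexities=()):
    """Sum cognitive weights of attributes (by data type), of methods,
    and of any derived classes' complexities."""
    attr_cost = sum(TYPE_WEIGHTS[t] for t in attribute_types)
    method_cost = sum(method_weights)
    return attr_cost + method_cost + sum(derived_class_complexities)

# int(1) + string(2) + array(3) + methods(2 + 3) = 11
print(awcc(["int", "string", "array"], [2, 3]))  # 11
```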
Abstract: Polymers are among the most widely used materials in our everyday life. The subject of renewable resources has attracted great attention in recent years. New polymeric materials derived from renewable resources such as carbohydrates have drawn public attention, especially because of their biocompatibility and biodegradability. The aim of our paper was to obtain environmentally compatible polymers from monosaccharides. Novel glycopolymers based on D-glucose have been obtained from the copolymerization of a new monomer carrying a carbohydrate moiety with methyl methacrylate (MMA) via free-radical bulk polymerization. Differential scanning calorimetry (DSC) was performed in order to study the copolymerization process of the monomer with the chosen co-monomer; the activation energy of this process was evaluated using the Ozawa method. The copolymers obtained were characterized using ATR-FTIR spectroscopy. The thermal stability of the obtained products was studied by thermogravimetry (TG).
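The Ozawa (Flynn-Wall-Ozawa) evaluation mentioned above can be sketched numerically: the method fits log10 of the heating rate against the reciprocal peak temperature and reads the activation energy off the slope via the standard 0.4567 approximation constant. The values below are synthetic placeholders, not data from the DSC experiments.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def ozawa_activation_energy(heating_rates, peak_temps):
    """Estimate Ea with the Ozawa (Flynn-Wall-Ozawa) approximation:
    log10(beta) ~ const - 0.4567 * Ea / (R * Tp),
    so Ea = -slope * R / 0.4567, where slope comes from a least-squares
    fit of log10(beta) against 1/Tp."""
    xs = [1.0 / t for t in peak_temps]            # 1/Tp in 1/K
    ys = [math.log10(b) for b in heating_rates]   # log10 of heating rate
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * R / 0.4567  # J/mol
```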
Abstract: The volume of biosolids produced in Malaysia has increased
in proportion to its population size. The end products of waste
treatment are mounting, and it is inevitable that the environment will
eventually be overwhelmed by this waste. This study investigated the
suitability of reusing biosolids as fertilizer for non-food crops. By
varying the concentration of biosolids applied to the soil, the growth
of five ornamental plant samples was monitored for eight consecutive
weeks. The results show that the pH of the soil after the addition of
biosolids ranges from 6.45 to 6.56, which is suitable for plant
growth. The soil samples containing biosolids also show higher amounts
of macronutrients (N, P, K). The heavy metal content of the plants
increases significantly but does not exceed the guidelines of the
Environmental Protection Agency. There was only a small difference in
plant growth between biosolids and commercial fertilizer, showing that
biosolids can perform just as well as commercial fertilizer.
Abstract: This study applies the sequential panel selection
method (SPSM) procedure proposed by Chortareas and Kapetanios
(2009) to investigate the time-series properties of energy
consumption in 50 US states from 1963 to 2009. SPSM involves the
classification of the entire panel into a group of stationary series and
a group of non-stationary series to identify how many and which
series in the panel are stationary processes. Empirical results obtained
through SPSM with the panel KSS unit root test developed by Ucar
and Omay (2009) combined with a Fourier function indicate that
energy consumption in all 50 US states is stationary. The results
of this study have important policy implications for the 50 US states.
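The panel KSS procedure applies, series by series, a nonlinear unit root regression of the form Δy_t = δ·y_{t-1}³ + ε_t and tests δ = 0. A minimal single-series sketch of the test statistic is shown below; the Fourier function, panel aggregation, and critical values from the cited papers are all omitted, so this only computes the raw statistic.

```python
def kss_statistic(y):
    """t-statistic of delta in  dy_t = delta * y_{t-1}^3 + e_t,
    computed on demeaned data as in Kapetanios, Shin and Snell (2003).
    The statistic must be compared against KSS critical values, which
    are not included here."""
    mean = sum(y) / len(y)
    z = [v - mean for v in y]
    x = [z[t - 1] ** 3 for t in range(1, len(z))]   # lagged level, cubed
    dy = [z[t] - z[t - 1] for t in range(1, len(z))]
    sxx = sum(v * v for v in x)
    delta = sum(a * b for a, b in zip(x, dy)) / sxx
    resid = [d - delta * a for d, a in zip(dy, x)]
    s2 = sum(r * r for r in resid) / (len(x) - 1)   # residual variance
    return delta / (s2 / sxx) ** 0.5
```

For a mean-reverting series the statistic is strongly negative, while for a random walk it hovers near zero, which is the intuition behind the stationarity classification in the panel.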
Abstract: Interactive public displays serve as an innovative medium
for enhanced communication between people and information. However,
digital public displays are subject to a few constraints, such as
content presentation: content needs to be presented in a more
interesting way to attract people's attention and motivate them to
interact with the display. In this paper, we propose implementing
content with interaction elements for vision-based digital public
displays. Vision-based techniques are applied as a sensor to detect
passers-by, and themed content is suggested to attract their attention
and encourage them to interact with the announcement content. Virtual
objects, gesture detection, and a projection installation are applied
to attract the attention of passers-by. A preliminary study showed
positive feedback on the interactive content designed for the public
display. This new trend could be a valuable innovation, as delivering
announcement content and communicating information through this medium
proved to be more engaging.
Abstract: This paper describes a method to improve the robustness of a face recognition system based on the combination of two compensating classifiers. The face images are preprocessed by appearance-based statistical approaches, namely Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The LDA features of the face image are taken as the input of a Radial Basis Function Network (RBFN). The proposed approach has been tested on the ORL database. The experimental results show that the LDA+RBFN algorithm achieves a recognition rate of 93.5%.
Abstract: A full six degrees of freedom (6-DOF) flight dynamics
model is proposed for the accurate prediction of short and long-range
trajectories of high spin and fin-stabilized projectiles via atmospheric
flight to final impact point. The projectile is assumed to be both rigid
(non-flexible) and rotationally symmetric about its spin axis, launched
at low and high pitch angles. The mathematical model is based on the
full equations of motion set up in the no-roll body reference frame and
is integrated numerically from given initial conditions at the firing
site. The projectile's maneuvering motion depends on the most
significant force and moment variations, in addition to wind and
gravity. The computational flight analysis takes into consideration the
Mach number and total angle of attack effects by means of the
variable aerodynamic coefficients. For the purposes of the present
work, linear interpolation has been applied to the tabulated database
of McCoy's book. The developed computational method gives
satisfactory agreement with published data of verified experiments and
computational codes on atmospheric projectile trajectory analysis for
various initial firing flight conditions.
Abstract: An electrocardiogram (ECG) data compression algorithm
is needed that reduces the amount of data to be transmitted, stored,
and analyzed without losing the clinical information content. A
wavelet ECG data codec based on the Set Partitioning In Hierarchical
Trees (SPIHT) compression algorithm is proposed in this paper. The
SPIHT algorithm has achieved notable success in still image coding.
We modified the algorithm for the one-dimensional (1-D) case and
applied it to compression of ECG data.
This compression method achieves a small percent root-mean-square
difference (PRD) and a high compression ratio with low
implementation complexity. Experiments on selected
records from the MIT-BIH arrhythmia database revealed that the
proposed codec is significantly more efficient in compression and in
computation than previously proposed ECG compression schemes.
Compression ratios of up to 48:1 for ECG signals lead to acceptable
results for visual inspection.
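The PRD figure used to judge reconstruction quality can be computed directly; the sketch below uses the common unnormalized form (some works subtract the signal mean in the denominator first).

```python
import math

def prd(original, reconstructed):
    """Percent root-mean-square difference between an original signal
    and its reconstruction after compression (lower is better)."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100.0 * math.sqrt(num / den)

# Perfect reconstruction gives 0; an all-zero reconstruction gives 100.
print(prd([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
```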
Abstract: It is a challenge to provide a wide range of queries in
database query systems for small mobile devices such as PDAs and
cell phones. Currently, due to the physical and resource
limitations of these devices, most reported database querying systems
developed for them offer only a small set of pre-determined
queries that users can pose. This can be resolved by
allowing free-form queries to be entered on the devices. Hence, a
query language that does not restrict the combination of query terms
entered by users is proposed. This paper presents the free-form query
language and the method used in translating free-form queries to
their equivalent SQL statements.
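A toy translator conveys the idea: query terms may appear in any order and are mapped onto a WHERE clause. The grammar, field names, and table name below are hypothetical illustrations, not the free-form language actually defined in the paper.

```python
import re

# Hypothetical term syntax: field, comparison operator, value.
TERM = re.compile(r"(\w+)\s*(<=|>=|=|<|>)\s*(\w+)")

def to_sql(free_form_query, table="items"):
    """Translate an order-free sequence of query terms, e.g.
    'price < 100 city = Boston', into an equivalent SQL statement."""
    conditions = []
    for field, op, value in TERM.findall(free_form_query):
        literal = value if value.isdigit() else f"'{value}'"
        conditions.append(f"{field} {op} {literal}")
    where = " AND ".join(conditions) or "1=1"  # no terms: match all rows
    return f"SELECT * FROM {table} WHERE {where}"

print(to_sql("price < 100 city = Boston"))
# SELECT * FROM items WHERE price < 100 AND city = 'Boston'
```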
Abstract: Gradual patterns have been studied for many years as
they contain precious information. They have been integrated in
many expert systems and rule-based systems, for instance to reason
on knowledge such as “the greater the number of turns, the greater
the number of car crashes”. In many cases, this knowledge has been
considered as a rule “the greater the number of turns → the greater
the number of car crashes”. Historically, works have thus been
focused on the representation of such rules, studying how implication
could be defined, especially fuzzy implication. These rules were
defined by experts who were in charge to describe the systems they
were working on in order to turn them to operate automatically. More
recently, approaches have been proposed in order to mine databases
for automatically discovering such knowledge. Several approaches
have been studied, the main scientific topics being: how to determine
what a relevant gradual pattern is, and how to discover them as
efficiently as possible (in terms of both memory and CPU usage).
However, in some cases, end-users are not interested in raw, low-level
knowledge, but rather in trends. Moreover, it may be
the case that no relevant pattern can be discovered at a low level of
granularity (e.g. city), whereas some can be discovered at a higher
level (e.g. county). In this paper, we thus extend gradual pattern
approaches in order to consider multiple level gradual patterns. For
this purpose, we consider two aggregation policies, namely
horizontal and vertical.
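One common way to quantify a gradual pattern such as “the greater the number of turns, the greater the number of car crashes” is the fraction of concordant object pairs; the sketch below uses that definition, which is one of several in the literature and not necessarily the one adopted by the paper.

```python
from itertools import combinations

def gradual_support(rows, attr_a, attr_b):
    """Support of 'the greater attr_a, the greater attr_b': the
    fraction of object pairs whose ordering on attr_a is matched by
    their ordering on attr_b (concordant pairs)."""
    pairs = list(combinations(rows, 2))
    concordant = sum(
        1 for r, s in pairs
        if (r[attr_a] - s[attr_a]) * (r[attr_b] - s[attr_b]) > 0
    )
    return concordant / len(pairs)

# Illustrative data: 5 of the 6 pairs are concordant.
data = [{"turns": 2, "crashes": 10}, {"turns": 5, "crashes": 30},
        {"turns": 8, "crashes": 25}, {"turns": 9, "crashes": 40}]
print(gradual_support(data, "turns", "crashes"))
```

Aggregating the attribute values by city or county before computing this support is exactly where the horizontal and vertical aggregation policies of the paper come into play.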
Abstract: Multicarrier code-division multiple access (MC-CDMA) is an
effective technique owing to its multiple-access capability,
robustness against fading, and mitigation of intersymbol interference
(ISI). In this paper, we propose an improved multicarrier CDMA system
with adaptive subchannel allocation. We analyze the performance of the
proposed system in a frequency-selective fading environment with
narrowband interference present and compare it with parallel
transmission over many subchannels (the conventional MC-CDMA scheme)
and with a DS-CDMA system. Simulation results show that the adaptive
subchannel allocation scheme greatly improves the performance of the
conventional multicarrier CDMA system.
Abstract: In this paper, a wavelet-based neural network (WNN) classifier for recognizing EEG signals is implemented and tested on three sets of EEG signals (healthy subjects, patients with epilepsy, and patients with epileptic syndrome during a seizure). First, the Discrete Wavelet Transform (DWT) with Multi-Resolution Analysis (MRA) is applied to decompose the EEG signal at the resolution levels of its components (δ, θ, α, β, and γ), and Parseval's theorem is employed to extract the percentage distribution of energy features of the EEG signal at different resolution levels. Second, a neural network (NN) classifies these extracted features to identify the EEG type according to the percentage distribution of energy features. The performance of the proposed algorithm has been evaluated using a total of 300 EEG signals. The results show that the proposed classifier is able to recognize and classify EEG signals efficiently.
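A minimal sketch of the energy-distribution features follows, using a plain Haar DWT as a stand-in for the wavelet family and decomposition depth actually used in the paper. By Parseval's theorem the sub-band energies of an orthonormal transform sum to the signal energy, so each band can be expressed as a percentage.

```python
def haar_dwt(signal, levels):
    """Multi-level Haar DWT (orthonormal). Assumes the signal length
    is divisible by 2**levels. Returns the final approximation and a
    list of detail coefficients, one list per level."""
    details = []
    approx = list(signal)
    for _ in range(levels):
        nxt, det = [], []
        for a, b in zip(approx[0::2], approx[1::2]):
            nxt.append((a + b) / 2 ** 0.5)
            det.append((a - b) / 2 ** 0.5)
        details.append(det)
        approx = nxt
    return approx, details

def energy_distribution(signal, levels):
    """Percentage of signal energy in each sub-band: one value per
    detail level plus one for the approximation (Parseval features)."""
    approx, details = haar_dwt(signal, levels)
    bands = details + [approx]
    energies = [sum(c * c for c in band) for band in bands]
    total = sum(energies)
    return [100.0 * e / total for e in energies]
```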
Abstract: This paper presents development techniques for a complete
autonomous design model of an advanced train control system and gives
a new approach to the implementation of a multi-agent based system.
This research work proposes a novel control system to enhance the
efficiency of vehicles under various operating constraints; it
contributes to stability and controllability issues, considering
relevant safety and operational requirements, with command, control,
and communication functions and various sensors to avoid accidents. An
approach to speed scheduling, management, and control in local and
distributed environments is given to meet the needs of modern
automated vehicle control systems. These techniques draw on
state-of-the-art microelectronic technology, with accuracy and
stability as their foremost goals.
Abstract: From a planning point of view, it is essential to model mode
choice, given the massive resources involved in transportation
systems. Intercity travellers in Libya have distinct features compared
with travellers from other countries, including cultural and
socioeconomic factors. Consequently, the goal of this study is to
characterize intercity travel behavior using disaggregate models, in
order to project nation-level intercity travel demand in Libya. A
Multinomial Logit Model covering all intercity trips has been
formulated to examine national-level intercity transportation in
Libya. The Multinomial Logit Model was calibrated using a nationwide
revealed preference (RP) and stated preference (SP) survey, and was
developed for different purposes of intercity trips (work, social, and
recreational). The model parameters were estimated using the maximum
likelihood method. The data needed for
model development were obtained from all major intercity corridors
in Libya. The final sample size consisted of 1300 interviews. About
two-thirds of these data were used for model calibration, and the
remaining parts were used for model validation. This study, which is
the first of its kind in Libya, investigates the intercity traveler’s
mode-choice behavior. The intercity travel mode-choice model was
successfully calibrated and validated. The outcomes indicate that the
overall model is effective and yields high estimation precision. The
proposed model is beneficial because it is responsive to many
variables and can be employed to determine the impact of changes in
numerous characteristics on the demand for various travel modes. The
model estimates may also be valuable to planners, who can estimate
choice probabilities for various modes and determine the impact of
specific policy changes on the demand for intercity travel.
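Once calibrated, a multinomial logit model turns mode utilities into choice probabilities via P(i) = exp(V_i) / Σ_j exp(V_j). The utility values below are hypothetical placeholders, not coefficients estimated from the Libyan survey data.

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit choice probabilities:
    P(i) = exp(V_i) / sum_j exp(V_j)."""
    m = max(utilities.values())  # subtract the max for numerical stability
    exps = {mode: math.exp(v - m) for mode, v in utilities.items()}
    total = sum(exps.values())
    return {mode: e / total for mode, e in exps.items()}

# Hypothetical systematic utilities for three intercity modes.
probs = mnl_probabilities({"car": 1.2, "bus": 0.4, "air": -0.5})
print(probs)
```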
Abstract: The traveling salesman problem (TSP) is hard to solve when
the number of cities and routes becomes large. A frequency graph is
constructed to tackle the problem. The frequency graph maintains the
topological relationships of the original weighted graph; the numbers
on its edges are the frequencies of the edges accumulated from locally
optimal Hamiltonian paths. The simplest kind of locally optimal
Hamiltonian path is computed based on the four-vertices-and-three-lines
inequality. A search algorithm is given to find the optimal Hamiltonian
circuit based on the frequency graph. The experiments show that the
method can find the optimal Hamiltonian circuit within several trials.
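A simplified reading of the construction can be sketched as follows: for every four-vertex subset, find the shortest Hamiltonian path over those vertices and count its three edges into the frequency graph. This is an illustrative approximation of the paper's four-vertices-and-three-lines procedure, not its exact algorithm.

```python
from itertools import combinations, permutations

def frequency_graph(dist):
    """Build edge frequencies from the shortest Hamiltonian path of
    every 4-vertex subset of a symmetric distance matrix (a simplified
    stand-in for the paper's construction)."""
    freq = {}
    for quad in combinations(range(len(dist)), 4):
        best_len, best_path = None, None
        for order in permutations(quad):
            if order[0] > order[-1]:
                continue  # a path and its reverse are the same path
            length = sum(dist[a][b] for a, b in zip(order, order[1:]))
            if best_len is None or length < best_len:
                best_len, best_path = length, order
        for a, b in zip(best_path, best_path[1:]):
            edge = (min(a, b), max(a, b))
            freq[edge] = freq.get(edge, 0) + 1
    return freq
```

Edges that appear in many locally optimal paths get high frequencies, and the search for the optimal Hamiltonian circuit can then favor those edges.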
Abstract: Tackling emergency situations is performed on the basis of emergency scenarios. These scenarios do not have a uniform form in the Czech Republic: they are unstructured and developed primarily in text form, which does not allow emergency situations to be solved efficiently. For this reason, this paper aims at defining a Process Oriented Architecture to support, and thus improve, the tackling of emergency situations in the Czech Republic. The innovative Process Oriented Architecture is based on the Workflow Reference Model and takes into account the options of Business Process Management Suites for the implementation of process-oriented emergency scenarios. To verify the proposed architecture, a proof of concept has been used which covers the reception of an emergency event at the district emergency operations centre. For the particular implementation of the proposed architecture, the Bonita Open Solution has been used. The architecture created in this way is suitable not only for emergency management but also for educational purposes.
Abstract: Since Malaysia's Independence Day in 1957, the government
has been trying hard to find the most efficient methods of learning.
However, it is hard to assess and evaluate which students should be
called excellent students, because in practice an excellent student is
often one who excels in academics alone. This evaluation is
problematic because it is not balanced across all areas of
involvement, in both curricular and co-curricular activities. To
overcome this, we propose a method called Student Idol to evaluate
students across three categories: academics, co-curriculum, and
leadership. Each category carries its own merit points. Using this
method, students are evaluated more accurately than before, and
teachers can evaluate their students without emotional factors,
personal relationships, or other biases. In conclusion, this method
helps make student evaluation more accurate and valid.
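The merit-point evaluation can be illustrated with a weighted total over the three categories. The weights below are hypothetical placeholders; the actual merit points of the Student Idol method are not specified here.

```python
# Hypothetical merit weights for the three categories (illustrative
# only; not the merit points defined by the Student Idol method).
WEIGHTS = {"academic": 0.5, "co-curriculum": 0.3, "leadership": 0.2}

def student_idol_score(scores):
    """Weighted merit total over the three evaluation categories."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# 0.5*90 + 0.3*70 + 0.2*80 = 45 + 21 + 16 = 82.0
print(student_idol_score({"academic": 90, "co-curriculum": 70,
                          "leadership": 80}))  # 82.0
```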
Abstract: This paper addresses the problems encountered by conventional distance relays when protecting double-circuit transmission lines. The problems arise principally as a result of the mutual coupling between the two circuits under different fault conditions; this mutual coupling is highly nonlinear in nature. An adaptive protection scheme is proposed for such lines based on the application of an artificial neural network (ANN). An ANN has the ability to capture the nonlinear relationship between measured signals by identifying the different patterns of the associated signals. A key point of the present work is that only current signals measured at the local end are used to detect and classify faults in the double-circuit transmission line with double-end infeed. The adaptive protection scheme is tested under a specific fault type but with varying fault location, fault resistance, fault inception angle, and remote-end infeed. Improved performance is obtained once the neural network is trained adequately; it then performs precisely when faced with different system parameters and conditions. The test results clearly show that faults are detected and classified within a quarter cycle; thus the proposed adaptive protection technique is well suited for double-circuit transmission line fault detection and classification. The results of performance studies show that the proposed neural-network-based module can improve the performance of conventional fault selection algorithms.
Abstract: When studying electronics, hands-on experience is considered very valuable for a better understanding of the concepts of electricity and electronics. Students lacking sufficient time in the lab are often put at a disadvantage. A way to overcome this is to use interactive multimedia in a virtual environment. Instead of proposing yet another ad-hoc simulator for e-learning, in this paper we propose an e-learning platform that integrates the SPICE simulator as a web service. This makes it possible to use all the functions of SPICE, the de-facto standard simulator in electronics, when developing new simulations.
Abstract: The objective of positioning the fixture elements in a
fixture is to make the workpiece stiff, so that geometric errors in
the manufacturing process can be reduced. Most previous work on
optimal fixture layout used the minimization of the sum of the nodal
deflections normal to the surface as the objective function;
deflections in all other directions were neglected. In this paper, we
propose a new method for fixture layout optimization that uses the
element strain energy, so that deformations in all directions are
considered. The objective in this method is to minimize the sum of
squares of the element strain energies; strain energy and stiffness
are inversely proportional to each other. The optimization problem is
solved by the sequential quadratic programming method. Three different
case studies are presented, and the results are compared with those of
the method using nodal deflections as the objective function to verify
the proposed method.