Abstract: This paper describes a study of the geometrically
nonlinear free vibration of thin circular functionally graded
(CFGP) plates resting on Winkler elastic foundations. The material
properties of the functionally graded composites examined here are
assumed to be graded smoothly and continuously through the plate
thickness according to a power law and are estimated using the
rule of mixtures. The theoretical model is based on classical plate
theory and the von Kármán geometrical nonlinearity assumptions.
A homogenization procedure (HP) is developed to reduce the
problem considered here to that of isotropic homogeneous circular
plates resting on a Winkler foundation. Hamilton's principle is applied
and a multimode approach is derived to calculate the fundamental
nonlinear frequency parameters, which are found to be in good
agreement with published results. In addition, the influence of the
foundation parameters on the nonlinear fundamental frequency is
also analysed.
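The power-law grading and rule of mixtures mentioned above can be sketched in a few lines; the material constants and power-law index below are illustrative assumptions, not values from the study:

```python
# Hedged sketch: through-thickness grading of Young's modulus for a
# functionally graded plate via a power law and the rule of mixtures.
# E_c, E_m (ceramic/metal moduli) and the exponent n are illustrative
# values, not taken from the paper.

def youngs_modulus(z, h, E_c, E_m, n):
    """E(z) = (E_c - E_m) * (z/h + 1/2)**n + E_m, for -h/2 <= z <= h/2."""
    V_c = (z / h + 0.5) ** n          # ceramic volume fraction
    return (E_c - E_m) * V_c + E_m

h = 0.01                               # plate thickness [m] (assumed)
E_c, E_m, n = 380e9, 70e9, 2.0         # alumina/aluminium, power-law index

# Bottom surface is fully metal, top surface fully ceramic:
print(youngs_modulus(-h / 2, h, E_c, E_m, n))   # 70e9 (metal)
print(youngs_modulus(+h / 2, h, E_c, E_m, n))   # 380e9 (ceramic)
```

The volume fraction varies smoothly from 0 at the bottom surface to 1 at the top, which is what makes the homogenization to an equivalent isotropic plate possible.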
Abstract: Emerging bio-engineering fields such as brain-computer
interfaces, neuroprosthetic devices, and the modeling and
simulation of neural networks have led to increased research activity
in algorithms for the detection, isolation and classification of action
potentials (APs) from noisy data trains. Current techniques in the field
of 'unsupervised, no-prior-knowledge' biosignal processing include
energy operators, wavelet detection and adaptive thresholding. These
tend to bias towards larger AP waveforms; APs may be missed due to
deviations in spike shape and frequency, and correlated noise
spectra can cause false detections. Such algorithms also tend to
suffer from large computational expense.
A new signal detection technique based upon the ideas of phase-space
diagrams and trajectories is proposed, using a delayed copy of the
AP to highlight discontinuities relative to background noise. This
idea has been used to create algorithms that are computationally
inexpensive and address the above problems.
Distinct APs have been picked out and manually classified from
real physiological data recorded from a cockroach. To facilitate
testing of the new technique, an autoregressive moving average
(ARMA) noise model has been constructed based upon the background
noise of the recordings. Together with the AP classification means, this
model enables the generation of realistic neuronal data sets at arbitrary
signal-to-noise ratios (SNRs).
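The phase-space idea can be illustrated with a minimal sketch: plotting the signal against a delayed copy of itself turns an action potential into a large-radius excursion from the background-noise cloud. The delay, threshold rule, and synthetic data below are illustrative assumptions, not the paper's algorithm:

```python
# Hedged sketch of the phase-space idea: pair each sample with a delayed
# copy of itself and flag samples whose trajectory leaves the noise cloud.
import math, random

random.seed(0)
noise = [random.gauss(0.0, 0.2) for _ in range(200)]
signal = noise[:]
signal[100:104] = [1.5, 2.5, -1.8, 0.9]     # an embedded "action potential"

delay = 2                                    # assumed embedding delay
sigma = 0.2                                  # assumed known noise scale
threshold = 5.0 * sigma                      # radius cut in phase space

detected = [
    t for t in range(delay, len(signal))
    if math.hypot(signal[t], signal[t - delay]) > threshold
]
print(detected)   # indices clustered around the inserted spike
```

Because the test is a per-sample radius comparison, the detector is cheap, and because the radius depends on the local trajectory rather than on a single amplitude, it is less biased toward the largest waveforms than a plain amplitude threshold.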
Abstract: Modern manufacturing facilities are large-scale,
highly complex, and operate with a large number of variables under
closed-loop control. Early and accurate fault detection and diagnosis
for these plants can minimise downtime, increase the safety of plant
operations, and reduce manufacturing costs. Fault detection and
isolation is particularly complex in the case of faulty analog
control systems, which are not equipped with monitoring functions
in which the process parameters are continually visualised. In this
situation, it is very difficult to relate the importance of a fault to
its consequences for product failure. In this paper we consider an
approach to fault detection, and to the analysis of its effect on
production quality, using adaptive centring and scaling in the
pickling process in cold rolling. The fault appeared on one of the
power units driving a rotary machine, which could no longer track
the reference speed given by another machine. The length of the metal
loop was then in continuous oscillation, which affected product
quality. Using a computerised data acquisition system, the main
machine parameters were monitored, and the fault was detected and
isolated on the basis of an analysis of the monitored data. Normal
and faulty situations were reproduced by an artificial neural
network (ANN) model implemented to simulate the normal and
faulty status of the rotary machine. The correlation between the
product quality, defined by an index, and the residual is used for
quality classification.
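One plausible reading of "adaptive centring and scaling" is a recursively updated mean and variance used to normalise each sample; the sketch below, with an invented forgetting factor and signal, shows how an oscillating fault inflates the scaled residual while slow drifts are absorbed:

```python
# Hedged sketch of adaptive centring and scaling: each sample is centred by
# a recursively updated mean and scaled by a recursively updated standard
# deviation. The forgetting factor and data are assumptions for illustration.
import math

def adaptive_residuals(x, lam=0.05):
    mu, var = x[0], 1.0
    out = []
    for v in x:
        mu = (1 - lam) * mu + lam * v                  # adaptive centre
        var = (1 - lam) * var + lam * (v - mu) ** 2    # adaptive scale
        out.append((v - mu) / var ** 0.5)
    return out

# Steady machine speed, then an oscillating loop-length fault:
steady = [10.0] * 100
faulty = [10.0 + 3.0 * math.sin(0.5 * t) for t in range(100)]
res = adaptive_residuals(steady + faulty)
print(max(abs(r) for r in res[:100]), max(abs(r) for r in res[100:]))
```

In this toy run the residual is negligible over the steady segment and jumps by an order of magnitude once the oscillation starts, which is the behaviour a threshold-based detector would exploit.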
Abstract: In recent years, rehabilitation has been the subject of extensive research due to increased spending on building work and on the repair of built works. In all cases, it is essential to carry out strengthening or repair of structural elements following an inspection analysis and the methodology of a correct diagnosis. Reinforced concrete columns are important elements in building structures: they support the vertical loads and provide bracing against horizontal loads. This research concerns the behavior of reinforced concrete rectangular columns rehabilitated by a concrete liner, confinement with FRP fabric, a steel liner, or a cage formed by metal corners. It compares the contributions of the different processes in terms of the section resistance of the rehabilitated elements relative to a section that is neither reinforced nor repaired. The results obtained reveal a considerable gain in ultimate bearing capacity for sections strengthened with concrete cladding, metal corners, or steel plates, and a slight improvement for the section reinforced with FRP fabric. The use of FRP does not affect the weight of the structure, whereas the different cladding techniques increase the weight of the rehabilitated elements and therefore the weight of the building, which may require resizing of the foundations.
Abstract: This study is about an application of King Bhumibol
Adulyadej’s “Learn Wisely” (LW) concept in instructional design
and management process at the Faculty of Education, Suan Sunandha
Rajabhat University. The concept suggests four strategies for true
learning. Related literature and significant LW methods in teaching
and learning are also reviewed and then applied in designing a
pedagogy learning module. The design has been implemented in
three classrooms with a total of 115 sophomore student teachers.
After one consecutive semester of managing and adjusting the
process by instructors and experts using collected data from minutes,
assessment of learning management, satisfaction and learning
achievement of the students, it is found that the effective SSRU
model of the LW instructional method comprises five steps.
Abstract: Segmentation is an important step in medical image
analysis and classification for radiological evaluation or computer-aided
diagnosis. CAD (computer-aided diagnosis) systems for lung CT
generally first segment the area of interest (the lung) and then analyse
the segmented area for nodule detection in order to
diagnose the disease. For a normal lung, segmentation can be
performed by making use of the excellent contrast between air and
the surrounding tissues. However, this approach fails when the lung is
affected by high-density pathology. Dense pathologies are present in
approximately a fifth of clinical scans, and for computer analysis
such as the detection and quantification of abnormal areas it is vital that
the entire lung part of the image is preserved and that no
part present in the original image is discarded. In this paper we
propose a lung segmentation technique that accurately
segments the lung parenchyma from lung CT scan images. The
algorithm was tested on 25 datasets of different patients
received from Akron University, USA, and Aga Khan Medical
University, Karachi, Pakistan.
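The contrast-based segmentation that works for normal lungs can be sketched as a simple Hounsfield-unit threshold on a toy slice (values and geometry invented); it is exactly this step that fails when dense pathology raises the lung intensity toward that of tissue:

```python
# Hedged sketch of contrast-based lung segmentation on a toy "CT slice":
# air-filled lung (~ -800 HU) is separated from surrounding tissue (~ 40 HU)
# by a fixed Hounsfield threshold. Real pipelines add morphology and border
# cleanup; all values here are illustrative.
grid = [[40] * 10 for _ in range(10)]        # soft-tissue background
for r in range(3, 7):
    for c in range(2, 5):
        grid[r][c] = -800                    # air-filled "lung" region

THRESHOLD = -400                             # air/tissue cut in HU (assumed)
mask = [[1 if hu < THRESHOLD else 0 for hu in row] for row in grid]
lung_pixels = sum(map(sum, mask))
print(lung_pixels)   # the 4x3 air region
```

If a consolidation filled part of the region with tissue-like densities (say, around 0 HU), those pixels would fall on the wrong side of the threshold and be lost, which is the failure mode the proposed technique is designed to avoid.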
Abstract: Monitoring lightning electromagnetic pulses (sferics) and other terrestrial as well as extraterrestrial transient radiation signals is of considerable interest for practical and theoretical purposes in astro- and geophysics as well as meteorology. To manage a continuous flow of data, automation of the analysis and classification process is important. Features based on a combination of wavelet and statistical methods proved efficient for this task and serve as input into a radial basis function network that is trained to discriminate transient shapes from pulse-like to wave-like. We concentrate on signals in the Very Low Frequency (VLF, 3–30 kHz) range in this paper, but the developed methods are independent of this specific choice.
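The classification stage can be illustrated with a minimal radial basis function network: one Gaussian unit per transient class, with the class of the most activated unit returned. Centres, widths, and feature vectors below are invented placeholders for the wavelet and statistical features described above:

```python
# Hedged sketch of an RBF classifier for transient shapes. Each class is
# represented by one Gaussian unit centred on a prototype feature vector;
# the centres, width, and two-dimensional features are illustrative only.
import math

centres = {"pulse-like": [0.9, 0.1], "wave-like": [0.2, 0.8]}
width = 0.5

def rbf_activations(x):
    return {
        label: math.exp(-sum((a - b) ** 2 for a, b in zip(x, c))
                        / (2 * width ** 2))
        for label, c in centres.items()
    }

def classify(x):
    act = rbf_activations(x)
    return max(act, key=act.get)     # most activated unit wins

print(classify([0.85, 0.15]))   # pulse-like
print(classify([0.25, 0.75]))   # wave-like
```

A trained network would additionally learn output weights over several hidden units per class, but the Gaussian-distance structure is the same.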
Abstract: Morgan's refinement calculus (MRC) is one of the
well-known methods that allow the formality present in a program
specification to be carried all the way to code. On the other hand,
Object-Z (OZ) is an extension of Z adding support for classes and
objects. There are a number of methods for obtaining code from OZ
specifications that can be categorized into refinement and animation
methods. As far as we know, only one refinement method exists
which refines OZ specifications into code. However, this method
does not have fine-grained refinement rules and thus cannot be
automated. Meanwhile, existing animation methods do not
present their mapping rules formally and do not support the mapping of
several important constructs of OZ, such as all cases of operation
expressions and most of the constructs in the global paragraph. In this paper,
with the aim of providing an automatic path from OZ specifications
to code, we propose an approach to map OZ specifications into their
counterparts in MRC in order to use fine-grained refinement rules of
MRC. In this way, having counterparts of our specifications in MRC,
we can refine them into code automatically using MRC tools such as
RED. Other advantages of our work pertain to proposing the mapping
rules formally, supporting the mapping of all important constructs of
Object-Z, and considering the dynamic instantiation of objects, which
OZ itself does not cover.
Abstract: Automatic reading of handwritten cheques is a computationally
complex process, and it plays an important role in financial
risk management. Machine vision and learning provide a viable
solution to this problem. Research effort has mostly been focused
on recognizing diverse pitches of cheques and demand drafts with an
identical outline. However, most of these methods employ template
matching to localize the pitches, and such schemes can potentially
fail when applied to the different types of outline maintained by each
bank. In this paper, the so-called outline problem is resolved by
a cheque information tree (CIT), which generalizes the localizing
method to extract active-region-of-entities. In addition, the weight
based density plot (WBDP) is performed to isolate text entities and
read complete pitches. Recognition is based on texture features using
neural classifiers. Legal amount is subsequently recognized by both
texture and perceptual features. A post-processing phase is invoked
to detect the incorrect readings by Type-2 grammar using the Turing
machine. The performance of the proposed system was evaluated
using cheques and demand drafts from 22 different banks. The test data
consist of a collection of 1,540 leaves obtained from 10 different
account holders from each bank. Results show that this approach
can easily be deployed without significant design amendments.
Abstract: Dengue virus is transmitted from person to person
through the bite of infected Aedes aegypti mosquitoes. DEN-1,
DEN-2, DEN-3 and DEN-4 are the four serotypes of this virus. Infection
with one of these four serotypes apparently produces permanent
immunity to it, but only temporary cross-immunity to the others. The
incubation periods of dengue virus in humans and in mosquitoes
are taken into account in this study. The dengue patients are
classified into infected and infectious classes: infectious humans
can transmit dengue virus to susceptible mosquitoes, but infected
humans cannot. A transmission model of this disease is formulated.
The human population is divided into susceptible, infected, infectious
and recovered classes, and the mosquito population is separated into
susceptible, infected and infectious classes. Only infectious
mosquitoes can transmit dengue virus to susceptible humans. We
analyse this model using dynamical analysis methods, and the
threshold condition for reducing outbreaks of the disease is
discussed.
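The compartment structure described above can be sketched as a forward-Euler integration of the coupled human and mosquito classes; all rates and initial conditions below are illustrative assumptions, not the paper's fitted parameters:

```python
# Hedged sketch of the compartment structure: humans move through
# susceptible -> infected (incubating) -> infectious -> recovered; mosquitoes
# through susceptible -> infected -> infectious; only infectious hosts
# transmit. All rates are invented for illustration.
def step(h, m, dt, b=0.3, kh=0.2, km=0.1, r=0.14):
    Sh, Eh, Ih, Rh = h
    Sm, Em, Im = m
    new_h = b * Sh * Im              # humans bitten by infectious mosquitoes
    new_m = b * Sm * Ih              # mosquitoes biting infectious humans
    h2 = (Sh - dt * new_h,
          Eh + dt * (new_h - kh * Eh),     # kh: human incubation rate
          Ih + dt * (kh * Eh - r * Ih),    # r: recovery rate
          Rh + dt * r * Ih)
    m2 = (Sm - dt * new_m,
          Em + dt * (new_m - km * Em),     # km: mosquito incubation rate
          Im + dt * km * Em)
    return h2, m2

h, m = (0.99, 0.0, 0.01, 0.0), (1.0, 0.0, 0.0)
for _ in range(1000):
    h, m = step(h, m, dt=0.1)
print(h[3])   # recovered fraction grows as the outbreak proceeds
```

The incubating ("infected") classes are exactly the ones that do not contribute to the transmission terms, reflecting the infected/infectious distinction made in the abstract; a full analysis would add demography and derive the threshold condition from the model's reproduction number.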
Abstract: In this paper, a novel method for the recognition of musical
instruments in polyphonic music is presented, using an
embedded hidden Markov model (EHMM). An EHMM is a doubly
embedded HMM structure in which each state of the external HMM
is an independent HMM. The classification is accomplished for
two different internal HMM structures, with GMMs used as
likelihood estimators for the internal HMMs. The results are compared
to those achieved by an artificial neural network with two
hidden layers. Good classification accuracies were achieved
both for solo instrument performances and for instrument combinations,
which demonstrates that the new approach outperforms similar
classification methods by exploiting the dynamics of the signal.
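The likelihood scoring that underlies HMM-based classification can be illustrated with the forward algorithm on two toy discrete models, standing in for the GMM-based internal HMMs; the models and observation sequence are invented:

```python
# Hedged sketch of HMM classification: score a sequence under each class
# model with the forward algorithm and pick the highest likelihood.
def forward_likelihood(obs, pi, A, B):
    alpha = [pi[s] * B[s][obs[0]] for s in range(len(pi))]
    for o in obs[1:]:
        alpha = [
            sum(alpha[s] * A[s][t] for s in range(len(pi))) * B[t][o]
            for t in range(len(pi))
        ]
    return sum(alpha)

A = [[0.9, 0.1], [0.1, 0.9]]                 # sticky state transitions
B = [[0.9, 0.1], [0.1, 0.9]]                 # state 0 emits symbol 0, etc.
flute = ([0.9, 0.1], A, B)                   # toy model favouring symbol 0
drum = ([0.1, 0.9], A, B)                    # toy model favouring symbol 1

seq = [0, 0, 1, 0, 0]
scores = {"flute": forward_likelihood(seq, *flute),
          "drum": forward_likelihood(seq, *drum)}
print(max(scores, key=scores.get))   # flute
```

In an EHMM the discrete emission table B is replaced by GMM likelihoods and each external state wraps an internal HMM, but the compare-likelihoods decision rule is the same.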
Abstract: In this study, a fuzzy-logic-based control system was
designed to ensure that time and energy are saved during the operation
of the load elevators used in the construction of tall
buildings. In the control system that was devised, so that the load
elevators work more efficiently, the energy interval in which the
motor operates was taken as the output variable, whereas the amount
of load and the building height were taken as input variables. The
most appropriate working intervals, depending on the characteristics
of these variables, were defined with the help of an expert. The fuzzy
expert system software was written in the Delphi programming language. In
this design, the Mamdani max-min inference mechanism was used and
the centroid method was employed in the defuzzification procedure. In
conclusion, the system that was designed is observed to be
feasible, and this is supported by statistical analyses.
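Mamdani max-min inference with centroid defuzzification can be sketched as follows; the membership functions, rule base, and universes are invented placeholders for the expert-tuned ones described above:

```python
# Hedged sketch of Mamdani max-min inference with centroid defuzzification
# for a load-elevator controller. All shapes and rules are illustrative.
def tri(x, a, b, c):
    """Triangular membership function peaking at b (shoulder if a == b)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer(load, height):
    # Rule 1: light load AND low building  -> low energy interval
    # Rule 2: heavy load AND tall building -> high energy interval
    w_low = min(tri(load, 0, 0, 600), tri(height, 0, 0, 40))    # min = AND
    w_high = min(tri(load, 400, 1000, 1000), tri(height, 20, 60, 60))
    universe = [float(u) for u in range(0, 101)]                # energy axis
    agg = [max(min(w_low, tri(u, 0, 20, 50)),                   # max = OR
               min(w_high, tri(u, 50, 80, 100))) for u in universe]
    total = sum(agg)                      # assumes at least one rule fires
    return sum(u * m for u, m in zip(universe, agg)) / total    # centroid

print(infer(200, 10))   # light load, low building: low energy value
print(infer(900, 50))   # heavy load, tall building: high energy value
```

Each rule's firing strength clips its output set (min), the clipped sets are aggregated (max), and the centroid of the aggregate gives the crisp output, which is exactly the Mamdani pipeline named in the abstract.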
Abstract: Global Software Development (GSD) projects cross the
boundaries of companies and countries, and even span continents,
so that the time zones of the collaborating sites differ.
Despite the many benefits of such development, research has reported
plenty of negative impacts on GSD projects. It is important to
understand the problems that may arise during the execution of a GSD
project across different time zones. This research project discusses
different issues related to time delays in GSD projects. In
this paper, the authors investigate some of the time-delay factors that
usually arise in GSD projects with different time zones. This
investigation is carried out through a systematic review of the literature.
Furthermore, the practices for overcoming these delay factors that
have already been reported in the literature and in GSD organizations are
also explored through a literature survey and case studies.
Abstract: We have defined two suites of metrics, which cover
static and dynamic aspects of component assembly. The static
metrics measure complexity and criticality of component assembly,
wherein complexity is measured using Component Packing Density
and Component Interaction Density metrics. Further, four criticality
conditions namely, Link, Bridge, Inheritance and Size criticalities
have been identified and quantified. The complexity and criticality
metrics are combined to form a Triangular Metric, which can be used
to classify the type and nature of applications. The dynamic metrics are
collected during the runtime of a complete application and are
useful for identifying super-components and for evaluating the
degree of utilisation of various components. In this paper, both the static
and dynamic metrics are evaluated using Weyuker's set of properties.
The results show that the metrics provide a valid means to measure
issues in component assembly. We relate our metrics suite to
McCall's Quality Model and illustrate its impact on product
quality and on the management of component-based product
development.
Abstract: Faced with social and health system capacity
constraints and rising and changing demand for welfare services,
governments and welfare providers are increasingly relying on
innovation to help support and enhance services. However, the
evidence reported by several studies indicates that the realization of
that potential is not an easy task. Innovations can be deemed
inherently complex to implement and operate, because many of them
involve a combination of technological and organizational renewal
within an environment featuring a diversity of stakeholders. Many
public welfare service innovations are markedly systemic in their
nature, which means that they emerge from, and must address, the
complex interplay between political, administrative, technological,
institutional and legal issues. This paper suggests that stakeholders
dealing with systemic innovation in welfare services must deal with
ambiguous and incomplete information in circumstances of
uncertainty. Employing a literature review and a case
study, this paper identifies, categorizes and discusses different
aspects of the uncertainty of systemic innovation in public welfare
services, and argues that uncertainty can be classified into eight
categories: technological uncertainty, market uncertainty,
regulatory/institutional uncertainty, social/political uncertainty,
acceptance/legitimacy uncertainty, managerial uncertainty, timing
uncertainty and consequence uncertainty.
Abstract: Computer languages are usually lumped together
into broad 'paradigms', leaving us in want of a finer classification
of kinds of language. Theories distinguishing between 'genuine
differences' in languages have been called for, and we propose that
such differences can be observed through a notion of expressive mode.
We outline this concept, propose how it could be operationalized, and
indicate a possible context for the development of a corresponding
theory. Finally, we consider a possible application in connection
with the evaluation of language revision. We illustrate this with a case,
investigating possible revisions of the relational algebra in order to
overcome weaknesses of the division operator in connection with
universal queries.
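For readers unfamiliar with the operator in question, relational division answers universal ("for all") queries; a minimal sketch with invented data:

```python
# Hedged sketch of relational-algebra division: enrolled(student, course)
# divided by courses(course) yields the students enrolled in *all* listed
# courses. The toy data is invented for illustration.
enrolled = {("ann", "db"), ("ann", "os"), ("bob", "db")}
courses = {"db", "os"}

def divide(rel, divisor):
    students = {s for s, _ in rel}
    # keep each student whose set of courses covers the whole divisor
    return {s for s in students
            if divisor <= {c for x, c in rel if x == s}}

print(divide(enrolled, courses))   # {'ann'}: bob misses 'os'
```

One edge case often cited when discussing the operator's weaknesses: dividing by an empty relation returns every student, since the empty set is a subset of any set.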
Abstract: Partial discharge (PD) detection is an important
method for evaluating the insulation condition of metal-clad apparatus.
Non-intrusive sensors, which are easy to install and cause no
interruption to operation, are preferred in onsite PD detection.
However, they often lack accuracy due to the interferences in PD
signals. In this paper, a novel PD extraction method that uses frequency
analysis and entropy-based time-frequency (TF) analysis is introduced.
The repetitive pulses from the converter are first removed via frequency
analysis. Then, the relative entropy and relative peak frequency of
each pulse (i.e., of its time-indexed TF spectrum) are calculated, and
all pulses with similar parameters are grouped. According to the
characteristics of the non-intrusive sensor and the frequency distribution
of PDs, the pulses of PD and of interference are separated. Finally, the
PD signal and the interferences are recovered via an inverse TF transform.
The de-noising result on noisy PD data demonstrates that the
combination of frequency and time-frequency techniques can
discriminate PDs from interferences with various frequency
distributions.
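The grouping step can be illustrated by computing, for each pulse, the entropy and peak bin of its normalized magnitude spectrum and grouping pulses with matching pairs; the pulse shapes and the matching rule below are illustrative assumptions, not the paper's exact features:

```python
# Hedged sketch of entropy/peak-frequency grouping of extracted pulses.
# A narrowband pulse has low spectral entropy and a clear peak bin; a
# broadband impulse has high entropy. Shapes and grouping are illustrative.
import cmath, math

def spectrum(pulse):
    n = len(pulse)
    return [abs(sum(pulse[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def features(pulse):
    mag = spectrum(pulse)
    total = sum(mag)
    p = [m / total for m in mag]                       # normalized spectrum
    entropy = -sum(x * math.log(x) for x in p if x > 0)
    return round(entropy, 1), p.index(max(p))          # (entropy, peak bin)

n = 32
narrowband = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]  # PD-like
broadband = [1.0 if t == 0 else 0.0 for t in range(n)]              # impulse

groups = {}
for name, pulse in [("p1", narrowband), ("p2", broadband), ("p3", narrowband)]:
    groups.setdefault(features(pulse), []).append(name)
print(groups)   # narrowband pulses share one group, the impulse another
```

Pulses whose (entropy, peak) pairs match land in the same group, after which each group can be judged PD or interference from the known frequency behaviour of the sensor.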
Abstract: As the Internet continues to grow at a rapid pace as
the primary medium for communications and commerce and as
telecommunication networks and systems continue to expand their
global reach, digital information has become the most popular and
important information resource and our dependence upon the
underlying cyber infrastructure has been increasing significantly.
Unfortunately, as our dependency has grown, so has the threat to the
cyber infrastructure from spammers, attackers and criminal
enterprises. In this paper, we propose a new machine learning based
network intrusion detection framework for cyber security. The
detection process of the framework consists of two stages: model
construction and intrusion detection. In the model construction stage,
a semi-supervised machine learning algorithm is applied to a
collected set of network audit data to generate a profile of normal
network behavior. In the intrusion detection stage, input network
events are analyzed and compared with the patterns gathered in the
profile, and some of them are then flagged as anomalies if they
are sufficiently far from the expected normal behavior. The
proposed framework is particularly applicable to the situations where
there is only a small amount of labeled network training data
available, which is very typical in real world network environments.
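The two-stage structure can be sketched with a deliberately simple stand-in for the semi-supervised algorithm: a per-feature Gaussian profile built from normal audit data, and a z-score cutoff for flagging anomalies (data, features, and cutoff are invented):

```python
# Hedged sketch of the two-stage framework: (1) build a profile of normal
# behaviour from audit data; (2) flag events far from the profile. A
# per-feature Gaussian profile stands in for the semi-supervised learner.
normal_events = [[10.0 + 0.1 * i, 5.0 - 0.05 * i] for i in range(50)]

def build_profile(data):
    dims = len(data[0])
    mean = [sum(row[d] for row in data) / len(data) for d in range(dims)]
    std = [max(1e-9, (sum((row[d] - mean[d]) ** 2 for row in data)
                      / len(data)) ** 0.5) for d in range(dims)]
    return mean, std

def is_anomaly(event, profile, cutoff=4.0):
    mean, std = profile
    score = max(abs(e - m) / s for e, m, s in zip(event, mean, std))
    return score > cutoff            # far from expected normal behaviour

profile = build_profile(normal_events)
print(is_anomaly([12.0, 3.9], profile))    # near the normal cloud: False
print(is_anomaly([90.0, -40.0], profile))  # far from it: True
```

Because the profile is estimated from (mostly unlabeled) normal traffic only, the scheme matches the setting described above in which labeled training data is scarce.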
Abstract: In digital signal processing it is important to
approximate multi-dimensional data by the method called rank
reduction, in which the rank of the multi-dimensional data is reduced
from higher to lower. For 2-dimensional data, singular value
decomposition (SVD) is one of the best-known rank reduction
techniques. In addition, an outer product expansion derived from SVD
was proposed and implemented for multi-dimensional data, and has
been widely applied to image processing and pattern recognition.
However, the multi-dimensional outer product expansion involves
great computational complexity and lacks orthogonality between the
expansion terms. We have therefore proposed an alternative method, the
Third-order Orthogonal Tensor Product Expansion (3-OTPE).
3-OTPE uses the power method instead of a nonlinear optimization
method in order to decrease the computing time. Around the same time,
the group of L. De Lathauwer proposed the Higher-Order SVD (HOSVD),
which is also developed as an SVD extension for multi-dimensional data.
3-OTPE and HOSVD are similar in their rank reduction of
multi-dimensional data. Using these two methods we can obtain
respective computation results, some of which are the same while others
are slightly different. In this paper, we compare 3-OTPE to
HOSVD in terms of accuracy of calculation and computing time,
and clarify the difference between these two methods.
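The power method that 3-OTPE is said to employ can be illustrated in the order-2 (matrix) case, where alternating updates converge to the dominant singular pair and hence the best rank-1 approximation; the matrix is an invented example:

```python
# Hedged sketch of the power method for the dominant singular pair of a
# matrix (the order-2 analogue of the tensor case). The matrix is invented.
def power_rank1(A, iters=100):
    m, n = len(A), len(A[0])
    v = [1.0] * n
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        nu = sum(x * x for x in u) ** 0.5
        u = [x / nu for x in u]                       # left vector update
        v = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]
        sigma = sum(x * x for x in v) ** 0.5
        v = [x / sigma for x in v]                    # right vector update
    return sigma, u, v

A = [[3.0, 0.0], [0.0, 1.0]]
sigma, u, v = power_rank1(A)
print(round(sigma, 6))   # 3.0: the dominant singular value
```

In the third-order case the same alternating scheme updates three vectors instead of two, subtracting each converged rank-1 term before extracting the next, which is what avoids the cost of a full nonlinear optimization.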
Abstract: Feature selection is gaining importance due to its contribution to saving classification cost in terms of time and computational load. In the search for essential features, one approach is to search via a decision tree, which acts as an intermediate feature-space inducer from which essential features are chosen. In decision-tree-based feature selection, some studies use the decision tree as a feature ranker with a direct threshold measure, while others retain the decision tree but utilize a pruning condition that acts as the threshold mechanism for choosing features. This paper proposes a threshold measure based on the Manhattan hierarchical cluster distance, to be utilized in feature ranking in order to choose relevant features as part of the feature selection process. The results are promising, and the method can be improved in the future by including test cases with a higher number of attributes.
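One way to read the proposed thresholding, sketched here as an interpretation rather than the paper's exact procedure: rank features by a (here invented) decision-tree importance score and cut the ranking at the largest Manhattan (L1) gap between consecutive scores, which in one dimension is equivalent to cutting a single-linkage hierarchical clustering at its last merge:

```python
# Hedged sketch of threshold-based feature ranking. The importance scores
# are invented stand-ins for decision-tree-derived scores; the largest L1
# gap between consecutive ranked scores serves as the cluster cut.
scores = {"f1": 0.42, "f2": 0.40, "f3": 0.38, "f4": 0.07, "f5": 0.05}

ranked = sorted(scores, key=scores.get, reverse=True)
gaps = [abs(scores[ranked[i]] - scores[ranked[i + 1]])   # Manhattan distance
        for i in range(len(ranked) - 1)]
cut = gaps.index(max(gaps)) + 1                          # cut at largest gap
selected = ranked[:cut]
print(selected)   # ['f1', 'f2', 'f3']
```

The gap-based cut adapts to the score distribution instead of requiring a hand-picked fixed threshold, which is the appeal of a cluster-distance criterion.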