Abstract: This paper surveys current component-based software
technologies and describes the factors that promote or inhibit
component-based software engineering (CBSE). The features that
software components inherit are also discussed, and quality
assurance issues in component-based software are addressed.
Research on the quality model of a component-based system starts
with the study of what components are, CBSE, its development life
cycle, and the pros and cons of CBSE. Various attributes are studied
and compared against existing quality models for general systems
and component-based systems. When illustrating the quality of a
software component, an apt set of quality attributes for describing
the system (or its components) should be selected. Finally, the
research issues that can be extended are tabulated.
Abstract: The objective of this research is to study principal
component analysis for classification of 67 soil samples collected from
different agricultural areas in the western part of Thailand. Six soil
properties were measured on the soil samples and are used as original
variables. Principal component analysis is applied to reduce the
number of original variables. A model based on the first two
principal components accounts for 72.24% of the total variance.
Score plots of the first two principal components were mapped
against agricultural areas divided into horticulture, field crops, and
wetland. The results showed some relationships between soil
properties and agricultural areas, and PCA was shown to be a useful
tool for classifying agricultural areas based on soil properties.
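As an illustrative sketch of the variance-explained step described above (with synthetic data standing in for the six measured soil properties, since the original measurements are not reproduced here), the share of total variance captured by the first two principal components can be obtained from an eigendecomposition of the correlation matrix:

```python
import numpy as np

# Illustrative sketch, not the paper's data: PCA on six synthetic
# "soil property" variables, reporting the variance explained by
# the first two principal components.
rng = np.random.default_rng(0)
n = 67                                  # number of soil samples
base = rng.normal(size=(n, 2))          # two latent factors
loadings = rng.normal(size=(2, 6))
X = base @ loadings + 0.3 * rng.normal(size=(n, 6))

Z = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize variables
eigvals = np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False))[::-1]
explained = eigvals / eigvals.sum()            # variance ratio per PC
print(f"PC1+PC2 explain {100 * explained[:2].sum():.2f}% of total variance")
```

Because the synthetic data are generated from two latent factors plus noise, the first two components dominate, mirroring the structure the abstract reports for the soil data.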
Abstract: Deprivation indices are widely used in public health
studies. These indices are also referred to as indices of inequality or
disadvantage. Although many such indices have been built before,
existing indices are considered less appropriate for application in
other countries or areas with different socio-economic conditions
and geographical characteristics. The objective of this study is to
construct an index based on geographical and socio-economic
factors in Peninsular Malaysia, defined as the weighted
household-based deprivation index. The study employs variables on
household items, household facilities, school attendance and
education level obtained from the Malaysia 2000 census report.
Factor analysis is used to extract latent variables from the
indicators, reducing the observable variables into a smaller number
of components or factors. Based on the factor analysis, two
extracted factors were selected, labeled the Basic Household
Amenities and Middle-Class Household Items factors. Districts with
lower index values are located in the less developed states such as
Kelantan, Terengganu and Kedah, whereas areas with high index
values are located in developed states such as Pulau Pinang, W.P.
Kuala Lumpur and Selangor.
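A minimal, hypothetical sketch of the weighted-index construction: factor scores for each district are combined using weights proportional to the variance each extracted factor explains. The district rows, scores, and variance figures below are illustrative assumptions, not values from the census analysis.

```python
import numpy as np

# Hypothetical example: combine two factor scores into a single
# weighted deprivation index. Lower values indicate more deprived
# districts; all numbers here are made up for illustration.
factor_scores = np.array([          # rows: districts, cols: two factors
    [-1.2, -0.8],                   # e.g. a less developed district
    [ 0.1,  0.3],
    [ 1.4,  0.9],                   # e.g. a developed district
])
variance_explained = np.array([0.45, 0.25])    # per extracted factor
weights = variance_explained / variance_explained.sum()
index = factor_scores @ weights                # weighted composite index
print(index)
```

The weighting makes the factor that explains more variance contribute proportionally more to the composite index, which is one common way such household-based indices are assembled.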
Abstract: This paper presents the exergy analysis of a
desalination unit using humidification-dehumidification process.
Here, the unit is considered as a thermal system with three main
components: a heating unit using a solar collector, the evaporator or
humidifier, and the condenser or dehumidifier. In these components
exergy is a measure of the quality or grade of energy, and it can be
destroyed within them. According to the second law of
thermodynamics, this destruction is due to irreversibilities, which
must be determined to obtain the exergetic efficiency of the system.
In the current paper a computer program has been developed in
Visual Basic to determine the exergy destruction and the exergetic
efficiencies of the components of the desalination unit at variable
operation conditions such as feed water temperature, outlet air
temperature, air to feed water mass ratio and salinity, in addition to
cooling water mass flow rate and inlet temperature, as well as
quantity of solar irradiance.
The results obtained indicate that the exergy efficiency of the
humidifier increases with increasing mass ratio and decreasing
outlet air temperature. On the other hand, the exergy efficiency of
the condenser increases with both the increase of this ratio and the
increase of the outlet air temperature.
Abstract: In this paper, a wavelet-based neural network (WNN) classifier for recognizing EEG signals is implemented and tested on three sets of EEG signals (healthy subjects, patients with epilepsy, and patients with epileptic syndrome during a seizure). First, the Discrete Wavelet Transform (DWT) with Multi-Resolution Analysis (MRA) is applied to decompose the EEG signal at the resolution levels of its components (δ, θ, α, β and γ), and Parseval's theorem is employed to extract the percentage distribution of energy features of the EEG signal at the different resolution levels. Second, a neural network (NN) classifies these extracted features to identify the EEG type according to the percentage distribution of energy features. The performance of the proposed algorithm has been evaluated using 300 EEG signals in total. The results showed that the proposed classifier is able to recognize and classify EEG signals efficiently.
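The Parseval-based feature-extraction step can be sketched as follows, assuming a Haar wavelet for simplicity (the abstract does not specify the mother wavelet, and a synthetic two-tone signal stands in for an EEG trace): a multi-level DWT splits the signal into sub-bands, and the energy in each sub-band is expressed as a percentage of the total.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar DWT: (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def energy_features(signal, levels=4):
    """Percentage of signal energy in each DWT sub-band (Parseval)."""
    energies = []
    a = signal
    for _ in range(levels):
        a, d = haar_dwt(a)
        energies.append(np.sum(d ** 2))      # detail band at this level
    energies.append(np.sum(a ** 2))          # final approximation band
    energies = np.array(energies)
    return 100 * energies / energies.sum()

t = np.linspace(0, 1, 256, endpoint=False)
eeg_like = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
features = energy_features(eeg_like)
print(features)          # percentages summing to 100
```

Because the Haar transform is orthonormal, total energy is preserved across the decomposition, so the percentages sum to 100; these per-band percentages are the kind of feature vector the NN classifier would consume.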
Abstract: The present study was done primarily to address two major research gaps: firstly, to develop an empirical measure of life meaningfulness for substance users and, secondly, to determine the psychosocial determinants of life meaningfulness among substance users. The study comprised two phases: the first phase dealt with the development of the Life Meaningfulness Scale, and the second phase examined the relationship between life meaningfulness and social support, abstinence self-efficacy and depression. Both qualitative and quantitative approaches were used for framing items. A Principal Component Analysis yielded three components: Overall Goal Directedness, Striving for a Healthy Lifestyle and Concern for Loved Ones, which collectively accounted for 42.06% of the total variance. The scale and its subscales were also found to be highly reliable. Multiple regression analyses in the second phase of the study revealed that social support and abstinence self-efficacy significantly predicted life meaningfulness among 48 recovering inmates of a de-addiction center, while level of depression failed to predict life meaningfulness.
Abstract: Electromagnetic interference (EMI) is one of the
serious problems in most electrical and electronic appliances
including fluorescent lamps. The electronic ballast used to regulate
the power flow through the lamp is the major cause for EMI. The
interference is because of the high frequency switching operation of
the ballast. Formerly, some EMI mitigation techniques were in
practice, but they were not satisfactory because of hardware
complexity in the circuit design, increased parasitic components,
power consumption, and so on. Most researchers have focused only
on EMI mitigation without considering other constraints such as
cost and effective operation of the equipment. In
this paper, we propose a technique for EMI mitigation in fluorescent
lamps by integrating Frequency Modulation and Evolutionary
Programming. By the Frequency Modulation technique, the
switching at a single central frequency is extended to a range of
frequencies, and so, the power is distributed throughout the range of
frequencies leading to EMI mitigation. But in order to meet the
operating frequency of the ballast and the operating power of the
fluorescent lamps, an optimal modulation index is necessary for
Frequency Modulation. The optimal modulation index is determined
using Evolutionary Programming. Thereby, the proposed technique
mitigates the EMI to a satisfactory level without disturbing the
operation of the fluorescent lamp.
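A toy sketch of the evolutionary-programming step described above: an EP loop searches for a modulation index m that minimizes a cost function. The quadratic cost below (penalizing deviation from an assumed optimal spreading index of 0.6) is purely illustrative; it stands in for the paper's actual objective combining EMI level and lamp operating constraints.

```python
import random

def cost(m, target=0.6):
    # Hypothetical objective: penalize deviation from an assumed
    # optimal modulation index (not the paper's real EMI model).
    return (m - target) ** 2

def evolutionary_programming(pop_size=20, generations=50):
    random.seed(1)
    # initial population of candidate modulation indices in [0, 1]
    pop = [random.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        # each parent produces one offspring via Gaussian mutation
        offspring = [min(1.0, max(0.0, m + random.gauss(0, 0.05)))
                     for m in pop]
        # (mu + mu) survivor selection: keep the best half
        pop = sorted(pop + offspring, key=cost)[:pop_size]
    return pop[0]

best_m = evolutionary_programming()
print(f"best modulation index ~ {best_m:.3f}")
```

Classical EP uses mutation and selection only (no crossover), as here; in the paper's setting the cost evaluation would instead measure the EMI spectrum and the ballast/lamp operating constraints for each candidate index.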
Abstract: A new paradigm for software design and development models software by its business process, translates the model into a process execution language, and has it run by a supporting execution engine. This process-oriented paradigm promotes modeling of software by less technical users or business analysts, as well as rapid development. Since business process models may be shared by different organizations, and sometimes even by different business domains, it is interesting to apply a technique used in traditional software component technology to design reusable business processes. This paper discusses an approach that applies a technique for software component fabrication to the design of process-oriented software units, called process components. These process components result from decomposing a business process of a particular application domain into subprocesses, with the aim that the process components can be reused in different process-based software models. The approach is quantitative because the quality of a process component design is measured from the technical features of the process components. The approach is also strategic because the measured quality is evaluated against business-oriented component management goals. A software tool has been developed to measure how good a process component design is, according to the required managerial goals and in comparison with other designs. We also discuss how we benefit from reusable process components.
Abstract: This research proposes a methodology for patent-citation-based technology input-output analysis, applying patent information to the input-output analysis originally developed for dependencies among different industries. For this analysis, a technology relationship matrix and its components, as well as input and technology inducement coefficients, are constructed using patent information. A technology inducement coefficient is then calculated by normalizing the degree of citation from certain IPC (International Patent Classification) classes to different or the same IPC classes. Finally, we construct a Dependency Structure Matrix (DSM) based on the technology inducement coefficients to suggest a useful application of this methodology.
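The normalization step can be illustrated with a minimal sketch: raw citation counts between IPC classes are turned into inducement coefficients by normalizing each citing class's counts to sum to one. The IPC labels and counts below are invented for the example, and row-normalization is one plausible reading of the normalization the abstract describes.

```python
# citations[citing][cited] = number of citations (illustrative data)
citations = {
    "A61K": {"A61K": 40, "C07D": 10},
    "C07D": {"A61K": 5,  "C07D": 15},
}

def inducement_coefficients(citations):
    """Normalize each citing class's citation counts to sum to 1."""
    coeffs = {}
    for citing, row in citations.items():
        total = sum(row.values())
        coeffs[citing] = {cited: n / total for cited, n in row.items()}
    return coeffs

coeffs = inducement_coefficients(citations)
print(coeffs["A61K"]["C07D"])    # 10 / (40 + 10) = 0.2
```

The resulting coefficient matrix plays the role of the industry input coefficients in classical input-output analysis, with IPC classes in place of industries.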
Abstract: Stick models are widely used in studying the
behaviour of straight as well as skew bridges and viaducts subjected
to earthquakes while carrying out preliminary studies. The
application of such models to highly curved bridges continues to
pose challenging problems. A viaduct proposed in the foothills of the
Himalayas in Northern India is chosen for the study. It has 8
simply supported spans at 30 m c/c. It is doubly curved in the
horizontal plane with a 20 m radius and is inclined in the vertical
plane as well. The
superstructure consists of a box section. Three models have been
used: a conventional stick model, an improved stick model and a 3D
finite element model. The improved stick model is employed by
making use of body constraints in order to study its capabilities. The
first 8 frequencies differ by about 9.71% between the latter two
models; the difference increases to 80% by the 50th mode. The
viaduct was
subjected to all three components of the El Centro earthquake of May
1940. The numerical integration was carried out using the Hilber-
Hughes-Taylor method as implemented in SAP2000. Axial forces
and moments in the bridge piers as well as lateral displacements at
the bearing levels are compared for the three models. The maximum
differences in the axial forces, bending moments and displacements
vary by 25% between the improved stick model and the finite
element model, whereas the maximum differences in the axial
forces, moments and displacements in various sections vary by 35%
between the improved stick model and the equivalent straight stick
model. The difference in torsional moment was as high as 75%. It is
concluded that the stick model with body constraints to model the
bearings and expansion joints is not desirable in very sharply
S-curved viaducts, even for preliminary analysis. This model can be
used only to determine the first 10 frequencies and mode shapes, but
not member forces. A 3D finite element analysis must be carried out
for meaningful results.
Abstract: Data on 657 lactations from 163 Maltese goats,
collected over a 5-year period, were analyzed by a mixed model to
estimate the variance components for heritability. The considered
lactation traits were: milk yield (MY) and lactation length (LL). Year,
parity and type of birth (single or twin) were significant sources of
variation for lactation length; on the other hand milk yield was
significantly influenced only by the year. The average MY was
352.34 kg and the average LL was 230 days. Estimates of heritability
were 0.21 and 0.15 for MY and LL, respectively. These values
suggest a low correlation between genotype and phenotype, so it
may be difficult to evaluate animals directly on phenotype. The
genetic improvement of this breed may therefore be quite slow
without the support of progeny testing aimed at selecting Maltese
breeders.
Abstract: The paper presents a complete discrete statistical framework based on a novel vector quantization (VQ) front-end process. This new VQ approach performs an optimal distribution of VQ codebook components over HMM states. This technique, which we named distributed vector quantization (DVQ) of hidden Markov models, succeeds in unifying the acoustic micro-structure and the phonetic macro-structure when the estimation of HMM parameters is performed. The DVQ technique is implemented through two variants. The first variant uses the K-means algorithm (K-means-DVQ) to optimize the VQ, while the second exploits the classification behavior of neural networks (NN-DVQ) for the same purpose. The proposed variants are compared with an HMM-based baseline system in experiments on recognition of specific Arabic consonants. The results show that the distributed vector quantization technique increases the performance of the discrete HMM system.
Abstract: A way of generating a millimeter-wave I/Q signal using an inductive-resonator-matched poly-phase filter is suggested. Normally the poly-phase filter generates quite accurate I/Q phase and magnitude, but the loss of the filter is considerable due to the series connection of passive RC components. This loss term directly increases the system noise figure when the poly-phase filter is used in an RF front-end. The proposed matching method eliminates the above-mentioned loss and in addition provides gain in the passive filter. The working principle is illustrated by mathematical analysis. The generated I/Q signal is used in implementing a millimeter-wave phase shifter for the 60 GHz communication system to verify its effectiveness. The circuit is fabricated in a 90 nm TSMC RF CMOS process under a 1.2 V supply voltage. The measurement results showed that the suggested method improved the gain by 6.5 dB and the noise by 2.3 dB. The summary of the proposed I/Q generation is compared with previous works.
Abstract: The importance of the machining process in today's
industry requires the establishment of more practical approaches to
clearly represent the intimate and severe contact at the
tool-chip-workpiece interfaces. Mathematical models are developed
using the measured force signals to relate each of the tool-chip
friction components on the rake face to the operating cutting
parameters in rough turning operations using multilayer coated
carbide inserts.
Nonlinear modeling proved to have high capability to detect the
nonlinear functional variability embedded in the experimental data.
While feedrate is found to be the most influential parameter on the
friction coefficient and its related force components, both cutting
speed and depth of cut are found to have slight influence. Greater
deformed chip thickness is found to lower the value of friction
coefficient as the sliding length on the tool-chip interface is reduced.
Abstract: In the recent past Learning Classifier Systems have
been successfully used for data mining. Learning Classifier System
(LCS) is basically a machine learning technique which combines
evolutionary computing, reinforcement learning, supervised or
unsupervised learning and heuristics to produce adaptive systems.
An LCS learns by interacting with an environment from which it
receives feedback in the form of a numerical reward. Learning is
achieved by trying to maximize the amount of reward received. All
LCS models, more or less, comprise four main components: a finite
population of condition–action rules, called classifiers; the
performance component, which governs the interaction with the
environment; the credit assignment component, which distributes the
reward received from the environment to the classifiers accountable
for the rewards obtained; the discovery component, which is
responsible for discovering better rules and improving existing ones
through a genetic algorithm. The concatenation of the production
rules in the LCS forms the genotype, and therefore the GA should
operate on a population of classifier systems. This approach is
known as the 'Pittsburgh' Classifier System. Other LCSs that
perform their GA at the rule level within a population are known as
'Michigan' Classifier Systems. The most predominant
representation of the discovered
knowledge is the standard production rules (PRs) in the form of IF P
THEN D. The PRs, however, are unable to handle exceptions and do
not exhibit variable precision. The Censored Production Rules
(CPRs), an extension of PRs, were proposed by Michalski and
Winston; they exhibit variable precision and support an efficient
mechanism for handling exceptions. A CPR is an augmented
production rule of the form: IF P THEN D UNLESS C, where
Censor C is an exception to the rule. Such rules are employed in
situations, in which conditional statement IF P THEN D holds
frequently and the assertion C holds rarely. By using a rule of this
type we are free to ignore the exception condition when the
resources needed to establish its presence are tight or there is simply
no information available as to whether it holds or not. Thus, the IF P
THEN D part of CPR expresses important information, while the
UNLESS C part acts only as a switch and changes the polarity of D
to ~D. In this paper the Pittsburgh-style LCS approach is used for
automated discovery of CPRs. An appropriate encoding scheme is
suggested to represent a chromosome consisting of a fixed-size set
of CPRs. Suitable genetic operators are designed for the set of CPRs
and for individual CPRs, and an appropriate fitness function is
proposed that incorporates basic constraints on CPRs. Experimental
results are
presented to demonstrate the performance of the proposed learning
classifier system.
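A hypothetical sketch of the Pittsburgh-style encoding described above: a chromosome is a fixed-size list of censored production rules, IF P THEN D UNLESS C, each defined over binary attributes with a don't-care value. The attribute count, rule count, and classification convention are illustrative assumptions, not the paper's exact scheme.

```python
import random

N_ATTRS = 4      # binary condition attributes (illustrative)
N_RULES = 3      # fixed number of CPRs per chromosome (illustrative)

def random_cpr(rng):
    """One CPR: premise P, decision D, censor C (None = don't care)."""
    return {
        "P": [rng.choice([0, 1, None]) for _ in range(N_ATTRS)],
        "D": rng.choice([0, 1]),
        "C": [rng.choice([0, 1, None]) for _ in range(N_ATTRS)],
    }

def matches(pattern, example):
    return all(p is None or p == e for p, e in zip(pattern, example))

def classify(chromosome, example):
    """First matching rule decides; the censor C flips the decision."""
    for rule in chromosome:
        if matches(rule["P"], example):
            if matches(rule["C"], example):
                return 1 - rule["D"]   # UNLESS C: polarity switched to ~D
            return rule["D"]
    return None   # no rule fired

rng = random.Random(7)
chromosome = [random_cpr(rng) for _ in range(N_RULES)]
print(classify(chromosome, [1, 0, 1, 0]))
```

A fitness function for such chromosomes would score classification accuracy on training data, with penalties enforcing the basic CPR constraints (e.g. that the censor holds rarely relative to the premise); genetic operators then act both on whole rule sets and on individual rules.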
Abstract: The paper deals with the estimation of the amplitudes and phases of an analogue multi-harmonic band-limited signal from irregularly spaced sampling values. To this end, assuming the signal's fundamental frequency is known in advance (i.e., estimated at an independent stage), a complexity-reduced algorithm for signal reconstruction in the time domain is proposed. The reduction in complexity is achieved owing to completely new analytical and summarized expressions that enable a quick estimation at a low numerical error. The proposed algorithm for the calculation of the unknown parameters requires O((2M+1)^2) flops, while the straightforward solution of the obtained equations takes O((2M+1)^3) flops (M is the number of harmonic components). It is applicable in signal reconstruction, spectral estimation, system identification, and other important signal processing problems. The proposed method of processing can be used for precise RMS measurements (for power and energy) of a periodic signal based on the presented signal reconstruction. The paper investigates the errors related to the signal parameter estimation, and a computer simulation demonstrates the accuracy of these algorithms.
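The underlying estimation problem can be sketched with the straightforward least-squares route, i.e. the O((2M+1)^3) baseline that the paper's algorithm improves on, not the complexity-reduced method itself. With the fundamental frequency f0 known, each of the M harmonics contributes a cosine and a sine term, giving 2M+1 unknowns (including a DC term); all signal parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
f0, M = 50.0, 2                        # known fundamental, 2 harmonics
t = np.sort(rng.uniform(0, 0.04, 41))  # irregular sampling instants
true_amp, true_phase = [1.0, 0.3], [0.5, -1.0]
x = sum(a * np.cos(2 * np.pi * (k + 1) * f0 * t + p)
        for k, (a, p) in enumerate(zip(true_amp, true_phase)))

# Design matrix with 2M+1 unknowns: DC plus cos/sin per harmonic.
cols = [np.ones_like(t)]
for k in range(1, M + 1):
    cols += [np.cos(2 * np.pi * k * f0 * t),
             np.sin(2 * np.pi * k * f0 * t)]
A = np.column_stack(cols)
theta, *_ = np.linalg.lstsq(A, x, rcond=None)   # cubic-cost solve

# Recover per-harmonic amplitudes from the cos/sin coefficients.
amp = [np.hypot(theta[2 * k - 1], theta[2 * k]) for k in range(1, M + 1)]
print(amp)   # ~ [1.0, 0.3]
```

Solving this (2M+1)-unknown system directly is what costs O((2M+1)^3) flops; the paper's contribution is analytical expressions that reach the same estimates in O((2M+1)^2).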
Abstract: A simple mobile engine-driven pneumatic paddy
collector made of locally available materials using local
manufacturing technology was designed, fabricated, and tested for
collecting and bagging of paddy dried on concrete pavement. The
pneumatic paddy collector had the following major components:
radial flat bladed type centrifugal fan, power transmission system,
bagging area, frame and the conveyance system. Results showed
significant differences in the collecting capacity, noise level, and
fuel consumption when the rotational speed of the air mover shaft
was varied.
Other parameters such as collecting efficiency, air velocity,
augmented cracked grain percentage, and germination rate were not
significantly affected by varying rotational speed of the air mover
shaft. The pneumatic paddy collector had a collecting efficiency of
99.33 % with a collecting capacity of 2685.00 kg/h at maximum
rotational speed of centrifugal fan shaft of about 4200 rpm. The
machine entailed an investment cost of P 62,829.25. The break-even
weight of paddy was 510,606.75 kg/yr at a collecting cost of 0.11
P/kg of paddy. Utilizing the machine for 400 hours per year
generated an income of P 23,887.73. The projected time needed to
recover the cost of the machine, based on the 2685 kg/h collecting
capacity, was 2.63 years.
Abstract: There have been different approaches to computing the
analytic instantaneous frequency, with a variety of background
reasoning, practical applicability, and restrictions. This paper
presents an instantaneous frequency computation approach based on
adaptive Fourier decomposition and α-counting. The adaptive
Fourier decomposition is a recently proposed signal decomposition
approach; the instantaneous frequency can be computed through the
so-called mono-components it produces. Due to its fast energy
convergence, the adaptive Fourier decomposition discards the
highest frequency of the signal, which in most situations represents
noise. A new instantaneous frequency definition for a large class of
so-called simple waves is also proposed in this paper. Simple waves
cover a wide range of signals for which the concept of instantaneous
frequency has a clear physical sense. The α-counting instantaneous
frequency can be used to compute the highest frequency of a signal.
Combining these two approaches, one can obtain the instantaneous
frequencies of the whole signal. An experiment demonstrates the
computation procedure with promising results.
Abstract: In a product development process, understanding the functional behavior of the system, the role of components in achieving functions, and the failure modes that arise if a component or subsystem fails its required function helps develop an appropriate design validation and verification program for reliability assessment. The integration of these three issues helps design and reliability engineers identify weak spots in a design and plan future actions and testing programs. This case study demonstrates the advantage of unascertained theory in describing subjective cognitive uncertainty, applies blind number (BN) theory to describe the uncertainty of the mechanical system failure process, and at the same time uses the same theory to derive another mechanical reliability system model. The practical calculations show that the BN model is simple and computationally light yet has better forecasting capability, which gives it some value for macroscopic discussion.
Abstract: In this paper, we propose an efficient data
compression strategy exploiting the multi-resolution characteristic
of
the wavelet transform. We have developed a sensor node called the
"Smart Sensor Node" (SSN). The main goals of the SSN design are
light weight, minimal power consumption, a modular design and
robust circuitry. The SSN is made up of four basic components: a
sensing unit, a processing unit, a transceiver unit and a power unit.
FiOStd evaluation board is chosen as the main controller of the SSN
for its low cost and high performance. The software implementation
was coded using a Simulink model and the MATLAB programming
language. The experimental results show that the proposed data
compression technique yields a recovered signal of good quality.
This technique can be applied to compress the collected data,
reducing data communication as well as sensor energy consumption,
so that the lifetime of the sensor node can be extended.
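A minimal sketch of wavelet-based compression of this kind, assuming a one-level Haar transform and simple hard thresholding (the SSN's actual wavelet, decomposition depth, and coding scheme are not reproduced here): small detail coefficients are zeroed to reduce the data to transmit, and the signal is reconstructed from the thresholded transform.

```python
import numpy as np

def haar_forward(x):
    """One level of the orthonormal Haar transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_inverse(a, d):
    """Exact inverse of haar_forward."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

t = np.linspace(0, 1, 128, endpoint=False)
signal = np.sin(2 * np.pi * 3 * t)          # smooth "sensor" signal
a, d = haar_forward(signal)
d[np.abs(d) < 0.05] = 0.0                   # drop negligible details
ratio = (d == 0).sum() / d.size             # fraction of zeroed coeffs
recovered = haar_inverse(a, d)
err = np.max(np.abs(recovered - signal))
print(f"zeroed {100 * ratio:.0f}% of detail coefficients, "
      f"max error {err:.4f}")
```

For smooth signals most detail coefficients are small, so a sizable fraction can be dropped while the reconstruction error stays bounded by the threshold; this is the trade-off between transmitted data volume and recovered signal quality that the abstract reports.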