Abstract: The traditional software product and process metrics
are neither suitable nor sufficient in measuring the complexity of
software components, which ultimately is necessary for quality and
productivity improvement within organizations adopting CBSE.
Researchers have proposed a wide range of complexity metrics for
software systems. However, these metrics are not sufficient for
components and component-based systems, being restricted to
module-oriented and object-oriented systems. This study proposes
to measure the complexity of JavaBean software components as a
reflection of their quality, so that a component can be adapted
accordingly to make it more reusable. The proposed metric involves
only the design issues of the component and does not consider
packaging and deployment complexity. In this way, component
complexity can be kept within certain limits, which in turn helps
enhance quality and productivity.
Abstract: While compressing text files is useful, compressing
still image files is almost a necessity. A typical image takes up much
more storage than a typical text message and without compression
images would be extremely clumsy to store and distribute. The
amount of information required to store pictures on modern
computers is quite large in relation to the amount of bandwidth
commonly available to transmit them over the Internet and
applications. Image compression addresses the problem of reducing
the amount of data required to represent a digital image. Performance
of any image compression method can be evaluated by measuring the
root-mean-square-error & peak signal to noise ratio. The method of
image compression that will be analyzed in this paper is based on the
lossy JPEG image compression technique, the most popular
compression technique for color images. JPEG compression is able to
greatly reduce file size with minimal image degradation by throwing
away the least “important” information. In standard JPEG, both
chroma components are downsampled simultaneously; in this paper we
compare the results when compression is performed by downsampling
only a single chroma component. We demonstrate that a higher
compression ratio is achieved when the blue chrominance is
downsampled than when the red chrominance is downsampled, whereas
the peak signal-to-noise ratio is higher when the red chrominance
is downsampled. In particular, we use hats.jpg to demonstrate JPEG
compression using a low-pass filter and show that the image is
compressed with barely any visible difference under either method.
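The two quality measures named in the abstract have standard closed forms; a minimal sketch (NumPy assumed, not code from the paper) for 8-bit images:

```python
import numpy as np

def rmse(original, compressed):
    """Root-mean-square error between two images (element-wise)."""
    diff = original.astype(float) - compressed.astype(float)
    return np.sqrt(np.mean(diff ** 2))

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB; peak=255 assumes 8-bit images."""
    e = rmse(original, compressed)
    return float("inf") if e == 0 else 20 * np.log10(peak / e)
```

A constant error of 5 gray levels, for example, gives RMSE 5 and PSNR 20·log10(255/5) ≈ 34.2 dB.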
Abstract: Experiments have been carried out at sub-critical
Reynolds number to investigate free-to-roll motions induced by
complex forebody and/or wing flow on a 30° swept-back
nonslender-wing slender-body model for static and dynamic (pitch-up)
cases. For the dynamic (pitch-up) case it has been observed that roll
amplitude decreases and lag increases with increase in pitching
speed. Decrease in roll amplitude with increase in pitch rate is
attributed to low disturbing rolling moment due to weaker interaction
between forebody and wing flow components. Asymmetric forebody
vortices dominate and control the roll motion of the model in
dynamic case when the non-dimensional pitch rate is ≥ 1×10⁻².
Effectiveness of the active control scheme utilizing rotating nose with
artificial tip perturbation is observed to be low in the angle of attack
region where the complex flow over the wings has contributions from
both forebody and wings.
Abstract: This paper presents a robust method to detect obstacles in stereo images using a shadow removal technique and color information. Stereo-vision-based obstacle detection is an algorithm that aims to detect obstacles and compute their depth using stereo matching and a disparity map. The proposed method is divided into three phases: the first phase detects obstacles and removes shadows, the second performs matching, and the last computes depth. In the first phase, we propose a robust method for detecting obstacles in stereo images using a shadow removal technique based on color information in HSI space. We use a Normalized Cross Correlation (NCC) matching function with a 5 × 5 window, prepare an empty matching table τ, and grow disparity components by drawing a seed s from the seed set S, which is computed using a Canny edge detector, and adding it to τ. In this way we achieve higher performance than previous works [2,17]. A fast stereo matching algorithm is proposed that visits only a small fraction of the disparity space in order to find a semi-dense disparity map; it works by growing from a small set of correspondence seeds. The obstacles identified in phase one that appear in the disparity map of phase two enter the third phase of depth computation. Finally, experimental results are presented to show the effectiveness of the proposed method.
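As an illustration of the matching step described above, a minimal NCC sketch with a 5 × 5 window (NumPy assumed; the matching table τ, seed growing, and the rest of the paper's pipeline are omitted):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def best_disparity(left, right, row, col, max_disp, w=2):
    """Score 5x5 windows (w=2) in the right image against a left patch
    over candidate disparities; return the disparity with the best NCC."""
    patch = left[row - w:row + w + 1, col - w:col + w + 1]
    scores = []
    for d in range(max_disp + 1):
        c = col - d  # a left pixel at `col` maps to `col - d` in the right image
        if c - w < 0:
            break
        cand = right[row - w:row + w + 1, c - w:c + w + 1]
        scores.append((ncc(patch, cand), d))
    return max(scores)[1]
```

On a synthetic pair where the right image is the left shifted by three columns, the function recovers disparity 3 at interior pixels.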
Abstract: The object of this paper is to design and analyze a
proportional – integral (PI) control for positive output elementary
super lift Luo converter (POESLLC), which is a state-of-the-art
DC-DC converter. The positive output elementary super lift Luo
converter performs the voltage conversion from positive source
voltage to positive load voltage. This paper proposes a PI control
capable of providing better static and dynamic performance than a
proportional–integral–derivative (PID) controller. The dynamic
equations describing the positive output elementary super lift Luo
converter are derived using the state-space averaging method, and
the PI control is designed. The
simulation model of the positive output elementary super lift Luo
converter with its control circuit is implemented in
Matlab/Simulink. The PI control for the positive output elementary
super lift Luo converter is tested for the transient region, line
changes, load changes, the steady-state region, and also for
component variations.
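As a sketch of the control law described above, here is a minimal discrete-time PI loop regulating a first-order stand-in plant; the plant model, gains, and setpoint are illustrative assumptions, not the paper's converter model:

```python
# Minimal discrete-time PI controller sketch (illustrative, not the
# paper's design): u = kp*e + ki*integral(e), clamped to a duty-cycle range.
class PIController:
    def __init__(self, kp, ki, dt, u_min=0.0, u_max=1.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        return min(self.u_max, max(self.u_min, u))  # clamp the duty cycle

# Closed-loop check against a first-order stand-in for the converter:
# tau * dy/dt = K*u - y, with K = 5 and tau = 0.05 s (assumed values).
dt, y = 1e-3, 0.0
pi = PIController(kp=0.5, ki=5.0, dt=dt)
for _ in range(5000):
    u = pi.update(3.0, y)            # regulate the output toward 3.0
    y += dt * (5.0 * u - y) / 0.05   # Euler step of the plant
```

The integral term drives the steady-state error to zero, which is the property the abstract's transient and steady-state tests exercise.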
Abstract: In this paper, the statistical properties of filtered or convolved signals are considered by deriving the resulting density functions as well as the exact mean and variance expressions, given prior knowledge about the statistics of the individual signals in the filtering or convolution process. It is shown that the density function after linear convolution is a mixture density, where the number of density components is equal to the number of observations of the shortest signal. For circular convolution, the observed samples are characterized by a single density function, which is a sum of products.
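The mean and variance expressions can be checked numerically in a special case: an i.i.d. input passed through a fixed filter, for which the standard results are mean E[x]·Σh and variance Var[x]·Σh² per fully overlapped output sample (the paper treats the more general setting where both signals are random):

```python
import numpy as np

# Empirical check of the i.i.d.-input special case of filtering statistics.
rng = np.random.default_rng(1)
h = np.array([0.5, 0.3, 0.2])                       # fixed filter (illustrative)
x = rng.normal(loc=2.0, scale=1.0, size=200_000)    # i.i.d. input samples
y = np.convolve(x, h, mode="valid")                 # fully overlapped outputs

mean_theory = 2.0 * h.sum()          # E[x] * sum(h)    -> 2.0
var_theory = 1.0 * (h ** 2).sum()    # Var[x] * sum(h^2) -> 0.38
```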
Abstract: The vast rural landscape in the southern United States
is conspicuously characterized by the hedgerow trees or groves. The
patchwork landscape of fields surrounded by high hedgerows is a
traditional and familiar feature of the American countryside.
Hedgerows are in effect linear strips of trees, groves, or woodlands,
which are often critical habitats for wildlife and important for the
visual quality of the landscape. As landscape interfaces, hedgerows
define the spaces in the landscape, give the landscape life and
meaning, and enrich ecologies and cultural heritages of the American
countryside. Although hedgerows were originally intended as fences
and to mark property and townland boundaries, they are not merely
natural or man-made additions to the landscape--they have
gradually become “naturalized” into the landscape, deeply rooted in
the rural culture, and have now formed an important component of the
southern American rural environment. However, due to the
ever-expanding real estate industry and high demand for new
residential development, substantial areas of authentic hedgerow
landscape in
the southern United States are being urbanized. Using Hudson Farm
as an example, this study illustrates how hedgerows can be
integrated into town planning as green infrastructure and landscape
interface to guide sustainable land use, and suggests ways in which
such vernacular landscapes can be preserved and integrated into new
development without losing their contextual inspiration.
Abstract: The dynamic behaviour of a four-bar linkage driven by a velocity-controlled DC motor is discussed in this paper. In particular, the author presents the results obtained by means of specifically developed software, which implements the mathematical models of all components of the system (linkage, transmission, electric motor, control devices). The use of this software enables a more efficient design approach, since it allows the designer to check, in a simple and immediate way, the dynamic behaviour of the mechanism arising from different values of the system parameters.
Abstract: The paper presents a computational tool developed for
the evaluation of technical and economic advantages of an innovative
cleaning and conditioning technology for the outlet product gas of
fluidized bed steam/oxygen gasifiers. This technology integrates into a single
unit the steam gasification of biomass and the hot gas cleaning and
conditioning system. Both components of the computational tool,
process flowsheet and economic evaluator, have been developed
under the IPSEpro software. The economic model provides information
that can help potential users, especially small and medium-size
enterprises acting in the renewable energy field, to decide the
optimal scale of a plant and to better understand both the potential
and the limits of the system when applied to a wide range of conditions.
Abstract: Principal Component Analysis (PCA) has many
important applications, especially in pattern detection such as
face detection and recognition. For real-time applications, the
response time is therefore required to be as small as possible.
In this paper, a new implementation of PCA for fast face
detection is presented. The new implementation is designed based
on cross correlation in the frequency domain between the input
image and the eigenvectors (weights). Simulation results show
that the proposed implementation of PCA is faster than the
conventional one.
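The speed-up rests on the correlation theorem: circular cross-correlation can be computed as an inverse FFT of a spectral product. A minimal 1-D sketch (NumPy assumed; the paper applies the same idea to 2-D images and eigenvectors):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)   # stand-in for an input signal
w = rng.standard_normal(64)   # stand-in for an eigenvector (weights)

# Direct circular cross-correlation: c[k] = sum_n x[n] * w[(n + k) mod N]
N = len(x)
direct = np.array([np.sum(x * np.roll(w, -k)) for k in range(N)])

# Frequency domain, via the correlation theorem:
# c = IFFT( conj(FFT(x)) * FFT(w) )
freq = np.fft.ifft(np.conj(np.fft.fft(x)) * np.fft.fft(w)).real
```

The two results agree to numerical precision, while the FFT route costs O(N log N) instead of O(N²).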
Abstract: Hydrothermally synthesized high-silica borosilicates
with the MFI structure were subjected to several characterization
techniques. The effect of boron on the structure and acidity of the
HZSM-5 catalyst was studied by XRD, SEM, N2 adsorption, solid-state
NMR, and NH3-TPD. It was confirmed that boron had entered the
framework in the boron-containing samples. The results also revealed
that strong acidity was weakened and weak acidity was strengthened
in the boron-added zeolite framework compared with the parent
catalyst. The catalytic performance was evaluated in a fixed bed at
460°C for the methanol-to-propylene (MTP) reaction. The results
showed a great increase in propylene selectivity and excellent
stability for the B-HZSM-5. The catalyst exhibited about 81%
selectivity to C2=–C4= olefins, with 40% selectivity to propylene
as the major component at near 100% methanol conversion, and stable
performance over the 100 h study period.
Abstract: This study examines the impact of working capital
management on firms' performance and the market value of firms in
Nigeria. A sample of fifty-four non-financial firms listed on the
Nigerian Stock Exchange was used for this study. Data were collected
from the annual reports of the sampled firms for the period
1995-2009. The results show a significant negative relationship
between the cash conversion cycle and both market valuation and firm
performance. They also show that the debt ratio is positively
related to market valuation and negatively related to firm
performance. The findings confirm a significant relationship between
market valuation, profitability, and working capital components, in
line with previous studies. This means that Nigerian firms should
ensure adequate management of working capital, especially the cash
conversion cycle components of accounts receivable, accounts
payable, and inventories, as efficient working capital management is
expected to contribute positively to firms' market value.
Abstract: This paper presents a new method for the
implementation of direct rotor flux oriented control (DRFOC) of
induction motor (IM) drives. It is based on the regulation of the
rotor flux components. The d- and q-axis rotor flux components feed
proportional integral (PI) controllers, the outputs of which are the
target stator voltages (vdsref and vqsref), while the synchronous
speed is obtained at the output of the rotor speed controller. In
order to accomplish variable speed operation, a conventional PI-like
controller is commonly used, but such controllers provide only
limited performance over a wide range of operations, even under
ideal field-oriented conditions. An alternative approach is to use
the so-called fuzzy logic controller. The overall investigated
system is implemented using a dSpace system based on a digital
signal processor (DSP). Simulation and experimental results are
presented for a 1 kW IM drive to confirm the validity of the
proposed algorithms.
Abstract: The ElectroEncephaloGram (EEG) is useful for
clinical diagnosis and biomedical research. EEG signals often
contain strong ElectroOculoGram (EOG) artifacts produced
by eye movements and eye blinks especially in EEG recorded
from frontal channels. These artifacts obscure the underlying
brain activity, making its visual or automated inspection
difficult. The goal of ocular artifact removal is to remove
ocular artifacts from the recorded EEG, leaving the underlying
background signals due to brain activity. In recent times,
Independent Component Analysis (ICA) algorithms have
demonstrated superior potential in obtaining the least
dependent source components. In this paper, the independent
components are obtained using the JADE algorithm (the best
separating algorithm) and are classified into either artifact
components or neural components. A neural network is used for the
classification of the obtained independent components. The neural
network requires input features that accurately represent the true
character of the input signals, so that it can classify the signals
based on the key characteristics that differentiate various signals.
In this work, Auto Regressive (AR) coefficients are used as the
input features for classification. Two neural network approaches
are used to learn classification rules from EEG data: first, a
Polynomial Neural Network (PNN) trained by the GMDH (Group Method of
Data Handling) algorithm, and secondly, a feed-forward neural
network (FNN) classifier trained by the standard back-propagation
algorithm. The results show that JADE-FNN performs better than
JADE-PNN.
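AR coefficients as features can be estimated from the Yule-Walker equations; a minimal sketch (NumPy assumed; the model order and the synthetic data here are illustrative, not the paper's EEG recordings):

```python
import numpy as np

def ar_coefficients(x, order):
    """Estimate AR coefficients via the Yule-Walker equations, using the
    biased autocorrelation estimate -- a common feature extractor for
    fixed-length signal epochs."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # r[k]: autocorrelation at lag k (biased estimate, divide by n)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Toeplitz system R a = r[1:], with R[i, j] = r[|i - j|]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])
```

On a synthetic AR(2) process x[t] = 0.75·x[t−1] − 0.5·x[t−2] + e[t], the estimator recovers the generating coefficients, which is the property that makes the coefficients useful as compact signal features.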
Abstract: Single-pole switching scheme is widely used in the
Extra High Voltage system. However, the substantial
negative-sequence current injected into the turbine-generators
imposes electromagnetic (E/M) torque components at double the system
frequency during the dead time (between single-pole clearing and
line reclosing). This can induce supersynchronous resonance (SPSR)
torque amplifications on low-pressure turbine generator blades and
even lead to fatigue damage. This paper proposes the design of a
mechanical filter (MF) with a natural frequency close to double the
system frequency. From the simulation results, it is found that such
a filter not only successfully damps the resonant effect, but is
also feasible and compact.
Abstract: This paper explores the effectiveness of machine
learning techniques in detecting firms that issue fraudulent financial
statements (FFS) and deals with the identification of factors
associated with FFS. To this end, a number of experiments have been
conducted using representative learning algorithms, which were
trained on a data set of 164 fraud and non-fraud Greek firms from
the period 2001-2002. The decision of which particular method to
choose is a complicated problem. A good alternative to choosing
only one method is to create a hybrid forecasting system
incorporating a number of possible solution methods as components
(an ensemble of classifiers). For this purpose, we have implemented
a hybrid decision support system that combines the representative
algorithms using a stacking variant methodology and achieves better
performance than any of the examined simple and ensemble methods. To
sum up, this study indicates that the investigation of financial
information can be used in the identification of FFS and underlines
the importance of financial ratios.
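The stacking idea can be sketched with off-the-shelf components; a minimal example using scikit-learn (an assumed library, and synthetic data in place of the Greek-firm data set, which is not reproduced here):

```python
# Stacking ensemble sketch: base learners' predictions feed a meta-learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a fraud / non-fraud data set.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                ("nb", GaussianNB()),
                ("rf", RandomForestClassifier(random_state=0))],
    final_estimator=LogisticRegression(),  # meta-learner over base outputs
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

The meta-learner weighs the base classifiers' outputs, which is how a stacking variant can beat any single examined method, as the abstract reports.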
Abstract: In single trial analysis, when using Principal
Component Analysis (PCA) to extract Visual Evoked Potential
(VEP) signals, the selection of principal components (PCs) is an
important issue. We propose a new method here that selects only
the appropriate PCs. We denote the method as selective eigen-rate
(SER). In the method, the VEP is reconstructed based on the rate
of the eigenvalues of the PCs. When this technique is applied to
emulated VEP signals contaminated with background
electroencephalogram (EEG), with a focus on extracting the evoked
P3 parameter, it is found to be feasible. The improvement in
signal-to-noise ratio (SNR) is superior to that of two other
existing methods of PC selection: Kaiser (KSR) and Residual Power
(RP). Although another PC selection method, Spectral Power Ratio
(SPR), gives a comparable SNR under high noise (i.e., strong EEG),
SER gives more impressive results in such cases. Next, we applied
the SER method to real VEP signals to analyse the P3 responses for
matched and non-matched stimuli. The P3 parameters extracted through
our proposed SER method showed a higher P3 response for the matched
stimulus, which conforms to existing neuroscience knowledge.
Single-trial PCA using the KSR and RP methods failed to indicate any
difference between the stimuli.
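The idea of selecting PCs by the rate (share) of their eigenvalues can be sketched as follows; the 5% threshold and the synthetic trials are assumptions for illustration, not the paper's exact SER criterion or VEP recordings (NumPy assumed):

```python
import numpy as np

def select_pcs_by_eigen_rate(trials, rate_threshold=0.05):
    """trials: (n_trials, n_samples) matrix of single-trial epochs.
    Keeps only the PCs whose eigenvalue share of the total variance
    exceeds rate_threshold, and reconstructs the trials from them."""
    mean = trials.mean(axis=0)
    centered = trials - mean
    cov = centered.T @ centered / (len(trials) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)     # symmetric eigendecomposition
    rates = eigvals / eigvals.sum()            # each PC's share of variance
    keep = eigvecs[:, rates > rate_threshold]  # selected components only
    recon = (centered @ keep) @ keep.T + mean
    return recon, keep.shape[1]
```

On trials built from one underlying waveform plus weak noise, only the single dominant PC survives the threshold and the reconstruction suppresses most of the noise.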
Abstract: In any distributed system, process scheduling plays a
vital role in determining the efficiency of the system. Process
scheduling algorithms are used to ensure that the components of the
system maximize their utilization and complete all assigned
processes in a specified period of time. This paper focuses on the
development of a comparative simulator for distributed process
scheduling algorithms. The objectives of the work carried out
include the development of the comparative simulator, as well as a
comparative study of three distributed process scheduling
algorithms: sender-initiated, receiver-initiated, and hybrid
sender-receiver-initiated algorithms. The comparative study was
based on the Average Waiting Time (AWT) and Average Turnaround Time
(ATT) of the processes involved. The simulation results show that
the performance of the algorithms depends on the number of nodes in
the system.
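The two metrics can be illustrated on a single node with first-come-first-served order (a deliberate simplification; the paper's simulator covers distributed sender-, receiver-, and hybrid-initiated policies):

```python
def fcfs_metrics(processes):
    """processes: list of (arrival_time, burst_time) tuples, run FCFS.
    Returns (average waiting time, average turnaround time)."""
    clock = 0
    waits, turnarounds = [], []
    for arrival, burst in sorted(processes):
        start = max(clock, arrival)           # wait for CPU and for arrival
        clock = start + burst                 # completion time
        waits.append(start - arrival)         # AWT term: start - arrival
        turnarounds.append(clock - arrival)   # ATT term: completion - arrival
    return sum(waits) / len(waits), sum(turnarounds) / len(turnarounds)
```

For processes arriving at (0, 3), (1, 2), (2, 1), the waits are 0, 2, 3 and the turnarounds 3, 4, 4, giving AWT = 5/3 and ATT = 11/3.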
Abstract: Reservoirs with high pressures and temperatures
(HPHT) that were considered to be atypical in the past are now
frequent targets for exploration. For downhole oilfield drilling tools
and components, the temperature and pressure affect the mechanical
strength. To address this issue, a finite element analysis (FEA) for
206.84 MPa (30 ksi) pressure and 165°C has been performed on the
pressure housing of the measurement-while-drilling/logging-while-drilling
(MWD/LWD) density tool.
The density tool is an MWD/LWD sensor that measures the density
of the formation. One of the components of the density tool is the
pressure housing that is positioned in the tool. The FEA results are
compared with the experimental test performed on the pressure
housing of the density tool. Past results show a close match between
the numerical results and the experimental test. This FEA model can
be used for extreme HPHT and ultra HPHT analyses, and/or optimal
design changes.
Abstract: Mobile IP has been developed to provide continuous
information network access to mobile users. In IP-based
mobile networks, location management is an important component of
mobility management. This management enables the system to track
the location of mobile node between consecutive communications. It
includes two important tasks: location update and call delivery.
Location update is associated with signaling load. Frequent updates
lead to degradation in the overall performance of the network and the
underutilization of the resources. It is, therefore, required to devise
the mechanism to minimize the update rate. Mobile IPv6 (MIPv6)
and Hierarchical MIPv6 (HMIPv6) have been the potential
candidates for deployment in mobile IP networks for mobility
management. Studies have shown that HMIPv6 performs better than
MIPv6: it reduces the signaling overhead traffic by making the
registration process local. In this paper,
we present performance analysis of MIPv6 and HMIPv6 using an
analytical model. The location update cost function is formulated
based on the fluid-flow mobility model. The impact of cell residence
time, cell residence probability, and user mobility is investigated.
Numerical results are obtained and presented in graphical form. It
is shown that HMIPv6 outperforms MIPv6 only for high-mobility users;
for low-mobility users, the performance of the two schemes is almost
equivalent.
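The fluid-flow model's classic boundary-crossing rate, μ = vL/(πA) for mean user speed v, cell perimeter L, and cell area A, can be used to sketch the cost comparison; the unit signaling costs and the domain-size scaling below are illustrative assumptions, not the paper's cost function:

```python
import math

def crossing_rate(v, perimeter, area):
    """Per-user cell boundary crossing rate under the fluid-flow model."""
    return v * perimeter / (math.pi * area)

def update_cost(v, cell_perimeter, cell_area, domain_cells,
                c_home=10.0, c_local=2.0):
    """Compare MIPv6 (every cell crossing registers with the home agent)
    against HMIPv6 (local registration inside a domain of `domain_cells`
    cells; home registration only on domain crossings). The unit costs
    c_home and c_local are hypothetical."""
    mu_cell = crossing_rate(v, cell_perimeter, cell_area)
    # A compact domain of k cells has ~sqrt(k) times the linear size,
    # so its boundary crossing rate scales roughly as mu_cell / sqrt(k).
    mu_domain = mu_cell / math.sqrt(domain_cells)
    mipv6 = mu_cell * c_home
    hmipv6 = mu_cell * c_local + mu_domain * c_home
    return mipv6, hmipv6
```

With cheap local registrations, HMIPv6's cost grows more slowly with the crossing rate, matching the abstract's conclusion that its advantage appears for high-mobility users.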