Abstract: Chua's circuit is one of the most important electronic devices used in chaos and bifurcation studies, and it plays a central role in secure communication. Since adaptive control is used widely in linear systems control, we introduce here a new application of the adaptive method to the field of chaos control. In this paper we derive a new adaptive control scheme for Chua's circuit, since controlling chaos is often very important in practical operations. The novelty of this approach lies in its robustness against external perturbations, which are simulated as additive noise on all measured states, and in the fact that it can be generalized to other chaotic systems. Our approach is based on Lyapunov analysis, with an adaptation law for the feedback gain; for this reason we have named it NAFT (Nonlinear Adaptive Feedback Technique). Finally, simulations show the capability of the presented technique for Chua's circuit.
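The adaptation-of-the-feedback-gain idea can be illustrated on a toy scalar plant. This is a hypothetical sketch, not the paper's NAFT controller for the full Chua system: the plant, gains, and adaptation rate below are illustrative assumptions.

```python
# Toy scalar analogue of Lyapunov-based adaptive feedback (illustration
# only, not the NAFT controller for Chua's circuit): stabilize the
# unstable plant x' = a*x + u with unknown a > 0 using the feedback
# u = -k*x and the Lyapunov-motivated adaptation law k' = gamma * x**2.

def simulate(a=2.0, gamma=5.0, x0=1.0, dt=1e-3, steps=20000):
    x, k = x0, 0.0
    for _ in range(steps):
        u = -k * x                 # feedback with adapted gain
        x += dt * (a * x + u)      # forward-Euler plant step
        k += dt * gamma * x * x    # gain grows until it dominates a
    return x, k

x_final, k_final = simulate()
```

Here the Lyapunov function V = x^2/2 + (k - a)^2/(2*gamma) is non-increasing along trajectories, so the gain k settles above the unknown a and the state decays to zero.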
Abstract: The growing importance of software quality is leading to the development of new, sophisticated techniques that can be used in constructing models for predicting quality attributes. One such technique is the Artificial Neural Network (ANN). This paper examines the application of ANNs to software quality prediction using Object-Oriented (OO) metrics. Quality estimation here means estimating the maintainability of software. The dependent variable in our study was maintenance effort; the independent variables were principal components of eight OO metrics. The results showed that the Mean Absolute Relative Error (MARE) of the ANN model was 0.265. We therefore found the ANN method useful in constructing software quality models.
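The reported error metric can be sketched as follows; this is a hedged illustration of a common MARE definition, and the paper's exact formula and data may differ.

```python
# Mean Absolute Relative Error (MARE): a common definition, shown here
# as a sketch; the paper's exact formula may differ in detail.
def mare(actual, predicted):
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical maintenance-effort values (person-hours), for illustration.
err = mare([100.0, 200.0, 50.0], [110.0, 180.0, 55.0])
```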
Abstract: Independent component analysis (ICA) in the
frequency domain is used for solving the problem of blind source
separation (BSS). However, this method has some problems: for example, a general ICA algorithm cannot determine the permutation of the signals, which is important in frequency-domain ICA. In this paper, we propose an approach to the permutation problem whose idea is to effectively combine two conventional approaches, improving separation performance by exploiting the features of each. We show simulation results using artificial data.
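The abstract does not name its two conventional approaches; one widely used family aligns permutations across frequency bins by correlating amplitude envelopes. The sketch below is a hypothetical two-source illustration of that idea, not the paper's method.

```python
# Hedged sketch of envelope-correlation permutation alignment (one
# conventional approach to the frequency-domain ICA permutation
# problem; not the paper's combined method). Two sources assumed.
def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

def align(ref_envs, bin_envs):
    """Return bin_envs, swapped if swapping matches ref_envs better."""
    keep = corr(ref_envs[0], bin_envs[0]) + corr(ref_envs[1], bin_envs[1])
    swap = corr(ref_envs[0], bin_envs[1]) + corr(ref_envs[1], bin_envs[0])
    return bin_envs if keep >= swap else [bin_envs[1], bin_envs[0]]

ref = [[1, 9, 1, 9], [9, 1, 9, 1]]          # envelopes in a reference bin
permuted = [[8, 2, 8, 2], [2, 8, 2, 8]]     # same sources, swapped order
fixed = align(ref, permuted)
```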
Abstract: The mobile agent paradigm provides a promising technology for the development of distributed and open applications. However, one of the main obstacles to its widespread adoption appears to be security. This paper treats the security of mobile agents against malicious host attacks and describes a generic mobile agent protection architecture. The proposed approach is based on dynamic adaptability and adopts reflexivity as a model of design and implementation. In order to protect the agent against behaviour-analysis attempts, the suggested approach gives the mobile agent the flexibility to present unexpected behaviour. Furthermore, some classical protective mechanisms are used to reinforce the level of security.
Abstract: The last decade has shown that the object-oriented concept by itself is not powerful enough to cope with the rapidly changing requirements of ongoing applications. Component-based systems achieve flexibility by clearly separating the stable parts of systems (i.e. the components) from the specification of their composition. In order to realize the reuse of components effectively in component-based software development (CBSD), it is necessary to measure the reusability of components. However, due to the black-box nature of components, whose source code is not available, it is difficult to use conventional metrics in component-based development, as these metrics require analysis of source code. In this paper, we survey a few existing component-based reusability metrics. These metrics give a broader view of a component's understandability, adaptability, and portability. The paper also describes an analysis, in terms of quality factors related to reusability, contained in an approach that aids significantly in assessing existing components for reusability.
Abstract: In this paper, in order to optimize the "Characteristic Straight Line Method" used in soil displacement analysis, a "best estimate" of the geodetic leveling observations has been achieved by taking into account the concept of height systems. This concept, and consequently the concept of "height", is discussed in detail. In landslide dynamic analysis, the soil is considered as a mosaic of rigid blocks. The soil displacement has been monitored and analyzed using the "Characteristic Straight Line Method", whose characteristic components have been defined and constructed from a "best estimate" of the topometric observations. In the measurement of elevation differences, we used the most modern leveling equipment available, and observational procedures were designed to provide the most effective method of acquiring data. In addition, systematic errors that cannot be sufficiently controlled by instrumentation or observational techniques are minimized by applying appropriate corrections to the observed data: the level collimation correction minimizes the error caused by non-horizontality of the leveling instrument's line of sight for unequal sight lengths; the refraction correction is modeled to minimize the refraction error caused by temperature (density) variation of air strata; the rod temperature correction accounts for variation in the length of the leveling rod's Invar/LO-VAR® strip resulting from temperature changes; and the rod scale correction ensures a uniform scale conforming to the international length standard. The concept of height systems is also introduced, and all types of height (orthometric, dynamic, normal, gravity correction, and equipotential surface) are investigated. The "Characteristic Straight Line Method" is slightly more convenient than the "Characteristic Circle Method": it permits evaluation of a displacement of very small magnitude, even an infinitesimal quantity.
The inclination of the landslide is given by the inverse of the distance from the reference point O to the "Characteristic Straight Line"; its direction is given by the bearing of the normal directed from point O to the Characteristic Straight Line (Fig. 6). A "best estimate" of the topometric observations was used to measure the elevation of carefully selected points, before and after the deformation. Gross errors were eliminated by statistical analyses and by comparing the heights within local neighborhoods. The results of a test on an area where very interesting land surface deformation occurs are reported, and monitoring with different options and a qualitative comparison of results based on a sufficient number of check points are presented.
Abstract: The purpose of this paper is to perform a multidisciplinary design and analysis (MDA) of the honeycomb panels used in satellite structural design. All the analysis is based on clamped-free boundary conditions. In the present work, detailed finite element models for honeycomb panels are developed and analysed. Experimental tests were carried out on a honeycomb specimen, with the goal of comparing them with the modal analysis made by the finite element method as well as with the existing equivalent approaches. The obtained results show good agreement between the finite element analysis, the equivalent model, and the test results: the difference in the first two frequencies is less than 4%, and less than 10% for the third frequency. The results of the equivalent model presented in this analysis are thus obtained with good accuracy. Moreover, the investigations carried out in this research concern the honeycomb plate modal analysis under several aspects, including geometrical variation of the structure (studying the influence of the dimension parameters on the modal frequency) and variation of the core and skin materials of the honeycomb. The various results obtained in this paper are promising and show that the geometry parameters and the type of material affect the value of the honeycomb plate's modal frequencies.
Abstract: Automatic segmentation of skin lesions is the first step
towards development of a computer-aided diagnosis of melanoma.
Although numerous segmentation methods have been developed,
few studies have focused on determining the most discriminative
and effective color space for melanoma application. This paper
proposes a novel automatic segmentation algorithm using color space
analysis and clustering-based histogram thresholding, which is able to
determine the optimal color channel for segmentation of skin lesions.
To demonstrate the validity of the algorithm, it is tested on a set of 30
high resolution dermoscopy images and a comprehensive evaluation
of the results is provided, in which borders manually drawn by four dermatologists are compared to the automated borders detected by the proposed algorithm. The evaluation is carried out by applying three previously used metrics (accuracy, sensitivity, and specificity) and a new metric of similarity. Through ROC analysis and ranking of the metrics, it is shown that the best results are obtained with the X and XoYoR color channels, which yield an accuracy of approximately 97%. The proposed method is also compared with two state-of-the-art skin lesion segmentation methods, demonstrating the effectiveness and superiority of the proposed segmentation method.
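The general idea of clustering-based histogram thresholding on a single color channel can be sketched with an Otsu-style between-class-variance criterion. This is a hedged illustration: the paper's channel-selection step and exact clustering scheme are not reproduced here.

```python
# Otsu-style histogram thresholding on one color channel: choose the
# threshold that maximizes the between-class variance of the two pixel
# clusters (a sketch of clustering-based thresholding, not the paper's
# exact algorithm or its X / XoYoR channel selection).
def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]              # pixels in the dark class (<= t)
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        w1 = total - w0
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy channel: dark "lesion" pixels versus bright "skin" pixels.
channel = [10] * 50 + [12] * 50 + [200] * 80 + [210] * 70
t = otsu_threshold(channel)
```

Pixels at or below the returned threshold form one cluster (e.g., the lesion) and the rest form the other.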
Abstract: In this paper the design of maximally flat linear phase
finite impulse response (FIR) filters is considered. The problem is
handled with two entirely different approaches. The first is a completely deterministic numerical approach in which the problem is formulated as a Linear Complementarity Problem (LCP). The other is based on a combination of the Markov Random Field (MRF) approach with a messy genetic algorithm (MGA). MRFs are a class of probabilistic models that have been applied for many years to the analysis of visual patterns or textures. Our objective is to establish MRFs as an interesting approach to modeling messy genetic algorithms. We establish a theoretical result that every genetic algorithm problem can be characterized in terms of an MRF model, which allows us to construct an explicit probabilistic model of the MGA fitness function and to introduce the Ising MGA. Experiments with the Ising MGA are less costly than those with the standard MGA, since far fewer computations are involved; the LCP requires the fewest computations of all. Results of the LCP, random search, random seeded search, MGA, and Ising MGA are
discussed.
Abstract: The paper is concerned with relationships between
SSME and ICTs and focuses on the role of Web 2.0 tools in
the service development process. The research presented aims at exploring how collaborative technologies can support and improve service processes, highlighting customer centrality and value co-production. The core idea of the paper is the centrality of user participation, with collaborative technologies as enabling factors; Wikipedia is analyzed as an example. The result of this analysis is the identification and description of a pattern characterising specific services in which users collaborate, by means of web tools, as value co-producers during the service process. This pattern of collaborative co-production, concerning several categories of services including knowledge-based services, is then discussed.
Abstract: Multiprocessor task scheduling is an NP-hard problem, and the Genetic Algorithm (GA) has proven to be an excellent technique for finding an optimal solution. In the past, several GA-based methods have been considered for the solution of this problem, but all of them consider a single criterion. In the present work, minimization of a bi-criteria multiprocessor task scheduling objective is considered, namely the weighted sum of makespan and total completion time. The efficiency and effectiveness of a genetic algorithm can be improved by optimizing its parameters, such as the crossover and mutation operators, crossover probability, and selection function. The effects of the GA parameters on minimization of the bi-criteria fitness function, and the subsequent setting of those parameters, have been determined using the central composite design (CCD) approach of response surface methodology (RSM) from Design of Experiments. The experiments were performed with different levels of the GA parameters, and analysis of variance was performed to identify the parameters significant for minimizing makespan and total completion time simultaneously.
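The weighted-sum bi-criteria objective can be sketched as follows. The schedule encoding (one processor index per task) and the equal weights are illustrative assumptions, not the paper's exact formulation.

```python
# Bi-criteria fitness sketch: weighted sum of makespan and total
# completion time for a simple list schedule. The encoding (task i is
# assigned processor assignment[i], tasks run in list order) and the
# weight w are illustrative assumptions.
def bi_criteria_fitness(assignment, durations, n_procs, w=0.5):
    finish = [0.0] * n_procs       # running finish time per processor
    completions = []
    for task, proc in enumerate(assignment):
        finish[proc] += durations[task]
        completions.append(finish[proc])
    makespan = max(finish)
    total_completion = sum(completions)
    return w * makespan + (1 - w) * total_completion

# 4 tasks on 2 processors: tasks 0 and 2 on proc 0, tasks 1 and 3 on proc 1.
f = bi_criteria_fitness([0, 1, 0, 1], [3.0, 2.0, 4.0, 5.0], 2)
```

A GA would minimize this fitness over candidate assignments; here makespan is 7.0 and total completion time is 19.0, giving f = 13.0 with equal weights.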
Abstract: Management Control (MC) and Internal Control (IC): what is the relationship? This empirical study into the definitions of MC and IC finds that, under the wider interpretations of the terms Internal Control and Management Control, attention is focused not only on the financial aspects of the business but also on softer aspects such as culture, behaviour, standards and values, whereas the narrower interpretations of Management Control focus mainly on the hard, financial aspects of business operation. The definitions of Management Control and Internal Control are often used interchangeably, and the results of this empirical study reveal that Management Control is part of Internal Control; there is no causal link between the two concepts. Based on the interpretation of the respondents, the term Management Control has moved from a broad term to a more limited one concerned with the soft aspects of influencing behaviour, performance measurement, incentives and culture. This paper is an exploratory study based on qualitative research and on a qualitative matrix-method analysis of the thematic definitions of the terms Management Control and Internal Control.
Abstract: A reliability, availability and maintainability (RAM) model has been built for an acid gas removal plant, for system analysis that will play an important role in any process modifications required to achieve optimum performance. Due to the complexity of the plant, the model was based on a Reliability Block Diagram (RBD) with a Monte Carlo simulation engine. The model has been validated against actual plant data as well as local expert opinion, resulting in an acceptable simulation model. The results from the model showed that operation and maintenance can be further improved, leading to a reduction in the annual production loss.
Abstract: In this paper, the requirement for coke quality prediction, its role in blast furnaces, and the model output are explained. Applying the Artificial Neural Network (ANN) method with the back-propagation (BP) algorithm, a prediction model has been developed to predict CSR. Important blast furnace functions such as permeability, heat exchange, melting, and reducing capacity are closely connected to coke quality, which in turn depends on coal characterization and coke-making process parameters. The ANN model developed is a useful tool for process experts to adjust the control parameters in case of coke quality deviations. The model also makes it possible to predict CSR for new coal blends that are yet to be used in the coke plant. The input data to the model were structured into 3 modules covering the past 2 years, and the incremental models thus developed assist in identifying the group causing a deviation of CSR.
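A back-propagation update of the kind the abstract refers to can be sketched on a tiny network. The architecture, data, and learning rate below are illustrative assumptions, not the paper's CSR model.

```python
import math
import random

# Minimal back-propagation sketch (2 inputs -> 3 sigmoid hidden units ->
# 1 linear output). Illustrative only; not the paper's CSR model.
random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
W2 = [random.uniform(-1, 1) for _ in range(3)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return h, sum(w * hi for w, hi in zip(W2, h))

def train_step(x, target, lr=0.1):
    h, y = forward(x)
    err = y - target                          # d(loss)/dy for squared error
    for j in range(3):
        grad_h = err * W2[j] * h[j] * (1 - h[j])  # error propagated back
        W2[j] -= lr * err * h[j]                  # output-layer update
        for i in range(2):
            W1[j][i] -= lr * grad_h * x[i]        # hidden-layer update

# Hypothetical training data standing in for coal/process inputs vs CSR.
data = [([0.0, 0.0], 0.0), ([1.0, 0.0], 1.0), ([0.0, 1.0], 1.0)]

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

loss_before = total_loss()
for _ in range(2000):
    for x, t in data:
        train_step(x, t)
loss_after = total_loss()
```

Repeated BP passes drive the squared error down, which is the mechanism the abstract relies on for CSR prediction.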
Abstract: In this paper, the signal transmission analysis of a semicircle-shaped via structure for differential pairs is presented in the frequency range up to 10 GHz. In order to improve the signal transmission properties of differential pairs, a single via is separated centrally into two semicircle-shaped sections, which are interconnected with the traces of the differential pair respectively. This via structure makes it possible to route a differential pair using only one via. In addition, it can reduce the impedance discontinuity around the via region and thereby enhance the signal transmission properties of the differential pair. Electrical analyses, including S-parameter calculation and eye-diagram simulation, have been performed to investigate the improvement of the signal transmission property of differential pairs with the new via structure.
Abstract: Validation of an automation system is an important issue. The goal is to check that the system under investigation, modeled by a Petri net, never enters undesired states. Usually, tools dedicated to Petri nets, such as DESIGN/CPN, are used for reachability analysis. The biggest problem with this approach is that it is often impossible to generate the full occurrence graph of the system because it is too large. In this paper, we show how computational methods such as temporal logic model checking and Groebner bases can be used to verify the correctness of the design of an automation system. We report our experimental results with two automation systems: an Automated Guided Vehicle (AGV) system and a traffic light system. Validation of these two systems took from 10 to 30 seconds on a PC, depending on the optimization parameters.
Abstract: In research on natural ventilation and on passive cooling with forced convection, it is essential to know how heat flows in a solid object, the pattern of temperature distribution on its surfaces, and eventually how air flows through and convects heat from the surfaces of steel under a roof. This paper presents results from a computational fluid dynamics (CFD) program comparing natural ventilation and forced convection within a roof attic that receives solar radiation directly. The CFD program for modeling air flow inside the roof attic has been adapted to handle two cases: in the first case, analyzed under natural ventilation, the roof attic is a closed area; in the second, analyzed under forced convection, it is an open area. Both cases provide predictions of quantities such as the temperature, pressure, and mass flow rate distributions within the roof attic. The comparison shows that this CFD program is an effective model for predicting the air flow, temperature, and heat transfer coefficient distributions within the roof attic. The results show that forced convection can help reduce heat transfer through the roof attic, and that the area around the steel core has a lower inner-zone temperature than under natural ventilation. The temperature difference at the steel core of the roof attic between the two cases was 10-15 K.
Abstract: The development of signal compression algorithms is making impressive progress. These algorithms are continuously improved by new tools and aim to reduce, on average, the number of bits necessary for the signal representation while minimizing the reconstruction error. This article proposes the compression of Arabic speech signals by a hybrid method combining the wavelet transform and linear prediction. The adopted approach rests, on one hand, on the decomposition of the original signal by means of analysis filters, followed by the compression stage, and, on the other hand, on the application of order-5 linear prediction to the compressed signal coefficients. The aim of this approach is the estimation of the prediction error, which is then coded and transmitted; the decoding operation is used to reconstitute the original signal. Thus, an adequate choice of the filter bank used for the transform is necessary to increase the compression rate while inducing a distortion that is imperceptible from an auditory point of view.
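The analysis/synthesis filter-bank stage can be illustrated with a one-level Haar decomposition. This is a hedged stand-in: the paper's actual filter bank and its order-5 linear predictor are not reproduced here.

```python
# One-level Haar analysis/synthesis filter bank: the analysis step splits
# the signal into a low-pass (average) and high-pass (difference) branch,
# and the synthesis step reconstructs it exactly. A sketch standing in
# for the paper's analysis filters, not its actual filter bank.
def haar_analyze(signal):
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_synthesize(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])   # invert the average/difference pair
    return out

x = [4.0, 2.0, 5.0, 7.0, 1.0, 1.0]   # even-length toy signal
a, d = haar_analyze(x)
y = haar_synthesize(a, d)            # perfect reconstruction: y == x
```

In a compression scheme, the detail coefficients d (and a predictor's residual) would be quantized and coded rather than stored exactly.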
Abstract: Fluid mechanics principles are used extensively in designing axial flow fans and their associated equipment. This paper presents computational fluid dynamics (CFD) modeling of the air flow distribution from a radiator axial flow fan used in an acid pump truck Tier 4 (APT T4) Repower. This axial flow fan augments the transfer of heat from the engine mounted on the APT T4.
The CFD analysis was performed for an area-weighted average static pressure difference between the inlet and outlet of the fan. Pressure contours, velocity vectors, and path lines were plotted to detail the flow characteristics for different orientations of the fan blade. The results were then compared and verified against known theoretical observations and actual experimental data. This study
shows that a CFD simulation can be very useful for predicting and understanding the flow distribution from a radiator fan for further
research work.
Abstract: This paper presents image compression with a wavelet-based method. The wavelet transform divides the image into low-pass and high-pass filtered parts. The traditional JPEG compression technique requires less computation power with feasible losses when only compression is needed; however, there is a clear need for wavelet-based methods in certain circumstances. These methods are intended for applications in which image analysis is done in parallel with compression. Furthermore, the high-frequency bands can be used to detect changes or edges, and wavelets enable hierarchical analysis of the low-pass filtered sub-images: the first analysis can be done on a small image, and only if anything interesting is found is the whole image processed or reconstructed.
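The split into low-pass and high-pass parts can be sketched with a one-level 2D Haar decomposition. This is an illustrative sketch of the sub-band idea, not the paper's codec.

```python
# One-level 2D Haar decomposition into LL/LH/HL/HH sub-bands. LL is the
# low-pass sub-image used for hierarchical analysis; LH/HL/HH carry the
# edge/change information. A sketch of the sub-band idea, not the
# paper's compression method. Assumes even image dimensions.
def haar2d(img):
    h, w = len(img), len(img[0])
    LL, LH, HL, HH = [], [], [], []
    for r in range(0, h, 2):
        ll, lh, hl, hh = [], [], [], []
        for c in range(0, w, 2):
            a, b = img[r][c], img[r][c + 1]
            e, f = img[r + 1][c], img[r + 1][c + 1]
            ll.append((a + b + e + f) / 4)   # low-pass: 2x2 average
            lh.append((a - b + e - f) / 4)   # horizontal detail
            hl.append((a + b - e - f) / 4)   # vertical detail
            hh.append((a - b - e + f) / 4)   # diagonal detail
        LL.append(ll); LH.append(lh); HL.append(hl); HH.append(hh)
    return LL, LH, HL, HH

img = [[8.0, 8.0, 0.0, 4.0],
       [8.0, 8.0, 0.0, 0.0],
       [2.0, 2.0, 6.0, 6.0],
       [2.0, 2.0, 6.0, 6.0]]
LL, LH, HL, HH = haar2d(img)
```

The LL sub-image is a half-resolution copy of the original, so the first-pass analysis the abstract describes can run on LL alone before deciding whether to process the full image.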