Abstract: Prediction of the highly non-linear behavior of suspended
sediment flow in rivers is of prime importance in the field of water
resources engineering. In this study, the predictive performance of
two Artificial Neural Networks (ANNs), namely the Radial Basis
Function (RBF) network and the Multi-Layer Feed-Forward (MLFF)
network, has been compared. Time series data of daily suspended
sediment discharge and water discharge at the Pari River were used
for training and testing the networks. A number of statistical
parameters, i.e. root mean square error (RMSE), mean absolute error
(MAE), coefficient of efficiency (CE) and coefficient of determination
(R2), were used for performance evaluation of the models. Both
models produced satisfactory results and showed good agreement
between the predicted and observed data. The RBF network model
provided slightly better results than the MLFF network model in
predicting suspended sediment discharge.
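The four statistics named above have standard definitions; as a minimal sketch (an illustration, not the authors' code), they can be computed from paired observed/predicted series as follows:

```python
import math

def evaluation_metrics(observed, predicted):
    """RMSE, MAE, Nash-Sutcliffe coefficient of efficiency (CE) and R2
    for a pair of observed/predicted series of equal length."""
    n = len(observed)
    mean_obs = sum(observed) / n
    mean_pred = sum(predicted) / n
    sse = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    rmse = math.sqrt(sse / n)
    mae = sum(abs(o - p) for o, p in zip(observed, predicted)) / n
    # CE: 1 minus the error variance relative to the observed variance
    ce = 1.0 - sse / sum((o - mean_obs) ** 2 for o in observed)
    # R2: squared Pearson correlation between observed and predicted
    cov = sum((o - mean_obs) * (p - mean_pred)
              for o, p in zip(observed, predicted))
    var_o = sum((o - mean_obs) ** 2 for o in observed)
    var_p = sum((p - mean_pred) ** 2 for p in predicted)
    r2 = cov * cov / (var_o * var_p)
    return {"RMSE": rmse, "MAE": mae, "CE": ce, "R2": r2}
```

A perfect prediction gives RMSE = MAE = 0 and CE = R2 = 1; CE can go negative when the model is worse than predicting the observed mean.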
Abstract: We suggest a novel method to incorporate long-term
redundancy (LTR) in time-domain signal compression methods.
The method is based on block-sorting and curve simplification,
and is illustrated on the ECG signal as a post-processor for the
FAN method. Tests of the resulting FAN+ method on the
MIT-BIH database show a substantial improvement in the
compression ratio-distortion behavior, yielding a higher-quality
reconstructed signal.
Abstract: A comparative study on the feasibility of producing instant high-fibre plantain flour for diabetic fufu was conducted by blending soy residue with different plantain (Musa spp) varieties (Horn, False Horn and French), all sieved at 60 mesh and mixed in a ratio of 60:40. The blends were analyzed for their pasting properties using standard analytical methods. Results show that VIIIS60 had the highest peak viscosity (303.75 RVU), trough value (182.08 RVU) and final viscosity (284.50 RVU), and the lowest breakdown viscosity (79.58 RVU), setback value (88.17 RVU), peak time (4.36 min) and pasting temperature (81.18°C), and differed significantly (p
Abstract: Counting cell colonies is a long and laborious process
that depends on the judgment and ability of the operator, whose
accuracy can vary with fatigue. Moreover, since this activity is
time-consuming, it can limit the usable number of dishes per
experiment. For these reasons, an automatic cell colony counting
system is needed. This article introduces a new automatic counting
system based on the processing of digital images of cellular
colonies grown on Petri dishes. The system is mainly based on
region-growing algorithms for the recognition of the regions of
interest (ROI) in the image and a Sanger neural network for the
characterization of such regions. The best final classification is
supplied by a Feed-Forward Neural Network (FF-NN) and compared
with the K-Nearest Neighbour (K-NN) classifier and a Linear
Discriminant Function (LDF). Preliminary results are presented.
Abstract: A study was conducted in a greenhouse environment to
determine the response of five tissue-cultured date palm cultivars,
Al-Ahamad, Nabusaif, Barhee, Khalas, and Kasab, to irrigation water
salinity of 1.6, 5, 10, or 20 dS/m. The salinity level of 1.6 dS/m was
used as a control. The effects of high salinity on plant survival were
manifested from 360 days after planting (DAP) onwards. Three
cultivars, Khalas, Kasab and Barhee, were able to tolerate the
10 dS/m salinity level at 24 months after the start of the study.
Khalas tolerated the highest salinity level of 20 dS/m, while
'Nabusaif' was found to be the least tolerant cultivar. The average
height of the palms and the number of fronds decreased with
increasing salinity levels as time progressed.
Abstract: The dynamics of the interaction between 1064 nm laser
radiation and a metal target in water was studied by applying the
Mach-Zehnder interference technique. A mechanism for generating
the well-developed evaporation regime of a metal surface and a
spherical shock wave in water is proposed. Critical NIR intensities
for the well-developed evaporation of silver and gold targets were
determined. The dynamics of the shock waves was investigated at
early (tens of nanoseconds) and late (hundreds of nanoseconds)
time delays. A transparent expanding object consisting of plasma,
vapor and compressed water was visualized and measured. The
thickness of the compressed water layer and the pressures behind
the shock wave front at later time delays were obtained from the
optical treatment of the interferograms.
Abstract: A dead leg is a typical subsea production system
component. CFD is required to model heat transfer within the dead
leg; unfortunately, its solution is time-demanding and thus not
suitable for fast prediction or repeated simulations. Therefore, there
is a need to create a thermal FEA model mimicking the heat flows
and temperatures seen in CFD cool-down simulations.
This paper describes the conventional way of tuning and a new
automated way using parametric model order reduction (PMOR)
together with an optimization algorithm. The tuned FE analyses
replicate the steady-state CFD parameters within a maximum
heat-flow error of 6% and 3% using the manual and PMOR methods,
respectively. During cool-down, the relative error in temperature of
the tuned FEA models with respect to the CFD is below 5%. In
addition, the PMOR method obtained the correct FEA setup
five times faster than manual tuning.
Abstract: Software project effort estimation is frequently seen
as complex and expensive for individual software engineers.
Software production is in crisis: it suffers from excessive costs and
is often out of control. It has been suggested that software
production is out of control because we do not measure; you cannot
control what you cannot measure. During the last decade, a
number of studies on cost estimation have been conducted.
Metric-set selection has a vital role in software cost estimation
studies, but its importance has been ignored, especially in
neural-network-based studies. In this study, we have explored the
reasons for those disappointing results and implemented different
neural network models using an augmented set of new metrics. The
results obtained are compared with previous studies using
traditional metrics. To enable comparisons, two types of data have
been used. The first part of the data is taken from the Constructive
Cost Model (COCOMO'81), which is commonly used in previous
studies, and the second part is collected according to the new
metrics in a leading international company in Turkey. The accuracy
of the selected metrics and the data samples is verified using
statistical techniques. The model presented here is based on the
Multi-Layer Perceptron (MLP). Another difficulty associated with
cost estimation studies is that data collection requires time and
care. To make more thorough use of the samples collected, the
k-fold cross-validation method is also implemented. It is concluded
that, as long as an accurate and quantifiable set of metrics is
defined and measured correctly, neural networks can be applied in
software cost estimation studies with success.
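The k-fold cross-validation mentioned above can be sketched generically as follows (an illustration, not the authors' implementation; `train_fn` and `error_fn` stand in for the MLP training and error-measurement routines):

```python
import random

def k_fold_indices(n_samples, k, seed=0):
    """Split sample indices into k disjoint folds of near-equal size."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(samples, k, train_fn, error_fn):
    """Train on k-1 folds, test on the held-out fold, rotate;
    return the mean test error over the k folds."""
    folds = k_fold_indices(len(samples), k)
    errors = []
    for test_idx in folds:
        train = [samples[j] for f in folds if f is not test_idx for j in f]
        test = [samples[j] for j in test_idx]
        model = train_fn(train)
        errors.append(error_fn(model, test))
    return sum(errors) / k
```

Every sample is used for testing exactly once, which is what makes the method economical when, as here, data collection is expensive.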
Abstract: One of the main problems in a steel strip manufacturing
line is the breakage of the welds made between steel coils, which
are used to produce the continuous strip to be processed. A weld
breakage stops the manufacturing line for several hours, during
which the damage caused by the breakage must be repaired; after
the repair, the line must be restarted before production can resume.
To minimize this problem, a human operator must inspect each weld
visually and manually in order to avoid its breakage during the
manufacturing process. The work presented in this paper is based
on Bayesian decision theory and presents an approach to detect, in
real time, defective steel strip welds. This approach is based on
quantifying the tradeoffs between various classification decisions
using probability and the costs that accompany such decisions.
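The minimum-risk rule at the heart of this approach picks, for each observation, the action with the smallest expected cost. A minimal sketch (the states, actions, probabilities and cost values below are illustrative assumptions, not figures from the paper):

```python
def bayes_decision(posteriors, costs):
    """Choose the action with minimum conditional risk.

    posteriors: dict state -> P(state | observation)
    costs: dict (action, state) -> cost of taking `action` in `state`
    """
    actions = {a for a, _ in costs}
    def risk(action):
        return sum(costs[(action, s)] * p for s, p in posteriors.items())
    return min(actions, key=risk)

# Illustrative numbers (not from the paper): letting a defective weld
# through is far costlier than rejecting a good one, which shifts the
# decision threshold toward rejection.
posteriors = {"defective": 0.2, "good": 0.8}
costs = {("reject", "defective"): 0,  ("reject", "good"): 1,
         ("accept", "defective"): 20, ("accept", "good"): 0}
decision = bayes_decision(posteriors, costs)
```

With these numbers the conditional risks are 0.8 for rejecting and 4.0 for accepting, so the weld is rejected even though "good" is the more probable state.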
Abstract: The ''cocktail party problem'' is well known as one of the human auditory abilities: we can recognize a specific sound that we want to listen to even when many undesirable sounds or noises are mixed with it. Blind source separation (BSS) based on independent component analysis (ICA) is one of the methods by which we can separate a specific signal from mixed signals under simple hypotheses. In this paper, we propose an online approach to blind source separation using the sliding DFT and time-domain independent component analysis. The proposed method reduces computational complexity in comparison with conventional methods and can be applied to parallel processing using digital signal processors (DSPs). We evaluate this method and demonstrate its practicality.
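The sliding DFT named above is what makes the approach cheap online: it updates one DFT bin per incoming sample in O(1) work via the recurrence X_k(n) = e^{j2πk/N}(X_k(n−1) + x(n) − x(n−N)). A minimal sketch of that recurrence (illustration only; the paper's ICA stage is not shown):

```python
import cmath

def sliding_dft(x, N, k):
    """Track DFT bin k of the most recent N samples with O(1) work
    per sample, via X_k(n) = exp(j*2*pi*k/N) * (X_k(n-1) + x[n] - x[n-N]).
    Yields the bin value once the first full window is available."""
    twiddle = cmath.exp(2j * cmath.pi * k / N)
    X = 0j
    window = [0.0] * N            # circular buffer of the last N samples
    for n, sample in enumerate(x):
        oldest = window[n % N]    # x[n-N] (zero before the window fills)
        window[n % N] = sample
        X = twiddle * (X + sample - oldest)
        if n >= N - 1:
            yield X
```

Each step costs one complex multiply and two additions, versus O(N log N) for recomputing an FFT per sample, which is the source of the complexity reduction claimed above.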
Abstract: Clustering is the process of subdividing an input data set into a desired number of subgroups so that members of the same subgroup are similar and members of different subgroups have diverse properties. Many heuristic algorithms have been applied to the clustering problem, which is known to be NP-hard. Genetic algorithms have been used in a wide variety of fields to perform clustering; however, the technique normally has a long running time with respect to input set size. This paper proposes an efficient genetic algorithm for clustering very large data sets, especially image data sets. The genetic algorithm uses time-efficient techniques along with preprocessing of the input data set. We test our algorithm on both artificial and real image data sets, both of large size. The experimental results show that our algorithm outperforms the k-means algorithm in terms of running time as well as the quality of the clustering.
Abstract: This article is dedicated to the development of
mathematical models for determining the dynamics of the
concentration of hazardous substances in the urban turbulent
atmosphere. Developing the mathematical models implied
taking into account the time-space variability of the
meteorological fields and such properties of the turbulent
atmosphere as its vortex nature, non-linearity, dissipativity and
diffusivity. The turbulent airflow velocity is not assumed to be
known when developing the model. However, a simplified model
assumes that the ratio of turbulent to molecular diffusion is a
piecewise-constant function that changes with vertical distance
from the earth's surface. Thereby an important assumption of
vertical stratification of urban air, due to atmospheric accumulation
of hazardous substances emitted by motor vehicles, is introduced
into the mathematical model. The suggested simplified non-linear
mathematical model for determining the sought exhaust
concentration at an a priori unknown turbulent flow velocity is
reduced, through a non-degenerate transformation, to a model that
is subsequently solved analytically.
Abstract: On the basis of the linearized Phillips-Heffron model of a single-machine power system, a novel method for designing a unified power flow controller (UPFC) based output feedback controller is presented. The design problem of the output feedback controller for the UPFC is formulated as an optimization problem with a time-domain objective function, which is solved by iteration particle swarm optimization (IPSO), a method with a strong ability to find near-optimal results. To ensure the robustness of the proposed damping controller, the design process takes into account a wide range of operating conditions and system configurations. The simulation results prove the effectiveness and robustness of the proposed method in achieving high power system performance, and show that the controller designed by iteration PSO performs better than one designed by classical PSO.
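For readers unfamiliar with PSO, a minimal global-best variant can be sketched as follows (a generic illustration with a toy objective; the paper's UPFC damping objective and the iteration-PSO modifications are not reproduced here):

```python
import random

def pso_minimize(f, dim, bounds, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Global-best PSO: particles move under inertia plus attraction
    toward their personal best and the swarm's global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the controller-design setting, `f` would be the time-domain objective evaluated by simulating the closed-loop system for a candidate set of feedback gains.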
Abstract: An approach to developing an FPGA implementation of a
flexible-key RSA encryption engine that can be used as a standard
device in secured communication systems is presented. The VHDL
model of this RSA encryption engine has the unique characteristic of
supporting multiple key sizes, and thus can easily fit into systems
that require different levels of security. Simple nested-loop addition
and subtraction have been used to implement the RSA operation.
This has made the processing faster and used a comparatively small
amount of space in the FPGA. The hardware design is targeted at
the Altera STRATIX II device, and it was determined that the
flexible-key RSA encryption engine is best suited to the device
named EP2S30F484C3. The RSA encryption implementation used
13,779 logic elements and achieved a clock frequency of 17.77 MHz.
It has been verified that this RSA encryption engine can perform
32-bit, 256-bit and 1024-bit encryption operations in less than
41.585 us, 531.515 us and 790.61 us, respectively.
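The nested-loop add/subtract idea maps naturally onto modular multiplication by shift-and-add with conditional subtraction, wrapped in square-and-multiply exponentiation. A software sketch of that structure (an illustration of the arithmetic, not the paper's VHDL design):

```python
def mod_mul(a, b, m):
    """Modular multiplication using only doubling (addition) and
    conditional subtraction -- the add/subtract inner loop."""
    result = 0
    a %= m
    while b:
        if b & 1:
            result += a
            if result >= m:
                result -= m
        a += a                 # double a by addition
        if a >= m:
            a -= m
        b >>= 1
    return result

def mod_exp(base, exp, m):
    """Square-and-multiply exponentiation built on mod_mul
    (RSA core operation: c = m^e mod n)."""
    result = 1
    base %= m
    while exp:
        if exp & 1:
            result = mod_mul(result, base, m)
        base = mod_mul(base, base, m)
        exp >>= 1
    return result
```

In hardware, the two loops become nested counters over the key width, which is why the same datapath scales across the 32-bit to 1024-bit key sizes reported above.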
Abstract: Intrusion detection systems (IDS) are significant in
network security. An IDS detects and identifies intrusion behavior or
intrusion attempts in a computer system by monitoring and
analyzing network packets in real time. In recent years, intelligent
algorithms applied in intrusion detection systems have received
increasing attention with the rapid growth of network security
concerns. An IDS deals with a huge amount of data containing
irrelevant and redundant features, causing slow training and testing,
higher resource consumption, and poor detection rates. Since the
amount of audit data that an IDS needs to examine is very large even
for a small network, classification by hand is impossible. Hence, the
primary objective of this review is to survey the techniques applied
prior to the classification process that suit IDS data.
Abstract: In this paper we present a novel approach to face image coding. The proposed method makes use of features of video encoders such as motion prediction. First, the encoder selects an appropriate prototype from the database and warps it according to the features of the face being encoded. The warped prototype is placed as the first frame (an I frame), and the face being encoded as the second frame (a P frame). Information about feature positions, color change, the selected prototype and the data flow of the P frame is sent to the decoder. The condition is that both the encoder and the decoder own the same database of prototypes. We ran an experiment with the H.264 video encoder and compared the results with those achieved by JPEG and JPEG2000. The results show that our approach is able to achieve a three times lower bitrate and a two times higher PSNR in comparison with JPEG. In comparison with JPEG2000 the bitrate was very similar, but the subjective quality achieved by the proposed method is better.
Abstract: The performance of a limited Round-Robin (RR) rule is
studied in order to clarify the characteristics of a realistic
processor-sharing model. Under the limited RR rule, the processor
allocates to each request a fixed amount of time, called a quantum, in a
fixed order. The number of requests being allocated these quanta is
kept below a fixed value. Arriving requests that cannot be allocated
quanta because of this restriction are queued or rejected. Practical
performance measures, such as the relationships between the quantum
size and the mean sojourn time, the mean number of requests, and the
loss probability, are evaluated via simulation. In the evaluation, the
requested service time of an arriving request is converted into a
number of quanta. One of these quanta is included in each RR cycle,
which is a series of quanta allocated to the requests in a fixed
order. The service time of the arriving request can then be evaluated
using the number of RR cycles required to complete the service, the
number of requests receiving service, and the quantum size. The
increase or decrease in the number of quanta necessary before service
is completed is re-evaluated at the arrival or departure of other
requests. Tracking these events and calculations enables us to analyze
the performance of the limited RR rule. In particular, we obtain the
most suitable quantum size, which minimizes the mean sojourn time,
for the case in which the switching time for each quantum is considered.
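The core mechanics of the rule can be illustrated with a toy simulation (a sketch only: it assumes all requests are present at time 0 and omits the arrivals, admission limit, queueing and loss that the paper evaluates). It shows why a quantum size exists that minimizes the mean sojourn time once per-quantum switching time is counted:

```python
from collections import deque

def rr_mean_sojourn(service_times, quantum, switch_time=0.0):
    """One processor serving a batch of requests round-robin, one
    quantum per request per cycle; `switch_time` is charged for every
    quantum. Returns the mean sojourn (completion) time."""
    queue = deque(enumerate(service_times))
    clock = 0.0
    finish = {}
    while queue:
        i, remaining = queue.popleft()
        slice_ = min(quantum, remaining)
        clock += slice_ + switch_time
        remaining -= slice_
        if remaining > 1e-12:
            queue.append((i, remaining))   # needs more RR cycles
        else:
            finish[i] = clock
    return sum(finish.values()) / len(finish)
```

A smaller quantum shares the processor more fairly but incurs more switching overhead; sweeping `quantum` over a grid and picking the minimizer mimics the search for the most suitable quantum size.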
Abstract: Robot manipulators are highly coupled non-linear
systems; therefore, the real system and the mathematical model of
the dynamics used for control system design are not the same.
Hence, fine-tuning of the controller is always needed, and fast
simulation is desired for better tuning. Since Matlab incorporates
LAPACK to increase the speed of matrix computation, the dynamics
and the forward and inverse kinematics of the PUMA 560 are
modeled in Matlab/Simulink in such a way that all operations are
matrix-based, which gives very low simulation times. This paper
compares PID parameter tuning using the Genetic Algorithm,
Simulated Annealing, Generalized Pattern Search (GPS) and hybrid
search techniques. Controller performance for all these methods is
compared in terms of joint-space ITSE and Cartesian-space ISE for
tracking circular and butterfly trajectories. A disturbance signal is
added to check the robustness of the controllers. The GA-GPS
hybrid search technique shows the best results for tuning the PID
controller parameters in terms of ITSE and robustness.
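ITSE and ISE are standard integral performance criteria, ITSE = ∫ t·e(t)² dt and ISE = ∫ e(t)² dt; from sampled error data they can be approximated with the trapezoidal rule (an illustrative sketch, not the authors' code):

```python
def itse(t, e):
    """Integral of time-weighted squared error, trapezoidal rule
    over samples (t[k], e[k]). Late errors are penalized more."""
    total = 0.0
    for k in range(1, len(t)):
        f0 = t[k - 1] * e[k - 1] ** 2
        f1 = t[k] * e[k] ** 2
        total += 0.5 * (f0 + f1) * (t[k] - t[k - 1])
    return total

def ise(t, e):
    """Integral of squared error over the same samples."""
    return sum(0.5 * (e[k - 1] ** 2 + e[k] ** 2) * (t[k] - t[k - 1])
               for k in range(1, len(t)))
```

The time weighting in ITSE is what makes it a sensible tuning objective here: it tolerates the unavoidable initial tracking error but punishes a controller that lets the error persist.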
Abstract: Application-Specific Instruction-set Processors (ASIPs),
built around Application-Specific Instructions (ASIs), have become
an important design choice for embedded systems due to their
runtime flexibility, which cannot be provided by custom ASIC
solutions. One major bottleneck in maximizing ASIP performance is
the limited data bandwidth between the General-Purpose Register
File (GPRF) and the ASIs. This paper presents Implicit Registers
(IRs) to provide the desired data bandwidth. An ASI input/output
model is proposed to formulate the overhead of the additional data
transfers between the GPRF and the IRs; based on this model, an IR
allocation algorithm is used to achieve better performance by
minimizing the number of extra data-transfer instructions. The
experimental results show a speedup of up to 3.33x compared to the
results without IRs.
Abstract: The protection of parallel transmission lines has been a challenging task due to the mutual coupling between the adjacent circuits of the line. This paper presents a novel scheme for the detection and classification of faults on parallel transmission lines. The proposed approach uses a combination of wavelet transform and neural network to solve the problem. While the wavelet transform is a powerful mathematical tool that can be employed as a fast and very effective means of analyzing power system transient signals, an artificial neural network has the ability to classify non-linear relationships between measured signals by identifying the different patterns of the associated signals. The proposed algorithm consists of a time-frequency analysis of fault-generated transients using the wavelet transform, followed by pattern recognition using an artificial neural network to identify the type of fault. MATLAB/Simulink is used to generate the fault signals and verify the correctness of the algorithm. The adaptive discrimination scheme is tested by simulating different types of fault and varying the fault resistance, fault location and fault inception time on a given power system model. The simulation results show that the proposed scheme for fault diagnosis is able to classify all the faults on the parallel transmission line rapidly and correctly.
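As an illustration of wavelet-based transient features of the kind fed to such a classifier (the paper does not specify its mother wavelet; a one-level Haar transform is used here purely as a sketch), the detail-band energy rises sharply for signals containing fault-like transients:

```python
def haar_dwt(signal):
    """One level of the Haar wavelet transform: returns the
    approximation (low-pass) and detail (high-pass) coefficients."""
    a = [(signal[i] + signal[i + 1]) / 2 ** 0.5
         for i in range(0, len(signal) - 1, 2)]
    d = [(signal[i] - signal[i + 1]) / 2 ** 0.5
         for i in range(0, len(signal) - 1, 2)]
    return a, d

def detail_energy(signal):
    """Energy of the detail band -- a simple indicator of abrupt
    transients that could serve as one input feature to a classifier."""
    _, d = haar_dwt(signal)
    return sum(c * c for c in d)
```

A smooth pre-fault waveform yields near-zero detail energy, while a fault-generated transient concentrates energy in the detail coefficients at the instant of the disturbance, which is what makes such features useful for locating the fault inception time.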