Abstract: In this paper we present an offline system for the
recognition of handwritten numeric chains. Our work is divided
into two main parts. The first part is the realization of a recognition
system for isolated handwritten digits. In this case the study is
based mainly on evaluating the performance of neural networks
trained with the gradient back-propagation algorithm. The
parameters used to form the input vector of the neural network are
extracted from the binary images of the digits by several methods: the
distribution sequence, the Barr features and the centred moments of
the different projections and profiles. The second part is the
extension of our system to the reading of handwritten numeric
chains consisting of a variable number of digits. The vertical
projection is used to segment the numeric chain into isolated digits, and
every digit (or segment) is presented separately to the input of
the system developed in the first part (the recognition system for
isolated handwritten digits). The recognition result for the
numeric chain is displayed at the output of the global system.
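The segmentation step described above depends only on the column sums of the binary image: columns whose sum is zero are treated as inter-digit whitespace. A minimal sketch of vertical-projection segmentation (an illustration, not the authors' code):

```python
def segment_by_vertical_projection(image):
    """Split a binary image (list of rows of 0/1) into digit segments.

    A column whose sum is zero is treated as inter-digit whitespace;
    each maximal run of non-empty columns becomes one segment
    (start, end) of column indices, end exclusive.
    """
    n_cols = len(image[0])
    projection = [sum(row[c] for row in image) for c in range(n_cols)]
    segments, start = [], None
    for c, s in enumerate(projection):
        if s > 0 and start is None:
            start = c
        elif s == 0 and start is not None:
            segments.append((start, c))
            start = None
    if start is not None:
        segments.append((start, n_cols))
    return segments

# Two "digits" separated by an empty column.
img = [
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
]
print(segment_by_vertical_projection(img))  # → [(0, 2), (3, 4)]
```

Each returned column range is then cropped out and fed to the isolated-digit recognizer.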
Abstract: The steady mixed convection boundary layer flow from
a vertical cone in a porous medium filled with a nanofluid is
numerically investigated using different types of nanoparticles, namely Cu
(copper), Al2O3 (alumina) and TiO2 (titania). The boundary value
problem is solved using the shooting technique after reducing it
to a system of ordinary differential equations. Results of interest for the local
Nusselt number with various values of the constant mixed convection
parameter and nanoparticle volume fraction parameter are evaluated.
It is found that dual solutions exist for a certain range of the mixed
convection parameter.
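The shooting technique mentioned above converts a boundary value problem into an initial value problem and searches for the missing initial condition. The following sketch applies it to a toy linear problem, y'' = -y with y(0) = 0, y(1) = 1, bisecting on the unknown initial slope; the problem and all numerical choices here are illustrative assumptions, not taken from the paper:

```python
import math

def integrate(slope, n=1000):
    """RK4-integrate y'' = -y from x=0 to 1 with y(0)=0, y'(0)=slope; return y(1)."""
    h = 1.0 / n
    y, v = 0.0, slope
    for _ in range(n):
        # state derivative: (y, v)' = (v, -y)
        k1y, k1v = v, -y
        k2y, k2v = v + h/2*k1v, -(y + h/2*k1y)
        k3y, k3v = v + h/2*k2v, -(y + h/2*k2y)
        k4y, k4v = v + h*k3v, -(y + h*k3y)
        y += h/6*(k1y + 2*k2y + 2*k3y + k4y)
        v += h/6*(k1v + 2*k2v + 2*k3v + k4v)
    return y

def shoot(target=1.0, lo=0.0, hi=5.0):
    """Bisect on the initial slope until y(1) hits the target boundary value."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if integrate(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

s = shoot()
print(round(s, 4), round(1 / math.sin(1), 4))  # exact slope is 1/sin(1)
```

For this linear problem y(1) = s·sin(1), so the exact answer is known and bisection converges monotonically; nonlinear problems like the one in the paper use the same structure with a nonlinear root finder.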
Abstract: A dead leg is a typical subsea production system
component. CFD is required to model heat transfer within the dead
leg. Unfortunately, its solution is time-consuming and thus not
suitable for fast prediction or repeated simulations. Therefore there is
a need for a thermal FEA model that mimics the heat flows and
temperatures seen in CFD cool-down simulations.
This paper describes the conventional way of tuning and a new
automated way using parametric model order reduction (PMOR)
together with an optimization algorithm. The tuned FE analyses
replicate the steady-state CFD parameters within a maximum heat-flow
error of 6% and 3% using the manual and PMOR methods,
respectively. During cool-down, the relative error of the tuned FEA
models with respect to temperature is below 5% compared to the
CFD. In addition, the PMOR method obtained the correct FEA setup
five times faster than manual tuning.
Abstract: Software project effort estimation is frequently seen
as complex and expensive for individual software engineers.
Software production is in crisis: it suffers from excessive costs
and is often out of control. It has been suggested that
software production is out of control because we do not measure it,
and you cannot control what you cannot measure. During the last decade, a
number of studies on cost estimation have been conducted. The
metric-set selection has a vital role in software cost estimation
studies, yet its importance has been ignored, especially in neural-network-based
studies. In this study we explore the reasons for those
disappointing results and implement different neural network
models using an augmented set of new metrics. The results obtained are
compared with previous studies using traditional metrics. To be able
to make comparisons, two types of data have been used. The first
part of the data is taken from the Constructive Cost Model
(COCOMO'81) which is commonly used in previous studies and the
second part is collected according to new metrics in a leading
international company in Turkey. The accuracy of the selected
metrics and the data samples are verified using statistical techniques.
The model presented here is based on Multi-Layer Perceptron
(MLP). Another difficulty associated with cost estimation studies
is that data collection requires time and care. To make
more thorough use of the collected samples, the k-fold cross-validation
method is also implemented. It is concluded that, as long as an
accurate and quantifiable set of metrics is defined and measured
correctly, neural networks can be applied successfully in software
cost estimation studies.
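The k-fold cross-validation used above rotates which portion of the data is held out, so every sample is used for both training and testing. A minimal index-partitioning sketch (illustrative, not the study's code):

```python
def k_fold_indices(n_samples, k):
    """Partition sample indices 0..n_samples-1 into k folds;
    yield (train, test) index lists, holding out one fold at a time."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(k_fold_indices(10, 5))
for train, test in splits:
    print(sorted(test), "held out")
```

Each model is trained k times; the reported error is the average over the k held-out folds, which makes better use of a small, expensively collected sample than a single train/test split.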
Abstract: One of the main problems in a steel strip manufacturing
line is the breakage of any weld carried out between the steel coils
that are joined to produce the continuous strip to be processed. A weld
breakage stops the manufacturing line for several hours, during
which the damage caused by the breakage must be repaired.
After the repair, and in order to resume production, a restart
process of the line is necessary. To minimize this
problem, a human operator must inspect each weld visually and manually
in order to avoid its breakage during the manufacturing process.
The work presented in this paper is based on Bayesian decision
theory and presents an approach to detect defective steel strip
welds in real time. This approach is based on quantifying the tradeoffs
between the various classification decisions using probabilities and the
costs that accompany such decisions.
Abstract: The "cocktail party problem" refers to a well-known human auditory ability: we can recognize a specific sound that we want to listen to even when many undesirable sounds or noises are mixed in. Blind source separation (BSS) based on independent component analysis (ICA) is one of the methods by which a particular signal can be separated from a mixture under simple hypotheses. In this paper, we propose an online approach to blind source separation using the sliding DFT and time-domain independent component analysis. The proposed method reduces the computational complexity in comparison with conventional methods, and can be applied to parallel processing using digital signal processors (DSPs). We evaluate this method and show its effectiveness.
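The sliding DFT mentioned in this abstract is what makes the approach cheap online: each frequency bin is updated in O(1) per incoming sample via the recurrence X_new = e^{j2πk/N}(X_old − x_oldest + x_newest), instead of recomputing an O(N) window sum. A minimal sketch of that standard recurrence (not the authors' implementation), checked against the direct DFT:

```python
import cmath

def dft_bin(window, k):
    """Direct DFT of bin k over the given window (O(N) reference)."""
    N = len(window)
    return sum(x * cmath.exp(-2j * cmath.pi * k * m / N)
               for m, x in enumerate(window))

def sliding_dft(signal, N, k):
    """Track DFT bin k over a length-N window sliding one sample at a time.

    Update rule: X_new = e^{j*2*pi*k/N} * (X_old - oldest + newest),
    costing O(1) per step instead of O(N) per window.
    """
    w = cmath.exp(2j * cmath.pi * k / N)
    X = dft_bin(signal[:N], k)          # initialise on the first window
    outputs = [X]
    for n in range(len(signal) - N):
        X = w * (X - signal[n] + signal[n + N])
        outputs.append(X)
    return outputs

sig = [0.3, 1.0, -0.5, 0.8, 0.1, -0.9, 0.4, 0.6]
slid = sliding_dft(sig, N=4, k=1)
direct = [dft_bin(sig[n:n + 4], 1) for n in range(len(sig) - 3)]
print(all(abs(a - b) < 1e-9 for a, b in zip(slid, direct)))  # → True
```

The per-bin independence of the update is also what makes the method easy to spread across parallel DSPs, as the abstract notes.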
Abstract: The influence of eccentric discharge of stored solids in
squat silos has received considerable attention from researchers. However,
the calculation method for lateral pressure under eccentric flow still
needs further study. In particular, the lateral pressure
distribution on the vertical wall could not be accurately characterized,
mainly because of its asymmetry. In order to build a mechanical model
of the lateral pressure, the flow channel and flow pattern of stored solids in
a squat silo are studied. In this paper, based on Janssen's theory, a
method for calculating the lateral static pressure in squat silos after
eccentric discharge is proposed. Formulae are derived for
each of three possible cases. The method also focuses on the
unsymmetrical distribution characteristic of the silo wall normal
pressure. A finite element model is used to analyze and compare the
lateral pressure results, and the numerical results illustrate the
practicability of the theoretical method.
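For context, Janssen's classical theory, on which the above method builds, gives the static lateral wall pressure at depth z as p_h(z) = (γR/μ)(1 − e^{−Kμz/R}); the pressure grows with depth and saturates at γR/μ. A sketch with illustrative (assumed) parameter values, not the paper's case data:

```python
import math

def janssen_lateral_pressure(z, gamma, R, mu, K):
    """Janssen static lateral (horizontal) wall pressure at depth z.

    gamma: bulk unit weight of the stored solid [N/m^3],
    R: hydraulic radius = cross-section area / perimeter [m],
    mu: wall friction coefficient, K: lateral pressure ratio.
    p_h(z) = (gamma*R/mu) * (1 - exp(-K*mu*z/R))
    """
    return gamma * R / mu * (1.0 - math.exp(-K * mu * z / R))

# Illustrative (assumed) values for a granular solid in a squat silo.
gamma, R, mu, K = 8.5e3, 2.0, 0.4, 0.5   # N/m^3, m, -, -
depths = [0.0, 2.0, 5.0, 10.0]
for z in depths:
    p = janssen_lateral_pressure(z, gamma, R, mu, K)
    print(f"z = {z:4.1f} m  p_h = {p / 1e3:7.2f} kPa")
```

The paper's contribution is the asymmetric correction to this symmetric baseline after eccentric discharge, which the sketch does not attempt to reproduce.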
Abstract: Clustering is the process of subdividing an input data set into a desired number of subgroups so that members of the same subgroup are similar and members of different subgroups have diverse properties. Many heuristic algorithms have been applied to the clustering problem, which is known to be NP-hard. Genetic algorithms have been used to perform clustering in a wide variety of fields; however, the technique normally has a long running time with respect to input set size. This paper proposes an efficient genetic algorithm for clustering very large data sets, especially image data sets. The genetic algorithm uses the most time-efficient techniques along with preprocessing of the input data set. We test our algorithm on both artificial and real image data sets, both of large size. The experimental results show that our algorithm outperforms the k-means algorithm in terms of running time as well as the quality of the clustering.
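The paper's optimized algorithm is not reproduced here, but the general idea of genetic-algorithm clustering, chromosomes encoding a cluster label per point, fitness as within-cluster sum of squares, can be sketched on a toy 1-D data set (all operators and parameters below are generic textbook choices, assumed for illustration):

```python
import random

def wcss(data, labels, k):
    """Within-cluster sum of squares for a 1-D data set (lower is better)."""
    total = 0.0
    for c in range(k):
        pts = [x for x, l in zip(data, labels) if l == c]
        if pts:
            mean = sum(pts) / len(pts)
            total += sum((x - mean) ** 2 for x in pts)
    return total

def ga_cluster(data, k, pop_size=30, generations=60, seed=0):
    """Minimal label-encoded GA: truncation selection, one-point
    crossover, single-gene mutation."""
    rng = random.Random(seed)
    pop = [[rng.randrange(k) for _ in data] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: wcss(data, ind, k))
        survivors = pop[:pop_size // 2]           # keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(data))     # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(len(data))          # point mutation
            child[i] = rng.randrange(k)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda ind: wcss(data, ind, k))

data = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]             # two obvious groups
best = ga_cluster(data, k=2)
print(best, round(wcss(data, best, 2), 3))
```

On this well-separated toy set the GA recovers the two groups; the paper's point is making this scale to very large image data, which requires the preprocessing and time-efficient operators it describes.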
Abstract: This article is dedicated to the development of
mathematical models for determining the dynamics of the
concentration of hazardous substances in a turbulent urban
atmosphere. Developing the mathematical models implied
taking into account the space-time variability of the
meteorological fields and such properties of the turbulent atmosphere
as its vortical nature, nonlinearity, dissipativity and diffusivity.
Knowledge of the turbulent airflow velocity is not assumed when developing the
model. However, a simplified model implies that the ratio of turbulent to
molecular diffusion is a piecewise-constant function that
changes with vertical distance from the earth's surface.
Thereby an important assumption of vertical stratification of urban
air due to atmospheric accumulation of hazardous substances
emitted by motor vehicles is introduced into the mathematical
model. Through a non-degenerate transformation, the suggested
simplified non-linear mathematical model for determining the sought
exhaust concentration at an a priori unknown turbulent flow velocity
is reduced to a model which is subsequently solved analytically.
Abstract: This paper considers various channels of gamma-quantum
generation via the interaction of an ultra-short high-power laser
pulse with different targets. We analyse the possibilities of creating
a pulsed gamma-radiation source using laser triggering of some
nuclear reactions and isomer targets. It is shown that a sub-MeV
monochromatic short pulse of gamma radiation with pulse energy at the
sub-mJ level can be obtained from an isomer target irradiated by an intense
laser pulse. For the nuclear reaction channel in light-atom materials, it is
shown that a sub-PW laser pulse gives rise to the formation of about a million
gamma photons of multi-MeV energy.
Abstract: On the basis of the linearized Phillips-Heffron model of a single-machine power system, a novel method for designing a unified power flow controller (UPFC) based output feedback controller is presented. The design problem of the output feedback controller for the UPFC is formulated as an optimization problem with a time-domain-based objective function, which is solved by iteration particle swarm optimization (IPSO), an algorithm with a strong ability to find the optimum. To ensure the robustness of the proposed damping controller, the design process takes into account a wide range of operating conditions and system configurations. The simulation results prove the effectiveness and robustness of the proposed method in achieving a high-performance power system, and show that the controller designed by iteration PSO performs better than classical PSO in finding the solution.
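The IPSO variant used in the paper is not reproduced here, but the classical global-best PSO it extends can be sketched as follows; the test function, bounds and all coefficient values are illustrative assumptions:

```python
import random

def pso(f, dim=2, n_particles=20, iters=100, seed=1):
    """Classical global-best PSO minimising f over [-5, 5]^dim.

    A sketch of the optimiser family the paper builds on; the paper's
    iteration PSO (IPSO) adds refinements not shown here.
    """
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration weights
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, val = pso(lambda x: sum(c * c for c in x))  # sphere test function
print(val)   # close to 0
```

In the damping-controller design, f would instead be the time-domain objective evaluated by simulating the Phillips-Heffron model with candidate feedback gains.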
Abstract: Network security remains a priority for almost all companies. Existing security systems have shown their limits; thus a new type of security system was born: honeypots. Honeypots are programs or dedicated servers intended to attract attackers so that their behaviour can be studied. It is in this context that the leurre.com project, gathering about twenty platforms, was born. This article aims to specify a model of honeypot attacks. Our model describes, on a given platform, the evolution of attacks according to the hour at which they occur. Afterwards, we identify the most attacked services through a study of the attacks on the various ports. Note that this article was elaborated within the framework of research projects on honeypots at LABTIC (Laboratory of Information Technologies and Communication).
Abstract: Performance of a limited Round-Robin (RR) rule is
studied in order to clarify the characteristics of a realistic sharing
model of a processor. Under the limited RR rule, the processor
allocates to each request a fixed amount of time, called a quantum, in a
fixed order. The number of requests being allocated these quanta is
kept below a fixed value. Arriving requests that cannot be allocated
quanta because of this restriction are queued or rejected. Practical
performance measures, such as the relationships between the quantum size
and the mean sojourn time, the mean number of requests, and the loss
probability, are evaluated via simulation. In the evaluation, the
requested service time of an arriving request is converted into a
number of quanta. One of these quanta is included in each RR cycle,
that is, each series of quanta allocated to the requests in a fixed
order. The service time of the arriving request can be evaluated using
the number of RR cycles required to complete the service, the number
of requests receiving service, and the quantum size. The number of
quanta still necessary before a service is completed is then re-evaluated
at the arrival or departure of other requests.
Tracking these events and calculations enables us to analyze the
performance of the limited RR rule. In particular, we obtain the most
suitable quantum size, which minimizes the mean sojourn time, for the
case in which the switching time for each quantum is considered.
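The quantum-by-quantum accounting described above can be illustrated with a toy simulation in which all requests are present at time zero and the switching time is zero (assumptions made here for brevity; the paper also handles arrivals, departures, the admission limit, and switching overhead):

```python
def rr_completion_times(service_times, quantum):
    """Simulate a Round-Robin processor for requests all present at t=0.

    In each RR cycle every unfinished request receives one quantum
    (the last one possibly partial).  Returns per-request completion
    times, which equal sojourn times since all arrivals are at t=0.
    """
    remaining = list(service_times)
    done = [None] * len(service_times)
    t = 0.0
    while any(r > 0 for r in remaining):
        for i, r in enumerate(remaining):        # one RR cycle
            if r <= 0:
                continue
            run = min(quantum, r)
            t += run
            remaining[i] -= run
            if remaining[i] <= 0:
                done[i] = t
    return done

times = rr_completion_times([3.0, 5.0, 2.0], quantum=2.0)
print(times)  # → [7.0, 10.0, 6.0]
```

Tracing the example by hand: cycle 1 serves 2+2+2 (request 2 finishes at t=6), cycle 2 serves 1+2 (request 0 finishes at t=7), cycle 3 serves the last unit of request 1 (t=10), matching the output; short requests finish early, which is the usual motivation for RR sharing.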
Abstract: Robot manipulators are highly coupled nonlinear
systems; therefore the real system and the mathematical model of its
dynamics used for control system design are not the same. Hence, fine-tuning
of the controller is always needed, and for better tuning a fast simulation
speed is desired. Since MATLAB incorporates LAPACK to increase the speed
of matrix computation, the dynamics and the forward and
inverse kinematics of the PUMA 560 are modeled in MATLAB/Simulink in
such a way that all operations are matrix-based, which gives very short
simulation times. This paper compares PID parameter tuning using the
Genetic Algorithm, Simulated Annealing, Generalized Pattern Search
(GPS) and hybrid search techniques. Controller performance for all
these methods is compared in terms of joint-space ITSE and
Cartesian-space ISE for tracking circular and butterfly trajectories.
A disturbance signal is added to check the robustness of the controller.
The GA-GPS hybrid search technique shows the best results for tuning
the PID controller parameters in terms of ITSE and robustness.
Abstract: This paper describes the architectural design
considerations for building a new class of application, the Personal
Knowledge Integrator, and a particular example, the Knowledge Theatre.
It then supports this description with a scenario of a child
acquiring knowledge, showing how this process could be augmented by
the proposed architecture and design of a Knowledge Theatre. David
Merrill's "first principles of instruction" are kept in focus to provide
a background against which to view the learning potential.
Abstract: The stereophotogrammetry modality is gaining more widespread use in the clinical setting. Registration and visualization of these data, in conjunction with conventional 3D volumetric image modalities, provide virtual human data with textured soft tissue together with internal anatomical and structural information. In this investigation, computed tomography (CT) and stereophotogrammetry data are acquired from 4 anatomical phantoms and registered using the trimmed iterative closest point (TrICP) algorithm. This paper fully addresses the issue of imaging artifacts around the stereophotogrammetry surface edge, using the registered CT data as a reference. Several iterative algorithms are implemented to automatically identify and remove stereophotogrammetry surface-edge outliers, improving the overall visualization of the combined stereophotogrammetry and CT data. This paper shows that outliers at the surface edge of stereophotogrammetry data can be successfully removed automatically.
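The core idea of edge-outlier removal, trimming stereophotogrammetry points that are unsupported by the registered reference surface, can be sketched with a simple nearest-neighbour distance test; this is an illustration of the principle, not the paper's iterative algorithms, and the point clouds and threshold are invented:

```python
def remove_edge_outliers(points, reference, threshold):
    """Discard points whose nearest-neighbour distance to the reference
    cloud exceeds `threshold`, mimicking the idea of trimming stereo
    surface points unsupported by the registered CT data."""
    def nn_dist(p, cloud):
        return min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
                   for q in cloud)
    return [p for p in points if nn_dist(p, reference) <= threshold]

reference = [(float(x), 0.0, 0.0) for x in range(5)]       # toy CT "surface"
stereo = [(0.1, 0.05, 0.0),
          (2.0, -0.1, 0.0),
          (4.0, 0.0, 3.0)]   # last point is a far-off edge artifact
kept = remove_edge_outliers(stereo, reference, threshold=0.5)
print(kept)  # the artifact at z = 3.0 is dropped
```

Real implementations use spatial indexing (k-d trees) rather than this O(n·m) scan, and iterate the threshold as the registration improves.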
Abstract: The design of distributed systems involves the
partitioning of the system into components or partitions and the
allocation of these components to physical nodes. Techniques have
been proposed for both the partitioning and allocation process.
However, these techniques suffer from a number of limitations. For
instance, object replication has the potential to greatly improve the
performance of an object-oriented distributed system, but it can be
difficult to use effectively, and few techniques support
the developer in harnessing object replication.
This paper presents a methodological technique that helps
developers decide how objects should be allocated in order to
improve performance in a distributed system that supports
replication. The performance of the proposed technique is
demonstrated and tested on an example system.
Abstract: In this paper we propose a comparison of four content-based objective metrics with the results of subjective tests on 80 video sequences. We also include two objective metrics, VQM and SSIM, in our comparison to serve as "reference" objective metrics, because their pros and cons have already been published. Each video sequence was preprocessed by the region recognition algorithm, and then the particular objective video quality metrics were calculated, i.e. mutual information, angular distance, moment of angle and the normalized cross-correlation measure. The Pearson coefficient was calculated to express each metric's relationship to the accuracy of the model, and the Spearman rank-order correlation coefficient to represent its relationship to monotonicity. The results show that the model with mutual information as the objective metric provides the best results and is suitable for evaluating the quality of video sequences.
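The two evaluation statistics used above are standard. A minimal sketch (toy data, no tie handling) shows why a monotone but non-linear metric scores perfectly on Spearman yet below 1 on Pearson:

```python
def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(v):
    """Rank values 1..n (ties not handled; enough for this sketch)."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rank-order correlation = Pearson computed on the ranks."""
    return pearson(ranks(x), ranks(y))

subjective = [1.0, 2.0, 3.0, 4.0]
metric = [1.0, 4.0, 9.0, 16.0]       # monotone but non-linear in the scores
print(round(pearson(subjective, metric), 3))   # < 1: linear fit penalised
print(round(spearman(subjective, metric), 3))  # → 1.0: perfectly monotonic
```

This is exactly the split the abstract relies on: Pearson measures prediction accuracy of a (linear) model, Spearman measures monotonicity alone.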
Abstract: Reducing river sediment through path correction and
preservation of the river walls leads to a considerable reduction of
sedimentation at pumping stations. Path correction and
wall preservation are not limited to one particular method;
depending on various conditions, a combination of several methods
can be employed. In this article, we review and evaluate
methods for the preservation of river banks in order to reduce sediment.
Abstract: The protection of parallel transmission lines has been a challenging task due to the mutual coupling between the adjacent circuits of the line. This paper presents a novel scheme for the detection and classification of faults on parallel transmission lines. The proposed approach uses a combination of the wavelet transform and a neural network to solve the problem. While the wavelet transform is a powerful mathematical tool that can be employed as a fast and very effective means of analyzing power system transient signals, an artificial neural network has the ability to classify non-linear relationships between measured signals by identifying different patterns in the associated signals. The proposed algorithm consists of time-frequency analysis of fault-generated transients using the wavelet transform, followed by pattern recognition using an artificial neural network to identify the type of fault. MATLAB/Simulink is used to generate fault signals and verify the correctness of the algorithm. The adaptive discrimination scheme is tested by simulating different types of fault and varying the fault resistance, fault location and fault inception time on a given power system model. The simulation results show that the proposed scheme for fault diagnosis is able to classify all the faults on the parallel transmission line rapidly and correctly.
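The role of the wavelet transform in flagging fault-generated transients can be illustrated with one level of the Haar DWT, the simplest wavelet (an illustrative sketch only; the abstract does not state which mother wavelet the scheme uses, and the signal here is invented):

```python
def haar_level1(signal):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail) coefficient lists; large detail
    magnitudes localise abrupt changes such as fault-generated transients.
    """
    inv_sqrt2 = 2 ** -0.5
    approx = [(signal[i] + signal[i + 1]) * inv_sqrt2
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) * inv_sqrt2
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

# A flat "measurement" with a sudden step, standing in for a fault transient.
sig = [1.0] * 7 + [4.0] * 9
_, detail = haar_level1(sig)
spike = max(abs(d) for d in detail)
print([round(d, 3) for d in detail], "max |detail| =", round(spike, 3))
```

All detail coefficients are zero except the one straddling the step, so thresholded detail coefficients give both detection and time localisation; the scheme then hands such wavelet features to the neural network for fault-type classification.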