Abstract: In this paper we present an offline system for the recognition of handwritten numeric chains. Our work is divided into two main parts. The first part is the realization of a recognition system for isolated handwritten digits. Here the study is based mainly on evaluating the performance of neural networks trained with the gradient backpropagation algorithm. The parameters used to form the input vector of the neural network are extracted from the binary images of the digits by several methods: the distribution sequence, the Barr features, and the centered moments of the different projections and profiles. The second part is the extension of our system to the reading of handwritten numeric chains consisting of a variable number of digits. The vertical projection is used to segment the numeric chain into isolated digits, and each digit (or segment) is presented separately to the input of the system developed in the first part (the recognition system for isolated handwritten digits). The recognition result for the numeric chain is displayed at the output of the global system.
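The vertical-projection segmentation described above can be sketched as follows. This is a minimal illustration (the function name and toy image are our own, not the authors' implementation): ink pixels are counted in each column, and the chain is cut wherever the count drops to zero.

```python
import numpy as np

def segment_by_vertical_projection(binary_img):
    """Split a binary image (1 = ink, 0 = background) into digit
    segments wherever the vertical projection drops to zero."""
    projection = binary_img.sum(axis=0)      # ink pixels per column
    segments, start = [], None
    for col, count in enumerate(projection):
        if count > 0 and start is None:
            start = col                      # entering a digit
        elif count == 0 and start is not None:
            segments.append(binary_img[:, start:col])
            start = None                     # leaving a digit
    if start is not None:                    # digit touches the right edge
        segments.append(binary_img[:, start:])
    return segments

# toy chain: two "digits" separated by blank columns
img = np.array([[1, 1, 0, 0, 1],
                [1, 0, 0, 0, 1],
                [1, 1, 0, 0, 1]])
print(len(segment_by_vertical_projection(img)))  # → 2
```

This simple rule works only when adjacent digits do not touch, which is exactly the assumption the abstract makes by segmenting the chain into isolated digits.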
Abstract: This paper focuses on a critical component of situational awareness (SA): the control of autonomous vertical flight for a tactical unmanned aerial vehicle (TUAV). Following the SA strategy, we propose a two-stage flight control procedure that uses two autonomous control subsystems to address the dynamics variation and the difference in performance requirements between the initial and final stages of the flight trajectory, for an unmanned helicopter model with a coaxial rotor and ducted-fan configuration. This control strategy for the chosen TUAV model has been verified by simulating hovering maneuvers in the Simulink software package and demonstrated good performance for fast stabilization of the engines in hovering; consequently, fast SA with energy economy can be attained during search-and-rescue operations.
Abstract: In the world of Peer-to-Peer (P2P) networking, different protocols have been developed to make resource sharing and information retrieval more efficient. The SemPeer protocol is a new layer on Gnutella that transforms the connections of the nodes based on semantic information to make information retrieval more efficient. However, this transformation causes high clustering in the network, which decreases the number of nodes reached and therefore also the probability of finding a document. In this paper we describe a mathematical model of the Gnutella and SemPeer protocols that captures clustering-related issues, followed by a proposed modification of the SemPeer protocol to achieve moderate clustering. This modification is a form of link management for the individual nodes that makes the SemPeer protocol more efficient, because the probability of a successful query in the P2P network is considerably increased. To validate the models, we evaluated a series of simulations that supported our results.
Abstract: Software project effort estimation is frequently seen as complex and expensive for individual software engineers. Software production is in a crisis: it suffers from excessive costs and is often out of control. It has been suggested that software production is out of control because we do not measure; you cannot control what you cannot measure. During the last decade, a number of studies on cost estimation have been conducted. Metric-set selection plays a vital role in software cost estimation studies, yet its importance has been ignored, especially in neural-network-based studies. In this study we explore the reasons for those disappointing results and implement different neural network models using augmented new metrics. The results obtained are compared with previous studies that use traditional metrics. To enable comparisons, two types of data have been used. The first part of the data is taken from the Constructive Cost Model (COCOMO'81), which is commonly used in previous studies, and the second part was collected according to the new metrics at a leading international company in Turkey. The accuracy of the selected metrics and the data samples is verified using statistical techniques. The model presented here is based on a Multi-Layer Perceptron (MLP). Another difficulty associated with cost estimation studies is that data collection requires time and care. To make more thorough use of the collected samples, the k-fold cross-validation method is also implemented. It is concluded that, as long as an accurate and quantifiable set of metrics is defined and measured correctly, neural networks can be applied successfully in software cost estimation studies.
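The k-fold cross-validation mentioned above can be sketched as follows; this is a generic illustration (not the authors' experimental setup): the sample indices are shuffled once and split into k folds, and each fold serves as the held-out test set exactly once, so every sample contributes to both training and testing.

```python
import numpy as np

def k_fold_indices(n_samples, k, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation,
    so that every sample is used for testing exactly once."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)         # shuffle once
    folds = np.array_split(idx, k)           # k nearly equal folds
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, test_idx

# 10 samples, 5 folds: each split trains on 8 samples and tests on 2
for train, test in k_fold_indices(10, 5):
    print(len(train), len(test))             # → 8 2 (five times)
```

In an effort-estimation setting, each `train_idx` would fit the MLP and each `test_idx` would score it; averaging the k scores makes better use of a small, expensive-to-collect sample.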
Abstract: There are various overlay structures that provide efficient and scalable solutions for point and range queries in a peer-to-peer network. The overlay structure based on the m-Binary Search Tree (BST) is one such popular technique. It divides the tree into different key intervals and then assigns the key intervals to a BST. The popularity of the BST makes this overlay structure vulnerable to different kinds of attacks. Here we present four such possible attacks, namely the index poisoning attack, the eclipse attack, the pollution attack, and the SYN flooding attack. The functionality of the BST is affected by these attacks. We also describe different security techniques that can be applied against these attacks.
Abstract: Clustering is the process of subdividing an input data set into a desired number of subgroups so that members of the same subgroup are similar and members of different subgroups have diverse properties. Many heuristic algorithms have been applied to the clustering problem, which is known to be NP-hard. Genetic algorithms have been used to perform clustering in a wide variety of fields; however, the technique normally has a long running time with respect to the input set size. This paper proposes an efficient genetic algorithm for clustering very large data sets, especially image data sets. The genetic algorithm uses the most time-efficient techniques along with preprocessing of the input data set. We test our algorithm on both artificial and real image data sets, both of large size. The experimental results show that our algorithm outperforms the k-means algorithm in terms of running time as well as the quality of the clustering.
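A centroid-based genetic algorithm for clustering can be sketched as follows. This is a generic toy version, not the paper's time-efficient techniques: the representation, operators, and parameters are illustrative assumptions. Each chromosome encodes k candidate centroids, fitness is the within-cluster sum of squared errors, and the population evolves by truncation selection and Gaussian mutation.

```python
import numpy as np

def sse(data, centroids):
    """Within-cluster sum of squared errors: each point is charged
    the squared distance to its nearest centroid (lower is fitter)."""
    d = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).sum()

def ga_cluster(data, k, pop_size=20, generations=50, seed=0):
    rng = np.random.default_rng(seed)
    n = len(data)
    # each chromosome is a set of k centroids sampled from the data
    pop = [data[rng.choice(n, k, replace=False)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: sse(data, c))          # fittest first
        survivors = pop[: pop_size // 2]              # truncation selection
        children = [p + rng.normal(0, 0.1, p.shape)   # Gaussian mutation
                    for p in survivors]
        pop = survivors + children
    return min(pop, key=lambda c: sse(data, c))

# two well-separated blobs around (0, 0) and (5, 5)
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.2, (30, 2)), rng.normal(5, 0.2, (30, 2))])
best = ga_cluster(data, k=2)
```

A real large-scale variant would add crossover and the preprocessing the paper describes; the point here is only the encoding and the fitness function.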
Abstract: This article is dedicated to the development of mathematical models for determining the dynamics of the concentration of hazardous substances in the urban turbulent atmosphere. Developing the mathematical models required taking into account the space-time variability of the meteorological fields and such properties of the turbulent atmosphere as its vortical nature, nonlinearity, dissipativity, and diffusivity. Knowledge of the turbulent airflow velocity is not assumed when developing the model. However, a simplified model assumes that the ratio of turbulent to molecular diffusion is a piecewise-constant function that changes with the vertical distance from the earth's surface. Thereby an important assumption, vertical stratification of urban air due to the atmospheric accumulation of hazardous substances emitted by motor vehicles, is introduced into the mathematical model. The suggested simplified nonlinear mathematical model for determining the sought exhaust concentration at an a priori unknown turbulent flow velocity is reduced, through a non-degenerate transformation, to a model that is then solved analytically.
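Models of this kind are typically built around the advection–diffusion equation. The following generic form, written in our own illustrative notation (not necessarily the article's), shows where the piecewise-constant vertical diffusion coefficient enters:

```latex
\frac{\partial c}{\partial t} + \mathbf{u}\cdot\nabla c
  = \nabla\cdot\bigl(K(z)\,\nabla c\bigr) + Q,
\qquad
K(z) = K_i \quad \text{for } z_{i-1} \le z < z_i ,
```

where \(c\) is the pollutant concentration, \(\mathbf{u}\) the (a priori unknown) turbulent airflow velocity, \(K(z)\) the piecewise-constant diffusion coefficient depending on the vertical coordinate \(z\) (the stratification assumption), and \(Q\) the source term for vehicle emissions.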
Abstract: Network security remains a priority for almost all companies. Existing security systems have shown their limits; thus a new type of security system was born: honeypots. Honeypots are defined as programs or dedicated servers intended to attract attackers in order to study their behaviour. It is in this context that the leurre.com project, gathering about twenty platforms, was born. This article aims to specify a model of honeypot attacks. Our model describes, on a given platform, the evolution of attacks according to the hour at which they occur. Afterwards, we show the most attacked services through a study of the attacks on the various ports. It should be noted that this article was elaborated within the framework of the research projects on honeypots at LABTIC (Laboratory of Information Technologies and Communication).
Abstract: An Intrusion Detection System (IDS) is significant in network security. It detects and identifies intrusion behavior or intrusion attempts in a computer system by monitoring and analyzing network packets in real time. In recent years, intelligent algorithms applied in intrusion detection systems have been of increasing concern with the rapid growth of network security. IDS data involves a huge amount of data containing irrelevant and redundant features, causing slow training and testing, higher resource consumption, and a poor detection rate. Since the amount of audit data that an IDS needs to examine is very large even for a small network, classification by hand is impossible. Hence, the primary objective of this review is to survey the techniques applied prior to the classification process that suit IDS data.
Abstract: In this paper we present a novel approach for face image coding. The proposed method makes use of features of video encoders such as motion prediction. First, the encoder selects an appropriate prototype from the database and warps it according to the features of the face being encoded. The warped prototype is placed as the first frame (an I frame), and the face being encoded is placed as the second frame (a P frame). Information about the feature positions, the color change, the selected prototype, and the data flow of the P frame is sent to the decoder. The condition is that both the encoder and the decoder own the same database of prototypes. We ran an experiment with the H.264 video encoder and compared the obtained results with those achieved by JPEG and JPEG2000. The results show that our approach achieves three times lower bitrate and two times higher PSNR in comparison with JPEG. In comparison with JPEG2000 the bitrate was very similar, but the subjective quality achieved by the proposed method is better.
Abstract: The performance of a limited Round-Robin (RR) rule is studied in order to clarify the characteristics of a realistic processor-sharing model. Under the limited RR rule, the processor allocates to each request a fixed amount of time, called a quantum, in a fixed order. The sum of the requests being allocated these quanta is kept below a fixed value. Arriving requests that cannot be allocated quanta because of this restriction are queued or rejected. Practical performance measures, such as the relationship between the mean sojourn time, the mean number of requests, or the loss probability and the quantum size, are evaluated via simulation. In the evaluation, the requested service time of an arriving request is converted into a number of quanta. One of these quanta is included in each RR cycle, i.e., a series of quanta allocated to the requests in a fixed order. The service time of the arriving request can then be evaluated using the number of RR cycles required to complete the service, the number of requests receiving service, and the quantum size. Any increase or decrease in the number of quanta needed before service is completed is reevaluated at the arrival or departure of other requests. Tracking these events and calculations enables us to analyze the performance of our limited RR rule. In particular, we obtain the most suitable quantum size, which minimizes the mean sojourn time, for the case in which the switching time for each quantum is considered.
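The round-robin mechanism underlying the evaluation can be sketched as follows. This is a simplified toy, not the paper's simulator: it omits the admission limit, the switching time, and arrivals, and assumes all requests are present at time 0.

```python
def rr_sojourn_times(service_times, quantum):
    """Round-robin with a fixed quantum: every unfinished request
    receives one quantum per cycle, in a fixed order, until done.
    Returns the completion (sojourn) time of each request."""
    remaining = list(service_times)
    done = [None] * len(remaining)
    clock = 0.0
    while any(d is None for d in done):
        # one RR cycle: one quantum (or less, if finishing) per request
        for i, r in enumerate(remaining):
            if done[i] is not None:
                continue
            slice_ = min(quantum, r)
            clock += slice_
            remaining[i] = r - slice_
            if remaining[i] <= 0:
                done[i] = clock
    return done

# three requests, quantum 1: the shortest job finishes first
print(rr_sojourn_times([3, 1, 2], quantum=1))  # → [6.0, 2.0, 5.0]
```

Adding a per-quantum switching cost to `clock` reproduces the trade-off the abstract studies: a small quantum favors short jobs but pays more switching overhead, so some intermediate quantum minimizes the mean sojourn time.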
Abstract: Character segmentation is an important preprocessing step for text recognition. In degraded documents, the existence of touching characters decreases the recognition rate drastically for any optical character recognition (OCR) system. In this paper a study of touching Gurmukhi characters is carried out, and these characters have been divided into various categories after a careful analysis. Structural properties of the Gurmukhi characters are used for defining the categories. New algorithms have been proposed to segment the touching characters in the middle zone. These algorithms have shown a reasonable improvement in segmenting the touching characters in degraded Gurmukhi script. The algorithms proposed in this paper are applicable only to machine-printed text.
Abstract: The stereophotogrammetry modality is gaining more widespread use in the clinical setting. Registration and visualization of this data, in conjunction with conventional 3D volumetric image modalities, provides virtual human data with textured soft tissue and internal anatomical and structural information. In this investigation computed tomography (CT) and stereophotogrammetry data is acquired from 4 anatomical phantoms and registered using the trimmed iterative closest point (TrICP) algorithm. This paper fully addresses the issue of imaging artifacts around the stereophotogrammetry surface edge using the registered CT data as a reference. Several iterative algorithms are implemented to automatically identify and remove stereophotogrammetry surface edge outliers, improving the overall visualization of the combined stereophotogrammetry and CT data. This paper shows that outliers at the surface edge of stereophotogrammetry data can be successfully removed automatically.
Abstract: The design of distributed systems involves partitioning the system into components or partitions and allocating these components to physical nodes. Techniques have been proposed for both the partitioning and the allocation process. However, these techniques suffer from a number of limitations. For instance, object replication has the potential to greatly improve the performance of an object-oriented distributed system, but it can be difficult to use effectively, and there are few techniques that support the developer in harnessing object replication.
This paper presents a methodological technique that helps developers decide how objects should be allocated in order to improve performance in a distributed system that supports replication. The performance of the proposed technique is demonstrated and tested on an example system.
Abstract: Application-Specific Instruction (ASI) set Processors (ASIPs) have become an important design choice for embedded systems due to their runtime flexibility, which cannot be provided by custom ASIC solutions. One major bottleneck in maximizing ASIP performance is the limited data bandwidth between the General Purpose Register File (GPRF) and the ASIs. This paper presents Implicit Registers (IRs) to provide the desired data bandwidth. An ASI input/output model is proposed to formulate the overhead of the additional data transfers between the GPRF and the IRs; an IR allocation algorithm then achieves better performance by minimizing the number of extra data-transfer instructions. The experimental results show up to a 3.33x speedup compared to the results without IRs.
Abstract: In order to guarantee secure communication in wireless sensor networks (WSNs), many user authentication schemes have successfully drawn researchers' attention and been studied widely. In 2012, He et al. proposed a robust biometric-based user authentication scheme for WSNs. However, this paper demonstrates that He et al.'s scheme has several drawbacks: a poor reparability problem, a user impersonation attack, and a sensor node impersonation attack.
Abstract: In this paper we propose a comparison of four content-based objective metrics with the results of subjective tests on 80 video sequences. We also include two objective metrics, VQM and SSIM, in our comparison to serve as "reference" objective metrics, because their pros and cons have already been published. Each video sequence was preprocessed by the region recognition algorithm, and then the particular objective video quality metrics were calculated, i.e. mutual information, angular distance, moment of angle, and the normalized cross-correlation measure. The Pearson coefficient was calculated to express each metric's relationship to the accuracy of the model, and the Spearman rank-order correlation coefficient to represent each metric's relationship to monotonicity. The results show that the model with mutual information as the objective metric provides the best results and is suitable for evaluating the quality of video sequences.
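The two correlation coefficients used to assess the metrics can be sketched as follows (the sample scores below are hypothetical, not data from the paper): Pearson measures the linear relationship between metric and subjective score, while Spearman applies Pearson to the ranks and therefore measures monotonicity.

```python
import numpy as np

def pearson(x, y):
    """Linear correlation: covariance normalized by both std devs."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum())

def spearman(x, y):
    """Rank-order correlation: Pearson applied to the ranks, so it
    rewards monotonicity rather than linearity (no ties assumed)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(np.asarray(x)), rank(np.asarray(y)))

metric = [0.1, 0.4, 0.2, 0.9, 0.7]   # hypothetical objective scores
mos    = [1.0, 2.5, 1.8, 4.8, 3.9]   # hypothetical subjective ratings
print(pearson(metric, mos), spearman(metric, mos))
```

Because the two series above are perfectly monotone in each other, Spearman reaches 1.0 even though the relationship is not exactly linear, which is precisely the distinction the paper exploits.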
Abstract: This paper presents a new version of the SVM mixture algorithm initially proposed by Kwok for classification and regression problems. In both cases, a slight modification of the mixture model leads to a standard SVM training problem and the existence of an exact solution, and allows the direct use of well-known decomposition and working-set selection algorithms. Only the regression case is considered in this paper, but classification has been addressed in a very similar way. The method has been successfully applied to modeling engine pollutant emissions.
Abstract: The goal of the study reported in this paper was to determine whether Ambient Occlusion Shading (AOS) has a significant effect on users' perception of American Sign Language (ASL) fingerspelling animations. Seventy-one (71) subjects participated in the study; all subjects were fluent in ASL. The participants were asked to watch forty (40) sign language animation clips representing twenty (20) fingerspelled words. Twenty (20) clips did not show ambient occlusion, whereas the other twenty (20) were rendered using ambient occlusion shading. After viewing each animation, subjects were asked to type the word being fingerspelled and rate its legibility. Findings show that the presence of AOS had a significant effect on the subjects' perception of the signed words. Subjects were able to recognize the animated words rendered with AOS with a higher level of accuracy, and the legibility ratings of the animations showing AOS were consistently higher across subjects.
Abstract: In this study, a new criterion for determining the number of classes into which an image should be segmented is proposed. This criterion is based on discriminant analysis for measuring the separability among the segmented classes of pixels. Based on the new discriminant criterion, two algorithms for recursively segmenting the image into the determined number of classes are proposed. The proposed methods can automatically and correctly segment objects with various illuminations into separate images for further processing. Experiments on the extraction of text strings from complex document images demonstrate the effectiveness of the proposed methods.
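A classic discriminant-analysis criterion of the kind this abstract builds on is Otsu's between-class variance. The sketch below is the standard two-class version, not the authors' multi-class criterion: it selects the threshold that maximizes the separability measure between the two resulting classes of pixels.

```python
import numpy as np

def otsu_threshold(gray):
    """Choose the threshold maximizing the between-class variance
    sigma_B^2 = w0 * w1 * (mu0 - mu1)^2 — the classic discriminant
    measure of separability between two classes of pixels."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                      # gray-level probabilities
    best_t, best_sigma = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        sigma = w0 * w1 * (mu0 - mu1) ** 2
        if sigma > best_sigma:
            best_t, best_sigma = t, sigma
    return best_t

# bimodal toy image: dark background around 40, bright text around 200
rng = np.random.default_rng(0)
img = np.clip(np.concatenate([rng.normal(40, 5, 500),
                              rng.normal(200, 5, 100)]), 0, 255).astype(np.uint8)
t = otsu_threshold(img)
```

A multi-class extension of this separability measure, applied recursively, is the kind of criterion the abstract uses to decide how many classes an image should be split into.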