Abstract: Blind signatures were introduced by Chaum. In this scheme, a signer can "sign" a document without knowing its contents. This is particularly important in electronic voting. CryptO-0N2 is an electronic voting protocol developed from CryptO-0N. During its development, the protocol was not furnished with the blind signature requirement, so voters' choices could be determined by the counting center. This paper presents an implementation of blind signatures using the RSA algorithm.
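The textbook RSA blinding step the abstract refers to can be sketched as follows. The key and message values below are toy numbers chosen for illustration (a real system would use large primes and proper padding), not parameters from CryptO-0N2:

```python
# Sketch of Chaum-style RSA blind signing (textbook RSA, no padding).
# Toy key for illustration only; NOT secure.

p, q = 61, 53
n = p * q                          # RSA modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

m = 65     # message (in practice, a hash of the ballot)
r = 100    # voter's random blinding factor, gcd(r, n) == 1

blinded = (m * pow(r, e, n)) % n       # voter blinds the message
s_blind = pow(blinded, d, n)           # signer signs without seeing m
s = (s_blind * pow(r, -1, n)) % n      # voter removes the blinding

# The unblinded value is a valid RSA signature on m:
assert pow(s, e, n) == m % n
assert s == pow(m, d, n)
```

Because (m·r^e)^d = m^d·r (mod n), multiplying by r⁻¹ leaves exactly the ordinary RSA signature m^d, yet the signer never saw m.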
Abstract: Nowadays, driving support systems, such as car navigation systems, are becoming common, and they support drivers in several respects. It is important for driving support systems to detect the state of the driver's consciousness. In particular, detecting driver drowsiness could prevent collisions caused by drowsy driving. In this paper, we discuss various artificial detection methods and processing techniques for detecting driver drowsiness. The system is based on facial image analysis and warns the driver of drowsiness or inattention to prevent traffic accidents.
Abstract: Given a large sparse signal, the goal is to reconstruct the signal precisely and accurately from as few measurements as possible. Although this seems possible in theory, the difficulty lies in building an algorithm that achieves both accuracy and efficiency of reconstruction. This paper proposes a new, proven method to reconstruct sparse signals based on a new method called Least Support Orthogonal Matching Pursuit (LS-OMP), and merges it with the theory of partially known support (PSK) to give a new method called Partially Known Least Support Orthogonal Matching Pursuit (PKLS-OMP).
The new methods rely on a greedy algorithm to compute the support, whose cost depends on the number of iterations. To make it faster, PKLS-OMP adds the idea of partially known support to the algorithm. The methods recover the original signal simply, efficiently, and accurately if the sampling matrix satisfies the Restricted Isometry Property (RIP).
Simulation results also show that the method outperforms many algorithms, especially for compressible signals.
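The greedy baseline these methods extend is plain Orthogonal Matching Pursuit. The sketch below shows only that baseline with a randomly generated sensing matrix; the LS-OMP/PKLS-OMP refinements and partially-known-support step are not reproduced:

```python
# Minimal Orthogonal Matching Pursuit (OMP) sketch: greedily grow the
# support, refit by least squares, repeat. Sizes below are illustrative.
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from y = A x by greedy support selection."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the chosen support, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50)) / np.sqrt(30)   # random sensing matrix
x_true = np.zeros(50)
x_true[[3, 17, 42]] = [1.5, -2.0, 0.7]            # 3-sparse signal
x_hat = omp(A, A @ x_true, k=3)
assert np.allclose(x_hat, x_true, atol=1e-6)
```

Each iteration costs a correlation and a small least-squares solve, which is why the total cost grows with the number of iterations; knowing part of the support in advance lets the algorithm skip iterations.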
Abstract: This paper proposes an algorithm which automatically aligns and stitches component medical (fluoroscopic) images with varying degrees of overlap into a single composite image. The alignment method is based on a similarity measure between the component images. As applied here, the technique is intensity based rather than feature based. It works well in domains where feature-based methods have difficulty, yet it is more robust than traditional correlation. Component images are stitched together using a new triangular-averaging-based blending algorithm. The quality of the resultant image is tested for photometric inconsistencies and geometric misalignments. The method cannot correct rotational, scale, and perspective artifacts.
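The exact weighting of the paper's triangular-averaging blend is not given in the abstract; as an illustrative sketch of the underlying idea, the cross-fade below ramps one image's weight linearly up and the other's down across a horizontal overlap:

```python
# Hedged sketch: linear ("triangular") ramp blending across a horizontal
# overlap between two grayscale images, assumed already aligned.
import numpy as np

def blend_horizontal(left, right, overlap):
    """left, right: 2-D arrays sharing `overlap` columns; returns the mosaic."""
    h, wl = left.shape
    w = np.linspace(0.0, 1.0, overlap)             # triangular ramp weights
    zone = left[:, wl - overlap:] * (1 - w) + right[:, :overlap] * w
    return np.hstack([left[:, :wl - overlap], zone, right[:, overlap:]])

left = np.full((4, 6), 10.0)    # toy component images with constant
right = np.full((4, 6), 20.0)   # intensities, overlapping by 3 columns
mosaic = blend_horizontal(left, right, overlap=3)
assert mosaic.shape == (4, 9)
assert mosaic[0, 0] == 10.0 and mosaic[0, -1] == 20.0   # seams fade smoothly
```

The ramp removes the hard seam a plain paste would leave, which is what the blending step is for; it does nothing about the rotational, scale, and perspective artifacts the abstract excludes.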
Abstract: An evolutionary computing technique for solving initial value problems in ordinary differential equations is proposed in this paper. A neural network is used as a universal approximator, while the adaptive parameters of the neural network are optimized by a genetic algorithm. The solution is obtained on a continuous grid of time instead of the discrete grids of other numerical techniques. A comparison is carried out with classical numerical techniques, and the solution is found with a uniform accuracy of MSE ≈ 10⁻⁹.
Abstract: Process planning and production scheduling play important roles in manufacturing systems. In this paper, a multi-objective mixed integer linear programming model is presented for the integrated planning and scheduling of multiple products. The aim is to find a set of high-quality trade-off solutions. This is a combinatorial optimization problem with a substantially large solution space, suggesting that it is highly difficult to find the best solutions with exact search methods. To address this, a PSO-based algorithm is proposed that fully utilizes PSO's capability for exploratory search and fast convergence. To fit the continuous PSO to the discretely modeled problem, a solution representation is used in the algorithm. Numerical experiments have been performed to demonstrate the effectiveness of the proposed algorithm.
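For reference, the continuous PSO update the paper adapts can be sketched on a toy continuous objective. The discrete solution representation for the scheduling problem is the paper's contribution and is not reproduced; the inertia and acceleration constants below are common textbook choices, not the paper's settings:

```python
# Minimal particle swarm optimization (PSO) sketch on a continuous
# test function. Parameters w, c1, c2 are conventional assumed values.
import numpy as np

def pso(f, dim, n_particles=30, iters=300, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))     # particle positions
    v = np.zeros_like(x)                           # particle velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val                # update personal bests
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy() # update global best
    return gbest, f(gbest)

best, val = pso(lambda p: np.sum((p - 1.0) ** 2), dim=3)
assert val < 1e-3   # converges close to the optimum at (1, 1, 1)
```

Fitting this continuous update to a combinatorial schedule requires exactly the kind of encoding/decoding ("solution representation") the abstract mentions.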
Abstract: Electric impedance imaging is a method of reconstructing the spatial distribution of electrical conductivity inside a subject. In this paper, a new method of electrical impedance imaging using eddy currents is proposed. The eddy current distribution in the body depends on the conductivity distribution and the magnetic field pattern. By changing the position of a magnetic core, a set of voltage differences is measured with a pair of electrodes. This set of voltage differences is used in the image reconstruction of the conductivity distribution. The least square error minimization method is used as the reconstruction algorithm, and the back projection algorithm is used to obtain two-dimensional images. Based on this principle, a measurement system was developed and model experiments were performed with a saline-filled phantom. The shape of each model in the reconstructed image is similar to that of the corresponding model. The results of these experiments confirm that the proposed method is applicable to the realization of electrical impedance imaging.
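In a linearized setting, the least-squares reconstruction step amounts to solving v = S c for the conductivity values c from the measured voltage differences v. The sensitivity matrix S and the sizes below are made-up stand-ins for illustration, not the paper's measurement model:

```python
# Hedged sketch of least square error minimization as a reconstruction
# step: c = argmin ||S c - v||^2 for a linearized, noiseless toy problem.
import numpy as np

rng = np.random.default_rng(2)
S = rng.standard_normal((40, 16))      # assumed sensitivity matrix
c_true = rng.uniform(0.5, 2.0, 16)     # conductivity of 16 image pixels
v = S @ c_true                         # simulated voltage differences

c_hat, *_ = np.linalg.lstsq(S, v, rcond=None)   # least-squares estimate
assert np.allclose(c_hat, c_true)      # exact in the noiseless toy case
```

With real, noisy measurements the system is ill-conditioned and regularization is usually needed; the sketch only shows the minimization principle named in the abstract.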
Abstract: In the framework of adaptive parametric modelling of images, we propose in this paper a new technique based on the Chandrasekhar fast adaptive filter for texture characterization. An Auto-Regressive (AR) linear model of texture is obtained by scanning the image row by row and modelling this data with an adaptive Chandrasekhar linear filter. The characterization efficiency of the obtained model is compared with the model adapted with the Least Mean Square (LMS) 2-D adaptive algorithm and with co-occurrence method features. The comparison criterion is based on the computation of a characterization degree using the ratio of "between-class" variances to "within-class" variances of the estimated coefficients. Extensive experiments show that the coefficients estimated by the Chandrasekhar adaptive filter give better results in texture discrimination than those estimated by the other algorithms, even in a noisy context.
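The between-class/within-class variance ratio used as the characterization degree can be sketched directly. The coefficient values below are synthetic; a higher ratio means the estimated coefficients separate the texture classes better:

```python
# Sketch of a Fisher-style "characterization degree": ratio of
# between-class to within-class variance of one estimated coefficient.
import numpy as np

def characterization_degree(classes):
    """classes: list of 1-D arrays of coefficient values, one per class."""
    means = np.array([c.mean() for c in classes])
    grand = np.concatenate(classes).mean()
    sizes = np.array([len(c) for c in classes])
    between = np.sum(sizes * (means - grand) ** 2) / sizes.sum()
    within = np.sum([((c - c.mean()) ** 2).sum() for c in classes]) / sizes.sum()
    return between / within

a = np.array([1.0, 1.1, 0.9, 1.0])   # synthetic coefficients, class A
b = np.array([3.0, 3.1, 2.9, 3.0])   # class B: well separated from A
c = np.array([1.0, 1.2, 0.8, 1.0])   # class C: overlaps class A
assert characterization_degree([a, b, c]) > characterization_degree([a, c, a])
```

Well-separated classes push the between-class term up and the within-class term down, so coefficients with a larger degree discriminate textures better.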
Abstract: A synchronous network-on-chip using wormhole packet switching and supporting guaranteed-completion best-effort with low-priority (LP) and high-priority (HP) wormhole packet delivery service is presented in this paper. Both our proposed LP and HP message services deliver good quality of service in terms of lossless packet completion and in-order message data delivery. However, the LP message service does not guarantee a minimal completion bound. The HP packets will use 100% of the bandwidth of their reserved links if they are injected from the source node at the maximum injection rate. Hence, the service is suitable for small messages (less than a hundred bytes); otherwise, other HP and LP messages that also require the links will experience relatively high latency, depending on the size of the HP message. The LP packets are routed using a minimal adaptive routing algorithm, while the HP packets are routed using a non-minimal adaptive routing algorithm. Therefore, an additional 3-bit field identifying the packet type is introduced in the packet headers to classify and determine the type of service committed to the packet. Our NoC prototypes have also been synthesized using a 180-nm CMOS standard-cell technology to evaluate the cost of implementing the combination of both services.
Abstract: This paper primarily intends to develop a GIS interface for estimating sequences of stream-flows at ungauged stations based on known flows at gauged stations. The integrated GIS interface is composed of three major steps. In the first, statistical analysis of precipitation characteristics is used to build a multiple linear regression equation for the long-term mean daily flow at ungauged stations. The independent variables in the regression equation are mean daily flow and drainage area. Traditionally, mean flow data are generated using the Thiessen polygon method; however, the method for obtaining mean flow data can be selected by the user, such as Kriging, IDW (Inverse Distance Weighted), and Spline methods, as well as other traditional methods. In the second, the flow duration curve (FDC) at an ungauged station is computed from the FDCs at gauged stations, and the mean annual daily flow is computed by a spatial interpolation algorithm. The third step is to obtain watershed/topographic characteristics, which are the most important factors governing stream-flows. In summary, the simulated daily flow time series are compared with the observed time series. The results using the integrated GIS interface agree closely and fit each other well. Also, the relationship between the topographic/watershed characteristics and the stream-flow time series is highly correlated.
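Of the user-selectable interpolation options mentioned, IDW is simple enough to sketch directly. The station coordinates and flow values below are made-up numbers for illustration, not data from the paper:

```python
# Sketch of Inverse Distance Weighted (IDW) interpolation of mean flow
# at a query point from gauged stations. Coordinates/flows are assumed.
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d == 0):                     # query coincides with a station
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power                   # closer stations weigh more
    return float(np.sum(w * values) / np.sum(w))

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # gauged sites
flows = np.array([5.0, 9.0, 7.0])                            # mean daily flow
q = idw(stations, flows, np.array([1.0, 1.0]))
assert min(flows) <= q <= max(flows)   # IDW stays within the data range
```

Because the estimate is a convex combination of station values, IDW can never extrapolate outside the observed range, unlike Kriging with some variogram models.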
Abstract: In this article, we propose a methodology for the characterization of the suspended matter along the Bay of Algiers. An approach using a multilayer perceptron (MLP), trained by back-propagation of the gradient and optimized by the Levenberg-Marquardt (LM) algorithm, is used. Emphasis was placed on the choice of the components of the training set, for which a comparative study was made of four methods: random selection and three variants of classification by K-means. The samples are taken from a suspended matter image obtained by an analytical model based on polynomial regression, taking in situ measurements into account. The mask which selects the zone of interest (water, in our case) was produced using a multispectral classification by the ISODATA algorithm. To improve the classification result, the mask was cleaned using the tools of mathematical morphology. The results of this study, presented in the form of curves, tables, and images, show the soundness of our methodology.
Abstract: This paper presents a possibilistic (fuzzy) model for the optimal siting and sizing of Distributed Generation (DG) for loss reduction and voltage profile improvement in a power distribution system. The multi-objective problem is developed in two phases. In the first, the set of non-dominated planning solutions is obtained (with respect to the objective functions of fuzzy economic cost and exposure) using a genetic algorithm. In the second phase, one solution from the set of non-dominated solutions is selected as the optimal solution, using a suitable max-min approach. The method can determine the operation mode (PV or PQ) of the DG. Because load uncertainty is considered in this paper, realistic results can be obtained. The whole process has been implemented in the MATLAB7 environment with technical and economic considerations for loss reduction and voltage profile improvement. The validity of the proposed method is verified through a numerical example.
Abstract: This paper evaluates the performance of a novel algorithm for tracking a mobile node, in terms of execution time and root mean square error (RMSE). A particle filter algorithm is used to track the mobile node; in addition, a new technique within the particle filter algorithm is proposed to reduce the execution time. The stationary points were calculated through trilateration and finally by averaging the points collected over a specific time, whereas tracking is done through trilateration as well as the particle filter algorithm. The Wi-Fi signal is used to get an initial guess of the position of the mobile node in the x-y coordinate system. The commercially available software "Wireless Mon" was used to read the Wi-Fi signal strength from the Wi-Fi card. Visual C++ version 6 was used to interact with this software and read only the required data from the log file generated by the "Wireless Mon" software. Results are evaluated through mathematical modeling and MATLAB simulation.
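The trilateration step can be sketched by linearizing the three range-circle equations: subtracting the first circle equation from the other two leaves a 2x2 linear system in (x, y). Anchor positions and the target below are made-up illustration values, not the paper's Wi-Fi measurements:

```python
# Sketch of 2-D trilateration from three anchors with known positions
# and measured ranges, via linearization of the circle equations.
import numpy as np

def trilaterate(anchors, ranges):
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    # (x-xi)^2 + (y-yi)^2 = ri^2; subtract eq. 1 from eqs. 2 and 3
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # assumed access points
target = np.array([3.0, 4.0])
ranges = [np.hypot(*(target - np.array(a))) for a in anchors]
pos = trilaterate(anchors, ranges)
assert np.allclose(pos, target)
```

With noisy RSSI-derived ranges the three circles do not intersect in a point, which is why the paper refines this initial guess with a particle filter.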
Abstract: VoIP networks, as an alternative to the traditional PSTN system, have been implemented in a wide variety of structures with multiple protocols, codecs, and software- and hardware-based distributions. The use of cryptographic techniques lets users communicate securely, but the resulting throughput as well as the QoS parameters are affected by the algorithm used. This paper analyzes VoIP throughput and QoS parameters with different commercial encryption methods. The measurement-based approach uses lab scenarios to simulate LAN and WAN environments. Security mechanisms such as TLS, SIAX2, SRTP, IPSEC, and ZRTP are analyzed with the μ-LAW and GSM codecs.
Abstract: Both image steganography and image encryption have advantages and disadvantages. Steganography allows us to hide a desired image containing confidential information in a cover or host image, while image encryption transforms the desired image into a non-readable, incomprehensible form. Encryption methods are usually much more robust than steganographic ones. However, they have high visibility and can easily provoke attackers, since it is usually obvious from an encrypted image that something is hidden. Combining steganography and encryption covers both of their weaknesses and therefore increases security. In this paper, an image encryption method based on sinc-convolution, using an encryption key of 128-bit length, is introduced. The encrypted image is then hidden in a host image using a modified version of the JSteg steganography algorithm. This method can be applied to almost all image formats, including TIF, BMP, GIF, and JPEG. The experimental results show that our method is able to hide a desired image with high security and low visibility.
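The paper's modified JSteg is not detailed in the abstract; as a hedged illustration of the underlying principle only, the sketch below embeds payload bits into the least significant bits of a grayscale host array (JSteg proper embeds in quantized JPEG DCT coefficients rather than raw pixels):

```python
# Illustrative LSB embedding sketch, NOT the paper's modified JSteg.
# Payload bits here stand in for the sinc-convolution-encrypted image.
import numpy as np

def embed(host, payload_bits):
    flat = host.flatten()                    # flatten() returns a copy
    for i, bit in enumerate(payload_bits):
        flat[i] = (flat[i] & 0xFE) | bit     # overwrite the LSB
    return flat.reshape(host.shape)

def extract(stego, n_bits):
    return [int(v) & 1 for v in stego.flatten()[:n_bits]]

host = np.arange(64, dtype=np.uint8).reshape(8, 8)   # toy 8x8 host image
bits = [1, 0, 1, 1, 0, 0, 1, 0]                      # toy payload bits
stego = embed(host, bits)
assert extract(stego, len(bits)) == bits
# LSB changes alter each pixel by at most 1 -> low visibility
assert int(np.max(np.abs(stego.astype(int) - host.astype(int)))) <= 1
```

The low visibility comes from the ±1 pixel perturbation; the security comes from the payload already being encrypted, matching the abstract's argument for combining the two techniques.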
Abstract: In this study, a network quality of service (QoS) evaluation system is proposed. The system uses a combination of fuzzy C-means (FCM) clustering and a regression model to analyse and assess the QoS in a simulated network. Network QoS parameters of multimedia applications were intelligently analysed by the FCM clustering algorithm. The QoS parameters for each FCM cluster centre were then input to a regression model in order to quantify the overall QoS. The proposed QoS evaluation system provided valuable information about the network's QoS patterns, and based on this information, the overall network QoS was effectively quantified.
Abstract: Recent developments in automotive technology are focused on economy, comfort, and safety. Vehicle tracking and collision detection systems are attracting the attention of many investigators focused on driving safety in the field of automotive mechatronics. In this paper, a vision-based vehicle detection system is presented. The developed system is intended to be used for collision detection and driver alert. The system uses RGB images captured by a camera in a car driven on the highway. Images captured by the moving camera are used to detect the moving vehicles in the image. A vehicle ahead of the camera is detected in daylight conditions. The proposed method detects moving vehicles by subtracting successive images. The plate height of the vehicle is determined using a plate recognition algorithm, and the distance of the moving object is calculated from the plate height. After the distance of the moving vehicle is determined, the relative speed of the vehicle and the Time-to-Collision are calculated using the distances measured in successive images. Results obtained in road tests are discussed in order to validate the use of the proposed method.
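The distance-from-plate-height and Time-to-Collision steps can be sketched with the standard pinhole relation. The focal length, plate height, pixel measurements, and frame interval below are illustrative assumptions, not the paper's calibration:

```python
# Hedged sketch: distance from apparent plate height, then relative
# speed and Time-to-Collision (TTC) from two successive frames.
# distance ≈ focal_length_px * real_plate_height_m / plate_height_px

def distance_m(focal_px, plate_h_m, plate_h_px):
    return focal_px * plate_h_m / plate_h_px

def ttc_s(d_prev, d_curr, dt):
    closing_speed = (d_prev - d_curr) / dt      # m/s, positive when closing
    return d_curr / closing_speed if closing_speed > 0 else float("inf")

f_px, plate_h = 800.0, 0.11          # assumed focal length and plate height
d1 = distance_m(f_px, plate_h, 4.0)  # frame 1: plate 4 px tall -> 22.0 m
d2 = distance_m(f_px, plate_h, 5.0)  # frame 2: plate 5 px tall -> 17.6 m
ttc = ttc_s(d1, d2, dt=0.5)          # frames 0.5 s apart -> TTC = 2.0 s
assert abs(ttc - 2.0) < 1e-6
```

As the vehicle ahead approaches, its plate grows in the image, the estimated distance shrinks, and the ratio of current distance to closing speed gives the TTC used for the driver alert.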
Abstract: This paper focuses on a critical component of situational awareness (SA): the neural control of autonomous constant-depth flight of an autonomous underwater vehicle (AUV). Autonomous constant-depth flight is a challenging but important task for AUVs to achieve a high level of autonomy under adverse conditions. The fundamental requirements for constant-depth flight are knowledge of the depth and a properly designed controller to govern the process. The AUV named VORAM is used as a model for the verification of the proposed hybrid control algorithm. Three neural network controllers, named NARMA-L2 controllers, are designed for fast and stable diving maneuvers of the chosen AUV model. This hybrid control strategy has been verified by simulation of diving maneuvers using the Simulink software package and demonstrated good performance for fast SA in real-time search-and-rescue operations.
Abstract: There are two major variants of the Simplex Algorithm: the revised method and the standard, or tableau, method. Today, all serious implementations are based on the revised method because it is more efficient for sparse linear programming problems. However, there are a number of applications that lead to dense linear problems, so our aim in this paper is to present some computational results on a parallel implementation of the dense Simplex Method. Our implementation runs on an SMP cluster and uses the C programming language and the Message Passing Interface (MPI). Preliminary computational results on randomly generated dense linear programs support our approach.
Abstract: Nowadays, ontologies are the only widely accepted paradigm for the management of sharable and reusable knowledge in a way that allows its automatic interpretation. They are collaboratively created across the Web and used to index, search, and annotate documents. The vast majority of ontology-based approaches, however, focus on indexing texts at the document level. Recently, with the advances in ontological engineering, it became clear that information indexing can largely benefit from the use of general-purpose ontologies, which aid the indexing of documents at the word level. This paper presents a concept indexing algorithm, which adds ontology information to words and phrases and allows full text to be searched, browsed, and analyzed at different levels of abstraction. The algorithm uses a general-purpose ontology, OntoRo, and an ontologically tagged corpus, OntoCorp, both developed for the purpose of this research. OntoRo and OntoCorp are used in a two-stage supervised machine learning process aimed at generating ontology tagging rules. The first experimental tests show a tagging accuracy of 78.91%, which is encouraging in terms of the further improvement of the algorithm.