Abstract: For high-speed networks, resilience against failures is an essential requirement. A key consideration in designing next-generation optical networks is protecting high-capacity WDM networks from failures and restoring them afterwards. Although failures cannot be avoided, quick detection, identification and restoration make networks more robust and reliable. Hence, it is necessary to develop fast, efficient and dependable fault localization and detection mechanisms. In this paper we propose a new fault localization algorithm for WDM networks which can identify the location of a failure on a failed lightpath. Our algorithm detects the failed connection and then attempts to reroute the data stream through an alternate path. In addition, we develop an algorithm to analyze the alarms generated by the components of an optical network in the presence of a fault. It uses alarm correlation to reduce the list of suspected components shown to the network operators. Our simulation results show that the proposed algorithms achieve lower blocking probability and delay while attaining higher throughput.
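To illustrate the alarm-correlation idea, here is a minimal Python sketch (not the paper's algorithm) that narrows the suspect list on a single lightpath. It assumes every component monitors its incoming signal, that monitors downstream of a fault raise alarms, and that a failed component may itself stay silent:

```python
def localize(lightpath, alarming):
    """Narrow the set of suspected components on an ordered lightpath,
    given the set of components that raised alarms."""
    suspects = set(lightpath)
    for i, comp in enumerate(lightpath):
        if comp in alarming:
            # an alarming monitor saw signal loss, so the fault lies
            # at or upstream of it
            suspects &= set(lightpath[:i + 1])
        else:
            # a silent monitor still sees light, so everything strictly
            # upstream of it is healthy (it may itself be the dead unit)
            suspects -= set(lightpath[:i])
    return suspects
```

For a lightpath A-B-C-D-E with alarms from D and E, correlation shrinks the suspect list to the segment between the last silent and the first alarming component.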
Abstract: Automated storage and retrieval systems (AS/RS) have
become widely used in warehouses, as firms' warehouse systems
transition from human-operated forklift applications to fast and safe
AS/RS applications. In this study, the basic components and
automation systems of the AS/RS are examined. The proposed
system's automation components and their tasks in the system control
algorithm are stated, and the control system structure is derived from
this control algorithm.
Abstract: This paper presents dynamic voltage collapse prediction on an actual power system using support vector machines (SVMs).
Dynamic voltage collapse prediction is first determined based on the PTSI calculated from information in dynamic simulation output. Simulations were carried out on a practical 87-bus test system, considering load increase as the contingency. The data collected from the time-domain simulation is then used as input to the SVM, in which support vector regression is used as a predictor to determine the
dynamic voltage collapse indices of the power system. To reduce training time and improve the accuracy of the SVM, the kernel function type and kernel parameter are considered. To verify the
effectiveness of the proposed SVM method, its performance is compared with a multilayer perceptron neural network (MLPNN). Studies show that the SVM gives faster and more accurate results for dynamic voltage collapse prediction than the MLPNN.
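For concreteness, here is a small Python sketch of the Gaussian (RBF) kernel whose width parameter is the quantity being tuned. The kernel-weighted predictor below is a simple Nadaraya-Watson stand-in used for illustration, not the paper's trained support vector regressor:

```python
import math

def rbf_kernel(x, z, sigma):
    """Gaussian kernel K(x, z) = exp(-||x - z||^2 / (2 sigma^2))."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def kernel_predict(x, support, targets, sigma):
    """Kernel-weighted average of training targets: a Nadaraya-Watson
    stand-in for support vector regression."""
    w = [rbf_kernel(x, s, sigma) for s in support]
    return sum(wi * ti for wi, ti in zip(w, targets)) / sum(w)
```

The kernel parameter sigma controls how quickly a training point's influence decays with distance, which is exactly the trade-off between training time and accuracy that the abstract mentions.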
Abstract: Composite steel-concrete slabs using thin-walled
corrugated steel sheets with embossments represent a modern and
effective combination of steel and concrete. However, the design
of new types of sheeting is conditional on the execution of expensive
and time-consuming laboratory testing. The effort to develop
a cheaper and faster method has led to many investigations all over
the world. In our paper we compare the results from our experiments
involving vacuum loading, four-point bending and small-scale shear
tests.
Abstract: In this paper we present an algorithm which enables
near-real-time object tracking in Full HD videos. The frame rate (FR)
of a video stream is considered to be between 5 and 30 frames per
second. Real-time track building is achieved if the algorithm can
follow 5 or more frames per second. The principal idea is to use fast
algorithms during preprocessing to obtain the key points, and to track
them afterwards. The point-matching procedure during assignment
depends strongly on the number of points. Because of this, we have to
limit the number of points, keeping only the most informative ones.
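The point-budgeting step can be sketched in a few lines of Python (illustrative only). Each key point is assumed to carry a detector response score that measures how informative it is, and only the top-scoring points enter the matching stage:

```python
def select_keypoints(points, max_points):
    """Keep only the most informative key points.
    Each point is (x, y, response), where 'response' is the detector's
    score used here as an informativeness measure (an assumption of
    this sketch)."""
    ranked = sorted(points, key=lambda p: p[2], reverse=True)
    return ranked[:max_points]
```

Since the matching cost grows quickly with the number of points, capping the set at a fixed budget is what keeps the per-frame time within the real-time constraint.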
Abstract: The Ministry of Defense (MoD) spends hundreds of
millions of dollars on software to support its infrastructure, operate
its weapons and provide command, control, communications,
computing, intelligence, surveillance, and reconnaissance (C4ISR)
functions. These and other new advanced systems share a common
critical component: information technology. The defense and
aerospace environment is continuously striving to keep up with
increasingly sophisticated information technology (IT) in order to
remain effective in today's dynamic and unpredictable threat
environment. This makes IT one of the largest and fastest growing
expenses of defense. Hundreds of millions of dollars are spent each
year on IT projects, but too many of those millions are wasted on
costly mistakes: systems that do not work properly, new components
that are not compatible with old ones, trendy new applications that do
not really satisfy defense needs, or money lost through poorly
managed contracts.
This paper investigates and compiles effective strategies that aim to
end the exasperation with the low returns and high cost of
information technology acquisition for defense; it shows how to
maximize value while reducing time and expenditure.
Abstract: Quasigroups are algebraic structures closely related to
Latin squares and have many different applications. The
construction of the block cipher is based on quasigroup string
transformations. This article describes a block cipher based on a
quasigroup of order 256, suitable for fast software encryption of
messages written in standard ASCII code. The novelty of this
cipher lies in the fact that every time the cipher is invoked, two new
randomly generated quasigroups are used, which in turn are used to
create a pair of dual quasigroup operations. The cryptographic
strength of the block cipher is examined by calculating the
xor-distribution tables. In this approach, certain algebraic
operations allow quasigroups of very large order to be used without
any need to store them.
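The quasigroup string transformation behind such ciphers can be sketched in a few lines of Python. For readability this sketch uses the trivial quasigroup Q(x, y) = (x + y) mod 256 rather than a randomly generated one, and a fixed leader byte; decryption applies the dual (left-division) operation:

```python
def q(x, y):
    """Toy quasigroup of order 256: addition mod 256 (a Latin square)."""
    return (x + y) % 256

def q_div(x, y):
    """Left division, the dual operation: q(x, q_div(x, y)) == y."""
    return (y - x) % 256

def encrypt(data, leader):
    """e-transformation: b1 = Q(l, a1), b_i = Q(b_{i-1}, a_i)."""
    out, prev = [], leader
    for a in data:
        prev = q(prev, a)
        out.append(prev)
    return bytes(out)

def decrypt(data, leader):
    """Inverse transformation using left division: a_i = b_{i-1} \\ b_i."""
    out, prev = [], leader
    for b in data:
        out.append(q_div(prev, b))
        prev = b
    return bytes(out)
```

In the cipher described above, the additive quasigroup would be replaced by freshly generated random quasigroups on every invocation, which is what provides the cryptographic strength.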
Abstract: Emotion recognition is an important research field with many applications nowadays. This work focuses on recognizing different emotions from the speech signal. The extracted features are related to statistics of pitch, formants, and energy contours, as well as spectral, perceptual and temporal features, jitter, and shimmer. An artificial neural network (ANN) was chosen as the classifier. Our concern is finding a robust and fast ANN classifier suitable for different real-life applications. Several experiments were carried out on different ANNs to investigate the factors that impact the classification success rate. Using a database containing 7 different emotions, it is shown that with proper and careful adjustment of the feature format, training data sorting, the number of selected features, and even the ANN type and architecture used, a success rate of 85% or more can be achieved without increasing the system complexity and the computation time.
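Two of the listed features, jitter and shimmer, can be sketched directly. Assuming pitch periods and peak amplitudes have already been extracted per glottal cycle, the local variants are the mean absolute cycle-to-cycle difference normalized by the mean:

```python
def jitter(periods):
    """Local jitter: mean absolute difference of consecutive pitch
    periods, divided by the mean period."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer(amplitudes):
    """Local shimmer: the same statistic computed on peak amplitudes."""
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))
```

A perfectly periodic voice yields zero jitter; emotional speech tends to perturb both measures, which is why they are useful classifier inputs.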
Abstract: This work deals with aspects of support vector machine learning for large-scale data mining tasks. Based on a decomposition algorithm for support vector machine training that can be run in serial as well as shared-memory parallel mode, we introduce a transformation of the training data that allows for the usage of an expensive generalized kernel without additional costs. We present experiments for the Gaussian kernel, but usage of other kernel functions is possible, too. In order to further speed up the decomposition algorithm, we analyze the critical problem of working set selection for large training data sets. In addition, we analyze the influence of the working set sizes on the scalability of the parallel decomposition scheme. Our tests and conclusions led to several modifications of the algorithm and the improvement of overall support vector machine learning performance. Our method allows for using extensive parameter search methods to optimize classification accuracy.
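The paper's exact selection rule is not reproduced here, but a common first-order heuristic for working set selection in SMO-type decomposition solvers, the maximal violating pair, can be sketched as follows (alpha are the dual variables, y the labels, grad the gradient of the dual objective, C the box constraint):

```python
def select_working_set(alpha, y, grad, C):
    """Maximal-violating-pair selection: pick the index pair that most
    violates the KKT optimality conditions."""
    i_up = [t for t in range(len(y))
            if (y[t] == 1 and alpha[t] < C) or (y[t] == -1 and alpha[t] > 0)]
    i_low = [t for t in range(len(y))
             if (y[t] == 1 and alpha[t] > 0) or (y[t] == -1 and alpha[t] < C)]
    # i maximizes -y_t * grad_t over the "up" set,
    # j minimizes it over the "low" set
    i = max(i_up, key=lambda t: -y[t] * grad[t])
    j = min(i_low, key=lambda t: -y[t] * grad[t])
    return i, j
```

For large training sets this per-iteration scan is exactly where selection cost matters, which motivates the analysis of working set sizes described above.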
Abstract: This paper aims to extend Jon Kleinberg's research. He introduced the small-world structure in a grid and showed that a greedy algorithm using only local information is able to find a route between source and target in delivery time O(log² n). His fundamental model for distributed systems uses a two-dimensional grid with long-range random links added between any two nodes u and v with probability proportional to d(u,v)^(-2). We propose that, with additional information about nearby long links, a shorter path can be found. We apply the ant colony system as messengers distributing their pheromone, the long-link details, in the surrounding area. Each subsequent forwarding decision then has more options: select among local neighbors, or send to a node whose long link is closer to the target. Our experimental results support our approach: the average routing time with Color Pheromone is faster than that of the greedy method.
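A simplified, deterministic Python sketch of the idea (the actual method spreads long-link details via ant-colony pheromone, which is omitted here): greedy routing on an n×n grid where each step may also consult neighbors' advertised long links, so a message can detour one hop to a node whose long link lands near the target:

```python
def dist(u, v):
    """Lattice (Manhattan) distance on the grid."""
    return abs(u[0] - v[0]) + abs(u[1] - v[1])

def neighbors(u, n):
    x, y = u
    return [(a, b) for a, b in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
            if 0 <= a < n and 0 <= b < n]

def route(src, dst, long_link, n, lookahead=True):
    """Greedy grid routing. With lookahead, a neighbor is scored by the
    better of its own distance and one hop plus its long link's
    distance to the target (a stand-in for pheromone information)."""
    path, cur = [src], src
    while cur != dst:
        cand = neighbors(cur, n)
        if cur in long_link:          # a node's own long-range contact
            cand.append(long_link[cur])
        def score(v):
            s = dist(v, dst)
            if lookahead and v in long_link:
                s = min(s, 1 + dist(long_link[v], dst))
            return s
        cur = min(cand, key=score)
        path.append(cur)
    return path
```

In the test below, node (0,1) lies off the plain greedy path but owns a long link near the target; only the lookahead variant detours through it and arrives in far fewer hops.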
Abstract: In this paper an algorithm for fast wavelength calibration of optical spectrum analyzers (OSAs) using low-power reference gas spectra is proposed. Existing OSAs need a low-noise reference spectrum for precise detection of the reference extreme values, and generating this spectrum requires costly hardware with high optical power. With this new wavelength calibration algorithm it is possible to use a noisy reference spectrum, and hardware costs can therefore be cut. The algorithm filters the reference spectrum and extracts the key information by segmenting it and finding the local minima and maxima. Afterwards, the slope and offset of a linear correction function that best matches the measured and theoretical spectra are found by correlating the measured with the stored minima. With this algorithm, reliable wavelength referencing of an OSA can be implemented on a microcontroller with a calculation time of less than one second.
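The pipeline can be illustrated with a minimal Python sketch. The filter choice (a moving average) and the ordinary least-squares fit pairing measured minima with stored theoretical wavelengths are assumptions of this sketch, not the paper's exact implementation:

```python
def smooth(y, w=3):
    """Moving-average filter to suppress noise before extremum search."""
    half = w // 2
    out = []
    for i in range(len(y)):
        window = y[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def local_minima(y):
    """Indices of interior local minima of the (smoothed) spectrum."""
    return [i for i in range(1, len(y) - 1) if y[i - 1] > y[i] < y[i + 1]]

def linear_fit(measured, reference):
    """Least-squares slope/offset so that slope*measured + offset
    best matches the stored reference wavelengths."""
    n = len(measured)
    mx, my = sum(measured) / n, sum(reference) / n
    sxx = sum((x - mx) ** 2 for x in measured)
    sxy = sum((x - mx) * (y - my) for x, y in zip(measured, reference))
    slope = sxy / sxx
    return slope, my - slope * mx
```

All three steps are cheap enough, one smoothing pass, one scan for extrema, one closed-form fit, to fit a microcontroller's sub-second budget.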
Abstract: The impact force of a rockfall is mainly determined by
its moving behavior and velocity, which are contingent on the rock
shape, slope gradient, height, and surface roughness of the moving
path. It is essential to precisely calculate the moving path of the
rockfall in order to effectively minimize and prevent damages caused
by the rockfall. By applying the Colorado Rockfall Simulation
Program (CRSP) as the analysis tool, this research studies the
influence of three shapes of rock (spherical, cylindrical and discoidal)
and surface roughness on the moving path of a single rockfall. As
revealed in the analysis, in addition to the slope gradient, the geometry
of the falling rock and joint roughness coefficient ( JRC ) of the slope
are the main factors affecting the moving behavior of a rockfall. On a
single flat slope, both the rock's bounce height and moving velocity
increase as the surface gradient increases, with a critical gradient value
of 1:m = 1. Bouncing behavior and faster moving velocity occur more
easily when the rock geometry is more oval. A flat piece tends to cause
sliding behavior and is easily influenced by the change of surface
undulation. When JRC
Abstract: Both minimum energy consumption and smoothness, which
is quantified as a function of jerk, are generally needed in many
dynamic systems such as the automobile and the pick-and-place robot
manipulator that handles fragile equipment. Nevertheless, many
researchers have focused solely on either minimum energy
consumption or minimum jerk trajectories. This paper considers the
indirect minimum-jerk method for higher-order differential equations
in dynamic optimization and proposes a simple yet very interesting
indirect jerk approach for designing time-dependent systems, yielding
an alternative optimal solution. Extremal solutions for the cost
functions of indirect jerks are found using dynamic optimization
methods together with numerical approximation. The case considered
is the linear equation of a simple system consisting of mass, spring
and damping; the system uses two masses connected by springs. The
boundary conditions are defined with fixed end time and end point.
The higher-order differential equation is solved by Galerkin's
weighted-residual method. As a result, the sixth-order differential
formulation shows the fastest solving time.
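As a small illustration of the cost being minimized, the integral of squared jerk can be approximated from sampled positions with a third-order finite difference. This is a numerical sketch of the cost functional only, not the Galerkin weighted-residual formulation used in the paper:

```python
def jerk_cost(x, dt):
    """Approximate integral of squared jerk for position samples x(t),
    using the third-order finite difference
    jerk_i ~ (x[i+3] - 3 x[i+2] + 3 x[i+1] - x[i]) / dt^3."""
    jerk = [(x[i + 3] - 3 * x[i + 2] + 3 * x[i + 1] - x[i]) / dt ** 3
            for i in range(len(x) - 3)]
    return sum(j * j for j in jerk) * dt
```

A quadratic trajectory (constant acceleration) has zero jerk, so its cost vanishes; any trajectory with changing acceleration is penalized.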
Abstract: In this paper we present a detailed study of biomedical
images and tag them with basic extracted features (e.g. color and
pixel values). The classification is done using a nearest neighbor
classifier with various distance measures, as well as an automatic
combination of classifier results. This process selects a subset of
relevant features from a group of image features. It also helps to
acquire a better understanding of the image by identifying which
features are important. The accuracy can be improved by increasing
the number of selected features. Various classifiers have been
developed for medical images, such as the support vector machine
(SVM), which is used for classifying bacterial types. An ant colony
optimization method, which has high approximation capability and
much faster convergence, is used for optimal results, together with a
texture feature extraction method based on Gabor wavelets.
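The classification scheme described, a nearest neighbor classifier under several distance measures with an automatic combination of the results, can be sketched in Python as a majority vote over per-metric 1-NN decisions. The feature vectors and labels are illustrative, not taken from the paper:

```python
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def nn_label(sample, train, metric):
    """Label of the nearest training example under the given metric.
    'train' is a list of (feature_vector, label) pairs."""
    return min(train, key=lambda fl: metric(sample, fl[0]))[1]

def combined_label(sample, train):
    """Combine nearest-neighbor results over several distance
    measures by majority vote."""
    votes = [nn_label(sample, train, m) for m in (euclidean, manhattan)]
    return max(set(votes), key=votes.count)
```

Further metrics (e.g. on Gabor texture features) would simply be added to the tuple of distance functions.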
Abstract: Neural processors have shown good results for
detecting a certain character in a given input matrix. In this paper, a
new idea to speed up the operation of neural processors for character
detection is presented. Such processors are designed based on cross
correlation in the frequency domain between the input matrix and the
weights of neural networks. This approach is developed to reduce the
computation steps required by these faster neural networks for the
searching process. The principle of divide and conquer strategy is
applied through image decomposition. Each image is divided into
small sub-images, and each one is then tested separately by
using a single faster neural processor. Furthermore, faster character
detection is obtained by using parallel processing techniques to test the
resulting sub-images at the same time using the same number of faster
neural networks. In contrast to using only faster neural processors, the
speed up ratio is increased with the size of the input image when using
faster neural processors and image decomposition. Moreover, the
problem of local subimage normalization in the frequency domain is
solved. The effect of image normalization on the speed up ratio of
character detection is discussed. Simulation results show that local
subimage normalization through weight normalization is faster than
subimage normalization in the spatial domain. The overall speed up
ratio of the detection process is increased as the normalization of
weights is done off line.
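The frequency-domain cross-correlation at the heart of such fast neural processors follows the correlation theorem, corr = IFFT(FFT(a) · conj(FFT(b))). A self-contained 1-D Python sketch is given below (power-of-two lengths, recursive radix-2 FFT); a real system would apply the same identity to 2-D sub-images:

```python
import cmath

def fft(x):
    """Recursive radix-2 FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return x[:]
    even, odd = fft(x[0::2]), fft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + tw[k] for k in range(n // 2)] +
            [even[k] - tw[k] for k in range(n // 2)])

def ifft(x):
    """Inverse FFT via the conjugation trick."""
    n = len(x)
    conj = [v.conjugate() for v in x]
    return [v.conjugate() / n for v in fft(conj)]

def cross_correlate(a, b):
    """Circular cross-correlation computed in the frequency domain:
    c[k] = sum_i a[i] * b[(i - k) mod n]."""
    fa = fft([complex(v) for v in a])
    fb = fft([complex(v) for v in b])
    return [v.real for v in ifft([x * y.conjugate() for x, y in zip(fa, fb)])]
```

The speed-up over direct correlation comes from the O(n log n) transforms replacing the O(n²) sliding-window sums, and decomposition into sub-images keeps each transform small.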
Abstract: The H.264/AVC video coding standard contains a number of advanced features. One of the new features introduced in this standard is multiple intra-mode prediction, which exploits the directional spatial correlation with adjacent blocks for intra prediction. With this new feature, intra coding of H.264/AVC offers a considerably higher improvement in coding efficiency compared to other compression standards, but computational complexity increases significantly when the brute-force rate distortion optimization (RDO) algorithm is used. In this paper, we propose a new fast intra prediction mode decision method to reduce the complexity of H.264 video coding. For luma intra prediction, the proposed method consists of two steps. In the first step, we perform RDO for four modes of the intra 4x4 block; based on the distribution of the RDO costs of those modes and the strong correlation with adjacent modes, we select the best mode of the intra 4x4 block. In the second step, based on the fact that the dominant direction of a smaller block is similar to that of a bigger block, the candidate modes of 8x8 blocks and 16x16 macroblocks are determined. For chroma intra prediction, since the variance of the chroma pixel values is much smaller than that of the luma values, our proposal uses only the DC mode. Experimental results show that the new fast intra mode decision algorithm increases the speed of intra coding significantly with negligible loss of PSNR.
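A schematic Python sketch of the luma candidate-selection step. The RD costs and the angular ordering of the directional modes are illustrative assumptions, not the paper's exact procedure: evaluate a sampled subset of the nine 4x4 modes, then keep the cheapest one plus its angular neighbours and the DC mode:

```python
# Assumed angular ordering of the H.264 directional intra 4x4 modes,
# from diagonal-down-left round to horizontal-up; mode 2 (DC) is
# non-directional and always kept as a candidate.
ANGULAR_ORDER = [3, 7, 0, 5, 4, 6, 1, 8]

def candidate_modes(rd_cost, sampled=(0, 1, 2, 3)):
    """Evaluate RD cost only for a sampled subset of modes, then keep
    the cheapest mode, its angular neighbours, and DC as candidates."""
    best = min(sampled, key=lambda m: rd_cost[m])
    cands = {best, 2}
    if best != 2:
        i = ANGULAR_ORDER.index(best)
        if i > 0:
            cands.add(ANGULAR_ORDER[i - 1])
        if i < len(ANGULAR_ORDER) - 1:
            cands.add(ANGULAR_ORDER[i + 1])
    return sorted(cands)
```

Full RDO is then run only on this short candidate list instead of all nine modes, which is where the speed-up comes from.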
Abstract: The residue number system (RNS), due to its
properties, is used in applications in which high-performance
computation is needed. Its carry-free nature, which keeps the
arithmetic carry-bounded, together with its inherent parallelism, is
the reason for its high-speed capability. Since carry is not
propagated between the moduli in this system, the performance is
only restricted by the speed of the operations in each modulus. In this
paper a novel method of number representation using redundancy
is suggested, in which {r^n - 2, r^n - 1, r^n} is the reference moduli
set, where r = 2^k + 1 and k = 1, 2, 3, ... This method achieves fast
computations and conversions and makes their circuits much
simpler.
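Forward and reverse conversion for the proposed moduli set can be sketched in Python. Since r = 2^k + 1 is odd, the three moduli are pairwise coprime and the Chinese remainder theorem applies (this is a plain software illustration, not the paper's hardware conversion circuits):

```python
from math import prod

def moduli(k, n):
    """Moduli set {r^n - 2, r^n - 1, r^n} with r = 2^k + 1."""
    r = 2 ** k + 1
    return [r ** n - 2, r ** n - 1, r ** n]

def to_rns(x, ms):
    """Forward conversion: residues of x with respect to each modulus."""
    return [x % m for m in ms]

def from_rns(res, ms):
    """Reverse conversion by the Chinese remainder theorem
    (the moduli are pairwise coprime)."""
    M = prod(ms)
    x = 0
    for r_i, m in zip(res, ms):
        Mi = M // m
        x += r_i * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return x % M
```

Within the RNS, addition and multiplication act on each residue independently, which is the carry-free parallelism the abstract refers to.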
Abstract: Currently in many major cities, public transit schedules
are disseminated through lists of routes, grids of stop times and
static maps. This paper describes a web based geographic information
system which disseminates the same schedule information through
intuitive GIS techniques. Using data from Calgary, Canada, a map-
based interface has been created to allow users to see routes, stops and
moving buses all at once. Zoom and pan controls as well as satellite
imagery allow users to apply their personal knowledge of the
local geography to achieve faster and more pertinent transit results.
Using asynchronous requests to web services, users are immersed
in an application where buses and stops can be added and removed
interactively, without the need to wait for responses to HTTP requests.
Abstract: This paper investigates the problem of sampling from transactional data streams. We introduce CFISDS, a content-based sampling algorithm that works on a landmark window model of data streams and preserves a more informative sample in the sample space. This algorithm, which is based on closed frequent itemset mining, first initiates a concept lattice using the initial data and then updates the lattice structure using an incremental mechanism. The incremental mechanism inserts, updates and deletes nodes in the concept lattice in a batch manner. The presented algorithm extracts the final samples on demand of the user. Experimental results show the accuracy of CFISDS on synthetic and real datasets, although CFISDS is not faster than existing sampling algorithms such as Z and DSS.
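The closed frequent itemsets on which CFISDS relies can be illustrated with a brute-force Python sketch. Note this enumeration is exponential and only suitable for tiny examples; the actual algorithm maintains closed itemsets incrementally in a concept lattice:

```python
from itertools import combinations

def closed_frequent_itemsets(transactions, min_support):
    """Enumerate frequent itemsets and keep the closed ones: itemsets
    with no proper superset of identical support."""
    items = sorted({i for t in transactions for i in t})
    support = {}
    for size in range(1, len(items) + 1):
        for cand in combinations(items, size):
            s = sum(1 for t in transactions if set(cand) <= t)
            if s >= min_support:
                support[frozenset(cand)] = s
    # an itemset is closed iff no proper superset has the same support
    return {iset: s for iset, s in support.items()
            if not any(iset < other and s == support[other]
                       for other in support)}
```

Because closed itemsets losslessly summarize all frequent itemsets, a sample built around them retains the content structure of the stream, which is the intuition behind content-based sampling.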
Abstract: This paper proposes a novel multi-format stream grid
architecture for real-time image monitoring system. The system, based
on a three-tier architecture, includes stream receiving unit, stream
processor unit, and presentation unit. It is a distributed computing and
a loosely coupled architecture. The benefit is that the number of
required servers can be adjusted depending on the load on the image
monitoring system. The stream receiving unit supports multiple
capture source devices and multi-format stream compression
encoders. The stream processor unit includes three modules: a stream
clipping module, an image processing module and an image
management module.
Presentation unit can display image data on several different platforms.
We verified the proposed grid architecture with an actual test of image
monitoring. We used a fast image matching method with the
adjustable parameters for different monitoring situations. A background
subtraction method is also implemented in the system. Experimental
results showed that the proposed architecture is robust, adaptive, and
powerful in the image monitoring system.
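The background subtraction component can be sketched with a classic running-average model; this is a generic textbook variant, since the paper does not spell out its exact parameters. Pixels are flattened to a 1-D list for brevity:

```python
def update_background(bg, frame, alpha=0.05):
    """Running-average background model: the background slowly tracks
    the scene, with learning rate alpha."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, threshold=25):
    """Pixels deviating from the background beyond the threshold are
    flagged as foreground (moving objects)."""
    return [abs(f - b) > threshold for b, f in zip(bg, frame)]
```

Keeping alpha small makes the model robust to momentary changes while still adapting to gradual lighting shifts, which matches the "adjustable parameters for different monitoring situations" above.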