Abstract: One of the leading problems in Cyber Security today
is the emergence of targeted attacks conducted by adversaries with
access to sophisticated tools. These attacks typically steal the system
privileges of senior-level employees in order to gain unauthorized access to
confidential knowledge and valuable intellectual property. The malware
used for the initial compromise is sophisticated and
may target zero-day vulnerabilities. In this work we exploit a common
behaviour of malware called "beaconing", whereby infected
hosts communicate with Command and Control (C2) servers at regular
intervals with relatively small time variations. By analysing
such beacon activity through passive network monitoring, it is
possible to detect potential malware infections. We therefore focus on
time gaps as indicators of possible C2 activity in targeted enterprise
networks. We represent DNS log files as a graph, whose vertices
are destination domains and whose edges carry communication timestamps. Then, by applying
four periodicity detection algorithms for each pair of internal-external
communications, we check timestamp sequences to identify the
beacon activities. Finally, based on the graph structure, we infer the
existence of other infected hosts and malicious domains enrolled in
the attack activities.
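The periodicity check described above can be sketched with a simple inter-arrival jitter test. This is an illustrative sketch, not one of the paper's four algorithms; the threshold and minimum event count are assumed values.

```python
# Hypothetical beacon check: flag a timestamp sequence as beacon-like when
# the gaps between events are nearly constant (low coefficient of variation).
# max_cv and min_events are illustrative assumptions, not from the paper.
from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_cv=0.1, min_events=5):
    """Return True if the inter-arrival times are nearly constant."""
    if len(timestamps) < min_events:
        return False
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    m = mean(gaps)
    if m == 0:
        return False
    return pstdev(gaps) / m <= max_cv  # low relative jitter => periodic

# A host querying a domain every ~60 s with small jitter is suspicious:
print(looks_like_beacon([0, 60, 121, 180, 241, 300]))  # True
print(looks_like_beacon([0, 5, 300, 301, 900, 903]))   # False
```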
Abstract: Nowadays, information dissemination extends to the distributed world, where selecting the servers relevant to a user request is an important problem in distributed information retrieval. Over the last decade, several research studies on this issue have been launched to find optimal solutions, and many collection selection approaches have been proposed. In this paper, we propose a new collection selection approach that takes into consideration the number of documents in a collection that contain terms of the query and the weights of those terms in these documents. We tested our method, and our studies show that it can compete with the state-of-the-art algorithms against which we evaluated its performance.
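The scoring idea described above (document counts combined with term weights) can be sketched as follows; the combination formula and data layout are illustrative assumptions, not the paper's exact ranking function.

```python
# Illustrative collection scoring sketch: for each query term, combine the
# number of documents in the collection that contain the term with the
# average weight of the term in those documents. df * mean-weight is an
# assumed combination, not the paper's formula.
def score_collection(query_terms, postings):
    """postings: term -> list of per-document term weights in this collection."""
    score = 0.0
    for t in query_terms:
        weights = postings.get(t, [])
        if weights:
            score += len(weights) * (sum(weights) / len(weights))
    return score

# Toy example: collection c1 covers the query better than c2.
c1 = {"neural": [0.8, 0.6], "net": [0.5]}
c2 = {"neural": [0.2]}
q = ["neural", "net"]
print(score_collection(q, c1) > score_collection(q, c2))  # True
```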
Abstract: Although most digital cameras acquire images in a raw
format, based on a Color Filter Array that arranges RGB color
filters on a square grid of photosensors, most image compression
techniques do not use the raw data; instead, they use the RGB result
of an interpolation algorithm of the raw data. This approach is
inefficient and by performing a lossless compression of the raw data,
followed by pixel interpolation, digital cameras could be more power
efficient and provide images with increased resolution given that the
interpolation step could be shifted to an external processing unit. In
this paper, we conduct a survey on the use of lossless compression
algorithms with raw Bayer images. Moreover, in order to reduce the
effect of the transitions between colors that increase the entropy of
the raw Bayer image, we split the image into three new images
corresponding to each channel (red, green and blue) and we study
the same compression algorithms applied to each one individually.
This simple pre-processing stage allows an improvement of more than
15% in prediction-based methods.
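The channel-splitting pre-processing stage can be sketched as follows, assuming an RGGB mosaic layout (real sensors also use GRBG, BGGR, and other arrangements):

```python
# Minimal sketch of the pre-processing step described above: split an RGGB
# Bayer mosaic into per-channel images before lossless compression.
# The RGGB layout is an assumption; adapt the slicing for other patterns.
def split_rggb(mosaic):
    """mosaic: 2-D list of raw sensor values laid out in RGGB order."""
    red   = [row[0::2] for row in mosaic[0::2]]   # even rows, even cols
    green = [row[1::2] for row in mosaic[0::2]] + \
            [row[0::2] for row in mosaic[1::2]]   # the two green planes
    blue  = [row[1::2] for row in mosaic[1::2]]   # odd rows, odd cols
    return red, green, blue

raw = [[10, 20, 11, 21],
       [30, 40, 31, 41],
       [12, 22, 13, 23],
       [32, 42, 33, 43]]
r, g, b = split_rggb(raw)
print(r)  # [[10, 11], [12, 13]]
print(b)  # [[40, 41], [42, 43]]
```

Each plane is smoother than the interleaved mosaic (no red-to-green jumps between neighbouring samples), which is what lowers the entropy seen by the predictor.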
Abstract: This paper presents a classifier ensemble approach for
predicting the survivability of the breast cancer patients using the
latest database version of the Surveillance, Epidemiology, and End
Results (SEER) Program of the National Cancer Institute. The system
consists of two main components: a feature selection component and a
classifier ensemble component. The feature selection component divides the
features in the SEER database into four groups. It then tries to find
the most important features among the four groups that maximize the
weighted average F-score of a certain classification algorithm. The
ensemble component uses three different classifiers, each of which
models a different set of features from SEER selected through the feature
selection module. On top of them, another classifier is used to give
the final decision based on the output decisions and confidence
scores from each of the underlying classifiers. Different classification
algorithms have been examined; the best setup found is by using the
decision tree, Bayesian network, and Naïve Bayes algorithms for the
underlying classifiers and Naïve Bayes for the classifier ensemble
step. The system outperforms all published systems to date when
evaluated against the exact same data of SEER (period of 1973-2002).
It gives 87.39% weighted average F-score compared to 85.82% and
81.34% of the other published systems. By increasing the data size to
cover the whole database (period of 1973-2014), the overall weighted
average F-score jumps to 92.4% on the held out unseen test set.
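The final ensemble stage can be illustrated with a toy combiner. The paper uses a Naïve Bayes meta-classifier over the base classifiers' decisions and confidence scores; the confidence-weighted vote below is only a stand-in to show the data flow, and the labels are illustrative.

```python
# Toy sketch of the ensemble's final step: a meta-level rule combines the
# label and confidence reported by each base classifier. The paper trains a
# Naive Bayes model for this stage; a confidence-weighted vote is used here
# purely for illustration.
from collections import defaultdict

def combine(base_outputs):
    """base_outputs: list of (predicted_label, confidence) pairs."""
    totals = defaultdict(float)
    for label, conf in base_outputs:
        totals[label] += conf
    return max(totals, key=totals.get)

# Two classifiers weakly favour one label, one strongly favours the other:
print(combine([("survives", 0.55), ("survives", 0.60), ("not", 0.90)]))
# -> "survives" (total 1.15 vs 0.90)
```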
Abstract: Diffuse Optical Tomography (DOT) is a non-invasive imaging modality used in clinical diagnosis for early detection of carcinoma cells in brain tissue. It is a form of optical tomography that produces a reconstructed image of human soft tissue by using near-infrared light. It comprises two steps, the forward model and the inverse model. The forward model describes light propagation in a biological medium. The inverse model uses the scattered light to recover the optical parameters of human tissue. DOT suffers from severe ill-posedness due to its incomplete measurement data, so accurate analysis with this modality is very complicated. To overcome this problem, optical properties of the soft tissue such as the absorption coefficient, scattering coefficient and optical flux are processed by the standard regularization technique called Levenberg-Marquardt regularization. Reconstruction algorithms such as the Split Bregman and Gradient Projection for Sparse Reconstruction (GPSR) methods are used to reconstruct the image of human soft tissue for tumour detection. Among these algorithms, the Split Bregman method performs better than the GPSR algorithm. Parameters such as signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), relative error (RE) and CPU time for reconstructing images are analyzed to assess performance.
Abstract: Medical digital images usually have low resolution because of the nature of their acquisition. This paper therefore focuses on zooming these images to obtain the better level of information required for medical diagnosis. To this end, a strategy for selecting pixels in the zooming operation is proposed. It is based on the principle of an analog clock and uses a combination of point and neighborhood image processing. In this approach, the hour hand of the clock covers the portion of the image to be processed. For alignment, the center of the clock points at the middle pixel of the selected portion of the image. The minute hand is longer and is used to gather information about pixels of the surrounding area, called the neighborhood pixels region. This information is used to zoom the selected portion of the image. The proposed algorithm is implemented and its performance is evaluated on many medical images obtained from various sources such as X-ray, Computerized Tomography (CT) scan and Magnetic Resonance Imaging (MRI). For illustration and simplicity, the results obtained from a CT-scanned image of the head are presented. The performance of the algorithm is evaluated against various traditional algorithms in terms of peak signal-to-noise ratio (PSNR), maximum error, SSIM index, mutual information and processing time. From the results, the proposed algorithm is found to outperform the traditional algorithms.
Abstract: Cloud computing is an outcome of the rapid growth of the internet. Due to the elastic nature of cloud computing and the unpredictable behavior of users, load balancing is a major issue in the cloud computing paradigm. An efficient load balancing technique can improve performance in terms of efficient resource utilization and higher customer satisfaction. Load balancing can be implemented through task scheduling, resource allocation and task migration. Parameters used to analyze the performance of a load balancing approach include response time, cost, data processing time and throughput. This paper demonstrates a two-level load balancer approach that combines the join-idle-queue and join-shortest-queue approaches. The authors have used the CloudAnalyst simulator to test the proposed two-level load balancer. The results are analyzed and compared with existing algorithms and show that the proposed approach outperforms existing techniques.
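The two-level idea (join an idle server first, fall back to the shortest queue when none is idle) can be sketched as follows; the data structures and tie-breaking are illustrative assumptions, not the authors' implementation.

```python
# Illustrative two-level dispatcher: level 1 is join-idle-queue (JIQ),
# level 2 falls back to join-shortest-queue (JSQ) when no server is idle.
from collections import deque

class TwoLevelBalancer:
    def __init__(self, n_servers):
        self.queues = [0] * n_servers        # outstanding tasks per server
        self.idle = deque(range(n_servers))  # servers reporting themselves idle

    def dispatch(self):
        if self.idle:                        # level 1: take an idle server
            s = self.idle.popleft()
        else:                                # level 2: shortest queue wins
            s = min(range(len(self.queues)), key=self.queues.__getitem__)
        self.queues[s] += 1
        return s

    def finish(self, s):
        self.queues[s] -= 1
        if self.queues[s] == 0:              # server announces it is idle again
            self.idle.append(s)

lb = TwoLevelBalancer(3)
print([lb.dispatch() for _ in range(5)])  # [0, 1, 2, 0, 1]
```

The first three tasks drain the idle list; the remaining two fall through to the JSQ rule, which spreads them over the least-loaded servers.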
Abstract: A conventional optical coherence tomography (OCT) system has a limited imaging depth of 1-2 mm and suffers from unwanted noise such as speckle noise. The motorized-stage-based OCT system, using a common-path Fourier-domain optical coherence tomography (CP-FD-OCT) configuration, provides enhanced imaging depth and less noise, allowing these limitations to be overcome. Using this OCT system, OCT images were obtained from an onion and their subsurface structure was observed. The images obtained using the developed motorized-stage-based system showed greater imaging depth than the conventional system, owing to its real-time accurate depth tracking. Consequently, the developed CP-FD-OCT system and algorithms have good potential for the further development of endoscopic OCT for microsurgery.
Abstract: In this paper, we present the use of discriminant analysis to select the evolutionary algorithms that best solve instances of the vehicle routing problem with time windows. We use indicators as independent variables to obtain the classification criteria, and the best algorithm among the generic genetic algorithm (GA), random search (RS), steady-state genetic algorithm (SSGA), and sexual genetic algorithm (SXGA) as the dependent variable for the classification. The discriminant classifier was trained on classic instances of the vehicle routing problem with time windows obtained from the Solomon benchmark. The discriminant analysis achieved a classification accuracy of 66.7%.
Abstract: In the SHP, an LVDT sensor detects the length
changes of the EHA output, and the thrust of the EHA is controlled via
the pressure sensor. A sensor can develop a hardware fault due to an
internal problem or an external disturbance, and the EHA of the SHP can
become uncontrollable when it is driven by feedback from uncertain
information. In this paper, a sliding mode observer algorithm
estimates the original sensor output information under a permanent
sensor fault. The proposed algorithm recovers from basic disconnection
and short-circuit faults, and also detects various sensor fault modes.
Abstract: In this era of online communication, which transacts data in 0s and 1s, confidentiality is a prized commodity. Ensuring safe transmission of encrypted data and their uncorrupted recovery is a matter of prime concern. Among the several techniques for secure sharing of images, this paper proposes a k out of n region incrementing image sharing scheme for color images. The highlight of this scheme is the use of simple Boolean and arithmetic operations for generating shares and the Lagrange interpolation polynomial for authenticating shares. Additionally, this scheme addresses problems faced by existing algorithms, such as color reversal and pixel expansion. The proposed scheme regenerates the original secret image, whereas existing systems regenerate only the half-toned secret image.
Abstract: The laws of Newtonian mechanics allow ab-initio
molecular dynamics to model and simulate particle trajectories in
material science by defining a differentiable potential function. This
paper discusses some considerations for the coding of ab-initio
programs for simulation on a standalone computer and illustrates
the approach by C language codes in the context of embedded
metallic atoms in the face-centred cubic structure. The algorithms use
velocity-time integration to determine particle parameter evolution
for up to several thousands of particles in a thermodynamical
ensemble. Such functions are reusable and can be placed in a
redistributable header library file. While there are both commercial
and free packages available, their heuristic nature prevents dissection.
In addition, developing one's own code has the obvious advantage of
teaching techniques applicable to new problems.
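The velocity-time integration mentioned above can be illustrated with one velocity-Verlet step. This sketch is in Python rather than the paper's C, and it uses a 1-D harmonic potential as a stand-in for the embedded-atom model.

```python
# Velocity-Verlet integration sketch: advance position and velocity one
# time step using the force (negative gradient of the potential).
# The harmonic potential U = k*x^2/2 is an illustrative assumption.
def verlet_step(x, v, force, dt, m=1.0):
    a = force(x) / m
    x_new = x + v * dt + 0.5 * a * dt * dt   # position update
    a_new = force(x_new) / m
    v_new = v + 0.5 * (a + a_new) * dt       # velocity uses averaged force
    return x_new, v_new

k = 1.0
force = lambda x: -k * x                     # F = -dU/dx
x, v = 1.0, 0.0                              # start at rest, displaced by 1
for _ in range(1000):
    x, v = verlet_step(x, v, force, dt=0.01)

energy = 0.5 * v * v + 0.5 * k * x * x       # total energy, initially 0.5
print(abs(energy - 0.5) < 1e-3)              # True: Verlet conserves energy well
```

The same `verlet_step` pattern, with a pairwise embedded-atom force routine in place of the harmonic force, is what a reusable header-library function would provide.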
Abstract: We present a family of data-reusing and affine
projection algorithms. For identification of a noisy linear finite
impulse response channel, a partial knowledge of a channel,
especially noise, can be used to improve the performance of
the adaptive filter. Motivated by this fact, the proposed scheme
incorporates an estimate of the noise. A constraint, called
the adaptive noise constraint, estimates the unknown noise
information. By imposing this constraint on the cost function of data-reusing
and affine projection algorithms, a cost function based on the adaptive
noise constraint and Lagrange multiplier is defined. Minimizing the
new cost function leads to the adaptive noise constrained (ANC)
data-reusing and affine projection algorithms. Experimental results
comparing the proposed schemes to standard data-reusing and affine
projection algorithms clearly indicate their superior performance.
Abstract: Due to the increasing growth of internet users, emerging multicast applications are growing day by day, and there is a need for the design of high-speed switches/routers. A great deal of research effort has gone into multicast switch fabric design and scheduling algorithms. Traffic scenarios are the influencing factor that affects the throughput and delay of a switch. Pointer-based multicast scheduling algorithms do not perform well under non-uniform traffic conditions. In this work, the performance of the switch has been analyzed by applying the advanced multicast scheduling algorithms OQSMS (Optimal Queue Selection Based Multicast Scheduling Algorithm), MDDR (Multicast Due Date Round-Robin Scheduling Algorithm) and MDRR (Multicast Dual Round-Robin Scheduling Algorithm). The results show that OQSMS achieves better switching performance than the other algorithms under uniform, non-uniform and bursty traffic conditions, as it estimates the optimal queue in each time slot and thereby achieves the maximum possible throughput.
Abstract: Particle swarm optimization (PSO) is becoming one of
the most important swarm intelligent paradigms for solving global
optimization problems. Although some progress has been made to
improve PSO algorithms over the last two decades, additional work
is still needed to balance parameters to achieve better numerical
properties of accuracy, efficiency, and stability. In the optimal
PSO algorithm, the optimal weightings of (√5 − 1)/2 and (3 − √5)/2 are used for the cognitive factor and the social factor,
respectively. By the same token, the same optimal weightings have
been applied for intensification searches and diversification searches,
respectively. Perturbation and constriction effects are optimally
balanced. Simulations of the de Jong, the Rosenbrock, and the
Griewank functions show that the optimal PSO algorithm indeed
achieves better numerical properties and outperforms the canonical
PSO algorithm.
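A minimal PSO sketch using the weightings quoted above, (√5 − 1)/2 for the cognitive factor and (3 − √5)/2 for the social factor, on the 2-D de Jong (sphere) function. The inertia value, swarm size and iteration budget are illustrative assumptions.

```python
# Minimal particle swarm optimization with the golden-ratio weightings from
# the abstract. Inertia W, swarm size and iteration count are assumed values.
import math, random

random.seed(0)
C1 = (math.sqrt(5) - 1) / 2   # cognitive factor, ~0.618
C2 = (3 - math.sqrt(5)) / 2   # social factor,    ~0.382
W = 0.7                       # inertia (illustrative assumption)

def sphere(x):                # de Jong's first function, minimum at origin
    return sum(v * v for v in x)

dim, n = 2, 20
pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=sphere)

for _ in range(200):
    for i in range(n):
        for d in range(dim):
            vel[i][d] = (W * vel[i][d]
                         + C1 * random.random() * (pbest[i][d] - pos[i][d])
                         + C2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=sphere)

print(sphere(gbest) < 1e-3)   # swarm converges near the optimum at the origin
```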
Abstract: An evolutionary fuzzy PID speed controller for a Permanent Magnet Synchronous Motor (PMSM) is developed to achieve closed-loop speed control of the PMSM and to deal with transients. The fuzzy PID design is based on common control engineering knowledge: when the transient error is large, good transient performance can be obtained by increasing the P and I gains and decreasing the D gain. Evolutionary Algorithms (EA) are developed to autotune the control parameters of the fuzzy PID controller. The EA-based fuzzy PID controller provides better speed control and guarantees closed-loop stability. The evolutionary fuzzy PID controller can be implemented in real-time applications without concern about instabilities that could lead to system failure or damage.
Abstract: This paper presents the modeling and the control of a grid-connected photovoltaic system (PVS). Firstly, the MPPT control of the PVS and its associated DC/DC converter has been analyzed in order to extract the maximum available power. Secondly, the control system of the grid side converter (GSC), which is a three-phase voltage source inverter (VSI), has been presented. Special attention has been paid to the control algorithms of the GSC during grid voltage imbalances. In particular, three control objectives are to be achieved: the mitigation of the adverse effects of grid imbalance on the currents injected at the point of common coupling (PCC), the elimination of double-frequency oscillations in the active power flow, and the elimination of double-frequency oscillations in the reactive power flow. Simulation results for two control strategies have been obtained via MATLAB in order to demonstrate the particularities of each control strategy according to power quality standards.
Abstract: Most of self-tuning fuzzy systems, which are
automatically constructed from learning data, are based on the
steepest descent method (SDM). However, this approach often
requires a large convergence time and gets stuck into a shallow
local minimum. One of its solutions is to use fuzzy rule modules
with a small number of inputs such as DIRMs (Double-Input Rule
Modules) and SIRMs (Single-Input Rule Modules). In this paper,
we consider a (generalized) DIRMs model composed of double
and single-input rule modules. Further, in order to reduce the
redundant modules for the (generalized) DIRMs model, pruning and
generative learning algorithms for the model are suggested. To
show their effectiveness, numerical simulations for function
approximation, Box-Jenkins and obstacle avoidance problems are
performed.
Abstract: We propose two affine projection algorithms (APA)
with variable regularization parameter. The proposed algorithms
dynamically update the regularization parameter that is fixed in the
conventional regularized APA (R-APA) using a gradient descent
based approach. By introducing the normalized gradient, the proposed
algorithms yield an efficient and robust update scheme for
the regularization parameter. Through experiments we demonstrate
that the proposed algorithms outperform conventional R-APA in
terms of the convergence rate and the misadjustment error.
Abstract: We present a normalized LMS (NLMS) algorithm
with robust regularization. Unlike conventional NLMS with the
fixed regularization parameter, the proposed approach dynamically
updates the regularization parameter. By exploiting a gradient
descent direction, we derive a computationally efficient and robust
update scheme for the regularization parameter. In simulations, we
demonstrate that the proposed algorithm outperforms conventional NLMS
algorithms in terms of convergence rate and misadjustment error.
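For context, the baseline regularized NLMS update that such schemes build on is w ← w + μ e x / (xᵀx + δ). In the sketch below δ is fixed, whereas the proposed algorithm adapts it at run time; the channel, step size and signal model are illustrative assumptions.

```python
# Baseline regularized NLMS sketch identifying a known FIR channel from
# noiseless data. delta is held fixed here; the paper's contribution is to
# update delta adaptively via a gradient descent direction.
import random

random.seed(1)
L = 4
h = [0.5, -0.3, 0.2, 0.1]        # unknown channel to identify (assumed)
w = [0.0] * L                    # adaptive filter weights
mu, delta = 0.5, 1e-2            # step size and fixed regularization

x_hist = [0.0] * L               # most recent L input samples
for _ in range(2000):
    x_hist = [random.gauss(0, 1)] + x_hist[:-1]
    d = sum(hi * xi for hi, xi in zip(h, x_hist))    # desired response
    y = sum(wi * xi for wi, xi in zip(w, x_hist))    # filter output
    e = d - y                                        # a priori error
    norm = sum(xi * xi for xi in x_hist) + delta     # regularized energy
    w = [wi + mu * e * xi / norm for wi, xi in zip(w, x_hist)]

print(all(abs(wi - hi) < 1e-2 for wi, hi in zip(w, h)))  # True: weights ~ h
```

A too-large fixed δ slows convergence while a too-small one amplifies noise; making δ a variable driven by the error signal is exactly the trade-off both abstracts above address.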