Abstract: E-mail has become an important means of electronic
communication, but the viability of its usage is marred by Unsolicited
Bulk E-mail (UBE) messages. UBE comes in many forms, such as
pornographic, virus-infected and 'cry-for-help' messages, as well
as fake and fraudulent offers for jobs, winnings and medicines. UBE
poses technical and socio-economic challenges to the usage of e-mail.
To meet this challenge and combat this menace, we need to
understand UBE. Towards this end, the current paper presents a
content-based textual analysis of more than 2700 body-enhancement
medicinal UBE messages. Technically, this is an application of text
parsing and tokenization to unstructured textual documents, and we
approach it using Bag-of-Words (BOW) and Vector Space Document
Model techniques. We have attempted to identify the most
frequently occurring lexis in UBE documents that advertise
various products for body enhancement, and an analysis of the top
100 such lexis is presented. We exhibit the relationship between the
occurrence of a word from the identified lexis set in a given UBE
message and the probability that the message advertises a
fake medicinal product. To the best of our knowledge and our survey
of the related literature, this is the first formal attempt to identify the
most frequently occurring lexis in such UBE through textual analysis.
Finally, this is a sincere attempt to raise awareness of, and
mitigate the threat of, such luring but fake UBE.
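As a minimal illustration of the bag-of-words frequency analysis this
abstract describes (the corpus below is a hypothetical stand-in, not the
authors' UBE dataset, and the tokenizer is a simplification):

```python
import re
from collections import Counter

def top_lexis(documents, n=5):
    """Tokenize each document, accumulate a bag-of-words count,
    and return the n most frequent terms across the corpus."""
    counts = Counter()
    for doc in documents:
        tokens = re.findall(r"[a-z']+", doc.lower())  # simple word tokenizer
        counts.update(tokens)
    return counts.most_common(n)

# Tiny illustrative corpus (hypothetical UBE-like snippets)
corpus = [
    "amazing pills for instant enhancement buy pills now",
    "cheap pills guaranteed enhancement order today",
]
print(top_lexis(corpus, 2))  # -> [('pills', 3), ('enhancement', 2)]
```

A real pipeline would add stop-word removal and stemming before
ranking the lexis, but the counting step is essentially the above.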
Abstract: Exact expressions for the bit-error probability (BEP) of
coherent square detection of uncoded and coded M-ary quadrature
amplitude modulation (MQAM), using an array of antennas with
maximal ratio combining (MRC) in a flat-fading, interference-limited
system in a Nakagami-m fading environment, are derived. The
analysis assumes an arbitrary number of independent and identically
distributed Nakagami interferers. The results for coded MQAM are
computed numerically for the case of the (24,12) extended Golay code
and compared with uncoded MQAM by plotting error probabilities
versus average signal-to-interference ratio (SIR) for various values of
the diversity order N and the number of distinct symbols M, in order to
examine the effect of cochannel interferers on the performance of the
digital communication system. The diversity gains and net gains are
also presented in tabular form to examine the performance of the
digital communication system in the presence of interferers as the
order of diversity increases. The analytical results presented in this
paper are expected to provide useful information for the design
and analysis of digital communication systems with space diversity
in wireless fading channels.
Abstract: Finding the shortest path between two positions is a
fundamental problem in transportation, routing, and communications
applications. In robot motion planning, the robot should pass around
the obstacles touching none of them, i.e. the goal is to find a
collision-free path from a starting to a target position. This task has
many specific formulations depending on the shape of obstacles,
allowable directions of movements, knowledge of the scene, etc.
Research on path planning has yielded many fundamentally different
approaches to its solution, mainly based on various decomposition
and roadmap methods. In this paper, we show a possible use of
visibility graphs in point-to-point motion planning in the Euclidean
plane and an alternative approach using Voronoi diagrams that
decreases the probability of collisions with obstacles. The second
application area, investigated here, is focused on problems of finding
minimal networks connecting a set of given points in the plane using
either only straight connections between pairs of points (minimum
spanning tree) or allowing the addition of auxiliary points to the set
to obtain shorter spanning networks (minimum Steiner tree).
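The minimum spanning tree mentioned above can be sketched with Prim's
algorithm run on the complete Euclidean graph over the given points (a
generic textbook sketch, not the paper's implementation):

```python
import math

def euclidean_mst(points):
    """Prim's algorithm on the complete Euclidean graph:
    returns a list of edges (i, j) forming a minimum spanning tree."""
    n = len(points)
    in_tree = [False] * n
    dist = [math.inf] * n     # cheapest connection cost to the growing tree
    parent = [-1] * n
    dist[0] = 0.0
    edges = []
    for _ in range(n):
        # pick the cheapest node not yet in the tree
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: dist[i])
        in_tree[u] = True
        if parent[u] != -1:
            edges.append((parent[u], u))
        # relax connection costs through the newly added node
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < dist[v]:
                    dist[v], parent[v] = d, u
    return edges

print(euclidean_mst([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]))  # -> [(0, 1), (1, 2)]
```

A Steiner tree additionally allows auxiliary points, which can shorten
the network but makes the problem NP-hard, so it is typically handled
by heuristics rather than an exact routine like the one above.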
Abstract: The study of non-equilibrium systems has attracted
increasing interest in recent years, mainly because, unlike their
equilibrium counterparts, they lack a general theoretical framework.
Studying steady states and/or simple systems is thus one of the
main avenues of interest. Hence, in this work we have focused our
attention on the driven lattice gas (DLG) model, consisting of
interacting particles subject to an external field E. The dynamics of
the system are given by particles hopping to nearby empty sites, with
rates biased for jumps in the direction of E. Using small two-
dimensional DLG systems, we analytically studied the stochastic
properties of the non-equilibrium steady state. To understand the
non-equilibrium phenomena, we applied an analytic approach
via the master equation to calculate the probability function and to
analyze the violation of detailed balance in terms of the fluctuation-
dissipation theorem. Monte Carlo simulations have been performed to
validate the analytic results.
Abstract: This paper focuses on reducing the power consumption
of wireless sensor networks. To this end, the communication protocol
LEACH (Low-Energy Adaptive Clustering Hierarchy) is modified.
We extend LEACH's stochastic cluster-head selection algorithm
by modifying the probability of each node becoming cluster-head
based on the energy it requires to transmit to the sink. We present
an efficient energy-aware routing algorithm for wireless sensor
networks. Our contribution consists in a rotating cluster-head
selection that considers first the remoteness of the nodes from the
sink and then the residual energy of the network nodes. This choice
allows a better distribution of the transmission energy in the
network. The cluster-head selection algorithm is completely
decentralized. Simulation results show that energy consumption is
significantly reduced compared with the previous clustering-based
routing algorithm for sensor networks.
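A distance- and energy-weighted cluster-head election of the kind this
abstract describes can be sketched as follows; the specific weighting
formula is an illustrative assumption, not the paper's exact rule:

```python
import random

def cluster_head_probability(base_p, d_to_sink, d_max, e_residual, e_max):
    """Hypothetical weighting (not the paper's exact formula): scale the
    base election probability down for nodes far from the sink and up
    for nodes with more residual energy."""
    distance_factor = 1.0 - d_to_sink / d_max   # closer to the sink -> higher
    energy_factor = e_residual / e_max          # more residual energy -> higher
    return base_p * 0.5 * (distance_factor + energy_factor)

def elect_cluster_head(base_p, d, d_max, e, e_max, rng=None):
    """A node elects itself cluster-head with the weighted probability."""
    rng = rng or random.Random()
    return rng.random() < cluster_head_probability(base_p, d, d_max, e, e_max)
```

Because each node evaluates only its own distance and energy, the
election stays fully decentralized, as the abstract requires.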
Abstract: This paper analyzes different techniques for fine-grained security of relational databases with respect to two variables: data accessibility and inference. Data accessibility measures the amount of data available to users after applying a security technique to a table. Inference is the proportion of information leakage after suppressing a cell containing secret data. A row containing a suppressed secret cell can become a security threat if an intruder generates useful information from the related visible information in the same row. This paper measures the data accessibility and inference associated with row-, cell-, and column-level security techniques. Cell-level security offers the greatest data accessibility, as it suppresses secret data only; on the other hand, it carries a high probability of inference. Row- and column-level security techniques have the least data accessibility and inference. This paper introduces the cell-plus-innocent security technique, which utilizes the cell-level security method but also suppresses some innocent data, so that an intruder cannot assume a suppressed cell necessarily contains secret data. Four variations of the technique, namely cell plus innocent 1/4, 2/4, 3/4, and 4/4, have been introduced to suppress innocent data equal to 1/4, 2/4, 3/4, and 4/4 of the amount of true secret data inside the database. Results show that the new technique offers better control over data accessibility and inference than the state-of-the-art security techniques. The paper further discusses combinations of techniques to be used together, and shows that the cell plus innocent 1/4, 2/4, and 3/4 techniques can be used as a replacement for cell-level security.
Abstract: The two-dimensional gel electrophoresis method
(2-DE) is widely used in Proteomics to separate thousands of proteins
in a sample. By comparing the protein expression levels of proteins in
a normal sample with those in a diseased one, it is possible to identify
a meaningful set of marker proteins for the targeted disease. The major
shortcomings of this approach involve inherent noise and irregular
geometric distortions of spots observed in 2-DE images. Various
experimental conditions can be the major causes of these problems. In
the protein analysis of samples, these problems eventually lead to
incorrect conclusions. In order to minimize the influence of these
problems, this paper proposes a partition-based pair extension method
that performs spot-matching on a set of gel images multiple times and
segregates more reliable mapping results which can improve the
accuracy of gel image analysis. The improved accuracy of the
proposed method is analyzed through various experiments on real
2-DE images of human liver tissues.
Abstract: In this paper, we investigate the appearance of the giant component in random subgraphs G(p) of a given large finite graph family Gn = (Vn, En), in which each edge is present independently with probability p. We show that if the graph Gn satisfies a weak isoperimetric inequality and has bounded degree, then the probability p at which G(p) has a giant component of linear order with some constant probability is bounded away from zero and one. In addition, we prove that the probability of an abnormally large order of the giant component decays exponentially. When a contact graph is modeled as Gn, our result is of special interest for the study of the spread of infectious diseases and the identification of communities in various social networks.
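The emergence of a giant component in G(p) can be observed empirically
with a small bond-percolation experiment; the grid graph and union-find
bookkeeping below are a generic illustration, not the paper's model:

```python
import random
from collections import Counter

def largest_component_fraction(n_side, p, seed=0):
    """Bond percolation on an n_side x n_side grid: keep each edge
    independently with probability p, then return the largest connected
    component's share of the nodes (union-find with path halving)."""
    rng = random.Random(seed)
    n = n_side * n_side
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    for r in range(n_side):
        for c in range(n_side):
            v = r * n_side + c
            if c + 1 < n_side and rng.random() < p:   # edge to the right
                union(v, v + 1)
            if r + 1 < n_side and rng.random() < p:   # edge downward
                union(v, v + n_side)
    sizes = Counter(find(v) for v in range(n))
    return max(sizes.values()) / n
```

Sweeping p from low to high values shows the largest component jumping
from a vanishing fraction of nodes to a linear-order (giant) one.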
Abstract: Position-based routing protocols are routing protocols
that use nodes' location information, instead of link information,
for routing. In position-based routing protocols, it is assumed that
the packet's source node has position information about itself, its
neighbors, and the packet's destination node. Greedy is a very
important position-based routing protocol. In one of its variants,
named MFR (Most Forward within Radius), the source node or packet-
forwarding node sends the packet to the one of its neighbors with
the most forward progress towards the destination node (the neighbor
closest to the destination). Using distance as the only decision
metric in Greedy forwarding is not suitable for all conditions. If
the neighbor closest to the destination moves at high speed compared
with the source or intermediate forwarding node, or has very low
remaining battery power, then the packet loss probability increases.
The proposed strategy uses a combination of the metrics distance,
velocity similarity, and power to decide which neighbor to hand the
packet to. Simulation results show that the proposed strategy loses
fewer packets on average than Greedy, and is therefore more reliable.
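A next-hop choice combining the three metrics can be sketched as a
weighted score; the weights and the exact scoring form below are
illustrative assumptions, not the paper's calibrated strategy:

```python
import math

def choose_next_hop(current, dest, neighbors, w=(0.5, 0.3, 0.2)):
    """Forward to the neighbor with the best combined score of
    (a) forward progress toward the destination, (b) velocity
    similarity to the current node, and (c) remaining battery power.
    The weights w and the scoring form are illustrative assumptions."""
    def progress(n):
        return math.dist(current["pos"], dest) - math.dist(n["pos"], dest)
    def vel_sim(n):
        return 1.0 / (1.0 + abs(current["speed"] - n["speed"]))
    max_prog = max(abs(progress(n)) for n in neighbors) or 1.0
    def score(n):
        return (w[0] * progress(n) / max_prog   # normalized forward progress
                + w[1] * vel_sim(n)             # similar speed -> stable link
                + w[2] * n["power"])            # power assumed in [0, 1]
    return max(neighbors, key=score)
```

With these weights, a slightly less advanced neighbor that matches the
sender's speed and has ample battery can outrank the geometrically
closest neighbor to the destination, which is exactly the behavior the
abstract argues for.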
Abstract: In this paper, a Markovian risk model with two types of claims is considered. In such a risk model, the occurrences of the two claim types are described by two point processes {Ni(t), t ≥ 0}, i = 1, 2, where Ni(t) is the number of jumps during the interval (0, t] for the Markov jump process {Xi(t), t ≥ 0}. The ruin probability Ψ(u) of a company facing such a risk model is mainly discussed. An integral equation satisfied by the ruin probability Ψ(u) is obtained, and bounds for the convergence rate of the ruin probability Ψ(u) are given using the key renewal theorem.
Abstract: We study how the outcome of evolutionary dynamics on
graphs depends on randomness in the graph structure. We gradually
change the underlying graph from completely regular (e.g. a square
lattice) to completely random. We find that the fixation probability
increases as the randomness increases; nevertheless, the increase is
not significant, and thus the fixation probability can be estimated
by the known formulas for the underlying regular graphs.
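The fixation probability discussed above is commonly estimated by
simulating the Moran birth-death process on the graph; the sketch below
is a generic such simulation under assumed parameters, not the paper's
specific setup:

```python
import random

def fixation_trial(adj, fitness_mutant, rng):
    """One run of the Moran birth-death process on a graph (adjacency
    list): a single mutant starts at node 0; return True if the mutant
    type eventually takes over the whole population."""
    n = len(adj)
    mutant = {0}
    while 0 < len(mutant) < n:
        # pick a reproducing node with probability proportional to fitness
        weights = [fitness_mutant if v in mutant else 1.0 for v in range(n)]
        parent = rng.choices(range(n), weights=weights)[0]
        child = rng.choice(adj[parent])   # offspring replaces a random neighbor
        if parent in mutant:
            mutant.add(child)
        else:
            mutant.discard(child)
    return len(mutant) == n

def fixation_probability(adj, fitness_mutant=1.1, trials=1000, seed=42):
    rng = random.Random(seed)
    return sum(fixation_trial(adj, fitness_mutant, rng)
               for _ in range(trials)) / trials
```

For a neutral mutant (fitness 1.0) on a regular graph of n nodes the
estimate should hover near the known value 1/n, which makes a handy
sanity check before varying the graph's randomness.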
Abstract: In this paper, we use an M/G/C/C state-dependent
queuing model within a complex network topology to determine the
different performance measures for pedestrian traffic flow. The
occupants in this network topology need to go through some source
corridors, from which they can choose their suitable exiting
corridors. The performance measures were calculated using arrival
rates that maximize the throughputs of the source corridors. The
results indicate that, in order to increase the throughput of the
network, the flow direction of pedestrians through the corridors has
to be restricted and the arrival rates to the source corridors need
to be controlled.
Abstract: In this paper, we consider a risk model involving two independent classes of insurance risks and random premium income. We assume that the premium income process is a Poisson process, and that the claim number processes are independent Poisson and generalized Erlang(n) processes, respectively. Both the Gerber-Shiu functions with zero initial surplus and the probability generating functions (p.g.f.) of the Gerber-Shiu functions are obtained.
Abstract: Fault current levels through electric devices have a
significant impact on failure probability. Increased fault currents
exceed the rated capacity of circuit breakers and switching
equipment and change the operating characteristics of overcurrent
relays. The SFCL (Superconducting Fault Current Limiter) has arisen
as a new alternative for solving these problems. The fault current
reduction differs depending on the installed location, so the
location of the SFCL is very important. Also, since the SFCL
decreases the fault current, it prevents surrounding protective
devices from being exposed to the fault current, which in turn
brings a change in reliability. In this paper, we propose a method
that determines the optimal location for installing an SFCL in a
power system. In addition, the reliability of the power system in
which the SFCL is installed is evaluated. The efficiency and
effectiveness of this method are shown by numerical examples, and
the reliability indices are evaluated at each load point. These
results show the change in system reliability when an SFCL is
installed.
Abstract: The quality-of-service (QoS) support for wireless
LANs has been a hot research topic during the past few years. In this paper, two QoS provisioning mechanisms are proposed for employment in the 802.11e EDCA MAC scheme. First, the proposed call
admission control mechanism can not only guarantee the QoS of higher-priority existing connections but also provide a minimum reserved bandwidth for lower-priority traffic flows. In addition, the adaptive contention window adjustment mechanism can adjust the
maximum and minimum contention window sizes dynamically according to the number of existing connections of each access category (AC). The collision
probability as well as the packet delay will thus be reduced effectively.
Performance results via simulations have revealed the enhanced QoS property achieved by employing these two mechanisms.
Abstract: The scheduling algorithm is a key technology in satellite
switching systems with input buffers. In this paper, a new scheduling
algorithm and its realization are proposed. Based on a Crossbar
switching fabric, the algorithm adopts a serial scheduling strategy
and adjusts the output port arbitration strategy for better fairness
across ports; consequently, it increases the matching probability.
The algorithm can greatly reduce the scheduling delay and the cell
loss rate. Analysis and simulation results in OPNET show that the
proposed algorithm performs better than others in average delay and
cell loss rate, with equivalent complexity. On the basis of these
results, a hardware realization and simulation based on an FPGA are
completed, which validate the feasibility of the new scheduling
algorithm.
Abstract: The use of buffer thresholds, blocking and adequate
service strategies are well-known techniques for computer networks
traffic congestion control. This motivates the study of series queues
with blocking, feedback (service under Head of Line (HoL) priority
discipline) and finite capacity buffers with thresholds. In this paper,
the external traffic is modelled using the Poisson process and the
service times have been modelled using the exponential distribution.
We consider a three-station network with two finite buffers, for
which a set of thresholds (tm1 and tm2) is defined. This computer
network behaves as follows. A task which finishes its service at
station B is sent back to station A for re-processing with
probability o. When the number of tasks in the second buffer exceeds
the threshold tm2 and the number of tasks in the first buffer is less
than tm1, the fed-back task is served under the HoL priority
discipline. In the opposite case, a "no two priority services in
succession" procedure (preventing a possible overflow of the first
buffer) is applied to fed-back tasks. Using an open Markovian
queuing schema with blocking, priority feedback service and
thresholds, a closed-form, cost-effective analytical solution is
obtained. The model of servers linked in series is very accurate. It
is derived directly from a two-dimensional state graph and a set of
steady-state equations, followed by calculations of the main
measures of effectiveness. Consequently, efficient expressions with
low computational cost are determined. Based on numerical
experiments and the collected results, we conclude that the proposed
model with blocking, feedback and thresholds can provide accurate
performance estimates for networks linked in series.
Abstract: This paper uses p-tolerance with the lowest posterior
loss, the quadratic loss function, the average length criterion, the
average coverage criterion, and the worst outcome criterion to
compute the sample size for estimating the proportion in a Binomial
probability function with a Beta prior distribution. The proposed
methodology is examined, and its effectiveness is shown.
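As a simplified stand-in for the criteria this abstract lists, the
Beta-Binomial sample size can be found from the closed-form
pre-posterior expected variance E[Var] = ab / ((a+b)(a+b+1)(a+b+n)),
which follows from the law of total variance; this variance criterion
is an illustrative substitute, not one of the paper's exact criteria:

```python
def sample_size_by_posterior_variance(a, b, target_var):
    """Smallest n such that the pre-posterior expected posterior variance
    of a Beta(a, b)-Binomial proportion drops below target_var.
    Closed form under the prior predictive:
        E[Var] = a*b / ((a+b) * (a+b+1) * (a+b+n))."""
    n = 0
    while a * b / ((a + b) * (a + b + 1) * (a + b + n)) > target_var:
        n += 1
    return n

# With a uniform Beta(1, 1) prior and a target variance of 0.01,
# the required sample size is n = 15.
print(sample_size_by_posterior_variance(1, 1, 0.01))  # -> 15
```

Length- and coverage-based criteria replace the variance test with the
width or coverage of posterior intervals but follow the same
search-over-n pattern.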
Abstract: Our objective in this paper is to propose an approach
capable of clustering web messages. The clustering is carried out by
assigning, with a certain probability, texts written by the same web
user to the same cluster based on Stylometric features and using
fuzzy clustering algorithms. The focus of the present work is on
comparing the most popular algorithms in fuzzy clustering theory,
namely Fuzzy C-Means, Possibilistic C-Means and Fuzzy-Possibilistic
C-Means.
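The standard Fuzzy C-Means algorithm at the core of this comparison
alternates membership updates and fuzzy centroid updates; the sketch
below is the textbook algorithm on toy 2-D points, not the authors'
Stylometric-feature pipeline:

```python
import math
import random

def fuzzy_c_means(points, c=2, m=2.0, iters=50, seed=0):
    """Textbook Fuzzy C-Means: alternate fuzzy centroid updates with
    membership updates u_kj = 1 / sum_i (d_kj / d_ki)^(2/(m-1))."""
    rng = random.Random(seed)
    n, dim = len(points), len(points[0])
    # random initial memberships, each row normalized over the c clusters
    u = []
    for _ in range(n):
        row = [rng.random() + 1e-9 for _ in range(c)]
        s = sum(row)
        u.append([x / s for x in row])
    for _ in range(iters):
        # fuzzy centroids, weighted by memberships raised to fuzzifier m
        centers = []
        for j in range(c):
            w = [u[k][j] ** m for k in range(n)]
            tot = sum(w)
            centers.append(tuple(sum(w[k] * points[k][d] for k in range(n)) / tot
                                 for d in range(dim)))
        # membership update from distances to the current centers
        for k in range(n):
            dists = [max(math.dist(points[k], centers[j]), 1e-12)
                     for j in range(c)]
            for j in range(c):
                u[k][j] = 1.0 / sum((dists[j] / dists[i]) ** (2.0 / (m - 1.0))
                                    for i in range(c))
    return u, centers
```

Possibilistic C-Means relaxes the constraint that each row of u sums
to one, which is the main axis of the comparison the abstract
describes.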
Abstract: In this paper a new method is suggested for
distributed data mining using probability patterns. These patterns
use decision trees and decision graphs. The patterns are required to
be valid, novel, useful, and understandable. Considering a set of
functions, the system reaches a good pattern or better objectives.
Using the suggested method, we are able to extract useful
information from massive, multi-relational databases.