Abstract: Deaths from cardiovascular diseases have decreased substantially over the past two decades, largely as a result of advances in acute care and cardiac surgery. These developments have produced a growing population of patients who have survived a myocardial infarction. Such patients need continuous monitoring so that treatment can be initiated within the crucial golden hour. The available conventional monitoring methods mostly perform offline analysis and restrict the mobility of these patients to a hospital or room. Hence the aim of this paper is to design a Portable Cardiac Telemedicine System that helps patients regain their independence and return to an active work schedule, thereby improving their psychological well-being. The portable telemedicine system consists of a Wearable ECG Transmitter (WET) and a slightly modified mobile phone with an inbuilt ECG analyzer. The WET is placed on the body of the patient and continuously acquires the ECG signals of the high-risk cardiac patient, who can move around freely. The WET transmits the ECG to the patient's Bluetooth-enabled mobile phone using Bluetooth technology. The ECG analyzer built into the mobile phone continuously analyzes the heartbeats derived from the received ECG signals. In case of any panic condition, the mobile phone alerts the patient's caretaker by an SMS and initiates the transmission of a sample ECG signal to the doctor via the mobile network.
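The abstract does not specify the analyzer's decision rule, but the panic check it describes could be sketched as a simple rate test on R-R intervals; the 40/150 bpm thresholds below are illustrative assumptions only, not the paper's values.

```python
# Hypothetical sketch of the phone-side panic check: derive beats-per-minute
# from R-R intervals and flag rates outside an assumed safe band.

def heart_rate_bpm(rr_intervals_s):
    """Mean heart rate from a list of R-R intervals in seconds."""
    mean_rr = sum(rr_intervals_s) / len(rr_intervals_s)
    return 60.0 / mean_rr

def is_panic(rr_intervals_s, low_bpm=40, high_bpm=150):
    """True if the derived rate suggests bradycardia or tachycardia.
    The 40/150 bpm thresholds are illustrative assumptions."""
    bpm = heart_rate_bpm(rr_intervals_s)
    return bpm < low_bpm or bpm > high_bpm

print(is_panic([0.8, 0.82, 0.79]))   # ~75 bpm -> False
print(is_panic([0.35, 0.34, 0.36]))  # ~171 bpm -> True
```

In the described system, a True result would trigger the SMS alert and the sample-ECG upload.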
Abstract: In ad hoc networks, the main issue in protocol design is quality of service, whereas in wireless sensor networks the main constraint is the limited energy of the sensors. Protocols that minimize the power consumption of the sensors are therefore of particular interest in wireless sensor networks. One approach to reducing energy consumption in wireless sensor networks is to reduce the number of packets transmitted in the network. Data aggregation, which combines related data and prevents the transmission of redundant packets, can be effective in reducing the number of transmitted packets. Since processing information consumes less power than transmitting it, data aggregation is of great importance and is used in many protocols [5]. One data aggregation technique is to use a data aggregation tree; however, finding an optimal data aggregation tree for collecting data in a network with one sink is an NP-hard problem. In the data aggregation technique, related information packets are combined at intermediate nodes into one packet, so the number of packets transmitted in the network is reduced; consequently, less energy is consumed, which ultimately improves the longevity of the network. Heuristic methods are used to solve this NP-hard problem, and one such optimization method is Simulated Annealing. In this article, we propose a new method for building the data collection tree in wireless sensor networks using the Simulated Annealing algorithm, and we evaluate its efficiency against a Genetic Algorithm.
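A minimal sketch of the idea (not the paper's exact formulation): represent the aggregation tree as a parent array rooted at the sink, use total squared link distance as an energy proxy, and let simulated annealing re-parent one random node per move without creating a cycle.

```python
# Simulated-annealing sketch for a data aggregation tree.
# Assumptions: node 0 is the sink, cost = sum of squared link distances
# (a stand-in for transmission energy), geometric cooling schedule.
import math, random

random.seed(1)
pos = [(random.random(), random.random()) for _ in range(12)]  # node 0 = sink

def cost(parent):
    return sum(math.dist(pos[i], pos[parent[i]]) ** 2
               for i in range(1, len(pos)))

def creates_cycle(parent, node, new_parent):
    p = new_parent
    while p != 0:                  # walk towards the sink
        if p == node:
            return True
        p = parent[p]
    return False

parent = [0] * len(pos)            # start from a star centred on the sink
cur = best = cost(parent)
t = 1.0
while t > 1e-3:
    i = random.randrange(1, len(pos))
    j = random.randrange(0, len(pos))
    if j != i and not creates_cycle(parent, i, j):
        old = parent[i]
        parent[i] = j
        c = cost(parent)
        if c < cur or random.random() < math.exp((cur - c) / t):
            cur = c                # accept (possibly worse) move
            best = min(best, cur)
        else:
            parent[i] = old        # reject and roll back
    t *= 0.999                     # geometric cooling
print(best <= cost([0] * len(pos)))  # True: never worse than the initial star
```

The Metropolis acceptance of occasionally worse trees is what lets the search escape local minima, which a greedy tree-building heuristic cannot do.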
Abstract: Image compression using artificial neural networks is a topic where research is being carried out in various directions towards achieving a generalized and economical network. Feedforward networks using the back-propagation algorithm, which adopts the method of steepest descent for error minimization, are popular and widely adopted, and are directly applied to image compression. Various research works are directed towards achieving quick convergence of the network without loss of quality in the restored image. In general, the images used for compression are of different types, such as dark images and high-intensity images. When these images are compressed using a back-propagation network, the network takes a long time to converge, because the given image may contain a number of distinct gray levels with only narrow differences from their neighboring pixels. If the gray levels of the pixels in an image and their neighbors are mapped in such a way that the difference in gray level between a pixel and its neighbors is minimized, then both the compression ratio and the convergence of the network can be improved. To achieve this, a cumulative distribution function is estimated for the image and used to map the image pixels. When the mapped image pixels are used, the back-propagation neural network yields a high compression ratio and converges quickly.
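The CDF-based pixel mapping described above is essentially histogram equalization: each gray level is passed through the image's empirical cumulative distribution function and scaled back to the gray-level range. A minimal sketch on an 8-level "image":

```python
# Histogram-equalization-style mapping through the empirical CDF.
def cdf_map(pixels, levels=8):
    """Map each gray level through the image's normalized CDF."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(pixels)
    # scale the normalized CDF back to the gray-level range
    return [round((cdf[p] / n) * (levels - 1)) for p in pixels]

img = [0, 0, 1, 1, 1, 2, 7, 7]
print(cdf_map(img))  # [2, 2, 4, 4, 4, 5, 7, 7]
```

After the mapping, the occupied gray levels are spread more evenly, which is the property the abstract credits for the faster back-propagation convergence.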
Abstract: The aim of this article is to explain how the features of attacks can be extracted from packets. It also explains how vectors can be built and then applied to the input of any analysis stage. For the analysis, the work deploys a feedforward back-propagation neural network to act as a misuse intrusion detection system, using ten types of attacks as examples for training and testing the neural network. It explains how the packets are analyzed to extract features. The work shows how selecting the right features, building correct vectors, and correctly identifying the training method and the number of nodes in the hidden layer of a neural network affect the accuracy of the system. In addition, the work shows how to obtain optimal weight values and use them to initialize the artificial neural network.
Abstract: This paper presents an intrusion detection system based on a hybrid neural network model combining RBF and Elman networks. It is used for both anomaly detection and misuse detection. The model has a memory function and can effectively detect discrete and temporally related aggressive behavior. The RBF network is a real-time pattern classifier, while the Elman network provides memory of former events. The intrusion detection system based on this hybrid model is evaluated on the DARPA data set, and ROC curves are used to display the test results intuitively. The experiments show that this hybrid-model intrusion detection system can effectively improve the detection rate and reduce both the false-alarm rate and the miss rate.
Abstract: In 1990 [1] the subband-DFT (SB-DFT) technique was proposed. This technique used Hadamard filters in the decomposition step to split the input sequence into low- and highpass sequences. In the next step, either two DFTs are applied to both bands to compute the full-band DFT, or one DFT is applied to one of the two bands to compute an approximate DFT. A combination network with correction factors is then applied after the DFTs. Another approach was proposed in 1997 [2], using a special discrete wavelet transform (DWT) to compute the discrete Fourier transform (DFT). In the first step of that algorithm, the input sequence is decomposed, in a manner similar to the SB-DFT, into two sequences using wavelet decomposition with Haar filters. The second step is to perform DFTs on both bands to obtain the full-band DFT, or to obtain a fast approximate DFT by implementing pruning at both the input and output sides. In this paper, the wavelet-based DFT (W-DFT) with Haar filters is interpreted as the SB-DFT with Hadamard filters; the only difference is a constant factor in the combination network. This result is very important for completing the analysis of the W-DFT, since all the results concerning accuracy and approximation errors in the SB-DFT become applicable. An application example in spectral analysis is given for both the SB-DFT and the W-DFT (with different filters). The adaptive capability of the SB-DFT is included in the W-DFT algorithm to select the band of most energy as the band to be computed. Finally, the W-DFT is extended to the two-dimensional case, and an application in image transformation is given using two different types of wavelet filters.
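The Hadamard-filter decomposition and its combination network can be checked numerically. With sum band a[n] = x[2n] + x[2n+1] and difference band b[n] = x[2n] - x[2n+1], the standard decimation identity gives X[k] = ((1+W^k)/2)·A[k] + ((1-W^k)/2)·B[k] with W = e^(-j2π/N); this is one common form of the correction factors, and the Haar version differs only in a constant normalization, consistent with the constant-factor result above.

```python
# Verify that two half-length DFTs on the Hadamard subbands, plus the
# correction factors (1 +/- W^k)/2, reproduce the full-band DFT exactly.
import cmath

def dft(x):
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * m * k / n) for m in range(n))
            for k in range(n)]

def subband_dft(x):
    """Full-band DFT via Hadamard low/high subbands and a combination network."""
    n = len(x)
    a = [x[2 * i] + x[2 * i + 1] for i in range(n // 2)]  # lowpass (sum) band
    b = [x[2 * i] - x[2 * i + 1] for i in range(n // 2)]  # highpass (difference) band
    A, B = dft(a), dft(b)        # two half-length DFTs
    out = []
    for k in range(n):
        w = cmath.exp(-2j * cmath.pi * k / n)
        # A and B are periodic with period n/2
        out.append((1 + w) / 2 * A[k % (n // 2)] + (1 - w) / 2 * B[k % (n // 2)])
    return out

x = [1.0, 2.0, 0.5, -1.0, 3.0, 0.0, -2.0, 1.5]
err = max(abs(p - q) for p, q in zip(subband_dft(x), dft(x)))
print(err < 1e-9)  # True: the subband combination reproduces the full DFT
```

The approximate SB-DFT discussed in the abstract corresponds to keeping only the A-band (or the band of most energy) term of this combination.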
Abstract: A study of the performance of TCP Vegas versus different TCP variants in homogeneous and heterogeneous wired networks is performed via simulation experiments using the network simulator (ns-2). This performance evaluation provides a comparison baseline for evaluating enhanced TCP Vegas in wired and wireless networks. In the homogeneous network, the performance of TCP Tahoe, TCP Reno, TCP NewReno, TCP Vegas and TCP SACK is analyzed. In the heterogeneous network, the performance of TCP Vegas against the other TCP variants is analyzed. TCP Vegas outperforms the other TCP variants in the homogeneous wired network; however, it achieves unfair throughput in the heterogeneous wired network.
Abstract: Virtually all existing networked system management tools use a Manager/Agent paradigm: distributed agents are deployed on managed devices to collect local information and report it back to some management unit. Even those that use standard protocols such as SNMP fall into this model. Using a standard protocol has the advantage of interoperability among devices from different vendors; however, it may not be able to provide the customized information needed to satisfy specific management needs. In this dissertation work, different approaches are used to collect information about the devices attached to a Local Area Network. An SNMP-aware application is developed that manages the discovery procedure and acts as the data collector.
Abstract: In this paper, a neural network technique is applied to classifying media in real time while a projectile is penetrating through them. A laboratory-scale penetration setup was built for the experiment. The features used as the network inputs were extracted from the acceleration of the penetrator. 6000 sets of features from a single penetration with known media and status were used to train the neural network. The trained system was tested on 30 different penetration experiments. The system produced an accuracy of 100% on the training data set and a precision of 99% on the test data from the 30 tests.
Abstract: It is expected that the ubiquitous era will come soon. A ubiquitous environment has peer-to-peer and nomadic features. Such features can be represented by peer-to-peer (P2P) systems and mobile ad-hoc networks (MANETs). The features of P2P systems and MANETs are similar, which makes implementing P2P systems in MANET environments appealing. It has been shown, however, that P2P systems designed for wired networks do not perform satisfactorily in mobile ad-hoc environments. This paper therefore proposes a method to improve P2P performance using a cross-layer design and the goodness of a node as a peer. The proposed method uses a routing metric as well as a P2P metric to choose favorable peers to connect to. It also uses a proactive approach for distributing peer information. According to the simulation results, the proposed method provides a higher query success rate, shorter query response time and lower energy consumption by constructing an efficient overlay network.
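The cross-layer peer choice could look like the following hypothetical scoring sketch; the specific metrics (hop count as the routing metric, a generic "goodness" value as the P2P metric) and the weights are illustrative assumptions, not the paper's definitions.

```python
# Hypothetical cross-layer peer selection: combine a routing-layer metric
# (hop count) with a P2P-layer metric (peer goodness, e.g. content or uptime).
def peer_score(hops, goodness, w_route=1.0, w_p2p=2.0):
    """Lower is better: nearby, good peers win."""
    return w_route * hops - w_p2p * goodness

# candidate peer -> (hops away, goodness in [0, 1])
candidates = {"p1": (4, 0.9), "p2": (1, 0.2), "p3": (2, 0.8)}
best = min(candidates, key=lambda p: peer_score(*candidates[p]))
print(best)  # p3: close enough and a good peer
```

Favoring nearby peers reduces the underlying MANET routing cost per query, which is the intuition behind the reported energy savings.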
Abstract: Due to the insufficient frequency band and the tremendous growth in mobile users, complex computation is needed for the use of resources. Long-distance communication began with the introduction of telegraphs and simple coded pulses, which were used to transmit short messages. Since then, numerous advances have rendered the reliable transfer of information both easier and quicker. A wireless network is any type of computer network whose interconnections between nodes are implemented without the use of wires, and it is commonly associated with a telecommunications network. Wireless networks can be broadly categorized into infrastructure networks and infrastructure-less networks. An infrastructure network is one in which a base station serves the mobile users; an infrastructure-less network is one in which no infrastructure is available to serve the mobile users. Networks of the latter kind are also known as mobile ad-hoc networks. In this paper we simulate different scenarios with protocols such as AODV and DSR, and present results for throughput, delay and received traffic in the given scenarios.
Abstract: This paper proposes an implementation of the directed diffusion paradigm, which aids in studying the paradigm's operation, and evaluates its behavior under this implementation. Directed diffusion is evaluated with respect to loss percentage, lifetime, end-to-end delay, and throughput. From these evaluations, suggestions and modifications are proposed to improve the behavior of directed diffusion with respect to these metrics. The proposed modifications reflect the effect of local path repair by introducing a technique called Loop-free Local Path Repair (LLPR), which improves the behavior of directed diffusion, especially with respect to packet loss percentage, by about 92.69%. LLPR also improves the throughput and end-to-end delay by about 55.31% and 14.06% respectively, while the lifetime decreases by about 29.79%.
Abstract: Over the years, many implementations have been proposed for solving IA networks. These implementations are concerned with finding a solution efficiently; the primary goals of our implementation are simplicity and ease of use. We present an IA network implementation based on finite-domain non-binary CSPs and constraint logic programming. The implementation has a GUI which permits the drawing of arbitrary IA networks. We then show how the implementation can be extended to find all the solutions to an IA network. One application of finding all the solutions is solving probabilistic IA networks.
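The extension from one solution to all solutions can be illustrated on a tiny finite-domain CSP (this is a generic enumeration sketch, not the paper's constraint-logic-programming system, and the variables and constraints are invented for illustration):

```python
# Enumerate *all* solutions of a small finite-domain non-binary CSP.
from itertools import product

variables = ["X", "Y", "Z"]
domain = [1, 2, 3]
constraints = [
    lambda X, Y, Z: X < Y,        # binary constraint
    lambda X, Y, Z: X + Y == Z,   # non-binary: relates three variables
]

solutions = [assign for assign in product(domain, repeat=len(variables))
             if all(c(*assign) for c in constraints)]
print(solutions)  # [(1, 2, 3)]
```

Having the full solution set is exactly what a probabilistic IA network needs, since probabilities are computed over all consistent scenarios rather than a single one.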
Abstract: The security of power systems against malicious cyber-physical data attacks has become an important issue. An adversary attempts to manipulate the information structure of the power system and inject malicious data to deviate the state variables while evading existing detection techniques based on the residual test. The solutions proposed in the literature are capable of immunizing the power system against false data injection, but they might be too costly and physically impractical in an expansive distribution network. To this end, we define an algebraic condition under which a trustworthy power system evades malicious data injection. The proposed protection scheme secures the power system by deterministically reconfiguring the information structure and the corresponding residual test. More importantly, it does not require any physical effort at either the microgrid or the network level. An identification scheme for finding the meters under attack is proposed as well. Finally, the well-known IEEE 30-bus system is adopted to demonstrate the effectiveness of the proposed schemes.
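The residual test the abstract refers to can be demonstrated on a toy DC state-estimation model (the 3-measurement, 2-state H below is illustrative, not from the paper): with z = Hx + e, a false data injection a = Hc shifts the state estimate by c but leaves the residual unchanged, which is why it evades detection.

```python
# Toy least-squares state estimation and residual test, showing that an
# injection of the form a = H*c is invisible to the residual check.

H = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]

def estimate(z):
    """Least-squares estimate for this fixed H via the normal equations.
    H^T H = [[2, 1], [1, 2]]; its inverse is (1/3) * [[2, -1], [-1, 2]]."""
    hty = [z[0] + z[2], z[1] + z[2]]
    return [(2 * hty[0] - hty[1]) / 3, (-hty[0] + 2 * hty[1]) / 3]

def residual_norm(z):
    x = estimate(z)
    r = [z[i] - sum(H[i][j] * x[j] for j in range(2)) for i in range(3)]
    return sum(v * v for v in r) ** 0.5

z = [1.0, 2.0, 3.1]                       # honest measurements (small noise)
c = [0.5, -0.2]                           # attacker's chosen state deviation
a = [H[i][0] * c[0] + H[i][1] * c[1] for i in range(3)]
z_attacked = [zi + ai for zi, ai in zip(z, a)]

print(abs(residual_norm(z) - residual_norm(z_attacked)) < 1e-9)  # True
```

Changing which meters feed the estimator changes H, which is the intuition behind securing the system by reconfiguring the information structure rather than the physical network.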
Abstract: In this paper we consider a one-dimensional random geometric graph process with the inter-nodal gaps evolving according to an exponential AR(1) process. The transition probability matrix and stationary distribution are derived for the Markov chains concerning connectivity and the number of components. We analyze the hitting time regarding disconnectivity. In addition to dynamical properties, we also study topological properties of static snapshots. We obtain the degree distributions as well as precise asymptotic bounds and a strong law of large numbers for the connectivity threshold distance and the largest nearest-neighbor distance, amongst others. Both exact results and limit theorems are provided in this paper.
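The model can be simulated directly. The sketch below assumes the Gaver-Lewis EAR(1) form for the gaps (X_n = ρX_{n-1} plus an exponential innovation with probability 1-ρ, which keeps the marginal exponential); the paper's exact parametrization may differ. The graph is connected when every inter-nodal gap is at most the threshold distance r.

```python
# Monte Carlo sketch of a 1-D random geometric graph with EAR(1) gaps.
import random

random.seed(7)

def ear1_gaps(n, rho=0.5, lam=1.0):
    """n inter-nodal gaps from an exponential AR(1) (Gaver-Lewis EAR(1))."""
    x = random.expovariate(lam)          # stationary start: Exp(lam)
    gaps = [x]
    for _ in range(n - 1):
        eps = 0.0 if random.random() < rho else random.expovariate(lam)
        x = rho * x + eps
        gaps.append(x)
    return gaps

def connected(gaps, r):
    """Connected iff no gap exceeds the threshold distance r."""
    return max(gaps) <= r

trials = 2000
p_hat = sum(connected(ear1_gaps(10), r=2.0) for _ in range(trials)) / trials
print(0.0 < p_hat < 1.0)  # empirical connectivity probability lies in (0, 1)
```

Tracking the connected/disconnected state of such runs over time is what the transition matrix and hitting-time analysis in the abstract formalize.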
Abstract: This paper proposes a Particle Swarm Optimization (PSO) based technique for the optimal allocation of Distributed Generation (DG) units in power systems. Our aim is to decide the optimal number, type, size and location of DG units for voltage profile improvement and power loss reduction in a distribution network. Two types of DG are considered, and a distribution load flow is used to calculate the exact loss. The load flow algorithm is combined with PSO and iterated until acceptable results are obtained. The suggested method is implemented in MATLAB. Test results indicate that the PSO method obtains better results than a simple heuristic search method on the 30-bus and 33-bus radial distribution systems. It obtains the maximum loss reduction for each of the two types of optimally placed multiple DGs; moreover, voltage profile improvement is achieved.
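The PSO loop itself can be sketched minimally. Here the fitness is a stand-in sphere function; in the paper the fitness would instead come from the distribution load flow (loss plus voltage-profile terms) and each particle would encode the DG sizes and locations. The inertia and acceleration coefficients below are common textbook choices, not the paper's settings.

```python
# Minimal particle swarm optimization sketch on a stand-in objective.
import random

random.seed(0)

def fitness(x):                        # placeholder for the load-flow loss
    return sum(v * v for v in x)

dim, n_particles, iters = 3, 20, 100
w, c1, c2 = 0.7, 1.5, 1.5              # inertia and acceleration coefficients

X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
V = [[0.0] * dim for _ in range(n_particles)]
pbest = [x[:] for x in X]              # each particle's best position
gbest = min(pbest, key=fitness)[:]     # swarm's best position

for _ in range(iters):
    for i in range(n_particles):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            V[i][d] = (w * V[i][d]
                       + c1 * r1 * (pbest[i][d] - X[i][d])
                       + c2 * r2 * (gbest[d] - X[i][d]))
            X[i][d] += V[i][d]
        if fitness(X[i]) < fitness(pbest[i]):
            pbest[i] = X[i][:]
            if fitness(pbest[i]) < fitness(gbest):
                gbest = pbest[i][:]

print(fitness(gbest))  # best loss found; the optimum of this objective is 0
```

In the paper's setting, each fitness evaluation would run a load flow, so the swarm size and iteration count trade solution quality against load-flow computation time.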
Abstract: Wireless sensor networks have a wide spectrum of civil and military applications that call for secure communication, such as terrorist tracking and target surveillance in hostile environments. For secure communication in these application areas, we propose a method for generating a hierarchical key structure for efficient group key management. In this paper, we apply the A* algorithm to generating a hierarchical key structure, considering historical data on the ratio of additions and evictions of sensor nodes at the location where the nodes are deployed. The generated key tree structure provides an efficient way of managing the group key in terms of energy consumption when addition and eviction events occur. Using the historical data, the A* algorithm tries to minimize the number of messages needed for group key management. Experiments with the tree show the efficiency of the proposed method.
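The search itself is standard A*. The sketch below runs it on a plain weighted graph with an assumed admissible heuristic; in the paper the nodes would be partial key-tree structures and the cost would come from the message counts implied by the add/evict history.

```python
# Generic A* search: f(n) = g(n) + h(n) with an admissible heuristic h.
import heapq

graph = {"S": [("A", 1), ("B", 4)],
         "A": [("B", 2), ("G", 6)],
         "B": [("G", 1)],
         "G": []}
h = {"S": 3, "A": 2, "B": 1, "G": 0}   # admissible heuristic (assumed values)

def a_star(start, goal):
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    seen = {}                                     # best g found per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if seen.get(node, float("inf")) <= g:
            continue
        seen[node] = g
        for nxt, cost in graph[node]:
            heapq.heappush(frontier,
                           (g + cost + h[nxt], g + cost, nxt, path + [nxt]))
    return None

print(a_star("S", "G"))  # (4, ['S', 'A', 'B', 'G'])
```

With an admissible heuristic, A* returns the minimum-cost path, which here stands in for the key-tree construction with the fewest expected rekeying messages.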
Abstract: IEEE 802.11e is the enhanced version of the IEEE 802.11 MAC dedicated to providing Quality of Service in wireless networks. It supports QoS through service differentiation and prioritization mechanisms: data traffic receives different priorities based on QoS requirements. Fundamentally, applications are divided into four Access Categories (ACs). Each AC has its own buffer queue and behaves as an independent backoff entity, and every frame with a specific data-traffic priority is assigned to one of these access categories. IEEE 802.11e EDCA (Enhanced Distributed Channel Access) is designed to enhance the IEEE 802.11 DCF (Distributed Coordination Function) mechanisms by providing a distributed access method that can support service differentiation among different classes of traffic. The performance of the IEEE 802.11e MAC layer with different ACs is evaluated to understand the actual benefits deriving from the MAC enhancements.
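The assignment of frames to access categories follows the usual 802.1D user-priority grouping; the sketch below encodes that standard mapping (consult the 802.11e text for the normative table).

```python
# EDCA user-priority to Access Category mapping (per the usual 802.1D grouping).
AC_OF_PRIORITY = {
    1: "AC_BK", 2: "AC_BK",   # background
    0: "AC_BE", 3: "AC_BE",   # best effort
    4: "AC_VI", 5: "AC_VI",   # video
    6: "AC_VO", 7: "AC_VO",   # voice
}

def classify(frame_priority):
    """Assign a frame to its access category, i.e. its backoff queue."""
    return AC_OF_PRIORITY[frame_priority]

print(classify(6))  # AC_VO
```

Each of the four categories then contends with its own contention-window and AIFS parameters, which is what produces the service differentiation evaluated in the paper.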
Abstract: In this paper, we consider a multi-user multiple-input multiple-output (MU-MIMO) based cooperative reporting system for a cognitive radio network. In the reporting network, the secondary users forward the primary user data to a common fusion center (FC). The FC is equipped with linear equalizers and an energy detector to make the decision about the spectrum. The primary user data are considered to be a digital video broadcasting - terrestrial (DVB-T) signal. The sensing channel and the reporting channel are assumed to be additive white Gaussian noise and independent identically distributed Rayleigh fading, respectively. We analyze the detection probability of the MU-MIMO system with linear equalizers and arrive at a closed-form expression for the average detection probability. The system performance is also investigated under various MIMO scenarios through Monte Carlo simulations.
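The decision device at the FC is an energy detector: compare the average energy of the (equalized) samples with a threshold. The sketch below is a minimal illustration; the threshold, sample count and the Gaussian stand-in for the DVB-T signal are assumptions for demonstration only.

```python
# Minimal energy-detector sketch: H1 (signal present) vs H0 (noise only).
import random

random.seed(3)

def energy(samples):
    return sum(s * s for s in samples) / len(samples)

def sense(samples, threshold=1.5):
    """True = primary user present (H1), False = spectrum free (H0)."""
    return energy(samples) > threshold

n = 2000
noise = [random.gauss(0, 1) for _ in range(n)]          # H0: unit-variance noise
signal = [random.gauss(0, 1) + 1.2 for _ in range(n)]   # H1: signal stand-in
print(sense(noise), sense(signal))  # False True
```

The detection probability analyzed in the paper is the probability that this comparison returns True under H1, averaged over the Rayleigh-faded reporting channel after equalization.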
Abstract: The continuously growing needs of Internet applications that transmit massive amounts of data have led to the emergence of high-speed networks. Data transfer must take place without congestion, so feedback parameters must be sent from the receiver to the sender in order to restrict the sending rate. Although TCP tries to avoid congestion by restricting the sending rate and window size, it never informs the sender of the capacity available for data, and it halves the window size at the time of congestion, resulting in decreased throughput, low utilization of the bandwidth and maximum delay. In this paper, the XCP protocol is used, and feedback parameters are calculated based on the arrival rate, service rate, traffic rate and queue size; the receiver thus informs the sender about the throughput, the capacity of the data to be sent and the window size adjustment. This avoids drastic decreases in window size and allows a better increase in the sending rate, so there is a continuous flow of data without congestion. As a result, there is a maximum increase in throughput, high utilization of the bandwidth and minimal delay. The results of the proposed work are presented as graphs of throughput, delay and window size. Thus, the XCP protocol is illustrated and the various parameters are thoroughly analyzed and presented.
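The explicit feedback XCP routers compute each control interval has the form phi = alpha·d·S - beta·Q (Katabi et al.), where S is the spare bandwidth (capacity minus input rate), Q the persistent queue, and d the mean RTT; the sketch below uses the stable gains alpha = 0.4 and beta = 0.226 from the XCP paper, with illustrative traffic numbers.

```python
# XCP aggregate feedback: positive when the link is underused, negative
# when a standing queue signals congestion.
def aggregate_feedback(capacity_bps, input_rate_bps, queue_bytes, rtt_s,
                       alpha=0.4, beta=0.226):
    spare = capacity_bps - input_rate_bps          # S: spare bandwidth
    return alpha * rtt_s * spare - beta * queue_bytes * 8  # bits of feedback

# Underused link: positive feedback tells senders to speed up.
print(aggregate_feedback(10e6, 6e6, 0, 0.1) > 0)       # True
# Saturated link with a standing queue: negative feedback slows senders down.
print(aggregate_feedback(10e6, 10e6, 50000, 0.1) < 0)  # True
```

This per-interval feedback is then apportioned across flows in the packets' congestion headers, which is how XCP adjusts window sizes smoothly instead of halving them on loss.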