Abstract: Node Gain Scores (NGSs), derived from bandwidth and latency discrepancy ratios, are determined and used as the basis for shaping a max-heap overlay for multicast delivery. Each NGS combines the discrepancy ratio of the bandwidth requested with respect to the estimated available bandwidth and the latency discrepancy ratio between a node and the source node. The resulting tree yields enhanced-delivery overlay multicasting, increasing packet delivery that would otherwise be hindered by packet loss in schemes that do not consider the combined effect of these parameters when placing nodes on the overlay. The NGS is a function of four main parameters: the estimated available bandwidth, Ba; the node's requested bandwidth, Br; the proposed node latency to its prospective parent, Lp; and the suggested best latency as advised by the source node, Lb. The bandwidth discrepancy ratio (BDR) and latency discrepancy ratio (LDR) carry weights of α and (1,000 − α), respectively, with α arbitrarily chosen between 0 and 1,000 to ensure that the NGS values, used as node IDs, remain likely to be unique while balancing the relative importance of the BDR and the LDR. A max-heap-form tree is constructed under the assumption that all nodes have an NGS less than that of the source node. To maintain load balance, children are distributed evenly among each level's siblings: a node cannot accept a second child, and so on, until all of its siblings able to do so have acquired the same number of children, proceeding logically from left to right in the conceptual overlay tree. Records of the pairwise approximate available bandwidths, as measured at individual nodes by the pathChirp scheme, are maintained.
Evaluations were conducted against three other schemes: Bandwidth Aware multicaSt architecturE (BASE), Tree Building Control Protocol (TBCP), and Host Multicast Tree Protocol (HMTP). The new scheme generally achieves a better trade-off among packet delivery ratio, link stress, control overhead, and end-to-end delay.
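The abstract does not give the exact discrepancy-ratio formulas, so the following is a minimal sketch under assumed definitions: BDR rewards nodes whose available bandwidth Ba covers the requested bandwidth Br, and LDR rewards proposed latencies Lp close to the source-advised best latency Lb. The function name and the specific ratio forms are illustrative, not taken from the paper.

```python
# Hypothetical sketch of the Node Gain Score (NGS) described above.
# BDR and LDR definitions are assumptions; only the weighting scheme
# (alpha vs. 1000 - alpha) is stated in the abstract.

def node_gain_score(ba, br, lp, lb, alpha=500):
    """ba: estimated available bandwidth, br: requested bandwidth,
    lp: proposed latency to prospective parent, lb: suggested best latency,
    alpha: weight in [0, 1000] balancing BDR against LDR."""
    bdr = ba / br          # assumed bandwidth discrepancy ratio
    ldr = lb / lp          # assumed latency discrepancy ratio
    return alpha * bdr + (1000 - alpha) * ldr

# A node with ample bandwidth and low proposed latency scores higher
# and would therefore sit nearer the root of the max-heap overlay.
```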
Abstract: Use of the Internet and the World-Wide-Web
(WWW) has become widespread in recent years and mobile agent
technology has proliferated at an equally rapid rate. In this scenario
load balancing becomes important for P2P systems. Moreover, P2P
systems can be highly heterogeneous, i.e., they may consist of peers
that range from old desktops to powerful servers connected to the
Internet through high-bandwidth lines. Various load balancing
policies have been proposed. A primitive one relies on the Message
Passing Interface (MPI). Its wide availability and portability make it
an attractive choice; however, the communication requirements are
sometimes inefficient when implementing the primitives provided by
MPI. We therefore use mobile agents, since the
mobile agent (MA) based approach has the merits of high
flexibility, efficiency, low network traffic, low communication
latency, and a highly asynchronous mode of operation. In this study we present a
decentralized load balancing scheme using mobile agent technology
in which, when a node is overloaded, tasks migrate to less utilized
nodes so as to share the workload. The decision of which
nodes receive a migrating task is made in real time by defining certain
load balancing policies. These policies are executed on PMADE (a
Platform for Mobile Agent Distribution and Execution) in a
decentralized manner using JuxtaNet, and various load balancing
metrics are discussed.
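The abstract does not detail the PMADE/JuxtaNet load balancing policies, so the sketch below shows only a generic, hypothetical threshold-based migration decision of the kind described: an overloaded node selects the least-utilized known peer as the migration target. The function name and threshold are assumptions.

```python
# Hypothetical threshold-based migration policy; the actual policies
# executed on PMADE are not specified in the abstract.

def pick_migration_target(own_load, peer_loads, threshold=0.8):
    """peer_loads: mapping of peer id -> current utilization in [0, 1].
    Returns the peer to migrate a task to, or None if this node is not
    overloaded or no peer is less loaded than it."""
    if own_load <= threshold:
        return None
    # choose the least-utilized peer as the candidate target
    target = min(peer_loads, key=peer_loads.get)
    return target if peer_loads[target] < own_load else None
```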
Abstract: Long Term Evolution (LTE) is a 4G wireless
broadband technology developed in Third Generation
Partnership Project (3GPP) Release 8, and it represents the
evolution of the Universal Mobile Telecommunications System
(UMTS) for the next 10 years and beyond. The concepts for LTE
systems were introduced in 3GPP Release 8 with the objectives of
a high-data-rate, low-latency, and packet-optimized radio access
technology. In this paper, the performance of different TCP variants
over an LTE network is investigated. The performance of TCP over
LTE is affected mostly by the links of the wired network and the total
bandwidth available at the serving base station. This paper describes
an NS-2 based simulation analysis of TCP-Vegas, TCP-Tahoe, TCP-Reno,
TCP-Newreno, TCP-SACK, and TCP-FACK, with full
modeling of all traffic of the LTE system. The evaluation of
network performance with all TCP variants is based mainly on
throughput, average delay, and packet loss. The analysis of TCP
performance over LTE shows that all TCP variants achieve similar
throughput, with TCP-Vegas performing best among the
variants.
Abstract: Traffic Engineering (TE) is the process of controlling
how traffic flows through a network in order to facilitate efficient and
reliable network operations while simultaneously optimizing network
resource utilization and traffic performance. TE improves the
management of data traffic within a network and provides better
utilization of network resources. Many research works consider intra-AS
and inter-AS Traffic Engineering separately, but in reality each influences
the other; hence the network performance of both inter and
intra Autonomous System (AS) traffic is not properly optimized. To
achieve better joint optimization of both intra- and inter-AS TE, we
propose a joint optimization technique that considers intra-AS
features during inter-AS TE and vice versa. This work considers an
important criterion, namely latency, within an AS and between ASes,
and proposes a bi-criteria latency optimization model. Overall
network performance can thus be improved, in terms of latency, by
this joint optimization technique.
Abstract: With the rapid popularization of internet services, it is apparent that the next generation terrestrial communication systems must be capable of supporting various applications like voice, video, and data. This paper presents the performance evaluation of turbo-coded mobile terrestrial communication systems, which are capable of providing high quality services for delay sensitive (voice or video) and delay tolerant (text transmission) multimedia applications in urban and suburban areas. Different types of multimedia information require different service qualities, which are generally expressed in terms of a maximum acceptable bit-error-rate (BER) and maximum tolerable latency. The breakthrough discovery of turbo codes allows us to significantly reduce the probability of bit errors with feasible latency. In a turbo-coded system, a trade-off between latency and BER results from the choice of convolutional component codes, interleaver type and size, decoding algorithm, and the number of decoding iterations. This trade-off can be exploited for multimedia applications by using optimal and suboptimal performance parameter combinations to achieve different service qualities. We therefore propose an adaptive framework for turbo-coded wireless multimedia communications that incorporates a set of performance parameters achieving an appropriate set of service qualities, depending on the application's requirements.
Abstract: A four element prototype phased array surface probe
has been designed and constructed to improve clinical human
prostate spectroscopic data. The probe consists of two pairs of
adjacent rectangular coils with an optimum overlap to reduce the
mutual inductance. The two pairs are positioned on the anterior and
the posterior pelvic region, and two pairs of varactors at the input
of each coil perform tuning and matching. The
probe switches off and on automatically during the consecutive
phases of the MR experiment with the use of an analog switch that is
triggered by a microcontroller. Experimental tests that were carried
out resulted in high levels of tuning accuracy. Also, the switching
mechanism functions properly for various applied loads and pulse
sequence characteristics, producing only 10 μs of latency.
Abstract: Network on Chip (NoC) has emerged as a promising
on-chip communication infrastructure. Three-Dimensional Integrated
Circuit (3D IC) technology provides short interconnections between layers
and interconnect scalability in the third dimension, which can
further improve the performance of NoC. Therefore, in this paper,
a hierarchical cluster-based interconnect architecture is merged with
the 3D IC. This interconnect architecture significantly reduces the
number of long wires. Since this architecture uses only approximately
a quarter of the routers of a 3D mesh-based architecture, the average
number of hops is smaller, which leads to lower latency and higher
throughput. Moreover, the smaller number of routers decreases the area
overhead. Meanwhile, dual links are inserted at the communication
bottlenecks to improve the performance of the NoC.
Simulation results confirm our theoretical analysis and show the
advantages of the proposed architecture in latency, throughput, and
area compared with the 3D mesh-based architecture.
Abstract: Two paradigms have been proposed to provide QoS for Internet applications: Integrated Services (IntServ) and Differentiated Services (DiffServ). IntServ is not appropriate for a large network like the Internet because it is very complex. Therefore, to reduce the complexity of QoS management, DiffServ was introduced to provide QoS within a domain using flow aggregation and per-class service. In these networks the QoS between classes is fixed, which allows low-priority traffic to be affected by high-priority traffic; this is undesirable. In this paper we propose a fuzzy controller that reduces the effect of higher-priority classes on lower-priority ones. Our simulations show that our approach effectively reduces the latency dependency of the low-priority class on higher-priority ones.
Abstract: In the MPEG and H.26x standards, motion estimation is
used to eliminate temporal redundancy. Given that the
motion estimation stage is very complex in terms of computational
effort, a hardware implementation on a re-configurable circuit is
crucial for the requirements of different real-time multimedia
applications. In this paper, we present a hardware architecture for
motion estimation based on the "Full Search Block Matching" (FSBM)
algorithm. This architecture offers minimum latency, maximum
throughput, and full utilization of hardware resources such as embedded
memory blocks, combining both pipelining and parallel
processing techniques. Our design is described in VHDL,
verified by simulation, and implemented in a Stratix II
EP2S130F1020C4 FPGA circuit. The experimental results show that the
optimum operating clock frequency of the proposed design is 89 MHz,
achieving a throughput of 160M pixels/sec.
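The FSBM algorithm itself can be stated compactly in software: every candidate displacement inside a search window is scored with the sum of absolute differences (SAD), and the displacement with minimum SAD becomes the motion vector. The sketch below is a reference model only; the block size and search range are illustrative, not the dimensions of the FPGA design above.

```python
# Software reference model of Full Search Block Matching (FSBM).
# Frames are lists of rows of pixel intensities.

def sad(cur, ref, bx, by, dx, dy, n=8):
    """Sum of absolute differences between the n x n block of `cur` at
    (bx, by) and the block of `ref` displaced by (dx, dy)."""
    return sum(abs(cur[by + i][bx + j] - ref[by + dy + i][bx + dx + j])
               for i in range(n) for j in range(n))

def full_search(cur, ref, bx, by, n=8, p=4):
    """Return the motion vector (dx, dy) minimizing SAD over a +/-p window."""
    best = None
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            # skip candidates that fall outside the reference frame
            if 0 <= by + dy and by + dy + n <= len(ref) and \
               0 <= bx + dx and bx + dx + n <= len(ref[0]):
                cost = sad(cur, ref, bx, by, dx, dy, n)
                if best is None or cost < best[0]:
                    best = (cost, dx, dy)
    return best[1], best[2]
```

The hardware architecture pipelines and parallelizes exactly this exhaustive loop nest, which is why FSBM maps so regularly onto an FPGA.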
Abstract: In this paper, we propose a new routing protocol for
Unmanned Aerial Vehicles (UAVs) equipped with directional
antennas, named the Directional Optimized Link State
Routing Protocol (DOLSR). It is based on the well-known
Optimized Link State Routing Protocol
(OLSR) and focuses on the multipoint relay (MPR)
concept, the most important feature of that protocol. We
developed a heuristic that allows the DOLSR protocol to minimize
the number of multipoint relays. With this new protocol the
number of overhead packets is reduced and the end-to-end
delay of the network is also minimized. We show through
simulation that our protocol outperforms the Optimized Link State
Routing Protocol, the Dynamic Source Routing (DSR) protocol, and the
Ad-Hoc On-demand Distance Vector (AODV) routing protocol in
reducing end-to-end delay and enhancing overall
throughput. Our evaluation of these protocols was based
on the OPNET network simulation tool.
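For readers unfamiliar with the MPR concept underlying OLSR and DOLSR, the baseline idea is a set-cover heuristic: pick the smallest set of one-hop neighbors through which every two-hop neighbor can be reached. The greedy sketch below illustrates only that baseline; DOLSR's directional-antenna heuristic is not detailed in the abstract, so nothing here should be read as its implementation.

```python
# Illustrative greedy multipoint-relay (MPR) selection in the spirit of
# OLSR: cover all two-hop neighbors with as few one-hop neighbors as
# the greedy choice allows.

def select_mprs(two_hop_via):
    """two_hop_via: mapping one-hop neighbor -> set of two-hop neighbors
    reachable through it. Returns a covering set of MPRs."""
    uncovered = set().union(*two_hop_via.values())
    mprs = set()
    while uncovered:
        # greedily take the neighbor covering the most uncovered nodes
        best = max(two_hop_via, key=lambda n: len(two_hop_via[n] & uncovered))
        if not two_hop_via[best] & uncovered:
            break  # remaining two-hop nodes are unreachable
        mprs.add(best)
        uncovered -= two_hop_via[best]
    return mprs
```

Fewer MPRs means fewer nodes retransmit topology-control packets, which is the source of the overhead reduction claimed above.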
Abstract: The Internet is without any doubt the fastest and most effective means of communication, making it possible to reach a great number of people in the world. It is built on exchange points. Exchange points interconnect various Internet providers and operators so that they can exchange traffic, and it is through these interconnections that the Internet has made its great strides. They make it possible to limit the traffic delivered via transit operators. This limitation allows a significant improvement in quality of service and a reduction in latency, as well as a reduction in the cost of connection for the final subscriber. In this article we show how the installation of an IXP enables an improvement and a diversification of services as well as a reduction in Internet connection costs.
Abstract: The speculative locking (SL) protocol extends the two-phase locking (2PL) protocol to allow parallelism among conflicting transactions. The adaptive speculative locking (ASL) protocol provided further enhancements and outperformed SL protocols under most conditions. Neither of these protocols considers the impact of network latency on the performance of distributed database systems. We have studied the performance of the ASL protocol taking the communication overhead into account. The results indicate that although system load can counter network latency, latency can still become a bottleneck in many situations. The impact of latency on performance depends on many factors, including the system resources. A flexible discrete event simulator was used as the testbed for this study.
Abstract: Nowadays, people are going more and more mobile, both in terms of devices and associated applications. Moreover, the services that these devices offer are getting wider and much more complex. Even though today's handheld devices have considerable computing power, their contexts of utilization are different. These contexts are affected by the availability of connection, the high latency of wireless networks, battery life, size of the screen, on-screen or hard keyboard, etc. Consequently, the development of mobile applications and their associated mobile Web services, if any, should follow a concise methodology so they will provide a high Quality of Service. The aim of this paper is to highlight and discuss the main issues to consider when developing mobile applications and mobile Web services, and then propose a framework that leads developers through different steps and modules toward the development of efficient and secure mobile applications. First, different challenges in developing such applications are elicited and deeply discussed. Second, a development framework is presented with different modules addressing each of these challenges. Third, the paper presents an example of a mobile application, Eivom Cinema Guide, which benefits from following our development framework.
Abstract: Mobile IPv6 (MIPv6) describes how a mobile node can change its point of attachment from one access router to another. As demand for wireless mobile devices increases, many enhancements to macro-mobility (inter-domain) protocols have been proposed, designed, and implemented in Mobile IPv6. Hierarchical Mobile IPv6 (HMIPv6) is one of them, designed to reduce the amount of signaling required and to improve handover speed for mobile connections. This is achieved by introducing a new network entity called the Mobility Anchor Point (MAP). This report presents a comparative study of the HMIPv6 and MIPv6 protocols, with the scope narrowed down to micro-mobility (intra-domain). The architecture and operation of each protocol are studied, and they are evaluated based on a Quality of Service (QoS) parameter, handover latency. The simulation was carried out using Network Simulator-2, and its outcome is discussed. The results show that HMIPv6 performs better under intra-domain mobility than MIPv6, which suffers from large handover latency. As an enhancement to HMIPv6, we propose locating the MAP in the middle of the domain with respect to all access routers. This gives approximately the same, and possibly a shorter, distance between the MAP and the Mobile Node (MN) regardless of the MN's new location, reducing the delay since the distance is shorter. As future work, a performance analysis of the proposed scheme is to be carried out and compared with HMIPv6.
Abstract: HSDPA is a new feature introduced in the
Release 5 specifications of the 3GPP WCDMA/UTRA standard to
realize higher data rates together with lower round-trip times.
Moreover, the HSDPA concept offers an outstanding improvement in
packet throughput and also significantly reduces the packet call
transfer delay compared to the Release 99 DSCH. Until now the
HSDPA system has used turbo coding, a coding technique
that approaches the Shannon limit. However, the main drawbacks of turbo
coding are high decoding complexity and high latency, which make
it unsuitable for some applications such as satellite communications,
since the transmission distance itself introduces latency due to the
limited speed of light. Hence, in this paper we propose to use LDPC
coding in place of turbo coding for the HSDPA system, which decreases
the latency and decoding complexity. LDPC coding does increase the
encoding complexity; although the complexity of the transmitter
increases at the NodeB, the end user benefits in terms of
receiver complexity and bit error rate. In this paper the LDPC encoder
is implemented using the sparse parity-check matrix H to generate a
codeword, and the belief propagation algorithm is used for LDPC
decoding. Simulation results show that in LDPC coding the BER
drops sharply as the number of iterations increases with a small
increase in Eb/No, which is not possible in turbo coding. Also, the same
BER was achieved using fewer iterations, and hence the
latency and receiver complexity are decreased for LDPC coding.
HSDPA increases the downlink data rate within a cell to a theoretical
maximum of 14 Mbps, with 2 Mbps on the uplink. The changes that
HSDPA enables include better quality and more reliable, more
robust data services. In other words, while realistic data rates are
only a few Mbps, the actual quality and number of users achieved
will improve significantly.
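The role of the sparse parity-check matrix H can be illustrated with the validity test at the heart of LDPC decoding: a vector c is a codeword iff H·c = 0 (mod 2), and belief propagation iterates until this syndrome is all zero (or an iteration limit is hit). The small H below is purely illustrative, not the matrix used in the HSDPA proposal.

```python
# Toy LDPC parity-check (syndrome) computation over GF(2).
# H and the test words are illustrative examples only.

def syndrome(H, c):
    """Return the parity-check syndrome of candidate word c.
    H: list of rows of 0/1; c: list of 0/1 bits."""
    return [sum(h * x for h, x in zip(row, c)) % 2 for row in H]

H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
# belief-propagation decoding stops when syndrome(H, c) is all zero
```

The sparsity of H is what keeps each belief-propagation iteration cheap, which is where the receiver-complexity advantage over turbo decoding comes from.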
Abstract: In this paper, we present the effect of varying
time delays on performance and stability in a single-channel multi-rate
sampled-data system in a hard real-time (RT-Linux) environment.
The sampling task requires response times that might exceed the
capacity of RT-Linux, so a straightforward RT-Linux implementation is
not feasible because of system latency; hence, the
sampling period should be short enough to handle this task. The best sampling
rate chosen for the sampled-data system is the slowest rate that
meets all performance requirements. RT-Linux is consistent with its
specifications, and a real-time resolution of 0.01
seconds is used to achieve an efficient result. The results of our
laboratory experiment show that the multi-rate control technique in a
hard real-time operating system (RTOS) can mitigate the stability
problems caused by random access delays and lack of synchronization.
Abstract: This paper focuses on wormhole attack detection in wireless sensor networks. The wormhole attack is particularly challenging to deal with, since the adversary does not need to compromise any nodes and can use laptops or other wireless devices to send packets over a low-latency channel. This paper introduces an easy and effective method to detect and locate wormholes: since beacon nodes are assumed to know their coordinates, the straight-line distance between each pair of them can be calculated and then compared with the corresponding hop distance, which in this paper equals hop count × node transmission range R. A dramatic difference may emerge because of an existing wormhole, and our detection mechanism is based on this. The approximate location of the wormhole can also be derived in further steps from this information. To the best of our knowledge, our method is much simpler than other wormhole detection schemes that also use beacon nodes, and compared with those that place special requirements on each node (e.g., GPS receivers, tightly synchronized clocks, or directional antennas), ours is more economical. Simulation results show that the algorithm succeeds in detecting and locating wormholes when the density of beacon nodes reaches 0.008 per m².
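The detection rule lends itself to a compact sketch: each legitimate hop covers at most R meters, so two beacons h hops apart can be at most h·R apart physically; a Euclidean distance exceeding that bound signals a tunneled (wormhole) path. The function below is a minimal illustration of this comparison; the paper's exact decision threshold may differ.

```python
import math

# Sketch of the beacon-pair wormhole check described above: suspect a
# wormhole when the straight-line distance between two beacons exceeds
# the maximum physical distance their hop count allows (hop_count * R).

def wormhole_suspected(p1, p2, hop_count, R):
    """p1, p2: (x, y) beacon coordinates; R: node transmission range."""
    euclidean = math.dist(p1, p2)
    return euclidean > hop_count * R
```

For example, two beacons 100 m apart that report only a 2-hop path with R = 10 m could not be connected by legitimate links, so the pair is flagged.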
Abstract: This study focuses on examining why the range of
experience with respect to HIV infection is so diverse, especially in
regard to the latency period. An agent-based approach in modelling
the infection is used to extract high-level behaviour which cannot be
obtained analytically from the set of interaction rules at the cellular
level. A prototype model encompasses local variation in baseline
properties, contributing to the individual disease experience, and is
included in a network which mimics the chain of lymph nodes. The
model also accounts for stochastic events such as viral mutations.
The size and complexity of the model require major computational
effort and parallelisation methods are used.
Abstract: With the increasing use of wireless devices in
different fields, such as medical devices and industrial applications, this
paper presents a method to simplify Bluetooth packets while
enhancing throughput. The paper studies a vital issue in wireless
communications: the throughput of data over wireless
networks. Both Bluetooth and ZigBee are Wireless Personal
Area Network (WPAN) technologies. Taking the competition between
these two systems into consideration, the paper proposes different
schemes to improve the
throughput of a Bluetooth network over a reliable channel. The
proposal relies on the Channel Quality Driven Data Rate
(CQDDR) rules, which determine the suitable packet for the
transmission process according to the channel conditions. The
proposed packet is studied over Additive White Gaussian Noise
(AWGN) and fading channels. The experimental results reveal that the
PL length can be extended by 8, 16, and 24 bytes for classic
and EDR packets, respectively. The proposed method is also suitable
for low-throughput Bluetooth applications.
Abstract: The purpose of this study is to examine the variability of
postural strategies in low back pain patients as a criterion for
evaluating the adaptability of the postural control system to
environmental demands. A cross-sectional case-control study was
performed on 21 recurrent non-specific low back pain patients and 21
healthy volunteers. The electromyographic activity of the Deltoid,
External Oblique (EO), Transverse Abdominis/Internal Oblique
(TrA/IO), and Erector Spinae (ES) muscles of each person was
recorded during 75 rapid arm flexions with maximum acceleration.
Standard deviations of trunk muscle onsets relative to deltoid muscle
onset were statistically analyzed by MANOVA. The results show
that chronic low back pain patients exhibit less variability in their
anticipatory postural adjustments (APAs) in comparison with the
control group. There is a decrease in variability of postural control
system of recurrent non-specific low back pain patients that can
result in the persistence of pain and chronicity by decreasing the
adaptability to environmental demands.