Abstract: Dynamic spectrum allocation solutions such as cognitive radio networks have been proposed as a key technology for exploiting spectrally underutilized frequency segments. Cognitive radio users operate as secondary users who must constantly and rapidly sense the presence of primary users, or licensees, so that they can utilize the licensed frequency bands while those users are inactive. Secondary users should run short sensing cycles, both to achieve higher throughput and to keep interference to primary users low by vacating a channel immediately once a primary user is detected. In this paper, the throughput-sensing time relationship in local and cooperative spectrum sensing is investigated under two distinct scenarios, namely constant primary user protection (CPUP) and constant secondary user spectrum usability (CSUSU). The simulation results show that the design of the sensing slot duration is critical and depends on the number of cooperating users under the CPUP scenario, whereas under CSUSU, cooperating with more users has no effect once the sensing time exceeds 5% of the total frame duration.
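The throughput-sensing time trade-off under a constant-detection-probability (CPUP-style) constraint can be illustrated numerically. The sketch below uses the widely used energy-detection model; the frame length, sampling rate, SNR, and target detection probability are illustrative values, not the paper's parameters:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def throughput(tau, T=0.020, fs=1e6, snr=0.05, q_inv_pd=-1.2816):
    """Normalized secondary throughput for sensing time tau (seconds).

    The detection probability is held constant at Pd = 0.9
    (Q^{-1}(0.9) ~= -1.2816), so the false-alarm probability falls
    as tau grows, but the transmit fraction (T - tau)/T shrinks.
    """
    pf = q_func(q_inv_pd * math.sqrt(2 * snr + 1) + math.sqrt(tau * fs) * snr)
    return (T - tau) / T * (1.0 - pf)

# Sweep sensing times: throughput first rises, then falls.
taus = [i * 1e-4 for i in range(1, 200)]   # 0.1 ms .. 19.9 ms
rates = [throughput(t) for t in taus]
best_tau = taus[rates.index(max(rates))]
```

The interior maximum is exactly the sensing-slot design point the abstract refers to: too little sensing inflates false alarms, too much sensing wastes the frame.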
Abstract: Network on Chip (NoC) has emerged as a promising on-chip communication infrastructure. Three-Dimensional Integrated Circuits (3D ICs) provide short interconnects between layers and interconnect scalability in the third dimension, which can further improve NoC performance. In this paper, a hierarchical cluster-based interconnect architecture is therefore merged with the 3D IC. This interconnect architecture significantly reduces the number of long wires. Since it uses only about a quarter of the routers of a 3D mesh-based architecture, the average hop count is smaller, which leads to lower latency and higher throughput. The smaller number of routers also decreases the area overhead. In addition, dual links are inserted at communication bottlenecks to improve NoC performance. Simulation results confirm our theoretical analysis and show the advantages of the proposed architecture in latency, throughput, and area compared with the 3D mesh-based architecture.
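The claim that fewer routers shrink the average hop count can be checked with a small enumeration. The sketch below computes the mean Manhattan (minimal-routing) hop distance over all source-destination pairs in a k x k x k 3D mesh; the 4 x 4 x 4 size is an illustrative choice, not taken from the paper:

```python
from itertools import product

def avg_hops_3d_mesh(k):
    """Mean Manhattan hop distance over all ordered node pairs
    (including a node to itself) in a k*k*k 3D mesh NoC."""
    nodes = list(product(range(k), repeat=3))
    total = sum(abs(a - x) + abs(b - y) + abs(c - z)
                for (a, b, c) in nodes for (x, y, z) in nodes)
    return total / len(nodes) ** 2

# For a line of k nodes the per-dimension mean distance is (k*k - 1)/(3*k),
# so a k x k x k mesh averages 3 * (k*k - 1)/(3*k) = (k*k - 1)/k hops.
mesh_avg = avg_hops_3d_mesh(4)
```

Halving the node count per dimension, which is roughly what replacing groups of routers with clusters achieves, shrinks this average proportionally; that is the latency argument made above.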
Abstract: Grid computing is a form of distributed computing that involves coordinating and sharing computational power, data storage, and network resources across dynamic and geographically dispersed organizations. Scheduling onto the grid is NP-complete, so there is no single best scheduling algorithm for all grid computing systems. An alternative is to select an appropriate scheduling algorithm for a given grid environment based on the characteristics of the tasks, machines, and network connectivity. Job and resource scheduling is one of the key research areas in grid computing. The goal of scheduling is to achieve the highest possible system throughput and to match application needs with the available computing resources. The motivation of this survey is to help novice researchers in the field of grid computing easily understand the concepts of scheduling and contribute to developing more efficient scheduling algorithms, and to enable interested researchers to carry out further work in this thrust area of research.
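As a concrete example of the kind of heuristic such a survey covers, here is a minimal sketch of the classic Min-Min batch scheduling heuristic operating on an expected-time-to-compute (ETC) matrix; the task and machine numbers are invented for illustration:

```python
def min_min(etc):
    """Min-Min heuristic: repeatedly schedule the task whose earliest
    possible completion time is smallest.

    etc[t][m] = expected execution time of task t on machine m.
    Returns (machine assignment per task, makespan)."""
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines          # machine-available times
    unscheduled = set(range(n_tasks))
    assignment = [None] * n_tasks
    while unscheduled:
        # For each unscheduled task, find its minimum completion time.
        best_task, best_machine, best_ct = None, None, float("inf")
        for t in sorted(unscheduled):
            for m in range(n_machines):
                ct = ready[m] + etc[t][m]
                if ct < best_ct:
                    best_task, best_machine, best_ct = t, m, ct
        assignment[best_task] = best_machine
        ready[best_machine] = best_ct
        unscheduled.remove(best_task)
    return assignment, max(ready)

etc = [[4.0, 6.0], [3.0, 5.0], [5.0, 2.0], [4.0, 4.0]]  # 4 tasks, 2 machines
plan, makespan = min_min(etc)
```

Min-Min tends to keep fast machines busy with short tasks first; its counterpart Max-Max and genetic or ant-colony schedulers are the usual comparison points in the surveyed literature.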
Abstract: The continuously growing needs of Internet applications that transmit massive amounts of data have led to the emergence of high-speed networks. Data transfer must take place without congestion, so feedback parameters must be sent from the receiver to the sender to restrict the sending rate. Although TCP tries to avoid congestion by restricting the sending rate and window size, it never informs the sender of the capacity available for sending data, and it halves the window size whenever congestion occurs, resulting in decreased throughput, low bandwidth utilization, and high delay. In this paper, the XCP protocol is used: feedback parameters are calculated from the arrival rate, service rate, traffic rate, and queue size, and the receiver informs the sender about the achievable throughput, the data capacity, and the window size adjustment. This avoids drastic decreases in window size and allows the sending rate to grow steadily, so data flows continuously without congestion. As a result, throughput and bandwidth utilization are maximized and delay is minimized. The results of the proposed work are presented as graphs of throughput, delay, and window size. Thus, the XCP protocol is illustrated in detail and the various parameters are thoroughly analyzed and adequately presented.
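The efficiency-controller arithmetic behind XCP's explicit feedback can be sketched in a few lines. This follows the standard XCP aggregate-feedback rule from the original XCP design (phi = alpha * d * S - beta * Q, with alpha = 0.4 and beta = 0.226); the link numbers are illustrative, and the abstract's arrival/service/traffic-rate and queue-size terms map onto the capacity, input-rate, and queue arguments here:

```python
def xcp_aggregate_feedback(capacity_bps, input_rate_bps, queue_bytes,
                           avg_rtt_s, alpha=0.4, beta=0.226):
    """Aggregate feedback (bytes per control interval) that an XCP
    router asks senders to apply: reward spare bandwidth, drain any
    standing queue."""
    spare_bps = capacity_bps - input_rate_bps
    return alpha * avg_rtt_s * spare_bps / 8.0 - beta * queue_bytes

# Underutilized link, empty queue -> senders are told to speed up.
speed_up = xcp_aggregate_feedback(100e6, 60e6, 0, 0.05)
# Saturated link with a standing queue -> senders are told to slow down.
slow_down = xcp_aggregate_feedback(100e6, 100e6, 50_000, 0.05)
```

Because the feedback is an explicit signed quantity rather than a binary loss signal, the window never needs the drastic halving the abstract criticizes in TCP.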
Abstract: An effective approach for realizing a binary tree structure representing combinational logic functionality with enhanced throughput is discussed in this paper. The optimization of the maximum operating frequency was achieved through delay minimization, which in turn was made possible by reducing the depth of the binary network. The proposed synthesis methodology has been validated by experiments with FPGAs as the target technology. Although our proposal is technology independent, the heuristic enables better throughput optimization even after technology mapping for Boolean functions whose reduced CNF form has a lower literal cost than their reduced DNF form at the Boolean equation level. Otherwise, our method converges to results similar to those of [12]. The practical results obtained for a variety of case studies demonstrate an improvement in the maximum throughput rate for the Spartan IIE (XC2S50E-7FT256) and Spartan 3 (XC3S50-4PQ144) FPGA families of 10.49% and 13.68%, respectively. With respect to the LUTs and IOBUFs required for physical implementation of the requisite non-regenerative logic functionality, the proposed method enabled savings of 44.35% and 44.67%, respectively, over the existing efficient method available in the literature [12].
Abstract: In this paper, the implementation of low-power, high-throughput convolutional filters for the one-dimensional Discrete Wavelet Transform (DWT) and its inverse is presented. The analysis filters have already been used in the implementation of a high-performance DWT encoder [15] with minimum memory requirements for the JPEG 2000 standard. This paper presents the design techniques and implementation of the convolutional filters included in the JPEG 2000 standard for the forward and inverse DWT, achieving low-power operation, high performance, and reduced memory accesses. Moreover, the filters can perform progressive computations so as to minimize buffering between the decomposition and reconstruction phases. The experimental results illustrate the filters' low-power, high-throughput characteristics as well as their memory-efficient operation.
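For readers unfamiliar with the JPEG 2000 filters mentioned above, here is a minimal software sketch of the reversible 5/3 wavelet in its lifting form (integer arithmetic, one decomposition level, even-length signals, with boundary clamping that matches whole-sample symmetric extension). It illustrates the transform itself, not the paper's hardware design:

```python
def dwt53_forward(x):
    """One level of the reversible JPEG 2000 5/3 DWT (even-length input),
    in lifting form."""
    h = len(x) // 2
    even, odd = x[0::2], x[1::2]
    # Predict step: detail d[i] = odd[i] - floor((even[i] + even[i+1]) / 2)
    d = [odd[i] - (even[i] + even[min(i + 1, h - 1)]) // 2 for i in range(h)]
    # Update step: approximation s[i] = even[i] + floor((d[i-1] + d[i] + 2) / 4)
    s = [even[i] + (d[max(i - 1, 0)] + d[i] + 2) // 4 for i in range(h)]
    return s, d

def dwt53_inverse(s, d):
    """Exact inverse: undo the update step, then the predict step."""
    h = len(s)
    even = [s[i] - (d[max(i - 1, 0)] + d[i] + 2) // 4 for i in range(h)]
    odd = [d[i] + (even[i] + even[min(i + 1, h - 1)]) // 2 for i in range(h)]
    out = [0] * (2 * h)
    out[0::2], out[1::2] = even, odd
    return out

signal = [3, 7, 1, 8, 2, 9, 4, 6]
approx, detail = dwt53_forward(signal)
restored = dwt53_inverse(approx, detail)
```

Because the inverse subtracts exactly the integer quantities the forward pass added, reconstruction is bit-exact, which is the reversibility property JPEG 2000 relies on for lossless coding.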
Abstract: This paper proposes a VPN Accelerator Board (VPN-AB), a virtual private network (VPN) accelerator designed for a trust channel security system (TCSS). TCSS provides a secure communication channel between security nodes on the Internet. It furnishes authentication, confidentiality, integrity, and access control so that security nodes can transmit data packets with the IPsec protocol. TCSS consists of an Internet key exchange block, a security association block, and an IPsec engine block. The Internet key exchange block negotiates the cryptographic algorithm and key used by the IPsec engine block. The security association block sets up and manages security association information. The IPsec engine block processes IPsec packets and contains the networking functions for communication. The IPsec engine block should be implemented in hardware with in-line mode processing for high-speed IPsec operation. Our VPN-AB is implemented with a high-speed security processor that supports many cryptographic algorithms and in-line mode. We set up a small TCSS communication environment and measured the performance of the VPN-AB in it. The experimental results show that the VPN-AB achieves a maximum throughput of 15.645 Gbps when the IPsec protocol is configured in 3DES-HMAC-MD5 tunnel mode.
Abstract: Guard channels improve the probability of successful handoffs by reserving a number of channels exclusively for handoffs. This concept risks underutilizing the radio spectrum, because fewer channels are granted to originating calls even when the guard channels are idle and originating calls are starving for channels. The penalty is a reduction in total carried traffic, which choosing the optimum number of guard channels can mitigate. This paper presents a fuzzy logic based guard channel scheme in which guard channels are reorganized on the basis of traffic density, so that they are provided on a need basis. This helps accommodate more originating calls and hence yields higher throughput of the radio spectrum.
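The trade-off the abstract describes, where fewer guard channels admit more new calls but drop more handoffs, can be quantified with the standard birth-death (Markov chain) model of a cell with C channels and g static guard channels; the traffic values below are illustrative, and the fuzzy reorganization itself is not modeled:

```python
def guard_channel_probs(C, g, lam_new, lam_handoff, mu):
    """Stationary blocking/dropping probabilities for a cell with C
    channels where only handoffs may use the last g channels.

    Birth rate is lam_new + lam_handoff below the guard threshold and
    lam_handoff at or above it; the death rate in state n is n * mu."""
    threshold = C - g
    weights = [1.0]
    for n in range(C):
        arrival = lam_new + lam_handoff if n < threshold else lam_handoff
        weights.append(weights[-1] * arrival / ((n + 1) * mu))
    total = sum(weights)
    p = [w / total for w in weights]
    new_call_blocking = sum(p[threshold:])  # no free non-guard channel
    handoff_dropping = p[C]                 # all C channels busy
    return new_call_blocking, handoff_dropping

b0, d0 = guard_channel_probs(C=10, g=0, lam_new=4.0, lam_handoff=2.0, mu=1.0)
b2, d2 = guard_channel_probs(C=10, g=2, lam_new=4.0, lam_handoff=2.0, mu=1.0)
```

A fuzzy controller as in the abstract would, in effect, move g up and down this curve as traffic density changes, instead of fixing it in advance.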
Abstract: Wireless sensor networks are now widely used in many applications, including military, environmental, and healthcare applications, home automation, and traffic control. We study one area of wireless sensor networks: routing protocols, which are needed to send data between sensor nodes and the base station. In this paper, we discuss two classes of routing protocols, data-centric and hierarchical. We show the output of the protocols using the NS-2 simulator, compare the simulation output of the two routing protocols using Nam, and use Xgraph to find the throughput and delay of each protocol.
Abstract: In this paper, we study cooperative communications in which multiple cognitive radio (CR) transmit-receive pairs competitively maximize their own throughputs. In CR networks, the influence of primary users and the spectrum availability usually differ among CR users. Because of the multiple relay nodes and the differing spectrum availability, each CR transmit-receive pair must not only select a relay node but also choose an appropriate channel. For this distributed problem, we propose a game-theoretic framework and apply a regret-matching learning algorithm that converges to a correlated equilibrium. We further formulate a modified regret-matching learning algorithm that is fully distributed and uses only the local information of each CR transmit-receive pair, making it more practical and suitable for cooperative communications in CR networks. Simulation results show that the algorithm converges and that the modified learning algorithm achieves performance comparable to the original regret-matching learning algorithm.
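A minimal sketch of the regret-matching learning rule the abstract builds on (Hart and Mas-Colell's algorithm), applied to a toy two-pair channel-selection game in which a pair earns 1 on an exclusive channel and 0.5 on a collision. This illustrates only the learning rule, not the paper's CR relay-selection model:

```python
import random

def payoff(my_ch, other_ch):
    """1.0 for an exclusive channel, 0.5 when both pairs collide."""
    return 1.0 if my_ch != other_ch else 0.5

def regret_matching_step(cum_regret, my_payoff, other_action):
    """Update cumulative regrets against the opponent's observed action,
    then draw the next action with probability proportional to
    positive regret (uniform if no regret is positive)."""
    n_actions = len(cum_regret)
    for a in range(n_actions):
        cum_regret[a] += payoff(a, other_action) - my_payoff
    positive = [max(r, 0.0) for r in cum_regret]
    total = sum(positive)
    if total == 0.0:
        return random.randrange(n_actions)
    r = random.uniform(0.0, total)
    for a, w in enumerate(positive):
        r -= w
        if r <= 0.0:
            return a
    return n_actions - 1

random.seed(1)
regrets = [[0.0, 0.0], [0.0, 0.0]]
actions = [0, 1]
avg = 0.0
rounds = 5000
for _ in range(rounds):
    pays = [payoff(actions[0], actions[1]), payoff(actions[1], actions[0])]
    avg += (pays[0] + pays[1]) / 2.0
    actions = [regret_matching_step(regrets[i], pays[i], actions[1 - i])
               for i in range(2)]
avg /= rounds
```

Under this dynamic the empirical play distribution converges to the set of correlated equilibria; in this anti-coordination game the pairs typically settle onto disjoint channels, which is the coordination effect the abstract exploits.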
Abstract: The k-nearest neighbors (kNN) method is a simple but effective classification technique. In this paper we present an extended version of this technique for chemical compounds used in high-throughput screening, in which the distances of the nearest neighbors are taken into account. Our algorithm uses kernel weight functions to guide the process of determining activity in screening data. The proposed kernel weight function combines properties of the graphical structure and molecular descriptors of the screened compounds. We apply the modified kNN method to several experimental data sets from biological screens. The experimental results confirm the effectiveness of the proposed method.
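The general mechanism being extended, distance-weighted kNN with a kernel weight function, can be sketched as follows. This uses a plain Gaussian kernel on Euclidean distance; the paper's actual kernel, which combines graph structure and molecular descriptors, is not reproduced here:

```python
import math

def gaussian_kernel(dist, bandwidth=1.0):
    """Weight that decays smoothly with distance."""
    return math.exp(-(dist * dist) / (2.0 * bandwidth * bandwidth))

def weighted_knn(train, query, k=3):
    """Classify `query` by a kernel-weighted vote of its k nearest
    training points. train: list of (feature_tuple, label)."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = {}
    for feats, label in nearest:
        w = gaussian_kernel(math.dist(feats, query))
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

# Toy screening data: two tight clusters of "inactive" and "active".
train = [((0.0, 0.0), "inactive"), ((0.2, 0.1), "inactive"),
         ((0.1, 0.3), "inactive"), ((3.0, 3.0), "active"),
         ((3.2, 2.9), "active")]
label = weighted_knn(train, (3.1, 3.1), k=3)
```

The kernel weighting means a distant third neighbor contributes almost nothing to the vote, which is exactly the "distances taken into account" refinement over plain majority-vote kNN.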
Abstract: In the MPEG and H.26x standards, motion estimation is used to eliminate temporal redundancy. Because the motion estimation stage is very complex in terms of computational effort, a hardware implementation on a reconfigurable circuit is crucial for meeting the requirements of real-time multimedia applications. In this paper, we present a hardware architecture for motion estimation based on the "Full Search Block Matching" (FSBM) algorithm. The architecture offers minimum latency, maximum throughput, and full utilization of hardware resources such as embedded memory blocks, combining both pipelining and parallel processing techniques. Our design is described in VHDL, verified by simulation, and implemented in a Stratix II EP2S130F1020C4 FPGA. The experimental results show that the optimum operating clock frequency of the proposed design is 89 MHz, which achieves 160M pixels/sec.
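The Full Search Block Matching algorithm that the hardware parallelizes can be stated compactly in software: exhaustively test every displacement in a +/-R window and keep the one minimizing the sum of absolute differences (SAD). Frame contents and sizes below are illustrative:

```python
def sad(ref, cur, rx, ry, bx, by, n):
    """Sum of absolute differences between the n x n current block at
    (bx, by) and the reference block displaced to (rx, ry)."""
    return sum(abs(cur[by + j][bx + i] - ref[ry + j][rx + i])
               for j in range(n) for i in range(n))

def full_search(ref, cur, bx, by, n, search_range):
    """Test every displacement in the +/- search_range window and
    return (best_dx, best_dy, best_sad)."""
    h, w = len(ref), len(ref[0])
    best = (0, 0, float("inf"))
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            rx, ry = bx + dx, by + dy
            if 0 <= rx and rx + n <= w and 0 <= ry and ry + n <= h:
                cost = sad(ref, cur, rx, ry, bx, by, n)
                if cost < best[2]:
                    best = (dx, dy, cost)
    return best

# Reference frame with unique pixel values; the current frame shifts it
# by (dx, dy) = (2, 1), which full search should recover exactly.
W = H = 16
ref = [[x + 16 * y for x in range(W)] for y in range(H)]
cur = [[ref[min(y + 1, H - 1)][min(x + 2, W - 1)] for x in range(W)]
       for y in range(H)]
dx, dy, cost = full_search(ref, cur, bx=4, by=4, n=4, search_range=3)
```

The hardware architecture above exploits the fact that every candidate SAD is independent, so the (2R+1)^2 evaluations can be pipelined and computed in parallel.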
Abstract: New-generation mobile communication networks are able to support triple play. To that end, Orthogonal Frequency Division Multiplexing (OFDM) access techniques have been chosen to extend system capability toward high-data-rate networks. Many cross-layer modeling and optimization schemes for the Quality of Service (QoS) and capacity of downlink multiuser OFDM systems have been proposed. In this paper, Maximum Weighted Capacity (MWC) based resource allocation is used at the physical (PHY) layer. This resource allocation scheme provides much better QoS than previous schemes while maintaining the highest, or nearly the highest, capacity at similar complexity. In addition, Delay Satisfaction (DS) scheduling, which allows more than one connection to be served in each slot, is used at the Medium Access Control (MAC) layer. This scheduling technique is more efficient than conventional scheduling for investigating the effect of both the number of users and the number of subcarriers on system capacity. The system is optimized for different operational environments: both outdoor and indoor deployment scenarios are investigated, as are different channel models. In addition, the effective capacity approach [1] is used not only to provide QoS for different mobile users but also to increase the total throughput of the wireless network.
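The core of a weighted-capacity allocation step at the PHY layer can be sketched as a per-subcarrier argmax: each subcarrier goes to the user maximizing weight times achievable rate. This is a generic illustration of the maximum-weighted-capacity idea with made-up channel gains and weights, not the paper's full MWC/DS cross-layer scheme:

```python
import math

def mwc_allocate(gains, weights, noise=1.0, power=1.0):
    """Assign each subcarrier to the user maximizing
    weight_u * log2(1 + power * gain / noise).

    gains[u][s]: channel power gain of user u on subcarrier s.
    Returns (user index per subcarrier, total weighted capacity)."""
    n_users, n_sub = len(gains), len(gains[0])
    alloc, total = [], 0.0
    for s in range(n_sub):
        rates = [weights[u] * math.log2(1 + power * gains[u][s] / noise)
                 for u in range(n_users)]
        best = max(range(n_users), key=lambda u: rates[u])
        alloc.append(best)
        total += rates[best]
    return alloc, total

gains = [[4.0, 0.5, 2.0, 0.2],    # user 0's per-subcarrier gains
         [0.5, 3.0, 2.0, 1.0]]    # user 1's per-subcarrier gains
weights = [1.0, 2.0]              # user 1 is more urgent (e.g. delay-bound)
alloc, wcap = mwc_allocate(gains, weights)
```

The MAC-layer scheduler's job is then to set the weights from QoS state (queue lengths, delay bounds), which is where the DS scheduling of the abstract comes in.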
Abstract: In this paper, we propose a new routing protocol for Unmanned Aerial Vehicles (UAVs) equipped with directional antennas, named the Directional Optimized Link State Routing Protocol (DOLSR). This protocol is based on the well-known Optimized Link State Routing Protocol (OLSR). We focus on the multipoint relay (MPR) concept, the most important feature of that protocol, and develop a heuristic that allows DOLSR to minimize the number of multipoint relays. With this new protocol, the number of overhead packets is reduced and the end-to-end delay of the network is minimized. We show through simulation that our protocol outperforms the Optimized Link State Routing Protocol, the Dynamic Source Routing (DSR) protocol, and the Ad-Hoc On-Demand Distance Vector (AODV) routing protocol in reducing end-to-end delay and enhancing overall throughput. Our evaluation of these protocols is based on the OPNET network simulation tool.
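The MPR concept that DOLSR optimizes can be illustrated with the greedy selection heuristic used in standard OLSR: pick one-hop neighbors until every two-hop neighbor is covered, preferring whichever neighbor covers the most still-uncovered nodes. The topology below is invented for illustration:

```python
def select_mprs(two_hop_via):
    """Greedy multipoint relay selection.

    two_hop_via: dict mapping each 1-hop neighbor to the set of
    2-hop neighbors reachable through it. Returns a set of MPRs
    covering every 2-hop neighbor."""
    uncovered = set().union(*two_hop_via.values())
    mprs = set()
    while uncovered:
        # Pick the neighbor covering the most remaining 2-hop nodes.
        best = max(two_hop_via,
                   key=lambda n: len(two_hop_via[n] & uncovered))
        mprs.add(best)
        uncovered -= two_hop_via[best]
    return mprs

# 1-hop neighbors A, B, C and the 2-hop nodes each one reaches.
neighbors = {
    "A": {"W", "X"},
    "B": {"X", "Y", "Z"},
    "C": {"Z"},
}
mprs = select_mprs(neighbors)
```

Only MPRs retransmit flooded control traffic, so every one-hop neighbor pruned from this set directly cuts the overhead packets the abstract measures; a directional-antenna heuristic like DOLSR's can prune further by accounting for antenna coverage.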
Abstract: In this paper we propose a multistage adaptive ARQ/HARQ/HARQ scheme. The method combines pure ARQ (Automatic Repeat reQuest) at low channel bit error rates with hybrid ARQ using two different Reed-Solomon codes under medium and high error rate conditions; our scheme thus has three stages. The main goal is to increase the number of states in adaptive HARQ methods and thereby achieve maximum throughput at every channel bit error rate. We validate the proposal by calculation and then by simulations in a land mobile satellite channel environment. The optimization of the scheme's system parameters is described so as to maximize throughput over the whole defined Signal-to-Noise Ratio (SNR) range in the selected channel environment.
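The per-BER throughput calculation used to pick switching points between such stages can be sketched as follows. We compare idealized selective-repeat pure ARQ on a 255-byte frame against an RS(255,223) coded stage (t = 16 correctable byte symbols); these code parameters are illustrative and not necessarily the paper's two RS codes:

```python
from math import comb

def pure_arq_throughput(ber, frame_bits=255 * 8):
    """Idealized selective-repeat ARQ: rate-1 payload, a frame counts
    only if every bit survives."""
    return (1.0 - ber) ** frame_bits

def rs_harq_throughput(ber, n=255, k=223, t=16):
    """RS(n, k) over GF(256): a frame survives if at most t of its
    n byte-symbols are corrupted; the payload rate is k / n."""
    ps = 1.0 - (1.0 - ber) ** 8          # byte-symbol error probability
    p_ok = sum(comb(n, i) * ps ** i * (1.0 - ps) ** (n - i)
               for i in range(t + 1))
    return (k / n) * p_ok

low, high = 1e-6, 1e-3
# At low BER the uncoded stage wins (no rate loss); at high BER the
# coded stage wins (retransmissions dominate) - hence adaptive switching.
t_pure_low, t_rs_low = pure_arq_throughput(low), rs_harq_throughput(low)
t_pure_high, t_rs_high = pure_arq_throughput(high), rs_harq_throughput(high)
```

Adding a second, stronger RS code gives a third stage and a second crossover point, which is the extra adaptation state the abstract argues for.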
Abstract: With the advent of DSL services, high data rates are now available over phone lines, yet even higher rates are in demand. In this paper, we optimize the transmit filters that can be used over wireline channels. We give results showing the bit error rates when the optimized filters are used with a decision feedback equalizer (DFE) in the receiver. We then show that significantly higher throughput can be achieved by modeling the channel as a multiple-input multiple-output (MIMO) channel. A receiver employing a MIMO-DFE that deals jointly with several users is proposed and shown to provide significant improvement over the conventional DFE.
Abstract: South Africa is facing a crisis in that it cannot produce enough graduates in the scarce-skills areas to sustain economic growth. The crisis is fuelled by a school system that does not produce enough potential students with Mathematics, Accounting and Science. Since the introduction of the new school curriculum in 2008, there is no longer an option to take pure maths at a standard grade level. Instead, only two mathematical subjects are offered: pure maths (which is on par with higher grade maths) and mathematical literacy, and it is compulsory to take one or the other. As a result, fewer students finish Grade 12 with pure mathematics every year. This national problem needs urgent attention if South Africa is to make any headway in critical skills development, as mathematics is a gateway to scarce-skills professions. Higher education institutions have launched several initiatives in an attempt to address the above, including preparatory courses, bridging programmes and extended curricula with foundation provision. In view of the above, and of government policy directives to broaden access in the scarce-skills areas and increase student throughput, foundation provision was introduced for Commerce and Information Technology programmes at the Vaal Triangle Campus (VTC) of North-West University (NWU) in 2010. Students enrolling for extended programmes do not comply with the minimum prerequisites for the normal programmes. The question then arises as to whether these programmes have the intended impact. This paper reports the results of a two-year longitudinal study tracking the first-year academic achievement of the two cohorts enrolled since 2010. The results provide valuable insight into the structuring of an extended programme and its potential impact.
Abstract: The Connection Admission Control (CAC) problem is formulated in this paper as a discrete-time optimal control problem. The control variables account for the acceptance/rejection of new connections and the forced dropping of in-progress connections. These variables are constrained to meet suitable conditions that capture the QoS requirements (link availability, blocking probability, dropping probability). The performance index evaluates the total throughput. At each discrete time step, the problem is solved as an integer linear programming problem. The proposed procedure was successfully tested against suitably simulated data.
Abstract: HSDPA is a new feature introduced in the Release-5 specifications of the 3GPP WCDMA/UTRA standard to realize higher data rates together with lower round-trip times. The HSDPA concept offers an outstanding improvement in packet throughput and also significantly reduces the packet call transfer delay compared to the Release-99 DSCH. Until now, the HSDPA system has used turbo coding, one of the best coding techniques for approaching the Shannon limit. However, the main drawbacks of turbo coding are its high decoding complexity and high latency, which make it unsuitable for applications such as satellite communications, where the transmission distance itself introduces latency due to the limited speed of light. Hence, this paper proposes using LDPC coding in place of turbo coding for the HSDPA system, which decreases the latency and decoding complexity. LDPC coding does increase the encoding complexity; however, although the transmitter complexity increases at the NodeB, the end user benefits in terms of receiver complexity and bit error rate. In this paper, the LDPC encoder uses a sparse parity-check matrix H to generate codewords, and the belief propagation algorithm is used for LDPC decoding. Simulation results show that with LDPC coding the BER drops sharply as the number of iterations increases, given only a small increase in Eb/No, which is not possible with turbo coding. The same BER was also achieved using fewer iterations, so the latency and receiver complexity decrease with LDPC coding. HSDPA increases the downlink data rate within a cell to a theoretical maximum of 14 Mbps, with 2 Mbps on the uplink. The changes that HSDPA enables include better quality and more reliable, more robust data services: while realistic data rates are only a few Mbps, the actual quality and the number of users served improve significantly.
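The role of the sparse parity-check matrix H can be seen in a toy example. The sketch below uses the small (7,4) Hamming code, whose syndrome directly names an errored bit; real LDPC matrices are far larger and sparser, and decoding uses iterative belief propagation rather than the direct syndrome lookup shown here:

```python
# Parity-check matrix H of the (7,4) Hamming code: column i is the
# binary representation of i + 1, so the syndrome of a single-bit
# error directly names the flipped position.
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def syndrome(word):
    """s = H * word^T over GF(2); all-zero means a valid codeword."""
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

def correct_single_error(word):
    """Flip the bit position named by the syndrome, if any."""
    s = syndrome(word)
    pos = s[0] * 4 + s[1] * 2 + s[2]   # syndrome read as a binary number
    fixed = list(word)
    if pos:
        fixed[pos - 1] ^= 1
    return fixed

codeword = [1, 0, 1, 0, 1, 0, 1]       # satisfies all three parity checks
received = list(codeword)
received[4] ^= 1                        # channel flips bit 5
decoded = correct_single_error(received)
```

In LDPC decoding, each row of H becomes a check node exchanging probabilistic messages with the bit nodes in its support; the sparsity of H is what keeps each belief propagation iteration cheap.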
Abstract: Scheduling diversified service requests in distributed computing is a critical design issue. A cloud is a type of parallel and distributed system consisting of a collection of interconnected virtual computers; it comprises not only clusters and grids but also next-generation data centers. This paper proposes an initial heuristic algorithm applying a modified ant colony optimization approach to the diversified service allocation and scheduling mechanism in the cloud paradigm. The proposed optimization method aims to minimize the total scheduling time required to service all the diversified requests across the different resource allocators available in the cloud computing environment.
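A compact sketch of the modified-ACO idea: artificial ants assign each request to a resource with probability shaped by pheromone and an inverse-completion-time heuristic, and pheromone is reinforced along the best schedule found. All parameters and the makespan objective are illustrative assumptions, not the paper's exact formulation:

```python
import random

def aco_schedule(lengths, speeds, ants=20, iters=30,
                 alpha=1.0, beta=2.0, rho=0.1, seed=7):
    """Assign tasks to machines, minimizing makespan, with a basic
    ant colony optimization loop. Returns (assignment, makespan)."""
    rnd = random.Random(seed)
    n_tasks, n_vms = len(lengths), len(speeds)
    tau = [[1.0] * n_vms for _ in range(n_tasks)]   # pheromone trails
    best_assign, best_span = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            load = [0.0] * n_vms
            assign = []
            for t in range(n_tasks):
                # Desirability: pheromone^alpha * (1/finish-time)^beta.
                desir = [tau[t][m] ** alpha *
                         (1.0 / (load[m] + lengths[t] / speeds[m])) ** beta
                         for m in range(n_vms)]
                pick = rnd.choices(range(n_vms), weights=desir)[0]
                assign.append(pick)
                load[pick] += lengths[t] / speeds[pick]
            span = max(load)
            if span < best_span:
                best_assign, best_span = assign, span
        # Evaporate, then deposit along the best-so-far schedule.
        for t in range(n_tasks):
            for m in range(n_vms):
                tau[t][m] *= (1.0 - rho)
            tau[t][best_assign[t]] += 1.0 / best_span
    return best_assign, best_span

lengths = [8.0, 6.0, 4.0, 4.0, 2.0]   # request sizes (arbitrary units)
speeds = [2.0, 1.0]                    # a fast and a slow resource
plan, span = aco_schedule(lengths, speeds)
```

The pheromone matrix is what makes the method "modified ant colony" rather than pure greedy load balancing: good task-to-resource pairings from earlier ants bias later ants toward similar schedules.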