Abstract: Decreases in hardware costs and advances in computer networking technologies have led to increased interest in the use of large-scale parallel and distributed computing systems. One of the biggest issues in such systems is the development of effective techniques/algorithms for distributing the processes/load of a parallel program across multiple hosts to achieve goals such as minimizing execution time, minimizing communication delays, maximizing resource utilization and maximizing throughput.
Substantial research using queuing analysis, assuming job arrivals following a Poisson pattern, has shown that in a multi-host system the probability of one host being idle while another host has multiple jobs queued up can be very high. Such imbalances in system load suggest that performance can be improved either by transferring jobs from the currently heavily loaded hosts to the lightly loaded ones or by distributing the load evenly/fairly among the hosts. The algorithms that achieve this, known as load balancing algorithms, fall into two basic categories - static and dynamic. Whereas static load balancing (SLB) algorithms make task-to-processor assignment decisions at compile time, based on average estimated process execution times and communication delays, dynamic load balancing (DLB) algorithms adapt to changing conditions and make their decisions at run time.
The objective of this paper is to identify qualitative parameters for the comparison of these algorithms. In future, this work can be extended to develop an experimental environment for studying these load balancing algorithms quantitatively against the comparative parameters.
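The queuing-analysis claim above can be illustrated with a small simulation. The following sketch (not from the paper; arrival rate, service rate and horizon are assumed values) runs two independent M/M/1 hosts with Poisson arrivals and estimates the fraction of time one host sits idle while the other has jobs waiting:

```python
import random

random.seed(1)

def simulate(lam=0.7, mu=1.0, horizon=50000.0):
    t = 0.0
    q = [0, 0]               # jobs at each host (including one in service)
    imbalance_time = 0.0     # time one host is idle while the other has a queue
    while t < horizon:
        rates = [lam, lam] + [mu if n > 0 else 0.0 for n in q]
        total = sum(rates)
        dt = random.expovariate(total)
        # accumulate time spent in an imbalanced state
        if (q[0] == 0 and q[1] > 1) or (q[1] == 0 and q[0] > 1):
            imbalance_time += dt
        t += dt
        # choose which event fires next
        r = random.random() * total
        if r < lam:
            q[0] += 1        # arrival at host 0
        elif r < 2 * lam:
            q[1] += 1        # arrival at host 1
        elif r < 2 * lam + rates[2]:
            q[0] -= 1        # departure from host 0
        elif q[1] > 0:
            q[1] -= 1        # departure from host 1
    return imbalance_time / t

p = simulate()
print(f"fraction of time imbalanced: {p:.3f}")
```

With 70% utilization per host and no load transfer, roughly a quarter of the time one host is idle while the other has work queued, which is the imbalance the abstract refers to.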
Abstract: To implement a fast FIR filter with FFT algorithms, the overlap-save algorithm can be used to lower the computational complexity and achieve the desired real-time processing. As the length of the input block increases in order to improve efficiency, a larger volume of zero padding greatly increases the computation length of the FFT. In this paper, we use overlapped block digital filtering to construct a parallel structure. As long as the down-sampling (or up-sampling) factor is an exact multiple of the length of the impulse response of the FIR filter, we can process the input block using a parallel structure and thus achieve a low-complexity fast FIR filter with the overlap-save algorithm. With a long filter length, the performance and throughput of the digital filtering system are also greatly enhanced.
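The overlap-save technique the abstract builds on can be sketched as follows. This is a minimal generic implementation (block length and filter length are assumed values, and the paper's parallel structure is not reproduced) that filters a long signal block by block with FFTs, discarding the aliased samples of each block:

```python
import numpy as np

def overlap_save(x, h, block=64):
    M = len(h)
    L = block - M + 1            # new samples consumed per block
    H = np.fft.fft(h, block)
    y = np.zeros(len(x))
    # prepend M-1 zeros (history) and pad the tail for the final block
    xp = np.concatenate([np.zeros(M - 1), x, np.zeros(block)])
    for start in range(0, len(x), L):
        seg = xp[start:start + block]
        out = np.real(np.fft.ifft(np.fft.fft(seg) * H))
        valid = out[M - 1:]      # discard the first M-1 aliased samples
        n = min(L, len(x) - start)
        y[start:start + n] = valid[:n]
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
h = rng.standard_normal(16)
y = overlap_save(x, h)
y_ref = np.convolve(x, h)[:len(x)]
print(np.max(np.abs(y - y_ref)))   # close to machine precision
```

Each block of 64 samples yields 49 valid outputs here; processing disjoint blocks in parallel is what the paper's structure exploits.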
Abstract: The available algorithms for blind estimation, namely the constant modulus algorithm (CMA) and the decision-directed algorithm (DDA/DFE), suffer from the problem of convergence to local minima. Moreover, if the channel drifts considerably, any DDA loses track of the channel, so their usage is limited in varying channel conditions. The primary limitation of training-based schemes, in contrast, is the requirement of certain overhead bits in the transmit framework, which leads to wasteful use of the bandwidth. Such arrangements also fail to use channel state information (CSI), which is an important aid in improving the quality of reception. In this work, the main objective is to reduce the overhead imposed by the pilot symbols, overhead which in effect reduces the system throughput. We also formulate an arrangement based on certain dynamic Artificial Neural Network (ANN) topologies which not only contributes to lowering the overhead but also facilitates the use of CSI. A 2×2 Multiple Input Multiple Output (MIMO) system is simulated and the performance variation with different channel estimation schemes is evaluated. A new semi-blind approach based on dynamic ANN is proposed for channel tracking in varying channel conditions, and its performance is compared with perfectly known CSI and with least squares (LS) based estimation.
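The least squares (LS) baseline mentioned above can be illustrated for a 2×2 MIMO link. This is a generic pilot-based LS estimate (pilot length and noise level are assumed values, not the paper's simulation settings), solving Y = HX + N for H in the least-squares sense:

```python
import numpy as np

rng = np.random.default_rng(3)
# random 2x2 Rayleigh-like channel
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
# 16 pilot vectors transmitted from the two antennas
X = rng.standard_normal((2, 16)) + 1j * rng.standard_normal((2, 16))
N = 0.01 * (rng.standard_normal((2, 16)) + 1j * rng.standard_normal((2, 16)))
Y = H @ X + N
# LS estimate: H_ls = Y X^H (X X^H)^{-1}
H_ls = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)
print(np.max(np.abs(H_ls - H)))   # small estimation error
```

Reducing the number of pilot columns in X shrinks the overhead but inflates this error, which is the trade-off the semi-blind ANN approach targets.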
Abstract: In a state-of-the-art industrial production line for photovoltaic products, the handling and automation processes are of particular importance. In being processed into a fully functional crystalline solar cell, an as-cut photovoltaic wafer is subjected to numerous repeated handling steps. Given stronger requirements on productivity and on decreasing rejections due to defects, the mechanical stress on the thin wafers has to be reduced to a minimum, as fragility increases with decreasing wafer thickness. In line with this increasing wafer fragility, research at the Fraunhofer Institutes IPA and CSP has shown a negative correlation between multiple handling processes and wafer integrity. Recent work has therefore focused on the analysis and optimization of the dry wafer stack separation process with compressed air. Achieving a wafer-sensitive process capability together with a high production throughput rate is the basic motivation of this research.
Abstract: When architecting an application, key nonfunctional requirements such as performance, scalability, availability and security, which influence the architecture of the system, are sometimes not adequately addressed. Performance of the application may not be looked at until there is a concern. There are several problems with this reactive approach: if the system does not meet its performance objectives, the application is unlikely to be accepted by the stakeholders. This paper suggests an approach to performance modeling for web-based J2EE and .NET applications that addresses performance issues early in the development life cycle. It also includes a performance modeling case study, with Proof-of-Concept (PoC) and implementation details for the .NET and J2EE platforms.
Abstract: A new approach to the timestamp ordering problem in serializable schedules is presented. Since the number of users accessing databases is increasing rapidly, accuracy and the need for high throughput are central topics in the database area. Strict 2PL does not allow all possible serializable schedules and therefore does not yield high throughput. The main advantages of the approach are its ability to enforce recoverable transaction execution and the high achievable performance of concurrent execution in central databases. Compared with Strict 2PL, the general structure of the algorithm is simple, deadlock-free, and allows the execution of all possible serializable schedules, which results in high throughput. Various examples involving different orders of database operations are discussed.
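For context, the classical basic timestamp-ordering rule that such approaches build on can be sketched as follows (this is the textbook rule, not the paper's new algorithm): each data item tracks the largest read and write timestamps it has seen, and conflicting operations arriving too late are rejected.

```python
class TOManager:
    """Basic timestamp ordering: reject operations that arrive 'too late'."""

    def __init__(self):
        self.rts = {}   # largest timestamp that has read each item
        self.wts = {}   # largest timestamp that has written each item

    def read(self, ts, item):
        if ts < self.wts.get(item, 0):
            return False                      # item already overwritten by a newer txn
        self.rts[item] = max(self.rts.get(item, 0), ts)
        return True

    def write(self, ts, item):
        if ts < self.rts.get(item, 0) or ts < self.wts.get(item, 0):
            return False                      # would invalidate a newer read/write
        self.wts[item] = ts
        return True

m = TOManager()
print(m.read(1, "x"))    # True
print(m.write(2, "x"))   # True
print(m.read(1, "x"))    # False: x was written by txn 2
```

Unlike lock-based Strict 2PL, no transaction ever waits here, so deadlock is impossible; rejected transactions are simply restarted with a new timestamp.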
Abstract: Encryption and decryption in RSA are done by modular exponentiation, which is achieved by repeated modular multiplication. Hence the efficiency of modular multiplication directly determines the efficiency of the RSA cryptosystem. This paper designs a modified Montgomery modular multiplication in which the addition of operands is computed by a 4:2 compressor. The basic logic operations in the addition are partitioned over two iterations such that the computations are performed in parallel. This reduces the critical path delay of the proposed Montgomery design. The proposed design and RSA are implemented on Virtex 2 and Virtex 5 FPGAs. The two factors, partitioning and parallelism, improve the frequency and throughput of the proposed design.
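The underlying Montgomery reduction that such designs accelerate can be sketched in software (a generic version of the algorithm; the paper's 4:2-compressor hardware datapath is not modeled, and the small modulus below is chosen only for illustration):

```python
def montgomery_mul(a, b, n, k):
    """Compute a*b*R^{-1} mod n for R = 2^k, without dividing by n."""
    R = 1 << k
    # n' such that n * n' ≡ -1 (mod R)
    n_prime = (-pow(n, -1, R)) % R
    t = a * b
    m = (t * n_prime) % R
    u = (t + m * n) >> k        # t + m*n is divisible by R by construction
    return u - n if u >= n else u

# round trip through Montgomery form
n, k = 97, 8                    # assumed small odd modulus for illustration
R = 1 << k
a, b = 17, 23
aR, bR = (a * R) % n, (b * R) % n
abR = montgomery_mul(aR, bR, n, k)     # = a*b*R mod n
result = montgomery_mul(abR, 1, n, k)  # strip the R factor
print(result, (a * b) % n)             # both equal 17*23 mod 97
```

Because the reduction only shifts by k bits instead of dividing by n, the critical operation is the multi-operand addition t + m*n, which is exactly what the 4:2 compressor in the proposed design speeds up.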
Abstract: In this paper, a new approach based on the extent of friendship between nodes is proposed, which induces the nodes to cooperate in an ad hoc environment. The extended DSR protocol is tested under different scenarios by varying the number of malicious nodes and the node moving speed. It is also tested by varying the number of nodes used in the simulation. The results indicate that the throughput achieved by the extended DSR is greater than that of the standard DSR, and that the percentage of malicious drops over total drops is lower for the extended DSR than for the standard DSR.
Abstract: This research paper evaluates and compares the performance of equal-cost adaptive multi-path routing algorithms under the transport protocols TCP (Transmission Control Protocol) and UDP (User Datagram Protocol), using the network simulator ns-2, and concludes which one performs better.
Abstract: It has been established that microRNAs (miRNAs) play an important role in gene expression through post-transcriptional regulation of messenger RNAs (mRNAs). However, the precise relationships between miRNAs and their target genes, in terms of number, type and biological relevance, remain largely unclear. Dissecting the miRNA-target relationships will provide more insight into miRNA target identification and validation and therefore promote the understanding of miRNA function. In miRBase, miRanda is the key algorithm used for target prediction for Zebrafish. This algorithm is high-throughput but introduces many false positives (noise). Since validating a large set of targets through laboratory experiments is very time consuming, computational methods for miRNA target validation need to be developed. In this paper, we present an integrative method to investigate several aspects of the relationships between miRNAs and their targets, with the final purpose of extracting high-confidence targets from the pool of miRanda-predicted targets. This is achieved using techniques ranging from statistical tests to clustering and association rules. Our research focuses on Zebrafish. We found that validated targets do not necessarily show the highest sequence matching. Moreover, for some miRNA families, the frequency of their predicted targets is significantly higher in the genomic region near their own physical location. Finally, in a case study of dre-miR-10 and dre-miR-196, we found that the predicted target genes hoxd13a, hoxd11a, hoxd10a and hoxc4a of dre-miR-10, and hoxa9a, hoxc8a and hoxa13a of dre-miR-196, have similar characteristics to validated target genes and therefore represent high-confidence target candidates.
Abstract: With the increasing use of wireless devices in different fields, such as medical devices and industrial settings, this paper presents a method for simplifying Bluetooth packets while enhancing throughput. The paper studies a vital issue in wireless communications, namely the throughput of data over wireless networks. Bluetooth and ZigBee are both Wireless Personal Area Network (WPAN) technologies. Taking the competition between these two systems into consideration, the paper proposes different schemes to improve the throughput of a Bluetooth network over a reliable channel. The proposal builds on the Channel Quality Driven Data Rate (CQDDR) rules, which determine the suitable packet type for transmission according to the channel conditions. The proposed packet is studied over additive white Gaussian noise (AWGN) and fading channels. The experimental results reveal that the PL length can be extended by 8, 16 and 24 bytes for classic and EDR packets, respectively. The proposed method is also suitable for low-throughput Bluetooth.
Abstract: Grid networks provide the ability to perform high-throughput computing by taking advantage of many networked computers' resources to solve large-scale computation problems. As the popularity of Grid networks has increased, there is a need to efficiently distribute the load among the resources accessible on the network. In this paper, we present a stochastic network system that provides a distributed load-balancing scheme by generating almost regular networks. This network system is self-organized and depends only on local information for load distribution and resource discovery. The in-degree of each node reflects its free resources, and the job assignment and resource discovery processes required for load balancing are accomplished by fitted random sampling. Simulation results show that the generated network system provides an effective, scalable, and reliable load-balancing scheme for the distributed resources accessible on Grid networks.
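The random-sampling style of job assignment described above can be illustrated with the well-known "two random choices" heuristic (offered only as an analogue of the paper's fitted random sampling; node and job counts are assumed values): each job probes a few random nodes and is assigned to the least loaded one.

```python
import random

random.seed(2)

def assign_jobs(n_nodes=50, n_jobs=5000, probes=2):
    load = [0] * n_nodes
    for _ in range(n_jobs):
        # sample a few candidate nodes using only local/random information
        candidates = random.sample(range(n_nodes), probes)
        target = min(candidates, key=lambda i: load[i])
        load[target] += 1
    return load

load = assign_jobs()
print(max(load) - min(load))   # small spread compared with a single random choice
```

Even two probes per job keep the load spread close to uniform, which is why sampling-based schemes scale well without any global load table.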
Abstract: Workload and resource management are two essential functions provided at the service level of the grid software infrastructure. To improve the global throughput of these software environments, workloads have to be evenly scheduled among the available resources. To realize this goal, several load balancing strategies and algorithms have been proposed. Most strategies were developed with a homogeneous set of sites in mind, linked by homogeneous and fast networks. For computational grids, however, we must address major new issues, namely heterogeneity, scalability and adaptability. In this paper, we propose a layered algorithm which achieves dynamic load balancing in grid computing. Based on a tree model, our algorithm presents the following main features: (i) it is layered; (ii) it supports heterogeneity and scalability; and (iii) it is totally independent of any physical architecture of a grid.
Abstract: Wireless ad hoc nodes freely and dynamically self-organize to communicate with one another. Each node can act as a host or a router, depending on its capability in terms of current power level, signal strength, number of hops, routing protocol, interference and other factors. In this research, a study was conducted to observe the effect of hop count over different network topologies on TCP congestion control performance degradation. To achieve this objective, simulations using NS-2 with different topologies were evaluated. The comparative analysis is discussed based on standard observation metrics: throughput, delay and packet loss ratio. The results show a relationship between the type of topology, the hop count and the performance of an ad hoc network. In future work, the study will be extended to investigate the effect of different error rates and background traffic over the same topologies.
Abstract: De novo genome assembly is always fragmented. Fragmentation is more serious with the popular next-generation sequencing (NGS) data because NGS reads are shorter than traditional Sanger sequences. As the data throughput of NGS is high, fragmented assemblies are usually not the result of missing data. On the contrary, the assembled sequences, called contigs, are often connected to more than one other contig in a complicated manner, leading to the fragmentation. In this network of connections between contigs, called a contig graph, false connections are inevitable because of repeats and sequencing/assembly errors. Simplifying a contig graph by removing false connections directly improves genome assembly. In this work, we have developed a tool, SIMGraph, to resolve ambiguous connections between contigs using NGS data. Applying SIMGraph to the assemblies of a fungus and a fish genome, we resolved 27.6% and 60.3% of the ambiguous contig connections, respectively. These results can reduce the experimental effort needed to resolve contig connections.
Abstract: In this paper, we address the problem of adaptive radio resource allocation (RRA) and packet scheduling in the downlink of a cellular OFDMA system, and propose a downlink multi-carrier proportional fair (MPF) scheduler together with a joint scheme that combines it with an adaptive RRA algorithm to distribute radio resources among multiple users according to their individual QoS requirements. The allocation and scheduling objective is to maximize the total throughput while at the same time maintaining fairness among users. The simulation results demonstrate that the methods presented provide users with more explicit fairness relative to the RRA algorithm, while the joint scheme achieves higher sum-rate capacity with flexible parameter settings compared with the MPF scheduler.
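The proportional fair criterion underlying the MPF scheduler can be sketched generically (this is the standard single-carrier PF metric with an assumed averaging time constant, not the paper's joint MPF/RRA scheme): each slot is given to the user with the largest ratio of instantaneous rate to average throughput.

```python
import random

random.seed(0)

def pf_schedule(rates_per_slot, tc=100.0):
    n_users = len(rates_per_slot[0])
    avg = [1e-6] * n_users                 # average throughput per user
    served = [0] * n_users
    for rates in rates_per_slot:
        # PF metric: instantaneous rate divided by average throughput
        chosen = max(range(n_users), key=lambda i: rates[i] / avg[i])
        served[chosen] += 1
        for i in range(n_users):
            got = rates[i] if i == chosen else 0.0
            avg[i] += (got - avg[i]) / tc  # exponential moving average
    return served

# three users with different mean channel rates (assumed fading model)
slots = [[random.expovariate(1 / m) for m in (1.0, 2.0, 4.0)]
         for _ in range(5000)]
print(pf_schedule(slots))
```

Despite the 4:1 spread in mean channel quality, each user wins a comparable share of slots, because the metric serves users near their own channel peaks; this is the throughput/fairness balance the abstract refers to.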
Abstract: Self-propelled forage harvesters in the 850 horsepower range were tested over three years for fuel consumption, throughput and quality of chop for corn silage. Cut length had a significant effect on fuel consumption, throughput and some aspects of chop quality. Measured cut length was often different from the theoretical length of cut. Where cut length was equivalent, fuel consumption and throughput were equivalent across brands. Shortening the cut length from 17 to 11 mm increased fuel consumption by 53 percent, measured as Mg of silage harvested per gallon of fuel used, and decreased capacity by 42 percent, measured as tons of fresh material per hour of run time.
Abstract: Due to the complex network architecture, the multihop nature of a mobile ad hoc network poses additional problems for its users. When the traffic load at each node increases, the additional contention due to its traffic pattern may cause nodes close to the destination to starve nodes farther from the destination; moreover, the capacity of the network may be unable to satisfy the users' total demand, which results in an unfairness problem. In this paper, we propose an algorithm to compute the optimal MAC-layer bandwidth assigned to each flow in the network. The contention area of the bottleneck links determines the fair time share, which is necessary to calculate the maximum allowed transmission rate for each flow. To utilize the network resources completely, we compute two optimal rates, namely the maximum fair share and the minimum fair share. When flows are not allocated the optimal transmission rate, we use the achieved maximum fair share to limit the input rate of the flows that cross the contention area of the bottleneck links, and then calculate the next highest fair share. Through simulation results, we show that the proposed protocol achieves improved fair share and throughput with reduced delay.
Abstract: A real-time distributed computing system uses heterogeneously networked computers to solve a single problem. Coordinating activities among the computers is therefore a complex task, and deadlines make it more complex still. Performance depends on many factors such as traffic workloads, database system architecture, the underlying processors, disk speeds, etc. A simulation study has been performed to analyze performance under different transaction scheduling conditions: different workloads, arrival rates, priority policies, altered slack factors and a preemptive policy. The performance metric of the experiments is the missed percentage, that is, the percentage of transactions that the system is unable to complete. The throughput of the system depends on the arrival rate of transactions. Performance can be enhanced by altering the slack factor value: tuning the slack value for a transaction can help keep some transactions from being killed or aborted. Under the preemptive policy, many extra executions of new transactions can be carried out.
Abstract: IEEE 802.11e, an enhanced version of the 802.11 WLAN standards, incorporates Quality of Service (QoS), which makes it a better choice for multimedia and real-time applications. In this paper we study various aspects of the 802.11e standard. Further, the analysis results for this standard are compared with the legacy 802.11 standard. Simulation results show that IEEE 802.11e outperforms legacy IEEE 802.11 in terms of quality of service due to its flow-differentiated channel allocation and better queue management architecture. We also propose a method to mitigate the unfair allocation of bandwidth between the downlink and uplink channels by varying the medium access priority level.