Abstract: Tandem mass spectrometry (MS/MS) is the engine
driving high-throughput protein identification. Protein mixtures possibly
representing thousands of proteins from multiple species are
treated with proteolytic enzymes, cutting the proteins into smaller
peptides that are then analyzed, generating MS/MS spectra. The
task of determining the identity of the peptide from its spectrum
is currently the weak point in the process. Current approaches to de
novo sequencing are able to compute candidate peptides efficiently.
The problem lies in the limitations of current scoring functions. In this
paper we introduce the concept of proteome signature. By examining
proteins and compiling proteome signatures (amino acid usage) it is
possible to characterize likely combinations of amino acids and better
distinguish between candidate peptides. Our results strongly support
the hypothesis that a scoring function that considers amino acid usage
patterns is better able to distinguish between candidate peptides. This
in turn leads to higher accuracy in peptide prediction.
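To make the idea concrete, here is a minimal Python sketch of the proteome-signature concept as we read it; all names are ours and the scoring rule is illustrative, not the paper's actual function:

```python
import math
from collections import Counter

def proteome_signature(proteins):
    """Compile a proteome signature: amino acid usage frequencies."""
    counts = Counter()
    for seq in proteins:
        counts.update(seq)
    total = sum(counts.values())
    return {aa: n / total for aa, n in counts.items()}

def usage_score(peptide, signature, floor=1e-6):
    """Log-likelihood of a candidate peptide's amino acid composition."""
    return sum(math.log(signature.get(aa, floor)) for aa in peptide)

# Rank de novo candidates: a higher usage score means a composition more
# typical of the target proteome.
sig = proteome_signature(["MKTAYIAKQR", "GSHMLEDPSK"])   # toy proteome
candidates = ["MKTA", "WWWC"]
best = max(candidates, key=lambda p: usage_score(p, sig))
```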
Abstract: This paper proposes a power-controlled scheduling scheme for devices using directional antennas in a smart home. In a home network using directional antennas, devices can transmit data concurrently in the same frequency band. Accordingly, throughput increases relative to devices using omni-directional antennas, in proportion to the number of concurrent transmissions. The number of concurrent transmissions depends on the antenna beamwidth, the number of devices operating in the network, transmission power, interference, and so on. In particular, the less transmission power is used, the more concurrent transmissions occur, owing to the smaller transmission range. In this paper, we consider a sub-optimal scheduling scheme for throughput maximization and power-consumption minimization. In the scheme, each device is equipped with a directional antenna, and various beamwidths, path-loss exponents, and antenna radiation efficiencies are considered. Numerical results show that the proposed scheme outperforms a scheduling scheme using directional antennas without power control.
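The power-range relationship driving the concurrency gain can be illustrated with a log-distance path-loss sketch (all parameter values here are ours, chosen only for illustration):

```python
# Log-distance path-loss sketch: lower transmit power shrinks the
# transmission (and interference) range, allowing denser concurrent
# transmissions. All parameter values are illustrative.
def tx_range(p_tx_dbm, p_sens_dbm=-85.0, pl_d0_db=40.0, n=3.0, d0=1.0):
    """Max distance (m) at which received power still meets sensitivity,
    under PL(d) = PL(d0) + 10 n log10(d / d0)."""
    margin_db = p_tx_dbm - p_sens_dbm - pl_d0_db
    return d0 * 10 ** (margin_db / (10 * n))

for p in (20.0, 10.0, 0.0):              # transmit power in dBm
    print(p, round(tx_range(p), 1))      # range shrinks with power
```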
Abstract: This paper presents a new pilot-based channel estimation algorithm for OFDM systems in new-generation, high-data-rate communication systems. In orthogonal frequency division multiplexing (OFDM) systems over fast-varying fading channels, channel estimation and tracking are generally carried out by transmitting known pilot symbols at given positions of the frequency-time grid. We propose an improved algorithm that, for a specific distribution of pilot signals in the OFDM frequency-time grid, computes the mean and the variance of adjacent pilot signals and then derives all unknown channel coefficients from the mean and variance equations. Simulation results show that the performance of the OFDM system increases with channel length, since the accuracy of the estimated channel improves under this low-complexity algorithm. Moreover, fewer pilot signals need to be inserted into the OFDM signal, which increases the throughput of the system compared with other pilot distributions such as comb-type and block-type channel estimation.
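As a rough illustration of pilot-aided estimation of this kind (a simplified stand-in, not the paper's exact mean/variance algorithm; the names and the interpolation rule are ours):

```python
import numpy as np

def estimate_channel(rx, tx_pilots, pilot_idx, n_sub):
    """Least-squares channel estimates at pilot subcarriers; each
    non-pilot coefficient is then set to the mean of the nearest pilot
    estimate on each side (edges clamp to the closest pilot)."""
    h_p = rx[pilot_idx] / tx_pilots                 # LS estimate at pilots
    h = np.empty(n_sub, dtype=complex)
    for k in range(n_sub):
        left = pilot_idx[pilot_idx <= k]
        right = pilot_idx[pilot_idx >= k]
        lo = left[-1] if left.size else right[0]
        hi = right[0] if right.size else left[-1]
        h[k] = 0.5 * (h_p[list(pilot_idx).index(lo)]
                      + h_p[list(pilot_idx).index(hi)])
    return h

# Toy example: 8 subcarriers, pilots at 0, 3, 6, all-ones symbols sent.
pilot_idx = np.array([0, 3, 6])
h_true = np.exp(-1j * np.linspace(0, 1, 8))
rx = h_true.copy()                       # noiseless received symbols
print(estimate_channel(rx, np.ones(3), pilot_idx, 8))
```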
Abstract: Uncertainties in a serial production line affect production throughput and cannot be prevented in a real production line. However, uncertain conditions can be managed by a robust prediction model. Thus, a hybrid model combining the autoregressive integrated moving average (ARIMA) and multiple polynomial regression is proposed to model the nonlinear relationship between production uncertainties and throughput. The uncertainties considered in this study are demand, breakdown time, scrap, and lead time. The nonlinear relationship between production uncertainties and throughput is examined using quadratic and cubic regression models, whose adjusted R-squared values were 98.3% and 98.2%, respectively. We optimized the multiple quadratic regression (MQR) by modeling the time series trend of the uncertainties with an ARIMA model. Finally, the hybrid ARIMA-MQR model is formulated, achieving a better adjusted R-squared of 98.9%.
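A minimal sketch of such an ARIMA-plus-quadratic-regression hybrid, assuming statsmodels and scikit-learn; the toy data, model orders, and variable names are our own choices, not the paper's:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Toy data: four uncertainty series and observed throughput.
rng = np.random.default_rng(0)
T = 120
X = rng.normal(size=(T, 4))      # demand, breakdown time, scrap, lead time
y = 5 + X @ [1.0, -0.5, -0.8, -0.3] + 0.2 * X[:, 0] ** 2 \
    + rng.normal(0, 0.1, T)

# Step 1: forecast each uncertainty one step ahead with ARIMA.
x_next = [ARIMA(X[:, j], order=(1, 1, 1)).fit().forecast(1)[0]
          for j in range(4)]

# Step 2: multiple quadratic regression (MQR) of throughput on uncertainties.
mqr = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
mqr.fit(X, y)

# Hybrid prediction: feed the ARIMA forecasts into the fitted MQR model.
y_next = mqr.predict(np.array(x_next).reshape(1, -1))
```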
Abstract: Discovering new biological knowledge from high-throughput biological data is a major challenge for bioinformatics today. To address this challenge, we developed a new approach for protein classification. Proteins that are evolutionarily, and thereby functionally, related are said to belong to the same classification. Identifying protein classifications is of fundamental importance for documenting the diversity of the known protein universe. It also provides a means to determine the functional roles of newly discovered protein sequences. Our goal is to predict the functional classification of novel protein sequences based on a set of features extracted from each protein sequence. The proposed technique uses datasets extracted from the Structural Classification of Proteins (SCOP) database. A set of spectral-domain features based on the Fast Fourier Transform (FFT) is used. The proposed classifier uses a multilayer back-propagation (MLBP) neural network for protein classification. The maximum classification accuracy is about 91% when the classifier is applied to all four levels of the SCOP database, and it reaches a maximum of 96% when the classification is limited to the family level. The classification results reveal that the spectral domain contains information that can be used for classification with high accuracy. In addition, the results emphasize that sequence similarity measures are of great importance, especially at the family level.
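The pipeline can be sketched as follows (illustrative only: the residue-to-number mapping, FFT size, and network shape are our assumptions, not the paper's settings):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Map residues to numbers (EIIP-style values are one common choice; the
# mapping here is illustrative, not necessarily the paper's).
AA_VALUES = {aa: i / 20.0 for i, aa in enumerate("ACDEFGHIKLMNPQRSTVWY")}

def fft_features(seq, n_fft=64, n_keep=16):
    """Numeric encoding -> zero-padded FFT -> low-frequency magnitudes."""
    x = np.array([AA_VALUES.get(aa, 0.0) for aa in seq])
    spectrum = np.abs(np.fft.rfft(x, n=n_fft))
    return spectrum[:n_keep]

# Train a multilayer back-propagation network on the spectral features.
seqs = ["MKTAYIAKQR", "GSHMLEDPSK", "MKTAYLAKQR", "GSHMLEDPAK"]
labels = [0, 1, 0, 1]
X = np.vstack([fft_features(s) for s in seqs])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X, labels)
```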
Abstract: CONWIP (constant work-in-process) as a pull production system has been widely studied by researchers to date. The CONWIP pull production system is an alternative to pure push and pure pull production systems. It lowers and controls inventory levels, which improves throughput, reduces production lead time, and improves delivery reliability and work utilization. In this article, a CONWIP pull production system was simulated alongside push and pull planning systems. To compare these systems via a production planning system (PPS) game, the parameters of each production planning system were adjusted. The main target was to reduce total WIP to a minimum while maintaining throughput and delivery reliability. Data was recorded and evaluated. A future state was designed for the real production of plastic components, together with the setup of the two indicators under the CONWIP pull production system, which can greatly help the company to be more competitive in the market.
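For intuition, here is a toy discrete-time CONWIP simulation (our own sketch, not the PPS game used in the article) in which a fixed card count caps total WIP:

```python
import random

def simulate_conwip(card_count, n_stations=3, horizon=10_000):
    """Toy serial line under CONWIP: a job may enter only when a card
    (WIP slot) is free; cards are released when jobs leave the line."""
    random.seed(1)
    buffers = [0] * n_stations       # jobs queued/processing per station
    finished = 0
    for _ in range(horizon):
        # Release a new job only if total WIP is below the CONWIP cap.
        if sum(buffers) < card_count:
            buffers[0] += 1
        # Each station completes at most one job per period, with variability.
        for s in reversed(range(n_stations)):
            if buffers[s] and random.random() < 0.8:
                buffers[s] -= 1
                if s == n_stations - 1:
                    finished += 1
                else:
                    buffers[s + 1] += 1
    return finished / horizon        # throughput (jobs per period)

for cards in (2, 4, 8):
    print(cards, simulate_conwip(cards))
```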
Abstract: In this paper, we propose a fully-utilized, block-based 2D DWT (discrete wavelet transform) architecture, which consists of four 1D DWT filters with a two-channel QMF lattice structure. The proposed architecture requires about 2MN−3N registers to save the intermediate results for higher-level decomposition, where M and N stand for the filter length and the row width of the image, respectively. Furthermore, the proposed 2D DWT processes the horizontal and vertical directions simultaneously without an idle period, so that it computes the DWT of an N×N image in a period of N²(1−2⁻²ᴶ)/3, where J is the number of decomposition levels. Compared to existing approaches, the proposed architecture shows 100% hardware utilization and high throughput rates. To mitigate the long critical path delay due to the cascaded lattices, we can apply a four-stage pipeline technique while retaining 100% hardware utilization. The proposed architecture can be applied to real-time video signal processing.
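For example, plugging representative numbers into the stated formulas (the values are ours):

```python
# Example instantiation of the stated complexity formulas (values are ours).
M, N, J = 9, 512, 3                      # filter length, row width, levels
registers = 2 * M * N - 3 * N            # intermediate-result storage: 7680
cycles = N**2 * (1 - 2**(-2 * J)) / 3    # period for an N x N image: 86016.0
print(registers, cycles)
```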
Abstract: Throughput is an important measure of the performance of a production system. Analyzing and modeling production throughput is complex in today's dynamic production systems due to their uncertainties. The main reason is that uncertainties materialize when the production line faces changes in setup time, machinery breakdown, manufacturing lead time, and scrap; in addition, demand fluctuates from time to time for each product type. These uncertainties affect production performance. This paper proposes Bayesian inference for throughput modeling under five production uncertainties. The Bayesian model uses prior distributions encoding previous information about the uncertainties, while likelihood distributions are associated with the observed data. The Gibbs sampling algorithm, a robust Markov chain Monte Carlo procedure, was employed to sample the unknown parameters and estimate the posterior means of the uncertainties. The Bayesian model was validated with respect to the convergence and efficiency of its outputs. The results show that the proposed Bayesian models were capable of predicting the production throughput with an accuracy of 98.3%.
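To illustrate the machinery (not the paper's five-uncertainty model), a minimal hand-rolled Gibbs sampler for a normal throughput model with conjugate priors might look like this; all priors and names are our assumptions:

```python
import numpy as np

def gibbs_normal(y, n_iter=5000, burn=1000, seed=0):
    """Gibbs sampler for y ~ Normal(mu, sigma^2) with conjugate priors:
    mu ~ Normal(mu0, tau0^2), 1/sigma^2 ~ Gamma(a0, b0)."""
    rng = np.random.default_rng(seed)
    mu0, tau0sq, a0, b0 = 0.0, 100.0, 1.0, 1.0   # vague priors (assumed)
    n, ybar = len(y), np.mean(y)
    mu, prec = ybar, 1.0
    mus, sigmas = [], []
    for t in range(n_iter):
        # mu | prec, y ~ Normal (conjugate update)
        var = 1.0 / (1.0 / tau0sq + n * prec)
        mean = var * (mu0 / tau0sq + prec * n * ybar)
        mu = rng.normal(mean, np.sqrt(var))
        # prec | mu, y ~ Gamma (numpy's gamma takes scale = 1/rate)
        prec = rng.gamma(a0 + n / 2,
                         1.0 / (b0 + 0.5 * np.sum((y - mu) ** 2)))
        if t >= burn:
            mus.append(mu)
            sigmas.append(1.0 / np.sqrt(prec))
    return np.mean(mus), np.mean(sigmas)          # posterior means

y = np.random.default_rng(1).normal(100.0, 5.0, 200)  # toy throughput data
print(gibbs_normal(y))
```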
Abstract: Long Term Evolution (LTE) is a 4G wireless broadband technology developed by the Third Generation Partnership Project (3GPP) in Release 8, designed to keep the Universal Mobile Telecommunications System (UMTS) competitive for the next 10 years and beyond. The concepts for LTE systems were introduced in 3GPP Release 8, with the objective of a high-data-rate, low-latency, packet-optimized radio access technology. In this paper, the performance of different TCP variants over an LTE network is investigated. The performance of TCP over LTE is affected mostly by the links of the wired network and the total bandwidth available at the serving base station. This paper describes an NS-2 based simulation analysis of TCP-Vegas, TCP-Tahoe, TCP-Reno, TCP-NewReno, TCP-SACK, and TCP-FACK, with full modeling of all traffic of the LTE system. The evaluation of network performance with all TCP variants is based mainly on throughput, average delay, and packet loss. The analysis of TCP performance over LTE shows that all TCP variants achieve similar throughput, with TCP-Vegas performing best among the variants.
Abstract: Long term rainfall analysis and prediction is a
challenging task especially in the modern world where the impact of
global warming is creating complications in environmental issues.
These data-intensive factors require high-performance computational modeling for accurate prediction. This research paper describes a prototype designed and developed in a grid environment using a number of coupled software infrastructural building blocks. This grid-enabled system provides the demanded computational power, efficiency, resources, a user-friendly interface, secured job submission, and high throughput. The results obtained using sequential execution and grid-enabled execution show that computational performance was enhanced by 36% to 75% for a decade of climate parameters. The large variation in performance can be attributed to the varying degree of computational resources available for job execution.
Grid computing enables the dynamic runtime selection, sharing, and aggregation of distributed and autonomous resources, which plays an important role not only in business but also in scientific and social settings. This research paper attempts to explore grid-enabled computing capabilities on weather indices from HOAPS data for climate impact modeling and change detection.
Abstract: A Mobile Ad-hoc Network (MANET) is a collection of self-configuring and rapidly deployable mobile nodes (routers) without any central infrastructure. Routing is one of its key issues. Many routing protocols have been reported, but it is difficult to decide which one is best in all scenarios. In this paper, the on-demand routing protocols DSR and DYMO, based on the IEEE 802.11 DCF MAC protocol, are examined, and a summary of their characteristics is presented. Their performance is analyzed and compared on the metrics of throughput, packets dropped due to non-availability of routes, duplicate RREQs generated for route discovery, and normalized routing load, by varying the CBR data traffic load using the QualNet 5.0.2 network simulator.
Abstract: In 1990 [1], the subband-DFT (SB-DFT) technique was proposed. This technique uses Hadamard filters in the decomposition step to split the input sequence into low- and high-pass sequences. In the next step, either two DFTs are performed on both bands to compute the full-band DFT, or one DFT is performed on one of the two bands to compute an approximate DFT. A combination network with correction factors is then applied after the DFTs. Another approach was proposed in 1997 [2], using a special discrete wavelet transform (DWT) to compute the discrete Fourier transform (DFT). In the first step of that algorithm, the input sequence is decomposed, in a similar manner to the SB-DFT, into two sequences using wavelet decomposition with Haar filters. The second step is to perform DFTs on both bands to obtain the full-band DFT, or to obtain a fast approximate DFT by pruning at both the input and output sides. In this paper, the wavelet-based DFT (W-DFT) with Haar filters is interpreted as the SB-DFT with Hadamard filters; the only difference is a constant factor in the combination network. This result is very important for completing the analysis of the W-DFT, since all results concerning the accuracy and approximation errors of the SB-DFT become applicable. An application example in spectral analysis is given for both the SB-DFT and the W-DFT (with different filters). The adaptive capability of the SB-DFT is included in the W-DFT algorithm to select the band of most energy as the band to be computed. Finally, the W-DFT is extended to the two-dimensional case, and an application in image transformation is given using two different types of wavelet filters.
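The equivalence can be checked numerically. The sketch below follows the standard even/odd derivation of the combination network for Hadamard (sum/difference) filters; with Haar filters the subbands carry an extra 1/√2, which only rescales the constant, as the abstract states:

```python
import numpy as np

# Numerical check of the SB-DFT combination network (our derivation of the
# standard even/odd identity; constants match the Hadamard convention).
N = 16
x = np.random.default_rng(0).normal(size=N)
u, v = x[0::2], x[1::2]                  # even / odd samples
a, b = u + v, u - v                      # Hadamard low / high subbands
A, B = np.fft.fft(a), np.fft.fft(b)      # two N/2-point DFTs
k = np.arange(N)
W = np.exp(-2j * np.pi * k / N)          # twiddle factors
Ak, Bk = A[k % (N // 2)], B[k % (N // 2)]
X = 0.5 * ((1 + W) * Ak + (1 - W) * Bk)  # combination network
assert np.allclose(X, np.fft.fft(x))     # full-band DFT recovered exactly
# With Haar filters, a and b are scaled by 1/sqrt(2); only the constant
# 0.5 in the combination network changes.
```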
Abstract: A study of the performance of TCP Vegas versus different TCP variants in homogeneous and heterogeneous wired networks is performed via simulation experiments using the network simulator (ns-2). This performance evaluation provides a comparison baseline for evaluating enhanced TCP Vegas in wired and wireless networks. In the homogeneous network, the performance of TCP Tahoe, TCP Reno, TCP NewReno, TCP Vegas, and TCP SACK is analyzed. In the heterogeneous network, the performance of TCP Vegas against the other TCP variants is analyzed. TCP Vegas outperforms the other TCP variants in the homogeneous wired network. However, TCP Vegas achieves unfair throughput in the heterogeneous wired network.
Abstract: Due to the limited frequency band and the tremendous growth in mobile users, complex computation is needed to manage resources. Long-distance communication began with the introduction of telegraphs and simple coded pulses, which were used to transmit short messages. Since then, numerous advances have rendered reliable transfer of information both easier and quicker. A wireless network is any type of computer network that is wireless, and is commonly associated with a telecommunications network whose interconnections between nodes are implemented without the use of wires. Wireless networks can be broadly categorized into infrastructure networks and infrastructure-less networks. An infrastructure network has a base station to serve the mobile users, while an infrastructure-less network has no infrastructure available to serve them; such networks are also known as mobile ad-hoc networks. In this paper, we simulate different scenarios with protocols such as AODV and DSR, and present results for throughput, delay, and received traffic in the given scenarios.
Abstract: This paper proposes an implementation of the directed diffusion paradigm that aids in studying the paradigm's operation, and evaluates its behavior under this implementation. Directed diffusion is evaluated with respect to loss percentage, lifetime, end-to-end delay, and throughput. From these evaluations, suggestions and modifications are proposed to improve the behavior of directed diffusion with respect to these metrics. The proposed modifications exploit local path repair through a technique called Loop-free Local Path Repair (LLPR), which improves the behavior of directed diffusion, especially with respect to packet loss percentage, by about 92.69%. LLPR also improves the throughput and end-to-end delay by about 55.31% and 14.06% respectively, while the lifetime decreases by about 29.79%.
Abstract: The dynamic spectrum allocation solutions such as
cognitive radio networks have been proposed as a key technology to
exploit the frequency segments that are spectrally underutilized.
Cognitive radio users work as secondary users who need to
constantly and rapidly sense the presence of primary users or
licensees, in order to utilize their frequency bands when they are inactive. Short sensing cycles should be run by the secondary users to achieve higher throughput rates as well as to keep interference to the primary users low by immediately vacating the channels once primary users are detected. In this paper, the throughput-sensing time
relationship in local and cooperative spectrum sensing has been
investigated under two distinct scenarios, namely, constant primary
user protection (CPUP) and constant secondary user spectrum
usability (CSUSU) scenarios. The simulation results show that the design of the sensing slot duration is critical and depends on the number of cooperating users under the CPUP scenario, whereas under CSUSU, adding more cooperating users has no effect once the sensing time exceeds 5% of the total frame duration.
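The CPUP trade-off can be sketched with the standard energy-detection relations for a fixed detection probability (the parameter values are our illustrative choices, not the paper's simulation setup):

```python
import numpy as np
from scipy.stats import norm

# Constant primary user protection (CPUP): fix the detection probability,
# then the false alarm probability depends on the sensing time (standard
# energy-detection relations; parameter values are ours).
fs, T = 6e6, 100e-3              # sampling rate (Hz), frame duration (s)
snr = 10 ** (-15 / 10)           # primary SNR at the detector (-15 dB)
pd = 0.9                         # target detection probability

tau = np.linspace(0.1e-3, 10e-3, 200)            # candidate sensing times
pf = norm.sf(np.sqrt(2 * snr + 1) * norm.isf(pd)  # false alarm probability
             + np.sqrt(tau * fs) * snr)           # meeting the Pd target
rate = (T - tau) / T * (1 - pf)   # normalized secondary throughput
best = tau[np.argmax(rate)]
print("best sensing time (ms):", round(best * 1e3, 2))
```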
Abstract: Network on Chip (NoC) has emerged as a promising on-chip communication infrastructure. Three-Dimensional Integrated Circuits (3D ICs) provide short interconnections between layers and interconnect scalability in the third dimension, which can further improve the performance of NoC. Therefore, in this paper, a hierarchical cluster-based interconnect architecture is merged with the 3D IC. This interconnect architecture significantly reduces the number of long wires. Since this architecture has only approximately a quarter of the routers of a 3D mesh-based architecture, the average number of hops is smaller, which leads to lower latency and higher throughput. Moreover, the smaller number of routers decreases the area overhead. Meanwhile, dual links are inserted into communication bottlenecks to improve the performance of the NoC. Simulation results confirm our theoretical analysis and show the advantages of the proposed architecture in latency, throughput, and area when compared with the 3D mesh-based architecture.
Abstract: Grid computing is a form of distributed computing
that involves coordinating and sharing computational power, data
storage and network resources across dynamic and geographically
dispersed organizations. Scheduling onto the Grid is NP-complete,
so there is no best scheduling algorithm for all grid computing
systems. An alternative is to select an appropriate scheduling
algorithm for a given grid environment based on the characteristics of the tasks, machines, and network connectivity. Job and resource scheduling is one of the key research areas in grid computing. The goal of scheduling is to achieve the highest possible system throughput and to match the application's needs with the available computing resources. The motivation of this survey is to encourage new researchers in the field of grid computing, so that they can easily understand the concept of scheduling and contribute to developing more efficient scheduling algorithms. This will benefit interested researchers carrying out further work in this thrust area of research.
Abstract: The continuously growing needs of Internet applications that transmit massive amounts of data have led to the emergence of high-speed networks. Data transfer must take place without congestion, and hence feedback parameters must be conveyed from the receiver to the sender so as to restrict the sending rate. Although TCP tries to avoid congestion by restricting the sending rate and window size, it never informs the sender of the capacity of data that can be sent, and it halves the window size at the time of congestion, resulting in decreased throughput, low utilization of the bandwidth, and maximum delay. In this paper, the XCP protocol is used: feedback parameters are calculated based on the arrival rate, service rate, traffic rate, and queue size, and the receiver informs the sender about the throughput, the capacity of data that can be sent, and the window size adjustment. This avoids drastic decreases in window size and allows better increases in the sending rate, yielding a continuous flow of data without congestion. As a result, there is a maximum increase in throughput, high utilization of the bandwidth, and minimum delay. The results of the proposed work are presented as graphs of throughput, delay, and window size. Thus, in this paper, the XCP protocol is illustrated in detail and the various parameters are thoroughly analyzed and presented.
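For concreteness, the router-side aggregate feedback in XCP has the well-known form φ = α·d·(spare bandwidth) − β·(persistent queue); a minimal sketch with the constants from the original XCP proposal (the variable names and example numbers are ours):

```python
# Sketch of an XCP-style router feedback computation (alpha = 0.4 and
# beta = 0.226 follow the original XCP proposal; names are ours).
def xcp_aggregate_feedback(capacity, input_rate, queue, avg_rtt,
                           alpha=0.4, beta=0.226):
    """Aggregate feedback phi in bytes per control interval: positive when
    spare bandwidth exists, negative when the link is overdriven or a
    persistent queue has built up."""
    spare = capacity - input_rate        # spare bandwidth (bytes/s)
    return alpha * avg_rtt * spare - beta * queue

# Example: 10 MB/s link, 9 MB/s arriving, 50 KB standing queue, 100 ms RTT.
phi = xcp_aggregate_feedback(10e6, 9e6, 50e3, 0.1)
print(phi)   # positive here, so senders are told to grow their windows
```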
Abstract: An effective approach for realizing a binary tree structure representing a combinational logic functionality with enhanced throughput is discussed in this paper. The optimization of the maximum operating frequency was achieved through delay minimization, which in turn was made possible by reducing the depth of the binary network. The proposed synthesis methodology has been validated by experimentation with FPGAs as the target technology. Though our proposal is technology-independent, the heuristic enables better optimization in throughput, even after technology mapping, for Boolean functionalities whose reduced CNF form has a lower literal cost than their reduced DNF form at the Boolean equation level. For other cases, our method converges to results similar to those of [12]. The practical results obtained for a variety of case studies demonstrate an improvement in the maximum throughput rate for the Spartan IIE (XC2S50E-7FT256) and Spartan 3 (XC3S50-4PQ144) FPGA logic families of 10.49% and 13.68%, respectively. With respect to the LUTs and IOBUFs required for physical implementation of the requisite non-regenerative logic functionality, the proposed method enabled savings of 44.35% and 44.67%, respectively, over the existing efficient method available in the literature [12].
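The depth-reduction argument behind the throughput gain can be illustrated with a small sketch (ours, not the paper's synthesis algorithm): a balanced binary reduction tree evaluates n operands in ⌈log2 n⌉ logic levels instead of the n−1 levels of a serial gate chain:

```python
import math

def chain_depth(n):
    """Logic levels for a serial (left-to-right) gate chain."""
    return n - 1

def tree_depth(n):
    """Logic levels for a balanced binary reduction tree."""
    return math.ceil(math.log2(n))

def balanced_reduce(ops, gate):
    """Pairwise-reduce operands level by level, as a balanced tree would."""
    while len(ops) > 1:
        ops = [gate(ops[i], ops[i + 1]) if i + 1 < len(ops) else ops[i]
               for i in range(0, len(ops), 2)]
    return ops[0]

bits = [True, True, False, True, True, True, True, True]
print(balanced_reduce(bits, lambda a, b: a and b))   # False (wide AND)
print(chain_depth(8), tree_depth(8))                 # 7 vs 3 logic levels
```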