Abstract: This paper presents the decoder design for the single error correcting and double error detecting code proposed by the authors in an earlier paper. The speed of error detection and correction of a code depends largely on the associated encoder and decoder circuits. The complexity and speed of such circuits are determined by the number of 1's in the parity check matrix (PCM). The number of 1's in the parity check matrix of the code proposed by the authors is smaller than in any currently known single error correcting/double error detecting code. This results in simplified encoding and decoding circuitry for error detection and correction.
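For readers unfamiliar with SEC-DED decoding, the general syndrome-based mechanism can be sketched with a small extended Hamming(8,4) code. This is a textbook construction for illustration only, not the authors' low-weight parity check matrix; bit positions and helper names are arbitrary choices.

```python
# Illustrative SEC-DED sketch: extended Hamming(8,4).
# Parity-check matrix H for Hamming(7,4); column c is (c+1) in binary.
H = [[(c >> b) & 1 for c in range(1, 8)] for b in range(3)]

def encode(d):
    """Encode 4 data bits into 8 bits: Hamming(7,4) plus overall parity."""
    cw = [0] * 7
    data_pos = [2, 4, 5, 6]              # 0-indexed positions 3, 5, 6, 7
    for p, bit in zip(data_pos, d):
        cw[p] = bit
    for i, ppos in enumerate([0, 1, 3]):  # parity bits at positions 1, 2, 4
        cw[ppos] = sum(cw[c] for c in range(7)
                       if ((c + 1) >> i) & 1 and c != ppos) % 2
    return cw + [sum(cw) % 2]             # append overall parity bit

def decode(r):
    """Return (status, word) with status in {'ok', 'corrected', 'double'}."""
    syndrome = sum((sum(H[b][c] * r[c] for c in range(7)) % 2) << b
                   for b in range(3))
    overall = sum(r) % 2
    if syndrome == 0 and overall == 0:
        return 'ok', r                    # no error detected
    if overall == 1:                      # odd overall parity: single error
        w = r[:]
        pos = syndrome - 1 if syndrome else 7  # syndrome 0: parity bit itself
        w[pos] ^= 1
        return 'corrected', w
    return 'double', r                    # even parity, nonzero syndrome
```

The decoder classifies the received word by combining the 3-bit syndrome with the overall parity check: a single error is located and flipped, while a double error is flagged but left uncorrected.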
Abstract: Wireless mobile communications have experienced phenomenal growth over the last decades. Advances in wireless mobile technologies have brought about a demand for high-quality multimedia applications and services. For such applications and services to work, a signaling protocol is required for establishing, maintaining and tearing down multimedia sessions. The Session Initiation Protocol (SIP) is an application-layer signaling protocol based on a request/response transaction model. This paper considers the SIP INVITE transaction over an unreliable medium, since it has recently been modified in Request for Comments (RFC) 6026. To help assure that the functional correctness of this modification is achieved, the SIP INVITE transaction is modeled and analyzed using Colored Petri Nets (CPNs). Based on the model analysis, it is concluded that the SIP INVITE transaction is free of livelocks and dead code, while at the same time it has both desirable and undesirable deadlocks. Therefore, the SIP INVITE transaction should be subjected to additional updates in order to eliminate the undesirable deadlocks. In order to reduce the cost of implementing and maintaining SIP, additional remodeling of the SIP INVITE transaction is recommended.
Abstract: Orthogonal Frequency Division Multiplexing (OFDM) is an efficient data transmission method for high-speed communication systems. However, the main drawback of OFDM systems is that they suffer from a high Peak-to-Average Power Ratio (PAPR), which causes inefficient use of the High Power Amplifier and can limit transmission efficiency. An OFDM signal consists of a large number of independent subcarriers, as a result of which its amplitude can have high peak values. In this paper, we propose an effective reduction scheme that combines the DCT and SLM techniques. The scheme is composed of the DCT followed by SLM, using the Riemann matrix to obtain the phase sequences for the SLM technique. The simulation results show that the PAPR can be greatly reduced by applying the proposed scheme: while plain OFDM had high PAPR values of about 10.4 dB, our proposed method achieved a PAPR reduction of about 4.7 dB, with low computational complexity. This approach also avoids randomness in phase sequence selection, which makes decoding at the receiver simpler. As an added benefit, the matrices can be generated at the receiver end to recover the data signal, and hence it is not necessary to transmit side information (SI).
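For context, the PAPR figure discussed above can be computed directly from the time-domain OFDM symbol. The following sketch (the baseline measurement only, not the paper's DCT+SLM scheme) estimates the PAPR of a random QPSK-modulated OFDM symbol; the subcarrier count N = 256 and the seed are arbitrary assumptions.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)
N = 256                                       # number of subcarriers (assumed)
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
ofdm_symbol = np.fft.ifft(qpsk) * np.sqrt(N)  # unitary IFFT scaling
print(f"baseline PAPR = {papr_db(ofdm_symbol):.1f} dB")
```

A constant-envelope (single-carrier) signal would give 0 dB by this measure, which is why superposing many independent subcarriers drives the PAPR up.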
Abstract: The accident in the spent fuel pool (SFP) of Fukushima Daiichi Unit 4 showed the importance of continuously monitoring the key environmental parameters, such as water temperature, water level, and radiation level, in the SFP under accident conditions. Because the SFP water temperature is one of the key parameters indicating SFP conditions, its behavior under accident conditions shall be understood in order to prepare appropriate measures. This study estimated the temporal change in the SFP water temperature at Kori Unit 1, a 587 MWe plant, for 1 hour after the initiation of a loss-of-pool-cooling accident. The ANSYS CFX 13.0 code was used for the estimation. The estimation showed that the rate of increase of the water temperature was 3.9 °C per hour, and that the SFP water temperature could reach 100 °C in 25.6 hours after the initiation of the loss-of-pool-cooling accident.
Abstract: Testing accounts for the major share of the technical effort in the software development process. Typically, it consumes more than 50 percent of the total cost of developing a piece of software. The selection of software tests is a very important activity within this process to ensure that the software reliability requirements are met. Generally, tests are run to achieve maximum coverage of the software code, and very little attention is given to the achieved reliability of the software. Using an existing methodology, this paper describes how to use Bayesian Belief Networks (BBNs) to select unit tests based on their contribution to the reliability of the module under consideration. In particular, the work examines how the approach can enhance test-first development by assessing the quality of test suites resulting from this development methodology and providing insight into additional tests that can significantly improve the achieved reliability. In this way the method can produce an optimal selection of inputs, and the order in which the tests are executed, to maximize the software reliability. To illustrate this approach, a belief network is constructed for a modern software system, incorporating expert opinion, expressed through probabilities, on the relative quality of the elements of the software and the potential effectiveness of the software tests. The steps involved in constructing the Bayesian network are explained, as is a method for accommodating the test suite resulting from test-driven development.
Abstract: Multi-site damage (MSD) has been a challenge in aircraft, civil and power plant structures. In real life, components are subjected to cracking at many vulnerable locations, such as bolt holes. However, conventional analyses often do not account for the presence of multiple cracks. Unlike components with a single crack, the behavior of these components is difficult to predict. When two cracks approach one another, their stress fields influence each other and produce an enhancing or shielding effect, depending on the positions of the cracks. In the present study, numerical fracture analyses have been conducted using a code developed by the authors, based on the modified virtual crack closure integral (MVCCI) technique, together with the finite element analysis (FEA) software ABAQUS, to compute the stress intensity factor (SIF) of plates with multiple cracks. Various parametric studies have been carried out, and the results have been compared with the literature wherever available, as well as with the solution obtained using ABAQUS. Based on these extensive numerical studies, expressions for the SIF have been obtained for collinear cracks and non-aligned cracks.
Abstract: Seismic design may require non-conventional concepts, due to the fact that the stiffness and layout of the structure have a great effect on the overall structural behaviour, on the seismic load intensity, as well as on the internal force distribution. To find an economical and optimal structural configuration, the key issue is the optimal design of the lateral load resisting system. This paper focuses on the optimal design of regular, concentric braced frame (CBF) multi-storey steel building structures. The optimal configurations are determined by a numerical method developed by the authors, using a genetic algorithm approach. The aim is to find structural configurations with minimum structural cost. The design constraints of the objective function are assigned in accordance with the Eurocode 3 and Eurocode 8 guidelines. In this paper, results are presented for various building geometries, different seismic intensities, and levels of energy dissipation.
Abstract: In this paper, we propose a Perceptually Optimized Embedded ZeroTree Image Coder (POEZIC) that applies perceptual weighting to the wavelet transform coefficients prior to SPIHT encoding, in order to reach a targeted bit rate with a perceptual quality improvement with respect to the coding quality obtained using the SPIHT algorithm alone. The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS), which plays an important role in our POEZIC quality assessment. Our POEZIC coder is based on a vision model that incorporates various masking effects of HVS perception; thus, our coder weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) luminance masking and contrast masking, 2) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting, and 3) the wavelet error sensitivity (WES), used to reduce the perceptual quantization errors. The new perceptually optimized codec has the same complexity as the original SPIHT technique. However, the experimental results show that our coder achieves very good performance in terms of quality measurement.
Abstract: When the foundations of structures are subjected to cyclic loading with amplitudes less than their permissible load, there is often concern about the amount of uniform and non-uniform settlement of such structures. Storage tank foundations undergoing numerous filling and discharging cycles, and railway ballast courses under repeated transportation loads, are examples of such conditions. This paper deals with the effects of using a new generation of reinforcement, Grid-Anchor, for the purpose of reducing the permanent settlement of these foundations under different proportions of the ultimate load. Other items, such as the type and number of reinforcements as well as the number of loading cycles, are studied numerically. Numerical models were built using the Plaxis3D Tunnel finite element code. The results show that by using grid-anchors, and by increasing the number of their layers in the same proportion as the applied cyclic load, the amount of permanent settlement decreases by up to 42% relative to the unreinforced condition, depending on the number of reinforcement layers, the percentage of applied load, and the number of loading cycles; the number of cycles needed to reach a constant value of dimensionless settlement also decreases, by up to 20% relative to the unreinforced condition.
Abstract: Heart failure is the most common cause of death nowadays, but if medical help is given promptly, the patient's life may be saved in many cases. Numerous heart diseases can be detected by analyzing electrocardiograms (ECG). Artificial Neural Networks (ANN) are computer-based expert systems that have proved to be useful in pattern recognition tasks. ANN can be used in different phases of the decision-making process, from classification to diagnostic procedures. This work consists of a review followed by a novel method. The purpose of the review is to assess the evidence of healthcare benefits involving the application of artificial neural networks to the clinical functions of diagnosis, prognosis and survival analysis on ECG signals. The developed method is based on a compound neural network (CNN) that classifies ECGs as normal or carrying an atrioventricular heart block (AVB). This method uses three different feed-forward multilayer neural networks. A single output unit encodes the probability of an AVB occurrence. A value between 0 and 0.1 is the desired output for a normal ECG; a value between 0.1 and 1 indicates an occurrence of an AVB. The results show that this compound network performs well in detecting AVBs, with a sensitivity of 90.7% and a specificity of 86.05%. The accuracy is 87.9%.
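The decision rule and the reported metrics can be sketched as follows; the threshold of 0.1 comes from the abstract, while the helper names and the sample label pairs are invented purely for illustration.

```python
def classify(output, threshold=0.1):
    """Map the network's single output unit to a diagnostic label."""
    return 'AVB' if output > threshold else 'normal'

def sensitivity_specificity(pairs):
    """Compute (sensitivity, specificity) from (predicted, actual) labels."""
    tp = sum(p == a == 'AVB' for p, a in pairs)        # true positives
    tn = sum(p == a == 'normal' for p, a in pairs)     # true negatives
    fp = sum(p == 'AVB' and a == 'normal' for p, a in pairs)
    fn = sum(p == 'normal' and a == 'AVB' for p, a in pairs)
    return tp / (tp + fn), tn / (tn + fp)
```

Sensitivity here measures the fraction of actual AVBs detected, and specificity the fraction of normal ECGs correctly passed, matching the two figures quoted above.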
Abstract: The study in this paper underlines the importance of the correct joint selection of the spreading codes at the transmitter side and the detector at the receiver side for the uplink of multi-carrier code division multiple access (MC-CDMA), in the presence of nonlinear distortion due to a high power amplifier (HPA). The bit error rate (BER) of the system for different spreading sequences (Walsh codes, Gold codes, orthogonal Gold codes, Golay codes and Zadoff-Chu codes) and different kinds of receivers (the minimum mean-square error receiver (MMSE-MUD) and the microstatistic multi-user receiver (MSF-MUD)) is compared by means of simulations of the MC-CDMA transmission system. Finally, the results of the analysis show that the application of MSF-MUD in combination with Golay codes can significantly outperform the other tested spreading codes and receivers for all commonly used HPA models.
Abstract: This paper presents the modeling and optimization of two NP-hard problems in flexible manufacturing systems (FMS): the part type selection problem and the loading problem. Due to the complexity and extent of the problems, the paper was split into two parts. The first part of the paper discussed the modeling of the problems and showed how real-coded genetic algorithms (RCGA) can be applied to solve them. This second part discusses the effectiveness of the RCGA, which uses an array of real numbers as the chromosome representation. The novel proposed chromosome representation produces only feasible solutions, which minimizes the computational time the GA would otherwise need to push its population toward the feasible search space or to repair infeasible chromosomes. The proposed RCGA improves FMS performance by considering two objectives: maximizing system throughput and maintaining the balance of the system (minimizing system unbalance). The resulting objective values are compared to the optimum values produced by the branch-and-bound method. The experiments show that the proposed RCGA can reach near-optimum solutions in a reasonable amount of time.
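As a generic illustration of the real-coded operators such a GA relies on (not the authors' chromosome encoding for the FMS problems), a blend crossover and a uniform mutation over real-valued chromosomes might look like this; the operator choices, rates, and bounds are assumptions.

```python
import random

def blend_crossover(p1, p2, alpha=0.5):
    """BLX-alpha crossover: each child gene is drawn from an interval
    slightly wider than the one spanned by the two parent genes."""
    child = []
    for g1, g2 in zip(p1, p2):
        lo, hi = min(g1, g2), max(g1, g2)
        spread = alpha * (hi - lo)
        child.append(random.uniform(lo - spread, hi + spread))
    return child

def mutate(chrom, rate=0.1, bounds=(0.0, 1.0)):
    """With probability `rate`, resample a gene uniformly within bounds."""
    return [random.uniform(*bounds) if random.random() < rate else g
            for g in chrom]
```

Because the genes stay real-valued throughout, no binary encoding or decoding step is needed, which is one of the usual arguments for real-coded GAs.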
Abstract: The decoding of Low-Density Parity-Check (LDPC) codes operates over a redundant structure known as the bipartite graph, meaning that the full set of bit nodes is not absolutely necessary for decoder convergence. In 2008, Soyjaudah and Catherine designed a recovery algorithm for LDPC codes based on this assumption and showed that the error-correcting performance of their codes outperformed that of conventional LDPC codes. In this work, the use of the recovery algorithm is further explored to test the performance of LDPC codes as the number of iterations is progressively increased. For experiments conducted with small block lengths of up to 800 bits and numbers of iterations of up to 2000, the results interestingly demonstrate that, contrary to conventional wisdom, the error-correcting performance keeps increasing with an increasing number of iterations.
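The iteration loop being swept can be illustrated with a generic hard-decision bit-flipping decoder. This is a simple stand-in for exposition; the recovery algorithm of Soyjaudah and Catherine is not reproduced here, and the parity-check matrix and iteration cap are placeholders.

```python
import numpy as np

def bit_flip_decode(H, r, max_iter=2000):
    """Hard-decision bit flipping over a 0/1 parity-check matrix H.
    Returns (decoded word, True) on convergence, (best effort, False) otherwise."""
    x = r.copy()
    for _ in range(max_iter):
        syndrome = (H @ x) % 2
        if not syndrome.any():
            return x, True                     # all parity checks satisfied
        # count unsatisfied checks touching each bit, flip the worst offender
        unsat = H[syndrome == 1].sum(axis=0)
        x[np.argmax(unsat)] ^= 1
    return x, False
```

Each iteration either satisfies all checks and stops, or flips the bit involved in the most unsatisfied checks, so raising `max_iter` gives the decoder more chances to escape residual errors, which is the quantity the experiments above sweep.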
Abstract: Results in one field often give insight into others, and all have much potential for scientific and technological application. The Hadamard-transform technique, once applied to spectrometry, also has its use in the SNR enhancement of OTDR. In this report, a new set of codes (Simplex codes) is discussed, and the origin of the additional SNR gain is explained.
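The standard construction behind Hadamard-transform multiplexing can be sketched as follows: an S-matrix (Simplex code) is obtained from a Sylvester Hadamard matrix by deleting the first row and column and mapping the remaining entries to 0/1. This is a textbook illustration, not necessarily the exact codes of the report, and the size n = 8 is arbitrary.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def s_matrix(n):
    """(n-1) x (n-1) 0/1 Simplex matrix: drop the first row and column
    of the Hadamard matrix, then map +1 -> 0 and -1 -> 1."""
    H = hadamard(n)
    return (1 - H[1:, 1:]) // 2
```

Each row of the resulting S-matrix contains exactly n/2 ones, i.e. half the probe slots are "on" in every measurement, which is where the multiplexing SNR advantage over one-slot-at-a-time probing comes from.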
Abstract: Due to the stringent legislation on the emissions of diesel engines and also the increasing demands on fuel consumption, the importance of detailed 3D simulation of fuel injection, mixing and combustion has increased in recent years. In the present work, the FIRE code has been used to study the detailed modeling of spray and mixture formation in a Caterpillar heavy-duty diesel engine. The paper provides an overview of the submodels implemented, which account for liquid spray atomization, droplet secondary break-up, droplet collision, impingement, turbulent dispersion and evaporation. The simulation was performed from intake valve closing (IVC) to exhaust valve opening (EVO). The predicted in-cylinder pressure is validated by comparison with existing experimental data. A good agreement between the predicted and experimental values ensures the accuracy of the numerical predictions obtained in the present work. Predictions of engine emissions were also performed, and a good quantitative agreement between measured and predicted NOx and soot emission data was obtained with the use of the Zeldovich mechanism and the Hiroyasu model. In addition, the results reported in this paper illustrate that numerical simulation can be one of the most powerful and beneficial tools for internal combustion engine design, optimization and performance analysis.
Abstract: In this paper we study the use of a new code, called the Random Diagonal (RD) code, for Spectral Amplitude Coding (SAC) optical Code Division Multiple Access (CDMA) networks using Fiber Bragg Gratings (FBG); an FBG consists of a fiber segment whose index of refraction varies periodically along its length. The RD code is constructed using a code level and a data level; one of the important properties of this code is that the cross-correlation at the data level is always zero, which means that Phase Induced Intensity Noise (PIIN) is reduced. We find that the performance of the RD code is better than that of the Modified Frequency Hopping (MFH) and Hadamard codes. It has been observed through experimental and theoretical simulation that the BER for the RD code is significantly better than for the other codes. Proof-of-principle simulations of encoding with 3 channels and 10 Gbps data transmission have been successfully demonstrated, together with an FBG decoding scheme for canceling the code level from the SAC signal.
Abstract: This paper demonstrates the application of the craziness-based particle swarm optimization (CRPSO) technique for designing an 8th-order low-pass Infinite Impulse Response (IIR) filter. CRPSO, a much improved version of PSO, is a population-based global heuristic search algorithm which finds a near-optimal solution in terms of a set of filter coefficients. The effectiveness of this algorithm is demonstrated through a comparative study with some well-established algorithms, namely the real-coded genetic algorithm (RGA) and particle swarm optimization (PSO). Simulation results affirm that the proposed CRPSO algorithm outperforms its counterparts not only in terms of quality of output, i.e. sharpness at cut-off, pass-band ripple, stop-band ripple, and stop-band attenuation, but also in convergence speed, with assured stability.
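For orientation, the base PSO loop (without the "craziness" velocity perturbation the paper adds on top) can be sketched as below, minimizing a toy sphere function in place of the IIR filter-design cost; all parameter values and names are illustrative assumptions.

```python
import random

def pso(f, dim, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0):
    """Minimize f over R^dim with a plain particle swarm."""
    swarm = [[random.uniform(-bound, bound) for _ in range(dim)]
             for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in swarm]                 # per-particle best positions
    gbest = min(pbest, key=f)[:]                  # global best position
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(dim):
                # inertia + cognitive pull (pbest) + social pull (gbest)
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - p[d])
                             + c2 * random.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if f(p) < f(pbest[i]):
                pbest[i] = p[:]
                if f(p) < f(gbest):
                    gbest = p[:]
    return gbest

# sphere function as a stand-in for the filter-design cost
best = pso(lambda x: sum(v * v for v in x), dim=3)
```

In the filter-design setting, the position vector would hold the IIR coefficients and `f` would measure the deviation of the resulting frequency response from the desired low-pass specification.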
Abstract: Many difficulties are faced in the process of learning computer programming. This paper proposes a system framework intended to reduce the cognitive load of learning programming. The first section focuses on the process of learning and the shortcomings of current approaches to learning programming. Then the proposed prototype is presented, along with its justification. In the proposed prototype, the concept map is used as a visualization metaphor. Concept maps are similar to the mental schemas in long-term memory and hence can reduce cognitive load well. In addition, other methods, such as the part-code method, are also proposed in this framework to reduce cognitive load.
Abstract: This paper proposes a fast code acquisition scheme for optical code division multiple access (O-CDMA) systems. Unlike the conventional scheme, the proposed scheme employs multiple thresholds, providing a shorter mean acquisition time (MAT). The simulation results show that the MAT of the proposed scheme is shorter than that of the conventional scheme.
Abstract: This paper deals with a numerical analysis of the transient response of composite beams with strain-rate-dependent mechanical properties using a finite difference method. The equations of motion are derived based on Timoshenko beam theory. The geometric nonlinearity effects are taken into account with von Kármán large deflection theory. The finite difference method, in conjunction with the Newmark average acceleration method, is applied to solve the differential equations. A modified progressive damage model, which accounts for strain rate effects, is developed based on the material property degradation rules and modified Hashin-type failure criteria, and is added to the finite difference model. The components of the model are implemented in a computer code in Mathematica 6. Glass/epoxy laminated composite beams with constant and strain-rate-dependent mechanical properties under dynamic load are analyzed. The effects of strain rate on the dynamic response of the beam for various stacking sequences, loads and boundary conditions are investigated.
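The Newmark average acceleration stepping named above can be illustrated on a single linear degree of freedom. This is a minimal sketch, not the authors' nonlinear composite-beam code; the oscillator parameters are arbitrary.

```python
def newmark_sdof(m, c, k, f, u0, v0, dt, steps, beta=0.25, gamma=0.5):
    """Displacement history of m*u'' + c*u' + k*u = f(t) by Newmark stepping.
    beta=1/4, gamma=1/2 is the (unconditionally stable) average acceleration rule."""
    u, v = u0, v0
    a = (f(0.0) - c * v - k * u) / m          # consistent initial acceleration
    hist = [u]
    keff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
    for i in range(1, steps + 1):
        t = i * dt
        # effective load collects the known state at the previous step
        rhs = (f(t)
               + m * (u / (beta * dt**2) + v / (beta * dt)
                      + (1 / (2 * beta) - 1) * a)
               + c * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        u_new = rhs / keff
        v_new = (gamma / (beta * dt)) * (u_new - u) \
                + (1 - gamma / beta) * v + dt * (1 - gamma / (2 * beta)) * a
        a = (u_new - u) / (beta * dt**2) - v / (beta * dt) \
            - (1 / (2 * beta) - 1) * a
        u, v = u_new, v_new
        hist.append(u)
    return hist

# undamped oscillator with omega = 2 rad/s: exact solution u(t) = cos(2 t)
u = newmark_sdof(m=1.0, c=0.0, k=4.0, f=lambda t: 0.0,
                 u0=1.0, v0=0.0, dt=0.01, steps=314)
```

For an undamped linear oscillator the average acceleration rule conserves amplitude (no algorithmic damping), introducing only a small period elongation, which is why it is a common choice for transient structural response.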