Abstract: The traditional Failure Mode and Effects Analysis
(FMEA) uses Risk Priority Number (RPN) to evaluate the risk level
of a component or process. The RPN index is determined by
calculating the product of severity, occurrence and detection indexes.
The most critically debated disadvantage of this approach is that
various sets of these three indexes may produce an identical value of
RPN. This research paper seeks to address the drawbacks in
traditional FMEA and to propose a new approach to overcome these
shortcomings. The Risk Priority Code (RPC) is used to prioritize
failure modes when two or more failure modes have the same RPN.
A new method is proposed to prioritize failure modes when there is
disagreement in the ranking scales for severity, occurrence and detection.
An Analysis of Variance (ANOVA) is used to compare means of
RPN values. The SPSS (Statistical Package for the Social Sciences)
statistical analysis package is used to analyze the data. The results
presented are based on two case studies. It is found that the proposed
methodology resolves the limitations of the traditional FMEA
approach.
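The RPN collision discussed in this abstract can be illustrated in a few lines; the 1-10 scales and the two failure modes below are illustrative assumptions:

```python
# Minimal sketch of the traditional RPN calculation (assumed 1-10 scales),
# illustrating how different (severity, occurrence, detection) triples
# collide on the same RPN value.

def rpn(severity, occurrence, detection):
    """Traditional FMEA Risk Priority Number: the product of the three indices."""
    return severity * occurrence * detection

# Two failure modes with very different risk profiles...
mode_a = (9, 2, 4)   # high severity
mode_b = (4, 9, 2)   # high occurrence
# ...produce an identical RPN, so the traditional ranking cannot separate them.
print(rpn(*mode_a), rpn(*mode_b))  # both 72
```

Because the product discards which index drives the risk, a tie-breaking rule such as the proposed RPC is needed to rank these modes.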
Abstract: In the 3D wavelet video coding framework, temporal
filtering is performed along the motion trajectory using Motion
Compensated Temporal Filtering (MCTF). Hence, MCTF requires a
computationally efficient motion estimation technique. In this
paper a predictive technique is proposed in order to reduce the
computational complexity of the MCTF framework, by exploiting
the high correlation among the frames in a Group of Pictures (GOP).
The proposed technique applies the coarse and fine searches of any fast
block-based motion estimation algorithm only to the first pair of
frames in a GOP. The generated motion vectors are supplied to the
subsequent frame pairs, even at subsequent temporal levels, and only a
fine search is carried out around those predicted motion vectors.
Hence, the coarse search is skipped for all motion estimation in a GOP
except for the first pair of frames. The technique has been tested for
different fast block-based motion estimation algorithms over different
standard test sequences using MC-EZBC, a state-of-the-art scalable
video coder. The simulation results reveal a substantial reduction (i.e.,
20.75% to 38.24%) in the number of search points during motion
estimation, without compromising the quality of the reconstructed
video compared to non-predictive techniques. Since the motion
vectors of all pairs of frames in a GOP except the first pair lie
within ±1 of the motion vectors of the previous pair of frames,
the number of bits required for motion vectors is also reduced
by 50%.
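The fine-search refinement described above can be sketched as follows; the SAD cost, block size, and synthetic frame data are illustrative assumptions, not MC-EZBC internals:

```python
import numpy as np

# Hypothetical sketch of the predictive refinement: a motion vector from an
# earlier frame pair is reused as a prediction, and only a +/-1 fine search
# around it is evaluated, so the coarse search is skipped entirely.

def sad(block, candidate):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(block.astype(int) - candidate.astype(int)).sum())

def fine_search(cur, ref, top, left, bsize, pred_mv):
    """Refine a predicted motion vector by checking its 3x3 neighborhood."""
    best, best_mv = None, pred_mv
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            mv = (pred_mv[0] + dy, pred_mv[1] + dx)
            y, x = top + mv[0], left + mv[1]
            if 0 <= y and 0 <= x and y + bsize <= ref.shape[0] and x + bsize <= ref.shape[1]:
                cost = sad(cur[top:top+bsize, left:left+bsize],
                           ref[y:y+bsize, x:x+bsize])
                if best is None or cost < best:
                    best, best_mv = cost, mv
    return best_mv

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (16, 16), dtype=np.uint8)
cur = np.roll(ref, (-2, -1), axis=(0, 1))        # blocks in cur sit at (+2, +1) in ref
print(fine_search(cur, ref, 4, 4, 8, (1, 1)))    # prediction (1, 1) refines to (2, 1)
```

Only nine candidate positions are evaluated per block, which is the source of the reported reduction in search points.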
Abstract: This paper presents the simulation of fragmentation
warhead using a hydrocode, Autodyn. The goal of this research is to
determine the lethal range of such a warhead. This study investigates
the lethal range of warheads with and without steel balls as
preformed fragments. The results from the FE simulation, i.e. initial
velocities and ejected spray angles of fragments, are further processed
using an analytical approach so as to determine a fragment hit density
and probability of kill of a modelled warhead. Simulating a large
number of preformed fragments inside a warhead requires considerable
computational resources. Therefore, this study models the problem
with an alternative approach, treating the mass of the preformed
fragments as equivalent to the mass of the warhead casing. This
approach yields differences of approximately 7% and 20% in
fragment velocities relative to the analytical results for one and two
layers of preformed fragments, respectively. The lethal ranges of the
simulated warheads are 42.6 m and 56.5 m for warheads with one and
two layers of preformed fragments, respectively, compared to 13.85
m for a warhead without preformed fragments. These lethal ranges are
based on the requirement of fragment hit density. The lethal ranges
which are based on the probability of kill are 27.5 m, 61 m and 70 m
for warheads with no preformed fragments and with one and two layers
of preformed fragments, respectively.
Abstract: This paper presents a model for the evaluation of
energy performance and aerodynamic forces acting on a three-bladed
small vertical axis Darrieus wind turbine depending on blade chord
curvature with respect to rotor axis.
The adopted survey methodology is based on an analytical code
coupled to solid modeling software, capable of generating the
desired blade geometry depending on the blade design geometric
parameters, which is linked to a finite volume CFD code for the
calculation of rotor performance.
After describing and validating the model with experimental data,
the results of numerical simulations are presented on the basis of two
different blade profile architectures, respectively characterized by a
straight chord and by a curved one whose chord radius equals the rotor
external circumference. A CFD campaign of analysis is completed for
three candidate blade airfoil sections: the recently developed
DU 06-W-200 cambered blade profile, a classical symmetrical NACA
0021, and its derived cambered airfoil with a curved chord of the same
radius.
The effects of blade chord curvature on angle of attack, blade
tangential and normal forces are first investigated and then the
overall rotor torque and power are analyzed as a function of blade
azimuthal position, achieving a numerical quantification of the
influence of blade camber on overall rotor performance.
Abstract: In this paper, the fatigue crack growth behavior of the
aeronautical aluminum alloy 2024-T351 is studied. The effects of
various loading and geometric parameters, such as stress ratio and
loading amplitude, are studied. Fatigue crack growth under constant
amplitude loading is studied using the AFGROW code with the
NASGRO model. The effect of the stress ratio is highlighted: a shift
of the crack growth curves is observed. A comparative study of the
L-T and T-L orientations is presented and shows the variation in
fatigue life; the L-T orientation presents better fatigue crack growth
resistance. Crack closure effects appear in the Paris regime, whereas
no crack closure phenomena are present at high stress intensity
factors.
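In the Paris regime the constant-amplitude growth rate follows the power law da/dN = C(ΔK)^m. A minimal sketch, with illustrative constants rather than 2024-T351 material data, shows how a higher stress range shifts the crack growth curve:

```python
import math

# Minimal sketch of constant-amplitude crack growth in the Paris regime,
# da/dN = C * (dK)^m with dK = Y * dS * sqrt(pi * a).
# C, m, Y and the stress ranges below are illustrative, not 2024-T351 data.

def grow(a0, cycles, dS, C=1e-11, m=3.0, Y=1.0, step=1000):
    """Integrate crack length a (metres) with a simple forward-Euler scheme."""
    a = a0
    for _ in range(cycles // step):
        dK = Y * dS * math.sqrt(math.pi * a)   # stress intensity factor range
        a += C * dK**m * step                  # Paris-law increment over 'step' cycles
    return a

a_low  = grow(a0=1e-3, cycles=200_000, dS=100)   # lower stress range
a_high = grow(a0=1e-3, cycles=200_000, dS=150)   # higher range grows faster
print(a_low, a_high)
```

Raising the stress range (and hence the stress ratio effects on ΔK) shifts the whole crack growth curve upward, which is the qualitative behavior the abstract reports.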
Abstract: In this paper we develop an FDTD simulation code that
can treat wave propagation from a monopole antenna in a metallic
case covered with a PML (perfectly matched layer), and we perform a
series of three-dimensional FDTD simulations of electromagnetic
wave propagation in this space. We also provide a measurement setup
in an antenna laboratory, and the simulations and measurements show
good agreement. From the simulation and measurement results, we
confirmed that the computer program, which was written in
FORTRAN, works correctly.
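The leapfrog update at the core of any FDTD code can be sketched in one dimension; the grid size, Courant number, source, and boundary treatment below are illustrative assumptions, not the paper's 3D FORTRAN implementation:

```python
import numpy as np

# Minimal 1D FDTD sketch: leapfrog updates of E and H on a staggered grid
# with an additive Gaussian pulse source, in normalized units.

def fdtd_1d(nz=200, steps=300, src=100):
    ez = np.zeros(nz)
    hy = np.zeros(nz - 1)
    for t in range(steps):
        hy += 0.5 * (ez[1:] - ez[:-1])              # H update (Courant number 0.5)
        ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])        # E update, PEC walls at both ends
        ez[src] += np.exp(-((t - 30) / 10.0) ** 2)  # additive Gaussian pulse source
    return ez

field = fdtd_1d()
print(np.abs(field).max())
```

In the paper's setting the perfectly conducting walls used here would be replaced by a PML absorbing boundary, and the scalar update pair becomes the six coupled 3D curl equations.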
Abstract: In this paper, a hybrid technique of the Genetic Algorithm
and Simulated Annealing (HGASA) is applied to Fractal Image
Compression (FIC). This hybrid evolutionary algorithm reduces the
search complexity of matching between range blocks and domain
blocks. The concept of Simulated Annealing (SA) is incorporated
into the Genetic Algorithm (GA) in order to avoid premature
convergence of the strings. FIC is a spatial-domain image
compression technique, but its main drawback is the long
computational time caused by the global search. In order to improve
the computational time while retaining acceptable quality of the
decoded image, the HGASA technique has been proposed. Experimental results
show that the proposed HGASA is a better method than GA in terms
of PSNR for Fractal Image Compression.
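The hybrid idea can be sketched on a toy minimization problem; the cost function, mutation operator, and cooling schedule below are illustrative assumptions, not the paper's FIC encoder:

```python
import math, random

# Hedged sketch of the hybrid GA/SA idea: a GA step in which a mutated child
# replaces its parent only under a Simulated Annealing acceptance rule, so
# worse solutions can survive early (high temperature), delaying premature
# convergence.

def sa_accept(parent_cost, child_cost, temperature, rng):
    """Metropolis acceptance: always take improvements, sometimes take losses."""
    if child_cost <= parent_cost:
        return True
    return rng.random() < math.exp(-(child_cost - parent_cost) / temperature)

def hybrid_step(population, cost, mutate, temperature, rng):
    """One GA generation with SA-style replacement."""
    next_pop = []
    for parent in population:
        child = mutate(parent, rng)
        keep_child = sa_accept(cost(parent), cost(child), temperature, rng)
        next_pop.append(child if keep_child else parent)
    return next_pop

# Toy usage: minimize x**2 over a population of floats while cooling.
rng = random.Random(1)
pop = [rng.uniform(-10, 10) for _ in range(20)]
T = 5.0
for _ in range(200):
    pop = hybrid_step(pop, lambda x: x * x, lambda x, r: x + r.gauss(0, 0.5), T, rng)
    T *= 0.98
print(min(x * x for x in pop))
```

In the FIC setting the cost would be the range/domain block matching error and the strings would encode domain block indices and transform parameters.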
Abstract: The voice signal in a Voice over Internet Protocol (VoIP) system is processed through the best-effort IP network, which leads to network degradations including delay, packet loss, and jitter. The work in this paper presents the implementation of a finite impulse response (FIR) filter for voice quality improvement in the VoIP system through the distributed arithmetic (DA) algorithm. The VoIP simulations are conducted with the AMR-NB 6.70 kbps and G.729a speech coders at different packet loss rates, and the performance of the enhanced VoIP signal is evaluated using the perceptual evaluation of speech quality (PESQ) measure for narrowband signals. The results show a reduction in the computational complexity of the system and a significant improvement in the quality of the VoIP voice signal.
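The distributed arithmetic idea, replacing the FIR multiplications with a precomputed look-up table and bit-serial shift-and-add, can be sketched as follows; the word length and taps are illustrative, not the paper's VoIP filter design:

```python
# Hedged sketch of multiplier-free distributed arithmetic (DA) for an FIR
# filter: input samples are processed bit-serially and a precomputed LUT of
# coefficient partial sums replaces all multiplications.

def da_fir(samples, coeffs, bits=8):
    """FIR via DA on unsigned 'bits'-bit samples with integer coefficients."""
    n = len(coeffs)
    # LUT: for every bit pattern of the n current taps, the sum of the
    # selected coefficients (2**n entries).
    lut = [sum(coeffs[i] for i in range(n) if (pattern >> i) & 1)
           for pattern in range(1 << n)]
    taps = [0] * n
    out = []
    for x in samples:
        taps = [x] + taps[:-1]
        acc = 0
        for b in range(bits):                       # bit-serial accumulation
            pattern = sum(((taps[i] >> b) & 1) << i for i in range(n))
            acc += lut[pattern] << b                # shift-and-add, no multiplies
        out.append(acc)
    return out

h = [1, 2, 3, 2]                                    # illustrative integer taps
x = [10, 0, 5, 7, 255]
pad = [0] * (len(h) - 1) + x                        # direct convolution reference
ref = [sum(h[i] * pad[k + len(h) - 1 - i] for i in range(len(h)))
       for k in range(len(x))]
print(da_fir(x, h) == ref)
```

The inner loop performs only table look-ups, shifts, and additions, which is the source of the complexity reduction the abstract reports for a hardware FIR realization.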
Abstract: The wavelet transform is a very powerful tool for image compression. One of its advantages is the provision of both spatial and frequency localization of image energy. However, wavelet transform coefficients are defined by both a magnitude and a sign. While algorithms exist for efficiently coding the magnitude of the transform coefficients, they are not efficient for the coding of their sign. It is generally assumed that there is no compression gain to be obtained from the coding of the sign. Only recently have some authors begun to investigate the sign of wavelet coefficients in image coding. Some authors have assumed that the sign information bit of wavelet coefficients may be encoded with an estimated probability of 0.5; the same assumption concerns the refinement information bit. In this paper, we propose a new method for Separate Sign Coding (SSC) of wavelet image coefficients. The sign and the magnitude of wavelet image coefficients are examined to obtain their online probabilities. We use scalar quantization, in which the information of whether the wavelet coefficient belongs to the lower or the upper sub-interval of the uncertainty interval is also examined. We show that the sign information and the refinement information may be encoded with a probability of approximately 0.5 only after about five bit planes. Two maps are separately entropy encoded: the sign map and the magnitude map. The refinement information of whether the wavelet coefficient belongs to the lower or the upper sub-interval of the uncertainty interval is also entropy encoded. An algorithm is developed and simulations are performed on three standard grey-scale images: Lena, Barbara and Cameraman. Five scales are computed using the biorthogonal 9/7 wavelet filter bank. The obtained results are compared to the JPEG2000 standard in terms of peak signal-to-noise ratio (PSNR) for the three images and in terms of subjective (visual) quality.
It is shown that the proposed method outperforms JPEG2000. The proposed method is also compared to other codecs in the literature. It is shown that the proposed method is very successful in terms of PSNR.
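The separation of sign and magnitude maps can be sketched as follows, with a one-level Haar transform standing in for the paper's five-scale 9/7 decomposition and a synthetic image in place of the test set; the bit-plane thresholds are illustrative:

```python
import numpy as np

# Hedged sketch of separate sign coding: split wavelet coefficients into a
# sign map and a magnitude map, then estimate the online probability of a
# negative sign among the coefficients significant at each bit plane.

def haar2d(img):
    """One-level 2D Haar transform (rows then columns); LL lands top-left."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    rows = np.vstack([a, d])
    a = (rows[:, 0::2] + rows[:, 1::2]) / 2.0
    d = (rows[:, 0::2] - rows[:, 1::2]) / 2.0
    return np.hstack([a, d])

rng = np.random.default_rng(0)
img = rng.normal(128, 40, (64, 64))
coef = haar2d(img)
detail = coef.copy()
detail[:32, :32] = 0                       # drop the approximation (LL) band
sign_map = np.sign(detail)                 # coded separately...
mag_map = np.abs(detail)                   # ...from the magnitudes

# Probability of a negative sign among coefficients significant per bit plane.
for plane in range(5, 0, -1):
    sig = mag_map >= 2.0 ** plane
    if sig.any():
        print(plane, round(float((sign_map[sig] < 0).mean()), 3))  # near 0.5
```

On symmetric detail distributions the empirical sign probability hovers near 0.5, which is consistent with the abstract's observation that a fixed 0.5 model is only adequate after several bit planes; the proposed method instead tracks these probabilities online.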
Abstract: Due to the insufficient frequency band and the tremendous growth of mobile users, complex computation is needed for the use of resources. Long-distance communication began with the introduction of telegraphs and simple coded pulses, which were used to transmit short messages. Since then, numerous advances have rendered reliable transfer of information both easier and quicker. A wireless network is any type of computer network that is wireless, and is commonly associated with a telecommunications network whose interconnections between nodes are implemented without the use of wires. Wireless networks can be broadly categorized into infrastructure networks and infrastructure-less networks. An infrastructure network has a base station to serve the mobile users, while an infrastructure-less network has no infrastructure available to serve the mobile users; such networks are also known as mobile ad hoc networks. In this paper we simulate different scenarios with protocols such as AODV and DSR, and report throughput, delay, and received traffic in each scenario.
Abstract: An original Direct Numerical Simulation (DNS) method to tackle the problem of particulate flows at moderate to high concentration and finite Reynolds number is presented. Our method is built on the framework established by Glowinski and his coworkers [1] in the sense that we use their Distributed Lagrange Multiplier/Fictitious Domain (DLM/FD) formulation and their operator-splitting idea, but differs in the treatment of particle collisions. The novelty of our contribution lies in replacing the simple artificial repulsive-force collision model usually employed in the literature by an efficient Discrete Element Method (DEM) granular solver. The use of our DEM solver enables us to consider particles of arbitrary (at least convex) shape and to account for actual contacts, in the sense that particles actually touch each other, in contrast with the simple repulsive-force collision model. We recently upgraded our serial code, GRIFF [2], to full MPI capabilities. Our new code, PeliGRIFF, is developed under the framework of the full MPI open source platform PELICANS [3]. The new MPI capabilities of PeliGRIFF open new perspectives in the study of particulate flows and significantly increase the number of particles that can be considered in a full DNS approach: O(100000) in 2D and O(10000) in 3D. Results on the 2D/3D sedimentation/fluidization of isometric polygonal/polyhedral particles with collisions are presented.
Abstract: This paper describes a steady state model of a multiple
effect evaporator system for simulation and control purposes. The
model includes overall as well as component mass balance equations,
energy balance equations and heat transfer rate equations for area
calculations for all the effects. Each effect in the process is
represented by a number of variables which are related by the energy
and material balance equations for the feed, product and vapor flow
for backward, mixed and split feed. For simulation, the 'fsolve'
solver in MATLAB is used. The optimality of three feed sequences,
i.e., backward, mixed and split, is studied by varying the various
input parameters.
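The role played by MATLAB's 'fsolve' can be sketched in Python with a plain Newton iteration on a toy two-effect balance; the flows, fractions, and the economy relation between effects are illustrative assumptions, not the paper's model:

```python
import numpy as np

# Hedged toy sketch of a steady-state evaporator balance: two effects,
# unknowns = vapor flows V1, V2 (kg/s). A damped-free Newton iteration with a
# numerical Jacobian plays the role of MATLAB's 'fsolve'.

F, xF, xP = 5.0, 0.10, 0.40         # feed rate, feed and product solids fractions

def residuals(v):
    V1, V2 = v
    L = F - V1 - V2                 # overall mass balance -> product flow
    r1 = F * xF - L * xP            # component (solids) balance
    r2 = V1 - 1.1 * V2              # assumed economy relation between effects
    return np.array([r1, r2])

def newton(fun, x0, tol=1e-10, iters=50):
    """Solve fun(x) = 0 with Newton steps and a forward-difference Jacobian."""
    x = np.array(x0, float)
    for _ in range(iters):
        f = fun(x)
        if np.abs(f).max() < tol:
            break
        J = np.empty((len(x), len(x)))
        for j in range(len(x)):
            dx = np.zeros_like(x)
            dx[j] = 1e-7
            J[:, j] = (fun(x + dx) - f) / 1e-7
        x = x - np.linalg.solve(J, f)
    return x

V1, V2 = newton(residuals, [1.0, 1.0])
print(V1, V2, F - V1 - V2)          # vapor flows and product rate
```

The full model adds energy balance and heat transfer area equations per effect, but the solution structure, a vector of residuals driven to zero, is the same.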
Abstract: With the rapid popularization of internet services, it is apparent that the next generation terrestrial communication systems must be capable of supporting various applications like voice, video, and data. This paper presents the performance evaluation of turbo-coded mobile terrestrial communication systems, which are capable of providing high quality services for delay sensitive (voice or video) and delay tolerant (text transmission) multimedia applications in urban and suburban areas. Different types of multimedia information require different service qualities, which are generally expressed in terms of a maximum acceptable bit-error-rate (BER) and maximum tolerable latency. The breakthrough discovery of turbo codes allows us to significantly reduce the probability of bit errors with feasible latency. In a turbo-coded system, a trade-off between latency and BER results from the choice of convolutional component codes, interleaver type and size, decoding algorithm, and the number of decoding iterations. This trade-off can be exploited for multimedia applications by using optimal and suboptimal performance parameter amalgamations to achieve different service qualities. The results therefore suggest an adaptive framework for turbo-coded wireless multimedia communications that incorporates a set of performance parameters achieving an appropriate set of service qualities, depending on the application's requirements.
Abstract: End milling process is one of the common metal
cutting operations used for machining parts in manufacturing
industry. It is usually performed at the final stage of manufacturing a
product, and the surface roughness of the produced job plays an
important role. In general, surface roughness affects wear
resistance, ductility, and tensile and fatigue strength of machined
parts, and cannot be neglected in design. In the present work an
experimental investigation of end milling of aluminium alloy with
carbide tool is carried out, and the effects of different cutting
parameters on the response are studied with three-dimensional
surface plots. An artificial neural network (ANN) is used to establish
the relationship between the surface roughness and the input cutting
parameters (i.e., spindle speed, feed, and depth of cut). The MATLAB
ANN toolbox, based on the feed-forward back-propagation algorithm,
is used for modeling. A 3-12-1 network structure, having the
minimum average prediction error, is found to be the best architecture
for predicting the surface roughness value. The network predicts
surface roughness well for unseen data. For a desired surface finish of
the component to be produced, many different combinations of cutting
parameters are available. The optimum cutting parameters for
obtaining the desired surface finish while maximizing tool life are
predicted. The methodology is demonstrated, a number of problems
are solved, and the algorithm is coded in MATLAB®.
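The modeling idea, a 3-12-1 feed-forward network trained by back-propagation, can be sketched on synthetic data; the paper itself uses measured end-milling data and the MATLAB ANN toolbox, and the trend function below is an illustrative assumption:

```python
import numpy as np

# Hedged sketch of a 3-12-1 feed-forward network (spindle speed, feed, depth
# of cut -> surface roughness) trained by plain batch back-propagation.

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 3))                      # normalized cutting parameters
y = (0.5 * X[:, 1] + 0.3 * X[:, 2] - 0.2 * X[:, 0] + 0.5)[:, None]  # assumed trend

W1 = rng.normal(0, 0.5, (3, 12)); b1 = np.zeros(12)  # 3 inputs -> 12 hidden units
W2 = rng.normal(0, 0.5, (12, 1)); b2 = np.zeros(1)   # 12 hidden -> 1 output

for _ in range(5000):                                # batch gradient descent
    H = np.tanh(X @ W1 + b1)                         # hidden activations
    err = (H @ W2 + b2) - y                          # prediction error
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)      # output-layer gradients
    dH = (err @ W2.T) * (1 - H**2)                   # back-propagated signal
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)        # hidden-layer gradients
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.1 * g                                 # learning rate 0.1

mse = float((err**2).mean())
print(mse)   # small after training
```

Once trained, such a network can be swept over candidate parameter combinations to pick the one meeting a target roughness, which mirrors the optimization step described in the abstract.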
Abstract: The well-known NP-complete Traveling Salesman Problem (TSP) is coded in genetic form. A software system is proposed to determine the optimum route for a Traveling Salesman Problem using the Genetic Algorithm technique. The system starts from a matrix of the calculated Euclidean distances between the cities to be visited by the traveling salesman and a randomly chosen city order as the initial population. New generations are then created repeatedly until the proper path is reached upon satisfying a stopping criterion. This search is guided by a solution evaluation function.
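The described system can be sketched as follows; since the abstract does not name its operators, the tournament selection, order crossover, and swap mutation used here are common assumptions, and the city coordinates are illustrative:

```python
import random

# Minimal GA-for-TSP sketch: Euclidean distance matrix, random initial tours,
# tournament selection, order crossover (OX), swap mutation, fixed generations.

def tour_length(tour, d):
    """Evaluation function: total length of the closed tour."""
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def order_crossover(p1, p2, rng):
    """Copy a slice from parent 1, fill the rest in parent 2's order."""
    a, b = sorted(rng.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [c for c in p2 if c not in child]
    for i in range(len(p1)):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def solve_tsp(cities, pop_size=60, gens=300, seed=3):
    rng = random.Random(seed)
    n = len(cities)
    d = [[((x1 - x2)**2 + (y1 - y2)**2) ** 0.5 for (x2, y2) in cities]
         for (x1, y1) in cities]                       # distance matrix
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1 = min(rng.sample(pop, 3), key=lambda t: tour_length(t, d))
            p2 = min(rng.sample(pop, 3), key=lambda t: tour_length(t, d))
            c = order_crossover(p1, p2, rng)
            if rng.random() < 0.2:                     # swap mutation
                i, j = rng.sample(range(n), 2)
                c[i], c[j] = c[j], c[i]
            nxt.append(c)
        pop = nxt
    best = min(pop, key=lambda t: tour_length(t, d))
    return best, tour_length(best, d)

cities = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0), (2, 1)]  # optimal tour length 6
best, length = solve_tsp(cities)
print(length)
```

On this six-city grid the optimal closed tour has length 6, and the GA converges to it (or very near it) well within the generation budget.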
Abstract: Considering payload, reliability, security and operational lifetime as major constraints in the transmission of images, we put forward in this paper a steganographic technique implemented at the physical layer. We suggest the transmission of halftoned images (payload constraint) in wireless sensor networks to reduce the amount of transmitted data. For low-power and interference-limited applications, turbo codes provide suitable reliability. Ensuring security is one of the highest priorities in many sensor networks. The turbo code structure, apart from providing forward error correction, can be utilized to provide encryption. We first consider the halftoned image, and then the method of embedding a block of data (called the secret) in this halftoned image during the turbo encoding process is presented. The small modifications required at the turbo decoder end to extract the embedded data are presented next. The implementation complexity and the degradation of the BER (bit error rate) in the turbo-based stego system are analyzed. Using some entropy-based cryptanalytic techniques, we show that the strength of our turbo-based stego system approaches that of one-time pads (OTPs).
Abstract: The adaptive power control of Code Division Multiple
Access (CDMA) communications using Remote Radio Head
(RRH) between multiple Unmanned Aerial Vehicles (UAVs) with
a link-budget based Signal-to-Interference Ratio (SIR) estimate is
applied to four inner-loop power control algorithms. It is concluded
that the Base Station (BS) can calculate not only UAV distance, using
the linearity between speed and the Consecutive Transmit-Power-Control
Ratio (CTR) of Adaptive Step-size Closed-Loop Power Control (AS-CLPC),
Consecutive TPC Ratio Step-size Closed-Loop Power Control
(CS-CLPC), and Fixed Step-size Power Control (FSPC), but also UAV
position, using the Received Signal Strength Indicator (RSSI) ratio of
the RRHs.
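One of the named inner-loop algorithms, Fixed Step-size Power Control (FSPC), can be sketched as follows; the channel numbers and the 1 dB step are illustrative assumptions:

```python
# Hedged sketch of Fixed Step-size Power Control (FSPC): each slot the base
# station compares the measured SIR with the target and commands a fixed
# +/-1 dB transmit-power step (the TPC bit).

def fspc(sir_target_db, path_loss_db, interference_db, slots=100, step_db=1.0):
    p_tx = 0.0                                   # transmit power (dBm)
    history = []
    for _ in range(slots):
        sir = p_tx - path_loss_db - interference_db   # link-budget SIR estimate
        history.append(sir)
        p_tx += step_db if sir < sir_target_db else -step_db  # TPC command
    return history

sirs = fspc(sir_target_db=7.0, path_loss_db=90.0, interference_db=-80.0)
print(sirs[-1])   # settles near the 7 dB target, oscillating by one step
```

The closed-loop step sequence is also what the CTR-based distance estimate mentioned in the abstract is derived from: a faster-moving UAV changes the path loss more per slot, which shows up in consecutive TPC commands.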
Abstract: In this paper, we present the region-based hidden Markov random field model (RBHMRF), which encodes the characteristics of different brain regions into a probabilistic framework for brain MR image segmentation. The recently proposed TV+L1 model is used for region extraction. By utilizing different spatial characteristics in different brain regions, the RBHMRF model performs beyond the current state-of-the-art method, the hidden Markov random field model (HMRF), which uses identical spatial information throughout the whole brain. Experiments on both real and synthetic 3D MR images show that the segmentation result of the proposed method has higher accuracy compared to existing algorithms.
Abstract: One of the popular methods for recognition of facial
expressions such as happiness, sadness and surprise is based on
deformation of facial features. Motion vectors which show these
deformations can be specified by optical flow. In this method, for
detecting emotions, the resulting set of motion vectors is compared
with standard deformation templates caused by facial expressions.
In this paper, a new method is introduced to compute a measure of
likeness in order to make decisions based on the importance of the
vectors obtained from an optical flow approach. To find the
vectors, the efficient optical flow method developed by
Gautama and Van Hulle [17] is used. The suggested method has been
examined over Cohn-Kanade AU-Coded Facial Expression Database,
one of the most comprehensive collections of test images available.
The experimental results show that our method could correctly
recognize the facial expressions in 94% of the case studies. The results
also show that only a few image frames (three frames) are
sufficient to detect facial expressions with a success rate of about
83.3%. This is a significant improvement over the available methods.
Abstract: The mechanical behavior of porous media is governed by the interaction between the solid skeleton and the fluid existing inside its pores. The interaction occurs through the interface of grains and fluid. The traditional analysis methods of porous media, based on effective stress and Darcy's law, are unable to account for these interactions. For an accurate analysis, the porous medium is represented as a fluid-filled porous solid on the basis of the Biot theory of wave propagation in poroelastic media. In the Biot formulation, the equations of motion of the soil mixture are coupled with the global mass balance equations to describe the realistic behavior of porous media. Because of irregular geometry, the domain is generally treated as an assemblage of finite elements. In this investigation, the numerical formulation of the field equations governing the dynamic response of fluid-saturated porous media is analyzed and employed for the study of transient wave motion. A finite element model is developed and implemented in a computer code called DYNAPM for dynamic analysis of porous media. The weighted residual method with 8-node elements is used to develop the finite element model, and the analysis is carried out in the time domain considering dynamic excitation and gravity loading. The Newmark time integration scheme, an unconditionally stable implicit method, is used to solve the time-discretized equations. Finally, some numerical examples are presented to show the accuracy and capability of the developed model for a wide variety of porous media behaviors.
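The Newmark scheme mentioned above can be sketched on a single-DOF oscillator rather than the coupled poroelastic system; the average-acceleration parameters β = 1/4, γ = 1/2 give the unconditionally stable implicit variant:

```python
import math

# Hedged sketch of the average-acceleration Newmark scheme (beta = 1/4,
# gamma = 1/2), the unconditionally stable implicit method used by
# time-domain FE codes, demonstrated on m*u'' + c*u' + k*u = f(t).

def newmark_sdof(m, c, k, f, u0, v0, dt, steps, beta=0.25, gamma=0.5):
    u, v = u0, v0
    a = (f(0.0) - c * v - k * u) / m                # initial acceleration
    keff = m / (beta * dt**2) + gamma * c / (beta * dt) + k
    out = [u]
    for n in range(1, steps + 1):
        t = n * dt
        # effective load assembled from the predictor terms
        feff = (f(t)
                + m * (u / (beta * dt**2) + v / (beta * dt)
                       + (0.5 / beta - 1) * a)
                + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                       + dt * (gamma / (2 * beta) - 1) * a))
        u_new = feff / keff                          # implicit displacement solve
        a_new = ((u_new - u) / (beta * dt**2) - v / (beta * dt)
                 - (0.5 / beta - 1) * a)
        v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        out.append(u)
    return out

# Free vibration of an undamped oscillator with m = k = 1 (w = 1 rad/s):
# the response stays bounded and tracks u(t) = cos(t).
u = newmark_sdof(m=1.0, c=0.0, k=1.0, f=lambda t: 0.0,
                 u0=1.0, v0=0.0, dt=0.05, steps=200)
print(u[-1], math.cos(200 * 0.05))
```

In the poroelastic setting the scalar m, c, k become the assembled mass, damping, and stiffness matrices of the coupled solid-fluid system, and the displacement solve becomes a linear system per time step.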