Abstract: Natural disasters have occurred throughout the Earth's history. As human life developed, people faced various disasters; since disasters destroyed their living areas and disrupted their lives, they learned how to respond to and overcome them. Today, in the era of industrialization and informatics, researchers seek to define the stages and classification of pre- and post-disaster processes in order to establish a framework for these circumstances. Because too many parameters complicate such frameworks and procedures, this goal has not yet been properly achieved, and the main available resource remains the UNDRO guidelines (1982) [1]. This paper discusses temporary housing as one of the accepted stages in the disaster management field and investigates the effects of its disapproval or dismissal in two earthquakes that took place in Iran.
Abstract: In the study of honeycomb crushing under quasi-static loading, two parameters are important: the mean crushing stress and the wavelength of the folding mode. Previous theoretical models did not consider the true cylindrical curvature effects or the flow stress in the folding mode of the honeycomb material. The present paper introduces a modification of Wierzbicki's model based on these two parameters for estimating the mean crushing stress and the wavelength through the energy method. Comparison of the results obtained by the new model and by Wierzbicki's model with existing experimental data shows better prediction by the model presented in this paper.
Abstract: This paper is concerned with the role strategic
management plays in higher education and the methods it entails.
Using the University of West Bohemia and the Czech Republic as
examples, the paper describes the methods used in furthering
strategic objectives within institutions and their different parts
(faculties, institutes). The nature of the demands faced by the
university dictates the need for a strategic framework which defines
the basic objectives and parameters of tertiary education and research
in a local, regional and national context. Sharing strategies with a
wider range of actors (universities, cities, regions, the practical
sphere) is key to laying the foundations for more efficient
cooperation.
Abstract: Although Arabic is currently one of the most widely spoken languages worldwide, there has been relatively little research on Arabic speech recognition compared with other languages such as English and Japanese. Digital speech processing and voice recognition algorithms are of special importance for designing efficient, accurate, and fast automatic speech recognition systems. The speech recognition process carried out in this paper is divided into three stages. First, the signal is preprocessed to reduce noise effects and digitized, and the voice activity regions are segmented using a voice activity detection (VAD) algorithm. Second, features are extracted from the speech signal using the Mel-frequency cepstral coefficients (MFCC) algorithm; delta and acceleration (delta-delta) coefficients have been added to improve the recognition accuracy. Finally, each test word's features are compared with the training database using the dynamic time warping (DTW) algorithm. Using the best settings found for all parameters affecting the aforementioned techniques, the proposed system achieved a recognition rate of about 98.5%, which outperformed other HMM- and ANN-based approaches reported in the literature.
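The matching stage described above (MFCC feature vectors compared frame by frame with DTW) can be sketched as follows. This is a minimal illustration under assumed inputs, not the authors' implementation; the feature arrays and word labels used in the example are made up.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences
    (rows = frames, columns = e.g. MFCC coefficients)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # frame-wise Euclidean cost
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(test_feats, training_db):
    """Return the training word whose features are closest to the test word."""
    return min(training_db, key=lambda w: dtw_distance(test_feats, training_db[w]))
```

Because DTW warps the time axis, the same word spoken faster or slower still matches its template closely, which is why it suits template-based isolated-word recognition.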
Abstract: There are two common types of operational research techniques: optimisation and metaheuristic methods. The latter may be defined as sequential processes that intelligently combine the exploration and exploitation found in natural intelligence to form iterative searches, with the aim of effectively determining near-optimal solutions in a solution space. In this work, a metaheuristic called Ant Colony Optimisation (ACO), inspired by the foraging behaviour of ants, was adapted to find optimal solutions of eight non-linear continuous mathematical models. Within a specified region of the solution space for each model, candidate solutions may contain the global optimum or multiple local optima. The algorithm has several common parameters (the number of ants, moves, and iterations) which act as its drivers. A series of computational experiments for initialising these parameters was conducted using the Rigid Simplex (RS) and Modified Simplex (MSM) methods. Experimental results were analysed in terms of the best-so-far solutions, mean, and standard deviation, leading to a recommendation of proper settings of the ACO parameters for all eight functions. These parameter settings can be applied as a guideline for future uses of ACO and promote its ease of use in real industrial processes. The results obtained from MSM were very similar to those from RS; however, when results with noise standard deviations of 1 and 3 are compared, MSM reaches optimal solutions more efficiently than RS in terms of speed of convergence.
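The ACO loop for continuous functions can be illustrated with a minimal sketch: ants sample new candidate points around the best solutions found so far, and the sampling spread shrinks over iterations (a pheromone analogue). The parameter values, decay rule, and sphere test function below are illustrative assumptions, not the paper's eight models.

```python
import random

def aco_continuous(f, bounds, n_ants=10, n_iter=100, archive_size=10, sigma_decay=0.9):
    """Minimal continuous ACO: keep an archive of the best solutions,
    sample ants from Gaussians centred on archive members, shrink sigma."""
    archive = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(archive_size)]
    archive.sort(key=f)
    sigma = [hi - lo for lo, hi in bounds]          # initial exploration spread
    for _ in range(n_iter):
        for _ in range(n_ants):
            guide = random.choice(archive[: archive_size // 2])  # bias toward better solutions
            x = [min(max(random.gauss(g, s), lo), hi)            # clip to the bounds
                 for g, s, (lo, hi) in zip(guide, sigma, bounds)]
            archive.append(x)
        archive.sort(key=f)
        del archive[archive_size:]                  # keep only the best solutions
        sigma = [s * sigma_decay for s in sigma]    # exploitation increases over time
    return archive[0], f(archive[0])

# Example: minimise the sphere function on [-5, 5]^2.
best, val = aco_continuous(lambda x: sum(xi * xi for xi in x), [(-5, 5), (-5, 5)])
```

The decay factor plays the role the abstract assigns to the parameter study: too slow a decay wastes moves on exploration, too fast a decay traps the search in a local optimum.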
Abstract: The System Identification problem looks for a
suitably parameterized model, representing a given process. The
parameters of the model are adjusted to optimize a performance
function based on error between the given process output and
identified process output. The linear system identification field is
well established with many classical approaches, but most of those
methods cannot be applied to nonlinear systems. The problem becomes
tougher if the system is completely unknown and only the output time
series is available. It has been reported that the capability of
Artificial Neural Networks to approximate linear and nonlinear
input-output maps makes them particularly suitable for the
identification of nonlinear systems where only the output time series
is available [1][2][4][5]. The work reported here is an attempt to
implement a few well-known algorithms in the context of modeling
nonlinear systems and to compare their performance in order to
establish their relative merits and demerits.
Abstract: One of the purposes of the robust method of
estimation is to reduce the influence of outliers in the data, on the
estimates. The outliers arise from gross errors or contamination from
distributions with long tails. The trimmed mean is a robust estimate.
This means that it is not sensitive to violations of the
distributional assumptions of the data. It is called an adaptive
estimate when the trimming proportion is determined from the data
rather than being fixed a priori.
The main objective of this study is to find out the robustness
properties of the adaptive trimmed means in terms of efficiency, high
breakdown point and influence function. Specifically, it seeks to find
out the magnitude of the trimming proportion of the adaptive
trimmed mean which will yield efficient and robust estimates of the
parameter for data which follow a modified Weibull distribution with
parameter λ = 1/2 , where the trimming proportion is determined by a
ratio of two trimmed means defined as the tail length. Secondly, the
asymptotic properties of the tail length and the trimmed means are
also investigated. Finally, a comparison is made on the efficiency of
the adaptive trimmed means in terms of the standard deviation for the
trimming proportions and when these were fixed a priori.
The asymptotic tail lengths, defined as the ratio of two trimmed
means, and the asymptotic variances were computed using the derived
formulas. The standard deviations of the derived tail lengths for
samples of size 40 simulated from a Weibull distribution were
computed over 100 iterations using a computer program written in
Pascal.
The findings of the study revealed that the tail lengths of the
Weibull distribution increase in magnitude as the trimming
proportions increase; that the measure of the tail length and the
adaptive trimmed mean are asymptotically independent as the number of
observations n approaches infinity; that the tail length is
asymptotically distributed as the ratio of two independent normal
random variables; and that the asymptotic variances decrease as the
trimming proportions increase. The simulation study showed
empirically that the standard error of the adaptive trimmed mean
using the ratio of tail lengths is smaller, for different values of
the trimming proportion, than that of its counterpart with the
trimming proportions fixed a priori.
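The adaptive trimming idea can be illustrated with a short sketch: compute a tail-length measure as a ratio of two trimmed means and use it to choose the trimming proportion. The particular ratio, thresholds, and candidate proportions below are illustrative assumptions, not the study's actual rule for the Weibull case.

```python
def trimmed_mean(x, alpha):
    """Mean after removing a proportion alpha from each tail."""
    xs = sorted(x)
    k = int(alpha * len(xs))
    kept = xs[k: len(xs) - k] if k > 0 else xs
    return sum(kept) / len(kept)

def adaptive_trimmed_mean(x):
    """Pick the trimming proportion from the data via a tail-length measure
    (illustrative definition: ratio of a lightly trimmed mean of absolute
    deviations to a heavily trimmed one; heavier tails give a larger ratio)."""
    m = trimmed_mean(x, 0.5 - 1.0 / len(x))          # roughly the median
    dev = [abs(v - m) for v in x]
    tail = trimmed_mean(dev, 0.05) / trimmed_mean(dev, 0.25)
    # Heavier tails -> trim more (thresholds are made-up for the sketch).
    alpha = 0.05 if tail < 1.5 else 0.10 if tail < 2.5 else 0.20
    return trimmed_mean(x, alpha), alpha
```

The point of the sketch is the dependence of alpha on the sample itself: the same estimator trims lightly on short-tailed data and heavily on long-tailed data, which is what distinguishes it from a mean trimmed with a proportion fixed a priori.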
Abstract: An upwind difference approximation is used for a singularly perturbed problem in material science. Based on discrete Green's function theory, an error estimate in the maximum norm is obtained that is first-order uniformly convergent with respect to the perturbation parameter. A numerical experiment verifies the validity of the theoretical analysis.
Abstract: This paper presents a new hardware interface using a
microcontroller which processes audio music signals to standard
MIDI data. A technique for processing music signals by extracting
note parameters from music signals is described. An algorithm to
convert the voice samples for real-time processing without complex
calculations is proposed. A high frequency microcontroller as the
main processor is deployed to execute the outlined algorithm. The
MIDI data generated is transmitted using the EIA-232 protocol. The
analysis of the generated data shows the feasibility of using
microcontrollers as a real-time MIDI generation hardware interface.
Abstract: This paper proposes a new performance characterization for the test strategy for second-order filters known as the Transient Analysis Method (TRAM). We evaluate the ability of this test strategy to detect deviation faults under simultaneous statistical fluctuation of the non-faulty parameters. For this purpose, we use Monte Carlo simulations and a fault model that considers only one component of the filter under test as faulty, while the other components take random values (within their tolerance bands) drawn from their statistical distributions. The new data reported here show, for the filters under study, the presence of hard-to-test components and relatively low fault coverage values for small deviation faults. These results suggest that the fault coverage obtained using only nominal values for the non-faulty components (the traditional evaluation of TRAM) is a poor predictor of test performance.
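The Monte Carlo evaluation described above can be sketched in a few lines: on each run, the non-faulty components are drawn within their tolerance bands, a single component is forced to its deviated value, and the fraction of runs in which the test flags the fault is the coverage. The `test_detects` predicate, component names, and tolerance model are stand-ins for the actual TRAM measurement, which the abstract does not detail.

```python
import random

def fault_coverage(test_detects, nominal, tol, faulty_name, faulty_value, n_runs=1000):
    """Estimate fault coverage: fraction of Monte Carlo runs in which the
    test detects the fault while non-faulty components vary within tolerance."""
    detected = 0
    for _ in range(n_runs):
        # Statistical fluctuation of every component within +/- tol
        # (uniform here; the paper's distributions may differ).
        circuit = {name: random.uniform(v * (1 - tol), v * (1 + tol))
                   for name, v in nominal.items()}
        circuit[faulty_name] = faulty_value   # the single deviated component
        if test_detects(circuit):
            detected += 1
    return detected / n_runs
```

Running this with the deviated value close to nominal shows exactly the effect the abstract reports: tolerance spread in the healthy components masks small deviation faults and drives the coverage down.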
Abstract: By applying an improved back-propagation neural network
(BPNN), a model of the current densities of a 10-layer solid oxide
fuel cell (SOFC) is established in this study. To build the learning
data for the BPNN, a Taguchi orthogonal array is applied to arrange
the operating conditions, in which a total of 7 factors act as the
inputs of the BPNN. The average current densities obtained by a
numerical method act as the outputs of the BPNN. Compared with the
direct solution, the learning errors for all learning data are
smaller than 0.117%, and the predicting errors for 27 forecasting
cases are less than 0.231%. The results show that the presented model
provides an effective mathematical algorithm to predict the
performance of a SOFC stack in real time.
The calculating algorithms are also applied to optimize the average
current density of a SOFC stack. The operating performance window of
a SOFC stack is found to be between 41137.11 and 53907.89.
Furthermore, an inverse predicting model of the operating parameters
of a SOFC stack is developed here by the calculating algorithms of
the improved BPNN, which is shown to effectively predict the
operating parameters required to achieve a desired performance output
of a SOFC stack.
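The core of a back-propagation surrogate model of this kind can be sketched as a one-hidden-layer network trained by gradient descent. This is a generic BPNN sketch, not the paper's improved variant; the layer size, learning rate, and the toy target function in the usage example are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_bpnn(X, y, hidden=8, lr=0.3, epochs=3000):
    """One-hidden-layer back-propagation network mapping operating
    parameters X to a scalar output y (sigmoid hidden, linear output)."""
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))   # sigmoid hidden activations
        out = h @ W2 + b2                           # linear output layer
        err = out - y                               # prediction error
        # Back-propagate the mean squared error gradient.
        dW2 = h.T @ err / len(X)
        db2 = err.mean(0)
        dh = err @ W2.T * h * (1 - h)
        dW1 = X.T @ dh / len(X)
        db1 = dh.mean(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    # Return a predictor closed over the trained weights.
    return lambda Xn: (1.0 / (1.0 + np.exp(-(Xn @ W1 + b1)))) @ W2 + b2
```

Once trained on the Taguchi-arranged learning cases, such a surrogate answers in microseconds, which is what makes real-time performance prediction of the stack feasible compared with repeating the numerical solution.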
Abstract: Avalanche release of snow has been modeled in the present study. Snow is assumed to behave as a semi-solid, and the governing equations have been studied using a continuum approach. The dynamical equations have been solved for two different zones (the starting zone and the track zone) using appropriate initial and boundary conditions. The effects of density (ρ), eddy viscosity (η), slope angle (θ), and slab depth (R) on the flow parameters have been examined. Numerical methods have been employed for computing the non-linear differential equations. One of the most interesting and fundamental innovations of the present study is obtaining the initial condition for the velocity computation by a numerical approach; this velocity information has been obtained through concepts of fracture mechanics applicable to snow. The results on the flow parameters have been found to be in qualitative agreement with published results.
Abstract: The drug discovery process starts with protein
identification because proteins are responsible for many functions
required for maintenance of life. Protein identification further needs
determination of protein function. The proposed method develops a
classifier for human protein function prediction. The model uses a
decision tree for the classification process. The protein function is
predicted on the basis of matched sequence-derived features for each
protein function. The research work includes the development of a
tool which determines sequence-derived features by analyzing
different parameters; other sequence-derived features are determined
using various web-based tools.
Abstract: Automatic detection of syllable repetition is one of the
important parameters in objectively assessing stuttered speech. The
existing method, which uses an artificial neural network (ANN),
requires high levels of agreement as a prerequisite before attempting
to train and test ANNs to separate fluent and nonfluent speech. We
propose an automatic detection method for syllable repetition in read
speech for objective assessment of stuttered disfluencies. It uses a
novel approach with four stages: segmentation, feature extraction,
score matching, and decision logic. Feature extraction is implemented
using the well-known Mel-frequency cepstral coefficients (MFCC).
Score matching is done using dynamic time warping (DTW) between the
syllables. The decision logic is implemented by a perceptron based on
the score given by score matching. Although many methods are
available for segmentation, in this paper it is done manually. The
assessment by human judges of the read speech of 10 adults who
stutter is described using the corresponding method, and the result
was 83%.
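The decision-logic stage (a perceptron over the score-matching outputs) can be sketched as follows. The weights, learning rate, and toy DTW scores are illustrative assumptions; the study's actual feature vectors and labels are not reproduced here.

```python
def perceptron_train(samples, labels, epochs=20, lr=0.1):
    """Train a single perceptron on score vectors (e.g. DTW distances
    between successive syllables); label 1 = repetition, 0 = fluent."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                                  # 0 when correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(w, b, x):
    """Threshold decision on a new score vector."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

The intuition is that a repeated syllable produces a small DTW distance to its neighbour, so the perceptron only needs to learn a threshold-like boundary on the score axis.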
Abstract: The boundary layer flow and heat transfer on a
stretched surface moving with prescribed skin friction are studied
for a permeable surface. The surface temperature is assumed to vary
inversely with the vertical direction x for n = -1. The skin friction
at the surface scales as x^(-1/2) at m = 0. The constants m and n are the
indices of the power law velocity and temperature exponent
respectively. Similarity solutions are obtained for the boundary layer
equations subject to power law temperature and velocity variation.
The effect of various governing parameters, such as the buoyancy
parameter λ and the suction/injection parameter fw for air (Pr = 0.72)
are studied. The choice of n and m ensures that the similarity
solutions used are independent of x. The results show that assisting
flow (λ > 0) enhances the heat transfer coefficient along the surface
for any constant value of fw. Furthermore, injection increases the
heat transfer coefficient whereas suction reduces it at constant λ.
Abstract: The paper presents an analytical solution for dispersion
of a solute in the peristaltic motion of a micropolar fluid in the
presence of magnetic field and both homogeneous and heterogeneous
chemical reactions. The average effective dispersion coefficient has
been found using Taylor's limiting condition under the long-wavelength
approximation. The effects of various relevant parameters on the average
coefficient of dispersion have been studied. The average effective
dispersion coefficient increases with amplitude ratio, cross viscosity
coefficient and heterogeneous chemical reaction rate parameter. But it
decreases with magnetic field parameter and homogeneous chemical
reaction rate parameter. It can be noted that the presence of peristalsis
enhances dispersion of a solute.
Abstract: This paper presents a novel combined cycle of air separation and natural gas liquefaction. The idea is that natural gas can be liquefied while gaseous or liquid nitrogen and oxygen are produced in one combined cryogenic system. Cycle simulation and exergy analysis were performed to evaluate the process and thereby reveal the influence of the crucial parameter, the flow rate ratio through the two-stage expanders β, on the heat transfer temperature difference, its distribution, and the consequent exergy loss. Composite curves for the combined hot streams (feed natural gas and recycled nitrogen) and the cold stream show the degree of optimization available in this process if an appropriate β is chosen. The results indicate that increasing β reduces the temperature difference and the exergy loss in the heat exchange process. However, the maximum value of β should be confined in terms of the minimum temperature difference proposed in heat exchanger design standards and the heat exchanger size. The optimal βopt under different operating conditions corresponding to the required minimum temperature differences was investigated.
Abstract: In this research study, an intelligent detection system
to support medical diagnosis and detection of abnormal lesions by
processing endoscopic images is presented. The images used in this
study have been obtained using the M2A Swallowable Imaging
Capsule - a patented, video color-imaging disposable capsule.
Schemes have been developed to extract texture features from the
fuzzy texture spectra in the chromatic and achromatic domains for a
selected region of interest from each color component histogram of
endoscopic images. An advanced fuzzy inference neural network, which
combines fuzzy systems and artificial neural networks, and the
concept of fusion of multiple classifiers dedicated to specific
feature parameters have also been adopted in this paper. The high
detection accuracy achieved by the proposed system thus provides an
indication that such intelligent schemes could be used as a
supplementary diagnostic tool in endoscopy.
Abstract: Human activities are increasingly based on the use of remote resources and services, and on the interaction between
remotely located parties that may know little about each other. Mobile agents must be prepared to execute on different hosts with
various environmental security conditions. The aim of this paper is to
propose a trust based mechanism to improve the security of mobile
agents and allow their execution in various environments. Thus, an
adaptive trust mechanism is proposed. It is based on the dynamic interaction between the agent and the environment. Information
collected during the interaction enables generation of an environment
key. This key indicates the host's trust degree and permits the
mobile agent to adapt its execution. Trust estimation is based on
concrete parameter values. Thus, in case of distrust, the source of
the problem can be located and an appropriate mobile agent behavior
can be selected.
Abstract: Traffic congestion has become a major problem in
many countries. One of the main causes of traffic congestion is road
merges: vehicles tend to move slower when they reach the merging
point. In this paper, an enhanced algorithm for traffic
simulation based on the fluid-dynamic algorithm and kinematic wave
theory is proposed. The enhanced algorithm is used to study traffic
congestion at a road merge. This paper also describes the
development of a dynamic traffic simulation tool which is used for
scenario planning and for forecasting the traffic congestion level at
a given time based on defined parameter values. The tool incorporates
the enhanced algorithm as well as the two original algorithms.
Outputs from the three algorithms are measured in terms of
traffic queue length, travel time and the total number of vehicles
passing through the merging point. This paper also suggests an
efficient way of reducing traffic congestion at a road merge by
analyzing the traffic queue length and travel time.
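The congestion behaviour at a merge can be illustrated with a minimal flow-conservation sketch (a drastic simplification of the fluid-dynamic and kinematic wave models the paper uses): inflow beyond the merge capacity accumulates as a queue. The arrival rates and capacity below are made-up example values, not data from the study.

```python
def simulate_merge(arrivals_a, arrivals_b, capacity):
    """Track the queue at a two-road merge: per time step, demand is the
    waiting queue plus new arrivals from both roads; throughput is capped
    at the merge capacity, and the excess carries over as queue."""
    queue, queue_history, passed = 0.0, [], 0.0
    for qa, qb in zip(arrivals_a, arrivals_b):
        demand = queue + qa + qb        # waiting vehicles plus new arrivals
        served = min(demand, capacity)  # merge throughput is capped
        queue = demand - served         # excess vehicles wait
        passed += served
        queue_history.append(queue)
    return queue_history, passed

# Example: each road feeds 12 veh/step into a merge that passes 20 veh/step,
# so the queue grows by 4 vehicles every step.
hist, total = simulate_merge([12] * 10, [12] * 10, capacity=20)
```

Even this toy model reproduces the qualitative result the tool measures: queue length and travel time grow linearly once combined demand exceeds the merge capacity, and fall only when demand drops back below it.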