Abstract: Automatic Speech Recognition (ASR) for the Arabic language is a challenging task. This is mainly due to the specificities of the language, which confront researchers with multiple difficulties, such as insufficient linguistic resources and the very limited number of available transcribed Arabic speech corpora. In this paper, we are interested in the development of an HMM-based ASR system for the Standard Arabic (SA) language. Our fundamental research goal is to select the most appropriate acoustic parameters describing each audio frame, acoustic models, and speech recognition unit. To achieve this purpose, we analyze the effect of varying the frame windowing (size and period), the number of acoustic parameters produced by the feature extraction methods traditionally used in ASR, the speech recognition unit, the number of Gaussians per HMM state, and the number of embedded re-estimations of the Baum-Welch algorithm. To evaluate the proposed ASR system, a multi-speaker SA connected-digits corpus is collected, transcribed, and used throughout all experiments. A further evaluation is conducted on a speaker-independent continuous SA speech corpus. The phoneme recognition rate is 94.02%, which is relatively high compared with another ASR system evaluated on the same corpus.
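As a rough illustration of the parameter sweep this abstract describes, the sketch below varies frame size, frame period, and the number of cepstral coefficients when extracting MFCC features. The use of librosa, the synthetic signal, and the parameter grids are assumptions for the sketch; the paper's actual toolchain is not specified.

    import librosa
    import numpy as np

    sr = 16000
    y = np.random.randn(sr)               # 1 s of noise standing in for speech

    for frame_ms in (20, 25, 32):         # frame (window) size, ms
        for period_ms in (8, 10, 12):     # frame period (hop), ms
            for n_mfcc in (12, 13, 26):   # number of acoustic parameters
                feats = librosa.feature.mfcc(
                    y=y, sr=sr, n_mfcc=n_mfcc,
                    n_fft=int(sr * frame_ms / 1000),
                    hop_length=int(sr * period_ms / 1000))
                # each configuration yields one feature matrix for the
                # HMM trainer (Baum-Welch re-estimation) downstream
                print(frame_ms, period_ms, n_mfcc, feats.shape)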
Abstract: This paper addresses the modeling and optimization of process parameters in powder mixed electrical discharge machining (PMEDM). The process output characteristics include the metal removal rate (MRR) and the electrode wear rate (EWR). The grain size of the aluminum powder (S), the concentration of the powder (C), the discharge current (I), and the pulse-on time (T) are chosen as control variables to study the process performance. The experimental results are used to develop regression models, based on second-order polynomial equations, for the different process characteristics. A genetic algorithm (GA) is then employed to determine the optimal process parameters for any desired output values of the machining characteristics.
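A minimal sketch of this pipeline follows: fit a second-order polynomial (response-surface) model for MRR from the four factors S, C, I, T, then search it with a simple genetic algorithm. The data, factor bounds, and GA settings are placeholders, not the paper's values.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)
    lo = np.array([10.0, 0.0, 1.0, 50.0])   # assumed lower bounds of S, C, I, T
    hi = np.array([40.0, 6.0, 9.0, 400.0])  # assumed upper bounds

    # Placeholder data standing in for the real experimental runs.
    X = rng.uniform(lo, hi, size=(30, 4))                            # S, C, I, T
    y_mrr = 2.0 * X[:, 2] - 0.01 * X[:, 3] + rng.normal(0, 0.5, 30)  # fake MRR

    poly = PolynomialFeatures(degree=2)                # second-order terms
    model = LinearRegression().fit(poly.fit_transform(X), y_mrr)

    pop = rng.uniform(lo, hi, size=(50, 4))            # initial GA population
    for _ in range(100):                               # generations
        fitness = model.predict(poly.transform(pop))   # predicted MRR
        parents = pop[np.argsort(fitness)[::-1][:25]]  # keep the better half
        kids = (parents[rng.integers(0, 25, 25)]
                + parents[rng.integers(0, 25, 25)]) / 2          # crossover
        kids += rng.normal(0.0, 0.05, kids.shape) * (hi - lo)    # mutation
        pop = np.clip(np.vstack([parents, kids]), lo, hi)

    best = pop[np.argmax(model.predict(poly.transform(pop)))]
    print("suggested S, C, I, T:", best)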
Abstract: In this study, we propose a tongue diagnosis method that detects the tongue in a face image, divides the tongue area into six regions, and finally computes the tongue-coating ratio of each region. To detect the tongue area in the face image, we use an Active Shape Model (ASM). The detected tongue area is divided into six regions widely used in traditional Korean medicine, and the distribution of tongue coating over the six regions is examined with a Support Vector Machine (SVM). For the SVM, we use a 3-dimensional vector computed by Principal Component Analysis (PCA) from a 12-dimensional vector consisting of RGB, HSI, Lab, and Luv components. As a result, we detected the tongue area stably using the ASM and found that PCA and SVM helped raise the tongue-coating detection ratio.
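The classification stage can be sketched as follows: per-pixel 12-dimensional colour vectors are reduced to 3 dimensions with PCA and classified as coated/uncoated with an SVM. The training arrays and labels here are synthetic placeholders.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    X = rng.random((500, 12))              # 12-dim colour vectors per pixel
    y = (X[:, 0] > 0.5).astype(int)        # fake coated / uncoated labels

    pca = PCA(n_components=3).fit(X)       # 12-dim -> 3-dim, as in the abstract
    svm = SVC(kernel="rbf").fit(pca.transform(X), y)

    # coating ratio of one tongue region = coated pixels / region pixels
    region = rng.random((200, 12))         # pixels of one of the six regions
    coated = svm.predict(pca.transform(region))
    print("coating ratio:", coated.mean())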
Abstract: In this paper, we construct and implement a new steganography algorithm based on a learning system to hide a large amount of information in a color BMP image. We use adaptive image filtering and adaptive non-uniform image segmentation with bit replacement on the appropriate pixels. These pixels are selected randomly rather than sequentially, using a new concept defined by main cases with sub-cases for each byte in a pixel. Following the design steps, we derive 16 main cases with their sub-cases, which cover all aspects of embedding the input information into a color bitmap image. Four layers of security are proposed to make it difficult to break the encryption of the input information and to confuse steganalysis. A learning system, realized as a neural network, is introduced at the fourth security layer; this layer is used to increase the difficulty of statistical attacks. Our results against statistical and visual attacks are discussed before and after using the learning system, and we compare them with a previous steganography algorithm. We show that our algorithm can efficiently embed a large amount of information, up to 75% of the image size (replacing at most 18 bits per pixel), with high output quality.
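A minimal sketch of the capacity bound follows, assuming plain least-significant-bit replacement; the paper's adaptive segmentation, random pixel selection, and four security layers are not reproduced here. Replacing the 6 low bits of each 8-bit channel gives the 18 bits/pixel upper bound quoted above.

    import numpy as np

    def embed(pixels: np.ndarray, payload_bits: np.ndarray, k: int = 6) -> np.ndarray:
        """Replace the k low bits of each channel with payload bits."""
        flat = pixels.reshape(-1).astype(np.uint8)        # copy, channel stream
        n = len(payload_bits) // k                        # channels we touch
        chunks = payload_bits[:n * k].reshape(n, k)       # k bits per channel
        values = (chunks * (1 << np.arange(k))).sum(axis=1).astype(np.uint8)
        flat[:n] = (flat[:n] & ~np.uint8((1 << k) - 1)) | values
        return flat.reshape(pixels.shape)

    img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in BMP
    bits = np.random.randint(0, 2, 5000)                  # trailing bits dropped
    stego = embed(img, bits)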
Abstract: Delay-Tolerant Networks (DTNs) are sparse, wireless
networks where disconnections are common due to host mobility and
low node density. The Message Ferrying (MF) scheme is a mobility-assisted paradigm to improve connectivity in DTN-like networks. A ferry, or message ferry, is a special node in the network that follows a pre-determined route in the deployed area and relays messages between intermittently connected mobile hosts (MHs). Increased contact opportunities between mobile hosts and the ferry improve the performance of the network, both in terms of message delivery ratio and average end-to-end delay. However, due to the inherent mobility of the hosts and the pre-determined periodicity of the message ferry, mobile hosts may often 'miss' contact opportunities with the ferry. In this paper, we propose the combination of stationary
ferry access points (FAPs) with MF routing to increase contact
opportunities between mobile hosts and the MF and consequently
improve the performance of the DTN. We also propose several
placement models for deploying FAPs on MF routes. We evaluate the
performance of the FAP placement models through comprehensive
simulation. Our findings show that FAPs do improve the performance
of MF-assisted DTNs, and that symmetric placement of FAPs outperforms
other placement strategies.
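As a toy illustration of the 'symmetric placement' idea, the sketch below spaces n FAPs evenly along a closed ferry route of length L. The function name and numbers are illustrative, not the paper's simulation setup.

    def symmetric_placement(route_length: float, n_faps: int) -> list[float]:
        """Positions (distance along the closed route) of evenly spaced FAPs."""
        return [i * route_length / n_faps for i in range(n_faps)]

    print(symmetric_placement(1200.0, 4))   # -> [0.0, 300.0, 600.0, 900.0]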
Abstract: The study of the defects generated on manufactured parts shows how difficult it is to keep parts in position during the machining process and to estimate these defects during pre-process planning. This work presents a contribution to the development of 3D models for the optimization of manufacturing tolerances. An experimental study allows the measurement of part-positioning defects for the determination of ε and the choice of an optimal setup of the part. A 3D tolerancing approach based on the small displacements method permits the manufacturing errors to be determined upstream. A software tool developed for this purpose automatically generates the tolerance intervals along the three axes.
Abstract: For gamma radiation detection, assemblies consisting of scintillation crystals and a photomultiplier tube are used; a preamplifier is connected to the detector because the signals from the photomultiplier tube are of small amplitude. After pre-amplification, the signals are sent to the amplifier and then to the multichannel analyser. The multichannel analyser sorts all incoming electrical signals according to their amplitude, assigning the detected photons to channels covering small energy intervals. The energy range of each channel depends on the gain settings of the multichannel analyser and on the high voltage across the photomultiplier tube. The output spectrum data of the two main isotopes studied are fed into the biomass program and processed with a Matlab program to obtain the solid holdup image (of the solid spherical nuclear fuel).
Abstract: In this paper, a parametric experimental study of producing paving blocks using fine and coarse waste glass is presented. Some of the physical and mechanical properties of paving blocks with various levels of fine glass (FG) and coarse glass (CG) replacing the fine aggregate (FA) are investigated. The test results show that replacing FA with FG at a level of 20% by weight has a significant effect on the compressive strength, flexural strength, splitting tensile strength, and abrasion resistance of the paving blocks compared with the control sample, because of the pozzolanic nature of FG. The compressive strength, flexural strength, splitting tensile strength, and abrasion resistance of the paving block samples at the 20% FG replacement level are respectively 69%, 90%, 47%, and 15% higher than those of the control sample. Earlier works report that replacing FA with FG at a level of 20% by weight suppresses the alkali-silica reaction (ASR) in concrete. The test results show that FG at a level of 20% has the potential to be used in the production of paving blocks. The beneficial effect of CG replacement of FA on these properties is small compared with that of FG.
Abstract: The objective of global optimization is to find the
globally best solution of a model. Nonlinear models are ubiquitous
in many applications and their solution often requires a global
search approach; i.e., for a function f from a set A ⊂ R^n to the real numbers, an element x0 ∈ A is sought such that ∀ x ∈ A : f(x0) ≤ f(x). Depending on the field of application, the question of whether a found solution x0 is not only a local minimum but a global one is very important.
This article presents a probabilistic approach to determine the probability of a solution being a global minimum. The approach is independent of the global search method used and only requires a bounded, convex parameter domain A as well as a Lipschitz continuous function f whose Lipschitz constant need not be known.
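For intuition only, a naive Monte Carlo check is sketched below: uniformly sample a box enclosing A and report how often a sampled point beats the candidate x0. This is not the authors' estimator, which derives an actual probability from Lipschitz continuity; the objective and bounds are toy assumptions.

    import numpy as np

    def prob_beaten(f, x0, lo, hi, n=100_000, seed=0):
        """Fraction of uniform samples over the box [lo, hi] that beat x0."""
        rng = np.random.default_rng(seed)
        xs = rng.uniform(lo, hi, size=(n, len(lo)))
        fx0 = f(x0)
        return np.mean([f(x) < fx0 for x in xs])

    f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2   # toy objective
    print(prob_beaten(f, np.array([1.0, -2.0]), [-5.0, -5.0], [5.0, 5.0]))  # 0.0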
Abstract: A software system goes through a number of stages during its life, and a software process model gives a standard format for planning, organizing, and running a project. This article presents a new software development process model, named the "Divide and Conquer Process Model", based on the idea of first dividing things to make them simple and then gathering them to get the whole work done. The article begins with the background of different software process models and the problems in these models. This is followed by the new divide-and-conquer process model, an explanation of its different stages, and, at the end, a demonstration of its edge over other models.
Abstract: Real-time embedded systems should benefit from
component-based software engineering to handle complexity and
deal with dependability. In these systems, applications should not
only be logically correct but also behave within time windows.
However, in current component-based software engineering approaches, few component models handle time properties in a manner that allows efficient analysis and checking at the
architectural level. In this paper, we present a meta-model for
component-based software description that integrates timing
issues. To achieve a complete functional model of software
components, our meta-model focuses on four functional aspects:
interface, static behavior, dynamic behavior, and interaction
protocol. With each aspect we have explicitly associated a time
model. Such a time model can be used to check a component's
design against certain properties and to compute the timing
properties of component assemblies.
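Purely as an illustration of the four aspects paired with a time model, one might render the meta-model as below. All class and field names are invented for the sketch and are not taken from the paper.

    from dataclasses import dataclass

    @dataclass
    class TimeModel:
        wcet_ms: float        # worst-case execution time of the aspect
        deadline_ms: float    # time window the behaviour must respect

    @dataclass
    class Component:
        interface: list[str]                   # offered/required operations
        static_behavior: dict[str, TimeModel]  # per-operation timing
        dynamic_behavior: str                  # e.g. a state-machine reference
        interaction_protocol: list[str]        # allowed call sequences

        def assembly_wcet(self, ops: list[str]) -> float:
            """Compose the worst-case time of an operation sequence."""
            return sum(self.static_behavior[op].wcet_ms for op in ops)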
Abstract: Chicken feathers were used as biosorbent for Pb
removal from aqueous solution. In this paper, the kinetics and
equilibrium studies at several pH, temperature, and metal
concentration values are reported. For the tested conditions, the Pb
sorption capacity of this poultry waste ranged from 0.8 to 8.3 mg/g.
Optimal conditions for Pb removal by chicken feathers have been
identified. Pseudo-first order and pseudo-second order equations
were used to analyze the experimental data. In addition, the sorption
isotherms were fitted to classical Langmuir and Freundlich models.
Finally, thermodynamic parameters for the sorption process have
been determined. In summary, the results showed that chicken
feathers are an alternative and promising sorbent for the treatment of
effluents polluted by Pb ions.
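For reference, the commonly used forms of the models named above are given below (q_t: amount sorbed at time t; q_e: amount sorbed at equilibrium; C_e: equilibrium Pb concentration); the exact parameterization used in the paper may differ.

    \ln(q_e - q_t) = \ln q_e - k_1 t \qquad \text{(pseudo-first order)}
    \frac{t}{q_t} = \frac{1}{k_2 q_e^2} + \frac{t}{q_e} \qquad \text{(pseudo-second order)}
    q_e = \frac{q_m \, b \, C_e}{1 + b \, C_e} \qquad \text{(Langmuir)}
    q_e = K_F \, C_e^{1/n} \qquad \text{(Freundlich)}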
Abstract: In this paper, we propose a new image segmentation approach for colour textured images. The proposed method consists of two stages. In the first stage, textural features based on the gray level co-occurrence matrix (GLCM) are computed for the regions of interest (ROI) considered for each class; the ROIs act as ground truth for the classes. The Ohta model (I1, I2, I3) is the colour model used for segmentation. The statistical mean feature of the I2 component at a certain inter-pixel distance (IPD) was found to be the optimal textural feature for further segmentation. In the second stage, the feature matrix obtained is assumed to be a degraded version of the image labels, and the unknown image labels are modeled as a Markov Random Field (MRF). The labels are estimated under the maximum a posteriori (MAP) criterion using the ICM algorithm. The performance of the proposed approach is compared with that of existing schemes: JSEG and another scheme that uses GLCM and MRF in RGB colour space. The proposed method is found to outperform the existing ones in terms of segmentation accuracy, with an acceptable rate of convergence. The results are validated on synthetic and real textured images.
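The first-stage feature can be sketched as follows, under assumptions: the GLCM of the Ohta I2 = (R - B)/2 component at a given inter-pixel distance, from which a GLCM mean statistic is computed with scikit-image (the paper's own implementation is not specified; the ROI, quantization level, and distance are illustrative).

    import numpy as np
    from skimage.feature import graycomatrix

    rgb = np.random.randint(0, 256, (64, 64, 3))           # stand-in ROI
    i2 = (rgb[..., 0].astype(float) - rgb[..., 2]) / 2     # Ohta I2 = (R - B)/2
    q = ((i2 - i2.min()) / (np.ptp(i2) + 1e-9) * 31).astype(np.uint8)

    ipd = 2                                                 # inter-pixel distance
    glcm = graycomatrix(q, distances=[ipd], angles=[0], levels=32, normed=True)
    p = glcm[:, :, 0, 0]                                    # 32x32 probabilities
    glcm_mean = (np.arange(32)[:, None] * p).sum()          # GLCM mean feature
    print(glcm_mean)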
Abstract: Advances in computing applications in recent years have increased the demand for more flexible scheduling models that support QoS. Moreover, in practical applications, partially violated temporal constraints can be tolerated if the violations follow a certain distribution, so the traditional Liu and Layland model needs to be extended to these circumstances. There are two such extensions: the (m, k)-firm model and the Window-Constrained model. This paper investigates weakly hard real-time constraints and their combination to support QoS. The fact that a practical application can tolerate some violations of its temporal constraints under a certain distribution is exploited to support adaptive QoS on an open real-time system. The experimental results show that these approaches are effective compared with traditional scheduling algorithms.
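A small sketch of the (m, k)-firm condition referenced above: a task satisfies the constraint if, in every window of k consecutive job instances, at least m meet their deadlines. The deadline-hit trace is a made-up example.

    def satisfies_mk_firm(met: list[bool], m: int, k: int) -> bool:
        """True if every window of k consecutive jobs has at least m hits."""
        return all(sum(met[i:i + k]) >= m
                   for i in range(max(0, len(met) - k + 1)))

    # deadline-hit trace of one task: True = deadline met
    trace = [True, True, False, True, True, False, True, True]
    print(satisfies_mk_firm(trace, m=2, k=3))   # -> True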
Abstract: Moulded parts account for more than 70% of the components in products. However, common defects exist, particularly in plastic injection moulding, such as warpage, shrinkage, sink marks, and weld lines. In this paper, Taguchi experimental design methods are applied to reduce the warpage defect of thin Acrylonitrile Butadiene Styrene (ABS) plates, and are demonstrated at two levels, namely Taguchi orthogonal arrays and the Analysis of Variance (ANOVA). Eight trials were run, from which the optimal parameters that minimize the warpage defect in the factorial experiment were obtained. The results obtained from the ANOVA analysis, together with those derived from MINITAB, identify the most significant factors that may cause warpage in the injection moulding process. Moreover, the ANOVA approach is more accurate than approaches such as the S/N ratio, and by taking the interaction of factors into account it is possible to achieve better outcomes.
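For reference, the smaller-the-better S/N ratio that the abstract compares against ANOVA is SN = -10 log10(mean(y^2)) per trial. The warpage readings below are invented placeholders for the eight trials.

    import numpy as np

    trials = np.array([
        [0.52, 0.55], [0.61, 0.59], [0.43, 0.47], [0.50, 0.49],
        [0.66, 0.63], [0.44, 0.46], [0.58, 0.60], [0.41, 0.42],
    ])  # replicated warpage measurements (mm), one row per L8 trial

    sn = -10 * np.log10((trials ** 2).mean(axis=1))   # smaller-the-better S/N
    print("best trial (highest S/N):", int(sn.argmax()) + 1)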
Abstract: This study is based on the identification of ageing effects, and the associated moisture absorption, in aircraft cabin components over the life cycle. In the first step of the study, ceiling panels of the same age and from the same aircraft cabin were examined for weight changes depending on their position in the cabin. In the second step, ceiling panels of different ages were examined with respect to deflection, weight changes, and acoustic sound transmission loss. To verify the assumption of water absorption made in the study, supported by the theoretical background from the literature and scientific papers, an older test panel was exposed to extreme conditions (humidity and temperature) in a climate chamber. The exposure shows that there is a general ingress of water into cabin components and that this ingress leads to changes in several mechanical properties.
Abstract: We introduce an effective approach for the automatic offline authentication of handwritten samples where the forgeries are skillfully done, i.e., the genuine and forged samples look almost alike. The subtle details of temporal information used in online verification are not available offline and are also hard to recover robustly. Thus, spatial dynamic information, such as the pen-tip pressure characteristics, is considered, with emphasis on the extraction of low-density pixels. These points result from the ballistic rhythm of a genuine signature, which a forgery, however skillful, always lacks. Ten effective features, including these low-density points and the density ratio, are proposed to distinguish between a genuine and a forged sample. An adaptive decision criterion is also derived for better verification judgements.
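A hedged sketch of the low-density-pixel idea: in a grayscale signature image, faint strokes (fast, low-pressure pen movement) appear as mid-gray rather than dark pixels. The threshold values and the density-ratio definition here are assumptions, not the paper's feature definitions.

    import numpy as np

    def density_ratio(gray: np.ndarray, ink_max=200, faint_min=120) -> float:
        """Fraction of stroke pixels that are 'low density' (faint)."""
        stroke = gray < ink_max                       # any inked pixel
        faint = (gray >= faint_min) & stroke          # low-pressure pixels
        return faint.sum() / max(stroke.sum(), 1)

    sig = np.random.randint(0, 256, (100, 300))       # stand-in scanned signature
    print(density_ratio(sig))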
Abstract: In this paper, it is shown that the application of probability-statistical methods, especially at the early stages of diagnosing the technical condition of an aviation gas turbine engine (GTE), when the flight information is fuzzy, limited, and uncertain, is unfounded. Hence, the efficiency of applying the new Soft Computing technology at these diagnosing stages, using Fuzzy Logic and Neural Network methods, is considered. Fuzzy multiple linear and non-linear models (fuzzy regression equations), derived from statistical fuzzy data, are trained with high accuracy. To build a more adequate model of the GTE technical condition, the dynamics of the changes in the skewness and kurtosis coefficients are analyzed. Investigations of the changes in the skewness and kurtosis coefficient values show that the distributions of GTE operating parameters have a fuzzy character; hence, the consideration of fuzzy skewness and kurtosis coefficients is expedient. Investigation of the dynamics of changes in the basic characteristics of GTE operating parameters leads to the conclusion that Fuzzy Statistical Analysis is necessary for the preliminary identification of the engine's technical condition. Investigations of the changes in correlation coefficient values also show their fuzzy character; therefore, the application of Fuzzy Correlation Analysis results is proposed for model selection. To check model adequacy, the Fuzzy Multiple Correlation Coefficient of the Fuzzy Multiple Regression is considered. When sufficient information is available, it is proposed to use a recurrent algorithm for identifying the GTE technical condition (using Hard Computing technology) based on measurements of the input and output parameters of the multiple linear and non-linear generalized models in the presence of measurement noise (a new recursive Least Squares Method (LSM)). The developed GTE condition monitoring system provides stage-by-stage estimation of the engine's technical condition. As an application of the given technique, the temperature condition of a new operating aviation engine was estimated.
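A minimal sketch of the statistics the abstract tracks: sample skewness and kurtosis of a monitored GTE parameter, computed with scipy.stats. The exhaust-temperature series is a synthetic placeholder, and the fuzzy extensions of these coefficients are not reproduced here.

    import numpy as np
    from scipy.stats import kurtosis, skew

    egt = np.random.default_rng(2).normal(650.0, 8.0, 500)  # fake EGT samples, deg C
    print("skewness:", skew(egt))          # asymmetry of the distribution
    print("kurtosis:", kurtosis(egt))      # excess kurtosis (0 for normal)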
Abstract: Web applications have become complex and crucial for many firms, especially when combined with areas such as CRM (Customer Relationship Management) and BPR (Business Process Reengineering). The scientific community has focused its attention on Web application design, development, analysis, and testing, by studying and proposing methodologies and tools. Static and dynamic techniques may be used to analyze existing Web applications. Traditional static source code analysis may be very difficult to apply, owing to the presence of dynamically generated code and the multi-language nature of the Web. Dynamic analysis may be useful, but it has an intrinsic limitation: the low number of program executions used to extract information. Our reverse engineering analysis, used in our WAAT (Web Applications Analysis and Testing) project, applies mutational techniques in order to exploit server-side execution engines to accomplish part of the dynamic analysis. This paper studies the effects of mutation-based source code analysis applied to Web software to build application models. Mutation-based generated models may contain more information than necessary, so a pruning mechanism is needed.
Abstract: With the deepening of software reuse, component-related technologies have been widely applied in the development of large-scale complex applications. Component identification (CI) is one of the primary research problems in software reuse: domain business models are analyzed to obtain a set of business components with high reuse value and good reuse performance, so as to support effective reuse. Based on the concept and classification of CI, its technical stack is briefly discussed from four views, i.e., the form of the input business models, identification goals, identification strategies, and the identification process. Then the various CI methods presented in the literature are classified into four types, i.e., domain analysis based methods, cohesion-coupling based clustering methods, CRUD matrix based methods, and other methods, and the advantages and disadvantages of these methods are compared. Additionally, some insufficiencies in the study of CI are discussed, and their causes are explained. Finally, some significantly promising tendencies in research on this problem are outlined.