Abstract: Embedded systems must respect stringent real-time
constraints. Various hardware components included in such systems,
such as cache memories, exhibit variability and therefore affect
execution time. Indeed, a cache memory access from an embedded
microprocessor may result in a cache hit, where the data is
available, or a cache miss, where the data must be fetched from an
external memory with an additional delay. It is therefore highly
desirable to predict future memory accesses during execution in
order to appropriately prefetch data without incurring delays. In
this paper, we evaluate the potential of several artificial neural
networks for the prediction of instruction memory addresses. Neural
networks have the potential to tackle the nonlinear behavior
observed in memory accesses during program execution, and their
numerous demonstrated hardware implementations favor this choice
over traditional forecasting techniques for inclusion in embedded
systems. However, embedded applications execute millions of
instructions and therefore generate millions of addresses to be
predicted. This very challenging problem of neural-network-based
prediction of large time series is approached in this paper by
evaluating various neural network architectures based on the
recurrent neural network paradigm, with pre-processing based on the
Self-Organizing Map (SOM) classification technique.
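As a hypothetical sketch of the SOM pre-processing stage mentioned above (the paper's actual network sizes and training schedule are not given here), a one-dimensional SOM can quantize instruction-address deltas into a small symbol alphabet that a recurrent predictor could then forecast:

```python
import numpy as np

def train_som(deltas, n_units=8, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Train a 1-D Self-Organizing Map on scalar address deltas.

    Illustrative sketch only: the paper's actual SOM configuration
    (dimensionality, size, training schedule) is not specified here.
    """
    rng = np.random.default_rng(seed)
    weights = rng.uniform(deltas.min(), deltas.max(), n_units)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)            # decaying learning rate
        sigma = max(sigma0 * (1 - epoch / epochs), 0.5)
        for x in deltas:
            bmu = np.argmin(np.abs(weights - x))   # best-matching unit
            dist = np.abs(np.arange(n_units) - bmu)
            h = np.exp(-dist**2 / (2 * sigma**2))  # neighborhood function
            weights += lr * h * (x - weights)
    return weights

# Quantize a stream of instruction-address deltas into SOM classes,
# turning a large-valued time series into a small symbol sequence.
deltas = np.array([4, 4, 4, 64, 4, 4, 64, 4], dtype=float)
weights = train_som(deltas)
classes = [int(np.argmin(np.abs(weights - d))) for d in deltas]
```

The design point this illustrates is that the recurrent network then only has to predict one of a few class indices rather than a raw 32- or 64-bit address value.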
Abstract: In the present work, an investigation of the effects of
air frontal velocity, relative humidity, and dry air temperature on
the heat transfer characteristics of a plain finned-tube evaporator
has been conducted. Using an appropriate correlation for the
air-side heat transfer coefficient, the temperature distribution
along the fin surface was calculated in dimensionless form. For
constant relative humidity and bulb temperature, it is found that
the dimensionless temperature decreases with increasing air frontal
velocity; this is apparently attributable to the condensate water
film flowing over the fin surface. When the dry air temperature and
face velocity are kept constant, the temperature distribution
decreases with increasing inlet relative humidity. An increase in
the inlet relative humidity is accompanied by a higher amount of
moisture condensing on the fin surface, which results in a greater
amount of latent heat transfer and hence a higher fin surface
temperature. Regarding the influence of dry air temperature, the
results show an increase in the dimensionless temperature parameter
with decreasing bulb temperature: increasing the bulb temperature
leads to higher sensible and latent heat transfer when other
conditions remain constant.
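For orientation, the classical dimensionless temperature distribution along a straight dry fin with an adiabatic tip is a standard textbook baseline for such analyses (the wet-fin correlation actually used in the paper is not reproduced here):

```latex
\frac{\theta(x)}{\theta_b}
  = \frac{T(x)-T_a}{T_b-T_a}
  = \frac{\cosh\bigl(m\,(L-x)\bigr)}{\cosh(mL)},
\qquad m = \sqrt{\frac{hP}{kA_c}},
```

where $T_a$ is the air temperature, $T_b$ the fin-base temperature, $h$ the air-side heat transfer coefficient, $P$ and $A_c$ the fin perimeter and cross-section, $k$ the fin conductivity, and $L$ the fin length.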
Abstract: Visualizing sound and noise often helps us determine an
appropriate control for source localization. Near-field acoustic
holography (NAH) is a powerful tool for this ill-posed problem.
However, in practice, due to the small finite aperture size,
discrete Fourier transform (FFT)-based NAH cannot predict the
active region of interest (AROI) near the edges of the plane. A few
theoretical approaches have been proposed for solving the finite
aperture problem; however, most of these methods are not well
suited to practical implementation, especially near the edges of
the source. In this paper, a zip-stuffing extrapolation approach
with a 2D Kaiser window is suggested. It operates in the complex
wavenumber space to localize the predicted sources. We numerically
construct a practical environment with touch impact databases to
test the localization of the sound source. It is observed that
zip-stuffing aperture extrapolation and the 2D window with
evanescent components provide greater accuracy, especially for
small apertures and their derivatives.
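As a minimal sketch of the 2D windowing step (the shape parameter `beta=8.0` and the hologram size are illustrative assumptions, not the paper's settings), a separable 2D Kaiser window can be built as the outer product of two 1D Kaiser windows:

```python
import numpy as np

def kaiser2d(rows, cols, beta=8.0):
    """Separable 2-D Kaiser window as an outer product of 1-D windows.

    beta=8.0 is an illustrative choice; the paper's actual window
    parameters are not specified here.
    """
    wr = np.kaiser(rows, beta)
    wc = np.kaiser(cols, beta)
    return np.outer(wr, wc)

# Taper a measured hologram patch before wavenumber-domain
# extrapolation, suppressing edge discontinuities that would
# otherwise leak across the FFT spectrum.
hologram = np.ones((32, 32))            # placeholder pressure data
tapered = hologram * kaiser2d(32, 32)
```

The taper is near 1 at the aperture center and decays toward the edges, which is what makes the subsequent FFT-based extrapolation better behaved near the aperture boundary.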
Abstract: This paper presents an approach for unequal error
protection of facial features in the coding of personal ID images.
We consider unequal error protection (UEP) strategies for the
efficient progressive transmission of embedded image codes over
noisy channels. This new method is based on the progressive
embedded zerotree wavelet (EZW) image compression algorithm and a
UEP technique with a defined region of interest (ROI). In this
case, the ROI corresponds to the facial features within the
personal ID image. The ROI technique is important in applications
where different parts of an image have different importance. In ROI
coding, a chosen ROI is encoded with higher quality than the
background (BG). Unequal error protection of the image is provided
by different coding techniques and by encoding the LL band
separately. In our proposed method, the image is divided into two
parts (ROI and BG) that consist of more important bytes (MIB) and
less important bytes (LIB), respectively. The proposed unequal
error protection of image transmission has been shown to be well
suited to low-bit-rate applications, producing better-quality
output for the ROI of the compressed image. The experimental
results verify the effectiveness of the design and compare the UEP
of image transmission, with the ROI defined by facial features,
against equal error protection (EEP) over an additive white
Gaussian noise (AWGN) channel.
Abstract: This paper aims to provide a conceptual framework for examining the competitive disadvantage of banks that suffer from poor performance. Banks generate revenues mainly from the interest rate spread on taking deposits and making loans, while collecting fees in the process. To maximize firm value, banks seek loan growth and expense control while managing the risk associated with loans, with respect to non-performing borrowers or a narrowing interest spread between assets and liabilities. Competitive disadvantage refers to the failure to access imitable resources and to build managing capabilities to gain sustainable returns under appropriate risk management. A four-quadrant framework of organizational typology is subsequently proposed to examine the features of competitive disadvantage in the banking sector. A resource configuration model, extracted from CAMEL indicators, is then used to examine the underlying features of bank failures.
Abstract: This paper aims at developing a multilevel fuzzy
decision support model for urban rail transit planning schemes in
China, against the background of the unprecedented construction of
urban rail transit that China is presently experiencing. In this
study, an appropriate model using the multilevel fuzzy
comprehensive evaluation method is developed. In the decision
process, the following are considered as the influential
objectives: traveler attraction, environmental protection, project
feasibility, and operation. In addition, the consistent matrix
analysis method is used to determine the weights between objectives
and the weights between the objectives' sub-indicators, which
reduces the work caused by repeated establishment of the decision
matrix while ensuring its consistency. The application results show
that the multilevel fuzzy decision model can effectively handle the
multivariable, multilevel decision process, which is particularly
useful in resolving the multilevel decision-making problem of urban
rail transit planning schemes.
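A single level of fuzzy comprehensive evaluation can be sketched as the composition B = W · R of an objective-weight vector with a membership matrix; all numbers below are hypothetical, not the paper's data:

```python
import numpy as np

# Objective weights (e.g. obtained from consistent-matrix analysis):
# traveler attraction, environmental protection, feasibility, operation.
W = np.array([0.35, 0.25, 0.20, 0.20])

# Each row gives an objective's membership degrees over the rating
# grades (good / fair / poor). Hypothetical values.
R = np.array([[0.5, 0.3, 0.2],
              [0.4, 0.4, 0.2],
              [0.3, 0.5, 0.2],
              [0.6, 0.2, 0.2]])

B = W @ R                          # composite membership vector
best_grade = int(np.argmax(B))     # maximum-membership principle
```

In a multilevel model, the vector B computed at one level becomes a row of the membership matrix R at the level above it.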
Abstract: The mitigation of crop loss due to damaging freezes requires accurate air temperature prediction models. An improved model for temperature prediction in Georgia was developed by including information on seasonality and by modifying the parameters of an existing artificial neural network model. Alternative models were compared by instantiating and training multiple networks for each model. The inclusion of up to 24 hours of prior weather information and of inputs reflecting the day of year were among the improvements that reduced the average four-hour prediction error by 0.18°C relative to the prior model. The results strongly suggest that model developers should instantiate and train multiple networks with different initial weights to establish appropriate model parameters.
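The multiple-instantiation practice recommended above can be sketched as follows; the data, network size, and number of restarts are synthetic placeholders, not the Georgia model:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Train several networks that differ only in their initial weights
# (random_state) and keep the best-scoring one. Data are synthetic.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))        # e.g. prior-weather inputs
y = np.sin(X[:, 0]) + 0.1 * X[:, 1]          # synthetic temperature target

scores = []
for seed in range(5):                        # 5 random initializations
    net = MLPRegressor(hidden_layer_sizes=(10,), random_state=seed,
                       max_iter=2000).fit(X, y)
    scores.append(net.score(X, y))           # R^2 on the training data

best_seed = int(np.argmax(scores))
```

The spread of `scores` across seeds is exactly the initialization variance that motivates training multiple instances rather than trusting a single run.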
Abstract: This paper focuses on the problem of the existence of a common linear copositive Lyapunov function (CLCLF) for discrete-time switched positive linear systems (SPLSs) with bounded time-varying delays. In particular, using the system matrices, a special class of matrices is constructed in an appropriate manner. Our results reveal that the existence of a common copositive Lyapunov function can be related to the Schur stability of such matrices. A simple example is provided to illustrate the implication of our results.
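Schur stability, on which the result above hinges, is straightforward to test numerically: a matrix is Schur stable iff its spectral radius is strictly less than 1. The matrix below is a hypothetical example, not one constructed from the paper's system matrices:

```python
import numpy as np

def is_schur_stable(M):
    # Spectral radius (largest eigenvalue magnitude) strictly below 1
    return bool(np.max(np.abs(np.linalg.eigvals(M))) < 1.0)

M = np.array([[0.5, 0.2],
              [0.1, 0.6]])
print(is_schur_stable(M))   # True: eigenvalues 0.7 and 0.4 lie in the unit disc
```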
Abstract: In the present study, a procedure was developed to
determine the optimum reaction rate constants in generalized
Arrhenius form, optimized through the Nelder-Mead method. For this
purpose, a comprehensive mathematical model of a fixed bed reactor
for the dehydrogenation of heavy paraffins over a Pt–Sn/Al2O3
catalyst was developed. Utilizing appropriate kinetic rate
expressions for the main dehydrogenation reaction as well as for
the side reactions and catalyst deactivation, a detailed model for
the radial flow reactor was obtained. The reactor model is composed
of a set of partial differential equations (PDEs), ordinary
differential equations (ODEs), and algebraic equations, all of
which were solved numerically to determine the variations in the
components' concentrations, in terms of mole percent, as a function
of time and reactor radius. It was demonstrated that the most
significant variations occur at the entrance of the bed and that
the initial olefin production is rather high. The aforementioned
method combines a direct-search optimization algorithm with the
numerical solution of the governing differential equations. The
usefulness and validity of the method were demonstrated by
comparing the predicted values of the kinetic constants obtained
with the proposed method against a series of experimental values
reported in the literature for different systems.
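The direct-search fitting step can be sketched as below; the rate-constant "data" are synthetic (generated from assumed true parameters A = 1e6 1/s, Ea = 80 kJ/mol), not the paper's experimental values, and the full reactor model is replaced by a bare Arrhenius expression:

```python
import numpy as np
from scipy.optimize import minimize

R = 8.314  # J/(mol K)

# Synthetic rate-constant "measurements" at four temperatures.
T = np.array([650.0, 700.0, 750.0, 800.0])            # K
k_obs = 1e6 * np.exp(-80_000.0 / (R * T))

def objective(p):
    # Fit in (log A, Ea in kJ/mol) so both parameters have similar scale,
    # which helps a derivative-free search like Nelder-Mead.
    log_A, Ea_kJ = p
    k_pred = np.exp(log_A - 1000.0 * Ea_kJ / (R * T))
    return np.sum((np.log(k_pred) - np.log(k_obs)) ** 2)

# Nelder-Mead is derivative-free, matching the direct-search approach.
res = minimize(objective, x0=[np.log(1e5), 60.0], method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-14, "maxiter": 10_000})
A_fit, Ea_fit_kJ = np.exp(res.x[0]), res.x[1]
```

In the actual procedure, each objective evaluation would require a full numerical solution of the PDE/ODE reactor model rather than the closed-form expression used here.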
Abstract: Activity-Based Costing (ABC) represents an
alternative paradigm to the traditional cost accounting system,
and it often provides more accurate cost information for
decisions such as product pricing, product mix, and make-or-buy
decisions. ABC models the causal relationships between products
and the resources used in their production, and traces the cost
of products to activities through the use of appropriate cost
drivers. In this paper, the implementation of ABC in a
manufacturing system is analyzed, and a comparison with the
traditional cost-based system in terms of the effects on product
costs is carried out to highlight the differences between the
two costing methodologies. This methodology provides valuable
insight into the factors that cause cost, helping to better
manage the activities of the company.
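The cost-tracing mechanism described above can be sketched in a few lines; the activities, driver volumes, and figures are hypothetical, not taken from the paper's case study:

```python
# Activity cost pools and their total cost-driver volumes
# (machine-hours for machining, number of setups for setups).
activity_costs = {"machining": 50_000.0, "setups": 20_000.0}
driver_totals = {"machining": 10_000, "setups": 400}

# Cost-driver rate = activity cost pool / total driver volume
rates = {a: activity_costs[a] / driver_totals[a] for a in activity_costs}

# Trace activity costs to a product by its driver consumption.
product_drivers = {"machining": 1_200, "setups": 30}
product_cost = sum(rates[a] * product_drivers[a] for a in rates)
print(product_cost)   # 6000.0 + 1500.0 = 7500.0
```

A traditional system would instead spread both pools over a single volume base (e.g. machine-hours), which is exactly where the product-cost differences the paper analyzes come from.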
Abstract: The research objective was to gather information about the storytelling and fables associated with fireflies in the Amphawa community, in order to design and create a storybook appropriate to the interests of children in early childhood. This book should help build the children's understanding of the natural environment, their imagination, and their creativity, and thereby promote the development, conservation, and dissemination of the cultural values and uniqueness of the Amphawa community. The population used in this study was 30 early-childhood students aged 6-8 years, in grades 1-3 at the Demonstration School of Suan Sunandha Rajabhat University, selected by purposive sampling. The research was conducted through the collection and analysis of data from both documents and field narratives of the tales and fables associated with the fireflies of the Amphawa community. The results were then synthesized into a conceptual design in the form of 8 visual images, which were later applied to one illustrated children's book and presented to experts for evaluation and testing of this media.
Abstract: The prevailing judgment for earthquake-damaged reinforced concrete (RC) structures is to replace them with new ones. This paper therefore examines whether there is a chance to repair earthquake-damaged RC beams and thereby obtain an economic contribution to modern society. To this end, totally damaged RC beams (damaged in shear under cyclic load) were repaired and strengthened with externally bonded carbon fibre reinforced polymer (CFRP) strips in this study. Four specimens, apart from the reference beam, were separated into two distinct groups. The two experimental beams in the first group were first tested up to failure and then appropriately repaired and strengthened with CFRP strips. The two undamaged specimens in the second group were not repaired but were strengthened with the identical strengthening scheme as the first group, for comparison. This study examines whether earthquake-damaged RC beams that have been repaired and strengthened exhibit strength and behavior similar to equally strengthened, undamaged RC beams. Accordingly, a strength correspondence with the strengthened specimens was obtained for the repaired and strengthened specimens. The test results confirmed that the repair and strengthening evaluated in the experimental program were effective for the specimens with the cracking patterns considered.
Abstract: SQL injection on web applications is a very popular
kind of attack. Mechanisms such as intrusion detection systems
exist to detect this attack, but these strategies often rely on
techniques implemented at the high layers of the application and do
not consider the low level of system calls. The problem with
considering only the high-level perspective is that an attacker can
circumvent the detection tools using techniques such as URL
encoding. One technique currently used for detecting low-level
attacks on privileged processes is the tracing of system calls.
System calls act as a single gate to the Operating System (OS)
kernel; they allow catching the critical data at an appropriate
level of detail. Our basic assumption is that any type of
application, be it a system service, utility program, or Web
application, "speaks" the language of system calls when having a
conversation with the OS kernel. At this level we can see the
actual attack while it is happening. We conduct an experiment to
demonstrate the suitability of system call analysis for detecting
SQL injection, and we are able to detect the attack. We therefore
conclude that system calls are not only powerful in detecting
low-level attacks but also enable us to detect high-level attacks
such as SQL injection.
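The idea can be illustrated with a hypothetical sketch that scans an strace-like system-call trace for injection indicators in read()/write() payloads; the trace format and signatures below are illustrative stand-ins, not the paper's actual detector:

```python
import re

# Naive SQL-injection signatures (illustrative only).
SQL_INJECTION_PATTERNS = [
    re.compile(r"(?i)\bunion\b.*\bselect\b"),
    re.compile(r"(?i)\bor\b\s+1\s*=\s*1"),
]

def suspicious_calls(trace_lines):
    """Return trace lines whose read/write payload matches a signature."""
    hits = []
    for line in trace_lines:
        m = re.match(r'(read|write)\(\d+,\s*"(.*)"', line)
        if m and any(p.search(m.group(2)) for p in SQL_INJECTION_PATTERNS):
            hits.append(line)
    return hits

trace = [
    'read(4, "GET /item?id=7 HTTP/1.1", 8192) = 23',
    'read(4, "GET /item?id=7 UNION SELECT passwd FROM users HTTP/1.1", 8192) = 55',
]
print(len(suspicious_calls(trace)))   # 1
```

Because the payload is inspected as it crosses the kernel boundary, URL-decoding tricks applied at the HTTP layer have already been undone by the time the application hands the query to the OS.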
Abstract: The intermittent aeration process can easily be applied
to an existing activated sludge system, is highly robust against
loading changes, and can be operated in a relatively simple way.
Since the moving-bed biofilm reactor method processes pollutants by
attaching and securing the microorganisms on media, its process
efficiency can be higher than that of the suspended-growth
biological treatment process, and it can reduce the return of
sludge. In this study, the existing intermittent aeration process
with alternating flow, as applied in the oxidation ditch, is
applied to a continuous-flow stirred tank reactor to combine the
advantages of both processes; by adding moving media, we aim to
develop a process that significantly reduces the return of sludge
from the clarifier and secures a reliable quality of treated water.
The corresponding process is an appropriate infrastructure for the
u-environment of the future u-City and is expected to accelerate
the implementation of the u-Eco city in conjunction with city-based
services. The laboratory-scale system was operated at an HRT of 8
hours, excluding the final clarifier, and showed removal
efficiencies of 97.7%, 73.1%, and 9.4% for organic matter, TN, and
TP, respectively, with an operating cycle of 4 hours and a system
SRT of 10 days. After adding the media, the removal efficiency of
phosphorus remained at a level similar to that before the addition,
but the removal efficiency of nitrogen was improved by 7-10%. In
addition, the solids, which were maintained at an MLSS of 1200-1400
at 25% media packing, all attached onto the media, so that no
sludge entered the clarifier. Therefore, the return of sludge is no
longer needed.
Abstract: In order to reduce cost, increase quality, and supply
production systems in a timely manner, firms have taken
considerable advantage of supply chain management, and these
advantages are also competitive. The selection of an appropriate
supplier plays an important role in the improvement and efficiency
of such systems.
The supplier selection models already used by researchers consider
the selection of one or more suppliers from among potential
suppliers; in this paper, by contrast, we consider selecting as a
partner one supplier that has supplied the buyer for at least one
period.
This paper presents a conceptual model for partner selection and
applies the Degree of Adoptive (DOA) model for the final selection.
The attribute weights in this model are obtained through the AHP
model. After building the descriptive model, the determination of
the attributes and the measurement of the adaptive parameters are
examined in an Iranian auto company (Zagross Khodro Co.), and the
results are presented.
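The AHP weighting step mentioned above can be sketched as taking the normalized principal eigenvector of a pairwise-comparison matrix; the comparison values below are hypothetical, not the case-study data:

```python
import numpy as np

# Pairwise comparisons of three attributes on Saaty's 1-9 scale
# (hypothetical values): A[i, j] = importance of attribute i over j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, principal].real)
weights = w / w.sum()                  # normalized attribute weights

# Consistency check: CI / RI, with the random index RI for n = 3.
n, RI = 3, 0.58
CI = (eigvals.real[principal] - n) / (n - 1)
CR = CI / RI                           # CR < 0.1 is conventionally acceptable
```

These weights would then feed the DOA scoring of the candidate partner against each attribute.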
Abstract: Data mining (DM) is the process of finding and extracting frequent patterns that can describe the data or predict unknown or future values. These goals are achieved using various learning algorithms, each of which may produce mining results completely different from the others; some algorithms may find millions of patterns. It is thus a difficult job for data analysts to select appropriate models and interpret the discovered knowledge. In this paper, we describe the framework of an intelligent and complete data mining system called SUT-Miner. Our system comprises a full complement of major DM algorithms together with pre-DM and post-DM functionalities. It is the post-DM packages that ease DM deployment for business intelligence applications.
Abstract: Understanding protein functions is a major goal in the
post-genomic era. Proteins usually work in the context of other
proteins and rarely function alone; it is therefore highly relevant
to study the interaction partners of a protein in order to
understand its function. Machine learning techniques have been
widely applied to predict protein-protein interactions, and kernel
functions play an important role in a successful machine learning
technique: choosing the appropriate kernel function can lead to
better accuracy in a binary classifier such as the support vector
machine. In this paper, we describe a Bayesian kernel for the
support vector machine to predict protein-protein interactions. The
Bayesian kernel can improve classifier performance by incorporating
the probability characteristics of the available experimental
protein-protein interaction data, which were compiled from
different sources. In addition, the probabilistic output from the
Bayesian kernel can assist biologists in conducting further
research on the highly predicted interactions. The results show
that the accuracy of the classifier is improved using the Bayesian
kernel compared to the standard SVM kernels, implying that
protein-protein interactions can be predicted with better accuracy.
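As a minimal sketch of how a custom kernel plugs into an SVM (scikit-learn's SVC accepts a callable), the snippet below uses an RBF-style callable and synthetic "interaction" data as illustrative stand-ins; the paper's Bayesian kernel itself is not reproduced here:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))               # feature vectors for protein pairs
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # synthetic interaction labels

def custom_kernel(A, B):
    # Gram matrix between rows of A and rows of B (RBF-style placeholder
    # for a kernel that would incorporate probabilistic evidence).
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / A.shape[1])

clf = SVC(kernel=custom_kernel).fit(X, y)
train_acc = clf.score(X, y)
```

Swapping `custom_kernel` for a kernel built from interaction-evidence probabilities is the design point the abstract describes; the classifier interface stays unchanged.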
Abstract: This paper reviews various approaches that have been
used for the modeling and simulation of large-scale engineering
systems and assesses their appropriateness for the development of a
RICS modeling and simulation tool. Bond graphs, linear graphs,
block diagrams, differential and difference equations, modeling
languages, cellular automata, and agents are reviewed. The tool
should be based on a linear graph representation and should support
symbolic programming, functional programming, the development of
non-causal models, and the incorporation of decentralized
approaches.
Abstract: The scope of this research was to study the relation between the facial expressions of three lecturers in a real academic lecture theatre and the reactions of the students to those expressions. The first experiment investigated the effectiveness of a virtual lecturer's expressions on the students' learning outcome in a virtual pedagogical environment. The second experiment studied the effectiveness of a single facial expression, the smile, on the students' performance. Both experiments involved virtual lectures, with virtual lecturers teaching real students. The results suggest that the students performed better by 86% in the lectures where the lecturer performed facial expressions, compared to the lectures that did not use facial expressions. However, when simple or basic information was presented, the facial expressions of the virtual lecturer had no substantial effect on the students' learning outcome. Finally, the appropriate use of smiles increased the interest of the students and consequently their performance.
Abstract: This paper describes a new supervised fusion (hybrid)
electrocardiogram (ECG) classification solution, consisting of a
new QRS complex geometrical feature extraction method as well as a
new version of the learning vector quantization (LVQ)
classification algorithm aimed at overcoming the
stability-plasticity dilemma. Toward this objective, after
detection and delineation of the major events of the ECG signal via
an appropriate algorithm, each QRS region and its corresponding
discrete wavelet transform (DWT) are treated as virtual images, and
each of them is divided into eight polar sectors. Then, the curve
length of each excerpted segment is calculated and used as an
element of the feature space. To increase the robustness of the
proposed classification algorithm against noise, artifacts, and
arrhythmic outliers, a fusion structure consisting of five
different classifiers, namely a Support Vector Machine (SVM), a
Modified Learning Vector Quantization (MLVQ) classifier, and three
Multi-Layer Perceptron–Back Propagation (MLP–BP) neural networks
with different topologies, was designed and implemented. The
proposed algorithm was applied to all 48 MIT–BIH Arrhythmia
Database records (within-record analysis), and the discrimination
power of the classifier in isolating the different beat types of
each record was assessed, yielding an average accuracy of
Acc = 98.51%. The proposed method was also applied to six
arrhythmia types (Normal, LBBB, RBBB, PVC, APB, PB) belonging to 20
different records of the aforementioned database (between-record
analysis), achieving an average accuracy of Acc = 95.6%. To
evaluate the performance of the new hybrid learning machine, the
obtained results were compared with similar peer-reviewed studies
in this area.
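The curve-length feature named above can be sketched as treating a signal excerpt as a planar curve and summing the sample-to-sample Euclidean distances; the QRS-like segment below is synthetic, not MIT-BIH data:

```python
import numpy as np

def curve_length(segment, dt=1.0):
    """Length of the polyline (t_i, segment_i) with sample spacing dt."""
    dy = np.diff(segment)
    return float(np.sum(np.sqrt(dt ** 2 + dy ** 2)))

t = np.linspace(0.0, 1.0, 200)
qrs_like = np.exp(-((t - 0.5) ** 2) / 0.002)   # narrow bump as a QRS stand-in
feature = curve_length(qrs_like, dt=t[1] - t[0])
```

A flat segment of unit duration would give a length of exactly 1.0, so the excess over 1.0 measures how much the excerpt deviates from baseline, which is what makes this a useful morphological feature per polar sector.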