Abstract: This work explores the factors that influence the reading comprehension process with different types of texts. In a recent study with 2nd, 3rd and 4th grade children, it was observed that reading comprehension of narrative texts was better than comprehension of expository texts. Nevertheless, it seems that not only the type of text but also other textual factors account for comprehension, depending on the cognitive processing demands posed by the text. In order to explore this assumption, three narrative and three expository texts were elaborated with different degrees of complexity. A group of 40 fourth grade Spanish-speaking children took part in the study. Children were asked to read the texts and answer orally three literal and three inferential questions for each text. The quantitative and qualitative analysis of children's responses showed that children had difficulties with both narrative and expository texts. The main difficulty was answering questions that involved establishing complex relationships among information units that were present in the text or that had to be activated from children's previous knowledge to make an inference. Considering the data analysis, it can be concluded that there is some interaction between the type of text and the cognitive processing load of a specific text.
Abstract: A total of twenty tonsil biopsies were collected from children undergoing tonsillectomy at the ENT department of a teaching hospital and the Kurdistan private hospital in Sulaimani city. All biopsies were homogenized and cultured; the obtained bacterial isolates were purified and identified by biochemical tests and the VITEK 2 compact system. Among the twenty studied samples, only one Pseudomonas putida isolate (identification probability 99%) was obtained. Antimicrobial susceptibility testing was carried out by the disk diffusion method; the Pseudomonas putida isolate showed resistance to all antibiotics used except vancomycin. The isolate was further subjected to PCR and DNA sequence analysis of the blaVIM gene, using different sets of primers for different regions of the VIM gene. The results were PCR positive for the blaVIM gene. To determine the sequence of the blaVIM gene, DNA sequencing was performed. Sequence alignment of the blaVIM gene with blaVIM genes previously recorded in the NCBI database showed that the P. putida isolate harbours a variant blaVIM gene.
Abstract: In this paper, a method to detect multiple ellipses is presented. The technique is efficient and robust against incomplete ellipses due to partial occlusion, noise, missing edges and outliers. It is an iterative technique that finds and removes the best ellipse until no reasonable ellipse is found. At each run, the best ellipse is extracted from randomly selected edge patches, and its fitness is calculated and compared to a fitness threshold. The RANSAC algorithm is applied as the sampling process, together with the Direct Least Square (DLS) fitting of ellipses as the fitting algorithm. In our experiments, the method performed very well and was robust against noise and spurious edges on both synthetic and real-world image data.
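A minimal sketch of the find-and-remove loop under stated assumptions: edge points are taken as an already-extracted (N, 2) array, OpenCV's cv2.fitEllipse stands in for the DLS fitter, and the fitness measure (inlier support relative to the ellipse perimeter) and all thresholds are illustrative rather than the paper's exact choices.

```python
import numpy as np
import cv2

def fit_ellipse_ransac(pts, n_iter=500, sample_size=8,
                       inlier_tol=2.0, fitness_thresh=0.6):
    """One RANSAC run: return (ellipse, inlier_mask) for the best candidate."""
    best, best_fit, best_mask = None, 0.0, None
    pts = pts.astype(np.float32)
    for _ in range(n_iter):
        idx = np.random.choice(len(pts), sample_size, replace=False)
        ellipse = cv2.fitEllipse(pts[idx])          # direct LS fit (needs >= 5 points)
        poly = cv2.ellipse2Poly((int(ellipse[0][0]), int(ellipse[0][1])),
                                (int(ellipse[1][0] / 2), int(ellipse[1][1] / 2)),
                                int(ellipse[2]), 0, 360, 1)
        d = np.array([abs(cv2.pointPolygonTest(poly, (float(x), float(y)), True))
                      for x, y in pts])
        mask = d < inlier_tol
        fitness = mask.sum() / max(len(poly), 1)    # support relative to perimeter
        if fitness > best_fit:
            best, best_fit, best_mask = ellipse, fitness, mask
    return (best, best_mask) if best_fit >= fitness_thresh else (None, None)

def detect_ellipses(edge_points):
    """Iteratively extract and remove the best ellipse until none remains."""
    pts, found = edge_points.copy(), []
    while len(pts) >= 8:
        ellipse, mask = fit_ellipse_ransac(pts)
        if ellipse is None:
            break
        found.append(ellipse)
        pts = pts[~mask]                            # remove supporting edges
    return found
```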
Abstract: Recent international financial scandals around the world have led to a number of investigations into the effectiveness of corporate governance practices and audit quality. Although evidence on corporate governance practices and audit quality exists for developed economies, very few studies have been conducted in Egypt, where corporate governance is just evolving. Therefore, this study provides evidence on the effectiveness of corporate governance practices and audit quality from a developing country. The data for analysis were gathered from the top 50 most active companies on the Egyptian Stock Exchange, covering the three-year period 2007-2009. Logistic regression was used to investigate the questions raised in the study. Findings from the study show that board independence, CEO duality and audit committees have a significant relationship with audit quality. The results also indicate that institutional and managerial ownership have no significant relationship with audit quality. Evidence also exists that company size, complexity and business leverage are important factors in audit quality for companies quoted on the Egyptian Stock Exchange.
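A minimal sketch of the kind of logistic regression described, using statsmodels; the file name, column names and the Big-4 proxy for audit quality are hypothetical placeholders, not the study's actual data definitions.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical panel: one row per firm-year (all names are placeholders).
df = pd.read_csv("egx_top50_2007_2009.csv")
X = sm.add_constant(df[["board_independence", "ceo_duality", "audit_committee",
                        "institutional_ownership", "managerial_ownership",
                        "firm_size", "complexity", "leverage"]])
y = df["big4_auditor"]          # 1 if audited by a Big-4 firm, a common audit-quality proxy
model = sm.Logit(y, X).fit()
print(model.summary())          # coefficients and p-values per governance variable
```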
Abstract: Scale defects are common surface defects in hot steel rolling. The modelling of such defects is problematic and their causes are not straightforward. In this study, we investigated genetic algorithms in the search for a mathematical solution to scale formation. For this research, a high-dimensional data set from a hot steel rolling process was gathered. The synchronisation of the variables, as well as the allocation of the measurements made on the steel strip, had to be solved before the modelling phase.
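The abstract does not specify the genetic algorithm's encoding or the model form; the sketch below shows a generic GA fitting the coefficients of an illustrative parametric scale-formation model to process data.

```python
import numpy as np

def fitness(params, X, y):
    """Negative RMSE of an illustrative model y ~ a*x1 + b*x2 + c*x1*x2."""
    a, b, c = params
    pred = a * X[:, 0] + b * X[:, 1] + c * X[:, 0] * X[:, 1]
    return -np.sqrt(np.mean((pred - y) ** 2))

def genetic_search(X, y, pop_size=100, n_gen=200, mut_sigma=0.1):
    pop = np.random.uniform(-1, 1, (pop_size, 3))
    for _ in range(n_gen):
        scores = np.array([fitness(p, X, y) for p in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # truncation selection
        partners = parents[np.random.permutation(len(parents))]
        kids = parents.copy()
        for i, cut in enumerate(np.random.randint(1, 3, len(kids))):
            kids[i, cut:] = partners[i, cut:]                     # one-point crossover
        kids += np.random.normal(0, mut_sigma, kids.shape)        # Gaussian mutation
        pop = np.vstack([parents, kids])
    return pop[np.argmax([fitness(p, X, y) for p in pop])]
```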
Abstract: This paper introduces a new instantaneous frequency computation approach, counting instantaneous frequency, for a general class of signals called simple waves. The class of simple waves contains a wide range of continuous signals for which the concept of instantaneous frequency has a perfect physical sense. The concept of counting instantaneous frequency also applies to all discrete data. For all simple wave signals and discrete data, the counting instantaneous frequency can be computed directly, without a signal decomposition process. The intrinsic mode functions obtained through empirical mode decomposition belong to the simple wave class, so the counting instantaneous frequency can be used together with empirical mode decomposition.
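The paper's exact counting definition is not given in the abstract; as a rough illustration of a counting-style instantaneous frequency for discrete data, the sketch below estimates frequency from the zero-crossing rate in a sliding window.

```python
import numpy as np

def counting_if(x, fs, win=64):
    """Crude counting-style instantaneous-frequency estimate:
    half the zero-crossing rate inside a sliding window, in cycles/second."""
    zc = np.abs(np.diff(np.signbit(x).astype(int)))   # 1 at every zero crossing
    half = win // 2
    padded = np.pad(zc, half, mode="edge")
    freq = np.empty(len(zc))
    for i in range(len(zc)):
        freq[i] = padded[i:i + win].sum() / 2.0 / (win / fs)
    return freq                                       # one value per sample interval
```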
Abstract: It has been shown that in most accidents the driver is responsible, due to being distracted or misjudging the situation. To address this problem, research has been dedicated to developing driver assistance systems that are able to monitor the traffic situation around the vehicle. This paper presents methods for recognizing several circumstances on a road. The methods use both in-vehicle warning systems and the roadside infrastructure. Preliminary evaluation results for fog and ice-on-road detection are presented. The ice detection results are based on data recorded on a test track dedicated to tyre friction testing. The achieved results suggest that ice detection could reach a detection rate of 70% with the right setup, which is a good foundation for implementation. However, the full benefit of the presented cooperative system is achieved by fusing the outputs of multiple data sources, which is the key point of discussion in this publication.
Abstract: Ratio and regression type estimators have been used by previous authors to estimate a population mean for the principal variable from samples in which both auxiliary x and principal y variable data are available. However, missing data are a common problem in statistical analyses with real data. Ratio and regression type estimators have also been used for imputing values of missing y data. In this paper, six new ratio and regression type estimators are proposed for imputing values for any missing y data and estimating a population mean for y from samples with missing x and/or y data. A simulation study has been conducted to compare the six ratio and regression type estimators with a previous estimator of Rueda. Two population sizes N = 1,000 and 5,000 have been considered, with sample sizes of 10% and 30% and with correlation coefficients between population variables X and Y of 0.5 and 0.8. In the simulations, 10 and 40 percent of sample y values and 10 and 40 percent of sample x values were randomly designated as missing. The new ratio and regression type estimators give similar mean absolute percentage errors that are smaller than those of the Rueda estimator for all cases. The new estimators give a large reduction in errors for the case of 40% missing y values and a sampling fraction of 30%.
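For reference, the classical ratio and regression estimators of the population mean that such estimators build on (the six new estimators themselves are not specified in the abstract) are, with sample means $\bar{y}$, $\bar{x}$ and known population mean $\bar{X}$:

```latex
\hat{\bar{Y}}_{R} = \bar{y}\,\frac{\bar{X}}{\bar{x}},
\qquad
\hat{\bar{Y}}_{lr} = \bar{y} + b\,(\bar{X} - \bar{x}),
\quad
b = \frac{\sum_{i}(x_i-\bar{x})(y_i-\bar{y})}{\sum_{i}(x_i-\bar{x})^{2}}
```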
Abstract: Constant upgrading of Enterprise Resource Planning (ERP) systems is necessary, but can cause new defects. This paper attempts to model the likelihood of defects after completed upgrades with a Weibull defect probability density function (PDF). A case study is presented, analyzing data on recorded defects obtained for one ERP subsystem. Trends are observed in the values of the parameters of the proposed Weibull distribution over a given one-year period. As a result, the ability to predict the appearance of defects after the next upgrade is described.
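For reference, the two-parameter Weibull PDF referred to, with shape $k$ and scale $\lambda$:

```latex
f(t;\lambda,k) = \frac{k}{\lambda}\left(\frac{t}{\lambda}\right)^{k-1}
                 e^{-(t/\lambda)^{k}}, \qquad t \ge 0
```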
Abstract: This paper presents the results of UV measurements at different altitude levels and the development of a new portable instrument for measuring UV. The rapid growth of industrial sectors in developing countries, including Malaysia, brings not only income to the nation, but also causes pollution in various forms. Air pollution is one of the significant contributors to global warming through depletion of the ozone layer, which reduces the filtration of UV rays. Prolonged exposure to high levels of UV radiation has many devastating health effects on mankind, directly or indirectly, through the destruction of natural resources. This study aimed to show the correlation between UV levels and altitude, which can indirectly help predict ozone depletion. An instrument was designed to measure and monitor the level of UV. The instrument comprises two main blocks, namely a data logger and a graphical user interface (GUI). Three sensors are used in the data logger to detect changes in temperature, humidity and ultraviolet radiation. The system underwent experimental measurements to capture data under two different conditions: an industrial area and a high-altitude area. The performance of the instrument showed consistency in the captured data, and the results of the experiment showed significantly higher UV readings at high altitudes.
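Hardware details are not given in the abstract; the sketch below shows one way the data-logger side could record readings, assuming the sensor block streams comma-separated temperature, humidity and UV values over a serial port (port name, baud rate and framing are all assumptions).

```python
import csv
import time
import serial  # pyserial

# Assumed framing: one "temp,humidity,uv" text line per sample.
with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as port, \
        open("uv_log.csv", "a", newline="") as f:
    log = csv.writer(f)
    log.writerow(["timestamp", "temp_C", "humidity_pct", "uv_index"])
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        if line.count(",") == 2:                 # skip malformed frames
            log.writerow([time.time(), *line.split(",")])
            f.flush()                            # keep data safe on power loss
```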
Abstract: The paper evaluates several hundred one-day-ahead VaR forecasting models in the time period between the years 2004 and 2009 on data from six world stock indices: DJI, GSPC, IXIC, FTSE, GDAXI and N225. The models describe the mean using ARMA processes with up to two lags and the variance with one of the GARCH, EGARCH or TARCH processes with up to two lags. The models are estimated on data from the in-sample period and their forecasting accuracy is evaluated on the out-of-sample data, which are more volatile. The main aim of the paper is to test whether a model estimated on data with lower volatility can be used in periods with higher volatility. The evaluation is based on the conditional coverage test and is performed on each stock index separately. The primary result of the paper is that volatility is best modelled using a GARCH process and that an ARMA pattern cannot be found in the analyzed time series.
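A sketch of one candidate model from the family described, using the arch package: an AR(1) mean with GARCH(1,1) variance and a one-day-ahead 1% VaR. The index and quantile level are illustrative, and arch supports AR means natively; the paper's MA terms would need a separate mean model.

```python
import numpy as np
from scipy.stats import norm
from arch import arch_model

# returns: in-sample daily percentage returns of one index (e.g. DJI)
am = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, q=1, dist="normal")
res = am.fit(disp="off")
fc = res.forecast(horizon=1)
mu = fc.mean.iloc[-1, 0]
sigma = np.sqrt(fc.variance.iloc[-1, 0])
var_99 = mu + sigma * norm.ppf(0.01)   # one-day-ahead 99% VaR (left tail)
```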
Abstract: In this study, a mathematical model was proposed, and its accuracy assessed, for predicting the growth of Pseudomonas aeruginosa and rhamnolipid production under nitrogen-limited (sodium nitrate) fed-batch fermentation. All of the parameters used in this model were obtained individually, without using any data from the literature.
The overall growth kinetics of the strain were evaluated using a dual-parallel substrate Monod equation, parameterised from several sets of batch experimental data. Fed-batch data obtained under different glycerol (as the sole carbon source, C/N = 10) concentrations and feed flow rates were used to fit the proposed fed-batch model and its remaining parameters. In order to verify the accuracy of the proposed model, several verification experiments were performed over a wide range of initial glycerol concentrations. While the results showed an acceptable prediction for rhamnolipid production (less than 10% error), in the case of biomass prediction the errors were less than 23%. It was also found that rhamnolipid production by P. aeruginosa was more sensitive at low glycerol concentrations.
Based on the findings of this work, it was concluded that the proposed model can effectively be employed for rhamnolipid production by this strain under fed-batch fermentation at up to 80 g l-1 glycerol.
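The abstract does not reproduce the fitted model; one common additive ("parallel") dual-substrate Monod form, with S1 and S2 the two limiting substrate concentrations, is shown below. Whether the paper uses this additive form or a multiplicative variant is not stated in the abstract.

```latex
\mu = \mu_{\max,1}\,\frac{S_{1}}{K_{S_1} + S_{1}}
    + \mu_{\max,2}\,\frac{S_{2}}{K_{S_2} + S_{2}}
```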
Abstract: Association rule mining is an important problem in data mining. The massively increasing volume of data in real-life databases has motivated researchers to design novel and incremental algorithms for association rule mining. In this paper, we propose an incremental association rule mining algorithm that integrates a shocking interestingness criterion during the process of building the model. A new interestingness measure, called the shocking measure, is introduced. One of the main features of the proposed approach is that it captures the user's background knowledge, which is monotonically augmented. The incremental model, which reflects the changing data and the user's beliefs, is attractive in order to make the overall KDD process more effective and efficient. We implemented the proposed approach, experimented with it on several public datasets, and found the results quite promising.
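The shocking measure itself is the paper's contribution and is not specified in the abstract; the sketch below shows the standard support/confidence rule mining that such a measure would re-rank, on a toy one-hot transaction table.

```python
import itertools
import pandas as pd

# One-hot transaction matrix: rows = transactions, columns = items.
df = pd.DataFrame([{"bread": 1, "milk": 1, "butter": 0},
                   {"bread": 1, "milk": 0, "butter": 1},
                   {"bread": 1, "milk": 1, "butter": 1}], dtype=bool)

def support(cols):
    """Fraction of transactions containing every item in cols."""
    return df[list(cols)].all(axis=1).mean()

# Enumerate simple one-to-one rules A -> B with support and confidence filters.
for a, b in itertools.permutations(df.columns, 2):
    s_ab, s_a = support([a, b]), support([a])
    if s_a and s_ab >= 0.5 and s_ab / s_a >= 0.7:
        print(f"{a} -> {b}: support={s_ab:.2f}, confidence={s_ab / s_a:.2f}")
```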
Abstract: An automated wood recognition system is designed to classify tropical wood species. The wood features are extracted using two feature extractors: the Basic Grey Level Aura Matrix (BGLAM) technique and the statistical properties of pores distribution (SPPD) technique. Due to the nonlinearity of the tropical wood species separation boundaries, a pre-classification stage is proposed, which consists of K-means clustering and kernel discriminant analysis (KDA). Finally, Linear Discriminant Analysis (LDA) and K-Nearest Neighbour (KNN) classifiers are implemented for comparison purposes. The study compares the system with and without pre-classification using the KNN and LDA classifiers. The results show that the inclusion of the pre-classification stage improved the accuracy of both the LDA and KNN classifiers by more than 12%.
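A sketch of the pre-classification idea with scikit-learn, under stated assumptions: BGLAM/SPPD feature extraction is taken as already done, K-means partitions the feature space, and a KNN classifier is trained per cluster (scikit-learn has no KDA, so that step is omitted here).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def fit_preclassified(X, y, n_clusters=5):
    """K-means pre-classification, then one KNN classifier per cluster."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    experts = {}
    for c in range(n_clusters):
        m = km.labels_ == c
        knn = KNeighborsClassifier(n_neighbors=min(3, int(m.sum())))
        experts[c] = knn.fit(X[m], y[m])
    return km, experts

def predict_preclassified(km, experts, X):
    """Route each sample to its cluster's expert classifier."""
    clusters = km.predict(X)
    return np.array([experts[c].predict(x[None])[0]
                     for c, x in zip(clusters, X)])
```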
Abstract: In this project, cadmium ions were adsorbed from aqueous solutions onto either date pits, a cheap and non-toxic agricultural material, or activated carbon chemically prepared from date pits using phosphoric acid. A series of batch adsorption experiments was conducted to assess the feasibility of using the prepared adsorbents. The effects of process variables such as initial cadmium ion concentration, contact time, solution pH and adsorbent dose on the adsorption capacity of both adsorbents were studied. The experimental data were tested against different isotherm models, namely Langmuir, Freundlich, Temkin and Dubinin-Radushkevich. The results showed that, although the equilibrium data could be described by all the models used, the Langmuir model gave slightly better results for activated carbon, while the Freundlich model gave better results for date pits.
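For reference, the two best-fitting isotherms in their standard forms, with $q_e$ the equilibrium uptake and $C_e$ the equilibrium concentration:

```latex
\text{Langmuir: } q_e = \frac{q_m K_L C_e}{1 + K_L C_e},
\qquad
\text{Freundlich: } q_e = K_F\, C_e^{1/n}
```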
Abstract: The decision to recruit manpower in an organization requires clear identification of the criteria (attributes) that distinguish successful from unsuccessful performance. The choice of appropriate attributes or criteria at different levels of an organization's hierarchy is a multi-criteria decision problem, and therefore multi-criteria decision making (MCDM) techniques can be used for the prioritization of such attributes. The Analytic Hierarchy Process (AHP) is one such technique that is widely used for deciding among complex criteria structures at different levels. In real applications, however, conventional AHP cannot fully reflect the human thinking style, as precise data concerning human attributes are quite hard to extract. Fuzzy logic offers a systematic basis for dealing with situations that are ambiguous or not well defined. This study aims at defining a methodology to improve the quality of prioritization of an employee's performance measurement attributes under fuzziness. To do so, a methodology based on the Extent Fuzzy Analytic Hierarchy Process is proposed. Within the model, four main attributes, namely subject knowledge and achievements, research aptitude, personal qualities and strengths, and management skills, are defined along with their sub-attributes. The two approaches, conventional AHP and the Extent Fuzzy Analytic Hierarchy Process, have been compared on the same hierarchy structure and criteria set.
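For contrast with the fuzzy extent variant, a sketch of the conventional AHP step: priority weights from the principal eigenvector of a pairwise comparison matrix, with Saaty's consistency check (the 4x4 comparison values below are illustrative, not the study's judgments).

```python
import numpy as np

# Illustrative pairwise comparisons of the four main attributes (Saaty 1-9 scale).
A = np.array([[1.0, 3.0, 5.0, 2.0],
              [1/3, 1.0, 2.0, 1/2],
              [1/5, 1/2, 1.0, 1/3],
              [1/2, 2.0, 3.0, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                        # priority weights

n = len(A)
ci = (vals[k].real - n) / (n - 1)   # consistency index
cr = ci / 0.90                      # random index RI = 0.90 for n = 4
print("weights:", w.round(3), "CR:", round(cr, 3))  # CR < 0.1 is acceptable
```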
Abstract: Compression algorithms reduce the redundancy in data representation to decrease the storage required for that data. Lossless compression researchers have developed highly sophisticated approaches, such as Huffman encoding, arithmetic encoding, the Lempel-Ziv (LZ) family, Dynamic Markov Compression (DMC), Prediction by Partial Matching (PPM), and Burrows-Wheeler Transform (BWT) based algorithms. Decompression is then required to retrieve the original data by lossless means. This paper presents a compression scheme for text files coupled with the principle of dynamic decompression, which decompresses only the section of the compressed text file required by the user instead of decompressing the entire file. Dynamically decompressed files offer better disk space utilization due to higher compression ratios compared to most currently available text file formats.
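A sketch of the dynamic-decompression principle, not the paper's exact scheme: compressing the text in independent fixed-size zlib blocks lets any requested section be decompressed without touching the rest of the file.

```python
import zlib

BLOCK = 64 * 1024  # 64 KiB of plain text per independently compressed block

def compress_blocks(text: bytes):
    """Split text into fixed-size blocks and compress each independently."""
    return [zlib.compress(text[i:i + BLOCK]) for i in range(0, len(text), BLOCK)]

def read_section(blocks, start: int, end: int) -> bytes:
    """Decompress only the blocks covering bytes [start, end)."""
    first, last = start // BLOCK, (end - 1) // BLOCK
    data = b"".join(zlib.decompress(blocks[b]) for b in range(first, last + 1))
    return data[start - first * BLOCK : end - first * BLOCK]
```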
Abstract: When a high DC voltage is applied to a capacitor with
strongly asymmetrical electrodes, it generates a mechanical force that
affects the whole capacitor. This phenomenon is most likely to be
caused by the motion of ions generated around the smaller of the two
electrodes and their subsequent interaction with the surrounding
medium. A method to measure this force has been devised and used.
A formula describing the force has also been derived. After
comparing the data gained through experiments with those acquired
using the theoretical formula, a difference was found above a certain
value of current. This paper also gives reasons for this difference.
Abstract: This paper suggests a new Affine Projection (AP) algorithm with variable data-reuse factor using the condition number as a decision factor. To reduce computational burden, we adopt a recently reported technique which estimates the condition number of an input data matrix. Several simulations show that the new algorithm has better performance than that of the conventional AP algorithm.
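A sketch of the conventional AP update that the proposal builds on, with a fixed projection order K; the condition-number-based switching of the data-reuse factor is the paper's contribution and is not reproduced here.

```python
import numpy as np

def affine_projection(x, d, L=16, K=4, mu=0.5, delta=1e-4):
    """Conventional AP adaptive filter: L taps, projection order K."""
    w = np.zeros(L)
    e_hist = np.zeros(len(x))
    for n in range(L + K, len(x)):
        # Data matrix of the K most recent input vectors (L x K).
        X = np.column_stack([x[n - k - L + 1 : n - k + 1][::-1] for k in range(K)])
        dk = d[n - K + 1 : n + 1][::-1]
        e = dk - X.T @ w
        # AP update: w += mu * X (X^T X + delta*I)^{-1} e
        w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(K), e)
        e_hist[n] = e[0]
    return w, e_hist
```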
Abstract: Feature selection has recently been the subject of intensive research in data mining, especially for datasets with a large number of attributes. Recent work has shown that feature selection can have a positive effect on the performance of machine learning algorithms. The success of many learning algorithms, in their attempts to construct models of data, hinges on the reliable identification of a small set of highly predictive attributes. The inclusion of irrelevant, redundant and noisy attributes in the model building phase can result in poor predictive performance and increased computation. In this paper, a novel feature search procedure that utilizes Ant Colony Optimization (ACO) is presented. ACO is a metaheuristic inspired by the behavior of real ants in their search for the shortest paths to food sources. It looks for optimal solutions by considering both local heuristics and previous knowledge. When applied to two different classification problems, the proposed algorithm achieved very promising results.
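A minimal sketch of a pheromone-guided subset search in the spirit described; the scoring classifier, subset size and update rule are illustrative simplifications, and the local-heuristic term is omitted.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def aco_feature_select(X, y, n_ants=20, n_iter=30, k_feats=10, rho=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    tau = np.ones(n)                          # pheromone per feature
    best_subset, best_score = None, -np.inf
    for _ in range(n_iter):
        for _ in range(n_ants):
            p = tau / tau.sum()               # sample features by pheromone level
            subset = rng.choice(n, size=k_feats, replace=False, p=p)
            score = cross_val_score(KNeighborsClassifier(),
                                    X[:, subset], y, cv=3).mean()
            if score > best_score:
                best_subset, best_score = subset, score
            tau[subset] += score              # reinforce useful features
        tau *= (1 - rho)                      # pheromone evaporation
    return best_subset, best_score
```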