Abstract: In wastewater treatment processes, aeration introduces
air into a liquid. In these systems, air is introduced by different
devices submerged in the wastewater. Smaller bubbles result in more
bubble surface area per unit of volume and higher oxygen transfer
efficiency. Jet pumps are devices that use air bubbles and are widely
used in wastewater treatment processes. The principle of jet pumps is
their ability to transfer energy of one fluid, called primary or motive,
into a secondary fluid or gas. These pumps have no moving parts and
are able to work in remote areas under extreme conditions. The
objective of this work is to study experimentally the characteristics of
the jet pump and the size of air bubbles in the laboratory water tank.
The effect of the flow rate ratio on pump performance is investigated
to better understand pump behavior under various conditions and to
determine how efficiently air bubbles of different sizes are produced.
The experiments show that care must be taken when increasing the flow
rate ratio while seeking to decrease bubble size in the outlet flow.
This study will help improve and extend the use of the jet pump in
many practical applications.
Abstract: Durian is the flagship fruit of Mindanao and there is
an abundance of several cultivars with many confusing identities/
names.
The project was conducted to develop a procedure for reliable and
rapid detection and sorting of durian planting materials. It also
aimed to establish specific genetic (DNA) markers for routine
testing and authentication of durian cultivars in question.
The project developed molecular procedures for routine testing.
SSR primers were also screened and identified for their utility in
discriminating durian cultivars collected.
Results of the study showed the following accomplishments:
1. Twenty-nine (29) SSR primers were selected and identified based on
their ability to discriminate the durian cultivars collected;
2. A standard procedure for the identification and authentication of
durian cultivars was optimized and established;
3. A genetic profile of durian is now available at the Biotech Unit.
Our results demonstrate the relevance of using molecular
techniques in evaluating and identifying durian clones. The most
polymorphic primers tested in this study could be useful tools for
detecting variation even at the early stage of the plant especially for
commercial purposes. The process developed combines the efficiency
of the microsatellites development process with the optimization of
non-radioactive detection process resulting in a user-friendly protocol
that can be performed in two (2) weeks and easily incorporated into
laboratories about to start microsatellite development projects. This
can be of great importance to extend microsatellite analyses to other
crop species where minimal genetic information is currently
available. With this, the University can now be a service laboratory
for routine testing and authentication of durian clones.
Abstract: The secondary alloy A226 is used for many
automotive castings produced by mould casting and high-pressure die
casting. This alloy has excellent castability, good mechanical
properties and cost-effectiveness. The production of primary
aluminium alloys is a major source of environmental pollution. The
European Union calls for reductions in emissions and energy
consumption, and therefore the production of recycled (secondary)
aluminium cast alloys is increasing. This contribution deals with the
influence of recycling on the quality of castings made from alloy
A226 in the automotive industry. The properties of castings made from
secondary aluminium alloys were compared with the required properties
of primary aluminium alloys. The effect of recycling on the
microstructure was observed using a combination of different
analytical techniques (light microscopy after black-white etching,
scanning electron microscopy (SEM) after deep etching, and
energy-dispersive X-ray analysis (EDX)). These techniques were used
to identify the various structural parameters, which were then used
to compare the secondary alloy microstructure with the primary alloy
microstructure.
Abstract: Many factors influence the educational outcome of
students. Some of these have been studied by researchers with many
emphasizing the role of students, schools, governments, peer groups
and so on. More often than not, the factors influencing the
academic achievement of students have been traced back to parents and
family, the primary platform on which learning not only begins but is
nurtured, encouraged and developed, and which later translates into
the performance of the students. This study not only
explores parental and related factors that predict academic
achievement through a review of the relevant literature but also
investigates the influence of parental background on the academic
achievement of senior secondary school students in Ibadan North
Local Government Area of Oyo State, Nigeria. As one of the criteria
of the quality of education, students’ academic achievement was
investigated because it is most often cited as an indicator of school
effectiveness by school authorities and educationists. Data were
collected through interviews and well-structured questionnaires
administered to one hundred (100) students within the target local
government area. The data were statistically analysed, and the
results showed that parents' attitudes towards their children's
education had a significant effect on students' self-reported
academic achievement. However, factors such as parental education and
socio-economic background had no significant relationship with the
students' self-reported academic achievement.
Abstract: The aim of this paper is to present the concept of an
agile enterprise model and to initiate discussion on the research
assumptions of the model presented. The implementation of the
research project "The agility of enterprises in the process of adapting
to the environment and its changes" began in August 2014 and is
planned to last three years. The article takes the form of a
work-in-progress paper which aims to verify and initiate a debate over the
proposed research model. In the literature there are very few
publications relating to research into agility; it can be concluded that
the most controversial issue in this regard is the method of measuring
agility. In previous studies the operationalization of agility was often
fragmentary, focusing only on selected areas of agility, for example
manufacturing, or analysing only selected sectors. As a result the
measures created to date can only be treated as contributory to the
development of precise measurement tools. This research project
aims to fill a cognitive gap in the literature with regard to the
conceptualization and operationalization of an agile company. Thus,
the original contribution of the author of this project is the
construction of a theoretical model that integrates manufacturing
agility (consisting mainly in adaptation to the environment) and
strategic agility (based on proactive measures). The author of this
research project is primarily interested in the attributes of an agile
enterprise which indicate that the company is able to rapidly adapt to
changing circumstances and behave pro-actively.
Abstract: An edge is a variation in brightness in an image. Edge
detection is useful in many application areas, such as locating
forests and rivers in satellite images or detecting broken bones in
medical images. This paper discusses finding the edges of multiple
aerial images in parallel. The proposed approach was tested on 38
images: 37 color and one monochrome. The time taken to process N
images in parallel is approximately equal to the time taken to
process one image sequentially. Message Passing Interface (MPI) and
the Open Computing Language (OpenCL) are used to achieve task-level
and pixel-level parallelism, respectively.
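As a rough illustration of the task-level parallelism described above, the sketch below distributes images across worker processes. Python's multiprocessing stands in for MPI, and a simple Sobel-style gradient stands in for the authors' edge detector; both substitutions are assumptions, not the paper's implementation.

```python
# Minimal sketch: task-level parallelism over images (multiprocessing in
# place of MPI), with a Sobel-style gradient as the edge detector.
from multiprocessing import Pool

def sobel_edges(image, threshold=2):
    """Return a binary edge map for a 2-D list of grayscale pixels."""
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (image[y-1][x+1] + 2*image[y][x+1] + image[y+1][x+1]
                  - image[y-1][x-1] - 2*image[y][x-1] - image[y+1][x-1])
            gy = (image[y+1][x-1] + 2*image[y+1][x] + image[y+1][x+1]
                  - image[y-1][x-1] - 2*image[y-1][x] - image[y-1][x+1])
            if abs(gx) + abs(gy) >= threshold:
                edges[y][x] = 1
    return edges

if __name__ == "__main__":
    # A vertical step edge in each of several small synthetic images.
    img = [[0] * 4 + [9] * 4 for _ in range(8)]
    images = [img] * 4
    with Pool(2) as pool:        # each worker handles one image (task level)
        results = pool.map(sobel_edges, images)
    print(results[0][4])  # -> [0, 0, 0, 1, 1, 0, 0, 0]
```

Each image is an independent task, so N images occupy N workers for roughly the time of one sequential image, which is the scaling behaviour the abstract reports.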
Abstract: All current experimental methods for determination of
stress intensity factors are based on the assumption that the state of
stress near the crack tip is plane stress. Therefore, these methods
rely on strain and displacement measurements made outside the
near-crack-tip region affected by three-dimensional effects or by the
process zone. In this paper, we develop and validate an experimental
procedure for the evaluation of stress intensity factors from
measurements of the out-of-plane displacements in the surface area
controlled by 3D effects. The evaluation of stress intensity factors
is possible when the process zone is sufficiently small and the
displacement field generated by the 3D effects is fully encapsulated
by the K-dominance region.
Abstract: A simple adaptive voice activity detector (VAD) is
implemented using Gabor and gammatone atomic decomposition of
speech for high Gaussian noise environments. Matching pursuit is
used for atomic decomposition, and is shown to achieve optimal
speech detection capability at high data compression rates for low
signal-to-noise ratios. The most active dictionary elements found by
matching pursuit are used for signal reconstruction, so that the
algorithm adapts to the individual speaker's dominant time-frequency
characteristics. Speech has a high peak-to-average ratio, enabling
the matching pursuit greedy heuristic of highest inner products to
isolate high-energy speech components in high-noise environments.
Gabor and gammatone atoms are both investigated with identical
logarithmically spaced center frequencies and similar bandwidths.
The algorithm performs equally well for both Gabor and gammatone
atoms, with no significant statistical differences. The algorithm
achieves 70% accuracy at a 0 dB SNR, 90% accuracy at a 5 dB SNR
and 98% accuracy at a 20 dB SNR, using a 30 dB SNR as the reference
for voice activity.
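The matching pursuit selection step can be sketched minimally as follows, assuming a toy orthonormal dictionary rather than the paper's Gabor/gammatone atoms:

```python
# Minimal sketch of matching pursuit: greedily pick the atom with the
# highest inner product against the residual, subtract, and repeat.
def matching_pursuit(signal, atoms, n_iter=3):
    residual = list(signal)
    picks = []
    for _ in range(n_iter):
        # inner product of the residual with every (unit-norm) atom
        scores = [sum(r * a for r, a in zip(residual, atom)) for atom in atoms]
        best = max(range(len(atoms)), key=lambda i: abs(scores[i]))
        coef = scores[best]
        picks.append((best, coef))
        residual = [r - coef * a for r, a in zip(residual, atoms[best])]
    return picks, residual

# Two orthonormal "atoms"; the signal is 3*atom0 + 1*atom1.
a0 = [1.0, 0.0, 0.0, 0.0]
a1 = [0.0, 1.0, 0.0, 0.0]
picks, residual = matching_pursuit([3.0, 1.0, 0.0, 0.0], [a0, a1], n_iter=2)
print(picks)  # -> [(0, 3.0), (1, 1.0)]
```

The high-energy component (the larger inner product) is isolated first, which is the property the abstract exploits for speech in noise.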
Abstract: The changes in the optical and structural properties of
Bismuth-Boro-Tellurite glasses before and after gamma irradiation were
studied. Six glass samples with different compositions [(TeO2)0.7
(B2O3)0.3]1-x (Bi2O3)x, prepared by the melt quenching method, were
irradiated with 25 kGy of gamma radiation at room temperature. The
Fourier Transform Infrared Spectroscopy (FTIR) was used to explore
the structural bonding in the prepared glass samples due to exposure,
while UV-VIS Spectrophotometer was used to evaluate the changes
in the optical properties before and after irradiation. Gamma
irradiation causes profound changes in the peak intensity, as shown
by the FTIR spectra, due to the breaking of the network bonding.
Before gamma irradiation, the optical band gap, Eg value decreased
from 2.44 eV to 2.15 eV with the addition of Bismuth content. The
value kept decreasing (from 2.18 eV to 2.00 eV) following exposure
to gamma radiation due to the increase of non-bridging oxygen
(NBO) and the increase of defects in the glass. In conclusion, the
glass with the highest Bi2O3 content (0.30Bi) gives the smallest Eg
and shows the least change in the FTIR spectra after gamma
irradiation, indicating that this glass is more resistant to gamma
radiation than the other glasses.
Abstract: The problems arising from unbalanced data sets
generally appear in real world applications. Due to unequal class
distribution, many researchers have found that the performance of
existing classifiers tends to be biased towards the majority class. The
k-nearest neighbors’ nonparametric discriminant analysis is a method
that was proposed for classifying unbalanced classes with good
performance. In this study, the methods of discriminant analysis are
of interest in investigating misclassification error rates for
class-imbalanced data of three diabetes risk groups. The purpose of this
study was to compare the classification performance between
parametric discriminant analysis and nonparametric discriminant
analysis in a three-class classification of class-imbalanced data of
diabetes risk groups. Data from a project maintaining healthy
conditions for 599 employees of a government hospital in Bangkok
were obtained for the classification problem. The employees were
divided into three diabetes risk groups: non-risk (90%), risk (5%),
and diabetic (5%). The original data including the variables of
diabetes risk group, age, gender, blood glucose, and BMI were
analyzed and bootstrapped for 50 and 100 samples, 599 observations
per sample, for additional estimation of the misclassification error
rate. Each data set was examined for departure from multivariate
normality and for equality of the covariance matrices of the three risk
groups. Both the original data and the bootstrap samples showed
non-normality and unequal covariance matrices. The parametric linear
discriminant function, quadratic discriminant function, and the
nonparametric k-nearest neighbors’ discriminant function were
performed over 50 and 100 bootstrap samples and applied to the
original data. In searching for the optimal classification rule, the
prior probabilities were set to equal proportions (0.33:0.33:0.33)
and to unequal proportions of (0.90:0.05:0.05), (0.80:0.10:0.10)
and (0.70:0.15:0.15). The results from 50 and 100 bootstrap samples
indicated that the k-nearest neighbors approach when k=3 or k=4 and
the defined prior probabilities of non-risk: risk: diabetic as 0.90:
0.05:0.05 or 0.80:0.10:0.10 gave the smallest error rate of
misclassification. The k-nearest neighbors approach is therefore
suggested for classifying three-class imbalanced data of diabetes
risk groups.
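A prior-weighted k-nearest-neighbors rule of this general kind can be sketched as follows. The toy data and the exact scoring form (prior times neighbor count over class size) are illustrative assumptions, not the study's procedure or hospital data:

```python
# Sketch of a k-NN discriminant rule with class priors: assign x to the
# class maximising prior_c * (k_c / n_c), where k_c is the number of the
# k nearest neighbours in class c and n_c is that class's training size.
def knn_with_priors(x, train, labels, priors, k=3):
    order = sorted(range(len(train)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i], x)))
    nearest = [labels[i] for i in order[:k]]
    classes = sorted(priors)
    n = {c: labels.count(c) for c in classes}
    score = {c: priors[c] * nearest.count(c) / n[c] for c in classes}
    return max(classes, key=lambda c: score[c])

# Toy imbalanced data: many "non-risk" points, few "risk" points.
train = [(0.0,), (0.1,), (0.2,), (0.3,), (5.0,), (5.1,)]
labels = ["non", "non", "non", "non", "risk", "risk"]
priors = {"non": 0.7, "risk": 0.3}
print(knn_with_priors((5.05,), train, labels, priors, k=3))  # -> risk
```

Dividing by the class size n_c counteracts the majority-class bias that the abstract describes, while the priors let the analyst encode the known 0.90:0.05:0.05-style imbalance.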
Abstract: In this work, we explore the capability of the mean
shift algorithm as a powerful preprocessing tool for improving the
quality of spatial data, acquired from airborne scanners, from densely
built urban areas. On one hand, high resolution image data corrupted
by noise caused by lossy compression techniques are appropriately
smoothed while at the same time preserving the optical edges and, on
the other, low resolution LiDAR data in the form of normalized
Digital Surface Map (nDSM) is upsampled through the joint mean
shift algorithm. Experiments on both the edge-preserving smoothing
and upsampling capabilities using synthetic RGB-z data show that the
mean shift algorithm is superior to bilateral filtering as well as to
other classical smoothing and upsampling algorithms. Application of
the proposed methodology for 3D reconstruction of buildings of a
pilot region of Athens, Greece results in a significant visual
improvement of the 3D building block model.
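The core mean shift iteration can be sketched in one dimension with a flat kernel; this is a simplification of the joint mean shift the paper applies to RGB-z data:

```python
# 1-D mean shift sketch with a flat kernel: a point repeatedly moves to
# the mean of its neighbours within the bandwidth until it settles on a
# mode of the underlying density.
def mean_shift_point(x, data, bandwidth=1.0, n_iter=20):
    for _ in range(n_iter):
        window = [p for p in data if abs(p - x) <= bandwidth]
        x = sum(window) / len(window)
    return x

data = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]   # two clusters around 1 and 5
modes = sorted({round(mean_shift_point(p, data), 3) for p in data})
print(modes)  # -> [1.0, 5.0]
```

Because each point is pulled only toward nearby data, sharp transitions between clusters (optical edges, height discontinuities) are preserved while values within a cluster are smoothed, which is the edge-preserving behaviour described above.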
Abstract: The proliferation of electronic equipment in photocopying
environments has not only improved work efficiency but also changed
indoor air quality. Considering the amount of photocopying equipment
employed, indoor air quality in such shops might be worse than in
general office environments. Determining the contribution of any type
of equipment to indoor air pollution is a complex matter. Non-methane
hydrocarbons are known to play an important role in air quality due
to their high reactivity. The presence of hazardous pollutants in
indoor air has been detected in one photocopying shop in Novi Sad,
Serbia. Air samples were collected and analyzed over five days during
the 8-hour working day, in three time intervals and at three
different sampling points. Using a multiple linear regression model
and the software package STATISTICA 10, the concentrations of
occupational hazards and the microclimate parameters were mutually
correlated. Based on the obtained multiple coefficients of
determination (0.3751, 0.2389 and 0.1975), a weak positive
correlation between the observed variables was determined. Small
values of the F statistic indicated that there was no statistically
significant relationship between the concentration levels of
non-methane hydrocarbons and the microclimate parameters. The results
showed that the response variable could be represented by the general
regression model y = b0 + b1xi1 + b2xi2. The obtained regression
equations make it possible to measure the quantitative agreement
between the variables and thus gain more accurate knowledge of their
mutual relations.
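A model of the form y = b0 + b1xi1 + b2xi2 can be fitted by ordinary least squares. The sketch below uses synthetic data rather than the measured concentrations and also computes the coefficient of determination:

```python
# Sketch: ordinary least squares for y = b0 + b1*x1 + b2*x2, with the
# coefficient of determination R^2 (the "multiple coefficient of
# determination" reported in the abstract).
import numpy as np

def fit_ols(X, y):
    """X is (n, 2); an intercept column is prepended internally."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return beta, r2

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = 1.0 + 2.0 * X[:, 0] + 3.0 * X[:, 1]   # exact linear relation
beta, r2 = fit_ols(X, y)
print(beta, r2)  # beta ~ [1, 2, 3], R^2 ~ 1
```

With real measurements the residuals would not vanish, and R^2 values like 0.3751 would indicate the weak correlations the abstract reports.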
Abstract: Microscopic simulation toolkits allow both the process of
railway operations and the preceding timetable production to be
considered. Block occupation conflicts on both
process levels are often solved by using defined train priorities. These
conflict resolutions (dispatching decisions) generate reactionary
delays to the involved trains. The sum of reactionary delays is
commonly used to evaluate the quality of railway operations, which
describes the timetable robustness. It is either compared to an
acceptable train performance or the delays are appraised
economically by linear monetary functions. It is impossible to
adequately evaluate dispatching decisions without a well-founded
objective function. This paper presents a new approach for the
evaluation of dispatching decisions. The approach uses mode choice
models and considers the behaviour of the end-customers. These
models evaluate the reactionary delays in more detail and consider
other competing modes of transport. The new approach pursues the
coupling of a microscopic model of railway operations with a
macroscopic mode choice model. At first, it will be implemented for
the railway operations process, but it can also be used for timetable
production. The evaluation considers the possibility for the customer
to interchange to other transport modes. The new approach starts to
look at rail and road, but it can also be extended to air travel. The
result of mode choice models is the modal split. The reactions by the
end-customers have an impact on the revenue of the train operating
companies. Different travel purposes involve different levels of
willingness to pay and different tolerances of late running. Aside
from changes to
revenues, longer journey times can also generate additional costs.
The costs are either time- or track-specific and arise from required
changes to rolling stock or train crew cycles. Only the variable values
are summarised in the contribution margin, which is the base for the
monetary evaluation of delays. The contribution margin is calculated
for different possible solutions to the same conflict. The conflict
resolution is optimised until the monetary loss becomes minimal. The
iterative process therefore determines an optimum conflict resolution
by monitoring the change to the contribution margin. Furthermore, a
monetary value of each dispatching decision can also be derived.
Abstract: In the Hierarchical Temporal Memory (HTM) paradigm
the effect of overlap between inputs on the activation of columns in
the spatial pooler is studied. Numerical results suggest that similar
inputs are represented by similar sets of columns and dissimilar inputs
are represented by dissimilar sets of columns. It is shown that the
spatial pooler produces these results under certain conditions for
the connectivity and proximal thresholds. Following the discussion
of the initialization of parameters for the thresholds, corresponding
qualitative arguments about the learning dynamics of the spatial
pooler are discussed.
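The overlap computation described above can be sketched minimally; the connectivity lists and the proximal threshold below are illustrative assumptions, not the paper's parameter values:

```python
# Sketch of the spatial pooler's overlap step: a column's overlap is the
# number of active input bits on its connected synapses, and the column
# activates when the overlap reaches the proximal threshold.
def column_overlaps(input_bits, connected, threshold=2):
    overlaps = [sum(input_bits[i] for i in syn) for syn in connected]
    active = [o >= threshold for o in overlaps]
    return overlaps, active

# Two columns watching overlapping parts of a 6-bit input.
connected = [[0, 1, 2], [2, 3, 4]]
x = [1, 1, 1, 0, 0, 0]
overlaps, active = column_overlaps(x, connected)
print(overlaps, active)  # -> [3, 1] [True, False]
```

Two inputs sharing many active bits yield similar overlap vectors and hence similar sets of active columns, which is the similarity-preservation property the abstract reports.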
Abstract: In this study, we propose two techniques for tracking the
maximum power point (MPP) of a photovoltaic system. The first is an
intelligent control technique, and the second is a robust technique
used for variable-structure systems. The I-V and P-V characteristics
of the photovoltaic generator depend on the solar irradiance and
temperature. These climatic changes cause the maximum power point to
fluctuate, so a maximum power point tracking (MPPT) technique is
required to maximize the output power. For this purpose we adopted
fuzzy logic control (FLC), known for its stability and robustness,
and Sliding Mode Control (SMC), widely used for variable-structure
systems. The system comprises a photovoltaic panel
(PV), a DC-DC converter, which is considered as an adaptation stage
between the PV panel and the load. The modelling and simulation of
the system are developed using MATLAB/Simulink. The SMC technique
provides a good tracking speed under fast-changing irradiation, while
when the irradiation changes slowly or is constant, the panel power
under the FLC technique is a much smoother signal with fewer
fluctuations.
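To make the maximum-power-point idea concrete, the sketch below uses perturb-and-observe, a deliberately simpler scheme than the FLC and SMC controllers studied in the paper, on a toy concave P-V curve:

```python
# Illustrative perturb-and-observe MPPT loop on a toy P-V curve (a
# simpler scheme than the paper's FLC and SMC controllers; shown only
# to make the maximum-power-point idea concrete).
def panel_power(v):
    return -(v - 17.0) ** 2 + 60.0        # toy curve, MPP at 17 V

def perturb_and_observe(v=10.0, step=0.5, n_iter=100):
    p = panel_power(v)
    direction = 1.0
    for _ in range(n_iter):
        v_new = v + direction * step
        p_new = panel_power(v_new)
        if p_new < p:                      # power dropped: reverse direction
            direction = -direction
        v, p = v_new, p_new
    return v

v = perturb_and_observe()
print(round(v, 1))  # oscillates around the 17 V maximum power point
```

The residual oscillation around the MPP is exactly the kind of fluctuation that, per the abstract, the FLC suppresses under steady irradiation.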
Abstract: Evolutionary optimization methods such as genetic
algorithms have been used extensively for the construction site layout
problem. More recently, ant colony optimization algorithms, which
are evolutionary methods based on the foraging behavior of ants,
have been successfully applied to benchmark combinatorial
optimization problems. This paper proposes a formulation of the site
layout problem in terms of a sequencing problem that is suitable for
solution using an ant colony optimization algorithm.
In the construction industry, site layout is a very important
planning problem. The objective of site layout is to position
temporary facilities both geographically and at the correct time such
that the construction work can be performed satisfactorily with
minimal costs and improved safety and working environment. During
the last decade, evolutionary methods such as genetic algorithms
have been used extensively for the construction site layout problem.
This paper proposes an ant colony optimization model for
construction site layout. A simple case study for a highway project is
utilized to illustrate the application of the model.
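A toy ant colony construction for a sequencing problem of this kind might look as follows; the cost matrix is invented, and pheromone evaporation is omitted for brevity:

```python
# Toy ant colony optimisation for a sequencing version of site layout:
# assign 3 facilities to 3 slots, minimising a made-up cost matrix.
# (Illustrative only; the paper's model and case study are not shown.)
import random

cost = [[4, 2, 9],     # cost[f][s]: cost of facility f placed in slot s
        [3, 8, 1],
        [6, 5, 7]]
n = 3
tau = [[1.0] * n for _ in range(n)]     # pheromone on (facility, slot)

def build_assignment(rng):
    slots = list(range(n))
    assign = []
    for f in range(n):                  # sequence facilities into slots
        weights = [tau[f][s] / cost[f][s] for s in slots]
        s = rng.choices(slots, weights=weights)[0]
        slots.remove(s)
        assign.append(s)
    return assign

rng = random.Random(1)
best, best_cost = None, float("inf")
for _ in range(200):                    # ants construct and reinforce
    a = build_assignment(rng)
    c = sum(cost[f][a[f]] for f in range(n))
    if c < best_cost:
        best, best_cost = a, c
    for f in range(n):                  # deposit pheromone inversely to cost
        tau[f][a[f]] += 1.0 / c
print(best, best_cost)
```

Treating the layout as a sequencing problem — facilities choose slots one after another, guided by pheromone — is what makes the site layout problem amenable to the ant colony machinery.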
Abstract: In this paper we propose a novel methodology for
extracting a road network and its nodes from satellite images of
Algeria.
This technique extends our previous research work. It is founded
on information theory and mathematical morphology, which are
combined to extract and link the road segments to form a road
network and its nodes.
We therefore have to define objects as sets of pixels and to study
the shape of these objects and the relations that exist between them.
In this approach, geometric and radiometric features of roads are
integrated by a cost function and a set of selected points of a crossing
road. Its performance was tested on satellite images of Algeria.
Abstract: This paper presents the development of a robot car
that can track the motion of an object by detecting its color through
an Android device. The employed computer vision algorithm uses the
OpenCV library, which is embedded into an Android application of a
smartphone, for manipulating the captured image of the object. The
captured image of the object is subjected to color conversion and is
transformed to a binary image for further processing after color
filtering. The desired object is clearly determined after removing
pixel noise by applying image morphology operations and contour
definition. Finally, the area and the center of the object are
determined so that the object's motion can be tracked. The smartphone
application has been placed on a robot car and transmits motion
directives via Bluetooth to an Arduino assembly so that the car
follows objects of a specified color. The experimental evaluation of the
proposed algorithm shows reliable color detection and smooth
tracking characteristics.
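The color filtering and area/centroid steps can be sketched without OpenCV as follows; this is a simplification of the HSV-based pipeline, and the tolerance-based RGB threshold is an assumption:

```python
# Sketch of the colour-filtering stage: threshold pixels near a target
# colour into a binary mask, then take the mask's area and centroid
# (stand-ins for the OpenCV contour steps described above).
def track_colour(image, target, tol=30):
    ys, xs = [], []
    for y, row in enumerate(image):
        for x, px in enumerate(row):
            if all(abs(c - t) <= tol for c, t in zip(px, target)):
                ys.append(y)
                xs.append(x)
    if not xs:
        return 0, None
    return len(xs), (sum(xs) / len(xs), sum(ys) / len(ys))

red, black = (255, 0, 0), (0, 0, 0)
img = [[black] * 5 for _ in range(5)]
img[2][3] = img[2][4] = red                     # small red "object"
area, center = track_colour(img, red)
print(area, center)  # -> 2 (3.5, 2.0)
```

The centroid is what the robot steers toward, and the area gives a crude distance cue (larger area, closer object).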
Abstract: With the growth of computers and networks, digital
data can be spread anywhere in the world quickly. In addition,
digital data can also be copied or tampered easily so that the security
issue becomes an important topic in the protection of digital data.
A digital watermark is a method of protecting the ownership of digital
data. Embedding a watermark inevitably affects image quality. In this
paper, Vector Quantization (VQ) is used to embed the watermark into
the image to fulfill the goal of data hiding. This kind of
watermarking is invisible, meaning that users are not aware of the
existence of the embedded watermark, even though the embedded image
differs only slightly from the original image. However, VQ imposes a
heavy computational burden, so we adopt a fast VQ encoding scheme
based on partial distortion search (PDS) and a mean approximation
scheme to speed up the data hiding process.
The watermarks hidden in the image can be grayscale, bi-level or
color images; text can also be embedded as a watermark. In order to
test the robustness of the system, we use Photoshop to apply
sharpening, cropping and alteration, and check whether the extracted
watermark is still recognizable. Experimental results demonstrate that
the proposed system can resist the above three kinds of tampering in
general cases.
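The partial distortion search idea, which abandons a codeword as soon as its running distortion sum exceeds the best found so far, can be sketched as:

```python
# Sketch of partial distortion search (PDS) for VQ: while summing the
# squared error against a codeword, abort as soon as the running sum
# exceeds the best distortion found so far.
def pds_nearest(vector, codebook):
    best_i, best_d = 0, float("inf")
    for i, cw in enumerate(codebook):
        d = 0.0
        for v, c in zip(vector, cw):
            d += (v - c) ** 2
            if d >= best_d:          # partial distortion already too large
                break
        else:                        # only reached if no early abort
            best_i, best_d = i, d
    return best_i, best_d

codebook = [[0, 0, 0, 0], [10, 10, 10, 10], [9, 9, 9, 9]]
print(pds_nearest([9, 9, 8, 9], codebook))  # -> (2, 1.0)
```

PDS returns the same nearest codeword as a full search but skips most of the arithmetic for poor candidates, which is what speeds up the embedding loop.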
Abstract: Micro-electromechanical system (MEMS)
accelerometers and gyroscopes are suitable for the inertial navigation
system (INS) of many applications due to their low price, small
dimensions and light weight. Their main disadvantage compared with
classical sensors is worse long-term stability. The estimation
accuracy is mostly affected by the time-dependent growth of inertial
sensor errors, especially the stochastic errors. In order to eliminate
the negative effects of these random errors, they must be accurately
modeled. In this paper, the Allan variance technique will be used in
modeling the stochastic errors of the inertial sensors. By performing
a simple operation on the entire length of data, a characteristic curve
is obtained whose inspection provides a systematic characterization
of various random errors contained in the inertial-sensor output data.
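The non-overlapping Allan variance underlying this characterization can be sketched as follows; the white-noise input is synthetic, standing in for real sensor data:

```python
# Sketch of the non-overlapping Allan variance: split the signal into
# clusters of length m, average each cluster, and take half the mean
# squared difference of successive cluster averages.
def allan_variance(data, m):
    k = len(data) // m
    means = [sum(data[i*m:(i+1)*m]) / m for i in range(k)]
    diffs = [(means[i+1] - means[i]) ** 2 for i in range(k - 1)]
    return 0.5 * sum(diffs) / len(diffs)

import random
rng = random.Random(0)
white = [rng.gauss(0, 1) for _ in range(4096)]  # white-noise-like sensor data
# For white noise the Allan variance falls roughly as 1/m, i.e. a -1/2
# slope of Allan deviation versus cluster time on a log-log plot.
print(allan_variance(white, 4) > allan_variance(white, 64))  # -> True
```

Plotting the Allan deviation against cluster time produces the characteristic curve the abstract mentions; the slopes of its segments identify the individual random error terms (angle random walk, bias instability, rate random walk).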