Abstract: Durian is the flagship fruit of Mindanao, where an abundance
of cultivars circulates under many confusing identities and names.
The project was conducted to develop a procedure for the reliable and
rapid detection and sorting of durian planting materials. It also aimed
to establish specific genetic (DNA) markers for the routine testing and
authentication of durian cultivars in question.
The project developed molecular procedures for routine testing.
SSR primers were also screened and identified for their utility in
discriminating the durian cultivars collected.
Results of the study showed the following accomplishments:
1. Twenty-nine (29) SSR primers were selected and identified based on
their ability to discriminate durian cultivars;
2. A standard procedure for the identification and authentication of
durian cultivars was optimized and established;
3. Genetic profiles of durian cultivars are now available at the Biotech Unit.
Our results demonstrate the relevance of molecular techniques in
evaluating and identifying durian clones. The most polymorphic primers
tested in this study could be useful tools for detecting variation even
at an early stage of plant growth, especially for commercial purposes.
The process developed combines the efficiency of microsatellite
development with an optimized non-radioactive detection process,
resulting in a user-friendly protocol that can be performed in two (2)
weeks and easily adopted by laboratories about to start microsatellite
development projects. This can be of great importance for extending
microsatellite analyses to other crop species for which minimal genetic
information is currently available. With this, the University can now
serve as a service laboratory for the routine testing and
authentication of durian clones.
Abstract: Microscopic simulation toolkits allow both railway operations
and the preceding timetable production process to be considered.
Block occupation conflicts on both
process levels are often solved by using defined train priorities. These
conflict resolutions (dispatching decisions) generate reactionary
delays to the involved trains. The sum of reactionary delays is
commonly used to evaluate the quality of railway operations, which
describes the timetable robustness. It is either compared to an
acceptable train performance or the delays are appraised
economically by linear monetary functions. It is impossible to
adequately evaluate dispatching decisions without a well-founded
objective function. This paper presents a new approach for the
evaluation of dispatching decisions. The approach uses mode choice
models and considers the behaviour of the end-customers. These
models evaluate the reactionary delays in more detail and consider
other competing modes of transport. The new approach pursues the
coupling of a microscopic model of railway operations with the
macroscopic choice mode model. At first, it will be implemented for
railway operations process but it can also be used for timetable
production. The evaluation considers the possibility for the customer
to interchange to other transport modes. The new approach starts to
look at rail and road, but it can also be extended to air travel. The
result of mode choice models is the modal split. The reactions by the
end-customers have an impact on the revenue of the train operating
companies. Different travel purposes imply different willingness to pay
and different tolerance of late running. Aside from changes to
revenues, longer journey times can also generate additional costs.
The costs are either time- or track-specific and arise from required
changes to rolling stock or train crew cycles. Only the variable values
are summarised in the contribution margin, which is the base for the
monetary evaluation of delays. The contribution margin is calculated
for different possible solutions to the same conflict. The conflict
resolution is optimised until the monetary loss becomes minimal. The
iterative process therefore determines an optimum conflict resolution
by monitoring the change to the contribution margin. Furthermore, a
monetary value of each dispatching decision can also be derived.
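The evaluation loop described above can be sketched in code. The binary logit form, the fare, demand, delay-sensitivity and cost figures, and the two candidate resolutions below are all illustrative assumptions, not values from the paper:

```python
import math

def mode_share_rail(delay_min, beta=0.08, u_road=-1.5):
    """Binary logit: share of end-customers staying with rail rather
    than switching to road, given a reactionary delay in minutes.
    beta (delay sensitivity) and u_road are assumed parameters."""
    u_rail = -beta * delay_min
    return math.exp(u_rail) / (math.exp(u_rail) + math.exp(u_road))

def contribution_margin(delays, fare=20.0, demand=100, cost_per_min=3.0):
    """Retained revenue under the predicted modal split, minus
    time-specific delay costs (crew / rolling stock circulation)."""
    revenue = sum(fare * demand * mode_share_rail(d) for d in delays)
    costs = sum(cost_per_min * d for d in delays)
    return revenue - costs

def best_resolution(candidates):
    """Pick the conflict resolution whose reactionary-delay pattern
    maximises the contribution margin (i.e. minimal monetary loss)."""
    return max(candidates, key=lambda c: contribution_margin(c["delays"]))

# Two hypothetical resolutions of the same block occupation conflict.
CANDIDATES = [
    {"name": "hold freight", "delays": [2.0, 0.0]},
    {"name": "hold passenger", "delays": [0.0, 9.0]},
]
```

In the iterative scheme described above, the candidate set would be regenerated by the microscopic operations model until the contribution margin stops improving.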
Abstract: Evolutionary optimization methods such as genetic
algorithms have been used extensively for the construction site layout
problem. More recently, ant colony optimization algorithms, which
are evolutionary methods based on the foraging behavior of ants,
have been successfully applied to benchmark combinatorial
optimization problems. This paper proposes a formulation of the site
layout problem in terms of a sequencing problem that is suitable for
solution using an ant colony optimization algorithm.
In the construction industry, site layout is a very important
planning problem. The objective of site layout is to position
temporary facilities both geographically and at the correct time such
that the construction work can be performed satisfactorily with
minimal costs and improved safety and working environment. During
the last decade, evolutionary methods such as genetic algorithms
have been used extensively for the construction site layout problem.
This paper proposes an ant colony optimization model for
construction site layout. A simple case study for a highway project is
utilized to illustrate the application of the model.
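A minimal sketch of such a model: facilities are assigned to candidate locations by artificial ants guided by a pheromone matrix, and the assignment cost is the frequency-weighted travel distance. The instance and parameter values below are invented for illustration and are far simpler than a real highway project:

```python
import random

# Illustrative instance: three facilities, three collinear candidate sites.
DIST = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]    # site-to-site distances
FREQ = [[0, 10, 1], [10, 0, 1], [1, 1, 0]]  # inter-facility trip frequencies

def layout_cost(assign, dist, freq):
    """Cost of placing facility i at site assign[i]:
    sum of trip frequency times distance over all facility pairs."""
    n = len(assign)
    return sum(freq[i][j] * dist[assign[i]][assign[j]]
               for i in range(n) for j in range(n))

def aco_layout(dist, freq, ants=20, iters=50, rho=0.1, seed=1):
    """Ant colony optimisation sketch: tau[f][s] is the pheromone for
    assigning facility f to site s; ants build assignments by weighted
    sampling, and the best-so-far assignment is reinforced."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]
    best, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            free, assign = list(range(n)), []
            for f in range(n):
                s = rng.choices(free, weights=[tau[f][x] for x in free])[0]
                free.remove(s)
                assign.append(s)
            c = layout_cost(assign, dist, freq)
            if c < best_cost:
                best, best_cost = assign, c
        for f in range(n):            # evaporation
            for s in range(n):
                tau[f][s] *= 1.0 - rho
        for f, s in enumerate(best):  # reinforce best-so-far assignment
            tau[f][s] += 1.0 / best_cost
    return best, best_cost
```

The pheromone update (evaporation plus reinforcement of the best assignment) is what distinguishes the ant colony approach from plain random search.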
Abstract: Cloud computing is the innovative and leading
information technology model for enabling convenient, on-demand
network access to a shared pool of configurable computing resources
that can be rapidly provisioned and released with minimal
management effort. In this paper, we aim at the development of
workflow management system for cloud computing platforms based
on our previous research on the dynamic allocation of the cloud
computing resources and its workflow processes. We took advantage of
HTML5 technology and developed a web-based workflow interface.
To enable many tasks running on the cloud platform to be combined in
sequence, we designed a mechanism and developed an execution engine
for workflow management on clouds. We also established a prediction
model, integrated with the job queuing system, to estimate the waiting
time and cost of individual tasks on different computing nodes, thereby
helping users achieve maximum performance at the lowest cost. This
proposed effort has the potential to provide an efficient, resilient
and elastic environment for cloud computing platforms. This development also helps boost user
productivity by promoting a flexible workflow interface that lets users
design and control their tasks' flow from anywhere.
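The node-selection idea behind the prediction model can be sketched as follows; the queueing heuristic, the node fields, and the prices are assumptions for illustration, not the paper's actual model:

```python
def predict_wait(queued, avg_runtime, slots):
    """Assumed heuristic: queued jobs, each taking avg_runtime hours,
    are served across `slots` parallel slots."""
    return queued * avg_runtime / slots

def pick_node(task_hours, nodes, deadline):
    """Choose the cheapest node whose predicted finish time (wait + run)
    meets the deadline; if none qualifies, fall back to the earliest
    finish, mirroring 'maximum performance at the lowest cost'."""
    scored = []
    for n in nodes:
        finish = predict_wait(n["queued"], n["avg_runtime"], n["slots"]) + task_hours
        cost = task_hours * n["price_per_hour"]
        scored.append((finish <= deadline, cost, finish, n["name"]))
    feasible = [s for s in scored if s[0]]
    if feasible:
        return min(feasible, key=lambda s: s[1])[3]  # cheapest feasible
    return min(scored, key=lambda s: s[2])[3]        # else earliest finish

# Hypothetical nodes: A is cheap but busy, B is idle but expensive.
NODES = [
    {"name": "A", "queued": 10, "avg_runtime": 1.0, "slots": 2, "price_per_hour": 0.5},
    {"name": "B", "queued": 0, "avg_runtime": 1.0, "slots": 4, "price_per_hour": 2.0},
]
```

With a tight deadline the expensive idle node wins; with a loose one the cheap busy node does, which is the trade-off the prediction model is meant to surface.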
Abstract: In this study, a computational fluid dynamics (CFD)
model has been developed for studying the effect of the surface
roughness profile on the elastohydrodynamic lubrication (EHL) problem.
The cylinders' contact geometry, meshing, and solution of the mass and
momentum conservation equations are carried out using the commercial software
packages ICEMCFD and ANSYS Fluent. The user defined functions
(UDFs) for density, viscosity and elastic deformation of the cylinders
as the functions of pressure and temperature are defined for the CFD
model. Three different surface roughness profiles are created and
incorporated into the CFD model. It is found that the developed CFD
model can predict the characteristics of fluid flow and heat transfer in
the EHL problem, including the main parameters such as pressure
distribution, minimal film thickness, viscosity, and density changes.
The results obtained show that the pressure profile at the center of the
contact area is directly related to the roughness amplitude. A rough
surface with a kurtosis value greater than 3 has a stronger influence
on the fluctuating shape of the pressure distribution than the other cases.
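Since the reported pressure behaviour is keyed to the kurtosis of the roughness profile, the statistic is worth stating concretely. A sketch using the population-moment (Pearson) definition, under which a Gaussian height distribution gives a value near 3:

```python
def kurtosis(heights):
    """Pearson kurtosis m4 / m2^2 of a surface height profile.
    Values above 3 (leptokurtic) indicate sharp, spiky asperities,
    the case the study links to stronger pressure fluctuation."""
    n = len(heights)
    mean = sum(heights) / n
    m2 = sum((h - mean) ** 2 for h in heights) / n
    m4 = sum((h - mean) ** 4 for h in heights) / n
    return m4 / m2 ** 2
```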
Abstract: Robotic surgery is used to enhance minimally invasive
surgical procedures. It provides greater degrees of freedom for surgical
tools but lacks a haptic feedback system to provide a sense of touch to
the surgeon. Surgical robots work on master-slave operation, where
the user is the master and the robotic arms are the slaves. Currently, surgical
robots provide precise control of the surgical tools, but heavily rely
on visual feedback, which sometimes causes damage to the inner
organs. The goal of this research was to design and develop a
real-time Simulink-based robotic system to study the force feedback
mechanism during instrument-object interaction. The setup includes three
Velmex XSlide assemblies (XYZ stage) for three-dimensional
movement, an end effector assembly for forceps, electronic circuit for
four strain gages, two Novint Falcon 3D gaming controllers,
microcontroller board with linear actuators, MATLAB and Simulink
toolboxes. Strain gages were calibrated using Imada Digital Force
Gauge device and tested with a hard-core wire to measure
instrument-object interaction in the range of 0-35 N. The designed
Simulink model successfully acquires 3D coordinates from the two
Novint Falcon controllers and transfers them to the XYZ stage
and forceps. The Simulink model also reads the strain gage signals in
real time through the 10-bit analog-to-digital converter of the
microcontroller assembly, converts voltage into force, and feeds the
output signals back to the Novint Falcon controllers for the force
feedback mechanism. The experimental setup allows the user to change forward
kinematics algorithms to achieve the best-desired movement of the
XYZ stage and forceps. This project combines haptic technology
with surgical robot to provide sense of touch to the user controlling
forceps through machine-computer interface.
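The signal path from strain gage to force value can be sketched as below; the 5 V reference and the linear calibration slope/offset are placeholder assumptions (the study calibrated against an Imada digital force gauge over 0-35 N):

```python
ADC_BITS = 10
V_REF = 5.0  # assumed ADC reference voltage of the microcontroller

def adc_to_voltage(raw):
    """Convert a 10-bit ADC reading (0..1023) to volts."""
    if not 0 <= raw < 2 ** ADC_BITS:
        raise ValueError("reading outside 10-bit range")
    return raw * V_REF / (2 ** ADC_BITS - 1)

def voltage_to_force(volts, slope=7.0, offset=0.0):
    """Linear strain-gage calibration (placeholder slope/offset),
    clamped to the instrument-object interaction range 0-35 N."""
    return max(0.0, min(35.0, slope * volts + offset))
```

In the real system, this converted force would be fed back to the Novint Falcon controllers to render the sense of touch.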
Abstract: Constructing a portfolio of investments is one of the
most significant financial decisions facing individuals and
institutions. In accordance with modern portfolio theory, maximization
of return at minimal risk should be the investment goal of any
successful investor. In addition, the costs incurred when
setting up a new portfolio or rebalancing an existing portfolio must
be included in any realistic analysis.
In this paper rebalancing an investment portfolio in the presence of
transaction costs on the Croatian capital market is analyzed. The
model applied in the paper is an extension of the standard portfolio
mean-variance optimization model in which transaction costs are
incurred to rebalance an investment portfolio. This model allows
different costs for different securities, and different costs for buying
and selling. To find an efficient portfolio using this model, first a
quadratic programming problem of a size similar to the Markowitz model,
and then a linear programming problem, have to be solved.
Furthermore, the impact of
transaction costs on the efficient frontier is investigated. Moreover,
it is shown that the global minimum variance portfolio on the efficient
frontier always has the same level of risk regardless of the amount of
transaction costs. Although the position of the efficient frontier
depends on both the amount of transaction costs and the initial
portfolio, it can be concluded that the extreme right portfolio on the
efficient frontier always contains only one stock: the one with the
highest expected return and the highest risk.
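The flavour of the model can be conveyed with a deliberately small stand-in: two assets, asymmetric proportional buy/sell costs, and a grid search in place of the quadratic-plus-linear programming solution. All numbers are invented for illustration:

```python
MU = [0.10, 0.05]                 # expected returns (illustrative)
COV = [[0.04, 0.0], [0.0, 0.01]]  # covariance matrix (illustrative)

def portfolio_stats(w, mu, cov):
    """Expected return and variance of a weight vector."""
    n = len(w)
    ret = sum(w[i] * mu[i] for i in range(n))
    var = sum(w[i] * w[j] * cov[i][j] for i in range(n) for j in range(n))
    return ret, var

def rebalance(w0, mu, cov, risk_aversion, cost_buy, cost_sell, step=0.01):
    """Maximise return - risk_aversion * variance - transaction costs
    over two-asset portfolios (weights sum to 1), charging different
    proportional costs for buying and for selling relative to w0."""
    best_w, best_obj = w0, -float("inf")
    w1 = 0.0
    while w1 <= 1.0 + 1e-9:
        w = [w1, 1.0 - w1]
        ret, var = portfolio_stats(w, mu, cov)
        cost = sum(cost_buy[i] * (w[i] - w0[i]) if w[i] > w0[i]
                   else cost_sell[i] * (w0[i] - w[i]) for i in range(2))
        obj = ret - risk_aversion * var - cost
        if obj > best_obj:
            best_w, best_obj = w, obj
        w1 += step
    return best_w, best_obj
```

With zero costs the grid recovers the classical mean-variance optimum; with 5% costs on both legs of the trade, staying at the initial portfolio becomes optimal, which is the qualitative effect of transaction costs on the efficient frontier.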
Abstract: Random epistemologies and hash tables have garnered
minimal interest from both security experts and experts in the last
several years. In fact, few information theorists would disagree with
the evaluation of expert systems. In our research, we discover how
flip-flop gates can be applied to the study of superpages. Though
such a hypothesis at first glance seems perverse, it is derived from
known results.
Abstract: At certain depths during large diameter displacement
pile driving, rebound well over 0.25 inches was experienced,
followed by a small permanent set during each hammer blow. High
pile rebound (HPR) soils may stop the pile driving and result in
limited pile capacity. In some cases, rebound leads to pile damage,
delays the construction project, and requires redesign of the
foundations. HPR was evaluated at seven Florida sites during the driving of
square precast, prestressed concrete piles driven into saturated, fine
silty to clayey sands and sandy clays. Pile Driving Analyzer (PDA)
deflection-versus-time data recorded during installation were used to
develop correlations between cone penetrometer (CPT) pore-water
pressures, pile displacements and rebound. At five sites where piles
experienced excessive HPR with minimal set, the pore pressure
yielded very high positive values of greater than 20 tsf. However, at
the site where the pile rebounded, followed by an acceptable
permanent-set, the measured pore pressure ranged between 5 and 20
tsf. The pore pressure exhibited values of less than 5 tsf at the site
where no rebound was noticed. In summary, direct correlations
between CPTu pore pressure and rebound were produced, allowing
identification of soils that produce HPR.
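The reported thresholds amount to a simple screening rule; the cutoffs come straight from the abstract, while the labels are ours:

```python
def classify_rebound(pore_pressure_tsf):
    """Screen CPTu pore pressure (tsf) against the thresholds reported
    for the Florida sites: > 20 tsf -> excessive HPR with minimal set,
    5-20 tsf -> rebound with acceptable set, < 5 tsf -> no rebound."""
    if pore_pressure_tsf > 20:
        return "high rebound, minimal set"
    if pore_pressure_tsf >= 5:
        return "rebound with acceptable set"
    return "no significant rebound"
```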
Abstract: We present a solution to the Maxmin u/E parameter
estimation problem for possibility distributions in the m-dimensional
case. Our method is based on a geometrical approach in which a
minimal-area enclosing ellipsoid is constructed around the sample. We
also demonstrate that one can improve the results of well-known
algorithms in the fuzzy model identification task using Maxmin u/E
parameter estimation.
Abstract: Factors affecting construction unit cost vary
depending on a country’s political, economic, social and
technological inclinations. Factors affecting construction costs have
been studied from various perspectives. Analysis of cost factors
requires an appreciation of a country’s practices. Identified cost
factors provide an indication of a country’s construction economic
strata. The purpose of this paper is to identify the essential factors
that affect unit cost estimation and their breakdown using artificial
neural networks. Twenty-five (25) identified cost factors in road
construction were subjected to a questionnaire survey and, employing
SPSS factor analysis, reduced to eight. These eight factors were
analysed using a neural network (NN) to determine the proportionate
breakdown of the cost factors in a given construction unit rate. The
NN predicted that the political environment accounted for 44% of the
unit rate, followed by contractor capacity at 22% and financial
delays, project feasibility and overheads & profit each at 11%. Project
location, material availability and corruption perception index had
minimal impact on the unit cost from the training data provided.
Quantified cost factors can be incorporated in unit cost estimation
models (UCEM) to produce more accurate estimates. This can create
improvements in the cost estimation of infrastructure projects and
establish a benchmark standard to assist the process of aligning
work practices and training new staff, permitting the on-going
development of more effective best practices in cost estimation.
Abstract: Artificial Neural Network (ANN) can be trained using
back propagation (BP). It is the most widely used algorithm for
supervised learning with multi-layered feed-forward networks.
Efficient learning by the BP algorithm is required for many practical
applications. The BP algorithm calculates the weight changes of
artificial neural networks, and a common approach is to use a two-term
algorithm consisting of a learning rate (LR) and a momentum
factor (MF). The major drawbacks of the two-term BP learning
algorithm are the problems of local minima and slow convergence
speeds, which limit the scope for real-time applications. Recently the
addition of an extra term, called a proportional factor (PF), to the
two-term BP algorithm was proposed. The third term increases the speed
of the BP algorithm. However, the PF term can also impair the
convergence of the BP algorithm, and criteria for evaluating
convergence are required to facilitate the application of the
three-term BP algorithm. Although these two issues seem to be closely
related, as described later, we summarize various improvements to
overcome the drawbacks. Here we compare the different methods of
convergence of the new three-term BP algorithm.
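The update rule under discussion can be written compactly. The exact form of the proportional-factor term varies across proposals, so the version below (a term proportional to the current output error) and all coefficient values are illustrative assumptions:

```python
def three_term_update(w, grad, prev_dw, error, lr=0.1, mf=0.8, pf=0.01):
    """One three-term BP weight update: a learning-rate (LR) term on
    the gradient, a momentum-factor (MF) term on the previous change,
    and a proportional-factor (PF) term on the output error.
    Setting pf=0 recovers the classical two-term LR+MF rule."""
    dw = -lr * grad + mf * prev_dw + pf * error
    return w + dw, dw
```

On a convex error surface the momentum term accelerates progress while the PF term adds an extra push proportional to the remaining error, which is where the convergence criteria discussed above become necessary.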
Abstract: Ontologies offer a means for representing and sharing
information in many domains, particularly in complex domains. For
example, they can be used to represent and share the information of a
System Requirement Specification (SRS) of complex systems, such as
the SRS of ERTMS/ETCS written in natural language. Since
this system is a real-time, critical system, generic ontologies,
such as OWL and generic ERTMS ontologies, provide minimal
support for modeling the temporal information omnipresent in these SRS
documents. To support the modeling of temporal information, one
of the challenges is to enable representation of dynamic features
evolving in time within a generic ontology with a minimal redesign
of it. The separation of temporal information from other information
can help to predict system runtime operation and to properly design
and implement them. In addition, it is helpful to provide reasoning
and querying techniques over the temporal information represented in
the ontology in order to detect potential temporal
inconsistencies. To address this challenge, we propose a lightweight
3-layer temporal Quality of Service (QoS) ontology for representing,
reasoning and querying over temporal and non-temporal information
in a complex domain ontology. Representing QoS entities in separate
layers clarifies the distinction between the non-QoS entities
and the QoS entities in an ontology. The upper generic layer of
the proposed ontology provides intuitive knowledge of domain
components, especially ERTMS/ETCS components. The separation of
the intermediate QoS layer from the lower QoS layer allows us to
focus on specific QoS Characteristics, such as temporal or integrity
characteristics. In this paper, we focus on temporal information that
can be used to predict system runtime operation. To evaluate our
approach, an example of the proposed domain ontology for handover
operation, as well as a reasoning rule over temporal relations in this
domain-specific ontology, are presented.
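As a minimal illustration of the kind of temporal reasoning rule evaluated here, consider Allen's "before" relation checked over a hypothetical handover sequence (the event names and times are invented for illustration, not taken from the ERTMS/ETCS SRS):

```python
def interval_before(a, b):
    """Allen 'before' relation: interval a = (start, end) ends strictly
    before interval b starts."""
    return a[1] < b[0]

def check_temporal_rule(intervals, rule):
    """A reasoning rule given as ordered pairs of event names that must
    satisfy 'before'; returns False on any temporal inconsistency."""
    return all(interval_before(intervals[x], intervals[y]) for x, y in rule)

# Hypothetical handover events with (start, end) times.
HANDOVER = {"announce": (0, 2), "authorise": (3, 5), "execute": (6, 9)}
RULE = [("announce", "authorise"), ("authorise", "execute")]
```

A rule violation (e.g. execution starting before authorisation ends) is exactly the kind of temporal inconsistency the proposed reasoning layer is meant to detect.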
Abstract: In MANET, mobile nodes communicate with each
other using the wireless channel where transmission takes place with
significant interference. The wireless medium used in MANET is a
shared resource used by all the nodes available in MANET. Packet
reserving is one important resource management scheme which
controls the allocation of bandwidth among multiple flows through
node cooperation in MANET. This paper proposes packet reserving
and clogging control via Routing Aware Packet Reserving (RAPR)
framework in MANET. It mainly focuses on the end-to-end routing
condition with maximal throughput. RAPR is a complementary system
in which the packet reserving utilizes the local routing information
available in each node. Path setup in RAPR estimates the security
level of the system and characterizes the end-to-end routing by
controlling the clogging. RAPR delivers packets to the destination
with a high probability ratio and a minimal delay count. The standard
performance measures such as network security level,
communication overhead, end-to-end throughput, resource utilization
efficiency and delay measure are considered in this work. The results
reveal that the proposed packet reservation and clogging control via
the Routing Aware Packet Reserving (RAPR) framework performs well
on the above performance measures compared to existing
methods.
Abstract: Diabetes is a growing health problem worldwide.
In particular, patients with Type 1 diabetes need strict glycemic
control because they have a deficiency of insulin production. This
paper attempts to control blood glucose based on a mathematical model
of the body. The Bergman minimal mathematical model is used to
develop the nonlinear controller. A novel back-stepping based sliding
mode control (B-SMC) strategy is proposed as a solution that
guarantees practical tracking of a desired glucose concentration. In
order to show the performance of the proposed design, it is compared
with the conventional linear and fuzzy controllers developed in
previous research. The numerical simulation results show the
advantages of the back-stepping sliding mode controller design over
linear and fuzzy controllers.
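For concreteness, the plant being controlled can be sketched as a forward-Euler step of the Bergman minimal model; the parameter values are typical literature figures rather than those used in the paper, and the B-SMC controller itself is omitted:

```python
def bergman_step(G, X, I, u, dt=1.0,
                 p1=0.028, p2=0.025, p3=1.3e-5, n=0.09,
                 Gb=80.0, Ib=7.0):
    """One Euler step of the Bergman minimal model.
    G: plasma glucose (mg/dL), X: remote insulin action,
    I: plasma insulin (mU/L), u: insulin infusion input.
    dG/dt = -p1 (G - Gb) - X G
    dX/dt = -p2 X + p3 (I - Ib)
    dI/dt = -n (I - Ib) + u"""
    dG = -p1 * (G - Gb) - X * G
    dX = -p2 * X + p3 * (I - Ib)
    dI = -n * (I - Ib) + u
    return G + dt * dG, X + dt * dX, I + dt * dI
```

A controller such as the proposed B-SMC would choose u at each step to drive G toward the desired glucose concentration.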
Abstract: Artificial Neural Networks (ANN) trained using the
back-propagation (BP) algorithm are commonly used for modeling
material behavior associated with non-linear, complex or unknown
interactions among the material constituents. Despite multidisciplinary
applications of back-propagation neural networks
(BPNN), the BP algorithm possesses the inherent drawback of
getting trapped in local minima and slowly converging to a global
optimum. This paper presents a hybrid artificial neural network and
genetic algorithm approach for modeling the slump of ready-mix
concrete based on its design mix constituents. A genetic algorithm
(GA) global search is employed to evolve the initial weights and
biases for training the neural networks, which are further fine-tuned
using the BP algorithm. The study showed that the hybrid ANN-GA
model provided more consistent predictions than the commonly used
BPNN model and reached the desired performance goal more quickly.
Apart from modeling the slump of ready-mix concrete, the synaptic
weights of the neural networks were harnessed to analyze the relative
importance of the concrete design mix constituents on the slump value.
The sand and water constituents of the concrete design mix were
found to exhibit maximum importance on the concrete slump value.
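The hybrid scheme can be illustrated on a deliberately tiny stand-in problem: a single linear neuron fitted to synthetic data, with a GA supplying the initial weights and gradient descent (BP-style) fine-tuning them. Population sizes, mutation scale, and learning rate are illustrative assumptions, not the paper's settings:

```python
import random

DATA = [(x, 2 * x + 1) for x in (-2, -1, 0, 1, 2)]  # synthetic y = 2x + 1

def loss(w, b, data):
    """Mean squared error of the linear neuron y = w*x + b."""
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

def ga_init(data, pop=30, gens=40, seed=0):
    """GA global search for initial (w, b): elitist selection with
    Gaussian mutation; returns the fittest individual found."""
    rng = random.Random(seed)
    popn = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda p: loss(p[0], p[1], data))
        elite = popn[: pop // 5]
        popn = elite + [(e[0] + rng.gauss(0, 0.3), e[1] + rng.gauss(0, 0.3))
                        for e in rng.choices(elite, k=pop - len(elite))]
    return min(popn, key=lambda p: loss(p[0], p[1], data))

def bp_finetune(w, b, data, lr=0.05, epochs=200):
    """BP-style gradient-descent fine-tuning from the GA seed."""
    for _ in range(epochs):
        gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        w, b = w - lr * gw, b - lr * gb
    return w, b
```

The division of labour mirrors the study's design: the GA explores globally to escape poor basins, and BP then converges quickly from the evolved starting point.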
Abstract: An optimisation method using both global and local
optimisation is implemented to determine the flapping profile which
will produce the most lift for an experimental wing-actuation system.
The optimisation method is tested using a numerical quasi-steady
analysis. Results of an optimised flapping profile show a 20% increase
in lift generated as compared to flapping profiles obtained by high
speed cinematography of a Sympetrum frequens dragonfly. The initial
optimisation procedure required 3166 objective function evaluations.
The global optimisation parameters (initial sample size and stage
one sample size) were altered to reduce the number of function
evaluations. Altering the stage one sample size had no significant
effect. It was found that reducing the initial sample size to 400
would allow a reduction in computational effort to approximately
1500 function evaluations without compromising the global solver's
ability to locate potential minima. To further reduce the optimisation
effort required, we increased the local solver's convergence tolerance
criterion. An increase in the tolerance from 0.02 N to 0.05 N decreased
the number of function evaluations by another 20%. However, this
potentially reduces the maximum obtainable lift by up to 0.025 N.
Abstract: Termites have been observed as major pre-colonisation and post-colonisation insect pests of honeybees' wooden hives in Nigeria. However, pest studies in modern beekeeping have largely been directed towards those pests that affect the honeybees rather than the wooden structure that houses them, or towards the influence of seasons on the pests' activities against the hives. This study therefore investigated the influence of seasons on the intensity of termite attacks on hives over 2 years at the University of Port Harcourt, Rivers State, using visual inspection. The experimental apiary was established with 15 Kenyan top-bar hives made of Triplochiton scleroxylon wood that were strategically placed and observed within the Department of Forestry and Wildlife Management arboretum. The colonised hives consistently showed comparatively lower termite infestation levels in the dry season and, consequently, fewer attacks. The results indicated the rainy season as a distinct period of more destructive termite activity on the hives, strongly associated with the dryness of the hives. Since previous studies and observations have linked colonisation with the dry season, coupled with minimal attacks on colonised hives, non-colonised hives should be removed from the field at the onset of the rainy season and returned two weeks prior to the dry season to reduce hive degradation by pests.
Abstract: In a perfect secret-sharing scheme, a dealer distributes
a secret among a set of participants in such a way that only qualified
subsets of participants can recover the secret and the joint share of the
participants in any unqualified subset is statistically independent of
the secret. The access structure of the scheme refers to the collection
of all qualified subsets. In graph-based access structures, each vertex
of a graph G represents a participant and each edge of G represents a
minimal qualified subset. The average information ratio of a perfect
secret-sharing scheme realizing a given access structure is the ratio
of the average length of the shares given to the participants to the
length of the secret. The infimum of the average information ratio
of all possible perfect secret-sharing schemes realizing an access
structure is called the optimal average information ratio of that access
structure. We study the optimal average information ratio of the
access structures based on bipartite graphs. Based on some previous
results, we give a bound on the optimal average information ratio
for all bipartite graphs of girth at least six. This bound is the best
possible for some classes of bipartite graphs using our approach.
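In symbols (our notation, with P the participant set, S the secret, S_p the share of participant p, and H the Shannon entropy):

```latex
\mathrm{AR}(\Sigma) = \frac{\frac{1}{|P|}\sum_{p\in P} H(S_p)}{H(S)},
\qquad
\mathrm{AR}^{*}(\Gamma) = \inf_{\Sigma\ \mathrm{realizes}\ \Gamma} \mathrm{AR}(\Sigma)
```

where the infimum ranges over all perfect secret-sharing schemes realizing the access structure; the bound given here concerns the optimal ratio for bipartite graphs of girth at least six.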
Abstract: Self-Consolidating Concrete (SCC) is considered a relatively new technology created as an effective solution to problems associated with low-quality consolidation. An SCC mix is defined as successful if it flows freely and cohesively without the intervention of mechanical compaction. The construction industry is showing a high tendency to use SCC in many contemporary projects to benefit from the various advantages offered by this technology.
At this point, a main question is raised regarding the effect of enhanced fluidity of SCC on the structural behavior of high strength self-consolidating reinforced concrete.
A three-phase research program was conducted at the American University of Beirut (AUB) to address this concern. The first two phases consisted of comparative studies conducted on concrete and mortar mixes prepared with a second-generation sulphonated naphthalene-based superplasticizer (SNF) or a third-generation polycarboxylate ether-based superplasticizer (PCE). The third phase of the research program investigates and compares the structural performance of high strength reinforced concrete beam specimens prepared with the two different generations of superplasticizers, which formed the only variable between the concrete mixes. The beams were designed to test and exhibit flexure, shear, or bond splitting failure.
The outcomes of the experimental work revealed comparable resistance of beam specimens cast using self-consolidating concrete and conventional vibrated concrete. The dissimilarities in the experimental values between the SCC and the control VC beams were minimal, leading to the conclusion that the high consistency of SCC has little effect on the flexural, shear and bond strengths of concrete members.