Abstract: Octree compression techniques have been used for several years to compress large three-dimensional data sets into homogeneous regions. This compression technique is ideally suited to datasets in which similar values occur in clusters. Oil engineers represent reservoirs as a three-dimensional grid in which hydrocarbons naturally occur in clusters. This research examines the efficiency of storing these grids using octree compression techniques, where grid cells are divided into active and inactive regions. Initial experiments yielded high compression ratios, as only active leaf nodes and their ancestor header nodes are stored as a bitstream in a file on disk. Savings in computational time and memory were possible at decompression, as only active leaf nodes are sent to the graphics card, eliminating the need to reconstruct the original matrix. This results in a more compact vertex table, which can be loaded into the graphics card more quickly, giving shorter refresh delay times.
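A minimal sketch of the idea, assuming a cubic 0/1 grid of active cells (an illustration, not the authors' implementation): only header bits for mixed regions and bits for active unit cells are emitted, so inactive regions collapse to a single bit and the original matrix never has to be rebuilt.

```python
# Illustrative octree bitstream encoder for a 2^k-sized cube of active/inactive
# cells; names and the exact bit layout are assumptions for the sketch.
import numpy as np

def compress(grid, bits):
    """Emit 0 for an all-inactive region, otherwise a 1 header bit followed by
    the codes of the eight octants (a 1 at unit size marks an active leaf)."""
    n = grid.shape[0]
    if not grid.any():              # homogeneous inactive region: one pruning bit
        bits.append(0)
        return
    bits.append(1)                  # header bit: region contains active cells
    if n == 1:                      # unit cell: the 1 bit is the active leaf itself
        return
    h = n // 2
    for x in (0, h):                # recurse into the eight octants
        for y in (0, h):
            for z in (0, h):
                compress(grid[x:x+h, y:y+h, z:z+h], bits)

grid = np.zeros((8, 8, 8), dtype=np.uint8)
grid[0:2, 0:2, 0:2] = 1             # one small active cluster
bits = []
compress(grid, bits)
print(len(bits), "bits versus", grid.size, "cells")   # high ratio for clustered data
```

At decompression the same recursion can yield the coordinates of the active leaves directly, which is what allows only those leaves to be sent to the graphics card.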
Abstract: The aim of this paper is to provide empirical evidence of the effects that the management of continuous training has on employability (or employment stability) in the Spanish labour market. For this purpose, a binary logit model with an interaction effect has been used. The dependent variable distinguishes two situations of active workers: continuous and discontinuous employability. To distinguish between them, an Employability Stability Index (ESI) was calculated taking into account two factors: time worked and job security. Various aspects of continuous training and the workers' personal data are used as independent variables. The data, obtained from a survey of a sample of 918 employed workers, reveal a relationship between the likelihood of continuous employability and the continuous training received. The empirical results support the positive and significant relationship between various aspects of the training provided by firms and the workers' employability likelihood, as postulated from a theoretical point of view.
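As a rough illustration of the kind of model described, the sketch below fits a binary logit with an interaction effect on synthetic data; the variable names (stable, training_hours, firm_training) and the data-generating values are assumptions for the example, not the survey's actual items.

```python
# Hedged sketch: binary logit with an interaction term, fitted on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 918                                          # sample size reported in the abstract
df = pd.DataFrame({
    "training_hours": rng.integers(0, 200, n),   # continuous training received (assumed)
    "firm_training":  rng.integers(0, 2, n),     # training provided by the firm, 0/1 (assumed)
})
# Synthetic dependent variable: 1 = continuous, 0 = discontinuous employability
lin = (-1.0 + 0.01 * df.training_hours + 0.5 * df.firm_training
       + 0.005 * df.training_hours * df.firm_training)
df["stable"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

# "a * b" in the formula expands to both main effects plus their interaction a:b
model = smf.logit("stable ~ training_hours * firm_training", data=df).fit(disp=0)
print(model.summary())
```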
Abstract: In this work, we study the impact of dynamically changing link slowdowns on the stability properties of packet-switched networks under the Adversarial Queueing Theory framework. Specifically, we consider the Adversarial, Quasi-Static Slowdown Queueing Theory model, in which each link slowdown may take values in the two-valued set of integers {1, D} with D > 1 and remains fixed for a long time, under a (w, p)-adversary. In this framework, we present a novel systematic construction for estimating lower bounds on the adversarial injection rate which, if exceeded, cause instability in networks that use the LIS (Longest-In-System) protocol for contention resolution. In addition, we show that a network that uses the LIS protocol for contention resolution may see its instability bound drop to small injection rates p > 0 when the network size and the high slowdown D take large values. This is the best instability lower bound known for LIS networks.
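As a small, self-contained illustration of the LIS rule itself (an assumption-laden toy, not the paper's adversarial construction), the sketch below resolves contention on a single link whose slowdown alternates between 1 and D by always forwarding the packet that has been in the system the longest.

```python
# Toy LIS contention resolution on one link with a quasi-static slowdown in {1, D}.
import heapq

def lis_link(packets, slowdown_at, horizon):
    """packets: list of (injection_time, packet_id); the link finishes one packet
    every slowdown_at(t) steps and always serves the longest-in-system packet."""
    waiting, served = [], []        # min-heap keyed by injection time = LIS priority
    packets, i, t = sorted(packets), 0, 0
    while t < horizon:
        while i < len(packets) and packets[i][0] <= t:
            heapq.heappush(waiting, packets[i]); i += 1
        if waiting:
            inj, pid = heapq.heappop(waiting)
            served.append((t, pid))
            t += slowdown_at(t)     # crossing the link takes D steps when slowed down
        else:
            t += 1
    return served, len(waiting)     # a growing leftover queue hints at instability

D = 8
served, backlog = lis_link([(k, k) for k in range(0, 200, 2)],
                           lambda t: D if (t // 50) % 2 else 1, horizon=200)
print("served:", len(served), "still queued:", backlog)
```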
Abstract: Power cables are vulnerable to failure due to aging or to defects that develop over time under continuous operation and loading stresses. Partial discharge (PD) detection and characterization provide information on the location, nature, form and extent of the degradation. As a result, PD monitoring has become an important part of condition-based maintenance (CBM) programs among power utilities. Online PD localization of defect sources in power cable systems is possible using the time-of-flight method. The time difference between the main and reflected pulses, together with the cable length, can be used to locate the partial discharge source along the cable. However, if the length of the cable is not known, or the defect source is located at the extreme ends or in the middle of the cable, then double-ended measurement is required to indicate the location of the PD source. The use of multiple sensors can also help to discriminate cable PD from local or external PD. This paper presents the experience and results from online partial discharge measurements conducted in the laboratory and the challenges in partial discharge source localization.
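For the single-ended case described above, the location follows from a simple time-of-flight relation: the direct pulse travels a distance d while its twin travels to the far end and back, so d = L - v·Δt/2. The numbers below are purely illustrative assumptions, not measurements from the paper.

```python
# Illustrative single-ended PD location from the time difference between the
# main and the far-end-reflected pulse (all values assumed for the example).
def pd_location(cable_length_m, pulse_velocity_mps, delta_t_s):
    """Distance of the PD source from the measuring end: d = L - v * dt / 2."""
    return cable_length_m - pulse_velocity_mps * delta_t_s / 2

L = 500.0        # known cable length in metres
v = 1.7e8        # typical pulse propagation velocity in XLPE, roughly 170 m/us
dt = 2.0e-6      # arrival-time difference between main and reflected pulses
print(f"PD source at about {pd_location(L, v, dt):.0f} m from the near end")
```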
Abstract: In recent years multimedia traffic, and in particular VoIP services, has been growing dramatically. We present a new algorithm to control resource utilization and to optimize voice codec selection during SIP call setup, based on the traffic conditions estimated on the network path.
The most suitable methodologies and tools for performing real-time evaluation of the available bandwidth on a network path have been integrated with our proposed algorithm, which selects the best codec for a VoIP call as a function of the instantaneous available bandwidth on the path. The algorithm does not require any explicit feedback from the network, which makes it easily deployable over the Internet. We have also performed intensive tests on real network scenarios with a software prototype, verifying the algorithm's efficiency with different network topologies and traffic patterns between two SIP PBXs.
The promising results obtained during the experimental validation of the algorithm are now the basis for an extension towards a larger set of multimedia services and for the integration of our methodology with existing PBX appliances.
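A minimal sketch of the selection step (the codec list, bitrates and overhead figure are nominal textbook values, not the paper's tables or its actual algorithm): given the bandwidth estimated on the path, pick the highest-quality codec whose payload plus packet overhead still fits.

```python
# Hedged sketch of bandwidth-aware codec selection at SIP call setup.
CODECS = [                      # (name, nominal payload bitrate in kbit/s), best first
    ("G.711", 64.0),
    ("G.726-32", 32.0),
    ("G.729", 8.0),
    ("G.723.1", 5.3),
]
OVERHEAD_KBPS = 16.0            # rough per-call IP/UDP/RTP header overhead (assumed)

def select_codec(available_kbps):
    """Return the best codec that fits the instantaneous available bandwidth."""
    for name, rate in CODECS:
        if rate + OVERHEAD_KBPS <= available_kbps:
            return name
    return None                 # defer or reject the call if nothing fits

for bw in (200.0, 60.0, 25.0, 10.0):
    print(f"{bw:>6.1f} kbit/s available -> {select_codec(bw)}")
```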
Abstract: Optimization of filter banks based on knowledge of the input statistics has been of interest for a long time. Finite impulse response (FIR) compaction filters are used in the design of optimal signal-adapted orthonormal FIR filter banks. In this paper we discuss three different approaches to the design of interpolated finite impulse response (IFIR) compaction filters. In the first method, the magnitude squared response satisfies the Nyquist constraint approximately. In the second and third methods the Nyquist constraint is satisfied exactly. These methods yield FIR compaction filters whose response is comparable with that of existing methods. At the same time, IFIR filters enjoy significant savings in the number of multipliers and can be implemented efficiently. Since an eigenfilter approach is used, the method is less complex. The design of IFIR filters in the least-squares sense is presented.
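The multiplier saving comes from the IFIR structure itself: a short model filter stretched by a factor M (by inserting M-1 zeros between its taps) cascaded with an interpolator that removes the resulting spectral images. The sketch below shows only this generic structure with assumed specifications, not the paper's compaction-filter optimization or its eigenfilter design.

```python
# Generic IFIR cascade: stretched model filter F(z^M) followed by an image suppressor.
import numpy as np
from scipy.signal import firwin, freqz

M = 4                                   # stretch factor (assumed)
model = firwin(17, 1 / 8)               # short prototype; cutoff is M times the final edge
image_supp = firwin(21, 1 / (2 * M))    # interpolator that keeps baseband, removes images

stretched = np.zeros(M * (len(model) - 1) + 1)
stretched[::M] = model                  # F(z^M): insert M-1 zeros between taps
ifir = np.convolve(stretched, image_supp)

w, H = freqz(ifir, worN=1024)           # overall response of the cascade
print("equivalent filter length:", len(ifir),
      "| multipliers actually needed:", len(model) + len(image_supp))
print("DC gain of the cascade:", round(abs(H[0]), 3))
```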
Abstract: Due to growing environmental concerns in the cement industry, alternative cement technologies have become an area of increasing interest. It is now believed that new binders are indispensable for enhanced environmental and durability performance. Self-compacting geopolymer concrete is an innovative and improved way of carrying out concreting operations that does not require vibration for placing and is produced by the complete elimination of ordinary Portland cement.
This paper documents the assessment of the compressive strength and workability characteristics of low-calcium fly ash based self-compacting geopolymer concrete. The essential workability properties of the freshly prepared self-compacting geopolymer concrete, such as filling ability, passing ability and segregation resistance, were evaluated using the slump flow, V-funnel, L-box and J-ring test methods. The fundamental requirements of high flowability and segregation resistance, as specified by the EFNARC guidelines on Self-Compacting Concrete, were satisfied. In addition, compressive strength was determined and the test results are included here. This paper also reports the effect of extra water, curing time and curing temperature on the compressive strength of self-compacting geopolymer concrete. The test results show that extra water in the concrete mix plays a significant role, and that longer curing times and curing the concrete specimens at higher temperatures result in higher compressive strength.
Abstract: This paper presents a multi-context recurrent network for time series analysis. While simple recurrent networks (SRNs) are very popular among recurrent neural networks, they still have shortcomings in terms of learning speed and accuracy that need to be addressed. To address these problems, we propose a multi-context recurrent network (MCRN) with three different learning algorithms. The performance of this network is evaluated on real-world applications such as handwriting recognition and energy load forecasting. We study the performance of this network and compare it to the well-established SRN. The experimental results show that the MCRN is very efficient and well suited to time series analysis and its applications.
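One plausible reading of the multi-context idea is sketched below purely as an assumption (the paper's exact MCRN architecture and its three learning algorithms are not reproduced here): an Elman-style network whose hidden state is fed back through several delayed context copies rather than only the previous step.

```python
# Numpy forward pass of a multi-context recurrent step (untrained illustration).
import numpy as np

class MultiContextRNN:
    def __init__(self, n_in, n_hidden, n_out, n_contexts=3, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.1, (n_hidden, n_in))
        self.W_ctx = [rng.normal(0, 0.1, (n_hidden, n_hidden))
                      for _ in range(n_contexts)]    # one weight matrix per context layer
        self.W_out = rng.normal(0, 0.1, (n_out, n_hidden))
        self.contexts = [np.zeros(n_hidden) for _ in range(n_contexts)]

    def step(self, x):
        pre = self.W_in @ x + sum(W @ c for W, c in zip(self.W_ctx, self.contexts))
        h = np.tanh(pre)
        self.contexts = [h] + self.contexts[:-1]     # newest state in front, oldest dropped
        return self.W_out @ h

net = MultiContextRNN(n_in=1, n_hidden=8, n_out=1)
series = np.sin(np.linspace(0, 6, 50))               # toy time series
preds = [net.step(np.array([v])) for v in series]    # one-step outputs, no training shown
print(len(preds), "outputs produced")
```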
Abstract: Visually impaired people find it extremely difficult to acquire basic and vital information necessary for their living. Therefore, they are at a very high risk of being socially excluded as a result of poor access to information. In recent years, several attempts have been made to improve communication methods for visually impaired people involving tactile sensation, such as finger Braille, manual alphabets, the print-on-palm method and several other electronic devices. However, such methods suffer from problems such as lack of privacy and lack of compatibility with computer environments. This paper describes a low-cost Braille hand glove for blind people using slot sensors and vibration motors, with the help of which they can read and write e-mails and text messages and read e-books. The glove allows the person to type characters based on different Braille combinations using six slot sensors. Vibration at six different positions of the glove, matching the Braille code, allows them to read characters.
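A minimal sketch of the character mapping involved (the dot-to-sensor assignment and firmware details are assumptions; only letters a-j are listed to keep the table short): six sensor bits, one per Braille dot, are matched against standard Braille cell patterns, and the same table drives the six vibration motors for reading.

```python
# Toy Braille encode/decode table for a six-dot cell (letters a-j only).
BRAILLE = {                         # keys are the raised dot numbers (1..6)
    frozenset({1}): "a",            frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",         frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",         frozenset({1, 2, 4}): "f",
    frozenset({1, 2, 4, 5}): "g",   frozenset({1, 2, 5}): "h",
    frozenset({2, 4}): "i",         frozenset({2, 4, 5}): "j",
}

def decode(sensor_bits):
    """sensor_bits: six booleans from the slot sensors, one per dot (1..6)."""
    dots = frozenset(i + 1 for i, hit in enumerate(sensor_bits) if hit)
    return BRAILLE.get(dots, "?")

def encode(char):
    """Return the six vibration-motor states that render `char` on the glove."""
    for dots, c in BRAILLE.items():
        if c == char:
            return [i + 1 in dots for i in range(6)]
    raise ValueError("character not in the demo table")

print(decode([1, 0, 0, 1, 1, 0]))   # dots 1, 4, 5 -> 'd'
print(encode("h"))                  # dots 1, 2, 5 -> motor on/off pattern
```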
Abstract: Introduction: Obesity is a major health risk in present-day life for everyone globally. It is one of the major concerns for public health given recent increasing trends in obesity-related diseases such as Type 2 diabetes (Kazuya, 1994) and hyperlipidemia (Sakata, 1990), which are more prevalent in Japanese adults with body mass index (BMI) values of 25 kg/m2 or above (Japanese Ministry of Health and Welfare, 1997). The purpose of the study was to assess the effect of twelve weeks of brisk walking on blood pressure, body mass index and anthropometric measurements of obese males. Method: Thirty obese males (BMI above 30), aged 18 to 22 years, were selected from King Fahd University of Petroleum & Minerals, Saudi Arabia. The subjects' height (cm) was measured using a stadiometer and body mass (kg) was measured with an electronic weighing machine. BMI was subsequently calculated (kg/m2). Blood pressure was measured with a standardized sphygmomanometer in mm Hg. All the measurements were taken twice before and twice after the experimental period. The pre and post anthropometric measurements of waist and hip circumference were taken with a steel tape in cm. The subjects underwent a walking schedule twice a week for 12 weeks. The 45-minute sessions of brisk walking were undertaken at an average intensity of 65% to 85% of maximum heart rate (HRmax; calculated as 220 - age). Results & Discussion: Statistical findings revealed significant changes from pre-test to post-test in both systolic and diastolic blood pressure in the walking group. Results also showed a significant decrease in body mass index and anthropometric measurements (i.e., waist and hip circumference). Conclusion: It was concluded that twelve weeks of brisk walking is beneficial for lowering blood pressure, body mass index and anthropometric circumferences of obese males.
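The two formulas used in the protocol are simple enough to show as a worked example (the subject values below are made up for illustration): BMI is mass in kilograms divided by height in metres squared, and the brisk-walking target range is 65% to 85% of HRmax = 220 - age.

```python
# Worked example of the BMI and target heart-rate calculations from the protocol.
def bmi(mass_kg, height_cm):
    return mass_kg / (height_cm / 100) ** 2

def target_hr_range(age, low=0.65, high=0.85):
    hr_max = 220 - age
    return round(low * hr_max), round(high * hr_max)

print(round(bmi(95, 172), 1))   # e.g. 95 kg at 172 cm -> BMI ~32.1, i.e. obese
print(target_hr_range(20))      # 20-year-old subject -> (130, 170) beats per minute
```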
Abstract: This paper presents three models which enable the customisation of Universal Description, Discovery and Integration (UDDI) query results, based on pre-defined and/or real-time changing parameters. The proposed models detail the requirements, design and techniques which make ranking of Web service discovery results from a service registry possible. Our contribution is twofold: first, we present an extension to the UDDI inquiry capabilities, which enables a private UDDI registry owner to customise or rank the query results based on its business requirements; second, our proposal utilises existing technologies and standards and requires minimal changes to existing UDDI interfaces or data structures. We believe these models will serve as a valuable reference for enhancing the service discovery methods within a private UDDI registry environment.
Abstract: Mobile learning (m-learning) is a new method in the teaching and learning process which combines mobile device technology with learning materials. It can enhance students' engagement in learning activities and enable them to access learning materials anytime and anywhere. In Kolej Poly-Tech Mara (KPTM), this method is seen as an important effort in teaching practice and in improving student learning performance. The aim of this paper is to discuss the development of an m-learning application called the Mobile EEF Learning System (MEEFLS), to be implemented for the Electric and Electronic Fundamentals course using Flash, XML (Extensible Markup Language) and J2ME (Java 2 Micro Edition). The System Development Life Cycle (SDLC) was used as the application development approach. The application has three modules: notes or course material, exercises and video. MEEFLS is seen as a tool and a pilot test for m-learning in KPTM.
Abstract: The Non-Rotating Adjustable Stabilizer / Directional Solution (NAS/DS) is the imitation of a mechanical process or object by a directional drilling operation; it responds mathematically and graphically to data and decisions so that the best conditions can be chosen relative to the previous mode.
The NAS/DS Auto Guide rotary steerable tool is undergoing final
field trials. The point-the-bit tool can use any bit, work at any
rotating speed, work with any MWD/LWD system, and there is no
pressure drop through the tool. It is a fully closed-loop system that
automatically maintains a specified curvature rate.
The Non-Rotating Adjustable Stabilizer (NAS) controls the curvature rate by exact positioning and can be run with the optimum bit, the most effective weight on bit (WOB) and rotary speed (RPM), applying all of the available hydraulic energy to the bit. The directional simulator allows the user to specify the size of the curvature-rate performance errors of the NAS tool and the magnitude of the random errors in the survey measurements; this is called the Directional Solution (DS).
The combination of these technologies (NAS/DS) will provide smoother boreholes, reduced drilling time, reduced drilling cost and incredible targeting precision. The simulator controls the curvature rate by precisely adjusting the radial extension of the stabilizer blades on a near-bit non-rotating stabilizer, and the control process corrects for the secondary effects caused by formation characteristics, bit and tool wear, and manufacturing tolerances.
Abstract: Integration of the process planning and scheduling functions is necessary to achieve superior overall system performance. This paper proposes a methodology for integrating process planning and scheduling for prismatic components that can be implemented in a company with existing departments. The developed model considers technological constraints, with the time available for machining on the shop floor as the limiting factor, to produce multiple process plans (MPP). It takes advantage of MPP while guaranteeing the fulfilment of due dates through the use of overtime. The study determines the machining parameters, tools, machines and the amount of overtime under a minimum-cost objective. Finally, an illustrative example shows that system performance, as measured by cost, is improved while remaining compatible with the due dates.
Abstract: Bio-chips are used for experiments on genes and contain various information such as genes, samples and so on. Two-dimensional bio-chips, in which one axis represents genes and the other represents samples, are widely used these days. Instead of experimenting with real genes, which costs a lot of money and takes much time to obtain results, bio-chips are used for biological experiments. Extracting data from bio-chips with high accuracy and finding patterns or useful information in such data is very important. Bio-chip analysis systems extract data from various kinds of bio-chips and mine the data in order to obtain useful information. One of the commonly used mining methods is classification. The algorithm used to classify the data varies depending on the data types, numerical characteristics and so on. Considering that bio-chip data are extremely large, an algorithm that imitates an ecosystem, such as the ant algorithm, is well suited to classification. This paper focuses on finding classification rules in bio-chip data using the Ant Colony algorithm, which imitates an ecosystem. The developed system takes into consideration the accuracy of the discovered rules when applying them to the bio-chip data in order to predict the classes.
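A compact, hedged sketch of an Ant-Miner-style step (a generic illustration on made-up binary attributes, not the developed system): each ant grows a rule by choosing attribute=value terms with probability proportional to pheromone, the rule's accuracy on the data serves as its quality, and pheromone on the chosen terms is reinforced so later ants favour terms that produced accurate rules.

```python
# Toy ant-colony rule discovery: roulette-wheel term selection + pheromone update.
import random

def rule_quality(rule, data):
    """rule: dict attr -> value; quality = accuracy of predicting class 1 on covered rows."""
    covered = [row for row in data if all(row[a] == v for a, v in rule.items())]
    return 0.0 if not covered else sum(r["cls"] == 1 for r in covered) / len(covered)

def ant_miner(data, attrs, values, n_ants=30, evaporation=0.1, seed=1):
    random.seed(seed)
    terms = [(a, v) for a in attrs for v in values]
    pher = {t: 1.0 for t in terms}
    best_rule, best_q = {}, -1.0
    for _ in range(n_ants):
        rule = {}
        for _ in range(len(attrs)):                     # each ant grows one full rule
            choices = [t for t in terms if t[0] not in rule]
            r, acc = random.uniform(0, sum(pher[t] for t in choices)), 0.0
            for t in choices:                           # roulette wheel on pheromone
                acc += pher[t]
                if acc >= r:
                    rule[t[0]] = t[1]
                    break
        q = rule_quality(rule, data)
        for t in pher:                                  # evaporate, then reinforce used terms
            pher[t] *= 1 - evaporation
            if rule.get(t[0]) == t[1]:
                pher[t] += q
        if q > best_q:
            best_rule, best_q = dict(rule), q
    return best_rule, best_q

data = [{"geneA": a, "geneB": b, "cls": int(a == 1 and b == 0)}
        for a in (0, 1) for b in (0, 1) for _ in range(5)]
print(ant_miner(data, ["geneA", "geneB"], [0, 1]))      # best rule found and its accuracy
```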
Abstract: A major challenge in camel productivity is the high mortality rate of camel calves at an early stage due to a lack of colostrum. This study investigates the time required for calves to obtain the optimum amount of immunoglobulin (IgG). Eleven pregnant female camels (Camelus dromedarius), varying in age and gestation, were selected randomly. After delivery, 7 calves were obtained and used for this investigation. Colostrum samples were collected from the mothers immediately after parturition. Blood samples were obtained from the calves as follows: day 0 (before suckling), 24, 48, 72, 96, 120 and 144 hours, and 2, 3 and 4 weeks post suckling. Blood serum and colostrum whey were separated and used to determine IgG concentration, total protein and the concentrations of cortisol and thyroxin. The results showed high levels of IgG in camel colostrum (328.8 ± 4.5 mg/ml). The IgG concentration in the serum of the calves was highest within the first 24 h after suckling (140.75 mg/ml) and then declined gradually, reaching a lower level at 144 h (41.97 mg/ml). The average turnover rate (t1/2) of serum IgG over all cases was 3.22 days. The turnover ranged from 2.56 days for calves with IgG values above the average to 7.7 days for those with values below the average. Despite very high levels of thyroxin in the sera of the newborns, the results showed no correlation between cortisol or thyroxin and IgG levels.
Abstract: The heuristic decision rules used for project scheduling vary depending upon the project's size, complexity, duration, personnel, and owner requirements. The concept of project complexity has received little detailed attention. The need to differentiate between easy and hard problem instances, and the interest in isolating the fundamental factors that determine the computing effort required by these procedures, have inspired a number of researchers to develop various complexity measures.
In this study, the most common measures of project complexity are presented and a new measure of project complexity is developed. The main advantage of the proposed measure is that it considers size, shape and logic characteristics, time characteristics, resource demand and availability characteristics, as well as the number of critical activities and critical paths. The sensitivity of the proposed measure to the complexity of project networks has been tested and evaluated against the other complexity measures on the fifty project networks considered in this study. The developed measure showed more sensitivity to changes in the network data and gives accurate, quantified results when comparing the complexities of networks.
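Purely as an illustration of how such a composite measure might be assembled (the factor list and weights below are hypothetical and are not the measure developed in the paper): normalize each family of characteristics to [0, 1] and combine them into a single comparable index per network.

```python
# Hypothetical composite project-complexity index (weights and factors assumed).
def complexity_score(factors, weights=None):
    """factors, weights: dicts keyed by factor name; factors already scaled to [0, 1]."""
    weights = weights or {k: 1.0 for k in factors}
    return sum(weights[k] * factors[k] for k in factors) / sum(weights.values())

network = {
    "size_shape_logic": 0.62,   # e.g. number of activities, serial/parallel structure
    "time":             0.40,   # e.g. spread of durations and float
    "resources":        0.75,   # demand versus availability pressure
    "criticality":      0.55,   # share of critical activities and critical paths
}
print(round(complexity_score(network), 3))   # one comparable number per network
```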
Abstract: A heuristic conceptual model for developing Reliability Centered Maintenance (RCM), especially within a preventive strategy, is explored in this paper. For most real cases, in which the complexity of the system demands a high degree of reliability, the model proposes choosing between two reliability functions: one based on the lifetime distribution and one based on the relevant Extreme Value (EV) distribution. A statistical and mathematical approach is used to estimate and verify these two distribution functions, and the better of the two, whichever is more reliable, is then chosen. A numerical industrial case study is reviewed to illustrate the concepts of this paper more clearly.
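The comparison can be sketched as follows, under the assumption that the lifetime-based function is a Weibull model and the Extreme Value candidate is a Gumbel model (the paper's actual distributions and estimation details may differ): fit both to the same failure times and keep whichever gives the higher reliability at the planned preventive-maintenance interval.

```python
# Hedged sketch: lifetime (Weibull) vs Extreme Value (Gumbel) reliability at a PM interval.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
failure_times = rng.weibull(1.8, 60) * 1000.0      # synthetic hours-to-failure data

weib_params = stats.weibull_min.fit(failure_times, floc=0)   # lifetime-distribution model
ev_params = stats.gumbel_r.fit(failure_times)                # Extreme Value model

t_pm = 400.0                                       # candidate maintenance interval (hours)
r_weib = stats.weibull_min.sf(t_pm, *weib_params)  # reliability R(t) = 1 - F(t)
r_ev = stats.gumbel_r.sf(t_pm, *ev_params)
print(f"R_Weibull({t_pm:.0f} h) = {r_weib:.3f},  R_EV({t_pm:.0f} h) = {r_ev:.3f}")
print("chosen model:", "Weibull" if r_weib >= r_ev else "Extreme Value")
```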
Abstract: This paper discusses the combination of the EM algorithm and the Bootstrap approach applied to the improvement of the satellite image fusion process. This novel satellite image fusion method, based on the estimation-theoretic EM algorithm and reinforced by the Bootstrap approach, was successfully implemented and tested. The sensor images are first split by a Bayesian segmentation method to determine a joint region map for the fused image. Then, we use the EM algorithm in conjunction with the Bootstrap approach to develop the bootstrap EM fusion algorithm, producing the fused target image. In this research we propose to estimate the statistical parameters from the iterative equations of the EM algorithm, relying on representative Bootstrap samples of the images. The sizes of those samples are determined by a new criterion called the 'hybrid criterion'. The results of our work show that using Bootstrap EM (BEM) in image fusion improves the estimated parameters, which in turn improves the quality of the fused image and reduces the computing time during the fusion process.
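A hedged sketch of the bootstrap-plus-EM ingredient only (not the authors' full BEM fusion pipeline, Bayesian segmentation or hybrid criterion): run EM for a Gaussian mixture on a representative bootstrap sample of pixel vectors instead of on every pixel, then label the whole image with the estimated parameters.

```python
# Bootstrap-sampled EM (Gaussian mixture) on pixel vectors from two sensor images.
import numpy as np
from sklearn.mixture import GaussianMixture      # EM for Gaussian mixture models

rng = np.random.default_rng(7)
img_a = rng.normal(100, 10, (256, 256))          # stand-ins for co-registered sensor images
img_b = rng.normal(120, 20, (256, 256))
pixels = np.stack([img_a.ravel(), img_b.ravel()], axis=1)

sample_size = 4000                               # fixed here; the paper sets it via its 'hybrid criterion'
boot = pixels[rng.integers(0, len(pixels), sample_size)]   # sampling with replacement

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(boot)                                    # EM runs on the bootstrap sample only
region_map = gmm.predict(pixels).reshape(img_a.shape)      # joint region labels
print("estimated component means:\n", gmm.means_)
print("region map shape:", region_map.shape)
```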
Abstract: A feed-forward, back-propagation Artificial Neural
Network (ANN) model has been used to forecast the occurrences of
wastewater overflows in a combined sewerage reticulation system.
This approach was tested to evaluate its applicability as a method
alternative to the common practice of developing a complete
conceptual, mathematical hydrological-hydraulic model for the
sewerage system to enable such forecasts. The ANN approach obviates the need for a priori understanding and representation of the underlying hydrological-hydraulic phenomena in mathematical terms, but enables the characteristics of a sewer overflow to be learned from the historical data.
The performance of the standard feed-forward, back-propagation
of error algorithm was enhanced by a modified data normalizing
technique that enabled the ANN model to extrapolate into the
territory that was unseen by the training data. The algorithm and the
data normalizing method are presented along with the ANN model
output results that indicate a good accuracy in the forecasted sewer
overflow rates. However, it was revealed that accurate forecasting of the overflow rates is heavily dependent on the availability of real-time flow monitoring at the overflow structure to provide antecedent flow rate data. The ability of the ANN to
forecast the overflow rates without the antecedent flow rates (as is
the case with traditional conceptual reticulation models) was found to
be quite poor.
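One common way to realise the kind of normalization described is shown below as an assumption, since the paper's exact technique is not reproduced: map the training minimum and maximum onto an inner band such as [0.1, 0.9] rather than the full [0, 1], leaving headroom so values beyond the training extremes still fall inside the network's usable range and can be inverted back to flow rates.

```python
# Range-compressing normalization that leaves headroom for extrapolation.
import numpy as np

def fit_scaler(train, lo=0.1, hi=0.9):
    tmin, tmax = train.min(), train.max()
    scale = (hi - lo) / (tmax - tmin)
    def forward(x):                    # data -> network input/target range
        return lo + (x - tmin) * scale
    def inverse(y):                    # network output -> flow rate
        return tmin + (y - lo) / scale
    return forward, inverse

flows = np.array([2.0, 5.0, 9.0, 14.0, 20.0])   # made-up historical flow rates
to_net, from_net = fit_scaler(flows)
print(to_net(flows))                      # training data occupies only [0.1, 0.9]
print(to_net(np.array([22.0])))           # an unseen, larger flow still maps below 1.0
print(round(float(from_net(0.95)), 2))    # outputs near saturation still invert sensibly
```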