Abstract: Combustion of sprays is of technological importance, but its flame behavior is not fully understood. Furthermore, the multiplicity of dependent variables such as pressure, temperature, equivalence ratio, and droplet size complicates the study of spray combustion. Fundamental studies on the influence of liquid droplets have revealed that laminar flames in aerosol mixtures become unstable more readily than gaseous ones, which increases the practical burning rate. However, fundamental studies on turbulent flames of aerosol mixtures are limited, particularly those under near mono-dispersed droplet conditions. In the present work, centrally ignited expanding flames at near atmospheric pressures are employed to quantify the burning rates in gaseous and aerosol flames. Iso-octane-air aerosols are generated by expansion of the gaseous pre-mixture to produce a homogeneously distributed suspension of fuel droplets. The effects of the presence of droplets and of the turbulence velocity on the burning rates of the flame are also investigated.
Abstract: This paper demonstrates how the soft systems
methodology can be used to improve the delivery of a module in data warehousing for fourth year information technology students.
Graduates in information technology need not only academic skills
but also good practical skills to meet the requirements of the information technology industry. In developing
and improving current data warehousing education modules one has to find a balance in meeting the expectations of various role players such as the students themselves, industry and academia. The soft
systems methodology, developed by Peter Checkland, provides a
methodology for facilitating problem understanding from different world views. In this paper it is demonstrated how the soft systems methodology can be used to plan the improvement of data
warehousing education for fourth year information technology students.
Abstract: Corner detection and optical flow are common techniques for feature-based video stabilization. However, these algorithms are computationally expensive and must still run at a reasonable rate. This paper presents an algorithm that discards irrelevant feature points and maintains the relevant ones for future use so as to reduce the computational cost. The algorithm starts by initializing a maintained set. The feature points in the maintained set are examined for their accuracy for modeling. Corner detection is required only when the feature points are insufficiently accurate for further modeling. Then, optical flows are computed from the maintained feature points toward the consecutive frame. After that, a motion model is estimated based on the simplified affine motion model and the least squares method, with outliers belonging to moving objects present. Studentized residuals are used to eliminate such outliers. The model estimation and elimination processes repeat until no more outliers are identified. Finally, the entire algorithm repeats along the video sequence, with the points remaining from the previous iteration used as the maintained set. As a practical application, efficient video stabilization can be achieved by exploiting the computed motion models. Our study shows that the number of times corner detection needs to be performed is greatly reduced, thus significantly reducing the computational cost. Moreover, optical flow vectors are computed only for the maintained feature points, not for outliers, further reducing the computational cost. In addition, the feature points remaining after reduction are sufficient for tracking background objects, as demonstrated in a simple video stabilizer based on our proposed algorithm.
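The estimation-and-elimination loop described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the 4-parameter form of the simplified affine model, the rejection threshold of 2, and the simplified (non-leverage-corrected) studentization are all assumptions.

```python
import numpy as np

def estimate_motion(src, dst, thresh=2.0, max_iter=10):
    """Iteratively fit the simplified affine model
    [x', y'] = [a*x - b*y + c, b*x + a*y + d]
    by least squares, rejecting outliers via studentized residuals."""
    keep = np.ones(len(src), dtype=bool)
    params = np.zeros(4)
    for _ in range(max_iter):
        s, d = src[keep], dst[keep]
        n = len(s)
        # Two equations per point, stacked into A @ [a, b, c, d] = rhs
        A = np.zeros((2 * n, 4))
        A[0::2] = np.c_[s[:, 0], -s[:, 1], np.ones(n), np.zeros(n)]
        A[1::2] = np.c_[s[:, 1],  s[:, 0], np.zeros(n), np.ones(n)]
        rhs = d.reshape(-1)
        params, *_ = np.linalg.lstsq(A, rhs, rcond=None)
        res = rhs - A @ params
        dof = max(2 * n - 4, 1)
        sigma = np.sqrt(res @ res / dof)   # residual standard error
        if sigma < 1e-9:                   # essentially perfect fit
            break
        t = np.abs(res) / sigma            # simplified studentized residuals
        out = (t[0::2] > thresh) | (t[1::2] > thresh)
        if not out.any():
            break
        keep[np.flatnonzero(keep)[out]] = False
    return params, keep
```

In a stabilizer, this loop would rerun per frame, with `keep` seeding the next maintained set; a full studentization would additionally divide each residual by the square root of one minus its leverage from the hat matrix.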
Abstract: The Information and Communication Technologies
(ICTs) and the World Wide Web (WWW) have fundamentally
altered the practice of teaching and learning worldwide. Many
universities, organizations, colleges and schools are trying to apply
the benefits of the emerging ICT. In the early nineties the term
learning object was introduced into the instructional technology
vernacular; the idea being that educational resources could be broken
into modular components for later combination by instructors,
learners, and eventually computers into larger structures that would
support learning [1]. However, in many developing countries, the use
of ICT is still in its infancy and the concept of the learning object
is quite new. This paper outlines the learning object design
considerations for developing countries, depending on the learning
environment.
Abstract: Wetting characteristics of reactive (Sn–0.7Cu solder)
and non-reactive (castor oil) liquids on Cu and Ag plated
Al substrates have been investigated. Solder spreading exhibited
capillary, gravity and viscous regimes. Oils did not exhibit noticeable
spreading regimes. Solder alloy showed better wettability on Ag
coated Al substrate compared to Cu plating. In the case of castor oil,
Cu coated Al substrate exhibited good wettability as compared to Ag
coated Al substrates. The difference in wettability during reactive
wetting of solder and non–reactive wetting of oils is attributed to the
change in the surface energies of Al substrates brought about by the
formation of intermetallic compounds (IMCs).
Abstract: This article proposes a voltage-mode
multifunction filter using a differential voltage current-controlled
current conveyor transconductance amplifier
(DV-CCCCTA). The features of the circuit are that the
quality factor and pole frequency can be tuned independently
via the values of the capacitors, and that the circuit description is very
simple, consisting of merely one DV-CCCCTA and two
capacitors. Requiring no component matching conditions, the
proposed circuit is well suited for further development into
an integrated circuit. Additionally, each function response
can be selected by suitably applying the input signals with a
digital method. The PSpice simulation results are depicted and
agree well with the theoretical anticipation.
Abstract: In this article, we aim to discuss the formulation of two explicit group iterative finite difference methods for the time-dependent two-dimensional Burgers' problem on a variable mesh. For the non-linear problems, the discretization leads to a non-linear system whose Jacobian is a tridiagonal matrix. We discuss Newton's explicit group iterative methods for a general Burgers' equation. The proposed explicit group methods are derived from the standard point and rotated point Crank-Nicolson finite difference schemes. Their computational complexity analysis is discussed. Numerical results are given to justify the feasibility of these two proposed iterative methods.
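As a hedged illustration of where the tridiagonal Jacobian mentioned in this abstract comes from (the paper treats the two-dimensional, variable-mesh case; the one-dimensional uniform-mesh Burgers' equation \(u_t + u u_x = \nu u_{xx}\) is shown here for brevity), the point Crank-Nicolson scheme reads:

\[
\frac{u_i^{n+1}-u_i^{n}}{\Delta t}
+\frac{1}{2}\left(u_i^{n+1}\,\frac{u_{i+1}^{n+1}-u_{i-1}^{n+1}}{2\Delta x}
+u_i^{n}\,\frac{u_{i+1}^{n}-u_{i-1}^{n}}{2\Delta x}\right)
=\frac{\nu}{2}\left(\frac{u_{i+1}^{n+1}-2u_i^{n+1}+u_{i-1}^{n+1}}{\Delta x^{2}}
+\frac{u_{i+1}^{n}-2u_i^{n}+u_{i-1}^{n}}{\Delta x^{2}}\right).
\]

Each nonlinear residual \(F_i\) couples only \(u_{i-1}^{n+1}\), \(u_i^{n+1}\), and \(u_{i+1}^{n+1}\), so \(\partial F_i/\partial u_j^{n+1}=0\) for \(|i-j|>1\), giving the tridiagonal Jacobian that the Newton explicit group iterations exploit.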
Abstract: Dynamic spectrum allocation solutions such as
cognitive radio networks have been proposed as a key technology to
exploit frequency segments that are spectrally underutilized.
Cognitive radio users work as secondary users who need to
constantly and rapidly sense the presence of primary users or
licensees to utilize their frequency bands if they are inactive. Short
sensing cycles should be run by the secondary users to achieve
higher throughput rates as well as to provide a low level of interference
to the primary users by immediately vacating their channels once
they have been detected. In this paper, the throughput-sensing time
relationship in local and cooperative spectrum sensing has been
investigated under two distinct scenarios, namely, constant primary
user protection (CPUP) and constant secondary user spectrum
usability (CSUSU) scenarios. The simulation results show that the
design of the sensing slot duration is very critical and depends on the
number of cooperating users under the CPUP scenario, whereas under
CSUSU, adding more cooperating users has no effect if the sensing time
exceeds 5% of the total frame duration.
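The throughput-sensing time relationship studied in this abstract can be sketched numerically. This is an illustrative model only: the frame length, sampling rate, SNR, and target detection probability below are hypothetical, and the classical energy-detection relation between sensing time and false-alarm probability at a fixed detection probability stands in for the paper's CPUP-style constraint.

```python
from statistics import NormalDist
from math import sqrt

_nd = NormalDist()
Q = lambda x: 1.0 - _nd.cdf(x)          # Gaussian tail probability
Qinv = lambda p: _nd.inv_cdf(1.0 - p)   # inverse of Q

def throughput(tau, T=0.1, fs=6e6, snr_db=-15.0, Pd=0.9):
    """Normalized secondary-user throughput for an energy detector when
    the detection probability Pd is held constant: longer sensing lowers
    the false-alarm probability but shortens the data-transmission part
    of the frame."""
    gamma = 10 ** (snr_db / 10)         # primary-user SNR at the detector
    Pf = Q(sqrt(2 * gamma + 1) * Qinv(Pd) + sqrt(tau * fs) * gamma)
    return (T - tau) / T * (1.0 - Pf)   # fraction of frame usable for data
```

Sweeping `tau` over (0, T) and taking the maximizer gives an optimal sensing slot duration for a single user; cooperative sensing would combine per-user decisions before evaluating the same trade-off.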
Abstract: Xanthan gum is one of the major commercial
biopolymers. Due to its excellent rheological properties xanthan gum
is used in many applications, mainly in the food industry. Commercial
production of xanthan gum uses glucose as the carbon substrate;
consequently, the price of xanthan production is high. One of the
ways to decrease the price of xanthan is to use cheaper substrates such
as agricultural wastes. Iran is one of the biggest date-producing
countries; however, approximately 50% of date production is wasted annually.
The goal of this study is to produce xanthan gum from waste date
using Xanthomonas campestris PTCC1473 by submerged
fermentation. In this study, the effects of three variables, namely the
amounts of phosphorus and nitrogen and the agitation rate, each at three
levels, were studied using response surface methodology (RSM). Results
of the statistical analysis with the Design Expert 7.0.0 software
showed that xanthan production increased with an increasing level of
phosphorus. A low level of nitrogen led to higher xanthan production,
and increasing agitation had a positive influence on the xanthan
amount. The statistical model identified the optimum conditions for
xanthan as a nitrogen amount of 3.15 g/l, a phosphorus amount of
5.03 g/l, and an agitation rate of 394.8 rpm. To validate the model,
experiments at the optimum conditions for xanthan gum were carried
out. The mean result for xanthan was 6.72±0.26, close to the value
predicted by RSM.
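The response-surface workflow used in this abstract (fit a second-order model, then solve for the stationary point) can be sketched as follows. This is a generic two-factor illustration with made-up data and a made-up optimum, not the study's three-factor dataset or its Design Expert analysis.

```python
import numpy as np

# Hypothetical noise-free response with a known optimum at (1.5, -0.5);
# the real study varied nitrogen, phosphorus, and agitation rate.
def response(x1, x2):
    return 5.0 - (x1 - 1.5) ** 2 - 0.5 * (x2 + 0.5) ** 2

X1, X2 = np.meshgrid(np.linspace(-1, 3, 5), np.linspace(-2, 1, 5))
x1, x2 = X1.ravel(), X2.ravel()
y = response(x1, x2)

# Second-order (quadratic) response-surface design matrix
A = np.c_[np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2]
beta = np.linalg.lstsq(A, y, rcond=None)[0]

# Stationary point: set the gradient of the fitted quadratic to zero
H = np.array([[2 * beta[3], beta[5]], [beta[5], 2 * beta[4]]])
optimum = np.linalg.solve(H, -beta[1:3])   # recovers (1.5, -0.5)
```

With real, noisy measurements the fit would not be exact, and the stationary point would be checked against the curvature of `H` to confirm it is a maximum.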
Abstract: Titanium gels doped with a water-soluble cationic porphyrin were synthesized by the sol–gel polymerization of Ti(OC4H9)4. In this work we investigate the spectroscopic properties, along with SEM images, of tetra carboxyl phenyl porphyrin incorporated into the porous matrix produced by the sol–gel technique.
Abstract: Structural Integrity Management (SIM) is
important for the protection of offshore crew, the environment, business assets, and company and industry reputation. API RP 2A contains guidelines for the assessment of existing platforms, mostly for the Gulf
of Mexico (GOM). The ISO 19902 SIM framework also does not
specifically cater for Malaysia. There are about 200 platforms in
Malaysia, with 90 exceeding their design life. Petronas Carigali
Sdn Bhd (PCSB) uses the Asset Integrity Management System and
the rather subjective Risk-Based Inspection Program for these
platforms. Petronas currently does not have a standalone Petronas
Technical Standard (PTS-SIM). This study proposes a recommended
practice for the SIM process for offshore structures in Malaysia,
incorporating studies by API and ISO as well as local elements such as
the number of platforms, types of facilities, age, and risk ranking. A
case study on the SMG-A platform in Sabah shows missing or scattered
platform data and a gap in the inspection history. The platform is to
undergo a level 3 underwater inspection in 2015.
Abstract: Using activity theory, organisational theory and
didactics as theoretical foundations, a comprehensive model of the
organisational dimensions relevant for learning and knowledge
transfer will be developed. In a second step, a Learning Assessment
Guideline will be elaborated. This guideline will be designed to
permit a targeted analysis of organisations to identify the status quo
in those areas crucial to the implementation of learning and
knowledge transfer. In addition, this self-analysis tool will enable
learning managers to select adequate didactic models for e- and
blended learning. As part of the European Integrated Project
"Process-oriented Learning and Information Exchange" (PROLIX),
this model of organisational prerequisites for learning and knowledge
transfer will be empirically tested in four profit and non-profit
organisations in Great Britain, Germany and France (to be finalized
in autumn 2006). The findings concern not only the capability of the
model of organisational dimensions, but also the predominant
perceptions of and obstacles to learning in organisations.
Abstract: In this research, the heat transfer of a polyethylene
fluidized bed reactor without reaction was studied experimentally
and computationally at different superficial gas velocities. A multifluid
Eulerian computational model incorporating the kinetic theory
for solid particles was developed and used to simulate the heat
conducting gas–solid flows in a fluidized bed configuration.
Momentum exchange coefficients were evaluated using the Syamlal–
O'Brien drag functions. Temperature distributions of different phases
in the reactor were also computed. Good agreement was found
between the model predictions and the experimentally obtained data
for the bed expansion ratio as well as the qualitative gas–solid flow
patterns. The simulation and experimental results showed that the gas
temperature decreases as it moves upward in the reactor, while the
solid particle temperature increases. Pressure drop and temperature
distribution predicted by the simulations were in good agreement
with the experimental measurements at superficial gas velocities
higher than the minimum fluidization velocity. Also, the predicted
time-average local voidage profiles were in reasonable agreement
with the experimental results. The study showed that the
computational model was capable of predicting the heat transfer and
the hydrodynamic behavior of gas-solid fluidized bed flows with
reasonable accuracy.
Abstract: Fifty-three college students answered questions regarding the circumstances in which they first heard the news of the Wenchuan earthquake or the news of their acceptance to college, both of which took place approximately one year earlier, and answered the questions again two years later. The number of details recalled about their circumstances for both events was high and did not decline two years later. However, consistency in the reported details over the two years was low. Participants were more likely to construct central information (e.g., Where were you?) than peripheral information (e.g., What were you wearing?), and their confidence in central information was higher than in peripheral information, which indicated that they constructed more when they were more confident.
Abstract: When acid is pumped into damaged reservoirs for
damage removal/stimulation, a distorted inflow of acid into the
formation occurs, caused by acid preferentially traveling into highly
permeable regions over less permeable regions, or, in general, into
the path of least resistance. This can lead to poor zonal coverage and
hence warrants diversion to carry out an effective placement of acid.
Diversion is desirably a reversible technique of temporarily reducing
the permeability of high perm zones, thereby forcing the acid into
lower perm zones.
The uniqueness of each reservoir can pose several challenges to
engineers attempting to devise optimum and effective diversion
strategies. Diversion techniques include mechanical placement and/or
chemical diversion of treatment fluids, further sub-classified into ball
sealers, bridge plugs, packers, particulate diverters, viscous gels,
crosslinked gels, relative permeability modifiers (RPMs), foams,
and/or the use of placement techniques, such as coiled tubing (CT)
and the maximum pressure difference and injection rate (MAPDIR)
methodology.
It is not always realized that the effectiveness of diverters greatly
depends on reservoir properties, such as formation type, temperature,
reservoir permeability, heterogeneity, and physical well
characteristics (e.g., completion type, well deviation, length of
treatment interval, multiple intervals, etc.). This paper reviews the
mechanisms by which each variety of diverter functions and
discusses the effect of various reservoir properties on the efficiency
of diversion techniques. Guidelines are recommended to help
enhance productivity from zones of interest by choosing the best
methods of diversion while pumping an optimized amount of
treatment fluid. The success of an overall acid treatment often
depends on the effectiveness of the diverting agents.
Abstract: The objective of the present work is to determine the
potential of the parabolic trough solar collector (PTC) for use in the
design of a solar thermal power plant in Algeria. The study is based
on a mathematical model of the PTC. Heat balances have been
established on the heat transfer fluid (HTF), the absorber
tube, and the glass envelope, using the principle of energy conservation
at each surface of the HCE cross-section. The modified Euler
method is used to solve the resulting differential equations. First,
the thermal behavior of the HTF, the absorber, and the envelope is
obtained for typical days of two seasons. Then, to determine
the thermal performance of the heat transfer fluid, different oils are
considered and their temperature and heat gain evolutions are compared.
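The modified Euler scheme named in this abstract can be sketched as follows. The cooling law and all parameter values below are illustrative stand-ins, not the paper's HTF heat-balance equations.

```python
def modified_euler(f, t0, y0, h, steps):
    """Modified Euler (Heun) integrator: a forward-Euler predictor
    followed by a trapezoidal corrector averaging the two slopes."""
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y)                   # slope at the start of the step
        k2 = f(t + h, y + h * k1)      # slope at the predicted endpoint
        y = y + 0.5 * h * (k1 + k2)    # corrector: average of the slopes
        t += h
    return y

# Illustrative stand-in for a heat balance: Newton cooling of a fluid
# toward ambient temperature, dT/dt = -k (T - T_amb)
k, T_amb = 0.5, 25.0
T_end = modified_euler(lambda t, T: -k * (T - T_amb), 0.0, 300.0, 0.01, 1000)
```

Averaging the slopes at both ends of the step makes the method second-order accurate, a common choice for coupled heat-balance ODEs of this kind.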
Abstract: The objectives of this research paper were to study the
influencing factors that contributed to the success of electronic
commerce (e-commerce) and to study the approach to enhance the
standard of e-commerce for small and medium enterprises (SME).
The research paper focused the study on only sole proprietorship
SMEs in Bangkok, Thailand. The factors contributing to the success
of SMEs included business management, learning in the organization,
business collaboration, and the quality of website. A quantitative and
qualitative mixed research methodology was used. In terms of
quantitative method, a questionnaire was used to collect data from
251 sole proprietorships. The Structural Equation Model (SEM) was
utilized as the tool for data analysis. In terms of qualitative method,
an in-depth interview, a dialogue with experts in the field of
e-commerce for SMEs, and content analysis were used.
Using the adjusted causal relationship structure model, the
factors affecting the success of e-commerce for
SMEs were found to be congruent with the empirical data. The
hypothesis testing indicated that business management influenced the
learning in the organization, the learning in the organization
influenced business collaboration and the quality of the website, and
these factors, in turn, influenced the success of SMEs. Moreover, the
approach to enhance the standard of SMEs revealed that the majority
of respondents wanted to enhance the standard of SMEs to a high
level in the categories of e-commerce system safety, basic e-commerce
structure, development of staff potential, budget assistance
and tax reduction, and improvement of e-commerce law,
respectively.
Abstract: This research paper deals with the implementation of face recognition using a neural network (recognition classifier) on low-resolution images. The proposed system contains two parts, preprocessing and face classification. The preprocessing part converts original images into blurry images using an average filter and equalizes the histogram of those images (lighting normalization). A bi-cubic interpolation function is applied to the equalized image to obtain a resized image. The resized image is a low-resolution image, providing faster processing for training and testing. The preprocessed image becomes the input to the neural network classifier, which uses the back-propagation algorithm to recognize the familiar faces. The crux of the proposed algorithm is its use of a single neural network as the classifier, which yields a straightforward approach to face recognition. The single neural network consists of three layers with log-sigmoid, hyperbolic tangent sigmoid, and linear transfer functions, respectively. The training function incorporated in our work is gradient descent with momentum (adaptive learning rate) back-propagation. The proposed algorithm was trained on the ORL (Olivetti Research Laboratory) database with 5 training images. The empirical results provide accuracies of 94.50%, 93.00% and 90.25% for 20, 30 and 40 subjects respectively, with a time delay of 0.0934 sec per image.
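The preprocessing chain described in this abstract (average-filter blurring, histogram equalization, then size reduction) can be sketched as follows. This is a dependency-free illustration: plain subsampling stands in for the paper's bi-cubic interpolation, and the target size of 32×32 is an assumption.

```python
import numpy as np

def preprocess(img, size=32):
    """Blur with a 3x3 average filter, equalize the histogram, and
    down-sample to a low-resolution image for faster training/testing."""
    img = img.astype(np.float64)
    # 3x3 average (box) filter with edge padding
    p = np.pad(img, 1, mode="edge")
    blur = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    # Histogram equalization over 256 gray levels (lighting normalization)
    g = np.clip(blur, 0, 255).astype(np.uint8)
    hist = np.bincount(g.ravel(), minlength=256)
    cdf = hist.cumsum() / g.size
    eq = (cdf[g] * 255).astype(np.uint8)
    # Nearest-neighbour down-sampling (bi-cubic in the paper)
    r = np.linspace(0, eq.shape[0] - 1, size).astype(int)
    c = np.linspace(0, eq.shape[1] - 1, size).astype(int)
    return eq[np.ix_(r, c)]
```

The flattened output of this stage would then feed the three-layer back-propagation classifier described in the abstract.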
Abstract: X-ray mammography is the most effective method for
the early detection of breast diseases. However, the typical diagnostic
signs such as microcalcifications and masses are difficult to detect
because mammograms are low-contrast and noisy. In this paper, a
new algorithm for image denoising and enhancement in Orthogonal
Polynomials Transformation (OPT) is proposed for radiologists to
screen mammograms. In this method, a set of OPT edge coefficients
are scaled to a new set by a scale factor called OPT scale factor. The
new set of coefficients is then inverse transformed resulting in
contrast improved image. Applications of the proposed method to
mammograms with subtle lesions are shown. To validate the
effectiveness of the proposed method, we compare the results to
those obtained by the Histogram Equalization (HE) and the Unsharp
Masking (UM) methods. Our preliminary results strongly suggest
that the proposed method offers considerably improved enhancement
capability over the HE and UM methods.
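The enhancement principle in this abstract (scale the transform-domain coefficients that carry edge detail, then inverse transform) can be illustrated generically. A 2-D FFT stands in here for the paper's Orthogonal Polynomials Transformation, and the cutoff and scale values are arbitrary assumptions.

```python
import numpy as np

def enhance(img, scale=2.0, cutoff=0.1):
    """Transform-domain contrast enhancement: amplify the coefficients
    associated with edge detail, then inverse transform and clip."""
    F = np.fft.fft2(img.astype(np.float64))
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    high = np.hypot(fy, fx) > cutoff   # edge-carrying (high) frequencies
    F[high] *= scale                   # analogue of the OPT scale factor
    out = np.real(np.fft.ifft2(F))
    return np.clip(out, 0, 255)
```

In the paper the scaled coefficients are OPT edge coefficients rather than Fourier ones, but the scale-then-inverse-transform structure is the same.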
Abstract: The aim of this study is to present innovative techniques for describing the effectiveness of individuals diagnosed with antisocial personality disorder (ASPD). The author presents information about hate schemas in persons with ASPD and their understanding of the role of hate. Data from 60 prisoners with ASPD, 40 prisoners without ASPD, and 60 men without antisocial tendencies were analyzed. The participants were asked to describe the hate inspired in them by a photograph. The narrative discourse was analyzed, and the three groups were compared. The results show differences between the inmates with ASPD, those without ASPD, and the controls. The antisocial individuals describe hate as an ambivalent feeling with low emotional intensity; i.e., actors (in the stories) are presented more as positive figures than as partners. They use different mechanisms to keep themselves from understanding the meaning of the emotional situation. The schema's characteristics were expressed in narratives attributed to high psychopathy.