Abstract: Nickel and gold nanoclusters used as supported
catalysts were analyzed by XAS, XRD and XPS in order to
determine their local, global and electronic structure. The present
study points out a strong deformation of the local structure of
the metal due to its interaction with the oxide supports. The average
particle size, the mean squares of the microstrain, and the particle
size distribution and microstrain functions of the supported Ni and Au
catalysts were determined by an XRD method using the Generalized
Fermi Function for the approximation of the X-ray line profiles.
Based on EXAFS analysis, we consider that the local structure of the
investigated systems is strongly distorted with respect to the numbers
of atomic pairs. The metal-support interaction is confirmed by the
changes in shape of the probability densities of electron transitions:
Ni K edge (1s → continuum and 2p), Au LIII edge (2p3/2 → continuum,
6s, 6d5/2 and 6d3/2). XPS investigations confirm the metal-support
interaction at their interface.
Abstract: Influence Diagrams (IDs) are a kind of Probabilistic Belief Network for graphical modeling. The use of IDs can improve communication among field experts, modelers, and decision makers by presenting the issue under discussion from a high-level point of view. This paper enhances the Time-Sliced Influence Diagrams (TSIDs, also called Dynamic IDs) formalism from a Discrete Event Systems Modeling and Simulation (DES M&S) perspective, for Exploring Analysis (EA) modeling. The enhancements enable a modeler to specify the occurrence times of endogenous events dynamically, through stochastic sampling as the model runs, and to describe the inter-influences among them with variable nodes in dynamic situations that the existing TSIDs fail to capture. The new class of model is named Dynamic-Stochastic Influence Diagrams (DSIDs). The paper describes the modeling formalism and the hierarchical simulators implementing its simulation algorithm, and presents a case study to illustrate the enhancements.
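The discrete-event core that such an enhancement relies on can be sketched as follows: pending events sit in a time-ordered queue, and the occurrence time of each endogenous event is sampled stochastically while the model runs, rather than being fixed per time slice. The event name, the exponential distribution, and the `run_des` helper are illustrative assumptions, not the DSIDs implementation itself.

```python
import heapq
import random

def run_des(horizon, seed=0):
    """Tiny discrete-event loop: each processed event samples the
    occurrence time of its successor stochastically as the model runs."""
    rng = random.Random(seed)
    queue = []  # (time, event_name) pairs, always popped in time order
    heapq.heappush(queue, (rng.expovariate(1.0), "endogenous_event"))
    trace = []
    while queue:
        clock, event = heapq.heappop(queue)
        if clock > horizon:
            break
        trace.append((clock, event))
        # The follow-up occurrence time is sampled now, not fixed in advance.
        heapq.heappush(queue, (clock + rng.expovariate(1.0), "endogenous_event"))
    return trace

trace = run_des(horizon=10.0)
```

With a fixed seed the run is reproducible; swapping `expovariate` for another sampler changes only the sampled occurrence times, not the time-ordered processing.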
Abstract: Some of students' problems in writing skill stem
from inadequate preparation for the writing assignment. Students
should be taught how to write well when they arrive in language
classes. Having selected a topic, the students examine and explore the
theme from as wide a variety of viewpoints as their background and
imagination allow. Another strategy is for the students to prepare an
outline before writing the paper. A comparison between these two
thought-provoking techniques was carried out between two class
groups – students of the Islamic Azad University of Dezful who were
studying "Writing 2" as their main course. Each class group was
assigned to write five compositions separately over different periods
of time. A t-test for each pair of exams between the two class groups
then showed that the observed t in each pair exceeded the critical t.
Consequently, the first hypothesis, which states that those who use
brainstorming as a thought-provoking technique in the prewriting
phase are more successful than those who outline their papers before
writing, was confirmed.
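The comparison described above rests on a two-sample t statistic exceeding its critical value. A minimal sketch of that computation follows; the composition scores are hypothetical placeholders, not the study's data.

```python
import math

def two_sample_t(a, b):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical composition scores for a brainstorming group and an
# outlining group on one of the five assignments.
brainstorm = [78.0, 82.0, 75.0, 88.0, 80.0, 84.0]
outline = [70.0, 74.0, 68.0, 77.0, 72.0, 71.0]
t_observed = two_sample_t(brainstorm, outline)
# The hypothesis test compares t_observed against the critical t for
# the chosen alpha level and degrees of freedom.
```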
Abstract: In this study, stress distributions on dental implants
made of functionally graded biomaterials (FGBM) are investigated
numerically. The implant body is considered to be subjected to axial
compression loads. The numerical problem is assumed to be
two-dimensional, and the ANSYS commercial software is used for the
analysis. The cross section of the implant thread varies with the
height (H) and the width (t) of the thread. For the given thread
dimensions and FGBM material properties, the equivalent stress
distribution on the implant is determined and presented as contour
plots along with the maximum equivalent stress values. As a result,
with increasing material gradient parameter (n), the equivalent stress
decreases, but the minimum stress distribution increases. Maximum
stress values decrease with decreasing implant radius (r). Maximum
von Mises stresses increase with decreasing H when t is constant.
On the other hand, the stress values are not affected by variation of t
when H is constant.
Abstract: Combustion of sprays is of technological importance, but its flame behavior is not fully understood. Furthermore, the multiplicity of dependent variables, such as pressure, temperature, equivalence ratio, and droplet size, complicates the study of spray combustion. Fundamental studies on the influence of the presence of liquid droplets have revealed that laminar flames within aerosol mixtures become unstable more readily than gaseous ones do, and this increases the practical burning rate. However, fundamental studies on turbulent flames of aerosol mixtures are limited, particularly those under near mono-dispersed droplet conditions. In the present work, centrally ignited expanding flames at near-atmospheric pressures are employed to quantify the burning rates in gaseous and aerosol flames. Iso-octane-air aerosols are generated by expansion of the gaseous pre-mixture to produce a homogeneously distributed suspension of fuel droplets. The effects of the presence of droplets and of turbulence velocity on the burning rates of the flame are also investigated.
Abstract: This paper demonstrates how the soft systems
methodology can be used to improve the delivery of a module in data warehousing for fourth year information technology students.
Graduates in information technology need to have academic skills
but also good practical skills to meet the skills requirements of the information technology industry. In developing
and improving current data warehousing education modules one has to find a balance in meeting the expectations of various role players such as the students themselves, industry and academia. The soft
systems methodology, developed by Peter Checkland, provides a
methodology for facilitating problem understanding from different world views. In this paper it is demonstrated how the soft systems methodology can be used to plan the improvement of data
warehousing education for fourth year information technology students.
Abstract: Corner detection and optical flow are common techniques for feature-based video stabilization. However, these algorithms are computationally expensive and must still run at a reasonable frame rate. This paper presents an algorithm for discarding irrelevant feature points and maintaining the rest for future use, so as to reduce the computational cost. The algorithm starts by initializing a maintained set. The feature points in the maintained set are examined for their accuracy for modeling. Corner detection is required only when the maintained feature points are insufficiently accurate for further modeling. Then, optical flows are computed from the maintained feature points toward the consecutive frame. After that, a motion model is estimated based on the simplified affine motion model and the least-squares method, with outliers belonging to moving objects present. Studentized residuals are used to eliminate such outliers. The model estimation and elimination processes repeat until no more outliers are identified. Finally, the entire algorithm repeats along the video sequence, with the points remaining from the previous iteration used as the maintained set. As a practical application, efficient video stabilization can be achieved by exploiting the computed motion models. Our study shows that the number of times corner detection needs to be performed is greatly reduced, significantly lowering the computational cost. Moreover, optical flow vectors are computed only for the maintained feature points, not for outliers, further reducing the computational cost. In addition, the feature points remaining after reduction are sufficient for background object tracking, as demonstrated by the simple video stabilizer built on our proposed algorithm.
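The estimate-then-eliminate loop can be sketched as follows. To stay short, the sketch uses a pure-translation model (the mean flow vector) instead of the paper's simplified affine model, and a rough studentization (residual magnitude over its sample scale); the flow vectors and cutoff are illustrative assumptions.

```python
import math

def estimate_translation(flows):
    """Least-squares fit of a pure-translation motion model:
    simply the mean flow vector."""
    n = len(flows)
    return (sum(dx for dx, _ in flows) / n,
            sum(dy for _, dy in flows) / n)

def robust_motion(flows, cutoff=1.5):
    """Repeat model estimation and outlier elimination until no flow
    vector has a (roughly) studentized residual beyond the cutoff."""
    pts = list(flows)
    while True:
        tx, ty = estimate_translation(pts)
        residuals = [math.hypot(dx - tx, dy - ty) for dx, dy in pts]
        n = len(residuals)
        s = math.sqrt(sum(r * r for r in residuals) / max(n - 1, 1))
        if s == 0:
            return (tx, ty), pts
        # Small cutoff: studentized residuals are tightly bounded for
        # tiny samples like this illustrative one.
        kept = [p for p, r in zip(pts, residuals) if r / s <= cutoff]
        if len(kept) == len(pts):
            return (tx, ty), pts  # no more outliers identified
        pts = kept

# Background flow near (1, 0); one outlier from a moving object.
flows = [(1.0, 0.0), (1.1, 0.1), (0.9, -0.1), (1.0, 0.1), (8.0, 5.0)]
(tx, ty), inliers = robust_motion(flows)
```

The surviving `inliers` play the role of the maintained set carried into the next frame.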
Abstract: This article proposes a voltage-mode
multifunction filter using a differential voltage current
controllable current conveyor transconductance amplifier
(DV-CCCCTA). The features of the circuit are that the
quality factor and pole frequency can be tuned independently
via the values of the capacitors, and that the circuit is very
simple, consisting of merely one DV-CCCCTA and two
capacitors. Since no component matching conditions are
required, the proposed circuit is well suited for further
development into an integrated circuit. Additionally, each
function response can be selected by suitably applying the
input signals in a digital manner. PSpice simulation results
are presented and agree well with the theoretical anticipation.
Abstract: In this article, we discuss the formulation of two explicit group iterative finite difference methods for the time-dependent two-dimensional Burgers' problem on a variable mesh. For the non-linear problem, the discretization leads to a non-linear system whose Jacobian is a tridiagonal matrix. We discuss Newton's explicit group iterative methods for a general Burgers' equation. The proposed explicit group methods are derived from the standard point and rotated point Crank-Nicolson finite difference schemes. Their computational complexity analysis is discussed. Numerical results are given to justify the feasibility of the two proposed iterative methods.
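Since the discretization yields a tridiagonal Jacobian, each Newton step reduces to a tridiagonal linear solve, for which the Thomas algorithm (tridiagonal Gaussian elimination) is the standard O(n) tool. The sketch below is generic; the sample system is an illustrative 1D Laplacian stencil, not the paper's Burgers' discretization.

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system by forward elimination and back
    substitution. a = sub-diagonal (a[0] unused), b = main diagonal,
    c = super-diagonal (c[-1] unused), d = right-hand side."""
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Illustrative system built from the 1D Laplacian stencil [-1, 2, -1];
# its exact solution for this right-hand side is x = (1, 1, 1, 1).
a = [0.0, -1.0, -1.0, -1.0]
b = [2.0, 2.0, 2.0, 2.0]
c = [-1.0, -1.0, -1.0, 0.0]
d = [1.0, 0.0, 0.0, 1.0]
x = thomas_solve(a, b, c, d)
```

Inside a Newton iteration, `d` would hold the negated residual and `a`, `b`, `c` the Jacobian's diagonals at the current iterate.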
Abstract: Dynamic spectrum allocation solutions, such as
cognitive radio networks, have been proposed as a key technology to
exploit frequency segments that are spectrally underutilized.
Cognitive radio users act as secondary users who must
constantly and rapidly sense the presence of primary users, or
licensees, so as to utilize their frequency bands when they are
inactive. Secondary users should run short sensing cycles to achieve
higher throughput rates and to keep interference to the primary
users low by immediately vacating their channels once the primary
users are detected. In this paper, the throughput-sensing time
relationship in local and cooperative spectrum sensing is
investigated under two distinct scenarios, namely, the constant
primary user protection (CPUP) and constant secondary user spectrum
usability (CSUSU) scenarios. The simulation results show that the
design of the sensing slot duration is very critical and depends on
the number of cooperating users under the CPUP scenario, whereas
under CSUSU, adding more cooperating users has no effect once the
sensing time exceeds 5% of the total frame duration.
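The trade-off under study follows the classical frame-structure model: of each frame of duration T, a slot of duration tau is spent sensing, and the remainder carries data whenever the idle channel is not falsely flagged as busy. A minimal sketch of that relation is given below; the frame length and false-alarm probabilities are illustrative assumptions, not the paper's simulation parameters.

```python
def normalized_throughput(tau, frame, p_false_alarm):
    """Secondary-user throughput normalized to the no-sensing case:
    only (frame - tau) of each frame carries data, and only when no
    false alarm occurs on an idle channel."""
    if not 0 <= tau < frame:
        raise ValueError("sensing time must lie within the frame")
    return (frame - tau) / frame * (1.0 - p_false_alarm)

frame = 100.0  # illustrative frame duration (e.g., ms)
# Longer sensing lowers the false-alarm probability but eats into the
# data portion of the frame, hence the trade-off in tau.
short_sensing = normalized_throughput(tau=5.0, frame=frame, p_false_alarm=0.10)
long_sensing = normalized_throughput(tau=30.0, frame=frame, p_false_alarm=0.01)
```

In cooperative sensing, more cooperating users effectively lower `p_false_alarm` for a given `tau`, which is why the benefit saturates once `tau` is large.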
Abstract: Xanthan gum is one of the major commercial
biopolymers. Due to its excellent rheological properties, xanthan gum
is used in many applications, mainly in the food industry. Commercial
production of xanthan gum uses glucose as the carbon substrate;
consequently, the price of xanthan production is high. One way to
decrease the price of xanthan is to use cheaper substrates, such as
agricultural wastes. Iran is one of the largest date-producing
countries; however, approximately 50% of date production is wasted
annually. The goal of this study is to produce xanthan gum from waste
dates using Xanthomonas campestris PTCC1473 by submerged
fermentation. The effects of three variables (phosphorus amount,
nitrogen amount, and agitation rate) at three levels were studied
using response surface methodology (RSM). Statistical analysis with
the Design-Expert 7.0.0 software showed that xanthan production
increased with increasing phosphorus level. A low level of nitrogen
led to higher xanthan production, and increasing agitation also had a
positive influence on the amount of xanthan. The statistical model
identified the optimum conditions for xanthan as a nitrogen amount of
3.15 g/l, a phosphorus amount of 5.03 g/l, and an agitation rate of
394.8 rpm. To validate the model, experiments were carried out under
the optimum conditions. The mean result for xanthan was 6.72 ± 0.26,
close to the value predicted by RSM.
Abstract: Structural Integrity Management (SIM) is
important for the protection of offshore crews, the environment,
business assets, and company and industry reputation. API RP 2A
contains guidelines for the assessment of existing platforms, mostly
for the Gulf of Mexico (GOM); the ISO 19902 SIM framework likewise
does not specifically cater for Malaysia. There are about 200
platforms in Malaysia, with 90 exceeding their design life. Petronas
Carigali Sdn Bhd (PCSB) uses the Asset Integrity Management System
and the very subjective Risk-Based Inspection program for these
platforms. Petronas currently does not have a standalone Petronas
Technical Standard PTS-SIM. This study proposes a recommended
practice for the SIM process for offshore structures in Malaysia,
drawing on API and ISO guidance and on local elements such as the
number of platforms, types of facilities, age, and risk ranking. A
case study of the SMG-A platform in Sabah shows missing or scattered
platform data and a gap in the inspection history; the platform is to
undergo a level 3 underwater inspection in 2015.
Abstract: Using activity theory, organisational theory and
didactics as theoretical foundations, a comprehensive model of the
organisational dimensions relevant for learning and knowledge
transfer will be developed. In a second step, a Learning Assessment
Guideline will be elaborated. This guideline will be designed to
permit a targeted analysis of organisations to identify the status quo
in those areas crucial to the implementation of learning and
knowledge transfer. In addition, this self-analysis tool will enable
learning managers to select adequate didactic models for e- and
blended learning. As part of the European Integrated Project
"Process-oriented Learning and Information Exchange" (PROLIX),
this model of organisational prerequisites for learning and knowledge
transfer will be empirically tested in four profit and non-profit
organisations in Great Britain, Germany and France (to be finalized
in autumn 2006). The findings concern not only the capability of the
model of organisational dimensions, but also the predominant
perceptions of and obstacles to learning in organisations.
Abstract: Leptospirosis is recognized as an important zoonosis
in tropical regions, as well as an important animal disease with
substantial loss in production. In this study, a model for the
transmission of leptospirosis to the human population is
discussed. The model describes the vector population dynamics and
the transmission of leptospirosis to the human population. Local
analysis of the equilibria is given, and the results are confirmed
numerically.
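The kind of numerical confirmation referred to above can be sketched with a minimal host-vector system integrated by forward Euler: susceptible humans are infected by infectious vectors, susceptible vectors by infectious humans. The compartment structure, parameter values, and initial conditions below are illustrative assumptions, not the paper's model.

```python
def simulate(beta_hv, beta_vh, gamma, steps=10000, dt=0.01):
    """Forward-Euler integration of a minimal host-vector model:
    humans recover at rate gamma; vectors, once infected, stay so."""
    sh, ih = 0.99, 0.01   # human fractions: susceptible, infectious
    sv, iv = 0.95, 0.05   # vector fractions: susceptible, infectious
    for _ in range(steps):
        new_h = beta_hv * sh * iv   # human infections from vectors
        new_v = beta_vh * sv * ih   # vector infections from humans
        sh += dt * (-new_h + gamma * ih)
        ih += dt * (new_h - gamma * ih)
        sv += dt * (-new_v)
        iv += dt * new_v
    return sh, ih, sv, iv

sh, ih, sv, iv = simulate(beta_hv=0.5, beta_vh=0.4, gamma=0.1)
```

The equilibria mentioned in the abstract would be the fixed points of these right-hand sides; running the integrator toward them is the numerical check.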
Abstract: Large volumes of fingerprints are collected and stored
every day in a wide range of applications, including forensics and
access control. This is evident from the database of the Federal
Bureau of Investigation (FBI), which contains more than 70 million
fingerprints. Compression of such a database is very important because
of this high volume. The performance of existing image coding
standards generally degrades at low bit-rates because of the
underlying block-based Discrete Cosine Transform (DCT) scheme. Over
the past decade, the success of wavelets in solving many different
problems has contributed to their unprecedented popularity. Due to
implementation constraints, scalar wavelets do not possess all the
properties needed for better compression performance. A new class of
wavelets called multiwavelets, which possess more than one scaling
filter, overcomes this problem. The objective of this paper is to
develop an efficient compression scheme that obtains better quality
and a higher compression ratio through the multiwavelet transform and
embedded coding of the multiwavelet coefficients with the Set
Partitioning In Hierarchical Trees (SPIHT) algorithm. The best known
multiwavelets are compared with the best known scalar wavelets. Both
quantitative and qualitative measures of performance are examined for
fingerprints.
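The transform-then-encode idea underlying such schemes can be illustrated in miniature with a single-level Haar (scalar) wavelet: transform, zero the small detail coefficients, reconstruct. This sketch is purely illustrative of wavelet compression in general, not of the multiwavelet/SPIHT pipeline itself; the signal and threshold are made-up values.

```python
def haar_forward(signal):
    """One level of the Haar transform: pairwise averages and details."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg, det

def haar_inverse(avg, det):
    """Invert the single-level Haar transform."""
    out = []
    for a, d in zip(avg, det):
        out.extend([a + d, a - d])
    return out

def compress(signal, threshold):
    """Zero the small detail coefficients; most of the signal energy
    survives in the averages, at the cost of a bounded error."""
    avg, det = haar_forward(signal)
    det = [d if abs(d) >= threshold else 0.0 for d in det]
    return haar_inverse(avg, det)

signal = [10.0, 10.2, 9.9, 10.1, 50.0, 49.6, 10.0, 9.8]
reconstructed = compress(signal, threshold=0.5)
```

An embedded coder such as SPIHT goes further by transmitting the surviving coefficients in significance order, which is what yields good quality at low bit-rates.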
Abstract: Identity verification of authentic persons by their multiview faces is a real-world problem in machine vision. Multiview faces are difficult to handle because of their non-linear representation in the feature space. This paper illustrates the use of a generalization of LDA, in the form of canonical covariates, for recognizing multiview faces. In the proposed work, a Gabor filter bank is used to extract facial features characterized by spatial frequency, spatial locality, and orientation. The Gabor face representation captures a substantial amount of the variation among face instances that often occurs due to changes in illumination, pose, and facial expression. Convolving the Gabor filter bank with face images of rotated profile views produces Gabor faces with high-dimensional feature vectors. Canonical covariates are then applied to the Gabor faces to reduce the high-dimensional feature spaces to low-dimensional subspaces. Finally, support vector machines are trained on the canonical subspaces, which contain the reduced set of features, and perform the recognition task. The proposed system is evaluated on the UMIST face database. The experimental results demonstrate the efficiency and robustness of the proposed system, with high recognition rates.
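A Gabor filter of the kind used to build such a bank is a Gaussian envelope modulating a sinusoidal carrier. The sketch below shows a 1D analogue; a 2D bank at several orientations and frequencies is built the same way. The parameters and toy signal are illustrative assumptions.

```python
import math

def gabor_kernel_1d(sigma, freq, half_width):
    """1D Gabor kernel: a Gaussian envelope (sigma controls spatial
    locality) modulating a cosine carrier (freq is the spatial
    frequency)."""
    return [math.exp(-(x * x) / (2 * sigma * sigma))
            * math.cos(2 * math.pi * freq * x)
            for x in range(-half_width, half_width + 1)]

def convolve_valid(signal, kernel):
    """Plain 'valid-mode' correlation of a signal with the kernel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

kernel = gabor_kernel_1d(sigma=2.0, freq=0.25, half_width=4)
# Response of a toy intensity row to one filter; stacking responses of
# many filters per pixel yields the high-dimensional Gabor features.
row = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0]
response = convolve_valid(row, kernel)
```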
Abstract: This paper provides a key-driver-based conceptual framework that can be used to improve a firm's success in commercializing technology and in new product innovation resulting from collaboration with other organizations through strategic alliances. Based on a qualitative study using an interview approach, strategic alliances of entrepreneurs in the food processing industry in Thailand are explored. This paper describes factors affecting decisions to collaborate through alliances. It identifies four issues: maintaining the efficiency of the value chain for production capability, adapting to present and future competition, careful assessment of the value of outcomes, and management of innovation. We consider five driving factors: resource orientation, assessment of risk, business opportunity, sharing of benefits, and confidence in alliance partners. These factors will be of interest to entrepreneurs and policy makers seeking a further understanding of the direction of business strategies.
Abstract: In this research, heat transfer in a polyethylene
fluidized bed reactor without reaction was studied experimentally
and computationally at different superficial gas velocities. A
multi-fluid Eulerian computational model incorporating the kinetic
theory for solid particles was developed and used to simulate the
heat-conducting gas–solid flows in a fluidized bed configuration.
Momentum exchange coefficients were evaluated using the
Syamlal–O'Brien drag functions. Temperature distributions of different phases
in the reactor were also computed. Good agreement was found
between the model predictions and the experimentally obtained data
for the bed expansion ratio as well as the qualitative gas–solid flow
patterns. The simulation and experimental results showed that the gas
temperature decreases as it moves upward in the reactor, while the
solid particle temperature increases. Pressure drop and temperature
distribution predicted by the simulations were in good agreement
with the experimental measurements at superficial gas velocities
higher than the minimum fluidization velocity. Also, the predicted
time-average local voidage profiles were in reasonable agreement
with the experimental results. The study showed that the
computational model was capable of predicting the heat transfer and
the hydrodynamic behavior of gas-solid fluidized bed flows with
reasonable accuracy.
Abstract: Fifty-three college students answered questions regarding the circumstances in which they first heard the news of the Wenchuan earthquake, or the news of their acceptance to college, events which took place approximately one year earlier, and answered the questions again two years later. The number of details recalled about their circumstances was high for both events and did not decline two years later. However, the consistency of the reported details over the two years was low. Participants were more likely to construct central information (e.g., Where were you?) than peripheral information (e.g., What were you wearing?), and their confidence in the central information was higher than in the peripheral information, indicating that they constructed more when they were more confident.
Abstract: When acid is pumped into damaged reservoirs for
damage removal/stimulation, distorted inflow of acid into the
formation occurs, caused by acid preferentially traveling into highly
permeable regions over less permeable regions, or, in general, into
the path of least resistance. This can lead to poor zonal coverage and
hence warrants diversion to carry out an effective placement of acid.
Diversion is, desirably, a reversible technique of temporarily reducing
the permeability of high-permeability zones, thereby forcing the acid
into lower-permeability zones.
The uniqueness of each reservoir can pose several challenges to
engineers attempting to devise optimum and effective diversion
strategies. Diversion techniques include mechanical placement and/or
chemical diversion of treatment fluids, further sub-classified into ball
sealers, bridge plugs, packers, particulate diverters, viscous gels,
crosslinked gels, relative permeability modifiers (RPMs), foams,
and/or the use of placement techniques, such as coiled tubing (CT)
and the maximum pressure difference and injection rate (MAPDIR)
methodology.
It is not always realized that the effectiveness of diverters greatly
depends on reservoir properties, such as formation type, temperature,
reservoir permeability, heterogeneity, and physical well
characteristics (e.g., completion type, well deviation, length of
treatment interval, multiple intervals, etc.). This paper reviews the
mechanisms by which each variety of diverter functions and
discusses the effect of various reservoir properties on the efficiency
of diversion techniques. Guidelines are recommended to help
enhance productivity from zones of interest by choosing the best
methods of diversion while pumping an optimized amount of
treatment fluid. The success of an overall acid treatment often
depends on the effectiveness of the diverting agents.