Abstract: An interesting method to produce calcium carbonate is based on a gas-liquid reaction between carbon dioxide and aqueous solutions of calcium hydroxide. The design parameters for the gas-liquid phase are the flow regime, the individual mass transfer coefficients and the gas-liquid specific interfacial area. Most studies on the gas-liquid phase have been devoted to the experimental determination of some of these parameters, and more specifically of the mass transfer coefficient kLa, which depends fundamentally on the superficial gas velocity and on the physical properties of the absorption phase. The principal investigation was directed at the effect of vibration on the mass transfer coefficient kLa in the gas-liquid phase during absorption of CO2 in an aqueous solution of calcium hydroxide. Vibration at a higher frequency increases the mass transfer coefficient kLa, whereas vibration at a lower frequency does not improve it; the mass transfer coefficient kLa increases with increasing superficial gas velocity.
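The kLa dependence described above can be sketched with the standard two-film absorption model, dC/dt = kLa (C* − C). This is a minimal illustration only; the saturation concentration, kLa value and time step below are assumptions, not values from the paper.

```python
import math

# Hypothetical parameters (not from the paper).
C_star = 0.03   # mol/L, assumed equilibrium (saturation) CO2 concentration
kLa = 0.02      # 1/s, assumed volumetric mass transfer coefficient
dt = 0.1        # s, integration step

# Two-film model dC/dt = kLa * (C* - C), integrated with explicit Euler.
C = 0.0
for _ in range(int(600 / dt)):          # 10 minutes of absorption
    C += dt * kLa * (C_star - C)

# Closed-form solution for comparison: C(t) = C* * (1 - exp(-kLa * t))
C_exact = C_star * (1 - math.exp(-kLa * 600))
```

A larger kLa (e.g. under higher-frequency vibration or higher superficial gas velocity) simply makes C approach C* faster in this model.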
Abstract: PDMS (polydimethylsiloxane) is a suitable material for biological and MEMS (microelectromechanical systems) designers because of its biocompatibility, transparency and high resistance under plasma treatment. PDMS round channels have always been of great interest due to their ability to confine the liquid with membrane-type micro valves. In this paper we present a very simple way to form round-shaped microfluidic channels, based on reflow of the positive photoresist AZ® 40 XT. With this method, it is possible to obtain channels of different heights simply by varying the spin-coating parameters of the photoresist.
Abstract: Recently, a great amount of interest has been shown
in the field of modeling and controlling hybrid systems. One of the
efficient and common methods in this area utilizes mixed logical dynamical
(MLD) systems in the modeling. In this method, the
system constraints are transformed into mixed-integer inequalities by
defining some logic statements. In this paper, a system containing
three tanks is modeled as a nonlinear switched system by using the
MLD framework. Comparing the model size of the three-tank system
with that of a two-tank system, it is deduced that the number of
binary variables, the size of the system and its complexity
increase tremendously with the number of tanks, which makes the
control of the system more difficult. Therefore, methods should be
found which result in fewer mixed-integer inequalities.
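The MLD translation mentioned above turns logic statements into mixed-integer inequalities via big-M bounds. The following is a generic illustration of that translation (it is not the paper's three-tank model): the statement [δ = 1] ↔ [x ≥ h] is encoded with f(x) = h − x as f(x) ≤ M(1 − δ) and f(x) ≥ ε + (m − ε)δ, where M and m bound f over the operating range.

```python
# Generic big-M encoding of [delta = 1] <-> [x >= h], with f(x) = h - x:
#   f(x) <= M * (1 - delta)
#   f(x) >= eps + (m - eps) * delta
# M = max f, m = min f over the range, eps > 0 small. Values are illustrative.
x_min, x_max, h = 0.0, 10.0, 4.0
M = h - x_min          # maximum of f(x) = h - x over [x_min, x_max]
m = h - x_max          # minimum of f(x)
eps = 1e-6

def satisfies(x, delta):
    f = h - x
    return f <= M * (1 - delta) and f >= eps + (m - eps) * delta

# Over a grid, the only feasible delta is the truth value of [x >= h].
ok = True
for i in range(101):
    x = x_min + (x_max - x_min) * i / 100
    want = 1 if x >= h else 0
    ok = ok and satisfies(x, want) and not satisfies(x, 1 - want)
```

Each such logic statement contributes one binary variable and a pair of inequalities, which is why the model size grows quickly with the number of tanks.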
Abstract: Cameron Highlands is a mountainous area subjected
to torrential tropical showers. It extracts 5.8 million liters of water
per day for drinking supply from its rivers at several intake points.
The water quality of rivers in Cameron Highlands, however, has
deteriorated significantly due to land clearing for agriculture,
excessive usage of pesticides and fertilizers as well as construction
activities in rapidly developing urban areas. These non-point
pollution sources are diverse and hard to identify,
and therefore they are difficult to estimate.
Hence, a Geographical Information System (GIS) was used to provide
an extensive approach to evaluate landuse and other mapping
characteristics to explain the spatial distribution of non-point sources
of contamination in Cameron Highlands. The method to assess
pollution sources has been developed by using Cameron Highlands
Master Plan (2006-2010) for integrating GIS, databases, as well as
pollution loads in the area of study. The results show that the highest annual
runoff is generated by forest, 3.56 × 10⁸ m³/yr, followed by urban
development, 1.46 × 10⁸ m³/yr. Furthermore, urban development
causes the highest BOD load (1.31 × 10⁶ kg BOD/yr), while agricultural
activities and forest contribute the highest annual loads of
phosphorus (6.91 × 10⁴ kg P/yr) and nitrogen (2.50 × 10⁵ kg N/yr),
respectively. Therefore, best management practices (BMPs) are
suggested to be applied to reduce pollution level in the area.
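The kind of calculation the study implies can be sketched with a simple export-coefficient approach: annual runoff per land use as area × rainfall × runoff coefficient, and annual pollutant load as area × unit-area export coefficient. All numbers below are hypothetical placeholders, not the paper's data.

```python
# Hypothetical export-coefficient illustration; none of these
# coefficients or areas come from the paper.
rainfall = 2.6            # m/yr, assumed annual rainfall
landuse = {
    # name: (area in m^2, runoff coefficient, BOD export in kg/m^2/yr)
    "forest": (2.0e8, 0.30, 0.0005),
    "urban":  (3.0e7, 0.80, 0.0100),
}

runoff = {k: a * rainfall * c for k, (a, c, _) in landuse.items()}   # m^3/yr
bod    = {k: a * e for k, (a, _, e) in landuse.items()}              # kg BOD/yr
```

With these assumed coefficients the much larger forest area dominates runoff, while the higher urban export coefficient dominates the BOD load, matching the qualitative pattern reported above.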
Abstract: This work consists of three parts. First, the alias-free
condition for the conventional two-channel quadrature mirror filter
bank is analyzed using complex arithmetic. Second, the approach
developed in the first part is applied to the complex quadrature mirror
filter bank. Accordingly, the structure is simplified and the theory is
easier to follow. Finally, a new class of complex quadrature mirror
filter banks is proposed. Interesting properties of this new structure
are also discussed.
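The alias-free condition analyzed in the first part can be checked numerically for the classical real-coefficient two-channel QMF bank: with H1(z) = H0(−z), F0(z) = H0(z) and F1(z) = −H1(z), the alias term (1/2)[H0(−z)F0(z) + H1(−z)F1(z)] vanishes identically. The prototype filter below is an arbitrary example, not one from the paper.

```python
# Check that the classical QMF choice cancels aliasing exactly.

def neg_z(h):
    # Coefficients of H(-z): negate the coefficients of odd powers of z^-1.
    return [(-1) ** n * c for n, c in enumerate(h)]

def conv(a, b):
    # Polynomial product (coefficient convolution).
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

h0 = [0.5, 1.0, 1.0, 0.5]            # an arbitrary lowpass prototype
h1 = neg_z(h0)                       # H1(z) = H0(-z)
f0, f1 = h0, [-c for c in h1]        # synthesis filters F0 = H0, F1 = -H1

# Alias transfer term H0(-z)F0(z) + H1(-z)F1(z); should be identically zero.
alias = [x + y for x, y in zip(conv(neg_z(h0), f0), conv(neg_z(h1), f1))]
```

Since H1(−z) = H0(z) and F1(z) = −H0(−z), the two products cancel term by term, which is what the coefficient check confirms.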
Abstract: The decoding of Low-Density Parity-Check (LDPC) codes operates over a redundant structure known as the bipartite graph, meaning that the full set of bit nodes is not absolutely necessary for decoder convergence. In 2008, Soyjaudah and Catherine designed a recovery algorithm for LDPC codes based on this assumption and showed that their codes outperformed conventional LDPC codes in error-correcting performance. In this work, the use of the recovery algorithm is further explored to test the performance of LDPC codes as the number of iterations is progressively increased. For experiments conducted with small blocklengths of up to 800 bits and up to 2000 iterations, the results interestingly demonstrate that, contrary to conventional wisdom, the error-correcting performance keeps improving with an increasing number of iterations.
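Iterative decoding on a parity-check (bipartite) graph can be illustrated with a minimal hard-decision bit-flipping decoder. This is a generic textbook scheme, not the recovery algorithm of Soyjaudah and Catherine; the tiny (7,4) Hamming parity-check matrix stands in for an LDPC matrix.

```python
# Minimal hard-decision bit-flipping decoding on a parity-check graph.
# Parity-check matrix of the (7,4) Hamming code (a tiny stand-in for
# an LDPC matrix; real LDPC codes are much larger and sparser).
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def decode(word, max_iters):
    word = list(word)
    for _ in range(max_iters):
        syndrome = [sum(h * w for h, w in zip(row, word)) % 2 for row in H]
        if not any(syndrome):
            break                      # all parity checks satisfied
        # Flip the bit involved in the most unsatisfied checks.
        counts = [sum(s for s, row in zip(syndrome, H) if row[j])
                  for j in range(len(word))]
        word[counts.index(max(counts))] ^= 1
    return word

received = [0, 0, 1, 0, 0, 0, 0]       # all-zero codeword with one bit error
decoded = decode(received, max_iters=20)
```

The iteration budget (`max_iters`) plays the role of the iteration count studied in the paper: the decoder keeps refining the word until the syndrome is zero or the budget is exhausted.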
Abstract: There are three approaches to complete Bayesian
Network (BN) model construction: total expert-centred, total data-centred,
and semi data-centred. These three approaches constitute the
basis of the empirical investigation undertaken and reported in this
paper. The objective is to determine, amongst these three
approaches, which is the optimal approach for the construction of a
BN-based model for the performance assessment of students'
laboratory work in a virtual electronic laboratory environment. BN
models were constructed using all three approaches, with respect to
the focus domain, and compared using a set of optimality criteria. In
addition, the impact of the size and source of the training data on the
performance of total data-centred and semi data-centred models was
investigated. The results of the investigation provide additional
insight for BN model constructors and contribute to the literature
by providing supportive evidence for the conceptual feasibility and
efficiency of structure and parameter learning from data. In addition,
the results highlight other interesting themes.
Abstract: A cross-sectional survey design was used to collect
data from 370 diabetic patients. Two instruments were used to obtain
data: an in-depth interview guide and a researcher-developed
questionnaire. Fisher's exact test was used to investigate the association
between the identified factors and nonadherence. Factors identified
were: socio-demographic factors such as: gender, age, marital status,
educational level and occupation; psychosocial obstacles such as:
non-affordability of prescribed diet, frustration due to the restriction,
limited spousal support, feelings of deprivation, feeling that
temptation is inevitable, difficulty in adhering in social gatherings
and difficulty in revealing to a host that one is diabetic; health-care
providers' obstacles were: poor attitude of health workers, irregular
diabetes education in clinics, limited number of nutrition education
sessions/inability of the patients to estimate the desired quantity of
food, no reminder post cards or phone calls about upcoming patient
appointments and delayed start of appointment / time wasting in
clinics.
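The Fisher's exact test mentioned above can be sketched for a 2×2 table from first principles (hypergeometric enumeration). The table below is hypothetical, e.g. nonadherent vs. adherent counts split by gender; it is not the paper's data.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed table.
    """
    n, row1, col1 = a + b + c + d, a + b, a + c
    def prob(x):
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = prob(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Hypothetical table (not from the paper).
p = fisher_exact_2x2(30, 10, 15, 25)
```

A small p-value indicates an association between the factor and nonadherence; in practice a statistics library's implementation would normally be used instead of hand-rolled code.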
Abstract: Using mobile Internet access technologies and e-services,
various economic agents can efficiently offer their products
or services to a large number of clients. With the support of mobile
communications networks, the clients can have access to e-services,
anywhere and anytime. This is a base to establish a convergence of
technological and financial interests of mobile operators, software
developers, mobile terminals producers and e-content providers. In
this paper, a client-server system using 3G/EDGE mobile terminals
for access to Stock Exchange e-services is presented.
Abstract: Modeling of the dynamic behavior and motion has received
renewed interest owing to the improved tractive performance of an
intelligent air-cushion tracked vehicle (IACTV). This paper presents
a new dynamical model for the forces on the developed small scale
intelligent air-cushion tracked vehicle moving over swamp peat. The
air cushion system supports 25% of the vehicle's total weight
in order to keep the ground contact pressure at 7 kN/m². As the
air-cushion support system can adjust automatically to the terrain,
the vehicle can move over the terrain without any risk. The spring-damper
system is used with the vehicle body to control the air-cushion
support system on any undulating terrain by forcing the system into
a sinusoidal form. Experiments have been carried out to
investigate the relationships among tractive efficiency, slippage,
traction coefficient, load distribution ratio, tractive effort, motion
resistance and power consumption in given terrain conditions.
Experimental and simulation results show that the air-cushion system
improves vehicle performance, maintaining a traction coefficient of
71% and a tractive efficiency of 62%, and that the developed model can
meet the demand of transport efficiency with the optimal power
consumption.
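The contact-pressure relation implied above can be checked with simple arithmetic: if the air cushion carries 25% of the total weight, the track carries the rest, so p = 0.75 W / A. The weight and contact area below are hypothetical, chosen only to reproduce the quoted 7 kN/m².

```python
# Worked check of the implied contact-pressure relation.
# W and A are assumed values, not the paper's vehicle data.
W = 2.8    # kN, assumed total vehicle weight
A = 0.3    # m^2, assumed track-ground contact area

cushion_share = 0.25               # fraction of weight carried by the cushion
p = (1 - cushion_share) * W / A    # kN/m^2, track ground contact pressure
```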
Abstract: In this paper a stochastic scenario-based model predictive control scheme applied to molten salt storage systems in a concentrated solar tower power plant is presented. The main goal of this study is to build a tool that analyzes current and expected future resources in order to evaluate the weekly power to be advertised on the electricity secondary market. This tool will allow the plant operator to maximize profits while hedging against the impact on the system of stochastic variables such as resource or sunlight shortage.
Solving the problem first requires a mixed logic dynamic model of the plant. The two stochastic variables, the incoming sunlight energy and the electricity demand from the secondary market, are modeled by least-squares regression. Robustness is achieved by drawing a certain number of realizations of the random variables and applying the most restrictive one to the system. This scenario-approach control technique provides the plant operator with a confidence interval containing a given percentage of the possible stochastic variable realizations, such that robust control is always achieved within its bounds. The results obtained from many trajectory simulations show the existence of a "reliable" interval, which experimentally confirms the robustness of the algorithm.
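The scenario step described above (draw many realizations, apply the most restrictive one) can be sketched as follows. The distribution, efficiency and all numbers are hypothetical, not the plant's parameters.

```python
import random

# Scenario approach sketch: draw N realizations of an uncertain quantity
# (here, daily incoming solar energy) and commit only what remains
# feasible under the most restrictive realization. Numbers are assumed.
random.seed(1)

N = 200
scenarios = [random.gauss(mu=500.0, sigma=80.0) for _ in range(N)]  # MWh/day

# Most restrictive realization: the lowest incoming energy.
worst_case_energy = min(scenarios)

efficiency = 0.4                                # assumed thermal-to-electric
committable = efficiency * worst_case_energy    # MWh/day safe to advertise
```

Increasing N tightens the guarantee: the committed power is feasible for every drawn scenario, which is the sense in which the "reliable" interval covers a given percentage of realizations.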
Abstract: Predicting short-term wind speed is essential in order
to protect systems in operation from the effects of strong winds. It also
helps in using wind energy as an alternative source of energy, mainly
for electrical power generation. Wind speed prediction has
applications in military and civilian fields for air traffic control,
rocket launch, ship navigation, etc. The wind speed in the near future
depends on the values of other meteorological variables, such as
atmospheric pressure, moisture content, humidity, rainfall etc. The
values of these parameters are obtained from a nearest weather
station and are used to train various forms of neural networks. The
trained model of neural networks is validated using a similar set of
data. The model is then used to predict the wind speed, using the
same meteorological information. This paper reports an Artificial
Neural Network model for short term wind speed prediction, which
uses back propagation algorithm.
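A feed-forward network trained by back propagation, in the spirit of the model described above, can be sketched in a few lines. The training pairs below are a hypothetical one-dimensional toy mapping, not real meteorological records, and the architecture is an assumption.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy training pairs: normalised input feature -> normalised target.
data = [(x / 10.0, (x / 10.0) ** 2) for x in range(11)]

H = 3                                   # hidden units (assumed)
w1 = [random.uniform(-1, 1) for _ in range(H)]   # input -> hidden weights
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]   # hidden -> output weights
b2 = 0.0
lr = 0.1                                # learning rate

def forward(x):
    h = [sigmoid(w1[j] * x + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

loss_before = mse()
for _ in range(3000):                   # epochs of stochastic gradient descent
    for x, t in data:
        h, y = forward(x)
        e = y - t                       # output error, propagated backwards
        for j in range(H):
            grad_z = e * w2[j] * h[j] * (1.0 - h[j])  # hidden pre-activation grad
            w2[j] -= lr * e * h[j]
            w1[j] -= lr * grad_z * x
            b1[j] -= lr * grad_z
        b2 -= lr * e
loss_after = mse()
```

The validation step described in the abstract corresponds to evaluating `mse` on a held-out data set rather than the training pairs.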
Abstract: The Niger Delta Region of Nigeria is home to about
20 million people and 40 different ethnic groups. The region has an
area of seventy thousand square kilometers (70,000 km²) of
wetlands, formed primarily by sediments deposition and makes up
7.5 percent of Nigeria's total landmass. The notable ecological zones
in this region include: coastal barrier islands; mangrove swamp
forests; fresh water swamps; and lowland rainforests. This incredibly
naturally-endowed ecosystem region, which contains one of the
highest concentrations of biodiversity on the planet, in addition to
supporting abundant flora and fauna, is threatened by the inhuman act
known as gas flaring. Gas flaring is the combustion of natural gas
that is associated with crude oil when it is pumped up from the
ground. In petroleum-producing areas such as the Niger Delta region
of Nigeria where insufficient investment was made in infrastructure
to utilize natural gas, flaring is employed to dispose of this associated
gas. This practice has impoverished the communities where it is
practiced, with attendant environmental, economic and health
challenges. This paper discusses the adverse environmental and
health implications associated with the practice, and the roles of
government, policy makers, oil companies and local
communities in bringing this inhuman practice to a prompt end.
Abstract: This paper applies Bayesian Networks to support
information extraction from unstructured, ungrammatical, and
incoherent data sources for semantic annotation. A tool has been
developed that combines ontologies, machine learning, and
information extraction and probabilistic reasoning techniques to
support the extraction process. Data acquisition is performed with the
aid of knowledge specified in the form of an ontology. Due to the
variable size of information available on different data sources, it is
often the case that the extracted data contains missing values for
certain variables of interest. It is desirable in such situations to
predict the missing values. The methodology, presented in this paper,
first learns a Bayesian network from the training data and then uses it
to predict missing data and to resolve conflicts. Experiments have
been conducted to analyze the performance of the presented
methodology. The results look promising as the methodology
achieves a high degree of precision and recall for information
extraction and reasonably good accuracy for predicting missing
values.
Abstract: The use of renewable energy sources is becoming more
necessary and attractive. Wider application of renewable energy
devices at domestic, commercial and industrial levels has not only
resulted in greater awareness, but also in significant installed
capacity. In addition, biomass, principally in the form of wood, has
been used by humans as a form of energy for a long time. Gasification is
a process of conversion of solid carbonaceous fuel into combustible
gas by partial combustion. Many gasifier models operate under various
conditions, and the parameters kept in each model are different.
This study applied experimental data with three inputs: biomass
consumption, temperature at the combustion zone and ash discharge
rate. The single output is the gas flow rate. In this paper, a neural
network was used to identify the gasifier system from the
experimental data. The results show that a neural network can be
used to obtain the answer.
Abstract: The term hybrid composite refers to a composite
containing more than one type of fiber material as reinforcing filler.
It has become an attractive structural material due to its ability to
provide a better combination of properties than single-fiber
composites. Their eco-friendly nature as well as processing
advantages, light weight and low cost have enhanced the attraction
and interest of natural-fiber-reinforced composites. The objective of
present research is to study the mechanical properties of jute-coir
fiber reinforced hybrid polypropylene (PP) composite according to
filler loading variation. In the present work composites were
manufactured by using hot press machine at four levels of fiber
loading (5, 10, 15 and 20 wt %). Jute and coir fibers were utilized at a
ratio of (1:1) during composite manufacturing. Tensile, flexural,
impact and hardness tests were conducted for mechanical
characterization. Tensile tests of the composites showed a decreasing trend
in tensile strength and an increasing trend in Young's modulus with
increasing fiber content. During flexural, impact and hardness tests,
the flexural strength, flexural modulus, impact strength and hardness
were found to be increased with increasing fiber loading. Based on
the fiber loadings used in this study, the 20% fiber-reinforced composite
yielded the best set of mechanical properties.
Abstract: The number of frameworks conceived for e-learning
constantly increases; unfortunately, the creators of learning materials
and the educational institutions engaged in e-learning adopt a
"proprietary" approach, where the developed products (courses,
activities, exercises, etc.) can be exploited only in the framework
where they were conceived; their use in other learning
environments requires a costly adaptation in terms of time and
effort. Each framework proposes courses whose organization, contents,
modes of interaction and presentation are the same for all learners;
unfortunately, learners are heterogeneous and are not interested in
the same information, but only in services or documents adapted to
their needs. The current tendency for frameworks
conceived for e-learning is the interoperability of learning materials.
Several standards exist (DCMI (Dublin Core Metadata Initiative) [2],
LOM (Learning Object Metadata) [1], SCORM (Shareable Content
Object Reference Model) [6][7][8], ARIADNE (Alliance of Remote
Instructional Authoring and Distribution Networks for Europe) [9],
CANCORE (Canadian Core Learning Resource Metadata
Application Profiles) [3]); they all converge on the idea of learning
objects. They are also interested in adapting the learning
materials to the learners' profiles. This article proposes an
approach for the composition of courses adapted to the various
profiles (knowledge, preferences, objectives) of learners, based on
two ontologies (the domain to be taught and an educational ontology) and the learning
objects.
Abstract: This paper suggests an improved integer frequency
offset (IFO) estimation scheme using the P1 symbol for the orthogonal
frequency division multiplexing (OFDM)-based second-generation
terrestrial digital video broadcasting (DVB-T2) system. The proposed
IFO estimator is a low-complexity blind IFO estimation
scheme implemented with complex additions. We also
propose an active carrier (AC) selection scheme in order to prevent
performance degradation in blind IFO estimation. The simulation
results show that under the AWGN and TU6 channels, the proposed
method has lower complexity than the conventional method and almost
identical performance in comparison with the conventional method.
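The general idea behind blind integer frequency offset estimation can be sketched as follows: the receiver tries each candidate integer shift of the received frequency-domain samples against the known active-carrier pattern and picks the best match. This is a generic sketch, not the paper's P1-symbol estimator; the FFT size, carrier positions and pilot phases are all assumed.

```python
import cmath

K = 64                                  # FFT size (assumed)
active = [3, 11, 20, 34, 41, 55]        # assumed active-carrier positions
# Assumed pilot values on the active carriers.
pilot = {k: cmath.exp(2j * cmath.pi * k * k / K) for k in active}

true_ifo = 5
# Noise-free received spectrum: pilots shifted by the integer offset.
Y = [0j] * K
for k, v in pilot.items():
    Y[(k + true_ifo) % K] = v

def estimate_ifo(Y, candidates):
    # Correlate the shifted spectrum against the known pilot pattern
    # and return the candidate offset with the largest metric.
    best, best_metric = None, -1.0
    for g in candidates:
        m = abs(sum(Y[(k + g) % K] * pilot[k].conjugate() for k in active))
        if m > best_metric:
            best, best_metric = g, m
    return best

ifo_hat = estimate_ifo(Y, range(-8, 9))
```

In a noisy channel the metric is degraded rather than exact, which is where careful active-carrier selection, as proposed in the paper, matters.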
Abstract: In this paper we present a method for gene ranking
from DNA microarray data. More precisely, we calculate the correlation
networks, which are unweighted and undirected graphs, from
microarray data of cervical cancer, where each network represents
a tissue of a certain tumor stage and each node in the network
represents a gene. From these networks we extract one tree for
each gene by a local decomposition of the correlation network. The
interpretation of a tree is that it represents the n-nearest neighbor
genes on the n-th level of a tree, measured by the Dijkstra distance,
and, hence, gives the local embedding of a gene within the correlation
network. For the obtained trees we measure the pairwise similarity
between trees rooted by the same gene from normal to cancerous
tissues. This evaluates the modification of the tree topology due to
progression of the tumor. Finally, we rank the obtained similarity
values from all tissue comparisons and select the top ranked genes.
For these genes the local neighborhood in the correlation networks
changes most between normal and cancerous tissues. As a result,
we find that the top-ranked genes are candidates suspected to be
involved in tumor growth, which indicates that our method
captures essential information from the underlying DNA microarray
data of cervical cancer.
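The local decomposition described above can be sketched directly: in an unweighted graph the Dijkstra distance reduces to breadth-first search, so the tree rooted at a gene is given by its BFS level sets (level n holds the n-nearest-neighbor genes). The toy graph below is hypothetical.

```python
from collections import deque

# Hypothetical unweighted correlation network (gene adjacency lists).
graph = {
    "g1": ["g2", "g3"],
    "g2": ["g1", "g4"],
    "g3": ["g1", "g4"],
    "g4": ["g2", "g3", "g5"],
    "g5": ["g4"],
}

def bfs_levels(graph, root):
    # BFS distances from the root; in an unweighted graph these equal
    # the Dijkstra distances used in the paper.
    dist = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    levels = {}
    for node, d in dist.items():
        levels.setdefault(d, set()).add(node)
    return levels

tree_g1 = bfs_levels(graph, "g1")
```

Comparing `bfs_levels` output for the same root gene across networks from different tumor stages is the kind of tree comparison the ranking is built on.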
Abstract: The C control chart assumes that process nonconformities follow a Poisson distribution. In actuality, however, this Poisson distribution does not always hold. A process control scheme for semiconductors based on a Poisson distribution always underestimates the true average number of nonconformities and the process variance. Quality is described more accurately if a compound Poisson process is used for process control in this case. A cumulative sum (CUSUM) control chart is much better than a C control chart when a small shift is to be detected. This study calculates one-sided CUSUM average run lengths (ARLs) using a Markov chain approach to construct a CUSUM control chart with an underlying Poisson-Gamma compound distribution for the failure mechanism. Moreover, an actual data set from a wafer plant is used to demonstrate the operation of the proposed model. The results show that the CUSUM control chart achieves significantly better performance than the EWMA chart.
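The Markov-chain ARL calculation mentioned above can be sketched for a plain Poisson count (the Poisson-Gamma compound case is analogous, with the compound pmf substituted). States are the integer CUSUM values 0..h−1, and reaching h or more is the absorbing (signal) state. The parameters below are illustrative, not the paper's.

```python
import math

k, h = 2, 4            # reference value and decision interval (assumed)

def poisson_pmf(x, mu):
    return math.exp(-mu) * mu ** x / math.factorial(x)

def arl(mu, sweeps=20000):
    # One-sided CUSUM S' = max(0, S + X - k), signal when S' >= h.
    # Build transition probabilities between the transient states 0..h-1.
    Q = [[0.0] * h for _ in range(h)]
    for s in range(h):
        for x in range(s + h + k):          # covers all x with S' < h
            t = max(0, s + x - k)
            if t < h:
                Q[s][t] += poisson_pmf(x, mu)
    # ARL_s = 1 + sum_t Q[s][t] * ARL_t, solved by fixed-point iteration
    # (Q is strictly substochastic, so the iteration converges).
    A = [0.0] * h
    for _ in range(sweeps):
        A = [1.0 + sum(Q[s][t] * A[t] for t in range(h)) for s in range(h)]
    return A[0]                              # ARL starting from S = 0

arl_in = arl(mu=1.0)    # in-control mean (assumed)
arl_out = arl(mu=3.0)   # shifted mean (assumed)
```

A well-designed chart shows exactly this asymmetry: a long in-control ARL (few false alarms) and a short out-of-control ARL (fast detection of the shift).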