Abstract: As fossil fuels continue to deplete, intense research has been devoted to developing hydrogen (H2) as an alternative fuel to meet our tremendous demand for energy. Unlike fossil fuels, which release significant amounts of carbon dioxide (CO2) into the atmosphere and contribute to global warming, H2 has the potential to be the ultimate clean fuel. Experimental work was carried out to study the production of H2 from palm kernel shell steam gasification under different variables, namely heating rate, steam-to-biomass ratio, and adsorbent-to-biomass ratio. A maximum H2 composition of 61% (volume basis) was obtained at a heating rate of 100 °C min⁻¹, a steam/biomass ratio of 2:1, and an adsorbent/biomass ratio of 1:1. The commercial adsorbent was modified using an alcohol-water mixture. The characteristics of both adsorbents were investigated, and it is concluded that the flowability and floodability of the modified CaO are significantly improved.
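For context, sorption-enhanced steam gasification with a CaO adsorbent is conventionally summarized by the standard reactions below (textbook stoichiometry, not equations quoted from the paper); capturing CO2 in situ pulls the water-gas shift equilibrium toward H2, which is why the adsorbent-to-biomass ratio affects the H2 yield:

```latex
\begin{align*}
\mathrm{C + H_2O} &\rightarrow \mathrm{CO + H_2}           && \text{(char steam gasification)}\\
\mathrm{CO + H_2O} &\rightleftharpoons \mathrm{CO_2 + H_2} && \text{(water-gas shift)}\\
\mathrm{CaO + CO_2} &\rightarrow \mathrm{CaCO_3}           && \text{(in-situ CO$_2$ capture)}
\end{align*}
```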
Abstract: This paper demonstrates the possibility of controlling two-phase flows by exploiting unsteady effects. In the first case, we consider conditions under which fragmentation of the interface between the two components leads to intensified mixing. The problem is solved when the temporal and linear scales are too small for a developed mixing layer to appear. It is shown that conditions exist on the unsteady flow velocity at the channel surface that lead to the creation and fragmentation of vortices at Reynolds numbers of order unity. It is also shown that the Reynolds number is not a similarity criterion for this type of flow; instead, a criterion can be introduced that depends on both the Reynolds number and the vortex-splitting frequency. A notable feature of this situation is that the streamlines behave stably, while analysis of the interface between the components shows that it satisfies all the properties of unstable flows. The second problem considers the behavior of solid impurities in an extensive system of channels, in which an unsteady periodic flow modeling breathing is simulated and the behavior of the particles is followed along their trajectories. It is shown that, depending on their mass and diameter, the particles can collect in a caustic on the channel walls, stop at a certain location, or fly back. The frequency distribution of the particle velocity is also of interest. It turns out that, by choosing the behavior of the velocity field of the carrier gas, one can affect the trajectories of individual particles, including forcing them to fly back.
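As a rough illustration of the particle-trajectory result (not the authors' code; a one-dimensional Stokes-drag model with assumed parameter values), the sketch below integrates a particle's motion in a sinusoidally oscillating carrier flow and shows how diameter, through the relaxation time, controls whether the particle follows the gas:

```python
import numpy as np

def track_particle(d, rho_p, u0=1.0, freq=1.0, mu=1.8e-5, t_end=5.0, dt=1e-3):
    """Integrate 1-D particle motion under Stokes drag in an oscillating
    carrier flow u_g(t) = u0*sin(2*pi*freq*t).
    d: particle diameter [m]; rho_p: particle density [kg/m^3]."""
    tau = rho_p * d**2 / (18.0 * mu)      # Stokes relaxation time
    t = np.arange(0.0, t_end, dt)
    x, v = 0.0, 0.0
    xs = np.empty_like(t)
    for i, ti in enumerate(t):
        u_g = u0 * np.sin(2 * np.pi * freq * ti)    # carrier-gas velocity
        v = u_g + (v - u_g) * np.exp(-dt / tau)     # exponential drag update
        x += dt * v                                 # (exact for frozen u_g)
        xs[i] = x
    return t, xs

# Heavy particles (large tau) barely respond to the oscillation; light
# ones relax to the gas velocity and can be driven back toward the inlet.
for d in (1e-6, 1e-5, 1e-4):
    t, xs = track_particle(d, rho_p=1000.0)
    print(f"d = {d:.0e} m, final position = {xs[-1]:.3e} m")
```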
Abstract: Service innovation is a central concern in a fast-changing environment. Owing to shifts in customer demands and advances in information technology (IT) in service management, an expanded conceptualization of e-service innovation is required. In particular, innovation practices have become increasingly challenging, driving managers to employ different open innovation models to maintain competitive advantage. At the same time, firms need to interact with external and internal customers in innovative environments, such as open innovation networks, to co-create value. Based on these issues, a conceptual framework of e-service innovation is developed. This paper aims to examine the factors contributing to e-service innovation and firm performance, including financial and non-financial aspects. The study concludes by showing how e-service innovation can play a significant role in growing the overall value of the firm. The discussion and conclusion lead to a stronger understanding of e-service innovation and of co-creating value with customers within open innovation networks.
Abstract: When binary decision diagrams are formed from
uniformly distributed Monte Carlo data for a large number of
variables, the complexity of the decision diagrams exhibits a
predictable relationship to the number of variables and minterms. In
the present work, a neural network model has been used to analyze the pattern of shortest path length for larger numbers of Monte Carlo data points. The neural model shows strong descriptive power on the ISCAS benchmark data, with an RMS error of 0.102 for the shortest-path-length complexity. The model can therefore be considered a method of predicting path length complexities; this is expected to lead to minimum time complexity for very large-scale integrated circuits and for related computer-aided design tools that use binary decision diagrams.
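A minimal sketch of the modelling step (illustrative only: scikit-learn in place of whatever neural toolkit the authors used, and synthetic stand-in targets where the paper would use measured shortest path lengths):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Each row would be (number of variables, number of minterms) for one
# Monte Carlo BDD, with y the measured shortest path length; the values
# below are synthetic stand-ins used only to make the sketch runnable.
rng = np.random.default_rng(0)
X = rng.integers(4, 30, size=(200, 2)).astype(float)
y = np.log2(1.0 + X[:, 1]) + 0.1 * X[:, 0]        # stand-in target

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16),
                                   max_iter=5000, random_state=0))
model.fit(X, y)
rmse = float(np.sqrt(np.mean((model.predict(X) - y) ** 2)))
print(f"training RMSE: {rmse:.3f}")   # cf. the paper's 0.102 on ISCAS data
```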
Abstract: In illumination-variant face recognition, existing methods that extract the face albedo as a light-normalized image may lose extensive facial detail, since the lighting template is discarded. To improve on this, a novel approach to realistic facial texture reconstruction that combines the original image and the albedo image is proposed. First, lighting subspaces for different identities are established from the given reference face images; then, by projecting the original and albedo images into each lighting subspace, texture reference images with the corresponding lighting are reconstructed, and two texture subspaces are formed. From the projections in the texture subspaces, a facial texture under normal lighting can be synthesized. Because the original image is included in the combination, facial details are preserved along with the face albedo. In addition, image partitioning is applied to improve synthesis performance. Experiments on the Yale B and CMU PIE databases demonstrate that this algorithm outperforms the others both in image representation and in face recognition.
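The projection step the method relies on can be sketched as follows (a generic SVD subspace projection with assumed dimensions, not the authors' implementation):

```python
import numpy as np

def light_subspace(ref_images, k=5):
    """Build a k-dimensional lighting subspace from reference images of
    one identity (each image flattened into a column vector)."""
    A = np.stack([img.ravel() for img in ref_images], axis=1)
    mean = A.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(A - mean, full_matrices=False)
    return mean.ravel(), U[:, :k]

def project(img, mean, U):
    """Reconstruct img from its projection onto the subspace (mean, U)."""
    x = img.ravel() - mean
    return (mean + U @ (U.T @ x)).reshape(img.shape)

# Usage with random stand-ins for 64x64 grayscale reference images:
rng = np.random.default_rng(0)
refs = [rng.random((64, 64)) for _ in range(9)]
mean, U = light_subspace(refs, k=5)
original = rng.random((64, 64))      # stand-in for an original/albedo image
relit = project(original, mean, U)   # texture reference under that lighting
print(relit.shape)
```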
Abstract: Phylogenies, the evolutionary histories of groups of species, are among the most widely used tools throughout the life sciences, as well as objects of research within systematics and evolutionary biology. Every phylogenetic analysis reconstructs trees, which represent the evolutionary histories of many groups of organisms. However, bacteria, owing to horizontal gene transfer, and plants, owing to hybridization, evolve reticulately: gene transfer in bacteria and hybridization in plants lead to reticulate networks, and methods for constructing trees therefore fail to construct reticulate networks. In this paper a model is employed to reconstruct a phylogenetic network for the honey bee. This network represents reticulate evolution in the honey bee, and the maximum parsimony approach is used to obtain it.
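Maximum parsimony scoring on a fixed tree is classically computed with the Fitch algorithm; the sketch below is a textbook version for a single character (the paper's network reconstruction generalizes beyond this):

```python
def fitch_score(tree, states):
    """Fitch small-parsimony score of one character on a rooted tree.
    tree: dict mapping internal node -> (left_child, right_child);
    states: dict mapping leaf name -> set of observed states."""
    changes = 0
    def assign(node):
        nonlocal changes
        if node in states:                 # leaf: return observed states
            return states[node]
        left, right = tree[node]
        s_l, s_r = assign(left), assign(right)
        inter = s_l & s_r
        if inter:
            return inter                   # agreement: no substitution
        changes += 1                       # disagreement: one substitution
        return s_l | s_r
    assign("root")
    return changes

# ((A,B),(C,D)) with nucleotide states at the leaves:
tree = {"root": ("n1", "n2"), "n1": ("A", "B"), "n2": ("C", "D")}
leaves = {"A": {"G"}, "B": {"G"}, "C": {"T"}, "D": {"G"}}
print(fitch_score(tree, leaves))   # -> 1
```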
Abstract: In many applications, it is a priori known that the
target function should satisfy certain constraints imposed by, for
example, economic theory or a human decision maker. Here we
consider partially monotone problems, where the target variable
depends monotonically on some of the predictor variables but not all.
We propose an approach to build partially monotone models based
on the convolution of monotone neural networks and kernel
functions. The results from simulations and a real case study on
house pricing show that our approach has significantly better
performance than partially monotone linear models. Furthermore, the
incorporation of partial monotonicity constraints not only leads to
models that are in accordance with the decision maker's expertise,
but also considerably reduces the model variance in comparison to
standard neural networks with weight decay.
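One common way to make a network monotone in selected inputs, which is the key ingredient such models rely on (architecture details here are assumed for illustration), is to keep the relevant weights non-negative and use increasing activations:

```python
import numpy as np

def softplus(z):
    return np.logaddexp(0.0, z)

def partially_monotone_net(x, params, mono_idx):
    """Two-layer net non-decreasing in the inputs listed in mono_idx.
    Weights feeding monotone inputs pass through softplus so they stay
    non-negative; increasing activations then preserve the ordering."""
    W1, b1, w2, b2 = params
    W1c = W1.copy()
    W1c[:, mono_idx] = softplus(W1c[:, mono_idx])   # force >= 0
    h = np.tanh(x @ W1c.T + b1)                     # increasing activation
    return h @ softplus(w2) + b2                    # non-negative output layer

rng = np.random.default_rng(0)
params = (rng.normal(size=(8, 3)), rng.normal(size=8),
          rng.normal(size=8), 0.0)
x = rng.normal(size=(1, 3))
y0 = partially_monotone_net(x, params, mono_idx=[0])
x_hi = x.copy()
x_hi[0, 0] += 1.0                     # increase the monotone input
y1 = partially_monotone_net(x_hi, params, mono_idx=[0])
print(y0, y1, bool(y1 >= y0))         # output never decreases
```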
Abstract: The use of neural networks is popular in various
building applications such as prediction of heating load, ventilation
rate and indoor temperature. Notably, only a few papers deal with the prediction of indoor carbon dioxide (CO2), which is a very good indicator of indoor air quality (IAQ). In this study, a data-driven modelling method based on a multilayer perceptron network for indoor air carbon dioxide in an apartment building is developed. Temperature and humidity measurements are used as input variables to the network. The motivation for this study derives from the following issues: first, measuring carbon dioxide is expensive; second, sensor power consumption is high, which leads to short operating times for battery-powered sensors. The results show that predicting CO2 concentration from relative humidity and temperature measurements alone is difficult; therefore, additional information is needed.
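A minimal sketch of such a data-driven model (scikit-learn MLP with lagged temperature and humidity features; the series below are synthetic stand-ins for the apartment measurements):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def lagged_features(temp, rh, n_lags=3):
    """Stack current and lagged (T, RH) samples as model inputs."""
    rows = []
    for t in range(n_lags, len(temp)):
        rows.append(np.r_[temp[t - n_lags:t + 1], rh[t - n_lags:t + 1]])
    return np.asarray(rows)

# Synthetic stand-in series; real inputs would be the apartment sensors.
rng = np.random.default_rng(0)
n = 500
temp = 21.0 + 0.01 * rng.normal(0, 0.5, n).cumsum()
rh = 40.0 + 0.01 * rng.normal(0, 1.0, n).cumsum()
co2 = 400.0 + 5.0 * (temp - 21.0) + 2.0 * (rh - 40.0) + rng.normal(0, 5, n)

X = lagged_features(temp, rh, n_lags=3)
y = co2[3:]
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X, y)
print("R^2 on training data:", round(model.score(X, y), 3))
```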
Abstract: In a competitive production environment, critical decisions are based on data obtained by random sampling of product units. The efficiency of these decisions depends on data quality and on its reliability. This leads to the necessity of a reliable measurement system, and the process of examining a measurement system and analysing its errors is known as Measurement System Analysis (MSA). The aim of this research is to establish the necessity of, and to support, extensive development in analysing measurement systems, particularly through Gage Repeatability and Reproducibility (GR&R) studies, to improve physical measurements. Although repeatability and reproducibility gage studies are by now well established in production industries, they are not applied as widely as other measurement system analysis methods. To introduce this method and provide feedback for improving measurement systems, this survey focuses on the ANOVA method as the most widespread way of calculating Repeatability and Reproducibility (R&R).
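The ANOVA method referred to can be sketched as follows for a crossed parts-by-operators study (standard AIAG-style variance component formulas; the demo data are synthetic):

```python
import numpy as np

def gage_rr_anova(data):
    """ANOVA-method Gage R&R variance components.
    data[i, j, k] = measurement of part i by operator j, trial k."""
    p, o, r = data.shape
    grand = data.mean()
    part_m = data.mean(axis=(1, 2))      # per-part means
    oper_m = data.mean(axis=(0, 2))      # per-operator means
    cell_m = data.mean(axis=2)           # part-by-operator cell means

    ss_part = o * r * np.sum((part_m - grand) ** 2)
    ss_oper = p * r * np.sum((oper_m - grand) ** 2)
    ss_int = r * np.sum((cell_m - grand) ** 2) - ss_part - ss_oper
    ss_err = np.sum((data - cell_m[:, :, None]) ** 2)

    ms_part = ss_part / (p - 1)
    ms_oper = ss_oper / (o - 1)
    ms_int = ss_int / ((p - 1) * (o - 1))
    ms_err = ss_err / (p * o * (r - 1))

    repeatability = ms_err                                 # equipment variation
    interaction = max((ms_int - ms_err) / r, 0.0)
    appraiser = max((ms_oper - ms_int) / (p * r), 0.0)     # appraiser variation
    part_to_part = max((ms_part - ms_int) / (o * r), 0.0)
    return {"repeatability": repeatability,
            "reproducibility": appraiser + interaction,
            "GRR": repeatability + appraiser + interaction,
            "part-to-part": part_to_part}

rng = np.random.default_rng(0)
parts = rng.normal(0.0, 2.0, (10, 1, 1))                 # 10 parts
opers = rng.normal(0.0, 0.4, (1, 3, 1))                  # 3 operators
data = parts + opers + rng.normal(0.0, 0.3, (10, 3, 2))  # 2 trials each
print(gage_rr_anova(data))
```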
Abstract: In this paper we describe the design and implementation of a parallel algorithm for data assimilation with the ensemble Kalman filter (EnKF) for the oil reservoir history matching problem. The use of a large number of observations from time-lapse seismic leads to a long turnaround time for the analysis step, in addition to the time-consuming simulations of the realizations. For efficient parallelization it is important to consider parallel computation at the analysis step as well. Our experiments show that parallelizing the analysis step in addition to the forecast step scales well, exploiting the same set of resources with some additional effort.
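For reference, the analysis step being parallelized has the standard stochastic-EnKF form; a serial NumPy sketch with assumed dimensions (not the paper's parallel code) is:

```python
import numpy as np

def enkf_analysis(X, d, H, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.
    X: state ensemble (n x N); d: observations (m,);
    H: observation operator (m x n); R: obs error covariance (m x m)."""
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)       # ensemble anomalies
    HA = H @ A
    # Kalman gain K = P H^T (H P H^T + R)^{-1}, with P = A A^T / (N - 1)
    S = HA @ HA.T / (N - 1) + R
    K = (A @ HA.T / (N - 1)) @ np.linalg.inv(S)
    D = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), R, N).T
    return X + K @ (D - H @ X)                  # updated ensemble

rng = np.random.default_rng(0)
n, m, N = 50, 20, 30                   # state dim, obs dim, ensemble size
X = rng.normal(size=(n, N))
H = np.eye(m, n)                       # observe the first m state variables
R = 0.1 * np.eye(m)
d = rng.normal(size=m)
Xa = enkf_analysis(X, d, H, R, rng)
print(Xa.shape)                        # (50, 30)
```

With many seismic observations, the matrix products over the observation dimension dominate this step, which is what makes its parallelization worthwhile.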
Abstract: A lead compensator is designed to achieve the desired gain and phase margin specifications for time-delay plants stabilized with a fractional-order PID (FO-PID) controller. First, the stability range of the controlled system is determined from stability boundary criteria. The fractional-order controller parameters are then tuned using the stability boundary locus method in the frequency domain, and Bode diagrams are drawn to show that the desired gain and phase margins are attained. Numerical examples are given to illustrate the shapes of the stabilizing region and to show the design procedure.
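For reference, the controller structures involved take the standard forms below (usual notation, assumed rather than quoted from the paper):

```latex
C_{\mathrm{FOPID}}(s) = K_p + \frac{K_i}{s^{\lambda}} + K_d\,s^{\mu},
\qquad
C_{\mathrm{lead}}(s) = K\,\frac{1 + Ts}{1 + \alpha Ts}, \quad 0 < \alpha < 1,
```

where the lead compensator contributes its maximum phase lead \(\varphi_{\max} = \arcsin\bigl((1-\alpha)/(1+\alpha)\bigr)\) at \(\omega_m = 1/(T\sqrt{\alpha})\), which is what allows the phase margin specification to be met.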
Abstract: Pipeline infrastructures normally represent a high investment cost, and a pipeline must be free from risks that could cause environmental hazards and potential threats to personnel safety. Pipeline integrity monitoring and management are therefore crucial for providing unimpeded transportation and avoiding unnecessary production deferment. Proper cleaning and inspection are thus key to safe and reliable pipeline operation, play an important role in a pipeline integrity management program, and have become a standard industry procedure. In view of this, understanding the motion (dynamic behavior), prediction, and control of PIG speed is important in executing pigging operations, as it offers significant benefits such as estimating the PIG's arrival time at the receiving station, planning suitable pigging operations, and improving the efficiency of pigging tasks. The objective of this paper is to review recent developments in speed control systems for pipeline PIGs. The review serves as a quick industrial reference on recent developments in pipeline PIG speed control, invites others to add to or update the list in the future, leading to a knowledge base, and aims to attract the active interest of others in sharing their viewpoints.
Abstract: This paper proposes a delay-dependent leader-following consensus condition for multi-agent systems with both communication delay and probabilistic self-delay. The proposed method employs a suitable piecewise Lyapunov-Krasovskii functional and the average dwell time approach. A new consensus criterion for the systems is established in terms of linear matrix inequalities (LMIs), which can be easily solved by various effective optimization algorithms. A numerical example shows that the proposed method is effective.
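A typical leader-following consensus protocol of the kind analyzed takes the form below (generic notation, assumed for illustration rather than taken from the paper):

```latex
u_i(t) = K \Bigl[ \sum_{j \in \mathcal{N}_i} a_{ij}\bigl(x_j(t-\tau(t)) - x_i(t-\tau(t))\bigr)
       + b_i\bigl(x_0(t-\tau(t)) - x_i(t-\tau(t))\bigr) \Bigr],
```

where \(x_0\) is the leader state, \(\tau(t)\) the communication delay, and the self-delay enters probabilistically; the LMI criterion certifies that every follower state \(x_i\) converges to \(x_0\).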
Abstract: Following laser ablation studies that led to a theory of nuclei confinement by a Debye layer mechanism, we present numerical evaluations for the known stable nuclei in which the Coulomb repulsion is included as a rather minor component, especially for larger nuclei. This paper investigates the physical conditions required for the formation and stability of nuclei, particularly endothermic nuclei with mass numbers greater than that of iron, whose formation is an open astrophysical question. Using the Debye layer mechanism together with the nuclear surface energy, the Fermi energy, and the Coulomb repulsion energy, it is possible to find conditions under which the process of nucleation was permitted in the early universe. Our numerical calculations indicate that all endothermic and exothermic nuclei had formed about 200 seconds after the Big Bang, at a temperature of about 100 keV and in the subrelativistic regime, with nucleon density nearly equal to the normal nuclear density of about 10³⁸ cm⁻³.
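The energies named above have standard textbook forms (given here with the usual coefficients as an assumption; the paper's exact expressions may differ):

```latex
E_F = \frac{\hbar^2}{2 m_N}\,(3\pi^2 n)^{2/3},
\qquad
E_{\mathrm{surf}} = a_S\,A^{2/3},
\qquad
E_{\mathrm{Coul}} = a_C\,\frac{Z(Z-1)}{A^{1/3}},
```

with \(n\) the nucleon number density, \(A\) the mass number, and \(Z\) the proton number; stable nucleation requires the binding contributions to outweigh the Coulomb repulsion and the Fermi pressure.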
Abstract: Global environmental changes lead to an increased frequency and scale of natural disasters, and Taiwan is under the influence of global warming and extreme weather. Vulnerability has therefore increased, and the variability and complexity of disasters have been relatively enhanced. The purpose of this study is to consider the sources and magnitudes of hazard characteristics affecting the tourism industry. Using modern risk management concepts and integrating related domestic and international basic research, the study extends the Taiwan typhoon disaster risk assessment model and its loss evaluation. The loss evaluation index system considers the impact of extreme weather, in particular heavy rain, on the tourism industry in Taiwan. Considering the compound impact of extreme-climate disasters on the tourism industry, we develop a multi-hazard risk assessment model together with strategies and suggestions. The risk analysis results are expected to provide government departments, tourism industry asset owners, insurance companies, and banks with the information on tourism disaster risk necessary for effective natural disaster risk management in the tourism industry.
Abstract: The main aim of this study was to examine whether
people understand indicative conditionals on the basis of syntactic
factors or on the basis of subjective conditional probability. The
second aim was to investigate whether the conditional probability of
q given p depends on the antecedent and consequent sizes or derives
from inductive processes leading to the establishment of a link of plausible co-occurrence between semantically or experientially associated events.
These competing hypotheses have been tested through a 3 x 2 x 2 x 2
mixed design involving the manipulation of four variables: type of
instructions (“Consider the following statement to be true”, “Read the following statement”, and a condition with no conditional statement);
antecedent size (high/low); consequent size (high/low); statement
probability (high/low). The first variable was between-subjects, the
others were within-subjects. The inferences investigated were Modus
Ponens and Modus Tollens. Ninety undergraduates of the Second
University of Naples, without any prior knowledge of logic or
conditional reasoning, participated in this study.
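For reference, the two inference schemata investigated are, in standard notation:

```latex
\text{Modus Ponens: } \frac{p \rightarrow q \qquad p}{q}
\qquad\qquad
\text{Modus Tollens: } \frac{p \rightarrow q \qquad \neg q}{\neg p}
```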
Results suggest that people understand conditionals syntactically rather than probabilistically, even though the perception of the conditional probability of q given p is at least partially involved in the comprehension of conditionals. The results also showed that, in the presence of a conditional syllogism, inferences are not affected by the antecedent or consequent sizes. From a theoretical point of view, these findings suggest that it would be inappropriate to abandon the idea that conditionals are naturally understood in a syntactic way in favor of the idea that they are understood in a probabilistic way.
Abstract: This paper presents a low-voltage low-power differential linear transconductor with near rail-to-rail input swing. Based on the current-mirror OTA topology, the proposed transconductor combines the Flipped Voltage Follower (FVF) technique, which linearizes the transconductor behavior and leads to class-AB linear operation, with the virtual transistor technique, which lowers the effective threshold voltages of the transistors and thereby offers an advantage in terms of low supply requirement. The design of the OTA is discussed; it operates at supply voltages of about ±0.8 V. Simulation results for 0.18 μm TSMC CMOS technology show a good input range of 1 Vpp, a high DC gain of 81.53 dB, and a total harmonic distortion of -40 dB at 1 MHz for a 1 Vpp input. The main aim of this paper is to present and compare a new OTA design with high transconductance, which has the potential to be used in low-voltage applications.
Abstract: Composite steel-concrete slabs using thin-walled
corrugated steel sheets with embossments represent a modern and
effective combination of steel and concrete. However, the design of new types of sheeting requires expensive and time-consuming laboratory testing. The effort to develop a cheaper and faster method has led to many investigations all over
the world. In our paper we compare the results from our experiments
involving vacuum loading, four-point bending and small-scale shear
tests.
Abstract: Next Generation Wireless Network (NGWN) is
expected to be a heterogeneous network which integrates all different
Radio Access Technologies (RATs) through a common platform. A
major challenge is how to allocate users to the most suitable RAT for
them. An optimized solution can maximize the efficient use of radio resources, achieve better performance for service providers, and provide Quality of Service (QoS) at low cost to users. Currently, Radio Resource Management (RRM) is implemented efficiently only for the RAT for which it was developed, which is not suitable for a heterogeneous network. Common RRM (CRRM) has been proposed to manage radio resource utilization in the heterogeneous network. This paper presents a user-level Markov model for a network of three co-located RATs. Load-balancing-based and service-based CRRM algorithms are studied using the presented Markov model, and their performance is compared in terms of traffic distribution, new-call blocking probability, vertical handover (VHO) call dropping probability, and throughput.
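As an illustration of why load balancing matters for new-call blocking (a simple Erlang-B sketch with assumed capacities, not the paper's Markov model):

```python
def erlang_b(traffic, channels):
    """Erlang-B blocking probability for offered traffic (in Erlangs)
    on a group of `channels` channels, via the stable recursion."""
    b = 1.0
    for m in range(1, channels + 1):
        b = traffic * b / (m + traffic * b)
    return b

# Three co-located RATs with assumed capacities (channels per RAT).
capacities = [8, 16, 32]
total_load = 40.0                     # total offered traffic, Erlangs

# Load-balancing-style split: traffic proportional to capacity.
balanced = [total_load * c / sum(capacities) for c in capacities]
# Naive split: equal share to every RAT regardless of capacity.
naive = [total_load / 3] * 3

for name, split in (("balanced", balanced), ("equal", naive)):
    blocking = [erlang_b(a, c) for a, c in zip(split, capacities)]
    avg = sum(b * a for b, a in zip(blocking, split)) / total_load
    print(f"{name:9s} new-call blocking = {avg:.3f}")
```

With these assumed numbers, the capacity-proportional split yields a markedly lower average new-call blocking probability than the equal split, which is the effect the load-balancing-based CRRM algorithm exploits.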
Abstract: The equivalence class subset algorithm is a powerful
tool for solving a wide variety of constraint satisfaction problems and
is based on the use of a decision function which has a very high but
not perfect accuracy. Perfect accuracy is not required in the decision
function as even a suboptimal solution contains valuable information
that can be used to help find an optimal solution. In the hardest
problems, the decision function can break down, leading to a suboptimal solution with more equivalence classes than necessary, which can be viewed as a mixture of good and bad decisions. By choosing a subset of the decisions made in reaching a suboptimal solution, an iterative technique can lead to an optimal solution through a series of steadily improving suboptimal solutions. The goal is to reach an optimal solution as quickly as
possible. Various techniques for choosing the decision subset are
evaluated.
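The iteration described above can be sketched generically (every name here, including the solve and num_classes callables, is a placeholder assumed for illustration):

```python
import random

def iterative_refinement(solve, num_classes, initial_decisions,
                         keep_fraction=0.5, max_rounds=20, seed=0):
    """Generic sketch of the iteration described above. solve(decisions)
    is assumed to rebuild a full (possibly suboptimal) solution from a
    kept subset and return (solution, decisions_made); num_classes
    scores a solution by its number of equivalence classes."""
    rng = random.Random(seed)
    best_sol, best_dec = solve(initial_decisions)
    for _ in range(max_rounds):
        kept = [d for d in best_dec if rng.random() < keep_fraction]
        sol, dec = solve(kept)              # re-solve from partial decisions
        if num_classes(sol) < num_classes(best_sol):
            best_sol, best_dec = sol, dec   # strictly fewer classes: accept
    return best_sol
```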