Abstract: Mixture formation prior to ignition plays a key role in diesel combustion. Parametric studies of mixture formation and the ignition process under various injection parameters have received considerable attention for their potential to reduce emissions. The purpose of this study is to clarify the effects of injection pressure on mixture formation and ignition, especially during the ignition delay period, which significantly influences the subsequent combustion process and exhaust emissions. This study fundamentally investigated the effects of injection pressure on diesel combustion using a rapid compression machine. The detailed behavior of mixture formation during the ignition delay period was investigated using a schlieren photography system with a high-speed camera. This method can clearly capture spray evaporation, spray interference, mixture formation, and flame development in real images. The ignition process and flame development were investigated by direct photography using a light-sensitive high-speed color digital video camera. Injection pressure and air motion are important variables that strongly affect fuel evaporation and the endothermic and pyrolysis processes during the ignition delay. An increased injection pressure lengthens spray tip penetration and promotes a greater amount of fuel-air mixing during the ignition delay. The greater quantity of fuel prepared during the ignition delay period in turn promotes more rapid heat release.
Abstract: With the rapid popularization of internet services, it is apparent that the next generation of terrestrial communication systems must be capable of supporting various applications such as voice, video, and data. This paper presents a performance evaluation of turbo-coded mobile terrestrial communication systems, which are capable of providing high-quality services for delay-sensitive (voice or video) and delay-tolerant (text transmission) multimedia applications in urban and suburban areas. Different types of multimedia information require different service qualities, which are generally expressed in terms of a maximum acceptable bit-error-rate (BER) and maximum tolerable latency. The breakthrough discovery of turbo codes allows us to significantly reduce the probability of bit errors with feasible latency. In a turbo-coded system, a trade-off between latency and BER results from the choice of convolutional component codes, interleaver type and size, decoding algorithm, and the number of decoding iterations. This trade-off can be exploited for multimedia applications by using optimal and suboptimal combinations of performance parameters to achieve different service qualities. The results therefore suggest an adaptive framework for turbo-coded wireless multimedia communications that incorporates a set of performance parameters achieving an appropriate set of service qualities, depending on the application's requirements.
Abstract: Influence Diagrams (IDs) are a kind of probabilistic belief network for graphical modeling. The use of IDs can improve communication among field experts, modelers, and decision makers by presenting the issue under discussion from a high-level point of view. This paper enhances the Time-Sliced Influence Diagrams (TSIDs, also called Dynamic IDs) formalism from a Discrete Event Systems Modeling and Simulation (DES M&S) perspective for Exploring Analysis (EA) modeling. The enhancements enable a modeler to specify the occurrence times of endogenous events dynamically, with stochastic sampling as the model runs, and to describe the inter-influences among them with variable nodes in dynamic situations that the existing TSIDs fail to capture. The new class of model is named Dynamic-Stochastic Influence Diagrams (DSIDs). The paper includes a description of the modeling formalism and the hierarchical simulators implementing its simulation algorithm, and presents a case study to illustrate the enhancements.
Abstract: With the development of the Internet, e-commerce has become very popular among organizations. E-commerce means buying and selling products and services over the Internet. One of the challenging issues in e-commerce is how to attract customers and how to satisfy them; it is therefore important to maintain a good relationship with the customers. This paper proposes a new model to increase customer satisfaction by introducing a live operator. The live operator is a system that mediates between the customers and the organization, so that the customers feel they receive the service directly from the organization. This model decreases the response time and the customer loss. Moreover, it increases customer trust and the capability of organizations.
Abstract: Information and Communication Technologies (ICTs) and the World Wide Web (WWW) have fundamentally altered the practice of teaching and learning worldwide. Many universities, organizations, colleges, and schools are trying to reap the benefits of emerging ICT. In the early nineties the term learning object was introduced into the instructional technology vernacular, the idea being that educational resources could be broken into modular components for later combination by instructors, learners, and eventually computers into larger structures that would support learning [1]. However, in many developing countries the use of ICT is still in its infancy and the concept of the learning object is quite new. This paper outlines learning object design considerations for developing countries, depending on the learning environment.
Abstract: The dynamic spectrum allocation solutions such as
cognitive radio networks have been proposed as a key technology to
exploit the frequency segments that are spectrally underutilized.
Cognitive radio users work as secondary users who need to
constantly and rapidly sense the presence of primary users or
licensees to utilize their frequency bands if they are inactive. Short
sensing cycles should be run by the secondary users to achieve
higher throughput rates as well as to provide low level of interference
to the primary users by immediately vacating their channels once
they have been detected. In this paper, the throughput-sensing time
relationship in local and cooperative spectrum sensing has been
investigated under two distinct scenarios, namely, constant primary
user protection (CPUP) and constant secondary user spectrum
usability (CSUSU) scenarios. The simulation results show that the design of the sensing slot duration is critical and depends on the number of cooperating users under the CPUP scenario, whereas under CSUSU, adding more cooperating users has no effect if the sensing time exceeds 5% of the total frame duration.
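The throughput-sensing time trade-off described above can be illustrated numerically. The sketch below assumes an energy detector with the detection probability pinned at a target (a CPUP-style constraint), so the false-alarm probability, and hence the usable throughput, depends on the sensing time; the frame length, sampling rate, SNR, and capacity constant are illustrative assumptions, not the paper's settings.

```python
from statistics import NormalDist

ND = NormalDist()
Q = lambda x: 1.0 - ND.cdf(x)          # Gaussian tail probability
Qinv = lambda p: ND.inv_cdf(1.0 - p)   # inverse of Q

def throughput(tau, T=0.1, fs=6e6, snr_db=-15.0, pd_target=0.9, c0=6.66):
    """Normalized secondary throughput for an energy detector when the
    detection probability is pinned at pd_target (constant primary user
    protection); the false-alarm probability then follows from the
    sensing time tau (seconds) out of a frame of length T."""
    gamma = 10 ** (snr_db / 10)            # primary SNR at the detector
    n = tau * fs                           # number of sensing samples
    pf = Q((2 * gamma + 1) ** 0.5 * Qinv(pd_target) + gamma * n ** 0.5)
    return (T - tau) / T * (1 - pf) * c0   # transmit in the rest of the frame

# Sweeping the sensing time shows the trade-off: throughput first rises
# (fewer false alarms), then falls (less of the frame left to transmit).
taus = [t * 1e-3 for t in range(1, 50)]
rates = [throughput(t) for t in taus]
best = taus[rates.index(max(rates))]
```

Under a CSUSU-style constraint one would instead pin the false-alarm probability and let the detection probability vary with the sensing time.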
Abstract: In this research, heat transfer in a polyethylene fluidized bed reactor without reaction was studied experimentally and computationally at different superficial gas velocities. A multifluid Eulerian computational model incorporating the kinetic theory for solid particles was developed and used to simulate the heat-conducting gas–solid flows in a fluidized bed configuration. Momentum exchange coefficients were evaluated using the Syamlal–O'Brien drag functions. Temperature distributions of the different phases
in the reactor were also computed. Good agreement was found
between the model predictions and the experimentally obtained data
for the bed expansion ratio as well as the qualitative gas–solid flow
patterns. The simulation and experimental results showed that the gas
temperature decreases as it moves upward in the reactor, while the
solid particle temperature increases. Pressure drop and temperature
distribution predicted by the simulations were in good agreement
with the experimental measurements at superficial gas velocities
higher than the minimum fluidization velocity. Also, the predicted
time-average local voidage profiles were in reasonable agreement
with the experimental results. The study showed that the
computational model was capable of predicting the heat transfer and
the hydrodynamic behavior of gas-solid fluidized bed flows with
reasonable accuracy.
Abstract: This research paper deals with the implementation of face recognition using a neural network (recognition classifier) on low-resolution images. The proposed system contains two parts, preprocessing and face classification. The preprocessing part converts original images into blurry images using an average filter and equalizes the histogram of those images (lighting normalization). A bi-cubic interpolation function is applied to the equalized image to obtain the resized image. The resized image is a low-resolution image, providing faster processing for training and testing. The preprocessed image becomes the input to the neural network classifier, which uses the back-propagation algorithm to recognize familiar faces. The crux of the proposed algorithm is its use of a single neural network as the classifier, which yields a straightforward approach to face recognition. The single neural network consists of three layers with log-sigmoid, hyperbolic tangent sigmoid, and linear transfer functions respectively. The training function incorporated in our work is gradient descent with momentum (adaptive learning rate) back-propagation. The proposed algorithm was trained on the ORL (Olivetti Research Laboratory) database with 5 training images. The empirical results give accuracies of 94.50%, 93.00% and 90.25% for 20, 30 and 40 subjects respectively, with a time delay of 0.0934 s per image.
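A minimal NumPy-only sketch of this preprocessing chain: the 3×3 average filter and histogram equalization follow the description above, while the output size and the nearest-neighbour shrink are stand-ins for the paper's bi-cubic interpolation (which would normally come from an image library).

```python
import numpy as np

def preprocess(img, out_size=(23, 28)):
    """Blur an 8-bit grayscale face image with a 3x3 average filter,
    equalize its histogram (lighting normalization), then shrink it to
    a low-resolution classifier input.  The output size and the
    nearest-neighbour resize are illustrative assumptions."""
    img = img.astype(np.float64)

    # 3x3 average (box) filter via edge padding and shifted sums.
    p = np.pad(img, 1, mode='edge')
    blur = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

    # Histogram equalization: map each gray level through the
    # normalized cumulative histogram.
    q = np.clip(blur, 0, 255).astype(np.uint8)
    hist = np.bincount(q.ravel(), minlength=256)
    cdf = hist.cumsum() / q.size
    eq = (255 * cdf[q]).astype(np.uint8)

    # Nearest-neighbour shrink down to the low-resolution input.
    rows = np.linspace(0, eq.shape[0] - 1, out_size[0]).astype(int)
    cols = np.linspace(0, eq.shape[1] - 1, out_size[1]).astype(int)
    return eq[np.ix_(rows, cols)]

# Dummy image with the 92x112 dimensions of an ORL face.
face = (np.arange(92 * 112) % 256).astype(np.uint8).reshape(112, 92)
small = preprocess(face)
```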
Abstract: Over the years, many implementations have been proposed for solving Interval Algebra (IA) networks. These implementations are concerned with finding a solution efficiently. The primary goal of our implementation is simplicity and ease of use.
We present an IA network implementation based on finite domain
non-binary CSPs, and constraint logic programming. The
implementation has a GUI which permits the drawing of arbitrary IA
networks. We then show how the implementation can be extended to
find all the solutions to an IA network. One application of finding all the solutions is solving probabilistic IA networks.
Abstract: Interactions among proteins are the basis of various life events, so it is important to recognize and study protein interaction sites. A control set containing 149 protein molecules was used here. Ten features were extracted, and 4 sample sets containing 9 sliding windows were constructed according to the features. These 4 sample sets were processed by Radial Basis Function neural networks optimized by Particle Swarm Optimization, yielding 4 groups of results. Finally, these 4 groups of results were integrated by decision fusion (DF) and Genetic Algorithm based Selected Ensemble (GASEN). Better accuracy was obtained with DF and GASEN, demonstrating that the integrated methods are effective.
Abstract: The aim of this paper is to propose a mathematical
model to determine invariant sets, set covering, orbits and, in
particular, attractors in the set of tourism variables. Analysis was
carried out based on a pre-designed algorithm and applying our
interpretation of chaos theory developed in the context of General
Systems Theory. This article sets out the causal relationships
associated with tourist flows in order to enable the formulation of
appropriate strategies. Our results can be applied to numerous cases.
For example, in the analysis of tourist flows, these findings can be
used to determine whether the behaviour of certain groups affects that
of other groups and to analyse tourist behaviour in terms of the most
relevant variables. Unlike statistical analyses that merely provide
information on current data, our method uses orbit analysis to
forecast, if attractors are found, the behaviour of tourist variables in
the immediate future.
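As a toy illustration of the orbit analysis mentioned above (the map below is illustrative, not the paper's tourism model): iterating a discrete map traces an orbit, and when an attractor exists the orbit's tail settles onto it, which is what makes forecasting possible.

```python
def orbit(f, x0, n=200):
    """Trace the orbit of x0 under repeated application of the map f."""
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

# Logistic map with r = 2.8: every orbit starting in (0, 1) is drawn to
# the fixed-point attractor x* = 1 - 1/r, so the tail of the orbit is a
# forecast of the variable's long-run behaviour.
f = lambda x: 2.8 * x * (1 - x)
tail = orbit(f, 0.123)[-10:]
x_star = 1 - 1 / 2.8
```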
Abstract: Water, soil, and sediment contaminated with metolachlor pose a threat to the environment and human health. We determined the effectiveness of nano-zerovalent iron (NZVI) in dechlorinating metolachlor [2-chloro-N-(2-ethyl-6-methylphenyl)-N-(1-methoxypropan-2-yl)acetamide] in solutions of varying pH and in the presence of an aluminium salt. The optimum dosage for degradation of 100 mg L^-1 metolachlor was 1% (w/v) NZVI. The observed degradation rate constant (k_obs) was 0.218×10^-3 min^-1 and the specific first-order rate constant (k_SA) was 8.72×10^-7 L m^-2 min^-1. When aqueous solutions of metolachlor were treated with NZVI, the metolachlor destruction rate increased as the pH decreased from 10 to 4. Lowering the solution pH removes Fe(III) passivating layers from the NZVI and frees its surface for reductive transformations. Destruction rate constants were 20.8×10^-3 min^-1 at pH 4, 18.9×10^-3 min^-1 at pH 7, and 13.8×10^-3 min^-1 at pH 10. In addition, the destruction kinetics of metolachlor by NZVI were enhanced when aluminium sulfate was added: the rate constants were 20.4×10^-3 min^-1 for 0.05% Al2(SO4)3 and 60×10^-3 min^-1 for 0.1% Al2(SO4)3.
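The rate constants above describe pseudo-first-order decay, so relative destruction rates can be compared directly through half-lives. A small sketch using the abstract's pH-dependent values (the 60-minute horizon is an arbitrary example):

```python
import math

# Pseudo-first-order destruction rate constants for metolachlor with
# 1% (w/v) NZVI, in min^-1, as reported above.
K_OBS = {'pH 4': 20.8e-3, 'pH 7': 18.9e-3, 'pH 10': 13.8e-3}

def remaining_fraction(k, t_min):
    """First-order decay: C(t)/C0 = exp(-k t)."""
    return math.exp(-k * t_min)

def half_life(k):
    """t_1/2 = ln 2 / k for a first-order reaction."""
    return math.log(2) / k

# Faster destruction at lower pH appears directly as a shorter half-life.
halves = {ph: half_life(k) for ph, k in K_OBS.items()}
left_after_1h = remaining_fraction(K_OBS['pH 4'], 60)   # fraction remaining
```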
Abstract: This work is focused on the steady boundary layer flow
near the forward stagnation point of plane and axisymmetric bodies
towards a stretching sheet. The no slip condition on the solid
boundary is replaced by the partial slip condition. The analytical
solutions for the velocity distributions are obtained for the various
values of the ratio of free stream velocity and stretching velocity, slip
parameter, the suction and injection velocity parameter, magnetic
parameter, and dimensionality index parameter in series form with the help of the homotopy analysis method (HAM). Convergence of the
series is explicitly discussed. Results show that the flow and the skin
friction coefficient depend heavily on the velocity slip factor. In
addition, the effects of all the parameters mentioned above were more
pronounced for plane flows than for axisymmetric flows.
Abstract: This paper presents a web-based remote access microcontroller laboratory. Owing to accelerated developments in electronics and computer technologies, microcontroller-based devices and appliances are found in all aspects of our daily life. Before implementing the remote access microcontroller laboratory, an experiment set was developed by teaching staff for microcontroller training. The requirements of technical teaching and industrial applications were considered when the experiment set was designed. Students can perform the experiments by connecting to the experiment set, which is attached to a computer configured as the web server. The students can program the microcontroller, control digital and analog inputs, and observe the experiment. The laboratory experiment web page can be accessed at www.elab.aku.edu.tr.
Abstract: End milling is one of the common metal cutting operations used for machining parts in the manufacturing industry. It is usually performed at the final stage of manufacturing a product, so the surface roughness of the produced job plays an important role. In general, surface roughness affects wear resistance, ductility, and tensile and fatigue strength of machined parts, and cannot be neglected in design. In the present work an experimental investigation of end milling of an aluminium alloy with a carbide tool is carried out, and the effect of different cutting parameters on the response is studied with three-dimensional surface plots. An artificial neural network (ANN) is used to establish the relationship between the surface roughness and the input cutting parameters (spindle speed, feed, and depth of cut). The Matlab ANN toolbox, working on a feed-forward back-propagation algorithm, is used for the modeling. A 3-12-1 network structure having the minimum average prediction error was found to be the best network architecture for predicting surface roughness. The network predicts surface roughness well for unseen data. For a desired surface finish of the component to be produced, many different combinations of cutting parameters are available; the optimum cutting parameters for obtaining the desired surface finish while maximizing tool life are predicted. The methodology is demonstrated, a number of problems are solved, and the algorithm is coded in Matlab®.
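A minimal NumPy sketch of the 3-12-1 network and gradient descent with momentum described above; the training data are synthetic stand-ins for the cutting-parameter measurements, and the hand-written loop replaces the Matlab toolbox (no adaptive learning rate here).

```python
import numpy as np

rng = np.random.default_rng(0)

# 3 inputs (spindle speed, feed, depth of cut), 12 tanh hidden units,
# 1 linear output (surface roughness) -- the 3-12-1 structure.
W1, b1 = rng.normal(0, 0.5, (12, 3)), np.zeros((12, 1))
W2, b2 = rng.normal(0, 0.5, (1, 12)), np.zeros((1, 1))

def forward(X):
    H = np.tanh(W1 @ X + b1)    # hidden layer (tansig)
    return W2 @ H + b2, H       # linear output layer (purelin)

# Synthetic cutting conditions scaled to [0, 1] and a made-up target.
X = rng.random((3, 40))
y = 0.8 * X[0:1] - 0.5 * X[1:2] + 0.3 * X[2:3]

mse0 = float(((forward(X)[0] - y) ** 2).mean())

# Gradient descent with momentum on the mean-squared error.
lr, mom = 0.01, 0.9
v = [np.zeros_like(p) for p in (W1, b1, W2, b2)]
for _ in range(3000):
    out, H = forward(X)
    d2 = 2 * (out - y) / X.shape[1]              # dMSE/d(output)
    dh = (W2.T @ d2) * (1 - H ** 2)              # back-propagated to hidden
    grads = [dh @ X.T, dh.sum(1, keepdims=True),
             d2 @ H.T, d2.sum(1, keepdims=True)]
    for i, (p, g) in enumerate(zip((W1, b1, W2, b2), grads)):
        v[i] = mom * v[i] - lr * g               # momentum update
        p += v[i]

mse = float(((forward(X)[0] - y) ** 2).mean())
```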
Abstract: Large scale climate signals and their teleconnections can influence hydro-meteorological variables on a local scale. Several extreme flow and timing measures, including high flow and low flow measures, from 62 hydrometric stations in Canada are investigated to detect possible linkages with several large scale climate indices. The streamflow data used in this study are derived from the Canadian Reference Hydrometric Basin Network and are characterized by relatively pristine and stable land-use conditions with a minimum of 40 years of record. A composite analysis approach was used to identify linkages between extreme flow and timing measures and climate indices. The approach involves determining the 10 highest and 10 lowest values of various climate indices from the data record. Extreme flow and timing measures for each station were examined for the years associated with the 10 largest values and the years associated with the 10 smallest values. In each case, a re-sampling approach was applied to determine if the 10 values of extreme flow measures differed significantly from the series mean. Results indicate that several stations are impacted by the large scale climate indices considered in this study. The results allow the determination of any relationship between stations that exhibit a statistically significant trend and stations for which the extreme measures exhibit a linkage with the climate indices.
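The composite/resampling step can be sketched as follows; the 40-year index and flow series are synthetic stand-ins, not the Canadian Reference Hydrometric Basin Network data, and k = 10 matches the 10-highest/10-lowest design described above.

```python
import random

random.seed(1)

def composite_significance(flows, index, k=10, n_boot=2000):
    """Composite analysis with a resampling test: average the flow
    measure over the k years with the largest climate-index values,
    then compare that composite mean against composites of randomly
    drawn k-year subsets to get a two-sided p-value."""
    top = sorted(range(len(index)), key=lambda i: index[i], reverse=True)[:k]
    composite = sum(flows[i] for i in top) / k
    mean_flow = sum(flows) / len(flows)
    null = [sum(random.sample(flows, k)) / k for _ in range(n_boot)]
    extreme = sum(abs(m - mean_flow) >= abs(composite - mean_flow)
                  for m in null)
    return composite, extreme / n_boot

# Synthetic 40-year record in which high-index years carry higher flows,
# so the composite should test as significantly above the series mean.
index = [i % 7 + random.random() for i in range(40)]
flows = [50 + 5 * v + random.gauss(0, 1) for v in index]
comp, p = composite_significance(flows, index)
```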
Abstract: Internet infrastructures in most places of the world
have been supported by the advancement of optical fiber technology,
most notably wavelength division multiplexing (WDM) system.
Optical technology by means of WDM system has revolutionized
long distance data transport and has resulted in high data capacity,
cost reductions, extremely low bit error rate, and operational
simplification of the overall Internet infrastructure. This paper
analyses and compares the system impairments, which occur at data
transmission rates of 2.5 Gb/s and 10 Gb/s per wavelength channel in
our proposed optical WDM system for Internet infrastructure in
Tanzania. The results show that the data transmission rate of 2.5 Gb/s
has minimum system impairments compared with a rate of 10 Gb/s
per wavelength channel, and achieves a sufficient system
performance to provide a good Internet access service.
Abstract: A four element prototype phased array surface probe
has been designed and constructed to improve clinical human
prostate spectroscopic data. The probe consists of two pairs of
adjacent rectangular coils with an optimum overlap to reduce the
mutual inductance. The two pairs are positioned on the anterior and
the posterior pelvic region, and two pairs of varactors at the input of each coil perform tuning and matching. The
probe switches off and on automatically during the consecutive
phases of the MR experiment with the use of an analog switch that is
triggered by a microcontroller. Experimental tests that were carried
out resulted in high levels of tuning accuracy. Also, the switching
mechanism functions properly for various applied loads and pulse
sequence characteristics, producing only 10 μs of latency.
Abstract: The sanitary sewerage connection rate has become an important indicator of advanced cities. Following the construction of sanitary sewerage, maintenance and management systems are required to keep pipelines and facilities functioning well. These maintenance tasks often require sewer workers to enter the manholes and the pipelines, which are confined spaces lacking natural ventilation and full of hazardous substances. Sewer workers can thus easily be exposed to a risk of adverse health effects. This paper
proposes the use of Bayesian belief networks (BBN) as a higher level
of noncarcinogenic health risk assessment of sewer workers. On the
basis of the epidemiological studies, the actual hospital attendance
records and expert experiences, the BBN is capable of capturing the
probabilistic relationships between the hazardous substances in sewers
and their adverse health effects, and accordingly inferring the
morbidity and mortality of the adverse health effects. The provision of
the morbidity and mortality rates of the related diseases is more
informative and can alleviate the drawbacks of conventional methods.
Abstract: The security of power systems against malicious cyber-physical data attacks has become an important issue. The adversary attempts to manipulate the information structure of the power system and inject malicious data to deviate state variables while evading existing detection techniques based on the residual test. The solutions proposed in the literature are capable of immunizing the power system against false data injection, but they may be too costly and physically impractical in an expansive distribution network. To this end, we define an algebraic condition for a trustworthy power system to evade malicious data injection. The proposed protection scheme secures the power system by deterministically reconfiguring the information structure and the corresponding residual test. More importantly, it does not require any physical effort at either the microgrid or network level. An identification scheme for finding the meters under attack is proposed as well. Finally, the well-known IEEE 30-bus system is adopted to demonstrate the effectiveness of the proposed schemes.
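The residual test being evaded can be sketched in a few lines of linear algebra. This is a generic DC state-estimation illustration under assumed dimensions (8 meters, 3 states), not the IEEE 30-bus case: a stealthy injection a = Hc lies in the column space of the measurement Jacobian and leaves the residual unchanged, which is exactly what reconfiguring the information structure is meant to prevent.

```python
import numpy as np

rng = np.random.default_rng(2)

H = rng.normal(size=(8, 3))                   # measurement Jacobian (assumed)
x_true = np.array([1.0, -0.5, 0.2])           # true state (illustrative)
z = H @ x_true + rng.normal(0, 0.01, 8)       # noisy measurements

def residual_norm(z, H):
    """Least-squares state estimate and the residual-test statistic."""
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return float(np.linalg.norm(z - H @ x_hat))

clean = residual_norm(z, H)

# A random (non-stealthy) injection inflates the residual and is caught...
r_random = residual_norm(z + rng.normal(0, 1.0, 8), H)

# ...but a stealthy attack a = H c shifts the estimate by c while leaving
# the residual unchanged, so the residual test never fires.
c = np.array([0.3, 0.1, -0.2])
r_stealth = residual_norm(z + H @ c, H)
```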