Abstract: Understanding customer behavior in a grocery store has
been a long-standing issue in the retailing industry. The advent of
RFID has made it easier to collect movement data on an individual
shopper's behavior. Most previous studies used traditional
statistical clustering techniques to find the major characteristics of
customer behavior, especially the shopping path. However, due to
various spatial constraints in the store, standard clustering methods
are not directly applicable: movement data such as shopping paths
must be adjusted before analysis, which is time-consuming and
distorts the data. To alleviate this problem, we propose a new
approach to spatial pattern clustering based on the longest common
subsequence. Experimental results on real data obtained from a
grocery store confirm the good performance of the proposed method
in finding hot spots, dead spots and the major path patterns of
customer movements.
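The similarity measure named above, the longest common subsequence, can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation; the store zone labels are hypothetical.

```python
# Sketch of LCS-based path similarity between two shopping paths,
# each represented as a sequence of visited store zones (hypothetical IDs).
def lcs_length(a, b):
    """Length of the longest common subsequence of two zone sequences."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def path_similarity(a, b):
    """Normalize LCS length so paths of different lengths are comparable."""
    return lcs_length(a, b) / max(len(a), len(b))

p1 = ["entrance", "produce", "dairy", "bakery", "checkout"]
p2 = ["entrance", "dairy", "meat", "bakery", "checkout"]
print(path_similarity(p1, p2))  # 0.8
```

A similarity of this form can feed any standard clustering algorithm that accepts a pairwise similarity (or distance) matrix, which avoids the path-adjustment step that makes coordinate-based clustering awkward.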
Abstract: Gas Metal Arc Welding (GMAW) is an
important joining process widely used in metal fabrication
industries. This paper addresses modeling and optimization of this
technique using a set of experimental data and regression analysis.
The set of experimental data has been used to assess the influence
of GMAW process parameters on weld bead geometry. The
process variables considered here include voltage (V), wire feed
rate (F), torch angle (A), welding speed (S) and nozzle-to-plate
distance (D). The process output characteristics include weld bead
height, width and penetration. The Taguchi method and regression
modeling are used in order to establish the relationships between
input and output parameters. The adequacy of the model is
evaluated using the analysis of variance (ANOVA) technique. In the
next stage, the proposed model is embedded into a Simulated
Annealing (SA) algorithm to optimize the GMAW process
parameters. The objective is to determine a suitable set of process
parameters that can produce desired bead geometry, considering
the ranges of the process parameters. Computational results prove
the effectiveness of the proposed model and optimization
procedure.
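The optimization stage described above, a regression model embedded in simulated annealing, can be sketched as below. The regression coefficients, target bead width, and parameter ranges are placeholder assumptions, not values from the paper.

```python
import math
import random

# Hypothetical linear regression surrogate for one bead-geometry output;
# real coefficients would come from the fitted GMAW regression model.
def predicted_bead_width(v, f, s):
    return 0.5 * v + 0.3 * f - 0.2 * s  # placeholder model

TARGET = 9.0  # assumed desired bead width (mm)
BOUNDS = {"v": (20, 30), "f": (2, 8), "s": (5, 15)}  # assumed parameter ranges

def cost(x):
    """Deviation of the predicted geometry from the desired geometry."""
    return abs(predicted_bead_width(*x) - TARGET)

def anneal(iters=5000, t0=1.0, alpha=0.999, seed=0):
    rng = random.Random(seed)
    x = tuple(rng.uniform(lo, hi) for lo, hi in BOUNDS.values())
    best, t = x, t0
    for _ in range(iters):
        # Perturb each parameter, clamping to its allowed range.
        cand = tuple(min(hi, max(lo, xi + rng.gauss(0, 0.5)))
                     for xi, (lo, hi) in zip(x, BOUNDS.values()))
        d = cost(cand) - cost(x)
        if d < 0 or rng.random() < math.exp(-d / t):
            x = cand  # accept improvements, and worse moves with decaying odds
        if cost(x) < cost(best):
            best = x
        t *= alpha
    return best

best = anneal()
print(best, cost(best))
```

The acceptance rule (always take improvements, take worse moves with probability exp(-d/t)) is the standard SA scheme; in the paper's multi-output setting the cost would combine deviations in height, width and penetration.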
Abstract: The task of strategic information technology
management is to focus on adapting technology to ensure
competitiveness. A key factor for success in this sector is awareness
and readiness to deploy new technologies and exploit the services
they offer. Recently, the need for more flexible and dynamic user
interfaces (UIs) has been recognized, especially in mobile
applications. An ongoing research project (MOP), initiated by TUT
in Finland, is looking at how mobile device UIs can be adapted for
different needs and contexts. It focuses on examining the possibility
of developing adapter software to solve the challenges related to the
UI and its flexibility in mobile devices. This approach has great
potential for enhancing information transfer in mobile devices, and
consequently for improving information management. The
technology presented here could be one of the key emerging
technologies in the information technology sector in relation to
mobile devices and telecommunications.
Abstract: Variable channel conditions in underwater networks,
and variable distances between sensors due to water currents, lead to
a variable bit error rate (BER). This variability in BER strongly
affects the energy efficiency of the error correction techniques used. In
this paper an energy-efficient adaptive hybrid error correction
technique (AHECT) is proposed. AHECT adaptively switches the
error control technique from pure retransmission (ARQ) in low-BER
conditions to a hybrid technique with variable encoding rates (ARQ
& FEC) in high-BER conditions. An adaptation algorithm is
proposed that depends on a precalculated packet acceptance rate
(PAR) look-up table, the current BER, the packet size and the error
correction technique in use. Based on this adaptation algorithm, a
periodic 3-bit feedback field is added to the acknowledgment packet
to indicate which error correction technique is suitable for the
current channel conditions and distance. Comparative studies were
conducted between this technique and others, and the results show
that AHECT is more energy efficient and has a higher probability of
success than all those techniques.
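The adaptive selection idea can be sketched as a PAR look-up, as below. The table values, BER bands, and mode names are illustrative assumptions, not the paper's figures.

```python
# Hedged sketch: pick the error-control mode with the best precomputed
# packet acceptance rate (PAR) for the current BER band.
PAR_TABLE = {
    # (mode, FEC code rate) -> PAR per BER band (illustrative values)
    ("ARQ", None):   {"low": 0.98, "medium": 0.80, "high": 0.40},
    ("HYBRID", 3/4): {"low": 0.95, "medium": 0.90, "high": 0.70},
    ("HYBRID", 1/2): {"low": 0.90, "medium": 0.88, "high": 0.85},
}

def ber_band(ber):
    """Coarse BER classification; the band edges are assumptions."""
    if ber < 1e-5:
        return "low"
    return "medium" if ber < 1e-3 else "high"

def select_mode(ber):
    """Return the mode that maximizes PAR for the current channel state."""
    band = ber_band(ber)
    return max(PAR_TABLE, key=lambda mode: PAR_TABLE[mode][band])

print(select_mode(1e-6))  # ('ARQ', None): pure retransmission at low BER
print(select_mode(1e-2))  # ('HYBRID', 0.5): stronger FEC at high BER
```

A 3-bit feedback field, as described in the abstract, is enough to index up to eight such mode entries in the acknowledgment packet.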
Abstract: The full search block matching algorithm is widely used for hardware implementation of motion estimators in video compression algorithms. In this paper we propose a new architecture, which consists of a 2D parallel processing unit and a 1D unit, both working in parallel. The proposed architecture reduces both data access power and computational power, which are the main causes of power consumption in integer motion estimation. It also completes the operations in nearly the same number of clock cycles as a 2D systolic array architecture. In this work the sum of absolute differences (SAD), the most frequently repeated operation in block matching, is calculated in two steps. The first step is to calculate the SAD for alternate rows with the 2D parallel unit. If the SAD calculated by the parallel unit is less than the stored minimum SAD, the SAD of the remaining rows is calculated by the 1D unit. Early termination, which stops avoidable computations, has been achieved with the help of the alternate-rows method proposed in this paper and by finding a low initial SAD value based on motion vector prediction. Data reuse has been applied to the reference blocks in the same search area, which significantly reduces memory accesses.
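The two-step SAD with early termination can be modeled in software as follows. This is a hedged sketch of the idea, not the hardware architecture; the block size and data are illustrative.

```python
import numpy as np

def two_step_sad(cur, ref, best_so_far):
    """SAD of two blocks, or None if pruned after the alternate-row pass."""
    # Step 1: SAD over even rows only (the 2D parallel unit's pass).
    partial = int(np.abs(cur[::2].astype(int) - ref[::2].astype(int)).sum())
    if partial >= best_so_far:   # alternate rows already exceed the minimum:
        return None              # skip the remaining rows (early termination)
    # Step 2: SAD over the remaining (odd) rows (the 1D unit's pass).
    rest = int(np.abs(cur[1::2].astype(int) - ref[1::2].astype(int)).sum())
    return partial + rest

rng = np.random.default_rng(0)
cur = rng.integers(0, 256, (16, 16), dtype=np.uint8)
print(two_step_sad(cur, cur, best_so_far=10**9))  # 0: identical blocks
```

Since the even-row partial SAD is a lower bound on the full SAD, the pruning never discards the true minimum; a low initial `best_so_far` from motion vector prediction makes the early exit fire more often.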
Abstract: We apply a particle tracking technique to track the motion of individual pathogenic Leptospira. We observe and capture images of motile Leptospira by means of a CCD camera and a dark-field microscope. Image processing, statistical theories and simulations are used for data analysis. Based on trajectory patterns, mean square displacement, and power spectral density characteristics, we found that the motion is most likely in the directed motion mode (70%), while the rest is either normal diffusion or an unidentified mode. Our findings may help explain why leptospires are highly efficient at targeting internal tissues, which contributes to their virulence.
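The mean square displacement analysis mentioned above can be sketched as follows; the trajectory is synthetic, and the paper's classification thresholds are not reproduced. The standard signatures are MSD growing quadratically with lag for directed motion and linearly for normal diffusion.

```python
import numpy as np

def msd(traj):
    """MSD(lag) averaged over all start times, for lags 1..len-1."""
    traj = np.asarray(traj, dtype=float)
    n = len(traj)
    return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                     for lag in range(1, n)])

# A straight-line (directed) trajectory with step vector (1, 2):
t = np.arange(50)
straight = np.column_stack([t, 2 * t])
m = msd(straight)
print(m[0], m[1])  # 5.0 20.0  (MSD = 5 * lag**2)
```

Fitting MSD(lag) to a power law lag^alpha then classifies a trajectory: alpha near 2 indicates directed motion, alpha near 1 normal diffusion.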
Abstract: It has often been said that the strength of any country
resides in the strength of its industrial sector, and progress in
industrial society has been accomplished by the creation of new
technologies. Developments have been facilitated by the increasing
availability of advanced manufacturing technology (AMT). In
addition, the implementation of AMT requires careful planning at all
levels of the organization to ensure that the implementation will
achieve the intended goals. Justification and implementation of
AMT involve decisions that are crucial for practitioners concerned
with the survival of their business in today's uncertain
manufacturing world. This paper assists industrial managers in
considering all the important criteria for successful AMT
implementation when purchasing new technology. Concurrently,
this paper classifies the tangible benefits of a technology, which are
evaluated by addressing both cost and time dimensions, while the
intangible benefits are evaluated by addressing technological,
strategic, social and human issues, in order to identify and create
awareness of the essential elements in the AMT implementation
process and to identify the necessary actions before implementing
AMT.
Abstract: Data mining is a technique for extracting
information from data. It is the process of obtaining hidden
information and then turning it into qualified knowledge by statistical
and artificial intelligence techniques. One of its application areas is
medicine, where it is used to build decision support systems for
diagnosis by deriving meaningful information from medical data. In
this study, a decision support system for the diagnosis of illness is
developed that makes use of data mining and three different artificial
intelligence classifier algorithms, namely the Multilayer Perceptron,
the Naive Bayes classifier and J48. The Pima Indians dataset of the
UCI Machine Learning Repository was used. This dataset includes
the urinary and blood test results of 768 patients, consisting of 8
different features. The classification results obtained were compared
with previous studies, and suggestions for future work are presented.
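One of the classifiers named above, Naive Bayes, can be sketched from scratch as below. The toy data is synthetic, not the Pima Indians dataset, and this is an illustrative Gaussian variant, not the study's exact configuration.

```python
import math

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class, per-feature mean/variance."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.stats, self.priors = {}, {}
        for c in self.classes:
            rows = [x for x, label in zip(X, y) if label == c]
            self.stats[c] = []
            for col in zip(*rows):
                mu = sum(col) / len(col)
                var = max(1e-9, sum((v - mu) ** 2 for v in col) / len(col))
                self.stats[c].append((mu, var))
            self.priors[c] = len(rows) / len(X)
        return self

    def predict(self, x):
        def log_post(c):
            # log prior + sum of per-feature Gaussian log-likelihoods
            lp = math.log(self.priors[c])
            for v, (mu, var) in zip(x, self.stats[c]):
                lp += -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
            return lp
        return max(self.classes, key=log_post)

# Toy two-feature data: class 1 clusters high, class 0 low.
X = [[1.0, 2.0], [1.2, 1.8], [0.9, 2.1], [6.0, 7.0], [5.8, 7.2], [6.1, 6.9]]
y = [0, 0, 0, 1, 1, 1]
clf = GaussianNB().fit(X, y)
print(clf.predict([1.1, 2.0]), clf.predict([6.0, 7.1]))  # 0 1
```

In practice the study's three classifiers would be trained on the same 8-feature vectors and compared on held-out accuracy.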
Abstract: The management of medical devices in hospitals includes
the planning of medical equipment acquisition and maintenance. The
presence of critical and non-critical areas, together with technological
proliferation, renders the management of medical devices very
complex. This study creates an easy and objective methodology for
the analysis of medical equipment maintenance that makes the
management of medical devices more feasible. The study has been
carried out at Florence Hospital Careggi and aims to help the
clinical engineering department manage medical equipment by
clarifying the hospital situation through a characterization of the
different areas, technologies and fault types.
Abstract: This paper reviews the aspects of and approaches to designing an image cryptosystem. First, a general introduction to cryptography and image encryption is given, followed by a survey of the different techniques in image encryption and the related work for each technique. Finally, general security analysis methods for encrypted images are discussed.
Abstract: The gases generated in oil-filled transformers can be
used for the qualitative determination of incipient faults. Dissolved
Gas Analysis has been widely used by utilities throughout the world
as the primary diagnostic tool for transformer maintenance. In this
paper, various artificial intelligence techniques that have been used
by researchers in the past are reviewed, some conclusions are drawn
and a sequential hybrid system is proposed. The synergy of ANN
and FIS can provide reliable fault prediction, since one should not
rely on a single technology when dealing with real-life applications.
Abstract: This paper deals with condition monitoring of electric switch machines for railway points. A point machine, a complex electro-mechanical device, switches the track between two alternative routes. There has been increasing interest in railway safety and the optimal management of railway equipment maintenance, e.g. of point machines, in order to enhance railway service quality and reduce system failures. This paper explores the development of the Kolmogorov-Smirnov (K-S) test to detect certain point failures external to the machine (slide chairs, fixings, stretchers, etc.) while the point machine itself is in proper condition. Time-domain stator current signatures of normal (healthy) and faulty points are taken by three Hall-effect sensors and are analyzed by the K-S test. The test is evaluated by creating three such failures, namely placing a hard stone and a soft stone between the stock rail and the switch blades as obstacles, and introducing slide-chair friction. The test has been applied to these three faults, and the results show that the K-S test can effectively be extended to detect other point failures whose current signatures deviate parametrically from the healthy current signature. As an analysis technique, the K-S test assumes that each defect has a specific probability distribution. Empirical cumulative distribution functions (ECDFs) are used to differentiate these probability distributions. The test is based on the null hypothesis that the ECDF of the target distribution is statistically similar to the ECDF of the reference distribution. Therefore, by comparing a given current signature (the target signal) from an unknown switch state to a number of template signatures (the reference signals) from known switch states, it is possible to identify the most likely state of the point machine under analysis.
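The two-sample comparison of ECDFs described above can be sketched as follows. The signals are synthetic stand-ins for the stator current signatures, and the 1.5 shift simulating a fault is an illustrative assumption.

```python
import random

def ks_statistic(target, reference):
    """Two-sample K-S statistic: max vertical gap between the two ECDFs."""
    xs = sorted(set(target) | set(reference))
    st, sr = sorted(target), sorted(reference)
    n, m = len(st), len(sr)
    d = 0.0
    for x in xs:
        ecdf_t = sum(1 for v in st if v <= x) / n
        ecdf_r = sum(1 for v in sr if v <= x) / m
        d = max(d, abs(ecdf_t - ecdf_r))
    return d

rng = random.Random(1)
healthy = [rng.gauss(0.0, 1.0) for _ in range(500)]  # template signature
same    = [rng.gauss(0.0, 1.0) for _ in range(500)]  # same switch state
faulty  = [rng.gauss(1.5, 1.0) for _ in range(500)]  # shifted: simulated fault

# A faulty signature sits much farther from the healthy template:
print(ks_statistic(same, healthy) < ks_statistic(faulty, healthy))  # True
```

Classifying a target signature then amounts to picking the template with the smallest K-S statistic, matching the nearest-template procedure the abstract describes.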
Abstract: Requirements engineering has been the subject of a large
volume of research due to the significant role it plays in the
software development life cycle. However, the software industry
changes much faster than requirements engineering approaches
advance. Therefore, this paper aims to systematically review and
evaluate current research in requirements engineering and to identify
new research trends and directions in this field. In addition, various
research methods associated with evaluation-based techniques and
empirical studies are highlighted for the requirements engineering
field. Finally, challenges and recommendations on future research
directions are presented, based on the research team's observations
during this study.
Abstract: The aim of this research is to determine how preservice Turkish teachers perceive themselves in terms of problem solving skills. Students attending the Department of Turkish Language Teaching of Gazi University Education Faculty in the 2005-2006 academic year constitute the study group (n=270) of this research, in which the survey model was utilized. Data were obtained with the Problem Solving Inventory developed by Heppner & Peterson and a Personal Information Form. Within the settings of this research, the Cronbach Alpha reliability coefficient of the scale was found to be .87. In addition, the reliability coefficient obtained by the split-half technique, which splits the odd- and even-numbered items of the scale, was found to be r=.81 (split-half reliability). The findings of the research revealed that preservice Turkish teachers were sufficiently qualified in problem solving skills, and statistical significance was found in favor of male candidates for the "gender" variable. For the "grade" variable, statistical significance was found in favor of 4th graders.
Abstract: Wireless sensor networks consist of hundreds or
thousands of small sensors that have limited resources.
Energy-efficient techniques are a main issue in wireless sensor
networks. This paper proposes an energy-efficient agent-based
framework for wireless sensor networks, adopting biologically
inspired approaches. Agents operate autonomously, with their
behavior policies acting as genes. An agent aggregates other agents
to reduce communication and gives high priority to nodes that have
enough energy to communicate. Each agent selects a next-hop node
based on neighbor information and its behavior policies, and the
behavior policies are optimized by genetic operations at the base
station. Simulation results show that our proposed framework
increases the lifetime of each node. Our proposed framework
provides self-healing, self-configuration and self-optimization
properties to sensor nodes.
Abstract: The construction of a civil structure inside an urban
area inevitably modifies the outdoor microclimate at the building
site. Wind speed, wind direction, air pollution, driving rain, radiation
and daylight are some of the main physical aspects that are subject
to major changes. The magnitude of these modifications
depends on the shape, size and orientation of the building and on its
interaction with the surrounding environment. The flow field over a
flat-roof model building has been numerically investigated in order to
determine two-dimensional CFD guidelines for the calculation of the
turbulent flow over a structure immersed in an atmospheric boundary
layer. To this purpose, a complete validation campaign has been
performed through a systematic comparison of numerical simulations
with wind tunnel experimental data. Several turbulence models and
spatial node distributions have been tested for five different vertical
positions, from the upstream leading edge to the
downstream bottom edge of the analyzed model. Flow field
characteristics in the neighborhood of the building model have been
numerically investigated, allowing a quantification of the capabilities
of the CFD code to predict flow separation and the extent of
the recirculation regions. The proposed calculations have allowed the
development of a preliminary procedure to be used as guidance in
selecting the appropriate grid configuration and corresponding
turbulence model for the prediction of the flow field over a
two-dimensional roof architecture dominated by flow separation.
Abstract: This work studies the role of the fluctuating density
gradient in compressible flows for computational fluid dynamics
(CFD). A new anisotropy tensor incorporating the fluctuating density
gradient is introduced and used in an invariant modeling technique
to model the turbulent density gradient correlation equation derived
from the continuity equation. The modeled equation is decomposed
into three groups: one proportional to the mean velocity, one
proportional to the mean strain rate, and one proportional to the mean
density. The characteristics of the correlation in a wake are extracted
from the results of a two-dimensional direct simulation, and show
a strong correlation with the vorticity in the wake near the body.
Thus, it can be concluded that the correlation of the density gradient
is a significant parameter for describing the rapid generation of
turbulent properties in compressible flows.
Abstract: This paper gives an overview of a deep drawing
process using a pressurized liquid medium separated from the sheet by a
rubber diaphragm. Hydroforming deep drawing processing of sheet
metal parts provides a number of advantages over conventional
techniques. It generally increases the depth to diameter ratio possible
in cup drawing and minimizes the thickness variation of the drawn
cup. To explore the deformation mechanism, analytical and
numerical simulations are used for analyzing the drawing process of
an AA6061-T4 blank. The effects of key process parameters such as
coefficient of friction, initial thickness of the blank and radius
between cup wall and flange are investigated analytically and
numerically. The simulated results were in good agreement with the
results of the analytical model. According to finite element
simulations, the hydroforming deep drawing method provides a more
uniform thickness distribution compared to conventional deep
drawing and decreases the risk of tearing during the process.
Abstract: The intrusion detection problem has been frequently studied, but intrusion detection methods are often based on a single point of view, which limits the results. In this paper, we introduce a new intrusion detection model based on the combination of different current methods. First, we use a notion of distance to unify the different methods. Second, we combine these methods using Pearson correlation coefficients, which measure the relationship between two methods, and obtain a combined distance. If the combined distance is greater than a predetermined threshold, an intrusion is detected. We have implemented and tested the combination model on two different public data sets: the masquerade detection data set collected by Schonlau et al., and the program behavior data set from the University of New Mexico. The results of the experiments show that the combination model has better performance.
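The combination idea can be sketched as below. The per-method distances, the combination formula, and the threshold are illustrative assumptions; the paper's exact combination rule is not reproduced here.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Distances two hypothetical detectors assigned to the same six sessions:
method_a = [0.1, 0.2, 0.8, 0.3, 0.9, 0.2]
method_b = [0.2, 0.1, 0.7, 0.4, 0.8, 0.3]
r = pearson(method_a, method_b)  # degree of agreement between the detectors

THRESHOLD = 1.0  # hypothetical decision threshold

def detect(d_a, d_b):
    """Flag an intrusion when the correlation-combined distance is large."""
    combined = d_a + d_b + r * math.sqrt(d_a * d_b)  # illustrative combination
    return combined > THRESHOLD

print(detect(0.9, 0.8), detect(0.1, 0.1))  # True False
```

The correlation term rewards sessions that both detectors independently judge anomalous, which is the intuition behind combining distances rather than trusting a single method's view.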
Abstract: In this paper, we apply a semismooth active set method to image inpainting. The method exploits primal and dual features of a proposed regularized total variation model, following the technique presented in [4]. Numerical results show that the method is fast and efficient in inpainting sufficiently thin domains.