Abstract: The running logs of a process hold valuable
information about its executed activity behavior and the logic
structure of the activities it generates. These informative logs can be
extracted, analyzed, and utilized to improve the efficiency of the
process's execution. One technique used to accomplish such process
improvement is process mining, and mining similar processes is one
of its improvement tasks. Rather than directly mining similar
processes using a single comparison coefficient or a complicated
fitness function, this paper presents a simplified heuristic process
mining algorithm with two similarity comparisons that relatively
conform the activity logic sequences (traces) of the mined processes
with those of a normalized (regularized) one. The relative process
conformance determines which of the mined processes match the
required activity sequences and relationships, so that the mined
processes can be applied, as necessary and sufficient, to process
improvements. One similarity is defined by the relationships in terms
of the number of similar activity sequences existing in different
processes; the other expresses the degree of similarity (identity) of
the activity sequences among the conforming processes. Since these
two similarities refer to typical behavior (activity sequences)
occurring within an entire process, the common problems that often
appear in other process-conformance techniques, such as the
inappropriateness of an absolute comparison and the inability to
elicit intrinsic information, can be solved by the relative process
comparison presented in this paper. To demonstrate the potential of
the proposed algorithm, a numerical example is illustrated.
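As a rough illustration of how two such relative similarities might operate, the sketch below compares the traces of a mined process against a normalized reference. The set-overlap definitions here are our own illustrative assumptions, not the measures defined in the paper:

```python
# Illustrative sketch (assumption): compare activity traces of a mined
# process against a normalized reference process.

def trace_overlap(mined, reference):
    """Fraction of the reference's traces that also occur in the mined process."""
    mined_set, ref_set = set(mined), set(reference)
    if not ref_set:
        return 0.0
    return len(mined_set & ref_set) / len(ref_set)

def trace_degree(mined, reference):
    """Jaccard-style degree of identical traces between two processes."""
    mined_set, ref_set = set(mined), set(reference)
    union = mined_set | ref_set
    return len(mined_set & ref_set) / len(union) if union else 0.0

reference = [("a", "b", "c"), ("a", "c", "d")]
mined = [("a", "b", "c"), ("a", "d", "c")]
print(trace_overlap(mined, reference))  # 0.5
print(trace_degree(mined, reference))   # 1/3
```

Processes whose overlap and degree both exceed chosen thresholds would then count as relatively conforming.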
Abstract: Thousands of masters athletes participate
quadrennially in the World Masters Games (WMG), yet this cohort
of athletes remains proportionately under-investigated. Given a
growing global obesity pandemic and the benefits of physical
activity across the lifespan, the BMI trends of this unique population
were of particular interest. The nexus between health, physical
activity, and aging is complex and has attracted much interest in
recent times due to the realization that a multifaceted approach is
necessary to counteract the obesity pandemic. By investigating
age-based trends within a population adhering to competitive sport at
older ages, further insight might be gleaned to assist in understanding
one of the many factors influencing this relationship. BMI was
derived using data gathered on a total of 6,071 masters athletes
(51.9% male, 48.1% female) aged 25 to 91 years (mean = 51.5,
s = 9.7), competing at the Sydney World Masters Games (2009).
Using linear and loess regression, it was demonstrated that the usual
tendency for the prevalence of higher BMI to increase with age was
reversed in the sample. This reversal was repeated in both the
male-only and female-only subsets of the sample, indicating the
possibility of an improved BMI profile with increasing age for the
sample as a whole as well as for these individual subgroups. This
evidence of improved classification in one index of health (reduced
BMI) for masters athletes, when compared to the general population,
implies either that this index of health improves with aging due to
adherence to sport, or that a reduced BMI is advantageous and
contributes to this cohort adhering (or being attracted) to masters
sport at older ages.
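The BMI derivation underlying the analysis is the standard weight/height² ratio; a minimal sketch (the function name and the sample values are our own, not drawn from the WMG dataset):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# Hypothetical athlete, not from the WMG dataset:
print(round(bmi(70.0, 1.75), 1))  # 22.9
```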
Abstract: This paper describes the development of an
autonomous robot for painting the interior walls of buildings. The
robot consists of a painting arm with an end effector roller that scans
the walls vertically and a mobile platform to give horizontal feed to
paint the whole area of the wall. The painting arm is a planar
two-link mechanism with two joints, each driven by a stepping
motor through a ball screw-nut mechanism. Four ultrasonic sensors
are attached to the mobile platform and used to maintain a certain
distance from the facing wall and to avoid collision with side walls.
Once settled at the adjusted distance from the wall, the controller starts
the painting process autonomously. Simplicity, relatively low weight
and short painting time were considered in our design. Different
modules constituting the robot have been separately tested then
integrated. Experiments have shown successfulness of the robot in its
intended tasks.
Abstract: Natural Language Understanding (NLU) systems will not be widely deployed unless they are technically mature and cost-effective to develop. Cost-effective development hinges on the availability of tools and techniques enabling the rapid production of NLU applications with minimal human resources. Further, these tools and techniques should allow quick, user-friendly development of applications and should be easy to upgrade in order to continuously follow evolving technologies and standards. This paper presents a visual tool for the structuring and editing of dialog forms, the key element driving conversation in NLU applications based on IBM technology. The main focus is on the basic component used to describe human-machine interactions of that kind, the Dialogue Manager. In essence, we describe a tool that enables the visual representation of the Dialogue Manager, mainly during the implementation phase.
Abstract: Organization of video databases is becoming a difficult
task as the amount of video content increases. Video classification
based on the content of videos can significantly increase the speed of
tasks such as browsing and searching for a particular video in a
database. In this paper, a content-based video classification system
for the classes indoor and outdoor is presented. The system is
intended to be used on a mobile platform with modest resources. The
algorithm makes use of the temporal redundancy in videos, which
allows using an uncomplicated classification model while still
achieving reasonable accuracy. The training and evaluation were done
on a video database of 443 videos downloaded from a video-sharing
service. A total accuracy of 87.36% was achieved.
Abstract: In the traditional concept of product life cycle management, the activities of design, manufacturing, and assembly are performed sequentially. The drawback is that considerations in design may contradict considerations in manufacturing and assembly. Different designs of components can lead to different assembly sequences; therefore, in some cases, a good design may result in a high cost in the downstream assembly activities. In this research, an integrated design evaluation and assembly sequence planning model is presented. Given a product requirement, there may be several alternative design cases for the components of the same product. If a different design case is selected, the assembly sequence for constructing the product can differ. In this paper, first, the designed components are represented using graph-based models. The graph-based models are transformed into assembly precedence constraints and assembly costs. A particle swarm optimization (PSO) approach is presented in which a particle is encoded using a position matrix defined by the design cases and the assembly sequences. The PSO algorithm simultaneously performs design evaluation and assembly sequence planning with the objective of minimizing the total assembly cost. As a result, both the design cases and the assembly sequences can be optimized. The main contribution lies in the new concept of an integrated design evaluation and assembly sequence planning model and in the new PSO solution method. The test results show that the presented method is feasible and efficient for solving the integrated design evaluation and assembly planning problem. An example product is tested and illustrated in this paper.
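As a hedged illustration of the encoding idea, the sketch below decodes a flat particle position into a design-case choice and an assembly sequence using a common random-keys interpretation; the paper's actual position matrix and cost evaluation are not reproduced here:

```python
# Illustrative sketch (assumption): decode a PSO particle position into
# a design-case choice plus an assembly sequence, using the common
# "random keys" interpretation. This is not the paper's exact encoding.

def decode(position, n_cases):
    """position: list of floats; the first n_cases entries score the
    design cases, the rest are random keys ordering the assembly tasks."""
    case_scores = position[:n_cases]
    keys = position[n_cases:]
    design_case = case_scores.index(max(case_scores))
    # Sort task indices by their keys to obtain an assembly sequence.
    sequence = sorted(range(len(keys)), key=lambda i: keys[i])
    return design_case, sequence

case, seq = decode([0.2, 0.9, 0.7, 0.1, 0.5], n_cases=2)
print(case, seq)  # 1 [1, 2, 0]
```

A fitness function would then sum the assembly costs of `seq` subject to the precedence constraints of the chosen design case.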
Abstract: In recent years, Malaysia has included renewable
energy as an alternative fuel to help diversify the country's energy
reliance on oil, natural gas, coal, and hydropower, with biomass and
solar energy gaining priority. The scope of this paper is to examine
the design procedures and analysis of a solar thermal parabolic
trough concentrator through simulation utilizing meteorological data
from several parts of Malaysia. Parameters including the
aperture area, the diameter of the receiver and the working fluid may
be varied to optimize the design. Aperture area is determined by
considering the width and the length of the concentrator whereas the
geometric concentration ratio (CR) is obtained by considering the
width and diameter of the receiver. Three types of working fluid are
investigated. Theoretically, concentration ratios can be very high, in
the range of 10 to 40,000, depending on the optical elements used
and on continuous tracking of the sun. However, a thorough analysis is
essential as discussed in this paper where optical precision and
thermal analysis must be carried out to evaluate the performance of
the parabolic trough concentrator as the theoretical CR is not the only
factor that should be considered.
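For reference, a common textbook definition of the geometric concentration ratio of a trough with a tubular receiver is the aperture area divided by the receiver surface area, which per unit length reduces to W/(πD). The function and the sample dimensions below are illustrative assumptions, not values from the paper:

```python
import math

def geometric_cr(aperture_width_m, receiver_diameter_m):
    """Geometric concentration ratio of a parabolic trough with a
    tubular receiver: aperture area / receiver surface area, which per
    unit length is W / (pi * D) (a common textbook definition,
    assumed here)."""
    return aperture_width_m / (math.pi * receiver_diameter_m)

# Hypothetical trough: 5.76 m aperture width, 70 mm receiver diameter.
print(round(geometric_cr(5.76, 0.07), 1))  # 26.2
```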
Abstract: An original Direct Numerical Simulation (DNS) method to tackle the problem of particulate flows at moderate to high concentration and finite Reynolds number is presented. Our method is built on the framework established by Glowinski and his coworkers [1], in the sense that we use their Distributed Lagrange Multiplier/Fictitious Domain (DLM/FD) formulation and their operator-splitting idea, but differs in the treatment of particle collisions. The novelty of our contribution lies in replacing the simple artificial-repulsive-force collision model usually employed in the literature with an efficient Discrete Element Method (DEM) granular solver. The use of our DEM solver enables us to consider particles of arbitrary (at least convex) shape and to account for actual contacts, in the sense that particles actually touch each other, in contrast with the simple repulsive-force collision model. We recently upgraded our serial code, GRIFF [2], to full MPI capabilities. Our new code, PeliGRIFF, is developed under the framework of the fully MPI open-source platform PELICANS [3]. The new MPI capabilities of PeliGRIFF open new perspectives in the study of particulate flows and significantly increase the number of particles that can be considered in a full DNS approach: O(100000) in 2D and O(10000) in 3D. Results on the 2D/3D sedimentation/fluidization of isometric polygonal/polyhedral particles with collisions are presented.
Abstract: Combustion of sprays is of technological importance, but its flame behavior is not fully understood. Furthermore, the multiplicity of dependent variables such as pressure, temperature, equivalence ratio, and droplet size complicates the study of spray combustion. Fundamental study of the influence of the presence of liquid droplets has revealed that laminar flames within aerosol mixtures become unstable more readily than gaseous ones do, and this increases the practical burning rate. However, fundamental studies on turbulent flames of aerosol mixtures are limited, particularly under near-mono-dispersed droplet conditions. In the present work, centrally ignited expanding flames at near-atmospheric pressures are employed to quantify the burning rates in gaseous and aerosol flames. Iso-octane-air aerosols are generated by expansion of the gaseous pre-mixture to produce a homogeneously distributed suspension of fuel droplets. The effects of the presence of droplets and of turbulence velocity on the burning rates of the flame are also investigated.
Abstract: Corner detection and optical flow are common techniques for feature-based video stabilization. However, these algorithms are computationally expensive and must be performed at a reasonable rate. This paper presents an algorithm for discarding irrelevant feature points and maintaining the rest for future use so as to reduce the computational cost. The algorithm starts by initializing a maintained set. The feature points in the maintained set are examined for their accuracy for modeling. Corner detection is required only when the feature points are insufficiently accurate for future modeling. Then, optical flows are computed from the maintained feature points toward the consecutive frame. After that, a motion model is estimated based on the simplified affine motion model and the least-squares method, with outliers belonging to moving objects present. Studentized residuals are used to eliminate such outliers. The model estimation and elimination processes repeat until no more outliers are identified. Finally, the entire algorithm repeats along the video sequence, with the points remaining from the previous iteration used as the maintained set. As a practical application, efficient video stabilization can be achieved by exploiting the computed motion models. Our study shows that the number of times corner detection needs to be performed is greatly reduced, significantly lowering the computational cost. Moreover, optical flow vectors are computed only for the maintained feature points, not for outliers, further reducing the computational cost. In addition, the feature points remaining after reduction are sufficient for tracking background objects, as demonstrated in the simple video stabilizer based on our proposed algorithm.
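The iterative outlier-elimination loop can be sketched for the one-dimensional least-squares case; the paper uses a simplified affine motion model, so the simple-regression analogue and the internally studentized residuals below are our own simplification:

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return my - b * mx, b

def remove_outliers(xs, ys, threshold=2.0):
    """Iteratively refit and drop the point with the largest studentized
    residual until all residuals fall below the threshold."""
    xs, ys = list(xs), list(ys)
    while len(xs) > 3:
        a, b = fit_line(xs, ys)
        n = len(xs)
        res = [y - (a + b * x) for x, y in zip(xs, ys)]
        s = math.sqrt(sum(r * r for r in res) / (n - 2))
        if s == 0.0:
            break  # perfect fit, nothing left to remove
        mx = sum(xs) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        lev = [1 / n + (x - mx) ** 2 / sxx for x in xs]
        t = [abs(r) / (s * math.sqrt(1 - h)) for r, h in zip(res, lev)]
        worst = max(range(n), key=lambda i: t[i])
        if t[worst] <= threshold:
            break
        del xs[worst], ys[worst]
    return xs, ys, fit_line(xs, ys)

# Background points on y = 2x plus one gross outlier (a "moving object"):
xs = list(range(10))
ys = [2 * x for x in xs]
ys[9] = 40
kept_x, kept_y, (a, b) = remove_outliers(xs, ys)
print(len(kept_x), round(b, 2))  # 9 2.0
```

The refit after each removal mirrors the model-estimation/elimination cycle described in the abstract.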
Abstract: Wetting characteristics of a reactive liquid (Sn–0.7Cu
solder) and a non-reactive liquid (castor oil) on Cu- and Ag-plated
Al substrates have been investigated. Solder spreading exhibited
capillary, gravity and viscous regimes. Oils did not exhibit noticeable
spreading regimes. Solder alloy showed better wettability on Ag
coated Al substrate compared to Cu plating. In the case of castor oil,
Cu coated Al substrate exhibited good wettability as compared to Ag
coated Al substrates. The difference in wettability during reactive
wetting of solder and non–reactive wetting of oils is attributed to the
change in the surface energies of Al substrates brought about by the
formation of intermetallic compounds (IMCs).
Abstract: This article proposes a voltage-mode
multifunction filter using differential voltage current
controllable current conveyor transconductance amplifier
(DV-CCCCTA). The features of the circuit are that the quality
factor and pole frequency can be tuned independently via the values
of the capacitors, and that the circuit description is very simple,
consisting of merely one DV-CCCCTA and two capacitors. Requiring
no component-matching conditions, the proposed circuit is well
suited for further development into an integrated circuit.
Additionally, each function response can be selected by suitably
choosing the input signals with a digital method. The PSpice
simulation results are depicted.
The given results agree well with the theoretical anticipation.
Abstract: Dynamic spectrum allocation solutions, such as
cognitive radio networks, have been proposed as a key technology to
exploit frequency segments that are spectrally underutilized.
Cognitive radio users act as secondary users who must constantly
and rapidly sense the presence of primary users, or licensees, in
order to utilize their frequency bands when they are inactive.
Secondary users should run short sensing cycles to achieve higher
throughput rates and to keep interference to the primary users low
by immediately vacating their channels once the primary users are
detected. In this paper, the throughput-sensing time relationship in
local and cooperative spectrum sensing is investigated under two
distinct scenarios, namely constant primary user protection (CPUP)
and constant secondary user spectrum usability (CSUSU). The
simulation results show that the design of the sensing slot duration
is critical and depends on the number of cooperating users under the
CPUP scenario, whereas under CSUSU, adding more cooperating
users has no effect if the sensing time exceeds 5% of the total frame
duration.
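The throughput-sensing time trade-off can be illustrated with a simplified model in which the fraction of each frame spent sensing is lost to transmission; the sketch below omits the detection and false-alarm probabilities of the full trade-off and uses hypothetical numbers:

```python
import math

def normalized_throughput(sensing_time, frame_time, snr_db):
    """Achievable secondary-user throughput when part of each frame is
    spent sensing rather than transmitting (simplified model; the
    false-alarm penalty of the full sensing-throughput trade-off is
    omitted here)."""
    snr = 10 ** (snr_db / 10)          # dB to linear scale
    capacity = math.log2(1 + snr)      # Shannon capacity, bits/s/Hz
    return (frame_time - sensing_time) / frame_time * capacity

# Sensing 5% of a 100 ms frame at 20 dB SNR (hypothetical values):
print(round(normalized_throughput(5.0, 100.0, 20.0), 3))  # 6.325
```

Lengthening the sensing slot improves detection but shrinks the `(T - tau)/T` transmission fraction, which is the tension the two scenarios explore.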
Abstract: Using activity theory, organisational theory and
didactics as theoretical foundations, a comprehensive model of the
organisational dimensions relevant for learning and knowledge
transfer will be developed. In a second step, a Learning Assessment
Guideline will be elaborated. This guideline will be designed to
permit a targeted analysis of organisations to identify the status quo
in those areas crucial to the implementation of learning and
knowledge transfer. In addition, this self-analysis tool will enable
learning managers to select adequate didactic models for e- and
blended learning. As part of the European Integrated Project
"Process-oriented Learning and Information Exchange" (PROLIX),
this model of organisational prerequisites for learning and knowledge
transfer will be empirically tested in four profit and non-profit
organisations in Great Britain, Germany and France (to be finalized
in autumn 2006). The findings concern not only the capability of the
model of organisational dimensions, but also the predominant
perceptions of and obstacles to learning in organisations.
Abstract: In this research, heat transfer in a polyethylene
fluidized bed reactor without reaction was studied experimentally
and computationally at different superficial gas velocities. A
multi-fluid Eulerian computational model incorporating the kinetic
theory for solid particles was developed and used to simulate the
heat-conducting gas–solid flows in a fluidized bed configuration.
Momentum exchange coefficients were evaluated using the
Syamlal–O'Brien drag functions. Temperature distributions of the different phases
in the reactor were also computed. Good agreement was found
between the model predictions and the experimentally obtained data
for the bed expansion ratio as well as the qualitative gas–solid flow
patterns. The simulation and experimental results showed that the gas
temperature decreases as it moves upward in the reactor, while the
solid particle temperature increases. Pressure drop and temperature
distribution predicted by the simulations were in good agreement
with the experimental measurements at superficial gas velocities
higher than the minimum fluidization velocity. Also, the predicted
time-average local voidage profiles were in reasonable agreement
with the experimental results. The study showed that the
computational model was capable of predicting the heat transfer and
the hydrodynamic behavior of gas-solid fluidized bed flows with
reasonable accuracy.
Abstract: Early detection of lung cancer through chest radiography is a widely used method due to its relatively affordable cost. In this paper, an approach to improve lung nodule visualization on chest radiographs is presented. The approach makes use of a linear-phase high-frequency emphasis filter for digital filtering and
histogram equalization for contrast enhancement to achieve improvements. Results obtained indicate that a filtered image can
reveal sharper edges and provide more details. Also, contrast enhancement offers a way to further enhance the global (or local) visualization by equalizing the histogram of the pixel values within
the whole image (or a region of interest). The work aims to improve lung nodule visualization of chest radiographs to aid detection of lung cancer which is currently the leading cause of cancer deaths worldwide.
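Global histogram equalization, as used here for contrast enhancement, can be sketched in a few lines; the flat-list image and 8-bit gray range below are illustrative simplifications:

```python
def equalize(image, levels=256):
    """Global histogram equalization of a flat list of gray levels."""
    n = len(image)
    hist = [0] * levels
    for p in image:
        hist[p] += 1
    # Cumulative distribution function over the gray levels.
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # Remap each pixel so the output histogram is roughly uniform.
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in image]

# Low-contrast strip: values crowd the 100-103 range.
print(equalize([100, 100, 101, 102, 103, 103, 103, 103]))
```

Restricting the same remapping to a region of interest gives the local variant mentioned above.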
Abstract: The quantitative determination of several trace
elements (Cr, As, Se, Cd, Hg, Pb) existing as inorganic impurities in
some oriental herbal products, such as Lingzhi Mushroom capsules
and Philamin powder, using ICP-MS has been studied. Various
instrumental parameters, such as power, gas flow rate, and sample
depth, as well as the concentration of nitric acid and the thick
background caused by high concentrations of possible interferences,
were investigated for the determination of the above-mentioned
elements, and the optimum
working conditions of the sample measurement on ICP-MS
(Agilent-7500a) were reported. Appropriate isotope internal standards
were also used to improve the accuracy of mercury determination.
Optimal parameters for sample digestion were also investigated. The
recovery of the analytical procedure was examined using a Certified
Reference Material (IAEA-CRM 359). The recommended procedure
was then applied for the quantitative determination of Cr, As, Se, Cd,
Hg, Pb in Lingzhi Mushroom capsule, and Philamine powder samples.
The reproducibility of sample measurement (average value between
94 and 102%) and the uncertainty of analytical data (less than 20%)
are acceptable.
Abstract: End milling process is one of the common metal
cutting operations used for machining parts in manufacturing
industry. It is usually performed at the final stage in manufacturing a
product and surface roughness of the produced job plays an
important role. In general, surface roughness affects the wear
resistance, ductility, and tensile and fatigue strength of machined
parts and cannot be neglected in design. In the present work, an
experimental investigation of end milling of an aluminium alloy with
a carbide tool is carried out, and the effects of different cutting
parameters on the response are studied with three-dimensional
surface plots. An artificial neural network (ANN) is used to establish
the relationship between the surface roughness and the input cutting
parameters (i.e., spindle speed, feed, and depth of cut). The
MATLAB ANN toolbox, which implements the feed-forward
back-propagation algorithm, is used for modeling. A 3-12-1 network
structure, having the minimum average prediction error, was found
to be the best architecture for predicting surface roughness. The
network predicts surface roughness well for unseen data. For a
desired surface finish of the component to be produced, many
different combinations of cutting parameters are available; the
optimum cutting parameters for obtaining the desired surface finish
while maximizing tool life are predicted. The methodology is
demonstrated, a number of problems are solved, and the algorithm is
coded in MATLAB®.
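The forward pass of a 3-12-1 feed-forward network can be sketched as follows; the weights here are random and untrained (the paper trains with back-propagation in MATLAB), so this only illustrates the network topology, not the fitted model:

```python
import math
import random

def forward(x, w1, b1, w2, b2):
    """One forward pass of a 3-12-1 feed-forward network with a sigmoid
    hidden layer and a linear output, a common configuration for
    regressing a single response such as surface roughness."""
    hidden = [1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(row, x)) + b)))
              for row, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

# Random, untrained weights for the 3-12-1 topology (illustrative only):
random.seed(0)
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(12)]
b1 = [random.uniform(-1, 1) for _ in range(12)]
w2 = [random.uniform(-1, 1) for _ in range(12)]
b2 = random.uniform(-1, 1)

# Normalized (spindle speed, feed, depth of cut) input -- hypothetical:
print(forward([0.5, 0.2, 0.8], w1, b1, w2, b2))
```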
Abstract: The data presented in this work show that a rise in air
temperature is expected in Armenia in both seasonal and annual
terms. As a result of the noted increase in temperature, a significant
growth in the vulnerability of the territory of Armenia to malaria is
expected. Zoning by the risk of renewed malaria transmission has
been performed.
Abstract: There are two types of drought: conceptual drought
and operational drought. Three parameters, namely the beginning,
the end, and the degree of severity of a drought, can be identified
for operational drought using average precipitation over the whole
region. One of the methods used to measure drought is the
Reconnaissance Drought Index (RDI). Evapotranspiration is
calculated using the Penman-Monteith method by analyzing
thirty-nine years of climatic data. The evapotranspiration is then
utilized in the RDI to compute the normalized and standardized
RDI. These RDI classifications indicate what kind of drought the
Bhavnagar region faced on a 12-month time-scale basis. A
comparison between actual drought conditions and droughts
identified by the RDI method is also illustrated. It can be concluded
that both methods identify drought in the same years, albeit with
different index values, while the severity classification remains the
same.
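A commonly used formulation of the RDI (assumed here; the abstract does not spell out its equations) computes alpha as the ratio of cumulative precipitation to cumulative potential evapotranspiration over the period, then normalizes and standardizes it against a historical series:

```python
import math

def rdi(precip, pet, alphas_history):
    """Reconnaissance Drought Index for one 12-month period.

    alpha = total precipitation / total potential evapotranspiration;
    the normalized RDI compares alpha with its long-term mean, and the
    standardized RDI uses the ln-transformed alphas (a common RDI
    formulation, assumed here)."""
    alpha = sum(precip) / sum(pet)
    mean_a = sum(alphas_history) / len(alphas_history)
    rdi_n = alpha / mean_a - 1
    logs = [math.log(a) for a in alphas_history]
    mean_l = sum(logs) / len(logs)
    std_l = math.sqrt(sum((l - mean_l) ** 2 for l in logs) / len(logs))
    rdi_st = (math.log(alpha) - mean_l) / std_l
    return rdi_n, rdi_st

# Hypothetical 12-month totals (mm) and a short alpha history:
rdi_n, rdi_st = rdi([40] * 12, [120] * 12, [0.5, 0.6, 0.4, 0.45, 0.55])
print(round(rdi_n, 3), round(rdi_st, 3))
```

Negative standardized values indicate drier-than-normal conditions, which is how the 12-month drought classes would be assigned.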