Abstract: The primary purpose of this paper is to develop a GIS interface for estimating sequences of stream-flows at ungauged stations from known flows at gauged stations. The integrated GIS interface is composed of three major steps. The first step analyzes precipitation characteristics statistically and builds a multiple linear regression equation for the long-term mean daily flow at ungauged stations; the independent variables of the regression equation are mean daily flow and drainage area. Traditionally, mean flow data are generated using the Thiessen polygon method, but in the interface the user can also select Kriging, IDW (Inverse Distance Weighted), or Spline methods in addition to the traditional ones. In the second step, the flow duration curve (FDC) at the ungauged station is computed from the FDCs at the gauged stations, and the mean annual daily flow is computed by a spatial interpolation algorithm. The third step obtains watershed/topographic characteristics, which are the most important factors governing stream-flows. Finally, the simulated daily flow time series are compared with the observed time series; the results obtained with the integrated GIS interface closely match the observations, and the relationship between the topographic/watershed characteristics and the stream-flow time series is highly correlated.
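As a minimal illustration of the regression step above, the sketch below fits a two-predictor linear model by least squares and applies it at an ungauged station; the predictor names and all numerical values are hypothetical, and the actual predictors and spatial interpolation method are selected by the user in the GIS interface.

```python
import numpy as np

# Hypothetical gauged-station data: mean flow index, drainage area (km^2),
# and the observed long-term mean daily flow (m^3/s) used as the dependent variable.
X = np.array([[0.8, 120.0],
              [1.1, 340.0],
              [0.9, 210.0],
              [1.3, 450.0]])
y = np.array([2.1, 6.8, 3.9, 9.2])

# Fit the multiple linear regression y = b0 + b1*x1 + b2*x2 by least squares.
A = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Estimate the long-term mean daily flow at a (hypothetical) ungauged station.
ungauged = np.array([1.0, 1.0, 280.0])   # [intercept term, x1, x2]
print(ungauged @ coeffs)
```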
Abstract: The production of biogas from bakery waste was enhanced by the addition of bacterial cells. This study was divided into two steps. In the first step, grease waste from a bakery industry grease trap was initially degraded by Pseudomonas aeruginosa. The concentration of by-products, especially glycerol, was determined, and the glycerol concentration was found to increase from 12.83% to 48.10%. In the second step, three biodigesters were set up with three different substrates: non-degraded waste in the first biodigester, degraded waste in the second biodigester, and degraded waste mixed with swine manure at a 1:1 ratio in the third biodigester. The highest biogas concentration was found in the third biodigester, with 44.33% methane and 63.71% carbon dioxide. Lower concentrations of 24.90% methane and 18.98% carbon dioxide were exhibited by the second biodigester, whereas the lowest were found in the non-degraded waste biodigester. It was demonstrated that biogas production is greatly increased by initial degradation of the grease waste with Pseudomonas aeruginosa.
Abstract: The rapid development of manufacturing and information systems has caused significant changes in manufacturing environments in recent decades. Mass production has given way to flexible manufacturing systems, an important characteristic of which is customized or "on demand" production. In this scenario, a seamless, gap-free information flow becomes a key success factor for enterprises. In this paper we present a framework to support the mapping of features into machining workingsteps compliant with the ISO 14649 standard (known as STEP-NC). The system determines how the features can be made with the available manufacturing resources. Examples of the mapping method are presented for features such as a pocket with a general surface.
Abstract: Clustering methods developed in data mining theory can be successfully applied to the investigation of different kinds of dependencies between environmental conditions and human activities. It is known that environmental parameters such as temperature, relative humidity, atmospheric pressure and illumination have significant effects on human mental performance. To investigate the effect of these parameters, a data mining clustering technique based on entropy and the Information Gain Ratio (IGR), K(Y/X) = (H(X) − H(Y/X)) / H(Y), where H(Y) = −Σ p_i ln(p_i), is used. This technique allows the boundaries of clusters to be adjusted. It is shown that the information gain ratio (IGR) grows monotonically with the degree of connectivity between two variables. Compared with, for example, correlation analysis, this approach has the advantage of being relatively less sensitive to the shape of the functional dependence. A variant of an algorithm implementing the proposed method, together with an analysis of the environmental-effects problem above, is also presented. It is shown that the proposed method converges in a finite number of steps.
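As a worked illustration of the quantities defined above, the following sketch computes the entropies and the gain ratio exactly as stated in the abstract; the discretized environmental and performance values are invented for the example.

```python
import numpy as np

def entropy(labels):
    # H = -sum p_i ln(p_i) over the empirical distribution of `labels`
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def conditional_entropy(y, x):
    # H(Y/X) = sum over values of X of p(x) * H(Y | X = x)
    h = 0.0
    for value in np.unique(x):
        mask = (x == value)
        h += mask.mean() * entropy(y[mask])
    return h

def information_gain_ratio(y, x):
    # IGR as stated in the abstract: K(Y/X) = (H(X) - H(Y/X)) / H(Y)
    return (entropy(x) - conditional_entropy(y, x)) / entropy(y)

# Toy example: temperature discretized into clusters vs. mental-performance classes
temperature_bins = np.array([0, 0, 1, 1, 2, 2, 2, 0])
performance = np.array([0, 0, 1, 1, 1, 1, 0, 0])
print(information_gain_ratio(performance, temperature_bins))
```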
Abstract: Clusters of microcalcifications in mammograms are an important sign of breast cancer. This paper presents a complete Computer Aided Detection (CAD) scheme for the automatic detection of clustered microcalcifications in digital mammograms. The proposed system, MammoScan μCaD, consists of three main steps. First, all potential microcalcifications are detected using a feature extraction method, VarMet, and adaptive thresholding; this step also produces a number of false detections. The goal of the second step, Classifier level 1, is to remove everything but microcalcifications. The last step, Classifier level 2, uses learned dictionaries and sparse representations as a texture classification technique to distinguish single, benign microcalcifications from clustered microcalcifications, and to remove some remaining false detections. The system is trained and tested on true digital data from Stavanger University Hospital, and the results are evaluated by radiologists. The overall results are promising, with a sensitivity above 90% and a low false detection rate (approximately 1 unwanted detection per image, or 0.3 false detections per image).
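The abstract does not describe the VarMet features themselves; purely as a generic sketch of the adaptive thresholding idea used in the first step, a pixel can be flagged as a candidate when it exceeds its local mean by a multiple of the local standard deviation:

```python
import numpy as np
from scipy import ndimage

def candidate_mask(image, window=15, k=3.0):
    # Local adaptive thresholding: flag pixels brighter than the local mean
    # by k local standard deviations (window and k are illustrative values).
    img = image.astype(float)
    local_mean = ndimage.uniform_filter(img, size=window)
    local_sq_mean = ndimage.uniform_filter(img ** 2, size=window)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
    return img > local_mean + k * local_std
```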
Abstract: Graph rewriting-based visual model processing is a
widely used technique for model transformation. Visual model
transformations often need to follow an algorithm that requires a
strict control over the execution sequence of the transformation steps.
Therefore, in Visual Model Processors (VMPs) the execution order
of the transformation steps is crucial. This paper presents the visual
control flow support of Visual Modeling and Transformation System
(VMTS), which facilitates composing complex model transformations from simple transformation steps and executing them.
The VMTS Visual Control Flow Language (VCFL) uses stereotyped
activity diagrams to specify control flow structures and OCL
constraints to choose between different control flow branches. This
paper introduces VCFL, discusses its termination properties and
provides an algorithm to support the termination analysis of VCFL
transformations.
Abstract: The process of wafer fabrication is arguably the most
technologically complex and capital intensive stage in semiconductor
manufacturing. This large-scale discrete-event process is highly re-entrant and involves hundreds of machines, restrictions, and
processing steps. Therefore, production control of wafer fabrication
facilities (fab), specifically scheduling, is one of the most challenging
problems that this industry faces. Dispatching rules have been
extensively applied to the scheduling problems in semiconductor
manufacturing. Moreover, lot release policies are commonly used in
this manufacturing setting to further improve the performance of such systems and reduce their inherent variability. In this work, simulation is used in the scheduling of re-entrant flow shop manufacturing systems, with an application in semiconductor wafer fabrication: a simulation model has been developed for the Intel Five-Machine Six-Step Mini-Fab using the Extend™ simulation environment. The
Mini-Fab has been selected as it captures the challenges involved in
scheduling the highly re-entrant semiconductor manufacturing lines.
A number of scenarios have been developed and have been used to
evaluate the effect of different dispatching rules and lot release
policies on the selected performance measures. Results of simulation
showed that the performance of the Mini-Fab can be drastically
improved using a combination of dispatching rules and lot release
policy.
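As a much smaller illustration of what a dispatching rule does (not the Mini-Fab model itself), the sketch below simulates a single machine and compares FIFO and shortest-processing-time dispatching on invented job data:

```python
def simulate(jobs, rule):
    """Single-machine dispatching sketch.

    jobs: list of (release_time, processing_time) tuples.
    rule: key function used to pick the next job among released ones.
    Returns the average cycle time (completion time minus release time).
    """
    pending = sorted(jobs)                 # sorted by release time
    clock, cycle_times = 0.0, []
    while pending:
        released = [j for j in pending if j[0] <= clock]
        if not released:                   # machine idles until the next release
            clock = pending[0][0]
            released = [j for j in pending if j[0] <= clock]
        job = min(released, key=rule)      # apply the dispatching rule
        pending.remove(job)
        clock += job[1]                    # process the job
        cycle_times.append(clock - job[0])
    return sum(cycle_times) / len(cycle_times)

jobs = [(0, 5), (1, 2), (2, 8), (3, 1)]
print("FIFO:", simulate(jobs, rule=lambda j: j[0]))  # earliest release first
print("SPT :", simulate(jobs, rule=lambda j: j[1]))  # shortest processing time first
```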
Abstract: Like other external sorting algorithms, the presented algorithm is a two-step algorithm comprising an internal and an external step. The first part of the algorithm is similar to other such algorithms, but the second part includes a new, easily implemented method that significantly reduces the large number of input-output operations. Since decreasing processor time has little effect on the overall speed of the algorithm, any improvement must come from decreasing the number of input-output operations. This paper proposes a simple algorithm for choosing the correct record location in the final list, which decreases the time complexity and makes the algorithm faster.
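The abstract does not detail the proposed record-placement method; as a generic sketch of the two-step structure it builds on (sorted runs written in the internal step, then a k-way merge in the external step), a minimal external merge sort could look as follows:

```python
import heapq, os, tempfile

def external_sort(input_path, output_path, run_size=100_000):
    # Internal step: read fixed-size chunks, sort them in memory, write runs to disk.
    run_paths = []
    with open(input_path) as src:
        while True:
            chunk = [line for _, line in zip(range(run_size), src)]
            if not chunk:
                break
            chunk.sort()
            fd, path = tempfile.mkstemp(text=True)
            with os.fdopen(fd, "w") as run:
                run.writelines(chunk)
            run_paths.append(path)

    # External step: k-way merge of the sorted runs with a heap,
    # so each record is read and written only once more.
    runs = [open(p) for p in run_paths]
    with open(output_path, "w") as dst:
        dst.writelines(heapq.merge(*runs))
    for f, p in zip(runs, run_paths):
        f.close()
        os.remove(p)
```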
Abstract: A direct search approach to determining optimal reservoir operation is proposed using ant colony optimization for continuous domains (ACOR). The model is applied to a single-reservoir system to determine the optimum releases over 42 years of monthly time steps. A disadvantage of ant-colony-based methods, and of ACOR in particular, is the large amount of computer run time they consume. In this study a highly effective procedure for decreasing the run time has been developed. The results are compared to those of a GA-based model.
Abstract: A new strategy for the oriented immobilization of proteins is proposed. The strategy consists of two steps. The first step is to search for a docking site away from the active site on the protein surface. The second step is to find a ligand that is able to grasp the targeted site of the protein. To avoid the ligand binding to the active site of the protein, the targeted docking site is selected to carry charges opposite to those near the active site. To enhance ligand-protein binding, both hydrophobic and electrostatic interactions need to be included, so the targeted docking site should contain hydrophobic amino acids. The ligand is then selected with the help of molecular docking simulations. The enzyme α-amylase derived from Aspergillus oryzae (TAKA) was taken as an example for oriented immobilization. The active site of TAKA is surrounded by negatively charged amino acids. All the possible hydrophobic sites on the surface of TAKA were evaluated by free energy estimation through benzene docking. A hydrophobic site on the opposite side from TAKA's active site was found to be positive in net charge. A possible ligand, 3,3′,4,4′-biphenyltetracarboxylic acid (BPTA), was found to catch TAKA by the designated docking site. The BPTA molecules were then grafted onto silica gels, and the affinity of TAKA adsorption and the specific activity of the immobilized enzymes were measured. It was found that TAKA had a dissociation constant as low as 7.0×10^-6 M toward the ligand BPTA on silica gel. An increase in ionic strength had little effect on the adsorption of TAKA, which indicates the existence of hydrophobic interaction between ligand and protein. The specific activity of the immobilized TAKA was compared with that of TAKA randomly adsorbed on primary-amine-containing silica gel. It was found that the orderly immobilized TAKA has a specific activity twice as high as that of TAKA randomly adsorbed by ionic interaction.
Abstract: With the advancement of wireless sensor network technology, its practical utilization is becoming an important challenge. This paper reviews our past environmental monitoring projects and discusses the process of starting such monitoring by classifying it into four steps. The steps required to start environmental monitoring can be complicated, but they have not been well discussed by researchers of wireless sensor network technology. This paper describes our activities and challenges in each of the four steps in order to ease the process, and discusses future challenges in enabling a quick start of environmental monitoring.
Abstract: In this paper we propose a new knowledge model using the Dempster-Shafer theory of evidence for image segmentation and fusion. The proposed method consists essentially of two steps. First, mass distributions in Dempster-Shafer theory are obtained from the membership degrees of each pixel over the three image components (R, G and B); each membership degree is determined by applying Fuzzy C-Means (FCM) clustering to the gray levels of the three images. Second, the fusion process consists in defining three frames of discernment associated with the three images to be fused, and then combining them to form a new frame of discernment. The strategy used to define mass distributions in the combined framework is discussed in detail. The proposed fusion method is illustrated in the context of image segmentation. Experimental investigations and comparative studies with previous methods are carried out, showing the robustness and superiority of the proposed method in terms of image segmentation.
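The paper's strategy for deriving mass distributions from FCM memberships is specific to the method; purely as a generic sketch of the combination step, Dempster's rule for two mass functions over a common frame of discernment can be written as:

```python
from itertools import product

def dempster_combine(m1, m2):
    # Dempster's rule of combination for mass functions keyed by frozensets
    # of hypotheses; conflicting mass is redistributed by normalization.
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Toy example: two sources assign mass over the frame {c1, c2}
m_R = {frozenset({"c1"}): 0.6, frozenset({"c1", "c2"}): 0.4}
m_G = {frozenset({"c1"}): 0.5, frozenset({"c2"}): 0.3, frozenset({"c1", "c2"}): 0.2}
print(dempster_combine(m_R, m_G))
```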
Abstract: Building intelligent traffic guide systems has recently become an interesting subject. A good system should be able to observe all important visual information in order to analyze the context of the scene. To do so, signs in general, and traffic signs in particular, are usually taken into account, as they carry rich information for such systems. Therefore, many researchers have put effort into the field of sign recognition. Sign localization, or sign detection, is the most important step in the sign recognition process: it filters out non-informative areas of the scene and locates candidates for the later steps. In this paper, we apply a new approach to detecting sign locations using a new color invariant model. Experiments are carried out on different datasets introduced in other works, whose authors noted the difficulty of detecting signs under unfavorable imaging conditions. Our method is simple and fast, and most importantly it gives a high detection rate in locating signs.
Abstract: In this paper, a novel and fast algorithm for segmental and subsegmental lung vessel segmentation from Computed Tomography Angiography images is introduced. This process is particularly important for the detection of pulmonary embolism, lung nodules, and interstitial lung disease. The applied method is realized in five steps. In the first step, lung segmentation is achieved. In the second, the images are thresholded and the differences between the images are detected. In the third, the left and right lungs are combined with the differences obtained in the second step to yield the Exact Lung Image (ELI). In the fourth, the image thresholded for vessels is combined with the ELI. Finally, the segmental and subsegmental lung vessels are identified and segmented using the image obtained in the fourth step. The performance of the applied method was judged to be good by radiologists, and it provides medically adequate results for surgery.
Abstract: Time series analysis often requires data that represent the evolution of an observed variable in equidistant time steps, and sampling is applied in order to collect such data. While continuous signals may be sampled, analyzed and reconstructed by applying Shannon's sampling theorem, time-discrete signals have to be dealt with differently. In this article we consider the discrete-event simulation (DES) of job-shop systems and study the effects of different sampling rates on data quality, regarding the completeness and accuracy of reconstructed inventory evolutions. In doing so, we discuss deterministic as well as non-deterministic behavior of system variables. Error curves are deployed to illustrate and discuss the impact of the sampling rate and to derive recommendations for its well-founded choice.
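To give a minimal illustration of the sampling-rate effect on a time-discrete signal (not the job-shop model itself), the sketch below samples an invented, piecewise-constant inventory trace at two rates; coarse sampling misses short-lived changes:

```python
import bisect

# Event-driven inventory trace: (event_time, inventory_level after the event)
events = [(0.0, 5), (1.2, 6), (1.5, 5), (4.0, 7), (4.1, 6), (8.0, 4)]
times = [t for t, _ in events]

def sample(dt, horizon=10.0):
    # Piecewise-constant reconstruction: at each sampling instant, take the
    # level set by the most recent event.
    samples, t = [], 0.0
    while t <= horizon:
        idx = bisect.bisect_right(times, t) - 1
        samples.append((t, events[idx][1]))
        t += dt
    return samples

print(sample(dt=4.0))   # coarse: the short-lived level change at t=1.2 is missed
print(sample(dt=0.5))   # finer sampling recovers more of the inventory evolution
```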
Abstract: Myocardial scintigraphy is an imaging modality which provides functional information, whereas coronarography gives useful information about coronary artery anatomy. In the case of coronary artery disease (CAD), coronarography cannot determine precisely which moderate lesions (artery narrowing between 50% and 70%), known as the "gray zone", are haemodynamically significant. In this paper, we aim to define the relationship between the location and degree of stenosis in the coronary arteries and the perfusion observed on myocardial scintigraphy. This allows us to model the evolution of the impact of these stenoses in order to justify a coronarography, or to avoid it, for patients suspected of being in the gray zone. Our approach is decomposed into two steps. The first step consists in modelling a coronary artery bed and stenoses of different locations and degrees. The second step consists in modelling the left ventricle at stress and at rest using the spherical harmonics model and myocardial scintigraphic data. We use the spherical harmonics descriptors to analyse the deformation of the left ventricle model between stress and rest, which allows us to conclude whether an ischemia exists and to quantify it.
Abstract: This paper presents a study of the hardness profile of a spur gear heated by an induction heating process as a function of the machine parameters, namely the power (kW), the heating time (s) and the generator frequency (kHz). The overall work is realized by 3D finite-element simulation of the process, coupling and solving the electromagnetic field and heat transfer problems, and it was performed in three distinct steps. First, a Comsol 3D model was built using an adequate formulation and taking into account the material properties and the machine parameters. Second, a convergence study was conducted to optimize the mesh. Then, the surface temperatures and case depths were analyzed in detail as a function of the initial current density and the heating time in the medium frequency (MF) and high frequency (HF) heating modes, and edge effects were studied. Finally, the simulation results were validated using experimental tests.
Abstract: In pattern recognition applications the low level
segmentation and the high level object recognition are generally
considered as two separate steps. The paper presents a method that
bridges the gap between the low and the high level object
recognition. It is based on a Bayesian network representation and a network propagation algorithm. At the low level it uses a hierarchical structure of quadratic spline wavelet image bases. The method is
demonstrated for a simple circuit diagram component identification
problem.
Abstract: In this study, the optimization of a supersonic air-to-air ejector is carried out by a recently developed single-objective genetic algorithm based on the adaptation of the sequence of individuals. The sequence adaptation is based on the shape-based distance of individuals and an embedded micro-genetic algorithm. The optimal sequence found defines the succession of CFD-based objective calculations within each generation of the regular micro-genetic algorithm. A spring-based deformation mutates the computational grid, starting from the initial individual, via the adapted population in the optimized sequence. The selection of a generation's initial individual is knowledge-based. A direct comparison of the newly defined and the standard micro-genetic algorithm is carried out for the supersonic air-to-air ejector. The only objective is to minimize the loss of total stagnation pressure in the ejector. The result is that the sequence-adapted micro-genetic algorithm can provide results comparable to the standard algorithm, but in a significantly lower number of overall CFD iteration steps.
Abstract: In a state-of-the-art industrial production line for photovoltaic products, the handling and automation processes are of particular importance. In the course of being processed into a fully functional crystalline solar cell, an as-cut photovoltaic wafer is subjected to numerous repeated handling steps. Given stronger requirements on productivity and the need to decrease rejections due to defects, the mechanical stress on the thin wafers has to be reduced to a minimum, as fragility increases with decreasing wafer thickness. In relation to the increasing wafer fragility, research at the Fraunhofer Institutes IPA and CSP has shown a negative correlation between multiple handling processes and wafer integrity. Recent work has therefore focused on the analysis and optimization of the dry wafer stack separation process with compressed air. Achieving a wafer-sensitive process capability together with a high production throughput rate is the basic motivation of this research.