Abstract: In recent decades, to meet the varied demands of customers, many
manufacturers have adopted the mixed-model assembly line (MMAL) in their
production lines, since this policy makes it possible to assemble different
models of the same product on a single line under a make-to-order (MTO)
approach. In this article, we determine the sequence of an MMAL while
applying the kitting approach and planning rest times for general workers,
in order to reduce waste, increase worker effectiveness, and apply elements
of the lean production approach. This multi-objective sequencing problem
was solved at small scale with GAMS 22.2 and with the particle swarm
optimization (PSO) meta-heuristic on 10 test problems; a comparison shows
that the two sets of results are very similar. We then determine the
factors most important to the cost, whose improvement reduces it. Since the
problem is NP-hard at large scale, we use the PSO meta-heuristic to solve
it there as well, defining several large test problems to assess its
performance and to identify the important cost factors, so that production
at minimum cost becomes possible by changing or improving them.
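As a reference point for the meta-heuristic named above, a bare-bones continuous PSO can be sketched as follows. The sphere cost function, bounds, and parameter values are illustrative placeholders, not the paper's sequencing model (which operates on discrete sequences):

```python
import random

def pso(cost, dim, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization sketch (generic textbook form)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Toy usage: minimize the sphere function in 3 dimensions
random.seed(0)
best, val = pso(lambda x: sum(xi * xi for xi in x), dim=3, bounds=(-5, 5))
```

A real MMAL sequencing objective would replace the sphere function with the paper's multi-objective cost and encode sequences rather than real vectors.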
Abstract: The purpose of this research was to develop a biological
nutrient removal (BNR) system with low energy consumption, sludge
production, and land usage. These characteristics suggest that the BNR
system could be an alternative for future wastewater treatment in the
ubiquitous city (U-city). Organics and nitrogen compounds could be removed
by this system so that the secondary or tertiary stages of wastewater
treatment satisfy their standards. The system was composed of oxic and
anoxic filters filled with PVDC and POM media. The anoxic/oxic filter
system was operated at an empty bed contact time of 4 hours while the
recirculation ratio was increased from 0 to 100%. The system removed 76.3%
of total nitrogen and 93% of COD. To observe the internal behavior of the
system, SCOD, NH3-N, and NO3-N were measured; their removals ranged over
25~100%, 59~99%, and 70~100%, respectively.
Abstract: In this paper, we represent protein structures using graphs,
so that a protein structure database becomes a graph database. Each graph
is represented by a spectral vector. We use the Jacobi rotation algorithm
to calculate the eigenvalues of the normalized Laplacian of the graph's
adjacency matrix. To measure the similarity between two graphs, we
calculate the Euclidean distance between their spectral vectors. To cluster
the graphs, we use an M-tree with the Euclidean distance to cluster the
spectral vectors. Besides clustering, the M-tree can be used for graph
searching in the graph database. Our proposed method was tested on a graph
database of 100 graphs representing 100 protein structures downloaded from
the Protein Data Bank (PDB), and we compare the results with the SCOP
hierarchical structure.
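The spectral-vector comparison described above can be illustrated on toy graphs. The sketch below uses NumPy's symmetric eigensolver in place of a hand-written Jacobi rotation (both compute the same eigenvalues of a symmetric matrix); the two 3-node graphs are illustrative, not PDB structures:

```python
import numpy as np

def spectral_vector(adj):
    """Sorted eigenvalues of the normalized Laplacian
    L = I - D^{-1/2} A D^{-1/2} of an adjacency matrix."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt
    return np.sort(np.linalg.eigvalsh(lap))

# Two toy graphs: a triangle (K3) and a 3-node path (P3)
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)

# Graph similarity as Euclidean distance between spectral vectors
dist = np.linalg.norm(spectral_vector(tri) - spectral_vector(path))
```

The normalized-Laplacian spectrum is {0, 1.5, 1.5} for the triangle and {0, 1, 2} for the path, so the distance is sqrt(0.5); identical graphs would give distance 0.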
Abstract: Planning capacities when regenerating complex investment goods involves particular challenges, in that the planning is subject to a large degree of uncertainty regarding load information. Using information fusion, by applying Bayesian networks, a method is being developed for forecasting the anticipated expenditures (human labor, tool and machinery utilization, time, etc.) for regenerating a good. The generated forecasts later serve as a tool for planning capacities and ensure greater stability in the planning processes.
Abstract: In this paper, the least-squares design of variable fractional-delay (VFD) finite impulse response (FIR) digital differentiators is proposed. The transfer function used is formulated so that the Farrow structure can be applied to realize the designed system. Also, the symmetric characteristics of the filter coefficients are derived, which reduces the complexity by saving almost half of the coefficients. Moreover, all the elements of the vectors and matrices involved in the optimization process can be represented in closed form, which makes the design easier. A design example is presented to illustrate the effectiveness of the proposed method.
Abstract: In this research, a comparison between the k-epsilon and
LES models for a deoiling hydrocyclone is conducted. The flow field of the
hydrocyclone is obtained by three-dimensional simulations with the
OpenFOAM code. The predictive potential of both methods for this complex
swirling flow is discussed. The large eddy simulation results agree more
closely with experiment, and they are presented in figures of different
hydrocyclone cross sections.
Abstract: Web applications have become very complex and crucial,
especially when combined with areas such as CRM (Customer Relationship
Management) and BPR (Business Process Reengineering). Consequently, the
scientific community has focused its attention on Web application design,
development, analysis, and testing, by studying and proposing
methodologies and tools. This paper proposes an approach to automatic
multi-dimensional concern mining for Web applications, based on concept
analysis, impact analysis, and token-based concern identification. This
approach lets the user analyse and traverse the Web software relevant to a
particular concern (concept, goal, purpose, etc.) via multi-dimensional
separation of concerns, in order to document, understand, and test Web
applications. The technique was developed in the context of the WAAT (Web
Applications Analysis and Testing) project. A semi-automatic tool to
support it is currently under development.
Abstract: This study examined a habitat-suitability assessment method, namely the Ecological Niche Factor Analysis (ENFA). A virtual species was created and then dispatched in a geographic information system model of a real landscape under three historical scenarios: (1) spreading, (2) equilibrium, and (3) overabundance. In each scenario, the virtual species was sampled and the simulated data sets were used as inputs for the ENFA to reconstruct the habitat suitability model. The 'equilibrium' scenario gave the best results in both quantity and quality among the three scenarios. ENFA was sensitive to the distribution scenarios but not to sample sizes. The use of a virtual species proved to be a very efficient method, allowing one to fully control the quality of the input data as well as to accurately evaluate the predictive power of the analyses.
Abstract: Noise level has critical effects on the diagnostic
performance of the signal-averaged electrocardiogram (SAECG), because the
true onset and offset of the QRS complex can be masked by residual noise
and are sensitive to the noise level. Several studies and commercial
machines have used a fixed number of heart beats (typically between 200
and 600 beats) or a predefined noise level (typically between 0.3 and
1.0 μV) in each of the X, Y and Z leads to perform SAECG analysis.
However, different criteria or methods used to perform SAECG cause
discrepancies in the noise levels among study subjects. According to the
recommendations of the 1991 ESC, AHA and ACC Task Force Consensus Document
for the use of SAECG, the determinations of onset and offset are closely
related to the mean and standard deviation of the noise sample. Hence,
this study performs SAECG using consistent root-mean-square (RMS) noise
levels among study subjects and analyzes the effects of the noise level on
SAECG. The study also evaluates the differences between normal subjects
and chronic renal failure (CRF) patients in the time-domain SAECG
parameters.
The study subjects comprised 50 normal Taiwanese subjects and 20 CRF
patients. During signal-averaged processing, different RMS noise levels
were adjusted to evaluate their effects on three time-domain parameters:
(1) filtered total QRS duration (fQRSD), (2) RMS voltage of the last 40 ms
of the QRS (RMS40), and (3) duration of the low-amplitude signals below
40 μV (LAS40). The results demonstrated that reducing the RMS noise level
can increase fQRSD and LAS40 and decrease RMS40, and can further increase
the differences in fQRSD and RMS40 between normal subjects and CRF
patients. The SAECG may also become abnormal due to the reduction of the
RMS noise level. In conclusion, it is essential to establish diagnostic
criteria for SAECG using consistent RMS noise levels in order to reduce
noise level effects.
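The trade-off discussed above between a fixed beat count and a fixed noise level rests on a standard fact: averaging N beats attenuates uncorrelated noise by roughly sqrt(N). The sketch below illustrates this with synthetic Gaussian noise (not ECG data); the beat count of 400 is an illustrative value within the cited range:

```python
import math
import random

def rms(samples):
    """Root-mean-square amplitude of a segment (e.g. in uV)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

random.seed(0)
n_samples = 2000   # samples per simulated "beat" window
N = 400            # number of beats averaged

# Average N independent unit-RMS Gaussian-noise "beats" sample by sample.
# The residual noise RMS should fall to about 1 / sqrt(N) = 0.05.
avg = [sum(random.gauss(0, 1) for _ in range(N)) / N for _ in range(n_samples)]
noise_after = rms(avg)
```

In practice this is why averaging to a target RMS noise level, rather than a fixed beat count, gives comparable onset/offset determinations across subjects with different per-beat noise.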
Abstract: The processing of the electrocardiogram (ECG) signal consists
essentially in the detection of the characteristic points of the signal,
which are an important tool in the diagnosis of heart diseases; the most
important of these is the detection of the R waves. In this paper, we
present various mathematical tools used for filtering the ECG, using
digital filtering and Discrete Wavelet Transform (DWT) filtering. In
addition, the paper covers two main R-peak detection methods that apply a
windowing process: the first method is based on derivative calculations,
and the second is a time-frequency method based on the Dyadic Wavelet
Transform (DyWT).
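As a minimal illustration of windowed R-peak detection, the sketch below applies a naive amplitude threshold with a refractory window to a synthetic spike train. It is a deliberate simplification, not the paper's derivative-based or DyWT-based detectors:

```python
import numpy as np

def detect_r_peaks(sig, fs, win=0.2, thresh_frac=0.6):
    """Naive R-peak detector: local maxima above a fraction of the
    global maximum, separated by at least `win` seconds."""
    thresh = thresh_frac * np.max(sig)
    min_gap = int(win * fs)          # refractory window in samples
    peaks, last = [], -min_gap
    for i in range(1, len(sig) - 1):
        is_local_max = sig[i] >= sig[i - 1] and sig[i] > sig[i + 1]
        if sig[i] > thresh and is_local_max and i - last >= min_gap:
            peaks.append(i)
            last = i
    return peaks

# Synthetic "ECG": low-amplitude baseline with sharp spikes once per second
fs = 250
t = np.arange(5 * fs)
sig = 0.05 * np.sin(2 * np.pi * t / fs)
for p in range(fs // 2, len(sig), fs):
    sig[p] = 1.0

peaks = detect_r_peaks(sig, fs)
```

Real detectors replace the fixed threshold with an adaptive one computed from a derivative or wavelet-transformed signal, which is what makes them robust to baseline wander and noise.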
Abstract: New programming technologies allow for the creation of
components which can be automatically or manually assembled to offer a
new experience in understanding and mastering knowledge, or in acquiring
skills for a specific knowledge area. The project proposes an interactive
framework that permits the creation, combination and utilization of
components that are specific to mathematical training in high schools.
The main objectives of the framework are:
• authoring lessons by the teacher or the students; all they need
are basic operating skills for Equation Editor (or something
similar, or LaTeX); the rest are just drag & drop operations,
inserting data into a grid, or navigating through menus
• allowing audio presentations of mathematical texts and
solving hints (more easily understood by the students)
• offering graphical representations of a mathematical function
edited in Equation Editor
• storing learning objects in a database
• storing predefined lessons (efficient for expressions and
commands, the rest being calculations; this allows a high
compression)
• viewing and/or modifying predefined lessons, according to the
curricula
The whole framework is centred on a mini-compiler for mathematical
expressions, which stores code that is later used for different purposes
(tables, graphics, and optimisations).
Programming technologies used: a Visual C# .NET implementation is
proposed. New and innovative digital learning objects for mathematics
will be developed; they are capable of interpreting, contextualizing and
reacting depending on the architecture in which they are assembled.
Abstract: The purpose of this study was to determine the influence of
physical activity and dietary fat intake on the Body Mass Index (BMI) of
lecturers within a higher-learning institutional setting. The study
adopted a cross-sectional correlational design and included 120 lecturers
selected proportionately by simple random sampling from a population of
600 lecturers. Data were collected using questionnaires whose sections
included a physical activity checklist adapted from the International
Physical Activity Questionnaire (IPAQ), a 24-hour food recall, and
anthropometric measurements, mainly weight and height. Analysis involved
the use of bivariate correlations and linear regression. A significant
inverse association was registered between BMI and the duration (in
minutes) spent doing moderately intense physical activity per day
(r = -0.322, p
Abstract: Stem cells have the ability to renew themselves through
mitotic cell division and to differentiate into a diverse range of
specialized cell types. Cellular differentiation is the process by which
a less specialized cell develops into a more specialized one. This paper
studies the fundamental problem of a computational schema for an
artificial neural network based on chemical, physical and biological
variables of state. Through this type of study, the system could be
modelled for the viable propagation of the differentiation of various
economically important stem cells. This paper models various
differentiation outcomes of an artificial neural network into a variety
of potential specialized cells, implemented in MATLAB version 2009. A
feed-forward back-propagation network was created with an input vector of
five elements, a single hidden layer, and one output unit in the output
layer. The efficiency of the neural network was evaluated by comparing
the results achieved in this study with the experimental input data and
the chosen target data. The proposed solution's efficiency was assessed
by comparative analysis of the “Mean Square Error" at zero epochs.
Different variables of data were used in order to test the targeted
results.
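The network topology described above (five inputs, one hidden layer, one output, trained by back-propagation on a mean-square-error criterion) can be sketched in a few lines. Here NumPy stands in for MATLAB, and the data are synthetic placeholders rather than the study's experimental inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 5 input state variables, 1 binary outcome
X = rng.random((40, 5))
y = (X.sum(axis=1, keepdims=True) > 2.5).astype(float)

# 5 -> 8 -> 1 topology with sigmoid activations
W1 = rng.normal(0, 0.5, (5, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out0 = forward(X)
mse_start = float(np.mean((out0 - y) ** 2))

lr = 1.0
for _ in range(500):
    h, out = forward(X)
    # Back-propagate squared-error gradients through both sigmoid layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);  b1 -= lr * d_h.mean(axis=0)

_, out1 = forward(X)
mse_end = float(np.mean((out1 - y) ** 2))
```

Tracking the mean square error before and after training, as here, is the same efficiency criterion the abstract describes.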
Abstract: Historic preservation areas are extremely vulnerable to disasters because they are home to many vulnerable people and contain many closely spaced wooden houses. However, the narrow streets in these regions have historic meaning, which means that they cannot be widened and can become blocked easily during large disasters. Here, we describe our efforts to establish a methodology for the planning of evacuation routes in such historic preservation areas. In particular, this study aims to clarify the effectiveness of measures intended to secure two-way evacuation routes for vulnerable people during large disasters in a historic area preserved under the Cultural Properties Protection Law, Japan.
Abstract: Appropriate ventilation in a classroom helps enhance the air
exchange rate and student concentration. This study focuses on the
effects of fenestration in a four-story school building by performing
numerical simulation of the building while considering the indoor and
outdoor environments simultaneously. The wind profile function embedded
in the PHOENICS code was set as the inlet boundary condition for a
suburban environment. Sixteen fenestration combinations were compared in
a classroom containing thirty seats. The study evaluates the mean age of
air (AGE) and the airflow pattern of a classroom on different floors.
Considering both wind profile and fenestration effects, the airflow on
higher floors is channeled toward the area near the ceiling of a room and
causes an older mean age of air in the breathing zone. The results of
this study serve as a useful guide for enhancing natural ventilation in a
typical school building.
Abstract: The dispersion of a line of heavy particles in an isotropic,
incompressible, three-dimensional turbulent flow has been studied using
Kinematic Simulation techniques to find the evolution of the line's
fractal dimension. In this study, the fractal dimension of the line is
found for different cases of heavy-particle inertia (different Stokes
numbers), in the absence of particle gravity, and compared with the
fractal dimension obtained for the diffusion of a material line at the
same Reynolds number. It can be concluded that, for the dispersion of a
line of heavy particles in turbulent flow, the particle inertia affects
the fractal dimension of a line released in a turbulent flow for Stokes
numbers 0.02 < St < 2. At small times, most of the cases are not affected
by the inertia until a certain time, the particle response time τa, which
grows as the particle inertia increases; beyond this time, the fractal
dimension of the line increases, owing to the particles becoming more
sensitive to the small scales that change the line's shape during its
journey.
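For readers unfamiliar with measuring a line's fractal dimension, a standard box-counting estimate can be sketched as follows. This is a generic illustration on a straight line (expected dimension near 1), not the Kinematic Simulation analysis of the paper:

```python
import numpy as np

def box_count_dimension(points, sizes):
    """Box-counting estimate of a curve's fractal dimension: count the
    occupied grid boxes N(s) at several box sizes s, then take the slope
    of log N(s) versus log(1/s)."""
    counts = []
    for s in sizes:
        boxes = {tuple(np.floor(p / s).astype(int)) for p in points}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Densely sampled straight line in 3D: dimension should be close to 1
t = np.linspace(0.0, 1.0, 20000)
line = np.column_stack([t, 0.5 * t, 0.25 * t])
dim = box_count_dimension(line, sizes=[0.1, 0.05, 0.025, 0.0125])
```

A line distorted by turbulence at the small scales would occupy more boxes at small s, pushing the estimated dimension above 1, which is the effect the abstract reports for larger Stokes numbers.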
Abstract: Virtual Reality (VR) is becoming increasingly important for
business, education, and entertainment, and VR technology has therefore
been applied for training purposes in areas such as the military, safety
training, and flight simulators. In particular, immersion is very
important to a superior, highly reliable VR training system. Manipulation
training in immersive virtual environments is difficult partly because
users must do without the haptic contact with real objects that they rely
on in the real world to orient themselves and the objects they
manipulate.
In this paper, we create an immersion questionnaire and an experiment to
assess the influence of immersion on performance in a VR training system.
The Immersion Questionnaire (IQ) covered spatial immersion, psychological
immersion, and sensory immersion. We examine how users of the training
system perform visual attention and signal detection tasks. Twenty
subjects were allocated to a factorial design comparing two different VR
systems (Desktop VR and Projector VR). The results indicated that the
different VR representation methods significantly affected the
participants' immersion dimensions.
Abstract: This study investigates a vehicle Lumped Parameter Model
(LPM) in a frontal crash. There are several ways of determining the
spring and damper characteristics, and this type of problem should be
treated as system identification. The study uses a Genetic Algorithm
(GA), an effective procedure for optimization problems, to minimize the
error between the target data (experimental data) and the calculated
results (obtained by analytical solution). The model is analyzed with
5 DOF, and the results are compared with a 5-DOF serial model. Finally,
the response of the model to external excitation is investigated.
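The GA-based identification step described above can be sketched in its simplest form: a real-coded GA minimizing the squared error between a target value and a model parameter. The one-parameter "stiffness" example is a toy stand-in for the full spring/damper characterization:

```python
import random

def ga_minimize(err, bounds, pop_size=30, gens=60, mut=0.1):
    """Bare-bones real-coded GA: elitist selection, arithmetic crossover,
    Gaussian mutation. A generic sketch of the identification step."""
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=err)
        elite = pop[: pop_size // 2]            # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = 0.5 * (a + b)                       # arithmetic crossover
            child += random.gauss(0, mut * (hi - lo))   # Gaussian mutation
            children.append(min(hi, max(lo, child)))
        pop = elite + children
    return min(pop, key=err)

# Identify a "spring stiffness" k from a known target (toy 1-parameter case)
random.seed(3)
k_true = 42.0
k_hat = ga_minimize(lambda k: (k - k_true) ** 2, bounds=(0.0, 100.0))
```

In the real problem the error function would integrate the 5-DOF equations of motion and compare the simulated crash pulse against the experimental one, with one gene per spring/damper parameter.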
Abstract: Reverse engineering of full-genomic interaction networks based on compendia of expression data has been successfully applied for a number of model organisms. This study adapts these approaches for an important non-model organism: the major human fungal pathogen Candida albicans. During the infection process, the pathogen can adapt to a wide range of environmental niches and reversibly change its growth form. Given the importance of these processes, it is important to know how they are regulated. This study presents a reverse engineering strategy able to infer full-genomic interaction networks for C. albicans based on linear regression with a sparseness criterion (LASSO). To overcome the limited amount of expression data and the small number of known interactions, we utilize different prior-knowledge sources to guide the network inference to a knowledge-driven solution. Since no database of known interactions for C. albicans exists, we use a text-mining system that exploits full-text research papers to identify known regulatory interactions. By comparing with these known regulatory interactions, we find an optimal value for the global modelling parameters weighting the influence of the sparseness criterion and the prior knowledge. Furthermore, we show that soft integration of prior knowledge additionally improves the performance. Finally, we compare the performance of our approach to state-of-the-art network inference approaches.
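The sparse-regression core of the approach above can be illustrated with plain coordinate-descent LASSO; the soft-thresholding step is what enforces the sparseness criterion. The data here are synthetic, and the prior-knowledge weighting described in the abstract is omitted:

```python
import numpy as np

def lasso_cd(X, y, lam, iters=200):
    """Coordinate-descent LASSO: minimize 0.5*||y - Xw||^2 + lam*||w||_1.
    A generic sketch of the sparse regression step, without priors."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(p):
            # Partial residual with coordinate j removed
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r
            # Soft-thresholding: small correlations are shrunk to exactly 0
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

# Synthetic "expression" data: 10 candidate regulators, 2 true ones
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))
true_w = np.zeros(10); true_w[[2, 7]] = [3.0, -2.0]
y = X @ true_w + 0.1 * rng.normal(size=100)

w = lasso_cd(X, y, lam=20.0)
```

In the network-inference setting, each target gene gets one such regression against all candidate regulators, and the prior-knowledge sources would reweight `lam` per coefficient.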
Abstract: The response of growth and yield of rainfed chickpea to
population density should be evaluated on the basis of long-term
experiments in order to capture climate variability; this is achievable
only by simulation. In this simulation study, the evaluation was carried
out by running the CYRUS model on long-term daily weather data for five
locations in Iran. The tested population densities were 7 to 59 stands
per square meter (with an interval of 2). Various functions, including
quadratic, segmented, beta, broken-linear, and dent-like functions, were
tested. Considering the root mean square of deviations and the linear
regression statistics [intercept (a), slope (b), and correlation
coefficient (r)] for predicted versus observed variables, the quadratic
and broken-linear functions appeared appropriate for describing the
changes in biomass and grain yield, and in harvest index, respectively.
Results indicated that in all locations grain yield tends to increase as
the population becomes more crowded, but subsequently decreases. This was
also true for biomass in all five locations. The harvest index appeared
to plateau across low population densities, but showed a decreasing trend
as density increased further. The turning point (optimum population
density) for grain yield was 30.68 stands per square meter in Isfahan,
30.54 in Shiraz, 31.47 in Kermanshah, 34.85 in Tabriz, and 32.00 in
Mashhad. The optimum population density for biomass ranged from 24.6 (in
Tabriz) to 35.3 stands per square meter (in Mashhad). For harvest index
it varied between 35.87 and 40.12 stands per square meter.
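The quadratic yield-density relationship described above implies that the optimum density is the vertex of the fitted parabola, at -b/(2a). The sketch below demonstrates this on synthetic data (illustrative values, not CYRUS output):

```python
import numpy as np

# Synthetic yield-density response over the tested range (7 to 59 stands
# per square meter, interval of 2): yield rises, peaks, then declines.
density = np.arange(7, 60, 2, dtype=float)
true_opt = 31.0                                  # assumed optimum (toy value)
yield_sim = 2000.0 - 1.5 * (density - true_opt) ** 2
yield_sim += np.random.default_rng(2).normal(0, 20, density.size)

# Fit y = a*d^2 + b*d + c; the optimum density is the vertex -b / (2a)
a, b, c = np.polyfit(density, yield_sim, 2)
opt_density = -b / (2 * a)
```

The negative leading coefficient confirms the concave (rise-then-fall) shape, and the vertex recovers the turning point, which is how the per-location optima in the abstract would be extracted from the fitted functions.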