Abstract: This paper proposes a method for modeling the control laws of manufacturing systems with temporal and non-temporal constraints. A methodology for constructing robust control that generates passive and active robustness margins is elaborated. Two principal models are presented. The first uses P-time Petri nets to manage flow-type disturbances. The second, the quality model, exploits the Intervals Constrained Petri Nets (ICPN) tool, which allows the system to preserve its quality specifications. The redundancy between the passive and active robustness of the elementary parameters is also exploited. The final model correlates temporal and non-temporal criteria by putting the two principal models in interaction. To this end, a set of definitions and theorems is employed and illustrated by application examples.
Abstract: The paper describes the workings of the four models of CONWIP systems in use to date: the basic CONWIP system, the hybrid CONWIP system, the multi-product CONWIP system, and the parallel CONWIP system. The last of these, a novel model, is introduced in this paper in a general form. These models may be adopted both for simulation studies and for implementation on the shop floor. For each model, the input parameters of interest are highlighted and their impacts on several system performance measures are addressed.
Abstract: In this study, the process of crack propagation at the toe of a concrete gravity dam is investigated by applying the principles and criteria of linear elastic fracture mechanics. Earthquake conditions are simulated for three dam models with different geometries, with an empty reservoir and under plane stress, using the fracture mechanics software FRANC2D [1] to determine the fracture mechanics criteria. The outcomes show that, contrary to initial expectations, with fillets present in both the toe and heel areas (model 3) the maximum principal stress is not reduced; it in fact increases, which raises the stress intensity factors and is undesirable. On the other hand, the dam with only a heel fillet shows the best behavior, owing to the reduction of the maximum and minimum principal stresses and of the stress intensity factors for modes I and II of the model.
Abstract: This paper presents data annotation models at five levels of granularity (database, relation, column, tuple, and cell) of relational data, addressing the unsuitability of most relational databases for expressing annotations. These models require no structural or schematic changes to the underlying database. They are also flexible, extensible, customizable, database-neutral, and platform-independent. This paper also presents an SQL-like query language, named Annotation Query Language (AnQL), for querying annotation documents. AnQL is simple to understand and exploits the wide existing knowledge and skill set of SQL users.
Abstract: This paper presents a dynamic model for the mechanical loads of an electric drive, including angular misalignment and load unbalance. The misalignment model represents the
effects of the universal joint between the motor and the mechanical
load. Simulation results are presented for an induction motor driving
a mechanical load with angular misalignment for both flexible and
rigid coupling. The models presented are very useful in the study of
mechanical fault detection in induction motors, using mechanical and
electrical signals already available in a drive system, such as speed,
torque and stator currents.
Abstract: In the traditional concept of product life cycle management, the activities of design, manufacturing, and assembly are performed sequentially. The drawback is that design considerations may contradict manufacturing and assembly considerations. Different component designs can lead to different assembly sequences; therefore, in some cases, a good design may result in a high cost in the downstream assembly activities. In this research, an integrated design evaluation and assembly sequence planning model is presented. Given a product requirement, there may be several alternative design cases for the components of the same product, and the assembly sequence for constructing the product can differ depending on which design case is selected. First, the designed components are represented using graph-based models. The graph-based models are transformed into assembly precedence constraints and assembly costs. A particle swarm optimization (PSO) approach is presented in which a particle is encoded as a position matrix defined by the design cases and the assembly sequences. The PSO algorithm simultaneously performs design evaluation and assembly sequence planning with the objective of minimizing the total assembly cost. As a result, both the design cases and the assembly sequences can be optimized. The main contribution lies in the new concept of an integrated design evaluation and assembly sequence planning model and in the new PSO solution method. Test results on an illustrative example product show that the presented method is feasible and efficient for solving the integrated design evaluation and assembly planning problem.
Abstract: Corner detection and optical flow are common techniques for feature-based video stabilization. However, these algorithms are computationally expensive and must run at a reasonable rate. This paper presents an algorithm that discards irrelevant feature points and maintains the rest for future use so as to reduce the computational cost. The algorithm starts by initializing a maintained set. The feature points in the maintained set are examined for their accuracy for modeling; corner detection is required only when the maintained feature points are not sufficiently accurate for further modeling. Optical flows are then computed from the maintained feature points toward the consecutive frame. Next, a motion model is estimated using the simplified affine motion model and the least squares method, in the presence of outliers belonging to moving objects. Studentized residuals are used to eliminate such outliers, and the estimation and elimination steps repeat until no more outliers are identified. Finally, the entire algorithm repeats along the video sequence, with the points remaining from the previous iteration used as the maintained set. As a practical application, efficient video stabilization can be achieved by exploiting the computed motion models. Our study shows that the number of times corner detection needs to be performed is greatly reduced, significantly lowering the computational cost. Moreover, optical flow vectors are computed only for the maintained feature points, not for outliers, reducing the cost further. In addition, the feature points remaining after reduction are sufficient for tracking background objects, as demonstrated by a simple video stabilizer based on our proposed algorithm.
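The estimate-and-eliminate loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 4-parameter simplified affine model, the cutoff of 2.5 on the studentized residuals, the perfect-fit guard, and all function names are assumptions.

```python
import numpy as np

def fit_simplified_affine(src, dst):
    # 4-parameter model: x' = a*x - b*y + tx,  y' = b*x + a*y + ty
    n = src.shape[0]
    A = np.zeros((2 * n, 4))
    A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
    A[1::2] = np.column_stack([src[:, 1], src[:, 0], np.zeros(n), np.ones(n)])
    rhs = dst.reshape(-1)                       # [x0', y0', x1', y1', ...]
    params, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return A, rhs, params

def estimate_motion(src, dst, cutoff=2.5):
    # Repeatedly fit the motion model and drop the worst outlier,
    # judged by its studentized residual, until none remains.
    keep = np.ones(len(src), dtype=bool)
    params = None
    while keep.sum() >= 3:
        A, rhs, params = fit_simplified_affine(src[keep], dst[keep])
        r = rhs - A @ params
        H = A @ np.linalg.pinv(A.T @ A) @ A.T   # hat matrix, for leverages
        h = np.clip(np.diag(H), 0.0, 1.0 - 1e-9)
        s = np.sqrt(r @ r / max(len(rhs) - 4, 1))
        if s < 1e-8:                            # numerically perfect fit
            break
        t = np.abs(r) / (s * np.sqrt(1.0 - h) + 1e-12)
        t_pt = t.reshape(-1, 2).max(axis=1)     # worse of a point's x/y rows
        worst = int(np.argmax(t_pt))
        if t_pt[worst] <= cutoff:
            break
        keep[np.flatnonzero(keep)[worst]] = False
    return params, keep
```

A match on a moving object produces a large studentized residual and is dropped before the final model is returned, while background matches survive into the maintained set for the next frame.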
Abstract: Using activity theory, organisational theory and
didactics as theoretical foundations, a comprehensive model of the
organisational dimensions relevant for learning and knowledge
transfer will be developed. In a second step, a Learning Assessment
Guideline will be elaborated. This guideline will be designed to
permit a targeted analysis of organisations to identify the status quo
in those areas crucial to the implementation of learning and
knowledge transfer. In addition, this self-analysis tool will enable
learning managers to select adequate didactic models for e- and
blended learning. As part of the European Integrated Project
"Process-oriented Learning and Information Exchange" (PROLIX),
this model of organisational prerequisites for learning and knowledge
transfer will be empirically tested in four profit and non-profit
organisations in Great Britain, Germany and France (to be finalized
in autumn 2006). The findings concern not only the capability of the
model of organisational dimensions, but also the predominant
perceptions of and obstacles to learning in organisations.
Abstract: Total liquid ventilation can support gas exchange in animal models of lung injury. Clinical application awaits further technical improvements and performance verification. Our aim was to develop a liquid ventilator, able to deliver accurate tidal volumes, and a computerized system for measuring lung mechanics. The computer-assisted, piston-driven respirator controlled ventilatory parameters that were displayed and modified on a real-time basis. Pressure and temperature transducers along with a lineal displacement controller provided the necessary signals to calculate lung mechanics. Ten newborn lambs (
Abstract: Exclusive breastfeeding is the feeding of a baby on no milk other than breast milk. Exclusive breastfeeding during the first 6 months of life is very important, as it supports optimal growth and development during infancy and reduces the risk of debilitating diseases and problems. Moreover, it helps to reduce the incidence and/or severity of diarrhea, lower respiratory infection and urinary tract infection. In this paper, we survey the factors that influence exclusive breastfeeding and use two regression models for dispersed count data to analyze the data: the Generalized Poisson regression model and the Com-Poisson regression model.
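For reference, the two count-data models named in the abstract can be written down explicitly. The forms below are the standard textbook parameterizations (Consul's generalized Poisson and the Conway–Maxwell–Poisson); the paper may parameterize them differently.

```latex
% Generalized Poisson (Consul): over-/under-dispersion controlled by \lambda
P(Y = y) = \frac{\theta\,(\theta + \lambda y)^{y-1}\, e^{-\theta - \lambda y}}{y!},
\qquad y = 0, 1, 2, \dots

% Conway--Maxwell--Poisson (Com-Poisson): dispersion controlled by \nu
P(Y = y) = \frac{\lambda^{y}}{(y!)^{\nu}\, Z(\lambda, \nu)},
\qquad Z(\lambda, \nu) = \sum_{j=0}^{\infty} \frac{\lambda^{j}}{(j!)^{\nu}}
```

In both families, one parameter beyond the Poisson rate lets the variance differ from the mean, which is what makes them suitable for dispersed data.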
Abstract: SDMA (Space-Division Multiple Access) is a MIMO (Multiple-Input, Multiple-Output) based wireless communication network architecture that has the potential to significantly increase spectral efficiency and system performance. Maximum likelihood (ML) detection provides the optimal performance, but its complexity increases exponentially with the modulation constellation size and the number of users. QR decomposition (QRD) MUD can substitute for ML detection owing to its low complexity and near-optimal performance. Minimum mean-squared-error (MMSE) multiuser detection (MUD) minimises the mean square error (MSE), which does not guarantee that the BER of the system is also minimised. The minimum bit error rate (MBER) MUD, however, performs better than the classic MMSE MUD in terms of probability of error by directly minimising the BER cost function. The MBER MUD is also able to support more users than the number of receiving antennas, a scenario in which the other MUDs fail. In this paper, the performance of various MUD techniques is verified for correlated MIMO channel models based on the IEEE 802.16n standard.
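As a point of reference, the classic linear MMSE MUD mentioned above has a closed form. The sketch below uses standard textbook notation (received vector $\mathbf{y}$, channel matrix $\mathbf{H}$, noise variance $\sigma^{2}$), not necessarily the paper's:

```latex
% Linear MMSE multiuser detector for the model y = Hx + n
\hat{\mathbf{x}} =
\left(\mathbf{H}^{H}\mathbf{H} + \sigma^{2}\mathbf{I}\right)^{-1}
\mathbf{H}^{H}\,\mathbf{y}
```

This detector minimises $E\{\lVert\mathbf{x}-\hat{\mathbf{x}}\rVert^{2}\}$, not the BER itself, which is exactly the gap the MBER criterion addresses.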
Abstract: The removal of Methylene Blue (MB) from aqueous solution by adsorption on gypsum was investigated by the batch method. The studies were conducted at 25°C and included the effects of pH and of the initial concentration of Methylene Blue. The adsorption data were analyzed using the Langmuir, Freundlich and Temkin isotherm models. The maximum monolayer adsorption capacity was found to be 36 mg of the dye per gram of gypsum. The data were also analyzed in terms of their kinetic behavior and were found to obey the pseudo-second-order equation.
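The models named above are standard in adsorption studies. For reference, the Langmuir isotherm and the pseudo-second-order rate equation (with its common linearized form) are, in the usual notation ($q_e$ equilibrium uptake, $C_e$ equilibrium concentration, $q_t$ uptake at time $t$):

```latex
% Langmuir isotherm (monolayer capacity q_m, affinity constant K_L)
q_e = \frac{q_m K_L C_e}{1 + K_L C_e}

% Pseudo-second-order kinetics and its linearized form
\frac{dq_t}{dt} = k_2\,(q_e - q_t)^2
\quad\Longrightarrow\quad
\frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e}
```

The reported 36 mg/g corresponds to the Langmuir monolayer capacity $q_m$; the linearized kinetic form is what a plot of $t/q_t$ against $t$ is fitted to.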
Abstract: This paper presents an integer frequency offset (IFO) estimation scheme for the 3GPP long term evolution (LTE) downlink system. Firstly, the conventional joint detection method for the IFO and the sector cell index (CID) information is introduced. Secondly, an IFO estimation scheme that does not require explicit sector CID information is proposed, which reduces the time delay in comparison with the conventional joint method. The proposed method is also computationally efficient and achieves nearly the same performance as the conventional method over the Pedestrian and Vehicular channel models.
Abstract: In many applications, data has a graph structure, which can be naturally represented as graph-structured XML. Existing queries defined on tree-structured and graph-structured XML data mainly focus on subgraph matching, which cannot cover all the requirements of querying on graphs. In this paper, a new kind of query, the topological query on graph-structured XML, is presented. Such queries consider not only the structure of subgraphs but also the topological relationships between subgraphs. Building on existing subgraph query processing algorithms, efficient algorithms for topological query processing are designed. Experimental results show the efficiency of the implemented algorithms.
Abstract: In the numerical solution of the forward dynamics of a
multibody system, the positions and velocities of the bodies in the
system are obtained first. With the information of the system state
variables at each time step, the internal and external forces acting on
the system are obtained by appropriate contact force models if the
continuous contact method is used instead of a discrete contact
method. The local deformation of the bodies in contact, represented
by penetration, is used to compute the contact force. The ability and suitability of current cylindrical contact force models to describe the contact between bodies with cylindrical geometries, with particular focus on internal contacting geometries involving simultaneously low clearances and high loads, is discussed in this paper. A comparative assessment of the performance of each model under analysis for different contact conditions, in particular for very different penetration and clearance values, is presented. It is demonstrated that some models are only a rough approximation for describing the conformal contact between cylindrical geometries because they underestimate the contact forces.
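Penetration-based contact force models of the kind discussed typically take a Hertz-like power-law form, often extended with a hysteresis damping term. The Lankarani–Nikravesh variant below is one common example; the notation is assumed (penetration $\delta$, generalized stiffness $K$, restitution coefficient $c_e$, initial impact velocity $\dot{\delta}^{(-)}$), and the paper's models may differ:

```latex
% Hertz-type elastic contact force (n = 3/2 for sphere/sphere contact)
F_N = K\,\delta^{n}

% Lankarani--Nikravesh model with hysteresis damping
F_N = K\,\delta^{n}\left[1 + \frac{3\,(1 - c_e^{2})}{4}\,
      \frac{\dot{\delta}}{\dot{\delta}^{(-)}}\right]
```

For conformal (internal, low-clearance) cylindrical contact, the contact patch grows with load, which is why pure power-law models of this form tend to underestimate the force.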
Abstract: Most simple nonlinear thresholding rules for wavelet-based denoising assume that the wavelet coefficients are independent. However, the wavelet coefficients of natural images have significant dependencies. This paper attempts to give a recipe for selecting one of the popular image-denoising algorithms based on VisuShrink, SureShrink, OracleShrink, BayesShrink and BiShrink, and it also compares different bivariate models used for image denoising applications. The first part of the paper compares different shrinkage functions used for image denoising. The second part compares different bivariate models, and the third part uses the bivariate model with modified marginal variance, which is based on a Laplacian assumption. The paper gives an experimental comparison on six commonly used 512×512 images: Lenna, Barbara, Goldhill, Clown, Boat and Stonehenge. Noise powers of 25 dB, 26 dB, 27 dB, 28 dB and 29 dB are added to the six standard images and the corresponding Peak Signal to Noise Ratio (PSNR) values are calculated for each noise level.
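As an illustration of the shrinkage functions being compared, a minimal soft-thresholding rule with the VisuShrink universal threshold can be sketched as follows. This is a generic sketch, not the paper's code; the MAD-based noise estimate and the function names are illustrative.

```python
import numpy as np

def soft_threshold(coeffs, lam):
    # Soft shrinkage: pull magnitudes toward zero by lam, zero out the rest.
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)

def universal_threshold(coeffs, sigma=None):
    # VisuShrink universal threshold: lambda = sigma * sqrt(2 ln n).
    # sigma is commonly estimated from the finest-scale detail coefficients
    # via the median absolute deviation: sigma ~ median(|d|) / 0.6745.
    c = np.asarray(coeffs, dtype=float)
    if sigma is None:
        sigma = np.median(np.abs(c)) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(c.size))
```

In a denoiser, this rule is applied to each detail subband of the wavelet transform; the bivariate shrinkage rules the paper compares replace the scalar rule with one that also conditions on the parent coefficient.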
Abstract: Using spatial models as a shared common basis of information about the environment for different kinds of context-aware systems has been a heavily researched topic in recent years. That research has focused on how to create, update, and merge spatial models so as to enable highly dynamic, consistent and coherent spatial models at large scale. In this paper, however, we concentrate on how context-aware applications could use this information to adapt their behavior to the situation they are in. The main idea is to provide the spatial model infrastructure with a situation recognition component based on generic situation templates. A situation template is – as part of a much larger situation template library – an abstract, machine-readable description of a certain basic situation type, which can be used by different applications to evaluate their situation. In this paper, different theoretical and practical issues – technical, ethical and philosophical ones – that are important for understanding and developing situation-dependent systems based on situation templates are discussed. A basic system design is presented which allows for reasoning with uncertain data using an improved version of a learning algorithm for the automatic adaptation of situation templates. Finally, to support the development of adaptive applications, we present a new situation-aware adaptation concept based on workflows.
Abstract: The protection and proper management of archaeological heritage are essential to studying and interpreting it for present and future generations. Protecting the archaeological heritage rests on multidisciplinary professional collaboration. This study gathers and integrates data from different sources (photogrammetry and a Geographic Information System (GIS)) for the purpose of documenting one of the significant archaeological sites (Ahl-Alkahf, Jordan). 3D modeling deals with the actual image of the features, shapes and texture, using texture to represent reality as realistically as possible. The 3D coordinates that result from the photogrammetric adjustment procedures are used to create 3D models of the study area, and adding textures to the model surfaces gives the displayed models a 'real world' appearance. The GIS combines all the data, including boundary maps indicating the locations of the archaeological sites, a transportation layer, a digital elevation model and orthoimages. For a realistic representation of the study area, a 3D GIS model was prepared, enabling efficient generation, management and visualization of such spatial data.
Abstract: We consider linear regression models where both the input data (the values of the independent variables) and the output data (the observations of the dependent variable) are interval-censored. We introduce a possibilistic generalization of the least squares estimator, the so-called OLS-set for the interval model. This set captures the impact on the OLS estimator of the loss of information caused by interval censoring and provides a tool for quantifying this effect. We study complexity-theoretic properties of the OLS-set. We also deal with restricted versions of the general interval linear regression model, in particular the crisp input – interval output model. We give an argument that natural descriptions of the OLS-set in the crisp input – interval output case cannot be computed in polynomial time. We then derive easily computable approximations of the OLS-set which can be used instead of the exact description, and we illustrate the approach with an example.
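For the crisp input – interval output case, one easily computable enclosure of the OLS-set follows from the fact that the OLS estimator is linear in $y$: each coordinate of $\hat\beta = (X^{\top}X)^{-1}X^{\top}y$ attains its extremes at a vertex of the output box. The sketch below shows this componentwise-bound construction; the function name is illustrative, and this enclosure is not necessarily the approximation derived in the paper.

```python
import numpy as np

def ols_set_bounds(X, y_lo, y_hi):
    """Componentwise bounds (interval hull) of the OLS-set for the
    crisp-input / interval-output model.  Since beta = M y with
    M = (X'X)^{-1} X' is linear in y, each coordinate of beta is
    extremal at a vertex of the box [y_lo, y_hi]: take y_hi where
    the corresponding entry of M is positive, y_lo where negative."""
    M = np.linalg.solve(X.T @ X, X.T)            # (X'X)^{-1} X'
    pos, neg = np.maximum(M, 0.0), np.minimum(M, 0.0)
    beta_lo = pos @ y_lo + neg @ y_hi
    beta_hi = pos @ y_hi + neg @ y_lo
    return beta_lo, beta_hi
```

With degenerate intervals (`y_lo == y_hi`) the bounds collapse to the ordinary OLS estimate; widening the output intervals widens the enclosure coordinate by coordinate.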
Abstract: This paper proposes a methodology for the analysis of the dynamic behavior of a robotic manipulator in continuous time. The nonlinear system is first decomposed into linear submodels and analyzed in the context of Linear Parameter Varying (LPV) systems. The obtained linear submodels, which represent the local dynamic behavior of the robotic manipulator at selected operating points, are grouped into a Takagi-Sugeno fuzzy structure. The resulting fuzzy model is analyzed and validated through analog simulation as a universal approximator of the robotic manipulator.