Abstract: This paper proposes a method for modeling the control laws of manufacturing systems subject to temporal and non-temporal constraints. A methodology for constructing robust control that generates margins of passive and active robustness is elaborated. Two principal models are presented. The first uses P-time Petri Nets to manage flow-type disturbances. The second, the quality model, exploits the Intervals Constrained Petri Nets (ICPN) tool, which allows the system to preserve its quality specifications. The redundancy between the passive and active robustness of the elementary parameters is also exploited. The final model correlates temporal and non-temporal criteria by putting the two principal models in interaction. To this end, a set of definitions and theorems is stated and illustrated by application examples.
Abstract: The wavelet transform is a very powerful tool for image compression. One of its advantages is that it localizes image energy in both space and frequency. However, wavelet transform coefficients are defined by both a magnitude and a sign. While efficient algorithms exist for coding the magnitude of the transform coefficients, they are not efficient for coding the sign. It is generally assumed that no compression gain can be obtained from coding the sign, and only recently have some authors begun to investigate the sign of wavelet coefficients in image coding. Some authors have assumed that the sign bit of a wavelet coefficient may be encoded with an estimated probability of 0.5; the same assumption is made for the refinement bit. In this paper, we propose a new method for Separate Sign Coding (SSC) of wavelet image coefficients. The sign and the magnitude of the coefficients are examined to obtain their online probabilities. We use scalar quantization, in which the information of whether the coefficient belongs to the lower or the upper sub-interval of the uncertainty interval is also examined. We show that the sign information and the refinement information may be encoded with a probability of approximately 0.5 only after about five bit planes. Two maps are entropy encoded separately: the sign map and the magnitude map. The refinement information indicating whether the coefficient belongs to the lower or the upper sub-interval of the uncertainty interval is also entropy encoded. An algorithm is developed and simulations are performed on three standard grey-scale images: Lena, Barbara and Cameraman. Five scales are computed using the biorthogonal 9/7 wavelet filter bank. The obtained results are compared to the JPEG2000 standard in terms of peak signal-to-noise ratio (PSNR) and of subjective (visual) quality for the three images. It is shown that the proposed method outperforms JPEG2000. The proposed method is also compared to other codecs in the literature and shown to perform very well in terms of PSNR.
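The core observation above, that sign and refinement bits approach a probability of 0.5 only after several bit planes, can be illustrated with a small sketch. The code below is ours, not the paper's algorithm: it scans bit planes of integer coefficients from the most significant down and reports, for coefficients that first become significant in each plane, the empirical probability of a positive sign and its binary entropy in bits per sign.

```python
import math

def bitplane_sign_stats(coeffs, num_planes=8):
    """Scan bit planes from the most significant down.  For the integer
    coefficients that first become significant in each plane, report the
    empirical probability of a positive sign and its binary entropy
    (bits per sign)."""
    max_mag = max(abs(c) for c in coeffs)
    T = 1 << (max_mag.bit_length() - 1)   # threshold of the top bit plane
    significant = [False] * len(coeffs)
    stats = []
    for _ in range(num_planes):
        signs = []
        for i, c in enumerate(coeffs):
            if not significant[i] and abs(c) >= T:
                significant[i] = True
                signs.append(1 if c >= 0 else 0)
        if signs:
            p = sum(signs) / len(signs)
            h = (0.0 if p in (0.0, 1.0)
                 else -p * math.log2(p) - (1 - p) * math.log2(1 - p))
            stats.append((T, p, h))
        T //= 2
        if T == 0:
            break
    return stats
```

An entropy close to 1 bit per sign means the sign bits cost as much as raw storage; smaller values indicate a coding gain.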
Abstract: Weight design is an important part of fuzzy decision making, as it has a deep effect on the evaluation results. Entropy is one of the weight measures based on objective evaluation. Non-probabilistic entropy measures for fuzzy sets and interval type-2 fuzzy sets (IT2FS) have been developed and applied to weight measurement. Since entropy for IT2FS in decision making has yet to be explored, this paper proposes a new objective weighting method that uses the entropy weight method for multiple attribute decision making (MADM). The paper exploits the nature of the IT2FS concept in the evaluation process to assess attribute weights based on the credibility of the data. An example is presented to demonstrate the feasibility of the new method in decision making. The entropy measure of interval type-2 fuzzy sets yields flexible judgments and can be applied in decision-making environments.
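The entropy weighting idea can be sketched for the crisp (type-1) special case; the paper's IT2FS formulation is more general, so the function below is only an illustrative stand-in using the classical Shannon entropy weight method for a crisp decision matrix (all names are ours).

```python
import math

def entropy_weights(matrix):
    """Entropy weight method for an m x n decision matrix (m alternatives,
    n attributes): normalise each column, compute its Shannon entropy,
    and assign larger weights to attributes with lower entropy, i.e. to
    the more discriminating data."""
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)
    entropies = []
    for j in range(n):
        col = [row[j] for row in matrix]
        s = sum(col)
        p = [x / s for x in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        entropies.append(e)
    d = [1 - e for e in entropies]   # degree of diversification
    total = sum(d)
    return [di / total for di in d]
```

An attribute whose values are identical across all alternatives carries no discriminating information and receives (near-)zero weight.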
Abstract: When acid is pumped into damaged reservoirs for damage removal/stimulation, distorted inflow of acid into the formation occurs because acid preferentially travels into highly permeable regions over low-permeability regions, or (in general) into the path of least resistance. This can lead to poor zonal coverage and hence warrants diversion to achieve effective placement of acid. Diversion is desirably a reversible technique of temporarily reducing the permeability of high-permeability zones, thereby forcing the acid into lower-permeability zones.
The uniqueness of each reservoir can pose several challenges to
engineers attempting to devise optimum and effective diversion
strategies. Diversion techniques include mechanical placement and/or
chemical diversion of treatment fluids, further sub-classified into ball
sealers, bridge plugs, packers, particulate diverters, viscous gels,
crosslinked gels, relative permeability modifiers (RPMs), foams,
and/or the use of placement techniques, such as coiled tubing (CT)
and the maximum pressure difference and injection rate (MAPDIR)
methodology.
It is not always realized that the effectiveness of diverters greatly
depends on reservoir properties, such as formation type, temperature,
reservoir permeability, heterogeneity, and physical well
characteristics (e.g., completion type, well deviation, length of
treatment interval, multiple intervals, etc.). This paper reviews the
mechanisms by which each variety of diverter functions and
discusses the effect of various reservoir properties on the efficiency
of diversion techniques. Guidelines are recommended to help
enhance productivity from zones of interest by choosing the best
methods of diversion while pumping an optimized amount of
treatment fluid. The success of an overall acid treatment often
depends on the effectiveness of the diverting agents.
Abstract: Over the years, many implementations have been
proposed for solving IA networks. These implementations are
concerned with finding a solution efficiently. The primary goal of
our implementation is simplicity and ease of use.
We present an IA network implementation based on finite domain
non-binary CSPs, and constraint logic programming. The
implementation has a GUI which permits the drawing of arbitrary IA
networks. We then show how the implementation can be extended to
find all the solutions to an IA network. One application of finding all
the solutions is solving probabilistic IA networks.
Abstract: In today's globalized production and logistics environment, manufacturers must reduce product development intervals and lead times, respond faster to orders, conform to quality standards, support fair tracking, intensify information exchange with customers and partners, and cope with changes in the management environment; they are therefore in dire need of an information management system for their manufacturing environments. Many information systems have been designed to manage the condition or operation of equipment in the field, but existing systems have a decentralized, non-unified architecture. Moreover, these systems cannot effectively handle the status data extraction process when a problem arises related to protocols or to changes in the equipment or its settings. In this regard, this paper introduces a system for processing and saving the status information of production equipment that uses standard representation formats, enabling flexible responses to, and support for, variations in field equipment. The system can be used in a variety of manufacturing and equipment settings and is capable of interacting with higher-tier systems such as MES.
Abstract: The reliability of the tools developed to identify learning styles is essential for determining students' learning styles trustworthily. For this purpose, the psychometric features of the Grasha-Riechman Student Learning Style Inventory developed by Grasha were studied to contribute to this field. The study was carried out on 6th, 7th, and 8th graders of 10 primary schools in Konya. The inventory was applied twice with an interval of one month, and according to the data of this application, the reliability coefficients of the six sub-dimensions posited in the theory of the inventory were found to be moderate. Moreover, it was found that the inventory does not have the six-factor structure represented in the theory for either the Mathematics or the English course.
Abstract: We consider linear regression models where both the input data (the values of the independent variables) and the output data (the observations of the dependent variable) are interval-censored. We introduce a possibilistic generalization of the least squares estimator, the so-called OLS-set for the interval model. This set captures the impact on the OLS estimator of the loss of information caused by interval censoring and provides a tool for quantifying this effect. We study complexity-theoretic properties of the OLS-set. We also deal with restricted versions of the general interval linear regression model, in particular the crisp input – interval output model. We give an argument that natural descriptions of the OLS-set in the crisp input – interval output model cannot be computed in polynomial time. We then derive easily computable approximations of the OLS-set which can be used instead of the exact description, and illustrate the approach with an example.
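One easily computable (inner) approximation of the OLS-set can be obtained by Monte-Carlo sampling; the sketch below is our illustrative stand-in for the simple-regression case, not necessarily the approximation derived in the paper. Each sample draws crisp data from the given intervals and records the range of the resulting OLS estimates.

```python
import random

def ols_simple(xs, ys):
    """Closed-form OLS for the simple regression y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def ols_set_box(x_ints, y_ints, samples=2000, seed=0):
    """Monte-Carlo bounding box for the OLS-set: sample crisp realizations
    from the data intervals and track the range of (intercept, slope).
    Sampling gives an inner approximation of the true box."""
    rng = random.Random(seed)
    a_lo = b_lo = float("inf")
    a_hi = b_hi = float("-inf")
    for _ in range(samples):
        xs = [rng.uniform(lo, hi) for lo, hi in x_ints]
        ys = [rng.uniform(lo, hi) for lo, hi in y_ints]
        a, b = ols_simple(xs, ys)
        a_lo, a_hi = min(a_lo, a), max(a_hi, a)
        b_lo, b_hi = min(b_lo, b), max(b_hi, b)
    return (a_lo, a_hi), (b_lo, b_hi)
```

With degenerate (crisp) intervals the box collapses to the classical OLS point estimate, as expected.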
Abstract: The purpose of this study was to present a reliable means of human-computer interfacing based on finger gestures made in two dimensions, which could be interpreted and used to control a remote robot's movement. The gestures were captured and interpreted using an algorithm based on trigonometric functions, which calculates the angular displacement from one point of touch to the next as the user's finger moves within a time interval, thereby allowing pattern spotting of the captured gesture. This paper presents the design and implementation of such a gesture-based user interface utilizing the aforementioned algorithm. These techniques were then used to control a remote mobile robot's movement. A resistive touch screen was selected as the gesture sensor, with a programmed microcontroller used to interpret the gestures.
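The trigonometric core of such a gesture algorithm can be sketched as follows, assuming an atan2-based angle between successive touch points (the function name and representation are ours):

```python
import math

def gesture_angles(points):
    """Angular displacement (degrees) between successive touch points,
    computed with atan2; the resulting angle sequence can be matched
    against stored templates for pattern spotting."""
    angles = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angles.append(math.degrees(math.atan2(y1 - y0, x1 - x0)))
    return angles
```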
Abstract: A new approach for the automated diagnosis of electroencephalographic changes, based on the consideration that electroencephalogram (EEG) signals are chaotic, is presented. This consideration was tested successfully using nonlinear dynamics tools such as the computation of Lyapunov exponents. This paper presents the use of statistics over the set of Lyapunov exponents to reduce the dimensionality of the extracted feature vectors. Since classification is more accurate when the pattern is simplified through representation by important features, feature extraction and selection play an important role in classifying systems such as neural networks. Multilayer perceptron neural network (MLPNN) architectures were formulated and used as the basis for detecting electroencephalographic changes. Three types of EEG signals were classified: signals recorded from healthy volunteers with eyes open, from epilepsy patients in the epileptogenic zone during a seizure-free interval, and from epilepsy patients during epileptic seizures. The selected Lyapunov exponents of the EEG signals were used as inputs to the MLPNN trained with the Levenberg-Marquardt algorithm. The classification results confirmed that the proposed MLPNN has potential for detecting electroencephalographic changes.
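Reducing the set of Lyapunov exponents to a few statistics can be sketched as follows; the particular statistics chosen here (maximum, mean, standard deviation) are our assumption for illustration, not necessarily those selected in the paper:

```python
import math

def lyapunov_features(exponents):
    """Reduce a set of Lyapunov exponents to a small feature vector
    (max, mean, population standard deviation) suitable as input to a
    neural-network classifier."""
    n = len(exponents)
    mean = sum(exponents) / n
    std = math.sqrt(sum((e - mean) ** 2 for e in exponents) / n)
    return [max(exponents), mean, std]
```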
Abstract: A novel algorithm is presented for constructing a seamless video mosaic of an entire panorama by automatically analyzing and managing feature points, including management of their quantity and quality, from the sequence. Since a video contains significant redundancy, not all consecutive video images are required to create a mosaic; only some key images need to be selected. Meanwhile, feature-based mosaicing methods rely heavily on correct feature point correspondences, and if the key images have a large frame interval, the mosaic will often be interrupted by the scarcity of corresponding feature points. A unique characteristic of the method is its ability to handle all of the above problems in video mosaicing. Experiments performed under various conditions show that our method achieves fast and accurate video mosaic construction. Keywords: video mosaic, feature points management, homography estimation.
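Key-image selection under a feature-correspondence constraint can be sketched as a greedy scan; the match-count matrix and the fixed threshold below are our assumptions for illustration, not the paper's exact criterion:

```python
def select_key_frames(match_counts, min_matches=50):
    """Greedy key-frame selection for mosaicing.  match_counts[i][j] is the
    number of feature correspondences between frames i and j.  Starting
    from frame 0, extend the interval while enough matches remain, so
    redundant frames are skipped without losing correspondences."""
    keys = [0]
    i, n = 0, len(match_counts)
    while i < n - 1:
        j = i + 1
        while j + 1 < n and match_counts[i][j + 1] >= min_matches:
            j += 1
        keys.append(j)
        i = j
    return keys
```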
Abstract: In this paper, a new dependable algorithm based on an adaptation of the standard variational iteration method (VIM) is used for analyzing the transition from steady convection to chaos for low-to-intermediate Rayleigh number convection in porous media. The solution trajectories show the transition from steady convection to chaos occurring at a slightly subcritical value of the Rayleigh number, the critical value being associated with the loss of linear stability of the steady convection solution. The VIM is applied as an algorithm over a sequence of intervals for finding accurate approximate solutions to the considered model and other dynamical systems; we call this technique the piecewise VIM. Numerical comparisons between the piecewise VIM and classical fourth-order Runge-Kutta (RK4) solutions reveal that the proposed technique is a promising tool for nonlinear chaotic and non-chaotic systems.
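The piecewise idea, solving on a subinterval by iterated corrections and then restarting from the endpoint, can be sketched for the simple test equation u' = u. The Picard-type correction below is our simplified stand-in for the VIM correction functional (the paper's VIM uses a general Lagrange multiplier); approximants are stored as polynomial coefficients in the local time variable.

```python
def picard_step(coeffs, u0):
    """One Picard-type correction for u' = u on a subinterval:
    u_{k+1}(t) = u0 + integral_0^t u_k(s) ds, with u_k represented by
    its polynomial coefficients in the local time variable."""
    return [u0] + [c / (i + 1) for i, c in enumerate(coeffs)]

def piecewise_solve(u0, t_end, n_sub=10, iters=8):
    """Apply the iteration on a sequence of subintervals, using the value
    at the end of each subinterval as the initial condition for the next
    (the piecewise strategy described in the abstract)."""
    h = t_end / n_sub
    u = u0
    for _ in range(n_sub):
        coeffs = [u]                    # u_0(t) = current value
        for _ in range(iters):
            coeffs = picard_step(coeffs, u)
        u = sum(c * h ** i for i, c in enumerate(coeffs))  # evaluate at t = h
    return u
```

For u' = u with u(0) = 1, the piecewise iteration reproduces e^t to high accuracy on each subinterval, which is the behaviour the piecewise VIM exploits for chaotic systems where a single global expansion would fail.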
Abstract: The study of the defects generated on manufactured parts shows the difficulty of maintaining parts in position during the machining process and of estimating these defects during process planning. This work contributes to the development of 3D models for the optimization of manufacturing tolerances. An experimental study allows the measurement of part-positioning defects for the determination of ε and the choice of an optimal setup of the part. A 3D tolerancing approach based on the small displacements method permits the upstream determination of the manufacturing errors. A developed tool allows automatic generation of the tolerance intervals along the three axes.
Abstract: Gamma radiation is detected with assemblies consisting of scintillation crystals and a photomultiplier tube; a preamplifier is connected to the detector because the signals from the photomultiplier tube are of small amplitude. After pre-amplification the signals are sent to the amplifier and then to the multichannel analyser. The multichannel analyser sorts all incoming electrical signals according to their amplitudes, placing the detected photons in channels covering small energy intervals. The energy range of each channel depends on the gain settings of the multichannel analyser and the high voltage across the photomultiplier tube. The output spectrum data of the two main isotopes studied were put into the biomass program and processed with a Matlab program to obtain the solid holdup image (solid spherical nuclear fuel).
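The channel-sorting step of the multichannel analyser can be sketched as an amplitude histogram; the channel count and full-scale value below are arbitrary assumptions standing in for the instrument's gain settings:

```python
def multichannel_sort(amplitudes, num_channels=1024, full_scale=10.0):
    """Emulate a multichannel analyser: sort incoming pulse amplitudes
    into equal-width channels covering small energy intervals.  The
    channel width (full_scale / num_channels) plays the role of the
    gain setting; over-range pulses land in the last channel."""
    counts = [0] * num_channels
    width = full_scale / num_channels
    for a in amplitudes:
        ch = min(int(a / width), num_channels - 1)
        counts[ch] += 1
    return counts
```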
Abstract: The measurement of anesthetic depth is necessary in anesthesiology. NN10 is a very simple method among RR-interval analysis methods: the NN10 parameter is the number of successive normal-to-normal (NN) RR intervals whose difference exceeds 10 ms.
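Under this reading of the definition (a count of successive NN-interval differences exceeding 10 ms, analogous to the standard NN50 measure), NN10 is a one-liner:

```python
def nn10(rr_intervals_ms):
    """NN10: count of pairs of successive normal-to-normal RR intervals
    whose difference exceeds 10 ms."""
    return sum(1 for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])
               if abs(b - a) > 10)
```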
Bispectrum analysis is defined as a two-dimensional FFT. The EEG signal reflects nonlinear phenomena that change with brain function. After the two-dimensional bispectrum analysis, the most significant power spectral density peaks appeared abundantly in specific areas in the awake and anesthetized states. These points are used to create a new index, since many peaks appear in a specific area of the frequency plane. The range of the index is 0-100: it is 20-50 under anesthesia and 60-90 when awake. In this paper, the relation between the NN10 parameter obtained from the ECG and the bispectrum index obtained from the EEG is observed to estimate the depth of anesthesia during anesthesia, and the utility of the method is then evaluated.
Abstract: Recently, there have been considerable efforts towards convergence between P2P and Grid computing, in order to reach a solution that takes the best of both worlds by exploiting the advantages each offers. Augmenting the Grid's services with the peer-to-peer model promises to eliminate bottlenecks and ensure greater scalability, availability, and fault tolerance. The Grid Information Service (GIS) directly influences the quality of service of grid platforms. Most of the proposed solutions for decentralizing the GIS are based on completely flat overlays. The main contributions of this paper are the investigation of a novel resource discovery framework for Grid implementations based on a hierarchy of structured peer-to-peer overlay networks, and the introduction of a discovery algorithm utilizing the proposed framework. The framework's performance is validated via simulation. Experimental results show that the proposed organization has the advantage of being scalable while providing fault isolation, effective bandwidth utilization, and hierarchical access control. In addition, it leads to a reliable, guaranteed sub-linear search which returns results within a bounded interval of time and with a smaller amount of generated traffic within each domain.
Abstract: The purposes of this research are to estimate implicit ethnic attitudes by direct and indirect methods, to determine the agreement between the two types of measurement, to investigate the influence of the task type used in an experiment on the measurement results, and to determine the relation between recent episodic events and chronological correlates of ethnic attitudes. The implicit measurement method is evaluative priming (EPT), carried out with different SOA intervals; the explicit methods are G. Soldatova's types of ethnic identity, G. Soldatova's index of tolerance, and the E. Bogardus scale of social distance. The results obtained over five stages of research illuminate several aspects of implicit measurement: its correlation with the results of self-reports at different SOA intervals, and the connection of implicit measurement with the emotional valence of participants' episodic events and other indexes, contributing to the resolution of the problem of applying implicit measurement to the study of different social constructs.
Abstract: This research proposes an algorithm for the simulation of time-periodic unsteady problems via the solution of the unsteady Euler and Navier-Stokes equations. This algorithm, called the Time Spectral method, uses a Fourier representation in time and hence solves for the periodic state directly without resolving transients (which consume most of the resources in a time-accurate scheme). The mathematical tools used here are discrete Fourier transforms. By enforcing periodicity and using a Fourier representation in time, the method achieves spectral accuracy and has shown tremendous potential for reducing the computational cost compared to conventional time-accurate methods. The accuracy and efficiency of this technique are verified by Euler and Navier-Stokes calculations for pitching airfoils. Because the flow is turbulent, the Baldwin-Lomax turbulence model is used in the viscous flow analysis. The results of the Time Spectral method are compared with experimental data and verify that only a small number of time intervals per pitching cycle is required to capture the flow physics.
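The Fourier-in-time differentiation at the heart of the Time Spectral method can be sketched in one dimension: sample one period at N equally spaced time instances, multiply each Fourier mode k by ik, and transform back. This is a plain DFT sketch of the operator, not the paper's flow-solver implementation:

```python
import cmath
import math

def spectral_time_derivative(u):
    """Differentiate a signal sampled at N equally spaced instances over
    one period (period 2*pi) via its discrete Fourier representation:
    each Fourier mode k is multiplied by i*k, giving spectral accuracy
    for smooth periodic signals."""
    n = len(u)
    # forward DFT (normalised)
    U = [sum(u[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n)) / n
         for k in range(n)]
    # signed wavenumbers: negative frequencies live in the upper half
    ks = [k if k <= n // 2 else k - n for k in range(n)]
    if n % 2 == 0:
        ks[n // 2] = 0  # zero the ambiguous Nyquist mode for real signals
    # inverse DFT of i*k*U_k, keeping the real part
    return [sum(1j * ks[k] * U[k] * cmath.exp(2j * math.pi * k * j / n)
                for k in range(n)).real
            for j in range(n)]
```

For a band-limited signal such as sin(t), a handful of samples per cycle already reproduces the derivative to machine precision, which is exactly why so few time intervals per pitching cycle suffice.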
Abstract: Carbon nanotubes (CNTs) possess unique structural,
mechanical, thermal and electronic properties, and have been
proposed to be used for applications in many fields. However, to
reach the full potential of the CNTs, many problems still need to be
solved, including the development of an easy and effective
purification procedure, since synthesized CNTs contain impurities,
such as amorphous carbon, carbon nanoparticles and metal particles.
Different purification methods yield different CNT characteristics
and may be suitable for the production of different types of CNTs. In
this study, the effect of different purification chemicals on carbon
nanotube quality was investigated. CNTs were firstly synthesized by
chemical vapor deposition (CVD) of acetylene (C2H2) on a
magnesium oxide (MgO) powder impregnated with an iron nitrate
(Fe(NO3)3·9H2O) solution. The synthesis parameters were selected
as: the synthesis temperature of 800°C, the iron content in the
precursor of 5% and the synthesis time of 30 min. The liquid phase
oxidation method was applied for the purification of the synthesized
CNT materials. Three different acid chemicals (HNO3, H2SO4, and
HCl) were used in the removal of the metal catalysts from the
synthesized CNT material to investigate the possible effects of each
acid solution on the purification step. Purification experiments were
carried out at two different temperatures (75 and 120 °C), two
different acid concentrations (3 and 6 M) and for three different time
intervals (6, 8 and 15 h). A 30% H2O2 : 3M HCl (1:1 v%) solution
was also used in the purification step to remove both the metal
catalysts and the amorphous carbon. The purifications using this
solution were performed at the temperature of 75°C for 8 hours.
Purification efficiencies at different conditions were evaluated by
thermogravimetric analysis. Thermal and electrical properties of
CNTs were also determined. It was found that the electrical
conductivity values obtained for the carbon nanotubes were typical of
organic semiconductor materials, and that the thermal stabilities
varied with the purification chemicals.
Abstract: This paper suggests ranking alternatives under fuzzy MCDM (multiple criteria decision making) via a centroid-based ranking approach, where criteria are classified into benefit-qualitative, benefit-quantitative and cost-quantitative ones. The ratings of alternatives versus qualitative criteria and the importance weights of all criteria are assessed as linguistic values represented by fuzzy numbers. The membership function of the final fuzzy evaluation value of each alternative can be developed through α-cuts and interval arithmetic of fuzzy numbers. The distance between the origin and the relative centroid is applied to defuzzify the final fuzzy evaluation values in order to rank the alternatives. Finally, a numerical example demonstrates the computation procedure of the proposed model.
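The centroid-distance defuzzification can be sketched for the special case of triangular final fuzzy values (a, b, c), whose centroid is ((a+b+c)/3, 1/3). The paper develops the membership functions via α-cuts, so this triangular assumption, and the names below, are ours:

```python
import math

def centroid_rank(fuzzy_values):
    """Rank alternatives whose final fuzzy evaluation values are
    triangular fuzzy numbers (a, b, c): defuzzify each by the distance
    from the origin to its centroid ((a+b+c)/3, 1/3) and return the
    alternative indices sorted in decreasing order of that distance."""
    def score(tfn):
        a, b, c = tfn
        xbar, ybar = (a + b + c) / 3.0, 1.0 / 3.0
        return math.hypot(xbar, ybar)
    return sorted(range(len(fuzzy_values)),
                  key=lambda i: score(fuzzy_values[i]), reverse=True)
```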