Abstract: An alternative to the use of the Discrete Fourier
Transform (DFT) for Magnetic Resonance Imaging (MRI) reconstruction
is the use of parametric modeling techniques. This approach is
suitable for problems in which the image can be modeled by
explicit, known source functions with a few adjustable parameters.
Despite the success reported for modeling as an alternative MRI
reconstruction technique, two important problems constitute
challenges to the applicability of this method: model order
estimation and model coefficient determination. In this paper,
five of the suggested methods of evaluating the model order are
assessed: the Final Prediction Error (FPE), Akaike Information
Criterion (AIC), Residual Variance (RV), Minimum Description
Length (MDL), and Hannan and Quinn (HNQ) criteria. These criteria
were evaluated on MRI data sets using the Transient Error
Reconstruction Algorithm (TERA). The result for each criterion is
compared to that obtained with a fixed-order technique, and three
measures of similarity were evaluated. The results show that MDL
gives the highest measure of similarity to the fixed-order
technique.
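The order-selection criteria named above have standard textbook forms, e.g. AIC(p) = N ln σ̂²ₚ + 2p and MDL(p) = N ln σ̂²ₚ + p ln N, where σ̂²ₚ is the residual variance of an order-p fit. A minimal sketch on a synthetic autoregressive signal (not the authors' MRI data or the TERA implementation; RV definitions vary and are omitted, and all names and values below are illustrative):

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model; returns the residual variance."""
    N = len(x)
    X = np.column_stack([x[p - i - 1:N - i - 1] for i in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.mean((y - X @ a) ** 2)

def select_order(x, max_p):
    """Score orders 1..max_p with FPE, AIC, MDL, HNQ; return argmin per criterion."""
    N = len(x)
    scores = {}
    for p in range(1, max_p + 1):
        s2 = fit_ar(x, p)
        scores[p] = {
            "FPE": s2 * (N + p + 1) / (N - p - 1),        # one common FPE form
            "AIC": N * np.log(s2) + 2 * p,
            "MDL": N * np.log(s2) + p * np.log(N),
            "HNQ": N * np.log(s2) + 2 * p * np.log(np.log(N)),
        }
    best = {c: min(scores, key=lambda q: scores[q][c])
            for c in ("FPE", "AIC", "MDL", "HNQ")}
    return scores, best

# Synthetic AR(2) test signal (illustrative stand-in for MRI data)
rng = np.random.default_rng(0)
e = rng.standard_normal(600)
x = np.zeros(600)
for n in range(2, 600):
    x[n] = 0.75 * x[n - 1] - 0.5 * x[n - 2] + e[n]

scores, best = select_order(x, 6)
```

Each criterion trades the monotone decrease of residual variance against a penalty on model order; MDL's ln N penalty grows fastest, which is consistent with it selecting the most parsimonious orders.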
Abstract: In this article, cold-sprayed titanium particles on a
steel substrate are investigated in terms of cooling time and the
development of residual strains, using finite element analysis
(FEA) and an X-ray diffractometer (XRD). Three cooling-down models
of the sprayed particles after the deposition stage are simulated
and discussed: the first model (m1) considers conduction to the
substrate only; the second model (m2) considers both conduction
and convection to the environment; and the third model (m3) is the
same as the second but with the substrate heated to near the
particle temperature before spraying. Thereafter, the residual
strains developed in the third model are compared with
experimental measurements of residual strains obtained with a
Bruker D8 Advance diffractometer using CuKα radiation (40 kV,
40 mA) monochromatised with a graphite sample monochromator. For
the deposition conditions of this study, a good correlation was
found between the FEA results and the XRD measurements of residual
strains.
Abstract: In this paper, the detection of a fault in Global Positioning System (GPS) measurements is addressed. The class of faults considered is a bias in the GPS pseudorange measurements, modeled as an unknown constant. The fault could be the result of a receiver fault or a signal fault such as multipath error. A bias bank is constructed based on a set of possible fault hypotheses. Initially, each bias in the bank is assigned an equal probability of occurrence. Subsequently, as the measurements are processed, the probability of occurrence of each bias is sequentially updated, and the fault whose probability approaches unity is declared the current fault in the GPS measurement. The residual formed from the GPS and Inertial Measurement Unit (IMU) measurements is used to update the probability of each fault. Results are presented to show the performance of the proposed algorithm.
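The sequential probability update over a bank of bias hypotheses is a form of multiple-model hypothesis testing. A minimal sketch, assuming a scalar residual equal to the true bias plus Gaussian noise and a Gaussian likelihood for each hypothesis (the hypothesis set, noise level, and residual model are illustrative assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
bias_bank = np.array([0.0, 5.0, 10.0, 15.0])  # hypothesized pseudorange biases (m)
prob = np.full(len(bias_bank), 1.0 / len(bias_bank))  # equal initial probabilities
true_bias, sigma = 10.0, 1.0                  # illustrative fault and noise level

for _ in range(200):
    r = true_bias + sigma * rng.standard_normal()     # GPS/IMU residual sample
    # Gaussian likelihood of the residual under each bias hypothesis
    lik = np.exp(-(r - bias_bank) ** 2 / (2 * sigma ** 2))
    prob = prob * lik
    prob /= prob.sum()                        # renormalize to a probability vector

declared = bias_bank[np.argmax(prob)]         # fault with probability near unity
```

As the abstract describes, the posterior mass concentrates on the hypothesis matching the residual statistics, and that bias is declared the current fault.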
Abstract: Reinforced concrete (RC) structures strengthened with
fiber-reinforced polymer (FRP) lack thermal resistance at the
elevated temperatures reached in a fire. This limitation has led
to lining strengthened concrete with thin high-performance
cementitious composites (THPCC) to protect the substrate against
elevated temperature. The effects of elevated temperature on
THPCC based on different cementitious materials have been studied
in the past, but high-alumina cement (HAC)-based THPCC have not
been well characterized. This study focuses on THPCC based on HAC
with 60%, 70%, 80% and 85% of the cement replaced by ground
granulated blast furnace slag (GGBS). Samples were evaluated by
measuring their mechanical strength (after 28 and 56 days of
curing) following exposure to 400°C or 600°C, with unexposed
samples kept at room temperature (28°C) for comparison, and the
results were corroborated by microstructural study. Results showed
that, among all mixtures, the mix containing only HAC exhibited
the highest compressive strength after exposure to 600°C. However,
the tensile strength of THPCC made of HAC with 60% GGBS was
comparable to that of the HAC-only THPCC after exposure to 600°C.
Field-emission scanning electron microscopy (FESEM) images of
THPCC, together with energy-dispersive X-ray (EDX) microanalysis,
revealed that the microstructure deteriorated considerably after
exposure to elevated temperatures, which led to the decrease in
mechanical strength.
Abstract: A minimal-complexity version of component mode
synthesis is presented that requires simplified computer
programming but still provides adequate accuracy for modeling the
lower eigenproperties of large structures and their transient
responses. The novelty is that the structure is separated into
components along a plane or surface that exhibits rigid-like
behavior, so that the normal modes of each component alone are
sufficient, without computing any constraint, attachment, or
residual-attachment modes. The approach requires as input only a
few (lower) natural frequencies and the corresponding undamped
normal modes of each component. A novel technique is shown for
formulating the equations of motion, in which a double
transformation to generalized coordinates is employed, and the
formulation of a nonproportional damping matrix in generalized
coordinates is presented.
Abstract: Nanoemulsions are a class of emulsions with a droplet
size in the range of 50–500 nm that have attracted a great deal of
attention in recent years because of their unique characteristics.
The physicochemical properties of nanoemulsions suggest that they
can be used successfully to recover the residual oil that is
trapped in the fine pores of reservoir rock by capillary forces
after primary and secondary recovery. Oil-in-water nanoemulsions,
which can be formed by high-energy emulsification techniques using
specific surfactants, can reduce the oil–water interfacial tension
(IFT) by 3–4 orders of magnitude. The present work is aimed at
characterizing oil-in-water nanoemulsions in terms of their phase
behavior, morphology, interfacial energy, and ability to reduce
the interfacial tension, and at understanding the mechanisms of
mobilization and displacement of entrapped oil blobs by lowering
the interfacial tension at both the macroscopic and microscopic
levels. To investigate the efficiency of oil-in-water
nanoemulsions in enhanced oil recovery (EOR), experiments were
performed to characterize the emulsion in terms of its
physicochemical properties and the size distribution of the
dispersed oil droplets in the water phase. Synthetic mineral oil
and a series of surfactants were used to prepare the oil-in-water
emulsions. Characterization shows that the emulsion follows
pseudo-plastic behaviour and that the drop size of the dispersed
oil phase follows a lognormal distribution. Flooding experiments
were also carried out in a sandpack system to evaluate the
effectiveness of the nanoemulsion as a displacing fluid for
enhanced oil recovery. Substantial additional recoveries (more
than 25% of the original oil in place) over conventional water
flooding were obtained in the present investigation.
Abstract: Sugarcane shoots are an abundantly available residual
resource consisting of lignocellulose, which makes them an
attractive feedstock. The present study focused on utilizing
sugarcane shoots for reducing-sugar production as a substrate for
ethanol production. Physical and chemical pretreatments of
sugarcane shoots were investigated. Results showed that the
particle size of the sugarcane shoots influenced the cellulose
content. The maximum cellulose content (60%) was obtained from
alkaline-pretreated sugarcane shoots with 1.0 M NaOH at 30°C for
90 min, with a cellulose yield of up to 93.9% (w/w). Enzymatic
hydrolysis of the cellulosic residue in 0.04 citrate buffer (pH 5)
with Celluclast 1.5L (0.7 FPU/ml) gave the highest amount of
reducing sugar, 32.1 g/l, after 4 h of incubation at 50°C followed
by 100°C for 5 min. The cellulose conversion was 55.5%.
Abstract: In this study, arsenate [As(V)] removal from drinking water by a coagulation process was investigated. Ferric chloride (FeCl3·6H2O) and ferrous sulfate (FeSO4·7H2O) were used as coagulants. The effects of the major operating variables, coagulant dose (1–30 mg/L) and pH (5.5–9.5), were investigated. Ferric chloride and ferrous sulfate were found to be effective and reliable coagulants with respect to the required dose and the residual arsenate and coagulant concentrations. The optimum pH values for maximum arsenate removal were found to be 8 for ferrous sulfate and 7.5 for ferric chloride. The arsenate removal efficiency decreased at neutral and acidic pH values for Fe(II) and at highly acidic and highly alkaline pH for Fe(III). Increasing the coagulant dose caused a substantial increase in arsenate removal, but above a certain ferric chloride or ferrous sulfate dosage the increase was no longer significant: doses above 8 mg/L increased arsenate removal only slightly.
Abstract: In this paper, the linear regression model is estimated
by the ordinary least squares method, and the partially linear
regression model is estimated by the penalized least squares
method using a smoothing spline. The differences and similarities
between the sums of squares of the linear and partially linear
regression models (semi-parametric regression models) are then
investigated. It is shown that the sums of squares in linear
regression reduce to the corresponding sums of squares in the
partially linear regression model. Furthermore, we show that the
various sums of squares in linear regression are analogous to
different deviance statements in partially linear regression. In
addition, the coefficient of determination derived for the linear
regression model is easily generalized to a coefficient of
determination for the partially linear regression model. To this
end, two applications are presented: a simulated and a real data
set are considered to support the claims made here.
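For the linear part, the sum-of-squares decomposition underlying the coefficient of determination is standard: SST = SSR + SSE and R² = SSR/SST for an OLS fit with intercept. A minimal sketch on simulated data (illustrative only; the paper's smoothing-spline estimator for the partially linear model is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
x = rng.uniform(0, 10, n)
y = 1.5 + 2.0 * x + rng.standard_normal(n)  # simulated linear data

# OLS fit with intercept via least squares
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

sst = np.sum((y - y.mean()) ** 2)           # total sum of squares
ssr = np.sum((y_hat - y.mean()) ** 2)       # regression (model) sum of squares
sse = np.sum((y - y_hat) ** 2)              # residual sum of squares
r2 = ssr / sst                              # coefficient of determination
```

The identity SST = SSR + SSE holds exactly for OLS with an intercept; it is this decomposition that the paper generalizes to deviance statements in the partially linear setting.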
Abstract: Turning operations can leave a thin layer on the
component surface with high tensile residual stresses, which can
severely degrade the fatigue performance of the component. In this
paper, an analytical approach is presented to reconstruct the
residual stress field from a limited, incomplete set of
measurements. The Airy stress function is used as the primary
unknown to directly solve the equilibrium equations and satisfy
the boundary conditions. The new method offers the flexibility to
impose the physical conditions that govern the behavior of
residual stress, so as to achieve a meaningful, complete stress
field. The analysis is also coupled with a least squares
approximation and a regularization method to ensure the stability
of the inverse problem. The power of the method is then
demonstrated by analyzing some experimental measurements,
achieving good agreement between the model prediction and the
results obtained from residual stress measurement.
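The least-squares-plus-regularization step can be sketched with classical Tikhonov regularization: the regularized normal equations (AᵀA + λI)x = Aᵀb stabilize an ill-conditioned inverse problem at the cost of a bias toward small-norm solutions. A minimal illustration on a generic ill-conditioned system (the matrix and λ values are illustrative; the paper's Airy-stress-function formulation is not reproduced):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min ||A x - b||^2 + lam ||x||^2 via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# An ill-conditioned Hilbert-like matrix as a stand-in for the inverse problem
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true + 1e-6 * np.sin(np.arange(n))  # data with a small perturbation

x_small = tikhonov(A, b, 1e-12)  # barely regularized: noise-amplified solution
x_reg = tikhonov(A, b, 1e-6)     # regularized: stable solution of plausible scale
```

Increasing λ shrinks the solution norm monotonically, which is what makes the reconstructed field insensitive to small measurement errors.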
Abstract: In this paper, the residual stress of thermal spray
coatings on gas turbine components is studied by the curvature
method. The samples and a shaft were coated with hard WC-12Co
cermet by high velocity oxy-fuel (HVOF) spraying after preparation
under the same conditions. The curvature of the coated samples was
measured using a coordinate measuring machine (CMM). Metallurgical
and tribological studies of the coated shaft were carried out
using optical microscopy and scanning electron microscopy (SEM).
Abstract: Electrical substation components are often subject to degradation due to over-voltage or over-current caused by a short circuit or lightning. Particular interest is given to the circuit breaker, owing to the importance of its function and the danger of its failure. This component degrades gradually with use, and it is also subject to a shock process resulting from the stress of isolating the fault when a short circuit occurs in the system. In this paper, based on the development of failure mechanisms, the wear-out of the circuit breaker contacts is modeled. The aim of this work is to evaluate the breaker's reliability and, consequently, its residual lifetime. The shock process is characterized by two random variables: the arrival times of the shocks and their magnitudes. The arrival of shocks is modeled as a homogeneous Poisson process (HPP). By simulation, the dates of short-circuit arrivals are generated together with their magnitudes, and the same principle is applied to the amount of cumulative contact wear. The result is a formulation of the wear function in terms of the number of solicitations of the circuit breaker.
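The HPP shock model described above can be sketched directly: exponential inter-arrival times with rate λ give the short-circuit dates, each shock carries a random magnitude, and the lifetime is the first time the cumulative wear crosses a failure threshold. A minimal simulation (the rate, magnitude distribution, and threshold are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 2.0        # shock rate (short circuits per year), illustrative
horizon = 50.0   # simulation horizon (years)
threshold = 30.0 # failure threshold on cumulative contact wear, illustrative

# Homogeneous Poisson process: i.i.d. exponential inter-arrival times
gaps = rng.exponential(1.0 / lam, size=200)
arrivals = np.cumsum(gaps)
arrivals = arrivals[arrivals <= horizon]

# Each shock removes a random amount of contact material
magnitudes = rng.exponential(0.5, size=len(arrivals))
wear = np.cumsum(magnitudes)

# Lifetime: first arrival at which cumulative wear crosses the threshold
failed = np.nonzero(wear >= threshold)[0]
lifetime = arrivals[failed[0]] if failed.size else np.inf
```

Repeating this simulation gives an empirical distribution of the lifetime, from which the residual lifetime after a given number of solicitations can be estimated.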
Abstract: The main objective of this work is to provide fault detection and isolation based on Markov parameters for residual generation and on a neural network for fault classification. The diagnostic approach proceeds in two steps. In step 1, the system is identified from a series of input/output variables through an identification algorithm: a Hankel matrix is first formed from the input/output data and then decomposed via the singular value decomposition. For online identification, a sliding-window approach is adopted, in which a window slides over a subset of 'n' input/output variables; faults are introduced at arbitrary instants and the identification is carried out online. In step 2, the fault is diagnosed by comparing the Markov parameters of the faulty and fault-free systems: fault residues are extracted by comparing the first five Markov parameters of the two systems. An artificial neural network trained on predetermined faulty conditions serves to classify the unknown fault. The proposed diagnostic approach is illustrated on benchmark problems with encouraging results.
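The Hankel/SVD step in step 1 follows standard subspace identification practice: stacking the Markov parameters h_k into a Hankel matrix whose numerical rank reveals the system order. A minimal sketch for a known second-order impulse response (the response and dimensions are illustrative, not the paper's benchmark systems):

```python
import numpy as np

# Markov parameters (impulse response) of an illustrative 2nd-order system:
# h[k] = 0.9^k + (-0.5)^k, a sum of two decaying modes
k = np.arange(20)
h = 0.9 ** k + (-0.5) ** k

# Hankel matrix H[i, j] = h[i + j + 1] built from the Markov parameters
m = 8
H = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])

# Singular value decomposition: the numerical rank equals the system order
s = np.linalg.svd(H, compute_uv=False)
order = int(np.sum(s > 1e-8 * s[0]))
```

A fault that changes the system dynamics changes the leading Markov parameters, so comparing them between the nominal and current models yields the fault residues the abstract refers to.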
Abstract: A low-bit-rate still-image compression scheme is
proposed that compresses the indices of Vector Quantization (VQ)
and generates a residual codebook. The VQ indices are compressed
by exploiting the correlation among image blocks, which reduces
the bits per index. A residual codebook, similar to the VQ
codebook, is generated to represent the distortion produced by VQ;
using this residual codebook, the distortion in the reconstructed
image is removed, thereby increasing the image quality. Our scheme
combines these two methods. Experimental results on the standard
image Lena show that our scheme gives a reconstructed image with a
PSNR of 31.6 dB at 0.396 bits per pixel. Our scheme is also faster
than the existing VQ variants.
Abstract: The detection of outliers is essential because of their
responsibility for serious interpretative problems in linear as
well as nonlinear regression analysis. Much work has been done on
the identification of outliers in linear regression, but not in
nonlinear regression. In this article we propose several outlier
detection techniques for nonlinear regression. The main idea is to
use the linear approximation of the nonlinear model and to take
the gradient as the design matrix; the detection techniques are
then formulated accordingly. Six detection measures are developed
and combined with three estimation techniques: the least squares,
M-, and MM-estimators. The study shows that, among the six
measures, only the studentized residual and Cook's distance, when
combined with the MM-estimator, are consistently capable of
identifying the correct outliers.
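Two of the measures found effective, the (internally) studentized residual and Cook's distance, have standard closed forms in the linearized setting, using the leverages h_ii of the design (here, gradient) matrix. A minimal sketch for a linear fit with one planted outlier (the data are illustrative; the paper's M- and MM-estimators are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 30
x = np.linspace(0, 10, n)
y = 1.0 + 2.0 * x + 0.1 * rng.standard_normal(n)
y[5] += 8.0                            # plant one gross outlier

X = np.column_stack([np.ones(n), x])   # design (gradient) matrix
p = X.shape[1]
H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix
h = np.diag(H)                         # leverages h_ii
e = y - H @ y                          # residuals
s2 = np.sum(e ** 2) / (n - p)          # residual variance estimate

r_stud = e / np.sqrt(s2 * (1 - h))     # internally studentized residuals
cook = r_stud ** 2 * h / (p * (1 - h)) # Cook's distance
```

Both measures flag the planted observation: the studentized residual scales each residual by its own variance, while Cook's distance additionally weights it by leverage, i.e. by its influence on the fit.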
Abstract: Because support interference corrections are not
properly understood, engineers mostly rely on expensive dummy
measurements or CFD calculations. This paper presents a hybrid
method, based on uncorrected wind tunnel measurements and fast
calculation techniques, to calculate wall interference, support
interference, and residual interference (arising when, e.g., a
support member closely approaches the wind tunnel walls) for any
type of wind tunnel and support configuration. The method provides
a simple formula for calculating the interference gradient, based
on the uncorrected measurements and a successive calculation of
the slopes of the interference-free aerodynamic coefficients. For
the latter purpose, a new vortex-lattice routine is developed that
corrects the slopes for viscous effects. A test case of a
measurement on a wing proves the value of this hybrid method, as
the trends and orders of magnitude of the interference are
correctly determined.
Abstract: In response to global warming, city planners are pursuing actions to reduce carbon emissions. One approach is to promote the use of public transportation through transit-oriented development; for example, rapid transit systems have opened in Taipei City and Kaohsiung City. However, as of November 2008, average daily patronage of the Kaohsiung MRT system counted only 113,774 passengers, much less than expected. This raises two crucial questions: how does public transport compete with private transport, and, more importantly, what factors would enhance the use of public transport? To answer these questions, our study first applied regression analysis to the factors attracting people to use public transport in cities around the world. Our study shows that the number of MRT stations, city population, cost of living, transit fare, density, gasoline price, and whether the scooter is a major mode of transport are the major factors. Subsequently, our study identified cities that are successful and unsuccessful in terms of public transport usage based on a diagnosis of the regression residuals. Finally, by comparing the transportation strategies adopted by the successful cities, we conclude that Kaohsiung City could promote the use of public transport by applying strategies such as increasing parking fees, reducing parking spaces in the downtown area, and reducing transfer times by providing more bus services and public bikes.
Abstract: The bond graph, as a unified multidisciplinary tool, is
widely used not only for dynamic modelling but also for fault
detection and isolation, because of its structural and causal
properties. A binary fault signature matrix can be generated
systematically, but making the final binary decision is not always
feasible because of the problems inherent in this method. The
purpose of this paper is to introduce a methodology that improves
the classical binary decision-making method, so that unknown and
identical failure signatures can be treated and robustness
improved. The approach consists of associating the evaluated
residuals with component reliability data to build a hybrid
Bayesian network. This network is used in two distinct inference
procedures: one for the continuous part and the other for the
discrete part. The continuous nodes of the network provide the
prior probabilities of the component failures, which are used by
the inference procedure on the discrete part to compute the
posterior probabilities of the failures. The developed methodology
is applied to a real steam-generator pilot process.
Abstract: This paper focuses on reducing the power consumption of
wireless sensor networks. To this end, the communication protocol
LEACH (Low-Energy Adaptive Clustering Hierarchy) is modified. We
extend LEACH's stochastic cluster-head selection algorithm by
adjusting the probability of each node becoming a cluster head
according to the energy it requires to transmit to the sink. We
present an efficient, energy-aware routing algorithm for wireless
sensor networks. Our contribution consists in rotating the
selection of cluster heads while taking into account the nodes'
remoteness from the sink and then their residual energy. This
choice allows a better distribution of the transmission energy in
the network. The cluster-head selection algorithm is completely
decentralized. Simulation results show that energy consumption is
significantly reduced compared with the previous clustering-based
routing algorithm for sensor networks.
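The modified election can be sketched as a weighting of LEACH's desired cluster-head fraction by each node's residual energy and its transmission cost to the sink (here a d² path-loss proxy). The exact weighting used in the protocol is not given in the abstract, so the formula below is an illustrative assumption:

```python
def cluster_head_probability(nodes, p_base=0.05):
    """Scale LEACH's base election probability by residual energy and
    closeness to the sink (illustrative weighting, not the paper's formula).

    nodes: list of dicts with 'energy' (residual, J) and 'dist' (to sink, m).
    Returns one election probability per node.
    """
    # d^2 free-space path-loss proxy for the energy required to reach the sink
    weights = [n["energy"] / (n["dist"] ** 2) for n in nodes]
    mean_w = sum(weights) / len(weights)
    # Nodes with above-average (energy / cost) get a proportionally higher chance
    return [min(1.0, p_base * w / mean_w) for w in weights]

nodes = [
    {"energy": 2.0, "dist": 40.0},  # energetic node close to the sink
    {"energy": 2.0, "dist": 80.0},  # same energy, farther away
    {"energy": 0.5, "dist": 40.0},  # depleted node close to the sink
]
probs = cluster_head_probability(nodes)
```

Under this weighting, distant or depleted nodes are elected less often, which spreads transmission energy over the network as the abstract describes.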
Abstract: We present a discussion of three adaptive filtering
algorithms, well known for their one-step termination property, in
terms of their relationship with the minimal residual method.
These algorithms are the normalized least mean square (NLMS), the
affine projection algorithm (APA), and the recursive least squares
(RLS) algorithm. The NLMS is shown to result from the
orthogonality condition imposed on an instantaneous approximation
of the Wiener equation, while the APA and RLS algorithms result
from the orthogonality condition in a multi-dimensional minimal
residual formulation. Further analysis of the minimal residual
formulation for the RLS leads to a triangular system that also
possesses the one-step termination property (in exact arithmetic).
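The NLMS recursion referred to above updates the weights along the current input vector so that, for step size μ = 1, the a posteriori error is driven to zero in one step, the one-step termination property in the instantaneous, exact-arithmetic sense. A minimal system-identification sketch (the filter length, step size, and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
w_true = np.array([0.8, -0.4, 0.2, 0.1])  # unknown FIR system, illustrative
L = len(w_true)
w = np.zeros(L)                           # adaptive filter weights
mu, eps = 1.0, 1e-8                       # step size and small regularizer

x = rng.standard_normal(2000)             # white input signal
for n in range(L, len(x)):
    u = x[n - L:n][::-1]                  # current input (regression) vector
    d = w_true @ u                        # noise-free desired signal
    e = d - w @ u                         # a priori error (residual)
    w = w + mu * e * u / (u @ u + eps)    # NLMS: project the error along u

misalignment = np.linalg.norm(w - w_true)
```

Each update is a projection that zeroes the error on the current input direction, which is exactly the orthogonality-condition view of NLMS discussed in the abstract; APA and RLS extend the same idea to multiple directions at once.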