Abstract: This study proposes a method for estimating the stress
distribution of beam structures based on TLS (Terrestrial Laser
Scanning). The main components of the method are the creation of
lattices from the raw TLS data to satisfy suitable conditions and the
application of CSSI (Cubic Smoothing Spline Interpolation) to
estimate the stress distribution. Estimating the stress distribution of a
structural member, or of a whole structure, is one of the important
factors in evaluating structural safety. Existing sensors such as the
ESG (Electric Strain Gauge) and the LVDT (Linear Variable
Differential Transformer) are contact-type sensors that must be
installed on the structural members, and they suffer from various
limitations such as the need for separate space in which to install
network cables and the difficulty of accessing sensor installation
points in real buildings. To overcome these problems inherent in
contact-type sensors, the TLS system of LiDAR (Light Detection and
Ranging), which can measure the displacement of a target over a long
range without being influenced by the surrounding environment and
can also capture the whole shape of the structure, has been applied to
the field of structural health monitoring. An important characteristic
of TLS measurement is that it produces point clouds containing many
points with local coordinates. Point clouds do not follow a linear
distribution but are dispersed; to analyze them, interpolation is
essential. Through the formation of averaged lattices and the
application of CSSI to the raw data, a method was developed that can
estimate the displacement of a simple beam. The method can also be
extended to calculate strain and, finally, applied to estimate the stress
distribution of a structural member. To verify the validity of the
method, a loading test on a simple beam was conducted and measured
with TLS. A comparison of the estimated stress with the reference
stress confirms the validity of the method.
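The displacement-to-strain-to-stress pipeline described above can be sketched as follows. This is a minimal illustration using a SciPy smoothing spline as a stand-in for CSSI and elementary beam-bending theory; the material constants, beam geometry, and synthetic "point cloud" are assumptions, not the authors' data or implementation.

```python
# Sketch: smoothing-spline fit of noisy deflection data, then bending
# stress via sigma = E * c * w''(x). All constants are assumed values.
import numpy as np
from scipy.interpolate import UnivariateSpline

E = 200e9   # Young's modulus [Pa] (assumed, structural steel)
c = 0.05    # distance from neutral axis to extreme fibre [m] (assumed)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 200)                  # beam axis [m]
w = 1e-3 * np.sin(np.pi * x / 2.0)              # true deflection [m]
w_noisy = w + rng.normal(0.0, 2e-5, x.size)     # TLS-like scatter

# Averaged-lattice idea: fit a cubic smoothing spline to the scattered data.
spline = UnivariateSpline(x, w_noisy, k=3, s=len(x) * (2e-5) ** 2)

# Bending strain eps = c * w''(x); stress sigma = E * eps.
curvature = spline.derivative(2)(x)
stress = E * c * curvature
print(f"peak estimated stress: {np.abs(stress).max():.3e} Pa")
```

The smoothing factor `s` is chosen from the assumed measurement noise; a too-small `s` would let noise dominate the second derivative.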
Abstract: Subspace channel estimation methods have been
studied widely, where the subspace of the covariance matrix is
decomposed to separate the signal subspace from the noise subspace. The
decomposition is normally done by using either the eigenvalue
decomposition (EVD) or the singular value decomposition (SVD) of
the auto-correlation matrix (ACM). However, the subspace
decomposition process is computationally expensive. This paper
considers the estimation of the multipath slow frequency hopping
(FH) channel using a noise-subspace-based method. In particular, an
efficient method is proposed to estimate the multipath time delays by
applying the multiple signal classification (MUSIC) algorithm to the
null space extracted by the rank-revealing LU (RRLU) factorization.
The RRLU factorization provides precise information about the
numerical null space and the rank, both important tools in linear
algebra. Simulation results demonstrate the effectiveness of the
proposed method, which approximately halves the computational
complexity compared with RRQR-based methods while maintaining
the same performance.
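The rank-and-null-space idea behind the RRLU step can be illustrated on a small example. Note that SciPy's `lu` uses partial (not complete) pivoting, so this is a generic-case sketch of reading the numerical rank off the LU factors, not a guaranteed rank-revealing factorization as used in the paper.

```python
# Sketch: numerical rank and null-space dimension from an LU factorization
# of a low-rank "covariance-like" matrix.
import numpy as np
from scipy.linalg import lu

rng = np.random.default_rng(1)
# Rank-2 symmetric 6x6 matrix built from two outer products.
a = rng.standard_normal((6, 2))
R = a @ a.T

p, l, u = lu(R)                                  # R = p @ l @ u
tol = 1e-10 * np.abs(np.diag(u)).max()
rank = int(np.sum(np.abs(np.diag(u)) > tol))     # count significant pivots
null_dim = R.shape[0] - rank                     # dimension of the null space
print("numerical rank:", rank, "null-space dimension:", null_dim)
```

In the paper's setting, the basis of that null space is what MUSIC scans against to locate the multipath time delays.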
Abstract: In general, classical methods such as maximum
likelihood (ML) and least squares (LS) estimation methods are used
to estimate the shape parameters of the Burr XII distribution.
However, these estimators are highly sensitive to outliers. To
overcome this problem, we propose alternative robust estimators
based on the M-estimation method for the shape parameters of the
Burr XII distribution. We provide a small simulation study and a real
data example to illustrate the performance of the proposed estimators
over the ML and the LS estimators. The simulation results show that
the proposed robust estimators generally outperform the classical
estimators in terms of bias and root mean square error when there
are outliers in the data.
Abstract: In this paper, the problem of fault detection and
isolation in the attitude control subsystem of spacecraft formation
flying is considered. The fault detection method is designed around the
extended Kalman filter, a nonlinear stochastic state estimation
technique. Three fault detection architectures, namely
centralized, decentralized, and semi-decentralized are designed based
on extended Kalman filters. Moreover, residual generation and
threshold selection techniques are proposed for these architectures.
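The residual-generation and thresholding idea can be sketched on a toy scalar nonlinear system. The dynamics, noise levels, injected fault, and 5-sigma threshold below are illustrative assumptions, not the spacecraft attitude model.

```python
# Sketch: extended Kalman filter on a scalar nonlinear system; the
# innovation (residual) is tested against a 5-sigma threshold to flag
# an injected sensor-bias fault.
import numpy as np

rng = np.random.default_rng(11)
Q, R = 1e-4, 2.5e-3                     # process / measurement noise variances

def f(x):                               # mildly nonlinear dynamics (assumed)
    return 0.9 * x + 0.1 * np.sin(x)

def F(x):                               # Jacobian of f, for the EKF
    return 0.9 + 0.1 * np.cos(x)

x_true, x_hat, P = 0.5, 0.0, 1.0
first_alarm = None
for k in range(120):
    x_true = f(x_true) + np.sqrt(Q) * rng.standard_normal()
    z = x_true + np.sqrt(R) * rng.standard_normal()
    if k >= 60:
        z += 1.0                        # injected sensor bias fault

    x_pred = f(x_hat)                   # EKF predict
    P_pred = F(x_hat) ** 2 * P + Q
    S = P_pred + R                      # innovation covariance
    r = z - x_pred                      # residual
    if first_alarm is None and abs(r) > 5.0 * np.sqrt(S):
        first_alarm = k
    K = P_pred / S                      # EKF update
    x_hat = x_pred + K * r
    P = (1.0 - K) * P_pred

print("fault injected at k=60, first alarm at k =", first_alarm)
```

The same residual test applies in each architecture; what changes is whether the filters run on the whole formation (centralized) or per spacecraft.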
Abstract: Building loss estimation methodologies, which have
advanced considerably in recent decades, are commonly used to
estimate the social and economic impacts of seismic structural
damage. Following these methods, this paper presents the
evaluation of an annual loss probability of a reinforced concrete
moment resisting frame designed according to Korean Building Code.
The annual loss probability is defined by (1) a fragility curve obtained
from a capacity spectrum method which is similar to a method adopted
from HAZUS, and (2) a seismic hazard curve derived from annual
frequencies of exceedance per peak ground acceleration. Seismic
fragilities are computed to calculate the annual loss probability of a
certain structure using functions depending on structural capacity,
seismic demand, structural response and the probability of exceeding
damage state thresholds. This study carried out a nonlinear static
analysis to obtain the capacity of an RC moment resisting frame
selected as a prototype building. The analysis results show that the
annual probability of extensive structural damage in the prototype
building is estimated at 0.01%.
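The way the annual loss probability combines the two curves can be sketched numerically: the fragility curve is integrated against the density of the hazard curve. The lognormal fragility and power-law hazard parameters below are illustrative assumptions, not the paper's fitted values.

```python
# Sketch: annual damage-state probability from a fragility curve and a
# seismic hazard curve, P = integral of P(DS|a) * |d(lambda)/da| da.
import numpy as np
from scipy.stats import norm

# Fragility: P(extensive damage | PGA = a), lognormal in a (assumed).
median_a, beta = 0.6, 0.5          # median capacity [g], log-std
frag = lambda a: norm.cdf(np.log(a / median_a) / beta)

# Hazard: annual frequency of exceeding PGA a, lambda(a) = k0 * a**-k
# (assumed site constants).
k0, k = 1e-4, 2.5

a = np.linspace(0.01, 3.0, 3000)
lam = k0 * a ** (-k)
p_annual = np.trapz(frag(a) * np.abs(np.gradient(lam, a)), a)
print(f"annual probability of extensive damage: {p_annual:.2e}")
```

With realistic fragility medians and hazard constants this integral lands in the small-fraction-of-a-percent range, consistent with the order of magnitude reported above.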
Abstract: At-site flood frequency analysis is used to estimate
flood quantiles when at-site record length is reasonably long. In
Australia, FLIKE software has been introduced for at-site flood
frequency analysis. The advantage of FLIKE is that, for a given
application, the user can compare a number of most commonly
adopted probability distributions and parameter estimation methods
relatively quickly using a windows interface. The new version of
FLIKE incorporates the multiple Grubbs and Beck test, which can
identify multiple potentially influential low flows. This paper
presents a case study of six catchments in
eastern Australia which compares two outlier identification tests
(original Grubbs and Beck test and multiple Grubbs and Beck test)
and two commonly applied probability distributions (Generalized
Extreme Value (GEV) and Log Pearson type 3 (LP3)) using FLIKE
software. It has been found that the multiple Grubbs and Beck test
when used with LP3 distribution provides more accurate flood
quantile estimates than when LP3 distribution is used with the
original Grubbs and Beck test. Between these two methods, the
differences in flood quantile estimates have been found to be up to
61% for the six study catchments. It has also been found that GEV
distribution (with L moments) and LP3 distribution with the multiple
Grubbs and Beck test provide quite similar results in most of the
cases; however, a difference up to 38% has been noted for flood
quantiles for annual exceedance probability (AEP) of 1 in 100 for one
catchment. This finding needs to be confirmed with a greater number
of stations across other Australian states.
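The kind of flood quantile the comparison above is about can be sketched with a simple log-Pearson type 3 (LP3) fit. The synthetic record and the method-of-moments fit are assumptions for illustration; FLIKE itself uses a Bayesian fitting procedure.

```python
# Sketch: 1-in-100 AEP flood quantile from an LP3 fit (Pearson type 3
# fitted to the log-flows by moments). Data are synthetic.
import numpy as np
from scipy.stats import pearson3, skew

rng = np.random.default_rng(3)
# Synthetic annual maximum flood series [m^3/s], roughly lognormal.
flows = np.exp(rng.normal(5.0, 0.6, 60))

logq = np.log10(flows)
g = skew(logq)                         # sample skew of the log-flows
# 1-in-100 AEP quantile: 99th percentile of the fitted PIII in log space.
q100 = 10 ** pearson3.ppf(0.99, g, loc=logq.mean(), scale=logq.std(ddof=1))
print(f"1-in-100 AEP flood quantile: {q100:.0f} m^3/s")
```

Censoring low outliers (the Grubbs and Beck step) changes the fitted moments of `logq`, which is exactly why the two tests can produce quantile estimates differing by tens of percent.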
Abstract: As the human race continues to explore space by creating
new space transportation means and sending them to other planets,
advancing the study of atmospheric reentry is crucial. In this
context, it is important to analyze the mass recession rate of ablative
materials for the thermal shields of reentry spacecraft.
This paper describes a new estimation method for calculating the mass
recession of an ablator system made of carbon fiber reinforced plastic
materials. This method is based on the Arrhenius equation at low
temperatures and, at high temperatures, on a theory of the recession
phenomenon of carbon fiber reinforced plastic materials that takes
into account the presence of resin inside the material. The space
mission of the USERS spacecraft is considered as a case study.
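The low-temperature branch of the method is an Arrhenius-type rate law, which can be sketched directly. The pre-exponential factor and activation energy below are illustrative values, not the paper's fitted constants for the CFRP ablator.

```python
# Sketch: Arrhenius-type mass recession rate m_dot = A * exp(-Ea / (R*T)).
import numpy as np

R = 8.314            # universal gas constant [J/(mol K)]
A = 1.0e3            # pre-exponential factor [kg/(m^2 s)] (assumed)
Ea = 1.5e5           # activation energy [J/mol] (assumed)

def recession_rate(T):
    """Surface mass recession rate [kg/(m^2 s)] at temperature T [K]."""
    return A * np.exp(-Ea / (R * T))

T = np.array([800.0, 1200.0, 1600.0])
print(recession_rate(T))   # rate grows steeply with temperature
```

At high temperatures the paper switches to the resin-aware recession theory, since a single Arrhenius law no longer captures the char-layer behaviour.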
Abstract: Today, the need for water resources is swiftly increasing due to population growth. At the same time, it is known that some regions will face water shortages and drought because of global warming and climate change. In this context, the evaluation and analysis of hydrological data, such as observed trends and short-term flow prediction for drought and flood studies, is of great importance. Selecting the most appropriate probability distribution is important for describing low-flow statistics in studies related to drought analysis. As in many basins in Turkey, the Gediz River basin will be significantly affected by drought, which will decrease the amount of usable water. The aim of this study is to derive appropriate probability distributions for the frequency analysis of annual minimum flows at six gauging stations in the Gediz Basin. After applying ten different probability distributions, six different parameter estimation methods, and three goodness-of-fit tests, the Pearson 3 and generalized extreme value distributions were found to give the best results.
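The distribution-selection step can be sketched as fitting candidate distributions to an annual-minimum series and comparing goodness of fit. The synthetic data and the two candidates below are assumptions for illustration, not the Gediz Basin records or the study's full set of ten distributions.

```python
# Sketch: fit two candidate distributions to annual minimum flows and
# rank them by Kolmogorov-Smirnov statistic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
annual_min = stats.pearson3.rvs(0.8, loc=12.0, scale=3.0, size=50,
                                random_state=rng)   # synthetic low flows

results = {}
for name, dist in [("pearson3", stats.pearson3),
                   ("genextreme", stats.genextreme)]:
    params = dist.fit(annual_min)                   # maximum likelihood fit
    ks = stats.kstest(annual_min, dist.cdf, args=params)
    results[name] = ks.statistic                    # smaller = better fit
best = min(results, key=results.get)
print("KS statistics:", results, "-> best fit:", best)
```

In the study, the same comparison is repeated per station, per parameter-estimation method, and per goodness-of-fit test before the winning distributions are declared.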
Abstract: This paper presents a fault identification, classification and fault location estimation method based on Discrete Wavelet Transform and Adaptive Network Fuzzy Inference System (ANFIS) for medium voltage cable in the distribution system.
Different faults and locations are simulated by ATP/EMTP, and then certain selected features of the wavelet-transformed signals are used as input for training the ANFIS. An accurate fault classifier and locator algorithm was then designed, trained, and tested using current samples only. The results obtained from the ANFIS output were compared with the real output, and the percentage error between the two was found to be less than three percent. Hence, it can be concluded that the proposed technique offers high accuracy in both fault classification and fault location.
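The wavelet feature-extraction step can be sketched with an orthonormal Haar DWT reduced to per-band energies of the kind that could feed a classifier such as ANFIS. The signal, wavelet choice, and feature definition are illustrative assumptions, not the paper's selected features.

```python
# Sketch: orthonormal Haar DWT of a fault-like current signal, with
# per-band energies as candidate classifier features.
import numpy as np

def haar_dwt(x, levels):
    """Return [detail_1, ..., detail_L, approx_L] of an orthonormal Haar DWT."""
    x = np.asarray(x, dtype=float)
    bands = []
    for _ in range(levels):
        a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
        bands.append(d)
        x = a
    bands.append(x)
    return bands

# Synthetic fault-like current: 50 Hz carrier plus a high-frequency burst.
t = np.arange(1024) / 10e3
i = np.sin(2 * np.pi * 50 * t)
i[400:430] += 0.5 * np.sin(2 * np.pi * 2e3 * t[400:430])

bands = haar_dwt(i, levels=4)
features = [float(np.sum(b ** 2)) for b in bands]   # per-band energy
print(features)
```

Because the transform is orthonormal, the band energies sum to the signal energy, so the features partition where the fault transient's energy lives across scales.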
Abstract: The execution of Business Software Systems (BSS) Development and Enhancement Projects (D&EP) is characterized by exceptionally low effectiveness, leading to considerable financial losses. The general reason for the low effectiveness of such projects is that they are inappropriately managed. One factor in proper BSS D&EP management is a suitable (reliable and objective) method of estimating project work effort, since this is what determines the correct estimation of the project's major attributes: cost and duration. A BSS D&EP is usually considered to have been accomplished effectively if a product with the planned functionality is delivered without cost and time overruns. The goal of this paper is to show that the choice of approach to BSS D&EP work effort estimation has a considerable influence on the effectiveness of project execution.
Abstract: Evapotranspiration (ET) is a major component of the hydrologic cycle, and its accurate estimation is essential for hydrological studies. In the past, various estimation methods have been developed for different climatological data, and the accuracy of these methods varies with climatic conditions. Reference crop evapotranspiration (ET0) is a key variable in procedures established for estimating the evapotranspiration rates of agricultural crops. Values of ET0 are used with crop coefficients in many aspects of irrigation and water resources planning and management. Numerous methods are used for estimating ET0. As per the internationally accepted procedures outlined in the United Nations Food and Agriculture Organization's Irrigation and Drainage Paper No. 56 (FAO-56), the Penman-Monteith equation is recommended for computing ET0 from ground-based climatological observations. In the present study, seven methods were selected for performance evaluation, and user-friendly software was developed in the Visual Basic programming language, which allows a graphical environment to be created with little coding. For the given data availability, the developed software estimates reference evapotranspiration for any area and period for which data are available. The accuracy of the software was checked against the examples given in FAO-56. The developed software is a user-friendly tool for estimating ET0 under different data availability and climatic conditions.
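The FAO-56 Penman-Monteith equation at the core of the recommended procedure can be sketched directly. The paper's tool is written in Visual Basic; this Python version illustrates the same equation, with example inputs chosen for illustration.

```python
# Sketch: daily reference evapotranspiration ET0 [mm/day] per FAO-56
# Penman-Monteith, from standard daily weather inputs.
import math

def et0_fao56(T, u2, Rn, G, rh, P=101.3):
    """T: mean air temperature [degC], u2: wind speed at 2 m [m/s],
    Rn: net radiation [MJ/m^2/day], G: soil heat flux [MJ/m^2/day],
    rh: relative humidity [%], P: atmospheric pressure [kPa]."""
    es = 0.6108 * math.exp(17.27 * T / (T + 237.3))  # sat. vapour pressure [kPa]
    ea = es * rh / 100.0                             # actual vapour pressure
    delta = 4098.0 * es / (T + 237.3) ** 2           # slope of the es curve
    gamma = 0.665e-3 * P                             # psychrometric constant
    num = (0.408 * delta * (Rn - G)
           + gamma * (900.0 / (T + 273.0)) * u2 * (es - ea))
    return num / (delta + gamma * (1.0 + 0.34 * u2))

print(round(et0_fao56(T=25.0, u2=2.0, Rn=15.0, G=0.0, rh=60.0), 2))
```

For a warm, moderately humid day like the example inputs, ET0 comes out in the 5-6 mm/day range, which is the kind of value checked against the worked examples in FAO-56.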
Abstract: Estimation of runoff water quality parameters is required to determine appropriate water quality management options. Various models are used to estimate runoff water quality parameters. However, most models provide event-based estimates of water quality parameters for specific sites. The work presented in this paper describes the development of a model that continuously simulates the accumulation and wash-off of water quality pollutants in a catchment. The model allows estimation of pollutant build-up during dry periods and pollutant wash-off during storm events. The model was developed by integrating two individual models: a rainfall-runoff model and a catchment water quality model. The rainfall-runoff model is based on the time-area runoff estimation method. The model allows users to estimate the time of concentration using a range of established methods. The model also allows estimation of the continuing runoff losses using any of the available estimation methods (i.e., constant, linearly varying or exponentially varying). Pollutant build-up in a catchment was represented by one of three pre-defined functions: power, exponential, or saturation. Similarly, pollutant wash-off was represented by one of three different functions: power, rating-curve, or exponential. The developed runoff water quality model was set up to simulate the build-up and wash-off of total suspended solids (TSS), total phosphorus (TP) and total nitrogen (TN). The application of the model was demonstrated using available runoff and TSS field data from road and roof surfaces in the Gold Coast, Australia. The model provided excellent representation of the field data, demonstrating the simplicity yet effectiveness of the proposed model.
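Two of the pre-defined process functions the model chooses between can be sketched directly: exponential build-up over dry days and exponential wash-off during a storm. The coefficients are illustrative assumptions, not calibrated Gold Coast values.

```python
# Sketch: exponential pollutant build-up and wash-off functions of the
# kind the model selects among (power / exponential / saturation, etc.).
import numpy as np

def buildup_exponential(days, b_max=50.0, k=0.4):
    """Pollutant mass on the surface [kg/ha] after `days` dry days."""
    return b_max * (1.0 - np.exp(-k * days))

def washoff_exponential(b0, intensity, duration, c=0.2):
    """Mass washed off [kg/ha] from initial build-up b0 by a storm of the
    given rainfall intensity [mm/h] and duration [h]."""
    return b0 * (1.0 - np.exp(-c * intensity * duration))

b = buildup_exponential(days=7)                 # TSS accumulated over a dry week
w = washoff_exponential(b, intensity=10.0, duration=1.5)
print(f"build-up: {b:.1f} kg/ha, washed off: {w:.1f} kg/ha")
```

Chaining these two functions over alternating dry and wet periods is what lets the model run continuously rather than event by event.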
Abstract: Precise frequency estimation methods for pulse-shaped echoes are a prerequisite for determining the relative velocity between sensor and reflector. Signal frequencies are analysed using three different methods: the Fourier Transform, the Chirp Z-Transform, and the MUSIC algorithm. Simulations of echoes are performed varying both the noise level and the number of reflecting points. The superposition of echoes with random initial phases is found to severely influence the precision of frequency estimation for the FFT and MUSIC. The standard deviation of the frequency estimate using the FFT is larger than for MUSIC; however, MUSIC is more sensitive to noise. The distorting effect of superpositions is less pronounced in experimental data.
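The simplest of the three estimators can be sketched as locating the FFT peak of a noisy windowed echo. The sampling rate, tone frequency, and noise level are illustrative assumptions.

```python
# Sketch: FFT-based frequency estimation of a single noisy, pulse-shaped
# (Hann-windowed) echo.
import numpy as np

rng = np.random.default_rng(5)
fs, n = 1000.0, 1000                 # sampling rate [Hz], samples (1 Hz bins)
t = np.arange(n) / fs
f_true = 125.0
echo = np.hanning(n) * np.cos(2 * np.pi * f_true * t)   # pulse-shaped tone
noisy = echo + 0.1 * rng.standard_normal(n)

spec = np.abs(np.fft.rfft(noisy))
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
f_est = freqs[np.argmax(spec)]
print(f"estimated frequency: {f_est:.1f} Hz")
```

With a single echo this peak-picking is reliable; the abstract's point is that superposing several echoes with random initial phases distorts the spectral peak, which is where MUSIC's subspace approach differs.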
Abstract: In this paper, we investigate a blind channel estimation method for multi-carrier CDMA systems that uses a subspace decomposition technique. This technique exploits the orthogonality between the noise subspace and the received user codes to obtain the channel of each user. Previous work used the Singular Value Decomposition (SVD), but the SVD is computationally expensive. In this paper, we instead use the URV decomposition, which serves as an intermediary between the QR decomposition and the SVD, to track the noise subspace of the received data. The URV decomposition has almost the same estimation performance as the SVD but a lower computational complexity.
Abstract: This paper focuses on operational risk measurement
techniques and on economic capital estimation methods. A data
sample of operational losses provided by an anonymous Central
European bank is analyzed using several approaches. Loss
Distribution Approach and scenario analysis method are considered.
Custom plausible loss events defined in a particular scenario are
merged with the original data sample and their impact on capital
estimates and on the financial institution is evaluated. Two main
questions are assessed: what is the most appropriate statistical
method to measure and model the operational loss data distribution,
and what is the impact of hypothetical plausible events on the
financial institution? The g&h distribution was found to be the most
suitable one for operational risk modeling. The method based on the
combination of historical loss events modeling and scenario analysis
provides reasonable capital estimates and allows for the measurement
of the impact of extreme events on banking operations.
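The capital-estimation step of the Loss Distribution Approach can be sketched by Monte Carlo aggregation of a frequency and a severity distribution. A lognormal severity stands in here for the paper's g&h distribution, and all parameters are illustrative assumptions, not the bank's data.

```python
# Sketch: Loss Distribution Approach - compound Poisson/lognormal annual
# losses, with economic capital read off as the 99.9% quantile.
import numpy as np

rng = np.random.default_rng(6)
n_years = 20_000            # simulated years
lam = 25.0                  # expected loss events per year (assumed)
mu, sigma = 10.0, 1.8       # lognormal severity parameters (assumed)

counts = rng.poisson(lam, n_years)
annual = np.array([rng.lognormal(mu, sigma, c).sum() for c in counts])

expected_loss = annual.mean()
capital = np.quantile(annual, 0.999)   # regulatory-style 99.9% VaR
print(f"expected annual loss: {expected_loss:.3e}, "
      f"capital (99.9%): {capital:.3e}")
```

Merging scenario-defined plausible losses into the simulated sample, as the paper does, fattens the tail and moves this 99.9% quantile, which is exactly the impact being measured.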
Abstract: In this paper, a target signal detection method using
multiple signal classification (MUSIC) algorithm is proposed. The
MUSIC algorithm is a subspace-based direction of arrival (DOA)
estimation method. The algorithm detects the DOAs of multiple
sources using the inverse of the eigenvalue-weighted eigenspectra. To
apply the algorithm to target signal detection for GSC-based
beamforming, we utilize its spectral response for the target DOA in
noisy conditions. For evaluation of the algorithm, the performance of
the proposed target signal detection method is compared with that of
the normalized cross-correlation (NCC), the fixed beamforming, and
the power ratio method. Experimental results show that the proposed
algorithm significantly outperforms the conventional methods in terms
of receiver operating characteristic (ROC) curves.
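The subspace-based DOA estimation that MUSIC performs can be sketched on a uniform linear array. The array geometry, source angle, SNR, and snapshot count are illustrative assumptions, not the paper's GSC beamforming setup.

```python
# Sketch: MUSIC pseudospectrum for one narrowband source on an 8-element
# uniform linear array; the peak locates the DOA.
import numpy as np

rng = np.random.default_rng(7)
m, snapshots = 8, 200            # sensors, snapshots
d = 0.5                          # element spacing [wavelengths]
theta_true = 20.0                # source DOA [deg]

def steering(theta_deg):
    k = 2 * np.pi * d * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(m))

s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
x = np.outer(steering(theta_true), s)          # one source
x += 0.1 * (rng.standard_normal((m, snapshots))
            + 1j * rng.standard_normal((m, snapshots)))  # white noise

R = x @ x.conj().T / snapshots                 # sample covariance
w, v = np.linalg.eigh(R)                       # eigenvalues ascending
En = v[:, :-1]                                 # noise subspace (1 source)

grid = np.arange(-90.0, 90.0, 0.1)
p = [1.0 / np.linalg.norm(En.conj().T @ steering(th)) ** 2 for th in grid]
print("estimated DOA:", grid[int(np.argmax(p))], "deg")
```

In the paper, it is the height of this spectral response at the target DOA, rather than the peak location itself, that is used as the detection statistic in noisy conditions.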
Abstract: The main design criteria for most hydraulic structures are
essentially based on the runoff or discharge of water. Two important
such measures are runoff and return period. Mostly, these measures
are calculated or estimated from stochastic data. Another feature of
hydrological data is their impreciseness. Therefore, in order to deal
with uncertainty and impreciseness, a new fuzzy method of evaluating
hydrological measures, based on Buckley's estimation method, is
developed. The method introduces triangular fuzzy numbers for the
different measures, in which both the uncertainty and impreciseness
concepts are considered. Besides, since another important
consideration in most hydrological studies is the comparison of a
measure across different months or years, a new fuzzy comparison
method, consistent with the special form of the proposed fuzzy
numbers, is also developed. Finally, to illustrate the methods more
explicitly, the two algorithms are tested on a simple example and a
real case study.
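The triangular-fuzzy-number idea can be sketched by representing an annual exceedance probability as a triple (a, b, c) and propagating it to the return period T = 1/P via alpha-cut interval arithmetic. The numbers below are illustrative assumptions, not Buckley's construction itself.

```python
# Sketch: alpha-cuts of a triangular fuzzy probability and the induced
# fuzzy return period T = 1/P.
def alpha_cut(tri, alpha):
    """Interval [lo, hi] of triangular fuzzy number tri = (a, b, c)
    at membership level alpha in [0, 1]."""
    a, b, c = tri
    return a + alpha * (b - a), c - alpha * (c - b)

p_fuzzy = (0.008, 0.010, 0.013)    # fuzzy annual exceedance probability

for alpha in (0.0, 0.5, 1.0):
    lo, hi = alpha_cut(p_fuzzy, alpha)
    # T = 1/P is decreasing, so the interval endpoints swap.
    print(f"alpha={alpha}: T in [{1 / hi:.1f}, {1 / lo:.1f}] years")
```

At alpha = 1 the interval collapses to the crisp value (here a 100-year return period), while lower alpha levels carry the impreciseness of the underlying data.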
Abstract: Ultra-wide band (UWB) communication is one of
the most promising technologies for high data rate wireless networks
for short range applications. This paper proposes a blind channel
estimation method, namely an IMM (Interactive Multiple Model)
based Kalman algorithm, for UWB OFDM systems. An IMM-based
Kalman filter is proposed to estimate the frequency-selective time-varying
channel. In the proposed method, two Kalman filters concurrently
estimate the channel parameters. The first Kalman filter, namely the
Static Model Filter (SMF), gives accurate results when the user is
static, while the second, the Dynamic Model Filter (DMF), gives
accurate results when the receiver is moving. The state transition
matrix in the SMF is assumed to be an identity matrix, whereas in the
DMF it is computed using the Yule-Walker equations. The resulting
filter estimate is computed as a weighted sum of the individual filter
estimates. The proposed method is compared with other existing
channel estimation methods.
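The two-filter combination can be sketched with a simplified IMM-style blend: two scalar Kalman filters (an identity "static" model and an AR(1) "dynamic" model) run in parallel and their estimates are weighted by innovation likelihoods. All parameters are illustrative assumptions, and the full IMM state-mixing step is omitted; this is not the paper's UWB OFDM implementation.

```python
# Sketch: two parallel scalar Kalman filters whose estimates are blended
# by the likelihood of each filter's innovation (IMM-style weighting).
import numpy as np

rng = np.random.default_rng(8)

def kf_step(x, P, z, F, Q, R):
    """One predict/update step of a scalar KF; returns
    (estimate, covariance, innovation likelihood)."""
    xp, Pp = F * x, F * F * P + Q
    S = Pp + R
    nu = z - xp
    lik = np.exp(-0.5 * nu * nu / S) / np.sqrt(2 * np.pi * S)
    K = Pp / S
    return xp + K * nu, (1 - K) * Pp, lik

# Slowly varying "channel tap" observed in noise.
h = np.cumsum(0.05 * rng.standard_normal(100)) + 1.0
z = h + 0.1 * rng.standard_normal(100)

x_s = x_d = 0.0
P_s = P_d = 1.0
w_s = w_d = 0.5
for zk in z:
    x_s, P_s, l_s = kf_step(x_s, P_s, zk, F=1.0, Q=1e-4, R=0.01)   # "SMF"
    x_d, P_d, l_d = kf_step(x_d, P_d, zk, F=0.95, Q=5e-3, R=0.01)  # "DMF"
    w_s, w_d = w_s * l_s, w_d * l_d                 # update model weights
    w_s, w_d = w_s / (w_s + w_d), w_d / (w_s + w_d)
    x_imm = w_s * x_s + w_d * x_d                   # blended estimate

print(f"final weights: static={w_s:.2f}, dynamic={w_d:.2f}, "
      f"estimate={x_imm:.2f}")
```

The weights adapt automatically: whichever model better explains the recent innovations dominates the blended channel estimate.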
Abstract: Independent component analysis can estimate unknown
source signals from their mixtures under the assumption that the
source signals are statistically independent. However, in a real
environment, the separation performance often deteriorates because
the number of source signals differs from the number of sensors.
In this paper, we propose a method for estimating the number of
sources based on the joint distribution of the observed signals
under a two-sensor configuration. Several simulation results show
that the number of sources coincides with the number of peaks in
the histogram of the distribution. The proposed method can estimate
the number of sources even when it is larger than the number of
observed signals. The method has also been verified by several
experiments.
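The idea behind the method can be sketched directly: with two sensors and sparse sources, the scatter of the joint observations concentrates along one direction per source, so peaks in a histogram of the mixture angle count the sources, even when there are three sources and only two sensors. The mixing matrix and the one-source-active-at-a-time sparsity model are illustrative assumptions.

```python
# Sketch: counting sources from the angle histogram of a two-sensor
# mixture of three sparse sources.
import numpy as np

rng = np.random.default_rng(9)
n = 9000
# Three sparse sources: at each sample, only one source is active.
active = rng.integers(0, 3, n)
s = np.zeros((3, n))
s[active, np.arange(n)] = rng.laplace(size=n)

A = np.array([[1.0, 0.6, -0.5],       # 2 x 3 mixing matrix (assumed)
              [0.2, 1.0, 0.8]])
x = A @ s

theta = np.arctan2(x[1], x[0]) % np.pi     # mixture direction per sample
hist, _ = np.histogram(theta, bins=60, range=(0.0, np.pi))
# Count local maxima above a small occupancy threshold.
peaks = sum(1 for i in range(1, 59)
            if hist[i] > hist[i - 1] and hist[i] >= hist[i + 1]
            and hist[i] > n / 100)
print("estimated number of sources:", peaks)
```

Each histogram peak corresponds to one column direction of the mixing matrix, which is why the peak count recovers the source count without separating the signals.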
Abstract: The Newton-Raphson state estimation method using the bus
admittance matrix remains an efficient and popular method for
estimating the state variables. The elements of the Jacobian matrix are
computed from standard expressions which lack physical significance.
In this paper, the elements of the state estimation Jacobian matrix are
obtained by considering the power flow measurements in the network
elements. These elements are processed one by one, and the Jacobian
matrix H is updated suitably in a simple manner. The constructed
Jacobian matrix H is integrated with the Weighted Least Squares
method to estimate the state variables. The suggested procedure has
been successfully tested on IEEE standard systems.
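The Weighted Least Squares step that the constructed Jacobian H feeds into can be sketched on a small linear toy problem standing in for the power-system measurement equations; the state, Jacobian, and noise levels below are illustrative assumptions.

```python
# Sketch: one WLS estimation step, x_hat = (H^T W H)^-1 H^T W z,
# with weights taken as inverse measurement variances.
import numpy as np

rng = np.random.default_rng(10)
x_true = np.array([1.0, 0.5])          # toy "state" vector (assumed)

H = np.array([[1.0, 0.0],              # measurement Jacobian (assumed)
              [0.0, 1.0],
              [1.0, -1.0],
              [2.0, 1.0]])
sigma = np.array([0.01, 0.01, 0.02, 0.02])   # measurement std devs
z = H @ x_true + sigma * rng.standard_normal(4)

W = np.diag(1.0 / sigma ** 2)          # weights = inverse variances
# Solve the normal equations (H^T W H) x = H^T W z.
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print("estimated state:", x_hat)
```

In the Newton-Raphson setting this solve is applied iteratively to state increments, with H rebuilt (or, as in the paper, incrementally updated) at each iteration.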