Abstract: The evaluation of the energy release rate and the centre Crack
Opening Displacement (COD) for circumferentially Through-Wall
Cracked (TWC) pipes is an important issue in the assessment of the
critical crack length for unstable fracture. The ability to predict crack
growth continues to be an important component of research for
several structural materials. Crack growth predictions can aid the
understanding of the useful life of a structural component and the
determination of inspection intervals and criteria. In this context,
studies were carried out at CSIR-SERC on Nuclear Power Plant
(NPP) piping components subjected to monotonic as well as cyclic
loading to assess the damage due to crack growth caused by low-cycle
fatigue in circumferentially TWC pipes.
Abstract: Software project effort estimation is frequently seen
as complex and expensive for individual software engineers.
Software production is in a crisis. It suffers from excessive costs.
Software production is often out of control. It has been suggested that
software production is out of control because we do not measure.
You cannot control what you cannot measure. During the last decade, a
number of studies on cost estimation have been conducted.
Metric-set selection has a vital role in software cost estimation
studies, yet its importance has been ignored, especially in neural network
based studies. In this study we explore the reasons for these
disappointing results and implement different neural network
models using an augmented set of new metrics. The results obtained are
compared with previous studies using traditional metrics. To be able
to make comparisons, two types of data have been used. The first
part of the data is taken from the Constructive Cost Model
(COCOMO'81) which is commonly used in previous studies and the
second part is collected according to new metrics in a leading
international company in Turkey. The accuracy of the selected
metrics and the data samples are verified using statistical techniques.
The model presented here is based on Multi-Layer Perceptron
(MLP). Another difficulty associated with cost estimation studies
is the fact that data collection requires time and care. To make
more thorough use of the samples collected, the k-fold cross-validation
method is also implemented. It is concluded that, as long as an
accurate and quantifiable set of metrics is defined and measured
correctly, neural networks can be applied in software cost estimation
studies with success.
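The k-fold cross-validation procedure mentioned above can be sketched as follows; this is a minimal pure-Python illustration (the function names and splitting strategy are ours, not the paper's):

```python
import random

def k_fold_indices(n_samples, k, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(samples, k, train_fn, eval_fn):
    """Train on k-1 folds, evaluate on the held-out fold; return per-fold scores."""
    folds = k_fold_indices(len(samples), k)
    scores = []
    for test_idx in folds:
        test_set = [samples[j] for j in test_idx]
        train_set = [samples[j] for f in folds if f is not test_idx for j in f]
        model = train_fn(train_set)          # e.g. fit an MLP on the training folds
        scores.append(eval_fn(model, test_set))
    return scores
```

Averaging the per-fold scores gives a less sample-hungry accuracy estimate than a single train/test split, which is the motivation the abstract gives for using it.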
Abstract: Eight difference schemes and five limiters are applied to the numerical computation of the Riemann problem. The resolution of discontinuities produced by each scheme is compared. Numerical dissipation and its estimation are discussed. The results show that controlling the numerical dissipation of each scheme is vital to improving its accuracy and stability. The MUSCL methodology is an effective approach to increasing computational efficiency and resolution. Limiters should be selected appropriately by balancing compressive and diffusive performance.
Abstract: Design for cost (DFC) is a method that reduces life
cycle cost (LCC) from the angle of designers. Multiple domain
features mapping (MDFM) methodology was given in DFC. Using
MDFM, we can use design features to estimate the LCC. From the
angle of DFC, the design features of family cars were obtained, such
as all dimensions, engine power and emission volume. At the
conceptual design stage, cars' LCCs were estimated using back
propagation (BP) artificial neural networks (ANN) method and
case-based reasoning (CBR). Hamming space was used to measure the
similarity among cases in CBR method. Levenberg-Marquardt (LM)
algorithm and genetic algorithm (GA) were used in ANN. The
differences between the CBR and ANN LCC estimation models were
presented. ANN and CBR each have their own shortcomings, and
combining them improved the accuracy of the results. Firstly, ANN
was used to select the design features that affect LCC. Secondly, the
LCC estimation results of the ANN were used to raise the accuracy of
the LCC estimation in the CBR method. Thirdly, ANN was used to
estimate the LCC errors and to correct the errors in the CBR
estimation results when the accuracy was insufficient. Finally,
economical family cars and a sport utility vehicle (SUV) were given
as LCC estimation cases using this hybrid approach combining
ANN and CBR.
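The Hamming-space similarity used for case retrieval in the CBR method can be illustrated with a short sketch (assuming binary-coded design feature vectors; all names are illustrative):

```python
def hamming_similarity(case_a, case_b):
    """Similarity of two equal-length binary feature vectors:
    the fraction of positions where the coded design features agree."""
    assert len(case_a) == len(case_b)
    matches = sum(a == b for a, b in zip(case_a, case_b))
    return matches / len(case_a)

def most_similar_case(query, case_base):
    """Retrieve the stored case closest to the query in Hamming space."""
    return max(case_base, key=lambda case: hamming_similarity(query, case))
```

In a CBR system for LCC estimation, the retrieved case's known life cycle cost would then be adapted to the query design.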
Abstract: The full search block matching algorithm is widely used for hardware implementation of motion estimators in video compression algorithms. In this paper we propose a new architecture, which consists of a 2D parallel processing unit and a 1D unit, both working in parallel. The proposed architecture reduces both data access power and computational power, which are the main causes of power consumption in integer motion estimation. It also completes the operations with nearly the same number of clock cycles as a 2D systolic array architecture. In this work the sum of absolute differences (SAD), the most repeated operation in block matching, is calculated in two steps. The first step is to calculate the SAD for alternate rows with the 2D parallel unit. If the SAD calculated by the parallel unit is less than the stored minimum SAD, the SAD of the remaining rows is calculated by the 1D unit. Early termination, which stops avoidable computations, has been achieved with the help of the alternate-rows method proposed in this paper and by finding a low initial SAD value based on motion vector prediction. Data reuse has been applied to the reference blocks in the same search area, which significantly reduces the memory accesses.
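The two-step SAD computation with early termination described above can be sketched as a scalar software model of the hardware datapath (the block representation and names are our assumptions):

```python
def sad_rows(cur_block, ref_block, rows):
    """Sum of absolute differences restricted to the given row indices."""
    return sum(abs(c - r)
               for i in rows
               for c, r in zip(cur_block[i], ref_block[i]))

def two_step_sad(cur_block, ref_block, best_sad):
    """Two-step SAD with early termination: compute the SAD on alternate
    (even) rows first, as the 2D parallel unit would; only if that partial
    SAD is below the current minimum, finish the remaining (odd) rows,
    as the 1D unit would."""
    n = len(cur_block)
    partial = sad_rows(cur_block, ref_block, range(0, n, 2))
    if partial >= best_sad:   # partial SAD already too large: terminate early
        return None           # this candidate cannot beat the current best
    return partial + sad_rows(cur_block, ref_block, range(1, n, 2))
```

A low initial `best_sad` from motion vector prediction makes the early-exit branch fire more often, which is where the power saving comes from.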
Abstract: Modeling of distributed systems allows us to
represent their whole functionality. A working system instance
rarely fulfils the whole functionality represented by the model; usually
some parts of this functionality need to be accessible only periodically.
A reporting system based on the Data Warehouse concept seems to
be an intuitive example of a system in which some of the functionality is
required only from time to time. When analyzing the enterprise risk
associated with a periodical change of system functionality, we
should consider not only the inaccessibility of the components
(objects) but also their functions (methods), and the impact of such a
situation on the system functionality from the business point of view.
In the paper we suggest that the risk attributes should be estimated
from the risk attributes specified at the requirements level (Use Cases in
the UML model) on the basis of information about the structure of
the model (presented at other levels of the UML model). We argue
that it is desirable to consider the influence of periodical changes in
requirements on the enterprise risk estimation. Finally, a
proposition for such a solution based on the UML system model is
presented.
Abstract: The cables in a nuclear power plant are designed to be
used for about 40 years in a safe operating environment. However, the
heat and radiation in the nuclear power plant cause rapid
performance deterioration of the cables in nuclear vessels and heat
exchangers, which makes cable lifetime estimation necessary. The most
accurate method of estimating the cable lifetime is to evaluate the
cables in a laboratory. However, removing cables while the plant is
operating is not allowed for safety and cost reasons. In this paper, a
robot system to estimate the cable lifetime in nuclear power plants is
developed and tested. The developed robot system can calculate a
modulus value to estimate the cable lifetime even when the nuclear
power plant is in operation.
Abstract: Monitoring tool flank wear without affecting
throughput is considered a prudent method in production
technology. The examination has to be done without affecting the
machining process. In this paper we propose a novel method to
determine tool flank wear by observing the sound signals
emitted during the turning process. The work-piece materials used
here were steel and aluminum, and the cutting insert was of carbide
material. Two different cutting speeds were used in this work. The
feed rate and the cutting depth were held constant whereas the flank wear
was a variable. The sound signals emitted by a fresh tool (0 mm flank
wear), a slightly worn tool (0.2-0.25 mm flank wear) and a severely
worn tool (0.4 mm and above flank wear) during the turning process were
recorded separately using a highly sensitive microphone. Analysis
using Singular Value Decomposition (SVD) was done on these sound
signals to extract the characteristic sound components. The
results showed that an increase in tool flank wear correlates with an
increase in the values of the SVD features extracted from the sound
signals for both materials. Hence it can be concluded that wear
monitoring of the tool flank during the turning process using SVD features
with Fuzzy C-means classification on the emitted sound signal is
a promising and relatively simple method.
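The SVD feature extraction step can be sketched roughly as follows, assuming the recorded sound signal is first stacked into a frame matrix (the frame length, hop size and feature count below are illustrative choices, not the paper's values):

```python
import numpy as np

def svd_features(signal, frame_len=64, n_features=5):
    """Stack overlapping frames of the recorded sound into a matrix and
    take its leading singular values as wear-sensitive features."""
    hop = frame_len // 2
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    X = np.array(frames)
    s = np.linalg.svd(X, compute_uv=False)   # singular values, descending
    return s[:n_features]
```

Per the abstract's finding, these feature values grow with flank wear, so a classifier (Fuzzy C-means in the paper) can separate fresh, slightly worn and severely worn tools.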
Abstract: Ultra-wide band (UWB) communication is one of
the most promising technologies for high data rate wireless networks
for short range applications. This paper proposes a blind channel
estimation method, namely an Interactive Multiple Model (IMM)
based Kalman algorithm, for UWB OFDM systems. The IMM based Kalman
filter is proposed to estimate the frequency-selective time-varying
channel. In the proposed method, two Kalman filters concurrently
estimate the channel parameters. The first Kalman filter, namely the
Static Model Filter (SMF), gives accurate results when the user is static,
while the second, the Dynamic Model Filter (DMF), gives accurate
results when the receiver is moving. The state transition matrix in the
SMF is assumed to be an identity matrix, whereas in the DMF it is
computed using the Yule-Walker equations. The resultant filter estimate
is computed as a weighted sum of the individual filter estimates. The
proposed method is compared with other existing channel estimation
methods.
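The weighted combination of the two filter estimates can be illustrated with a simplified scalar sketch, using likelihood-based model weights (this omits the full IMM interaction/mixing stage, and all names are illustrative):

```python
import math

def imm_combine(est_smf, est_dmf, innov_smf, innov_dmf, var_smf, var_dmf,
                prior_smf=0.5, prior_dmf=0.5):
    """Simplified IMM combination step for a scalar channel parameter:
    weight the SMF and DMF estimates by the Gaussian likelihoods of
    their innovations, then take the weighted sum."""
    def gauss_like(innov, var):
        return math.exp(-innov * innov / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
    w_smf = prior_smf * gauss_like(innov_smf, var_smf)
    w_dmf = prior_dmf * gauss_like(innov_dmf, var_dmf)
    total = w_smf + w_dmf
    w_smf, w_dmf = w_smf / total, w_dmf / total   # normalized model probabilities
    return w_smf * est_smf + w_dmf * est_dmf, (w_smf, w_dmf)
```

When the receiver is static the SMF innovation stays small, so its weight dominates; when it moves, the DMF takes over, which is the behaviour the abstract describes.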
Abstract: Air conditioning is used mainly as a means of human
comfort. It is used more often in countries where the
daily temperatures are high. Scientifically, air conditioning is defined
as the process of controlling the moisture, cooling, heating and cleaning
of air. Without proper estimation of the cooling load, a great amount of
energy is wasted, because an unsuitable air conditioning system does
not account for the heat gains from the surroundings: either
the room is too big and the air conditioner has to
use more energy to cool it, or the air conditioner is too
small for the room. This study basically develops a program to
calculate the cooling load. The program makes it easy to produce a
cooling load estimate and, furthermore, helps to compare hourly and
yearly cooling load estimations. Based on previous studies, the
software developed so far is not user-friendly, which can be a problem
for individuals without proper knowledge of cooling load calculation.
Easy access and user-friendliness should be the main
objectives of any design. This program allows the cooling load to be
estimated by any user, rather than by using rules of
thumb. Several limitations of the case study were checked to ensure that it
meets Malaysian building specifications. Finally, validation is done
by comparing manual calculations with the developed program.
Abstract: Independent component analysis can estimate unknown
source signals from their mixtures under the assumption that the
source signals are statistically independent. However, in a real environment,
the separation performance often deteriorates because
the number of source signals differs from the number of sensors.
In this paper, we propose an estimation method for the number of
the sources based on the joint distribution of the observed signals
under a two-sensor configuration. From several simulation results, it
is found that the number of sources coincides with the number of
peaks in the histogram of the distribution. The proposed method can
estimate the number of sources even if it is larger than the number of
observed signals. The proposed method has been verified by
several experiments.
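The peak-counting idea can be illustrated on a one-dimensional histogram (a simplification: the paper works with the joint distribution of the two observed signals, and this local-maximum test is only one possible peak detector):

```python
def estimate_num_sources(hist):
    """Count local maxima (peaks) in a histogram of bin counts; per the
    abstract's finding, the peak count matches the number of sources."""
    peaks = 0
    for i in range(1, len(hist) - 1):
        if hist[i] > hist[i - 1] and hist[i] >= hist[i + 1]:
            peaks += 1
    return peaks
```

Because peaks are counted rather than sensors, the estimate can exceed the number of observed signals, which is the key property claimed above.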
Abstract: Variational methods for optical flow estimation are
known for their excellent performance. The method proposed by Brox
et al. [5] exemplifies the strength of that framework. It combines
several concepts into a single energy functional that is then minimized
according to a clear numerical procedure. In this paper we propose
a modification of that algorithm starting from the spatiotemporal
gradient constancy assumption. The numerical scheme allows us to
establish the connection between our model and the CLG(H) method
introduced in [18]. Experimental evaluation carried out on synthetic
sequences shows the significant superiority of the spatial variant of
the proposed method. A comparison between the methods on a real-world
sequence is also included.
Abstract: A fast adaptive Tomlinson-Harashima (T-H) precoder structure is presented for indoor wireless communications, where the channel may vary due to rotation and small movements of the mobile terminal. A frequency-selective slow fading channel which is time-invariant over a frame is assumed. In this adaptive T-H precoder, the feedback coefficients are updated at the end of every uplink frame by using a system identification technique for channel estimation, in contrast to the conventional T-H precoding concept, where the channel is estimated at the start of the uplink frame via the Wiener solution. In the conventional T-H precoder it is assumed that the channel is time-invariant over both the uplink and downlink frames. By assuming instead that the channel is time-invariant over only one frame, the proposed adaptive T-H precoder yields better performance than the conventional T-H precoder if the channel varies in the uplink after the training sequence is received.
Abstract: In this paper, the estimation of the stress-strength
parameter R = P(Y < X), when X and Y are independent and both
are Lomax distributions with the common scale parameters but
different shape parameters is studied. The maximum likelihood
estimator of R is derived. Assuming that the common scale parameter
is known, the Bayes estimator and an exact confidence interval of R are
discussed. A simulation study has been carried out to investigate the
performance of the different proposed methods.
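For the Lomax stress-strength model with a known common scale, R = P(Y < X) reduces to α_Y / (α_X + α_Y), and the shape MLEs have a closed form (α̂ = n / Σ log(1 + xᵢ/λ)). A minimal sketch under these standard assumptions (our own illustration, not the paper's code):

```python
import math

def lomax_shape_mle(sample, scale):
    """MLE of the Lomax shape parameter when the scale is known:
    alpha_hat = n / sum(log(1 + x_i / scale))."""
    return len(sample) / sum(math.log(1.0 + x / scale) for x in sample)

def stress_strength_mle(x_sample, y_sample, scale):
    """MLE of R = P(Y < X) for independent Lomax X, Y with a common
    known scale: R = alpha_Y / (alpha_X + alpha_Y), with the shapes
    replaced by their MLEs (invariance of the MLE)."""
    a_x = lomax_shape_mle(x_sample, scale)
    a_y = lomax_shape_mle(y_sample, scale)
    return a_y / (a_x + a_y)
```

Note the symmetry R̂(X, Y) + R̂(Y, X) = 1, which is a quick sanity check on any implementation.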
Abstract: Estimation of stature is an important step in developing a biological profile for human identification. It may provide a valuable indicator for an unknown individual in a population. The aim of this study was to analyse the relationship between stature and lower limb dimensions in the Malaysian population. The sample comprised 100 corpses, including 69 males and 31 females aged between 20 and 90 years. The parameters measured were stature, thigh length, lower leg length, leg length, foot length, foot height and foot breadth. The results showed that the mean values in males were significantly higher than those in females (P < 0.05). There were significant correlations between lower limb dimensions and stature. Cross-validation of the equation on 100 individuals showed close approximation between known stature and estimated stature. It was concluded that lower limb dimensions are useful for the estimation of stature, which should be validated in future studies.
Abstract: In this paper, a data mining model for detecting financial and operational risk indicators in SMEs is presented. The identification of the risk factors, by clarifying the relationships between the variables, constitutes the discovery of knowledge from the financial and operational variables, and this automatic, estimation-oriented information discovery process coincides with the definition of data mining. During the formation of the model, an easy-to-understand, easy-to-interpret and easy-to-apply utilitarian model that does not require a theoretical background is targeted through the discovery of the implicit relationships between the data and the identification of the effect level of every factor. In addition, this paper is based on a project which was funded by The Scientific and Technological Research Council of Turkey (TUBITAK).
Abstract: The Newton-Raphson state estimation method using the bus
admittance matrix remains an efficient and very popular method to
estimate the state variables. The elements of the Jacobian matrix are computed
from standard expressions which lack physical significance. In this
paper, the elements of the state estimation Jacobian matrix are obtained
by considering the power flow measurements in the network elements.
These elements are processed one by one and the Jacobian matrix H is
updated suitably in a simple manner. The constructed Jacobian matrix
H is integrated with the Weighted Least Squares method to estimate the state
variables. The suggested procedure is successfully tested on IEEE
standard systems.
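One Gauss-Newton iteration of the Weighted Least Squares estimator that the constructed Jacobian H feeds into can be sketched as follows (a generic WLS update step, not the paper's element-by-element Jacobian construction):

```python
import numpy as np

def wls_state_update(H, W, z, h_x, x):
    """One Gauss-Newton iteration of WLS state estimation:
    solve (H^T W H) dx = H^T W (z - h(x)) and update x.
    H is the measurement Jacobian, W the diagonal weight matrix,
    z the measurement vector, h_x = h(x) the predicted measurements."""
    r = z - h_x                          # measurement residual
    G = H.T @ W @ H                      # gain matrix
    dx = np.linalg.solve(G, H.T @ W @ r)
    return x + dx
```

In the nonlinear power-flow case, H and h(x) are re-evaluated at each iterate and the update is repeated until dx falls below a tolerance; for a linear model a single step already gives the exact WLS solution.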
Abstract: In this paper, novel statistical-sampling-based equalization techniques and CNN based detection are proposed to increase the spectral efficiency of multiuser communication systems over fading channels. Multiuser communication combined with selective fading can result in interferences which severely deteriorate the quality of service in wireless data transmission (e.g. CDMA in mobile communication). The paper introduces new equalization methods to combat interferences by minimizing the Bit Error Rate (BER) as a function of the equalizer coefficients. This provides higher performance than traditional Minimum Mean Square Error equalization. Since the calculation of the BER as a function of the equalizer coefficients is of exponential complexity, statistical sampling methods are proposed to approximate the gradient, which yields fast equalization and performance superior to the traditional algorithms. Efficient estimation of the gradient is achieved by using stratified sampling and the Li-Silvester bounds. A simple mechanism is derived to identify the dominant samples in real time, for the sake of efficient estimation. The equalizer weights are adapted recursively by minimizing the estimated BER. The near-optimal performance of the new algorithms is also demonstrated by extensive simulations. The paper has also developed a Cellular Neural Network (CNN) based approach to detection. In this case fast quadratic optimization has been carried out by the CNN, whereas the task of the equalizer is to ensure the required template structure (sparseness) for the CNN. The performance of the method has also been analyzed by simulations.
Abstract: This paper addresses the problem of how one can
improve the performance of a non-optimal filter. First, the theoretical
question of a dynamical representation for a given time-correlated
random process is studied. It will be demonstrated that for a wide class
of random processes having a canonical form, there exists
a dynamical system that is equivalent in the sense that its output has the
same covariance function. It is shown that the dynamical approach is
more effective for simulating and estimating Markovian and non-Markovian
random processes, and is computationally less demanding,
especially as the dimension of the simulated processes increases.
Numerical examples and estimation problems in low-dimensional
systems are given to illustrate the advantages of the approach. A very
useful application of the proposed approach is shown for the
problem of state estimation in very high dimensional systems. Here a
modified filter for data assimilation in an oceanic numerical model
is presented, which proves to be very efficient thanks to the introduction
of a simple Markovian structure for the output prediction error process
and the adaptive tuning of some parameters of the Markov equation.
Abstract: The performance of sensor-less controlled induction
motor drive depends on the accuracy of the estimated speed.
Conventional estimation techniques, being mathematically complex,
require more execution time, resulting in poor dynamic response. The
nonlinear mapping capability and powerful learning algorithms of
neural networks provide a promising alternative for on-line speed
estimation. The on-line speed estimator requires the NN model to be
accurate, simpler in design, structurally compact and computationally
less complex to ensure faster execution and effective control in real
time implementation. This, in turn, depends to a large extent on the
type of Neural Architecture. This paper investigates three types of
neural architectures for on-line speed estimation and their
performance is compared in terms of accuracy, structural
compactness, computational complexity and execution time. The
suitable neural architecture for on-line speed estimation is identified
and the promising results obtained are presented.