Abstract: Turning operations can leave a thin surface layer with high tensile residual stresses, which can severely degrade the fatigue performance of the component. In this paper an analytical approach is presented to reconstruct the residual stress field from a limited, incomplete set of measurements. The Airy stress function is used as the primary unknown, so that the equilibrium equations are solved directly and the boundary conditions are satisfied. The new method offers the flexibility to impose the physical conditions that govern the behavior of residual stress, yielding a meaningful complete stress field. The analysis is also coupled with a least-squares approximation and a regularization method to stabilize the inverse problem. The power of the method is then demonstrated on experimental measurements, with good agreement between the model prediction and the measured residual stresses.
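For context, the textbook relations that make the Airy stress function usable this way (standard two-dimensional elasticity, not this paper's specific formulation): with stress function \(\varphi\),

```latex
\sigma_{xx} = \frac{\partial^2 \varphi}{\partial y^2}, \qquad
\sigma_{yy} = \frac{\partial^2 \varphi}{\partial x^2}, \qquad
\sigma_{xy} = -\frac{\partial^2 \varphi}{\partial x\,\partial y},
\qquad \nabla^4 \varphi = 0 .
```

The first three identities satisfy the equilibrium equations identically, and compatibility reduces to the biharmonic equation on \(\varphi\); the remaining freedom in \(\varphi\) is what the measurements and imposed physical conditions pin down.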
Abstract: This paper discusses the numerical analytic continuation of a function f(z) = f(x + iy) on a strip, where the data are given only approximately on the real axis and are assumed to be periodic. A truncated Fourier spectral method is introduced to deal with the ill-posedness of the problem. The theoretical results show that the discrepancy principle works well for this problem, and numerical results are given to show the efficiency of the method.
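The truncated Fourier idea can be sketched generically (a minimal illustration, not the paper's exact scheme or truncation rule): continuation into the strip multiplies mode k by a factor that grows like e^{|k|y}, so high modes of the noisy data must be cut off.

```python
import numpy as np

# Periodic samples of f on the real axis; continuation into the strip via
# the Fourier series, with modes above a truncation level K discarded
# (they would amplify data errors by roughly e^{|k| y}).
N = 64
x = 2 * np.pi * np.arange(N) / N
data = np.cos(x)                      # given (here noise-free) boundary data
c = np.fft.fft(data) / N              # Fourier coefficients c_k
k = np.fft.fftfreq(N, d=1.0 / N)      # integer wavenumbers

K = 8                                 # truncation level = regularization
c[np.abs(k) > K] = 0.0

def continue_f(z):
    """Evaluate the truncated series sum_k c_k e^{i k z} at complex z."""
    return np.sum(c * np.exp(1j * k * z))

z = 1.0 + 0.5j
approx = continue_f(z)
exact = np.cos(z)                     # analytic continuation of cos
```

In practice K would be tied to the noise level, e.g. via the discrepancy principle the abstract analyzes.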
Abstract: We propose a method for Web page ranking in which a novel theoretical model is introduced and tested on examples of order relationships among IP addresses. Ranking is induced by a convexity feature, which is learned from these examples through a self-organizing procedure. We model self-organizing learning from IP data as a semi-random convex polygon procedure, in which the vertices correspond to IP addresses. Building on recent developments in our regularization theory for convex polygons and the corresponding Euclidean-distance-based classification methods, we develop an algorithmic framework for learning ranking functions grounded in computational geometry. We show that our algorithm is generic, and present experimental results illustrating the potential of our approach. In addition, we demonstrate the generality of the approach by showing its possible use as a visualization tool for data from diverse domains, such as Public Administration and Education.
Abstract: The widely used Total Variation de-noising algorithm can preserve sharp edges while removing noise. However, because a fixed regularization parameter is applied over the entire image, small details and textures are often lost in the process. In this paper, we propose a modified Total Variation algorithm that better preserves smaller-scale features. This is done by letting an adaptive regularization parameter control the amount of de-noising in each region of the image, according to the local feature scale. Experimental results demonstrate the effectiveness of the proposed algorithm: compared with standard Total Variation, it better preserves smaller-scale features and shows better overall performance.
Abstract: Nondestructive testing in engineering leads to an inverse Cauchy problem for the Laplace equation. In this paper the nondestructive testing problem is formulated as a Laplace equation with third-kind boundary conditions. To find the unknown values on the boundary, the method of fundamental solutions is introduced and implemented. Because of the ill-posedness of the studied problems, the TSVD regularization technique is employed in combination with the L-curve and Generalized Cross Validation (GCV) criteria. Numerical results show that the TSVD method combined with the L-curve criterion is more efficient than the TSVD method combined with the GCV criterion.
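TSVD itself can be illustrated on a generic discretized ill-posed problem (a Hilbert matrix here; the truncation level k is fixed by hand, where the abstract selects it with the L-curve or GCV criterion):

```python
import numpy as np

# Truncated SVD regularization: keep only the k dominant singular triplets,
# discarding the ones through which noise in b would be amplified.
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)  # Hilbert matrix
x_true = np.ones(n)
rng = np.random.default_rng(1)
b = A @ x_true + 1e-6 * rng.standard_normal(n)   # noisy right-hand side

U, s, Vt = np.linalg.svd(A)

def tsvd_solve(k):
    """TSVD solution x_k = sum_{i<k} (u_i^T b / s_i) v_i."""
    return Vt[:k].T @ ((U.T @ b)[:k] / s[:k])

x_naive = np.linalg.solve(A, b)      # unregularized: noise blows up
x_tsvd = tsvd_solve(6)
err_naive = np.linalg.norm(x_naive - x_true)
err_tsvd = np.linalg.norm(x_tsvd - x_true)
```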
Abstract: Although much research in cluster analysis has considered feature weights, little effort has been devoted to sample weights. Recently, Yu et al. (2011) used a probability distribution over a data set to represent its sample weights, and on that basis proposed sample-weighted clustering algorithms. In this paper, we give a sample-weighted version of generalized fuzzy clustering regularization (GFCR), called sample-weighted GFCR (SW-GFCR). Experimental results and comparisons demonstrate that the proposed SW-GFCR is more effective than most existing clustering algorithms.
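The sample-weighting idea can be sketched with a toy weighted fuzzy c-means iteration (illustrative only; GFCR adds a regularization term to the objective that is not reproduced here):

```python
import numpy as np

# Fuzzy c-means where each sample j carries a weight w_j (a probability
# distribution over the data set); the weights enter the center update.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
               rng.normal(5.0, 0.3, (50, 2))])
w = np.full(len(X), 1.0 / len(X))               # uniform sample weights
c, m = 2, 2.0                                    # clusters, fuzzifier
centers = np.array([[1.0, 1.0], [4.0, 4.0]])     # simple initial guess

for _ in range(50):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + 1e-12
    u = d2 ** (-1.0 / (m - 1))
    u /= u.sum(axis=1, keepdims=True)            # memberships u_ij
    wu = w[:, None] * u ** m                     # sample-weighted memberships
    centers = (wu.T @ X) / wu.sum(axis=0)[:, None]
```

With uniform weights this reduces to ordinary FCM; non-uniform w_j shifts the centers toward the more heavily weighted samples.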
Abstract: This paper presents an improved image segmentation model with edge-preserving regularization based on the piecewise-smooth Mumford-Shah functional. A level set formulation is adopted for minimizing the Mumford-Shah functional in segmentation, and the corresponding partial differential equations are solved with a backward Euler discretization. To encourage edge-preserving regularization, a new edge indicator function is introduced into the level set framework: all the grid points used to locate the level set curve are taken into account to avoid blurring the edges, and a nonlinear smoothness constraint function is applied as the regularization term so that the image is smoothed in the isophote direction instead of the gradient direction. In the implementation, several strategies are studied to improve the efficiency of the algorithm, including a new scheme for extending the computation of u+ and u- to the grid points and for speeding up convergence. The resulting algorithm has been implemented, compared with previous methods, and shown to be efficient in several test cases.
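The "smooth in the isophote direction instead of the gradient direction" idea can be sketched generically (this is textbook diffusion along level lines, not the paper's exact constraint function):

```python
import numpy as np

# Diffusion restricted to the isophote (level-line) direction: the update is
# u_t = u_TT, where u_TT is the second derivative of u along the direction
# perpendicular to grad u.  Smoothing along edges, not across them.
rng = np.random.default_rng(4)
u = np.zeros((40, 40))
u[:, 20:] = 1.0                                 # vertical step edge
u += 0.05 * rng.standard_normal(u.shape)
noisy = u.copy()

dt, eps = 0.1, 1e-8
for _ in range(20):
    ux = (np.roll(u, -1, 1) - np.roll(u, 1, 1)) / 2.0
    uy = (np.roll(u, -1, 0) - np.roll(u, 1, 0)) / 2.0
    uxx = np.roll(u, -1, 1) - 2 * u + np.roll(u, 1, 1)
    uyy = np.roll(u, -1, 0) - 2 * u + np.roll(u, 1, 0)
    uxy = (np.roll(np.roll(u, -1, 1), -1, 0) - np.roll(np.roll(u, -1, 1), 1, 0)
           - np.roll(np.roll(u, 1, 1), -1, 0) + np.roll(np.roll(u, 1, 1), 1, 0)) / 4.0
    # second derivative along the isophote direction
    num = ux ** 2 * uyy - 2 * ux * uy * uxy + uy ** 2 * uxx
    u = u + dt * num / (ux ** 2 + uy ** 2 + eps)
```

Gradient-direction smoothing would instead use the complementary term and blur the step edge; the isophote-direction term leaves it largely intact.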
Abstract: Statistical learning theory, developed by Vapnik, is a learning theory based on the Vapnik-Chervonenkis dimension, and it has served as a good analytical tool for learning models. Learning theories in general suffer from several problems, among them local optima and over-fitting. Statistical learning theory shares these problems, because the kernel type, the kernel parameters, and the regularization constant C are determined subjectively, by the art of the researcher. We therefore propose an evolutionary statistical learning theory to address the problems of the original statistical learning theory; it is constructed by combining evolutionary computing with statistical learning theory. We verify the improved performance of the evolutionary statistical learning theory using data sets from the KDD Cup.
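The evolutionary selection of (C, kernel parameter) can be sketched with a toy genetic search. The fitness below is a synthetic stand-in for cross-validated accuracy, peaked at C = 10, gamma = 0.01; it is purely illustrative and not from the paper.

```python
import numpy as np

# Tiny (mu + lambda)-style evolutionary search over (log10 C, log10 gamma).
rng = np.random.default_rng(3)

def fitness(p):
    # Hypothetical surrogate for validation accuracy; a real run would
    # plug in cross-validation of the trained kernel machine here.
    logC, logg = p
    return -((logC - 1.0) ** 2 + (logg + 2.0) ** 2)

pop = rng.uniform(-4, 4, size=(20, 2))           # initial population
for gen in range(60):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]       # selection: keep best half
    children = parents + rng.normal(0, 0.3, parents.shape)  # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]  # best hyperparameter pair
```

Because the parents survive each generation, the best individual never gets worse, and mutation concentrates the search around the current optimum.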
Abstract: This work analyzes the classical heat transfer equation regularized with the Maxwell-Cattaneo transfer law. Computer simulations are performed in the MATLAB environment. Numerical experiments are first carried out on the classical Fourier equation; the Maxwell-Cattaneo law is then considered. The corresponding equation is regularized with a balancing diffusion term to stabilize the discretization scheme, with the time and space steps adjusted accordingly. Several cases including a convective term in the model equations are discussed, and results are given. It is shown that, in the convective case, limiting conditions on the regularizing parameters must be satisfied for the Maxwell-Cattaneo regularization to give physically acceptable solutions. In all valid cases, uniform convergence to the solution of the initial heat equation with the Fourier law is observed, even in the nonlinear case.
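The time/space step adjustment mentioned for the classical Fourier stage amounts to the standard explicit stability constraint. A minimal FTCS sketch (in Python rather than the paper's MATLAB; parameters are illustrative):

```python
import numpy as np

# Forward-time centred-space scheme for u_t = a u_xx on [0, 1] with u = 0 at
# both ends; explicit stability requires r = a*dt/dx**2 <= 1/2.
a, L, nx = 1.0, 1.0, 51
dx = L / (nx - 1)
r = 0.4                                  # stable diffusion number
dt = r * dx ** 2 / a                     # time step adjusted to the space step
x = np.linspace(0.0, L, nx)
u = np.sin(np.pi * x)                    # initial profile

for _ in range(500):
    u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])

# The first Fourier mode decays as exp(-a * pi**2 * t)
t = 500 * dt
exact = np.sin(np.pi * x) * np.exp(-np.pi ** 2 * a * t)
```

The Maxwell-Cattaneo stage replaces u_t by tau*u_tt + u_t and, per the abstract, needs an extra balancing diffusion term (plus the stated parameter limits in the convective case) to keep such explicit schemes stable.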
Abstract: Compensating physiological motion in minimally invasive cardiac surgery has become an attractive problem, since such surgery outperforms traditional cardiac procedures and offers remarkable benefits. Owing to space restrictions, computer vision techniques have proven to be the most practical and suitable solution. However, the lack of robustness and efficiency of existing methods makes physiological motion compensation an open and challenging problem. This work focuses on increasing robustness and efficiency by exploring the classes of ℓ1- and ℓ2-regularized optimization, emphasizing the use of explicit regularization. Both approaches are based on natural features of the heart and use intensity information. The results point to the ℓ1-regularized optimization class as the best choice: it had the lowest computational cost and the smallest average error, and it proved to work even under complex deformations.
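For intuition about the two regularization classes being compared, here is a generic least-squares sketch (unrelated to the heart-motion features themselves): ℓ2 (ridge) has a closed form, while ℓ1 is solved iteratively, e.g. by ISTA, and yields sparse solutions.

```python
import numpy as np

# Generic l2- vs l1-regularized least squares on a sparse recovery problem.
rng = np.random.default_rng(5)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100)
x_true[[3, 30, 77]] = [2.0, -1.5, 1.0]          # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(60)

# l2 (ridge): closed form (A^T A + lam I) x = A^T b
lam2 = 0.1
x_l2 = np.linalg.solve(A.T @ A + lam2 * np.eye(100), A.T @ b)

# l1 (lasso) via ISTA: gradient step of size 1/L, then soft-thresholding
lam1 = 0.05
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of grad
x_l1 = np.zeros(100)
for _ in range(500):
    g = x_l1 - (A.T @ (A @ x_l1 - b)) / L
    x_l1 = np.sign(g) * np.maximum(np.abs(g) - lam1 / L, 0.0)
```

The soft-thresholding step zeroes out small coefficients, which is what gives the ℓ1 class its sparsity (and often its speed, since few active variables remain).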
Abstract: This article presents a short discussion of optimum neighborhood size selection in a spherical self-organizing feature map (SOFM). The majority of the literature on SOFMs has addressed the selection of optimal learning parameters for Cartesian-topology SOFMs. However, experience with spherical SOFMs suggests that the learning behavior of Cartesian-topology SOFMs does not translate directly. This article presents an approach for estimating the neighborhood size of a spherical SOFM from the data. It adopts the L-curve criterion, previously suggested for choosing the regularization parameter in linear systems whose right-hand side is contaminated with noise. Simulation results are presented on two artificial 4D data sets from the coupled Hénon-Ikeda map.
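The borrowed L-curve criterion can be illustrated in its original setting, Tikhonov regularization of a noisy linear system (a generic sketch with a crude corner detector, not the article's adaptation to neighborhood size):

```python
import numpy as np

# Sweep lambda, record residual norm vs. solution norm, and pick the corner
# of the L-shaped log-log curve as the regularization parameter.
rng = np.random.default_rng(6)
n = 20
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)  # ill-posed
x_true = np.ones(n)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

lams = np.logspace(-12, 2, 60)
rho, eta, xs = [], [], []
for lam in lams:
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    xs.append(x)
    rho.append(np.linalg.norm(A @ x - b))        # residual norm
    eta.append(np.linalg.norm(x))                # solution norm

# crude corner: point closest to the lower-left of the normalized log-log curve
lr, le = np.log(rho), np.log(eta)
lr = (lr - lr.min()) / (lr.max() - lr.min())
le = (le - le.min()) / (le.max() - le.min())
corner = int(np.argmin(lr ** 2 + le ** 2))

err_corner = np.linalg.norm(xs[corner] - x_true)
err_small = np.linalg.norm(xs[0] - x_true)       # under-regularized
err_large = np.linalg.norm(xs[-1] - x_true)      # over-regularized
```

More careful corner detectors use the curvature of the log-log curve; the point is the same, balancing residual against solution size without knowing the noise level.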
Abstract: Super-resolution image reconstruction recovers a high-resolution image from a set of shifted, blurred, and decimated images, and has therefore become an active research branch in the field of image restoration. In general, super-resolution image restoration is an ill-posed problem. Prior knowledge about the image can be incorporated to make the problem well-posed, which leads to regularization methods. In current regularization methods, however, the regularization parameter is in some cases selected by experience, and other techniques incur too heavy a computational cost for computing the parameter. In this paper, we construct a new super-resolution algorithm by transforming the solution of the original system into the solution of the matrix equation X + A*X^{-1}A = I, and propose an inverse iterative method for solving it.
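The matrix equation X + A*X^{-1}A = I (A* denoting the conjugate transpose) admits a simple fixed-point sketch; the abstract's actual inverse iterative method may differ, and convergence is just assumed here for a small-norm A.

```python
import numpy as np

# Fixed-point iteration X_{k+1} = I - A* X_k^{-1} A starting from X_0 = I,
# for the nonlinear matrix equation X + A* X^{-1} A = I.
A = np.array([[0.2, 0.1],
              [0.0, 0.2]])
I = np.eye(2)
X = I.copy()
for _ in range(100):
    X = I - A.conj().T @ np.linalg.inv(X) @ A

# residual of the matrix equation at the computed X
residual = np.linalg.norm(X + A.conj().T @ np.linalg.inv(X) @ A - I)
```

For small ||A|| the map is a contraction near X = I, so the iterates converge linearly; larger A would call for the more careful iteration the paper proposes.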
Abstract: This paper presents a forgetting-factor scheme for variable step-size affine projection algorithms (APA). The proposed scheme uses a forgetting-processed input matrix as the projection matrix of the pseudo-inverse to estimate the system deviation. The method introduces non-uniform temporal weights into the projection matrix, which typically models the real error behavior better than homogeneous temporal weights. Regularization overcomes the ill-conditioning introduced by both the forgetting process and the increasing size of the input matrix. The algorithm is tested in independent trials with coloured input signals and various parameter combinations. The results show that the proposed algorithm is superior in terms of convergence rate and misadjustment compared with existing algorithms. As a special case, a variable step-size NLMS algorithm with a forgetting factor is also presented.
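For readers unfamiliar with the family, here is the plain NLMS update that underlies the APA special case (a fixed-step sketch without the forgetting factor or variable step size the paper adds); the small constant in the denominator is the regularization guarding against near-zero input power.

```python
import numpy as np

# NLMS system identification: adapt w toward the unknown filter w_true.
rng = np.random.default_rng(7)
w_true = np.array([0.5, -0.3, 0.2, 0.1])
w = np.zeros(4)
mu, eps = 0.5, 1e-6                      # step size, regularization constant

for _ in range(2000):
    x = rng.standard_normal(4)           # input regressor
    d = w_true @ x                       # desired (noise-free) response
    e = d - w @ x                        # a priori error
    w = w + mu * e * x / (eps + x @ x)   # normalized update

err = np.linalg.norm(w - w_true)
```

APA generalizes this by projecting onto the last several regressors at once; the paper's forgetting factor then weights those regressors non-uniformly in time.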
Abstract: In this paper, we consider the problem of identifying the unknown source in the Poisson equation. A modified Tikhonov regularization method is presented to deal with the ill-posedness of the problem, and error estimates are obtained with both an a priori strategy and an a posteriori choice rule for finding the regularization parameter. Numerical examples show that the proposed method is effective and stable.
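The flavor of the problem can be shown on a 1D periodic analogue (a generic Tikhonov spectral filter, not the paper's modified method): recovering f from noisy measurements of u with u'' = f amplifies Fourier mode k by k^2, and the filter damps that amplification.

```python
import numpy as np

# Source identification u'' = f from noisy u, via FFT:
# naive inversion multiplies mode k by -k^2; Tikhonov divides by 1 + lam*k^4.
rng = np.random.default_rng(8)
N = 256
x = 2 * np.pi * np.arange(N) / N
f_true = np.sin(x)
u = -np.sin(x)                            # exact solution of u'' = f
u_noisy = u + 1e-3 * rng.standard_normal(N)

k = np.fft.fftfreq(N, d=1.0 / N)
u_hat = np.fft.fft(u_noisy)

f_naive = np.real(np.fft.ifft(-(k ** 2) * u_hat))       # unregularized
lam = 1e-6
filt = -(k ** 2) / (1.0 + lam * k ** 4)                  # Tikhonov filter
f_reg = np.real(np.fft.ifft(filt * u_hat))

err_naive = np.max(np.abs(f_naive - f_true))
err_reg = np.max(np.abs(f_reg - f_true))
```

Choosing lam a priori (from a known noise bound) or a posteriori (e.g. by a discrepancy-type rule) is exactly the parameter-choice question the abstract's error estimates address.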