Abstract: Magnetocardiography (MCG), which detects the magnetic fields produced by cardiac electrical activity, is a technology developed in recent years. The MCG signal is measured with Superconducting Quantum Interference Devices (SQUIDs) and offers considerable advantages over electrocardiography (ECG). However, the MCG signal is buried in severe background noise, and extracting it is a critical issue for cardiac monitoring systems and other MCG applications. To remove this noise, the Total Variation (TV) regularization method is applied to denoise the MCG signal: the denoising problem is transformed into a minimization problem, which is solved iteratively with the Majorization-Minimization (MM) algorithm. However, traditional TV regularization tends to cause a staircase effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising MCG signals is proposed to improve the denoising precision. The improvement consists of three parts. First, high-order TV is applied to reduce the staircase effect, with the corresponding second-order derivative matrix substituted for the first-order one. Second, the positions of the non-zero elements in the second-order derivative matrix are determined from the peak positions found by a detection window. Finally, adaptive constraint parameters are defined to suppress noise while preserving the peak characteristics of the signal. Theoretical analysis and experimental results show that the algorithm effectively improves the output signal-to-noise ratio and achieves superior performance.
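As an illustration of the majorization-minimization machinery this abstract builds on, the sketch below applies MM to the basic first-order TV denoising problem on a synthetic 1D signal. This is a minimal, assumed setup, not the paper's improved method: the high-order TV term, detection window, and adaptive constraints are omitted, and the signal and parameter values are hypothetical.

```python
import numpy as np

def tv_denoise_mm(y, lam, n_iter=50, eps=1e-8):
    """1D total-variation denoising, min_x 0.5||y-x||^2 + lam*||Dx||_1,
    solved with a majorization-minimization (MM) iteration: |t| is
    majorized by a quadratic at the current iterate, so each MM step
    reduces to a linear solve."""
    n = len(y)
    # First-order difference matrix D, shape (n-1, n)
    D = np.diff(np.eye(n), axis=0)
    x = y.copy()
    for _ in range(n_iter):
        # Weights of the quadratic majorizer (eps guards division by zero)
        w = np.abs(D @ x) + eps
        # Solve (I + lam * D^T W^{-1} D) x = y
        A = np.eye(n) + lam * D.T @ (D / w[:, None])
        x = np.linalg.solve(A, y)
    return x
```

Each iteration solves a small banded linear system; in practice a banded or sparse solver would replace the dense one used here for clarity.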
Abstract: Recently, low-dose computed tomography (CT) has become highly desirable due to increasing attention to the potential risks of excessive radiation. For low-dose CT imaging, ensuring image quality while reducing the radiation dose is a major challenge. To facilitate low-dose CT imaging, we propose an improved statistical iterative reconstruction scheme based on the Penalized Weighted Least Squares (PWLS) criterion combined with total variation (TV) minimization and sparse dictionary learning (DL), which we call "PWLS-TV-DL". To evaluate the PWLS-TV-DL method, we performed experiments on digital and physical phantoms. The experimental results show that our method is superior to other methods in both image quality and computational efficiency, which confirms its potential for low-dose CT imaging.
Abstract: This paper is devoted to the numerical solution of
large-scale linear ill-posed systems. A multilevel regularization
method is proposed. This method is based on a synthesis of
the Arnoldi-Tikhonov regularization technique and the multilevel
technique. We show that if the Arnoldi-Tikhonov method is
a regularization method, then the multilevel method is also a
regularization method. Numerical experiments presented in this paper
illustrate the effectiveness of the proposed method.
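As a sketch of the Arnoldi-Tikhonov building block this abstract starts from (not the multilevel synthesis itself), the following projects the Tikhonov problem onto a Krylov subspace built by the Arnoldi process. The matrix, subspace dimension, and regularization parameter are illustrative assumptions, and the code assumes no Arnoldi breakdown.

```python
import numpy as np

def arnoldi_tikhonov(A, b, k, lam):
    """Arnoldi-Tikhonov: project min ||Ax-b||^2 + lam*||x||^2 onto the
    Krylov subspace K_k(A, b), then solve the small projected problem."""
    n = len(b)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)     # assumed nonzero (no breakdown)
        V[:, j + 1] = w / H[j + 1, j]
    # Small (k+1) x k Tikhonov problem: min ||H y - beta*e1||^2 + lam*||y||^2
    e1 = np.zeros(k + 1)
    e1[0] = beta
    y = np.linalg.solve(H.T @ H + lam * np.eye(k), H.T @ e1)
    return V[:, :k] @ y
```

With lam near zero this reduces to GMRES; the multilevel method of the abstract would wrap such solves across a hierarchy of discretization levels.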
Abstract: In seismic data processing, attenuation of random noise
is a basic step in improving data quality for further use in oil
and gas exploration and development. The signal-to-noise ratio
largely determines the quality of seismic data and affects the
reliability and accuracy of the seismic signal during interpretation.
To make seismic data usable for further application and
interpretation, the signal-to-noise ratio must be improved while
random noise is attenuated effectively. To achieve this while
preserving the important features and information of the seismic
signal, we introduce an anisotropic
total fractional order denoising algorithm. The anisotropic total
fractional order variation model defined in fractional order bounded
variation is proposed as a regularization in seismic denoising. The
split Bregman algorithm is employed to solve the minimization
problem of the anisotropic total fractional order variation model
and the corresponding denoising algorithm for the proposed method
is derived. We test the effectiveness of the proposed method on
synthetic and real seismic data sets, and the denoised results are
compared with F-X deconvolution and the non-local means denoising
algorithm.
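The split Bregman iteration used above can be sketched on the standard first-order anisotropic TV problem; the fractional-order variation model of the paper is not reproduced here, and the 1D signal and parameters are illustrative assumptions.

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding, the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_split_bregman(y, lam, mu=1.0, n_iter=100):
    """1D anisotropic TV denoising, min_x 0.5||x-y||^2 + lam*||Dx||_1,
    by split Bregman with the splitting d = Dx."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)
    A = np.eye(n) + mu * D.T @ D        # fixed system matrix, reused each sweep
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    x = y.copy()
    for _ in range(n_iter):
        # x-subproblem: quadratic, one linear solve
        x = np.linalg.solve(A, y + mu * D.T @ (d - b))
        # d-subproblem: elementwise shrinkage
        d = shrink(D @ x + b, lam / mu)
        # Bregman variable update
        b = b + D @ x - d
    return x
```

The same structure carries over to the fractional-order model by replacing D with a fractional difference operator.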
Abstract: Reconstruction from sparse-view projections is one
of the important problems in computed tomography (CT), limited by
the availability or feasibility of obtaining a large number of
projections. Traditionally, convex regularizers have been exploited
to improve the reconstruction quality in sparse-view CT, and the
convex constraint in those problems leads to an easy optimization
process. However, convex regularizers often result in a biased
approximation and inaccurate reconstruction in CT problems. Here,
we present a nonconvex, Lipschitz continuous and non-smooth
regularization model. The CT reconstruction is formulated as a
nonconvex constrained L1 − L2 minimization problem and solved
through a difference of convex algorithm and the alternating direction
method of multipliers, which generates a better result than L0 or L1
regularizers in the CT reconstruction. We compare our method with
previously reported high performance methods which use convex
regularizers such as TV, wavelet, curvelet, and curvelet+TV (CTV)
on the test phantom images. The results show that there are benefits in
using the nonconvex regularizer in the sparse-view CT reconstruction.
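The difference-of-convex treatment of the L1 − L2 penalty can be sketched on a generic sparse least-squares problem rather than the full CT model: the concave term −lam*||x||_2 is linearized at each outer step, and each convex subproblem is handled with proximal gradient (ISTA) steps in place of the paper's ADMM solver. All data and parameters below are illustrative assumptions.

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_minus_l2_dca(A, b, lam, n_outer=10, n_inner=200):
    """Nonconvex sparse recovery,
    min_x 0.5||Ax-b||^2 + lam*(||x||_1 - ||x||_2),
    by a difference-of-convex algorithm (DCA): linearize -lam*||x||_2
    at the current iterate, then solve the convex subproblem by ISTA."""
    n = A.shape[1]
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the smooth part
    for _ in range(n_outer):
        nx = np.linalg.norm(x)
        g = x / nx if nx > 0 else np.zeros(n)   # subgradient of ||x||_2
        for _ in range(n_inner):
            grad = A.T @ (A @ x - b) - lam * g
            x = soft(x - grad / L, lam / L)
    return x
```

At the first outer iteration (x = 0) this reduces to plain lasso; later iterations remove part of the l1 bias, which is the claimed advantage of the L1 − L2 regularizer.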
Abstract: This article presents a numerical method to find the
heat flux in an inhomogeneous inverse heat conduction problem with
linear boundary conditions and an extra specification at the terminal.
The method is based upon applying the satisfier function along with
the Ritz-Galerkin technique to reduce the approximate solution of the
inverse problem to the solution of a system of algebraic equations.
The instability of the problem is resolved by taking advantage of
Landweber iterations as an admissible regularization strategy.
The computations yield stable, low-cost results which
demonstrate the efficiency of the technique.
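The Landweber iteration used as the regularization strategy above has a compact generic form; the sketch below shows it on an abstract linear system rather than the paper's Ritz-Galerkin discretization, with illustrative data.

```python
import numpy as np

def landweber(A, b, n_iter, omega=None, x0=None):
    """Landweber iteration x_{k+1} = x_k + omega * A^T (b - A x_k).
    The iteration count acts as the regularization parameter: for noisy
    data the iteration is stopped early (e.g. by the discrepancy
    principle) before noise components are amplified."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2   # omega < 2/||A||^2 ensures convergence
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    for _ in range(n_iter):
        x = x + omega * A.T @ (b - A @ x)
    return x
```

On consistent, well-conditioned data the iterates converge to the least-squares solution; on ill-posed problems the early iterates recover the smooth components first.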
Abstract: Tikhonov regularization and reproducing kernels are the
most popular approaches to solve ill-posed problems in computational
mathematics and applications. The Fourier multiplier operators
are an essential tool for extending known linear transforms
in Euclidean Fourier analysis, such as the Weierstrass transform, Poisson
integral, Hilbert transform, Riesz transforms, Bochner-Riesz mean
operators, partial Fourier integral, Riesz potential, Bessel potential,
etc. Using the theory of reproducing kernels, we construct simple
and efficient representations for a class of Fourier multiplier
operators Tm on the Paley-Wiener space Hh. In addition, we give
an error estimate formula for the approximation and obtain some
convergence results as the parameters and the independent variables
approach zero. Furthermore, using numerical quadrature integration
rules to compute single and multiple integrals, we give numerical
examples and we write explicitly the extremal function and the
corresponding Fourier multiplier operators.
Abstract: Diffuse Optical Tomography (DOT) is a non-invasive imaging modality used in clinical diagnosis for early detection of carcinoma cells in brain tissue. It is a form of optical tomography which produces a reconstructed image of human soft tissue using near-infrared light. It comprises two steps, a forward model and an inverse model. The forward model describes light propagation in a biological medium. The inverse model uses the scattered light to recover the optical parameters of human tissue. DOT suffers from severe ill-posedness due to its incomplete measurement data, so accurate analysis of this modality is very complicated. To overcome this problem, optical properties of the soft tissue such as the absorption coefficient, scattering coefficient, and optical flux are processed with the standard regularization technique called Levenberg-Marquardt regularization. Reconstruction algorithms such as the Split Bregman and Gradient Projection for Sparse Reconstruction (GPSR) methods are used to reconstruct the image of human soft tissue for tumour detection. Among these algorithms, the Split Bregman method provides better performance than the GPSR algorithm. Parameters such as the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), relative error (RE), and CPU time for reconstructing images are analyzed to assess performance.
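The Levenberg-Marquardt regularization named above is a generic damped Gauss-Newton scheme; the sketch below shows it on a toy curve-fitting problem, not a DOT forward model, with a finite-difference Jacobian and an illustrative damping schedule. The test function and parameters are assumptions.

```python
import numpy as np

def levenberg_marquardt(residual, x0, n_iter=100, lam=1e-2):
    """Generic Levenberg-Marquardt: damped Gauss-Newton steps
    (J^T J + lam*I) dx = -J^T r. The damping lam acts as a
    Tikhonov-style regularizer on each step and is adapted by a simple
    accept/reject rule."""
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        r = residual(x)
        # Forward-difference Jacobian (analytic Jacobians are used in practice)
        J = np.zeros((len(r), len(x)))
        h = 1e-6
        for j in range(len(x)):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (residual(xp) - r) / h
        dx = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), -J.T @ r)
        if np.sum(residual(x + dx) ** 2) < np.sum(r ** 2):
            x = x + dx
            lam *= 0.7        # step accepted: move toward Gauss-Newton
        else:
            lam *= 2.0        # step rejected: move toward gradient descent
    return x
```

In a DOT setting, `residual` would be the mismatch between measured and simulated boundary fluxes as a function of the tissue's optical coefficients.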
Abstract: The boundary value problem on a non-canonical, arbitrarily shaped contour is solved with a numerically effective method called the Analytical Regularization Method (ARM) to calculate propagation parameters. As a result of the regularization, the first-kind equation is reduced to an infinite system of linear algebraic equations of the second kind in the space L2. This system can be solved numerically to any desired accuracy using the truncation method. Parameters such as the cut-off wavenumber and cut-off frequency are used in the waveguide evolutionary equations of time-domain electromagnetic theory to illustrate the real-valued TM fields in lossy and lossless media.
Abstract: We propose two affine projection algorithms (APA)
with variable regularization parameter. The proposed algorithms
dynamically update the regularization parameter that is fixed in the
conventional regularized APA (R-APA) using a gradient descent
based approach. By introducing a normalized gradient, the proposed
algorithms yield an efficient and robust update scheme for
the regularization parameter. Through experiments we demonstrate
that the proposed algorithms outperform conventional R-APA in
terms of the convergence rate and the misadjustment error.
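The conventional R-APA that the proposed algorithms improve upon can be sketched as follows. Note this is the fixed-regularization baseline only: the paper's contribution, the gradient-descent update of delta, is not reproduced, and the filter order, projection order, and step size below are illustrative assumptions.

```python
import numpy as np

def r_apa_identify(x, d, order, proj=4, mu=0.5, delta=1e-2, w0=None):
    """Conventional regularized affine projection algorithm (R-APA) for
    system identification: w += mu * X (X^T X + delta*I)^{-1} e, where X
    stacks the `proj` most recent regressor vectors as columns and delta
    is a fixed regularization parameter."""
    n = len(x)
    w = np.zeros(order) if w0 is None else w0.copy()
    for k in range(order + proj - 1, n):
        # Column j is the regressor [x[k-j], x[k-j-1], ..., x[k-j-order+1]]
        X = np.column_stack([x[k - j - order + 1:k - j + 1][::-1]
                             for j in range(proj)])
        e = d[k - proj + 1:k + 1][::-1] - X.T @ w   # a priori error vector
        w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(proj), e)
    return w
```

The variable-regularization algorithms of the abstract replace the fixed `delta` with a value updated at every iteration from a normalized gradient of the error.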
Abstract: We present a normalized LMS (NLMS) algorithm
with robust regularization. Unlike conventional NLMS with the
fixed regularization parameter, the proposed approach dynamically
updates the regularization parameter. By exploiting a gradient
descent direction, we derive a computationally efficient and robust
update scheme for the regularization parameter. In simulations, we
demonstrate that the proposed algorithm outperforms conventional NLMS
algorithms in terms of convergence rate and misadjustment error.
Abstract: This paper presents a normalized subband adaptive
filtering (NSAF) algorithm to cope with the sparsity condition of
an underlying system in the context of compressive sensing. By
regularizing a weighted l1-norm of the filter taps estimate onto the
cost function of the NSAF and utilizing a subgradient analysis,
the update recursion of the l1-norm constraint NSAF is derived.
Considering two distinct weighted l1-norm regularization cases, two
versions of the l1-norm constraint NSAF are presented. Simulation
results clearly indicate the superior performance of the proposed
l1-norm constraint NSAFs compared with the classical NSAF.
Abstract: This work presents a new type of affine projection
(AP) algorithm which incorporates the sparsity condition of a
system. To exploit the sparsity of the system, a weighted l1-norm
regularization is imposed on the cost function of the AP algorithm.
Minimizing the cost function with a subgradient calculus and
choosing two distinct weightings for the l1-norm, two stochastic gradient
based sparsity regularized AP (SR-AP) algorithms are developed.
Experimental results exhibit that the SR-AP algorithms outperform
the typical AP counterparts for identifying sparse systems.
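The weighted l1-norm subgradient mechanism shared by the SR-AP and l1-constraint NSAF abstracts can be sketched in its simplest setting, an NLMS update (the projection-order-1 special case of AP) plus a reweighted-l1 zero attractor. This is a simplified stand-in for the published algorithms; the filter length, step size, and attractor strength are illustrative assumptions.

```python
import numpy as np

def sr_nlms(x, d, order, mu=0.5, rho=1e-4, eps=1e-2, delta=1e-6):
    """Sparsity-regularized adaptive filter: an NLMS update plus the
    subgradient of a reweighted l1 penalty rho * sum log(1 + |w_i|/eps),
    which attracts inactive taps toward exactly zero while leaving
    large taps nearly untouched."""
    w = np.zeros(order)
    for k in range(order - 1, len(x)):
        u = x[k - order + 1:k + 1][::-1]          # current regressor
        e = d[k] - u @ w                          # a priori error
        w += mu * e * u / (u @ u + delta)         # normalized LMS step
        w -= rho * np.sign(w) / (np.abs(w) + eps) # reweighted l1 subgradient
    return w
```

For a sparse unknown system the attractor keeps the many near-zero taps clamped, which is the source of the reported convergence and misadjustment gains over the unregularized counterparts.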
Abstract: As DNA microarray data contain a relatively small
sample size compared to the number of genes, high-dimensional
models are often employed. In high-dimensional models, the selection
of the tuning parameter (or penalty parameter) is often one of the crucial
parts of the modeling. Cross-validation is one of the most common
methods for the tuning parameter selection, which selects a parameter
value with the smallest cross-validated score. However, selecting a
single value as an 'optimal' value for the parameter can be very
unstable due to sampling variation, since the sample sizes of
microarray data are often small. Our approach is to choose multiple candidates of the tuning parameter
first, then average the candidates with different weights depending
on their performance. The additional step of estimating the weights
and averaging the candidates rarely increases the computational cost,
while it can considerably improve traditional cross-validation. We
show that the values selected by the suggested methods often lead to
stable parameter selection as well as improved detection of significant
genetic variables compared to traditional cross-validation on real
and simulated data sets.
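The candidate-averaging idea can be sketched with ridge regression standing in for the paper's high-dimensional model, and CV-error-based weights standing in for the paper's weighting scheme; both substitutions, and all data and grids below, are illustrative assumptions.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Ridge estimate (X^T X + lam*I)^{-1} X^T y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def cv_errors(X, y, lams, k=5, seed=0):
    """K-fold cross-validated MSE for each candidate penalty value."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    errs = np.zeros(len(lams))
    for f in folds:
        mask = np.ones(len(y), bool)
        mask[f] = False
        for i, lam in enumerate(lams):
            beta = ridge_fit(X[mask], y[mask], lam)
            errs[i] += np.mean((X[f] @ beta - y[f]) ** 2)
    return errs / k

def averaged_lambda(X, y, lams, n_top=3):
    """Instead of the single CV-optimal value, average the top candidates
    with weights inversely proportional to their CV error."""
    errs = cv_errors(X, y, lams)
    top = np.argsort(errs)[:n_top]
    w = 1.0 / errs[top]
    return float(np.sum(w * np.array(lams)[top]) / np.sum(w))
```

Because the CV errors are already computed for the whole grid, the extra averaging step costs essentially nothing beyond the standard cross-validation run.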
Abstract: A seizure prediction method is proposed by extracting
global features using phase correlation between adjacent epochs for
detecting relative changes and local features using fluctuation/
deviation within an epoch for determining fine changes of different
EEG signals. A classifier and a regularization technique are applied
for the reduction of false alarms and improvement of the overall
prediction accuracy. The experiments show that the proposed method
outperforms the state-of-the-art methods and provides high prediction
accuracy (i.e., 97.70%) with a low false-alarm rate using EEG signals from
different brain locations in a benchmark data set.
Abstract: Most greenhouse growers desire a predictable amount of yield in order to accurately meet market requirements. The purpose of this paper is to develop a simple but often satisfactory supervised classification method. The original naive Bayes has a serious weakness: it retains redundant predictors. In this paper, a regularization technique is used to obtain a computationally efficient classifier based on naive Bayes. The suggested construction utilizes an L1 penalty, which is capable of removing redundant predictors, and a modification of the LARS algorithm is devised to solve the resulting problem, making the method applicable to a wide range of data. In the experimental section, a study is conducted to examine the effect of redundant and irrelevant predictors, and the method is tested on a WSG data set for tomato yields, where there are many more predictors than data points and the urgent need to predict weekly yield is the goal of this approach. Finally, the modified approach is compared with several naive Bayes variants and other classification algorithms (SVM and kNN), and is shown to perform fairly well.
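The way an L1 penalty clears redundant predictors can be sketched on a generic regression problem. Note the paper solves its penalized naive Bayes with a modified LARS algorithm; the cyclic coordinate descent below is a different, standard solver for the same kind of penalty, and the data are illustrative.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """L1-penalized least squares, min_b 0.5||y - Xb||^2 + lam*||b||_1,
    by cyclic coordinate descent. The soft-threshold in each coordinate
    update drives coefficients of redundant predictors exactly to zero."""
    n, p = X.shape
    b = np.zeros(p)
    r = y.copy()                        # running residual y - X @ b
    col_sq = np.sum(X ** 2, axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]         # remove predictor j's contribution
            rho = X[:, j] @ r
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]         # add the updated contribution back
    return b
```

With many more predictors than observations, increasing `lam` traces out a path from the full model down to an empty one, which is exactly the path LARS-type algorithms compute piecewise.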
Abstract: A gradient learning method to regulate the trajectories
of some nonlinear chaotic systems is proposed. The method is
motivated by the gradient descent learning algorithms for neural
networks. It is based on two systems: dynamic optimization system
and system for finding sensitivities. Numerical results of several
examples are presented, which convincingly illustrate the efficiency
of the method.
Abstract: In face recognition, feature extraction techniques
attempt to search for an appropriate representation of the data. However,
when the feature dimension is larger than the sample size, this brings
performance degradation. Hence, we propose a method called
Normalization Discriminant Independent Component Analysis
(NDICA). The input data will be regularized to obtain the most
reliable features from the data and processed using Independent
Component Analysis (ICA). The proposed method is evaluated on
three face databases, Olivetti Research Ltd (ORL), Face Recognition
Technology (FERET) and Face Recognition Grand Challenge
(FRGC). NDICA showed its effectiveness compared with other
unsupervised and supervised techniques.
Abstract: Sufficient linear matrix inequality (LMI) conditions for the regularization of discrete-time singular systems are given. Then a new class of regularizing, stabilizing controllers is discussed. The proposed controllers are the sum of predictive and memoryless state feedbacks. The predictive controller aims to regularize the singular system, while the memoryless state feedback is designed to stabilize the resulting regularized system. A systematic procedure is given to calculate the controller gains through linear matrix inequalities.
Abstract: In this paper, we propose a variational approach to the single-image defogging problem. In the inference of the atmospheric veil, we define a new functional for the atmospheric veil that satisfies an edge-preserving regularization property. Using the fundamental lemma of the calculus of variations, we derive the Euler-Lagrange equation for the atmospheric veil, whose solution extremizes the given functional. This equation is solved using a gradient descent method with a time parameter. We then obtain the estimated atmospheric veil and restore the image using it. Finally, we improve the contrast of the restored image with various histogram equalization methods. The experimental results show that the proposed method achieves good defogging results.
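The gradient-descent solution of an Euler-Lagrange equation for an edge-preserving functional can be sketched in 1D. This is not the paper's veil functional: the sketch minimizes a generic smoothed-TV energy E(u) = 0.5||u - f||^2 + lam * sum sqrt(eps^2 + (Du)^2), and all parameter values are illustrative assumptions.

```python
import numpy as np

def edge_preserving_smooth(f, lam=0.5, eps=0.1, tau=0.02, n_iter=500):
    """Explicit gradient descent (artificial-time evolution) on a 1D
    edge-preserving functional. The descent direction is the negative
    of the discrete Euler-Lagrange residual:
    grad E = (u - f) + lam * D^T ( Du / sqrt(eps^2 + (Du)^2) )."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)      # forward difference operator
    u = f.copy()
    for _ in range(n_iter):
        du = D @ u
        grad = (u - f) + lam * D.T @ (du / np.sqrt(eps ** 2 + du ** 2))
        u = u - tau * grad              # tau plays the role of the time step
    return u
```

The time step `tau` must stay below the stability limit of the explicit scheme (roughly 2 divided by the Lipschitz constant of the gradient, about 1 + 4*lam/eps here); the smoothing parameter `eps` keeps the edge-preserving term differentiable.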