Abstract: This paper is a comparative study of two classical variants of parallel projection methods for solving the convex feasibility problem and of their counterparts that use variable weights in the construction of the solutions. We use a graphical representation of these methods for inpainting a convex area of an image in order to investigate their effectiveness in image reconstruction applications. We also present a numerical analysis of the convergence of these four algorithms in terms of the average number of steps and execution time, in a classical CPU implementation and, alternatively, in a parallel GPU implementation.
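For illustration only, here is a minimal sketch (not the paper's exact algorithms) of a simultaneous, Cimmino-type projection method for a feasibility problem defined by hyperplanes, with a switch between fixed equal weights and variable weights; the residual-driven weighting rule is our own assumption.

```python
import numpy as np

def project_hyperplane(x, a, b):
    """Orthogonal projection of x onto the hyperplane {y : a.y = b}."""
    return x + (b - a @ x) / (a @ a) * a

def simultaneous_projection(A, b, x0, iters=200, variable_weights=False):
    """Cimmino-type parallel projection for the feasibility problem
    defined by the hyperplanes A[i] . x = b[i]. With variable weights,
    each constraint is weighted by its current residual, so strongly
    violated constraints contribute more to the update."""
    m = len(b)
    x = x0.astype(float)
    for _ in range(iters):
        proj = np.array([project_hyperplane(x, A[i], b[i]) for i in range(m)])
        if variable_weights:
            r = np.abs(A @ x - b) + 1e-12   # residual-driven weights
            w = r / r.sum()
        else:
            w = np.full(m, 1.0 / m)         # fixed, equal weights
        x = w @ proj                        # weighted average of projections
    return x

# two hyperplanes in the plane; their intersection is (1, 1)
A = np.array([[1.0, 2.0], [3.0, -1.0]])
b = np.array([3.0, 2.0])
print(simultaneous_projection(A, b, np.zeros(2), variable_weights=True))
```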
Abstract: Recently, low-dose computed tomography (CT) has become highly desirable due to increasing attention to the potential risks of excessive radiation. For low-dose CT imaging, ensuring image quality while reducing radiation dose is a major challenge. To facilitate low-dose CT imaging, we propose an improved statistical iterative reconstruction scheme based on the penalized weighted least-squares (PWLS) criterion combined with total variation (TV) minimization and sparse dictionary learning (DL) to improve reconstruction performance. We call this method "PWLS-TV-DL". In order to evaluate the PWLS-TV-DL method, we performed experiments on digital phantoms and physical phantoms. The experimental results show that our method is superior to other methods in both image quality and computational efficiency, which confirms its potential for low-dose CT imaging.
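As a rough illustration of the kind of objective the abstract describes, the sketch below takes one gradient step on a PWLS data term plus a smoothed TV penalty; the sparse dictionary-learning term of PWLS-TV-DL is omitted, and the operators, weights, and step size are our own assumptions.

```python
import numpy as np

def tv_grad(x, eps=1e-8):
    """Gradient of a smoothed isotropic total-variation penalty on a 2-D image."""
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    # grad TV(x) = -div( grad x / |grad x| ), discretized with rolls
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

def pwls_tv_step(x, A, y, w, beta, step):
    """One gradient step on f(x) = (y - A x)^T diag(w) (y - A x) + beta TV(x),
    where A is the system (projection) matrix, y the measured data, and
    w the statistical weights of the PWLS criterion."""
    r = y - A @ x.ravel()
    grad_data = -2.0 * (A.T @ (w * r)).reshape(x.shape)
    return x - step * (grad_data + beta * tv_grad(x))

rng = np.random.default_rng(0)
x = rng.random((8, 8))
A = rng.random((30, 64))
y = A @ x.ravel()
x = pwls_tv_step(x, A, y, w=np.ones(30), beta=0.1, step=1e-3)
```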
Abstract: The photoacoustic images are obtained from a custom-developed linear-array photoacoustic tomography system. Biological specimens are imitated by conducting phantom tests in order to retrieve a fully functional photoacoustic image. The acquired image undergoes active-region-based contour filtering to remove noise and accurately segment the object area for further processing. The universal back projection method is used as the image reconstruction algorithm. The active contour filtering is analyzed by evaluating the signal-to-noise ratio and comparing it with other filtering methods.
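A minimal sketch of universal back projection for a linear array might look as follows; the geometry, sound speed, and interpolation choices are our assumptions, not those of the system described in the abstract.

```python
import numpy as np

def universal_back_projection(signals, t, sensors, grid, c=1500.0):
    """Minimal universal back projection for a linear array.

    signals : (n_sensors, n_samples) recorded pressure p(r0, t)
    t       : (n_samples,) sample times [s]
    sensors : (n_sensors, 2) sensor positions [m]
    grid    : (n_pixels, 2) reconstruction points [m]
    The UBP term b = 2p - 2t dp/dt is back-projected along the shells
    t = |r - r0| / c (constant weighting factors are folded in)."""
    dpdt = np.gradient(signals, t, axis=1)
    b = 2.0 * signals - 2.0 * t * dpdt
    image = np.zeros(len(grid))
    for s, bs in zip(sensors, b):
        d = np.linalg.norm(grid - s, axis=1)   # distance pixel -> sensor
        delays = d / c                          # time of flight
        image += np.interp(delays, t, bs)       # sample b at the delay
    return image

# toy usage: 3 sensors, 100 time samples, 4 reconstruction points
t = np.linspace(0.0, 4e-5, 100)
signals = np.random.default_rng(0).standard_normal((3, 100))
sensors = np.array([[0.0, 0.0], [0.01, 0.0], [0.02, 0.0]])
grid = np.array([[0.005, 0.01], [0.01, 0.01], [0.015, 0.01], [0.01, 0.02]])
print(universal_back_projection(signals, t, sensors, grid))
```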
Abstract: Images are an important source of information used as evidence during any investigation process. Their clarity and accuracy are essential and of the utmost importance for any investigation. Images are vulnerable to losing blocks and to having noise added to them, either after alteration or when the image was first taken; therefore, a high-performance image processing system and its implementation are very important from a forensic point of view. This paper focuses on improving the quality of forensic images. For various reasons, packets that store image data can be affected, harmed, or even lost because of noise; for example, sending an image through a wireless channel can cause loss of bits. These types of errors degrade the visual display quality of forensic images. Two image problems are covered: noise and block loss. Information transmitted through any means of communication may be altered from its original state or even lose important data due to channel noise. Therefore, a system is introduced to improve the quality and clarity of forensic images.
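The abstract does not specify the restoration algorithms; purely as an illustration of the two problems it names, the sketch below pairs median filtering for impulsive channel noise with a crude diffusion-style fill for lost blocks.

```python
import numpy as np
from scipy.ndimage import median_filter

def restore_forensic_image(img, lost_mask, iters=50):
    """Illustrative restoration: median filtering suppresses impulsive
    channel noise, and lost blocks (lost_mask == True) are filled by
    iterated neighbour averaging, a simple diffusion-style inpainting.
    This is a generic sketch, not the system proposed in the paper."""
    out = median_filter(img.astype(float), size=3)     # noise suppression
    for _ in range(iters):                             # fill lost blocks
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[lost_mask] = avg[lost_mask]
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)
mask = np.zeros((64, 64), dtype=bool)
mask[20:28, 20:28] = True                              # a lost 8x8 block
print(restore_forensic_image(img, mask).shape)
```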
Abstract: Photoacoustic imaging (PAI) is a non-invasive and
non-ionizing imaging modality that combines the absorption contrast
of light with ultrasound resolution. A laser is used to deposit optical
energy (fluence) into a target. Consequently, the target temperature
rises, and the resulting thermal expansion generates a PA signal. In
general, most image reconstruction
algorithms for PAI assume uniform fluence within an imaging object.
However, it is known that optical fluence distribution within the
object is non-uniform. This could affect the reconstruction of PA
images. In this study, we have investigated the influence of optical
fluence distribution on PA back-propagation imaging using the finite
element method. The uniform fluence was simulated as a triangular
waveform within the object of interest. The non-uniform fluence
distribution was estimated by solving light propagation within a
tissue model via the Monte Carlo method. The results show that the PA
signal in the case of non-uniform fluence is wider than the uniform
case by 23%. The frequency spectrum of the PA signal under
non-uniform fluence is missing some high-frequency components in
comparison to the uniform case. Consequently, the reconstructed
image with the non-uniform fluence exhibits a strong smoothing
effect.
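In the spirit of the tissue-model simulation mentioned above, a toy 1-D Monte Carlo fluence estimate could look like the sketch below; the slab geometry, optical coefficients, and 1-D random walk are simplifying assumptions of ours, far cruder than the paper's simulation.

```python
import numpy as np

def mc_fluence_1d(mu_a=0.1, mu_s=1.0, depth=10.0, n_photons=5000,
                  n_bins=50, seed=0):
    """Toy 1-D Monte Carlo estimate of relative fluence versus depth in
    a homogeneous slab (coefficients in 1/mm): photons random-walk
    through the slab and deposit weight wherever they travel."""
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s
    edges = np.linspace(0.0, depth, n_bins + 1)
    fluence = np.zeros(n_bins)
    for _ in range(n_photons):
        z, direction, w = 0.0, 1.0, 1.0
        while w > 1e-3:
            z += direction * (-np.log(rng.random()) / mu_t)  # free path
            if z < 0.0 or z > depth:
                break                          # photon leaves the slab
            fluence[np.searchsorted(edges, z) - 1] += w
            w *= mu_s / mu_t                   # fraction mu_a/mu_t absorbed
            direction = rng.choice([-1.0, 1.0])  # isotropic 1-D scattering
    return edges[:-1], fluence / fluence.max()

z, phi = mc_fluence_1d()
print(phi[:5])   # relative fluence falls off with depth
```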
Abstract: Electrical impedance imaging is a method of reconstructing
the spatial distribution of electrical conductivity inside a subject.
In this paper, a new method of electrical impedance imaging
using eddy current is proposed. The eddy current distribution in the
body depends on the conductivity distribution and the magnetic field
pattern. By changing the position of the magnetic core, a set of voltage
differences is measured with a pair of electrodes. This set of voltage
differences is used in image reconstruction of conductivity
distribution. The least-square-error minimization method is used as
the reconstruction algorithm. The back-projection algorithm is used
to get two-dimensional images. Based on this principle, a measurement
system was developed and model experiments were performed with a
saline-filled phantom. The shape of each model in the reconstructed
image is similar to that of the corresponding physical model. From the
results of these experiments, it is confirmed that the proposed method
is applicable to the realization of electrical impedance imaging.
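A linearized least-square-error reconstruction of the kind described could be sketched as follows; the sensitivity matrix S and the Tikhonov damping term lam are our own assumptions.

```python
import numpy as np

def reconstruct_conductivity(S, v, lam=1e-3):
    """Least-square-error reconstruction of a conductivity image from
    voltage differences (our sketch adds Tikhonov damping lam for
    stability).

    S : (n_measurements, n_pixels) linearized sensitivity matrix mapping
        a conductivity perturbation to measured voltage differences,
        one row per magnetic-core position.
    v : (n_measurements,) voltage differences from the electrode pair."""
    # solve min ||S sigma - v||^2 + lam ||sigma||^2 via normal equations
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ v)

rng = np.random.default_rng(0)
S = rng.standard_normal((40, 25))          # 40 core positions, 5x5 pixels
sigma_true = np.zeros(25); sigma_true[12] = 1.0
v = S @ sigma_true + 0.01 * rng.standard_normal(40)
print(reconstruct_conductivity(S, v).round(2).reshape(5, 5))
```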
Abstract: Super resolution (SR) technologies are now being
applied to video to improve resolution. Some TV sets are now
equipped with SR functions. However, it is not known whether
super-resolution image reconstruction (SRR) really works for TV.
Super resolution with non-linear signal processing (SRNL) has
recently been proposed. SRR and SRNL are the only methods that can
process video signals in real time. The results of subjective
assessments of SRR and SRNL are described in this paper. SRR video
was produced in simulations with quarter-pixel-precision motion vectors and
100 iterations. These are ideal conditions for SRR. We found that the
image quality of SRNL is better than that of SRR even though SRR
was processed under ideal conditions.
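For context, a minimal 1-D iterative back-projection loop of the SRR family simulated above might look like this; integer shifts stand in for the quarter-pixel motion vectors, and the circular-shift forward model is our simplification.

```python
import numpy as np

def ibp_super_resolution(lows, shifts, scale, iters=100):
    """Iterative back-projection super-resolution on a 1-D signal.
    Each low-res observation lows[k] is modelled as the high-res signal
    circularly shifted by shifts[k] samples and then decimated by
    `scale`; 100 iterations matches the paper's simulation setup."""
    n = len(lows[0]) * scale
    hi = np.zeros(n)
    for _ in range(iters):
        for lo, sh in zip(lows, shifts):
            simulated = np.roll(hi, sh)[::scale]    # forward model
            err = np.zeros(n)
            err[::scale] = lo - simulated            # upsample the error
            hi += np.roll(err, -sh) / len(lows)      # back-project
    return hi

true = np.sin(np.linspace(0, 4 * np.pi, 64))
lows = [np.roll(true, s)[::4] for s in range(4)]     # 4 shifted observations
rec = ibp_super_resolution(lows, shifts=range(4), scale=4)
print(np.max(np.abs(rec - true)))                    # near machine precision
```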
Abstract: As Computed Tomography (CT) normally requires hundreds
of projections to reconstruct an image, patients are exposed to more
X-ray energy, which may cause side effects such as cancer. Even when
the variability of the particles in the object is very low, computed
tomography requires many projections for a good-quality
reconstruction. In this paper, the low variability of the particles in
an object has been exploited to obtain good-quality reconstructions.
Though the reconstructed image and the original image have the same
projections, in general they need not be the same. In addition to the
projections, if a priori information about the image is known, it is
possible to obtain a good-quality reconstructed image. In this paper,
it has been shown by experimental results why conventional algorithms
fail to reconstruct from a few projections, and an efficient
polynomial-time algorithm has been given to reconstruct a bi-level
image from its row and column projections, a known sub-image of the
unknown image, and smoothness constraints, by reducing the
reconstruction problem to an integral max-flow problem. This paper
also discusses necessary and sufficient conditions for uniqueness and
the extension of 2D bi-level image reconstruction to 3D bi-level
image reconstruction.
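The plain row/column case can be illustrated with a Gale-Ryser-style greedy reconstruction, as sketched below; the paper's max-flow formulation with a known sub-image and smoothness constraints is not reproduced here.

```python
import numpy as np

def reconstruct_bilevel(row_sums, col_sums):
    """Greedy (Gale-Ryser-style) reconstruction of a bi-level (0/1)
    image from its row and column projections. Returns one consistent
    image, or None if the projections are inconsistent."""
    if sum(row_sums) != sum(col_sums):
        return None
    m, n = len(row_sums), len(col_sums)
    img = np.zeros((m, n), dtype=int)
    remaining = list(col_sums)
    for i, r in enumerate(row_sums):
        # put this row's 1s in the columns with the largest remaining sums
        order = sorted(range(n), key=lambda j: -remaining[j])[:r]
        if any(remaining[j] == 0 for j in order):
            return None
        for j in order:
            img[i, j] = 1
            remaining[j] -= 1
    if any(remaining):
        return None
    return img

print(reconstruct_bilevel([2, 1, 2], [2, 2, 1]))
```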
Abstract: Computed tomography and laminography are heavily investigated in a compressive sensing based image reconstruction framework to reduce the dose to patients as well as to radiosensitive devices such as multilayer microelectronic circuit boards. Nowadays, researchers are actively working on optimizing the compressive sensing based iterative image reconstruction algorithm to obtain better quality images. However, the effects of the sampled data's properties on the quality of the reconstructed image, particularly under insufficiently sampled data conditions, have not been explored in computed laminography. In this paper, we investigated the effects of two data properties, i.e., sampling density and data incoherence, on the reconstructed image obtained by conventional computed laminography and by a recently proposed method called the spherical sinusoidal scanning scheme. We have found that in a compressive sensing based image reconstruction framework, the image quality mainly depends upon the data incoherence when the data is uniformly sampled.
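Data incoherence is often quantified by the mutual coherence of the sensing matrix; a minimal computation of that standard measure (our illustration, not necessarily the paper's exact metric) is:

```python
import numpy as np

def mutual_coherence(A):
    """Mutual coherence of a sensing matrix A: the largest absolute
    normalized inner product between distinct columns. Lower coherence
    generally favours compressive-sensing recovery."""
    An = A / np.linalg.norm(A, axis=0, keepdims=True)
    G = np.abs(An.T @ An)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(0)
print(mutual_coherence(rng.standard_normal((20, 50))))
```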
Abstract: In this paper, we present an analytical study of the
representation of images as the magnitudes of their transform with
discrete wavelets. Such a representation serves as a model for
complex cells in the early stage of visual processing and is of high
technical usefulness for image understanding, because it makes the
representation insensitive to small local shifts. We found that if the
signals are band-limited and of zero mean, then reconstruction from
the magnitudes is unique up to the sign for almost all signals. We
also present an iterative reconstruction algorithm which yields very
good reconstruction, up to the sign and minor numerical errors in the
very low frequencies.
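An alternating-projection sketch of reconstruction from wavelet-coefficient magnitudes, assuming the PyWavelets package and re-imposing the zero-mean constraint each iteration, is shown below; the paper's actual algorithm may differ, and convergence of this simple scheme is not guaranteed.

```python
import numpy as np
import pywt

def reconstruct_from_magnitudes(mags, wavelet="db2", n=256, iters=2000, seed=0):
    """Alternating projections: impose the known wavelet-coefficient
    magnitudes while keeping the current signs, then re-impose the
    zero-mean constraint in the signal domain. Recovery is at best up
    to a global sign, mirroring the uniqueness result in the abstract."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    for _ in range(iters):
        coeffs = pywt.wavedec(x, wavelet)
        coeffs = [np.sign(c) * m for c, m in zip(coeffs, mags)]
        x = pywt.waverec(coeffs, wavelet)[:n]
        x -= x.mean()                        # zero-mean constraint
    return x

true = np.sin(np.linspace(0, 8 * np.pi, 256)); true -= true.mean()
mags = [np.abs(c) for c in pywt.wavedec(true, "db2")]
rec = reconstruct_from_magnitudes(mags)
print(min(np.linalg.norm(rec - true), np.linalg.norm(rec + true)))
```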
Abstract: From a set of shifted, blurred, and decimated images, super-resolution image reconstruction can recover a high-resolution image, so it has become an active research branch in the field of image restoration. In general, super-resolution image restoration is an ill-posed problem. Prior knowledge about the image can be incorporated to make the problem well-posed, which leads to regularization methods. In the current regularization methods, however, the regularization parameter is in some cases selected by experience, and other techniques incur too heavy a computational cost in computing the parameter. In this paper, we construct a new super-resolution algorithm by transforming the solution of the resulting linear system into the solution of the nonlinear matrix equation X + A*X^{-1}A = I, and propose an inverse iterative method.
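The natural fixed-point ("inverse") iteration for the matrix equation X + A*X^{-1}A = I is X <- I - A*X^{-1}A starting from X = I; a sketch under our own assumptions (small norm of A, so the iterates stay invertible) follows, and it may differ from the paper's method.

```python
import numpy as np

def solve_matrix_equation(A, iters=200, tol=1e-12):
    """Fixed-point iteration for X + A* X^{-1} A = I, where A* is the
    conjugate transpose: iterate X <- I - A* X^{-1} A from X = I."""
    n = A.shape[0]
    I = np.eye(n)
    X = I.copy()
    for _ in range(iters):
        X_new = I - A.conj().T @ np.linalg.inv(X) @ A
        if np.linalg.norm(X_new - X) < tol:
            break
        X = X_new
    return X

A = 0.3 * np.array([[0.5, 0.2], [0.1, 0.4]])
X = solve_matrix_equation(A)
# residual of the matrix equation should be near zero
print(np.linalg.norm(X + A.T @ np.linalg.inv(X) @ A - np.eye(2)))
```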
Abstract: The standard approach to image reconstruction is to stabilize the problem by including an edge-preserving roughness penalty in addition to faithfulness to the data. However, this methodology produces noisy object boundaries and creates a staircase effect. The existing attempts to favor the formation of smooth contour lines take the edge field explicitly into account; they either are computationally expensive or produce disappointing results. In this paper, we propose to incorporate the smoothness of the edge field in an implicit way by means of an additional penalty term defined in the wavelet domain. We also derive an efficient half-quadratic algorithm to solve the resulting optimization problem, including the case when the data fidelity term is non-quadratic and the cost function is nonconvex. Numerical experiments show that our technique preserves edge sharpness while smoothing contour lines; it produces visually pleasing reconstructions which are quantitatively better than those obtained without wavelet-domain constraints.
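A half-quadratic loop for a 1-D edge-preserving penalty illustrates the general mechanism the abstract relies on; the wavelet-domain edge-field penalty of the paper is not included here, and the potential function phi and the weight beta are our own choices.

```python
import numpy as np

def half_quadratic_denoise(y, beta=2.0, eps=1e-6, iters=50):
    """Half-quadratic (IRLS-style) minimization of
       J(x) = ||y - x||^2 + beta * sum_i phi(x[i+1] - x[i]),
    with phi(t) = sqrt(t^2 + eps). Each outer step freezes the edge
    weights w = phi'(t) / (2 t) and solves the resulting quadratic
    subproblem exactly."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)             # finite-difference operator
    x = y.copy()
    for _ in range(iters):
        t = D @ x
        w = 1.0 / (2.0 * np.sqrt(t**2 + eps))  # half-quadratic weights
        # quadratic subproblem: (I + beta D^T W D) x = y
        x = np.linalg.solve(np.eye(n) + beta * D.T @ (w[:, None] * D), y)
    return x

rng = np.random.default_rng(0)
y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
x = half_quadratic_denoise(y)
print(np.round(x[48:52], 2))   # the step edge is preserved
```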