Abstract: In this paper, a new robust and efficient algorithm for automatic text extraction from colored book and journal cover sheets is proposed. First, we perform a wavelet transform. Next, we use a dynamic threshold to detect edges from the detail wavelet coefficients. By blurring the approximation coefficients with an alternative heuristic thresholding, effective edges are obtained. Afterward, a binary image is produced with an ROI technique. Finally, text boxes are extracted with a new projection profile method.
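The final step of the abstract above, locating text boxes from a projection profile, can be sketched as follows. This is a minimal illustration of the generic technique, not the paper's specific method; the function name and thresholds are assumptions.

```python
import numpy as np

def text_line_bands(binary, min_count=1):
    """Locate text-line bands in a binary image (text pixels == 1)
    using a horizontal projection profile: rows whose text-pixel count
    reaches a threshold are grouped into contiguous bands."""
    profile = binary.sum(axis=1)          # text pixels per row
    active = profile >= min_count
    bands, start = [], None
    for r, on in enumerate(active):
        if on and start is None:
            start = r
        elif not on and start is not None:
            bands.append((start, r - 1))
            start = None
    if start is not None:
        bands.append((start, len(active) - 1))
    return bands

# Synthetic page: two "text lines" separated by blank rows.
page = np.zeros((12, 20), dtype=int)
page[2:4, 3:17] = 1
page[7:9, 2:18] = 1
print(text_line_bands(page))   # [(2, 3), (7, 8)]
```

A vertical profile (column sums) applied within each band would then delimit the individual text boxes.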
Abstract: Speed estimation is one of the important and practical tasks in machine vision, robotics, and mechatronics. The availability of high-quality, inexpensive video cameras and the increasing need for automated video analysis have generated a great deal of interest in machine vision algorithms. Numerous approaches to speed estimation have been proposed, so a classification and survey of these methods can be very useful. The goal of this paper is first to review and verify these methods. We then propose a novel algorithm to estimate the speed of a moving object using fuzzy concepts. There is a direct relation between motion blur parameters and object speed. In our new approach, we use the Radon transform to find the direction of the blurred image and fuzzy sets to estimate the motion blur length. The main benefit of this algorithm is its robustness and precision on noisy images. Our method was tested on many images over a wide range of SNR values and gave satisfactory results.
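The core observation above, that motion blur suppresses intensity changes along the blur direction, can be demonstrated with a deliberately simplified stand-in: comparing gradient energy along the two axes instead of the Radon transform the paper actually uses. All names here are illustrative.

```python
import numpy as np

def blur_direction(img):
    """Crude blur-direction cue: motion blur along an axis suppresses
    intensity changes along that axis, so the axis with lower gradient
    energy is the likely blur direction. (Simplified stand-in for the
    Radon-transform approach described in the abstract.)"""
    gx = np.diff(img, axis=1)            # horizontal changes
    gy = np.diff(img, axis=0)            # vertical changes
    ex, ey = (gx ** 2).sum(), (gy ** 2).sum()
    return "horizontal" if ex < ey else "vertical"

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
k = 9                                    # horizontal moving-average blur
kernel = np.ones(k) / k
blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"),
                              1, sharp)
print(blur_direction(blurred))           # horizontal
```

The Radon transform generalizes this idea to arbitrary angles by examining projections of the image (or its spectrum) over a full range of directions.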
Abstract: This work deals with unsupervised image deblurring.
We present a new deblurring procedure for images provided by low-resolution
synthetic aperture radar (SAR) or by multimedia devices in the
presence of multiplicative (speckle) or additive noise, respectively.
The method we propose is defined as a two-step process. First, we
use an original technique for noise reduction in wavelet domain.
Then, the learning of a Kohonen self-organizing map (SOM) is
performed directly on the denoised image to remove the blur. This
technique has been successfully applied to real SAR images, and the
simulation results are presented to demonstrate the effectiveness of
the proposed algorithms.
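The first step of the two-step process above, noise reduction in the wavelet domain, commonly amounts to thresholding detail coefficients. The sketch below shows one-level Haar denoising with soft thresholding on a 1-D signal; it is an illustration of the generic technique under additive noise, not the paper's SAR-specific method, and the threshold value is an assumption.

```python
import numpy as np

def haar_denoise(x, thr):
    """One-level Haar wavelet denoising: transform, soft-threshold the
    detail coefficients, inverse transform. Illustrative sketch only."""
    s2 = np.sqrt(2.0)
    a = (x[0::2] + x[1::2]) / s2          # approximation coefficients
    d = (x[0::2] - x[1::2]) / s2          # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)   # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / s2                # inverse Haar transform
    y[1::2] = (a - d) / s2
    return y

rng = np.random.default_rng(1)
clean = np.repeat([0.0, 1.0, 0.0, 2.0], 16)   # piecewise-constant signal
noisy = clean + 0.1 * rng.standard_normal(clean.size)
denoised = haar_denoise(noisy, thr=0.3)
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))  # True
```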
Abstract: Burst noise is a destructive kind of noise frequently found in semiconductor devices and ICs, yet detecting and removing it has proved challenging for IC designers and users. Based on the properties of burst noise, a methodological approach is proposed in this paper by which burst noise can be analysed and detected in the time domain. The principles and properties of burst noise are expounded first. Afterwards, the feasibility of burst noise detection by means of the wavelet transform in the time domain is corroborated, and the multi-resolution characteristics of Gaussian noise, burst noise, and blurred burst noise are discussed in detail through computer simulation. Furthermore, a practical method to decide the parameters of the wavelet transform is derived from extensive experiments and statistical analysis. The methodology shows promise in a wide variety of applications.
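The feasibility argument above rests on the fact that a short burst produces a large detail coefficient at its time location while stationary Gaussian noise does not. A minimal sketch of that idea, using first-level Haar detail coefficients (not the paper's specific wavelet or parameter choices):

```python
import numpy as np

def locate_burst(x):
    """Flag the strongest transient in a signal via first-level Haar
    detail coefficients: a burst produces a large within-pair
    difference. Simplified illustration of wavelet-domain detection."""
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    k = int(np.argmax(np.abs(d)))
    return 2 * k                       # sample index of the affected pair

rng = np.random.default_rng(2)
signal = 0.05 * rng.standard_normal(64)   # Gaussian background noise
signal[20] += 5.0                          # burst event
print(locate_burst(signal))                # 20
```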
Abstract: The purpose of this research is to compare the original
intra-oral digital dental radiograph images with images that are
enhanced using a combination of image processing algorithms. Intra-oral digital dental radiograph images are often noisy, have blurred edges, and are low in contrast. A combination of sharpening and enhancement methods is used to overcome these problems. The three proposed compound algorithms are Sharp Adaptive Histogram Equalization (SAHE), Sharp Median Adaptive Histogram Equalization (SMAHE), and Sharp Contrast Limited Adaptive Histogram Equalization (SCLAHE). This paper presents an initial study of the perception of six dentists regarding the details of abnormal pathologies and the improvement of image quality in ten intra-oral radiographs. The research focuses on the detection of only three types of pathology: periapical radiolucency, widened periodontal ligament space, and loss of lamina dura. The overall result shows that SCLAHE slightly improves the appearance of dental abnormalities over the original image and also outperforms the other two proposed compound algorithms.
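The "sharpen, then equalize" pipeline behind the compound algorithms above can be sketched as follows. To stay short, this uses an unsharp mask plus global histogram equalization as a stand-in for the adaptive (AHE/CLAHE) variants the paper studies; all function names and parameters are illustrative assumptions.

```python
import numpy as np

def unsharp(img, amount=1.0):
    """Sharpen with a 3x3 box-blur unsharp mask (edge-padded)."""
    p = np.pad(img.astype(float), 1, mode="edge")
    blur = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return img + amount * (img - blur)

def equalize(img):
    """Global histogram equalization of an 8-bit image. (The paper's
    methods are adaptive; global equalization keeps the sketch short.)"""
    img = np.clip(img, 0, 255).astype(np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / float(cdf[-1] - cdf_min) * 255)
    return lut[img].astype(np.uint8)

rng = np.random.default_rng(3)
radiograph = rng.integers(90, 160, size=(32, 32))   # low-contrast image
enhanced = equalize(unsharp(radiograph))
print(enhanced.min(), enhanced.max())               # 0 255
```

After equalization the intensities span the full 8-bit range, which is the contrast stretch the dentists were asked to evaluate in the adaptive versions.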
Abstract: This paper proposes a copyright protection scheme for color images using secret sharing and wavelet transform. The scheme contains two phases: the share image generation phase and the watermark retrieval phase. In the generation phase, the proposed scheme first converts the image into the YCbCr color space and creates a special sampling plane from the color space. Next, the scheme extracts the features from the sampling plane using the discrete wavelet transform. Then, the scheme employs the features and the watermark to generate a principal share image. In the retrieval phase, an expanded watermark is first reconstructed using the features of the suspect image and the principal share image. Next, the scheme reduces the additional noise to obtain the recovered watermark, which is then verified against the original watermark to examine the copyright. The experimental results show that the proposed scheme can resist several attacks such as JPEG compression, blurring, sharpening, noise addition, and cropping. The accuracy rates are all higher than 97%.
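The two-phase structure described above can be illustrated with a toy XOR-based sharing scheme: binary features of the host image combine with the watermark to form a share, so the watermark itself is never embedded in the image. This is a deliberately simplified stand-in; the paper derives its features from a DWT of a sampling plane in YCbCr space, and all names below are assumptions.

```python
import numpy as np

# Generation phase: combine host-image feature bits with the watermark
# to produce a principal share. Retrieval phase: recombine the features
# extracted from the (suspect) image with the share to recover the mark.
rng = np.random.default_rng(4)
features  = rng.integers(0, 2, size=(16, 16))   # stand-in feature bits
watermark = rng.integers(0, 2, size=(16, 16))   # binary watermark

share = features ^ watermark                     # share image generation
recovered = features ^ share                     # watermark retrieval
print(np.array_equal(recovered, watermark))      # True
```

Robustness in the real scheme comes from choosing DWT features that survive JPEG compression, blurring, and the other attacks listed, so the recovered watermark stays close to the original even when the suspect image is degraded.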
Abstract: Quality evaluation of an image is an important task in image processing applications. In the case of image compression, the quality of the decompressed image is also a criterion for evaluating a given coding scheme. In the compression-decompression process, various artifacts such as blocking artifacts, blur artifacts, and ringing or edge artifacts are observed. However, quantification of these artifacts is a difficult task. We propose here a novel method to quantify blur and ringing artifacts in an image.
Abstract: Extracting and elaborating software requirements and transforming them into a viable software architecture are still intricate tasks. This paper defines a solution architecture which is based on the blurred amalgamation of the problem space and the solution space. The dependencies between domain constraints, requirements, and architecture, and their importance, are described; these are to be considered collectively while evolving from the problem space to the solution space. This paper proposes a revised version of the Twin Peaks Model, named the Win Peaks Model, that reconciles software requirements and architecture in a more consistent and adaptable manner. Further, conflicts between stakeholders' win-requirements are resolved by the proposed voting methodology, which is a simple adaptation of the win-win requirements negotiation model and QARCC.
Abstract: Super-resolution image reconstruction recovers a high-resolution image from a set of shifted, blurred, and decimated images, and has therefore become an active research branch in the field of image restoration. In general, super-resolution image restoration is an ill-posed problem. Prior knowledge about the image can be incorporated to make the problem well-posed, which leads to regularization methods. In current regularization methods, however, the regularization parameter is in some cases selected by experience, while other techniques incur too heavy a computational cost in computing the parameter. In this paper, we construct a new super-resolution algorithm by transforming the solution of the original system into the solution of the matrix equation X + A*X^{-1}A = I, and propose an inverse iterative method.
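The matrix equation X + A*X^{-1}A = I admits a simple fixed-point ("inverse iterative") scheme, X_{k+1} = I - A^T X_k^{-1} A, which converges when ||A|| is sufficiently small. The sketch below shows this generic iteration, not the paper's exact algorithm; the norm scaling and iteration count are assumptions.

```python
import numpy as np

def solve_nme(A, iters=200):
    """Fixed-point iteration X_{k+1} = I - A^T X_k^{-1} A for the
    nonlinear matrix equation X + A^T X^{-1} A = I, starting from X = I.
    Converges for sufficiently small spectral norm of A."""
    n = A.shape[0]
    I = np.eye(n)
    X = I.copy()
    for _ in range(iters):
        X = I - A.T @ np.linalg.inv(X) @ A
    return X

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))
A *= 0.2 / np.linalg.norm(A, 2)              # scale so ||A||_2 = 0.2
X = solve_nme(A)
residual = np.linalg.norm(X + A.T @ np.linalg.inv(X) @ A - np.eye(4))
print(residual < 1e-8)                       # True
```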
Abstract: Aerial and satellite images are information rich, but they are also complex to analyze. For GIS systems, many features require fast and reliable extraction of roads and intersections. In this paper, we study efficient and reliable automatic extraction algorithms to address some difficult issues that are commonly seen in high-resolution aerial and satellite images yet not well addressed by existing solutions, such as blurring, broken or missing road boundaries, lack of road profiles, heavy shadows, and interfering surrounding objects. The new scheme is based on a new method, namely the reference circle, to properly identify the pixels that belong to the same road and use this information to recover the whole road network. This feature is invariant to the shape and direction of roads and tolerates heavy noise and disturbances. Road extraction based on reference circles is much more noise tolerant and flexible than previous edge-detection-based algorithms. The scheme is able to extract roads reliably from images with complex contents and heavy obstructions, such as the high-resolution aerial/satellite images available from Google Maps.
Abstract: High-power lasers produce an intense burst of
Bremsstrahlung radiation, which has potential applications in broadband
x-ray radiography. Since the radiation produced is through the
interaction of accelerated electrons with the remaining laser target,
these bursts are extremely short – in the region of a few ps. As a
result, the laser-produced x-rays are capable of imaging complex
dynamic objects with zero motion blur.
Abstract: The median filter is widely used to remove impulse noise
without blurring sharp edges. However, when the noise level increases,
or near thin edges, the median filter may work poorly. This paper
proposes a new filter, which detects edges along four possible
directions and then replaces each noise-corrupted pixel with an estimated
noise-free edge median value. Simulations show that the proposed
multi-stage directional median filter provides excellent
performance in suppressing impulse noise in all situations.
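The "four directions" idea above can be sketched as follows: for each suspected impulse pixel, examine the neighbours along four lines through it and take the median along the most uniform line, so that edges are not smeared. This is a minimal sketch of the directional-median concept, not the paper's multi-stage filter; the impulse test and tie-breaking are assumptions.

```python
import numpy as np

OFFSETS = {                      # four line directions through a pixel
    "horizontal": [(0, -1), (0, 1)],
    "vertical":   [(-1, 0), (1, 0)],
    "diag_main":  [(-1, -1), (1, 1)],
    "diag_anti":  [(-1, 1), (1, -1)],
}

def directional_median(img):
    """Replace suspected impulse pixels (value 0 or 255) with the median
    of the neighbours along the most uniform of four directions."""
    out = img.copy()
    h, w = img.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            if img[r, c] not in (0, 255):
                continue                  # keep pixels that look clean
            best = None
            for offs in OFFSETS.values():
                vals = [int(img[r + dr, c + dc]) for dr, dc in offs]
                spread = max(vals) - min(vals)
                if best is None or spread < best[0]:
                    best = (spread, int(np.median(vals)))
            out[r, c] = best[1]
    return out

img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255                           # salt impulse
print(directional_median(img)[2, 2])      # 100
```

Choosing the direction with the smallest spread keeps the median computation on the same side of an edge, which is why this family of filters preserves thin edges better than a plain square-window median.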