Abstract: This paper shows the need to increase the security
level of document management in the cadastral field by using specific
graphical watermarks. Graphical watermarking increases security in
cadastral content management; furthermore, the originality of any
document can later be validated by checking the graphic watermark. If
the document is altered for counterfeiting purposes for any reason, it
is invalidated and exposed as an illegal copy by the graphic check of
the watermark, a check made at the pixel level.
Abstract: A new fuzzy filter is presented for noise reduction of
images corrupted with additive noise. The filter consists of two
stages. In the first stage, all pixels of the image are processed to
identify noisy pixels. For this, a fuzzy rule-based system assigns a
degree to each pixel. The degree of a pixel is a real number in the
range [0, 1] that expresses how likely the pixel is to be noise-free.
In the second stage, another fuzzy rule-based system is employed. It
uses the output of the previous fuzzy system to perform fuzzy
smoothing by weighting the contributions of neighboring pixel values.
Experimental results are presented to show the feasibility of the
proposed filter. These results are also compared with those of other
filters by numerical measures and visual inspection.
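As an illustration of the second stage, here is a minimal Python sketch of fuzzy smoothing that weights each 3x3 neighbor by its noise-free degree from the first stage; the abstract does not give the rule base, so this simple weighted average is an assumption:

```python
import numpy as np

def fuzzy_smooth(image, noise_free_degree):
    """Replace each pixel by the average of its 3x3 neighborhood,
    weighted by the neighbors' noise-free degrees in [0, 1]."""
    h, w = image.shape
    out = image.astype(float).copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = image[y-1:y+2, x-1:x+2].astype(float)
            weights = noise_free_degree[y-1:y+2, x-1:x+2]
            if weights.sum() > 0:
                out[y, x] = (patch * weights).sum() / weights.sum()
    return out
```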
Abstract: Early detection of lung cancer through chest radiography is a widely used method due to its relatively affordable cost. In this paper, an approach to improve lung nodule visualization on chest radiographs is presented. The approach makes use of a linear-phase high-frequency emphasis filter for digital filtering and of
histogram equalization for contrast enhancement. The results obtained indicate that a filtered image can
reveal sharper edges and provide more details. In addition, contrast enhancement offers a way to further enhance the global (or local) visualization by equalizing the histogram of the pixel values within
the whole image (or a region of interest). The work aims to improve lung nodule visualization on chest radiographs to aid detection of lung cancer, which is currently the leading cause of cancer deaths worldwide.
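The two enhancement steps can be sketched as follows; the Gaussian-shaped emphasis filter and the parameters d0, a, and b are illustrative assumptions (a zero-phase frequency-domain filter is one way to obtain linear phase), not the paper's exact design:

```python
import numpy as np

def high_freq_emphasis(img, d0=30.0, a=0.5, b=1.5):
    """Zero-phase frequency-domain high-frequency emphasis:
    H(u, v) = a + b * (1 - Gaussian low-pass)."""
    f = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    h, w = img.shape
    y, x = np.ogrid[:h, :w]
    d2 = (y - h // 2) ** 2 + (x - w // 2) ** 2
    highpass = 1.0 - np.exp(-d2 / (2.0 * d0 ** 2))
    g = np.fft.ifft2(np.fft.ifftshift((a + b * highpass) * f))
    return np.real(g)

def hist_equalize(img):
    """Global histogram equalization for an 8-bit image."""
    img = np.clip(img, 0, 255).astype(np.uint8)
    cdf = np.bincount(img.ravel(), minlength=256).cumsum() / img.size
    return (255 * cdf[img]).astype(np.uint8)
```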
Abstract: Because of the increasing demands for security in today's
society, and because of the growing attention paid to machine vision,
biometric research, pattern recognition, and data retrieval in color
images, face detection has found ever more applications. In this
article we present a scientific approach to modeling human skin color
and offer an algorithm that detects faces within color images by
combining skin features with a threshold determined in the model. The
proposed model is based on statistical data in different color spaces.
Using a specified color threshold, the algorithm first divides image
pixels into two groups, skin pixels and non-skin pixels, and then,
based on geometric features of the face, decides which area belongs to
a face.
Two main results follow from this research: first, the proposed model
can easily be applied to different databases and color spaces to
establish a proper threshold; second, our algorithm can adapt itself
to runtime conditions, and its results show a marked improvement over
comparable methods.
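A minimal sketch of the skin/non-skin split by color threshold; the YCbCr box below uses commonly cited illustrative ranges, not the statistically derived thresholds of the paper:

```python
import numpy as np

def skin_mask(rgb):
    """Classify pixels as skin/non-skin with a fixed YCbCr threshold box.
    The Cb/Cr ranges below are common illustrative values only."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)
```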
Abstract: This paper compares the Hilditch, Rosenfeld, Zhang-
Suen, and Nagendraprasad-Wang-Gupta (NWG) thinning algorithms
for Javanese character image recognition. Thinning is an effective
process when the focus is not on the size of the pattern, but rather on
the relative position of the strokes in the pattern. The research
analyzes the thinning of 60 Javanese characters.
Time-wise, the Zhang-Suen algorithm gives the best results, with an
average processing time of 0.00455188 seconds. But if we look at
the percentage of pixels that meet one-pixel thickness, the Rosenfeld
algorithm gives the best results, with a 99.98% success rate. In terms
of the number of pixels erased, the NWG algorithm gives the best
results, with an average of 84.12% of pixels erased. It can
be concluded that the Hilditch algorithm performs least successfully
compared to the other three algorithms.
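For reference, a compact Python sketch of the Zhang-Suen algorithm, one of the four compared (1 = foreground; the two sub-iterations delete boundary pixels until the skeleton is one pixel thick):

```python
import numpy as np

def zhang_suen(img):
    """Zhang-Suen thinning on a binary image (1 = foreground)."""
    img = (img > 0).astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] == 0:
                        continue
                    # neighbors P2..P9, clockwise starting from north
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1],
                         img[y+1, x+1], img[y+1, x], img[y+1, x-1],
                         img[y, x-1], img[y-1, x-1]]
                    b = sum(p)                               # neighbor count
                    a = sum(p[i] == 0 and p[(i+1) % 8] == 1  # 0->1 transitions
                            for i in range(8))
                    if not (2 <= b <= 6 and a == 1):
                        continue
                    if step == 0 and p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0:
                        to_delete.append((y, x))
                    if step == 1 and p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0:
                        to_delete.append((y, x))
            for y, x in to_delete:
                img[y, x] = 0
            changed = changed or bool(to_delete)
    return img
```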
Abstract: Efficient classification methods are necessary for automatic fingerprint recognition systems. This paper introduces a new structural approach to fingerprint classification that uses the directional image of fingerprints to increase the number of subclasses. In this method, the directional image of a fingerprint is segmented into regions of pixels that share the same direction. Afterwards, the relational graph of the segmented image is constructed and, from it, a super graph containing the prominent information of this graph is formed. Finally, we apply a matching technique with a cost function to compare the obtained graph with the model graphs in order to classify fingerprints. The increased number of subclasses with acceptable classification accuracy, together with faster processing in fingerprint recognition, makes this system superior.
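The abstract does not state how the directional image is computed; a common gradient-based estimate is sketched below, with the block size as an assumption:

```python
import numpy as np

def directional_image(img, block=16):
    """Blockwise ridge orientation estimate from image gradients,
    using the standard structure-tensor double-angle formula."""
    gy, gx = np.gradient(img.astype(float))
    h, w = img.shape
    theta = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            sx = gx[i*block:(i+1)*block, j*block:(j+1)*block]
            sy = gy[i*block:(i+1)*block, j*block:(j+1)*block]
            theta[i, j] = 0.5 * np.arctan2(2 * (sx * sy).sum(),
                                           ((sx**2) - (sy**2)).sum())
    return theta
```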
Abstract: This paper presents a way of hiding a text message in a gray-scale image (steganography). The method first finds the binary value of each character of the text message. It then locates the dark (black) places of the gray image by converting the original image to a binary image and labeling each object of the image using 8-connectivity. These images are then converted to RGB images in order to find the dark places, because in this way each gray level maps to an RGB color and the dark level of the gray image can be identified; if the gray image is very light, the histogram must be adjusted manually so that only dark places are found. In the final stage, each group of 8 pixels in the dark places is treated as a byte, and the binary value of each character is put in the low bit of each such byte, thereby increasing the security of the basic least-significant-bit (LSB) steganography method.
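The final embedding step amounts to least-significant-bit substitution over the selected dark-region pixels; a minimal sketch (the dark-place selection itself is omitted):

```python
import numpy as np

def embed_lsb(pixels, message):
    """Write each message bit into the least significant bit of one pixel.
    `pixels` is a flat uint8 array of the selected (dark-region) pixels."""
    bits = np.unpackbits(np.frombuffer(message.encode("ascii"), dtype=np.uint8))
    assert bits.size <= pixels.size, "message too long for cover region"
    out = pixels.copy()
    out[:bits.size] = (out[:bits.size] & 0xFE) | bits
    return out
```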
Abstract: Our medicine-oriented research is based on a medical
data set of real patients. Sharing private patient data with people
other than clinicians or hospital staff is a security problem, so
personal identification information must be removed from the medical
data. After a de-identification process, the medical data without
private information become available for any research purposes. In
this paper, we introduce a universal automatic rule-based
de-identification application that performs all of this on
heterogeneous medical data. A patient's private identification is
replaced by a unique identification number, even in burned-in
annotations in pixel data. The same identification number is used for
all of a patient's medical data, so relationships within the data are
preserved. The hospital can then take advantage of research feedback
based on the results.
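A minimal sketch of the replacement step, assuming a keyed-hash mapping (HMAC-SHA256) so that the same patient always receives the same unique number; the paper does not specify how the number is generated:

```python
import hashlib
import hmac

SECRET_KEY = b"site-specific secret"  # assumption: held by the hospital only

def pseudonym(patient_id: str) -> str:
    """Map a patient identifier to a stable unique pseudonym so that all
    records of the same patient keep the same replacement number."""
    digest = hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:12]
```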
Abstract: In this paper, we propose a robust face relighting
technique using spherical space properties. The proposed method
aims to reduce illumination effects in face recognition.
Given a single 2D face image, we relight the face object by
extracting the nine spherical harmonic bases and the face spherical
illumination coefficients. First, an internal training illumination
database is generated by computing face albedo and face normal
from 2D images under different lighting conditions. Based on the
generated database, we analyze the target face pixels and compare
them with the training bootstrap by using pre-generated tiles. In this
work, practical real-time processing speed and small image size were
considered when designing the framework. In contrast to other works,
our technique requires no 3D face models for the training process
and takes a single 2D image as an input. Experimental results on
publicly available databases show that the proposed technique works
well under severe lighting conditions with significant improvements
on the face recognition rates.
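The nine spherical harmonic bases are presumably the first nine real spherical harmonics evaluated at the surface normals, as in standard low-dimensional illumination models; a sketch under that assumption (the normalization convention is also assumed):

```python
import numpy as np

def sh_basis9(normals):
    """First nine real spherical harmonics evaluated at unit normals
    n = [x, y, z] with shape (..., 3); standard normalization constants."""
    x, y, z = normals[..., 0], normals[..., 1], normals[..., 2]
    c0 = 0.5 / np.sqrt(np.pi)
    c1 = np.sqrt(3.0 / (4.0 * np.pi))
    c2 = np.sqrt(15.0 / (4.0 * np.pi))
    c3 = np.sqrt(5.0 / (16.0 * np.pi))
    c4 = np.sqrt(15.0 / (16.0 * np.pi))
    return np.stack([
        c0 * np.ones_like(x),         # l = 0
        c1 * y, c1 * z, c1 * x,       # l = 1
        c2 * x * y, c2 * y * z,       # l = 2, m = -2, -1
        c3 * (3 * z**2 - 1),          # l = 2, m = 0
        c2 * x * z,                   # l = 2, m = 1
        c4 * (x**2 - y**2),           # l = 2, m = 2
    ], axis=-1)
```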
Abstract: In this paper, we focus on the fusion of images from
different sources using multiresolution wavelet transforms. Based on
reviews of popular image fusion techniques used in data analysis,
different pixel- and energy-based methods are evaluated experimentally.
A novel architecture with a hybrid algorithm is proposed, which applies
a pixel-based maximum-selection rule to the low-frequency
approximations and filter-mask-based fusion to the high-frequency
details of the wavelet decomposition. The key feature of the hybrid
architecture is the combination of the advantages of pixel- and
region-based fusion in a single image, which can help the development
of sophisticated
algorithms enhancing the edges and structural details. A Graphical
User Interface is developed for image fusion to make the research
outcomes available to the end user. To utilize GUI capabilities for
medical, industrial, and commercial activities without a MATLAB
installation, a standalone executable application is also developed
using the MATLAB Compiler Runtime.
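A single-level sketch of the hybrid rule using PyWavelets; the maximum-absolute selection on the detail bands stands in for the paper's filter-mask fusion, which the abstract does not specify:

```python
import numpy as np
import pywt

def hybrid_fuse(img1, img2, wavelet="db2"):
    """Single-level wavelet fusion: pixel-wise maximum on the approximation
    band; maximum-absolute selection on the detail bands (a stand-in for
    the paper's filter-mask rule)."""
    a1, (h1, v1, d1) = pywt.dwt2(img1.astype(float), wavelet)
    a2, (h2, v2, d2) = pywt.dwt2(img2.astype(float), wavelet)
    approx = np.maximum(a1, a2)                  # pixel-based max rule
    fuse = lambda c1, c2: np.where(np.abs(c1) >= np.abs(c2), c1, c2)
    return pywt.idwt2((approx, (fuse(h1, h2), fuse(v1, v2), fuse(d1, d2))),
                      wavelet)
```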
Abstract: In this paper, we present an improved fast and robust
search algorithm for copy detection of short MPEG video clips from a
large video database, using histogram-based features. Two types of
histogram features are used to generate more robust features. The
first is based on the adjacent pixel intensity difference quantization
(APIDQ) algorithm, which had previously been reliably applied to human
face recognition. An APIDQ histogram is utilized as the feature vector
of the frame image. The other is an ordinal histogram feature, which
is robust to color distortion. Furthermore, by combining these with a
temporal division method, the spatial and temporal features of the
video sequence are integrated to realize fast and robust video search
for copy detection. Experimental results show that the proposed
algorithm can detect similar video clips more accurately and robustly
than conventional fast video search algorithms.
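A simplified reading of the APIDQ feature: quantize adjacent pixel intensity differences and histogram the codes. The published APIDQ quantizer is more elaborate, so the scalar quantizer here is an assumption:

```python
import numpy as np

def apidq_histogram(frame, n_bins=32, max_diff=255):
    """Simplified APIDQ-style frame feature: quantize the magnitudes of
    horizontal and vertical adjacent pixel intensity differences and
    accumulate a normalized histogram of the codes."""
    f = frame.astype(int)
    dx = np.abs(np.diff(f, axis=1)).ravel()
    dy = np.abs(np.diff(f, axis=0)).ravel()
    codes = (np.concatenate([dx, dy]) * n_bins) // (max_diff + 1)
    hist = np.bincount(codes, minlength=n_bins).astype(float)
    return hist / hist.sum()                 # normalized feature vector
```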
Abstract: In this paper, we propose an improved fast search
algorithm for short MPEG video clips from a large video database,
using combined histogram features and a temporal division method. Two
types of histogram features are used to generate more robust features.
The first is based on the adjacent pixel intensity difference
quantization (APIDQ) algorithm, which had previously been reliably
applied to human face recognition. An APIDQ histogram is utilized as
the feature vector of the frame image. The other is an ordinal
feature, which is robust to color distortion. Combined with active
search [4], a temporal pruning algorithm, fast and robust video search
can be realized. The proposed search algorithm has been evaluated on 6
hours of video by searching for 200 given MPEG video clips, each 30
seconds long. Experimental results show that the proposed algorithm
can detect a similar video clip in merely 120 ms with an Equal Error
Rate (EER) of 1%, which is more accurate and robust than the
conventional fast video search algorithm.
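The ordinal feature can be sketched as the rank order of block-mean intensities, which is what makes it robust to monotonic color distortion; the 3x3 block grid is an assumption:

```python
import numpy as np

def ordinal_feature(frame, grid=3):
    """Ordinal frame feature: partition the frame into grid x grid blocks,
    average each block, and use the rank order of the block means (rank
    permutations are invariant to monotonic intensity distortions)."""
    h, w = frame.shape
    bh, bw = h // grid, w // grid
    means = [frame[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
             for i in range(grid) for j in range(grid)]
    return np.argsort(np.argsort(means))     # ranks of the block means
```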
Abstract: This paper describes a novel and effective approach to content-based image retrieval (CBIR) that represents each image in the database by a vector of feature values called "standard deviation of mean vectors of the color distribution of rows and columns of images for CBIR". In many areas of commerce, government, academia, and hospitals, large collections of digital images are being created. This paper describes an approach that uses image content as the feature vector for retrieval of similar images. Several classes of features are used to specify queries: color, texture, shape, and spatial layout. Color features are often easily obtained directly from the pixel intensities. In this paper, feature extraction is done for the texture descriptors 'variance' and 'variance of variances'. First, the standard deviation of the row means and of the column means is calculated for the R, G, and B planes; these six values form one feature vector for the image. Second, we calculate the variance of each row and column of the R, G, and B planes of an image; the six standard deviations of these variance sequences then form a feature vector of dimension six. We applied our approach to a database of 300 BMP images and determined the capability of automatic indexing by analyzing image content, using color and texture as features and the Euclidean distance as the similarity measure.
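The first six-dimensional descriptor, as described, can be computed directly:

```python
import numpy as np

def row_col_std_feature(rgb):
    """Six-dimensional CBIR feature: for each of the R, G, B planes, the
    standard deviation of the row means and of the column means."""
    feats = []
    for c in range(3):
        plane = rgb[..., c].astype(float)
        feats.append(plane.mean(axis=1).std())   # std of row means
        feats.append(plane.mean(axis=0).std())   # std of column means
    return np.array(feats)
```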
Abstract: This paper presents a novel method for data hiding that uses neighborhood pixel information to calculate the number of bits that can be used for substitution, and a modified least-significant-bit (LSB) technique for data embedding. The modified solution is independent of the nature of the data to be hidden and gives correct results with unnoticeable image degradation. To find the number of bits available for data hiding, the technique uses the green component of the image, since the human eye is less sensitive to it, making it practically impossible for the human eye to detect whether the image carries hidden data. The application further encrypts the data using a custom-designed algorithm before embedding the bits into the image, for additional security. The overall process consists of three main modules, namely embedding, encryption, and extraction.
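How many bits a pixel can carry is derived from its neighborhood; the variance-to-bit-count mapping below is an illustrative assumption, not the paper's formula:

```python
import numpy as np

def embeddable_bits(green, y, x, max_bits=4):
    """Estimate how many LSBs of the green channel at (y, x) can carry data
    from the local 3x3 variance: busier neighborhoods hide more bits."""
    patch = green[y-1:y+2, x-1:x+2].astype(float)
    variance = patch.var()
    return int(min(max_bits, 1 + np.log2(1.0 + variance)))
```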
Abstract: In this paper, we construct and implement a new
steganography algorithm based on a learning system to hide a large
amount of information in a color BMP image. We use adaptive image
filtering and adaptive non-uniform image segmentation with bit
replacement on the appropriate pixels. These pixels are selected
randomly rather than sequentially, using a new concept defined by main
cases with sub-cases for each byte in one pixel. Following the design
steps, we derived 16 main cases with their sub-cases, which cover all
aspects of embedding the input information into a color bitmap image.
Four layers of security are proposed to make it difficult to break the
encryption of the input information and to confuse steganalysis. A
learning system is introduced at the fourth layer of security through
a neural network; this layer is used to increase the difficulty of
statistical attacks. Our results against statistical and visual
attacks are discussed before and after using the learning system, and
we compare them with a previous steganography algorithm. We show that
our algorithm can efficiently embed a large amount of information, up
to 75% of the image size (replacing at most 18 bits per pixel), with
high output quality.
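One of the security layers is the random, rather than sequential, selection of pixels; a sketch of a keyed pseudo-random visiting order (the seed-derivation scheme is an illustrative assumption):

```python
import hashlib
import numpy as np

def random_pixel_order(shape, key: str):
    """Visit the pixels of an H x W image in a keyed pseudo-random order
    rather than sequentially, so the payload cannot be located without
    the key."""
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest()[:4], "big")
    order = np.random.default_rng(seed).permutation(shape[0] * shape[1])
    return np.unravel_index(order, shape)    # (row, col) index arrays
```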
Abstract: In this paper, we propose a new image segmentation approach for colour textured images. The proposed method consists of two stages. In the first stage, textural features using the gray level co-occurrence matrix (GLCM) are computed for the regions of interest (ROI) considered for each class; the ROI acts as ground truth for the classes. The Ohta model (I1, I2, I3) is the colour model used for segmentation. The statistical mean feature of the I2 component at a certain inter-pixel distance (IPD) was found to be the optimal textural feature for further segmentation. In the second stage, the obtained feature matrix is assumed to be a degraded version of the image labels, and the unknown image labels are modeled as a Markov Random Field (MRF). The labels are estimated through the maximum a posteriori (MAP) criterion using the ICM algorithm. The performance of the proposed approach is compared with that of existing schemes: JSEG, and another scheme that uses GLCM and MRF in the RGB colour space. The proposed method is found to outperform the existing ones in terms of segmentation accuracy with an acceptable rate of convergence. The results are validated on synthetic and real textured images.
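The first-stage GLCM feature can be sketched with scikit-image; the 64-level quantization, the distance, and the 0-degree angle are assumptions:

```python
import numpy as np
from skimage.feature import graycomatrix

def glcm_mean_feature(i2_plane, distance=2):
    """Mean of the normalized GLCM of the (quantized) I2 component at one
    inter-pixel distance: sum over (i, j) of i * P(i, j)."""
    shifted = i2_plane - i2_plane.min()          # I2 = (R - B) / 2 may be < 0
    q = (shifted / max(shifted.max(), 1) * 63).astype(np.uint8)
    glcm = graycomatrix(q, distances=[distance], angles=[0],
                        levels=64, symmetric=True, normed=True)
    i, j = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
    return (i * glcm[:, :, 0, 0]).sum()
```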
Abstract: We introduce an effective approach for automatic offline authentication of handwritten samples where the forgeries are skillfully done, i.e., the true and forged samples look almost alike. Subtle details of the temporal information used in online verification are not available offline and are also hard to recover robustly. Thus spatial dynamic information such as the pen-tip pressure characteristics is considered, with emphasis on the extraction of low-density pixels. These points result from the ballistic rhythm of a genuine signature, which a forgery, however skillful it may be, always lacks. Ten effective features, including these low-density points and the density ratio, are proposed to distinguish between a true and a forged sample. An adaptive decision criterion is also derived for better verification judgements.
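A minimal sketch of a low-density-pixel measure: stroke pixels that are inked but faint serve as a proxy for low pen-tip pressure; both gray-level thresholds are assumptions:

```python
import numpy as np

def low_density_ratio(gray, ink_thresh=200, light_thresh=160):
    """Fraction of stroke pixels that are faint (low ink density), used
    here as a proxy for low pen-tip pressure."""
    stroke = gray < ink_thresh                 # any inked pixel
    faint = stroke & (gray > light_thresh)     # inked but light: low pressure
    return faint.sum() / max(stroke.sum(), 1)
```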
Abstract: In MPEG and H.26x standards, motion estimation is used to
eliminate temporal redundancy. Given that the motion estimation stage
is very complex in terms of computational effort, a hardware
implementation on a reconfigurable circuit is crucial for the
requirements of different real-time multimedia applications. In this
paper, we present a hardware architecture for motion estimation based
on the "Full Search Block Matching" (FSBM) algorithm. This
architecture achieves minimum latency, maximum throughput, and full
utilization of hardware resources such as embedded memory blocks by
combining pipelining and parallel processing techniques. Our design is
described in the VHDL language, verified by simulation, and
implemented on a Stratix II EP2S130F1020C4 FPGA circuit. The
experimental results show that the optimum operating clock frequency
of the proposed design is 89 MHz, which achieves 160 Mpixels/s.
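A behavioral Python reference of the FSBM algorithm that the hardware implements (block size and the ±8 search range are assumptions):

```python
import numpy as np

def full_search(cur_block, ref_frame, bx, by, search=8):
    """Full Search Block Matching: exhaustively test every displacement in
    [-search, +search] and return the motion vector minimizing the SAD."""
    n = cur_block.shape[0]
    best, mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + n > ref_frame.shape[0] \
                    or x + n > ref_frame.shape[1]:
                continue                        # candidate outside the frame
            sad = np.abs(cur_block.astype(int)
                         - ref_frame[y:y+n, x:x+n].astype(int)).sum()
            if sad < best:
                best, mv = sad, (dy, dx)
    return mv, best
```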
Abstract: In this paper, we introduce a novel algorithm for object tracking in video sequences. To represent the object to be tracked, we propose a spatial color histogram model that encodes both the color distribution and spatial information. Object tracking from frame to frame is accomplished via a center voting and back projection method. The center voting method has every pixel in the new frame cast a vote on where the object center is. The back projection method segments the object from the background. The segmented foreground provides information on object size and orientation, eliminating the need to estimate them separately. We make no assumption about camera motion; the proposed algorithm works equally well for object tracking in both static and moving camera videos.
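The back-projection step can be sketched with OpenCV; note that this plain hue-saturation histogram omits the spatial encoding that distinguishes the paper's model:

```python
import cv2

def hue_sat_backproject(roi_bgr, frame_bgr):
    """Segment the tracked object by back-projecting the hue-saturation
    histogram of the model ROI onto a new frame."""
    roi = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    frame = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return cv2.calcBackProject([frame], [0, 1], hist, [0, 180, 0, 256], 1)
```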
Abstract: Histogram equalization is often used in image enhancement, but it can also be used in auto exposure. However, conventional histogram equalization does not work well when many pixels are concentrated in a narrow luminance range. This paper proposes an auto exposure method based on 2-way histogram equalization. Two cumulative distribution functions are used, where one runs from dark to bright and the other from bright to dark. In this paper, the proposed auto exposure method is also designed and implemented for image signal processors with full-HD images.
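The two cumulative distribution functions can be constructed as below; how they drive the exposure target is not detailed in the abstract, so only their construction is shown:

```python
import numpy as np

def two_way_cdfs(gray):
    """Build the two cumulative distributions used by 2-way histogram
    equalization: one accumulated dark-to-bright, one bright-to-dark."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    hist /= hist.sum()
    cdf_dark_to_bright = np.cumsum(hist)              # from level 0 upward
    cdf_bright_to_dark = np.cumsum(hist[::-1])[::-1]  # from level 255 down
    return cdf_dark_to_bright, cdf_bright_to_dark
```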