Skin Lesion Segmentation Using Color Channel Optimization and Clustering-based Histogram Thresholding

Automatic segmentation of skin lesions is the first step towards automated analysis of malignant melanoma. Although numerous segmentation methods have been developed, few studies have focused on determining the most effective color space for melanoma applications. This paper proposes an automatic segmentation algorithm based on color space analysis and clustering-based histogram thresholding, a process that is able to determine the optimal color channel for detecting lesion borders in dermoscopy images. The algorithm is tested on a set of 30 high-resolution dermoscopy images. A comprehensive evaluation of the results is provided, in which borders manually drawn by four dermatologists are compared with the automated borders detected by the proposed algorithm, using three previously used metrics (accuracy, sensitivity, and specificity) and a new similarity metric. ROC analysis and ranking of the metrics demonstrate that the best results are obtained with the X and XoYoR color channels, with an accuracy of approximately 97%. The proposed method is also compared with two state-of-the-art skin lesion segmentation methods.
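
As a rough illustration of clustering-based histogram thresholding (the abstract does not give the paper's exact clustering rule or the construction of the XoYoR channel), the sketch below applies Otsu's two-cluster histogram threshold to a single color channel; the channel choice, the "lesion is darker" assumption, and the synthetic image are illustrative only.

```python
import numpy as np

def otsu_threshold(channel):
    """Two-cluster histogram thresholding (Otsu): maximize between-class variance."""
    hist, _ = np.histogram(channel.ravel(), bins=256, range=(0, 256))
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                    # cluster-0 probability per candidate threshold
    mu = np.cumsum(p * np.arange(256))      # cumulative mean
    mu_t = mu[-1]
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan              # avoid division by zero at the extremes
    sigma_b = (mu_t * omega - mu) ** 2 / denom
    return int(np.nanargmax(sigma_b))

def segment_lesion(channel):
    """Keep pixels below the threshold, assuming lesions are darker than surrounding skin."""
    t = otsu_threshold(channel)
    return (channel < t).astype(np.uint8)

# Synthetic example: dark "lesion" disc on a brighter background.
img = np.full((128, 128), 200, dtype=np.uint8)
yy, xx = np.mgrid[:128, :128]
img[(yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2] = 60
print("lesion pixels:", int(segment_lesion(img).sum()))
```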

A Universal Model for Content-Based Image Retrieval

In this paper a novel approach for generalized image retrieval based on semantic contents is presented. It combines three feature extraction methods, namely color, texture, and the edge histogram descriptor, and provides for new features to be added in the future for better retrieval efficiency. Any combination of these methods that is most appropriate for the application can be used for retrieval; this is provided through the User Interface (UI) in the form of relevance feedback. The image properties are analyzed using computer vision and image processing algorithms: for color, the histograms of the images are computed; for texture, co-occurrence matrix based measures such as entropy and energy are calculated; and for edge density, the Edge Histogram Descriptor (EHD) is found. For retrieval of images, a greedy strategy is developed to reduce the computational complexity. The entire system was developed using AForge.Imaging (an open source product), MATLAB .NET Builder, C#, and Oracle 10g. The system was tested with the Corel image database containing 1000 natural images and achieved good retrieval results.
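
A minimal sketch of two of the feature types the abstract names: a joint color histogram, and entropy/energy from a gray-level co-occurrence matrix. The bin counts and the horizontal co-occurrence offset are assumptions; the EHD, the relevance-feedback UI, and the greedy retrieval strategy are not reproduced here.

```python
import numpy as np

def color_histogram(rgb, bins=8):
    """Joint RGB histogram, normalized and flattened into a feature vector."""
    h, _ = np.histogramdd(rgb.reshape(-1, 3), bins=(bins, bins, bins),
                          range=((0, 256), (0, 256), (0, 256)))
    return (h / h.sum()).ravel()

def cooccurrence_features(gray, levels=16):
    """Entropy and energy of a horizontal-offset gray-level co-occurrence matrix."""
    q = gray.astype(int) * levels // 256          # quantize to 'levels' gray levels
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    p = glcm / glcm.sum()
    nz = p[p > 0]
    entropy = -np.sum(nz * np.log2(nz))
    energy = np.sum(p ** 2)
    return np.array([entropy, energy])

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
gray = img.mean(axis=2).astype(np.uint8)
feature = np.concatenate([color_histogram(img), cooccurrence_features(gray)])
print(feature.shape)   # color histogram (512) + texture measures (2)
```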

Abrupt Scene Change Detection

A number of automated shot-change detection methods for indexing a video sequence to facilitate browsing and retrieval have been proposed in recent years. This paper focuses on the simulation of video shot boundary detection using a color histogram method in which scaling of the histogram metric is an added feature. The difference between the histograms of two consecutive frames is evaluated to obtain the metric, which is then scaled to avoid ambiguity and to enable the choice of an appropriate threshold for any type of video, with only minor errors due to flashlights, camera motion, etc. Two sample uncompressed videos with a resolution of 352 x 240 pixels are used with the color histogram approach, and retrieval of color video is attempted. The simulation, performed for abrupt changes in the video, yields 90% recall and precision.
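
A small sketch of the histogram-difference step under stated assumptions: the paper's bin count, scaling rule, and threshold are not given in the abstract, so the values below (64 bins per channel, L1 difference scaled to [0, 1]) are illustrative only.

```python
import numpy as np

def frame_histogram(frame, bins=64):
    """Per-channel histogram of an RGB frame, concatenated into one vector."""
    return np.concatenate([
        np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)

def histogram_difference(prev_frame, cur_frame, bins=64):
    """L1 histogram difference scaled to [0, 1], so one threshold can serve many videos."""
    h1 = frame_histogram(prev_frame, bins)
    h2 = frame_histogram(cur_frame, bins)
    n_pixels = prev_frame.shape[0] * prev_frame.shape[1]
    return np.abs(h1 - h2).sum() / (2.0 * 3 * n_pixels)   # 0 = identical, 1 = disjoint

def detect_cuts(frames, threshold=0.5):
    """Indices where the consecutive-frame histogram difference exceeds the threshold."""
    return [i for i in range(1, len(frames))
            if histogram_difference(frames[i - 1], frames[i]) > threshold]

# Toy sequence: five dark frames followed by five bright frames -> one abrupt cut.
scene_a = [np.full((240, 352, 3), 30, dtype=np.uint8)] * 5
scene_b = [np.full((240, 352, 3), 220, dtype=np.uint8)] * 5
print(detect_cuts(scene_a + scene_b))   # expected: [5]
```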

Scene Adaptive Shadow Detection Algorithm

Robustness is one of the primary performance criteria for an Intelligent Video Surveillance (IVS) system. One of the key factors in enhancing the robustness of dynamic video analysis is providing accurate and reliable means for shadow detection. If left undetected, shadow pixels may result in incorrect object tracking and classification, as they tend to distort localization and measurement information. Most of the algorithms proposed in the literature are computationally expensive, some to the extent of equalling the computational requirements of motion detection itself. In this paper, the homogeneity property of shadows is explored in a novel way for shadow detection. An adaptive division image (which highlights the homogeneity property of shadows) analysis, followed by a relatively simple projection histogram analysis for penumbra suppression, is the key novelty of our approach.
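
A minimal sketch of a division-image test, assuming a background-subtraction setting: cast shadows darken the background roughly uniformly, so the ratio of the current frame to a background model stays within a band below 1. The band limits are assumptions, and the adaptive thresholding and projection-histogram penumbra step from the paper are not reproduced.

```python
import numpy as np

def division_image(frame, background, eps=1e-6):
    """Pixel-wise ratio of the current frame to the background model."""
    return frame.astype(float) / (background.astype(float) + eps)

def shadow_mask(frame, background, low=0.4, high=0.9):
    """Candidate shadow pixels: darker than the background by a roughly constant factor."""
    ratio = division_image(frame, background)
    return (ratio > low) & (ratio < high)

# Toy example: background of value 100, one region dimmed to 60 (shadow),
# one region replaced by value 200 (a moving object).
background = np.full((100, 100), 100, dtype=np.uint8)
frame = background.copy()
frame[20:40, 20:40] = 60     # shadowed area: ratio 0.6
frame[60:80, 60:80] = 200    # foreground object: ratio 2.0
print("shadow pixels detected:", int(shadow_mask(frame, background).sum()))   # 400
```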

Matching-Based Cercospora Leaf Spot Detection in Sugar Beet

In this paper, we propose a robust disease detection method, called adaptive orientation code matching (adaptive OCM), which is developed from a robust image registration algorithm, orientation code matching (OCM), to achieve continuous and site-specific detection of changes in plant disease. We use a two-stage framework. In the first stage, adaptive OCM is employed, which not only realizes continuous and site-specific observation of disease development but also shows excellent robustness for non-rigid plant object searching under changes in scene illumination, translation, small rotation and occlusion. In the second stage, a machine learning method, a support vector machine (SVM) based on a two-dimensional (2D) xy-color histogram feature, is used for pixel-wise disease classification and quantification. Indoor experimental results demonstrate the feasibility and potential of the proposed algorithm, which could be implemented in real field situations for better observation of plant disease development.
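
A sketch of one plausible reading of the 2D xy-color histogram feature, assuming CIE xy chromaticity computed from RGB with the standard sRGB-to-XYZ matrix (gamma handling omitted); the bin count is an assumption, and the adaptive OCM stage and SVM training are not reproduced.

```python
import numpy as np

# Standard sRGB (linear) to CIE XYZ conversion matrix.
RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                    [0.2126, 0.7152, 0.0722],
                    [0.0193, 0.1192, 0.9505]])

def xy_chromaticity(rgb):
    """Convert RGB pixels to CIE xy chromaticity coordinates."""
    flat = rgb.reshape(-1, 3).astype(float) / 255.0
    xyz = flat @ RGB2XYZ.T
    s = xyz.sum(axis=1, keepdims=True)
    s[s == 0] = 1.0
    return xyz[:, :2] / s                    # x = X/(X+Y+Z), y = Y/(X+Y+Z)

def xy_histogram(rgb, bins=16):
    """2D histogram over xy chromaticity, normalized; a per-region feature for a classifier."""
    xy = xy_chromaticity(rgb)
    h, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=bins, range=((0, 1), (0, 1)))
    return (h / max(h.sum(), 1)).ravel()

patch = np.random.default_rng(2).integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
print(xy_histogram(patch).shape)             # (256,) feature vector, e.g. for an SVM
```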

Resource Leveling in Construction Projects using Re-Modified Minimum Moment Approach

This paper proposes a re-modification of the minimum moment approach to resource leveling, which is itself a modified version of the traditional minimum moment method by Harris. The method is based on the critical path method. The approaches differ in the criterion used to select the activity to be shifted when leveling the resource histogram. In the traditional method, the improvement factor is computed first to select the activity for each possible day of shifting. In the modified method, the maximum value of the product of resource rate and free float is found first, and the improvement factor is then calculated only for the activity to be shifted. In the proposed method, the activity to be shifted is selected first on the basis of the largest resource rate. The process is repeated for all remaining activities until the updated histogram is obtained. The proposed method significantly reduces the number of iterations and is easier for manual computation.
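
A tiny sketch contrasting the two selection criteria described above on a hypothetical activity list; the full shifting loop, the improvement-factor computation, and the resulting histogram update are not reproduced.

```python
# Each activity: (name, resource_rate, free_float_days) -- illustrative values only.
activities = [("A", 4, 2), ("B", 7, 1), ("C", 5, 6)]

# Modified minimum moment approach: pick the activity with the largest product of
# resource rate and free float, then compute its improvement factor for shifting.
modified_pick = max(activities, key=lambda a: a[1] * a[2])

# Proposed re-modified approach: pick the activity with the largest resource rate first.
proposed_pick = max(activities, key=lambda a: a[1])

print("modified method selects:", modified_pick[0])   # C (5 * 6 = 30)
print("proposed method selects:", proposed_pick[0])   # B (rate 7)
```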

Burstiness Reduction of a Doubly Stochastic AR-Modeled Uniform Activity VBR Video

Stochastic modeling of network traffic is an area of significant research activity for current and future broadband communication networks. Multimedia traffic is statistically characterized by a bursty variable bit rate (VBR) profile. In this paper, we develop an improved model for uniform-activity-level video sources in ATM using a doubly stochastic autoregressive model driven by an underlying spatial point process. We then examine a number of burstiness metrics, such as the peak-to-average ratio (PAR), the temporal autocovariance function (ACF) and the traffic measurement histogram, and find that the PAR is the most suitable for capturing the burstiness of single-scene video traffic. In the last phase of this work, we analyze statistical multiplexing of several constant-scene video sources. This proves, as expected, to be advantageous in reducing the burstiness of the traffic, as long as the sources are statistically independent. We observe that the burstiness diminishes rapidly, with the largest gain occurring when only around 5 sources are multiplexed. The model used in this paper is thus found to characterize uniform-activity video accurately.
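
The burstiness metrics named above are standard; a short sketch of the peak-to-average ratio and the autocovariance function for a bit-rate trace is given below. The synthetic trace and lag range are illustrative and do not reproduce the paper's doubly stochastic AR model.

```python
import numpy as np

def peak_to_average_ratio(trace):
    """PAR of a bit-rate trace: peak rate divided by mean rate."""
    return float(np.max(trace) / np.mean(trace))

def autocovariance(trace, max_lag):
    """Temporal autocovariance of the trace for lags 0..max_lag."""
    x = trace - trace.mean()
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])

# Synthetic VBR-like trace: smooth base rate plus occasional bursts.
rng = np.random.default_rng(3)
base = 2.0 + 0.2 * rng.standard_normal(1000)
bursts = (rng.random(1000) < 0.02) * rng.uniform(3, 6, 1000)
trace = base + bursts

print("PAR:", round(peak_to_average_ratio(trace), 2))
print("ACF (first 3 lags):", np.round(autocovariance(trace, 2), 3))
```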

Shannon-Weaver Biodiversity of Neutrophils in Fractal Networks of Immunofluorescence for Medical Diagnostics

We develop new nonlinear methods of immunofluorescence analysis for a sensitive technology of the respiratory burst reaction of DNA fluorescence due to oxidative activity in peripheral blood neutrophils. Histograms in flow cytometry experiments represent the frequency of fluorescence flashes as a function of fluorescence intensity. We use the Shannon-Weaver index to quantify the biodiversity of neutrophils and the Hurst index to quantify fractal correlations in immunofluorescence for different donors, as the basic quantitative criteria for medical diagnostics of health status. We analyze flash frequencies, information, Shannon entropies and their fractals in immunofluorescence networks as the histogram range is reduced. We find a number of simple universal correlations between biodiversity, information and the Hurst index in the diagnostics and classification of pathologies across a wide spectrum of diseases. In addition, a clear criterion for general immunity and human health status is determined in the form of yes/no answers, based on the peculiarities of information in immunofluorescence networks and the biodiversity of neutrophils. Experimental data analysis shows the existence of homeostasis of information entropy in the oxidative activity of DNA in neutrophil nuclei for all donors.
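
A minimal sketch of the two quantitative criteria named above, computed from a fluorescence-intensity histogram: the Shannon-Weaver (Shannon entropy) index over histogram bins, and a simple rescaled-range (R/S) estimate of the Hurst exponent over the bin series. The synthetic histogram and the R/S window sizes are assumptions.

```python
import numpy as np

def shannon_weaver_index(counts):
    """Shannon-Weaver diversity H' = -sum(p_i * ln p_i) over histogram bins."""
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def hurst_rs(series, window_sizes=(8, 16, 32, 64)):
    """Rescaled-range (R/S) estimate of the Hurst exponent for a 1D series."""
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_values = []
        for start in range(0, len(series) - n + 1, n):
            w = series[start:start + n]
            dev = np.cumsum(w - w.mean())
            r = dev.max() - dev.min()
            s = w.std()
            if s > 0:
                rs_values.append(r / s)
        if rs_values:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_values)))
    return float(np.polyfit(log_n, log_rs, 1)[0])   # slope of log(R/S) vs log(n)

rng = np.random.default_rng(4)
histogram = rng.poisson(lam=50, size=256).astype(float)   # flashes per intensity bin
print("Shannon-Weaver index:", round(shannon_weaver_index(histogram), 3))
print("Hurst exponent (R/S):", round(hurst_rs(histogram), 3))
```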

Fast Search for MPEG Video Clips Using Adjacent Pixel Intensity Difference Quantization Histogram Feature

In this paper, we propose a novel fast search algorithm for short MPEG video clips in a video database. The algorithm is based on the adjacent pixel intensity difference quantization (APIDQ) algorithm, which had previously been applied reliably to human face recognition. An APIDQ histogram is used as the feature vector of each frame image. Instead of fully decompressed video frames, partially decoded data, namely DC images, are used. Combined with active search [4], a temporal pruning algorithm, fast and robust video search can be realized. The proposed search algorithm has been evaluated on 6 hours of video by searching for 200 given MPEG video clips, each 15 seconds long. Experimental results show that the proposed algorithm can detect a similar video clip in merely 80 ms with an Equal Error Rate (EER) of 3%, which is more accurate and robust than a conventional fast video search algorithm.
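
The abstract does not spell out the APIDQ quantization rule, so the sketch below is a simplified reading: adjacent-pixel intensity differences (horizontal and vertical) of a DC-image-sized frame are quantized into a few levels and accumulated into a histogram used as the frame feature. The quantization levels are assumptions.

```python
import numpy as np

def apidq_histogram(gray, levels=(-64, -16, -4, 0, 4, 16, 64)):
    """Simplified APIDQ-style feature: quantize adjacent-pixel differences, histogram the codes."""
    g = gray.astype(int)
    dx = (g[:, 1:] - g[:, :-1]).ravel()           # horizontal adjacent differences
    dy = (g[1:, :] - g[:-1, :]).ravel()           # vertical adjacent differences
    diffs = np.concatenate([dx, dy])
    codes = np.digitize(diffs, levels)            # quantize differences into bins
    hist = np.bincount(codes, minlength=len(levels) + 1).astype(float)
    return hist / hist.sum()                      # normalized feature vector of the frame

# Example on a small synthetic "DC image" (MPEG DC coefficients are roughly 8x8-block means).
dc_image = np.random.default_rng(5).integers(0, 256, size=(30, 40), dtype=np.uint8)
feature = apidq_histogram(dc_image)
print(feature.shape, round(float(feature.sum()), 3))
```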

Human Detection using Projected Edge Feature

The purpose of this paper is to detect humans in images. We propose a method for extracting human body feature descriptors consisting of projected edge component series. The feature descriptor expresses the appearance and shape of a human through the local and global distribution of edges. Our method is evaluated with a linear SVM classifier on the Daimler-Chrysler pedestrian dataset and tested with various sub-region sizes. The results show that the accuracy of the proposed method is similar to that of the Histogram of Oriented Gradients (HOG) feature descriptor, while the feature extraction process is simpler and faster than existing methods.
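
A sketch of one way to build a projected-edge descriptor consistent with the description above: compute an edge-magnitude map, split the detection window into sub-regions, and project (sum) the edge energy of each sub-region onto its rows and columns. The grid size and gradient operator are assumptions; the linear SVM is not reproduced.

```python
import numpy as np

def edge_magnitude(gray):
    """Simple gradient-magnitude edge map (central differences)."""
    g = gray.astype(float)
    gx = np.zeros_like(g); gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]
    gy[1:-1, :] = g[2:, :] - g[:-2, :]
    return np.hypot(gx, gy)

def projected_edge_feature(gray, grid=(4, 2)):
    """Concatenate row- and column-projections of edge magnitude over a grid of sub-regions."""
    mag = edge_magnitude(gray)
    h, w = mag.shape
    rows, cols = grid
    parts = []
    for i in range(rows):
        for j in range(cols):
            cell = mag[i * h // rows:(i + 1) * h // rows,
                       j * w // cols:(j + 1) * w // cols]
            parts.append(cell.sum(axis=1))   # projection onto rows
            parts.append(cell.sum(axis=0))   # projection onto columns
    feat = np.concatenate(parts)
    norm = np.linalg.norm(feat)
    return feat / norm if norm > 0 else feat

window = np.random.default_rng(6).integers(0, 256, size=(96, 48), dtype=np.uint8)
print(projected_edge_feature(window).shape)   # one descriptor per detection window
```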

Real-Time Hand Tracking and Gesture Recognition System Using Neural Networks

This paper introduces a hand gesture recognition system that recognizes gestures in real time in unconstrained environments. Efforts should be made to adapt computers to our natural means of communication: speech and body language. A simple and fast algorithm using orientation histograms is developed; it recognizes a subset of MAL static hand gestures. The pattern recognition system uses a transform that converts an image into a feature vector, which is then compared with the feature vectors of a training set of gestures. The final system is a perceptron implementation in MATLAB. The paper includes experiments on 33 hand postures and discusses the results. The experiments show that the system achieves an average recognition rate of 90% and is suitable for real-time applications.
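
A minimal sketch of the orientation-histogram feature described above, assuming gradient orientations weighted by gradient magnitude and a fixed bin count; the perceptron classifier and the gesture set are not reproduced, and a nearest-neighbor comparison stands in for the trained network.

```python
import numpy as np

def orientation_histogram(gray, bins=36):
    """Histogram of gradient orientations, weighted by gradient magnitude."""
    g = gray.astype(float)
    gx = np.zeros_like(g); gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]
    gy[1:-1, :] = g[2:, :] - g[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)            # orientation in [0, 2*pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist              # feature vector for the classifier

def match(feature, training_features):
    """Nearest training gesture by Euclidean distance (a perceptron would replace this)."""
    return int(np.argmin([np.linalg.norm(feature - t) for t in training_features]))

frame = np.random.default_rng(7).integers(0, 256, size=(120, 160), dtype=np.uint8)
print(orientation_histogram(frame).shape)
```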

Study of Effect of Removal of Shiftrows and Mixcolumns Stages of AES and AES-KDS on their Encryption Quality and Hence Security

This paper investigates what happens when the ShiftRows stage, the MixColumns stage, or both are omitted from the well-known block cipher Advanced Encryption Standard (AES) and from its modified version, AES with Key-Dependent S-box (AES-KDS). The resulting ciphers are assessed using the avalanche criterion and other tests, namely encryption quality, correlation coefficient, histogram analysis and key sensitivity tests.
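
Two of the named tests are simple to state; the sketch below computes the correlation coefficient between a plaintext image and a ciphertext image, and a chi-square statistic of the ciphertext histogram (near-zero correlation and a flat histogram indicate good encryption quality). The ciphers themselves and the avalanche and key-sensitivity tests are not reproduced; random data stands in for real AES output.

```python
import numpy as np

def correlation_coefficient(plain, cipher):
    """Pearson correlation between plaintext and ciphertext pixels (near 0 is desirable)."""
    return float(np.corrcoef(plain.ravel().astype(float),
                             cipher.ravel().astype(float))[0, 1])

def histogram_chi_square(cipher):
    """Chi-square of the ciphertext histogram against a uniform histogram (small means flat)."""
    hist, _ = np.histogram(cipher.ravel(), bins=256, range=(0, 256))
    expected = cipher.size / 256.0
    return float(np.sum((hist - expected) ** 2 / expected))

rng = np.random.default_rng(8)
plain = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cipher = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in for cipher output
print("correlation:", round(correlation_coefficient(plain, cipher), 4))
print("histogram chi-square:", round(histogram_chi_square(cipher), 1))
```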

Low Resolution Single Neural Network Based Face Recognition

This paper deals with the implementation of face recognition using a neural network (recognition classifier) on low-resolution images. The proposed system contains two parts, preprocessing and face classification. The preprocessing part converts the original images into blurred images using an average filter and equalizes their histograms (lighting normalization). A bi-cubic interpolation function is then applied to the equalized image to obtain a resized, low-resolution image, which allows faster training and testing. The preprocessed image becomes the input to the neural network classifier, which uses the back-propagation algorithm to recognize familiar faces. The crux of the proposed algorithm is the use of a single neural network as the classifier, which gives a straightforward approach to face recognition. The single neural network consists of three layers with log-sigmoid, hyperbolic tangent sigmoid and linear transfer functions, respectively. The training function used in our work is gradient descent with momentum and adaptive learning rate back-propagation. The proposed algorithm was trained on the ORL (Olivetti Research Laboratory) database with 5 training images. The empirical results give accuracies of 94.50%, 93.00% and 90.25% for 20, 30 and 40 subjects, respectively, with a time delay of 0.0934 sec per image.
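
A sketch of the preprocessing chain described above using OpenCV, assuming a plausible kernel size and target resolution (the paper's exact filter size and low-resolution dimensions are not given in the abstract); the back-propagation classifier itself is not reproduced.

```python
import cv2
import numpy as np

def preprocess_face(gray, blur_ksize=(3, 3), target_size=(16, 16)):
    """Average-filter blur, histogram equalization, then bicubic down-sampling."""
    blurred = cv2.blur(gray, blur_ksize)                   # average (box) filter
    equalized = cv2.equalizeHist(blurred)                  # lighting normalization
    resized = cv2.resize(equalized, target_size,
                         interpolation=cv2.INTER_CUBIC)    # bicubic to low resolution
    return resized.astype(np.float32).ravel() / 255.0      # input vector for the network

# Example with a synthetic face-sized image (ORL images are 92x112 grayscale).
face = np.random.default_rng(9).integers(0, 256, size=(112, 92), dtype=np.uint8)
print(preprocess_face(face).shape)   # (256,) input to the neural network classifier
```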

An Adaptive Mammographic Image Enhancement in Orthogonal Polynomials Domain

X-ray mammography is the most effective method for the early detection of breast diseases. However, typical diagnostic signs such as microcalcifications and masses are difficult to detect because mammograms are low-contrast and noisy. In this paper, a new algorithm for image denoising and enhancement in the Orthogonal Polynomials Transformation (OPT) domain is proposed to help radiologists screen mammograms. In this method, a set of OPT edge coefficients is scaled to a new set by a scale factor called the OPT scale factor. The new set of coefficients is then inverse transformed, resulting in a contrast-improved image. Applications of the proposed method to mammograms with subtle lesions are shown. To validate the effectiveness of the proposed method, we compare the results with those obtained by the Histogram Equalization (HE) and Unsharp Masking (UM) methods. Our preliminary results strongly suggest that the proposed method offers considerably improved enhancement capability over the HE and UM methods.

Improved Lung Nodule Visualization on Chest Radiographs using Digital Filtering and Contrast Enhancement

Early detection of lung cancer through chest radiography is widely used due to its relatively affordable cost. In this paper, an approach to improve lung nodule visualization on chest radiographs is presented. The approach uses a linear-phase high-frequency emphasis filter for digital filtering and histogram equalization for contrast enhancement. The results obtained indicate that a filtered image can reveal sharper edges and provide more detail. In addition, contrast enhancement offers a way to further enhance global (or local) visualization by equalizing the histogram of the pixel values within the whole image (or a region of interest). The work aims to improve lung nodule visualization on chest radiographs to aid the detection of lung cancer, which is currently the leading cause of cancer deaths worldwide.
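
A rough sketch of the two steps under stated assumptions: a frequency-domain high-frequency emphasis filter (a Gaussian high-pass scaled and offset, applied with the FFT, which is zero-phase rather than the paper's linear-phase design) followed by global histogram equalization. The cutoff and emphasis constants are illustrative only.

```python
import numpy as np

def high_frequency_emphasis(gray, d0=30.0, a=0.5, b=1.5):
    """Filter H = a + b * (1 - Gaussian low-pass): keeps some low frequencies, boosts high ones."""
    rows, cols = gray.shape
    u = np.fft.fftfreq(rows).reshape(-1, 1) * rows
    v = np.fft.fftfreq(cols).reshape(1, -1) * cols
    d2 = u ** 2 + v ** 2
    h = a + b * (1.0 - np.exp(-d2 / (2.0 * d0 ** 2)))
    filtered = np.real(np.fft.ifft2(np.fft.fft2(gray.astype(float)) * h))
    filtered -= filtered.min()
    return (255.0 * filtered / max(filtered.max(), 1e-6)).astype(np.uint8)

def histogram_equalization(gray):
    """Global histogram equalization via the cumulative distribution function."""
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0) * 255.0
    return cdf[gray].astype(np.uint8)

radiograph = np.random.default_rng(10).integers(0, 256, size=(256, 256), dtype=np.uint8)
enhanced = histogram_equalization(high_frequency_emphasis(radiograph))
print(enhanced.dtype, enhanced.shape)
```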

Labeling Method in Steganography

In this paper a method of hiding a text message in a gray-scale image (steganography) is presented. The method first finds the binary value of each character of the text message. It then locates the dark (black) regions of the gray-scale image by converting the original image to a binary image and labeling each object in the image using 8-connectivity. These images are then converted to RGB images in order to find the dark places, because each gray level maps to an RGB color and the dark level of the gray image can be found in this way; if the gray image is very light, the histogram must be adjusted manually so that only dark places are found. In the final stage, each group of 8 dark-place pixels is treated as a byte, and the binary value of each character is placed in the low bit of each byte formed from the dark-place pixels, thereby increasing the security of the basic steganography method (LSB).
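
A minimal sketch of the final embedding step under stated assumptions: dark pixels are found with a fixed intensity threshold rather than the paper's object-labeling procedure, and each message bit is written to the least significant bit of one dark pixel, with a matching extraction routine.

```python
import numpy as np

def embed_in_dark_pixels(gray, message, dark_threshold=50):
    """Hide message bits in the LSBs of dark pixels (scanned in row-major order)."""
    bits = [int(b) for ch in message.encode("ascii") for b in format(ch, "08b")]
    stego = gray.copy()
    flat = stego.ravel()                               # view into stego
    dark_idx = np.flatnonzero(flat < dark_threshold)
    if len(dark_idx) < len(bits):
        raise ValueError("not enough dark pixels to hold the message")
    for bit, idx in zip(bits, dark_idx):
        flat[idx] = (flat[idx] & 0xFE) | bit           # overwrite the least significant bit
    return stego

def extract_from_dark_pixels(stego, n_chars, dark_threshold=50):
    """Read back n_chars ASCII characters from the LSBs of dark pixels."""
    flat = stego.ravel()
    dark_idx = np.flatnonzero(flat < dark_threshold)
    bits = [int(flat[idx] & 1) for idx in dark_idx[:n_chars * 8]]
    return "".join(chr(int("".join(map(str, bits[i:i + 8])), 2))
                   for i in range(0, len(bits), 8))

cover = np.random.default_rng(11).integers(0, 256, size=(64, 64), dtype=np.uint8)
stego = embed_in_dark_pixels(cover, "hi")
print(extract_from_dark_pixels(stego, 2))   # expected: "hi"
```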

Photograph Based Pair-matching Recognition of Human Faces

In this paper, a novel system for pair-matching recognition of human faces from different color photographs is proposed. It mainly consists of face detection, normalization and recognition. A method combining Haar-like face detection with region-based histogram stretching (RHST) segmentation is proposed to achieve more accurate performance than using Haar features alone. Apart from an effective angle normalization, a side-face (pose) normalization, which might be important and beneficial for the preprocessing stage, is introduced. Histogram-based and photometric normalization methods are then investigated, and adaptive retinex (ASR) is selected for its satisfactory illumination normalization. Finally, a weighted multi-block local binary pattern with three distance measures is applied for pair-matching. Experimental results show its advantageous performance compared with PCA and multi-block LBP.

An Improved Fast Video Clip Search Algorithm for Copy Detection using Histogram-based Features

In this paper, we present an improved fast and robust search algorithm for copy detection of short MPEG video clips in a large video database using histogram-based features. Two types of histogram features are combined to generate more robust features. The first is based on the adjacent pixel intensity difference quantization (APIDQ) algorithm, which had previously been applied reliably to human face recognition; an APIDQ histogram is used as the feature vector of each frame image. The second is an ordinal histogram feature, which is robust to color distortion. Furthermore, by combining these with a temporal division method, the spatial and temporal features of the video sequence are integrated to realize fast and robust video search for copy detection. Experimental results show that the proposed algorithm can detect similar video clips more accurately and robustly than a conventional fast video search algorithm.

Fast Search Method for Large Video Database Using Histogram Features and Temporal Division

In this paper, we propose an improved fast search algorithm for short MPEG video clips in a large video database using combined histogram features and a temporal division method. Two types of histogram features are used to generate more robust features. The first is based on the adjacent pixel intensity difference quantization (APIDQ) algorithm, which had previously been applied reliably to human face recognition; an APIDQ histogram is used as the feature vector of each frame image. The second is an ordinal feature, which is robust to color distortion. Combined with active search [4], a temporal pruning algorithm, fast and robust video search can be realized. The proposed search algorithm has been evaluated on 6 hours of video by searching for 200 given MPEG video clips, each 30 seconds long. Experimental results show that the proposed algorithm can detect a similar video clip in merely 120 ms with an Equal Error Rate (EER) of 1%, which is more accurate and robust than a conventional fast video search algorithm.

Recognition of Isolated Speech Signals using Simplified Statistical Parameters

We present a novel scheme to recognize isolated speech signals using certain statistical parameters derived from those signals. The statistical estimates are computed from extracted signal information rather than the original signal in order to reduce the computational complexity. After the speech signal is separated from ambient noise, subtle details of these estimates are first exploited to segregate polysyllabic words from monosyllabic ones. Precise recognition of each distinct word is then carried out by analyzing the histogram obtained from this information.