Atomic Force Microscopy (AFM) Topographical Surface Characterization of Multilayer-Coated and Uncoated Carbide Inserts

In recent years, scanning probe microscopy (SPM) in its atomic force microscopy (AFM) mode has gained acceptance across a wide spectrum of research and science applications. Most work focuses on physical, chemical, and biological fields, while less attention is devoted to manufacturing and machining aspects. The purpose of the current study is to assess the possible implementation of SPM AFM features and the NanoScope software in general machining applications, with special attention to the tribological aspects of cutting tools. The surface morphology of coated and uncoated as-received carbide inserts is examined, analyzed, and characterized through the determination of the appropriate scanning settings, the suitable data-type imaging techniques, and the most representative data analysis parameters, using the MultiMode SPM AFM in contact mode. The NanoScope operating software is used to capture three real-time data-type images: "Height", "Deflection", and "Friction". Three scan sizes are independently performed: 2, 6, and 12 μm, with a 2.5 μm vertical range (Z). Offline analysis includes the determination of three functional topographical parameters: surface "Roughness", power spectral density ("PSD"), and "Section". The 12 μm scan size in association with "Height" imaging is found efficient for capturing every tiny feature and tribological aspect of the examined surface. Also, "Friction" analysis is found to produce a comprehensive explanation of the lateral characteristics of the scanned surface. Many surface defects and drawbacks have been precisely detected and analyzed.
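
As a minimal sketch of how such offline topographical parameters can be computed, the following Python fragment estimates the average roughness (Ra), RMS roughness (Rq), and a line-averaged power spectral density from a height map; the array size, pixel spacing, and synthetic data are illustrative assumptions, not NanoScope output.

    import numpy as np

    # Hypothetical 256x256 height map (nm) standing in for a "Height" image.
    rng = np.random.default_rng(0)
    z = rng.normal(0.0, 2.0, (256, 256))       # placeholder AFM height data
    pixel_size = 12_000.0 / 256                # nm per pixel for a 12 um scan

    zc = z - z.mean()                          # remove DC offset (plane fit omitted)
    Ra = np.abs(zc).mean()                     # average roughness
    Rq = np.sqrt((zc ** 2).mean())             # RMS roughness

    # 1D power spectral density per scan line, averaged over all lines.
    psd = (np.abs(np.fft.rfft(zc, axis=1)) ** 2).mean(axis=0)
    psd *= pixel_size / zc.shape[1]
    freq = np.fft.rfftfreq(zc.shape[1], d=pixel_size)  # spatial frequency, 1/nm

    print(f"Ra = {Ra:.2f} nm, Rq = {Rq:.2f} nm")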

Quadratic Pulse Inversion Ultrasonic Imaging (QPI): A Two-Step Procedure for Optimization of Contrast Sensitivity and Specificity

We have previously introduced an ultrasonic imaging approach that combines harmonic-sensitive pulse sequences with a post-beamforming quadratic kernel derived from a second-order Volterra filter (SOVF). This approach is designed to produce images with high sensitivity to nonlinear oscillations from microbubble ultrasound contrast agents (UCA) while maintaining high levels of noise rejection. In this paper, a two-step algorithm is presented for computing the coefficients of the quadratic kernel that reduces the tissue component introduced by motion, maximizes noise rejection, and increases specificity while optimizing sensitivity to the UCA. In the first step, quadratic kernels from individual singular modes of the PI data matrix are compared in terms of their ability to maximize the contrast-to-tissue ratio (CTR). In the second step, the quadratic kernels resulting in the highest CTR values are convolved. The imaging results indicate that a signal processing approach to this clinical challenge is feasible.
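
The first step can be pictured with a much-simplified numpy sketch: the PI data matrix is decomposed by SVD and each singular mode is ranked by the CTR of its rank-one reconstruction. The data, the ROI masks, and the rank-one surrogate for the quadratic SOVF kernel are illustrative assumptions only, not the paper's derivation.

    import numpy as np

    # X: pulse-inversion (PI) data matrix, rows = beamformed RF lines (toy data).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(64, 512))
    U, s, Vt = np.linalg.svd(X, full_matrices=False)

    def ctr_db(image, contrast_mask, tissue_mask):
        """Contrast-to-tissue ratio of an envelope image, in dB."""
        c = (image[contrast_mask] ** 2).mean()
        t = (image[tissue_mask] ** 2).mean()
        return 10.0 * np.log10(c / t)

    # Hypothetical ROI masks for the agent-filled and tissue regions.
    contrast_mask = np.zeros(X.shape, bool); contrast_mask[:, 100:200] = True
    tissue_mask = np.zeros(X.shape, bool);   tissue_mask[:, 300:400] = True

    # Step 1: rank singular modes by the CTR of their rank-one reconstruction.
    scores = [ctr_db(np.abs(s[k] * np.outer(U[:, k], Vt[k])),
                     contrast_mask, tissue_mask) for k in range(len(s))]
    best = np.argsort(scores)[::-1][:3]   # modes kept for the combination step
    print("modes ranked by CTR:", best)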

Wood Species Recognition System

The proposed system identifies the species of a wood sample using the textural features present in its bark. Each wood species has its own unique bark patterns, which enable the proposed system to identify it accurately. Automatic wood recognition systems have not yet been well established, mainly due to the lack of research in this area and the difficulty of obtaining a wood database. In our work, a wood recognition system has been designed based on pre-processing techniques and feature extraction, and by correlating the features of the wood species for their classification. Texture classification is a problem that has been studied and tested using different methods due to its valuable use in various pattern recognition problems, such as wood recognition and rock classification. The most popular technique used for textural classification is the Gray-Level Co-occurrence Matrix (GLCM). The features extracted from the enhanced images using the GLCM are then correlated, which determines the classification between the various wood species. The results obtained show a high recognition accuracy, proving that the techniques used are suitable for commercial implementation.
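
A hedged sketch of the GLCM feature-extraction stage using scikit-image follows; the patch data, offsets, angles, and the particular Haralick-style properties are illustrative choices rather than the system's exact configuration.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    # Placeholder 8-bit patch standing in for an enhanced bark image.
    rng = np.random.default_rng(2)
    patch = rng.integers(0, 256, (128, 128), dtype=np.uint8)

    # GLCM over several offsets/angles, then common texture properties.
    glcm = graycomatrix(patch, distances=[1, 2],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    features = np.hstack([graycoprops(glcm, p).ravel()
                          for p in ("contrast", "correlation",
                                    "energy", "homogeneity")])
    print(features.shape)   # one feature vector per species sample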

Accurate Dimensional Measurement of 3D Round Holes Based on Stereo Vision

This paper presents an effective method to accurately reconstruct and measure the 3D curved edges of small industrial parts based on stereo vision. To effectively fit the curves of the measured parts using a series of line segments in the images, a coarse-to-fine strategy is employed based on multi-scale curve fitting. After reconstructing the 3D curve of a hole through a curved surface, its axis is adjusted so that it is parallel to the Z axis with least-squares error, and the dimensions of the hole can then easily be calculated on the XY plane. Experimental results show that the presented method can accurately measure the dimensions of round holes through a curved surface.
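
A minimal numpy sketch of the final measurement stage: the hole axis is estimated by least squares as the smallest-variance principal direction of the rim points, the points are rotated so the axis is parallel to Z, and a least-squares (Kasa) circle fit on the XY plane yields the diameter. The synthetic tilted ring stands in for reconstructed edge points; the exact fitting steps in the paper may differ.

    import numpy as np

    # Synthetic tilted rim standing in for reconstructed 3D hole-edge points.
    rng = np.random.default_rng(3)
    t = rng.uniform(0, 2 * np.pi, 400)
    ring = np.c_[5 * np.cos(t), 5 * np.sin(t), np.zeros_like(t)]
    a = 0.3                                       # tilt of the curved surface
    tilt = np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a), np.cos(a)]])
    pts = ring @ tilt.T + 0.02 * rng.normal(size=ring.shape)

    # Least-squares axis: smallest-variance principal direction of the rim.
    centered = pts - pts.mean(axis=0)
    axis = np.linalg.svd(centered, full_matrices=False)[2][2]
    if axis[2] < 0:
        axis = -axis                              # orient the axis towards +Z

    # Rodrigues rotation taking the axis onto Z, then a Kasa circle fit in XY.
    v = np.cross(axis, [0.0, 0.0, 1.0])
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    R = np.eye(3) + K + K @ K / (1.0 + axis[2])
    xy = (centered @ R.T)[:, :2]
    A = np.c_[2 * xy, np.ones(len(xy))]
    p = np.linalg.lstsq(A, (xy ** 2).sum(axis=1), rcond=None)[0]
    print("diameter =", 2 * np.sqrt(p[2] + p[0] ** 2 + p[1] ** 2))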

An Optimal Feature Subset Selection for Leaf Analysis

This paper describes an optimal approach for feature subset selection to classify leaves, based on the Genetic Algorithm (GA) and Kernel-Based Principal Component Analysis (KPCA). Due to the high complexity of selecting optimal features, classification has become a critical task in analysing leaf image data. Initially, the shape, texture, and colour features are extracted from the leaf images. These extracted features are optimized through the separate application of GA and KPCA. The approach then performs an intersection operation over the subsets obtained from the optimization process. Finally, the most common matching subset is forwarded to train a Support Vector Machine (SVM). Our experimental results show that the application of GA and KPCA for feature subset selection with an SVM classifier is computationally effective and improves the accuracy of the classifier.
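
The intersection-and-train step might look like the following scikit-learn sketch; the feature matrix, class labels, and the two index subsets (standing in for actual GA and KPCA outputs) are placeholder assumptions.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # X: rows = leaf samples, columns = shape/texture/colour features (toy data).
    rng = np.random.default_rng(4)
    X = rng.normal(size=(150, 40))
    y = rng.integers(0, 3, 150)                    # three hypothetical classes

    ga_subset = {0, 2, 5, 7, 11, 13, 20, 22, 31}   # indices a GA run might keep
    kpca_subset = {0, 2, 6, 7, 13, 19, 22, 30, 31} # indices ranked high by KPCA

    common = sorted(ga_subset & kpca_subset)       # intersection trains the SVM
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    print(cross_val_score(clf, X[:, common], y, cv=5).mean())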

Research on Hypermediated Images in Asian Films

In films, visual effects have played the role of expressing reality more realistically or depicting imagined scenes as if they were real. Such images are immediated images representing realism, and the logic of immediation for the reality of images has been perceived as dominant in visual effects. For immediation to have an identity as immediation, there must be the opposite concept, hypermediation. In the mid-2000s, hypermediated images became established as a code of mass culture in Asia. Thus, among Asian films highly popular in those days, this study selected five displaying hypermediated images – two Korean, two Japanese, and one Thai – and examined the semiotic meanings of such images using Roland Barthes' denotative and connotative meaning analysis and Metz's paradigmatic analysis method, focusing on how hypermediated images work in the general context of the films, how they are associated with spaces, and what meanings they try to carry.

Automatic Visualization Pipeline Formation for Medical Datasets on Grid Computing Environment

Distance visualization of large datasets often takes the direction of remote viewing and zooming of stored static images. However, the continuous increase in the size of datasets and visualization operations causes insufficient performance on traditional desktop computers. Additionally, visualization techniques such as isosurface extraction depend on the available resources of the running machine and the size of the datasets. Moreover, the continuous demand for computing power and the continuous growth of datasets result in an urgent need for a grid computing infrastructure. However, some issues arise in current grids, such as resource availability at client machines, which is not sufficient to process large datasets. On top of that, different output devices and different network bandwidths between the visualization pipeline components often result in output suitable for one machine but not for another. In this paper we investigate how grid services can be used to support remote visualization of large datasets and to break the constraint of physical co-location of resources by applying grid computing technologies. We present our grid-enabled architecture for visualizing large medical datasets (circa 5 million polygons) for remote interactive visualization on clients with modest resources.
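
Server-side isosurface extraction, the pipeline stage whose cost motivates the grid architecture, can be sketched with scikit-image's marching cubes; the synthetic volume and the idea of checking the triangle count against a client budget are illustrative assumptions, not the paper's service interface.

    import numpy as np
    from skimage import measure

    # Placeholder volume standing in for a medical dataset; a grid service
    # would run this extraction server-side and ship only the triangles
    # that the client's resources can handle.
    x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
    volume = x**2 + y**2 + z**2

    verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
    print(f"{len(faces)} triangles extracted")  # budget check for the client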

A New Method for Detection of Artificial Objects and Materials from Long Distance Environmental Images

The article presents a new method for the detection of artificial objects and materials in images of environmental (non-urban) terrain. Our approach uses the hue and saturation (or Cb and Cr) components of the image as the input to a segmentation module based on the mean shift method. The clusters obtained as the output of this stage are processed by a decision-making module in order to find the regions of the image with a significant possibility of representing humans. Although the method will detect various non-natural objects, it is primarily intended and optimized for the detection of humans, i.e., for search and rescue purposes in non-urban terrain where, in normal circumstances, non-natural objects shouldn't be present. Real-world images are used for the evaluation of the method.
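
A hedged sketch of the segmentation stage: mean shift clustering (here scikit-learn's MeanShift, standing in for the paper's implementation) applied to the Cb/Cr chrominance components. The image data and bandwidth quantile are placeholder choices.

    import numpy as np
    from sklearn.cluster import MeanShift, estimate_bandwidth

    # img: HxWx3 RGB environmental image (placeholder noise here).
    rng = np.random.default_rng(5)
    img = rng.integers(0, 256, (60, 80, 3)).astype(np.float64)

    # Chrominance only: Cb/Cr of the YCbCr transform (ITU-R BT.601).
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    X = np.c_[cb.ravel(), cr.ravel()]

    bw = estimate_bandwidth(X, quantile=0.1, n_samples=500)
    labels = MeanShift(bandwidth=bw, bin_seeding=True).fit_predict(X)
    print(len(np.unique(labels)), "chromatic clusters for the decision module")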

Detection of Diabetic Symptoms in Retina Images Using Analog Algorithms

In this paper, a class of analog algorithms based on the concept of the Cellular Neural Network (CNN) is applied to processing operations on an important class of medical images, namely retina images, for detecting various symptoms connected with diabetic retinopathy. Specific processing tasks such as morphological operations, linear filtering, and thresholding are proposed; the corresponding template values are given, and simulations on real retina images are provided.
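
For illustration, a discrete-time forward-Euler simulation of a Chua-Yang CNN is sketched below using the widely published edge-detection template; the paper's own template values differ and are given in the paper itself, and the toy input is an assumption.

    import numpy as np
    from scipy.signal import convolve2d

    def cnn(u, A, B, z, steps=60, dt=0.1):
        """Forward-Euler simulation of a Chua-Yang cellular neural network."""
        x = u.copy()
        f = lambda s: 0.5 * (np.abs(s + 1) - np.abs(s - 1))  # output nonlinearity
        for _ in range(steps):
            dx = -x + convolve2d(f(x), A, mode="same") \
                    + convolve2d(u, B, mode="same") + z
            x += dt * dx
        return f(x)

    # Widely used edge-detection template (binary input in {-1, +1}).
    A = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], float)
    B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float)
    u = -np.ones((32, 32)); u[8:24, 8:24] = 1.0   # toy blob, not a retina image
    edges = cnn(u, A, B, z=-1.0)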

Conceptualization of the Attractive Work Environment and Organizational Activity for Humans in Future Deep Mines

The purpose of this paper is to conceptualize a future-oriented human work environment and organizational activity in deep mines that entails a vision of a good and safe workplace. Future-oriented technological challenges and the mental images required for modern work organization design were appraised. It is argued that an intelligent deep mine covering the entire value chain, including environmental issues, and with a work organization that supports good working and social conditions towards increased human productivity, could be designed. With such an intelligent system and work organization in place, the mining industry could be seen as a place where cooperation, skills development, and gender equality are key components. From this perspective, both young people and women might view mining as an attractive job and its work environment as safe, which could go a long way towards breaking the unequal gender balance that exists in most mines today.

Performance Evaluation of Wavelet Based Coders on Brain MRI Volumetric Medical Datasets for Storage and Wireless Transmission

In this paper, we evaluate the performance of wavelet-based coding algorithms, namely 3D QT-L, 3D SPIHT, and JPEG2K. In the first step, we carry out an objective comparison of the three coders. For this purpose, eight MRI head scan test sets of 256 × 256 × 124 voxels have been used. Results show the superior performance of the 3D SPIHT algorithm, while 3D QT-L outperforms JPEG2K. The second step consists of evaluating the robustness of the 3D SPIHT and JPEG2K coding algorithms over wireless transmission. Compressed datasets are transmitted over an AWGN wireless channel or over a Rayleigh wireless channel. Results show the superiority of JPEG2K under both channel models; in fact, JPEG2K proves more robust to coding errors. We therefore conclude that error-correcting codes are necessary to protect the transmitted medical information.
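
The common front end of such coders, a 3D dyadic wavelet decomposition of the volume, can be sketched with PyWavelets; the wavelet choice, decomposition level, and random volume are illustrative assumptions, not the coders' exact configurations.

    import numpy as np
    import pywt

    # Placeholder volume with the paper's dimensions (256 x 256 x 124 voxels).
    volume = np.random.default_rng(6).normal(size=(256, 256, 124)).astype(np.float32)

    # 3D dyadic wavelet decomposition, the usual front end of 3D SPIHT / 3D QT-L;
    # the subband coefficients are what the bit-plane coders then encode.
    coeffs = pywt.wavedecn(volume, wavelet="bior4.4", level=3)
    print("approximation subband:", coeffs[0].shape)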

Metal Streak Analysis with Different Acquisition Settings in Postoperative Spine Imaging: A Phantom Study

CT assessment of the postoperative spine is challenging in the presence of metal streak artifacts that can deteriorate the quality of CT images. In this paper, we studied the influence of different acquisition parameters on the magnitude of metal streaking. A water-bath phantom was constructed with a metal insertion similar to that in postoperative spine assessment. The phantom was scanned with different acquisition settings, and the acquired data were reconstructed using various reconstruction settings. Standardized ROIs were defined within the streaking region for image analysis. The results show that increased kVp and mAs enhanced SNR values by reducing image noise. A sharper kernel enhanced image quality compared to a smooth kernel, but produced more noise in the images, with higher CT number fluctuation. The noise between the two kernels was significantly different (P < 0.05).
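
ROI-based noise and SNR measurement of the kind used here can be sketched in a few lines of numpy; the slice data, ROI position, and radius are placeholder assumptions.

    import numpy as np

    def roi_stats(ct_slice, roi):
        """Mean HU, noise (SD), and SNR inside a standardized ROI."""
        vals = ct_slice[roi]
        mean, sd = vals.mean(), vals.std(ddof=1)
        return mean, sd, mean / sd

    # Placeholder slice and a circular ROI placed in the streaking region.
    rng = np.random.default_rng(7)
    ct_slice = rng.normal(40.0, 12.0, (512, 512))     # HU values, toy data
    yy, xx = np.mgrid[:512, :512]
    roi = (yy - 300) ** 2 + (xx - 256) ** 2 < 20 ** 2
    print("mean/SD/SNR:", roi_stats(ct_slice, roi))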

Real-Time Image Analysis of Capsule Endoscopy for Bleeding Discrimination in Embedded System Platform

Image processing for capsule endoscopy requires large memory, and diagnosis takes hours since the operation time is normally more than 8 hours. A real-time analysis algorithm for capsule images can therefore be clinically very useful: it can differentiate abnormal tissue from healthy structures and provide correlation information among the images. Bleeding is our interest in this regard, and we propose a method for detecting frames with potential bleeding in real time. Our detection algorithm is based on statistical analysis and the shapes of bleeding spots. We tested the algorithm on 30 cases of capsule endoscopy of the digestive tract. Results were excellent: a sensitivity of 99% and a specificity of 97% were achieved in detecting the image frames with bleeding spots.
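
A hedged sketch of one plausible per-frame statistic follows: red-dominance thresholding followed by a blob-size check. The ratio and area thresholds are invented for illustration and are not the paper's tuned values.

    import numpy as np
    from scipy import ndimage

    def bleeding_candidates(frame, ratio_thresh=1.8, min_area=30):
        """Flag red-dominant regions and keep sizeable blobs as candidates."""
        r = frame[..., 0].astype(float)
        g = frame[..., 1].astype(float)
        b = frame[..., 2].astype(float)
        redness = r / (g + b + 1e-6)            # simple chromatic statistic
        mask = redness > ratio_thresh
        labels, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        keep = [i + 1 for i, s in enumerate(sizes) if s >= min_area]
        return np.isin(labels, keep)

    frame = np.random.default_rng(8).integers(0, 256, (240, 320, 3), dtype=np.uint8)
    suspect = bleeding_candidates(frame)
    print("flag frame:", bool(suspect.any()))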

Implementation of Sprite Animation for Multimedia Application

Animation is simply defined as the sequencing of a series of static images to generate the illusion of movement. Most people believe that the actual drawing or creation of the individual images is the animation, when in actuality it is the arrangement of those static images that conveys the motion. To become an animator, it is often assumed that one needs the ability to quickly design masterpiece after masterpiece. Although some semblance of artistic skill is a necessity for the job, the real key to becoming a great animator is the comprehension of timing. This paper uses a combination of sprite animation, frame animation, and other techniques to make a group of multi-colored static images slither around in a bounded area. In addition to slithering, the images also change the color of different parts of their bodies, much like real-world creatures that can change the colors on their bodies. The work was implemented using Java 2 Standard Edition (J2SE). Creating animations is both time-consuming and expensive, regardless of whether they are created by hand or with motion-capture equipment; if animators could reuse old animations and even blend different animations together, a lot of work would be saved. The main objective of this paper is therefore to examine a method for blending several animations together in real time, as sketched below. The paper presents and analyses a solution using Weighted Skeleton Animation (WSA), resulting in limited CPU time and memory waste as well as saving time for the animators. The idea is described in detail and implemented, and text animation, vertex animation, sprite-part animation, and whole-sprite animation were tested. The resolution, smoothness, and movement of the animated images are evaluated using parameters obtained from the experimental implementation.
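
The core of WSA-style blending, a per-bone weighted combination of poses from several animations, can be sketched as follows (in Python rather than the paper's J2SE, and with plain joint angles standing in for full skeletal transforms; the animations and weights are toy assumptions).

    import numpy as np

    def blend_poses(poses, weights):
        """Weighted blend of per-bone joint angles from several animations."""
        w = np.asarray(weights, float)
        w = w / w.sum()                    # normalize so the blend stays in range
        return np.tensordot(w, np.asarray(poses, float), axes=1)

    walk = [0.10, 0.40, -0.20, 0.05]       # toy joint angles (radians) per bone
    wave = [0.00, 0.10,  0.90, 0.60]
    frame_pose = blend_poses([walk, wave], weights=[0.7, 0.3])
    print(frame_pose)                      # pose driven 70% by walk, 30% by wave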

Generalized Morphological 3D Shape Decomposition Grayscale Interframe Interpolation Method

One of the main image representations in mathematical morphology is the 3D Shape Decomposition Representation, useful for image compression, image representation, and pattern recognition. The 3D Morphological Shape Decomposition representation can be generalized a number of times to extend the scope of its algebraic characteristics as much as possible. With these generalizations, the Morphological Shape Decomposition's role as an efficient image decomposition tool is extended to grayscale images. This work follows the above line and develops it further. A new evolutionary branch is added to the 3D Morphological Shape Decomposition's development by the introduction of a 3D Multi-Structuring-Element Morphological Shape Decomposition, which permits the 3D Morphological Shape Decomposition of 3D binary images (grayscale images) into "multi-parameter" families of elements. Originally, 3D Morphological Shape Decomposition representations were based only on "1-parameter" families of elements for image decomposition. This paper addresses grayscale interframe interpolation by means of mathematical morphology: the new interframe interpolation method is based on the generalized morphological 3D Shape Decomposition. The article presents the theoretical background of the morphological interframe interpolation, deduces the new representation, and shows some application examples; computer simulations illustrate the results.
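
The "1-parameter" decomposition idea can be illustrated with a greedy 2D binary sketch that repeatedly peels off the largest inscribed multiple of a single structuring element; the paper's generalization replaces this with multiple structuring elements and 3D (grayscale) images, so this is an assumption-laden simplification.

    import numpy as np
    from scipy import ndimage

    def msd(binary, selem=np.ones((3, 3), bool)):
        """Greedy morphological shape decomposition of a 2D binary image:
        peel off maximal inscribed multiples of the structuring element."""
        rest, parts = binary.copy(), []
        while rest.any():
            n, eroded = 0, rest.copy()
            while True:                        # largest n with non-empty erosion
                nxt = ndimage.binary_erosion(eroded, selem)
                if not nxt.any():
                    break
                eroded, n = nxt, n + 1
            if n:
                part = ndimage.binary_dilation(eroded, selem, iterations=n)
            else:
                part = eroded
            parts.append((n, eroded))          # size + centers describe the part
            rest &= ~part
        return parts

    img = np.zeros((32, 32), bool); img[4:28, 6:20] = True
    print([n for n, _ in msd(img)])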

Image Mapping with Cumulative Distribution Function for Quick Convergence of Counter Propagation Neural Networks in Image Compression

In general, the images used for compression are of different types, such as dark images and high-intensity images. When these images are compressed using a Counter Propagation Neural Network, the network takes a long time to converge. The reason is that a given image may contain a number of distinct gray levels with only narrow differences from their neighborhood pixels. If the gray levels of the pixels in an image and their neighbors are mapped in such a way that the difference between the gray level of a neighbor and that of the pixel is minimized, then both the compression ratio and the convergence of the network can be improved. To achieve this, a Cumulative Distribution Function (CDF) is estimated for the image and used to map the image pixels. When the mapped image pixels are used, the Counter Propagation Neural Network yields a high compression ratio and converges quickly.
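
Mapping pixels through the estimated CDF amounts to histogram equalization; a minimal numpy sketch (with a synthetic dark image as placeholder input, and with the understanding that the paper's exact mapping may differ) is:

    import numpy as np

    def cdf_map(img):
        """Map gray levels through the image's empirical CDF."""
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = hist.cumsum() / img.size           # estimated CDF of gray levels
        return np.round(cdf[img] * 255).astype(np.uint8)

    # Synthetic dark image: gray levels crowded into a narrow low range.
    img = np.random.default_rng(9).integers(0, 64, (64, 64), dtype=np.uint8)
    mapped = cdf_map(img)   # fed to the Counter Propagation Network instead of img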

Extended Study on Removing Gaussian Noise in Mechanical Engineering Drawing Images Using Median Filters

In this paper, an extended study is performed on the effect of different factors on the quality of vector data, building on a previous study. For the noise factor, a kind of noise that appears in document images, namely Gaussian noise, is studied, whereas the previous study involved only salt-and-pepper noise; both high and low noise levels are considered. For the noise-cleaning methods, algorithms that were not covered in the previous study are used, namely the median filter and its variants. For the vectorization factor, one of the best available commercial raster-to-vector packages, VPstudio, is used to convert raster images into vector format. The performance of line detection is judged by an objective performance evaluation method, and the output of the evaluation is then analyzed statistically to highlight the factors that affect vector quality.
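
The noise-and-cleaning part of the pipeline can be sketched as follows; the toy drawing, noise levels, window size, and RMSE criterion are illustrative stand-ins for the study's actual images, median-filter variants, and objective evaluation method.

    import numpy as np
    from scipy.ndimage import median_filter

    rng = np.random.default_rng(10)
    drawing = np.full((128, 128), 255.0)
    drawing[60:68, 10:118] = 0.0                      # toy line segment

    for sigma in (10.0, 40.0):                        # low and high noise levels
        noisy = np.clip(drawing + rng.normal(0.0, sigma, drawing.shape), 0, 255)
        cleaned = median_filter(noisy, size=3)        # plain 3x3 median filter
        rmse = np.sqrt(((cleaned - drawing) ** 2).mean())
        print(f"sigma={sigma}: RMSE after median = {rmse:.1f}")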

Comparison of Pore Space Features by Thin Sections and X-Ray Microtomography

Microtomographic images and thin section (TS) images were analyzed and compared with respect to parameters of geological interest such as porosity and its distribution along the samples. The results show that microtomography (CT) analysis, although limited by its resolution, provides interesting information about the distribution of porosity (homogeneous or not) and can also quantify both connected and non-connected pores, i.e., total porosity. TS analysis has no limitations concerning resolution, but is limited by the available experimental data, restricted to a few glass slides, and can give information only about connected pores, i.e., effective porosity. The two methods have their own virtues and flaws, but when paired together they complement one another, making for a more reliable and complete analysis.
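
The distinction between total and effective porosity can be sketched on a binary pore volume: total porosity counts all pore voxels, while effective porosity keeps only pore clusters connected to the sample boundary. The random volume is a placeholder for segmented micro-CT data.

    import numpy as np
    from scipy import ndimage

    # pores: boolean micro-CT volume, True = pore voxel (placeholder sample).
    rng = np.random.default_rng(11)
    pores = rng.random((64, 64, 64)) < 0.15

    total_porosity = pores.mean()                 # connected + isolated pores

    # Effective porosity: only pore clusters reaching the sample boundary.
    labels, n = ndimage.label(pores)
    border = np.zeros_like(pores)
    border[0], border[-1] = True, True
    border[:, 0], border[:, -1] = True, True
    border[:, :, 0], border[:, :, -1] = True, True
    connected_ids = np.setdiff1d(np.unique(labels[border & pores]), [0])
    effective_porosity = np.isin(labels, connected_ids).mean()
    print(total_porosity, effective_porosity)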

Segmentation of Lungs from CT Scan Images for Early Diagnosis of Lung Cancer

Segmentation is an important step in medical image analysis and classification for radiological evaluation and computer-aided diagnosis. Computer-aided diagnosis (CAD) of lung CT generally first segments the area of interest (the lung) and then analyzes the obtained area separately for nodule detection in order to diagnose the disease. For a normal lung, segmentation can be performed by exploiting the excellent contrast between air and the surrounding tissues. However, this approach fails when the lung is affected by high-density pathology. Dense pathologies are present in approximately a fifth of clinical scans, and for computer analysis such as the detection and quantification of abnormal areas it is vital that the entire lung part of the image is provided and that no part present in the original image is eradicated. In this paper we propose a lung segmentation technique that accurately segments the lung parenchyma from lung CT scan images. The algorithm was tested on 25 datasets of different patients received from Akron University, USA, and Aga Khan Medical University, Karachi, Pakistan.
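
The contrast-based baseline that the paper improves upon can be sketched as follows; the HU threshold, morphology, and placeholder data are illustrative assumptions, and this simple approach is exactly what fails on dense pathologies.

    import numpy as np
    from scipy import ndimage

    def segment_lungs(ct_slice, air_thresh=-400):
        """Threshold-based lung mask for a normal CT slice (HU units)."""
        mask = ct_slice < air_thresh               # air/lung vs. soft tissue
        mask = ndimage.binary_opening(mask, np.ones((3, 3)))
        labels, n = ndimage.label(mask)
        # Drop components touching the border (outside-body air),
        # then keep the two largest remaining components (the lungs).
        edge_ids = np.unique(np.r_[labels[0], labels[-1],
                                   labels[:, 0], labels[:, -1]])
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        order = np.argsort(sizes)[::-1] + 1
        keep = [i for i in order if i not in edge_ids][:2]
        return ndimage.binary_fill_holes(np.isin(labels, keep))

    ct = np.random.default_rng(12).normal(-200, 300, (256, 256))  # toy HU data
    lungs = segment_lungs(ct)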

Application of Biometrics to Obtain High Entropy Cryptographic Keys

In this paper, a two-factor scheme is proposed to generate cryptographic keys directly from biometric data, which, unlike passwords, is strongly bound to the user. The hash value of the reference iris code is used as the cryptographic key, and its length depends only on the hash function, being independent of any other parameter. The entropy of such keys is 94 bits, which is much higher than in any comparable system. The most important and distinctive feature of this scheme is that it regenerates the reference iris code given a genuine iris sample and the correct user password. Since iris codes obtained from two images of the same eye are not exactly the same, error-correcting codes (a Hadamard code and a Reed-Solomon code) are used to deal with the variability. The scheme proposed here can be used to provide keys for a cryptographic system and/or for user authentication. The performance of the system is evaluated on two publicly available iris biometric databases, the CBS and ICE databases. The operating point of the system (the values of the False Acceptance Rate (FAR) and False Rejection Rate (FRR)) can be set by properly selecting the error-correction capacity (ts) of the Reed-Solomon codes; e.g., on the ICE database, at ts = 15, the FAR is 0.096% and the FRR is 0.76%.
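
A toy sketch of the underlying fuzzy-commitment idea follows, with a simple repetition code standing in for the paper's Hadamard/Reed-Solomon combination and with the password factor and hashing omitted; all sizes and error rates are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(13)
    REP = 7                                       # repetition factor, a stand-in
                                                  # for the Hadamard/RS stack
    key_bits = rng.integers(0, 2, 32)             # secret bound to the iris code
    codeword = np.repeat(key_bits, REP)           # encode
    iris_ref = rng.integers(0, 2, codeword.size)  # reference iris code (toy)

    lock = codeword ^ iris_ref                    # stored helper data

    # Authentication with a noisy fresh sample of the same eye.
    noise = rng.random(iris_ref.size) < 0.10      # ~10% intra-class bit errors
    iris_sample = iris_ref ^ noise
    votes = (lock ^ iris_sample).reshape(-1, REP).sum(axis=1)
    recovered = (votes > REP // 2).astype(int)    # majority-vote decoding
    print("key recovered:", np.array_equal(recovered, key_bits))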