Abstract: In the field of object pose estimation, existing research typically stores a 3D model of the target object in a computer in advance and estimates the object's position and angle by matching observations against that model. In this research, however, we have succeeded in creating a module that is much simpler, smaller in scale, and faster in operation. Our 6D pose estimation model consists of two different networks – a classification network and a regression network. From a single RGB image, the trained model estimates the class of the object in the image, the coordinates of the object, and its rotation angle in 3D space. In addition, we compared the estimation accuracy for each camera position, i.e., the angle from which the object was captured. The highest accuracy was recorded when the camera position was 75°: the accuracy of the classification was about 87.3%, and that of the regression was about 98.9%.
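The two-branch design above can be sketched as a shared feature vector feeding a classification head and a regression head. This is a minimal illustrative sketch with randomly initialised weights standing in for the trained networks; the feature size, class count, and 6-D pose layout are assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical dimensions: a 128-D backbone feature, 5 object classes,
# and a 6-D pose target (x, y, z translation plus 3 rotation angles).
FEAT, N_CLASSES, POSE_DIM = 128, 5, 6

# Random weights stand in for the trained classification/regression heads.
W_cls = rng.normal(size=(N_CLASSES, FEAT))
W_reg = rng.normal(size=(POSE_DIM, FEAT))

def estimate(feature):
    """Run one shared feature through both heads."""
    class_probs = softmax(W_cls @ feature)  # classification branch
    pose = W_reg @ feature                  # regression branch
    return int(class_probs.argmax()), pose

feature = rng.normal(size=FEAT)             # stand-in for a CNN feature
cls, pose = estimate(feature)
print(cls, pose.shape)                      # predicted class id and 6-D pose
```

In a real system the shared feature would come from a convolutional backbone and both heads would be trained jointly.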
Abstract: Most traditional visual indoor navigation algorithms
and methods consider localization only in ordinary daytime, whereas
this paper focuses on indoor re-localization in low light. Because
RGB images are degraded in low light, less discriminative infrared
and depth image pairs captured by RGB-D cameras are taken as the
input, and the most similar candidates are retrieved as the output
from a database built in the bag-of-words framework. Epipolar
constraints can then be used to re-localize the query infrared and
depth image sequence. We evaluate our method on two datasets
captured with a Kinect2. The results demonstrate very promising
re-localization performance for indoor navigation systems in
low-light environments.
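The retrieval step above can be illustrated with a toy bag-of-words pipeline: each image is reduced to a normalised histogram of visual-word occurrences, and the database entry with the highest cosine similarity is returned. The vocabulary size and the toy word assignments below are assumptions for illustration only.

```python
import numpy as np

VOCAB = 8  # hypothetical visual-vocabulary size

def bow_histogram(word_ids, vocab=VOCAB):
    """L1-normalised histogram of visual-word occurrences."""
    h = np.bincount(word_ids, minlength=vocab).astype(float)
    return h / h.sum()

def most_similar(query_words, database):
    """Index of the database image with the highest cosine similarity."""
    q = bow_histogram(query_words)
    sims = [np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d))
            for d in database]
    return int(np.argmax(sims))

# Toy database of three images, each described by its visual-word ids.
db = [bow_histogram(np.array(w)) for w in
      ([0, 0, 1, 2], [3, 3, 4, 5], [6, 6, 7, 7])]
print(most_similar(np.array([3, 4, 3, 5]), db))  # → 1
```

In practice the visual words would come from quantised infrared/depth descriptors, and the retrieved candidates would then be verified with the epipolar constraints mentioned above.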
Abstract: Document Image Analysis recognizes text and graphics in documents acquired as images. This paper adopts an approach to degraded document image analysis that does not rely on Optical Character Recognition (OCR). The technique involves document imaging methods such as image fusing and Speeded Up Robust Features (SURF) detection to identify and extract the degraded regions from a set of document images and obtain an original document with complete information. If the captured degraded document image is skewed, it must be deskewed before further processing. The YCbCr image-storage format is used as a tool to convert the grayscale image to the RGB image format. The presented algorithm is tested on various types of degraded documents, such as printed documents, handwritten documents, old script documents, and handwritten image sketches in documents. The purpose of this research is to obtain an original document from a given set of degraded documents of the same source.
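The YCbCr-to-RGB step can be sketched with the standard BT.601 conversion formulas; note that a grayscale image corresponds to Cb = Cr = 128 in 8-bit YCbCr, so it maps to R = G = B = Y. This is the generic conversion, not necessarily the paper's exact procedure.

```python
import numpy as np

def ycbcr_to_rgb(ycbcr):
    """Convert a HxWx3 YCbCr array (BT.601, 8-bit) to RGB."""
    y  = ycbcr[..., 0].astype(float)
    cb = ycbcr[..., 1].astype(float) - 128.0
    cr = ycbcr[..., 2].astype(float) - 128.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)

# A pure-gray pixel (Cb = Cr = 128) maps to R = G = B = Y.
gray = np.full((1, 1, 3), 128, dtype=np.uint8)
gray[..., 0] = 200          # Y = 200
print(ycbcr_to_rgb(gray))   # → [[[200 200 200]]]
```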
Abstract: Artificial neural networks have attracted much interest
as empirical models for their powerful representational capacity and
their multi-input, multi-output mapping characteristics. In fact, most
feedforward networks with nonlinear nodal functions have been proved
to be universal approximators. In this paper, we propose a new
supervised method for color image classification based on self-organizing
feature maps (SOFM). The algorithm is based on competitive
learning. The method partitions the input space using self-organizing
feature maps to introduce the concept of local neighborhoods. Our
image classification system takes RGB images as input. Experiments
with simulated data showed that the separability of classes increased
with longer training time. In addition, the results show that the
proposed algorithm is effective for color image classification.
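The competitive-learning step at the heart of a SOFM can be sketched as follows: find the best-matching unit (BMU) for an input, then pull the BMU and its grid neighbourhood toward that input. The map size, learning rate, and Gaussian neighbourhood below are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny 4x4 SOFM over 3-D RGB inputs; weights start random in [0, 1).
GRID, DIM = 4, 3
weights = rng.random((GRID, GRID, DIM))

def train_step(x, lr=0.5, radius=1.0):
    """One competitive-learning update: find the BMU, pull its neighbourhood toward x."""
    d = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(d.argmin(), d.shape)      # best-matching unit
    rows, cols = np.indices((GRID, GRID))
    grid_dist2 = (rows - bmu[0])**2 + (cols - bmu[1])**2
    h = np.exp(-grid_dist2 / (2 * radius**2))        # Gaussian neighbourhood
    weights[...] += lr * h[..., None] * (x - weights)

x = np.array([1.0, 0.0, 0.0])  # a pure-red training sample
before = np.linalg.norm(weights - x, axis=-1).min()
for _ in range(20):
    train_step(x)
after = np.linalg.norm(weights - x, axis=-1).min()
print(after < before)  # → True: the map has moved toward the sample
```

For supervised classification, each trained map node would additionally carry a class label from the training data.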
Abstract: The system is designed to show images that are
related to the query image. Extracting color, texture, and shape
features from an image plays a vital role in content-based image
retrieval (CBIR). Initially, the RGB image is converted into the HSV
color space because of its perceptual uniformity. From the HSV image,
color features are extracted using a block color histogram, texture
features using the Haar transform, and shape features using the Fuzzy
C-means algorithm. Then, the characteristics of the global and local
color histograms, of texture features obtained through the co-occurrence
matrix and the Haar wavelet transform, and of shape features are
compared and analyzed for CBIR. Finally, the best method for each
feature is fused during similarity measurement to improve image
retrieval effectiveness and accuracy.
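The block color histogram feature can be sketched as follows: convert RGB to HSV, split the image into a grid of blocks, and concatenate a per-block hue histogram. The grid size and bin count are illustrative assumptions.

```python
import colorsys
import numpy as np

def rgb_to_hsv_image(rgb):
    """Per-pixel RGB -> HSV using the standard library (values in [0, 1])."""
    flat = rgb.reshape(-1, 3) / 255.0
    hsv = np.array([colorsys.rgb_to_hsv(*p) for p in flat])
    return hsv.reshape(rgb.shape)

def block_hue_histogram(hsv, blocks=2, bins=8):
    """Concatenate hue histograms computed over a blocks x blocks grid."""
    h, w = hsv.shape[:2]
    feats = []
    for i in range(blocks):
        for j in range(blocks):
            tile = hsv[i*h//blocks:(i+1)*h//blocks,
                       j*w//blocks:(j+1)*w//blocks, 0]
            hist, _ = np.histogram(tile, bins=bins, range=(0, 1))
            feats.append(hist / max(tile.size, 1))
    return np.concatenate(feats)

rgb = np.zeros((8, 8, 3), dtype=np.uint8)
rgb[:4, :, 0] = 255          # top half red, bottom half black
feat = block_hue_histogram(rgb_to_hsv_image(rgb))
print(feat.shape)            # → (32,) : 4 blocks x 8 hue bins
```

Texture and shape features would be extracted analogously and combined with this vector during similarity measurement.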
Abstract: To secure data against various attacks and to ensure data integrity, the data must be encrypted before it is transmitted or stored. This paper introduces a new, effective, and lossless image encryption algorithm using a natural logarithmic function. The new algorithm encrypts an image through a three-stage process. In the first stage, a reference natural logarithmic function is generated as the foundation for the encrypted image. The image numeral matrix is then decomposed into five integer numbers, and the positions of these numbers are transformed into matrices. An advantage of this method is that it efficiently encrypts a variety of digital images, such as binary images, gray images, and RGB images, without any quality loss. The principles of the presented scheme could be applied to provide complexity, and thus security, for a variety of data systems, such as images and others.
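The abstract does not give the scheme's full details, so as a hedged illustration of the general idea of a lossless cipher built on a natural-logarithm reference function, the sketch below derives a keystream from fractional parts of ln(·) and XORs it with the image; XOR is its own inverse, so decryption is exact. This keystream construction is an assumption for illustration, not the paper's actual algorithm, and it is not cryptographically strong.

```python
import math
import numpy as np

def log_keystream(length, key=7):
    """Derive bytes from fractional parts of ln(key + i + 2).
    Illustrative assumption, not the paper's exact scheme."""
    frac = [math.log(key + i + 2) % 1 for i in range(length)]
    return np.array([int(f * 256) % 256 for f in frac], dtype=np.uint8)

def encrypt(image, key=7):
    """XOR the flattened image with the log-derived keystream."""
    ks = log_keystream(image.size, key)
    return (image.ravel() ^ ks).reshape(image.shape)

decrypt = encrypt  # XOR is its own inverse, so the scheme is lossless

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
enc = encrypt(img)
print(np.array_equal(decrypt(enc), img))  # → True
```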
Abstract: Steganography means "covered writing." Steganography involves the concealment of information within computer files [1]; in other words, it is secret communication that hides the existence of the message. In this paper, we use the term cover image for an image that does not yet contain a secret message and stego image for an image with an embedded secret message; the secret message itself is referred to as the stego-message or hidden message. We propose a technique called the RGB-intensity-based steganography model, since the RGB model is commonly used in this field to hide data. The methods used here are based on manipulating the least significant bits of pixel values [3][4] or on rearranging colors to create least-significant-bit or parity-bit patterns that correspond to the message being hidden. The proposed technique attempts to overcome the problem of sequential embedding through the use of a stego-key to select the pixels.
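The LSB manipulation with keyed pixel selection can be sketched as follows: a stego-key seeds a permutation of pixel positions, message bits replace the LSBs at those positions, and extraction reads them back in the same order. The seed and permutation scheme are illustrative choices, not the paper's exact key mechanism.

```python
import numpy as np

def embed_lsb(cover, message, positions):
    """Hide message bits in the LSBs of the pixels chosen by `positions`
    (a stego-key-derived permutation rather than sequential order)."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    stego = cover.ravel().copy()
    idx = positions[:bits.size]
    stego[idx] = (stego[idx] & 0xFE) | bits   # clear LSB, set message bit
    return stego.reshape(cover.shape)

def extract_lsb(stego, n_bytes, positions):
    """Read the LSBs back from the same keyed positions."""
    bits = stego.ravel()[positions[:n_bytes * 8]] & 1
    return np.packbits(bits).tobytes()

rng = np.random.default_rng(42)            # the seed plays the role of a stego-key
cover = rng.integers(0, 256, size=(16, 16, 3), dtype=np.uint8)
positions = rng.permutation(cover.size)    # non-sequential pixel selection
stego = embed_lsb(cover, b"hi", positions)
print(extract_lsb(stego, 2, positions))    # → b'hi'
```

Because only LSBs change, at most one intensity level per channel differs between cover and stego image.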
Abstract: Least Developed Countries (LDCs) such as
Bangladesh, which earns 25% of its revenue from textile exports,
need to produce less defective textile to minimize production
cost and time. Inspection processes in these industries are
mostly manual and time-consuming. Reducing errors in identifying
fabric defects requires a more automated and accurate inspection
process. To address this gap, this research implements a Textile
Defect Recognizer that uses computer vision methodology in
combination with multi-layer neural networks to identify four
classes of textile defects. The recognizer, suitable for LDCs,
identifies fabric defects at economical cost and provides a less
error-prone inspection system in real time. To generate the input
set for the neural network, the recognizer first captures digital
fabric images with an image acquisition device and converts the
RGB images into binary images through a restoration process and
local threshold techniques. The outputs of the processed image,
namely the area of the faulty portion, the number of objects in
the image, and the sharp factor of the image, are then fed as the
input layer to the neural network, which uses the backpropagation
algorithm to compute the weights and generates the desired defect
classifications as output.
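Two of the feature-extraction steps above can be sketched directly: a local (per-block mean) threshold to binarise the image, and a connected-component count plus faulty-area measure on the result. The block size, connectivity, and toy image are illustrative assumptions.

```python
import numpy as np
from collections import deque

def local_threshold(gray, block=8):
    """Binarise with a per-block mean threshold (a simple local-threshold stand-in)."""
    out = np.zeros_like(gray, dtype=bool)
    h, w = gray.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = gray[i:i+block, j:j+block]
            out[i:i+block, j:j+block] = tile > tile.mean()
    return out

def count_objects(binary):
    """Count 4-connected foreground components with a BFS flood fill."""
    seen = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    count = 0
    for si in range(h):
        for sj in range(w):
            if binary[si, sj] and not seen[si, sj]:
                count += 1
                q = deque([(si, sj)])
                seen[si, sj] = True
                while q:
                    i, j = q.popleft()
                    for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                        if (0 <= ni < h and 0 <= nj < w
                                and binary[ni, nj] and not seen[ni, nj]):
                            seen[ni, nj] = True
                            q.append((ni, nj))
    return count

gray = np.zeros((16, 16))
gray[2:5, 2:5] = 1.0       # two bright "defect" blobs
gray[10:13, 10:13] = 1.0
binary = local_threshold(gray)
print(count_objects(binary), int(binary.sum()))  # → 2 18
```

The object count and faulty area (foreground pixel count) would then join the sharp factor as inputs to the backpropagation network.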
Abstract: Recent developments in automotive technology are focused on economy, comfort, and safety. Vehicle tracking and collision detection systems are attracting the attention of many investigators focused on driving safety in the field of automotive mechatronics. In this paper, a vision-based vehicle detection system is presented. The developed system is intended to be used for collision detection and driver alerts. The system uses RGB images captured by a camera in a car driven on the highway. Images captured by the moving camera are used to detect the moving vehicles in the image. A vehicle ahead of the camera is detected in daylight conditions. The proposed method detects moving vehicles by subtracting successive images. The plate height of the vehicle is determined using a plate recognition algorithm, and the distance of the moving object is calculated from the plate height. After the distance of the moving vehicle is determined, its relative speed and Time-to-Collision are calculated using distances measured in successive images. Results obtained in road tests are discussed in order to validate the proposed method.
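The distance and Time-to-Collision computation above follows the pinhole camera model: distance is proportional to the real plate height over its apparent height in pixels, and TTC is the current distance divided by the closing speed between frames. The focal length and plate height below are illustrative values, not calibration data from the paper.

```python
# Assumed calibration values for illustration only.
FOCAL_PX = 800.0        # focal length in pixels
PLATE_H_M = 0.11        # real licence-plate height in metres

def distance_m(plate_height_px):
    """d = f * H_real / h_image (pinhole camera model)."""
    return FOCAL_PX * PLATE_H_M / plate_height_px

def time_to_collision(d_prev, d_curr, dt):
    """TTC = current distance / closing speed between two frames."""
    closing_speed = (d_prev - d_curr) / dt
    return d_curr / closing_speed

d1, d2 = distance_m(8.0), distance_m(10.0)       # plate appears larger -> closer
print(round(d1, 2), round(d2, 2))                # → 11.0 8.8
print(round(time_to_collision(d1, d2, 0.5), 1))  # → 2.0
```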
Abstract: In this paper, a way of hiding a text message in a gray image (steganography) is presented. The method first finds the binary value of each character of the text message, and then locates the dark (black) places of the gray image by converting the original image to a binary image and labeling each object of the image using 8-connectivity. These images are then converted to RGB images in order to find the dark places, because in this way each gray level maps to an RGB color and the dark level of the gray image can be identified; if the gray image is very light, its histogram must be adjusted manually so that only dark places are found. In the final stage, each group of 8 pixels in the dark places is treated as a byte, and the binary value of each character is placed in the low bit of each such byte, which increases the security of the basic steganography method (LSB).
Abstract: Breast carcinoma is the most common form of cancer
in women. Multicolour fluorescent in-situ hybridisation (m-FISH) is
a common method for staging breast carcinoma. The interpretation
of m-FISH images is complicated by two effects: (i) spectral
overlap in the emission spectra of fluorochrome-marked DNA probes
and (ii) tissue autofluorescence. In this paper, hyper-spectral images
of m-FISH samples are used, and spectral unmixing is applied to produce
false-colour images with higher contrast and better information
content than standard RGB images. The spectral unmixing is realised
by combinations of Orthogonal Projection Analysis (OPA), Alternating
Least Squares (ALS), Simple-to-use Interactive Self-Modeling
Mixture Analysis (SIMPLISMA), and VARIMAX. These are applied
to the data to reduce tissue autofluorescence and resolve the spectral
overlap in the emission spectra. The results show that the spectral unmixing
methods reduce the intensity caused by tissue autofluorescence by
up to 78% and enhance image contrast by algorithmically reducing
the overlap of the emission spectra.
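The core linear-unmixing idea shared by these methods can be sketched with a plain least-squares solve: each pixel's spectrum is modelled as a mixture of endmember spectra (here two fluorochromes plus an autofluorescence component), and solving for the abundances separates the autofluorescence channel from the probe signals. The endmember spectra below are toy values, not measured m-FISH data, and the sketch omits the OPA/ALS/SIMPLISMA/VARIMAX refinements the paper combines.

```python
import numpy as np

# Toy endmember spectra (columns): two fluorochromes plus a flat
# autofluorescence component, sampled at 6 hypothetical wavelength bands.
S = np.array([[0.9, 0.1, 0.3],
              [0.7, 0.2, 0.3],
              [0.3, 0.5, 0.3],
              [0.1, 0.9, 0.3],
              [0.0, 0.6, 0.3],
              [0.0, 0.2, 0.3]])

def unmix(pixel_spectrum):
    """Least-squares abundances: solve S a ≈ x for a."""
    a, *_ = np.linalg.lstsq(S, pixel_spectrum, rcond=None)
    return a

true_a = np.array([0.6, 0.3, 0.1])   # 10% autofluorescence
x = S @ true_a                       # noiseless mixed spectrum
a = unmix(x)
print(np.allclose(a, true_a))        # → True
# Suppressing the autofluorescence abundance (a[2]) before recolouring
# yields the higher-contrast false-colour image described above.
```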