Abstract: This paper presents a constrained valley detection
algorithm. The goal is to find valleys in a map for path planning,
enabling a robot or vehicle to move safely. The constraints on a
valley are a desired width and a desired depth, which ensure
sufficient space for movement when a vehicle passes through it. We
propose an algorithm to find valleys satisfying both dimensional
constraints. A merit of our algorithm is that no pre-processing or
post-processing is needed to eliminate undesired small valleys.
The algorithm is validated through simulation using digitized
elevation data.
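The width and depth constraints can be illustrated with a minimal 1-D sketch (the paper's algorithm works on 2-D elevation maps; the profile, thresholds and function name below are invented for illustration):

```python
# Hypothetical 1-D sketch: a valley is a run of samples that stays at or
# below max_height (depth constraint) for at least min_width samples
# (width constraint). Narrow dips are rejected inline, so no separate
# post-processing pass is needed to remove small valleys.
def find_valleys(profile, max_height, min_width):
    valleys, start = [], None
    for i, h in enumerate(profile):
        if h <= max_height:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_width:
                valleys.append((start, i - 1))
            start = None
    if start is not None and len(profile) - start >= min_width:
        valleys.append((start, len(profile) - 1))
    return valleys

profile = [5, 5, 1, 1, 1, 1, 5, 2, 5, 1, 1, 5]
print(find_valleys(profile, max_height=2, min_width=3))  # [(2, 5)]
```

The single-sample dip at index 7 and the two-sample dip at indices 9-10 are discarded as they are found, echoing the abstract's point that undesired small valleys need no separate elimination step.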
Abstract: The development of many real-time image-processing systems for the measurement and inspection of products cannot be carried out entirely in a laboratory, due to the size or temperature of the manufactured products. Such systems must be developed in successive phases. First, the system is installed in the production line with only an operational service to acquire images of the products and other complementary signals. Next, a service for recording the images and signals must be developed and integrated into the system. Only once a large set of product images is available can the real-time image-processing algorithms for measurement or inspection of the products be developed under realistic conditions. Finally, the recording service is turned off or removed, and the system operates only with the real-time services for the acquisition and processing of the images. This article presents a systematic performance evaluation of the image compression algorithms currently available for implementing a real-time recording service. The results make it possible to establish a trade-off between the reduction or compression of the image size and the CPU time required to reach that compression level.
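The size-versus-CPU-time trade-off can be sketched with plain zlib on synthetic byte data (the article benchmarks actual image codecs, which are not reproduced here; the data and levels below are invented):

```python
import time
import zlib

# Hedged sketch: higher compression levels shrink the output further but
# cost more CPU time, which is the trade-off a real-time recording
# service must balance.
data = bytes(range(256)) * 4096  # ~1 MB of synthetic, compressible data

for level in (1, 6, 9):
    t0 = time.perf_counter()
    out = zlib.compress(data, level)
    dt = time.perf_counter() - t0
    print(f"level {level}: {len(out)} bytes in {dt * 1e3:.2f} ms")
```

In a real system the same measurement would be made with the candidate image codecs on recorded production images.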
Abstract: Stochastic resonance (SR) is a phenomenon whereby
the signal transmission or signal processing through certain nonlinear
systems can be improved by adding noise. This paper discusses SR in
nonlinear signal detection by a simple test statistic, which can be
computed from multiple noisy data in a binary decision problem based
on a maximum a posteriori probability criterion. The performance of
detection is assessed by the probability of detection error Per. When
the input signal is subthreshold, we establish that a benefit can be
gained from noise for different noise types, and further confirm that
subthreshold SR exists in nonlinear signal detection. The efficacy of
SR improves significantly, and the minimum of Per can approach zero
dramatically as the sample number increases. These
results show the robustness of SR in signal detection and extend the
applicability of SR in signal processing.
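The subthreshold effect can be reproduced with a toy Monte-Carlo sketch (all parameters, the threshold detector, and the count-based decision rule below are invented; the paper's test statistic and MAP criterion are not reproduced):

```python
import random
from statistics import NormalDist

# Toy sketch of stochastic resonance: a subthreshold signal A < theta
# never crosses the threshold on its own, so with weak noise the
# empirical error probability stays near 0.5 (guessing); moderate noise
# makes threshold crossings informative and the error probability drops.
def error_prob(sigma, A=0.5, theta=1.0, n=50, trials=2000, seed=1):
    rng = random.Random(seed)
    nd = NormalDist()
    p0 = 1 - nd.cdf(theta / sigma)        # crossing prob. when s = 0
    p1 = 1 - nd.cdf((theta - A) / sigma)  # crossing prob. when s = A
    cut = n * (p0 + p1) / 2               # midpoint count-threshold rule
    errors = 0
    for _ in range(trials):
        s = A if rng.random() < 0.5 else 0.0
        k = sum(s + rng.gauss(0, sigma) > theta for _ in range(n))
        errors += (k > cut) != (s == A)
    return errors / trials

print(error_prob(0.05), error_prob(0.4))  # weak noise vs. helpful noise
```

Increasing `n` further lowers the achievable error, mirroring the abstract's claim that the minimum of Per approaches zero as the sample number increases.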
Abstract: For communication between human and computer in an
interactive computing environment, gesture recognition has been
studied vigorously, and many studies have proposed efficient
recognition algorithms using images captured by 2D cameras. However,
these methods share a limitation: the extracted features cannot fully
represent the object in the real world. Although many studies have
used 3D features instead of 2D features for more accurate gesture
recognition, problems such as the processing time needed to generate
3D objects remain unsolved in related research. We therefore propose
a method to extract 3D features combined
with 3D object reconstruction. The method uses a modified GPU-based
visual hull generation algorithm that disables unnecessary processes
such as texture calculation, and generates three kinds of 3D
projection maps as the 3D features: the nearest boundary, the
farthest boundary, and the thickness of the object projected onto the
base plane. In the experimental results section, we present results
of the proposed method on eight human postures (T shape, both hands
up, right hand up, left hand up, hands front, stand, sit, and bend)
and compare its computational time with that of previous methods.
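The three projection maps can be illustrated for a single base-plane cell (the voxel column and function name below are invented; the paper computes these maps on the GPU for the full reconstruction):

```python
# Hypothetical sketch: given the occupancy of one voxel column along the
# depth axis above a base-plane cell, record the nearest boundary, the
# farthest boundary, and the thickness (number of occupied voxels).
def projection_maps(column):
    occupied = [z for z, v in enumerate(column) if v]
    if not occupied:
        return None, None, 0
    return min(occupied), max(occupied), sum(column)

print(projection_maps([0, 0, 1, 1, 1, 0, 1, 0]))  # (2, 6, 4)
```

Repeating this over every base-plane cell yields the three feature maps described in the abstract.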
Abstract: The present work concerns the synthesis and
characterization of composites with Al alloy A 384.1 as the matrix,
the main ingredient being an Al/Al-5% MgO alloy based metal matrix
composite. Its practical implications lie in a low-cost processing
route for the fabrication of Al alloy A 384.1, given the operational
difficulties of presently available manufacturing processes based on
liquid manipulation methods. As with all new developments, a complete
understanding of the influence of the processing variables on the
final quality of the product is required. The composite was studied
comprehensively to obtain information on the specific heat of the
material with the aid of thermographs. The products are evaluated
with respect to relative particle size and mechanical behavior under
tensile loading. Furthermore, the Taguchi technique was employed in
the experiments, and optimum results were achieved owing to the
effectiveness of this approach.
Abstract: This paper gives a novel approach to real-time speed estimation of multiple traffic vehicles using fuzzy logic and image-processing techniques with a proper arrangement of camera parameters. The described algorithm consists of several important steps. First, the background is estimated by computing the median over a time window of specific frames. Second, the foreground is extracted using a fuzzy similarity approach (FSA) between the estimated background pixels and the pixels of the current frame, which contains both foreground and background. Third, the traffic lanes are divided into two parts, one for each direction of travel, for parallel processing. Finally, the speeds of the vehicles are estimated by a Maximum a Posteriori Probability (MAP) estimator. True ground speed was determined using infrared sensors for three different vehicles, and the results are compared with those of the proposed algorithm, which achieves an accuracy of ±0.74 km/h.
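The background-estimation step (a per-pixel median over a time window of frames) can be sketched on tiny invented frames:

```python
from statistics import median

# Sketch of the background-estimation step only: the per-pixel median
# over a window of frames rejects transient foreground. The 2x2
# grayscale frames and values below are invented.
frames = [
    [[10, 10], [10, 10]],
    [[10, 200], [10, 10]],   # a bright foreground blob passes through
    [[10, 10], [200, 10]],
    [[10, 10], [10, 10]],
    [[10, 10], [10, 10]],
]

h, w = len(frames[0]), len(frames[0][0])
background = [[median(f[r][c] for f in frames) for c in range(w)]
              for r in range(h)]
print(background)  # transient blobs are rejected -> [[10, 10], [10, 10]]
```

Because the moving vehicle occupies any given pixel in only a minority of the windowed frames, the median recovers the static road surface.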
Abstract: The purpose of this research is to compare the original
intra-oral digital dental radiograph images with images that are
enhanced using a combination of image-processing algorithms.
Intra-oral digital dental radiograph images are often noisy, with
blurred edges and low contrast. A combination of sharpening and
enhancement methods is used to overcome these problems. The three
proposed compound algorithms are Sharp Adaptive Histogram
Equalization (SAHE), Sharp Median Adaptive Histogram Equalization
(SMAHE) and Sharp Contrast-Limited Adaptive Histogram Equalization
(SCLAHE). This paper presents an initial study of the perception of
six dentists regarding the details of abnormal pathologies and the
improvement of image quality in ten intra-oral radiographs. The
research focuses on the detection of only three types of pathology:
periapical radiolucency, widened periodontal ligament space and loss
of lamina dura. The overall result shows that SCLAHE slightly
improves the appearance of dental abnormalities over the original
image and also outperforms the other two proposed compound
algorithms.
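The "sharpen then equalize" idea behind these compound algorithms can be sketched in simplified form (global histogram equalization and a 1-D unsharp mask below stand in for the paper's adaptive/contrast-limited variants; the pixel values are invented 8-bit samples):

```python
# Simplified sketch of a sharpening + equalization pipeline. The paper's
# methods use adaptive / contrast-limited equalization on 2-D images;
# this 1-D global version only illustrates the two-stage structure.
def equalize(pixels, levels=256):
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    return [round((levels - 1) * cdf[p] / n) for p in pixels]

def sharpen(pixels, amount=1.0):
    # unsharp mask: add back the difference from a 3-tap mean blur
    out = []
    for i, p in enumerate(pixels):
        left = pixels[max(i - 1, 0)]
        right = pixels[min(i + 1, len(pixels) - 1)]
        blur = (left + p + right) / 3
        out.append(min(255, max(0, round(p + amount * (p - blur)))))
    return out

img = [100, 102, 101, 140, 142, 141, 100, 101]
print(equalize(sharpen(img)))  # edges boosted, then contrast stretched
```

The compound algorithms differ only in which equalization variant follows the sharpening stage.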
Abstract: Image watermarking has proven to be quite an
efficient tool for the purpose of copyright protection and
authentication over the last few years. In this paper, a novel image
watermarking technique in the wavelet domain is suggested and
tested. To achieve greater security and robustness, the proposed
technique relies on two nested watermarks. A primary watermark in the
form of a PN sequence is first embedded into an image (the secondary
watermark), which is then embedded into the host image. The
technique is implemented using Daubechies mother wavelets where
an arbitrary embedding factor α is introduced to improve the
invisibility and robustness. The proposed technique has been applied
on several grayscale images where a PSNR of about 60 dB was
achieved.
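Additive embedding with a strength factor α can be sketched in one dimension (the paper uses 2-D Daubechies wavelets and nested watermarks; the Haar transform, host samples and PN bits below are invented stand-ins):

```python
# Hedged 1-D sketch of wavelet-domain additive embedding: the watermark
# is scaled by alpha and added to the detail coefficients, trading
# invisibility (small alpha) against robustness (large alpha).
def haar_fwd(x):
    a = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return a, d

def haar_inv(a, d):
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def embed(signal, pn, alpha=0.1):
    a, d = haar_fwd(signal)
    d = [di + alpha * w for di, w in zip(d, pn)]  # mark detail coeffs
    return haar_inv(a, d)

host = [100, 102, 98, 97, 110, 111, 90, 92]
pn = [1, -1, 1, -1]  # pseudo-noise watermark bits
marked = embed(host, pn, alpha=0.5)
print(marked)
```

With the original detail coefficients available, the PN sequence is recovered by differencing the coefficients and dividing by α.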
Abstract: The role of knowledge is a determinative factor in the life
of the economy and society. Defining knowledge is not an easy task,
yet the real task is to identify the right knowledge. From this view,
knowledge is a sum of experience, ideas and cognitions that can help
companies remain in their markets and realize maximum profit. At the
same time, changing circumstances indicate in advance that the
content of, and demands on, the right knowledge are changing. In this
paper we analyse a special segment on the basis of an empirical
survey. We investigated the behaviour and strategies of small and
medium-sized enterprises (SMEs) in the area of knowledge handling.
The survey was carried out with questionnaires, and a wide range of
statistical methods was used during processing. As a result we show
how well these companies are prepared to operate in a
knowledge-based economy and in which areas they have prominent
deficiencies.
Abstract: Embedded systems must respect stringent real-time
constraints. Various hardware components included in such
systems such as cache memories exhibit variability and therefore
affect execution time. Indeed, a cache memory access from an
embedded microprocessor may result in a cache hit, where the data is
available, or in a cache miss, where the data must be fetched from an
external memory with an additional delay. It is therefore
highly desirable to predict future memory accesses during
execution in order to appropriately prefetch data without incurring
delays. In this paper, we evaluate the potential of several artificial
neural networks for the prediction of instruction memory
addresses. Neural networks have the potential to tackle the
nonlinear behavior observed in memory accesses during program
execution, and their numerous demonstrated hardware implementations
favor this choice over traditional forecasting techniques for
inclusion in embedded systems. However, embedded applications
execute millions of instructions and therefore generate millions of
addresses to be predicted. This very challenging problem of neural
network based prediction of large time series is approached in this
paper by evaluating various neural network architectures based on
the recurrent neural network paradigm, with pre-processing based on
the Self-Organizing Map (SOM) classification technique.
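The SOM pre-processing stage can be sketched minimally in one dimension (the unit count, learning rate, neighborhood function and toy address list below are all invented; the paper's networks and address traces are far larger):

```python
import random

# Minimal 1-D Self-Organizing Map sketch: scalar "addresses" are
# quantized onto a small line of units, whose weights drift toward the
# clusters present in the trace. This classification/quantization step
# precedes the recurrent predictor in the approach described above.
def train_som(data, n_units=4, epochs=30, lr=0.3, seed=0):
    rng = random.Random(seed)
    w = [rng.uniform(min(data), max(data)) for _ in range(n_units)]
    for _ in range(epochs):
        for x in data:
            bmu = min(range(n_units), key=lambda i: abs(w[i] - x))
            for i in range(n_units):
                # simple neighborhood: full update for the winner,
                # half-strength for its immediate neighbors
                h = 1.0 if i == bmu else (0.5 if abs(i - bmu) == 1 else 0.0)
                w[i] += lr * h * (x - w[i])
        lr *= 0.9  # decaying learning rate
    return sorted(w)

addresses = [0x100, 0x104, 0x108, 0x400, 0x404, 0x800, 0x804, 0x808]
print(train_som(addresses))  # unit weights settle near the clusters
```

Feeding the winning-unit indices, rather than raw addresses, to a recurrent network shrinks the prediction alphabet dramatically.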
Abstract: Echocardiography imaging is one of the most common diagnostic tests widely used for assessing abnormalities of regional heart ventricle function. The main goal of the image enhancement task in 2D echocardiography (2DE) is to address two major problems affecting the anatomical structures: speckle noise and low quality. Speckle noise reduction is therefore an important pre-processing step for reducing distortion effects in 2DE image segmentation. In this paper, we present the common filters based on some form of low-pass spatial smoothing, such as the Mean, Gaussian and Median filters. The Laplacian filter was used as a high-pass sharpening filter. A comparative analysis is presented to test the effectiveness of these filters after they are applied to original 2DE images of 4-chamber and 2-chamber views. Three statistical quantity measures, root mean square error (RMSE), peak signal-to-noise ratio (PSNR) and signal-to-noise ratio (SNR), are used to evaluate the filter performance quantitatively on the output enhanced image.
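The three quality measures can be written out directly (the two flat 8-pixel "images" below are invented 8-bit samples for illustration):

```python
import math

# Sketch of the three quantitative measures used in the comparison:
# RMSE between reference and output, PSNR relative to the 8-bit peak,
# and SNR as signal power over noise power.
def rmse(ref, out):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(ref, out)) / len(ref))

def psnr(ref, out, peak=255):
    e = rmse(ref, out)
    return float("inf") if e == 0 else 20 * math.log10(peak / e)

def snr(ref, out):
    sig = sum(a * a for a in ref)
    noise = sum((a - b) ** 2 for a, b in zip(ref, out))
    return float("inf") if noise == 0 else 10 * math.log10(sig / noise)

ref = [52, 55, 61, 59, 79, 61, 76, 61]
out = [50, 56, 60, 60, 80, 60, 75, 62]
print(rmse(ref, out), psnr(ref, out), snr(ref, out))
```

A lower RMSE, and correspondingly higher PSNR and SNR, indicate that the filtered image stays closer to the reference.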
Abstract: In modern human computer interaction systems
(HCI), emotion recognition is becoming an imperative characteristic.
The quest for effective and reliable emotion recognition in HCI has
resulted in a need for better face detection, feature extraction and
classification. In this paper we present results of feature space analysis
after briefly explaining our fully automatic vision based emotion
recognition method. We demonstrate the compactness of the feature
space and show how the 2d/3d based method achieves superior features
for the purpose of emotion classification. We also show that feature
normalization creates a largely person-independent feature space. As
a consequence, the classifier architecture has only a minor influence
on the classification result. This is particularly
elucidated with the help of confusion matrices. For this purpose
advanced classification algorithms, such as Support Vector Machines
and Artificial Neural Networks are employed, as well as the simple k-
Nearest Neighbor classifier.
Abstract: Research in quantum computation explores the consequences of having information encoding, processing and communication exploit the laws of quantum physics, i.e. the laws that govern our ultimate knowledge, today, of the foreign world of elementary particles, as described by quantum mechanics. This paper starts with a short survey of the principles underlying quantum computing and of some of the major breakthroughs brought by the first ten to fifteen years of research in this domain; quantum algorithms and quantum teleportation are very briefly presented. The next sections are devoted to one among the many directions of current research in the quantum computation paradigm, namely quantum programming languages and their semantics. A few other hot topics and open problems in quantum information processing and communication are mentioned in a few words in the concluding remarks, the most difficult of them being the physical implementation of a quantum computer. The interested reader will find a list of useful references at the end of the paper.
Abstract: Although face detection is not a recent activity in the
field of image processing, it is still an open area for research. The
greatest step in this field is the work reported by Viola, and its
recent analogue is that of Huang et al. Both use similar features and
a similar training process. The former only detects upright faces,
whereas the latter can detect multi-view faces in still grayscale
images using new features called 'sparse features'. Finding these
features with the previously proposed methods is very time-consuming
and inefficient. Here, we propose a new approach to finding sparse
features using a genetic algorithm. This method has a lower
computational cost and obtains more effective features in the
learning process for face detection, which leads to higher accuracy.
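The genetic-algorithm machinery can be sketched minimally (the bit-string encoding, one-max toy fitness, and all rates below are invented; the paper evolves sparse features scored by their usefulness for face detection, which is not reproduced here):

```python
import random

# Minimal GA sketch: truncation selection, one-point crossover, point
# mutation. The toy fitness (number of 1-bits) stands in for the
# paper's feature-quality score.
def evolve(fitness, n_bits=8, pop_size=20, generations=40, seed=3):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)     # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_bits)          # point mutation
            child[i] ^= rng.random() < 0.1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve(sum)  # toy "one-max" fitness
print(best)
```

Replacing the toy fitness with a measure of a candidate sparse feature's discriminative power yields the search described in the abstract.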
Abstract: This paper demonstrates the design and construction of a
microcontroller-based telephone exchange system; its aim is to study
telecommunication and the connection with the PIC16F877A and the
DTMF MT8870D. In the microcontroller system, a PIC16F877A
microcontroller is used to control call processing. Dial tone, busy
tone and ring tone are provided during call progress. Instead of a
ready-made tone generator IC, an oscillator-based tone generator is
used. The resulting telephone exchange system is well suited to homes
and small businesses needing extensions. It requires a phone
operation control system, an analog interface circuit and a switching
circuit. The exchange design contains eight channels.
It is a low-cost, good-quality telephone exchange for today's
telecommunication needs, offering features available in much more
expensive PBX units without using high-priced phones, and it supports
long-distance telephone services.
Abstract: This paper presents the results of various classifiers in a system that can automatically recognize four different static human body postures in video sequences. The considered postures are standing, sitting, squatting, and lying. The three classifiers considered are a naïve one and two based on belief theory. The belief-theory-based classifiers use either a classic or a restricted plausibility criterion to make a decision after data fusion. The data come from 2D segmentation of the people and from localization of their faces. The measurements consist of distances relative to a reference posture. The efficiency and the limits of the different classifiers in the recognition system are highlighted through the analysis of a great number of results. The system allows real-time processing.
Abstract: The article deals with the relation between rainfall in selected months and subsequent weed infestation of spring barley. The field experiment was performed at the Mendel University agricultural enterprise in Žabčice, Czech Republic. Weed infestation was measured in spring barley vegetation in the years 2004 to 2012. Barley was grown under three tillage variants: conventional tillage technology (CT), minimization tillage technology (MT), and no tillage (NT). Precipitation was recorded at one-day intervals, and monthly precipitation was calculated from the measured values for the months of October through April. The technique of canonical correspondence analysis was applied for further statistical processing. A total of 41 different weed species were found over the course of the 9-year monitoring period. The results clearly show that precipitation in the selected months affects the incidence of most weed species, but acts differently in the monitored variants of tillage technologies.
Abstract: This paper presents a new approach for centralized
monitoring of, and self-protection against, fiber faults in a
fiber-to-the-home (FTTH) access network using Smart Access Network
Testing, Analyzing and Database (SANTAD). SANTAD is installed with
the optical line terminal (OLT) at the central office (CO) for
in-service transmission surveillance and fiber fault localization
within FTTH with a point-to-multipoint (P2MP) configuration,
downstream from the CO towards the customer residential locations,
based on the graphical user interface (GUI) processing capabilities
of MATLAB. SANTAD is able to detect any fiber fault and identify the
failure location in the network. It enables the status of every
connected optical network unit (ONU) line to be displayed on one
screen, with the capability to configure the attenuation and detect
failures simultaneously. The analysis results and information are
delivered to the field engineers for prompt action, while the failed
line is diverted to a protection line to keep traffic flowing. This
approach has bright prospects for improving the survivability and
reliability as well as increasing the efficiency and monitoring
capabilities of FTTH.
Abstract: The motivation of this work was to find a suitable 3D
scanner for the digitization of human body parts in the field of
prosthetics and orthotics. The main project objective is to compare
three hand-held portable scanners (two optical and one laser) and two
optical tripod scanners. The comparison was made with respect to
scanning detail, simplicity of operation and the ability to scan
directly on the human body. Testing was carried out on a plaster cast
of an upper limb and directly on a few volunteers. The objectively
monitored parameters were the time needed for digitizing and
post-processing the 3D data and the resulting visual data quality.
The ease of use and handling of each scanner were assessed
subjectively. A new tripod was developed to improve the face-scanning
conditions. The results provide an
overview of the suitability of different types of scanners.
Abstract: The interactive push VOD system is a new kind of system
that incorporates push technology and interactive techniques. It can
push movies to users at high speed during off-peak hours for optimal
network usage, saving bandwidth. This paper presents an effective
software-based solution for processing mass downstream data at the
terminals of an interactive push VOD system, where the service can
download movies according to a viewer's selection. The downstream
data is divided into two categories: (1) carousel data delivered
according to the DSM-CC protocol; (2) IP data delivered according to
the Euro-DOCSIS protocol. In order to accelerate the download speed
and reduce the data loss rate at the terminals, the software strategy
introduces caching, multi-threading and resuming mechanisms. The
experiments demonstrate the advantages of the software-based solution.