Abstract: In this paper a class of analog algorithms based on the
concept of the Cellular Neural Network (CNN) is applied to the
processing of an important class of medical images, namely retinal
images, for detecting various symptoms connected with diabetic
retinopathy. Specific processing tasks such as morphological
operations, linear filtering and thresholding are proposed, the
corresponding template values are given, and simulations on real
retinal images are provided.
Abstract: Different biometric methods are presented for
eigenface-based face detection, recognition, identification and
verification. The aim of this research is to manage the critical
processing stages (accuracy, speed, security and monitoring) of face
activities while retaining the flexibility to search and edit the
secure authorized database. In this paper we implement different
techniques, such as eigenface vector reduction using texture and
shape vectors to reduce complexity, while density matching scores
with Face Boundary Fixation (FBF) extract the most likely
characteristics in this media-processing content. We examine the
development and performance efficiency of the database by applying
our algorithms in both the recognition and detection phases. Our
results show gains in accuracy and security, with better achievement
than a number of previous approaches in all the above processes.
Abstract: The fact that a traditional food safety system is
inadequate in the absence of a food safety culture has recently
become a cause of concern for food safety professionals and other
stakeholders. Focusing on the implementation of traditional food
safety systems, i.e. HACCP prerequisite programs and HACCP, without
the presence of a food safety culture in the food industry has led to
the processing, marketing and distribution of contaminated foods. The
results are regular outbreaks of foodborne illnesses and recalls of
foods from retail outlets, with serious consequences for consumers
and manufacturers alike. This article considers the importance of
food safety culture and the cases of outbreaks and recalls that
occurred when companies did not make food safety culture a priority.
Most importantly, the food safety cultures of some food companies in
South Africa were assessed from questionnaire responses of food
safety/food industry professionals in Durban, South Africa. The
article concludes by recommending that food industry employees and
employers alike take food safety culture seriously.
Abstract: This paper presents a communication network for a machine
vision system implementing control-system and logistics applications
in an industrial environment. Real-time distribution over the network
is very important for communication among the vision node, image
processing and control, as well as the distributed I/O nodes. An
implementation that is robust with respect to both camera packaging
and data transmission has been devised. The network consists of a
gigabit Ethernet network in which a switch with an integrated
firewall is used to distribute the data and to provide connections to
the imaging control station and to IEC 61131-conformant signal
integration via the Modbus TCP protocol. The real-time and delay-time
properties of each part of the network are considered and worked out
in this paper.
Abstract: Image processing for capsule endoscopy requires large
memory, and diagnosis takes hours since the operation time is
normally more than 8 hours. A real-time analysis algorithm for
capsule images can therefore be clinically very useful: it can
differentiate abnormal tissue from healthy structures and provide
correlation information among the images. Bleeding is our interest in
this regard, and we propose a method of detecting frames with
potential bleeding in real time. Our detection algorithm is based on
statistical analysis and the shapes of bleeding spots. We tested our
algorithm on 30 cases of capsule endoscopy of the digestive tract.
Results were excellent: a sensitivity of 99% and a specificity of 97%
were achieved in detecting the image frames with bleeding spots.
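As an illustration of this kind of statistical color test, a frame can be flagged when enough of its pixels are strongly red-dominant. The sketch below is a hedged assumption: the function name `is_bleeding_frame`, the red-dominance ratio and the pixel-fraction threshold are illustrative choices, not the values used in the paper.

```python
# Hypothetical sketch of bleeding-frame detection: flag a frame when a
# sufficient fraction of its pixels is strongly red-dominant. The ratio
# test and the thresholds are illustrative assumptions only.

def is_bleeding_frame(pixels, red_ratio=2.0, min_fraction=0.01):
    """pixels: iterable of (r, g, b) tuples. Returns True when the frame
    contains enough red-dominant pixels to suggest bleeding."""
    suspicious = 0
    total = 0
    for r, g, b in pixels:
        total += 1
        # A pixel is "suspicious" when red strongly dominates green and blue.
        if r > red_ratio * (g + 1) and r > red_ratio * (b + 1):
            suspicious += 1
    return total > 0 and suspicious / total >= min_fraction
```

A real implementation would also apply the shape analysis the abstract mentions; this sketch covers only the per-pixel statistical screen.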
Abstract: The application of neural networks to disease diagnosis has
made great progress and is widely used by physicians. An
electrocardiogram (ECG) carries vital information about heart
activity, and physicians use this signal for cardiac disease
diagnosis, which was the motivation for our study. In our work, the
tachycardia features obtained are used for the training and testing
of a neural network. In this study we use fuzzy probabilistic neural
networks as an automatic technique for ECG signal analysis. Since
every real signal recorded by the equipment can contain different
artifacts, some preprocessing steps are needed before feeding it to
our system. The wavelet transform is used to extract the
morphological parameters of the ECG signal. The results for a variety
of arrhythmias show that the presented approach is superior to
previously presented algorithms, with an average accuracy of about
95% for more than 7 tachyarrhythmias.
Abstract: The H.264/AVC standard is a highly efficient video codec
providing high-quality video at low bit rates. Because it employs
advanced techniques, its computational complexity has increased, and
this complexity is the major obstacle to implementing a real-time
encoder and decoder. Parallelism is one approach, and it can be
realized on a multi-core system. We analyze macroblock-level
parallelism, which preserves the bit rate while providing high
processor concurrency. In order to reduce the encoding time, a
dynamic data partition based on macroblock regions is proposed. This
data partition has advantages in load balancing and data-communication
overhead. Using the data partition, the encoder obtains more than a
3.59x speed-up on a four-processor system. This work can be applied
to other multimedia processing applications.
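Macroblock-level parallelism in H.264 follows from the standard's dependencies: a macroblock can be encoded once its left, top and top-right neighbours are finished, which yields the classic wavefront order. The sketch below is illustrative only (the paper proposes a dynamic region-based partition, not this static schedule); it computes which macroblocks may run concurrently in each wave.

```python
def wavefront_schedule(mb_cols, mb_rows):
    """Return a list of waves; wave k holds the (x, y) macroblocks that can
    be encoded concurrently once their left (x-1, y), top (x, y-1) and
    top-right (x+1, y-1) neighbours are done. Under those dependencies the
    earliest wave of macroblock (x, y) is x + 2*y."""
    waves = {}
    for y in range(mb_rows):
        for x in range(mb_cols):
            waves.setdefault(x + 2 * y, []).append((x, y))
    return [waves[k] for k in sorted(waves)]
```

The length of the longest wave bounds how many processors the frame can keep busy, which is why wider frames parallelize better at the macroblock level.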
Abstract: Emerging bio-engineering fields such as brain-computer
interfaces, neuroprosthesis devices, and the modeling and simulation
of neural networks have led to increased research activity in
algorithms for the detection, isolation and classification of Action
Potentials (AP) from noisy data trains. Current techniques in the
field of 'unsupervised, no-prior-knowledge' biosignal processing
include energy operators, wavelet detection and adaptive
thresholding. These tend to bias towards larger AP waveforms, APs may
be missed due to deviations in spike shape and frequency, and
correlated noise spectra can cause false detections. Such algorithms
also tend to suffer from large computational expense.
A new signal detection technique based upon the ideas of phase-space
diagrams and trajectories is proposed, using a delayed copy of the AP
to highlight discontinuities relative to background noise. This idea
has been used to create algorithms that are computationally
inexpensive and address the above problems.
Distinct APs have been picked out and manually classified from real
physiological data recorded from a cockroach. To facilitate testing
of the new technique, an Auto-Regressive Moving Average (ARMA) noise
model has been constructed based upon the background noise of the
recordings. Along with the AP classification means, this model
enables the generation of realistic neuronal data sets at arbitrary
signal-to-noise ratios (SNR).
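A minimal sketch of the delayed-copy idea, assuming a 2-D phase-space embedding (x[n], x[n-d]) in which background noise stays near the diagonal x[n] ≈ x[n-d]; the delay, the threshold and the function name are illustrative assumptions, not the authors' algorithm.

```python
def phase_space_detect(signal, delay=3, threshold=5.0):
    """Flag sample indices where the trajectory (x[n], x[n-delay]) departs
    from the background-noise diagonal x[n] ~ x[n-delay]; a large
    |x[n] - x[n-delay]| marks the sharp discontinuity of an action
    potential relative to the slowly varying noise floor."""
    hits = []
    for n in range(delay, len(signal)):
        if abs(signal[n] - signal[n - delay]) > threshold:
            hits.append(n)
    return hits
```

Because the statistic is one subtraction and one comparison per sample, this style of detector stays computationally inexpensive, consistent with the claim above.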
Abstract: Grid computing is a high-performance computing environment
for solving large-scale computational applications. Grid computing
involves resource management, job scheduling, security and
information management, among other issues. Job scheduling is a
fundamental and important issue in achieving high performance in grid
computing systems; however, designing an efficient scheduler and
implementing it is a big challenge. There is a need for further
improvement of job scheduling algorithms that group light-weight
(small) jobs into coarse-grained groups of jobs, which reduces
communication time and processing time and enhances resource
utilization. This grouping strategy considers the processing power,
memory size and bandwidth requirements of each job to reflect a real
grid system. The experimental results demonstrate that the proposed
scheduling algorithm efficiently reduces the processing time of jobs
in comparison to others.
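The grouping strategy can be sketched as packing small jobs into groups sized to a resource's capacity. In the hedged example below, job lengths are in million instructions (MI) and a group is bounded by the resource's MIPS rating times a granularity time; the names and the greedy packing are illustrative assumptions, not the paper's exact algorithm (which also weighs memory and bandwidth).

```python
def group_jobs(job_lengths_mi, resource_mips, granularity_s):
    """Pack small jobs (lengths in million instructions) into coarse-grained
    groups sized so that each group takes roughly `granularity_s` seconds on
    a resource rated at `resource_mips` MIPS."""
    capacity = resource_mips * granularity_s  # MI a single group may hold
    groups, current, used = [], [], 0
    for mi in job_lengths_mi:
        if current and used + mi > capacity:
            groups.append(current)      # close the full group
            current, used = [], 0
        current.append(mi)
        used += mi
    if current:
        groups.append(current)
    return groups
```

Each group is then submitted as one coarse-grained job, so the per-job communication overhead is paid once per group instead of once per small job.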
Abstract: A theory for optimal filtering of infinite sets of random
signals is presented. There are several new distinctive features of the
proposed approach. First, a single optimal filter for processing any
signal from a given infinite signal set is provided. Second, the filter is
presented in the special form of a sum with p terms where each term
is represented as a combination of three operations. Each operation
is a special stage of the filtering aimed at facilitating the associated
numerical work. Third, an iterative scheme is implemented into the
filter structure to provide an improvement in the filter performance at
each step of the scheme. The final step of the scheme concerns signal
compression and decompression. This step is based on the solution of
a new rank-constrained matrix approximation problem. The solution
to the matrix problem is described in this paper. A rigorous error
analysis is given for the new filter.
Abstract: In DMVC, more than one source is available for the construction of side information. Newer techniques make use of both sources simultaneously by constructing a bitmask that determines the source of every block or pixel of the side information. A lot of computation is required to determine each bit in the bitmask. In this paper, we try to define areas that can only be well predicted by temporal interpolation and not by multiview interpolation or synthesis. We predict that areas not covered by two cameras cannot be appropriately predicted by multiview synthesis, and if we can identify such areas in the first place, we need not go through the full set of computations for all the pixels that lie in those areas. Moreover, this paper also defines a technique based on the KLT to mark the above-mentioned areas before any other processing is done on the side view.
Abstract: METIS, the Multi Element Telescope for Imaging and
Spectroscopy, is a coronagraph aboard the European Space Agency's
Solar Orbiter mission aimed at the observation of the solar corona
via both VIS and UV/EUV narrow-band imaging and spectroscopy. METIS,
with its multi-wavelength capabilities, will study in detail the
physical processes responsible for coronal heating and the origin and
properties of the slow and fast solar wind. The METIS electronics
will collect and process scientific data by means of its detector
proximity electronics, the digital front-end subsystem electronics
and the MPPU, the Main Power and Processing Unit, hosting a
space-qualified processor, memories and rad-hard FPGAs acting as
digital controllers. This paper reports on the overall METIS
electronics architecture and data processing capabilities, conceived
to address all the scientific issues as a trade-off between
requirements and allocated resources, just before the Preliminary
Design Review, an ESA milestone in April 2012.
Abstract: Segmentation is an important step in medical image analysis
and classification for radiological evaluation or computer-aided
diagnosis. Computer-aided diagnosis (CAD) of lung CT generally first
segments the area of interest (the lung) and then analyzes the
segmented area for nodule detection in order to diagnose the disease.
For a normal lung, segmentation can be performed by making use of the
excellent contrast between air and the surrounding tissues. However,
this approach fails when the lung is affected by high-density
pathology. Dense pathologies are present in approximately a fifth of
clinical scans, and for computer analysis such as detection and
quantification of abnormal areas it is vital that the entire lung
part of the image is provided and that no part present in the
original image is eradicated. In this paper we propose a lung
segmentation technique which accurately segments the lung parenchyma
from lung CT scan images. The algorithm was tested against 25
datasets of different patients received from the University of Akron,
USA, and Aga Khan Medical University, Karachi, Pakistan.
Abstract: Local Linear Neuro-Fuzzy Models (LLNFM), like other neuro-fuzzy systems, are adaptive networks that provide robust learning capabilities and are widely utilized in various applications such as pattern recognition, system identification, image processing and prediction. The local linear model tree (LOLIMOT) is a type of Takagi-Sugeno-Kang neuro-fuzzy algorithm which has proven its efficiency, compared with other neuro-fuzzy networks, in learning nonlinear systems and in pattern recognition. In this paper, dedicated reconfigurable and parallel processing hardware for the LOLIMOT algorithm and its applications is presented. This hardware realizes on-chip learning, which gives it the capability to work as a standalone device in a system. Synthesis results on FPGA platforms show its potential to run at least 250 times faster than software implementations of the algorithm.
Abstract: Automatic reading of handwritten cheques is a
computationally complex process, and it plays an important role in
financial risk management. Machine vision and learning provide a
viable solution to this problem. Research effort has mostly been
focused on recognizing diverse pitches of cheques and demand drafts
with an identical outline. However, most of these methods employ
template matching to localize the pitches, and such schemes can
potentially fail when applied to the different types of outlines
maintained by banks. In this paper, the so-called outline problem is
resolved by a cheque information tree (CIT), which generalizes the
localization method to extract active regions of entities. In
addition, a weight-based density plot (WBDP) is used to isolate text
entities and read complete pitches. Recognition is based on texture
features using neural classifiers. The legal amount is subsequently
recognized using both texture and perceptual features. A
post-processing phase is invoked to detect incorrect readings by
means of a Type-2 grammar using a Turing machine. The performance of
the proposed system was evaluated using cheques and demand drafts of
22 different banks. The test data consist of a collection of 1540
leaves obtained from 10 different account holders at each bank.
Results show that this approach can easily be deployed without
significant design amendments.
Abstract: Many experimental results suggest that precise spike timing is significant in neural information processing. We construct a self-organization model using spatiotemporal patterns, where Spike-Timing Dependent Plasticity (STDP) tunes the conduction delays between neurons. We show that, for highly synchronized inputs, the fluctuation of conduction delays causes globally continuous and locally distributed firing patterns through this self-organization.
Abstract: A data warehouse (DW) is a system whose value and role lie in supporting decision-making through querying. Queries to a DW are critical with regard to their complexity and length; they often access millions of tuples and involve joins between relations and aggregations. Materialized views can provide better performance for DW queries. However, these views incur a maintenance cost, so materializing all views is not possible. An important challenge of the DW environment is materialized view selection, because we have to realize the trade-off between query performance and view maintenance cost. Therefore, in this paper we introduce a new approach to solving this challenge based on Two-Phase Optimization (2PO), a combination of Simulated Annealing (SA) and Iterative Improvement (II), with the use of a Multiple View Processing Plan (MVPP). Our experiments show that our method provides a further improvement in terms of query processing cost and view maintenance cost.
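To illustrate the kind of search 2PO performs, the sketch below runs plain simulated annealing over subsets of candidate views. The toy cost model (each query pays its base cost minus the best saving of a usable materialized view, plus per-view maintenance) and all names and numbers are illustrative assumptions, not the paper's MVPP costs.

```python
import math
import random

def total_cost(selected, query_costs, maintenance_costs, savings):
    """Toy objective: query processing cost after applying the best usable
    materialized view per query, plus maintenance of each selected view.
    savings[q] maps view index -> cost saved when query q uses that view."""
    cost = 0.0
    for q, base in enumerate(query_costs):
        best = max((savings[q][v] for v in selected if v in savings[q]),
                   default=0.0)
        cost += base - best
    return cost + sum(maintenance_costs[v] for v in selected)

def select_views_sa(n_views, query_costs, maintenance_costs, savings,
                    temp=100.0, cooling=0.95, steps=500, seed=0):
    """Simulated annealing over subsets of views: flip one view per step,
    always accept improvements, accept worse states with probability
    exp(-delta / temp), and remember the best subset seen."""
    rng = random.Random(seed)
    state = set()
    cur = total_cost(state, query_costs, maintenance_costs, savings)
    best, best_cost = set(state), cur
    for _ in range(steps):
        neighbour = set(state)
        neighbour.symmetric_difference_update({rng.randrange(n_views)})
        c = total_cost(neighbour, query_costs, maintenance_costs, savings)
        if c < cur or rng.random() < math.exp((cur - c) / temp):
            state, cur = neighbour, c
            if cur < best_cost:
                best, best_cost = set(state), cur
        temp *= cooling
    return best, best_cost
```

The full 2PO scheme would follow this SA phase with Iterative Improvement from the best state found; the sketch shows only the annealing half of the trade-off search.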
Abstract: In digital signal processing it is important to approximate
multi-dimensional data by the method called rank reduction, in which
the rank of multi-dimensional data is reduced from higher to lower.
For 2-dimensional data, singular value decomposition (SVD) is one of
the best-known rank reduction techniques. In addition, the outer
product expansion, an extension of SVD, was proposed and implemented
for multi-dimensional data, and has been widely applied to image
processing and pattern recognition. However, the multi-dimensional
outer product expansion has great computational complexity and lacks
orthogonality between the expansion terms. Therefore we have proposed
an alternative method, the Third-order Orthogonal Tensor Product
Expansion (3-OTPE). 3-OTPE uses the power method instead of a
nonlinear optimization method in order to decrease the computing
time. At the same time, the group of De Lathauwer proposed the
Higher-Order SVD (HOSVD), which was also developed as an SVD
extension for multi-dimensional data. 3-OTPE and HOSVD are similar in
their rank reduction of multi-dimensional data. Using these two
methods we can obtain computational results, some of which are the
same while others differ slightly. In this paper, we compare 3-OTPE
with HOSVD in terms of calculation accuracy and computing time, and
clarify the differences between the two methods.
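The standard HOSVD can be sketched via mode unfoldings: each factor matrix is the left singular matrix of one unfolding, and the core tensor is the data multiplied along each mode by the transposed factors. The implementation below is a generic sketch of that textbook construction, not the authors' 3-OTPE or the exact code compared in the paper.

```python
import numpy as np

def unfold(t, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest
    into columns, giving a shape (t.shape[mode], -1) matrix."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def hosvd(t):
    """Higher-Order SVD: the factor matrix of each mode is the matrix of
    left singular vectors of that mode's unfolding; the core tensor is the
    data multiplied along each mode by the transposed factor."""
    factors = [np.linalg.svd(unfold(t, m), full_matrices=False)[0]
               for m in range(t.ndim)]
    core = t
    for m, u in enumerate(factors):
        # Multiply mode m of the core by u.T (contract u.T with axis m).
        core = np.moveaxis(
            np.tensordot(u.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    return core, factors
```

Truncating the columns of each factor (and the corresponding core slices) gives the rank-reduced approximation both methods aim at; with full factors the decomposition reconstructs the tensor exactly.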
Abstract: XML has become a popular standard for information exchange via the web. Each XML document can be represented as a rooted, ordered, labeled tree. The node label shows the exact position of a node in the original document. Region and Dewey encoding are two well-known methods of labeling trees. In this paper, we propose a new insert-friendly labeling method named IFDewey, based on a recently proposed scheme called Extended Dewey. In Extended Dewey, many labels must be modified when a new node is inserted into the XML tree. Our method eliminates this problem by reserving even numbers for future insertions. Numbers generated by Extended Dewey may be even or odd; IFDewey modifies Extended Dewey so that only odd numbers are generated, and even numbers can then be used for a much easier insertion of nodes.
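The odd-only labeling idea can be sketched directly. The helpers below are an illustrative reading of the scheme (the function names and details are assumptions): children receive odd component values 1, 3, 5, ..., so an insertion between two consecutive siblings can take the reserved even value in between without relabeling any existing node.

```python
def ifdewey_label(parent_label, child_index):
    """Label the child at 0-based position `child_index` under `parent_label`
    using only odd component values, e.g. children of '1' are '1.1', '1.3', ...
    An empty parent label denotes the root level."""
    component = 2 * child_index + 1
    return f"{parent_label}.{component}" if parent_label else str(component)

def insert_between(left_label, right_label):
    """Insert a new sibling between two existing odd-numbered siblings by
    taking the reserved even value in between; no existing label changes."""
    prefix, l = left_label.rsplit(".", 1) if "." in left_label else ("", left_label)
    _, r = right_label.rsplit(".", 1) if "." in right_label else ("", right_label)
    mid = (int(l) + int(r)) // 2  # even when l and r are consecutive odds
    return f"{prefix}.{mid}" if prefix else str(mid)
```

For example, a node inserted between siblings '1.1' and '1.3' receives '1.2', which preserves document order under component-wise comparison without touching any other label.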
Abstract: The toughening of polyamide 6 (PA6)/nanoclay (NC) nanocomposites with styrene-ethylene/butadiene-styrene copolymer (SEBS), using maleated styrene-ethylene/butadiene-styrene copolymer (mSEBS) as a compatibilizer, was investigated by blending them in a co-rotating twin-screw extruder. The response surface method of experimental design was used for optimizing the material and processing parameters. The effects of four factors, namely the SEBS, mSEBS and NC contents as material variables and the order of mixing as a processing factor, on the toughness of the hybrid nanocomposites were studied. All the prepared samples showed ductile behavior, and the low-temperature Izod impact toughness of some of the hybrid nanocomposites demonstrated a 900% improvement compared to the PA6 matrix, while the modulus showed a maximum enhancement of 20% compared to the pristine PA6 resin.