Abstract: We propose a new perspective on speech
communication using blind source separation. The original speech is
mixed with key signals consisting of the mixing matrix, chaotic
signals and random noise. However, parts of the key (the mixing
matrix and the random noise) are not needed for decryption. In a
practical implementation, one can encrypt the speech by changing the
noise signal every time. Hence, the present scheme achieves the
advantages of a one-time pad encryption while avoiding its drawbacks
in key exchange. It is demonstrated that the proposed scheme is
immune to traditional attacks.
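The mixing step described above can be sketched as follows. This is a minimal illustration only, assuming a logistic-map chaotic key, an arbitrary 2x2 mixing matrix and two transmitted channels; the actual key construction and the blind-source-separation decryption on the receiver side are those of the paper and are not shown here.

```python
import numpy as np

def encrypt(speech, seed=0.37, gain=5.0):
    """Mix a speech frame with a logistic-map chaotic key signal.

    Sketch of the mixing step only: the receiver would recover the
    sources with a blind source separation algorithm (e.g. ICA),
    which, as the abstract notes, does not need the mixing matrix
    itself as part of the key.
    """
    n = len(speech)
    key = np.empty(n)
    x = seed
    for i in range(n):              # logistic map in its chaotic regime
        x = 3.99 * x * (1.0 - x)
        key[i] = x
    A = np.array([[1.0, gain],      # mixing matrix (not needed for decryption)
                  [0.7, 1.3]])
    sources = np.stack([speech, key - key.mean()])
    return A @ sources              # two transmitted mixture channels
```

With silent input, both mixture channels carry only scaled copies of the centred key, which makes the role of the mixing matrix easy to see.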
Abstract: In this paper, we propose an improved fast search
algorithm, using combined histogram features and a temporal division
method, for retrieving short MPEG video clips from a large video
database. Two types of histogram features are used to generate more
robust features. The first is based on the adjacent pixel intensity
difference quantization (APIDQ) algorithm, which had previously been
applied reliably to human face recognition. An APIDQ histogram is
utilized as the feature vector of the frame image. The second is an
ordinal feature, which is robust to color distortion. Combined with
active search [4], a temporal pruning algorithm, fast and robust
video search can be realized. The proposed search algorithm has been
evaluated on 6 hours of video, searching for 200 given MPEG video
clips, each 30 seconds long. Experimental results show that the
proposed algorithm can detect a similar video clip in merely 120 ms,
and an Equal Error Rate (EER) of 1% is achieved, which is more
accurate and robust than the conventional fast video search algorithm.
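As a rough illustration of the APIDQ idea, the sketch below builds a histogram of quantized adjacent-pixel intensity differences as a per-frame feature. It is a simplified stand-in, not the algorithm of the paper: the bin count, the clipping range, and the independent (rather than joint) quantization of horizontal and vertical differences are all assumptions.

```python
import numpy as np

def apidq_like_histogram(frame, n_bins=16, max_diff=64):
    """Histogram of quantized adjacent-pixel intensity differences.

    Simplified stand-in for APIDQ: the real algorithm quantizes the
    difference pair jointly; here each difference is quantized
    independently and pooled into one histogram.
    """
    f = frame.astype(np.int16)
    dh = (f[:, 1:] - f[:, :-1]).ravel()   # horizontal differences
    dv = (f[1:, :] - f[:-1, :]).ravel()   # vertical differences
    diffs = np.clip(np.concatenate([dh, dv]), -max_diff, max_diff - 1)
    # map [-max_diff, max_diff) onto n_bins equal-width bins
    bins = ((diffs + max_diff) * n_bins) // (2 * max_diff)
    hist = np.bincount(bins, minlength=n_bins).astype(np.float64)
    return hist / hist.sum()              # normalize so frames are comparable
```

Normalizing the histogram makes frames of different sizes directly comparable with a simple distance measure during search.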
Abstract: A novel algorithm for constructing a seamless video mosaic of an entire panorama, by automatically analyzing and managing feature points (both their quantity and quality) across the sequence, is presented. Since a video contains significant redundancy, not all consecutive video images are required to create a mosaic; only some key images need to be selected. Meanwhile, feature-based mosaicing methods rely heavily on correct feature point correspondences, and if the key images have a large frame interval, the mosaic will often be interrupted by the scarcity of corresponding feature points. A unique characteristic of the method is its ability to handle all of the above problems in video mosaicing. Experiments have been performed under various conditions, and the results show that our method can achieve fast and accurate video mosaic construction. Keywords: video mosaic, feature point management, homography estimation.
Abstract: We summarize information that facilitates choosing an ontology language for knowledge-intensive applications. This paper is a short version of the ontology language state-of-the-art and evolution analysis carried out for choosing an ontology language in the IST Esperonto project. First, we analyze the changes and evolution that took place in the field of Semantic Web languages during the last years, in particular around the ontology languages of the RDF/S and OWL family. Second, we present current trends in the development of Semantic Web languages, in particular rule-support extensions for Semantic Web languages and emerging ontology languages such as the WSMO languages.
Abstract: A complex-valued neural network is a neural network with complex-valued inputs and/or weights and/or thresholds and/or activation functions. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of the most important applications of the complex-valued neural network is in image and vision processing. In neural networks, radial basis functions are often used for interpolation in multidimensional space. A radial basis function is a function which has built into it a distance criterion with respect to a centre. Radial basis functions have often been applied in the area of neural networks, where they may be used as a replacement for the sigmoid hidden-layer transfer characteristic in multi-layer perceptrons. This paper aims to present exhaustive results of using RBF units in a complex-valued neural network model that uses the back-propagation algorithm (called 'Complex-BP') for learning. Our experimental results demonstrate the effectiveness of radial basis functions in a complex-valued neural network for image recognition over a real-valued neural network. We have studied and reported various observations, such as the effects of the learning rate, the range of the randomly selected initial weights, the error function used, and the number of iterations required for the error to converge, on a neural network model with RBF units. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.
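A minimal sketch of how an RBF hidden layer fits into a complex-valued network is given below, assuming Gaussian units whose (real) distance criterion is evaluated on complex inputs. It is not the paper's Complex-BP model, only the forward pass of one plausible layer.

```python
import numpy as np

def complex_rbf_forward(x, centres, sigmas, w_out):
    """One forward pass through a complex-valued RBF layer.

    x       : (d,) complex input vector
    centres : (h, d) complex RBF centres
    sigmas  : (h,) real widths
    w_out   : (h,) complex output weights

    The RBF distance criterion |x - c|^2 is real even for complex
    inputs, so the hidden activations are real Gaussians; complexity
    enters through the inputs, centres and output weights.
    """
    d2 = np.sum(np.abs(x - centres) ** 2, axis=1)   # real distances
    phi = np.exp(-d2 / (2.0 * sigmas ** 2))         # real activations
    return np.dot(w_out, phi)                       # complex output
```

When the input coincides with a centre, that unit's activation is exactly 1, which is a convenient sanity check for the layer.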
Abstract: In this paper, we introduce a novel platform
encryption method which modifies its keys and random number
generators step by step during the encryption algorithm. According
to the complexity of the proposed algorithm, it is safer than other
methods.
Abstract: We present a general comparison of punctual-kriging-based image restoration for different neighbourhood sizes. The technique under consideration is formulated using punctual kriging and fuzzy concepts for image restoration in the spatial domain. Three different neighbourhood windows are considered for estimating the semivariance at different lags, in order to study their effect on reducing the negative weights produced by punctual kriging and, consequently, on the restoration of degraded images. Our results show that the effect of neighbourhood sizes larger than 5x5 on the reduction of negative weights is insignificant. In addition, image quality measures, such as structural similarity indices, peak signal-to-noise ratios and the new variogram-based quality measures, show that a 3x3 window gives better performance than larger window sizes.
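For readers unfamiliar with why negative weights arise, the ordinary (punctual) kriging system can be sketched as follows; the linear variogram model and the tiny neighbourhood used in the example are illustrative assumptions, not the configurations studied in the paper.

```python
import numpy as np

def ordinary_kriging_weights(coords, target, variogram):
    """Solve the ordinary (punctual) kriging system for the weights of
    the neighbourhood points. The weights are constrained to sum to 1
    via a Lagrange multiplier, and individual weights can come out
    negative, which is what larger neighbourhoods are meant to reduce."""
    n = len(coords)
    A = np.ones((n + 1, n + 1))
    A[n, n] = 0.0
    for i in range(n):
        for j in range(n):
            A[i, j] = variogram(np.linalg.norm(coords[i] - coords[j]))
    b = np.ones(n + 1)
    b[:n] = [variogram(np.linalg.norm(c - target)) for c in coords]
    sol = np.linalg.solve(A, b)
    return sol[:n]    # kriging weights (last entry is the Lagrange multiplier)
```

The unbiasedness constraint (weights summing to 1) comes from the last row of the system, so it holds regardless of the variogram chosen.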
Abstract: An image texture analysis and target recognition approach using an improved image texture feature coding method (TFCM) and a Support Vector Machine (SVM) for target detection is presented. With our proposed target detection framework, targets of interest can be detected accurately. A cascade sliding-window technique was also developed for automated target localization. Application to mammograms showed that over 88% of normal mammograms and 80% of abnormal mammograms can be correctly identified. The approach was also successfully applied to Synthetic Aperture Radar (SAR) and Ground Penetrating Radar (GPR) images for target detection.
Abstract: An effective approach for realizing a binary tree structure representing combinational logic functionality with enhanced throughput is discussed in this paper. The optimization of the maximum operating frequency was achieved through delay minimization, which in turn was made possible by reducing the depth of the binary network. The proposed synthesis methodology has been validated by experimentation with FPGAs as the target technology. Though our proposal is technology independent, the heuristic enables better optimization of throughput, even after technology mapping, for Boolean functionality whose reduced CNF form has a lower literal cost than its reduced DNF form at the Boolean equation level. In other cases, our method converges to results similar to those of [12]. The practical results obtained for a variety of case studies demonstrate improvements in the maximum throughput rate for the Spartan IIE (XC2S50E-7FT256) and Spartan 3 (XC3S50-4PQ144) FPGA logic families of 10.49% and 13.68% respectively. With respect to the LUTs and IOBUFs required for physical implementation of the requisite non-regenerative logic functionality, the proposed method enabled savings of 44.35% and 44.67% respectively over the existing efficient method available in the literature [12].
Abstract: Data hiding in text documents involves considerable
complexity owing to the nature of text documents. A robust text
watermarking scheme targeting an object-based environment is
presented in this research. At the heart of the proposed solution is
the concept of watermarking an object-based text document in which
every text string is treated as a separate object having its own set
of properties. Taking advantage of the z-ordering of objects, the
watermark is applied along the z-axis, introducing zero fidelity
disturbance to the text. The watermark bit sequence generated from
the user key is hashed with selected properties of the given
document to determine the bit sequence to embed. Bits are embedded
along the z-axis, and the document suffers no fidelity issues when
printed, scanned or photocopied.
Abstract: This paper presents the architecture of current filesystem
implementations, as well as our new filesystem SpadFS and operating
system Spad with a rewritten VFS layer targeted at high-performance
I/O applications. The paper presents microbenchmarks and real-world
benchmarks of different filesystems on the same kernel, as well as
benchmarks of the same filesystem on different kernels, enabling the
reader to conclude how much the performance of various tasks is
affected by the operating system and how much by the physical layout
of data on disk. The paper describes our novel features, most notably
continuous allocation of directories and cross-file readahead, and
shows their impact on performance.
Abstract: In this article, a formal specification and verification of the Rabin public-key scheme in a formal proof system is presented. The idea is to use the two views of cryptographic verification: the computational approach, relying on the vocabulary of probability theory and complexity theory, and the formal approach, based on ideas and techniques from logic and programming languages. A major objective of this article is the presentation of the first computer-proved implementation of the Rabin public-key scheme in Isabelle/HOL. Moreover, we explicate a (computer-proven) formalization of correctness as well as a computer verification of security properties using a straightforward computation model in Isabelle/HOL. The analysis uses a given database to prove formal properties of our implemented functions with computer support. The main task in designing a practical formalization of correctness, as well as efficient computer proofs of security properties, is to cope with the complexity of cryptographic proving. We reduce this complexity by exploring a lightweight formalization that enables both appropriate formal definitions and efficient formal proofs. Consequently, we obtain reliable proofs with a minimal error rate that augment the used database, which provides a formal basis for further computer proof constructions in this area.
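Independently of the Isabelle/HOL formalization, the Rabin scheme itself can be illustrated in a few lines. This is a plain executable sketch of the textbook scheme, not the verified implementation of the paper.

```python
def rabin_encrypt(m, n):
    """Rabin encryption: c = m^2 mod n, with n = p*q."""
    return (m * m) % n

def rabin_decrypt(c, p, q):
    """Return the four square roots of c mod p*q, for primes p, q = 3 (mod 4).

    For such primes, c^((p+1)/4) mod p is a square root of c mod p;
    the CRT then combines the +/- roots mod p and mod q."""
    n = p * q
    mp = pow(c, (p + 1) // 4, p)
    mq = pow(c, (q + 1) // 4, q)
    # CRT coefficients: yp * p = 1 (mod q), yq * q = 1 (mod p)
    yp = pow(p, -1, q)
    yq = pow(q, -1, p)
    r = (yp * p * mq + yq * q * mp) % n
    s = (yp * p * mq - yq * q * mp) % n
    return sorted({r, n - r, s, n - s})
```

Decryption yields four candidate plaintexts; identifying the intended one (e.g. via redundancy in the message) is part of what a correctness formalization has to pin down precisely.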
Abstract: This paper discusses the evolution of retrieval
techniques, focusing on the development, challenges and trends
of image retrieval. It highlights both the already addressed and the
outstanding issues. The explosive growth of image data has led to the
need for research and development in image retrieval. Image
retrieval research is moving from keywords, to low-level features,
to semantic features. The drive towards semantic features is due to
the problem that keywords can be very subjective and time consuming,
while low-level features cannot always describe the high-level
concepts in users' minds.
Abstract: On-line (near infrared) spectroscopy is widely used to support the operation of complex process systems. Information extracted from a spectral database can be used to estimate unmeasured product properties and to monitor the operation of the process. These techniques are based on looking for similar spectra using nearest-neighbour algorithms and distance-based searching methods. Searching for nearest neighbours in the spectral space is an NP-hard problem, whose computational complexity increases with the number of points in the discrete spectrum and the number of samples in the database. To reduce the calculation time, some kind of indexing can be used. The main idea presented in this paper is to combine indexing and visualization techniques to reduce the computational requirements of estimation algorithms by providing a two-dimensional indexing that can also be used to visualize the structure of the spectral database. This 2D visualization of the spectral database not only supports the application of distance- and similarity-based techniques but also enables the use of advanced clustering and prediction algorithms based on the Delaunay tessellation of the mapped spectral space. This means the prediction need not use the high-dimensional space but can be based on the mapped space instead. The results illustrate that the proposed method is able to segment (cluster) spectral databases and to detect outliers that are not suitable for instance-based learning algorithms.
Abstract: In this paper, we focus on the fusion of images from
different sources using multiresolution wavelet transforms. Based on
reviews of popular image fusion techniques used in data analysis,
different pixel- and energy-based methods are evaluated
experimentally. A novel architecture with a hybrid algorithm is
proposed, which applies a pixel-based maximum selection rule to the
low-frequency approximations and filter-mask-based fusion to the
high-frequency details of the wavelet decomposition. The key feature
of the hybrid architecture is that it combines the advantages of
pixel- and region-based fusion in a single image, which can support
the development of sophisticated algorithms enhancing edges and
structural details. A graphical user interface is developed for
image fusion to make the research outcomes available to the end
user. To make the GUI capabilities available for medical, industrial
and commercial activities without a MATLAB installation, a
standalone executable application is also developed using the MATLAB
Compiler Runtime.
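The hybrid fusion rule can be sketched as follows, assuming a single-level Haar decomposition and a 3x3 energy mask for the detail bands; the paper does not specify its wavelet family or mask here, so both are illustrative choices.

```python
import numpy as np

def haar2(a):
    """One-level 2-D Haar decomposition (even-sized input assumed)."""
    lo = (a[:, ::2] + a[:, 1::2]) / 2.0
    hi = (a[:, ::2] - a[:, 1::2]) / 2.0
    LL = (lo[::2] + lo[1::2]) / 2.0   # low-frequency approximation
    LH = (lo[::2] - lo[1::2]) / 2.0   # detail bands
    HL = (hi[::2] + hi[1::2]) / 2.0
    HH = (hi[::2] - hi[1::2]) / 2.0
    return LL, (LH, HL, HH)

def ihaar2(LL, bands):
    """Inverse of haar2."""
    LH, HL, HH = bands
    lo = np.empty((LL.shape[0] * 2, LL.shape[1]))
    hi = np.empty_like(lo)
    lo[::2], lo[1::2] = LL + LH, LL - LH
    hi[::2], hi[1::2] = HL + HH, HL - HH
    out = np.empty((lo.shape[0], lo.shape[1] * 2))
    out[:, ::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def local_energy(c, k=3):
    """Sliding-mask energy: mean squared coefficient in a k x k window."""
    p = k // 2
    cp = np.pad(c ** 2, p, mode="edge")
    e = np.zeros_like(c)
    for dy in range(k):
        for dx in range(k):
            e += cp[dy:dy + c.shape[0], dx:dx + c.shape[1]]
    return e / (k * k)

def hybrid_fuse(img_a, img_b):
    """Pixel-based max on approximations, mask-based energy on details."""
    LLa, da = haar2(img_a)
    LLb, db = haar2(img_b)
    LL = np.maximum(LLa, LLb)
    fused = []
    for ca, cb in zip(da, db):
        pick_a = local_energy(ca) >= local_energy(cb)
        fused.append(np.where(pick_a, ca, cb))
    return ihaar2(LL, tuple(fused))
```

Fusing an image with itself reproduces it exactly, which confirms the decomposition and reconstruction are consistent.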
Abstract: Genetic Algorithms (GAs) are direct search methods
which require little information about the design space. This
characteristic, together with their robustness, has made these
algorithms very popular in recent decades. On the other hand, when
this method is employed there is no guarantee of achieving optimal
results, which obliges the designer to run such algorithms more than
once to obtain more reliable results. There have been many attempts
to modify the algorithms to make them more efficient. In this paper,
by applying the fractal dimension (in particular, the box counting
method), the complexity of the design space is estimated in order to
determine the mutation and crossover probabilities (Pm and Pc). The
methodology is illustrated with a numerical example. It is concluded
that this modification improves the efficiency of GAs and makes them
yield more reliable results, especially for design spaces with
higher fractal dimensions.
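The box counting estimate itself can be sketched as follows; the mapping from the resulting dimension to Pm and Pc is specific to the paper and is not reproduced here.

```python
import numpy as np

def box_counting_dimension(points, eps_list=(1/2, 1/4, 1/8, 1/16)):
    """Estimate the fractal (box-counting) dimension of a point set.

    points is an (n, d) array; it is scaled into the unit hypercube,
    then for each box size eps the occupied boxes N(eps) are counted,
    and the slope of log N(eps) against log(1/eps) is fitted.
    """
    pts = np.asarray(points, dtype=float)
    pts = (pts - pts.min(0)) / np.ptp(pts, axis=0)  # normalise to [0, 1]^d
    counts = []
    for eps in eps_list:
        cells = np.floor(pts / eps).astype(int)
        cells = np.minimum(cells, int(1 / eps) - 1)  # points exactly at 1.0
        counts.append(len({tuple(c) for c in cells}))
    slope, _ = np.polyfit(np.log(1 / np.asarray(eps_list)), np.log(counts), 1)
    return slope
```

A straight line in the plane comes out near dimension 1 and a filled region near 2, so the estimate directly grades how "rough" a sampled design space is.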
Abstract: The continuously growing needs of Internet applications
that transmit massive amounts of data have led to the emergence of
high-speed networks. Data transfer must take place without
congestion, and hence feedback parameters must be transferred from
the receiver to the sender so as to restrict the sending rate and
avoid congestion. Even though TCP tries to avoid congestion by
restricting the sending rate and window size, it never informs the
sender of the capacity of the data to be sent, and it halves the
window size at the time of congestion, resulting in decreased
throughput, low utilization of the bandwidth and maximum delay. In
this paper, the XCP protocol is used: feedback parameters are
calculated based on the arrival rate, service rate, traffic rate and
queue size, and the receiver informs the sender about the
throughput, the capacity of the data to be sent and the window size
adjustment. This results in no drastic decrease in window size and a
better increase in the sending rate, so that there is a continuous
flow of data without congestion. As a result, there is a maximum
increase in throughput, high utilization of the bandwidth and
minimum delay. The results of the proposed work are presented as
graphs of throughput, delay and window size. Thus, in this paper,
the XCP protocol is well illustrated and the various parameters are
thoroughly analyzed and presented.
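As an illustration of router-computed feedback, the sketch below follows the standard XCP efficiency controller of Katabi et al.; the exact feedback formula and parameter values used in this paper may differ.

```python
def xcp_aggregate_feedback(capacity_bps, input_rate_bps, queue_bytes,
                           avg_rtt_s, alpha=0.4, beta=0.226):
    """Aggregate feedback (bytes per control interval) for an XCP router,
    following the efficiency controller of the original XCP proposal:

        phi = alpha * rtt * spare_bandwidth - beta * persistent_queue

    Positive feedback asks senders to speed up; negative asks them to
    slow down, instead of TCP's blind halving of the window.
    """
    spare_bytes_per_s = (capacity_bps - input_rate_bps) / 8.0
    return alpha * avg_rtt_s * spare_bytes_per_s - beta * queue_bytes
```

An underloaded link with an empty queue yields positive feedback, a fully loaded link with a standing queue yields negative feedback, and a link exactly at capacity with no queue yields zero.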
Abstract: In this paper, a novel system for the recognition of human
faces in different color photographs is proposed. It mainly involves
face detection, normalization and recognition. First, a method
combining Haar-like face detection, segmentation and region-based
histogram stretching (RHST) is proposed to achieve more accurate
performance than using Haar features alone. Apart from effective
angle normalization, side-face (pose) normalization, which might be
important and beneficial for the preprocessing stage, is introduced.
Then, histogram-based and photometric normalization methods are
investigated, and adaptive smoothing retinex (ASR) is selected for
its satisfactory illumination normalization. Finally, a weighted
multi-block local binary pattern with 3 distance measures is applied
for pair-matching. Experimental results show its advantageous
performance compared with PCA and multi-block LBP.
Abstract: In this paper, we propose a robust face relighting
technique using spherical-space properties. The proposed method aims
to reduce the effects of illumination on face recognition. Given a
single 2D face image, we relight the face object by extracting the
nine spherical harmonic bases and the face's spherical illumination
coefficients. First, an internal training illumination database is
generated by computing the face albedo and face normals from 2D
images under different lighting conditions. Based on the generated
database, we analyze the target face pixels and compare them with
the training bootstrap using pre-generated tiles. In this work,
practical real-time processing speed and small image sizes were
considered when designing the framework. In contrast to other works,
our technique requires no 3D face models for the training process
and takes a single 2D image as input. Experimental results on
publicly available databases show that the proposed technique works
well under severe lighting conditions, with significant improvements
in face recognition rates.
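The nine spherical harmonic bases are presumably the standard ones from Ramamoorthi and Hanrahan's lighting model; under that assumption, relighting a pixel from its albedo, normal, and lighting coefficients can be sketched as:

```python
import numpy as np

def sh9_basis(normals):
    """Evaluate the first nine real spherical harmonic basis functions
    at unit surface normals, (n, 3) -> (n, 9), using the standard
    constants of the Ramamoorthi-Hanrahan irradiance model."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),      # Y_0,0
        0.488603 * y,                    # Y_1,-1
        0.488603 * z,                    # Y_1,0
        0.488603 * x,                    # Y_1,1
        1.092548 * x * y,                # Y_2,-2
        1.092548 * y * z,                # Y_2,-1
        0.315392 * (3 * z * z - 1),      # Y_2,0
        1.092548 * x * z,                # Y_2,1
        0.546274 * (x * x - y * y),      # Y_2,2
    ], axis=1)

def relight(albedo, normals, coeffs):
    """Per-pixel shading under 9 lighting coefficients: albedo * (B @ L)."""
    return albedo * (sh9_basis(normals) @ coeffs)
```

Estimating `coeffs` from an observed image is the inverse problem the paper solves with its training bootstrap; this sketch shows only the forward rendering direction.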
Abstract: Our medicine-oriented research is based on a medical
data set of real patients. Sharing private patient data with
people other than clinicians or hospital staff is a security
problem, so we have to remove personal identification information
from the medical data. After a de-identification process, the
medical data without private information are available for any
research purpose. In this paper, we introduce a universal automatic
rule-based de-identification application that performs this task on
heterogeneous medical data. A patient's private identification is
replaced by a unique identification number, even in burned-in
annotations in pixel data. The same identifier is used for all of a
patient's medical data, which preserves the relationships within the
data. The hospital can then take advantage of research feedback
based on the results.
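A minimal sketch of consistent identifier replacement, the core of such a de-identification step, is shown below. The `PAT-` identifier format and the `ANON-` replacement scheme are hypothetical, and the real application also handles burned-in pixel annotations, which this text-only sketch does not.

```python
import re

class DeIdentifier:
    """Replace each distinct patient identifier with a stable number so
    that relationships across a patient's records are preserved."""

    def __init__(self, id_pattern=r"\bPAT-\d+\b"):
        self.pattern = re.compile(id_pattern)  # hypothetical ID format
        self.mapping = {}                      # original ID -> anonymous ID

    def deidentify(self, text):
        def repl(m):
            key = m.group(0)
            if key not in self.mapping:
                self.mapping[key] = f"ANON-{len(self.mapping) + 1:06d}"
            return self.mapping[key]
        return self.pattern.sub(repl, text)
```

Because the mapping lives in the object, reusing one `DeIdentifier` across all of a patient's records keeps the same anonymous number everywhere those records mention that patient.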