Abstract: Digital watermarking provides a means of secure multimedia data communication in addition to its role in copyright protection. The spread spectrum (SS) modulation principle is widely used in digital watermarking to make multimedia signals robust against various signal-processing operations. Several SS watermarking algorithms have been proposed for multimedia signals, but very few works have discussed the issues responsible for secure data communication and for improving robustness. The present paper critically analyzes several such factors that strongly affect detection reliability, namely the properties of the spreading codes, a signal decomposition suitable for data embedding, the security provided by the key, a successive-bit-cancellation method applied at the decoder, and the secure communication of a significant signal under the camouflage of insignificant signals. Based on this analysis, a robust SS watermarking scheme for secure data communication is proposed in the wavelet domain, and improvements in secure communication and robustness performance are reported through experimental results. The reported results also show improved visual and statistical invisibility of the hidden data.
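As a rough sketch of the SS embed/detect principle the abstract builds on (the sizes, embedding strength, and key handling below are illustrative assumptions, not the paper's wavelet-domain scheme):

```python
import numpy as np

# Minimal spread-spectrum embed/detect sketch. A real scheme would embed
# into wavelet coefficients; here a synthetic coefficient vector stands in.
rng = np.random.default_rng(42)          # the seed plays the role of the secret key

host = rng.normal(0.0, 1.0, 1024)        # stand-in for host wavelet coefficients
bits = np.array([1, 0, 1, 1])            # watermark bits
alpha = 0.5                              # embedding strength (assumed)

L = host.size // bits.size               # chips per bit
pn = rng.choice([-1.0, 1.0], size=host.size)   # key-derived PN spreading code

sym = np.repeat(2 * bits - 1, L).astype(float) # map {0,1} -> {-1,+1} and spread
marked = host + alpha * sym * pn               # additive SS embedding

# Correlation detector: despread with the same PN code, sum per bit, take sign.
corr = (marked * pn).reshape(bits.size, L).sum(axis=1)
recovered = (corr > 0).astype(int)
print(recovered)                         # → [1 0 1 1], the embedded bits
```

The correlation per bit accumulates alpha·L from the watermark against zero-mean interference from the host, which is why detection is reliable without access to the original signal.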
Abstract: A new decomposition form is introduced in this report to establish a criterion for the bipartite separability of Bell diagonal states. The criterion takes the form of a quadratic inequality in the coefficients of a given Bell diagonal state and can be derived via a simple algorithmic calculation of its invariants. In addition, the criterion can be extended to quantum systems of higher dimension.
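For orientation, the two-qubit instance of such a separability condition can be stated explicitly; the following is the standard, well-known form (included as background, not the report's own quadratic-invariant derivation):

```latex
% Bell diagonal state, with |\beta_i\rangle the four Bell states
\rho \;=\; \sum_{i=1}^{4} p_i \,\lvert \beta_i \rangle\langle \beta_i \rvert ,
\qquad p_i \ge 0, \quad \sum_{i=1}^{4} p_i = 1 .
% Standard two-qubit separability condition (equivalent to PPT):
\rho \ \text{separable} \iff \max_i \, p_i \le \tfrac{1}{2} .
```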
Abstract: This paper discusses the effectiveness of the EEG signal for human identification using four or fewer channels of two different types of EEG recordings. Studies have shown that the EEG signal has biometric potential because it varies from person to person and is extremely difficult to replicate or steal. Data were collected from 10 male subjects while resting with eyes open and eyes closed, in 5 separate sessions conducted over the course of two weeks. Features were extracted using wavelet packet decomposition and analyzed to obtain the feature vectors. Subsequently, a neural network algorithm was used to classify the feature vectors. Results show that whether the subjects' eyes were open or closed is insignificant for a 4-channel biometric system, which achieved a classification rate of 81%. However, for a 2-channel system, the P4 channel should not be included if data are acquired with the subjects' eyes open. For a 2-channel system using only the C3 and C4 channels, a classification rate of 71% was achieved.
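A minimal sketch of the wavelet-packet feature extraction step described above, using a 2-level Haar packet tree on a synthetic channel (the paper's wavelet, depth, and actual EEG channels C3/C4/P4 differ; this is illustrative only):

```python
import numpy as np

# 2-level Haar wavelet-packet decomposition of one epoch into 4 subbands,
# with subband energies forming the feature vector.
rng = np.random.default_rng(2)
x = rng.normal(size=256)                     # stand-in for one EEG channel epoch

def split(v):
    """One Haar analysis step: (lowpass, highpass) halves."""
    return (v[0::2] + v[1::2]) / np.sqrt(2), (v[0::2] - v[1::2]) / np.sqrt(2)

nodes = [x]
for _ in range(2):                           # packet tree: split every node
    nodes = [h for v in nodes for h in split(v)]

features = np.array([np.sum(v ** 2) for v in nodes])   # 4 subband energies
print(features.shape)                        # (4,)
```

Because the Haar steps are orthonormal, the subband energies sum to the total signal energy, which makes them stable features for a classifier.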
Abstract: Potassium monopersulfate has been decomposed in aqueous solution in the presence of Co(II). The process has been simulated by means of a mechanism based on elementary reactions. Rate constants have been taken from literature reports or, alternatively, estimated by analogy with similar reactions occurring in Fenton's chemistry. The simulation has been applied successfully under several operating conditions.
Abstract: In this paper we present a technique to speed up ICA, based on the idea of reducing the dimensionality of the data set while preserving the quality of the results. In particular, we refer to the FastICA algorithm, which uses the kurtosis as the statistical property to be maximized. By performing a Johnson-Lindenstrauss-like projection of the data set, we find the minimum dimensionality reduction rate ρ, defined as the ratio between the size k of the reduced space and the original size d, which guarantees a narrow confidence interval for this estimator with a high confidence level. The derived dimensionality reduction rate depends on a system control parameter β that is easily computed a priori from the observations alone. Extensive simulations have been performed on different sets of real-world signals. They show that the achievable dimensionality reduction is in fact very high, that it preserves the quality of the decomposition, and that it dramatically speeds up FastICA. On the other hand, a set of signals for which the estimated reduction rate is greater than 1 exhibits poor decomposition results when reduced, thus validating the reliability of the parameter β. We are confident that our method will lead to a better approach to real-time applications.
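A sketch of the two ingredients the abstract combines, a Johnson-Lindenstrauss-style random projection and the kurtosis contrast (the sizes, the Gaussian projection matrix, and the estimator below are illustrative assumptions, not the paper's construction of ρ or β):

```python
import numpy as np

# Random projection to shrink the data before FastICA-style processing.
rng = np.random.default_rng(0)

d, k, n = 200, 40, 1000              # original dim, reduced dim, samples
X = rng.normal(size=(d, n))          # stand-in for the observed mixtures

R = rng.normal(size=(k, d)) / np.sqrt(k)   # JL projection, rate = k/d
Y = R @ X                                  # reduced data set

def excess_kurtosis(v):
    """Sample excess kurtosis, the contrast FastICA maximizes here."""
    v = (v - v.mean()) / v.std()
    return float(np.mean(v ** 4) - 3.0)

# JL projections approximately preserve norms and distances, which is why
# statistics computed in the reduced space remain usable.
dx = np.linalg.norm(X[:, 0] - X[:, 1])
dy = np.linalg.norm(Y[:, 0] - Y[:, 1])
print(round(dy / dx, 2), round(excess_kurtosis(Y[0]), 2))
```

The printed ratio stays close to 1, illustrating the distance-preservation property that justifies running the contrast maximization in the k-dimensional space.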
Abstract: The sol-gel method has been used to fabricate nanocomposite films, composed of halloysite clay mineral and nanocrystalline TiO2, on glass substrates. The synthesis involves a simple chemical route that utilizes a nonionic surfactant molecule as a pore-directing agent, together with an acetic acid-based sol-gel procedure carried out in the absence of water. Thermal treatment of the composite films at 450 °C ensures the elimination of organic material and leads to the formation of TiO2 nanoparticles on the surface of the halloysite nanotubes. Microscopy techniques and porosimetry methods were used to delineate the structural characteristics of the materials. The nanocomposite films produced have no cracks, and an active anatase crystal phase with small crystallite size was deposited on the halloysite nanotubes. The photocatalytic properties of the new materials were examined through the decomposition of the azo dye Basic Blue 41 in solution. These nanotechnology-based composite films show high efficiency for dye discoloration despite the varying halloysite quantities and the small amount of halloysite/TiO2 catalyst immobilized on the glass substrates. Moreover, we examined the modification of the halloysite/TiO2 films with silver particles in order to improve their photocatalytic properties. Indeed, the presence of silver nanoparticles enhances the discoloration rate of Basic Blue 41 compared with the efficiencies obtained for unmodified films.
Abstract: In this article two algorithms, one based on the variational iteration method and the other on Adomian's decomposition method, are developed to find the numerical solution of an initial value problem involving a nonlinear integro-differential equation in which R is a nonlinear operator containing partial derivatives with respect to x. Special cases of the integro-differential equation are solved using the algorithms. The numerical solutions are compared with analytical solutions. The results show that these two methods are efficient and accurate, requiring only two or three iterations.
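Adomian's method expands the solution as a series u = Σ uₙ and the nonlinearity as Adomian polynomials Aₙ. A minimal sketch on the toy IVP u' = -u², u(0) = 1 (exact solution 1/(1+t)); this toy equation is an assumed illustration, not the integro-differential problem treated in the article:

```python
from fractions import Fraction

def pmul(a, b):
    """Multiply two polynomials given as coefficient lists (low degree first)."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def pint(a):
    """Integrate a polynomial from 0 to t."""
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(a)]

def padd(a, b):
    n = max(len(a), len(b))
    a = a + [Fraction(0)] * (n - len(a))
    b = b + [Fraction(0)] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

# u_0 = initial condition; u_{n+1} = -∫ A_n dt, where for N(u) = u^2 the
# Adomian polynomials are A_n = sum_{i+j=n} u_i u_j.
terms = [[Fraction(1)]]                  # u_0(t) = 1
for n in range(5):
    An = [Fraction(0)]
    for i in range(n + 1):
        An = padd(An, pmul(terms[i], terms[n - i]))
    terms.append([-c for c in pint(An)])

series = [Fraction(0)]
for u in terms:
    series = padd(series, u)             # partial sum 1 - t + t^2 - ... - t^5

t = Fraction(1, 10)
approx = sum(c * t ** k for k, c in enumerate(series))
print(float(approx))                     # close to the exact 1/1.1 ≈ 0.9090909
```

With six terms the series already matches the exact solution to about 10⁻⁶ at t = 0.1, mirroring the abstract's observation that only a few iterations are needed.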
Abstract: The current speech interfaces in many military applications may be adequate for native speakers. However, the recognition rate drops considerably for non-native speakers (people with foreign accents). This is mainly because non-native speakers exhibit large temporal and intra-phoneme variations when they pronounce the same words. The problem is further complicated by the presence of strong environmental noise such as tank noise, helicopter noise, etc. In this paper, we propose a novel continuous acoustic feature adaptation algorithm for on-line accent and environmental adaptation. Implemented via incremental singular value decomposition (SVD), the algorithm captures local acoustic variation and runs in real time. This feature-based adaptation method is then integrated with the conventional model-based maximum likelihood linear regression (MLLR) algorithm. Extensive experiments have been performed on the NATO non-native speech corpus with a baseline acoustic model trained on native American English. The proposed feature-based adaptation algorithm improved the average recognition accuracy by 15%, while the MLLR model-based adaptation achieved an 11% improvement. The corresponding word error rate (WER) reductions were 25.8% and 2.73%, compared to the system without adaptation. The combined adaptation achieved an overall recognition accuracy improvement of 29.5% and a WER reduction of 31.8%, compared to the system without adaptation.
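A loose sketch of the SVD-on-features idea: track the dominant directions of a recent window of feature frames and express new frames in that local basis (window size, feature dimension, and the batch SVD here are assumptions; the paper uses an incremental SVD update rather than recomputation):

```python
import numpy as np

# Project incoming feature frames onto the top singular directions of a
# recent window to capture local acoustic variation.
rng = np.random.default_rng(11)

dim, win = 13, 50                      # e.g. 13 MFCC-like features (assumed)
frames = rng.normal(size=(win, dim))   # most recent feature frames

mean = frames.mean(axis=0)
U, s, Vt = np.linalg.svd(frames - mean, full_matrices=False)
basis = Vt[:3]                         # top-3 local acoustic directions

new_frame = rng.normal(size=dim)
adapted = basis @ (new_frame - mean)   # frame in the local low-rank basis
print(adapted.shape)                   # (3,)
```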
Abstract: Potassium monopersulfate has been decomposed in
aqueous solution in the presence of Co(II). The effect of the main
operating variables has been assessed. Even small variations in pH
exert a considerable influence on the process kinetics. Thus, when no
pH adjustment is made, the actual effect of variables such as the initial
monopersulfate and/or catalyst concentration may be masked. As
expected, temperature enhances the monopersulfate decomposition
rate, following the Arrhenius law; an activation energy of
approximately 85 kJ/mol has been obtained. Amongst the different
solids tested in the monopersulfate decomposition, only the
perovskite LaTi0.15Cu0.85O3 has shown a significant catalytic activity.
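A quick worked check of what the reported activation energy implies (the two temperatures are arbitrary illustrative choices):

```python
import math

# Arrhenius ratio k(T2)/k(T1) with Ea ≈ 85 kJ/mol, the value reported above.
R = 8.314                 # gas constant, J / (mol K)
Ea = 85_000.0             # activation energy, J / mol
T1, T2 = 293.15, 313.15   # 20 °C and 40 °C in kelvin

ratio = math.exp(-Ea / R * (1.0 / T2 - 1.0 / T1))
print(round(ratio, 1))    # ≈ 9.3: roughly a tenfold speed-up over 20 K
```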
Abstract: The model-based approach to user interface design relies on developing separate models that capture various aspects of users, tasks, the application domain, and the presentation and dialog representations. This paper presents a task modeling approach for user interface design and aims at exploring the mappings between the task, domain, and presentation models. The basic idea of our approach is to identify typical configurations in the task and domain models and to investigate how they relate to each other. Special emphasis is put on application-specific functions and on mappings between domain objects and operational task structures. In this respect, we distinguish between three layers in the task decomposition: a functional layer, a planning layer, and an operational layer.
Abstract: In this paper, we propose a Perceptually Optimized Embedded ZeroTree Image Coder (POEZIC) that applies perceptual weighting to the wavelet transform coefficients prior to SPIHT encoding, in order to reach a targeted bit rate with improved perceptual quality compared with the coding quality obtained using the SPIHT algorithm alone. The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS); this metric plays an important role in our POEZIC quality assessment. Our POEZIC coder is based on a vision model that incorporates various masking effects of HVS perception. Thus, our coder weights the wavelet coefficients according to that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) luminance masking and contrast masking, 2) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting, and 3) the wavelet error sensitivity (WES), used to reduce the perceptual quantization errors. The new perceptually optimized codec has the same complexity as the original SPIHT technique. However, the experimental results show that our coder achieves very good performance in terms of quality measurement.
Abstract: In this paper, we present a comparative study between two computer vision systems for object recognition and tracking. The algorithms describe two different approaches based on regions, i.e., sets of pixels that parameterize objects in shot sequences. For image segmentation and object detection, the FCM technique is used, and the overlap between cluster distributions is minimized by the use of a suitable color space (other than RGB). The first technique takes into account the a priori probabilities governing the computation of the various clusters in order to track objects. A Parzen kernel method is described that allows the players to be identified in each frame; we also show the importance of finding the standard deviation of the Gaussian probability density function. Region matching is carried out by an algorithm that operates on the Mahalanobis distance between region descriptors in two subsequent frames and uses singular value decomposition to compute a set of correspondences satisfying both the principle of proximity and the principle of exclusion.
Abstract: The join dependency provides the basis for obtaining a lossless join decomposition in a classical relational schema. The existence of a join dependency shows that the tables always represent the correct data after being joined. Since classical relational databases cannot handle imprecise data, they were extended to fuzzy relational databases so that uncertain, ambiguous, imprecise and partially known information can also be stored in a formal way. However, like classical databases, fuzzy relational databases also undergo decomposition during normalization, and the issue of joining the decomposed fuzzy relations remains open. Our effort in the present paper is to address this issue. We define fuzzy join dependency in the framework of type-1 and type-2 fuzzy relational databases using the concept of fuzzy equality, which is defined via fuzzy functions. We use the fuzzy equi-join operator to compute the fuzzy equality of two attribute values. We also discuss the dependency preservation property under this fuzzy equi-join and derive the necessary condition for fuzzy functional dependencies to be preserved when the decomposed fuzzy relations are joined. We further derive the conditions for a fuzzy join dependency to exist in the context of both type-1 and type-2 fuzzy relational databases. We find that, unlike in classical relational databases, even the existence of a trivial join dependency does not ensure a lossless join decomposition in type-2 fuzzy relational databases. Finally, we derive the conditions for the fuzzy equality to be nonzero and for an attribute to qualify as a fuzzy key.
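A toy illustration of a fuzzy equi-join: tuples match when a fuzzy equality degree on the join attribute exceeds a threshold (the linear resemblance function and the threshold are illustrative assumptions, not the paper's definitions of fuzzy equality or fuzzy functions):

```python
# Fuzzy equi-join of two tiny relations on an imprecise "age" attribute.
def fuzzy_eq(a, b, spread=5.0):
    """Fuzzy equality degree in [0, 1], decreasing linearly in |a - b|."""
    return max(0.0, 1.0 - abs(a - b) / spread)

r1 = [("p1", 20.0), ("p2", 31.0)]           # (id, approximate age)
r2 = [(19.0, "engineer"), (40.0, "clerk")]  # (approximate age, job)

threshold = 0.5
joined = [(pid, job, fuzzy_eq(a1, a2))
          for pid, a1 in r1
          for a2, job in r2
          if fuzzy_eq(a1, a2) >= threshold]
print(joined)   # the single matching pair ('p1', 'engineer') with degree 0.8
```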
Abstract: In this paper we present a method for gene ranking
from DNA microarray data. More precisely, from microarray data of
cervical cancer we calculate correlation networks, which are
unweighted, undirected graphs in which each network represents a
tissue of a certain tumor stage and each node represents a gene. From
these networks we extract one tree for each gene by a local
decomposition of the correlation network. The interpretation of a tree
is that its n-th level contains the n-nearest-neighbor genes, measured
by the Dijkstra distance, and it hence gives the local embedding of a
gene within the correlation network. For the obtained trees we
measure the pairwise similarity
between trees rooted by the same gene from normal to cancerous
tissues. This evaluates the modification of the tree topology due to
progression of the tumor. Finally, we rank the obtained similarity
values from all tissue comparisons and select the top ranked genes.
For these genes the local neighborhood in the correlation networks
changes most between normal and cancerous tissues. We find that the
top-ranked genes are candidates suspected to be involved in tumor
growth, which indicates that our method captures essential
information from the underlying DNA microarray data of cervical
cancer.
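The local decomposition described above can be sketched with a breadth-first search, which on an unweighted graph gives exactly the Dijkstra distance levels (the toy graph is an assumption):

```python
from collections import deque

# Levels of the tree rooted at one gene: level n holds its n-nearest
# neighbors in the unweighted correlation network.
edges = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}

def neighbor_levels(graph, root):
    """Return {distance: [nodes]} — the levels of the tree rooted at root."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    levels = {}
    for node, d in dist.items():
        levels.setdefault(d, []).append(node)
    return levels

print(neighbor_levels(edges, 0))   # {0: [0], 1: [1, 2], 2: [3], 3: [4]}
```

Comparing such level structures for the same root gene across tissue networks is what the ranking step above measures.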
Abstract: In this research, an aerobic composting method is studied with the aim of reusing organic waste from a rubber factory as soil fertilizer, and the effect of a cellulolytic microbial activator (CMA) on the composting of the rubber factory waste is examined. The performance of the composting process was monitored in terms of the carbon and organic matter decomposition rates, temperature and moisture content. The results indicate that the rubber factory waste is better composted together with water hyacinth and sludge than composted alone. In addition, the CMA is more effective when mixed with the rubber factory waste, water hyacinth and sludge, since a good fertilizer is then achieved. When CMA is added to rubber factory waste composted alone, the finished product does not meet the fertilizer standard, especially in terms of the C/N ratio. Finally, the finished products of composting rubber factory waste with water hyacinth and sludge (both with and without CMA) can be an environmentally friendly alternative for solving the disposal problems of rubber factory waste, since the C/N ratio, pH, moisture content, temperature, and nutrients of the finished products are acceptable for agricultural use.
Abstract: Electrocardiogram (ECG) is considered to be the
backbone of cardiology. ECG is composed of P, QRS & T waves and
information related to cardiac diseases can be extracted from the
intervals and amplitudes of these waves. The first step in extracting
ECG features starts from the accurate detection of R peaks in the
QRS complex. We have developed a robust R wave detector using
wavelets from the Daubechies and symlet families. The method
requires no preprocessing and therefore needs only correct ECG
recordings to perform the detection. The data have been taken from
the MIT-BIH arrhythmia database, and the signals from Lead II have
been analyzed. MATLAB 7.0 has been used to develop the
algorithm. The ECG signal under
test has been decomposed to the required level using the selected
wavelet and the selection of detail coefficient d4 has been done based
on energy, frequency and cross-correlation analysis of decomposition
structure of ECG signal. The robustness of the method is apparent
from the obtained results.
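A minimal sketch of wavelet-detail-based R-peak detection, using a Haar decomposition as a stand-in for the Daubechies/symlet filters above (the synthetic "ECG", the level used, and the MAD threshold are illustrative assumptions):

```python
import numpy as np

# Detect tall, narrow spikes via a thresholded wavelet detail band.
rng = np.random.default_rng(1)

sig = 0.05 * rng.normal(size=1024)            # baseline noise
r_true = np.array([200, 450, 700, 950])       # R-peak sample positions
sig[r_true] += 2.0                            # tall, narrow R spikes

def haar_step(x):
    """One DWT level: (approximation, detail), each half length."""
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

a = sig.copy()
for _ in range(3):
    a, d = haar_step(a)                       # keep the level-3 detail band

# Robust threshold on the detail band, then map back to sample indices.
mad = np.median(np.abs(d - np.median(d)))
peaks = np.where(np.abs(d) > 6 * 1.4826 * mad)[0] * 2 ** 3
print(peaks)                                  # within 8 samples of r_true
```

Each detected index is accurate to within one level-3 support (8 samples), after which a local maximum search in the raw signal would pin down the exact R peak.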
Abstract: This paper presents a new method of analog fault diagnosis based on back-propagation neural networks (BPNNs) that uses wavelet decomposition and the fractal dimension as preprocessors. The proposed method has the capability to detect and identify faulty components in an analog electronic circuit with tolerance by analyzing its impulse response. Using wavelet decomposition to preprocess the impulse response drastically de-noises the inputs to the neural network. The second preprocessing step, based on the fractal dimension, extracts unique features, which are then fed to a neural network as inputs for further classification. A comparison of our work with [1] and [6], which also employ back-propagation (BP) neural networks, reveals that our system requires a much smaller network and performs significantly better in fault diagnosis of analog circuits, thanks to the proposed preprocessing techniques.
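A sketch of the first preprocessing stage, wavelet de-noising of a noisy impulse response via soft-thresholded detail coefficients (the one-level Haar transform, toy impulse response, and threshold are assumptions; the paper's filters and depth may differ):

```python
import numpy as np

# One-level Haar decomposition + soft thresholding of the detail band.
rng = np.random.default_rng(4)

t = np.arange(256)
clean = np.exp(-t / 30.0) * np.sin(t / 5.0)     # toy impulse response
noisy = clean + 0.1 * rng.normal(size=t.size)

a = (noisy[0::2] + noisy[1::2]) / np.sqrt(2)    # approximation
d = (noisy[0::2] - noisy[1::2]) / np.sqrt(2)    # detail

thr = 0.1 * np.sqrt(2 * np.log(d.size))          # universal threshold (known sigma)
d = np.sign(d) * np.maximum(np.abs(d) - thr, 0)  # soft thresholding

rec = np.empty_like(noisy)                       # inverse Haar step
rec[0::2] = (a + d) / np.sqrt(2)
rec[1::2] = (a - d) / np.sqrt(2)

print(np.mean((rec - clean) ** 2) < np.mean((noisy - clean) ** 2))  # True
```

The detail band carries mostly noise for a smooth impulse response, so zeroing its small coefficients lowers the mean squared error before feature extraction.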
Abstract: In this paper, a new technique for fast painting with
different colors is presented. The idea of painting relies on applying
masks with different colors to the background. Fast painting is
achieved by applying these masks in the frequency domain instead of
spatial (time) domain. New colors can be generated automatically as a
result from the cross correlation operation. This idea was applied
successfully for faster specific data (face, object, pattern, and code)
detection using neural algorithms. Here, instead of performing cross
correlation between the input data (e.g., an image or a stream of
sequential data) and the weights of neural networks, the cross
correlation is performed between the colored masks and the
background. Furthermore, this approach is developed to reduce the
computation steps required by the painting operation. The principle of
the divide-and-conquer strategy is applied through background
decomposition: each background is divided into small sub-backgrounds,
and each sub-background is then processed separately
using a single fast painting algorithm. Moreover, the fastest painting
is achieved by using parallel processing techniques to paint the
resulting sub-backgrounds with the same number of fast painting
algorithms. Compared with using the fast painting algorithm alone,
the speed-up ratio increases with the size of the background when the
fast painting algorithm is combined with background decomposition. Simulation
results show that painting in the frequency domain is faster than that in
the spatial domain.
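The frequency-domain cross-correlation at the heart of the approach can be sketched as follows (mask and background sizes are illustrative; the convolution theorem does the work):

```python
import numpy as np

# Cross-correlate a mask with a background via the FFT:
# corr = IFFT( FFT(bg) * conj(FFT(mask_padded)) )
rng = np.random.default_rng(7)

bg = rng.random((64, 64))        # background image
mask = rng.random((8, 8))        # color mask (stand-in)

pm = np.zeros_like(bg)           # zero-pad the mask to the background size
pm[:8, :8] = mask

freq = np.fft.rfft2(bg) * np.conj(np.fft.rfft2(pm))
corr = np.fft.irfft2(freq, s=bg.shape)

# Spot-check one shift against the direct spatial-domain definition.
direct = np.sum(bg[10:18, 20:28] * mask)
print(np.isclose(corr[10, 20], direct))   # True
```

For an N×N background the FFT route costs O(N² log N) per mask instead of O(N² M²) for direct correlation with an M×M mask, which is the source of the speed-up the abstract reports.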
Abstract: Artificial Neural Network (ANN) has been
extensively used for classification of heart sounds for its
discriminative training ability and easy implementation. However, it
suffers from overparameterization if the number of nodes is not
chosen properly. In such cases, when the dataset has redundancy
within it, ANN is trained along with this redundant information that
results in poor validation. A larger network also means greater
computational expense, resulting in higher hardware and time-related
costs. Therefore, an optimum neural network design is needed
for real-time detection of pathological patterns, if any, from the heart
sound signal. The aims of this work are to (i) select a set of input
features that are effective for identification of heart sound signals and
(ii) make certain optimum selection of nodes in the hidden layer for a
more effective ANN structure. Here, we present an optimization
technique that involves Singular Value Decomposition (SVD) and
QR factorization with column pivoting (QRcp) methodology to
optimize an empirically chosen over-parameterized ANN structure.
The input nodes of the ANN structure are optimized by SVD followed
by QRcp, while SVD alone is sufficient to prune undesirable hidden
nodes. Results are presented for classifying 12 common
pathological cases and normal heart sounds.
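A minimal sketch of the SVD step used for pruning hidden nodes: the singular values of the hidden-layer activation matrix reveal how many nodes carry independent information (network sizes and the rank threshold below are assumptions; the QRcp stage for selecting which inputs to keep is omitted):

```python
import numpy as np

# Estimate the effective number of hidden nodes from activation SVD.
rng = np.random.default_rng(5)

n_samples, n_hidden, n_eff = 500, 20, 6
# Activations that really live in a 6-dimensional subspace plus small noise:
H = rng.normal(size=(n_samples, n_eff)) @ rng.normal(size=(n_eff, n_hidden))
H += 0.01 * rng.normal(size=H.shape)

s = np.linalg.svd(H, compute_uv=False)
rank = int(np.sum(s > 0.01 * s[0]))      # threshold relative to largest value
print(rank)                              # → 6: keep 6 of the 20 hidden nodes
```

The sharp drop in singular values past the effective rank is what licenses pruning the remaining nodes without hurting validation performance.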
Abstract: In this paper, a hybrid FDMA-TDMA access technique operating in a cooperative, distributed fashion is analyzed by introducing and implementing a modified version of the protocol in [1], termed the Power and Cooperation Diversity Gain Protocol (PCDGP). The wireless network consists of two user terminals, two relays, and a destination terminal equipped with two antennas. The relays operate in amplify-and-forward (AF) mode with a fixed gain. Two operating modes, a cooperation-gain mode and a power-gain mode, are exploited from the source terminals to the relays, which work in a best-channel-selection scheme. Vertical BLAST (Bell Laboratories Layered Space-Time), or V-BLAST, with minimum mean square error (MMSE) nulling is used at the relays to detect the joint signals from the multiple source terminals. The performance is analyzed using the binary phase shift keying (BPSK) modulation scheme and investigated over independent and identically distributed (i.i.d.) Rayleigh, Ricean-K and Nakagami-m fading environments. Simulation results show that the proposed scheme can provide better signal quality for uplink users in a cooperative communication system using the hybrid FDMA-TDMA technique.
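The baseline building block of such an evaluation, BPSK over flat Rayleigh fading with coherent detection, can be sketched with a quick Monte-Carlo run (SNR value and sample count are illustrative; this is the single-link baseline, not the PCDGP relay protocol):

```python
import numpy as np

# Monte-Carlo BER of BPSK over i.i.d. flat Rayleigh fading, vs. theory.
rng = np.random.default_rng(3)

n = 200_000
snr_db = 10.0
snr = 10 ** (snr_db / 10)

bits = rng.integers(0, 2, n)
s = 2 * bits - 1                                                    # BPSK symbols
h = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)     # Rayleigh gains
w = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2 * snr)  # AWGN
y = h * s + w

det = (np.real(np.conj(h) * y) > 0).astype(int)   # coherent detection
ber = float(np.mean(det != bits))

# Closed form for BPSK over Rayleigh with average SNR gamma:
theory = 0.5 * (1 - np.sqrt(snr / (1 + snr)))
print(round(ber, 4), round(theory, 4))            # both ≈ 0.023 at 10 dB
```

Cooperative schemes such as the one analyzed above aim to steepen this error curve by adding diversity branches through the relays.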