Abstract: Entropy is a key measure in information theory and its many applications. Campbell first recognized that the exponential of Shannon's entropy equals the size of the sample space when the distribution is uniform. This motivates the study of exponentials of Shannon's entropy, and of those entropy generalizations that involve a logarithmic function, for a general probability distribution. In this paper, we introduce a measure of a sample space, called the 'entropic measure of a sample space', with respect to the underlying distribution. It is shown, in both the discrete and continuous cases, that this new measure depends on the parameters of the distribution on the sample space: the same sample space has different 'entropic measures' depending on the distributions defined on it. We note that Campbell's idea also applies to Rényi's parametric entropy of a given order. Since parameters provide flexibility and extended applications, the paper also studies parametric entropic measures of sample spaces. Exponential entropies related to Shannon's entropy and to those generalizations involving logarithmic functions, i.e. the additive ones, have been studied for wider understanding and application. We propose and study exponential entropies corresponding to non-additive entropies of type (α, β), which include the Havrda-Charvát entropy as a special case.
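Campbell's observation can be checked numerically: for a uniform distribution over n outcomes, the exponential of either Shannon's or Rényi's entropy recovers n. The sketch below is illustrative only; the function names are ours, not the paper's.

```python
import math

def shannon_entropy(p):
    """Shannon entropy H(p) in nats, ignoring zero-probability outcomes."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def renyi_entropy(p, alpha):
    """Renyi entropy of order alpha (alpha > 0, alpha != 1), in nats."""
    return math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

# For a uniform distribution over n outcomes, exp(H) is the sample-space size n.
n = 8
uniform = [1.0 / n] * n
print(math.exp(shannon_entropy(uniform)))     # 8.0 (up to rounding)
print(math.exp(renyi_entropy(uniform, 2.0)))  # also 8.0 for a uniform p

# For a non-uniform distribution the exponential entropy is smaller,
# acting as an "effective" sample-space size under that distribution.
skewed = [0.7, 0.1, 0.1, 0.05, 0.05]
print(math.exp(shannon_entropy(skewed)))
```

This makes concrete why the entropic measure depends on the distribution and not only on the sample space itself.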
Abstract: Cubic equations of state (EoS), popular due to their simple mathematical form, ease of use, semi-theoretical nature, and reasonable accuracy, are normally fitted to vapor-liquid equilibrium P-v-T data. As a result, they often show poor accuracy in the region near and above the critical point. In this study, the performance of the renowned Peng-Robinson (PR) and Patel-Teja (PT) EoSs around the critical region is examined against the P-v-T data of water. Both display large deviations at the critical point. For instance, the PR EoS exhibits discrepancies as high as 47% for the specific volume, 28% for the enthalpy departure, and 43% for the entropy departure at the critical point. It is shown that incorporating P-v-T data of the supercritical region into the retuning of a cubic EoS can dramatically improve its performance at and above the critical point. Adopting a retuned acentric factor of 0.5491 instead of its original value of 0.344 for water in the PR EoS, and a new F of 0.8854 instead of its original value of 0.6898 for water in the PT EoS, reduces the discrepancies to about one third or less.
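The role of the retuned acentric factor can be seen directly in the standard PR EoS formulas, where ω enters only through the κ coefficient of the α(T) function. The sketch below uses the textbook PR expressions and standard tabulated critical constants for water; the chosen supercritical state (T, v) is arbitrary and for illustration only, not a result from the paper.

```python
import math

R = 8.314462618  # universal gas constant, J/(mol K)

def pr_pressure(T, v, Tc, Pc, omega):
    """Peng-Robinson pressure (Pa) at temperature T (K) and molar volume v (m^3/mol)."""
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc))) ** 2
    return R * T / (v - b) - a * alpha / (v**2 + 2.0 * b * v - b**2)

# Water critical constants (standard tabulated values)
Tc, Pc = 647.096, 22.064e6

# Same supercritical state evaluated with the original acentric factor
# and with the retuned value quoted in the abstract.
T, v = 700.0, 1.0e-4  # an arbitrary supercritical state, for illustration
p_std = pr_pressure(T, v, Tc, Pc, omega=0.344)
p_ret = pr_pressure(T, v, Tc, Pc, omega=0.5491)
print(p_std, p_ret)  # the retuned omega visibly shifts the predicted pressure
```

Since the retuned ω only modifies α(T), vapor-liquid behavior well below the critical point is affected as well, which is the trade-off the study's refitting against supercritical data addresses.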
Abstract: This paper proposes a method of remotely controlling robots with arm gestures using surface electromyography (EMG) and accelerometer sensors attached to the operator's wrists. The EMG and accelerometer sensors capture signals from the operator's arm gestures, from which the corresponding movements are inferred to execute the commands controlling the robot. The movements of the robot include moving forward and backward and turning left and right. The accuracy is over 99%, and movements can be controlled in real time.
Abstract: The maximum entropy principle in spectral analysis has been used as an estimator of the Direction of Arrival (DoA) of electromagnetic or acoustic sources impinging on an array of sensors. Indeed, the maximum entropy operator is very efficient when the signals of the radiating sources are ergodic, complex, zero-mean random processes, which is the case for cosmic sources. In this paper, we present a basic review of the maximum entropy method (MEM), which consists of a rank-one operator that is not a projector, and we elaborate a new operator that is full rank and is the sum of all possible projectors. Two-dimensional simulation results based on Monte Carlo trials demonstrate the resolution power of the new operator where the MEM exhibits erroneous fluctuations.
Abstract: Discrete search path planning in a time-constrained, uncertain environment relying upon imperfect sensors is known to be hard, and the problem-solving techniques proposed so far to compute near real-time efficient path plans mainly provide solutions spanning only a few moves. A new information-theoretic open-loop decision model that explicitly incorporates false-alarm sensor readings is presented to solve a single-agent military logistics search-and-delivery path planning problem with anticipated feedback. The decision model consists of minimizing expected entropy over a given time horizon, considering anticipated possible observation outcomes. The model captures the uncertainty associated with observation events for all possible scenarios. Entropy represents a measure of uncertainty about the searched target's location. Feedback information resulting from possible sensor observation outcomes along the projected path plan is exploited to update anticipated unit target occupancy beliefs. For the first time, a compact belief update formulation is generalized to explicitly include false-positive observation events that may occur during plan execution. A novel genetic algorithm is then proposed to efficiently solve search path planning, providing near-optimal solutions for practical, realistic problem instances. Given the run-time performance of the algorithm, a natural extension to a closed-loop environment that progressively integrates real visit outcomes on a rolling time horizon can easily be envisioned. Computational results show the value of the approach in comparison to alternative heuristics.
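The core idea of updating target-occupancy beliefs with an imperfect sensor that can both miss and false-alarm is a standard Bayes update. The paper's own compact formulation is not reproduced here; the sketch below is a minimal generic version, with our own names for the detection probability `pd` and false-alarm probability `pf`.

```python
import math

def belief_update(belief, cell, detected, pd, pf):
    """Bayesian update of target-occupancy beliefs after searching `cell`.

    belief : list of prior probabilities that the target is in each cell
    pd     : sensor detection probability (true-positive rate)
    pf     : sensor false-alarm probability (false-positive rate)
    """
    posterior = []
    for i, b in enumerate(belief):
        if i == cell:
            like = pd if detected else (1.0 - pd)
        else:
            like = pf if detected else (1.0 - pf)
        posterior.append(like * b)
    z = sum(posterior)  # normalizing constant
    return [p / z for p in posterior]

def entropy(belief):
    """Uncertainty (nats) about the target location."""
    return -sum(b * math.log(b) for b in belief if b > 0)

prior = [0.25, 0.25, 0.25, 0.25]
post = belief_update(prior, cell=0, detected=False, pd=0.9, pf=0.1)
print(post, entropy(post))  # mass shifts away from the searched cell
```

An expected-entropy planner of the kind the abstract describes would average this entropy over both possible outcomes (detect / no detect) for each candidate next cell, weighted by their predicted probabilities, and pick the move minimizing that expectation.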
Abstract: The increasing popularity of multimedia applications, especially in image processing, places a great demand on efficient data storage and transmission techniques. Network communications such as wireless networks can easily be intercepted, causing confidential information to be leaked. Unfortunately, conventional compression and encryption methods are too slow to carry out real-time secure image processing. In this research, the Embedded Zerotree Wavelet (EZW) encoder, which is specially designed for wavelet compression, is examined. Based on this algorithm, three methods are proposed that reduce processing time and space while providing security protection strong enough to protect the data.
Abstract: This paper uses a fuzzy Kohonen neural network for medical image segmentation. Image segmentation plays an important role in many medical imaging applications by automating or facilitating diagnosis. The paper analyzes tumors by extracting features (area, entropy, mean, and standard deviation); these measurements give a description of the tumor.
Abstract: Analysis of heart rate variability (HRV) has become a popular non-invasive tool for assessing the activities of the autonomic nervous system. Most of the methods were borrowed from techniques used for time series analysis; currently used methods are time-domain, frequency-domain, geometrical, and fractal methods. A new technique, which searches for pattern repeatability in a time series, is proposed for quantifying heart rate (HR) time series. This set of indices, termed the pattern repeatability measure and the pattern repeatability ratio, is able to distinguish HR data clearly from noise and electroencephalogram (EEG) data. The results of analysis using these measures give an insight into the fundamental difference between the composition of HR time series and that of EEG and noise.
Abstract: Deep Brain Stimulation (DBS) is a surgical treatment for Parkinson's disease with three stimulation parameters: frequency, pulse width, and voltage. The parameters should be selected appropriately to achieve effective treatment; this selection is currently performed clinically. The aim of this research is to study the chaotic behavior of the recorded tremor of patients under DBS in order to present a computational method for recognizing the optimum stimulation voltage. We obtained some chaotic features of the tremor signal and discovered that its embedding space has an attractor and that its largest Lyapunov exponent is positive, which shows that the tremor signal has chaotic behavior. We also found that, at the optimal voltage, the entropy and the embedding-space variance of the tremor signal have minimum values in comparison with other voltages. These differences can help neurologists recognize the optimal voltage numerically, which reduces the patient's role and discomfort in optimizing the stimulation parameters and allows treatment with high accuracy.
Abstract: Decision tree algorithms hold a very important place among the classification models of data mining. In the literature, these algorithms use the entropy concept or the Gini index to form the tree. The shape of the classes and their closeness to each other are some of the factors that affect the performance of the algorithm. In this paper we introduce a new decision tree algorithm which employs a data (attribute) folding method and the variation of the class variables over the branches to be created. A comparative performance analysis has been carried out between the proposed algorithm and C4.5.
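The two split criteria named above (entropy, as used by C4.5 via information gain, and the Gini index, as used by CART) can be stated in a few lines. This is a generic sketch of those standard criteria, not of the paper's proposed folding algorithm.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a sample of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    """Gini impurity of a sample of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def information_gain(parent, children):
    """Entropy reduction achieved by splitting `parent` into `children`."""
    n = len(parent)
    return entropy(parent) - sum(len(ch) / n * entropy(ch) for ch in children)

parent = ['a', 'a', 'b', 'b']
children = [['a', 'a'], ['b', 'b']]
print(entropy(parent))                     # 1.0 bit
print(gini(parent))                        # 0.5
print(information_gain(parent, children))  # 1.0: a perfectly pure split
```

A tree builder evaluates every candidate split with one of these criteria and recurses on the branch that maximizes the gain (or minimizes the impurity).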
Abstract: Determining the depth of anesthesia is a challenging problem in the context of biomedical signal processing. Various methods have been suggested to determine a quantitative index of the depth of anesthesia, but most of these methods suffer from high sensitivity during surgery. A novel method based on the energy scattering of samples in the wavelet domain is suggested to represent the basic content of the electroencephalogram (EEG) signal. In this method, the EEG signal is first decomposed into different sub-bands; the samples are then squared and the energy of the sample sequence is constructed across each scale and time, then normalized, and finally the entropy of the resulting sequences is proposed as a reliable index. Empirical results show that applying the proposed method to EEG signals can classify the awake, moderate, and deep anesthesia states similarly to BIS.
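The pipeline described above (sub-band decomposition, squaring, normalization, entropy) can be sketched with the simplest wavelet, the Haar. The paper's exact wavelet and index construction are not specified in the abstract, so this is a minimal illustrative version under that assumption.

```python
import math

def haar_decompose(x, levels):
    """Simple Haar decomposition: returns the detail sub-band of each level
    plus the final approximation (len(x) must be divisible by 2**levels)."""
    subbands, approx = [], list(x)
    for _ in range(levels):
        avg = [(approx[i] + approx[i + 1]) / 2.0 for i in range(0, len(approx), 2)]
        det = [(approx[i] - approx[i + 1]) / 2.0 for i in range(0, len(approx), 2)]
        subbands.append(det)
        approx = avg
    subbands.append(approx)
    return subbands

def energy_entropy(subband):
    """Entropy (nats) of the normalized squared-sample (energy) sequence."""
    e = [s * s for s in subband]
    total = sum(e)
    if total == 0:
        return 0.0
    p = [ei / total for ei in e]
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# Toy "EEG" segment: one entropy value per sub-band forms the index vector.
signal = [0.1, 0.4, -0.3, 0.8, 0.5, -0.2, 0.05, 0.6]
index = [energy_entropy(sb) for sb in haar_decompose(signal, 2)]
print(index)
```

A flat (high-entropy) energy distribution indicates scattered activity within a sub-band, while a peaked (low-entropy) one indicates concentrated bursts, which is the contrast such an index exploits across anesthesia states.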
Abstract: The purpose of this paper is to assess the value of neural networks for the classification of cancerous and noncancerous prostate cells. Gauss-Markov random field, Fourier entropy, and wavelet average deviation features are calculated from 80 noncancerous and 80 cancerous prostate cell nuclei. For classification, the artificial neural network techniques of multilayer perceptron, radial basis function, and learning vector quantization are used. Two configurations are utilized for the multilayer perceptron: the first has a single hidden layer with between 3 and 15 nodes; the second has two hidden layers, each with between 3 and 15 nodes. An overall classification rate of 86.88% is achieved.
Abstract: In H.264/AVC video encoding, rate-distortion optimization for mode selection plays a significant role in achieving outstanding performance in compression efficiency and video quality. However, this mode selection process also makes the encoding process extremely complex, especially the computation of the rate-distortion cost function, which includes the computation of the sum of squared differences (SSD) between the original and reconstructed image blocks and the context-based entropy coding of the block. In this paper, a transform-domain rate-distortion optimization accelerator based on fast SSD (FSSD) and a VLC-based rate estimation algorithm is proposed. This algorithm significantly simplifies the hardware architecture for the rate-distortion cost computation with only negligible performance degradation. An efficient hardware structure for implementing the proposed transform-domain rate-distortion optimization accelerator is also proposed. Simulation results demonstrate that the proposed algorithm reduces total encoding time by about 47% with negligible degradation of coding performance. The proposed method can easily be applied to many mobile video application areas such as digital cameras and DMB (Digital Multimedia Broadcasting) phones.
Abstract: This paper proposes a method for diagnosing ball screw preload loss through the Hilbert-Huang Transform (HHT) and the multiscale entropy (MSE) process. The proposed method can diagnose ball screw preload loss from vibration signals while the machine tool is in operation. Ball screws with maximum dynamic preloads of 2%, 4%, and 6% were predesigned, manufactured, and tested experimentally. Signal patterns are discussed and revealed using Empirical Mode Decomposition (EMD) with the Hilbert spectrum. Different preload features are extracted and discriminated using the HHT. The irregularity development of a ball screw with preload loss is determined and abstracted using MSE, based on complexity perception. Experimental results show that the proposed method can predict the status of ball screw preload loss. Smart sensing of the health of the ball screw is also possible, based on a comparative evaluation of MSE through the signal processing and pattern matching of EMD/HHT. This diagnostic method achieves prognostic effectiveness in detecting preload loss and is convenient to use.
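Multiscale entropy as used above is, in its standard form, sample entropy computed on coarse-grained versions of the signal. The following is a minimal O(n²) sketch of that standard definition (not the paper's specific processing chain); the toy signal and the parameter choices m=2, r=0.2 are illustrative assumptions.

```python
import math
import random

def coarse_grain(x, scale):
    """Non-overlapping window averages of x at the given scale."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) with Chebyshev-distance tolerance r."""
    n = len(x)
    def count(mm):
        # same number of templates for lengths m and m+1, as in SampEn
        templates = [x[i:i + mm] for i in range(n - m)]
        c = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    c += 1
        return c
    b, a = count(m), count(m + 1)
    return float('inf') if a == 0 or b == 0 else -math.log(a / b)

def multiscale_entropy(x, scales=(1, 2, 3)):
    return [sample_entropy(coarse_grain(x, s)) for s in scales]

# Toy vibration-like signal; a real analysis would use measured data.
random.seed(0)
signal = [math.sin(0.3 * t) + 0.1 * random.gauss(0, 1) for t in range(300)]
print(multiscale_entropy(signal))
```

The diagnostic signal here is the shape of the entropy-versus-scale curve: loss of preload changes the signal's complexity, which shifts that curve relative to a healthy baseline.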
Abstract: This paper proposes a neural network weight and topology optimization using genetic evolution and the backpropagation training algorithm. The proposed crossover and mutation operators aim to adapt the network architectures and weights during the evolution process. Through a specific inheritance procedure, the weights are transmitted from the parents to their offspring, which allows re-exploitation of the already trained networks and hence accelerates the global convergence of the algorithm. In the preprocessing phase, a new feature extraction method is proposed based on Legendre moments with the maximum entropy principle (MEP) as a selection criterion. This allows a global reduction of the search space in the design of the networks. The proposed method has been applied and tested on the well-known MNIST database of handwritten digits.
Abstract: We develop a three-step fuzzy logic-based algorithm for clustering categorical attributes, and we apply it to analyze cultural data. In the first step the algorithm employs an entropy-based clustering scheme, which initializes the cluster centers. In the second step we apply the fuzzy c-modes algorithm to obtain a fuzzy partition of the data set, and the third step introduces a novel cluster validity index, which decides the final number of clusters.
Abstract: Stochastic comparison has been an important direction of research in various areas. It can be carried out using the notion of stochastic ordering, which gives a qualitative rather than purely quantitative estimation of the system under study. In this paper we present applications of entropy-related, comparison-based uncertainty in reliability analysis, for example to design better systems. These results can be used as a priori information in simulation studies.
Abstract: There are three possible effects of the Special Theory of Relativity (STR) on a thermodynamic system. Planck and Einstein looked upon this process as isobaric; Ott, on the other hand, saw it as an adiabatic process. However, plenty of logical reasons show that the process is isothermal. Our phenomenological consideration demonstrates that the temperature is invariant under the Lorentz transformation. In that case the process is isothermal, so volume and pressure are Lorentz covariant. If the process is isothermal, Boyle's law is Lorentz invariant. The equilibrium constant and the Gibbs energy, activation energy, enthalpy, entropy, and extent of the reaction also become Lorentz invariant.
Abstract: A parametric study of a mixed-compression supersonic inlet is performed and reported. The effects of inlet Mach numbers, varying from 4 to 10, and angles of attack, varying from 0 to 10, are reported for a constant inlet dynamic pressure. The paper examines the variations of the mass flow rate through the inlet, the gain in entropy through the inlet, and the angles of the external oblique shocks. The mass flow rates were found to decrease monotonically with Mach number and to increase with angle of attack. On the other hand, the entropy gain through the inlet increased with increasing Mach number and angle of attack. The variation in static pressure was found to be identical from the inlet throat to the exit for Mach number values higher than 6.
Abstract: A perfect secret-sharing scheme is a method to distribute a secret among a set of participants in such a way that only qualified subsets of participants can recover the secret, while the joint share of the participants in any unqualified subset is statistically independent of the secret. The collection of all qualified subsets is called the access structure of the perfect secret-sharing scheme. In a graph-based access structure, each vertex of a graph G represents a participant and each edge of G represents a minimal qualified subset. The average information ratio of a perfect secret-sharing scheme realizing the access structure based on G is defined as AR = (Σ_{v∈V(G)} H(v)) / (|V(G)| · H(s)), where s is the secret and v is the share of participant v, both regarded as random variables derived from the scheme, and H is the Shannon entropy. The infimum of the average information ratio over all possible perfect secret-sharing schemes realizing a given access structure is called the optimal average information ratio of that access structure. Most known results about the optimal average information ratio give upper or lower bounds on it. In this paper, we study access structures based on bipartite graphs and determine the exact values of the optimal average information ratio for some infinite classes of them.
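The definition of the average information ratio can be illustrated on the smallest graph-based access structure, a single edge {u, v}, using the classic XOR scheme. This toy example only exercises the definition; it says nothing about the bipartite-graph bounds the paper establishes.

```python
import math
from collections import Counter
from itertools import product

def H(samples):
    """Shannon entropy (bits) of the empirical distribution of a sample list."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

# Perfect scheme for the single-edge graph on {u, v}: the only qualified
# subset is {u, v}.  With a uniform secret bit s and an independent uniform
# random bit r, give u the share r and v the share r XOR s.  Enumerating all
# (s, r) pairs yields the exact share distributions.
secrets, shares_u, shares_v = [], [], []
for s, r in product([0, 1], repeat=2):
    secrets.append(s)
    shares_u.append(r)
    shares_v.append(r ^ s)

# AR = (sum over vertices of H(share)) / (|V(G)| * H(s))
ar = (H(shares_u) + H(shares_v)) / (2 * H(secrets))
print(ar)  # 1.0 -- each share carries exactly as much entropy as the secret
```

Each share alone is uniform and independent of s (perfectness), yet together the two shares recover s; AR = 1 is the smallest possible value, and the nontrivial graphs studied in the paper force larger ratios.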