Abstract: In an era of knowledge explosion, the volume of data grows rapidly day by day. Since data storage is a limited resource, reducing the space that data occupies becomes a challenging issue. Data compression provides a good solution that can lower the required storage space. Data mining has found many useful applications in recent years because it can help users discover interesting knowledge in large databases. However, existing compression algorithms are not appropriate for data mining. In [1, 2], two different approaches were proposed to compress databases before performing the data mining process. However, both lack the ability to decompress the data to its original state and to improve data mining performance. In this research, a new approach called Mining Merged Transactions with the Quantification Table (M2TQT) is proposed to solve these problems. M2TQT uses the relationships among transactions to merge related transactions and builds a quantification table to prune the candidate itemsets that cannot become frequent, in order to improve the performance of mining association rules. Experiments show that M2TQT outperforms existing approaches.
Abstract: In Orthogonal Frequency Division Multiplexing (OFDM) systems, the peak-to-average power ratio (PAR) is very high. Clipping the signal is a useful method to reduce the PAR; however, clipping the OFDM signal increases the overall noise level by introducing clipping noise. To reduce the clipping noise, it is necessary to recover the shape of the original signal at the receiver. Considering the continuity of the signal and the shape of the peak, we fit a conic function curve to replace the clipped portion of the signal within the clipping interval. Simulation results show that the proposed scheme can reduce the system's bit-error rate (BER) by a factor of 10 when the signal-to-interference-and-noise ratio (SINR) equals 12 dB, and that the BER performance of the proposed scheme is also superior to that of Kim's scheme.
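As a hedged illustration of the clipping operation discussed above (the conic-replacement step at the receiver is specific to the paper and is not reproduced here), the following Python sketch generates a baseband OFDM symbol, measures its PAR, and amplitude-clips it at a chosen threshold. The subcarrier count and clipping ratio are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip(x, clip_ratio_db):
    """Amplitude-clip x at clip_ratio_db above its RMS level, keeping phase."""
    a_max = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (clip_ratio_db / 20)
    mag = np.abs(x)
    scale = np.minimum(mag, a_max) / np.maximum(mag, 1e-12)
    return x * scale

# Illustrative parameters: 64 QPSK subcarriers, 3 dB clipping ratio.
rng = np.random.default_rng(0)
symbols = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
ofdm = np.fft.ifft(symbols) * np.sqrt(64)   # time-domain OFDM symbol

print(f"PAR before clipping: {papr_db(ofdm):5.2f} dB")
print(f"PAR after  clipping: {papr_db(clip(ofdm, 3.0)):5.2f} dB")
```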
Abstract: The selection of a particular type of mustard plant for plantation depends on its productivity (pod yield) at the stage of maturity. The growth of a mustard plant depends on several plant parameters: shoot length, number of leaves, number of roots, root length, and so on. As the plant grows, some leaves may fall off and new leaves may appear, so leaf count alone cannot establish a relationship with the seed weight at the mature stage of the plant. It is also not possible to measure the number of roots and the root length of a mustard plant at the growing stage, since doing so would harm the plant as the roots go deeper and deeper into the soil. Only the shoot length, which increases over time, can be measured at different time instances. Weather parameters such as maximum and minimum humidity, rainfall, and maximum and minimum temperature may affect the growth of the plant, and pollution, water, soil, distance, and crop management may be dominant factors in the growth of the plant and its productivity. Considering all these parameters, the growth of the plant is highly uncertain, so a fuzzy environment can be adopted for predicting the shoot length at maturity. Fuzzification of the data, based on suitable membership functions, plays a central role here. An effort has been made to fuzzify the original data using the Gaussian, triangular, S-, trapezoidal, and L-functions. All fuzzified data are then defuzzified to recover the normal form. Finally, an error analysis (calculation of forecasting error and average error) indicates which membership function is appropriate for fuzzification of the data and is used to predict the shoot length at maturity. The result is also verified using residual analysis (Absolute Residual, Maximum of Absolute Residual, Mean Absolute Residual, Mean of Mean Absolute Residual, Median of Absolute Residual, and Standard Deviation).
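A minimal sketch of the fuzzification/defuzzification round trip described above, assuming Gaussian and triangular membership functions and centroid defuzzification; the membership parameters and shoot-length values are illustrative, not the paper's data.

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    """Gaussian membership value of x for center c and spread sigma."""
    return np.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def triangular_mf(x, a, b, c):
    """Triangular membership with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def centroid_defuzzify(universe, membership):
    """Centroid (center-of-gravity) defuzzification back to a crisp value."""
    return np.sum(universe * membership) / np.sum(membership)

# Illustrative shoot-length observations (cm) at different time instances.
shoot_length = np.array([12.0, 18.5, 24.1, 29.8, 34.6])
universe = np.linspace(0, 50, 501)

for x in shoot_length:
    for name, mu in [("gaussian", gaussian_mf(universe, c=x, sigma=2.0)),
                     ("triangular", triangular_mf(universe, x - 3, x, x + 3))]:
        x_hat = centroid_defuzzify(universe, mu)   # defuzzify to normal form
        print(f"{name:10s} crisp {x:5.1f} -> defuzzified {x_hat:6.2f}, "
              f"error {abs(x - x_hat):.4f}")
```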
Abstract: Image compression is one of the most important applications of digital image processing. Advanced medical imaging requires the storage of large quantities of digitized clinical data, and because of constrained bandwidth and storage capacity, a medical image must be compressed before transmission and storage. There are two types of compression methods, lossless and lossy. In a lossless compression method the original image is retrieved without any distortion, while in a lossy compression method the reconstructed image contains some distortion. Discrete Cosine Transform (DCT) and Fractal Image Compression (FIC) are lossy compression methods. This work shows that lossy compression methods can be chosen for medical image compression without significant degradation of image quality. Here, DCT and fractal compression using Partitioned Iterated Function Systems (PIFS) are applied to different modalities of images such as CT scan, ultrasound, angiogram, X-ray, and mammogram. Approximately 20 images are considered in each modality, and the average values of the compression ratio and the Peak Signal-to-Noise Ratio (PSNR) are computed and studied. The quality of the reconstructed image is assessed by the PSNR values. Based on the results, it can be concluded that DCT yields higher PSNR values while FIC yields a higher compression ratio. Hence, in medical image compression, DCT can be used wherever picture quality is preferred, and FIC wherever compression for storage and transmission is the priority, without diagnostically significant loss of picture quality.
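The following sketch shows, under simple assumptions, one way lossy DCT compression and the PSNR measure work: an 8x8 block DCT with coefficient truncation. The block size and the number of retained coefficients are illustrative, no entropy coding (which determines the actual compression ratio) is included, and synthetic data stands in for real medical images.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_compress_block(block, keep=16):
    """Keep the `keep` largest-magnitude DCT coefficients of an 8x8 block."""
    coeffs = dctn(block, norm='ortho')
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return idctn(coeffs, norm='ortho')

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB."""
    mse = np.mean((original - reconstructed) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

# Synthetic stand-in for a medical image tile (a real study would load CT,
# ultrasound, angiogram, X-ray, or mammogram data instead).
rng = np.random.default_rng(1)
image = rng.normal(128, 20, (64, 64)).clip(0, 255)

recon = np.block([[dct_compress_block(image[i:i+8, j:j+8])
                   for j in range(0, 64, 8)]
                  for i in range(0, 64, 8)])
print(f"PSNR at 16/64 coefficients per block: {psnr(image, recon):.2f} dB")
```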
Abstract: This paper presents a new heuristic algorithm for the classical symmetric traveling salesman problem (TSP). The idea of the algorithm is to cut a TSP tour into overlapping blocks and then improve each block separately. It is conjectured that the chance of improving a good solution by moving a node to a position far away from its original one is small. By doing intensive search within each block, it is possible to further improve a TSP tour that cannot be improved by other local search methods. To test the performance of the proposed algorithm, computational experiments are carried out on benchmark problem instances. The computational results show that the algorithm proposed in this paper is efficient for solving the TSP.
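A minimal sketch of the block idea described above, assuming 2-opt as the within-block improvement move; the block size, overlap, and the use of 2-opt are illustrative choices, not necessarily the paper's exact procedure.

```python
import numpy as np

def tour_length(tour, dist):
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_block(tour, dist, start, size):
    """Improve one block of the tour with 2-opt moves restricted to it."""
    tour = tour.copy()
    improved = True
    while improved:
        improved = False
        for i in range(start, start + size - 2):
            for j in range(i + 2, start + size):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % len(tour)]
                if dist[a, c] + dist[b, d] < dist[a, b] + dist[c, d]:
                    tour[i + 1:j + 1] = tour[i + 1:j + 1][::-1]
                    improved = True
    return tour

def block_improve(tour, dist, block=20, overlap=10):
    """Sweep overlapping blocks along the tour and improve each separately."""
    for start in range(0, len(tour) - block, block - overlap):
        tour = two_opt_block(tour, dist, start, block)
    return tour

# Illustrative random Euclidean instance.
rng = np.random.default_rng(2)
pts = rng.random((100, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
tour = list(range(100))
print(f"before: {tour_length(tour, dist):.3f}")
print(f"after:  {tour_length(block_improve(tour, dist), dist):.3f}")
```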
Abstract: In this paper we investigate watermarking authentication applied to the field of medical imagery. We first give an overview of watermarking technology, paying particular attention to fragile watermarking since it is the usual scheme for authentication. We then analyze the requirements for image authentication and integrity in medical imagery, and we finally show that invertible schemes are the best suited for this particular field. A well-known authentication method is studied and then adapted for interleaving patient information and a message authentication code with medical images in a reversible manner, that is, using lossless compression. The resulting scheme enables, on the one hand, the exact recovery of the original image, which can be unambiguously authenticated, and, on the other hand, the patient information to be saved or transmitted in a confidential way. To ensure greater security, the patient information is encrypted before being embedded into the images.
Abstract: The myoelectric signal (MES) is one of the biosignals used to help humans control equipment. Recent approaches to MES classification for controlling prosthetic devices with pattern recognition techniques have revealed two problems: first, the classification performance of the system starts to degrade as the number of motion classes to be classified increases; second, the additional complicated methods used to solve the first problem increase the computational cost of a multifunction myoelectric control system. In an effort to solve these problems and to achieve a feasible design for real-time implementation with high overall accuracy, this paper presents a new method for feature extraction in MES recognition systems. The method extracts features by applying the Wavelet Packet Transform (WPT) to the MES from multiple channels, then employs the Fuzzy c-means (FCM) algorithm to generate a measure that judges the suitability of the features for classification. Finally, Principal Component Analysis (PCA) is used to reduce the size of the data before computing the classification accuracy with a multilayer perceptron neural network. The proposed system produces powerful classification results (99% accuracy) using only a small portion of the original feature set.
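A condensed Python sketch of the pipeline's general shape, under stated assumptions: pywt wavelet-packet energies stand in for the WPT features, the FCM-based feature-scoring step is omitted, and synthetic data replaces real multi-channel MES recordings.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def wpt_energy_features(signal, wavelet='db4', level=3):
    """Energy of each terminal wavelet-packet node as a feature vector."""
    wp = pywt.WaveletPacket(signal, wavelet=wavelet, maxlevel=level)
    return np.array([np.sum(node.data ** 2)
                     for node in wp.get_level(level, order='natural')])

# Synthetic stand-in for segmented MES windows: 200 windows, 2 motion classes.
rng = np.random.default_rng(3)
y = rng.integers(0, 2, 200)
X = np.array([wpt_energy_features(rng.normal(0, 1.0 + 0.5 * label, 256))
              for label in y])

# PCA reduces feature dimensionality before the MLP classifier.
X_red = PCA(n_components=4).fit_transform(X)
Xtr, Xte, ytr, yte = train_test_split(X_red, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(Xtr, ytr)
print(f"test accuracy: {clf.score(Xte, yte):.2f}")
```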
Abstract: In this paper we aim to find the optimum multiwavelet for compression of electrocardiogram (ECG) signals. At present, it is not well known which multiwavelet is the best choice for optimum compression of ECG. In this work, we examine different multiwavelets on 24 sets of ECG data with entirely different characteristics, selected from the MIT-BIH database. For assessing the performance of the different multiwavelets in compressing ECG signals, in addition to factors known in the compression literature, such as Compression Ratio (CR), Percent Root Difference (PRD), Distortion (D), and Root Mean Square Error (RMSE), we also employ the Cross Correlation (CC) criterion, to study the morphological relation between the reconstructed and the original ECG signals, and the Signal-to-reconstruction Noise Ratio (SNR). The simulation results show that cardbal2 with the identity (Id) prefiltering method is the most effective transformation.
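As a hedged sketch of the evaluation criteria named above (the multiwavelet transform and prefiltering themselves are beyond a few lines), these NumPy helpers compute PRD, RMSE, CC, and SNR between an original and a reconstructed signal using the standard definitions; CR would come from the coder's bit counts, and the test signal is synthetic.

```python
import numpy as np

def prd(x, x_rec):
    """Percent Root Difference between original and reconstruction."""
    return 100 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def rmse(x, x_rec):
    return np.sqrt(np.mean((x - x_rec) ** 2))

def cross_correlation(x, x_rec):
    """Normalized cross correlation at zero lag (morphological similarity)."""
    return np.corrcoef(x, x_rec)[0, 1]

def snr_db(x, x_rec):
    """Signal-to-reconstruction noise ratio in dB."""
    return 10 * np.log10(np.sum(x ** 2) / np.sum((x - x_rec) ** 2))

# Illustrative check on a synthetic "ECG-like" signal with small noise.
t = np.linspace(0, 1, 360)
x = np.sin(2 * np.pi * 5 * t) + 0.25 * np.sin(2 * np.pi * 25 * t)
x_rec = x + np.random.default_rng(4).normal(0, 0.01, x.size)
print(f"PRD {prd(x, x_rec):.2f}%  RMSE {rmse(x, x_rec):.4f}  "
      f"CC {cross_correlation(x, x_rec):.4f}  SNR {snr_db(x, x_rec):.1f} dB")
```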
Abstract: Heavy rains, which result in floods, are one of the features of arid and semi-arid climates. This kind of rainfall originates from environmental and synoptic conditions. Mediterranean cyclones are the major factor in heavy rainfall in Iran, but these cyclones do not occur in some parts of Iran, such as the southern and southeastern areas. This study attempts to pinpoint the synoptic causes of heavy rainfall in Isfahan by analyzing the relationship between this rainfall and the atmospheric systems over Iran and the surrounding areas. The findings show that the major factor is the arrival of the Sudanese low-pressure system in this region from the southwest; if local ascent conditions such as heating also occur, the heaviest rains happen in Isfahan. In fact, this kind of rainfall in Isfahan has a Sudanese origin, and if it is accompanied by a Mediterranean system, heavier rain falls.
Abstract: Order reduction of linear time-invariant systems employing two methods, one using the advantages of Routh approximation and the other an evolutionary technique, is presented in this paper. In the Routh approximation method, the denominator of the reduced order model is obtained using Routh approximation, while the numerator of the reduced order model is determined using the indirect approach of retaining the time moments and/or Markov parameters of the original system. With this method, the reduced order model is guaranteed to be stable if the original high order model is stable. In the second method, Particle Swarm Optimization (PSO) is employed to reduce the higher order model. The PSO method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher order model and the reduced order model for a unit step input. Both methods are illustrated through numerical examples.
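A compact sketch of the second (PSO) method under simple assumptions: scipy.signal simulates the step responses, and a basic PSO loop minimizes the ISE between an illustrative stable third-order plant and a first-order candidate model. The plant, the reduced model structure, and the PSO constants are all illustrative.

```python
import numpy as np
from scipy import signal

# Illustrative original higher order model (third order, stable).
t = np.linspace(0, 10, 500)
_, y_full = signal.step(signal.TransferFunction([8], [1, 6, 11, 6]), T=t)

def ise(params):
    """Integral squared error vs. a first-order candidate b / (s + a)."""
    b, a = params
    if a <= 0:                      # reject unstable candidates
        return np.inf
    _, y_red = signal.step(signal.TransferFunction([b], [1, a]), T=t)
    return np.trapz((y_full - y_red) ** 2, t)

# Minimal PSO: inertia 0.7, cognitive and social coefficients 1.5.
rng = np.random.default_rng(5)
pos = rng.uniform(0.1, 5.0, (30, 2))
vel = np.zeros_like(pos)
pbest, pcost = pos.copy(), np.array([ise(p) for p in pos])
gbest = pbest[pcost.argmin()].copy()
for _ in range(60):
    r1, r2 = rng.random((2, 30, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    cost = np.array([ise(p) for p in pos])
    better = cost < pcost
    pbest[better], pcost[better] = pos[better], cost[better]
    gbest = pbest[pcost.argmin()].copy()
print(f"reduced model: {gbest[0]:.3f} / (s + {gbest[1]:.3f}), "
      f"ISE = {pcost.min():.5f}")
```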
Abstract: This paper proposes a new version of Particle Swarm Optimization (PSO), namely Modified PSO (MPSO), for model order formulation of Single Input Single Output (SISO) linear time-invariant continuous systems. In the general PSO, the movement of a particle is governed by three behaviors: inertia, cognitive, and social. The cognitive behavior helps the particle to remember its previously visited best position. The Modified PSO technique splits the cognitive behavior into two parts: the previously visited best position and the previously visited worst position. This modification helps the particle to search for the target very effectively. The MPSO approach is used to formulate the reduced order model, based on the minimization of the error between the transient responses of the original higher order model and the reduced order model for a unit step input. The results obtained are compared with earlier techniques to validate its ease of computation. The proposed method is illustrated through a numerical example from the literature.
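One plausible form of the split cognitive term, written as a hedged sketch in standard PSO notation (the abstract does not give the exact coefficients, so this reconstruction is an assumption): the usual attraction toward the personal best $p_i$ is complemented by repulsion from the personal worst $w_i$.

```latex
% Hedged reconstruction of a split-cognitive velocity update:
% attraction toward personal best p_i, repulsion from personal worst w_i.
v_i^{k+1} = \omega v_i^{k}
          + c_1 r_1 \,(p_i - x_i^{k})      % cognitive: best position
          + c_1' r_1' \,(x_i^{k} - w_i)    % cognitive: worst position
          + c_2 r_2 \,(g - x_i^{k}),
\qquad x_i^{k+1} = x_i^{k} + v_i^{k+1}
```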
Abstract: Global approximation using a metamodel for a complex mathematical function or computer model over a large variable domain is often needed in sensitivity analysis, computer simulation, optimal control, and global design optimization of complex, multiphysics systems. To overcome the limitations of existing response surface (RS), surrogate, or metamodeling methods for complex models over large variable domains, a new adaptive and regressive RS modeling method using quadratic functions and local area model improvement schemes is introduced. The method applies an iterative, Latin-hypercube-sampling-based RS update process; divides the entire domain of design variables into multiple cells; identifies rougher cells with large modeling error; and further divides these cells along the roughest dimension. A small number of additional sampling points from the original, expensive model are added over the small, isolated rough cells to improve the RS model locally until the model accuracy criteria are satisfied. The method then combines the local RS cells to regenerate the global RS model with satisfactory accuracy. An effective RS cell sorting algorithm is also introduced to improve the efficiency of model evaluation. Benchmark tests are presented, and the use of the new metamodeling method to replace a complex hybrid electric vehicle powertrain performance model in vehicle design optimization and optimal control is discussed.
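A small sketch of the two basic ingredients named above, under simple assumptions: Latin hypercube sampling of a 2-D domain and a least-squares quadratic response surface fit. The test function and sample size are illustrative, and the adaptive cell-splitting logic is not reproduced.

```python
import numpy as np
from scipy.stats import qmc

def expensive_model(x):
    """Illustrative stand-in for the expensive simulation model."""
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

def quadratic_basis(x):
    """Full quadratic basis in two variables: 1, x1, x2, x1^2, x1*x2, x2^2."""
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2,
                            x1 ** 2, x1 * x2, x2 ** 2])

# Latin hypercube sample over the design domain [0, 1]^2.
sampler = qmc.LatinHypercube(d=2, seed=6)
X = sampler.random(n=30)
y = expensive_model(X)

# Least-squares regression of the quadratic response surface.
coef, *_ = np.linalg.lstsq(quadratic_basis(X), y, rcond=None)

# Check the surrogate at fresh points.
X_test = sampler.random(n=5)
for xt, yt in zip(X_test, expensive_model(X_test)):
    pred = quadratic_basis(xt[None, :]) @ coef
    print(f"model {yt:7.4f}  RS {pred[0]:7.4f}")
```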
Abstract: We propose a simple watermarking method based on visual cryptography. The method is based on the selection of specific pixels from the original image instead of the random selection of pixels used in Hwang's scheme [1]. Verification information is generated and later used to verify the ownership of the image, without the need to embed the watermark pattern into the original digital data. Experimental results show that the proposed method can recover the watermark pattern from the marked data even if some changes are made to the original digital data.
Abstract: Embedding and extraction of secret information as well as restoration of the original un-watermarked image is highly desirable in sensitive applications like military, medical, and law-enforcement imaging. This paper presents a novel reversible data-hiding method for digital images using an integer-to-integer wavelet transform and a companding technique, which can embed and recover the secret information and also restore the image to its pristine state. The method takes advantage of block-based watermarking and iterative optimization of the companding threshold, which avoids histogram pre- and post-processing. Consequently, it reduces the overhead usually required by most reversible watermarking techniques and keeps the distortion between the marked and original images small. Experimental results show that the proposed method outperforms the existing reversible data-hiding schemes reported in the literature.
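A hedged sketch of one common companding pair used in wavelet-domain reversible watermarking: a piecewise-linear compressor and its exact inverse expander around a threshold T. The paper's exact companding function and its iterative threshold optimization may differ; the coefficients below are illustrative.

```python
import numpy as np

def compress(c, T):
    """Piecewise-linear compressor: identity below T, slope 1/2 above."""
    c = np.asarray(c, dtype=float)
    return np.where(np.abs(c) < T, c, np.sign(c) * (T + (np.abs(c) - T) / 2))

def expand(c, T):
    """Inverse expander: undoes compress() exactly on its own output."""
    c = np.asarray(c, dtype=float)
    return np.where(np.abs(c) < T, c, np.sign(c) * (2 * (np.abs(c) - T) + T))

# Round-trip check on illustrative wavelet coefficients.
coeffs = np.array([-9, -4, -1, 0, 2, 5, 8, 13], dtype=float)
T = 4.0
assert np.allclose(expand(compress(coeffs, T), T), coeffs)
print(compress(coeffs, T))   # compressed magnitudes leave room for embedding
```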
Abstract: Given a large sparse signal, the goal is to reconstruct the signal precisely and accurately from as few measurements as possible. Although this seems possible in theory, the difficulty lies in building an algorithm that performs the reconstruction accurately and efficiently. This paper proposes a new, provably effective method for reconstructing sparse signals: a new algorithm called Least Support Orthogonal Matching Pursuit (LS-OMP) is merged with the theory of Partially Known Support (PKS), yielding a new method called Partially Known Least Support Orthogonal Matching Pursuit (PKLS-OMP).

The new methods rely on a greedy algorithm to compute the support, whose cost depends on the number of iterations. To make it faster, PKLS-OMP adds the idea of partially known support to the algorithm. The approach recovers the original signal efficiently, simply, and accurately if the sampling matrix satisfies the Restricted Isometry Property (RIP). Simulation results also show that it outperforms many algorithms, especially for compressible signals.
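For reference, a minimal NumPy implementation of the baseline Orthogonal Matching Pursuit greedy loop on which the proposed LS-OMP/PKLS-OMP variants build; the partially-known-support refinement itself is not reproduced, and the problem sizes below are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Baseline Orthogonal Matching Pursuit: recover a k-sparse x from y = A x."""
    m, n = A.shape
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the current support, then update the residual.
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(n)
    x[support] = x_s
    return x

# Illustrative test: Gaussian sampling matrix (RIP holds with high
# probability), 5-sparse signal.
rng = np.random.default_rng(7)
m, n, k = 60, 200, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
x_hat = omp(A, A @ x_true, k)
print(f"max recovery error: {np.max(np.abs(x_hat - x_true)):.2e}")
```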
Abstract: XML data consists of a very flexible tree structure, which makes it difficult to support the storage and retrieval of XML data. The node numbering scheme is one of the most popular approaches to storing XML in relational databases. Together with the node numbering storage scheme, structural joins can be used to efficiently process the hierarchical relationships in XML. However, in order to process a tree-structured XPath query containing several hierarchical relationships and conditional sentences on XML data, many structural joins need to be carried out, which results in a high query execution cost. This paper introduces mechanisms to rewrite XPath queries containing branch nodes into a much more efficient form with fewer structural joins. A two-step approach is proposed: the first step merges duplicate nodes in the tree-structured query, and the second step divides the query into sub-queries, shortens the paths, and then merges the sub-queries back together. The proposed approach can contribute greatly to the efficient execution of XML queries. Experimental results show that the proposed scheme can reduce the query execution cost by up to an order of magnitude.
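A small sketch of how a node numbering scheme supports structural joins, under the common (start, end, level) region-encoding assumption: an element u is an ancestor of v iff u.start < v.start and v.end < u.end. The element names and intervals are illustrative, not from the paper.

```python
from dataclasses import dataclass

@dataclass
class Node:
    tag: str
    start: int   # preorder position
    end: int     # position after the last descendant
    level: int

def is_ancestor(u: Node, v: Node) -> bool:
    """Region-numbering containment test used by structural joins."""
    return u.start < v.start and v.end < u.end

def structural_join(ancestors, descendants):
    """Naive structural join: all (a, d) pairs with a an ancestor of d."""
    return [(a, d) for a in ancestors for d in descendants if is_ancestor(a, d)]

# Illustrative encoding of <book><title/><author><name/></author></book>.
book   = Node("book",   1, 10, 1)
title  = Node("title",  2,  3, 2)
author = Node("author", 4,  9, 2)
name   = Node("name",   5,  6, 3)

for a, d in structural_join([book, author], [title, name]):
    print(f"{a.tag} // {d.tag}")
```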
Abstract: The Beta-spline is built on G2 continuity, which guarantees the smoothness of the curves and surfaces generated with it. This curve is usually preferred for object design rather than reconstruction. This study, however, employs the Beta-spline to reconstruct a 3-dimensional G2 image of the Stanford Rabbit. The original data consist of multi-slice binary images of the rabbit. The result is then compared with related works using other techniques.
Abstract: When a lightning strike falls near an overhead power line, the intense electromagnetic field radiated by the lightning return-stroke current couples with the power lines and induces transient overvoltages there, which can cause a back-flashover in the electrical network. Indirect lightning represents a major danger because it is more frequent than direct strikes.

In this paper we present an analysis of the electromagnetic coupling between an external electromagnetic field generated by lightning and electrical overhead lines, and we offer an important and original contribution: based on our experimental measurements carried out in the high-voltage laboratories of EPFL in Switzerland during the last trimester of 2005, on recent works of other authors, and on our own mathematical improvements, a new analytical expression for the electromagnetic field generated by the lightning return stroke was developed and is presented in this paper. The results obtained with this new electromagnetic field formulation were compared with experimental results and show reasonable agreement.
Abstract: Knowledge is the foundation of growth and development. Investment in knowledge offers a new way to build a knowledge society and a knowledge economy. Investment in knowledge comprises expenditure on education, R&D, and software, and measuring it is characteristically complicated. We examine the influence of investment in knowledge on multifactor productivity growth and on the number of patents. We analyze the annual growth of investment in knowledge, and we estimate each country's share of the total investment in knowledge across the whole OECD. We determine the relative efficiency of average patent numbers with respect to average investment in knowledge, and we compare GDP growth rates with the growth of knowledge investment. The main purpose of this paper is to evaluate the different aspects, influence, and output of investment in knowledge in OECD countries.
Abstract: Epstein-Barr virus (EBV) is implicated in the pathogenesis of endemic Burkitt's lymphoma (BL). EBV-positive BL-derived cell lines initially maintain the original tumor phenotype of EBV infection (latency I, LatI), but most of them drift toward a lymphoblast phenotype of EBV latency III (LatIII) during in vitro culturing. The aim of the present work was to characterize the B-cell subsets in EBV-positive BL cell lines and to verify whether a particular cell subset correlates with the type of EBV infection. The phenotype analysis of two EBV-negative and eleven EBV-positive (three LatI and eight LatIII) BL cell lines was performed by polychromatic flow cytometry, based on the expression pattern of the CD19, CD10, CD38, CD27, and CD5 markers. Two cell subsets, CD19+CD10+ and CD19+CD10-, were defined in the LatIII BL cell lines. In both subsets, CD27 and CD5 cell surface expression was detected in a proportion of the cells.