Abstract: The authors present an algorithm for order reduction of linear time-invariant dynamic systems using the combined advantages of eigen spectrum analysis and error minimization by the particle swarm optimization technique. The pole centroid and system stiffness of both the original and reduced order systems remain the same in this method to determine the poles, whereas the zeros are synthesized by minimizing the integral square error between the transient responses of the original and reduced order models, pertaining to a unit step input, using the particle swarm optimization technique. It is shown that the algorithm has several advantages, e.g. the reduced order models retain the steady-state value and stability of the original system. The algorithm is illustrated with the help of two numerical examples and the results are compared with other existing techniques.
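As a rough sketch of the error criterion only (not the authors' implementation, and with both models assumed first-order purely for illustration), the integral square error between unit-step responses can be approximated numerically:

```python
import math

def step_response_first_order(k, a, t):
    # Unit-step response of G(s) = k / (s + a): y(t) = (k/a) * (1 - exp(-a*t))
    return (k / a) * (1.0 - math.exp(-a * t))

def ise(params_orig, params_red, t_end=10.0, n=10000):
    # Integral square error between the two step responses (trapezoidal rule)
    dt = t_end / n
    total = 0.0
    for i in range(n + 1):
        t = i * dt
        e = (step_response_first_order(*params_orig, t)
             - step_response_first_order(*params_red, t))
        w = 0.5 if i in (0, n) else 1.0
        total += w * e * e * dt
    return total
```

In the paper's setting, a particle swarm routine would search the reduced model's numerator coefficients for the values that minimize this `ise` objective.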
Abstract: In this paper, a novel multipurpose audio watermarking
algorithm is proposed based on Vector Quantization (VQ) in Discrete
Cosine Transform (DCT) domain using the codeword labeling and
index-bit constrained method. This algorithm can simultaneously fulfill the requirements of both copyright protection and content integrity authentication for multimedia artworks. The
robust watermark is embedded in the middle frequency coefficients of
the DCT transform during the labeled codeword vector quantization
procedure. The fragile watermark is embedded into the indices of the
high frequency coefficients of the DCT transform by using the
constrained index vector quantization method for the purpose of
integrity authentication of the original audio signals. Both the robust
and the fragile watermarks can be extracted without the original audio
signals, and the simulation results show that our algorithm is effective
with regard to the transparency, robustness and authentication requirements.
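The mid-frequency embedding step presupposes a DCT of each audio frame and a band selection. A minimal sketch follows (the orthonormal DCT-II is built directly; the band fractions are illustrative assumptions, not the paper's values):

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis: C @ C.T == identity
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def mid_band(frame, lo_frac=0.25, hi_frac=0.75):
    # Transform one audio frame and return its mid-frequency coefficients,
    # the region where the robust watermark would be embedded.
    n = len(frame)
    coeffs = dct_matrix(n) @ frame
    lo, hi = int(n * lo_frac), int(n * hi_frac)
    return coeffs[lo:hi]
```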
Abstract: The digital signature is a useful primitive for attaining integrity and authenticity in various wired or wireless communications. The proxy signature is one type of digital signature: it allows the proxy signer to sign messages on behalf of the original signer, which is very useful when the original signer (e.g. the president of a company) is not available to sign a specific document. If the original signer cannot forge valid proxy signatures by impersonating the proxy signer, the scheme is robust in a virtual environment; the original signer then cannot shift any illegal action initiated by herself onto the proxy signer. In this paper, we propose a new proxy signature scheme that prevents the original signer from impersonating the proxy signer to sign messages. The proposed scheme is based on the regular ElGamal signature. In addition, the fair privacy of the proxy signer is maintained: the privacy of the proxy signer is preserved, yet it can be revealed when necessary.
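For orientation, the regular ElGamal signature underlying the scheme can be sketched as follows (textbook toy parameters, far too small to be secure; the paper's proxy-delegation modifications are not reproduced here):

```python
import random
from math import gcd

def modinv(a, m):
    # Modular inverse via Python's three-argument pow
    return pow(a, -1, m)

def elgamal_sign(m, p, g, x):
    # Toy ElGamal signature over Z_p with private key x (illustration only)
    while True:
        k = random.randrange(2, p - 1)
        if gcd(k, p - 1) == 1:
            break
    r = pow(g, k, p)
    s = (modinv(k, p - 1) * (m - x * r)) % (p - 1)
    return r, s

def elgamal_verify(m, r, s, p, g, y):
    # Accept iff y^r * r^s == g^m (mod p)
    if not 0 < r < p:
        return False
    return pow(y, r, p) * pow(r, s, p) % p == pow(g, m, p)
```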
Abstract: Interpolated contour maps drawn for aluminum,
copper and molybdenum in downstream monitoring boreholes of
water dam in Miduk Copper Complex and the values of pH, redox
potential (Eh) and distance from water dam indicate different trends
of variation and behavior of these three elements in downward
groundwater resources. As these maps show, aluminum is dominant in the most alkaline borehole (MB5, pH = 9-11). The
highest concentration of molybdenum is found in the nearest
borehole (MB6) to water dam. Main concentration of copper is
observed in the most oxidized borehole (MB3, Eh = 293.2 mV).
The spatial difference among sampling stations can be attributed to
the existence of faults and diaclases in the geologic structure of
Miduk region, which causes the groundwater sampling sites to be influenced by different contamination sources (toe seepage and upper seepage water originating from different zones of the tailings dump).
Abstract: We describe an effective method for image encryption
which employs magnitude and phase manipulation using carrier
images. Although it involves traditional methods like magnitude and
phase encryptions, the novelty of this work lies in deploying the
concept of carrier images for encryption purposes. To this end, a
carrier image is randomly chosen from a set of stored images. One
dimensional (1-D) discrete Fourier transform (DFT) is then carried
out on the original image to be encrypted along with the carrier
image. Row wise spectral addition and scaling is performed between
the magnitude spectra of the original and carrier images by randomly
selecting the rows. Similarly, row wise phase addition and scaling is
performed between the original and carrier images phase spectra by
randomly selecting the rows. The encrypted image obtained by these
two operations is further subjected to one more level of magnitude
and phase manipulation using another randomly chosen carrier image
by 1-D DFT along the columns. The resulting encrypted image is found to be fully distorted, which increases the robustness of the proposed scheme. Further, on applying the reverse process at the receiver, the decrypted image is found to be distortionless.
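A single level of the row-wise magnitude/phase combination and its reverse can be sketched as below (the scaling factors `a`, `b`, the fixed row selection, and keeping the encrypted rows complex are all illustrative simplifications of the scheme described above):

```python
import numpy as np

def encrypt_rows(img, carrier, a=0.5, b=0.5):
    # Row-wise 1-D DFT; add scaled carrier magnitude and phase spectra
    X = np.fft.fft(img, axis=1)
    C = np.fft.fft(carrier, axis=1)
    mag = np.abs(X) + a * np.abs(C)
    ph = np.angle(X) + b * np.angle(C)
    return np.fft.ifft(mag * np.exp(1j * ph), axis=1)

def decrypt_rows(enc, carrier, a=0.5, b=0.5):
    # Reverse process: subtract the carrier contribution, then invert
    E = np.fft.fft(enc, axis=1)
    C = np.fft.fft(carrier, axis=1)
    mag = np.abs(E) - a * np.abs(C)
    ph = np.angle(E) - b * np.angle(C)
    return np.real(np.fft.ifft(mag * np.exp(1j * ph), axis=1))
```

With the carrier known at the receiver, the round trip is lossless up to floating-point error, mirroring the distortionless decryption claimed above.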
Abstract: In this study, a novel approach to image embedding is introduced. The proposed method consists of three main steps. First, the edges of the image are detected using Sobel mask filters. Second, the least significant bit (LSB) of each pixel is used. Finally, gray level connectivity is applied using a fuzzy approach, and the ASCII code is used for information hiding. The bit adjacent to the LSB represents the edged image after gray level connectivity, and the remaining six bits represent the original image with very little difference in contrast. The proposed method embeds three images in one image and includes, as a special case of data embedding, information hiding, identifying and authenticating text embedded within the digital images. Image embedding is considered a good compression method in terms of conserving memory space. Moreover, information hiding within a digital image can be used for secure information transfer. The creation and extraction of the three embedded images, and the hiding of text information, are discussed and illustrated in the following sections.
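The ASCII-in-LSB step alone can be sketched as follows (one bit per pixel byte, NUL-terminated; the Sobel edge detection and fuzzy connectivity stages of the method are not reproduced):

```python
def embed_text(pixels, text):
    # Hide ASCII text in the least significant bits of a copy of `pixels`
    bits = []
    for ch in text + "\x00":           # NUL terminator marks end of message
        code = ord(ch)
        bits.extend((code >> i) & 1 for i in range(7, -1, -1))
    if len(bits) > len(pixels):
        raise ValueError("cover image too small")
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b   # overwrite only the LSB
    return out

def extract_text(pixels):
    # Read 8 LSBs at a time until the NUL terminator
    chars = []
    for i in range(0, len(pixels) - 7, 8):
        code = 0
        for j in range(8):
            code = (code << 1) | (pixels[i + j] & 1)
        if code == 0:
            break
        chars.append(chr(code))
    return "".join(chars)
```

Since only the LSB changes, no pixel value moves by more than 1, which is why the visual difference in contrast is so small.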
Abstract: The use of microemulsions in enhanced oil recovery has become more attractive in recent years because of their high extraction efficiency. Experimental investigations have been made on the characterization of microemulsions of an oil-brine-surfactant/cosurfactant system for use in enhanced oil recovery (EOR). Sodium dodecyl sulfate, propan-1-ol and heptane were selected as surfactant, cosurfactant and oil, respectively, for preparation of the microemulsion. The effects of salinity on the relative phase volumes and solubilization parameters have also been studied. As salinity changes from low to high values, a phase transition takes place from Winsor I to Winsor II via Winsor III. A suitable microemulsion composition was selected based on its stability and ability to reduce interfacial tension. A series of flooding experiments was performed using the selected microemulsion. The flooding experiments were performed in a core flooding apparatus using a uniform sand pack. The core holder was tightly packed with uniform sands (60-100 mesh) and saturated with brines of different salinities. It was flooded with brine at 25 psig and the absolute permeability was calculated from the flow rate through the sand pack. The sand pack was then flooded with the crude oil at 800 psig to irreducible water saturation. The initial water saturation was determined on the basis of mass balance. Waterflooding was conducted by placing the core holder horizontally at a constant injection pressure of 200 psig. After water flooding, when the water-cut reached above 95%, around 0.5 pore volume (PV) of the above microemulsion slug was injected, followed by chasing water. The experiments were repeated using different compositions of the microemulsion slug. The additional recoveries were calculated by material balance. Encouraging results, with additional recovery of more than 20% of the original oil in place above conventional water flooding, have been observed.
Abstract: The aim of the study is to investigate a number of characteristics of Corporate Social Responsibility (CSR) indicators that should be adopted by CSR assessment methodologies. For the purpose of this paper, a survey among the Greek companies that belong to the FTSE 20 of the Athens Exchange (FTSE/Athex-20) has been conducted, as these companies are expected to pioneer in the field of CSR. The results show consensus as regards the characteristics of indicators, such as the need for the adoption of general and sector-specific indicators, financial and non-financial indicators, their origin and their weight rate. However, the results are contradictory concerning the appropriate number of indicators for the assessment of CSR and the unit of measurement. Finally, the company's sector is a more important dimension of CSR than its size and the country where the company operates. The purpose of this paper is to standardize the main characteristics of CSR indicators.
Abstract: BEAMnrc was used to calculate the spectrum and HVL of an X-ray beam during low-energy X-ray radiation using the tube model SRO 33/100/ROT 350 Philips. The results of the BEAMnrc simulation and of measurements were compared to IPEM report number 78 and to the SpekCalc software. Three tube voltages, 127, 103 and 84 kV, were used. In these simulations a tungsten anode with a 1.2 mm Be window was used as the source. HVLs were calculated from the BEAMnrc spectrum with the air kerma method for four different filters. For BEAMnrc, one billion particles were used as original particles for all simulations. The results show that for 127 kV the maximum difference between BEAMnrc and measurements was 5.2% and the minimum was 0.7%; the maximum difference between BEAMnrc and IPEM was 9.1% and the minimum was 2.3%; and the maximum difference between BEAMnrc and SpekCalc was 3.2% and the minimum was 2.8%. The results show that BEAMnrc was able to satisfactorily predict the quantities of low-energy beams as well as of high-energy X-ray radiation.
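The air-kerma HVL calculation can be sketched as a bisection on added filter thickness (entirely synthetic spectrum and attenuation coefficients here, not the BEAMnrc data):

```python
import math

def kerma(spectrum, mu, t):
    # Relative air kerma behind `t` mm of added filter; `spectrum` maps
    # energy bins to their kerma contribution at zero thickness, `mu` maps
    # the same bins to the filter's attenuation coefficient (per mm).
    return sum(k0 * math.exp(-mu[e] * t) for e, k0 in spectrum.items())

def hvl(spectrum, mu, t_max=20.0, tol=1e-9):
    # Bisection for the thickness that halves the air kerma
    target = kerma(spectrum, mu, 0.0) / 2.0
    lo, hi = 0.0, t_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if kerma(spectrum, mu, mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a monoenergetic beam this reduces to the textbook result HVL = ln(2)/mu, which gives a quick sanity check.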
Abstract: Frequently a group of people jointly decide and authorize
a specific person as a representative in some business/political
occasions, e.g., the board of a company authorizes the chief executive
officer to close a multi-billion acquisition deal. In this paper, an
integrated proxy multi-signature scheme that allows anonymously
vetoable delegation is proposed. This protocol integrates mechanisms
of private veto, distributed proxy key generation, secure transmission
of proxy key, and existentially unforgeable proxy multi-signature
scheme. First, a provably secure Guillou-Quisquater proxy signature
scheme is presented, then the “zero-sharing" protocol is extended
over a composite modulus multiplicative group, and finally the above
two are combined to realize the GQ proxy multi-signature with
anonymously vetoable delegation. As a proxy signature scheme, this
protocol protects both the original signers and the proxy signer.
The modular design allows simplified implementation with less
communication overheads and better computation performance than
a general secure multi-party protocol.
Abstract: This paper presents an evaluation for a wavelet-based
digital watermarking technique used in estimating the quality of
video sequences transmitted over Additive White Gaussian Noise
(AWGN) channel in terms of a classical objective metric, such as
Peak Signal-to-Noise Ratio (PSNR) without the need of the original
video. In this method, a watermark is embedded into the Discrete
Wavelet Transform (DWT) domain of the original video frames
using a quantization method. The degradation of the extracted
watermark can be used to estimate the video quality in terms of
PSNR with good accuracy. We calculated PSNR for video frames
contaminated with AWGN and compared the values with those
estimated using the Watermarking-DWT based approach. It is found
that the calculated and estimated quality measures of the video
frames are highly correlated, suggesting that this method can provide
a good quality measure for video frames transmitted over AWGN
channel without the need of the original video.
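For reference, the classical PSNR metric against which the watermark-based estimate is compared can be written as follows (an 8-bit peak value is assumed for illustration):

```python
import numpy as np

def psnr(original, degraded, peak=255.0):
    # Peak Signal-to-Noise Ratio in dB between two frames of equal shape
    diff = original.astype(np.float64) - degraded.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")   # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```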
Abstract: Dynamic bandwidth allocation in EPONs can be
generally separated into inter-ONU scheduling and intra-ONU scheduling. In our previous work, the active intra-ONU scheduling
(AS) utilizes multiple queue reports (QRs) in each report message to cooperate with the inter-ONU scheduling and makes the granted
bandwidth fully utilized without leaving unused slot remainder (USR).
This scheme successfully solves the USR problem originating from the inseparability of Ethernet frames. However, without a proper setting of the threshold value in AS, the number of QRs constrained by the IEEE
802.3ah standard is not enough, especially in the unbalanced traffic
environment. This limitation may be solved by enlarging the threshold
value. A large threshold, however, implies a large gap between adjacent QRs, and thus a large difference between the ideal granted bandwidth and the actually granted bandwidth. In this paper, we integrate
AS with a cooperative prediction mechanism and distribute multiple
QRs to reduce the penalty brought by the prediction error.
Furthermore, to improve the QoS and save the usage of queue reports,
the highest priority (EF) traffic which comes during the waiting time is
granted automatically by OLT and is not considered in the requested
bandwidth of ONU. The simulation results show that the proposed
scheme has better performance metrics in terms of bandwidth
utilization and average delay for different classes of packets.
Abstract: Compression algorithms reduce the redundancy in
data representation to decrease the storage required for that data.
Lossless compression researchers have developed highly
sophisticated approaches, such as Huffman encoding, arithmetic
encoding, the Lempel-Ziv (LZ) family, Dynamic Markov
Compression (DMC), Prediction by Partial Matching (PPM), and
Burrows-Wheeler Transform (BWT) based algorithms.
Decompression is also required to retrieve the original data by
lossless means. This paper presents a compression scheme for text files coupled with the principle of dynamic decompression, which decompresses only the section of the compressed text file required by the user instead of decompressing the entire file. Dynamically decompressed files offer better disk space utilization due to higher compression ratios compared to most currently available text file formats.
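One way to sketch the dynamic-decompression idea (with zlib standing in for the paper's coder, and a fixed character block size as a simplifying assumption) is to compress blocks independently so that only the blocks covering a requested section are ever decompressed:

```python
import zlib

def compress_blocks(text, block_size=1024):
    # Split into fixed-size character blocks and compress each one
    # independently, so any block can later be decompressed on its own.
    blocks = [text[i:i + block_size] for i in range(0, len(text), block_size)]
    return [zlib.compress(b.encode("utf-8")) for b in blocks]

def read_section(compressed, block_size, start, length):
    # Decompress only the blocks covering characters [start, start+length)
    first = start // block_size
    last = (start + length - 1) // block_size
    chunk = "".join(zlib.decompress(compressed[i]).decode("utf-8")
                    for i in range(first, last + 1))
    offset = start - first * block_size
    return chunk[offset:offset + length]
```

The trade-off is the one the abstract implies: smaller blocks mean finer random access but slightly worse compression ratios, since each block carries its own coding state.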
Abstract: In H.264/AVC video encoding, rate-distortion
optimization for mode selection plays a significant role to achieve
outstanding performance in compression efficiency and video quality.
However, this mode selection process also makes the encoding
process extremely complex, especially in the computation of the rate-distortion cost function, which includes the computations of the sum
of squared difference (SSD) between the original and reconstructed
image blocks and context-based entropy coding of the block. In this
paper, a transform-domain rate-distortion optimization accelerator
based on fast SSD (FSSD) and VLC-based rate estimation algorithm
is proposed. This algorithm could significantly simplify the hardware
architecture for the rate-distortion cost computation with only
ignorable performance degradation. An efficient hardware structure
for implementing the proposed transform-domain rate-distortion
optimization accelerator is also proposed. Simulation results
demonstrated that the proposed algorithm reduces about 47% of total
encoding time with negligible degradation of coding performance.
The proposed method can be easily applied to many mobile video
application areas such as a digital camera and a DMB (Digital
Multimedia Broadcasting) phone.
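The transform-domain SSD idea rests on Parseval's relation: for an orthonormal transform, the SSD between two blocks equals the SSD between their coefficients. A sketch with an orthonormal DCT follows (H.264's actual integer transform is only scaled-orthogonal, so this is an illustration of the principle rather than the paper's FSSD):

```python
import numpy as np

def dct2(block):
    # 2-D orthonormal DCT-II: Y = C @ X @ C.T, where C @ C.T == identity
    n = block.shape[0]
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c @ block @ c.T

def ssd(a, b):
    # Sum of squared differences between two blocks
    return float(np.sum((a - b) ** 2))
```

Because the pixel-domain and coefficient-domain SSDs coincide, the encoder can evaluate distortion directly on (quantized) coefficients and skip the inverse transform and reconstruction for candidate modes.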
Abstract: Polymer-like organic thin films were deposited on both
aluminum alloy type 6061 and glass substrates at room temperature by
Plasma Enhanced Chemical Vapor Deposition (PECVD) method using benzene and hexamethyldisiloxane (HMDSO) as precursor materials.
The surface and physical properties of plasma-polymerized organic
thin films were investigated at different r.f. powers. The effects of
benzene/argon ratio on the properties of plasma polymerized benzene
films were also investigated. It is found that using benzene alone
results in a non-coherent and non-adherent powdery deposited
material. The chemical structure and surface properties of the as-grown plasma polymerized thin films were analyzed on glass
substrates with FTIR and contact angle measurements. FTIR spectra
of benzene deposited film indicated that the benzene rings are
preserved when increasing benzene ratio and/or decreasing r.f.
powers. FTIR spectra of HMDSO deposited films indicated an
increase of the hydrogen concentration and a decrease of the oxygen
concentration with the increase of r.f. power. The contact angle (θ) of
the films prepared from benzene was found to increase by about 43%
as benzene ratio increases from 10% to 20%. θ was then found to
decrease to the original value (51°) when the benzene ratio increases
to 100%. The contact angle, θ, for both benzene and HMDSO
deposited films were found to increase with r.f. power. This signifies that the plasma polymerized organic films have substantially lower surface energy as the r.f. power increases. The corrosion resistance of
aluminum alloy substrate both bare and covered with plasma
polymerized thin films was carried out by potentiodynamic
polarization measurements in standard 3.5 wt. % NaCl solution at
room temperature. The results indicate that the benzene and HMDSO
deposited films are suitable for protection of the aluminum substrate
against corrosion. The changes in the processing parameters seem to
have a strong influence on the film protective ability. Surface
roughness of films deposited on aluminum alloy substrate was
investigated using scanning electron microscopy (SEM). The SEM
images indicate that the surface roughness of benzene deposited films increases with decreasing benzene ratio. SEM images of benzene
and HMDSO deposited films indicate that the surface roughness
decreases with increasing r.f. power. Study of the above parameters indicates that the films produced are suitable for specific practical applications.
Abstract: One of the essential components of much of DSP
application is noise cancellation. Changes in real time signals are
quite rapid and swift. In noise cancellation, a reference signal which
is an approximation of noise signal (that corrupts the original
information signal) is obtained and then subtracted from the noise
bearing signal to obtain a noise free signal. This approximation of
noise signal is obtained through adaptive filters which are self
adjusting. As the changes in real time signals are abrupt, this needs
adaptive algorithm that converges fast and is stable. Least mean
square (LMS) and normalized LMS (NLMS) are two widely used
algorithms because of their simplicity of calculation and implementation, but their convergence rates are low. Adaptive averaging (AFA) filters are also used because they converge quickly, but they are less stable. This paper provides a comparative study of the LMS, NLMS, AFA and new enhanced average adaptive (Average NLMS, ANLMS) filters for noise cancelling applications using speech signals.
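The canceller structure described above (subtracting an adaptively filtered noise reference from the noise-bearing signal) can be sketched with a plain NLMS update; the filter order and step size below are illustrative choices, not values from the paper:

```python
import numpy as np

def nlms_cancel(d, x, order=8, mu=0.5, eps=1e-8):
    # Normalized LMS noise canceller: d is the noise-bearing signal and
    # x the noise reference; the returned error signal is the cleaned output.
    w = np.zeros(order)
    e = np.zeros(len(d))
    for n in range(order - 1, len(d)):
        u = x[n - order + 1:n + 1][::-1]       # most recent reference samples
        e[n] = d[n] - w @ u                    # output = desired - noise estimate
        w += (mu / (eps + u @ u)) * e[n] * u   # power-normalized weight update
    return e
```

The normalization by the reference power is what lets NLMS keep a stable effective step size when the input level changes abruptly, which is the concern raised above for real-time signals.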
Abstract: Applying knowledge discovery techniques to unstructured text is termed knowledge discovery in text (KDT), text data mining, or text mining. The decision tree approach is most useful for classification problems. With this technique, a tree is constructed to model the classification process. There are two basic steps in the technique: building the tree and applying the tree to the database. This paper describes a proposed C5.0 classifier that performs rulesets, cross-validation and boosting on the original C5.0 in order to reduce the error ratio. The feasibility and the benefits of the proposed approach are demonstrated by means of a medical data set, hypothyroid. It is shown that the performance of a classifier on the training cases from which it was constructed gives a poor estimate of its accuracy; by sampling or by using a separate test file, the classifier can instead be evaluated on cases that were not used to build it. If the cases in hypothyroid.data and hypothyroid.test were shuffled and divided into a new 2772-case training set and a 1000-case test set, C5.0 might construct a different classifier with a lower or higher error rate on the test cases. An important feature of See5 is its ability to convert trees into classifiers called rulesets; the ruleset has an error rate of 0.5% on the test cases. The standard errors of the means provide an estimate of the variability of results. One way to get a more reliable estimate of predictive accuracy is f-fold cross-validation: the error rate of a classifier produced from all the cases is estimated as the ratio of the total number of errors on the hold-out cases to the total number of cases. The Boost option with x trials instructs See5 to construct up to x classifiers in this manner. Trials over numerous datasets, large and small, show that on average 10-classifier boosting reduces the error rate for test cases by about 25%.
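The f-fold cross-validation error estimate described above can be sketched generically (the `train`/`predict` callables stand in for C5.0, which is not reproduced here; the deterministic fold assignment is a simplification):

```python
def cross_val_error(cases, labels, train, predict, folds=10):
    # f-fold cross-validation: each case is held out exactly once; the
    # error rate is total hold-out errors divided by total cases.
    n = len(cases)
    errors = 0
    for f in range(folds):
        test_idx = [i for i in range(n) if i % folds == f]
        train_idx = [i for i in range(n) if i % folds != f]
        model = train([cases[i] for i in train_idx],
                      [labels[i] for i in train_idx])
        for i in test_idx:
            if predict(model, cases[i]) != labels[i]:
                errors += 1
    return errors / n
```

Even with a trivial majority-class "classifier" plugged in, the estimator behaves as expected: the minority-class cases are the ones misclassified on hold-out.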
Abstract: The argument that self-disclosure will change the psychoanalytic process into a socio-cultural niche, distorting the therapeutic alliance and compromising therapeutic effectiveness, is still a widely held belief amongst many psychotherapists. This paper considers the issues surrounding culture, disclosure and concealment, since they remain largely untheorized and clinically problematic. The first part of the paper will critically examine the theory and practice of psychoanalysis across cultures, and explore the reasons for culturally diverse patients to conceal rather than disclose their feelings and thoughts in the transference. This is followed by a discussion on how immigrant analysts' anonymity is difficult to maintain, since diverse nationalities, languages and accents provide clues to the therapist's and patient's origins. Through personal clinical examples of one of the authors (who is an immigrant), the paper analyses the transference-countertransference paradigm and how it is reflected in the analyst's self-revelation.
Abstract: In this paper we find the optimum multiwavelet for compression of electrocardiogram (ECG) signals and select it for use with the SPIHT codec. At present, it is not well known which multiwavelet is the best choice for optimum compression of ECG. In this work, we examine different multiwavelets on 24 sets of ECG data with entirely different characteristics, selected from the MIT-BIH database. For assessing the performance of the different multiwavelets in compressing ECG signals, in addition to factors known from the compression literature, such as Compression Ratio (CR), Percent Root-mean-square Difference (PRD), Distortion (D) and Root Mean Square Error (RMSE), we also employed the Cross Correlation (CC) criterion, for studying the morphological relations between the reconstructed and the original ECG signals, and the Signal to reconstruction Noise Ratio (SNR). The simulation results show that the Cardinal Balanced Multiwavelet (cardbal2), with the identity (Id) prefiltering method, is the most effective transformation. After finding the most efficient multiwavelet, we apply the SPIHT coding algorithm to the signal transformed by this multiwavelet.
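Two of the assessment criteria named above, PRD and cross correlation, are standard and can be sketched directly (definitions as commonly used in the ECG compression literature; the paper's exact variants may differ, e.g. in mean removal):

```python
import numpy as np

def prd(original, reconstructed):
    # Percent Root-mean-square Difference between original and reconstruction
    num = np.sum((original - reconstructed) ** 2)
    den = np.sum(original ** 2)
    return 100.0 * np.sqrt(num / den)

def cross_corr(original, reconstructed):
    # Normalized cross correlation of the two mean-removed signals
    a = original - original.mean()
    b = reconstructed - reconstructed.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))
```

A perfect reconstruction gives PRD = 0 and CC = 1; CC stays at 1 under pure amplitude scaling, which is why it is used to judge morphology rather than amplitude fidelity.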
Abstract: A theoretical study is conducted to design and explore
the effect of different parameters such as heat loads, the tube size of
piping system, wick thickness, porosity and hole size on the
performance and capability of a Loop Heat Pipe (LHP). This paper
presents a steady state model that describes the different phenomena
inside an LHP. Loop Heat Pipes (LHPs) are two-phase heat transfer devices with capillary pumping of a working fluid. Owing to their design, compared with conventional heat pipes, and to the special properties of the capillary structure, they are capable of transferring heat efficiently over distances of up to several meters at any orientation in the gravity field, or over several meters in a horizontal position. This theoretical model is
described by different relations to satisfy important limits such as
capillary and nucleate boiling. An algorithm is developed to predict
the size of the LHP satisfying the limitations mentioned above for a
wide range of applied loads. Finally, to assess and evaluate the algorithm and all the relations considered, we used it to design a new kind of LHP to recover heat from the exhaust of an actual gas turbine. The results showed that the LHP can be used as a highly efficient device to recover heat even at high loads (the exhaust of a gas turbine). The sizes of all parts of the LHP were obtained using the developed algorithm.
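The capillary limit mentioned above amounts to a simple feasibility check: the wick's maximum capillary pressure must exceed the sum of the pressure drops around the loop. A sketch with illustrative numbers follows (the actual model computes the viscous, gravitational and vapor-line drops from the geometry, which is not reproduced here):

```python
def capillary_ok(sigma, pore_radius, pressure_drops):
    # Capillary limit: maximum capillary head 2*sigma/r (Young-Laplace,
    # zero contact angle assumed) must exceed the total loop pressure drop.
    # All quantities in SI units (N/m, m, Pa).
    return 2.0 * sigma / pore_radius >= sum(pressure_drops)
```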