Abstract: Digital watermarking is one of the techniques for
copyright protection. In this paper, a normalization-based robust
image watermarking scheme which encompasses singular value
decomposition (SVD) and discrete cosine transform (DCT)
techniques is proposed. For the proposed scheme, the host image is
first normalized to a standard form and divided into non-overlapping
image blocks. SVD is applied to each block. By concatenating the
first singular values (SV) of adjacent blocks of the normalized image,
an SV block is obtained. DCT is then carried out on the SV blocks to
produce SVD-DCT blocks. A watermark bit is embedded in the
high-frequency band of an SVD-DCT block by imposing a particular
relationship between two pseudo-randomly selected DCT
coefficients. An adaptive frequency mask is used to adjust local
watermark embedding strength. Watermark extraction involves
mainly the inverse process. The watermark extracting method is blind
and efficient. Experimental results show that the quality degradation
of the watermarked image caused by the embedded watermark is visually
transparent. Results also show that the proposed scheme is robust
against various image processing operations and geometric attacks.
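The coefficient-relationship embedding step described above can be sketched as follows. This is an illustrative sketch, not the paper's exact algorithm: the 8×8 block size, the margin, the definition of the high-frequency band, and the seeded generator standing in for the pseudo-random coefficient selection are all assumptions.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2.0 / n)

def embed_bit(sv_block, bit, key=42, margin=2.0):
    """Embed one watermark bit in a square block of singular values by
    forcing an order relation between two pseudo-randomly chosen
    high-frequency DCT coefficients (illustrative parameters)."""
    n = sv_block.shape[0]
    T = dct_matrix(n)
    C = T @ sv_block @ T.T                        # forward 2-D DCT
    hi = [(i, j) for i in range(n) for j in range(n) if i + j >= n]
    idx = np.random.default_rng(key).choice(len(hi), size=2, replace=False)
    (i1, j1), (i2, j2) = hi[idx[0]], hi[idx[1]]
    mid = (C[i1, j1] + C[i2, j2]) / 2.0
    sign = 1.0 if bit == 1 else -1.0              # bit 1: C1 > C2; bit 0: C1 < C2
    C[i1, j1], C[i2, j2] = mid + sign * margin, mid - sign * margin
    return T.T @ C @ T                            # inverse 2-D DCT

def extract_bit(sv_block, key=42):
    """Blind extraction: only the key is needed, not the host image."""
    n = sv_block.shape[0]
    T = dct_matrix(n)
    C = T @ sv_block @ T.T
    hi = [(i, j) for i in range(n) for j in range(n) if i + j >= n]
    idx = np.random.default_rng(key).choice(len(hi), size=2, replace=False)
    (i1, j1), (i2, j2) = hi[idx[0]], hi[idx[1]]
    return 1 if C[i1, j1] > C[i2, j2] else 0
```

Because the imposed relationship survives the inverse/forward DCT round trip, extraction needs only the key, which is what makes the scheme blind.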
Abstract: In this paper, a new approach to face recognition is presented that achieves double dimension reduction, making the system computationally efficient while improving recognition results. In pattern recognition, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results improve as face image resolution increases and level off beyond a certain resolution. In the proposed model, an image decimation algorithm is first applied to the face image to reduce its dimension to the resolution level that provides the best recognition results. The Discrete Cosine Transform (DCT) is then applied to the face image, owing to its computational speed and feature extraction potential. A subset of DCT coefficients from low to mid frequencies that represents the face adequately and provides the best recognition results is retained. A trade-off among the decimation factor, the number of DCT coefficients retained, and the recognition rate is obtained with minimum computation. Preprocessing of the image is carried out to increase robustness against variations in pose and illumination level. The new model has been tested on different databases, including the ORL database, the Yale database, and a color database, and has performed much better than other techniques. The significance of the model is twofold: (1) dimension reduction to an effective and suitable face image resolution; (2) appropriate DCT coefficients are retained to achieve the best recognition results under varying image poses, intensity, and illumination levels.
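The decimation-plus-DCT feature pipeline can be sketched as below. This is a minimal sketch, assuming a 16×16 synthetic "face", a decimation factor of 2, a 4×4 retained coefficient block, and a nearest-neighbour classifier; all of these are illustrative choices, not the paper's parameters or data.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2.0 / n)

def features(img, dec=2, keep=4):
    """First reduction: decimate the image. Second reduction: keep only a
    small low-to-mid frequency block of its 2-D DCT coefficients."""
    small = img[::dec, ::dec].astype(float)
    T = dct_matrix(small.shape[0])
    C = T @ small @ T.T
    return C[:keep, :keep].ravel()

def classify(probe, gallery):
    """Nearest-neighbour rule on the retained DCT feature vectors."""
    f = features(probe)
    return min(gallery, key=lambda lbl: np.linalg.norm(f - features(gallery[lbl])))
```

The double reduction is visible in the sizes: a 16×16 image becomes an 8×8 decimated image, and then a 16-dimensional feature vector.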
Abstract: The feasibility of applying a simple and cost-effective sliding friction testing apparatus to study the friction behaviour of a clutch facing material, as affected by variations in temperature and contact pressure, was investigated. It was found that the method used in this work gives a convenient and cost-effective measurement of the friction coefficients of a clutch facing material and their transitions. The obtained results will be useful for the development of new facing materials.
Abstract: The vast amount of information on the World Wide
Web is created and published by many different types of providers.
Unlike books and journals, most of this information is not subject to
editing or peer review by experts. This lack of quality control and the
explosion of web sites make the task of finding quality information
on the web especially critical. Meanwhile, new facilities for
producing web pages, such as blogs, make this issue more significant,
because blogs offer simple content management tools that enable
non-experts to build easily updatable web diaries or online journals.
On the other hand, despite a decade of active research in information
quality (IQ), there is still no framework for measuring information
quality on blogs. This paper presents a novel experimental framework
for ranking the quality of information on weblogs. The results of
data analysis revealed seven IQ dimensions for the weblog. For each
dimension, variables and related coefficients were calculated so that
the presented framework can assess the IQ of weblogs automatically.
Abstract: To model the human visual system (HVS) in the region of interest, we propose a new objective quality metric adapted to wavelet foveation-based image compression, which exploits a foveation filter implemented in the DWT domain, based on the point and region of fixation of the human eye. This model is then used to predict the visible differences between an original and a compressed image with respect to this region and yields an adapted, local error measure by removing all peripheral errors. The technique, which we call foveation wavelet visible difference prediction (FWVDP), is demonstrated on a number of noisy images, all of which have the same local peak signal-to-noise ratio (PSNR) but visibly different errors. We show that the FWVDP reliably predicts the fixation areas of interest where error is masked, due to high image contrast, and the areas where the error is visible, due to low image contrast. The paper also suggests ways in which the FWVDP can be used to determine a visually optimal quantization strategy for foveation-based wavelet coefficients and to produce a quantitative local measure of image quality.
Abstract: In this paper, a robust watermarking algorithm using
the wavelet transform and edge detection is presented. The efficiency
of an image watermarking technique depends on the preservation of
visually significant information. This is attained by embedding the
watermark transparently with the maximum possible strength. The
watermark embedding process is carried out over the subband
coefficients that lie on edges, where distortions are less noticeable,
with a subband-level-dependent strength. Additionally, the watermark
is embedded in selected coefficients around edges, captured by a
morphological dilation operation, using a different scale factor for
the watermark strength. The experimental evaluation of the proposed
method shows very good results in terms of transparency and of
robustness against various attacks, such as median filtering, Gaussian
noise, JPEG compression, and geometrical transformations.
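The edge-driven embedding idea can be sketched with a one-level Haar transform standing in for the paper's wavelet, a simple magnitude threshold standing in for the edge detector, and a 4-neighbour dilation; the two strength factors and the 90th-percentile edge threshold are illustrative assumptions, not the paper's values.

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar transform (rows, then columns)."""
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    return ((lo[0::2] + lo[1::2]) / 2.0, (lo[0::2] - lo[1::2]) / 2.0,
            (hi[0::2] + hi[1::2]) / 2.0, (hi[0::2] - hi[1::2]) / 2.0)

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2."""
    lo = np.empty((LL.shape[0] * 2, LL.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = LL + LH, LL - LH
    hi[0::2], hi[1::2] = HL + HH, HL - HH
    img = np.empty((lo.shape[0], lo.shape[1] * 2))
    img[:, 0::2], img[:, 1::2] = lo + hi, lo - hi
    return img

def dilate(mask):
    """Morphological dilation with a 4-neighbour structuring element."""
    out = mask.copy()
    out[1:] |= mask[:-1]; out[:-1] |= mask[1:]
    out[:, 1:] |= mask[:, :-1]; out[:, :-1] |= mask[:, 1:]
    return out

def embed(img, wm, alpha_edge=2.0, alpha_near=0.8):
    """Embed wm on edge coefficients of one detail subband, and with a
    smaller scale factor on the dilated ring around the edges."""
    LL, LH, HL, HH = haar2(img)
    edges = np.abs(HL) > np.percentile(np.abs(HL), 90)  # strong detail = edges
    near = dilate(edges) & ~edges                       # coefficients around edges
    HL = HL + (alpha_edge * edges + alpha_near * near) * wm
    return ihaar2(LL, LH, HL, HH)
```

The two scale factors mirror the abstract's design: full strength on edge coefficients, a reduced strength on the dilation ring around them.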
Abstract: A method based on the power series solution is proposed to find the natural frequency of flapping vibration for a rotating inclined Euler beam with constant angular velocity. The vibration of the rotating beam is measured from the position of the corresponding steady-state axial deformation. In this paper, the governing equations for linear vibration of a rotating Euler beam are derived by the d'Alembert principle, the virtual work principle, and the consistent linearization of the fully geometrically nonlinear beam theory in a rotating coordinate system. The governing equation for flapping vibration of the rotating inclined Euler beam is a linear ordinary differential equation with variable coefficients and is solved by a power series with four independent coefficients. Substituting the power series solution into the corresponding boundary conditions at the two end nodes of the rotating beam, a set of homogeneous equations is obtained. The natural frequencies may then be determined by solving these homogeneous equations using the bisection method. Numerical examples are studied to investigate the effect of the inclination angle on the natural frequency of flapping vibration for rotating inclined Euler beams with different angular velocities and slenderness ratios.
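The bisection step applied to a frequency equation can be illustrated as below. Since the paper's characteristic equation comes from its own power-series boundary conditions, the classical clamped-free (cantilever) Euler beam frequency equation cos(β)·cosh(β) + 1 = 0, with known first root β₁ ≈ 1.8751, is used here as a stand-in.

```python
import math

def char_eq(beta):
    """Cantilever Euler beam frequency equation cos(b)*cosh(b) + 1 = 0,
    a stand-in for the determinant of the homogeneous equations."""
    return math.cos(beta) * math.cosh(beta) + 1.0

def bisect(f, a, b, tol=1e-10):
    """Locate a root of f in [a, b] by repeated interval halving."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:       # root lies in the left half
            b = m
        else:                      # root lies in the right half
            a, fa = m, f(m)
    return 0.5 * (a + b)

beta1 = bisect(char_eq, 1.0, 2.0)  # first dimensionless frequency parameter
```

Bisection is attractive here because the characteristic function only needs sign evaluations, not derivatives, at each trial frequency.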
Abstract: Numerical calculations of the flow around a square cylinder are presented using the multi-relaxation-time lattice Boltzmann method at a Reynolds number of 150. The effects of upstream location, downstream location, and blockage are investigated systematically. A detailed analysis is given in terms of time traces of the drag and lift coefficients, power spectra of the lift coefficient, vorticity contour visualizations, and phase diagrams. A number of physical quantities, such as the mean drag coefficient, Strouhal number, and root-mean-square values of the drag and lift coefficients, are calculated and compared with well-resolved experimental data and numerical results available in the open literature. The results show that the upstream extent, downstream extent, and height of the computational domain should be at least 7.5, 37.5, and 12 diameters of the cylinder, respectively.
Abstract: The purpose of this article is to study the effects of
plant cover on overland flow and, therefore, its influence on the
amount of eroded and transported soil. In this investigation, all the
experiments were conducted in the LEGHYD laboratory using a
rainfall simulator and a soil tray. The experiments were conducted
using an experimental plot (soil tray) which is 2m long, 0.5 m wide
and 0.15 m deep. The soil used is an agricultural sandy soil (62.08%
coarse sand, 19.14% fine sand, 11.57% silt and 7.21% clay). Plastic
rods (4 mm in diameter) were used to simulate the plants at different
densities: 0 stems/m² (bare soil), 126 stems/m², 203 stems/m², 461
stems/m² and 2500 stems/m². The rainfall intensity used is 73 mm/h
and the soil tray slope is fixed at 3°. The results show that the
overland flow velocities decreased with increasing stems density, and
the density cover has a great effect on sediment concentration.
Darcy–Weisbach and Manning friction coefficients of overland flow
increased when the stems density increased. Froude and Reynolds
numbers decreased with increasing stems density and, consequently,
the flow regime of all treatments was laminar and subcritical. From
these findings, we conclude that increasing plant cover can
efficiently reduce soil loss and prevent the denuding of plant roots.
Abstract: S-rotors are commonly operated in an open environment;
however, the rotor is sometimes installed in a bounded environment,
where its performance may change. This paper presents the changes in the
performance of S-rotor when operated in bounded flows. The
investigation was conducted experimentally to compare the
performance of the rotors in a bounded environment against an open
environment. Three different rotor models were designed, fabricated,
and subjected to experimental measurements. All three models were
600 mm in height and 300 mm in diameter. They were tested in three
different flow environments, namely a partially bounded environment,
a fully bounded environment, and an open environment. The rotors were
found to have better starting-up capabilities when operated in a
bounded environment. Moreover, all rotors achieved higher power and
torque coefficients at a higher tip speed ratio compared to the open
environment.
Abstract: This paper presents an ESN-based Arabic phoneme
recognition system trained with supervised, forced, and combined
supervised/forced learning algorithms. Mel-Frequency
Cepstrum Coefficients (MFCCs) and Linear Predictive Code (LPC)
techniques are used and compared as the input feature extraction
technique. The system is evaluated using 6 speakers from the King
Abdulaziz Arabic Phonetics Database (KAPD) for the Saudi Arabian
dialect and 34 speakers from the Center for Spoken Language
Understanding (CSLU2002) database, with speakers of different
dialects from 12 Arab countries. Results for the KAPD and CSLU2002
Arabic databases show phoneme recognition rates of 72.31% and 38.20%,
respectively.
Abstract: In wavelet regression, choosing the threshold value is a crucial issue. A value that is too large cuts too many coefficients, resulting in over-smoothing. Conversely, a value that is too small allows many coefficients to be included in the reconstruction, giving a wiggly estimate that results in under-smoothing. The proper choice of threshold can thus be considered a careful balance of these principles. This paper gives a very brief introduction to some threshold selection methods, including the universal, SURE, EBayes, two-fold cross-validation, and level-dependent cross-validation methods. A simulation study on a variety of sample sizes, test functions, and signal-to-noise ratios is conducted to compare their numerical performance under three different noise structures. For Gaussian noise, EBayes outperforms the others in all cases for all the functions used, while two-fold cross-validation provides the best results in the case of long-tailed noise. For large signal-to-noise ratios, level-dependent cross-validation works well in the correlated-noise case. As expected, increasing both the sample size and the signal-to-noise ratio increases estimation efficiency.
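The universal threshold mentioned above (VisuShrink) can be sketched as follows; the MAD-based noise estimate and soft thresholding are its standard companions, and the pure-noise coefficients in the test are an illustrative assumption rather than data from the paper.

```python
import math

def soft(coeffs, t):
    """Soft thresholding: zero coefficients below t, shrink the rest by t."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def universal_threshold(detail):
    """Universal threshold t = sigma * sqrt(2 log n), with the noise level
    sigma estimated from the median absolute deviation of the detail
    coefficients (MAD / 0.6745)."""
    n = len(detail)
    mad = sorted(abs(c) for c in detail)[n // 2]
    return (mad / 0.6745) * math.sqrt(2.0 * math.log(n))
```

Applied to pure-noise detail coefficients, this threshold removes essentially all of them, which illustrates exactly the over-smoothing risk the abstract attributes to large thresholds.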
Abstract: Since Volterra, the originator of integro-differential
equations, many scientists have investigated these equations. The
classical method for solving integro-differential equations is the
quadrature method, which is still successfully applied today. Unlike
these methods, Makroglou applied hybrid methods, which are modified
and generalized in this paper and applied to the numerical solution
of Volterra integro-differential equations. A way of determining the
coefficients of the suggested method is also given.
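A minimal numerical sketch of the quadrature approach that the abstract takes as its baseline: forward-Euler time stepping with the Volterra integral replaced by a trapezoidal quadrature sum. The test problem y'(t) = 1 + ∫₀ᵗ y(s) ds, y(0) = 0, whose exact solution is sinh(t), is an illustrative choice, not taken from the paper.

```python
import math

def solve_vide(f, K, y0, T, n):
    """Forward-Euler scheme for y'(t) = f(t) + integral_0^t K(t,s) y(s) ds.
    The integral is replaced by a trapezoidal quadrature sum whose upper
    limit grows with the current grid point (the Volterra structure)."""
    h = T / n
    t = [i * h for i in range(n + 1)]
    y = [y0] + [0.0] * n
    for i in range(n):
        if i == 0:
            quad = 0.0
        else:
            # trapezoidal rule over [0, t_i] using the values computed so far
            quad = h * (0.5 * K(t[i], t[0]) * y[0]
                        + sum(K(t[i], t[j]) * y[j] for j in range(1, i))
                        + 0.5 * K(t[i], t[i]) * y[i])
        y[i + 1] = y[i] + h * (f(t[i]) + quad)
    return t, y

# y' = 1 + integral_0^t y ds, y(0) = 0  =>  y(t) = sinh(t)
_, y = solve_vide(lambda t: 1.0, lambda t, s: 1.0, 0.0, 1.0, 2000)
```

The growing upper limit of the quadrature sum is precisely the variable-boundary inconvenience that motivates constant-coefficient multistep alternatives.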
Abstract: We present new finite element methods for Helmholtz and Maxwell equations on general three-dimensional polyhedral meshes, based on domain decomposition with boundary elements on the surfaces of the polyhedral volume elements. The methods use the lowest-order polynomial spaces and produce sparse, symmetric linear systems despite the use of boundary elements. Moreover, piecewise constant coefficients are admissible. The resulting approximation on the element surfaces can be extended throughout the domain via representation formulas. Numerical experiments confirm that the convergence behavior on tetrahedral meshes is comparable to that of standard finite element methods, and equally good performance is attained on more general meshes.
Abstract: From the perspective of industrial structure coordination,
and based on an explicit definition of industrial structure
coordination, synergetic coefficients are used to measure the degree
of coordination between the input structure and the output structure
of the three industries, and the efficacy function method is then
employed to comprehensively evaluate the level of China's industrial
structure optimization. It is shown that China's industrial structure
presented a "V-shaped" trend between 1996 and 2008, and that its
industrial structure adjustment made obvious progress after 2003,
with the level of industrial structure optimization increasing
continuously. In 2009, however, the level of China's industrial
structure optimization declined sharply, due to the decreasing
contribution of value-added structure and energy structure
coordination and the lower degree of coordination between the
value-added structure and the capital structure.
Abstract: The solution of some practical problems reduces to the
solution of integro-differential equations. For the numerical
solution of such equations, quadrature methods, or their combination
with multistep or one-step methods, are typically used. Quadrature
methods are mainly applied to the calculation of the integral
appearing on the right-hand side of the integro-differential
equation. Since this integral is of Volterra type, when it is
replaced by an integral sum the upper limit of the sum depends on the
current point at which the values of the integral are defined. We
thus obtain an integral sum with a variable boundary, which is
difficult to work with. Therefore, a multistep method with constant
coefficients, which is free from this drawback, is presented,
together with a way of finding its coefficients.
Abstract: Segmentation of a color image composed of different kinds
of regions can be a hard problem, namely the computation of exact
texture fields and the decision on the optimum number of segmentation
areas when the image contains similar and/or non-stationary texture
fields. A novel neighborhood-based segmentation approach is proposed.
A genetic algorithm is used in the proposed segment-pass optimization
process; in this pass, an energy function, defined based on Markov
random fields, is minimized. In this paper we use an adaptive
threshold estimation method for image thresholding in the wavelet
domain, based on generalized Gaussian distribution (GGD) modeling of
the subband coefficients. This method, called NormalShrink, is
computationally efficient and adaptive because the parameters
required for estimating the threshold depend on the subband data
energy used in the pre-segmentation stage. A quadtree is employed to
implement the multiresolution framework, which enables the use of
different strategies at different resolution levels, so the
computation can be accelerated. The experimental results using the
proposed segmentation approach are very encouraging.
Abstract: The binary phase-only filter digital watermarking
embeds the phase information of the discrete Fourier transform of the
image into the corresponding magnitudes for better image authentication.
The paper proposes an approach to implementing watermark embedding by
quantizing the magnitude, discusses how to regulate the quantization
steps based on the frequencies of the magnitude coefficients of the
embedded watermark, and shows how to embed the watermark with
low-frequency quantization. Theoretical analysis and simulation
results show that the flexibility, security, watermark
imperceptibility, and detection performance of the binary phase-only
filter digital watermarking algorithm can be effectively improved
with quantization-based watermark embedding, and that the robustness
against JPEG compression is also increased to some extent.
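The quantization-based embedding can be illustrated with a basic quantization index modulation (QIM) rule: a magnitude is snapped to the multiple of the step whose parity encodes the bit. The fixed step size here is an assumption; the abstract's point is precisely that the step should be regulated by the coefficient frequency.

```python
def qim_embed(mag, bit, step):
    """Quantize a DFT magnitude so that the parity of the quantization
    index carries one watermark bit."""
    q = round(mag / step)
    if q % 2 != bit:
        q += 1                     # adjust to an index of the correct parity
    return q * step

def qim_extract(mag, step):
    """Recover the bit from the parity of the quantization index."""
    return round(mag / step) % 2
```

A larger step raises robustness, since the magnitude must drift by more than half a step to flip the bit, at the cost of imperceptibility; regulating the step per frequency band trades these off.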
Abstract: The purpose of this study is to investigate the capacity
of natural Turkish zeolite for NH4-N removal from landfill leachate.
The effects of modification and initial concentration on the removal
of NH4-N from leachate were also investigated. The kinetics of
adsorption of NH4-N has been discussed using three kinetic models,
i.e., the pseudo-second-order model, the Elovich equation, and the
intraparticle diffusion model. Kinetic parameters and correlation
coefficients were determined. Equilibrium isotherms for the
adsorption of NH4-N were analyzed using the Langmuir, Freundlich, and
Temkin isotherm models. The Langmuir isotherm model was found to best
represent the data for NH4-N.
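The isotherm-fitting step can be sketched as below: the Langmuir model q = qm·KL·C / (1 + KL·C) is linearized as C/q = C/qm + 1/(qm·KL) and fitted by least squares. The parameter values and the synthetic, noise-free data are illustrative assumptions, not the paper's measurements.

```python
def linfit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# synthetic, noise-free data from qm = 20 mg/g, KL = 0.5 L/mg (assumed values)
qm_true, KL_true = 20.0, 0.5
C = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0]                 # equilibrium conc., mg/L
q = [qm_true * KL_true * c / (1.0 + KL_true * c) for c in C]

# linearized Langmuir: C/q = C/qm + 1/(qm*KL)
slope, intercept = linfit(C, [c / qc for c, qc in zip(C, q)])
qm_fit, KL_fit = 1.0 / slope, slope / intercept
```

On noise-free data the fit recovers qm and KL exactly; with real leachate data the correlation coefficient of this regression is what indicates how well the Langmuir model represents the system.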
Abstract: In this paper, we show that stability cannot be achieved
with current stabilizing MPC methods for some unstable processes, and
we therefore present a new method for stabilizing them. The main idea
is to use a new time-varying weighted cost function for traditional
GPC. This stabilizes the closed-loop system without adding soft or
hard constraints to the optimization problem. By studying different
examples, it is shown that using the proposed method, closed-loop
stability of unstable non-minimum-phase processes is achieved.