A Robust Watermarking Using Blind Source Separation

In this paper, we present a robust and secure watermarking algorithm. The watermark is first transformed into the frequency domain using the discrete wavelet transform (DWT). All DWT coefficients except those of the LL band are then discarded, and the retained coefficients are permuted and encrypted by a specific mixing. The encrypted coefficients are inserted into the most significant spectral components of the stego-image using a chaotic system. This technique makes the watermark robust against attacks by an active intruder (such as compression and geometric distortion) and against noise in the transmission link.
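
To illustrate the embedding flow described above, the sketch below takes the LL-band DWT coefficients of a watermark, scrambles them with a keyed permutation (standing in for the paper's "specific mixing"), and adds them to the largest-magnitude DWT coefficients of the host image at positions drawn from a logistic-map chaotic sequence. The wavelet family, the permutation key, the embedding strength alpha and the logistic-map parameters are illustrative assumptions, not the exact scheme of the paper.

```python
import numpy as np
import pywt  # PyWavelets

def logistic_positions(n, size, x0=0.7, r=3.99, burn_in=100):
    """Draw n distinct embedding positions in [0, size) from a logistic map."""
    x, pos, seen = x0, [], set()
    for _ in range(burn_in):            # discard the transient
        x = r * x * (1 - x)
    while len(pos) < n:
        x = r * x * (1 - x)
        p = int(x * size)
        if p not in seen:
            seen.add(p)
            pos.append(p)
    return np.array(pos)

def embed_watermark(host, watermark, key=42, alpha=0.05):
    # 1) DWT of the watermark; keep only the LL band
    ll, _ = pywt.dwt2(watermark.astype(float), 'haar')
    coeffs = ll.ravel()
    # 2) permute / scramble the retained coefficients with a keyed permutation
    rng = np.random.default_rng(key)
    coeffs = coeffs[rng.permutation(coeffs.size)]
    # 3) insert into the most significant (largest-magnitude) host coefficients,
    #    at positions selected by the chaotic sequence
    h_ll, (h_lh, h_hl, h_hh) = pywt.dwt2(host.astype(float), 'haar')
    flat = h_ll.ravel()
    strong = np.argsort(np.abs(flat))[::-1][:4 * coeffs.size]
    pick = logistic_positions(coeffs.size, strong.size)
    flat[strong[pick]] += alpha * coeffs
    return pywt.idwt2((flat.reshape(h_ll.shape), (h_lh, h_hl, h_hh)), 'haar')
```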

Cost Based Warranty Optimisation Using Genetic Algorithm

Warranty is a powerful marketing tool for the manufacturer and a good protection for both the manufacturer and the customer. However, warranty always involves additional costs to the manufacturer, which depend on product reliability characteristics and warranty parameters. This paper presents an approach to the optimisation of warranty parameters for a known product failure distribution, aimed at reducing the warranty costs to the manufacturer while retaining the promotional function of the warranty. A combined free-replacement and pro-rata warranty policy is chosen as the model; the lengths of the free-replacement period and the pro-rata period are varied, as well as the coefficients that define the pro-rata cost function. The multiparametric warranty optimisation is performed using a genetic algorithm. The obtained results are a guideline for the manufacturer in choosing the warranty policy that minimises the costs and maximises the profit.
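
As a hedged illustration of the optimisation step only, the sketch below uses a plain real-coded genetic algorithm to search over a free-replacement period, a pro-rata period and a rebate coefficient. The exponential failure model, the cost expression and the promotional penalty are placeholder assumptions, not the paper's actual warranty cost model.

```python
import numpy as np

rng = np.random.default_rng(0)
LAMBDA, UNIT_COST = 0.4, 100.0          # assumed failure rate (1/yr) and product cost

def expected_warranty_cost(w1, w2, k):
    """Placeholder cost of a combined FRW (0..w1) / PRW (w1..w1+w2) policy."""
    frw_cost = UNIT_COST * LAMBDA * w1                      # free-replacement window
    ages = np.linspace(w1, w1 + w2, 50)
    rebate = UNIT_COST * k * (1 - (ages - w1) / w2)         # pro-rata rebate
    fail_density = LAMBDA * np.exp(-LAMBDA * ages)
    prw_cost = np.trapz(rebate * fail_density, ages)
    promo_penalty = 50.0 / (w1 + w2)    # keeps the warranty attractive to customers
    return frw_cost + prw_cost + promo_penalty

def ga(pop_size=60, generations=200,
       bounds=((0.1, 3.0), (0.1, 3.0), (0.1, 1.0))):
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, 3))
    for _ in range(generations):
        fit = np.array([expected_warranty_cost(*ind) for ind in pop])
        parents = pop[np.argsort(fit)[:pop_size // 2]]      # lower cost = fitter
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            c = np.where(rng.random(3) < 0.5, a, b)          # uniform crossover
            c += rng.normal(0, 0.05, 3)                      # Gaussian mutation
            children.append(np.clip(c, lo, hi))
        pop = np.vstack([parents, children])
    return min(pop, key=lambda ind: expected_warranty_cost(*ind))

print(ga())   # [w1, w2, k] minimising the assumed cost
```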

Estimation of Time-Varying Linear Regression with Unknown Time-Volatility via Continuous Generalization of the Akaike Information Criterion

The problem of estimating time-varying regression is inevitably concerned with the necessity of choosing the appropriate level of model volatility, ranging from the full stationarity of instant regression models to their absolute independence of each other. In the stationary case, the number of regression coefficients to be estimated equals that of the regressors, whereas the absence of any smoothness assumptions multiplies the dimension of the unknown vector by the length of the time series. The Akaike Information Criterion is a commonly adopted means of adjusting a model to a given data set within a succession of nested parametric model classes, but its crucial restriction is that the classes are rigidly defined by the growing integer-valued dimension of the unknown vector. To make the Kullback information maximization principle underlying the classical AIC applicable to the problem of time-varying regression estimation, we extend it to a wider class of data models in which the dimension of the parameter is fixed, but the freedom of its values is softly constrained by a family of continuously nested a priori probability distributions.
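
For reference, the classical criterion being generalized and the time-varying regression model in question can be written as follows; the notation is only illustrative ($\hat{L}$ the maximized likelihood, $k$ the integer parameter dimension, $\boldsymbol{\beta}_t$ the instant regression coefficients whose volatility is to be controlled):

```latex
\mathrm{AIC} = 2k - 2\ln\hat{L},
\qquad
y_t = \mathbf{x}_t^{\top}\boldsymbol{\beta}_t + \varepsilon_t,\quad t = 1,\dots,T .
```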

Hybrid Method Using Wavelets and Predictive Method for Compression of Speech Signal

Signal compression algorithms are undergoing considerable progress. These algorithms are continuously improved by new tools and aim to reduce, on average, the number of bits necessary for the signal representation while minimizing the reconstruction error. This article proposes the compression of the Arabic speech signal by a hybrid method combining the wavelet transform and linear prediction. The adopted approach rests, on the one hand, on the decomposition of the original signal by analysis filters, followed by the compression stage, and, on the other hand, on the application of order-5 linear prediction to the compressed signal coefficients. The aim of this approach is the estimation of the prediction error, which is then coded and transmitted. The decoding operation is used to reconstitute the original signal. Thus, the adequate choice of the filter bank used in the transform is necessary to increase the compression rate while inducing a distortion that is imperceptible from an auditory point of view.
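
A minimal sketch of the hybrid idea follows: the signal is decomposed with a wavelet filter bank, the sub-band coefficients are thresholded (the compression stage), and an order-5 linear predictor is fitted to the retained coefficients so that only the prediction error needs to be coded. The wavelet family, the threshold rule and the Yule-Walker fitting are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
import pywt

def lpc_yule_walker(x, order=5):
    """Fit order-p linear prediction coefficients via the Yule-Walker equations."""
    r = np.correlate(x, x, mode='full')[len(x) - 1:len(x) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def hybrid_compress(signal, wavelet='db4', level=3, keep=0.2, order=5):
    # 1) analysis filter bank: multilevel wavelet decomposition
    coeffs = pywt.wavedec(signal.astype(float), wavelet, level=level)
    flat = np.concatenate(coeffs)
    # 2) compression stage: keep only the largest-magnitude coefficients
    thr = np.quantile(np.abs(flat), 1.0 - keep)
    flat = np.where(np.abs(flat) >= thr, flat, 0.0)
    # 3) order-5 prediction over the retained coefficients; code the residual
    a = lpc_yule_walker(flat, order)
    pred = np.zeros_like(flat)
    for n in range(order, len(flat)):
        pred[n] = a @ flat[n - order:n][::-1]
    residual = flat - pred       # this (sparser) error signal is coded and transmitted
    return a, residual
```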

Approximate Range-Sum Queries over Data Cubes Using Cosine Transform

In this research, we propose to use the discrete cosine transform to approximate the cumulative distributions of data cube cells' values. The cosine transform is known to have a good energy compaction property and can thus approximate data distribution functions with a small number of coefficients. The derived estimator is accurate and easy to update. We perform experiments to compare its performance with a well-known technique, the (Haar) wavelet. The experimental results show that the cosine transform performs much better than the wavelet in estimation accuracy, speed, space efficiency, and ease of update.
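
The sketch below illustrates the one-dimensional version of this idea under illustrative parameter choices: the cumulative sum of the cell values is transformed with the DCT, only the first few coefficients are stored, and a range-sum is answered as the difference of two reconstructed cumulative values.

```python
import numpy as np
from scipy.fft import dct, idct

def build_estimator(cell_values, n_coeffs=16):
    """Store a few DCT coefficients of the cumulative distribution of a 1-D cube."""
    cum = np.cumsum(cell_values.astype(float))
    c = dct(cum, norm='ortho')
    return c[:n_coeffs], len(cell_values)   # energy compaction: drop the tail

def range_sum(coeffs, n, lo, hi):
    """Approximate the sum of cells lo..hi from the stored coefficients."""
    full = np.zeros(n)
    full[:len(coeffs)] = coeffs
    cum = idct(full, norm='ortho')           # reconstructed cumulative distribution
    return cum[hi] - (cum[lo - 1] if lo > 0 else 0.0)

data = np.random.default_rng(1).integers(0, 100, size=1024)
coeffs, n = build_estimator(data)
print(range_sum(coeffs, n, 100, 400), data[100:401].sum())   # estimate vs exact
```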

Increasing the Speed of Convergence of an Artificial Neural Network-Based ARMA Coefficients Determination Technique

In this paper, novel techniques for increasing the accuracy and speed of convergence of a feedforward back-propagation artificial neural network (FFBPNN) with a polynomial activation function, as reported in the literature, are presented. These techniques were subsequently used to determine the coefficients of autoregressive moving average (ARMA) and autoregressive (AR) systems. The results obtained by introducing sequential and batch methods of weight initialization, a batch method of weight and coefficient update, and adaptive momentum and learning rate techniques are more accurate and show a significant reduction in convergence time compared to the traditional back-propagation algorithm, thereby making the FFBPNN an appropriate technique for online ARMA coefficient determination.
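
The core of the speed-up described above is a batch weight update combined with momentum and an adaptive learning rate. The fragment below is a hedged illustration of that update rule for a single-hidden-layer network; the layer sizes, the tanh activation and the specific adaptation rule are assumptions, not the exact FFBPNN of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(X, y, hidden=8, epochs=500, lr=0.01, momentum=0.9):
    """Batch back-propagation with momentum and a simple adaptive learning rate.

    X: (N, d) inputs, y: (N, 1) targets.
    """
    W1 = rng.normal(0, 0.1, (X.shape[1], hidden))
    W2 = rng.normal(0, 0.1, (hidden, 1))
    V1, V2 = np.zeros_like(W1), np.zeros_like(W2)
    prev_loss = np.inf
    for _ in range(epochs):
        H = np.tanh(X @ W1)                 # hidden layer
        err = H @ W2 - y                    # linear output minus target
        loss = float(np.mean(err ** 2))
        # adaptive learning rate: grow when improving, shrink when not
        lr = lr * 1.05 if loss < prev_loss else lr * 0.5
        prev_loss = loss
        # batch gradients over the whole training set
        gW2 = H.T @ err / len(X)
        gW1 = X.T @ ((err @ W2.T) * (1 - H ** 2)) / len(X)
        # momentum update
        V2 = momentum * V2 - lr * gW2; W2 += V2
        V1 = momentum * V1 - lr * gW1; W1 += V1
    return W1, W2
```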

Spectral Analysis of Speech: A New Technique

ICA, which is generally used for the blind source separation problem, has been tested for feature extraction in a speech recognition system to replace the phoneme-based approach of MFCC. Applying the generated cepstral coefficients to ICA as preprocessing constitutes a new signal processing approach. This gives much better results than MFCC and ICA used separately, both for word and speaker recognition. The mixing matrix A is different before and after MFCC, as expected, since Mel is a nonlinear scale. However, cepstral coefficients generated from linear predictive coefficients, being independent, prove to be the right candidate for ICA. Matlab is the tool used for all comparisons. The database used consists of samples from ISOLET.
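
A hedged sketch of the pipeline, with MFCCs computed first and ICA then applied to the cepstral frames, is given below; the use of librosa for the MFCCs, scikit-learn's FastICA, and the number of components are illustrative choices rather than the exact setup of the study (which was done in Matlab).

```python
import librosa
from sklearn.decomposition import FastICA

def ica_cepstral_features(wav_path, n_mfcc=13, n_components=10):
    # 1) conventional MFCC front end (frames x coefficients)
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T
    # 2) ICA on the cepstral frames: rows are observations, columns are mixtures
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(mfcc)   # statistically independent components
    mixing = ica.mixing_                # the mixing matrix A discussed above
    return sources, mixing
```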

Flexural Strength and Ductility Improvement of NSC Beams

In order to calculate the flexural strength of normal-strength concrete (NSC) beams, the nonlinear actual concrete stress distribution within the compression zone is normally replaced by an equivalent rectangular stress block, with two coefficients, α and β, regulating the intensity and depth of the equivalent stress respectively. For NSC beam design, α and β are usually assumed constant, as 0.85 and 0.80, in reinforced concrete (RC) codes. From an earlier investigation by the authors, α is not a constant but is significantly affected by the flexural strain gradient, and increases with increasing strain gradient up to a maximum value. This indicates that a larger concrete stress can be developed in flexure than that stipulated by design codes. As an extension and application of the authors' previous study, the modified equivalent concrete stress block is used here to produce a series of design charts showing the maximum design limits of flexural strength and ductility of singly and doubly reinforced NSC beams, through which both strength and ductility design limits are improved by taking the strain gradient effect into account.
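
For context, the coefficients α and β enter the standard equilibrium and moment-capacity expressions of a singly reinforced rectangular section in the textbook form below (f'_c concrete strength, b section width, c neutral axis depth, A_s steel area, f_y steel yield strength, d effective depth), so an α that grows with strain gradient directly raises the computed flexural strength:

```latex
\alpha f'_c\, \beta c\, b = A_s f_y ,
\qquad
M_u = A_s f_y \left( d - \frac{\beta c}{2} \right).
```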

Heat Transfer Coefficients for Particulate Airflow in Shell and Coiled Tube Heat Exchangers

In this work, we experimentally study heat transfer from the particulate-laden exhaust air of a detergent spray-drying tower to water using a coiled tube heat exchanger. Water flows in the coiled tubes, while air loaded with detergent particles of 43 micrometers in diameter flows within the shell. Four coiled tubes with different coil pitches are used in a counter-current flow configuration. We investigate the heat transfer coefficients on the inside and outside of the heat transfer surfaces through 400 experiments. Correlations between the Nusselt number and the Reynolds number, Prandtl number, the ratio of particulate mass flow rate to air mass flow rate, and the coiled tube pitch parameter are proposed. The correlations obtained can be used to predict heat transfer between the tube side and the shell side of the heat exchanger.
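
Such correlations are typically of power-law form; a generic hedged template (the constant C and the exponents are those fitted from the 400 experiments and are not reproduced here) is

```latex
\mathrm{Nu} = C\, \mathrm{Re}^{a}\, \mathrm{Pr}^{b}
\left( \frac{\dot{m}_p}{\dot{m}_a} \right)^{c}
\left( \frac{p_c}{d_t} \right)^{e},
```

with $\dot{m}_p/\dot{m}_a$ the particulate-to-air mass flow rate ratio and $p_c/d_t$ a dimensionless coil pitch.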

Computational Prediction of Complicated Atmospheric Motion for Spinning or Non-Spinning Projectiles

A full six-degrees-of-freedom (6-DOF) flight dynamics model is proposed for the accurate prediction of short- and long-range trajectories of high-spin and fin-stabilized projectiles in atmospheric flight up to the final impact point. The projectile is assumed to be rigid (non-flexible) and rotationally symmetric about its spin axis, launched at low and high pitch angles. The mathematical model is based on the full equations of motion set up in the no-roll body reference frame and is integrated numerically from given initial conditions at the firing site. The projectile's maneuvering motion depends on the most significant force and moment variations, in addition to wind and gravity. The computational flight analysis takes into consideration the Mach number and total angle of attack effects by means of variable aerodynamic coefficients; for the purposes of the present work, linear interpolation has been applied to the tabulated database of McCoy's book. The developed computational method gives satisfactory agreement with published data from verified experiments and computational codes on atmospheric projectile trajectory analysis for various initial firing conditions.
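
As a much-simplified companion to the full 6-DOF model, the sketch below integrates a point-mass trajectory with a Mach-dependent drag coefficient linearly interpolated from a small table, in the spirit of the tabulated aerodynamic database mentioned above. The drag table, atmosphere, projectile data and launch conditions are placeholder values, not McCoy's data.

```python
import numpy as np
from scipy.integrate import solve_ivp

RHO, G, A_SOUND = 1.225, 9.81, 340.0            # sea-level air, gravity, speed of sound
MASS, DIAM = 43.0, 0.155                         # placeholder mass (kg) and calibre (m)
AREA = np.pi * DIAM ** 2 / 4
MACH_TAB = np.array([0.0, 0.8, 1.0, 1.2, 2.0, 3.0])   # placeholder Cd(Mach) table
CD_TAB   = np.array([0.15, 0.16, 0.38, 0.35, 0.28, 0.24])

def rhs(t, s, wind=np.zeros(3)):
    pos, vel = s[:3], s[3:]
    v_rel = vel - wind                           # wind enters via the relative velocity
    v = np.linalg.norm(v_rel)
    cd = np.interp(v / A_SOUND, MACH_TAB, CD_TAB)      # variable aerodynamic coefficient
    drag = -0.5 * RHO * cd * AREA * v * v_rel / MASS
    return np.concatenate([vel, drag + np.array([0.0, 0.0, -G])])

def hit_ground(t, s):                            # stop the integration at impact
    return s[2]
hit_ground.terminal, hit_ground.direction = True, -1

v0, elev = 800.0, np.radians(30.0)               # placeholder muzzle velocity, pitch angle
s0 = [0.0, 0.0, 1.0, v0 * np.cos(elev), 0.0, v0 * np.sin(elev)]   # fired from 1 m height
sol = solve_ivp(rhs, (0, 300), s0, events=hit_ground, max_step=0.05)
print("range (m):", sol.y[0, -1])
```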

A Methodology to Analyze Technology Convergence: Patent-Citation Based Technology Input-Output Analysis

This research proposes a methodology for patent-citation-based technology input-output analysis, applying patent information to the input-output analysis originally developed for the dependencies among different industries. For this analysis, a technology relationship matrix and its components, as well as input and technology inducement coefficients, are constructed using patent information. A technology inducement coefficient is calculated by normalizing the degree of citation from certain IPCs (International Patent Classification) to different or the same IPCs. Finally, we construct a Dependency Structure Matrix (DSM) based on the technology inducement coefficients to suggest a useful application of this methodology.
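
A compact numerical sketch of the construction follows: starting from a citation-count matrix between IPC classes, each column is normalised to obtain technology input coefficients, and a Leontief-type inverse gives inducement coefficients from which the DSM is thresholded. The 3-class matrix, the external citation counts and the threshold are invented for illustration only.

```python
import numpy as np

# citations[i, j] = citations from patents in IPC class j to class i (toy data)
citations = np.array([[5., 12.,  3.],
                      [8.,  4., 10.],
                      [2.,  6.,  7.]])
external = np.array([10., 8., 15.])     # citations to classes outside the sample

# technology input coefficients: normalise by each citing class's total citations
A = citations / (citations.sum(axis=0) + external)

# technology inducement coefficients: Leontief-type inverse (I - A)^-1
L = np.linalg.inv(np.eye(3) - A)

# Dependency Structure Matrix: mark dependencies above a chosen threshold
DSM = (L > 1.0).astype(int)
print(A, L, DSM, sep="\n\n")
```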

Quadratic Pulse Inversion Ultrasonic Imaging (QPI): A Two-Step Procedure for Optimization of Contrast Sensitivity and Specificity

We have previously introduced an ultrasonic imaging approach that combines harmonic-sensitive pulse sequences with a post-beamforming quadratic kernel derived from a second-order Volterra filter (SOVF). This approach is designed to produce images with high sensitivity to nonlinear oscillations from microbubble ultrasound contrast agents (UCA) while maintaining high levels of noise rejection. In this paper, a two-step algorithm is presented for computing the coefficients of the quadratic kernel that reduces the tissue component introduced by motion, maximizes noise rejection, and increases specificity while optimizing sensitivity to the UCA. In the first step, quadratic kernels from individual singular modes of the PI data matrix are compared in terms of their ability to maximize the contrast-to-tissue ratio (CTR). In the second step, the quadratic kernels resulting in the highest CTR values are convolved. The imaging results indicate that a signal processing approach to this clinical challenge is feasible.

Evaluation of Stiffness and Damping Coefficients of Multiple Axial Groove Water Lubricated Bearing Using Computational Fluid Dynamics

This research details a Computational Fluid Dynamics (CFD) approach to modeling fluid flow in a journal bearing with eight equispaced semi-circular axial grooves. Water is used as the lubricant and is fed under pressure from one end of the bearing to the other. The geometry of the bearing is modeled using the commercially available modeling software GAMBIT, and the flow analysis is performed using the dedicated CFD analysis software FLUENT. The pressure distribution in the bearing clearance is obtained from FLUENT for various whirl ratios and is used to calculate the hydrodynamic force components in the radial and tangential directions of the bearing. These values, along with the various whirl speeds, can be used in a regression analysis to determine the stiffness and damping coefficients. The values obtained are then compared with the stiffness and damping coefficients of a three-axial-groove water-lubricated journal bearing and with those obtained from a FORTRAN code for a similar bearing.
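
The regression step mentioned above can be illustrated as follows: assuming an isotropic linearised bearing model and a small circular whirl orbit, the radial and tangential force components at several whirl speeds are fitted by linear least squares, yielding direct and cross-coupled stiffness and damping coefficients. The synthetic force data below are placeholders, not CFD results.

```python
import numpy as np

# whirl speeds (rad/s) and force components per unit eccentricity (N/m), placeholders
omega = np.array([50., 100., 150., 200., 250.])
Fr_over_e = np.array([-2.1e6, -2.4e6, -2.7e6, -3.0e6, -3.3e6])   # radial
Ft_over_e = np.array([ 0.9e6,  0.5e6,  0.1e6, -0.3e6, -0.7e6])   # tangential

# Linearised isotropic model for a small circular whirl orbit:
#   -Fr/e = K_direct + c_cross * omega
#    Ft/e = k_cross  - C_direct * omega
X = np.column_stack([np.ones_like(omega), omega])
(K_direct, c_cross), *_ = np.linalg.lstsq(X, -Fr_over_e, rcond=None)
(k_cross, neg_C), *_ = np.linalg.lstsq(X, Ft_over_e, rcond=None)
C_direct = -neg_C

print(f"K={K_direct:.3e} N/m, k={k_cross:.3e} N/m, "
      f"C={C_direct:.3e} N*s/m, c={c_cross:.3e} N*s/m")
```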

Experimental and Numerical Study of the Effect of Lateral Wind on the Feeder Airship

The Feeder is one of the airships of the Multibody Advanced Airship for Transport (MAAT) system, under development within the EU FP7 project. MAAT is based on a modular concept composed of two different parts that can join: the so-called Cruiser and Feeder, both designed on the lighter-than-air principle. The Feeder, also named ATEN (Airship Transport Elevator Network), is the smaller one, which joins the bigger one, the Cruiser, also named PTAH (Photovoltaic modular Transport Airship for High altitude); the joining is envisaged to happen at 15 km altitude. During the MAAT design phase, aerodynamic studies of both airships and their interactions are carried out. The objective of these studies is to understand the aerodynamic behavior of all the preselected configurations, as an important element in the overall MAAT system design. Most of these configurations are only simulated by CFD, while the most feasible one is experimentally analyzed in order to validate and trust the CFD predictions. This paper presents the numerical and experimental investigation of the Feeder "conical-like" shape configuration. The experiments are focused on the aerodynamic force coefficients and the pressure distribution over the Feeder outer surface, while the numerical simulations also cover the analysis of the velocity and pressure distributions. Finally, the wind tunnel experiment is compared with its CFD model in order to validate such specific simulations against the respective experiments and to better understand the difference between wind tunnel and in-flight conditions.
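
For reference, the aerodynamic force coefficients and the pressure coefficient compared between the wind tunnel and the CFD model are the standard nondimensional quantities below, where S is a reference area of the Feeder and $V_\infty$ the freestream velocity (the specific choice of reference area is not stated here):

```latex
C_D = \frac{F_D}{\tfrac{1}{2}\rho V_\infty^{2} S},
\qquad
C_L = \frac{F_L}{\tfrac{1}{2}\rho V_\infty^{2} S},
\qquad
C_p = \frac{p - p_\infty}{\tfrac{1}{2}\rho V_\infty^{2}} .
```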

Prediction of Watermelon Consumer Acceptability based on Vibration Response Spectrum

It is difficult to judge watermelon ripeness from outward characteristics such as size or external color. In this paper, a nondestructive method was studied to determine watermelon (Crimson Sweet) quality. The responses of samples to excitation vibrations were detected using laser Doppler vibrometry (LDV) technology. The phase shift between input and output vibrations was extracted over the whole frequency range, and the first and second resonances were derived from the frequency response spectra. After the nondestructive tests, the watermelons were sensory evaluated, so the samples were graded in a range of ripeness based on overall acceptability (the total of traits desired by consumers). Regression models were developed to predict quality using the obtained results and the sample mass. The determination coefficients of the calibration and cross-validation models were 0.89 and 0.71, respectively. This study demonstrates the feasibility of using information derived from vibration response curves for predicting fruit quality. The vibration response of watermelon using the LDV method is measured without direct contact; it is accurate and timely, which could be a significant advantage for classifying watermelons based on consumer opinions.
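
A minimal sketch of the calibration and cross-validation step, assuming the extracted vibration features and sample mass form the feature matrix and the sensory acceptability scores form the target, might look like the following; the toy data and the choice of ordinary least squares are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

# X columns: [first resonance, second resonance, phase-shift feature, mass]; y: acceptability
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
y = X @ np.array([0.8, -0.5, 0.3, 0.6]) + rng.normal(0, 0.3, 60)   # toy data

model = LinearRegression().fit(X, y)
print("calibration R^2:", r2_score(y, model.predict(X)))
print("cross-validation R^2:",
      r2_score(y, cross_val_predict(LinearRegression(), X, y, cv=5)))
```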

A New Fuzzy Decision Support Method for Analysis of Economic Factors of Turkey's Construction Industry

Imperfect knowledge cannot always be avoided. Imperfections may take several forms: uncertainty, imprecision and incompleteness. Among the methods for managing imperfect knowledge are fuzzy set-based techniques. The choice of a method to process data is linked to the choice of knowledge representation, which can be numerical, symbolic, logical or semantic, and it depends on the nature of the problem to be solved, for example decision support, which is addressed in this study. Fuzzy logic is used for its ability to manage imprecise knowledge, but it can also take advantage of the ability of neural networks to learn coefficients or functions; such an association of methods is typical of so-called soft computing. In this study, a new method was used to manage the imprecision of collected knowledge related to an economic analysis of the construction industry in Turkey. Sudden changes in economic factors decrease the competitive strength of construction companies. A better evaluation of these changes in economic factors, from the viewpoint of the construction industry, will positively influence companies' construction-related decisions.

On One Application of Hybrid Methods For Solving Volterra Integral Equations

As is known, one of the priority directions of research in the natural sciences is putting into practice applied branches of contemporary mathematics, such as approximate and numerical methods for solving integral equations. We face the solution of integral equations while studying many natural phenomena, and quadrature methods are mainly applied to solve them numerically. Taking into account some deficiencies of quadrature methods for finding the solution of integral equations, some scientists have suggested multistep methods with constant coefficients. Unlike those papers, here we consider the application of hybrid methods to the numerical solution of the Volterra integral equation. The efficiency of the suggested method is proved and a concrete method with accuracy order p = 4 is constructed. This method is more precise than the corresponding known methods.
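
For clarity, the class of equations in question is the second-kind Volterra integral equation; in quadrature-type (and hybrid multistep) methods, the integral up to the current node is replaced by a weighted sum over earlier nodes, schematically:

```latex
y(x) = f(x) + \int_{x_0}^{x} K(x, s)\, y(s)\, ds,
\qquad
y_n \approx f(x_n) + h \sum_{i=0}^{n} w_{n,i}\, K(x_n, x_i)\, y_i .
```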

A Further Improvement on the Resurrected Core-Spreading Vortex Method

In a previously developed fast vortex method, the diffusion of the vortex sheet induced at the solid wall by the no-slip boundary condition was modeled according to the approximate solution of Koumoutsakos and converted into discrete blobs in the vicinity of the wall. This scheme had been successfully applied to the simulation of the flow induced by an impulsively started circular cylinder. In this work, further modifications of this vortex method are attempted, including replacing the approximate solution by the boundary-element-method solution, incorporating a new algorithm for handling over-weak vortex blobs, and diffusing the vortex sheet circulation in a new way suitable for high-curvature solid bodies. The accuracy is thus largely improved. The predictions of lift and drag coefficients for a uniform flow past a NASA airfoil agree well with the existing literature.

Numerical Simulation of Minimum Distance Jet Impingement Heat Transfer

Impinging jets are used in various industrial areas as a cooling and drying technique. The current research is concerned with means of improving the heat transfer for configurations with a minimum distance between the nozzle and the impingement surface. The impingement heat transfer is described using numerical methods over a wide range of parameters for an array of planar jets. These parameters include the jet flow speed, the nozzle width, the nozzle distance, the angle of the jet flow, and the velocity and geometry of the impingement surface. Normal pressure and shear stress are computed as additional parameters. Using dimensionless characteristic numbers, the parameters and the results are correlated to obtain generalized equations. The results demonstrate the effect of the investigated parameters on the flow.

Effect of Temperature on Specific Retention Volumes of Selected Volatile Organic Compounds Using the Gas - Liquid Chromatographic Technique Revisited

This paper is a continuation of our interest in the influence of temperature on specific retention volumes and the resulting infinite dilution activity coefficients. This has a direct effect on the design of absorption and stripping columns for the abatement of volatile organic compounds. The interaction of 13 volatile organic compounds (VOCs) with polydimethylsiloxane (PDMS) at varying temperatures was studied by gas-liquid chromatography (GLC). The infinite dilution activity coefficients and specific retention volumes obtained in this study were found to be in agreement with those obtained from static headspace and group contribution methods by the authors, as well as with literature values for similar systems. Temperature variation also allows transport calculations for different seasons. The results of this work confirm that PDMS is well suited for the scrubbing of VOCs from waste gas streams. Plots of specific retention volumes against temperature gave linear van't Hoff plots.
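
The linear van't Hoff behaviour referred to above corresponds to the measured specific retention volumes following

```latex
\ln V_g^{0} = A + \frac{B}{T},
```

where A and B are fitted constants and T is the column temperature; the infinite dilution activity coefficients then follow from $V_g^{0}$ at each temperature through the standard GLC relations.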