Abstract: With the growth of computers and networks, digital
data can be spread anywhere in the world quickly. In addition,
digital data can also be copied or tampered with easily, so security
has become an important topic in the protection of digital data.
Digital watermarking is a method to protect the ownership of digital data.
Embedding a watermark inevitably affects image quality. In this
paper, Vector Quantization (VQ) is used to embed the watermark into
the image to achieve data hiding. This kind of watermarking
is invisible: users will not perceive the embedded watermark, even
though the watermarked image differs slightly from the original image.
However, VQ imposes a heavy computational burden, so we adopt a fast
VQ encoding scheme based on partial distortion search (PDS) and a mean
approximation scheme to speed up the data hiding process.
The watermarks hidden in the image can be grayscale, bi-level, or
color images; text can also be embedded as a watermark.
To test the robustness of the system, we use Photoshop to
apply sharpening, cropping, and alteration and check whether the extracted
watermark is still recognizable. Experimental results demonstrate that
the proposed system can resist the above three kinds of tampering in
general cases.
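The abstract does not detail the encoder, but a minimal sketch of how partial distortion search and mean approximation can speed up the nearest-codeword search might look as follows (the function names, the mean-based visiting order, and the squared-error metric are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def pds_encode(block, codebook, codebook_means):
    """Return the index of the nearest codeword to `block`.

    Illustrative sketch only: mean approximation orders the candidate
    codewords, and partial distortion search abandons a candidate as soon
    as its running squared error exceeds the best distortion so far.
    """
    block = np.asarray(block, dtype=float).ravel()
    order = np.argsort(np.abs(codebook_means - block.mean()))
    best_idx = order[0]
    best_dist = np.sum((block - codebook[best_idx]) ** 2)
    for idx in order[1:]:
        dist = 0.0
        for x, c in zip(block, codebook[idx]):
            dist += (x - c) ** 2
            if dist >= best_dist:        # partial distortion: stop early
                break
        else:                            # loop completed: new best codeword
            best_idx, best_dist = idx, dist
    return best_idx

# Usage: a random codebook of 256 codewords for 4x4 (16-sample) blocks.
codebook = np.random.default_rng(0).random((256, 16))
codebook_means = codebook.mean(axis=1)
print(pds_encode(np.random.default_rng(1).random(16), codebook, codebook_means))
```

The early exit is what makes PDS effective: most candidates are rejected after only a few of the sixteen terms have been accumulated.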
Abstract: Heat transfer in leaves is a crucial factor in the optimal
operation of metabolic functions in plants. In order to quantify this
phenomenon in different leaves and investigate the influence of leaf
shape on heat transfer, natural convection around pine, orange, and olive
leaves was simulated as representative of different groups of leaf
shapes. CFD techniques were used in this simulation to calculate the
heat transfer of the leaves under similar environmental conditions.
The problem was simulated for steady-state, three-dimensional
conditions. From the results obtained, it was concluded that the
heat fluxes of the three leaves are almost identical; however, the
total rate of heat transfer is highest for orange leaves and lowest
for pine leaves.
Abstract: Behavioral aspects of experience such as will power
are rarely subjected to quantitative study owing to the numerous
complexities involved. Will is a phenomenon that has puzzled
humanity for a long time. It is commonly believed that an individual's
will power affects the success they achieve in life. It is also thought that a
person endowed with great will power can overcome even the most
crippling setbacks in life, while a person with a weak will cannot make
the most of even the greatest assets. This study is an attempt
to subject the phenomenon of will to the test of an artificial neural
network through a computational model. The claim being tested is
that the will power of an individual largely determines the success achieved
in life. It is proposed that data pertaining to the success of individuals
be obtained from an experiment and that the phenomenon of will be
incorporated into the model through data generated recursively, using
a relation between will and success characteristic of the model.
An artificial neural network trained on part of the data could
subsequently be used to make predictions about data points in
the rest of the model. The procedure would be tried for different
models, and the model whose network predictions are found to
be in greatest agreement with the data would be selected and used
for studying the relation between success and will.
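A minimal sketch of the proposed procedure might look as follows. The recursive relation between will and success used here is an invented placeholder (one candidate model among the several the study would compare), and the network architecture and sample sizes are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical recursive relation: success evolves step by step as a
# function of will, s_{t+1} = s_t + k * w * (1 - s_t) + noise.
def simulate(will, steps=20, k=0.1):
    s = 0.1
    for _ in range(steps):
        s = s + k * will * (1.0 - s) + rng.normal(0, 0.01)
    return s

wills = rng.uniform(0, 1, 500)
success = np.array([simulate(w) for w in wills])

# Train on part of the generated data, predict the rest.
X_train, X_test, y_train, y_test = train_test_split(
    wills.reshape(-1, 1), success, test_size=0.3, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                   random_state=0).fit(X_train, y_train)

# Agreement with the held-out points is the model-selection criterion.
print("R^2 on held-out data:", net.score(X_test, y_test))
```

Under the study's scheme, this score would be computed for each candidate will-success relation, and the relation whose network agrees best with the data would be retained.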
Abstract: A grid is an environment with millions of resources
that are dynamic and heterogeneous in nature. A computational
grid is one in which the resources are computing nodes, and it is meant
for applications that involve large computations. A scheduling
algorithm is said to be efficient if and only if it allocates
resources well even in the case of resource failure. Resource
allocation is a tedious issue since it has to consider several
requirements such as system load, processing cost and time, the user's
deadline, and resource failure. This work attempts to design a
resource allocation algorithm that is cost-effective and also targets
load balancing, fault tolerance, and user satisfaction by considering
the above requirements. The proposed Budget Constrained Load
Balancing Fault Tolerant algorithm with user satisfaction (BLBFT)
reduces the schedule makespan, schedule cost, and task failure rate
and improves resource utilization. The proposed BLBFT algorithm is
evaluated using the GridSim toolkit, and the results are compared
with algorithms that concentrate on these factors separately. The
comparison results show that the proposed algorithm outperforms its
counterparts.
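The BLBFT algorithm itself is not specified in the abstract; the following toy Python sketch only illustrates the general idea of budget-constrained, load-balancing task placement (the `Resource` fields, the greedy ordering, and the cost model are all invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    speed: float       # instructions per second
    cost_rate: float   # monetary cost per second of use
    load: float = 0.0  # accumulated busy time (seconds)

def schedule(tasks, resources, budget):
    """Greedy sketch: place the largest tasks first on the resource that
    finishes them earliest while the total cost stays within budget."""
    plan, spent = [], 0.0
    for length in sorted(tasks, reverse=True):
        feasible = [r for r in resources
                    if spent + (length / r.speed) * r.cost_rate <= budget]
        if not feasible:
            plan.append((length, None))        # task deferred / rejected
            continue
        r = min(feasible, key=lambda r: r.load + length / r.speed)
        r.load += length / r.speed
        spent += (length / r.speed) * r.cost_rate
        plan.append((length, r.name))
    makespan = max(r.load for r in resources)
    return plan, makespan, spent

tasks = [4e9, 2e9, 1e9, 3e9]                   # instruction counts
resources = [Resource("r1", 2e9, 0.5), Resource("r2", 1e9, 0.2)]
print(schedule(tasks, resources, budget=5.0))
```

A real implementation on GridSim would additionally model resource failure and rescheduling, which this sketch omits.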
Abstract: Reflux condensation occurs in vertical channels and tubes when there is an upward core flow of vapour (or gas-vapour mixture) and a downward flow of the liquid film. Understanding this condensation configuration is crucial in the design of reflux condensers and distillation columns, and in loss-of-coolant safety analyses of nuclear power plant steam generators. The unique feature of this flow is the upward flow of the vapour-gas mixture (or pure vapour) that retards the liquid flow via shear at the liquid-mixture interface. The present model solves the full, elliptic governing equations in both the film and the gas-vapour core flow. The computational mesh is non-orthogonal and adapts dynamically to the phase interface, thus producing a sharp and accurate interface. Shear forces and heat and mass transfer at the interface are accounted for fundamentally. This model is a significant advance over current capabilities because it removes the limitations of previous reflux condensation models, which inherently cannot account for the detailed local balances of shear, mass, and heat transfer at the interface. Discretisation is based on the finite volume method with a co-located variable storage scheme. An in-house computer code was developed to implement the numerical solution scheme. Detailed results are presented for laminar reflux condensation from steam-air mixtures flowing in vertical parallel-plate channels. The results include velocity and gas mass fraction profiles, as well as axial variations of film thickness.
Abstract: This paper focuses on the CFD simulation of a radiaxial pump (i.e. mixed flow pump) with the aim of identifying the causes of the Y-Q characteristic instability. The main causes of pressure pulsations were identified through analysis of the velocity and pressure fields within the pump, combined with a theoretical approach. Consequently, modifications of the spiral case and the pump suction area were made based on knowledge of the flow conditions and the shape of the dissipation function. The original pump geometry was used as the base model for comparing the influence of the individual modifications; basic experimental data are available for this geometry. This approach replaced the more complicated compressible-liquid-flow calculation, which is also more difficult with respect to the convergence of all computational tasks. The modification of the original pump consisted of inserting three types of fins. Subsequently, the evaluation of pressure pulsations, specific energy curves, and visualization of the velocity fields were chosen as the criteria for a successful design.
Abstract: Subspace channel estimation methods have been
studied widely; in them, the covariance matrix is decomposed to
separate the signal subspace from the noise subspace. The
decomposition is normally done using either the eigenvalue
decomposition (EVD) or the singular value decomposition (SVD) of
the auto-correlation matrix (ACM). However, the subspace
decomposition process is computationally expensive. This paper
considers the estimation of the multipath slow frequency hopping
(FH) channel using a noise-subspace-based method. In particular, an
efficient method is proposed to estimate the multipath time delays by
applying the multiple signal classification (MUSIC) algorithm to
the null space extracted by the rank-revealing LU (RRLU)
factorization. The RRLU provides precise information about the
numerical null space and the rank, which are important tools in
linear algebra. The simulation results demonstrate the effectiveness
of the proposed method, which approximately halves the computational
complexity compared with RRQR-based methods while keeping the same
performance.
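A numerical sketch of the delay-estimation idea follows. The channel model, subcarrier spacing, and noise level are invented for illustration, and SciPy's plain pivoted LU stands in for the rank-revealing LU of the paper (the trailing, small-pivot part of U is used as the numerical null space):

```python
import numpy as np
from scipy.linalg import lu, solve

rng = np.random.default_rng(1)

# Hypothetical frequency-domain snapshots of a 2-path channel.
n_sub, n_snap, df = 64, 200, 15e3
true_delays = np.array([1.0e-6, 2.5e-6])
f = np.arange(n_sub) * df
A = np.exp(-2j * np.pi * np.outer(f, true_delays))   # delay steering matrix
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
noise = 0.05 * (rng.standard_normal((n_sub, n_snap))
                + 1j * rng.standard_normal((n_sub, n_snap)))
X = A @ S + noise
R = X @ X.conj().T / n_snap                          # auto-correlation matrix

# Pivoted LU as a stand-in for RRLU: small pivots of U reveal the rank.
P, L, U = lu(R)
d = np.abs(np.diag(U))
r = int(np.sum(d > 1e-2 * d.max()))                  # numerical rank
# Numerical null space of U (hence of R): [ -U11^{-1} U12 ; I ].
N = np.vstack([-solve(U[:r, :r], U[:r, r:]), np.eye(n_sub - r)])
En, _ = np.linalg.qr(N)                              # orthonormal noise basis

# MUSIC pseudospectrum over candidate delays.
taus = np.linspace(0, 4e-6, 800)
a = np.exp(-2j * np.pi * np.outer(f, taus))
spectrum = 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)
# A proper peak finder would be used in practice; this just prints the
# two largest pseudospectrum samples.
print("estimated delays:", np.sort(taus[np.argsort(spectrum)[-2:]]))
```

The null-space construction from U is what replaces the EVD/SVD step and is the source of the claimed complexity saving.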
Abstract: Taiwan is a hyperendemic area for the Hepatitis B
virus (HBV). The estimated total number of HBsAg carriers in the
general population who are more than 20 years old is more than 3
million. Therefore, a case record review is conducted from January
2003 to June 2007 for all patients with a diagnosis of acute hepatitis
who were admitted to the Emergency Department (ED) of a
well-known teaching hospital. The cost for the use of medical
resources is defined as the total medical fee. In this study, principal
component analysis (PCA) is first employed to reduce the number of
dimensions. Support vector regression (SVR) and an artificial neural
network (ANN) are then used to develop the forecasting model. A total
of 117 patients meet the inclusion criteria; 61% of the patients involved in
this study are hepatitis B related. The computational results show that
the proposed PCA-SVR model outperforms the other algorithms
compared. In conclusion, the Child-Pugh score and
echogram can both be used to predict the cost of medical resources for
patients with acute hepatitis in the ED.
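A minimal sketch of a PCA-SVR pipeline of the kind described might look as follows. The clinical dataset is not public, so random placeholder features stand in for the patient records, and the number of retained components and SVR hyperparameters are assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# Placeholder data: rows = patients, columns = clinical features such as
# Child-Pugh score components and echogram findings.
rng = np.random.default_rng(0)
X = rng.normal(size=(117, 10))
y = 5000 + 800 * X[:, 0] + 300 * X[:, 1] + rng.normal(0, 100, 117)  # fee

pca_svr = make_pipeline(StandardScaler(),
                        PCA(n_components=3),        # dimension reduction
                        SVR(kernel="rbf", C=10.0, epsilon=0.1))
scores = cross_val_score(pca_svr, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```

Chaining the scaler, PCA, and SVR into one pipeline keeps the dimensionality reduction inside the cross-validation loop, avoiding information leakage.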
Abstract: In this paper, a numerical simulation of a finned store
separating from a wing-pylon configuration is presented and
validated. A dynamic unstructured tetrahedral mesh approach is
employed, using three grid sizes, to numerically solve the
discretized three-dimensional, inviscid, compressible Euler
equations. The separation of the external store is computed
assuming quasi-steady flow conditions. The quasi-steady flow
computations are directly coupled to a six degree-of-freedom
(6DOF) rigid-body motion code to generate store
trajectories. The pressure coefficients at four different angular cuts,
the time histories of various trajectory parameters, and the wing
pressure distribution during the store separation are compared for
every grid size with published experimental data.
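The coupling loop described above can be sketched as follows. This is a translational-only skeleton (a full 6DOF code also integrates Euler's rotational equations with the inertia tensor), and `aero_loads` is a toy stand-in for the call into the quasi-steady Euler solver; the mass, time step, and drag constants are invented:

```python
import numpy as np

def aero_loads(pos, vel):
    """Placeholder for the quasi-steady flow solution at the current
    store position: returns a toy quadratic drag force."""
    return -0.5 * 1.2 * 0.05 * np.linalg.norm(vel) * vel

def integrate_trajectory(mass=500.0, dt=0.01, steps=100):
    g = np.array([0.0, 0.0, -9.81])
    pos, vel = np.zeros(3), np.zeros(3)
    history = []
    for _ in range(steps):
        force = aero_loads(pos, vel) + mass * g   # quasi-steady loads
        vel = vel + dt * force / mass             # explicit Euler step
        pos = pos + dt * vel
        history.append(pos.copy())
    return np.array(history)

traj = integrate_trajectory()
print("store drop after 1 s: %.2f m" % traj[-1, 2])
```

In the actual method, each call to the flow solver would also trigger remeshing of the dynamic unstructured grid around the displaced store.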
Abstract: In the current work, a three-dimensional geometry of a
75% stenosed blood vessel is analyzed. Large eddy simulation (LES)
with the help of a dynamic subgrid scale Smagorinsky model is
applied to model the turbulent pulsatile flow. The geometry, the
transmural pressure and the properties of the blood and the elastic
boundary were based on clinical measurement data. For the flexible
wall model, a thin solid region is constructed around the 75%
stenosed blood vessel. The deformation of this solid region was
modelled as a deforming boundary to reduce the computational cost
of the solid model. Fluid-structure interaction is realized via a
two-way coupling between the blood flow modelled via LES and the
deforming vessel. Information on the flow pressure and the wall
motion was exchanged continually during the cycle via an arbitrary
Lagrangian-Eulerian method. The boundary condition at the current time
step depended on the previous solutions. The fluctuation of the velocity
in the post-stenotic region was analyzed in the study. The axial
velocity at normalized position Z=0.5 shows a negative value near
the vessel wall. The displacement of the elastic boundary was also
examined; in particular, the wall displacements at systole and
diastole were compared. The negative displacement at
the stenosis indicates a collapse during the maximum-velocity and
deceleration phases.
Abstract: Incineration of municipal solid waste (MSW) is one of
the key areas in the global clean energy strategy. A computational
fluid dynamics (CFD) model was established to reveal the
features of the combustion process in a fixed porous bed of MSW.
Transport equations and process rate equations for the waste bed
were set up to describe the incineration process
according to the local thermal conditions and waste
properties. Gas-phase turbulence was modeled using the k-ε turbulence
model, and the particle phase was modeled using the kinetic theory of
granular flow. The heterogeneous reaction rates were determined
using the Arrhenius-eddy dissipation and Arrhenius-diffusion
reaction rates. The effects of the primary air flow rate and temperature
on the burning process of simulated MSW were investigated
experimentally and numerically. The in-bed simulation results
agree well with the experimental data. The model provides detailed
information on the burning processes in the fixed bed, which is otherwise
very difficult to obtain by conventional experimental techniques.
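The Arrhenius-eddy dissipation treatment takes the slower of the kinetic and turbulent-mixing rates as the effective reaction rate. A small sketch of that selection follows; the pre-exponential factor, activation energy, and mixing constants are illustrative values, not the paper's fitted parameters:

```python
import numpy as np

R_GAS = 8.314  # J/(mol K)

def arrhenius_rate(T, A=1.0e8, Ea=1.2e5):
    """Kinetic (Arrhenius) rate, k = A * exp(-Ea / (R T))."""
    return A * np.exp(-Ea / (R_GAS * T))

def eddy_dissipation_rate(k_turb, eps, Y_fuel, Y_ox, s=4.0, A_ebu=4.0):
    """Mixing-limited rate ~ A * (eps/k) * min(Y_fuel, Y_ox / s)."""
    return A_ebu * (eps / k_turb) * min(Y_fuel, Y_ox / s)

# Effective rate = min(kinetics, mixing), as in combined
# Arrhenius / eddy-dissipation models.
T = 1200.0
r_kin = arrhenius_rate(T)
r_mix = eddy_dissipation_rate(k_turb=1.0, eps=5.0, Y_fuel=0.05, Y_ox=0.15)
print(f"kinetic={r_kin:.3g}, mixing={r_mix:.3g}, "
      f"effective={min(r_kin, r_mix):.3g}")
```

At low bed temperatures the Arrhenius term limits the rate; in well-stirred hot regions the eddy-dissipation (mixing) term takes over.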
Abstract: In this study, a computational fluid dynamics (CFD)
model has been developed for studying the effect of the surface
roughness profile on the elastohydrodynamic lubrication (EHL)
problem. The cylinder contact geometry, the meshing, and the solution
of the mass and momentum conservation equations are carried out
using the commercial software packages ICEM CFD and ANSYS Fluent.
User-defined functions (UDFs) for the density, viscosity, and elastic
deformation of the cylinders as functions of pressure and temperature
are defined for the CFD model. Three different surface roughness
profiles are created and incorporated into the CFD model. It is found
that the developed CFD model can predict the characteristics of fluid
flow and heat transfer in the EHL problem, including the main
parameters such as the pressure distribution, minimum film thickness,
and viscosity and density changes. The results obtained show that the
pressure profile at the center of the contact area is directly related
to the roughness amplitude. A rough surface with a kurtosis value
greater than 3 has a stronger influence on the fluctuating shape of
the pressure distribution than the other cases.
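The abstract does not give the UDF expressions; a sketch of the kind of pressure/temperature dependence such UDFs typically encode is shown below. The Barus and Dowson-Higginson relations are standard EHL correlations, but the constants and the temperature corrections here are illustrative assumptions:

```python
import numpy as np

ETA0 = 0.08      # Pa.s, reference viscosity (assumed)
ALPHA = 2.2e-8   # 1/Pa, pressure-viscosity coefficient (assumed)
BETA = 0.03      # 1/K, temperature-viscosity coefficient (assumed)
RHO0 = 870.0     # kg/m^3, reference density (assumed)
T0 = 313.0       # K, reference temperature

def viscosity(p, T):
    """Barus-type law with an exponential temperature correction."""
    return ETA0 * np.exp(ALPHA * p - BETA * (T - T0))

def density(p, T, eps_t=6.5e-4):
    """Dowson-Higginson pressure term with a linear thermal term."""
    return RHO0 * (1.0 + 0.6e-9 * p / (1.0 + 1.7e-9 * p)) \
                * (1.0 - eps_t * (T - T0))

p, T = 0.5e9, 330.0   # 0.5 GPa, 330 K
print(f"eta = {viscosity(p, T):.3f} Pa.s, rho = {density(p, T):.1f} kg/m^3")
```

In ANSYS Fluent these relations would be compiled as C UDFs hooked to the material properties; the Python form above only illustrates the functional shape.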
Abstract: This article considers the problem of optimizing the
technological process of water treatment for thermal power plants.
The problem is multiparametric in nature.
To optimize the process, namely to reduce the amount of wastewater, a
new technology for reusing such water was developed, and a mathematical
model of this wastewater reuse technology was constructed.
The optimization parameters were determined. The model consists of a
material balance equation, an equation describing the kinetics of ion
exchange in the non-equilibrium case, and an equation for the ion
exchange isotherm. The material balance equation includes a
nonlinear term that depends on the kinetics of ion exchange. The direct
problem of calculating the impurity concentration at the outlet of the
water treatment plant was solved numerically; it was approximated by
an implicit point-to-point difference scheme. The inverse problem was
formulated as the determination of the parameters of the mathematical
model of the water treatment plant operating under non-equilibrium
conditions, and this inverse problem was solved. From the results of
the calculation, the start time of the filter regeneration process was
determined, as well as the duration of the regeneration process and the
amount of regeneration and wash water. Multi-parameter
optimization of the water treatment process for thermal power plants
decreased the amount of wastewater by 15%.
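A minimal sketch of an implicit difference scheme for a material-balance equation of this general form is given below. The governing equation assumed is dC/dt + v dC/dx = -G(C, q) with linear non-equilibrium kinetics dq/dt = beta (K C - q); the paper's actual isotherm, kinetic law, and parameter values are not reproduced, so all constants are illustrative:

```python
import numpy as np

nx, nt = 100, 500
L_col, t_end = 1.0, 50.0
dx, dt = L_col / nx, t_end / nt
v, beta, K = 0.05, 0.2, 2.0          # flow speed, kinetic rate, isotherm slope

C = np.zeros(nx)                      # liquid-phase concentration in the filter
q = np.zeros(nx)                      # solid-phase (resin) loading
C_in = 1.0                            # inlet impurity concentration

for _ in range(nt):
    # Semi-implicit update of the kinetics: q_new = q + dt*beta*(K*C - q_new).
    q_new = (q + dt * beta * K * C) / (1.0 + dt * beta)
    G = beta * (K * C - q_new)        # sink term in the material balance
    C_new = np.empty_like(C)
    C_new[0] = C_in
    for i in range(1, nx):            # implicit upwind, point-to-point sweep
        C_new[i] = (C[i] + v * dt / dx * C_new[i - 1] - dt * G[i]) \
                   / (1.0 + v * dt / dx)
    C, q = C_new, q_new

print("outlet impurity concentration:", C[-1])
```

Monitoring the outlet concentration against a breakthrough threshold is what would determine the start time of filter regeneration in such a model.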
Abstract: Data mining is rapidly growing in recognition and
popularity. The foremost aim of data mining methods is to extract
information from a huge data set into forms that can be understood
for further use. Data mining is a technology with rich potential
that can support industries and businesses seeking to collect the
necessary information from their data to understand their customers'
behavior. Several methods are available for extracting data, such as
classification, clustering, association, discovery, and visualization,
each with its own diverse algorithms for fitting an appropriate model
to the data. STATISTICA mostly deals with very large groups of data
that impose rigorous computational demands, and these challenges led
to the emergence of the powerful STATISTICA Data Mining technologies.
This survey presents an overview of the STATISTICA software along
with its significant features.
Abstract: Two finite element (FEM) models are presented in
this paper to address the random nature of the response of glued
timber structures made of wood segments with variable elastic
moduli evaluated from 3600 indentation measurements. This total
database served to create the same number of ensembles as was the
number of segments in the tested beam. Statistics of these ensembles
were then assigned to given segments of beams and the Latin
Hypercube Sampling (LHS) method was used to perform 100
simulations, resulting in an ensemble of 100 deflections subjected
to statistical evaluation. Here, a detailed geometrical arrangement of
the individual segments in the laminated beam was considered in the
construction of a two-dimensional FEM model subjected to four-point
bending to comply with the laboratory tests. Since laboratory
measurements of local elastic moduli may in general suffer from a
significant experimental error, it appears advantageous to exploit the
full scale measurements of timber beams, i.e. deflections, to improve
their prior distributions with the help of the Bayesian statistical
method. This, however, requires an efficient computational model
when simulating the laboratory tests numerically. To this end, a
simplified model based on Mindlin’s beam theory was established.
The improved posterior distributions show that the most significant
change of the Young’s modulus distribution takes place in laminae in
the most strained zones, i.e. in the top and bottom layers within the
beam center region. Posterior distributions of moduli of elasticity
were subsequently utilized in the 2D FEM model and compared with
the original simulations.
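The LHS step of such a workflow can be sketched as follows. The number of segments and the lognormal statistics of the moduli are placeholders for the values evaluated from the indentation database:

```python
import numpy as np
from scipy.stats import qmc, lognorm

# Draw 100 LHS realizations of the elastic moduli of the beam segments.
n_segments, n_sim = 12, 100
mean_E, cov_E = 11.0e9, 0.15                 # Pa, coefficient of variation

sampler = qmc.LatinHypercube(d=n_segments, seed=0)
u = sampler.random(n=n_sim)                  # stratified uniforms in [0, 1)

# Map the uniforms through a lognormal marginal for each segment.
sigma = np.sqrt(np.log(1.0 + cov_E ** 2))
mu = np.log(mean_E) - 0.5 * sigma ** 2
E = lognorm.ppf(u, s=sigma, scale=np.exp(mu))  # shape (100, 12)

# Each row of E would feed one FEM run; the resulting 100 deflections form
# the ensemble that is evaluated statistically.
print("sampled E range [GPa]:", E.min() / 1e9, E.max() / 1e9)
```

The stratification of LHS is what allows a statistically meaningful ensemble from only 100 FEM runs instead of a much larger plain Monte Carlo sample.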
Abstract: A Markov decision process (MDP) based
methodology is implemented to establish the optimal
schedule that minimizes the cost. The formulation of the MDP problem
is presented using information about the current state of the pipe,
the improvement cost, the failure cost, and a pipe deterioration model.
The objective function and the detailed algorithm of dynamic programming
(DP) are modified owing to the difficulty of implementing the
conventional DP approaches. The optimal schedule derived from the
suggested model is compared with several policies via Monte
Carlo simulation. The validity of the solution and the improvement in
computational time are demonstrated.
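A toy version of such an MDP can be solved by value iteration, as sketched below. The four condition states, the transition matrices, and the costs are invented for illustration; the paper's deterioration model and modified DP algorithm are not reproduced:

```python
import numpy as np

# States 0 (new) .. 3 (failed); actions "keep" and "replace".
P = {
    "keep": np.array([[0.8, 0.2, 0.0, 0.0],
                      [0.0, 0.7, 0.3, 0.0],
                      [0.0, 0.0, 0.6, 0.4],
                      [0.0, 0.0, 0.0, 1.0]]),
    "replace": np.tile([1.0, 0.0, 0.0, 0.0], (4, 1)),  # back to state 0
}
cost = {"keep": np.array([0.0, 1.0, 3.0, 20.0]),       # failure is costly
        "replace": np.full(4, 8.0)}                    # replacement cost
gamma = 0.95                                           # discount factor

V = np.zeros(4)
for _ in range(500):                                   # value iteration
    Q = {a: cost[a] + gamma * P[a] @ V for a in P}
    V = np.minimum(Q["keep"], Q["replace"])

policy = ["keep" if Q["keep"][s] <= Q["replace"][s] else "replace"
          for s in range(4)]
print("optimal action by state:", policy)
```

The resulting state-dependent policy is the kind of schedule that would then be benchmarked against fixed-interval policies via Monte Carlo simulation.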
Abstract: In this paper, an explicit homotopy is
constructed to compute the Hochschild homology of a finite-dimensional
free k-module V. Since the polynomial algebra is
fundamental in the computation of the Hochschild homology
HH and the cyclic homology CH of commutative algebras, we
concentrate on computing the HH of the polynomial algebra by
providing a certain homotopy.
Abstract: The building sector is responsible, in many
industrialized countries, for about 40% of total energy
requirements, so it seems necessary to devote some effort to this
area in order to achieve a significant reduction of energy
consumption and greenhouse gas emissions.
The paper presents a study aimed at providing a design
methodology able to identify the best configuration of the
building/plant system from a technical, economic, and environmental
point of view.
Normally, the classical approach involves analyzing a building's
energy loads under steady-state conditions and subsequently selecting
measures aimed at improving the energy performance, based on the
previous experience of the architects and engineers in the design
team. Instead, the proposed approach uses a sequence of two well-known,
scientifically validated calculation methods (TRNSYS and
RETScreen) that allow quite a detailed feasibility analysis.
To assess the validity of the calculation model, an existing
historical building in Central Italy, which will be the object of
restoration and preservative redevelopment, was selected as a case
study. The building consists of a basement and three floors, with a
total floor area of about 3,000 square meters.
The first step was the determination of the heating and
cooling energy loads of the building in a dynamic regime by means of
TRNSYS, which allows simulating the real energy needs of the building
as a function of its use. Traditional methodologies, based as they are
on steady-state conditions, cannot faithfully reproduce the effects of
varying climatic conditions and of the inertial properties of the
structure. With this model it is possible to obtain quite accurate and
reliable results that allow identifying effective building-HVAC system
combinations.
The second step consisted of using the output data obtained as
input to the second calculation model (RETScreen), which enables
comparing different system configurations from the energy,
environmental, and financial points of view, with an analysis of
investment, operation, and maintenance costs, thus allowing the
economic benefit of possible interventions to be determined.
The classical methodology often leads to the choice of
conventional plant systems, while our calculation model provides a
financial and economic assessment of innovative,
low-environmental-impact energy systems.
Computational analysis can help in the design phase, particularly
in the case of complex structures with centralized plant systems, by
comparing the data returned by the calculation model for different
design options.
Abstract: The detection of moving objects from video image
sequences is very important for object tracking, activity recognition,
and behavior understanding in video surveillance.
The most widely used approach for moving object detection and tracking
is background subtraction. Many background subtraction approaches have
been suggested, but they are sensitive to illumination changes, and the
solutions proposed to bypass this problem are time-consuming.
In this paper, we propose a robust yet computationally efficient
background subtraction approach, focusing mainly on the ability to
detect moving objects in dynamic scenes, for possible applications in
monitoring complex and restricted-access areas, where moving and
motionless persons must be reliably detected. It consists of three
main phases: establishing invariance to illumination changes,
background/foreground modeling, and morphological analysis for
noise removal.
We handle illumination changes using Contrast Limited Adaptive
Histogram Equalization (CLAHE), which limits the intensity of each
pixel to a user-determined maximum. Thus, it mitigates the degradation
due to
scene illumination changes and improves the visibility of the video
signal. Initially, the background and foreground images are extracted
from the video sequence. Then, the background and foreground
images are separately enhanced by applying CLAHE.
To form multi-modal backgrounds, we model each channel of a pixel as
a mixture of K Gaussians (K=5) using a Gaussian Mixture Model (GMM).
Finally, we post-process the resulting binary foreground mask using
morphological erosion and dilation transformations to remove possible
noise.
For experimental testing, we used a standard dataset to evaluate the
efficiency and accuracy of the proposed method on a diverse set of
dynamic scenes.
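A minimal OpenCV sketch of this pipeline (CLAHE on the luminance channel, a K=5 GMM background model, and morphological cleanup) might look as follows. The file name and all parameter values are placeholders, and OpenCV's MOG2 subtractor stands in for the paper's own GMM implementation:

```python
import cv2

cap = cv2.VideoCapture("scene.avi")                       # placeholder input
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
mog2 = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
mog2.setNMixtures(5)                                      # K = 5 Gaussians
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Equalize the luminance channel only, then restore color.
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    lab[:, :, 0] = clahe.apply(lab[:, :, 0])
    frame_eq = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

    mask = mog2.apply(frame_eq)                           # GMM foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_ERODE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_DILATE, kernel)
    cv2.imshow("foreground", mask)
    if cv2.waitKey(30) == 27:                             # Esc to quit
        break
cap.release()
```

Applying CLAHE before the GMM update is the step that gives the subtractor its robustness to illumination changes; the erosion/dilation pair removes isolated false-positive pixels while preserving object silhouettes.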
Abstract: Power systems operate under stressed conditions
due to the continuous increase in load demand. This can lead to
voltage instability when the system faces an additional load increase
or a contingency. To avoid voltage instability, suitably sized
reactive power compensation at the optimal location in the system is
required, which improves the load margin. This work aims at
obtaining the optimal size as well as the location of compensation in
the 39-bus New England system with the help of the Bacteria Foraging
Optimization Algorithm (BFOA) and the Genetic Algorithm (GA). To
reduce the computational time, the work identifies weak candidate
buses in the system and then picks only two of them to take part in
the optimization. The objective function is based on a recently
proposed voltage stability index that takes into account the weighted
average sensitivity index; this is a simpler and faster approach than
the conventional continuation power flow (CPF) algorithm. BFOA has
been found to give better results than GA.
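Neither algorithm's implementation details are given in the abstract; a toy GA sketch for sizing the compensation at the two candidate buses could look like this, with a made-up smooth fitness function standing in for the sensitivity-weighted stability index that would really be evaluated through a power-flow solution:

```python
import random

def fitness(q1, q2):
    """Hypothetical surrogate objective: best load margin near
    (q1, q2) = (40, 60) MVAr. The real objective would come from a
    power-flow-based voltage stability index."""
    return -((q1 - 40.0) ** 2 + (q2 - 60.0) ** 2)

def genetic_search(pop_size=30, generations=100, q_max=100.0):
    pop = [(random.uniform(0, q_max), random.uniform(0, q_max))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(*ind), reverse=True)
        parents = pop[: pop_size // 2]                    # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # arithmetic crossover
            if random.random() < 0.2:                     # Gaussian mutation
                i = random.randrange(2)
                child[i] = min(q_max, max(0.0, child[i] + random.gauss(0, 5)))
            children.append(tuple(child))
        pop = parents + children
    return max(pop, key=lambda ind: fitness(*ind))

print("best (Q1, Q2) in MVAr:", genetic_search())
```

BFOA would explore the same two-variable search space with chemotaxis, reproduction, and elimination-dispersal steps instead of crossover and mutation, which is where the reported performance difference arises.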