Abstract: The recent growth of multimedia transmission over
wireless communication systems poses the challenge of protecting
data from loss caused by wireless channel effects. Images
transmitted over a wireless channel are corrupted by noise and
fading. Because the image is transmitted block by block, severe
fading can damage entire image blocks. The aim of this paper
arises from the need to enhance digital images at the wireless
receiver side. A Boundary Interpolation (BI) algorithm using
wavelets is adapted here to reconstruct a lost block of the image
at the receiver, based on the correlation between the lost block
and its neighbors. A new technique that combines the wavelet-based
BI algorithm with a pixel interleaver has also been implemented.
The pixel interleaver distributes the pixels of the original image
to new positions before transmission, so a block lost in the
wireless channel affects only scattered individual pixels. The
lost pixels can then be recovered at the receiver using the
wavelet-based BI algorithm. The results show that the proposed
wavelet-based BI algorithm with a pixel interleaver performs
better in terms of MSE and PSNR.
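The pixel-interleaving step can be sketched as follows. This is an illustrative Python sketch only: the paper does not specify the interleaving pattern, so the seeded pseudorandom permutation used here is an assumption.

```python
import numpy as np

def interleave(img, seed=0):
    """Scatter the pixels of `img` to new positions using a seeded
    pseudorandom permutation known to both transmitter and receiver.
    (The actual interleaving pattern is an assumption here.)"""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(img.size)
    return img.ravel()[perm].reshape(img.shape), perm

def deinterleave(img, perm):
    """Restore the original pixel positions at the receiver."""
    flat = np.empty(img.size, dtype=img.dtype)
    flat[perm] = img.ravel()
    return flat.reshape(img.shape)

img = np.arange(1, 65, dtype=np.uint8).reshape(8, 8)
tx, perm = interleave(img)
tx_lost = tx.copy()
tx_lost[:4, :4] = 0              # simulate the loss of one 4x4 block
rx = deinterleave(tx_lost, perm)
# After de-interleaving, the 16 lost pixels are scattered across the
# image instead of forming one damaged block, so each can be
# interpolated from intact neighbors.
print(int((rx != img).sum()))    # 16
```

Because the damaged pixels end up isolated, each retains intact neighbors from which a boundary-interpolation scheme can estimate it.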
Abstract: In applications such as telecommunications, hands-free communication, and recording, which require at least one microphone, the captured signal is usually contaminated by noise and echo. An important application is speech enhancement, which aims to suppress the noise and echo picked up by a microphone alongside the desired speech. Accordingly, the microphone signal has to be cleaned using digital signal processing (DSP) tools before it is played out, transmitted, or stored. Engineers have so far tried different approaches to improving speech quality by recovering the desired speech signal from noisy observations, especially in mobile communication. In this paper, we reconstruct a speech signal observed in additive background noise using the Kalman filter technique to estimate the parameters of an autoregressive (AR) process in a state-space model; the output speech signal is obtained in MATLAB. Accurate Kalman-filter estimation enhances the speech and reduces the noise; we then compare and discuss the results for the actual and estimated values that produce the reconstructed signals.
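As a minimal illustration (not the paper's MATLAB implementation), the Kalman recursion for a speech-like AR(1) signal in additive noise can be sketched in Python. The AR coefficient and noise variances are assumed known here, whereas the paper estimates the AR parameters within the state-space model.

```python
import numpy as np

# Minimal sketch: Kalman filtering of an AR(1) signal
# x[t] = a*x[t-1] + w[t], observed as y[t] = x[t] + v[t].
rng = np.random.default_rng(1)
a, q, r = 0.95, 0.1, 1.0          # AR coefficient, process/measurement var
n = 2000
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + rng.normal(scale=np.sqrt(q))
y = x + rng.normal(scale=np.sqrt(r), size=n)   # noisy observation

xhat, p = 0.0, 1.0                # state estimate and its variance
est = np.empty(n)
for t in range(n):
    xhat, p = a * xhat, a * a * p + q          # predict
    k = p / (p + r)                            # Kalman gain
    xhat += k * (y[t] - xhat)                  # update with y[t]
    p *= 1 - k
    est[t] = xhat

mse_noisy = np.mean((y - x) ** 2)
mse_kalman = np.mean((est - x) ** 2)
print(mse_kalman < mse_noisy)     # filtering reduces the error
```

The filtered estimate has a markedly lower mean-squared error than the raw noisy observation, which is the effect the paper quantifies for reconstructed speech.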
Abstract: Optical 3D measurement of objects is meaningful in
numerous industrial applications. In various cases shape acquisition
of weakly textured objects is essential. Examples are series-produced parts
made of plastic or ceramic such as housing parts or ceramic bottles as
well as agricultural products like tubers. These parts are often
conveyed in a wobbling way during the automated optical inspection.
Thus, conventional 3D shape acquisition methods like laser scanning
might fail. In this paper, a novel approach for acquiring 3D shape of
weakly textured and moving objects is presented. To facilitate such
measurements an active stereo vision system with structured light is
proposed. The system consists of multiple camera pairs and auxiliary
laser pattern generators. It performs the shape acquisition within one
shot and is beneficial for rapid inspection tasks. An experimental
setup including hardware and software has been developed and
implemented.
Abstract: This work explores blind image deconvolution by recursive function approximation based on supervised learning of neural networks, under the assumption that a degraded image is the linear convolution of an original source image with a linear shift-invariant (LSI) blurring matrix. Supervised learning of radial basis function (RBF) neural networks is employed to construct an embedded recursive function within a blurred image, to extract the non-deterministic components of the original source image, and to use them to estimate the hyperparameters of a linear image degradation model. Based on the estimated blurring matrix, the reconstruction of the original source image from the blurred image is then resolved by an annealed Hopfield neural network. Numerical simulations show that the proposed method is effective for faithful estimation of an unknown blurring matrix and restoration of an original source image.
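The supervised RBF learning building block can be illustrated with a toy regression sketch. The function names (`rbf_fit`, `rbf_eval`), the centers, and the width are illustrative assumptions; the paper embeds such an approximator recursively within the blurred image.

```python
import numpy as np

def rbf_fit(centers, x, y, width=1.0):
    """Least-squares fit of Gaussian RBF weights to data (x, y)."""
    phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return w

def rbf_eval(centers, w, x, width=1.0):
    """Evaluate the fitted RBF expansion at points x."""
    phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
    return phi @ w

x = np.linspace(-3, 3, 50)
y = np.sin(x)                       # toy target function
centers = np.linspace(-3, 3, 10)    # RBF centers (an assumption)
w = rbf_fit(centers, x, y)
err = np.max(np.abs(rbf_eval(centers, w, x) - y))
print(err)                          # small fitting error
```

A smooth target is reproduced closely by a modest number of Gaussian basis functions, which is what makes RBF networks suitable for function approximation inside such a pipeline.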
Abstract: During the last years, the genomes of more and more
species have been sequenced, providing data for phylogenetic
reconstruction based on genome rearrangement measures. A main task
in all phylogenetic reconstruction algorithms is to solve the
median-of-three problem. Although this problem is NP-hard even for
the simplest distance measures, there are exact algorithms for the
breakpoint median and the reversal median that are fast enough for
practical use. In this paper, this approach is extended to the
transposition median as well as to the weighted reversal and
transposition median. Although no exact polynomial algorithm is
known even for the pairwise distances, we show that in most cases
these problems can be solved exactly within reasonable time using
a branch-and-bound algorithm.
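For illustration, the simplest of these measures, the breakpoint distance, can be sketched for unsigned permutations. This is only a toy version: the paper works with signed genomes and with reversal and transposition distances.

```python
def adjacencies(perm):
    """Unordered adjacencies of a permutation of 1..n, with the ends
    capped by the artificial genes 0 and n+1."""
    ext = [0] + list(perm) + [len(perm) + 1]
    return {frozenset(p) for p in zip(ext, ext[1:])}

def breakpoint_distance(a, b):
    """Number of adjacencies of `a` that are broken in `b`."""
    return len(adjacencies(a) - adjacencies(b))

print(breakpoint_distance([1, 2, 3], [1, 2, 3]))  # 0
print(breakpoint_distance([2, 1, 3], [1, 2, 3]))  # 2
```

A median-of-three solver searches for a genome minimizing the sum of such pairwise distances to the three input genomes, which is where the branch-and-bound approach comes in.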
Abstract: This paper focuses on testing the database of an
existing information system. At the beginning, we describe the
basic problems of implemented databases, such as data redundancy,
poor design of the database's logical structure, or inappropriate
data types in the columns of database tables. These problems are
often the result of an incorrect understanding of the primary
requirements for the database of an information system. We then
propose an algorithm to compare the conceptual model created from
vague requirements for a database with a conceptual model
reconstructed from the implemented database. The algorithm also
suggests steps leading to the optimization of the implemented
database. The proposed algorithm is verified by an implemented
prototype. The paper also describes a fuzzy system that works with
the vague requirements for the database of an information system,
a procedure for creating a conceptual model from vague
requirements, and an algorithm for reconstructing a conceptual
model from an implemented database.
Abstract: This paper proposes a novel stereo vision technique
for top view book scanners which provide us with dense 3d point
clouds of page surfaces. This is a precondition to dewarp bound
volumes independent of 2d information on the page. Our method is
based on algorithms, which normally require the projection of pattern
sequences with structured light. We use image sequences of the
moving stripe lighting of the top view scanner instead of an additional
light projection. Thus the stereo vision setup is simplified without
losing measurement accuracy. Furthermore, we improve a surface
model dewarping method by introducing a difference vector based on
real measurements. Although our proposed method is inexpensive in
both calculation time and hardware requirements, we present good
dewarping results even for difficult examples.
Abstract: Crucial information barely visible to the human eye is
often embedded in a series of low resolution images taken of the
same scene. Super resolution reconstruction is the process of
combining several low resolution images into a single higher
resolution image. The ideal algorithm should be fast, and should add
sharpness and details, both at edges and in regions without adding
artifacts. In this paper we propose a super resolution blind
reconstruction technique for linearly degraded images. In our
proposed technique the algorithm is divided into three parts:
image registration, wavelet-based fusion, and image restoration.
Three low resolution images are considered, which may be sub-pixel
shifted, rotated, blurred, or noisy. The sub-pixel shifted images
are registered using an affine transformation model, a
wavelet-based fusion is performed, and the noise is removed using
soft thresholding. Our proposed technique reduces blocking
artifacts, smooths the edges, and is also able to restore high
frequency details in an image. Our technique is efficient and
computationally fast, with a clear prospect of real-time
implementation.
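The soft-thresholding denoising step mentioned above can be sketched in a few lines; the threshold value here is illustrative, not the paper's choice.

```python
import numpy as np

def soft_threshold(c, t):
    """Shrink wavelet coefficients toward zero by t, zeroing those
    with magnitude below t (the classic soft-thresholding rule)."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

coeffs = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])
den = soft_threshold(coeffs, 1.0)
print(den)
```

With threshold 1.0, the coefficients become -2, 0, 0, 0.5 and 3: small (noise-dominated) coefficients are zeroed while large ones are uniformly shrunk, which suppresses noise without the ringing of hard thresholding.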
Abstract: In this paper we consider quantum motion integrals that
depend on the algebraic reconstruction of the BPHZ method for
perturbative renormalization in two different procedures. Then,
based on the Bogoliubov character and the
Baker-Campbell-Hausdorff (BCH) formula, we show how the motion
integral condition on the components of the Birkhoff factorization
of a Feynman rules character on the Connes-Kreimer Hopf algebra of
rooted trees can determine a family of fixed point equations.
Abstract: Imprecision is a long-standing problem in CAD design
and high accuracy image-based reconstruction applications. The visual
hull which is the closed silhouette equivalent shape of the objects
of interest is an important concept in image-based reconstruction.
We extend the domain-theoretic framework, which is a robust and
imprecision capturing geometric model, to analyze the imprecision in
the output shape when the input vertices are given with imprecision.
Under this framework, we present an efficient algorithm to generate the
2D partial visual hull which represents the exact information of the
visual hull with only basic imprecision assumptions. We also show
how the visual hull from polyhedra problem can be efficiently solved
in the context of imprecise input.
Abstract: In this paper, the implementation of low-power,
high-throughput convolutional filters for the one-dimensional
Discrete Wavelet Transform (DWT) and its inverse is presented. The
analysis filters have already been used for the implementation of a
high performance DWT encoder [15] with minimum memory
requirements for the JPEG 2000 standard. This paper presents the
design techniques and the implementation of the convolutional filters
included in the JPEG2000 standard for the forward and inverse DWT
for achieving low-power operation, high performance and reduced
memory accesses. Moreover, they have the ability of performing
progressive computations so as to minimize the buffering between
the decomposition and reconstruction phases. The experimental
results illustrate the filters' low-power, high-throughput
characteristics as well as their memory-efficient operation.
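The analysis/synthesis filter-bank structure underlying such a design can be sketched with the Haar pair for brevity; the JPEG 2000 9/7 and 5/3 filter pairs used in the paper are applied in the same fashion.

```python
import numpy as np

# One level of the 1D DWT and its inverse, using the Haar filter
# pair as a minimal stand-in for the JPEG 2000 filters.
def dwt1(x):
    x = np.asarray(x, dtype=float)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass + downsample
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass + downsample
    return lo, hi

def idwt1(lo, hi):
    x = np.empty(2 * lo.size)
    x[0::2] = (lo + hi) / np.sqrt(2)        # upsample + synthesis
    x[1::2] = (lo - hi) / np.sqrt(2)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
lo, hi = dwt1(x)
print(np.allclose(idwt1(lo, hi), x))        # perfect reconstruction
```

Because each output sample depends on only a short window of inputs, decomposition and reconstruction can be computed progressively, which is what allows the buffering between the two phases to be minimized.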
Abstract: Myocardial scintigraphy is an imaging modality that provides functional information, whereas coronarography provides useful information about coronary artery anatomy. In the case of coronary artery disease (CAD), coronarography cannot determine precisely which moderate lesions (artery reduction between 50% and 70%), known as the "gray zone", are haemodynamically significant. In this paper, we aim to define the relationship between the location and degree of stenosis in the coronary arteries and the perfusion observed on myocardial scintigraphy. This allows us to model the evolution of the impact of these stenoses in order to justify or avoid a coronarography for patients suspected of being in the gray zone. Our approach is decomposed into two steps. The first step consists in modelling a coronary artery bed and stenoses of different locations and degrees. The second step consists in modelling the left ventricle at stress and at rest using the spherical harmonics model and myocardial scintigraphic data. We use the spherical harmonics descriptors to analyse the deformation of the left ventricle model between stress and rest, which allows us to conclude whether an ischemia exists and to quantify it.
Abstract: Three-dimensional reconstruction of small objects has
been one of the most challenging problems over the last decade.
Computer graphics researchers and photography professionals have
been working on improving 3D reconstruction algorithms to fit the
high demands of various real life applications. Medical sciences,
animation industry, virtual reality, pattern recognition, tourism
industry, and reverse engineering are common fields where 3D
reconstruction of objects plays a vital role. Both lack of accuracy and
high computational cost are the major challenges facing successful
3D reconstruction. Fringe projection has emerged as a promising 3D
reconstruction direction that combines low computational cost with
high precision and high resolution. It employs digital projection,
structured light systems and phase analysis on fringed pictures.
Research studies have shown that the system has acceptable
performance, and moreover it is insensitive to ambient light.
This paper presents an overview of fringe projection approaches. It
also presents an experimental study and implementation of a simple
fringe projection system. We tested our system using two objects
with different materials and levels of details. Experimental results
have shown that, while our system is simple, it produces acceptable
results.
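The phase-analysis core of fringe projection can be sketched with four-step phase shifting, a common scheme; the number of shifts and the sinusoidal intensity model here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Four-step phase-shifting analysis: capture four images of fringes
# shifted by 0, 90, 180 and 270 degrees,
#   I_k = A + B*cos(phase + k*pi/2),
# then recover the wrapped phase as atan2(I3 - I1, I0 - I2).
phase_true = np.linspace(-np.pi + 0.1, np.pi - 0.1, 100)
A, B = 0.5, 0.4                     # fringe bias and modulation
I = [A + B * np.cos(phase_true + k * np.pi / 2) for k in range(4)]
phase = np.arctan2(I[3] - I[1], I[0] - I[2])
print(np.allclose(phase, phase_true))   # True: phase recovered exactly
```

The recovered phase map encodes the fringe deformation caused by the object's height, from which the 3D surface is triangulated after phase unwrapping.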
Abstract: Acoustic-imaging-based sound localization using a
microphone array is a challenging task in digital signal
processing. Discrete Fourier transform (DFT) based near-field
acoustical holography (NAH) is an important acoustical technique
for sound source localization and provides an efficient solution
to the ill-posed problem. However, in practice, due to the use of
a small curtailed aperture and the consequent significant spectral
leakage, the DFT cannot reconstruct the active region of sound
(AROS) effectively, especially near the edges of the aperture. In
this paper, we highlight the fundamental problems of DFT-based NAH
and provide a solution to the spectral leakage effect via
extrapolation based on linear predictive coding and 2D Tukey
windowing. This approach has been tested to localize single and
multi-point sound sources. We observe that incorporating the
extrapolation technique increases the spatial resolution and
localization accuracy and reduces spectral leakage when a small
curtailed aperture with a lower number of sensors is used.
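The 2D Tukey windowing step can be sketched as follows; the window size and taper fraction are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def tukey(n, alpha=0.5):
    """1D Tukey (tapered-cosine) window: flat in the middle, cosine
    tapers over a fraction alpha of the length at each end."""
    if alpha <= 0:
        return np.ones(n)
    x = np.linspace(0.0, 1.0, n)
    w = np.ones(n)
    left = x < alpha / 2
    w[left] = 0.5 * (1 + np.cos(np.pi * (2 * x[left] / alpha - 1)))
    right = x >= 1 - alpha / 2
    w[right] = 0.5 * (1 + np.cos(np.pi * (2 * x[right] / alpha
                                          - 2 / alpha + 1)))
    return w

def tukey2d(ny, nx, alpha=0.5):
    """Separable 2D Tukey window for tapering the measurement
    aperture, suppressing spectral leakage at its edges."""
    return np.outer(tukey(ny, alpha), tukey(nx, alpha))

w = tukey2d(32, 32)
print(w[16, 16], w[0, 0])   # center stays 1.0, corners taper to 0.0
```

Tapering the aperture edges toward zero removes the abrupt truncation that produces leakage in the DFT, while the flat central region leaves the measured field there untouched.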
Abstract: The shoe last is regarded as the critical foundation of
shoe design and development. A computer-aided methodology for
various last form designs is proposed in this study. Reverse
engineering is mainly applied to the process of scanning the last
form. Then, with minimum energy for the revision of surface
continuity, the surface of the last is reconstructed from the
feature curves of the scanned last. When the surface
reconstruction of the last is completed, the weighted arithmetic
mean method is applied to the computation of the shape morphing
for the control mesh of the last; thus, 3D last forms of different
sizes are generated from the original form while its functional
features are retained. Finally, the result of this study is
applied to a 3D last reconstruction system. The practicability of
the proposed methodology is verified through case studies.
Abstract: On March 11, 2011, the East coast of Japan was hit by
one of the strongest earthquakes in history, followed by a devastating
tsunami. Although most lifelines, infrastructure, and public facilities
have been restored gradually, recovery efforts in terms of disposal of
disaster waste and revival of primary industry are lagging. This study
presents a summary of the damage inflicted by the earthquake and the
current status of reconstruction in the disaster area. Moreover, we
discuss the current trends and future perspectives on recently
implemented eco-friendly reconstruction projects and focus on the
pro-environmental behavior of disaster victims which is emerging as a
result of the energy shortage after the earthquake. Finally, we offer
ideas for initiatives for the next stage of the reconstruction policies.
Abstract: Network layer multicast, i.e. IP multicast, even after
many years of research, development and standardization, is not
deployed in large scale due to both technical (e.g. upgrading of
routers) and political (e.g. policy making and negotiation) issues.
Researchers looked for alternatives and proposed application/overlay
multicast where multicast functions are handled by end hosts, not
network layer routers. Member hosts wishing to receive multicast
data form a multicast delivery tree. The intermediate hosts in the tree
act as routers also, i.e. they forward data to the lower hosts in the
tree. Unlike IP multicast, where a router cannot leave the tree until all
members below it leave, in overlay multicast any member can leave
the tree at any time thus disjoining the tree and disrupting the data
dissemination. All the disrupted hosts have to rejoin the tree. This
characteristic of overlay multicast makes the multicast tree
unstable and causes data loss and rejoin overhead. In this paper,
we propose that each node declares the time at which it will leave
the tree and sends a join request to a number
of nodes in the tree. A node rejects the request if its leaving
time is earlier than that of the requesting node; otherwise, it
accepts the request. The requesting node can join at one of the
accepting nodes. This makes the tree more stable, as nodes join
the tree according to their leaving times, with the
earliest-leaving node placed at the
leaf of the tree. Some intermediate nodes may not honor their
declared leaving time and leave earlier, thus disrupting the tree.
For this, we propose a proactive recovery mechanism so that disrupted
nodes can rejoin the tree at predetermined nodes immediately. We
have shown by simulation that there is less overhead when joining
the multicast tree, and the recovery time of the disrupted nodes
is much less than in previous works.
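The leaving-time join rule can be sketched with a hypothetical node model; the class and method names are illustrative, not from the paper.

```python
# A node accepts a join request only if it will stay in the tree
# longer than the requester, pushing earlier-leaving nodes toward
# the leaves so their departure disrupts no one below them.
class Node:
    def __init__(self, name, leave_time):
        self.name = name
        self.leave_time = leave_time
        self.children = []

    def handle_join(self, requester):
        """Accept only if this node leaves later than the requester."""
        if self.leave_time > requester.leave_time:
            self.children.append(requester)
            return True
        return False

root = Node("root", leave_time=100)
a = Node("a", leave_time=50)
b = Node("b", leave_time=120)
print(root.handle_join(a))   # True: root outlives a, so a attaches here
print(root.handle_join(b))   # False: b outlives root, request rejected
```

A rejected node simply tries the other nodes that received its request, so it still ends up attached somewhere above a node that leaves earlier than it does.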
Abstract: Proper orthogonal decomposition (POD) is used to reconstruct spatio-temporal data of a fully developed turbulent channel flow with density variation at a Reynolds number of 150, based on the friction velocity and the channel half-width, and a Prandtl number of 0.71. To apply POD to the fully developed turbulent channel flow with density variation, the flow field (velocities, density, and temperature) is scaled by the corresponding root-mean-square (rms) values so that the flow field becomes dimensionless. A five-vector POD problem is solved numerically. The reconstructed second-order moments of velocity, temperature, and density from the POD eigenfunctions compare favorably with the original Direct Numerical Simulation (DNS) data.
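The rms scaling and modal reconstruction can be sketched with snapshot POD computed via the SVD; the data here are a synthetic stand-in, not DNS fields.

```python
import numpy as np

# Snapshot POD via the SVD: rows are time snapshots, columns are
# flow variables at grid points, each scaled by its rms value as in
# the paper's nondimensionalization. The field is a toy stand-in.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
x = np.linspace(0, 1, 64)
snapshots = (np.outer(np.sin(t), np.sin(np.pi * x))
             + 0.5 * np.outer(np.cos(2 * t), np.sin(2 * np.pi * x))
             + 0.01 * rng.standard_normal((200, 64)))
snapshots /= snapshots.std(axis=0)          # rms scaling per variable

u, s, vt = np.linalg.svd(snapshots, full_matrices=False)
energy = s ** 2 / np.sum(s ** 2)
rank2 = (u[:, :2] * s[:2]) @ vt[:2]         # two-mode reconstruction
err = np.linalg.norm(snapshots - rank2) / np.linalg.norm(snapshots)
print(energy[:2].sum(), err)                # two modes dominate
```

With only two coherent structures in the data, the first two POD modes capture most of the (scaled) energy and the low-rank reconstruction is close to the original field, which is the mechanism behind reconstructing second-order moments from a few eigenfunctions.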
Abstract: X-ray computed tomography (CT) is a well-established
visualization technique in medicine and nondestructive testing.
However, since CT scanning requires sampling of radiographic
projections from different viewing angles, common CT systems with
mechanically moving parts are too slow for dynamic imaging, for
instance of multiphase flows or live animals. A large number of
X-ray projections are needed to reconstruct CT images, so the
collection and processing of the projection data consume too much
time and are harmful to the patient. To solve this problem, in
this study we propose a method for tomographic reconstruction of a
sample from a limited number of X-ray projections using linear
interpolation. In simulation, we present a reconstruction from an
experimental X-ray CT scan of an aluminum phantom that follows two
steps: the X-ray projections are interpolated using the linear
interpolation method, and the result is used for CT reconstruction
based on the Ordered Subsets Expectation Maximization (OSEM)
method.
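The first step, angular interpolation of the sparse projections, can be sketched as follows; the angular sampling, detector size, and random stand-in sinogram are illustrative assumptions, and the interpolated views would then be passed to an iterative (e.g. OSEM) reconstruction.

```python
import numpy as np

# Synthesize missing views on a dense angular grid by linear
# interpolation of a sparsely measured sinogram, applied
# detector-bin by detector-bin with np.interp.
meas_angles = np.arange(0, 180, 15.0)          # 12 measured views
dense_angles = np.arange(0, 180, 3.0)          # 60 views wanted
n_bins = 64
rng = np.random.default_rng(0)
sino = rng.random((meas_angles.size, n_bins))  # stand-in sinogram

dense = np.empty((dense_angles.size, n_bins))
for b in range(n_bins):
    dense[:, b] = np.interp(dense_angles, meas_angles, sino[:, b])

print(dense.shape)   # (60, 64)
```

At the measured angles the interpolated sinogram reproduces the data exactly, while the in-between views are linear blends of their angular neighbors, reducing the dose and acquisition time needed for a full angular sweep.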
Abstract: When programming in languages such as C, Java, etc.,
it is difficult to reconstruct the programmer's ideas only from the
program code. This occurs mainly because much of the programmer's
ideas behind the implementation are not recorded in the code during
implementation. For example, physical aspects of computation such as
spatial structures, activities, and meaning of variables are not required
as instructions to the computer and are often excluded. This makes the
future reconstruction of the original ideas difficult. AIDA, which is a
multimedia programming language based on the cyberFilm model, can
solve these problems by allowing programmers to describe the ideas
behind programs
using advanced annotation methods as a natural extension to
programming. In this paper, a development environment that
implements the AIDA language is presented with a focus on the
annotation methods. In particular, an actual scientific numerical
computation code is created and the effects of the annotation methods
are analyzed.