Abstract: In this paper, we propose the use of convolutional codes
for file dispersal. The proposed method is comparable in complexity
to the Information Dispersal Algorithm proposed by M. Rabin and, for
particular choices of (non-binary) convolutional codes, is almost as
efficient as that algorithm in terms of controlling the expansion in
total storage. Further, our proposed dispersal method allows string
search.
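The abstract does not give the convolutional-code construction itself; as background, here is a minimal sketch of the Rabin-style dispersal idea it is compared against: m data symbols are dispersed into n > m shares, any m of which recover the data. Real IDA multiplies by an n x m matrix over GF(2^8); this illustrative sketch instead uses polynomial evaluation and Lagrange interpolation over the prime field of size 257, purely as an assumption for simplicity.

```python
# Sketch of Rabin-style information dispersal: place the m data
# symbols at x = 0..m-1, interpolate the unique degree-(m-1)
# polynomial through them, and disperse n evaluations. Any m shares
# determine the polynomial again, hence the data. Field: integers
# mod the prime 257 (an illustrative choice, not Rabin's GF(2^8)).

P = 257  # prime field, large enough to hold byte values

def interpolate_at(shares, x):
    """Evaluate the interpolating polynomial of the shares at x (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        # divide by den via Fermat's little theorem (P is prime)
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def disperse(data, n):
    """Encode m data symbols into n shares (i, f(i)); systematic for i < m."""
    pts = list(enumerate(data))          # data sits at x = 0..m-1
    return [(i, interpolate_at(pts, i)) for i in range(n)]

def reconstruct(shares, m):
    """Recover the m data symbols from any m shares."""
    sub = shares[:m]
    return [interpolate_at(sub, x) for x in range(m)]
```

Because the data points sit at x = 0..m-1, the first m shares equal the data itself, so only the extra n - m shares add storage, mirroring IDA's low expansion.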
Abstract: In this work, a special case of the image super-resolution
problem, in which the only type of motion is global translational
motion and the blurs are shift-invariant, is investigated.
The necessary conditions for exact reconstruction of the original
image using finite impulse response reconstruction filters are
developed. Given that these conditions are satisfied, a method for exact
super-resolution is presented and some simulation results are shown.
Abstract: In this paper we aim to find the optimum multiwavelet for the compression of electrocardiogram (ECG) signals and to select it for use with the SPIHT codec. At present, it is not well known which multiwavelet is the best choice for optimum compression of ECG. In this work, we examine different multiwavelets on 24 sets of ECG data with entirely different characteristics, selected from the MIT-BIH database. For assessing the performance of the different multiwavelets in compressing ECG signals, in addition to factors known in the compression literature such as Compression Ratio (CR), Percent Root Difference (PRD), Distortion (D) and Root Mean Square Error (RMSE), we also employ the Cross Correlation (CC) criterion, for studying the morphological relation between the reconstructed and the original ECG signal, and the Signal to reconstruction Noise Ratio (SNR). The simulation results show that the Cardinal Balanced Multiwavelet (cardbal2) with the identity (Id) prefiltering method is the most effective transformation. After finding the most efficient multiwavelet, we apply the SPIHT coding algorithm to the signal transformed by this multiwavelet.
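The fidelity criteria named in the abstract (PRD, RMSE, SNR, CC) are standard quantities; the sketch below shows one common convention for computing them from an original ECG x and its reconstruction y. The exact variants used in the paper (e.g. mean-removed PRD) are not specified in the abstract, so treat these formulas as illustrative assumptions.

```python
# Common conventions for the signal-fidelity measures used to compare
# an original signal x with its reconstruction y. Variants exist in
# the literature; these are one standard choice, not necessarily the
# exact definitions used in the paper.
import math

def prd(x, y):
    """Percent root-mean-square difference (lower is better)."""
    num = sum((a - b) ** 2 for a, b in zip(x, y))
    den = sum(a ** 2 for a in x)
    return 100.0 * math.sqrt(num / den)

def rmse(x, y):
    """Root mean square error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))

def snr(x, y):
    """Signal-to-reconstruction-noise ratio in dB (higher is better)."""
    num = sum(a ** 2 for a in x)
    den = sum((a - b) ** 2 for a, b in zip(x, y))
    return 10.0 * math.log10(num / den)

def cross_correlation(x, y):
    """Normalized cross-correlation, 1.0 = identical morphology."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den
```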
Abstract: Human amniotic membrane (HAM) is a useful
biological material for the reconstruction of a damaged ocular surface.
The processing and preservation of HAM are critical to protect
patients undergoing amniotic membrane transplantation (AMT) from
cross-infection. For HAM preparation, a human placenta is obtained
after an elective cesarean delivery. Before collection, the donor is
screened for seronegativity for HCV, HBsAg, HIV and syphilis. After
collection, the placenta is washed in balanced salt solution (BSS) in a
sterile environment. The amniotic membrane is then separated from the
placenta and chorion while keeping the preparation in BSS.
Scraping of the HAM is then carried out manually until all the debris is
removed and a clear, transparent membrane is acquired. Nitrocellulose
membrane filters are then placed on the stromal side of the HAM and cut
around the edges, with a little membrane folded towards the other side
to make it easy to separate during surgery. The HAM is finally stored in
a 1:1 solution of glycerine and Dulbecco's Modified Eagle Medium
(DMEM) containing antibiotics. The capped Borosil vials
containing HAM are kept at -80°C until use. At the time of surgery, a
vial is thawed to room temperature and opened under sterile operating
theatre conditions.
Abstract: Optical Coherence Tomography (OCT) combined
with confocal microscopy, as a noninvasive method, permits the
detection of material defects in depth within ceramic layers. For
this study, 256 anterior and posterior metal-ceramic and integral
ceramic fixed partial dentures were used, made with Empress
(Ivoclar), Wollceram and CAD/CAM (Wieland) technology. For each
investigated area, 350 slices were obtained and a 3D reconstruction
was performed from each stack. Optical Coherence Tomography, as a
noninvasive method, can be used as a control technique in integral
ceramic technology before placing fixed partial dentures in the
oral cavity. The purpose of this study is to evaluate the capability
of en-face Optical Coherence Tomography (OCT) combined with a
fluorescence method in the detection and analysis of possible material
defects in metal-ceramic and integral ceramic fixed partial dentures.
In conclusion, it is important to have a noninvasive method to
investigate fixed partial prostheses before their insertion in the oral
cavity, in order to satisfy the high stress requirements and the
esthetic function.
Abstract: Camera calibration is an important step in 3D
reconstruction. Camera calibration may be classified into two major types: traditional calibration and self-calibration; a calibration method using a checkerboard is intermediate between the two. A self-calibration method
based on a square is proposed in this paper. With only a square in the planar
template, camera self-calibration can be completed from a single view. In the proposed algorithm, a virtual circle and straight lines are established from the square on the planar template, and the
circular points, the vanishing points of the straight lines and the relations
between them are used to obtain the image of the absolute
conic (IAC) and thereby the camera intrinsic parameters. This makes
the calibration template simpler than that of Zhang Zhengyou's method. Experimental results on real data show that the algorithm is
feasible and has a certain precision and robustness.
Abstract: The use of the Inverse Discrete Fourier Transform (IDFT), implemented in the form of the Inverse Fast Fourier Transform (IFFT), is one of the standard methods of reconstructing Magnetic Resonance Imaging (MRI) images from uniformly sampled K-space data. In this tutorial, three of the major problems associated with the use of the IFFT in MRI reconstruction are highlighted. The tutorial also gives a brief introduction to MRI physics; the MRI system from an instrumentation point of view; the K-space signal; and the process of the IDFT and IFFT for one- and two-dimensional (1D and 2D) data.
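The IDFT at the heart of this reconstruction pipeline can be sketched directly from its definition; an IFFT computes the same result in O(N log N) rather than O(N^2). Treating a row of K-space data as the DFT of an image row (a simplification of the real 2D acquisition), the inverse transform recovers that row:

```python
# Direct-definition 1D DFT and inverse DFT. An FFT/IFFT computes the
# same sums with a divide-and-conquer recursion; the result is
# identical up to floating-point error.
import cmath

def dft(x):
    """Forward DFT: X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT: x[n] = (1/N) sum_k X[k] * exp(+2*pi*i*k*n/N)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N
            for n in range(N)]
```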
Abstract: The approach based on the wavelet transform has
been widely used for image denoising due to its multi-resolution
nature, its ability to produce high levels of noise reduction and the
low level of distortion introduced. However, by removing noise, high
frequency components belonging to edges are also removed, which
leads to blurring the signal features. This paper proposes a new
method of image noise reduction based on local variance and edge
analysis. The analysis is performed by dividing an image into 32 x 32
pixel blocks and transforming the data into the wavelet domain. A fast
lifting wavelet spatial-frequency decomposition and reconstruction is
developed, with the advantages of computational efficiency and
minimized boundary effects. Adaptive thresholding based on local
variance estimation and edge strength measurement can effectively
reduce image noise while preserving the features of the original image
corresponding to the boundaries of the objects. Experimental results
demonstrate that the method performs well for images contaminated
by natural and artificial noise, and is suitable for adaptation to
different classes of images and types of noise. The proposed algorithm
lends itself to parallel computation, offering a potential solution for
real-time or embedded system applications.
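As a greatly simplified 1D illustration of this wavelet-domain idea (the paper works on 32 x 32 image blocks with adaptive, edge-aware thresholds, which are not reproduced here), the sketch below applies one Haar lifting step, soft-thresholds the detail coefficients with a fixed illustrative threshold, and reconstructs:

```python
# One-level Haar lifting decomposition, soft thresholding of the
# detail band, and exact reconstruction. Assumes an even-length
# signal; the fixed threshold stands in for the paper's adaptive,
# local-variance-based thresholds.

def haar_lift(signal):
    """One lifting step: split into approximation and detail bands."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        s, d = signal[i], signal[i + 1]
        d = d - s              # predict: detail = odd - even
        s = s + d / 2          # update: approximation = pair mean
        approx.append(s)
        detail.append(d)
    return approx, detail

def haar_unlift(approx, detail):
    """Invert the lifting step exactly (lifting is always invertible)."""
    out = []
    for s, d in zip(approx, detail):
        s = s - d / 2
        d = d + s
        out.extend([s, d])
    return out

def soft_threshold(coeffs, t):
    """Shrink each coefficient toward zero by t."""
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]

def denoise(signal, t):
    a, d = haar_lift(signal)
    return haar_unlift(a, soft_threshold(d, t))
```

Because lifting steps are individually invertible, the reconstruction is exact when the threshold is zero, which makes the scheme easy to verify before thresholding is switched on.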
Abstract: Oil debris signal generated from the inductive oil
debris monitor (ODM) is useful information for machine condition
monitoring but is often spoiled by background noise. To improve the
reliability in machine condition monitoring, the high-fidelity signal
has to be recovered from the noisy raw data. Considering that the noise
components with large amplitude often have higher frequency than
that of the oil debris signal, the integral transform is proposed to
enhance the detectability of the oil debris signal. To cancel out the
baseline wander resulting from the integral transform, the empirical
mode decomposition (EMD) method is employed to identify the trend
components. An optimal reconstruction strategy including both
de-trending and de-noising is presented to detect the oil debris signal
with less distortion. The proposed approach is applied to detect the oil
debris signal in the raw data collected from an experimental setup. The
result demonstrates that this approach is able to detect the weak oil
debris signal with acceptable distortion from noisy raw data.
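The two-step idea above can be sketched as follows: integration attenuates high-frequency noise, after which the baseline wander it introduces must be removed. The paper identifies that trend with EMD; this sketch substitutes a crude centered moving average as the trend estimate, an assumption for illustration only, not the EMD method itself.

```python
# Integrate-then-detrend sketch. Integration acts as a low-pass step
# that suppresses high-frequency noise relative to the oil debris
# signal; the moving average below is a simple stand-in for the
# EMD-based trend identification used in the paper.

def integrate(signal, dt=1.0):
    """Cumulative (rectangle-rule) integral of the signal."""
    total, out = 0.0, []
    for x in signal:
        total += x * dt
        out.append(total)
    return out

def moving_average(signal, window):
    """Centered moving average used as a crude trend estimate."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def detect(signal, window=5):
    """Integrate the raw data, then subtract the estimated baseline."""
    integrated = integrate(signal)
    trend = moving_average(integrated, window)
    return [y - t for y, t in zip(integrated, trend)]
```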
Abstract: Given a large sparse signal, the goal is to reconstruct the
signal precisely and accurately from as few measurements as possible.
Although theory indicates this is possible, the difficulty lies in
building an algorithm that achieves both accuracy and efficiency in
reconstruction. This paper proposes a new, proven method to reconstruct
sparse signals: the Least Support Orthogonal Matching Pursuit (LS-OMP)
method is merged with the theory of Partially Known Support (PKS) to
give a new method called Partially Known Least Support Orthogonal
Matching Pursuit (PKLS-OMP).
The new methods rely on a greedy algorithm to compute the support,
whose cost depends on the number of iterations. To make it faster,
PKLS-OMP adds the idea of partially known support to the algorithm.
The method recovers the original signal efficiently, simply and
accurately, provided the sampling matrix satisfies the Restricted
Isometry Property (RIP).
Simulation results also show that it outperforms many algorithms,
especially for compressible signals.
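The LS-OMP and PKLS-OMP variants themselves are not specified in the abstract; as background, here is a pure-Python sketch of classical Orthogonal Matching Pursuit, the greedy baseline such methods build on: at each iteration the column most correlated with the residual joins the support, and a least-squares fit on that support updates the residual.

```python
# Classical OMP sketch with hand-rolled linear algebra so the example
# stays self-contained. Not the paper's LS-OMP/PKLS-OMP variants.

def column(A, j):
    return [row[j] for row in A]

def solve(M, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(M)
    aug = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[piv] = aug[piv], aug[c]
        for r in range(c + 1, n):
            f = aug[r][c] / aug[c][c]
            for k in range(c, n + 1):
                aug[r][k] -= f * aug[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][k] * x[k]
                                for k in range(r + 1, n))) / aug[r][r]
    return x

def omp(A, y, sparsity):
    """Recover a `sparsity`-sparse x with A x ~= y."""
    m, n = len(A), len(A[0])
    residual, support = y[:], []
    for _ in range(sparsity):
        # greedy step: pick the column most correlated with the residual
        corr = [abs(sum(A[i][j] * residual[i] for i in range(m)))
                for j in range(n)]
        support.append(max(range(n), key=lambda j: corr[j]))
        # least squares on the chosen support via the normal equations
        cols = [column(A, j) for j in support]
        G = [[sum(u[i] * v[i] for i in range(m)) for v in cols] for u in cols]
        rhs = [sum(u[i] * y[i] for i in range(m)) for u in cols]
        coef = solve(G, rhs)
        approx = [sum(coef[k] * cols[k][i] for k in range(len(cols)))
                  for i in range(m)]
        residual = [y[i] - approx[i] for i in range(m)]
    x = [0.0] * n
    for j, c in zip(support, coef):
        x[j] = c
    return x
```

A partially-known-support variant would simply seed `support` with the known indices before the greedy loop, which is what makes such methods faster: fewer iterations are needed to complete the support.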
Abstract: The Beta-spline is built on G2 continuity, which guarantees
the smoothness of curves and surfaces generated with it. This curve is
usually preferred for object design rather than reconstruction. This
study, however, employs the Beta-spline in reconstructing a 3-
dimensional G2 image of the Stanford Rabbit. The original data
consist of multi-slice binary images of the rabbit. The result is then
compared with related works using other techniques.
Abstract: This paper aims to propose a novel, robust, and simple method for obtaining a human 3D face model and camera pose (position and orientation) from a video sequence. Given a video sequence of a face recorded with an off-the-shelf digital camera, feature points used to define facial parts are tracked using the Active Appearance Model (AAM). Then, the face's 3D structure and the camera pose of each video frame can be simultaneously calculated from the obtained point correspondences. The proposed method is primarily based on a combination of Gradient Descent and Powell's multidimensional minimization. Using this method, temporarily occluded points, including the case of self-occlusion, do not pose a problem: as long as the point correspondences displayed in the video sequence have enough parallax, these missing points can still be reconstructed.
Abstract: Electric impedance imaging is a method of
reconstructing spatial distribution of electrical conductivity inside a
subject. In this paper, a new method of electrical impedance imaging
using eddy current is proposed. The eddy current distribution in the
body depends on the conductivity distribution and the magnetic field
pattern. By changing the position of magnetic core, a set of voltage
differences is measured with a pair of electrodes. This set of voltage
differences is used in image reconstruction of conductivity
distribution. The least square error minimization method is used as a
reconstruction algorithm. The back projection algorithm is used to
get two-dimensional images. Based on this principle, a measurement
system was developed and model experiments were performed
with a saline-filled phantom. The shape of each model in the
reconstructed image is similar to that of the corresponding physical
model. From the results of these experiments, it is confirmed
that the proposed method is applicable to the realization of electrical
impedance imaging.
Abstract: At the end of the 20th century, the development of
transport corridors and the improvement of their technical parameters
became topical. With this purpose, many countries, Georgia among
them, undertook the construction of new highways and railways as well
as the reconstruction and modernization of the existing transport
infrastructure. It is necessary to examine the artificial structures
(bridges and tunnels) on the existing routes, as they are very old.
This conference report covers the peculiarities of tunnel
reconstruction, since we consider this theme important for the
modernization of the existing road infrastructure. We must remark
that the existing methods for determining the mining pressure on
tunnels were worked out for the construction of new tunnels, so it is
necessary to foresee the additional mining pressure that arises during
reconstruction. This report presents methods for calculating the
additional mining pressure during the reconstruction of tunnels; a
computer program was developed, and it is determined that during the
reconstruction of tunnels the additional mining pressure is one third
of the main mining pressure.
Abstract: The interaction of tunneling or mining with
groundwater has become a very relevant problem not only due to the
need to guarantee the safety of workers and to assure the efficiency of
the tunnel drainage systems, but also to safeguard water resources
from impoverishment and pollution risk. Therefore it is very
important to forecast the drainage processes (i.e., the evaluation of
drained discharge and drawdown caused by the excavation). The aim
of this study was to better understand the system and to quantify the
flow drained from the Fontane mines, located in Val Germanasca (Turin,
Italy). This allowed the local hydrogeological changes over time to be
understood. The work was therefore structured as follows: the
reconstruction of the conceptual model through geological,
hydrogeological and geological-structural study; the calculation of
the tunnel inflows (through the use of structural methods) and the
comparison with the measured flow rates; and the water balance at the
basin scale. In this way it was possible to understand the
relationships between rainfall, groundwater level variations and the
draining effect of the tunnels. Subsequently, the effects produced by
the excavation of the mining tunnels were quantified through numerical
modeling. In particular, the modeling made it possible to observe the
drawdown variation as a function of the number, excavation depth and
lining of the mines.
Abstract: Time interleaved sigma-delta (TIΣΔ) architecture is a
potential candidate for high bandwidth analog to digital converters
(ADC) which remains a bottleneck for software and cognitive radio
receivers. However, the performance of the TIΣΔ architecture is
limited by the unavoidable gain and offset mismatches resulting
from the manufacturing process. This paper presents a novel digital
calibration method to compensate for the gain and offset mismatch
effects. The proposed method takes advantage of the reconstruction
digital signal processing on each channel and requires only a few logic
components for implementation. The run-time calibration is estimated
at 10 and 15 clock cycles for offset cancellation and gain mismatch
calibration, respectively.
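The calibration scheme itself is not detailed in the abstract; the sketch below shows a generic digital gain/offset correction for a time-interleaved converter, with hypothetical estimation of each channel's offset from a zero-input record and its gain from a known-amplitude reference. It illustrates the general idea only, not the paper's method.

```python
# Generic per-channel gain/offset correction for a time-interleaved
# ADC. Channel i sees samples at positions i, i + n_channels, ... in
# the interleaved stream; each is corrected with that channel's
# estimated offset and gain.

def estimate(channel_zero_input, channel_ref_input, ref_amplitude):
    """Offset: mean of a zero-input record. Gain: crude peak-based
    estimate against a reference of known amplitude (illustrative)."""
    offset = sum(channel_zero_input) / len(channel_zero_input)
    corrected = [s - offset for s in channel_ref_input]
    gain = max(corrected) / ref_amplitude
    return offset, gain

def correct(samples, offsets, gains, n_channels):
    """Apply the per-channel inverse correction to an interleaved stream."""
    return [(s - offsets[i % n_channels]) / gains[i % n_channels]
            for i, s in enumerate(samples)]
```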
Abstract: In illumination-variant face recognition, existing
methods that extract the face albedo as a light-normalized image may
lose extensive facial details, since the light template is discarded. To
improve on this, a novel approach for realistic facial texture
reconstruction that combines the original image and the albedo image is
proposed. First, light subspaces of different identities are established
from the given reference face images; then, by projecting the original
and albedo images into each light subspace respectively, texture
reference images with the corresponding lighting are reconstructed and
two texture subspaces are formed. From the projections in the
texture subspaces, facial texture with normal lighting can be synthesized.
Because the original image is combined with the face albedo, facial
details can be preserved. In addition, image partitioning is applied to
improve the synthesis performance. Experiments on the Yale B and
CMU PIE databases demonstrate that this algorithm outperforms the
others both in image representation and in face recognition.
Abstract: Phylogenies, the evolutionary histories of groups of
species, are among the most widely used tools throughout the life
sciences, as well as objects of research within systematics and
evolutionary biology. Every phylogenetic analysis produces trees,
which represent the evolutionary histories of many groups of
organisms. For bacteria, however, horizontal gene transfer, and for
plants, hybridization, lead to reticulate evolution; gene transfer
and hybridization produce reticulate networks, and methods for
constructing trees therefore fail to construct them. In this paper a
model is employed to reconstruct a phylogenetic network for the honey
bee, representing its reticulate evolution. The maximum parsimony
approach is used to obtain this reticulate network.
Abstract: An important step in three-dimensional reconstruction
and computer vision is camera calibration, whose objective is to
estimate the intrinsic and extrinsic parameters of each camera. In this
paper, two linear methods based on different planes are given. In
both methods, a general plane is used to replace the calibration
object with very good precision. In the first method, after the camera
undergoes five translational movements and takes pictures of the
orthogonal planes, a set of linear constraints on the camera intrinsic
parameters is derived by means of the homography matrix. The second
method obtains all camera parameters by taking only one picture of a
circle of given radius. Experiments on simulated data and real images
indicate that our methods are reasonable and are a good supplement to
camera calibration.