Abstract: The Discrete Wavelet Transform (DWT) has proven
far superior to the earlier Discrete Cosine Transform (DCT) and
standard JPEG in natural as well as medical image compression. Due
to its localization properties in both the spatial and the transform domain,
the quantization error introduced in the DWT does not propagate
globally as in the DCT. Moreover, the DWT is a global approach that avoids
block artifacts as in the JPEG. However, recent reports on natural
image compression have shown the superior performance of
contourlet transform, a new extension to the wavelet transform in two
dimensions using nonseparable and directional filter banks,
compared to DWT. This is mostly due to the optimality of the contourlet
transform in representing edges that are smooth curves. In this work, we
investigate this fact for medical images, especially for CT images,
which has not been reported yet. To do that, we propose a
compression scheme in transform domain and compare the
performance of both the DWT and the contourlet transform in terms of PSNR for
different compression ratios (CR) using this scheme. The results
obtained using different types of computed tomography images show
that the DWT still performs well at lower CRs, but the contourlet
transform performs better at higher CRs.
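The PSNR-versus-CR comparison described in this abstract can be computed generically as follows. This is a minimal sketch assuming 8-bit images (peak value 255), not the authors' code; the function names are illustrative.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    original = np.asarray(original, dtype=np.float64)
    reconstructed = np.asarray(reconstructed, dtype=np.float64)
    mse = np.mean((original - reconstructed) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(original_bits, compressed_bits):
    """CR = size of the raw image over size of the compressed stream."""
    return original_bits / compressed_bits
```

A curve of PSNR against CR, computed this way for each transform, is the kind of comparison the abstract reports.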
Abstract: Artifacts are among the most important factors
degrading CT image quality and play an important role in
diagnostic accuracy. In this paper, some artifacts that typically appear in
spiral CT are introduced. The different factors that cause the artifacts,
such as the patient, the equipment, and the interpolation algorithm, are
discussed, and new developments and image processing algorithms to
prevent or reduce them are presented.
Abstract: This paper proposes an algorithm which automatically aligns and stitches component medical (fluoroscopic) images with varying degrees of overlap into a single composite image. The alignment method is based on a similarity measure between the component images. As applied here, the technique is intensity based rather than feature based. It works well in domains where feature-based methods have difficulty, yet it is more robust than traditional correlation. The component images are stitched together using a new triangular-averaging-based blending algorithm. The quality of the resultant image is tested for photometric inconsistencies and geometric misalignments. This method cannot correct rotational, scale, and perspective artifacts.
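The paper's triangular-averaging blend is not specified in detail above; the following is a minimal sketch of one common interpretation, a linearly ramped (triangular) weighted average across the overlap of two horizontally adjacent grayscale strips. All names and the horizontal-overlap layout are assumptions for illustration.

```python
import numpy as np

def blend_overlap(left, right, overlap):
    """Blend two grayscale images that share `overlap` columns.

    Inside the overlap, the weight of the left image ramps linearly
    from 1 down to 0 (and the right image from 0 up to 1) -- the
    triangular weighting that avoids a visible seam.
    """
    h = left.shape[0]
    w_total = left.shape[1] + right.shape[1] - overlap
    out = np.zeros((h, w_total), dtype=np.float64)

    # Non-overlapping parts are copied unchanged.
    out[:, :left.shape[1] - overlap] = left[:, :-overlap]
    out[:, left.shape[1]:] = right[:, overlap:]

    # Linearly ramped weights across the overlap columns.
    alpha = np.linspace(1.0, 0.0, overlap)  # weight of the left image
    lo = left[:, -overlap:]
    ro = right[:, :overlap]
    out[:, left.shape[1] - overlap:left.shape[1]] = alpha * lo + (1 - alpha) * ro
    return out
```

A rotational or perspective misalignment would survive this blend unchanged, which is consistent with the limitation the abstract states.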
Abstract: Image interpolation is a common problem in imaging applications. However, most existing interpolation algorithms suffer, to some extent, from the visual effects of blurred edges and jagged artifacts in the image. This paper presents an adaptive feature-preserving bidirectional flow process, where an inverse diffusion is performed to enhance edges along the normal directions to the isophote lines (edges), while a normal diffusion is done to remove artifacts ("jaggies") along the tangent directions. In order to preserve image features such as edges, angles, and textures, the nonlinear diffusion coefficients are locally adjusted according to the first- and second-order directional derivatives of the image. Experimental results on synthetic images and natural images demonstrate that our interpolation algorithm substantially improves the subjective quality of the interpolated images over conventional interpolations.
Abstract: The growing interest in national heritage
preservation has led to intensive efforts on the digital documentation of
cultural heritage knowledge. Encapsulated within this effort is the
focus on ontology development that will help facilitate the
organization and retrieval of the knowledge. Ontologies surrounding
cultural heritage domain are related to archives, museum and library
information such as archaeology, artifacts, paintings, etc. The growth
in the number and size of ontologies indicates the wide acceptance of
semantic enrichment in many emerging applications. Nowadays,
there are many heritage information systems available for access.
Among them is a community-based e-museum designed to support
digital cultural heritage preservation. This work extends the previous
effort of developing the Traditional Malay Textile (TMT) Knowledge
Model, where the model was designed with the intention of aiding
mapping to CIDOC CRM. Due to its internal constraints, the
model needs to be transformed in advance. This paper addresses the
issue by reviewing the previous harmonization works with CIDOC
CRM as exemplars in refining the facets in the model particularly
involving TMT-Artifact class. The result is an extensible model
which could lead to a common view for automated mapping with
CIDOC CRM. Hence, it promotes the integration and exchange of
textile information, especially batik-related information, between communities in
e-museum applications.
Abstract: The electrical potentials generated during eye movements and blinks are one of the main sources of artifacts in Electroencephalogram (EEG) recordings and can propagate widely across the scalp, masking and distorting brain signals. In recent times, signal separation algorithms have been used widely for removing artifacts from the observed EEG data. In this paper, a recently introduced signal separation algorithm, Mutual Information based Least dependent Component Analysis (MILCA), is employed to separate ocular artifacts from EEG. The aim of MILCA is to minimize the Mutual Information (MI) between the independent components (estimated sources) under a pure rotation. The performance of this algorithm is compared with eleven popular algorithms (Infomax, Extended Infomax, Fast ICA, SOBI, TDSEP, JADE, OGWE, MS-ICA, SHIBBS, Kernel-ICA, and RADICAL) for the actual independence and uniqueness of the estimated source components obtained for different sets of EEG data with ocular artifacts, using a reliable MI estimator. Results show that MILCA performs best in separating the ocular artifacts from the EEG and is recommended for further analysis.
Abstract: This paper proposes a dual-tree complex wavelet transform (DT-CWT) based directional interpolation scheme for noisy images. The problems of denoising and interpolation are modelled as the estimation of the noiseless and missing samples under the same framework of optimal estimation. Initially, the DT-CWT is used to decompose an input low-resolution (LR) noisy image into low- and high-frequency subbands. The high-frequency subband images are interpolated by linear minimum mean square error (LMMSE) estimation based interpolation, which preserves the edges of the interpolated images. For each noisy LR image sample, we compute multiple estimates of it along different directions and then fuse those directional estimates for a more accurate denoised LR image. The estimation parameters calculated in the denoising process can be readily used to interpolate the missing samples. The inverse DT-CWT is applied to the denoised input and the interpolated high-frequency subband images to obtain the high-resolution image. Compared with conventional schemes that perform denoising and interpolation in tandem, the proposed DT-CWT based noisy image interpolation method can reduce many noise-caused interpolation artifacts and preserve the image edge structures well. The visual and quantitative results show that the proposed technique outperforms many of the existing denoising and interpolation methods.
Abstract: The EEG signal is one of the oldest measures of brain
activity and has been used extensively for clinical diagnoses and
biomedical research. However, EEG signals are highly
contaminated with various artifacts, both from the subject and from
equipment interferences. Among these various kinds of artifacts,
ocular noise is the most important one. Since many applications such
as BCI require online and real-time processing of EEG signal, it is
ideal if the removal of artifacts is performed in an online fashion.
Recently, some methods for online ocular artifact removal have
been proposed. One of these methods is ARMAX modeling of the EEG
signal. This method assumes that the recorded EEG signal is a
combination of EOG artifacts and the background EEG. Then the
background EEG is estimated via estimation of ARMAX parameters.
The other recently proposed method is based on adaptive filtering.
This method uses EOG signal as the reference input and subtracts
EOG artifacts from the recorded EEG signals. In this paper we
investigate the efficiency of each method for removing EOG
artifacts. A comparison is made between the two methods. We
conclude from this comparison that the adaptive filtering
method yields better results than those achieved by
ARMAX modeling.
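The adaptive filtering approach described above, which uses the EOG channel as the reference input and subtracts the predicted EOG contribution, can be sketched with a standard LMS update. This is a generic illustration rather than the paper's implementation; the filter order and step size are assumptions.

```python
import numpy as np

def lms_eog_removal(eeg, eog, order=3, mu=0.01):
    """Remove EOG interference from an EEG channel via LMS adaptive filtering.

    The filter learns to predict the EOG contribution inside the recorded
    EEG from the reference EOG channel; subtracting that prediction leaves
    an estimate of the background EEG.
    """
    w = np.zeros(order)               # adaptive filter weights
    clean = np.zeros_like(eeg)
    for n in range(len(eeg)):
        # Most recent `order` reference samples (zero-padded at the start).
        x = np.array([eog[n - k] if n - k >= 0 else 0.0 for k in range(order)])
        y = w @ x                     # predicted EOG contribution
        e = eeg[n] - y                # error = cleaned EEG sample
        w += 2 * mu * e * x           # LMS weight update
        clean[n] = e
    return clean
```

On a synthetic mixture such as `eeg = background + 0.8 * eog`, the output converges toward the background component once the weights settle.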
Abstract: Ontology-based modelling of multi-formatted
software application content is a challenging area in content
management. When the number of software content units is huge and
they are in a continuous process of change, content change management is
important. The management of content in this context requires
targeted access and manipulation methods. We present a novel
approach to deal with model-driven content-centric information
systems and access to their content. At the core of our approach is an
ontology-based semantic annotation technique for diversely
formatted content that can improve the accuracy of access and
systems evolution. Domain ontologies represent domain-specific
concepts and conform to metamodels. Different ontologies - from
application domain ontologies to software ontologies - capture and
model the different properties and perspectives on a software content
unit. Interdependencies between domain ontologies, the artifacts and
the content are captured through a trace model. The annotation traces
are formalised and a graph-based system is selected for the
representation of the annotation traces.
Abstract: Image quality assessment traditionally needs the
original image as a reference. Conventional assessment methods
such as Mean Square Error (MSE) or Peak Signal to Noise Ratio (PSNR)
are invalid when there is no reference. In this paper, we present a new
No-Reference (NR) assessment of image quality using blur and noise.
Recent camera applications provide high-quality images with the help of a
digital Image Signal Processor (ISP). Since images taken by
high-performance digital cameras have few blocking and ringing
artifacts, we focus only on blur and noise for predicting the
objective image quality. The experimental results show that the
proposed assessment method gives high correlation with subjective
Difference Mean Opinion Score (DMOS). Furthermore, the proposed
method has a very low computational load in the spatial domain and
extracts characteristics similar to those of human perceptual assessment.
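The abstract does not give the paper's exact blur and noise measures; as a generic illustration of a no-reference sharpness proxy, the variance of the Laplacian response is widely used, with a lower value indicating a blurrier image. The kernel choice here is conventional, not the paper's.

```python
import numpy as np

def laplacian_variance(img):
    """No-reference blur proxy: variance of the Laplacian response.

    Sharp images have strong second derivatives at edges, so a low
    variance suggests blur.
    """
    img = np.asarray(img, dtype=np.float64)
    # 4-neighbour discrete Laplacian, evaluated on interior pixels only.
    lap = (img[1:-1, :-2] + img[1:-1, 2:] +
           img[:-2, 1:-1] + img[2:, 1:-1] - 4.0 * img[1:-1, 1:-1])
    return lap.var()
```

A flat image scores exactly zero; a high-contrast texture scores high, so the measure orders images by sharpness without any reference image.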
Abstract: Transpedicular screw fixation in spinal fractures,
degenerative changes, or deformities is a well-established procedure.
However, important rates of fixation failure due to screw bending,
loosening, or pullout are still reported, particularly in the weak bone stock
of osteoporotic patients. To overcome this problem, the mechanism of failure has
to be fully investigated in vitro. Post-mortem human subjects are less
accessible, and animal cadavers have limitations due to differences in
geometry and mechanical properties. Therefore, the development of a
synthetic model mimicking the realistic human vertebra is highly
demanded. A bone surrogate, composed of Polyurethane (PU) foam
analogous to cancellous bone porous structure, was tested for 3
different densities in this study. The mechanical properties were
investigated under uniaxial compression test by minimizing the end
artifacts on the specimens. The results indicated that PU foam of 0.32
g/cm³ density has mechanical properties comparable to those of human
cancellous bone in terms of Young's modulus and yield strength.
Therefore, the obtained information can be considered a primary
step toward developing a realistic cancellous bone model of the human vertebral
body. Further evaluations are also recommended for other density
groups.
Abstract: We constructed a noise-reduction method for
JPEG-compressed images based on Bayesian inference using the
maximizer of the posterior marginal (MPM) estimate. In this method,
we tried the MPM estimate using two kinds of likelihood, both of
which enhance grayscale images that have been degraded by
lossy JPEG compression. One is the
deterministic model of the likelihood and the other is the probabilistic
one expressed by the Gaussian distribution. Then, using the Monte
Carlo simulation for grayscale images, such as the 256-grayscale
standard image "Lena" with 256 × 256 pixels, we examined the
performance of the MPM estimate based on a performance measure
using the mean square error. We clarified that the MPM estimate via
the Gaussian probabilistic model of the likelihood is effective for
reducing noise such as blocking artifacts and mosquito noise,
if we set parameters appropriately. On the other hand, we found that
the MPM estimate via the deterministic model of the likelihood is not
effective for noise reduction due to the low acceptance ratio of the
Metropolis algorithm.
Abstract: The advent of modern technology casts its repercussions on successful legacy systems, making them obsolete with time. These systems have left large organizations with major problems in terms of new business requirements, response time, financial depreciation, and maintenance. The major difficulty is due to constant system evolution and the incomplete, inconsistent, and obsolete documentation that a legacy system tends to have. The myriad dimensions of these systems can only be explored by incorporating reverse engineering, which in this context is the best method to extract useful artifacts and to explore those artifacts for reengineering existing legacy systems to meet the new requirements of organizations. A case study is conducted on six different types of software systems, with source code in different programming languages, using an architectural recovery framework.
Abstract: Electroencephalogram (EEG) recordings are often
contaminated with ocular and muscle artifacts. In this paper, the
canonical correlation analysis (CCA) is used as blind source
separation (BSS) technique (BSS-CCA) to decompose the artifact
contaminated EEG into component signals. We combine the BSS-CCA
technique with a wavelet filtering approach to minimize both
ocular and muscle artifacts simultaneously, and refer to the proposed
method as wavelet-enhanced BSS-CCA. In this approach, after
careful visual inspection, the muscle artifact components are
discarded and ocular artifact components are subjected to wavelet
filtering to retain high frequency cerebral information, and then clean
EEG is reconstructed. The performance of the proposed wavelet
enhanced BSS-CCA method is tested on real EEG recordings
contaminated with ocular and muscle artifacts, for which power
spectral density is used as a quantitative measure. Our results suggest
that the proposed hybrid approach minimizes ocular and muscle
artifacts effectively, minimally affecting underlying cerebral activity
in EEG recordings.
Abstract: The work presented in this paper focuses on Knowledge Management services that enable CSCW (Computer Supported Cooperative Work) applications to provide appropriate adaptation to the user and the situation in which the user is working. In this paper, we explain how a knowledge management system can be designed to support users in different situations by exploiting contextual data, users' preferences, and profiles of the involved artifacts (e.g., documents, multimedia files, mockups...). The presented work is rooted in the experience we gained in the MILK project and the early steps made in the MAIS project.
Abstract: In this work, we present, to the best of our
knowledge for the first time, an efficient digital watermarking scheme for MPEG
Audio Layer 3 (MP3) files that operates directly in the compressed data domain,
while manipulating the time and subband/channel domains. In
addition, it does not need the original signal to detect the watermark.
Our scheme was implemented taking special care for the efficient
usage of the two limited resources of computer systems: time and
space. It offers the industrial user the capability of watermark
embedding and detection in a time directly comparable to the real
playing time of the original audio file, which depends on the MPEG
compression, while the end user/audience does not face any artifacts
or delays when hearing the watermarked audio file. Furthermore, it
overcomes the disadvantage of algorithms operating in the PCM data
domain, namely their vulnerability to compression/recompression attacks,
as it places the watermark in the scale factors domain and not in the
digitized audio data. The strength of our scheme, which allows it
to be used successfully in both authentication and copyright
protection, relies on the fact that users can establish
ownership of the audio file not simply by detecting the bit pattern
that comprises the watermark itself, but by showing that the legal
owner knows a hard-to-compute property of the watermark.
Abstract: Much has been written about the difficulties students
have with producing traditional dissertations. This includes both
native English speakers (L1) and students with English as a second
language (L2). The main emphasis of these papers has been on the
structure of the dissertation, but in all cases, even when electronic
versions are discussed, the dissertation is still in what most would
regard as a traditional written form.
Master of Science Degrees in computing disciplines require
students to gain technical proficiency and apply their knowledge to a
range of scenarios. The basis of this paper is that if a dissertation is a
means of showing that such a student has met the criteria for a pass,
which should be based on the learning outcomes of the dissertation
module, does meeting those outcomes require a student to
demonstrate their skills in a solely text based form, particularly in a
highly technical research project? Could it be possible for a student
to produce a series of related artifacts which form a cohesive package
that meets the learning outcomes of the dissertation?
Abstract: To distinguish small retinal hemorrhages in early
diabetic retinopathy from dust artifacts, we analyzed the hue, lightness,
and saturation (HLS) color space. The fundus of 5 patients with
diabetic retinopathy was photographed. For the initial experiment, we
placed 4 different colored papers on the ceiling of a darkroom. Using
each color, 10 fragments of house dust particles on a magnifier were
photographed. The colored papers were removed, and 3 different
colored light bulbs were suspended from the ceiling. Ten fragments of
house dust particles on the camera's objective lens were photographed.
We then constructed an experimental device that can photograph
artificial eyes. Five fragments of house dust particles under the ocher
fundus of the artificial eye were photographed. On analyzing the HLS
color space of the dust artifacts, lightness and saturation were found to
be highly sensitive. However, hue was not.
Abstract: Image interpolation is a common problem in imaging applications. However, most existing interpolation algorithms suffer, to some extent, from the visual effects of blurred edges and jagged artifacts in the image. This paper presents an adaptive feature-preserving bidirectional flow process, where an inverse diffusion is performed to sharpen edges along the normal directions to the isophote lines (edges), while a normal diffusion is done to remove artifacts ("jaggies") along the tangent directions. In order to preserve image features such as edges, corners, and textures, the nonlinear diffusion coefficients are locally adjusted according to the directional derivatives of the image. Experimental results on synthetic images and natural images demonstrate that our interpolation algorithm substantially improves the subjective quality of the interpolated images over conventional interpolations.
Abstract: This paper describes an automatic algorithm to restore
the shape of three-dimensional (3D) left ventricle (LV) models created
from magnetic resonance imaging (MRI) data using a geometry-driven
optimization approach. Our basic premise is to restore the LV shape
such that the LV epicardial surface is smooth after the restoration. A
geometrical measure known as the Minimum Principle Curvature (κ2)
is used to assess the smoothness of the LV. This measure is used to
construct the objective function of a two-step optimization process.
The objective of the optimization is to achieve a smooth epicardial
shape by iterative in-plane translation of the MRI slices.
Quantitatively, this yields a minimum sum of the magnitudes
of κ2 over the regions where κ2 is negative. A limited-memory quasi-Newton algorithm,
L-BFGS-B, is used to solve the optimization problem. We tested our
algorithm on an in vitro theoretical LV model and 10 in vivo
patient-specific models which contain significant motion artifacts. The
results show that our method is able to automatically restore the shape
of LV models back to smoothness without altering the general shape of
the model. The magnitudes of in-plane translations are also consistent
with existing registration techniques and experimental findings.
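The bounded quasi-Newton step described above can be sketched generically with SciPy's L-BFGS-B interface. Here a toy smoothness objective, the sum of squared second differences of per-slice in-plane offsets, stands in for the paper's κ2-based curvature measure, and the ±10 mm bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated in-plane (x, y) mis-registration of 10 short-axis MRI slices.
rng = np.random.default_rng(42)
offsets0 = rng.normal(scale=2.0, size=(10, 2)).ravel()

def roughness(offsets):
    """Toy stand-in for the curvature objective: penalize non-smooth
    slice-to-slice variation of the in-plane translations."""
    o = offsets.reshape(-1, 2)
    second_diff = o[2:] - 2 * o[1:-1] + o[:-2]
    return np.sum(second_diff ** 2)

# Bounded limited-memory quasi-Newton minimization, as with the paper's
# use of L-BFGS-B on the in-plane slice translations.
res = minimize(roughness, offsets0, method="L-BFGS-B",
               bounds=[(-10.0, 10.0)] * offsets0.size)
```

After optimization the slice offsets lie on a smooth trajectory (the objective is driven to near zero), which mirrors the paper's goal of restoring epicardial smoothness without constraining the overall shape.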