Abstract: An image compression method has been developed
using a fuzzy edge image together with the basic Block Truncation Coding
(BTC) algorithm. The fuzzy edge image has been validated against classical
edge detectors, with the well-known Canny edge detector as the benchmark,
prior to its use in the proposed method. The
bit plane generated by the conventional BTC method is replaced with
the fuzzy bit plane generated by the logical OR operation between
the fuzzy edge image and the corresponding conventional BTC bit
plane. The input image is encoded with the block mean and standard
deviation and the fuzzy bit plane. The proposed method has been
tested on 8 bits/pixel test images of size 512×512 and yields a better
Peak Signal to Noise Ratio (PSNR) than the conventional BTC and
adaptive bit plane selection BTC (ABTC) methods. The raggedness,
jagged appearance, and ringing artifacts at sharp edges are greatly
reduced in the images reconstructed by the proposed method with the
fuzzy bit plane.
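As a point of reference, the following minimal NumPy sketch illustrates block-wise BTC with an optional edge bit plane OR-ed into the conventional BTC bit plane, as described above; the 4×4 block size, the binary edge map, and the reconstruction from the stored block mean and standard deviation are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

def btc_encode_decode(img, edge_bits=None, block=4):
    """Minimal BTC sketch (illustrative, not the authors' code).
    Each block is encoded by its mean, standard deviation and a bit plane;
    if a binary edge map is supplied, it is OR-ed with the BTC bit plane."""
    img = img.astype(float)
    out = np.empty_like(img)
    H, W = img.shape
    for i in range(0, H, block):
        for j in range(0, W, block):
            blk = img[i:i+block, j:j+block]
            m, s = blk.mean(), blk.std()
            bits = blk >= m                            # conventional BTC bit plane
            if edge_bits is not None:                  # fuzzy bit plane: OR with edge map
                bits = bits | (edge_bits[i:i+block, j:j+block] > 0)
            q, n = bits.sum(), bits.size
            if q == 0 or q == n:                       # flat block: reproduce the mean
                out[i:i+block, j:j+block] = m
                continue
            lo = m - s * np.sqrt(q / (n - q))          # value assigned to 0-bits
            hi = m + s * np.sqrt((n - q) / q)          # value assigned to 1-bits
            out[i:i+block, j:j+block] = np.where(bits, hi, lo)
    return out
```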
Abstract: The social ideology, cultural values, and principles that shape an environment can be inferred from the setting and structural characteristics of a built site. Conversely, a built work also expresses the ideology and culture of its origins, applies their principles and values, and thereby plays an important role in cultural transformation. All human behaviors and artifacts are shaped by culture. Culture is not an abstract concept; it is the spiritual domain in which individuals and societies grow and develop. Social behaviors are affected by how the environment is perceived, so a work of architecture influences its audience, and it is the environment that fosters social behaviors. Indeed, sustainable architecture should be regarded as a cultural foundation for establishing an optimal sustainable culture. Since architecture without identity is rooted in cultural non-identity and abnormality, a society retains its identity and vitality only as long as this foundation holds, and both society and architecture change as lifestyles are transformed. This article aims to investigate the interaction of architecture, society, and environment, and the formation of sustainable architecture on its cultural basis, and analyzes the results with respect to behavior and sustainable culture in the present era.
Abstract: Removing noise from processed images is very important, and it should be done in such a way that important image information is preserved. A decision-based nonlinear algorithm for the elimination of band lines, drop lines, marks, band loss, and impulses in images is presented in this paper. The algorithm performs two simultaneous operations, namely, detection of corrupted pixels and evaluation of new pixels to replace the corrupted ones. These artifacts are removed without damaging edges and details. However, the restricted window size renders the median operation less effective when noise is excessive; in that case the proposed algorithm automatically switches to mean filtering. The performance of the algorithm is analyzed in terms of Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Signal-to-Noise Ratio Improved (SNRI), Percentage of Noise Attenuated (PONA), and Percentage of Spoiled Pixels (POSP), and compared with standard algorithms already in use, demonstrating the improved performance of the proposed algorithm. The advantage of the proposed algorithm is that a single algorithm can replace the several independent algorithms otherwise required for the removal of different artifacts.
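The switching behaviour described above can be pictured with the following hedged Python sketch; the impulse detector, window size, and switching threshold are illustrative assumptions rather than the paper's exact rules.

```python
import numpy as np

def decision_based_filter(img, win=3, noise_frac=0.75):
    """Illustrative decision-based filter: pixels flagged as impulses
    (salt/pepper extremes) are replaced by the median of the uncorrupted
    neighbours; if too many neighbours are corrupted, the filter falls
    back to the window mean."""
    img = img.astype(float)
    out = img.copy()
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    corrupted = (img == 0) | (img == 255)              # simple impulse detector
    for i, j in zip(*np.nonzero(corrupted)):
        window = padded[i:i+win, j:j+win]
        good = window[(window > 0) & (window < 255)]   # uncorrupted neighbours
        if good.size >= (1 - noise_frac) * window.size:
            out[i, j] = np.median(good)                # enough clean pixels: median
        else:
            out[i, j] = window.mean()                  # excessive noise: mean filter
    return out
```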
Abstract: Discrete Cosine Transform (DCT) based transform coding is very popular in image, video, and speech compression due to its good energy compaction and decorrelating properties. However, at low bit rates, the reconstructed images generally suffer from visually annoying blocking artifacts as a result of coarse quantization. The lapped transform was proposed as an alternative to the DCT with reduced blocking artifacts and increased coding gain. Lapped transforms are popular for their good performance, robustness against oversmoothing, and the availability of fast implementation algorithms. However, no proper study has been reported in the literature regarding the statistical distributions of block Lapped Orthogonal Transform (LOT) and Lapped Biorthogonal Transform (LBT) coefficients. This study performs two goodness-of-fit tests, the Kolmogorov-Smirnov (KS) test and the chi-square (χ²) test, to determine the distribution that best fits the LOT and LBT coefficients. The experimental results show that the distribution of a majority of the significant AC coefficients can be modeled by the Generalized Gaussian distribution. Knowledge of the statistical distribution of transform coefficients greatly helps in the design of optimal quantizers that may lead to minimum distortion and hence achieve optimal coding efficiency.
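As an illustration of the goodness-of-fit procedure, the sketch below fits a Generalized Gaussian to one AC coefficient position collected across image blocks and applies the KS test; a block DCT stands in for the LOT/LBT, which is an assumption of this sketch.

```python
import numpy as np
from scipy import stats
from scipy.fft import dctn

def ks_fit_ggd(image, pos=(0, 1), block=8):
    """Gather one AC coefficient across all blocks and test how well a
    Generalized Gaussian (scipy's gennorm) fits it via the KS test."""
    H, W = image.shape
    coeffs = []
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            c = dctn(image[i:i+block, j:j+block].astype(float), norm='ortho')
            coeffs.append(c[pos])                      # one AC coefficient per block
    coeffs = np.asarray(coeffs)
    beta, loc, scale = stats.gennorm.fit(coeffs)       # generalized Gaussian fit
    ks_stat, p_value = stats.kstest(coeffs, 'gennorm', args=(beta, loc, scale))
    return beta, ks_stat, p_value
```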
Abstract: In the framework of image compression by
Wavelet Transforms, we propose a perceptual method that
incorporates Human Visual System (HVS) characteristics in the
quantization stage. Indeed, human eyes do not have equal sensitivity
across the frequency bandwidth. Therefore, the clarity of the
reconstructed images can be improved by weighting the quantization
according to the Contrast Sensitivity Function (CSF), and visual
artifacts at low bit rates are minimized. To evaluate our method, we use
the Peak Signal to Noise Ratio (PSNR) and a new evaluation criterion
which takes visual factors into account. The experimental results
illustrate that our technique improves image quality at
the same compression ratio.
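A minimal sketch of CSF-weighted quantization is given below, assuming a per-level weight table (the values here are placeholders, not the CSF weights used in the paper) and the PyWavelets API.

```python
import numpy as np
import pywt

def csf_weighted_quantize(image, base_step=8.0, levels=3, wavelet='bior4.4'):
    """Quantize each detail subband with a step scaled by a per-level weight
    standing in for the Contrast Sensitivity Function: the finest (highest
    frequency) level gets the coarsest step."""
    csf_weight = {1: 0.6, 2: 0.8, 3: 1.0}             # illustrative weights only
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    quantized = [coeffs[0]]                            # keep the approximation band
    for lvl, details in enumerate(coeffs[1:], start=1):
        level_from_finest = levels - lvl + 1
        step = base_step / csf_weight.get(level_from_finest, 1.0)
        quantized.append(tuple(np.round(d / step) * step for d in details))
    rec = pywt.waverec2(quantized, wavelet)
    return rec[:image.shape[0], :image.shape[1]]
```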
Abstract: The frontal area in the brain is known to be involved in
behavioral judgement. Because a Kanji character can be discriminated
from other characters both visually and linguistically, we hypothesized
that in Kanji character discrimination, frontal event-related potential
(ERP) waveforms reflect two discrimination processes in separate
time periods: one based on visual analysis and the other based
on lexical access. To examine this hypothesis, we recorded ERPs
while subjects performed a Kanji lexical decision task. In this task, either a
known Kanji character, an unknown Kanji character or a symbol was
presented and the subject had to report if the presented character was
a known Kanji character for the subject or not. The same response
was required for unknown Kanji trials and symbol trials. As a preprocessing
step, we examined a method using independent component analysis
for artifact rejection, found it to be effective, and therefore adopted it.
In the ERP results, there were two time periods in which the frontal
ERP waveforms differed significantly between the unknown Kanji trials
and the symbol trials: around 170 ms and around 300 ms after stimulus
onset. This result supported our hypothesis. In addition, it suggests
that lexical access to Kanji characters may be fully completed by around
260 ms after stimulus onset.
Abstract: Cardiac pulse-related artifacts in the EEG recorded
simultaneously with fMRI are complex and highly variable. Their
effective removal is an unsolved problem. Our aim is to develop an
adaptive removal algorithm based on the matching pursuit (MP)
technique and to compare it to established methods using a visual
evoked potential (VEP). We recorded the VEP inside the static
magnetic field of an MR scanner (with artifacts) as well as in an
electrically shielded room (artifact free). The MP-based artifact
removal outperformed average artifact subtraction (AAS) and
optimal basis set removal (OBS) in terms of restoring the EEG field
map topography of the VEP. Subsequently, a dipole model was fitted
to the VEP under each condition using a realistic boundary element
head model. The source location of the VEP recorded inside the MR
scanner was closest to that of the artifact free VEP after cleaning
with the MP-based algorithm as well as with AAS. While none of the
tested algorithms offered complete removal, MP showed promising
results due to its ability to adapt to variations of latency, frequency
and amplitude of individual artifact occurrences while still utilizing a
common template.
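For orientation, a generic matching pursuit step can be sketched as follows; the dictionary of unit-norm artifact templates and the fixed number of atoms are assumptions of this sketch and not the authors' adaptive algorithm.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy matching pursuit: pick the dictionary atom with the largest
    correlation to the residual and subtract its contribution.  Columns of
    `dictionary` are assumed to be unit-norm atoms."""
    residual = signal.astype(float).copy()
    approximation = np.zeros_like(residual)
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual         # inner product with each atom
        k = np.argmax(np.abs(correlations))            # best-matching atom
        coeff = correlations[k]
        approximation += coeff * dictionary[:, k]
        residual -= coeff * dictionary[:, k]
    return approximation, residual                     # artifact estimate, cleaned signal
```

In the artifact-removal setting described above, the atoms would be pulse-artifact templates varied in latency, frequency, and amplitude, and the residual would be kept as the cleaned EEG.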
Abstract: The stereophotogrammetry modality is gaining more widespread use in the clinical setting. Registration and visualization of these data, in conjunction with conventional 3D volumetric image modalities, provides virtual human data with textured soft tissue together with internal anatomical and structural information. In this investigation, computed tomography (CT) and stereophotogrammetry data are acquired from four anatomical phantoms and registered using the trimmed iterative closest point (TrICP) algorithm. This paper fully addresses the issue of imaging artifacts around the stereophotogrammetry surface edge, using the registered CT data as a reference. Several iterative algorithms are implemented to automatically identify and remove stereophotogrammetry surface-edge outliers, improving the overall visualization of the combined stereophotogrammetry and CT data. This paper shows that outliers at the surface edge of stereophotogrammetry data can be removed automatically and successfully.
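A hedged sketch of one trimmed-ICP style iteration of the kind named above (nearest-neighbour matching, trimming of the worst residuals, rigid alignment via SVD) is given below; the trim ratio and the use of SciPy's KD-tree are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def tricp_iteration(source, target, trim_ratio=0.8):
    """One trimmed-ICP style iteration: match each source point to its
    nearest target point, keep only the best fraction of matches, and
    estimate the rigid transform from those inliers (Kabsch/SVD)."""
    tree = cKDTree(target)
    dists, idx = tree.query(source)                    # nearest-neighbour matches
    keep = np.argsort(dists)[: int(trim_ratio * len(source))]  # trim worst residuals
    src, tgt = source[keep], target[idx[keep]]
    src_c, tgt_c = src - src.mean(0), tgt - tgt.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)          # rigid alignment via SVD
    if np.linalg.det(Vt.T @ U.T) < 0:                  # avoid reflections
        Vt[-1] *= -1
    R = Vt.T @ U.T
    t = tgt.mean(0) - R @ src.mean(0)
    return (source @ R.T) + t, dists                   # transformed source, residuals
```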
Abstract: The Discrete Wavelet Transform (DWT) has been demonstrated
to be far superior to the earlier Discrete Cosine Transform (DCT) and
standard JPEG in natural as well as medical image compression. Due
to its localization properties in both the spatial and transform domains,
the quantization error introduced in DWT does not propagate
globally as in DCT. Moreover, DWT is a global approach that avoids
the blocking artifacts of JPEG. However, recent reports on natural
image compression have shown the superior performance of
the contourlet transform, a new two-dimensional extension of the
wavelet transform using nonseparable and directional filter banks,
compared to DWT. This is mostly due to the optimality of the contourlet
transform in representing edges that are smooth curves. In this work, we
investigate this fact for medical images, especially CT images,
which has not been reported yet. To do so, we propose a
compression scheme in the transform domain and compare the
performance of DWT and the contourlet transform in terms of PSNR for
different compression ratios (CR) using this scheme. The results
obtained using different types of computed tomography images show
that the DWT still performs well at lower CRs, but the contourlet
transform performs better at higher CRs.
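The comparison scheme, as we read it from the abstract, can be sketched as follows for the DWT branch: transform, retain the largest coefficients for a given compression ratio, reconstruct, and report PSNR. The wavelet choice and the magnitude-based retention rule are assumptions; the contourlet branch would follow the same steps with a contourlet toolbox.

```python
import numpy as np
import pywt

def compress_and_psnr(image, cr=20, wavelet='bior4.4', levels=4):
    """Keep only the largest 1/CR fraction of DWT coefficients and report
    the PSNR of the reconstruction against the original image."""
    img = image.astype(float)
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    arr, slices = pywt.coeffs_to_array(coeffs)
    n_keep = max(1, arr.size // cr)
    thresh = np.sort(np.abs(arr.ravel()))[-n_keep]     # n_keep largest magnitudes
    arr = np.where(np.abs(arr) >= thresh, arr, 0.0)
    rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices,
                                             output_format='wavedec2'), wavelet)
    rec = rec[:img.shape[0], :img.shape[1]]
    mse = np.mean((img - rec) ** 2)
    return 10 * np.log10(255.0 ** 2 / max(mse, 1e-12))
```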
Abstract: Artifacts are among the most important factors
degrading CT image quality, and they play an important role in
diagnostic accuracy. In this paper, some artifacts that typically appear in
spiral CT are introduced. The different factors that cause these artifacts,
such as the patient, the equipment, and the interpolation algorithm, are
discussed, and new developments and image processing algorithms to
prevent or reduce them are presented.
Abstract: This paper proposes an algorithm which automatically aligns and stitches component medical images (fluoroscopic) with varying degrees of overlap into a single composite image. The alignment method is based on a similarity measure between the component images. As applied here, the technique is intensity based rather than feature based; it works well in domains where feature-based methods have difficulty, yet it is more robust than traditional correlation. Component images are stitched together using a new triangular-averaging-based blending algorithm. The quality of the resultant image is tested for photometric inconsistencies and geometric misalignments. This method cannot correct rotational, scale, and perspective artifacts.
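The blending step can be pictured with the following sketch for two horizontally aligned grayscale strips; the linear ramp weights are an assumption standing in for the paper's triangular averaging.

```python
import numpy as np

def triangular_blend(left, right, overlap):
    """Blend two horizontally aligned images: in the overlap region the
    weight ramps linearly from the left image to the right image."""
    h = left.shape[0]
    out_w = left.shape[1] + right.shape[1] - overlap
    out = np.zeros((h, out_w), dtype=float)
    out[:, :left.shape[1] - overlap] = left[:, :-overlap]
    out[:, left.shape[1]:] = right[:, overlap:]
    w = np.linspace(0.0, 1.0, overlap)                 # triangular (ramp) weights
    blend = (1 - w) * left[:, -overlap:] + w * right[:, :overlap]
    out[:, left.shape[1] - overlap:left.shape[1]] = blend
    return out
```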
Abstract: Image interpolation is a common problem in imaging applications. However, most existing interpolation algorithms suffer to some extent from the visual effects of blurred edges and jagged artifacts in the image. This paper presents an adaptive feature-preserving bidirectional flow process, in which an inverse diffusion is performed to enhance edges along the directions normal to the isophote lines (edges), while a normal diffusion is applied along the tangent directions to remove artifacts ("jaggies"). In order to preserve image features such as edges, angles, and textures, the nonlinear diffusion coefficients are locally adjusted according to the first- and second-order directional derivatives of the image. Experimental results on synthetic and natural images demonstrate that our interpolation algorithm substantially improves the subjective quality of the interpolated images over conventional interpolation.
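A simplified, fixed-coefficient sketch of such a bidirectional flow is given below; the locally adaptive coefficients driven by directional derivatives are omitted, so this only illustrates the smoothing-along / sharpening-across structure, not the paper's scheme.

```python
import numpy as np

def bidirectional_flow(img, n_iter=30, dt=0.1, k=0.05):
    """Fixed-coefficient bidirectional flow: forward diffusion along edge
    (tangent) directions removes jaggies, a mild inverse diffusion across
    edges (normal direction) sharpens them.  Wrap-around boundaries for brevity."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        ux = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / 2.0
        uy = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / 2.0
        uxx = np.roll(u, -1, axis=1) - 2.0 * u + np.roll(u, 1, axis=1)
        uyy = np.roll(u, -1, axis=0) - 2.0 * u + np.roll(u, 1, axis=0)
        uxy = (np.roll(np.roll(u, -1, axis=0), -1, axis=1)
               - np.roll(np.roll(u, -1, axis=0), 1, axis=1)
               - np.roll(np.roll(u, 1, axis=0), -1, axis=1)
               + np.roll(np.roll(u, 1, axis=0), 1, axis=1)) / 4.0
        grad2 = ux ** 2 + uy ** 2 + 1e-8
        u_nn = (ux**2 * uxx + 2*ux*uy*uxy + uy**2 * uyy) / grad2   # across edges
        u_tt = (uy**2 * uxx - 2*ux*uy*uxy + ux**2 * uyy) / grad2   # along edges
        u = u + dt * (u_tt - k * u_nn)     # smooth along edges, sharpen across them
    return u
```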
Abstract: The growing interest in national heritage
preservation has led to intensive efforts in the digital documentation of
cultural heritage knowledge. Encapsulated within this effort is the
focus on ontology development to help facilitate the
organization and retrieval of that knowledge. Ontologies in the
cultural heritage domain relate to archive, museum, and library
information such as archaeology, artifacts, paintings, etc. The growth
in the number and size of ontologies indicates the wide acceptance of
semantic enrichment in many emerging applications. Nowadays,
many heritage information systems are available for access;
among them is a community-based e-museum designed to support
digital cultural heritage preservation. This work extends a previous
effort in developing the Traditional Malay Textile (TMT) Knowledge
Model, which was designed with the intention of auxiliary
mapping to CIDOC CRM. Due to its internal constraints, the
model needs to be transformed in advance. This paper addresses this
issue by reviewing previous harmonization works with CIDOC
CRM as exemplars for refining the facets of the model, particularly
those involving the TMT-Artifact class. The result is an extensible model
which could lead to a common view for automated mapping to
CIDOC CRM. Hence, it promotes the integration and exchange of
textile information, especially batik-related information, between
communities in e-museum applications.
Abstract: The electrical potentials generated during eye movements and blinks are one of the main sources of artifacts in Electroencephalogram (EEG) recordings and can propagate widely across the scalp, masking and distorting brain signals. In recent times, signal separation algorithms have been widely used for removing artifacts from observed EEG data. In this paper, a recently introduced signal separation algorithm, Mutual Information based Least dependent Component Analysis (MILCA), is employed to separate ocular artifacts from the EEG. The aim of MILCA is to minimize the Mutual Information (MI) between the independent components (estimated sources) under a pure rotation. The performance of this algorithm is compared with that of eleven popular algorithms (Infomax, Extended Infomax, Fast ICA, SOBI, TDSEP, JADE, OGWE, MS-ICA, SHIBBS, Kernel-ICA, and RADICAL) in terms of the actual independence and uniqueness of the estimated source components obtained for different sets of EEG data with ocular artifacts, using a reliable MI estimator. Results show that MILCA performs best in separating the ocular artifacts from the EEG and is recommended for further analysis.
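To picture the separation-and-rejection workflow, the sketch below uses FastICA (as a stand-in for MILCA, which is not assumed to be available) and an EOG reference channel to flag ocular components; the correlation threshold is an assumption of this sketch.

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_ocular_artifacts(eeg, eog, threshold=0.7):
    """Decompose the EEG into independent components, zero out components
    highly correlated with the EOG reference, and reconstruct.
    `eeg` is (n_channels, n_samples), `eog` is (n_samples,)."""
    ica = FastICA(n_components=eeg.shape[0], random_state=0)
    sources = ica.fit_transform(eeg.T)                 # (n_samples, n_components)
    corr = np.array([abs(np.corrcoef(sources[:, k], eog)[0, 1])
                     for k in range(sources.shape[1])])
    sources[:, corr > threshold] = 0.0                 # reject ocular components
    return ica.inverse_transform(sources).T            # back to (n_channels, n_samples)
```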
Abstract: This paper proposes a dual tree complex wavelet transform (DT-CWT) based directional interpolation scheme for noisy images. The problems of denoising and interpolation are modelled as the estimation of the noiseless and missing samples under the same optimal-estimation framework. Initially, DT-CWT is used to decompose an input low-resolution (LR) noisy image into low- and high-frequency subbands. The high-frequency subband images are interpolated by linear minimum mean-square error estimation (LMMSE) based interpolation, which preserves the edges of the interpolated images. For each noisy LR image sample, we compute multiple estimates along different directions and then fuse those directional estimates into a more accurate denoised LR image. The estimation parameters calculated during denoising can be readily used to interpolate the missing samples. The inverse DT-CWT is applied to the denoised input and the interpolated high-frequency subband images to obtain the high-resolution image. Compared with conventional schemes that perform denoising and interpolation in tandem, the proposed DT-CWT based noisy image interpolation method reduces many noise-caused interpolation artifacts and preserves the image edge structures well. The visual and quantitative results show that the proposed technique outperforms many existing denoising and interpolation methods.
Abstract: The EEG signal is one of the oldest measures of brain
activity and has been used extensively for clinical diagnosis and
biomedical research. However, EEG signals are highly
contaminated by various artifacts, originating both from the subject and
from equipment interference. Among these various kinds of artifacts,
ocular noise is the most important one. Since many applications, such
as BCI, require online and real-time processing of the EEG signal, it is
ideal if artifact removal is performed in an online fashion.
Recently, some methods for online ocular artifact removal have
been proposed. One of these methods is ARMAX modeling of the EEG
signal. This method assumes that the recorded EEG signal is a
combination of EOG artifacts and the background EEG; the
background EEG is then estimated via estimation of the ARMAX parameters.
The other recently proposed method is based on adaptive filtering.
This method uses the EOG signal as the reference input and subtracts
EOG artifacts from the recorded EEG signals. In this paper we
investigate the efficiency of each method for removing EOG
artifacts and compare the two. We conclude from this comparison
that the adaptive filtering method yields better results than
ARMAX modeling.
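The adaptive filtering approach can be sketched with a standard LMS filter that uses the EOG channel as the reference input; the filter order and step size below are assumptions, not values from the paper.

```python
import numpy as np

def lms_eog_removal(eeg, eog, order=3, mu=0.01):
    """LMS adaptive filter: the EOG channel is the reference input, its
    filtered version is subtracted from the recorded EEG, and the error
    signal is taken as the cleaned (background) EEG."""
    n = len(eeg)
    w = np.zeros(order)                        # adaptive filter weights
    cleaned = np.zeros(n)
    for t in range(order, n):
        x = eog[t-order:t][::-1]               # most recent EOG samples
        y = w @ x                              # estimated ocular contribution
        e = eeg[t] - y                         # error = estimated background EEG
        w += 2 * mu * e * x                    # LMS weight update
        cleaned[t] = e
    return cleaned
```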
Abstract: Ontology-based modelling of multi-formatted
software application content is a challenging area in content
management. When the number of software content units is huge and
they are in a continuous process of change, content change management is
important. The management of content in this context requires
targeted access and manipulation methods. We present a novel
approach to deal with model-driven content-centric information
systems and access to their content. At the core of our approach is an
ontology-based semantic annotation technique for diversely
formatted content that can improve the accuracy of access and
systems evolution. Domain ontologies represent domain-specific
concepts and conform to metamodels. Different ontologies - from
application domain ontologies to software ontologies - capture and
model the different properties and perspectives on a software content
unit. Interdependencies between domain ontologies, the artifacts and
the content are captured through a trace model. The annotation traces
are formalised, and a graph-based system is selected for their
representation.
Abstract: Image quality assessment traditionally needs the
original image as a reference. Conventional assessment methods
such as Mean Square Error (MSE) or Peak Signal to Noise Ratio (PSNR)
are invalid when no reference is available. In this paper, we present a new
No-Reference (NR) assessment of image quality using blur and noise.
Recent camera applications provide high-quality images with the help of
a digital Image Signal Processor (ISP). Since the images taken by
high-performance digital cameras have few blocking and ringing
artifacts, we focus only on blur and noise for predicting the
objective image quality. The experimental results show that the
proposed assessment method correlates highly with the subjective
Difference Mean Opinion Score (DMOS). Furthermore, the proposed
method has a very low computational load in the spatial domain and
extracts characteristics similar to those used in human perceptual assessment.
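A hedged sketch of how blur and noise might be measured without a reference is given below; the re-blur gradient ratio and the MAD of a high-pass residual are simple surrogates, not the paper's measures.

```python
import numpy as np
from scipy import ndimage

def blur_noise_score(img):
    """No-reference surrogates: blur from the loss of gradient energy after a
    light re-blur, noise from the median absolute deviation (MAD) of a
    high-pass residual.  The two values can then be combined into a score."""
    img = img.astype(float)
    grad = np.hypot(*np.gradient(img))
    grad_smooth = np.hypot(*np.gradient(ndimage.gaussian_filter(img, 1.0)))
    blur = 1.0 - grad_smooth.sum() / (grad.sum() + 1e-8)   # re-blur based estimate
    residual = img - ndimage.median_filter(img, 3)
    noise = 1.4826 * np.median(np.abs(residual))           # robust noise level (MAD)
    return blur, noise          # e.g. quality = 1 / (1 + a*blur + b*noise)
```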
Abstract: Transpedicular screw fixation in spinal fractures,
degenerative changes, or deformities is a well-established procedure.
However, substantial rates of fixation failure due to screw bending,
loosening, or pullout are still reported, particularly in weak
osteoporotic bone stock. To overcome this problem, the mechanism of failure
has to be fully investigated in vitro. Post-mortem human subjects are not
readily accessible, and animal cadavers have limitations owing to their different
geometry and mechanical properties. Therefore, the development of a
synthetic model mimicking the realistic human vertebra is in high
demand. A bone surrogate composed of Polyurethane (PU) foam,
analogous to the porous structure of cancellous bone, was tested at three
different densities in this study. The mechanical properties were
investigated under uniaxial compression testing while minimizing end
artifacts on the specimens. The results indicated that PU foam with a density
of 0.32 g·cm⁻³ has mechanical properties comparable to human
cancellous bone in terms of Young's modulus and yield strength.
Therefore, the obtained information can be considered a primary
step toward developing a realistic cancellous bone model of the human
vertebral body. Further evaluations are also recommended for the other
density groups.
Abstract: We constructed a noise-reduction method for
JPEG-compressed images based on Bayesian inference using the
maximizer of the posterior marginal (MPM) estimate. In this method,
we tried the MPM estimate using two kinds of likelihood for
grayscale images that have been converted into JPEG-compressed
images through lossy JPEG compression. One is a
deterministic model of the likelihood and the other is a probabilistic
one expressed by the Gaussian distribution. Then, using Monte
Carlo simulation for grayscale images, such as the 256-grayscale
standard image "Lena" with 256 × 256 pixels, we examined the
performance of the MPM estimate with a performance measure
based on the mean square error. We clarified that the MPM estimate via
the Gaussian probabilistic model of the likelihood is effective for
reducing noise, such as blocking artifacts and mosquito noise,
if the parameters are set appropriately. On the other hand, we found that
the MPM estimate via the deterministic model of the likelihood is not
effective for noise reduction due to the low acceptance ratio of the
Metropolis algorithm.
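For reference, the MPM estimate discussed above can be stated compactly as follows, with σ the original grayscale image, τ the JPEG-compressed observation, and the posterior formed from the likelihood and an image prior:

```latex
\hat{\sigma}_i
  = \operatorname*{arg\,max}_{\sigma_i} \; P(\sigma_i \mid \tau),
\qquad
P(\sigma_i \mid \tau)
  = \sum_{\{\sigma_j \,:\, j \neq i\}} P(\sigma \mid \tau),
\qquad
P(\sigma \mid \tau) \propto P(\tau \mid \sigma)\, P(\sigma).
```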