Effectiveness of Contourlet vs Wavelet Transform on Medical Image Compression: a Comparative Study

Discrete Wavelet Transform (DWT) has proven far superior to the earlier Discrete Cosine Transform (DCT) and the standard JPEG in natural as well as medical image compression. Due to its localization properties in both the spatial and transform domains, the quantization error introduced in DWT does not propagate globally as it does in DCT. Moreover, DWT is a global approach that avoids the blocking artifacts of JPEG. However, recent reports on natural image compression have shown that the contourlet transform, an extension of the wavelet transform to two dimensions using nonseparable and directional filter banks, outperforms DWT. This is mostly due to the optimality of the contourlet transform in representing edges when they are smooth curves. In this work, we investigate this for medical images, especially CT images, which has not been reported yet. To do so, we propose a compression scheme in the transform domain and compare the performance of the DWT and the contourlet transform in terms of PSNR for different compression ratios (CR) using this scheme. The results obtained using different types of computed tomography images show that the DWT still performs well at lower CRs, but the contourlet transform performs better at higher CRs.
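A minimal sketch of the transform-domain comparison idea, assuming PyWavelets is available and using simple coefficient thresholding as a stand-in for the actual quantization and coding scheme; the wavelet choice, threshold fractions, and the random placeholder image are illustrative assumptions, and only the DWT branch is shown.

```python
# Sketch: compress an image by keeping only the largest DWT coefficients, then measure PSNR.
import numpy as np
import pywt

def dwt_compress(img, keep_ratio=0.05, wavelet="bior4.4", levels=3):
    """Keep the largest `keep_ratio` fraction of DWT coefficients, zero the rest."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=levels)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep_ratio)
    arr[np.abs(arr) < thresh] = 0.0                      # crude stand-in for quantization/coding
    rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), wavelet)
    return rec[:img.shape[0], :img.shape[1]]

def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

if __name__ == "__main__":
    ct = np.random.randint(0, 256, (256, 256)).astype(float)   # placeholder for a CT slice
    for ratio in (0.10, 0.05, 0.02):                            # smaller ratio ~ higher CR
        print(ratio, round(psnr(ct, dwt_compress(ct, ratio)), 2))
```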

Artifacts in Spiral X-ray CT Scanners: Problems and Solutions

Artifacts are among the most important factors degrading CT image quality and play an important role in diagnostic accuracy. In this paper, some artifacts that typically appear in spiral CT are introduced. The different factors that cause these artifacts, such as the patient, the equipment, and the interpolation algorithm, are discussed, and new developments and image processing algorithms to prevent or reduce them are presented.

Automatic Image Alignment and Stitching of Medical Images with Seam Blending

This paper proposes an algorithm that automatically aligns and stitches component medical (fluoroscopic) images with varying degrees of overlap into a single composite image. The alignment method is based on a similarity measure between the component images. As applied here, the technique is intensity based rather than feature based. It works well in domains where feature-based methods have difficulty, yet it is more robust than traditional correlation. Component images are stitched together using a new triangular-averaging-based blending algorithm. The quality of the resultant image is tested for photometric inconsistencies and geometric misalignments. This method cannot correct rotational, scale, and perspective artifacts.
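The abstract does not give the blending formula, so the sketch below illustrates one plausible reading of triangular-averaging blending: pixel weights ramp linearly across the overlap so each image dominates near its own side of the seam. The purely horizontal overlap, image shapes, and overlap width are assumptions.

```python
# Sketch: blend two already-aligned images whose right/left `overlap` columns coincide.
import numpy as np

def triangular_blend(left, right, overlap):
    """Stitch `left` and `right` horizontally, averaging the overlap with linear (triangular) weights."""
    h = left.shape[0]
    out_w = left.shape[1] + right.shape[1] - overlap
    out = np.zeros((h, out_w), dtype=float)
    out[:, :left.shape[1] - overlap] = left[:, :-overlap]
    out[:, left.shape[1]:] = right[:, overlap:]
    # Triangular weights: 1 -> 0 for the left image, 0 -> 1 for the right image.
    w = np.linspace(1.0, 0.0, overlap)
    out[:, left.shape[1] - overlap:left.shape[1]] = w * left[:, -overlap:] + (1.0 - w) * right[:, :overlap]
    return out
```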

Feature Preserving Image Interpolation and Enhancement Using Adaptive Bidirectional Flow

Image interpolation is a common problem in imaging applications. However, most existing interpolation algorithms suffer visually, to some extent, from blurred edges and jagged artifacts. This paper presents an adaptive, feature-preserving bidirectional flow process in which an inverse diffusion is performed to enhance edges along the directions normal to the isophote lines (edges), while a normal diffusion is performed along the tangent directions to remove artifacts ("jaggies"). In order to preserve image features such as edges, angles, and textures, the nonlinear diffusion coefficients are locally adjusted according to the first- and second-order directional derivatives of the image. Experimental results on synthetic and natural images demonstrate that our interpolation algorithm substantially improves the subjective quality of the interpolated images over conventional interpolation methods.
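A minimal sketch of the bidirectional idea, assuming constant diffusion coefficients instead of the paper's locally adapted ones: the second derivative along the isophote normal is inverse-diffused (sharpening across edges) while the second derivative along the tangent is diffused (smoothing jaggies along edges). Step sizes and iteration counts are illustrative only.

```python
# Sketch: simplified bidirectional flow on an (already interpolated) image.
import numpy as np

def bidirectional_flow(img, iters=20, dt=0.05, c_normal=0.3, c_tangent=1.0, eps=1e-8):
    u = img.astype(float).copy()
    for _ in range(iters):
        uy, ux = np.gradient(u)            # first-order derivatives (rows, cols)
        uyy, uyx = np.gradient(uy)
        uxy, uxx = np.gradient(ux)
        g2 = ux**2 + uy**2 + eps
        # Second derivatives along the isophote normal (u_nn) and tangent (u_tt).
        u_nn = (ux**2 * uxx + 2 * ux * uy * uxy + uy**2 * uyy) / g2
        u_tt = (uy**2 * uxx - 2 * ux * uy * uxy + ux**2 * uyy) / g2
        u += dt * (c_tangent * u_tt - c_normal * u_nn)   # smooth along edges, sharpen across them
    return u
```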

The Traditional Malay Textile (TMT) Knowledge Model: Transformation towards Automated Mapping

The growing interest in national heritage preservation has led to intensive efforts on the digital documentation of cultural heritage knowledge. Encapsulated within this effort is a focus on ontology development to help facilitate the organization and retrieval of the knowledge. Ontologies in the cultural heritage domain relate to archive, museum, and library information such as archaeology, artifacts, paintings, etc. The growth in the number and size of ontologies indicates the wide acceptance of their semantic enrichment in many emerging applications. Nowadays, many heritage information systems are available for access; among others is the community-based e-museum designed to support digital cultural heritage preservation. This work extends a previous effort in developing the Traditional Malay Textile (TMT) Knowledge Model, where the model is designed with the intention of auxiliary mapping with CIDOC CRM. Due to its internal constraints, the model needs to be transformed in advance. This paper addresses the issue by reviewing previous harmonization works with CIDOC CRM as exemplars in refining the facets of the model, particularly those involving the TMT-Artifact class. The result is an extensible model which could lead to a common view for automated mapping with CIDOC CRM. Hence, it promotes the integration and exchange of textile information, especially batik-related information, between communities in e-museum applications.

On Asymptotic Laws and Transfer Processes Enhancement in Complex Turbulent Flows

The lecture presents significant advances in the understanding of the transfer process mechanisms in turbulent separated flows. Based upon experimental data suggesting the governing role of the local pressure gradient generated in the immediate vicinity of the wall in separated flow, as a result of intense instantaneous accelerations induced by large-scale vortex flow structures, similarity laws for mean velocity and temperature, spectral characteristics, and a heat and mass transfer law for turbulent separated flows have been developed. These laws are confirmed by the available experimental data. The results obtained were employed for the analysis of heat and mass transfer in some very complex processes occurring in technological applications, such as impinging jets, heat transfer of cylinders in cross flow and in tube banks, and packed beds, where the processes manifest distinct properties which allow them to be classified under turbulent separated flows. Many facts have received an explanation for the first time.

The U.S. and Central Asia: Religion, Politics, Ideology

Numerous facts evidence the increasing religiosity of the population and the intensification of religious movements in various countries during the last decade of the 20th century. The number of international religious institutions and foundations, religious movements, parties, and sects operating worldwide is increasing as well. Some ethnic and inter-state conflicts are obviously of religious origin. All of this leads a number of analysts to conclude that the religious factor is becoming an important part of international life, including the formation and activities of terrorist organizations. Most of all is said and written about Islam, the world's second largest religion after Christianity, professed according to various estimates by 1.5 billion individuals in 127 countries.

Deep Learning and Virtual Environment

While computers are known to facilitate lower levels of learning, such as rote memorization of facts, measurable through electronically administered and graded multiple-choice, yes/no, and true/false questions, the imparting and measurement of higher-level cognitive skills is more vexing. These skills require more open-ended delivery and answers, and may be more problematic in an entirely virtual environment, notwithstanding advances in technologies such as wikis, blogs, discussion boards, etc. As with the integration of all technology, merit is based more on the instructional design of the course than on the technology employed in and of itself. With this in mind, this study examined the perceptions of online students in an introductory Computer Information Systems course regarding the fostering of various higher-order thinking and team-building skills as a result of the activities, resources, and technologies (ART) used in the course.

Application of Mutual Information based Least dependent Component Analysis (MILCA) for Removal of Ocular Artifacts from Electroencephalogram

The electrical potentials generated during eye movements and blinks are one of the main sources of artifacts in electroencephalogram (EEG) recordings and can propagate across the scalp, masking and distorting brain signals. In recent times, signal separation algorithms have been widely used for removing artifacts from observed EEG data. In this paper, a recently introduced signal separation algorithm, Mutual Information based Least dependent Component Analysis (MILCA), is employed to separate ocular artifacts from EEG. The aim of MILCA is to minimize the Mutual Information (MI) between the independent components (estimated sources) under a pure rotation. The performance of this algorithm is compared with eleven popular algorithms (Infomax, Extended Infomax, Fast ICA, SOBI, TDSEP, JADE, OGWE, MS-ICA, SHIBBS, Kernel-ICA, and RADICAL) for the actual independence and uniqueness of the estimated source components obtained for different sets of EEG data with ocular artifacts, using a reliable MI estimator. Results show that MILCA is best at separating the ocular artifacts from the EEG and is recommended for further analysis.
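MILCA itself is not part of common Python libraries, so the sketch below uses scikit-learn's FastICA purely as a stand-in to show the separate-inspect-remove-reconstruct workflow described above. Identifying the artifact component by correlation with a recorded EOG channel, and the 0.7 threshold, are assumptions, not the paper's criterion.

```python
# Sketch: ICA-style ocular artifact removal (FastICA used as a stand-in for MILCA).
import numpy as np
from sklearn.decomposition import FastICA

def remove_ocular(eeg, eog, n_components=None):
    """eeg: (n_channels, n_samples) array; eog: (n_samples,) reference channel."""
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(eeg.T)              # (n_samples, n_components)
    # Flag components that correlate strongly with the EOG reference (assumed criterion).
    corr = np.array([abs(np.corrcoef(s, eog)[0, 1]) for s in sources.T])
    sources[:, corr > 0.7] = 0.0
    cleaned = sources @ ica.mixing_.T + ica.mean_   # back-project without ocular sources
    return cleaned.T
```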

Complex Wavelet Transform Based Image Denoising and Zooming Under the LMMSE Framework

This paper proposes a dual-tree complex wavelet transform (DT-CWT) based directional interpolation scheme for noisy images. The problems of denoising and interpolation are modelled as estimating the noiseless and missing samples under the same optimal estimation framework. Initially, the DT-CWT is used to decompose an input low-resolution (LR) noisy image into low- and high-frequency subbands. The high-frequency subband images are interpolated by linear minimum mean square estimation (LMMSE) based interpolation, which preserves the edges of the interpolated images. For each noisy LR image sample, we compute multiple estimates along different directions and then fuse those directional estimates to obtain a more accurate denoised LR image. The estimation parameters calculated in the denoising process can be readily used to interpolate the missing samples. The inverse DT-CWT is applied to the denoised input and the interpolated high-frequency subband images to obtain the high-resolution image. Compared with conventional schemes that perform denoising and interpolation in tandem, the proposed DT-CWT based noisy image interpolation method can reduce many noise-caused interpolation artifacts and preserve the image edge structures well. The visual and quantitative results show that the proposed technique outperforms many of the existing denoising and interpolation methods.
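A small sketch of the LMMSE shrinkage step applied to a single noisy wavelet subband, with the local signal variance estimated in a sliding window; the DT-CWT decomposition, the directional estimation, and the fusion of directional estimates described above are not reproduced, and the window size is an assumption.

```python
# Sketch: LMMSE (Wiener-type) shrinkage of one noisy subband, y = x + n with known noise variance.
import numpy as np
from scipy.ndimage import uniform_filter

def lmmse_shrink(subband, noise_var, win=7):
    local_power = uniform_filter(subband**2, size=win)       # local estimate of E[y^2]
    signal_var = np.maximum(local_power - noise_var, 0.0)    # local estimate of E[x^2]
    gain = signal_var / (signal_var + noise_var)             # LMMSE gain per coefficient
    return gain * subband
```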

CSOLAP (Continuous Spatial On-Line Analytical Processing)

Decision support systems are usually based on multidimensional structures which use the concept of a hypercube. Dimensions are the axes along which facts are analyzed; they form a space in which a fact is located by a set of coordinates at the intersections of dimension members. Conventional multidimensional structures deal with discrete facts linked to discrete dimensions. However, when dealing with naturally continuous phenomena, a discrete representation is not adequate. There is a need to integrate spatiotemporal continuity within multidimensional structures to enable the analysis and exploration of continuous field data. The research issues that lead to the integration of spatiotemporal continuity in multidimensional structures are numerous. In this paper, we discuss research issues related to the integration of continuity in multidimensional structures, briefly present a multidimensional model for continuous field data, and define new aggregation operations. The model and the associated operations and measures are validated by a prototype.
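The model and its operators are only summarized above, so the sketch below merely illustrates the underlying idea with hypothetical data: a spatial aggregation over a continuous field is computed on an interpolated surface rather than directly on the discrete facts. The use of scipy's linear interpolation and the grid step are assumptions, not the prototype's method.

```python
# Sketch: aggregating a continuous field by interpolating point samples onto a grid
# and then averaging over a query window (instead of averaging the raw discrete facts).
import numpy as np
from scipy.interpolate import griddata

def continuous_avg(points, values, window, step=1.0):
    """points: (n, 2) sample coordinates; values: (n,) measures;
    window: (xmin, xmax, ymin, ymax) spatial query region."""
    xmin, xmax, ymin, ymax = window
    gx, gy = np.meshgrid(np.arange(xmin, xmax, step), np.arange(ymin, ymax, step))
    field = griddata(points, values, (gx, gy), method="linear")
    return np.nanmean(field)     # ignore grid cells outside the convex hull of the samples
```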

Removing Ocular Artifacts from EEG Signals using Adaptive Filtering and ARMAX Modeling

The EEG signal is one of the oldest measures of brain activity and has been used extensively for clinical diagnosis and biomedical research. However, EEG signals are highly contaminated with various artifacts, both from the subject and from equipment interference. Among these various kinds of artifacts, ocular noise is the most important one. Since many applications, such as BCI, require online and real-time processing of the EEG signal, it is ideal if the removal of artifacts is performed in an online fashion. Recently, some methods for online ocular artifact removal have been proposed. One of these methods is ARMAX modeling of the EEG signal. This method assumes that the recorded EEG signal is a combination of EOG artifacts and the background EEG; the background EEG is then estimated via estimation of the ARMAX parameters. The other recently proposed method is based on adaptive filtering. This method uses the EOG signal as the reference input and subtracts EOG artifacts from the recorded EEG signals. In this paper, we investigate the efficiency of each method for removing EOG artifacts, and a comparison is made between the two methods. Our conclusion from this comparison is that the adaptive filtering method gives better results than ARMAX modeling.
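The abstract does not specify the filter order or adaptation rule, so the sketch below shows a generic LMS adaptive canceller with the EOG channel as the reference input, which is one common realization of the adaptive filtering approach; the order and step size are illustrative assumptions.

```python
# Sketch: LMS adaptive noise cancellation with EOG as reference and EEG as primary input.
import numpy as np

def lms_remove_eog(eeg, eog, order=5, mu=0.01):
    """Return the EEG with the EOG-correlated component subtracted, sample by sample."""
    w = np.zeros(order)
    cleaned = np.zeros_like(eeg, dtype=float)
    for n in range(len(eeg)):
        x = eog[max(0, n - order + 1):n + 1][::-1]     # most recent EOG samples first
        x = np.pad(x, (0, order - len(x)))             # zero-pad missing older samples
        y = w @ x                                      # estimated ocular contribution
        e = eeg[n] - y                                 # error = cleaned EEG sample
        w += 2 * mu * e * x                            # LMS weight update
        cleaned[n] = e
    return cleaned
```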

Performance Analysis of Fuzzy Logic Based Unified Power Flow Controller

FACTS devices are used to control power flow, to increase transmission capacity, and to optimize the stability of the power system. One of the most widely used FACTS devices is the Unified Power Flow Controller (UPFC). The controller used in the control mechanism has a significant effect on controlling the power flow and enhancing the system stability of the UPFC. Accordingly, in this study the capability of the UPFC is observed using different control mechanisms based on P, PI, PID, and fuzzy logic controllers (FLC). The FLC was developed by taking into consideration the Takagi-Sugeno inference system in the decision process and Sugeno's weighted average method in the defuzzification process. Case studies with different operating conditions are applied to prove the ability of the UPFC to control the power flow and the effectiveness of the controllers on the performance of the UPFC. The PSCAD/EMTDC program is used to create the FLC and to simulate the UPFC model.
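The actual rule base and membership functions are implemented in PSCAD/EMTDC and are not given in the abstract, so the sketch below only illustrates the Sugeno weighted-average step for a hypothetical two-input controller (power error and its change); the rules, membership shapes, and consequents are assumptions.

```python
# Sketch: zero-order Takagi-Sugeno inference with Sugeno weighted-average defuzzification.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def flc_output(err, derr):
    # Hypothetical rule base: (membership of err, membership of derr, consequent z_i).
    rules = [
        (tri(err, -1, -1, 0), tri(derr, -1, -1, 0), -0.8),    # both negative -> decrease output
        (tri(err, -0.5, 0, 0.5), tri(derr, -0.5, 0, 0.5), 0.0),
        (tri(err, 0, 1, 1), tri(derr, 0, 1, 1), 0.8),         # both positive -> increase output
    ]
    w = np.array([min(m1, m2) for m1, m2, _ in rules])        # rule firing strengths
    z = np.array([zi for _, _, zi in rules])
    return float(np.sum(w * z) / (np.sum(w) + 1e-12))         # Sugeno weighted average
```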

Ontology-based Domain Modelling for Consistent Content Change Management

Ontology-based modelling of multi-formatted software application content is a challenging area in content management. When the number of software content units is huge and in a continuous process of change, content change management is important. The management of content in this context requires targeted access and manipulation methods. We present a novel approach for dealing with model-driven, content-centric information systems and access to their content. At the core of our approach is an ontology-based semantic annotation technique for diversely formatted content that can improve the accuracy of access and system evolution. Domain ontologies represent domain-specific concepts and conform to metamodels. Different ontologies - from application domain ontologies to software ontologies - capture and model the different properties of, and perspectives on, a software content unit. Interdependencies between the domain ontologies, the artifacts, and the content are captured through a trace model. The annotation traces are formalised, and a graph-based system is selected for the representation of the annotation traces.
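The trace model is only described abstractly above, so the sketch below shows one plausible graph representation of annotation traces using networkx; all node and relation names are hypothetical and are not the formalization used in the paper.

```python
# Sketch: annotation traces between ontology concepts, software artifacts, and content units
# as a typed, directed graph (node and edge names are illustrative only).
import networkx as nx

trace = nx.MultiDiGraph()
trace.add_node("domain:Invoice", kind="domain_concept")
trace.add_node("software:InvoiceService.java", kind="artifact")
trace.add_node("content:invoice_form.xml", kind="content_unit")

trace.add_edge("content:invoice_form.xml", "domain:Invoice", relation="annotatedWith")
trace.add_edge("software:InvoiceService.java", "domain:Invoice", relation="implements")

# Change-impact query: which artifacts and content units reference a concept that changed?
changed = "domain:Invoice"
impacted = [n for n, _ in trace.in_edges(changed)]
print(impacted)
```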

No-Reference Image Quality Assessment using Blur and Noise

Image quality assessment traditionally needs the original image as a reference. Conventional assessment measures such as the Mean Square Error (MSE) or the Peak Signal to Noise Ratio (PSNR) are invalid when there is no reference. In this paper, we present a new No-Reference (NR) assessment of image quality using blur and noise. Recent camera applications provide high quality images with the help of a digital Image Signal Processor (ISP). Since images taken by high-performance digital cameras have few blocking and ringing artifacts, we focus only on blur and noise for predicting the objective image quality. The experimental results show that the proposed assessment method correlates highly with the subjective Difference Mean Opinion Score (DMOS). Furthermore, the proposed method has a very low computational load in the spatial domain and extracts characteristics in a way similar to human perceptual assessment.
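The paper's specific blur and noise measures are not given in the abstract; the sketch below uses two generic spatial-domain stand-ins (Laplacian variance as an inverse blur indicator and a robust estimate of the high-pass residual as a noise indicator) to illustrate the kind of features an NR metric of this type combines.

```python
# Sketch: simple no-reference blur and noise indicators (generic stand-ins, not the paper's metrics).
import numpy as np
from scipy.ndimage import laplace, median_filter

def blur_score(img):
    return float(np.var(laplace(img.astype(float))))       # lower variance -> blurrier image

def noise_score(img):
    residual = img.astype(float) - median_filter(img.astype(float), size=3)
    return float(1.4826 * np.median(np.abs(residual)))     # robust noise sigma estimate
```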

Compressive Properties of a Synthetic Bone Substitute for Vertebral Cancellous Bone

Transpedicular screw fixation in spinal fractures, degenerative changes, or deformities is a well-established procedure. However, a considerable rate of fixation failure due to screw bending, loosening, or pullout is still reported, particularly in the weak bone stock of osteoporotic patients. To overcome this problem, the mechanism of failure has to be fully investigated in vitro. Post-mortem human subjects are less accessible, and animal cadavers have limitations due to different geometry and mechanical properties. Therefore, the development of a synthetic model mimicking the realistic human vertebra is in high demand. A bone surrogate composed of polyurethane (PU) foam, analogous to the porous structure of cancellous bone, was tested at three different densities in this study. The mechanical properties were investigated under uniaxial compression testing while minimizing the end artifacts on the specimens. The results indicated that PU foam with a density of 0.32 g.cm-3 has mechanical properties comparable to human cancellous bone in terms of Young's modulus and yield strength. Therefore, the obtained information can be considered a primary step toward developing a realistic cancellous bone model of the human vertebral body. Further evaluations are also recommended for other density groups.
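A small sketch of how the two reported quantities are typically extracted from a uniaxial compression test, assuming a recorded force-displacement curve; the specimen dimensions, the linear-fit strain window, and the 0.2% offset criterion are generic assumptions, not necessarily the study's protocol.

```python
# Sketch: Young's modulus and 0.2%-offset yield strength from a compression test record.
import numpy as np

def modulus_and_yield(force_N, disp_mm, area_mm2, length_mm, fit_window=(0.005, 0.015)):
    stress = force_N / area_mm2                    # MPa (N/mm^2)
    strain = disp_mm / length_mm                   # dimensionless
    lo, hi = fit_window                            # assumed linear-elastic strain range
    mask = (strain >= lo) & (strain <= hi)
    E = np.polyfit(strain[mask], stress[mask], 1)[0]     # slope of the linear fit = modulus (MPa)
    offset_line = E * (strain - 0.002)                   # 0.2% offset line
    yield_idx = int(np.argmax(stress <= offset_line))    # first crossing (assumes one exists)
    return E, stress[yield_idx]
```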

Generator of Hypotheses an Approach of Data Mining Based on Monotone Systems Theory

The generator of hypotheses is a new method for data mining. It makes it possible to classify the source data automatically and produces a particular enumeration of patterns. A pattern is an expression (in a certain language) describing facts in a subset of facts. The goal is to describe the source data via patterns and/or IF...THEN rules. The evaluation criteria used are deterministic (not probabilistic). The search results are trees, a form that is easy to comprehend and interpret. The generator of hypotheses uses a very effective algorithm based on the theory of monotone systems (MS), named MONSA (MONotone System Algorithm).

Optimal Allocation of FACTS Devices for ATC Enhancement Using Bees Algorithm

In this paper, a novel method using the Bees Algorithm is proposed to determine the optimal allocation of FACTS devices for maximizing the Available Transfer Capability (ATC) of power transactions between source and sink areas in a deregulated power system. The algorithm simultaneously searches over the FACTS locations, FACTS parameters, and FACTS types. Two types of FACTS devices are simulated in this study, namely the Thyristor Controlled Series Compensator (TCSC) and the Static Var Compensator (SVC). A repeated power flow with FACTS devices, including ATC, is used to evaluate the feasible ATC value within real and reactive power generation limits, line thermal limits, voltage limits, and FACTS operation limits. An IEEE 30-bus system is used to demonstrate the effectiveness of the algorithm as an optimization tool to enhance ATC. A Genetic Algorithm technique is used for validation purposes. The results clearly indicate that the introduction of FACTS devices in the right combination of location and parameters can enhance ATC, and that the Bees Algorithm can be efficiently used for this kind of nonlinear integer optimization.
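The repeated power flow and the mixed encoding of FACTS type, location, and setting cannot be reproduced from the abstract, so the sketch below shows only the generic Bees Algorithm search loop on a continuous placeholder objective; `evaluate_atc`, the parameter counts, and all tuning values are assumptions.

```python
# Sketch: generic Bees Algorithm loop; `evaluate_atc` is a placeholder for the
# repeated-power-flow ATC evaluation described in the paper.
import numpy as np

rng = np.random.default_rng(0)

def evaluate_atc(x):
    # Placeholder objective; the real one would run a repeated power flow with FACTS devices.
    return -np.sum((x - 0.5) ** 2)

def bees_algorithm(dim=4, n_scouts=30, n_best=5, n_recruits=10,
                   patch=0.1, iters=100, bounds=(0.0, 1.0)):
    lo, hi = bounds
    sites = rng.uniform(lo, hi, (n_scouts, dim))
    for _ in range(iters):
        fitness = np.array([evaluate_atc(s) for s in sites])
        order = np.argsort(fitness)[::-1]                # best sites first (maximization)
        new_sites = []
        for idx in order[:n_best]:                       # neighbourhood search around best sites
            recruits = np.clip(sites[idx] + rng.uniform(-patch, patch, (n_recruits, dim)), lo, hi)
            scores = [evaluate_atc(r) for r in recruits]
            new_sites.append(recruits[int(np.argmax(scores))])
        scouts = rng.uniform(lo, hi, (n_scouts - n_best, dim))   # remaining bees scout randomly
        sites = np.vstack([new_sites, scouts])
    fitness = np.array([evaluate_atc(s) for s in sites])
    return sites[int(np.argmax(fitness))]
```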

UPFC Supplementary Controller Design Using Real-Coded Genetic Algorithm for Damping Low Frequency Oscillations in Power Systems

This paper presents a systematic approach for designing Unified Power Flow Controller (UPFC) based supplementary damping controllers for damping low frequency oscillations in a single-machine infinite-bus power system. Detailed investigations have been carried out considering the four alternative UPFC based damping controllers, namely the modulating index of the series inverter (mB), the modulating index of the shunt inverter (mE), the phase angle of the series inverter (δB), and the phase angle of the shunt inverter (δE). The design problem of the proposed controllers is formulated as an optimization problem, and a Real-Coded Genetic Algorithm (RCGA) is employed to optimize the damping controller parameters. Simulation results are presented and compared with a conventional method of tuning the damping controller parameters to show the effectiveness and robustness of the proposed design approach.
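The closed-loop simulation of the single-machine infinite-bus system is not reproducible from the abstract, so the sketch below only illustrates the real-coded GA machinery (tournament selection, blend crossover, Gaussian mutation) on a placeholder cost; `damping_cost`, the parameter bounds, and all GA settings are assumptions.

```python
# Sketch: real-coded GA with BLX-alpha crossover and Gaussian mutation.
# `damping_cost` stands in for the time-domain performance index of the damping controller.
import numpy as np

rng = np.random.default_rng(1)

def damping_cost(params):
    # Placeholder: the real cost would come from simulating the power system response.
    return np.sum((params - np.array([10.0, 0.5, 0.1, 0.05])) ** 2)

def rcga(dim=4, pop_size=40, gens=200, alpha=0.5, sigma=0.05,
         lo=np.zeros(4), hi=np.array([100.0, 1.0, 1.0, 1.0])):
    pop = rng.uniform(lo, hi, (pop_size, dim))
    for _ in range(gens):
        cost = np.array([damping_cost(p) for p in pop])
        children = []
        while len(children) < pop_size:
            i, j = rng.integers(0, pop_size, 2), rng.integers(0, pop_size, 2)
            p1 = pop[i[np.argmin(cost[i])]]              # binary tournament selection
            p2 = pop[j[np.argmin(cost[j])]]
            span = np.abs(p1 - p2)
            child = rng.uniform(np.minimum(p1, p2) - alpha * span,
                                np.maximum(p1, p2) + alpha * span)   # BLX-alpha crossover
            child += rng.normal(0.0, sigma, dim) * (hi - lo)         # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.array(children)
    cost = np.array([damping_cost(p) for p in pop])
    return pop[int(np.argmin(cost))]
```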

Maximizer of the Posterior Marginal Estimate for Noise Reduction of JPEG-compressed Image

We constructed a noise reduction method for JPEG-compressed images based on Bayesian inference using the maximizer of the posterior marginal (MPM) estimate. In this method, we tried the MPM estimate using two kinds of likelihood, both of which enhance grayscale images that have been converted into JPEG-compressed images through lossy JPEG compression. One is a deterministic model of the likelihood and the other is a probabilistic one expressed by the Gaussian distribution. Then, using Monte Carlo simulation for grayscale images, such as the 256-grayscale standard image "Lena" with 256 × 256 pixels, we examined the performance of the MPM estimate based on a performance measure using the mean square error. We clarified that the MPM estimate via the Gaussian probabilistic model of the likelihood is effective for reducing noise, such as blocking artifacts and mosquito noise, if the parameters are set appropriately. On the other hand, we found that the MPM estimate via the deterministic model of the likelihood is not effective for noise reduction due to the low acceptance ratio of the Metropolis algorithm.
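A small sketch of the Metropolis machinery that an MPM estimate of this kind relies on, assuming a Gaussian likelihood and a simple quadratic smoothness prior; the paper's actual prior, parameter values, and the way JPEG degradation enters the model are not reproduced, and the posterior marginal is approximated here by the per-pixel sample mean rather than the true marginal maximizer.

```python
# Sketch: single-site Metropolis sampling for a Gaussian likelihood plus quadratic smoothness prior.
import numpy as np

rng = np.random.default_rng(2)

def neighbours_sum(x, i, j):
    h, w = x.shape
    return sum(x[i + di, j + dj] for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
               if 0 <= i + di < h and 0 <= j + dj < w)

def mpm_denoise(y, beta=0.02, sigma2=100.0, sweeps=50, step=16.0):
    """y: degraded grayscale image (float). Returns an approximate MPM estimate."""
    x = y.copy()
    accum = np.zeros_like(y)
    h, w = y.shape
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                prop = x[i, j] + rng.uniform(-step, step)        # proposed new pixel value
                n_cnt = (i > 0) + (i < h - 1) + (j > 0) + (j < w - 1)
                n_sum = neighbours_sum(x, i, j)
                def energy(v):
                    # Gaussian likelihood term + quadratic smoothness prior (constants dropped).
                    return (v - y[i, j]) ** 2 / (2 * sigma2) + beta * (n_cnt * v ** 2 - 2 * v * n_sum)
                if rng.random() < np.exp(min(0.0, energy(x[i, j]) - energy(prop))):
                    x[i, j] = prop                               # Metropolis accept
                accum[i, j] += x[i, j]
        # (burn-in handling omitted for brevity)
    return accum / sweeps      # per-pixel sample mean as a stand-in for the marginal maximizer
```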