Optimal Image Compression Based on Sign and Magnitude Coding of Wavelet Coefficients

The wavelet transform is a very powerful tool for image compression. One of its advantages is that it provides both spatial and frequency localization of image energy. However, wavelet transform coefficients are defined by both a magnitude and a sign. While algorithms exist for efficiently coding the magnitude of the transform coefficients, they are not efficient for coding their sign. It is generally assumed that there is no compression gain to be obtained from coding the sign, and only recently have some authors begun to investigate the sign of wavelet coefficients in image coding. Some authors have assumed that the sign information bit of a wavelet coefficient may be encoded with an estimated probability of 0.5; the same assumption is commonly made for the refinement information bit. In this paper, we propose a new method for Separate Sign Coding (SSC) of wavelet image coefficients. The sign and the magnitude of the coefficients are examined to obtain their online probabilities. We use scalar quantization in which the information of whether a coefficient belongs to the lower or to the upper sub-interval of the uncertainty interval is also examined. We show that the sign information and the refinement information may be encoded with a probability of approximately 0.5 only after about five bit planes. Two maps are entropy encoded separately: the sign map and the magnitude map. The refinement information indicating whether a coefficient belongs to the lower or to the upper sub-interval of the uncertainty interval is also entropy encoded. An algorithm is developed and simulations are performed on three standard grey-scale images: Lena, Barbara and Cameraman. Five decomposition scales are computed using the biorthogonal 9/7 wavelet filter bank. The obtained results are compared to the JPEG2000 standard in terms of peak signal-to-noise ratio (PSNR) for the three images and in terms of subjective (visual) quality. It is shown that the proposed method outperforms JPEG2000. The proposed method is also compared to other codecs in the literature and proves very successful in terms of PSNR.
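
To make the separation concrete, here is a minimal Python sketch, under our own assumptions rather than the paper's implementation, that splits quantized wavelet coefficients into a sign map and a magnitude map and estimates the empirical probability of a negative sign per bit plane:

```python
import numpy as np

def sign_magnitude_maps(coeffs):
    """Split quantized wavelet coefficients into sign and magnitude maps."""
    sign_map = (coeffs < 0).astype(np.uint8)   # 1 where the coefficient is negative
    magnitude_map = np.abs(coeffs)
    return sign_map, magnitude_map

def sign_probability_per_bitplane(coeffs, n_planes=8):
    """Empirical probability of a negative sign among coefficients that
    first become significant in each bit plane (most significant first)."""
    sign_map, mag = sign_magnitude_maps(np.asarray(coeffs, dtype=np.int64))
    probs = []
    for p in range(n_planes - 1, -1, -1):
        newly_significant = (mag >> p) == 1    # most significant bit lies in plane p
        signs = sign_map[newly_significant]
        probs.append(float(signs.mean()) if signs.size else 0.5)
    return probs
```

A probability estimate far from 0.5 in the early bit planes is exactly the situation the paper exploits: an entropy coder fed these online probabilities spends less than one bit per sign.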

Development of a Multiple Intelligences Measurement for Elementary Students

This research aims to develop a Multiple Intelligences Measurement for elementary students. The construct validity testing and norm establishment are based on Gardner's Multiple Intelligences Theory, which comprises eight aspects: linguistics, logic and mathematics, visual-spatial relations, body and movement, music, human relations, self-realization/self-understanding, and nature. The sample consists of elementary school students aged 5-11 years. The sample size, determined from the Yamane table, was 2,504 students, selected by multistage sampling. Basic statistical analysis and construct validity testing were done using confirmatory factor analysis. The research can be summarized as follows: (1) The Multiple Intelligences Measurement, consisting of 120 items, is content-accurate. The internal consistency reliability (Kuder-Richardson method) of the whole measurement equals .91, item difficulty ranges from .39 to .83, and item discrimination ranges from .21 to .85. (2) The measurement exhibits good construct validity: all 8 components and all 120 test items are statistically significant at the .01 level. The chi-square value equals 4357.7 (p = .00) at 244 degrees of freedom, the Goodness of Fit Index equals 1.00, the Adjusted Goodness of Fit Index equals .92, the Comparative Fit Index (CFI) equals .68, the Root Mean Squared Residual (RMR) equals 0.064 and the Root Mean Square Error of Approximation equals 0.82. (3) The norms of the Multiple Intelligences Measurement are categorized into 3 levels: high intelligence above the 78th percentile, moderate intelligence between the 24th and 77.9th percentiles, and low intelligence from the 23.9th percentile downwards.
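
As a concrete reference for the reliability figure reported above, the following short Python sketch (our own illustration; the simulated data are hypothetical) computes the Kuder-Richardson 20 coefficient for a matrix of dichotomously scored items:

```python
import numpy as np

def kr20(scores):
    """Kuder-Richardson 20 reliability for a (respondents x items)
    matrix of dichotomous (0/1) item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    p = scores.mean(axis=0)                      # proportion of correct answers per item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total scores
    return (k / (k - 1.0)) * (1.0 - (p * (1.0 - p)).sum() / total_var)

# Hypothetical usage: 100 simulated respondents on a 120-item test.
rng = np.random.default_rng(0)
demo = (rng.random((100, 120)) < 0.6).astype(int)
print(round(kr20(demo), 2))
```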

Extraction of Data from Web Pages: A Vision Based Approach

With the explosive growth of information sources available on the World Wide Web, it has become increasingly difficult to identify relevant pieces of information, since web pages are often cluttered with irrelevant content such as advertisements, navigation panels and copyright notices surrounding the main content of the page. Hence, tools for mining data regions, data records and data items need to be developed in order to provide value-added services. Currently available automatic techniques to mine data regions from web pages remain unsatisfactory because of their poor performance and tag-dependence. In this paper, a novel method to automatically extract data items from web pages is proposed. It comprises two steps: (1) identification and extraction of data regions based on visual clues; (2) identification of data records and extraction of data items from a data region. For step 1, a novel and more effective method is proposed that finds the data regions formed by all types of tags using visual clues. For step 2, a more effective method, Extraction of Data Items from web Pages (EDIP), is adopted to mine data items. EDIP is a list-based approach in which the list is a linear data structure. The proposed technique is able to mine non-contiguous data records and can correctly identify data regions irrespective of the type of tag in which they are bound. Our experimental results show that the proposed technique performs better than existing techniques.
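
The key point of step 1 is that grouping is driven by rendered geometry rather than by tag structure. The sketch below is a simplification under our own assumptions (the bounding-box records are hypothetical inputs, not the paper's data model); it clusters vertically adjacent elements of similar width into candidate data regions:

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Rendered bounding box of a page element (top-left corner plus size)."""
    x: int
    y: int
    width: int
    height: int

def group_data_regions(boxes, y_gap=20, width_tol=0.1):
    """Cluster elements into candidate data regions: consecutive boxes,
    sorted top to bottom, that are close vertically and similar in width."""
    regions, current = [], []
    for box in sorted(boxes, key=lambda b: b.y):
        if current:
            prev = current[-1]
            close = box.y - (prev.y + prev.height) <= y_gap
            similar = abs(box.width - prev.width) <= width_tol * prev.width
            if close and similar:
                current.append(box)
                continue
            regions.append(current)
        current = [box]
    if current:
        regions.append(current)
    return regions
```

Because the grouping never inspects tag names, a region built from div blocks is found by the same rule as one built from table rows, which is the tag-independence claimed above.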

A Novel Fuzzy Technique for Image Noise Reduction

A new fuzzy filter is presented for the noise reduction of images corrupted with additive noise. The filter consists of two stages. In the first stage, all the pixels of the image are processed to determine the noisy pixels: a fuzzy rule-based system assigns to each pixel a degree, a real number in the range [0, 1], expressing the extent to which the pixel is considered noise-free. In the second stage, another fuzzy rule-based system is employed. It uses the output of the first system to perform fuzzy smoothing by weighting the contributions of neighboring pixel values. Experimental results demonstrate the feasibility of the proposed filter. These results are also compared to other filters by numerical measures and visual inspection.
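
The two stages can be sketched as follows in Python; the triangular membership function and the spread parameter are our own assumptions, chosen only to make the idea concrete:

```python
import numpy as np

def noise_free_degrees(image, spread=30.0):
    """Stage 1 (illustrative): for each pixel, a degree in [0, 1] that it is
    noise-free, based on its deviation from the 3x3 neighborhood median."""
    padded = np.pad(image.astype(float), 1, mode="edge")
    h, w = image.shape
    degrees = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 3, j:j + 3]
            deviation = abs(padded[i + 1, j + 1] - np.median(window))
            degrees[i, j] = max(0.0, 1.0 - deviation / spread)  # triangular membership
    return degrees

def fuzzy_smooth(image, spread=30.0):
    """Stage 2 (illustrative): replace each pixel by the average of its
    3x3 neighbors weighted by their noise-free degrees from stage 1."""
    degrees = noise_free_degrees(image, spread)
    pad_img = np.pad(image.astype(float), 1, mode="edge")
    pad_deg = np.pad(degrees, 1, mode="constant", constant_values=0.0)
    h, w = image.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            win = pad_img[i:i + 3, j:j + 3]
            wts = pad_deg[i:i + 3, j:j + 3]
            out[i, j] = (win * wts).sum() / wts.sum() if wts.sum() > 0 else win[1, 1]
    return out
```

Weighting by the stage-1 degrees is what lets the filter smooth noisy pixels strongly while leaving pixels judged noise-free almost untouched.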

Improved Lung Nodule Visualization on Chest Radiographs using Digital Filtering and Contrast Enhancement

Early detection of lung cancer through chest radiography is widely used due to its relatively affordable cost. In this paper, an approach to improve lung nodule visualization on chest radiographs is presented. The approach uses a linear-phase high-frequency emphasis filter for digital filtering and histogram equalization for contrast enhancement. Results obtained indicate that a filtered image can reveal sharper edges and provide more detail. Contrast enhancement also offers a way to further enhance the global (or local) visualization by equalizing the histogram of the pixel values within the whole image (or a region of interest). The work aims to improve lung nodule visualization on chest radiographs to aid detection of lung cancer, currently the leading cause of cancer deaths worldwide.
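
A minimal frequency-domain sketch of the two processing steps is given below; the Gaussian filter shape and the cutoff/boost values are our own assumptions, not parameters reported in the paper:

```python
import numpy as np

def high_frequency_emphasis(image, cutoff=30.0, boost=1.5):
    """Zero-phase high-frequency emphasis: H = 1 + boost * (1 - Gaussian
    low-pass), applied in the frequency domain so edges are sharpened
    without phase distortion."""
    rows, cols = image.shape
    u = np.arange(rows) - rows / 2.0
    v = np.arange(cols) - cols / 2.0
    d2 = u[:, None] ** 2 + v[None, :] ** 2
    h = 1.0 + boost * (1.0 - np.exp(-d2 / (2.0 * cutoff ** 2)))
    spectrum = np.fft.fftshift(np.fft.fft2(image.astype(float)))
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * h)))

def histogram_equalize(image, levels=256):
    """Global contrast enhancement by equalizing the grey-level histogram."""
    hist, _ = np.histogram(image.ravel(), bins=levels, range=(0, levels))
    cdf = hist.cumsum() / hist.sum()
    lut = np.round((levels - 1) * cdf).astype(np.uint8)
    return lut[np.clip(image, 0, levels - 1).astype(int)]
```

Applying histogram_equalize to a cropped region of interest instead of the whole array gives the local variant mentioned above.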

Contribution of Vitaton (β-Carotene) to the Rearing Factors, Survival Rate and Visual Flesh Color of Rainbow Trout in Comparison with Astaxanthin

In this study, Vitaton (an organic supplement containing fermentative β-carotene) and synthetic astaxanthin (CAROPHYLL® Pink) were evaluated as pro-growth factors in the Rainbow trout diet. An 8-week feeding trial was conducted to determine the effects of Vitaton versus astaxanthin on the rearing factors, survival rate and visual flesh color of Rainbow trout (Oncorhynchus mykiss) with an initial weight of 196±5. Four practical diets were formulated to contain 50 and 80 ppm of β-carotene or astaxanthin, and a control diet was prepared without any pigment. Each diet was fed to triplicate groups of fish reared in fresh water. Fish were fed twice daily. The water temperature fluctuated from 12 to 15 °C and the dissolved oxygen content was between 7 and 7.5 mg/L during the experimental period. At the end of the experiment, growth and food utilization parameters and survival rate were unaffected by the dietary treatments (p>0.05). Also, there was no significant difference in carcass yield between treatments (p>0.05). No significant difference was observed in the visual flesh color (SalmoFan score) of fish fed the Vitaton-containing diets. In contrast, feeding on diets containing 50 and 80 ppm of astaxanthin increased the SalmoFan score (flesh astaxanthin concentration) from

Design and Simulation of a New Self-Learning Expert System for Mobile Robot

In this paper, we present a novel technique called the Self-Learning Expert System (SLES). Unlike a conventional expert system, where an expert is needed to impart experience and knowledge to create the knowledge base, this technique tries to acquire the experience and knowledge automatically. To demonstrate this technique at work, a simulation of a mobile robot navigating through an environment with obstacles was implemented using Visual Basic. The mobile robot moves through this area without colliding with any obstacle and saves the path that it took. If the mobile robot has to go through a similar environment again, it applies this experience to move through more quickly without having to check for collisions.
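
The core of the idea is a memo of solved environments. The following Python sketch (ours; the signature scheme and function names are hypothetical, and the paper's simulation uses Visual Basic) shows how a stored path can bypass collision checking on a second visit:

```python
class SelfLearningNavigator:
    """Illustrative sketch of SLES path reuse: successful paths are stored
    keyed by an environment signature and replayed when a similar
    environment is encountered again."""

    def __init__(self):
        self.experience = {}   # environment signature -> stored path

    @staticmethod
    def signature(obstacle_grid):
        """A hashable fingerprint of the obstacle layout."""
        return tuple(tuple(row) for row in obstacle_grid)

    def navigate(self, obstacle_grid, plan_with_collision_checks):
        key = self.signature(obstacle_grid)
        if key in self.experience:
            return self.experience[key]     # reuse learned path, no collision checks
        path = plan_with_collision_checks(obstacle_grid)   # slow first traversal
        self.experience[key] = path         # acquire the experience automatically
        return path
```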

Biological Data Integration using SOA

Nowadays, scientific data is inevitably digital and stored in a wide variety of formats in heterogeneous systems. Scientists need to access an integrated view of remote or local heterogeneous data sources with advanced data accessing, analysis and visualization tools. This research suggests the use of a Service Oriented Architecture (SOA) to integrate biological data from different data sources. This work shows how SOA can solve the problems facing the integration process and whether biologists can access the biological data in an easier way. There are several methods to implement SOA, but web services are the most popular. The Microsoft .NET Framework was used to implement the proposed architecture.

Just-In-Time for Reducing Inventory Costs throughout a Supply Chain: A Case Study

Supply Chain Management (SCM) is the integration of manufacturer, transporter and customer into one seamless chain that allows a smooth flow of raw materials, information and products throughout the entire network and helps minimize all related efforts and costs. The main objective of this paper is to develop a model that can accept a specified number of spare parts within the supply chain, simulate its inventory operations throughout all stages in order to minimize the inventory holding costs, base-stock and safety-stock, and find the optimum inventory levels, thereby suggesting a way to adapt some factors of Just-In-Time to minimizing the inventory costs throughout the entire supply chain. The model has been developed using Microsoft Excel and Visual Basic in order to study inventory allocations in any network of the supply chain. The application and reproducibility of this model were tested by comparing the actual system implemented in the case study with the results of the developed model. The findings showed that the total inventory costs of the developed model are about 50% less than the actual costs of the inventory items within the case study.
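
As an illustration of the kind of simulation the model performs, the short Python sketch below (our own stand-in for the paper's Excel/Visual Basic model; the demand stream and cost figures are hypothetical) evaluates a single-stage base-stock policy and searches for the level with the lowest inventory cost:

```python
import random

def simulate_base_stock(base_stock, demands, holding_cost, shortage_cost):
    """Each period, demand is filled from stock and the position is
    replenished back to the base-stock level; holding and shortage
    costs are accumulated on the post-demand inventory."""
    inventory, total_cost = base_stock, 0.0
    for demand in demands:
        inventory -= demand
        total_cost += holding_cost * max(inventory, 0) \
                      + shortage_cost * max(-inventory, 0)
        inventory = base_stock   # immediate replenishment (zero lead time assumed)
    return total_cost

# Hypothetical usage: pick the cheapest base-stock level for a demand stream.
random.seed(1)
demand_stream = [random.randint(0, 10) for _ in range(1000)]
best = min(range(21), key=lambda s: simulate_base_stock(s, demand_stream, 1.0, 9.0))
print("best base-stock level:", best)
```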

Photogrammetry and GIS Integration for Archaeological Documentation of Ahl-Alkahf, Jordan

Protection and proper management of archaeological heritage are essential to studying and interpreting it for present and future generations. Protecting the archaeological heritage is based upon multidisciplinary professional collaboration. This study gathers data from different sources (photogrammetry and a Geographic Information System (GIS)) and integrates them for the purpose of documenting one of the significant archaeological sites of Jordan, Ahl-Alkahf. 3D modeling deals with the features, shapes and textures of the actual imagery in order to represent reality as faithfully as possible. The 3D coordinates that result from the photogrammetric adjustment procedures are used to create 3D models of the study area, and adding textures to the 3D model surfaces gives a 'real world' appearance to the displayed models. The GIS combines all data, including boundary maps indicating the locations of archaeological sites, a transportation layer, a digital elevation model and orthoimages. For a realistic representation of the study area, a 3D GIS model was prepared, through which efficient generation, management and visualization of such spatial data can be achieved.

Virtual Learning Process Environment: Cohort Analytics for Learning and Learning Processes

Traditional higher-education classrooms allow lecturers to observe students' behaviours and responses to a particular pedagogy during learning, in a way that can influence changes to the pedagogical approach. Within current e-learning systems it is difficult to perform continuous analysis of the cohort's behavioural tendency, making real-time pedagogical decisions difficult. This paper presents a Virtual Learning Process Environment (VLPE) based on the Business Process Management (BPM) conceptual framework. Within the VLPE, course designers can model various education pedagogies in the form of learning-process workflows using an intuitive flow-diagram interface. These diagrams are used to visually track the learning progress of a cohort of students. This helps assess the effectiveness of the chosen pedagogy and provides the information required to improve course design. A case scenario of a cohort of students is presented, and quantitative statistical analysis of their learning-process performance is gathered and displayed in real time using dashboards.

Feature Reduction of Nearest Neighbor Classifiers using Genetic Algorithm

The design of a pattern classifier includes an attempt to select, among a set of possible features, a minimum subset of weakly correlated features that better discriminate the pattern classes. This is usually a difficult task in practice, normally requiring the application of heuristic knowledge about the specific problem domain. The selection and quality of the features representing each pattern have a considerable bearing on the success of subsequent pattern classification. Feature extraction is the process of deriving new features from the original features in order to reduce the cost of feature measurement, increase classifier efficiency, and allow higher classification accuracy. Many current feature extraction techniques involve linear transformations of the original pattern vectors to new vectors of lower dimensionality. While this is useful for data visualization and for increasing classification efficiency, it does not necessarily reduce the number of features that must be measured, since each new feature may be a linear combination of all of the features in the original pattern vector. In this paper, a new approach to feature extraction is presented in which feature selection, feature extraction, and classifier training are performed simultaneously using a genetic algorithm. In this approach, each feature value is first normalized by a linear equation, then scaled by the associated weight prior to training, testing, and classification. A k-nearest-neighbor (k-NN) classifier is used to evaluate each set of feature weights. The genetic algorithm optimizes a vector of feature weights, which are used to scale the individual features in the original pattern vectors in either a linear or a nonlinear fashion. With this approach, the number of features used in classification can be greatly reduced.
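
A compact version of the loop described above can be sketched as follows; the GA operators and parameter values are generic choices of ours, and leave-one-out accuracy stands in for whatever fitness function the paper uses:

```python
import numpy as np

def knn_accuracy(weights, X, y, k=3):
    """Fitness: leave-one-out accuracy of a k-NN classifier on
    feature-weighted patterns (X normalized beforehand, y integer labels)."""
    Xw = X * weights
    d = np.linalg.norm(Xw[:, None, :] - Xw[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)              # a sample may not vote for itself
    votes = y[np.argsort(d, axis=1)[:, :k]]
    pred = np.array([np.bincount(v).argmax() for v in votes])
    return float((pred == y).mean())

def evolve_weights(X, y, pop=30, gens=40, mutation=0.1, seed=0):
    """Minimal real-coded GA over feature-weight vectors: binary tournament
    selection, uniform crossover, Gaussian mutation."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    population = rng.random((pop, n))
    for _ in range(gens):
        fitness = np.array([knn_accuracy(w, X, y) for w in population])
        children = []
        for _ in range(pop):
            parents = []
            for _ in range(2):               # binary tournament, twice
                i, j = rng.integers(0, pop, 2)
                parents.append(population[i] if fitness[i] >= fitness[j]
                               else population[j])
            mask = rng.random(n) < 0.5       # uniform crossover
            child = np.where(mask, parents[0], parents[1])
            children.append(np.clip(child + mutation * rng.standard_normal(n),
                                    0.0, 1.0))
        population = np.array(children)
    fitness = np.array([knn_accuracy(w, X, y) for w in population])
    return population[fitness.argmax()]
```

Features whose evolved weight is near zero contribute almost nothing to the distance computation and can be dropped outright, which is how weighting turns into feature reduction.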

Study on Mixed Convection Heat Transfer in Vertical Ducts with Radiation Effects

Experiments have been performed to investigate the effects of radiation on mixed convection heat transfer for thermally developing airflow in vertical ducts with two differentially heated isothermal walls and two adiabatic walls. The investigation covers Reynolds numbers from Re = 800 to Re = 2900, heat fluxes from 256 W/m² to 863 W/m², hot wall temperatures from 27 °C to 100 °C, aspect ratios of 1 and 0.5, and internal wall emissivities of 0.05 and 0.85. In the present study, flow visualization was also conducted to observe the flow patterns. The surface temperature along the walls was studied to investigate the local Nusselt number variation within the duct. The results show that the flow conditions and radiation significantly affect the total Nusselt number, and that radiation tends to reduce the buoyancy effect.

Visual Hull with Imprecise Input

Imprecision is a long-standing problem in CAD design and in high-accuracy image-based reconstruction applications. The visual hull, the closed silhouette-equivalent shape of the objects of interest, is an important concept in image-based reconstruction. We extend the domain-theoretic framework, a robust geometric model that captures imprecision, to analyze the imprecision in the output shape when the input vertices are given imprecisely. Within this framework, we give an efficient algorithm to generate the 2D partial visual hull, which represents the exact information of the visual hull under only basic imprecision assumptions. We also show how the visual-hull-from-polyhedra problem can be efficiently solved in the context of imprecise input.

Visualization and Indexing of Spectral Databases

On-line (near-infrared) spectroscopy is widely used to support the operation of complex process systems. Information extracted from a spectral database can be used to estimate unmeasured product properties and to monitor the operation of the process. These techniques are based on looking for similar spectra using nearest-neighbor algorithms and distance-based searching methods. Searching for nearest neighbors in the spectral space is computationally hard: the cost increases with the number of points in the discrete spectrum and with the number of samples in the database. To reduce the calculation time, some kind of indexing can be used. The main idea presented in this paper is to combine indexing and visualization techniques to reduce the computational requirements of estimation algorithms by providing a two-dimensional index that can also be used to visualize the structure of the spectral database. This 2D visualization of the spectral database not only supports the application of distance- and similarity-based techniques but also enables the use of advanced clustering and prediction algorithms based on the Delaunay tessellation of the mapped spectral space. This means that prediction does not have to use the high-dimensional space but can be based on the mapped space instead. The results illustrate that the proposed method is able to segment (cluster) spectral databases and to detect outliers that are not suitable for instance-based learning algorithms.
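
The mapped-space prediction step can be illustrated with the following Python sketch; PCA stands in here for the paper's 2D mapping (an assumption of ours), while the SciPy Delaunay tessellation supplies the triangle search and barycentric interpolation:

```python
import numpy as np
from scipy.spatial import Delaunay

def build_2d_index(spectra):
    """Map spectra to 2D (PCA, as a stand-in for the paper's mapping) and
    build a Delaunay tessellation over the mapped points."""
    mean = spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(spectra - mean, full_matrices=False)
    mapped = (spectra - mean) @ vt[:2].T
    return mapped, vt[:2], mean, Delaunay(mapped)

def predict_property(query, components, mean, tri, properties):
    """Locate the triangle containing the mapped query and interpolate the
    property values of its three vertices with barycentric weights."""
    q2 = (query - mean) @ components.T
    simplex = int(tri.find_simplex(q2[None, :])[0])
    if simplex < 0:
        return None                      # outside the tessellation: flag as outlier
    t = tri.transform[simplex]
    bary = t[:2] @ (q2 - t[2])
    weights = np.append(bary, 1.0 - bary.sum())
    return float(weights @ properties[tri.simplices[simplex]])
```

The mapped points themselves can be plotted directly, which is the visualization half of the proposal; queries falling outside the tessellation are exactly the outliers mentioned above.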

Investigating Daylight Quality in Malaysian Government Office Buildings Through Daylight Factor and Surface Luminance

In recent years, there has been increasing interest in using daylight to save energy in buildings. In tropical regions, daylighting is always an energy saver; it also provides visual comfort. Standards show that many criteria should be taken into consideration in order to achieve daylight utilization and visual comfort. The current standard in Malaysia, MS 1525, does not provide sufficient guidance, so more research is needed on daylight performance. If architects do not consider daylight design, it not only causes inconvenience in working spaces but also leads to more energy consumption and environmental pollution. This research surveyed daylight performance in 5 selected office buildings from different areas of Malaysia using an experimental method. Several parameters of daylight quality, such as daylight factor, surface luminance and surface luminance ratio, were measured in different rooms in each building. The results demonstrate that most of the buildings were not designed for daylight utilization. It is therefore very important that architects follow daylight design recommendations to reduce the consumption of electric power for artificial lighting while sufficient daylight is available.
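
For reference, the daylight factor measured in the survey is conventionally defined as the ratio of indoor to simultaneous outdoor illuminance under an unobstructed overcast sky (a standard definition, stated here for the reader's convenience):

DF = (E_in / E_out) × 100%,

where E_in is the illuminance at the indoor measurement point and E_out is the simultaneous outdoor illuminance on an unobstructed horizontal plane.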

Flow Characteristics of Pulp Liquid in Straight Ducts

An experimental investigation was performed on pulp liquid flow in straight ducts with a square cross section. Fully developed steady flow was visualized and the fiber concentration was obtained using a light-section method developed by the authors. The obtained results reveal quantitatively, in a definite form, the distribution of the fiber concentration. From these results and from measurements of pressure loss, it is found that the flow characteristics of pulp liquid in ducts can be classified into five patterns. The relationships among the distributions of the mean and fluctuation of the fiber concentration, the pressure loss and the flow velocity are discussed, and the features of each pattern are extracted. The degree of nonuniformity of the fiber concentration, indicated by the standard deviation of its distribution, decreases from 0.3 to 0.05 with increasing velocity for tested pulp liquids of 0.4 to 0.8% concentration.

Multiple Sensors and JPDA-IMM-UKF Algorithm for Tracking Multiple Maneuvering Targets

In this paper, we consider the problem of tracking multiple maneuvering targets using switching multiple-target motion models. We aim to contribute to solving the problem of model-based body motion estimation using data coming from visual sensors. The Interacting Multiple Model (IMM) algorithm is specially designed to accurately track targets whose state and/or measurement models (assumed to be linear) change during motion transitions. However, when these models are nonlinear, the IMM algorithm must be modified in order to guarantee an accurate track. In this paper, we propose to avoid the Extended Kalman filter because of its limitations and to substitute it with the Unscented Kalman filter, which proves more efficient, particularly according to the simulation results obtained with the nonlinear IMM algorithm (IMM-UKF). To resolve the data association problem, the JPDA approach is combined with the IMM-UKF algorithm; the derived algorithm is denoted JPDA-IMM-UKF.
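
Substituting the UKF for the EKF amounts to replacing Jacobian-based linearization with the unscented transform. A standard sketch of that transform is given below (ours, with the usual Wan-van der Merwe scaling parameters assumed):

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate sigma points through a nonlinear function f and recover
    the transformed mean and covariance (standard unscented transform)."""
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])  # 2n+1 points
    wm = np.full(2 * n + 1, 0.5 / (n + lam))   # weights for the mean
    wc = wm.copy()                             # weights for the covariance
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha ** 2 + beta)
    propagated = np.array([f(s) for s in sigma])
    out_mean = wm @ propagated
    diff = propagated - out_mean
    return out_mean, (wc[:, None] * diff).T @ diff
```

Within each IMM mode, this transform is applied in both the prediction and the measurement-update steps, which is what avoids the EKF's first-order linearization errors.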

Interactive PTZ Camera Control System Using Wii Remote and Infrared Sensor Bar

This paper proposes an alternative control mechanism for an interactive Pan/Tilt/Zoom (PTZ) camera control system. Instead of using a mouse or a joystick, the proposed mechanism utilizes a Nintendo Wii remote and an infrared (IR) sensor bar. The Wii remote has buttons that allow the user to control the movement of a PTZ camera through Bluetooth connectivity. In addition, the Wii remote has a built-in motion sensor that allows the user to give control signals to the PTZ camera through pitch and roll movements. A stationary IR sensor bar, placed at some distance opposite the Wii remote, enables the detection of yaw movement. Furthermore, the Wii remote's built-in IR camera can detect its spatial position, and thus generates a control signal when the user moves the Wii remote. Experiments are carried out and the performance is compared with that of an industry-standard PTZ joystick.
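
The pitch/roll mapping can be sketched as follows; the axis conventions, units and gain are our own assumptions rather than values from the paper:

```python
import math

def accel_to_pan_tilt(ax, ay, az, gain=40.0):
    """Map Wii-remote-style accelerometer readings (in g) to PTZ velocity
    commands: pitch drives tilt, roll drives pan (illustrative scaling)."""
    pitch = math.atan2(ay, math.hypot(ax, az))   # forward/back tilt of the remote
    roll = math.atan2(ax, az)                    # left/right tilt of the remote
    return gain * roll, gain * pitch             # (pan, tilt) in degrees per second

# Hypothetical usage: remote tilted slightly forward and to the right.
print(accel_to_pan_tilt(0.20, 0.15, 0.96))
```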

Impulse Noise Reduction in Brain Magnetic Resonance Imaging Using Fuzzy Filters

Noise contamination in a magnetic resonance (MR) image can occur during acquisition, storage and transmission; effective filtering is required to avoid repeating the MR procedure. In this paper, an iterative asymmetrical triangular fuzzy filter with moving-average center (the ATMAVi filter) is used to reduce different levels of salt-and-pepper noise in a brain MR image. Besides visual inspection of the filtered images, the mean squared error (MSE) is used as an objective measurement. When compared with the median filter, simulation results indicate that the ATMAVi filter is effective, especially for filtering higher-level noise (such as noise density = 0.45) using a smaller window size (such as 3×3) when operated iteratively, or using a larger window size (such as 5×5) when operated non-iteratively.
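
The weighting idea behind the filter can be sketched as follows; this is our reading of an asymmetrical triangular membership with a moving-average center, with all parameter choices our own:

```python
import numpy as np

def atmav_window(window):
    """Asymmetrical triangular fuzzy weighting with moving-average center:
    membership is 1 at the window mean and falls linearly to 0 at the
    window minimum and maximum, giving an asymmetric triangle."""
    w = window.astype(float).ravel()
    center, lo, hi = w.mean(), w.min(), w.max()
    left = (w - lo) / (center - lo) if center > lo else np.ones_like(w)
    right = (hi - w) / (hi - center) if hi > center else np.ones_like(w)
    weights = np.where(w <= center, left, right)
    return float((weights * w).sum() / weights.sum())

def atmav_filter(image, window=3, iterations=1):
    """Apply the weighting over sliding windows; iterating the whole pass
    corresponds to the iterative operation described for higher noise."""
    out = image.astype(float)
    r = window // 2
    for _ in range(iterations):
        padded = np.pad(out, r, mode="edge")
        new = np.empty_like(out)
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                new[i, j] = atmav_window(padded[i:i + window, j:j + window])
        out = new
    return out
```

Because impulse ("salt and pepper") outliers sit at the extremes of the window range, their membership, and hence their weight in the average, is close to zero.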