A Study of Color Transformation on Website Images for the Color Blind

In this paper, we study color transformation methods for website images for the color blind. The most common category of color blindness is red-green color blindness, in which red and green hues tend to be perceived as beige. By transforming the colors of the images, color-blind users gain better color visibility and a clearer view when browsing websites. To transform the colors of website images, we study two algorithms: conversion from the RGB color space to the HSV color space, and self-organizing color transformation. The comparative study focuses on ease of use, quality, accuracy, and efficiency. The outcome of the study leads to enhanced website images that meet the vision requirements of the color blind in perceiving image detail.
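
As an illustration of the RGB-to-HSV conversion studied here, the following minimal Python sketch converts a pixel to HSV, applies an illustrative hue shift, and converts back; the specific re-mapping used to assist red-green color-blind viewers is not given in the abstract, so the shift value below is only a placeholder.

```python
# Minimal sketch: RGB -> HSV conversion with an illustrative hue shift.
# The actual re-mapping used for red-green color blindness is not specified
# in the abstract; the +0.15 hue rotation here is only a placeholder.
import colorsys

def shift_hue(rgb_pixel, hue_shift=0.15):
    """rgb_pixel: (r, g, b) with components in [0, 255]."""
    r, g, b = (c / 255.0 for c in rgb_pixel)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)      # RGB -> HSV
    h = (h + hue_shift) % 1.0                   # rotate hue to separate confusable colors
    r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)   # HSV -> RGB
    return tuple(int(round(c * 255)) for c in (r2, g2, b2))

print(shift_hue((200, 60, 60)))  # a reddish pixel shifted toward a more distinguishable hue
```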

Providing Medical Information in Braille: Research and Development of Automatic Braille Translation Program for Japanese "eBraille"

Along with advances in medicine, providing medical information to individual patients is becoming more important. In Japan, however, such information is hardly ever provided in Braille to blind and partially sighted people. We are therefore researching and developing a Web-based automatic translation program, "eBraille", to translate Japanese text into Japanese Braille. First, we analyzed the Japanese Braille transcription rules in order to implement them in our program. We then added medical words to the program's dictionary to improve its translation accuracy for medical text. Finally, we examined the efficacy of statistical learning models (SLMs) for further increasing word segmentation accuracy in Braille translation. As a result, eBraille achieved the highest translation accuracy in a comparison with other translation programs, improved its accuracy for medical text, and is now used to produce hospital brochures in Braille for outpatients and inpatients.

Finding an Optimized Discriminate Function for Internet Application Recognition

Every day, usage of the Internet increases and a world of data becomes accessible. Network providers do not want their services to be used for harmful or terrorist purposes, so they employ a variety of methods to protect sensitive regions of the network from harmful data. One of the most important of these methods is the firewall. A firewall stops the transfer of such packets in several ways, but in some cases providers do not use a firewall because of its blind packet blocking, high processing power requirements, and high cost. Here we propose a method for finding a discriminant function that distinguishes between normal packets and harmful ones through statistical processing of network router logs, so that an administrator can alert the user. This method is very fast and can easily be deployed alongside Internet routers.
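
As a rough illustration of what such a discriminant function could look like, the following sketch fits a Fisher linear discriminant to hypothetical per-flow features derived from router logs; the feature names and training data are assumptions, not the paper's.

```python
# Minimal sketch of a linear (Fisher) discriminant separating "normal" from
# "harmful" traffic using per-flow features derived from router logs.
# Feature names and the training data are hypothetical placeholders.
import numpy as np

def fisher_discriminant(X_normal, X_harmful):
    """Return weight vector w and threshold t for the rule: harmful if w.x > t."""
    m0, m1 = X_normal.mean(axis=0), X_harmful.mean(axis=0)
    Sw = np.cov(X_normal, rowvar=False) + np.cov(X_harmful, rowvar=False)
    w = np.linalg.solve(Sw, m1 - m0)          # direction maximizing class separation
    t = w @ (m0 + m1) / 2.0                   # midpoint threshold
    return w, t

# Hypothetical features: [mean packet size, packets/s, distinct destination ports]
rng = np.random.default_rng(0)
X_normal  = rng.normal([800, 50, 3],   [100, 10, 1],  size=(200, 3))
X_harmful = rng.normal([400, 300, 40], [100, 50, 10], size=(200, 3))

w, t = fisher_discriminant(X_normal, X_harmful)
flow = np.array([450, 280, 35])
print("flag for administrator" if w @ flow > t else "looks normal")
```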

Computational Method for Annotation of Protein Sequence According to Gene Ontology Terms

Annotation of a protein sequence is pivotal to understanding its function. The accuracy of manual annotation provided by curators is still questionable because of its weaker evidence strength, and it remains a hard and time-consuming task. A number of computational methods and tools have been developed to tackle this challenging task. However, they require high-cost hardware, are difficult for bioscientists to set up, or depend on time-intensive, blind sequence similarity searches such as the Basic Local Alignment Search Tool (BLAST). This paper introduces a new method of assigning highly correlated Gene Ontology terms of annotated protein sequences to partially annotated or newly discovered protein sequences. The method is based entirely on Gene Ontology data and annotations. Two problems had to be solved to realize this method. The first relates to splitting the single monolithic Gene Ontology RDF/XML file into a set of smaller files that are easier to access and process; these files can then be enriched with protein sequences and Inferred from Electronic Annotation evidence associations. The second involves searching for a set of Gene Ontology terms semantically similar to a given query. The details of the macro and micro problems involved and their solutions, together with the objective of this study, are described. This paper also describes protein sequence annotation and the Gene Ontology. The methodology of this study and the Gene Ontology-based protein sequence annotation tool, named extended UTMGO, are presented. Furthermore, its basic version, a Gene Ontology browser based on semantic similarity search, is also introduced.
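
To illustrate the idea of retrieving semantically similar Gene Ontology terms, the following sketch ranks candidate terms by a simple ancestor-overlap (Jaccard) similarity over a tiny, made-up fragment of the ontology; the actual similarity measure used by extended UTMGO is not stated in the abstract.

```python
# Illustrative sketch: rank GO terms by a simple ancestor-overlap (Jaccard)
# similarity. The similarity measure actually used by extended UTMGO is not
# stated in the abstract; the tiny parent graph below is a made-up example.
def ancestors(term, parents):
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        for p in parents.get(t, []):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen | {term}

def similarity(a, b, parents):
    A, B = ancestors(a, parents), ancestors(b, parents)
    return len(A & B) / len(A | B)

# Hypothetical ontology fragment: child term -> list of parent terms
parents = {
    "GO:0004672": ["GO:0016301"],   # protein kinase activity -> kinase activity
    "GO:0016301": ["GO:0003824"],   # kinase activity -> catalytic activity
    "GO:0016787": ["GO:0003824"],   # hydrolase activity -> catalytic activity
}

query = "GO:0004672"
candidates = ["GO:0016301", "GO:0016787"]
ranked = sorted(candidates, key=lambda t: similarity(query, t, parents), reverse=True)
print(ranked)
```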

A Blind Digital Watermark in Hadamard Domain

A new blind gray-level watermarking scheme is described. In the proposed method, the host image is first divided into 4×4 non-overlapping blocks. For each block, the first two AC coefficients of its Hadamard transform are estimated using the DC coefficients of its neighboring blocks. A gray-level watermark is then added to the estimated values. Since embedding the watermark does not change the DC coefficients, the watermark can be extracted by estimating the AC coefficients and comparing them with their actual values. Several experiments were carried out, and the results demonstrate the robustness of the proposed algorithm.
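
The sketch below computes the 4×4 Hadamard transform of each non-overlapping block and reads off the DC and first two AC coefficients the scheme works with; the neighbor-based AC estimation and the embedding rule themselves are the paper's contribution and are not reproduced here.

```python
# Sketch: 4x4 Hadamard transform of non-overlapping image blocks, exposing the
# DC coefficient and the first two AC coefficients the scheme works with.
import numpy as np

# 4x4 Hadamard matrix (entries +/-1), built from the 2x2 kernel
H2 = np.array([[1, 1], [1, -1]])
H4 = np.kron(H2, H2)

def hadamard_blocks(image):
    """Yield (row, col, transformed 4x4 block) for a grayscale image array."""
    h, w = image.shape
    for i in range(0, h - h % 4, 4):
        for j in range(0, w - w % 4, 4):
            block = image[i:i + 4, j:j + 4].astype(float)
            T = H4 @ block @ H4 / 4.0        # 2-D Hadamard transform (scaled)
            yield i, j, T

img = np.random.randint(0, 256, size=(8, 8))
for i, j, T in hadamard_blocks(img):
    dc = T[0, 0]                 # DC coefficient
    ac1, ac2 = T[0, 1], T[1, 0]  # first two AC coefficients
    print(i, j, round(dc, 2), round(ac1, 2), round(ac2, 2))
```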

A Semi-Fragile Signature based Scheme for Ownership Identification and Color Image Authentication

In this paper, a novel scheme is proposed for ownership identification and authentication of color images by deploying cryptography and digital watermarking as the underlying technologies. The former is used to compute the content-based hash and the latter to embed the watermark. The host image, whose rightful ownership is to be established, is first transformed from RGB to the YST color space, which is designed exclusively for watermarking-based applications. Geometrically, YS ⊥ T, and the T channel corresponds to the chrominance component of a color image, making it suitable for embedding the watermark. The T channel is divided into 4×4 non-overlapping blocks. The block size is important for enhanced localization, security, and low computation. Each block, along with the ownership information, is then processed by SHA-160, a one-way hash function, to compute the content-based hash, which is always unique and resistant to birthday attacks, instead of MD5, which may yield collisions, i.e., H(m) = H(m'). The watermark payload varies from block to block and is computed from the variance factor α. The quality of the watermarked images is quite high, both subjectively and objectively. Our scheme is blind, computationally fast, and exactly locates the tampered region.
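
The following sketch shows only the hashing step: a 160-bit SHA-1 digest is computed per 4×4 block and bound to an ownership string. How the resulting bits are embedded into the T channel of the YST space, and how the variance factor α sets the payload, are the paper's contribution and are not shown here.

```python
# Sketch: compute a 160-bit content-based hash per 4x4 block with SHA-1,
# binding the block pixels to the ownership string.
import hashlib
import numpy as np

def block_hashes(channel, owner_id):
    """Return {(row, col): hex SHA-1 digest} for 4x4 non-overlapping blocks."""
    hashes = {}
    h, w = channel.shape
    for i in range(0, h - h % 4, 4):
        for j in range(0, w - w % 4, 4):
            block = channel[i:i + 4, j:j + 4]
            payload = block.astype(np.uint8).tobytes() + owner_id.encode("utf-8")
            hashes[(i, j)] = hashlib.sha1(payload).hexdigest()  # 160-bit digest
    return hashes

channel = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
for pos, digest in block_hashes(channel, owner_id="Alice-2024").items():
    print(pos, digest[:16], "...")
```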

Data Mining on the Router Logs for Statistical Application Classification

With the advance of information technology, the use of the Internet to access data resources has steadily increased and huge amounts of data have become accessible in various forms. Network providers and agencies naturally seek to prevent electronic attacks that may be harmful or related to terrorist activities. This has led the authorities to adopt a variety of methods to protect sensitive regions of the network from harmful data. One of the most important approaches is to use a firewall in the network facilities. The main objective of a firewall is to stop the transfer of suspicious packets, which it does in several ways. However, because of their blind packet blocking, high processing power requirements, and high cost, some providers are reluctant to use firewalls. In this paper we propose a method for finding a discriminant function that distinguishes between normal packets and harmful ones through statistical processing of network router logs. By discriminating these data, an administrator may take appropriate action against the offending user. This method is very fast and can easily be deployed alongside Internet routers.
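
As a sketch of the statistics-gathering step, the snippet below aggregates per-source features from a simplified, assumed router-log format; such features would then feed a discriminant function of the kind described in the abstract.

```python
# Sketch of the statistics-gathering step: aggregate per-source features from
# router log records before applying a discriminant function. The log format
# (source, destination port, byte count) is an assumed, simplified one.
from collections import defaultdict
from statistics import mean

log_lines = [
    "10.0.0.5 443 1200",
    "10.0.0.5 443 900",
    "10.0.0.9 22 80",
    "10.0.0.9 23 75",
    "10.0.0.9 8080 64",
]

flows = defaultdict(lambda: {"bytes": [], "ports": set()})
for line in log_lines:
    src, dport, nbytes = line.split()
    flows[src]["bytes"].append(int(nbytes))
    flows[src]["ports"].add(dport)

for src, f in flows.items():
    features = (len(f["bytes"]), mean(f["bytes"]), len(f["ports"]))
    # (packet count, mean packet size, distinct destination ports) -> classifier input
    print(src, features)
```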

A Pairing-based Blind Signature Scheme with Message Recovery

Blind signatures enable users to obtain valid signatures for a message without revealing its content to the signer. This paper presents a new blind signature scheme, namely an identity-based blind signature scheme with message recovery. Due to the message recovery property, the new scheme requires less bandwidth than identity-based blind signatures with similar constructions. The scheme is based on modified Weil/Tate pairings over elliptic curves and thus requires smaller key sizes for the same level of security compared with previous approaches that do not use bilinear pairings. A security and efficiency analysis of the scheme is provided.
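
For background, the bilinearity property that pairing-based schemes of this kind rely on is shown below; this is a standard fact, and the scheme's actual signing, blinding, and recovery equations are not reproduced here.

```latex
% Bilinearity of a (modified Weil/Tate) pairing in the symmetric setting:
e : G_1 \times G_1 \to G_2, \qquad
e(aP,\, bQ) = e(P, Q)^{ab}
\quad \text{for all } P, Q \in G_1,\; a, b \in \mathbb{Z}_q^{*}.
```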

Blind Low Frequency Watermarking Method

We present a low frequency watermarking method that adapts to image content. The image content is analyzed and properties of the human visual system (HVS) are exploited to generate a visual mask of the same size as the approximation image. Using this mask, we embed the watermark in the approximation image without degrading image quality. Watermark detection is performed without using the original image. Experimental results show that the proposed watermarking method is robust against the most common image processing operations, which can be easily applied and usually do not degrade the image quality.
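
The sketch below illustrates additive embedding in the wavelet approximation (low frequency) sub-band using PyWavelets; the HVS-based visual mask is replaced by a flat placeholder, and the detector is omitted.

```python
# Sketch: additive embedding in the wavelet approximation (low frequency)
# sub-band. The perceptual mask here is a flat placeholder; the HVS-based
# visual mask and the detector are the paper's contribution.
import numpy as np
import pywt

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(64, 64)).astype(float)

cA, (cH, cV, cD) = pywt.dwt2(image, "haar")      # approximation + detail sub-bands
watermark = rng.choice([-1.0, 1.0], size=cA.shape)
mask = np.full(cA.shape, 0.5)                    # placeholder for the HVS visual mask
cA_marked = cA + mask * watermark                # embed only in the approximation image

marked = pywt.idwt2((cA_marked, (cH, cV, cD)), "haar")
print("max pixel change:", np.abs(marked - image).max())
```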

New Concept for the Overall use of Renewable Energy

The development and application of wind power for renewable energy has attracted growing interest in recent years. Renewable energy sources are attracting much attention because they can reduce both environmental damage and dependence on fossil fuels. With the growing need for sustainable energy supplies, a case is made for decentralized, stand-alone power supplies (SAPS) as an alternative to power grids. In an era in which traditional petroleum resources are decreasing and the greenhouse effect is increasing significantly, the development and use of renewable resources is inevitable. Thanks to the contributions of pioneers, the development of renewable resources has already achieved remarkable results; however, in terms of economy and quantity, there is still a long road ahead before renewable energy can replace traditional petroleum energy. In our perspective, instead of pursuing ever larger renewable energy equipment, it is wiser to look for the blind spots of, and breakthroughs in, current techniques.

A Symbol by Symbol Clustering Based Blind Equalizer

A new blind symbol-by-symbol equalizer is proposed. The operation of the proposed equalizer is based on the geometric properties of the two-dimensional data constellation. An unsupervised clustering technique is used to locate the clusters formed by the received data. The symmetry properties of the clusters are subsequently utilized in order to label them. Following this step, the received data are compared to the clusters and decisions are made on a symbol-by-symbol basis by assigning to each data point the label of the nearest cluster. The operation of the equalizer is investigated in both linear and nonlinear channels. The performance of the proposed equalizer is compared to that of a CMA-based blind equalizer.
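
The sketch below illustrates the cluster-then-decide idea on a noisy QPSK-like constellation: plain k-means locates the clusters, and decisions are made symbol by symbol via the nearest centroid. Mapping each centroid to a constellation symbol, which the paper does via symmetry properties, is omitted.

```python
# Sketch of the clustering / decision step: locate clusters in the received
# 2-D (I/Q) samples with k-means, then decide symbol by symbol via the
# nearest centroid. Channel distortion and cluster labeling are not modeled.
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(points[:, None] - centroids, axis=2), axis=1)
        centroids = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return centroids

# Noisy QPSK-like received data
rng = np.random.default_rng(0)
symbols = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
received = symbols[rng.integers(0, 4, 1000)] + 0.2 * rng.standard_normal((1000, 2))

centroids = kmeans(received, k=4)
decisions = np.argmin(np.linalg.norm(received[:, None] - centroids, axis=2), axis=1)
print(centroids.round(2), decisions[:10])
```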

An ICA Algorithm for Separation of Convolutive Mixture of Speech Signals

This paper describes an Independent Component Analysis (ICA)-based fixed-point algorithm for the blind separation of convolutive mixtures of speech picked up by a linear microphone array. The proposed algorithm extracts independent sources by non-Gaussianizing the time-frequency series of speech (TFSS) in a deflationary manner. The degree of non-Gaussianization is measured by negentropy. The relative performance of the algorithm under random initialization and null-beamformer (NBF) based initialization is studied. It was found that an NBF-based initial value gives faster convergence as well as better separation performance.
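
For illustration, the sketch below runs one-unit, deflationary fixed-point updates with a negentropy-style nonlinearity on a whitened instantaneous mixture; the paper applies this kind of update to the time-frequency series of the array signals, which is not reproduced here.

```python
# Sketch of a deflationary fixed-point (FastICA-style) update with a
# negentropy-based nonlinearity, shown on whitened instantaneous mixtures.
import numpy as np

def fastica_deflation(Z, n_components, iters=200, seed=0):
    """Z: whitened observations, shape (n_channels, n_samples)."""
    rng = np.random.default_rng(seed)
    W = np.zeros((n_components, Z.shape[0]))
    for p in range(n_components):
        w = rng.standard_normal(Z.shape[0])
        w /= np.linalg.norm(w)
        for _ in range(iters):
            y = w @ Z
            g, g_prime = np.tanh(y), 1.0 - np.tanh(y) ** 2     # negentropy contrast
            w_new = (Z * g).mean(axis=1) - g_prime.mean() * w  # fixed-point update
            w_new -= W[:p].T @ (W[:p] @ w_new)                 # deflation: remove found components
            w = w_new / np.linalg.norm(w_new)
        W[p] = w
    return W @ Z, W

# Toy example: whitened mixtures of two independent non-Gaussian sources
rng = np.random.default_rng(1)
S = np.vstack([np.sign(rng.standard_normal(8000)), rng.laplace(size=8000)])
X = np.array([[1.0, 0.7], [0.5, 1.0]]) @ S
X -= X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X                           # whitening
Y, W = fastica_deflation(Z, n_components=2)
print(np.cov(Y).round(2))
```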

Learning Spatio-Temporal Topology of a Multi-Camera Network by Tracking Multiple People

This paper presents a novel approach to representing the spatio-temporal topology of a camera network with overlapping and non-overlapping fields of view (FOVs). The topology is determined by tracking moving objects and establishing object correspondence across multiple cameras. To track people successfully across multiple camera views, we use the merge-split (MS) approach to handle object occlusion in a single camera and a grid-based approach to extract accurate object features. In addition, we consider the appearance of people and the transition time between entry and exit zones to track objects across the blind regions of multiple cameras with non-overlapping FOVs. The main contribution of this paper is to estimate the transition times between the various entry and exit zones and to represent the camera topology graphically as an undirected weighted graph using the transition probabilities.
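
The following sketch shows how observed transition times between exit and entry zones can be summarized as an undirected weighted graph; the zone names and timings are made up.

```python
# Sketch: accumulate observed transition times between exit/entry zones of
# different cameras and summarize them as an undirected weighted graph
# (edge weight = mean transition time). Zone names and times are made up.
from collections import defaultdict
from statistics import mean

# (exit zone, entry zone, transition time in seconds) observations
observations = [
    ("cam1_exit_E", "cam2_entry_W", 4.2),
    ("cam1_exit_E", "cam2_entry_W", 5.0),
    ("cam2_exit_N", "cam3_entry_S", 7.8),
    ("cam2_exit_N", "cam3_entry_S", 8.4),
]

edges = defaultdict(list)
for a, b, t in observations:
    edges[frozenset((a, b))].append(t)          # undirected: order does not matter

graph = {tuple(sorted(k)): mean(v) for k, v in edges.items()}
for (a, b), w in graph.items():
    print(f"{a} -- {b}: mean transition {w:.1f} s")
```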

Congestion Control for Internet Media Traffic

In this paper we investigate a number of Internet congestion control algorithms that have been developed in the last few years. We found that many of these algorithms were designed to treat Internet traffic merely as a train of consecutive packets. A few other algorithms were specifically tailored to handle Internet congestion caused by media traffic carrying audiovisual content; this latter set of algorithms is considered aware of the nature of the media content. In this context we briefly explain a number of congestion control algorithms and categorize them into two groups: (i) media congestion control algorithms and (ii) common congestion control algorithms. We recommend the use of media congestion control algorithms because they are media content-aware, rather than the common type of algorithms that manipulate such traffic blindly. We show that the spread of such media content-aware algorithms over the Internet will lead to better congestion control in the coming years, given the emerging era of digital convergence in which media traffic will form the majority of Internet traffic.

Blind Image Deconvolution by Neural Recursive Function Approximation

This work explores blind image deconvolution by recursive function approximation based on supervised learning of neural networks, under the assumption that a degraded image is the linear convolution of an original source image with a linear shift-invariant (LSI) blurring matrix. Supervised learning of radial basis function (RBF) neural networks is employed to construct a recursive function embedded within the blurred image, extract the non-deterministic component of the original source image, and use it to estimate the hyperparameters of a linear image degradation model. Based on the estimated blurring matrix, reconstruction of the original source image from the blurred image is then resolved by an annealed Hopfield neural network. Numerical simulations show that the proposed method is effective for faithful estimation of an unknown blurring matrix and restoration of the original source image.
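
The degradation model assumed in the abstract, and the generic form of energy that a Hopfield-style restoration stage minimizes, are shown below; the paper's exact energy function is not reproduced.

```latex
% Standard LSI degradation model (H is the blurring matrix, n additive noise)
% and a generic regularized restoration energy of the kind a Hopfield-style
% network can minimize:
g = H f + n, \qquad
E(f) = \tfrac{1}{2}\,\lVert g - \hat{H} f \rVert^{2}
     + \tfrac{\lambda}{2}\,\lVert D f \rVert^{2}.
```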

Super Resolution Blind Reconstruction of Low Resolution Images using Wavelets based Fusion

Crucial information barely visible to the human eye is often embedded in a series of low resolution images taken of the same scene. Super resolution reconstruction is the process of combining several low resolution images into a single higher resolution image. The ideal algorithm should be fast and should add sharpness and detail, both at edges and in smooth regions, without adding artifacts. In this paper we propose a super resolution blind reconstruction technique for linearly degraded images. The proposed algorithm is divided into three parts: image registration, wavelet-based fusion, and image restoration. Three low resolution images are considered, which may be sub-pixel shifted, rotated, blurred, or noisy. The sub-pixel shifted images are registered using an affine transformation model, a wavelet-based fusion is performed, and the noise is removed using soft thresholding. The proposed technique reduces blocking artifacts, smooths edges, and is able to restore high frequency details in an image. It is efficient and computationally fast, with a clear prospect of real-time implementation.
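
The sketch below illustrates the fusion and denoising stages with PyWavelets: detail sub-bands are fused by a maximum-magnitude rule, the approximation by averaging, and soft thresholding is applied; the registration step and the fusion rule actually used in the paper are not reproduced.

```python
# Sketch of wavelet-based fusion plus soft-threshold denoising of registered
# low resolution images. The fusion rules here are common illustrative choices.
import numpy as np
import pywt

def fuse_and_denoise(images, wavelet="haar", threshold=5.0):
    coeffs = [pywt.dwt2(img, wavelet) for img in images]
    cA = np.mean([c[0] for c in coeffs], axis=0)            # average approximations
    details = []
    for band in range(3):                                    # cH, cV, cD
        stack = np.stack([c[1][band] for c in coeffs])
        idx = np.argmax(np.abs(stack), axis=0)               # max-magnitude selection
        fused = np.take_along_axis(stack, idx[None], axis=0)[0]
        details.append(pywt.threshold(fused, threshold, mode="soft"))
    return pywt.idwt2((cA, tuple(details)), wavelet)

rng = np.random.default_rng(0)
base = rng.normal(size=(64, 64))
low_res = [base + 0.1 * rng.normal(size=base.shape) for _ in range(3)]
print(fuse_and_denoise(low_res).shape)
```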

Blind Non-Minimum Phase Channel Identification Using 3rd and 4th Order Cumulants

In this paper we propose a family of algorithms based on 3rd- and 4th-order cumulants for blind estimation of single-input single-output (SISO) non-minimum phase (NMP) finite impulse response (FIR) channels driven by a non-Gaussian signal. The input signal is the signal used in 10GBASE-T (IEEE 802.3an-2006), a Tomlinson-Harashima precoded (THP) version of random pulse-amplitude modulation with 16 discrete levels (PAM-16). The proposed algorithms are tested on three non-minimum phase channels for different signal-to-noise ratios (SNR) and different input data lengths. Numerical simulation results are presented to illustrate the performance of the proposed algorithms.
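
The sketch below computes the basic quantities such algorithms build on: sample estimates of the third- and fourth-order cumulants of a zero-mean sequence (standard definitions, not the paper's specific estimators); the PAM-like input and short FIR channel are made-up stand-ins.

```python
# Sketch: sample estimates of 3rd- and 4th-order cumulants (zero-mean case).
import numpy as np

def m2(x, tau):
    tau = abs(tau)
    return np.mean(x[:len(x) - tau] * x[tau:])

def cum3(x, t1, t2):                      # t1, t2 >= 0
    n = len(x) - max(0, t1, t2)
    return np.mean(x[:n] * x[t1:t1 + n] * x[t2:t2 + n])

def cum4(x, t1, t2, t3):                  # t1, t2, t3 >= 0
    n = len(x) - max(0, t1, t2, t3)
    m4 = np.mean(x[:n] * x[t1:t1 + n] * x[t2:t2 + n] * x[t3:t3 + n])
    return (m4 - m2(x, t1) * m2(x, t3 - t2)
               - m2(x, t2) * m2(x, t3 - t1)
               - m2(x, t3) * m2(x, t2 - t1))

# Non-Gaussian PAM-16-like i.i.d. input filtered by a short FIR channel
rng = np.random.default_rng(0)
s = rng.choice(np.arange(-15, 16, 2), size=20000).astype(float)
x = np.convolve(s, [1.0, 0.5, -0.3], mode="valid")
x -= x.mean()

print(cum3(x, 0, 0), cum4(x, 0, 0, 0))    # skewness-like and kurtosis-like values
```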

Blind Identification of MA Models Using Cumulants

In this paper, several techniques for blind identification of moving average (MA) processes are presented. These methods utilize third- and fourth-order cumulants of noisy observations of the system output. The system is driven by an independent and identically distributed (i.i.d.) non-Gaussian sequence that is not observed. Two nonlinear optimization algorithms, namely the gradient descent and Gauss-Newton algorithms, are presented. An algorithm based on the joint diagonalization of fourth-order cumulant matrices (FOSI) is also considered, as well as an improved version of the classical C(q, 0, k) algorithm based on the choice of the best 1-D slice of fourth-order cumulants. Various simulation examples are presented to illustrate the effectiveness of our methods.
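
For reference, the MA(q) setting assumed here and the classical Giannakis-type single-slice relation for the third-order case are shown below; the C(q, 0, k) variant uses a fourth-order cumulant slice in the same spirit.

```latex
% MA(q) model driven by an unobserved i.i.d. non-Gaussian input e(n), observed
% in additive noise w(n). With h(0) = 1, the classical single-slice relation
% recovers the coefficients from third-order output cumulants:
x(n) = \sum_{k=0}^{q} h(k)\, e(n-k) + w(n), \qquad
h(k) = \frac{C_{3x}(q, k)}{C_{3x}(q, 0)}, \quad k = 0, \dots, q .
```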

GPS Navigator for Blind Walking in a Campus

We developed a GPS-based navigation device for the blind, with audio guidance in Thai. The device is composed of simple and inexpensive hardware components, and its user interface is deliberately simple. It determines optimal routes to various landmarks on our university campus using heuristic search for the next waypoints. We tested the device and noted its limitations and possible extensions.
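
A minimal sketch of one plausible waypoint-selection heuristic is given below: pick the nearest unvisited landmark by great-circle (haversine) distance. The coordinates are made-up campus landmarks, and the device's actual search procedure is not reproduced.

```python
# Sketch: choose the next waypoint as the nearest landmark by haversine
# (great-circle) distance. Landmark names and coordinates are hypothetical.
from math import radians, sin, cos, asin, sqrt

def haversine_m(a, b):
    """Great-circle distance in meters between (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(h))

landmarks = {
    "library":   (13.7460, 100.5320),
    "cafeteria": (13.7472, 100.5335),
    "gate":      (13.7455, 100.5301),
}

position = (13.7465, 100.5318)
next_wp = min(landmarks, key=lambda name: haversine_m(position, landmarks[name]))
print("next waypoint:", next_wp)
```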

A Double Referenced Contrast for Blind Source Separation

This paper addresses the problem of blind source separation (BSS). To recover the original signals from linear instantaneous mixtures, we propose a new contrast function based on the use of a double-referenced system. Our approach assumes statistically independent sources. The reference vectors are incorporated into the cumulants to evaluate independence. The separating matrix is estimated in two steps: whitening of the observations and joint diagonalization of a set of referenced cumulant matrices. Computer simulations are presented to demonstrate the effectiveness of the proposed approach.
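
The sketch below shows only the first step, whitening, via an eigendecomposition of the sample covariance; the joint diagonalization of referenced cumulant matrices is the paper's contribution and is not reproduced.

```python
# Sketch of the whitening step: decorrelate the observed mixtures and
# normalize them to unit variance using the sample covariance.
import numpy as np

def whiten(X):
    """X: (n_sensors, n_samples) zero-mean observations. Returns (Z, W) with Z = W X."""
    C = np.cov(X)                              # sample covariance
    d, E = np.linalg.eigh(C)                   # eigendecomposition
    W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T    # whitening matrix C^{-1/2}
    return W @ X, W

# Toy instantaneous mixture of two independent sources
rng = np.random.default_rng(0)
S = np.vstack([rng.uniform(-1, 1, 5000), rng.laplace(size=5000)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S
X -= X.mean(axis=1, keepdims=True)

Z, W = whiten(X)
print(np.cov(Z).round(3))                      # approximately the identity matrix
```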