A Wavelet-Based Watermarking Method Exploiting the Contrast Sensitivity Function

The efficiency of an image watermarking technique depends on the preservation of visually significant information. This is attained by embedding the watermark transparently with the maximum possible strength. The current paper presents an approach for still-image digital watermarking in which the watermark embedding process employs the wavelet transform and incorporates Human Visual System (HVS) characteristics. The sensitivity of a human observer to contrast as a function of spatial frequency is described by the Contrast Sensitivity Function (CSF). The strength of the watermark within the decomposition subbands, each of which occupies an interval of spatial frequencies, is adjusted according to this sensitivity. Moreover, the watermark embedding process is carried out on the subband coefficients that lie on edges, where distortions are less noticeable. The experimental evaluation of the proposed method shows very good results in terms of robustness and transparency.
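
As a rough, hedged illustration of the CSF-weighted embedding idea described above, the sketch below embeds a binary watermark into the detail subbands of a one-level wavelet decomposition, scales the strength per subband by assumed visibility weights, and restricts embedding to high-magnitude (edge-like) coefficients. The weights, the edge threshold and the use of PyWavelets are assumptions for illustration, not the paper's exact parameters.

```python
# Hedged sketch: CSF-weighted additive watermarking in wavelet detail subbands.
# Assumes PyWavelets (pywt) and NumPy; the CSF weights and edge threshold are
# illustrative placeholders, not the values used in the paper.
import numpy as np
import pywt

def embed_watermark(image, watermark_bits, csf_weights=None, edge_quantile=0.9):
    """Embed +/-1 watermark bits into edge-like coefficients of the detail subbands."""
    if csf_weights is None:
        # Assumed relative visibility weights per subband (less visible -> stronger).
        csf_weights = {"LH": 2.0, "HL": 2.0, "HH": 4.0}
    LL, (LH, HL, HH) = pywt.dwt2(image.astype(float), "haar")
    subbands = {"LH": LH, "HL": HL, "HH": HH}
    bits = iter(watermark_bits)
    for name, band in subbands.items():
        thresh = np.quantile(np.abs(band), edge_quantile)  # keep only strong (edge) coefficients
        rows, cols = np.nonzero(np.abs(band) >= thresh)
        for r, c in zip(rows, cols):
            try:
                b = next(bits)
            except StopIteration:
                break
            band[r, c] += csf_weights[name] * (1.0 if b else -1.0)
    return pywt.idwt2((LL, (subbands["LH"], subbands["HL"], subbands["HH"])), "haar")

# Example usage with random data.
img = np.random.rand(64, 64) * 255
wm = np.random.randint(0, 2, size=200)
marked = embed_watermark(img, wm)
```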

A method of Authentication for Quantum Networks

Quantum cryptography offers a method of key agreement that is unbreakable by any external adversary. Authentication is of crucial importance, as perfect secrecy is worthless if the identity of the addressee cannot be ensured before sending important information. Message authentication has been studied thoroughly, but no approach seems able to explicitly counter meet-in-the-middle impersonation attacks. The goal of this paper is the development of an authentication scheme that is resistant to active adversaries controlling the communication channel. The scheme is built on top of a key-establishment protocol and is unconditionally secure if built upon quantum cryptographic key exchange. In general, its security is the same as that of the underlying key-agreement protocol.

Designing Pictograms for Food Portion Size

The objective of this paper is to investigate a new approach, based on the idea of pictograms, for representing food portion sizes. The approach adopts the model of the United States Pharmacopeia-Drug Information (USP-DI). The representation of each food portion size is composed of three parts: the frame, the connotation of dietary portion sizes, and the layout. To investigate users' comprehension of this approach, two experiments were conducted with 122 Taiwanese participants, 60 male and 62 female, aged between 16 and 64 (divided into age groups of 16-30, 31-45 and 46-64). In Experiment 1, the mean correct rate for the understanding level of food items was 48.54% (S.D. = 95.08) and the mean response time was 2.89 s (S.D. = 2.14). The difference in correct rates across age groups was significant (P* = 0.00).

Content-based Retrieval of Medical Images

With the advance of multimedia and diagnostic imaging technologies, the number of radiographic images is increasing constantly. The medical field demands sophisticated systems for searching and retrieving the multimedia documents produced. This paper presents ongoing research that focuses on the semantic content of radiographic image documents to facilitate semantic-based radiographic image indexing and retrieval. The proposed model divides a radiographic image document according to its semantic content and converts it into a logical structure and a semantic structure. The logical structure represents the overall organization of information. The semantic structure, which is bound to the logical structure, is composed of semantic objects and their interrelationships within the various spaces of the radiographic image.

The Mutated Distance between Two Mixture Trees

The evolutionary tree is an important topic in bioinformatics. In 2006, Chen and Lindsay proposed a new method to build a mixture tree from DNA sequences. The mixture tree is a new type of evolutionary tree that carries two pieces of information beyond those of an ordinary evolutionary tree: a time parameter and the set of mutated sites. In 2008, Lin and Juan proposed an algorithm to compute the distance between two mixture trees; however, their algorithm considers only the time parameter. In this paper, we propose a method to measure the similarity of two mixture trees that also takes the set of mutated sites into account, and we develop two algorithms to compute the distance between two mixture trees. The time complexities of the two proposed algorithms are O(n² × max{h(T1), h(T2)}) and O(n²), respectively.
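
Since the abstract does not give the two algorithms themselves, the following is only a toy sketch of one plausible ingredient: combining a time difference with a mutated-site set difference over matched nodes of the two trees. The node matching, the Jaccard-style set term and the weighting are assumptions.

```python
# Toy sketch: combining a time difference and a mutated-site set difference
# when comparing matched nodes of two mixture trees. The node matching and
# the weighting are illustrative assumptions, not the algorithms of the paper.

def node_distance(time1, sites1, time2, sites2, alpha=0.5):
    """Distance between two matched nodes from the time parameter and mutated-site sets."""
    time_term = abs(time1 - time2)
    union = sites1 | sites2
    jaccard_term = len(sites1 ^ sites2) / len(union) if union else 0.0
    return alpha * time_term + (1 - alpha) * jaccard_term

def tree_distance(matched_pairs, alpha=0.5):
    """Sum node distances over a given matching of nodes between two mixture trees."""
    return sum(node_distance(t1, s1, t2, s2, alpha) for (t1, s1), (t2, s2) in matched_pairs)

# Example: two trees described by (time, mutated_sites) per matched node.
pairs = [((0.10, {3, 7}), (0.12, {3, 9})),
         ((0.25, {3, 7, 11}), (0.30, {3, 7, 11}))]
print(tree_distance(pairs))
```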

Contractor Selection in Saudi Arabia

Contractor selection in Saudi Arabia is very important because of the large construction boom and the contractor's role in overcoming construction risks. The need to investigate contractor selection arises from the following reasons: the large number of defaulted or failed projects (18%), the large number of disputes attributed to contractors during the project execution stage (almost twofold), the extension of the General Agreement on Tariffs and Trade (GATT) into the construction industry, and finally the small number of studies on the subject. The current selection strategy is imperfect and is considered a reason why irresponsible contractors are selected. In response, this research was conducted to review contractor selection strategies as an integral part of a longer research effort to develop a sound selection model. Many techniques can be used to form a selection strategy: multiple criteria for optimizing the decision, prequalification to assess a contractor's responsibility, a bidding process for competition, third-party guarantees to strengthen the selection, and fuzzy techniques for handling ambiguity and incomplete information.

Segmentation of Breast Lesions in Ultrasound Images Using Spatial Fuzzy Clustering and Structure Tensors

Segmentation in ultrasound images is challenging due to interference from speckle noise and the fuzziness of boundaries. In this paper, a segmentation scheme using fuzzy c-means (FCM) clustering that incorporates both intensity and texture information is proposed to extract breast lesions in ultrasound images. First, the nonlinear structure tensor, which helps refine the edges detected from intensity, is used to extract speckle texture. Then, spatial FCM clustering is applied in the image feature space for segmentation. In experiments with simulated and clinical ultrasound images, spatial FCM clustering with both intensity and texture information yields more accurate results than conventional FCM or spatial FCM without texture information.
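
As a rough illustration of clustering on a joint intensity-plus-texture feature space (not the paper's exact scheme, which uses a nonlinear structure tensor and a spatial FCM), the sketch below builds per-pixel features from intensity and smoothed structure-tensor components and clusters them with a plain fuzzy c-means loop; the smoothing scale and cluster count are assumptions.

```python
# Hedged sketch: per-pixel features from intensity plus (linear) structure-tensor
# components, clustered with a plain fuzzy c-means loop. This illustrates joint
# intensity/texture clustering only; the paper's nonlinear structure tensor and
# spatial FCM are not reproduced here.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_features(img, sigma=2.0):
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    return np.stack([img, jxx, jxy, jyy], axis=-1).reshape(-1, 4)

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))          # membership ~ d^(-2/(m-1)), then normalize
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

img = np.random.rand(64, 64)                    # stand-in for an ultrasound image
X = structure_tensor_features(img)
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
U, centers = fuzzy_cmeans(X, c=2)
labels = U.argmax(axis=1).reshape(img.shape)    # hard segmentation from memberships
```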

Robust Clustering with Dimension Reduction

Clustering is the process of identifying homogeneous groups of objects, called clusters, and is an interesting topic in data mining; the objects within a group or class share similar characteristics. This paper discusses a robust clustering process for image data combined with two dimension-reduction approaches: two-dimensional principal component analysis (2DPCA) and principal component analysis (PCA). A standard approach to handling the high dimensionality of image data is dimension reduction, which transforms high-dimensional data into a lower-dimensional space with limited loss of information; one of the most common forms of dimensionality reduction is PCA. 2DPCA is often called a variant of PCA in which the image matrices are treated directly as 2D matrices: they do not need to be transformed into vectors, so the image covariance matrix can be constructed directly from the original image matrices. However, the classical covariance matrix being decomposed is very sensitive to outlying observations. The objective of this paper is to compare the performance of the robust minimizing vector variance (MVV) estimator within the two-dimensional projection (2DPCA) and within PCA for clustering arbitrary image data when outliers are hidden in the data set. Simulations of the robustness aspects and an illustration of image clustering are discussed at the end of the paper.
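
The contrast between the two reduction approaches can be sketched as below: plain PCA on vectorized images versus 2DPCA, where the image covariance matrix is built directly from the image matrices. The robust MVV estimator of the paper is not reproduced; both reductions here use the ordinary covariance.

```python
# Hedged sketch: classical PCA on vectorized images vs. 2DPCA on raw image matrices.
# The robust minimizing-vector-variance (MVV) estimator used in the paper is not
# shown; both reductions here use the ordinary (non-robust) covariance.
import numpy as np

def pca_project(images, k):
    """PCA: vectorize each image, project onto top-k eigenvectors of the covariance."""
    X = np.array([im.ravel() for im in images], dtype=float)
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs[:, np.argsort(vals)[::-1][:k]]
    return Xc @ W                                   # (n_images, k)

def twodpca_project(images, k):
    """2DPCA: build the image covariance directly from image matrices (no vectorization)."""
    A = np.array(images, dtype=float)
    mean = A.mean(axis=0)
    G = sum((a - mean).T @ (a - mean) for a in A) / len(A)
    vals, vecs = np.linalg.eigh(G)
    W = vecs[:, np.argsort(vals)[::-1][:k]]
    return np.array([a @ W for a in A])             # (n_images, rows, k)

imgs = [np.random.rand(32, 32) for _ in range(10)]
print(pca_project(imgs, 5).shape, twodpca_project(imgs, 5).shape)
```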

A New Protocol for Concealed Data Aggregation in Wireless Sensor Networks

Wireless sensor networks (WSNs) consist of many sensor nodes that are placed in unattended environments, such as military sites, in order to collect important information. It is very important to implement a secure protocol that prevents the forwarding of forged data and the modification of aggregated data content, while keeping the delay and the communication, computation and storage overhead low. This paper presents a new protocol for concealed data aggregation (CDA). In this protocol the network is divided into virtual cells, and the nodes within each cell produce a shared key to send and receive concealed data among themselves. Because data aggregation in each cell is performed locally and a secure authentication mechanism is implemented, the data aggregation delay is very low and malicious nodes cannot inject false data into the network. To evaluate the performance of the proposed protocol, we present computational models that demonstrate its performance and low overhead.
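
The abstract does not specify the concealment primitive, so the sketch below uses a common CDA building block, additively concealed aggregation with per-node keystreams derived from the shared cell key, purely as an illustration; the node IDs, key derivation and modulus are assumptions rather than the paper's protocol.

```python
# Hedged sketch: additively concealed in-cell aggregation, a common CDA building
# block (not necessarily the protocol of the paper). Each node masks its reading
# with a keystream derived from the shared cell key; the aggregator adds concealed
# values without seeing them; the sink removes the combined mask.
import hashlib

MOD = 2**32  # assumed modulus, large enough for the aggregate

def keystream(cell_key: bytes, node_id: int, epoch: int) -> int:
    data = cell_key + node_id.to_bytes(4, "big") + epoch.to_bytes(4, "big")
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

def conceal(reading: int, cell_key: bytes, node_id: int, epoch: int) -> int:
    return (reading + keystream(cell_key, node_id, epoch)) % MOD

def aggregate(concealed_values):
    return sum(concealed_values) % MOD           # aggregator never sees plaintexts

def reveal(agg: int, cell_key: bytes, node_ids, epoch: int) -> int:
    mask = sum(keystream(cell_key, n, epoch) for n in node_ids) % MOD
    return (agg - mask) % MOD                    # sink recovers the sum of readings

cell_key = b"shared-cell-key"
readings = {1: 21, 2: 19, 3: 25}
epoch = 7
agg = aggregate(conceal(v, cell_key, n, epoch) for n, v in readings.items())
print(reveal(agg, cell_key, readings.keys(), epoch))  # -> 65
```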

SIFT Accordion: A Space-Time Descriptor Applied to Human Action Recognition

Recognizing human actions from videos is an active field of research in computer vision and pattern recognition. Human activity recognition has many potential applications, such as video surveillance, human-machine interaction, sports video retrieval and robot navigation. Currently, local descriptors and bag-of-visual-words models achieve state-of-the-art performance for human action recognition. The main challenge in feature description is how to represent local motion information efficiently. Most previous works focus on extending 2D local descriptors into 3D ones to describe the local information around every interest point. In this paper, we propose a new spatio-temporal descriptor based on a space-time description of moving points. Our description is based on an Accordion representation of the video, which is well suited to recognizing human actions from 2D local descriptors without the need for 3D extensions. We use the bag-of-words approach to represent videos, quantifying 2D local descriptors that capture both temporal and spatial features with a good compromise between computational complexity and action recognition rate. We obtain strong results on publicly available action data sets.
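
The Accordion representation rearranges a video so that temporal variation becomes visible to a 2D descriptor. The sketch below shows one plausible rearrangement (stacking each pixel column across frames), followed by 2D SIFT description and bag-of-words quantization; the rearrangement details, OpenCV/scikit-learn usage and parameters are assumptions, not the published construction.

```python
# Hedged sketch: an "accordion-like" 2D rearrangement of a video followed by 2D SIFT
# and bag-of-words quantization. The exact Accordion construction of the paper is not
# reproduced; this stacks each pixel column across time to expose temporal variation
# to a 2D descriptor, which is only an illustrative assumption.
import numpy as np
import cv2
from sklearn.cluster import KMeans

def accordion_image(frames):
    """frames: list of equally sized grayscale frames (H, W). Returns an (H, W*T) image
    whose columns interleave the same spatial column across time (column w*T + t)."""
    stack = np.stack(frames, axis=0)                    # (T, H, W)
    T, H, W = stack.shape
    return stack.transpose(1, 2, 0).reshape(H, W * T)

def bow_histogram(descriptors, codebook):
    words = codebook.predict(descriptors.astype(np.float32))
    hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    return hist / max(hist.sum(), 1)

# Example with random frames (stand-ins for a video clip); the codebook would normally
# be learned from a training set rather than from the same clip.
frames = [np.random.randint(0, 255, (120, 160), dtype=np.uint8) for _ in range(16)]
acc = accordion_image(frames)
sift = cv2.SIFT_create()
_, desc = sift.detectAndCompute(acc, None)
if desc is not None:
    codebook = KMeans(n_clusters=50, n_init=10, random_state=0).fit(desc)
    video_signature = bow_histogram(desc, codebook)
```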

The System Architecture of the Open European Nephrology Science Centre

The amount and heterogeneity of data in biomedical research, notably in interdisciplinary research, require new methods for the collection, presentation and analysis of information. Important data from laboratory experiments as well as patient trials are available but originate from distributed resources. The Charité Medical School in Berlin, together with the German Research Foundation (DFG), has established a new information service center for kidney diseases and transplantation, the Open European Nephrology Science Centre (OpEN.SC). The system is based on a service-oriented architecture (SOA) with main and auxiliary modules arranged in four layers. To improve the reuse and efficient arrangement of the services, the functionalities are described as business processes using the standardised Business Process Execution Language (BPEL).

A Discriminatory Rewarding Mechanism for Sybil Detection with Applications to Tor

This paper presents an economic game for Sybil detection in a distributed computing environment. Cost parameters reflecting the impact of different Sybil attacks are introduced into the Sybil detection game. Optimal strategies for this game, in which both Sybil and non-Sybil identities are expected to participate, are devised. Based on this game, a cost-sharing economic mechanism called the Discriminatory Rewarding Mechanism for Sybil Detection is proposed. A detective accepts a security deposit from each active agent, negotiates with the agents, and offers rewards to the Sybils if the latter disclose their identity. The basic objective of the detective is to determine the optimal reward amount for each Sybil that will encourage the maximum possible number of Sybils to reveal themselves. Maintaining privacy is an important issue for the mechanism, since the participants involved in the negotiation are generally reluctant to share their private information. The mechanism has been applied to Tor by introducing a reputation scoring function.

Signed Approach for Mining Web Content Outliers

The emergence of the Internet has revolutionized information storage and retrieval. Because most data on the web are unstructured and contain a mix of text, video, audio, etc., information must be mined to cater to the specific needs of users without losing important hidden information. Developing user-friendly, automated tools that provide relevant information quickly has therefore become a major challenge in web mining research. Most existing web mining algorithms concentrate on finding frequent patterns while neglecting the less frequent ones, which are likely to contain outlying data such as noise and irrelevant or redundant data. This paper focuses on a Signed approach with full-word matching over an organized domain dictionary for mining web content outliers. The Signed approach yields both the relevant web documents and the outlying web documents. Because the dictionary is organized by the number of characters in a word, searching and retrieval of documents take less time and less space.
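
The core data structure, a domain dictionary organized by word length and matched on full words, can be sketched as follows; the signed score used here (fraction of document words found in the dictionary, offset by a relevance threshold) is an illustrative assumption rather than the paper's exact formulation.

```python
# Hedged sketch: a domain dictionary bucketed by word length with full-word matching,
# used to score documents as relevant or outlying. The signed scoring rule here is an
# illustrative assumption rather than the exact formulation of the paper.
import re
from collections import defaultdict

def build_dictionary(domain_terms):
    buckets = defaultdict(set)
    for term in domain_terms:
        buckets[len(term)].add(term.lower())     # organize by number of characters
    return buckets

def signed_score(document, buckets, threshold=0.5):
    words = re.findall(r"[a-z]+", document.lower())
    if not words:
        return 0.0, "outlier"
    hits = sum(1 for w in words if w in buckets.get(len(w), ()))  # full-word match only
    signed = hits / len(words) - threshold       # positive -> relevant, negative -> outlier
    return signed, ("relevant" if signed >= 0 else "outlier")

buckets = build_dictionary(["wavelet", "watermark", "image", "transform", "frequency"])
print(signed_score("The wavelet transform of the image", buckets))
print(signed_score("Cheap flights and hotel deals today", buckets))
```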

Automatic 2D/2D Registration using Multiresolution Pyramid based Mutual Information in Image Guided Radiation Therapy

Medical image registration is the key technology in image-guided radiation therapy (IGRT) systems. Building on previous work on our IGRT prototype with a bi-orthogonal x-ray imaging system, this paper describes a method for 2D/2D rigid-body registration using multiresolution-pyramid-based mutual information. The method involves three key steps. First, four 2D images are obtained as input for the registration: two x-ray projection images and two digitally reconstructed radiographs (DRRs). Second, each pair of corresponding x-ray and DRR images is matched using multiresolution-pyramid-based mutual information within the ITK registration framework. Third, the final couch offset is obtained through a coordinate transformation by calculating the translations acquired from the two pairs of images. A simulation example of a parotid gland tumor case and a clinical example of an anthropomorphic head phantom were employed in the verification tests. In addition, the influence of different CT slice thicknesses was tested. The simulation results showed positioning errors of 0.068±0.070 mm, 0.072±0.098 mm and 0.154±0.176 mm along the lateral, longitudinal and vertical axes, respectively. The clinical test indicated that the positioning errors of the planned isocenter were 0.066 mm, 0.07 mm and 2.06 mm on average with a CT slice thickness of 2.5 mm. It can be concluded that the method, with its verified accuracy and robustness, can be used effectively in IGRT systems for patient setup.
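
The paper works within the ITK registration framework; a comparable setup for one x-ray/DRR pair can be sketched with SimpleITK's multi-resolution Mattes mutual information registration, as below. The file names, optimizer settings and the simple translation transform are assumptions.

```python
# Hedged sketch: multiresolution (pyramid) mutual-information registration of one
# x-ray / DRR pair using SimpleITK, analogous to (but not identical with) the ITK
# setup described in the paper. File names and optimizer settings are assumptions.
import SimpleITK as sitk

fixed = sitk.ReadImage("xray_projection.mha", sitk.sitkFloat32)   # assumed file names
moving = sitk.ReadImage("drr.mha", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                             minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInitialTransform(sitk.TranslationTransform(fixed.GetDimension()), inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)
# Three-level pyramid: coarse-to-fine shrink factors with matching smoothing sigmas.
reg.SetShrinkFactorsPerLevel([4, 2, 1])
reg.SetSmoothingSigmasPerLevel([2, 1, 0])

transform = reg.Execute(fixed, moving)
print("Estimated in-plane translation:", transform.GetParameters())
```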

Automated Stereophotogrammetry Data Cleansing

The stereophotogrammetry modality is gaining more widespread use in the clinical setting. Registration and visualization of these data, in conjunction with conventional 3D volumetric image modalities, provide virtual human data with textured soft tissue alongside internal anatomical and structural information. In this investigation, computed tomography (CT) and stereophotogrammetry data are acquired from four anatomical phantoms and registered using the trimmed iterative closest point (TrICP) algorithm. This paper addresses the issue of imaging artifacts around the stereophotogrammetry surface edge, using the registered CT data as a reference. Several iterative algorithms are implemented to automatically identify and remove stereophotogrammetry surface-edge outliers, improving the overall visualization of the combined stereophotogrammetry and CT data. This paper shows that outliers at the surface edge of stereophotogrammetry data can be removed automatically and successfully.
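
One simple way to realize this kind of automatic edge-outlier removal, using the registered CT surface as the reference, is an iterative distance-based trim such as the sketch below; the KD-tree nearest-neighbour distance, the mean-plus-k-sigma threshold and the synthetic point clouds are assumptions, not the paper's specific algorithms.

```python
# Hedged sketch: iterative removal of stereophotogrammetry surface-edge outliers using
# distance to the registered CT surface as a reference. The distance threshold rule is
# an illustrative assumption, not one of the specific algorithms of the paper.
import numpy as np
from scipy.spatial import cKDTree

def trim_edge_outliers(stereo_pts, ct_surface_pts, k_sigma=2.5, max_iters=10):
    """Iteratively drop stereo points whose distance to the CT surface is anomalously large."""
    tree = cKDTree(ct_surface_pts)
    keep = np.ones(len(stereo_pts), dtype=bool)
    for _ in range(max_iters):
        d, _ = tree.query(stereo_pts[keep])
        cutoff = d.mean() + k_sigma * d.std()
        new_keep = keep.copy()
        new_keep[np.flatnonzero(keep)[d > cutoff]] = False
        if new_keep.sum() == keep.sum():         # converged: no more outliers removed
            break
        keep = new_keep
    return stereo_pts[keep], keep

# Example with synthetic point clouds (stand-ins for registered data).
ct = np.random.rand(5000, 3)
stereo = np.vstack([np.random.rand(2000, 3), np.random.rand(50, 3) + 2.0])  # 50 far outliers
cleaned, mask = trim_edge_outliers(stereo, ct)
print(len(stereo), "->", len(cleaned))
```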

Cross Layer Optimization for Fairness Balancing Based on Adaptively Weighted Utility Functions in OFDMA Systems

Cross-layer optimization based on utility functions has recently been studied extensively, and numerous types of utility functions have been examined in the corresponding literature. However, a major drawback is that most utility functions take a fixed mathematical form or are based on simple combining, which cannot fully exploit the available information. In this paper, we formulate a framework of cross-layer optimization based on Adaptively Weighted Utility Functions (AWUF) for fairness balancing in OFDMA networks. Under this framework, a two-step allocation algorithm is provided as a sub-optimal solution, whose control parameters can be updated in real time to accommodate instantaneous QoS constraints. The simulation results show that the proposed algorithm achieves high throughput while balancing fairness among multiple users.
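
To make the idea of adaptively weighted utilities concrete, the toy sketch below updates per-user weights from the gap between achieved rates and QoS targets, then assigns subcarriers greedily by weighted marginal utility; the weight update, the log-utility and the random channel model are assumptions for illustration and not the paper's algorithm.

```python
# Hedged toy sketch of a two-step allocation with adaptively weighted utilities:
# (1) update per-user weights from the gap to their QoS rate targets, (2) assign each
# subcarrier greedily to the user with the largest weighted marginal utility.
# The weight update, log-utility and random channel gains are illustrative assumptions.
import numpy as np

def allocate(gains, weights, noise=1.0, power=1.0):
    """gains: (users, subcarriers) channel gains. Returns per-user rates and assignment."""
    n_users, n_sc = gains.shape
    rates = np.zeros(n_users)
    assignment = np.full(n_sc, -1)
    for sc in np.argsort(-gains.max(axis=0)):                 # best subcarriers first
        per_sc_rate = np.log2(1 + power * gains[:, sc] / noise)
        marginal = weights * (np.log1p(rates + per_sc_rate) - np.log1p(rates))
        u = int(np.argmax(marginal))
        assignment[sc] = u
        rates[u] += per_sc_rate[u]
    return rates, assignment

def update_weights(rates, targets, weights, step=0.5):
    """Increase the weight of users falling short of their QoS target, decrease otherwise."""
    return np.clip(weights + step * (targets - rates) / np.maximum(targets, 1e-9), 0.1, 10.0)

rng = np.random.default_rng(0)
gains = rng.exponential(1.0, size=(4, 32))       # 4 users, 32 subcarriers
targets = np.array([8.0, 8.0, 4.0, 4.0])         # assumed per-user rate targets (bit/s/Hz)
weights = np.ones(4)
for _ in range(5):                                # alternate allocation and weight updates
    rates, assignment = allocate(gains, weights)
    weights = update_weights(rates, targets, weights)
print(rates, weights)
```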

The Decentralized Nonlinear Controller of Robot Manipulator with External Load Compensation

This paper describes a newly designed decentralized nonlinear control strategy for a robot manipulator. Based on nonlinear state feedback theory, a decentralized concept is developed to improve on the drawbacks of previous works concerning complicated intelligent control and low-cost, effective sensing. The control methodology is derived in the sense of the Lyapunov theorem, so the stability of the control system is guaranteed. The decentralized algorithm does not require angle and velocity information from the other joints. Each individual joint controller is implemented on a digital processor placed near its actuator, making it possible to achieve good dynamics and modularity. Computer simulations have been conducted to validate the effectiveness of the proposed control scheme under possible uncertainties and different reference trajectories. The merit of the proposed control system is shown in comparison with a classical control system.

The Future of Electronic Money

The history of money is described in relation to the history of computing. With the transformation and acceptance of money as information, major challenges to the security of money have involved engineering, computer science, and management. Research opportunities and challenges are described as money continues its transformation into information.

Person Identification by Using AR Model for EEG Signals

A direct connection between the electroencephalogram (EEG) and the genetic information of individuals has been investigated by neurophysiologists and psychiatrists since the 1960s, opening a new research area. This paper focuses on person identification based on features extracted from the EEG, which can show a direct connection between the EEG and the genetic information of subjects. In this work, the full EO EEG signal of healthy individuals is estimated by an autoregressive (AR) model, and the AR parameters are extracted as features. Two methods of constructing the feature vector are proposed: in the first, the extracted parameters of each channel are used as a feature vector in the classification step, which employs a competitive neural network; in the second, a combination of parameters from different channels is used as the feature vector. Correct classification scores in the range of 80% to 100% reveal the potential of our approach for person classification/identification and agree with previous research showing evidence that the EEG signal carries genetic information. The novelty of this work lies in the combination of AR parameters and the network type (competitive network) that we have used. A comparison between the first and the second approach indicates a preference for the second one.
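
A minimal sketch of the feature-extraction side, fitting an AR model per EEG channel and concatenating the coefficients into one feature vector (in the spirit of the second method above), is shown below; the AR order, the plain least-squares fit and the synthetic signals are assumptions, and the competitive-network classifier is not reproduced.

```python
# Hedged sketch: per-channel AR-parameter features for EEG-based person identification.
# A plain least-squares AR fit is used and the channel coefficients are concatenated;
# the AR order and synthetic signals are assumptions, and the competitive neural
# network classifier of the paper is not shown.
import numpy as np

def ar_coefficients(signal, order=6):
    """Least-squares fit of x[t] = a1*x[t-1] + ... + ap*x[t-p]; returns (a1..ap)."""
    X = np.column_stack([signal[order - k - 1: len(signal) - k - 1] for k in range(order)])
    y = signal[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def eeg_feature_vector(channels, order=6):
    """Concatenate AR parameters from all channels into one feature vector."""
    return np.concatenate([ar_coefficients(ch, order) for ch in channels])

# Example with synthetic multi-channel "EEG" (stand-in for real recordings).
rng = np.random.default_rng(0)
channels = [np.cumsum(rng.standard_normal(1024)) for _ in range(8)]   # 8 channels
features = eeg_feature_vector(channels)
print(features.shape)        # (8 * order,) feature vector for one subject/trial
```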

Research Topic Map Construction

With the explosive increase in information published on the Web, researchers have to filter information when searching for conference-related information. To make it easier for users to search for related information, this paper uses Topic Maps and social information to implement an ontology, since ontologies can provide the formalisms and knowledge structuring required for comprehensive and transportable machine understanding of digital information. In addition to enriching the information in Topic Maps, this paper proposes a method of constructing research Topic Maps that takes social information into account. First, conference data are extracted from the web. Then conference topics and the relationships between them are extracted using the proposed method. Finally, the result is visualized for users to search and browse. This paper uses an ontology containing an abundant knowledge hierarchy to help researchers obtain useful search results. However, most previous ontology construction methods did not take "people" into account, so this paper also analyzes social information, which helps researchers find possibilities for cooperation and collaboration as well as associations between research topics, and tries to offer better results.