Performance Evaluation of MUSIC and Minimum-Norm Eigenvector Algorithms in Resolving Noisy Multiexponential Signals

Eigenvector methods are gaining increasing acceptance in the area of spectrum estimation. This paper tests and evaluates the performance of two of the most popular subspace techniques in determining the parameters of multiexponential signals with real decay constants buried in noise: MUSIC (Multiple Signal Classification) and the minimum-norm technique. It is shown that the two methods perform almost equally well on multiexponential signals, with MUSIC displaying better-defined peaks.
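
As a rough illustration of the MUSIC side of this comparison, the sketch below estimates real decay constants by scanning a grid of candidate decays against the noise subspace of a sample covariance matrix. It is a minimal sketch, assuming uniformly sampled data and a known model order p; the paper's exact formulation (and its minimum-norm counterpart) may differ.

```python
import numpy as np

def music_pseudospectrum(y, p, decay_grid):
    """MUSIC-style scan over candidate real decay constants.

    y          : 1-D noisy multiexponential signal, uniformly sampled
    p          : assumed number of exponential components (model order)
    decay_grid : candidate decay constants (per sample) to evaluate
    """
    m = len(y) // 2                          # covariance order (heuristic)
    # Hankel data matrix and its sample covariance.
    H = np.array([y[i:i + m] for i in range(len(y) - m + 1)])
    R = H.T @ H / H.shape[0]
    # eigh returns ascending eigenvalues: the m - p smallest span the noise subspace.
    _, V = np.linalg.eigh(R)
    En = V[:, :m - p]
    n = np.arange(m)
    spectrum = []
    for lam in decay_grid:
        e = np.exp(-lam * n)                 # "steering vector" for decay lam
        e /= np.linalg.norm(e)
        # Peaks occur where e is (nearly) orthogonal to the noise subspace.
        spectrum.append(1.0 / np.linalg.norm(En.T @ e) ** 2)
    return np.array(spectrum)
```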

Effective Implementation of Burst Segmentation Techniques in OBS Networks

Optical Burst Switching (OBS) is a relatively new optical switching paradigm. Contention and burst loss in OBS networks are major concerns. To resolve contentions, an interesting alternative to discarding the entire data burst is to drop only part of it. Partial burst dropping is based on the burst segmentation concept, whose implementation is constrained by several technical challenges, in addition to the complexity it adds to the algorithms and protocols at both edge and core nodes. In this paper, the burst segmentation concept is investigated, and an implementation scheme is proposed and evaluated. An appropriate dropping policy that effectively manages the size of the segmented data bursts is presented. The dropping policy is further supported by a new control packet format that provides constant transmission overhead.
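
The tail-dropping variant of burst segmentation can be illustrated with a short sketch: when a contending burst overlaps a burst already in transmission, only the overlapping tail segments are discarded. All names below are hypothetical; the paper's actual dropping policy additionally manages segment sizes and control-packet overhead.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Burst:
    arrival: float        # arrival time at the core node
    seg_time: float       # transmission time of one segment
    segments: List[int]   # segment payloads (e.g., packet counts), in order

def segment_on_contention(original: Burst, contending: Burst):
    """Tail-dropping on contention: keep the head segments transmitted
    before the overlap begins, drop only the overlapping tail."""
    overlap_start = contending.arrival - original.arrival
    kept = max(0, int(overlap_start // original.seg_time))
    head, dropped_tail = original.segments[:kept], original.segments[kept:]
    return head, dropped_tail   # head survives; tail is lost or deflected
```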

An Experiment on Personal Archiving and Retrieving Image System (PARIS)

PARIS (Personal Archiving and Retrieving Image System) is an experimental personal photograph library. It includes more than 80,000 consumer photographs accumulated over approximately five years, metadata based on our proposed MPEG-7 annotation architecture, Dozen Dimensional Digital Content (DDDC), and a relational database structure. The DDDC architecture is specially designed to facilitate the management, browsing, and retrieval of personal digital photograph collections. In the annotation process, we also utilize a proposed Spatial and Temporal Ontology (STO) designed around the general characteristics of personal photograph collections. This paper describes the PARIS system.

Zero-knowledge-like Proof of Cryptanalysis of Bluetooth Encryption

This paper presents a protocol aiming at proving that an encryption system contains structural weaknesses without disclosing any information on those weaknesses. A verifier can check in polynomial time that a given property of the cipher system output has effectively been realized. This property has been chosen by the prover in such a way that it cannot be achieved by known attacks or exhaustive search, but only if the prover indeed knows some undisclosed weaknesses that may effectively endanger the security of the cryptosystem. We call this protocol a zero-knowledge-like proof of cryptanalysis. In this paper, we apply the protocol to the Bluetooth core encryption algorithm E0, used in many mobile environments, and thus suggest that its security can seriously be called into question.

Off-Line Handwritten Thai Character Recognition Using the Ant-Miner Algorithm

Many approaches to handwritten Thai character recognition have been proposed, such as comparing the heads of characters, fuzzy logic, and structure trees. This paper presents a handwritten Thai character recognition system based on the Ant-Miner algorithm (data mining based on ant colony optimization). Zoning is first applied to each character. Then three distinct features (also called attributes) of the character are extracted from each zone: Head zone, End point, and Feature code. All attributes are used to construct the classification rules with the Ant-Miner algorithm in order to classify 112 Thai characters. For this experiment, the Ant-Miner algorithm is adapted with a small change to increase the recognition rate. The experiment yields a 97% recognition rate on the training set (11,200 characters) and an 82.7% recognition rate on unseen test data (22,400 characters).
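
As a hedged illustration of the zoning step, the sketch below splits a binary character image into a grid of zones and computes one simple per-zone attribute (foreground density). The paper's actual attributes (Head zone, End point, and Feature code) are more elaborate; this stands in for the feature-extraction stage only, not for Ant-Miner rule induction.

```python
import numpy as np

def zoning_features(char_img, rows=3, cols=3):
    """Split a binary character image into rows x cols zones and return
    one attribute per zone (here: foreground pixel density)."""
    h, w = char_img.shape
    feats = []
    for r in range(rows):
        for c in range(cols):
            zone = char_img[r * h // rows:(r + 1) * h // rows,
                            c * w // cols:(c + 1) * w // cols]
            # Stand-in for the paper's Head zone / End point / Feature code.
            feats.append(zone.mean())
    return np.array(feats)
```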

3D Simulator of Ocular Motion and Expression

We introduce a new interactive 3D simulator of ocular motion and expressions suitable for: (1) character animation applications in game design, film production, HCI (Human Computer Interface), conversational animated agents, and virtual reality; (2) medical applications (research and education on ophthalmic, neurological, and muscular pathologies); and (3) real-time simulation of unconscious cognitive and emotional responses (for use, e.g., in psychological research). Using state-of-the-art computer animation technology, we have modeled and rigged a physiologically accurate 3D model of the eyes, eyelids, and eyebrow regions, and we have optimized it for use on an interactive and web-deliverable platform. In addition, we have built a prototype device for real-time control of eye motions and expressions, including unconsciously produced expressions, for the applications in (1), (2), and (3) above. The 3D simulator of eye motion and ocular expression is, to our knowledge, the most advanced and realistic available so far for applications in character animation and medical pedagogy.

AnQL: A Query Language for Annotation Documents

This paper presents data annotation models at five levels of granularity (database, relation, column, tuple, and cell) for relational data, addressing the inability of most relational databases to express annotations. These models do not require any structural or schematic changes to the underlying database. They are also flexible, extensible, customizable, database-neutral, and platform-independent. The paper further presents an SQL-like query language, named Annotation Query Language (AnQL), for querying annotation documents. AnQL is simple to understand and exploits the wide existing knowledge and skill set of SQL.
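
To make the five granularity levels concrete, here is a minimal Python sketch of how annotations at different levels can all apply to a single cell. Class and field names are hypothetical; the actual AnQL syntax and annotation document format are defined in the paper.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Annotation:
    text: str
    granularity: str                 # 'database' | 'relation' | 'column' | 'tuple' | 'cell'
    relation: Optional[str] = None
    column: Optional[str] = None
    tuple_id: Optional[int] = None

def annotations_for_cell(anns: List[Annotation], relation, column, tuple_id):
    """Collect every annotation applying to one cell, walking the
    hierarchy from database level down to the cell itself."""
    def applies(a):
        return (a.granularity == 'database'
                or (a.granularity == 'relation' and a.relation == relation)
                or (a.granularity == 'column'
                    and (a.relation, a.column) == (relation, column))
                or (a.granularity == 'tuple'
                    and (a.relation, a.tuple_id) == (relation, tuple_id))
                or (a.granularity == 'cell'
                    and (a.relation, a.column, a.tuple_id)
                        == (relation, column, tuple_id)))
    return [a for a in anns if applies(a)]
```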

Complexity of Component-based Development of Embedded Systems

This paper discusses the complexity of component-based development (CBD) of embedded systems. Although CBD has its merits, it must be augmented with methods to control the complexities that arise from resource constraints, timeliness requirements, and run-time deployment of components in embedded system development. Software component specification, system-level testing, and run-time reliability measurement are some ways to control this complexity.

A Flexible Web Text Mining Architecture

Text mining is an important step of the knowledge discovery process, used to extract hidden information from unstructured or semi-structured data. This is fundamental because much of the information on the Web is semi-structured due to the nested structure of HTML code, much of it is linked, and much of it is redundant. Web text mining supports the whole knowledge mining process in mining, extracting, and integrating useful data, information, and knowledge from Web page contents. In this paper, we present a Web text mining process able to discover knowledge in a distributed and heterogeneous multi-organization environment. The process is based on a flexible architecture and is implemented in four steps that examine web content and extract useful hidden information through mining techniques. Our Web text mining prototype starts from the retrieval of Web job offers; through a text mining process it extracts the information needed for their fast classification, essentially the job location and the required skills.
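
A toy version of the final extraction step might look like the sketch below: strip the HTML of a job offer, tokenize, and match tokens against skill and place lexicons. The lexicons and function names are hypothetical; the paper's four-step process is considerably richer.

```python
import re

SKILLS = {"java", "sql", "xml", "html"}     # hypothetical skill lexicon
PLACES = {"milan", "rome", "turin"}         # hypothetical place gazetteer

def mine_job_offer(html_text):
    """Strip HTML from a job offer and extract skill and place terms."""
    text = re.sub(r"<[^>]+>", " ", html_text).lower()   # drop HTML tags
    tokens = set(re.findall(r"[a-z+#.]+", text))
    return {"skills": sorted(tokens & SKILLS),
            "location": sorted(tokens & PLACES)}
```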

A Distributed Group Mutual Exclusion Algorithm for Soft Real-Time Systems

The group mutual exclusion (GME) problem is an interesting generalization of the mutual exclusion problem. Several solutions to the GME problem have been proposed for message-passing distributed systems. However, none of them is suitable for real-time distributed systems. In this paper, we propose a token-based distributed algorithm for the GME problem in soft real-time distributed systems. The algorithm uses the concepts of a priority queue, a dynamic request set, and process states. It selects the next session type among requests of the same priority level in first-come, first-served order and satisfies the concurrent occupancy property: all n processes are allowed inside their critical sections simultaneously provided they request the same session. A performance analysis and a correctness proof of the algorithm are also included in the paper.
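
The request ordering described above (priority first, FCFS among equal priorities, concurrent occupancy for a common session) can be sketched as follows. This is an illustrative data structure only, assuming a single token holder; it omits the token passing, the dynamic request set, and the real-time machinery of the actual algorithm.

```python
import heapq
import itertools

class GmeRequestQueue:
    """Priority-ordered GME requests: higher priority first, FCFS among
    equal priorities, concurrent occupancy for the granted session."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()        # FCFS tie-breaker

    def request(self, priority, session, pid):
        # heapq pops the smallest tuple, so negate the priority.
        heapq.heappush(self._heap, (-priority, next(self._seq), session, pid))

    def grant_next_session(self):
        """Pop the head request, then admit every queued request for the
        same session type (the concurrent occupancy property)."""
        if not self._heap:
            return None, []
        _, _, session, pid = heapq.heappop(self._heap)
        admitted, deferred = [pid], []
        while self._heap:
            item = heapq.heappop(self._heap)
            if item[2] == session:
                admitted.append(item[3])
            else:
                deferred.append(item)
        for item in deferred:
            heapq.heappush(self._heap, item)
        return session, admitted
```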

Developing the Color Temperature Histogram Method for Improving Content-Based Image Retrieval

This paper proposes a new method for image searching and indexing in databases: the color temperature histogram. It improves the performance of content-based image retrieval by combining color temperature with a histogram. The color temperature histogram is represented by a range of 46 colors, more than either the color histogram or the dominant color temperature method provides. Moreover, our method can separate colors that have the same color temperature, which the dominant color temperature method cannot. The results show that the color temperature histogram retrieves the correct image more often than the dominant color temperature or color histogram methods, and does so in less time, so the color temperature histogram can be used for indexing and searching images.
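
A minimal sketch of building such a histogram: convert pixels to chromaticity, approximate each pixel's correlated color temperature (here with McCamy's formula, an assumption; the paper's mapping to its 46 colors may differ), and accumulate a normalized histogram usable as an index key.

```python
import numpy as np

def color_temperature_histogram(rgb_image, bins=46, t_range=(1667.0, 25000.0)):
    """Per-pixel correlated color temperature, binned into a histogram."""
    rgb = rgb_image.reshape(-1, 3).astype(float) / 255.0
    # Linearized sRGB -> CIE XYZ (gamma handling omitted for brevity).
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = rgb @ M.T
    s = xyz.sum(axis=1) + 1e-12
    x, y = xyz[:, 0] / s, xyz[:, 1] / s
    # McCamy's approximation of correlated color temperature (assumption).
    n = (x - 0.3320) / (0.1858 - y + 1e-12)
    cct = 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33
    hist, _ = np.histogram(np.clip(cct, *t_range), bins=bins, range=t_range)
    return hist / max(hist.sum(), 1)         # normalized index/search key
```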

Determining the Gender of Korean Names for Pronoun Generation

Correctly classifying the gender of names is an important task in Korean-English machine translation. When a sentence is composed of two or more clauses and only one subject is given as a proper noun, the gender of that proper noun must be found for a correct translation of the sentence, because a singular pronoun has a gender in English while it does not in Korean. Thus, in Korean-English machine translation, the gender of a proper noun should be determined. More generally, this task can be expanded to the classification of Korean names in general. This paper proposes a statistical method for this problem. By considering a name as just a sequence of syllables, statistics for each name can be gathered from a collection of names. An evaluation of the proposed method shows an improvement in accuracy over a simple lookup of the collection: while the accuracy of the lookup method is 64.11%, that of the proposed method is 81.49%. This implies that the proposed method is better suited for gender classification of Korean names.
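
One plausible instantiation of the syllable-sequence statistic is a smoothed naive-Bayes model over the syllables of a name, sketched below; the paper's exact statistical formulation may differ.

```python
from collections import Counter

def train(names_with_gender):
    """Count per-syllable occurrences for each gender; a Korean name is
    treated as a sequence of hangul syllable characters."""
    counts = {'M': Counter(), 'F': Counter()}
    priors = Counter()
    for name, gender in names_with_gender:
        priors[gender] += 1
        counts[gender].update(name)
    return counts, priors

def classify(name, counts, priors, alpha=1.0):
    """Smoothed naive-Bayes decision over the syllables of the name."""
    vocab = len(set(counts['M']) | set(counts['F']))
    def score(g):
        total = sum(counts[g].values())
        s = priors[g] / sum(priors.values())   # class prior
        for syl in name:
            s *= (counts[g][syl] + alpha) / (total + alpha * vocab)
        return s
    return max(('M', 'F'), key=score)
```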

Segmentation of Images through Clustering to Extract Color Features: An Application for Image Retrieval

This paper deals with extracting color features for content-based image retrieval from natural images stored in an image database, by segmenting the images through clustering. We employ a class of non-parametric techniques in which the data points are regarded as samples from an unknown probability density. Explicit computation of the density is avoided by using the mean shift procedure, a robust clustering technique that requires no prior knowledge of the number of clusters and does not constrain their shape. A non-parametric technique for the recovery of significant image features is presented, and a segmentation module based on the mean shift algorithm is developed to segment each image. In these algorithms, the only user-set parameter is the resolution of the analysis, and either gray-level or color images are accepted as input. Extensive experimental results illustrate excellent performance.
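
For reference, the core mean shift iteration the segmentation module relies on can be sketched in a few lines: each point is repeatedly moved to the kernel-weighted mean of its neighborhood, and points converging to the same mode form one cluster. The Gaussian kernel and the flat feature-vector input are assumptions of this sketch.

```python
import numpy as np

def mean_shift(points, bandwidth, n_iter=50, tol=1e-3):
    """Move each point to the Gaussian-weighted mean of its neighborhood;
    for segmentation, 'points' are per-pixel feature vectors such as
    (row, col, L, u, v). Points converging to one mode form one segment."""
    modes = points.astype(float).copy()
    for _ in range(n_iter):
        shifted = np.empty_like(modes)
        for i, p in enumerate(modes):
            d2 = ((points - p) ** 2).sum(axis=1)
            w = np.exp(-d2 / (2.0 * bandwidth ** 2))      # Gaussian kernel
            shifted[i] = (w[:, None] * points).sum(axis=0) / w.sum()
        converged = np.abs(shifted - modes).max() < tol
        modes = shifted
        if converged:
            break
    return modes     # merge nearby modes to obtain the final segment labels
```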

Effective Keyword and Similarity Thresholds for the Discovery of Themes from the User Web Access Patterns

Clustering techniques have been used by many intelligent software agents to group similar access patterns of Web users into high-level themes that express users' intentions and interests. However, such techniques have mostly focused on one salient feature of the Web documents visited by the user, namely the extracted keywords, and their major aim has been to find an optimal threshold for the number of keywords needed to produce more focused themes. In this paper we consider both keyword and similarity thresholds to generate more concentrated themes, and hence build a sounder model of user behavior. The purpose of this paper is twofold: to use distance-based clustering methods to recognize overall themes from the proxy log file, and to suggest efficient cut-off levels for the keyword and similarity thresholds that tend to produce clusters with better focus and more suitable size.
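
A minimal sketch of the two thresholds in action: keep only the top-k keywords per visited page (the keyword threshold), then group pages by single-link clustering whenever their similarity reaches the similarity threshold. The Jaccard measure and the single-link rule here are assumptions; the paper's distance-based method may differ.

```python
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_pages(keyword_weights, k, sim_threshold, similarity=jaccard):
    """keyword_weights: one dict of keyword -> weight per visited page.
    Keep the top-k keywords per page, then single-link cluster pages
    whose pairwise similarity reaches the similarity threshold."""
    trimmed = [set(sorted(d, key=d.get, reverse=True)[:k])
               for d in keyword_weights]
    clusters = []
    for i, kw in enumerate(trimmed):
        for cluster in clusters:
            if any(similarity(kw, trimmed[j]) >= sim_threshold for j in cluster):
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters      # each cluster of pages approximates one theme
```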

On the Move to Semantic Web Services

Semantic Web services will enable the semi-automatic and automatic annotation, advertisement, discovery, selection, composition, and execution of inter-organization business logic, making the Internet a common global platform where organizations and individuals communicate with each other to carry out various commercial activities and to provide value-added services. There is a growing consensus that Web services alone will not be sufficient to develop valuable solutions, due to the degree of heterogeneity, autonomy, and distribution of the Web. This paper deals with two of the hottest R&D and technology areas currently associated with the Web: Web services and the Semantic Web. It presents the synergies that can be created between Web service and Semantic Web technologies to provide a new generation of e-services.

A Perceptually Optimized Foveation-Based Wavelet Embedded Zerotree Image Coding

In this paper, we propose a Perceptually Optimized Foveation-based Embedded ZeroTree Image Coder (POEFIC) that applies perceptual weighting to the wavelet coefficients before SPIHT encoding, in order to reach a target bit rate with improved perceptual quality for a given bit rate and a fixation point that determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS); this metric plays an important role in our POEFIC quality assessment. Our coder is based on a vision model that incorporates various masking effects of HVS perception: it weights the wavelet coefficients according to that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed from (1) foveation masking, to remove or reduce high frequencies in peripheral regions; (2) luminance and contrast masking; and (3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique. Experimental results show that our coder achieves very good performance in terms of quality measurement.
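
As a hedged illustration of the foveation-masking component only, the sketch below computes a per-position weight from retinal eccentricity using the standard cutoff-frequency model; the constants and the exact formula used in POEFIC, as well as the luminance/contrast-masking and CSF factors, may differ.

```python
import numpy as np

def foveation_weights(shape, fixation, view_dist=3.0,
                      alpha=0.106, e2=2.3, ct0=1.0 / 64):
    """Per-position weights from the cutoff-frequency model
    fc(e) = e2 * ln(1/CT0) / (alpha * (e + e2)), where e is retinal
    eccentricity in degrees; view_dist is in image widths."""
    rows, cols = shape
    yy, xx = np.mgrid[0:rows, 0:cols]
    d = np.hypot(xx - fixation[1], yy - fixation[0])      # pixels from fixation
    ecc = np.degrees(np.arctan(d / (view_dist * cols)))   # eccentricity (deg)
    fc = e2 * np.log(1.0 / ct0) / (alpha * (ecc + e2))
    return fc / fc.max()    # multiply subband coefficients by these weights
```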

Powerful Tool to Expand Business Intelligence: Text Mining

With the extensive inclusion of documents, especially text, in business systems, data mining alone does not cover the full scope of business intelligence: it cannot extract useful details from large collections of unstructured and semi-structured written materials in natural language. The most pressing issue is therefore to draw potential business intelligence from text. To gain competitive advantages for the business, it is necessary to develop a new, powerful tool, text mining, to expand the scope of business intelligence. In this paper, we work out the strong points of text mining in extracting business intelligence from the huge amount of textual information sources within business systems. We apply text mining to each stage of a business intelligence system to show that text mining is a powerful tool for expanding the scope of BI. After reviewing basic definitions and some related technologies, we discuss their relationship to, and benefits for, text mining. Some examples and applications of text mining are also given. The motivation is to develop a new approach to effective and efficient textual information analysis, and thus expand the scope of business intelligence using the powerful tool of text mining.

Matrix-Based Synthesis of EXOR-Dominated Combinational Logic for Low Power

This paper discusses a new, systematic approach to the synthesis of an NP-hard class of non-regenerative Boolean networks, described by FON[FOFF]={mi}[{Mi}], where for every mj[Mj]∈{mi}[{Mi}] there exists another mk[Mk]∈{mi}[{Mi}] such that their Hamming distance HD(mj, mk)=HD(Mj, Mk)=O(n), where n represents the number of distinct primary inputs. The method automatically ensures exact minimization for certain important self-dual functions with 2^(n-1) points in the one-set. The elements meant for grouping are determined from a newly proposed weighted incidence matrix. The binary value corresponding to a candidate pair is then correlated with the proposed binary value matrix to enable direct synthesis. We recommend algebraic factorization as a post-processing step to reduce the literal count. The algorithm can be implemented in any high-level language and achieves the best cost optimization for the class of problems dealt with, irrespective of the number of inputs. For other cases, the method is iterated to reduce the problem to one of O(n-1), O(n-2), ... and then solved. In addition, it leads to optimal results for problems exhibiting a higher degree of adjacency, with a different interpretation of the heuristic, and the results are comparable with other methods. In terms of literal cost at the technology-independent stage, the circuits synthesized using our algorithm achieved net savings over AOI (AND-OR-Invert) logic, AND-EXOR logic (EXOR Sum-of-Products or ESOP forms), and AND-OR-EXOR logic of 45.57%, 41.78%, and 41.78%, respectively, across the various problems. Circuit-level simulations were performed for a wide variety of case studies at 3.3V and 2.5V supplies to validate the performance of the proposed method and the quality of the synthesized circuits at two voltage corners. Power estimation was carried out for a 0.35-micron TSMC CMOS process technology. In comparison with AOI logic, the proposed method achieved mean power savings of 42.46%; with respect to AND-EXOR logic, it yielded power savings of 31.88%; and in comparison with AND-OR-EXOR networks, average power savings of 33.23% were obtained.
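
A loose sketch of the grouping idea, for the special case where the paired points sit at maximal Hamming distance (complementary minterms); the weighted incidence matrix and binary value matrix that drive the actual selection are not reproduced here.

```python
from itertools import combinations

def hamming(a, b):
    """Hamming distance between two minterms encoded as integers."""
    return bin(a ^ b).count('1')

def pair_maximal_distance_minterms(minterms, n):
    """Greedily pair minterms at Hamming distance n (complementary points),
    the extreme instance of the HD(mj, mk) = O(n) condition."""
    pairs, used = [], set()
    for a, b in combinations(minterms, 2):
        if a not in used and b not in used and hamming(a, b) == n:
            pairs.append((a, b))
            used.update((a, b))
    leftovers = [m for m in minterms if m not in used]
    return pairs, leftovers    # leftovers feed the iterated O(n-1) pass
```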

Continuous Text Translation Using Text Modeling in the Thetos System

This paper discusses a method of text modeling for Polish. The method transforms continuous input text into a text consisting of sentences in so-called canonical form, characterized, among other things, by a complete structure and the absence of anaphora and ellipses. The transformation is lossless with respect to the content of the transformed text. The modeling method has been developed for the needs of the Thetos system, which translates written Polish texts into Polish sign language. We believe the method can also be used in other natural language applications, e.g. in a text summary generator for Polish.

Performance Modeling for Web-Based J2EE and .NET Applications

When architecting an application, key non-functional requirements such as performance, scalability, availability, and security, which influence the architecture of the system, are sometimes not adequately addressed. Application performance may not be examined until it becomes a concern. There are several problems with this reactive approach: if the system does not meet its performance objectives, the application is unlikely to be accepted by the stakeholders. This paper suggests an approach to performance modeling of web-based J2EE and .NET applications that addresses performance issues early in the development life cycle. It also includes a performance modeling case study, with proof-of-concept (PoC) and implementation details for the .NET and J2EE platforms.
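
Early life-cycle performance models of the kind advocated here are often simple analytic queueing estimates. The sketch below is one such first cut (an assumption, not the paper's model): estimate web-tier utilization and mean response time from arrival rate and service time.

```python
def mean_response_time(arrival_rate, service_time, servers=1):
    """First-cut web-tier estimate: utilization and M/M/1-style response
    time per server; raises once the tier saturates."""
    utilization = arrival_rate * service_time / servers
    if utilization >= 1.0:
        raise ValueError("tier saturated: add servers or reduce load")
    return service_time / (1.0 - utilization)

# e.g. 40 req/s at 20 ms mean service time on one server -> ~0.1 s
print(mean_response_time(40, 0.020))
```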