Representing Shared Join Points with State Charts: A High Level Design Approach

Aspect-Oriented Programming promises many advantages at the programming level by encapsulating crosscutting concerns into separate units called aspects. Join points are a distinguishing feature of Aspect-Oriented Programming, as they define the points where core requirements and crosscutting concerns are interconnected. Composing multiple aspects at the same join point is currently problematic, as it raises issues such as the ordering and control of the superimposed aspects. Dynamic strategies are required to handle these issues as early as possible. State charts are an effective modeling tool for capturing dynamic behavior at the high-level design stage. This paper provides a methodology for formulating strategies for multiple-aspect composition at a high level, which helps these strategies to be implemented better at the coding level. It also highlights the need to design shared join points at a high level by providing solutions to these issues using state chart diagrams in UML 2.0. A high-level design representation of shared join points also helps to implement the designed strategy in a systematic way.
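
As a minimal illustration of the composition problem above (and not the paper's state-chart approach), the following Python sketch models aspects as decorators superimposed on one shared join point; the ordering strategy is expressed explicitly as a list, and all names (logging_aspect, auth_aspect, transfer) are hypothetical.

```python
import functools

def logging_aspect(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print(f"[log] entering {fn.__name__}")
        result = fn(*args, **kwargs)
        print(f"[log] leaving {fn.__name__}")
        return result
    return wrapper

def auth_aspect(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print("[auth] checking credentials")
        return fn(*args, **kwargs)
    return wrapper

# The list order *is* the composition strategy: aspects woven later end up
# outermost, so here logging wraps auth, which wraps the core operation.
ASPECT_ORDER = [auth_aspect, logging_aspect]

def weave(fn, aspects):
    for aspect in aspects:
        fn = aspect(fn)
    return fn

def transfer(amount):           # the shared join point's core behavior
    print(f"transferring {amount}")

transfer = weave(transfer, ASPECT_ORDER)
transfer(100)   # [log] entering -> [auth] -> transferring -> [log] leaving
```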

Tree-on-DAG for Data Aggregation in Sensor Networks

Computing and maintaining network structures for efficient data aggregation incurs high overhead for dynamic events, where the set of nodes sensing an event changes with time. Moreover, structured approaches are sensitive to the waiting time that nodes use to wait for packets from their children before forwarding a packet to the sink. This paper proposes an efficient routing and data aggregation scheme for wireless sensor networks: Tree on DAG (ToD), a semistructured approach that uses dynamic forwarding on an implicitly constructed structure composed of multiple shortest-path trees to support network scalability. The key principle behind ToD is that adjacent nodes in the graph will have low stretch in at least one of these trees, resulting in early aggregation of packets. Based on simulations of a 2,000-node Mica2-based network, we conclude that our semistructured approach achieves efficient aggregation in large-scale networks.
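
The following is a hedged, much-simplified sketch of the ToD idea described above: given several shortest-path trees toward the sink (here, two BFS trees that differ only in neighbor-visit order stand in for ToD's implicit structure), packets from nearby sources are forwarded on the tree in which their paths merge earliest, enabling early aggregation. The graphs and node choices are illustrative.

```python
from collections import deque

def bfs_parents(adj, root):
    """Parent pointers of a BFS (shortest-path) tree rooted at `root`."""
    parent = {root: None}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    return parent

def path_to_root(parent, v):
    path = [v]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return path

def hops_until_merge(parent, a, b):
    """Combined hops both sources travel before their paths first meet."""
    pa, pb = path_to_root(parent, a), path_to_root(parent, b)
    on_b = set(pb)
    merge = next(x for x in pa if x in on_b)
    return pa.index(merge) + pb.index(merge)

# Tiny example: a 6-node graph, sink = 0, two BFS trees obtained by
# visiting neighbors in different orders (stand-ins for ToD's trees).
adj1 = {0: [1, 2], 1: [0, 3, 4], 2: [0, 4, 5], 3: [1], 4: [1, 2], 5: [2]}
adj2 = {0: [2, 1], 2: [0, 5, 4], 1: [0, 4, 3], 3: [1], 4: [2, 1], 5: [2]}
trees = [bfs_parents(adj1, 0), bfs_parents(adj2, 0)]

sources = (3, 4)  # two adjacent nodes sensing the same event
best = min(trees, key=lambda t: hops_until_merge(t, *sources))
print("packets from", sources, "merge after",
      hops_until_merge(best, *sources), "hops on the chosen tree")
```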

Adaptive Kernel Principal Component Analysis for Online Feature Extraction

Their batch nature limits standard kernel principal component analysis (KPCA) methods in numerous applications, especially those involving dynamic or large-scale data. In this paper, an efficient adaptive approach is presented for the online extraction of kernel principal components (KPC). The contribution of this paper is twofold. First, the kernel covariance matrix is updated to adapt to the changing characteristics of the data. Second, the KPC are recursively formulated to overcome the batch nature of standard KPCA. This formulation is derived from the recursive eigen-decomposition of the kernel covariance matrix and captures the KPC variation caused by new data. The proposed method not only alleviates the sub-optimality of KPCA for non-stationary data, but also maintains constant update speed and memory usage as the data size increases. Experiments on simulated data and real applications demonstrate that our approach yields improvements in both computational speed and approximation accuracy.
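
As a rough sketch of online KPC extraction (assumptions: an RBF kernel, a sliding window, and full re-eigendecomposition in place of the paper's recursive update, which keeps the sketch short at the cost of the paper's constant per-step complexity):

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def centered_gram(X, gamma=0.5):
    n = len(X)
    K = np.array([[rbf(X[i], X[j], gamma) for j in range(n)] for i in range(n)])
    one = np.ones((n, n)) / n
    return K - one @ K - K @ one + one @ K @ one   # center in feature space

rng = np.random.default_rng(0)
stream = rng.normal(size=(50, 3))          # arriving samples
window = list(stream[:10])                 # warm start

for x_new in stream[10:]:
    window.append(x_new)
    if len(window) > 30:                   # bound memory, as in online use
        window.pop(0)
    K = centered_gram(np.array(window))
    eigvals, eigvecs = np.linalg.eigh(K)   # KPC directions in feature space
    top = eigvals[::-1][:3]                # leading kernel principal components

print("leading eigenvalues after last update:", np.round(top, 3))
```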

A CTL Specification of Serializability for Transactions Accessing Uniform Data

Existing work in temporal logic on representing the execution of infinitely many transactions uses linear-time temporal logic (LTL) and models only two-step transactions. In this paper, we use the comparatively efficient branching-time computation tree logic (CTL) and extend the transaction model to a class of multistep transactions by introducing distinguished propositional variables that represent the read and write steps of n multistep transactions accessing m data items infinitely many times. We prove that the well-known correspondence between the acyclicity of conflict graphs and serializability for finite schedules extends to infinite schedules. Furthermore, in the case of transactions accessing the same set of data items in (possibly) different orders, serializability corresponds to the absence of cycles of length two. This result is used to give an efficient encoding of the serializability condition into CTL.
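
For the finite-schedule side of this correspondence, a small sketch: build the conflict graph of a schedule and test for cycles; as the abstract notes, for transactions accessing the same data items a violation already shows up as a cycle of length two. The schedule below is made up.

```python
from itertools import combinations

# A schedule as (transaction, action, item) steps; names are illustrative.
schedule = [
    ("T1", "r", "x"), ("T2", "r", "x"),
    ("T2", "w", "x"), ("T1", "w", "y"),
    ("T2", "r", "y"),
]

def conflict_edges(schedule):
    edges = set()
    for (i, (t1, a1, d1)), (j, (t2, a2, d2)) in combinations(enumerate(schedule), 2):
        if t1 != t2 and d1 == d2 and "w" in (a1, a2):
            edges.add((t1, t2))  # earlier conflicting step -> later step
    return edges

def has_cycle(edges):
    # In the uniform-data case it suffices to look for cycles of length two.
    return any((b, a) in edges for (a, b) in edges)

edges = conflict_edges(schedule)
print("conflict edges:", sorted(edges))
print("serializable:", not has_cycle(edges))
```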

AGENTMAP: A Conceptual Meta-Model of Interacting Simulations

A straightforward and intuitive combination of single simulations into an aggregated master simulation is not trivial: many problems trigger specific difficulties during the modeling and execution of such a simulation. In this paper we identify these problems and aim to solve them by mapping the task onto the field of multi-agent systems. The solution is a new meta-model named AGENTMAP, which mitigates most of the problems while supporting intuitive modeling at the same time. This meta-model is introduced and explained on the basis of an example from the e-commerce domain.

Effective Keyword and Similarity Thresholds for the Discovery of Themes from User Web Access Patterns

Clustering techniques have been used by many intelligent software agents to group similar access patterns of Web users into high-level themes that express users' intentions and interests. However, such techniques have mostly focused on one salient feature of the Web documents visited by the user, namely the extracted keywords. The major aim of these techniques is to arrive at an optimal threshold for the number of keywords needed to produce more focused themes. In this paper we consider both keyword and similarity thresholds to generate more concentrated themes, and hence build a sounder model of user behavior. The purpose of this paper is twofold: to use distance-based clustering methods to recognize overall themes from the proxy log file, and to suggest efficient cutoff levels for the keyword and similarity thresholds that tend to produce clusters with better focus and more efficient size.
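
A minimal sketch of how the two thresholds interact (purely illustrative data; `k` is the keyword cutoff, `sim_cut` the similarity cutoff, and the single-pass grouping below is a stand-in for the paper's distance-based clustering):

```python
from collections import Counter

sessions = {
    "u1": "python tutorial pandas dataframe tutorial",
    "u2": "pandas dataframe merge python",
    "u3": "football scores league fixtures",
}

def top_keywords(text, k=3):
    return set(w for w, _ in Counter(text.split()).most_common(k))

def jaccard(a, b):
    return len(a & b) / len(a | b)

def cluster(sessions, k=3, sim_cut=0.2):
    themes = []  # each theme: (keyword set, member list)
    for user, text in sessions.items():
        kw = top_keywords(text, k)
        for theme_kw, members in themes:
            if jaccard(kw, theme_kw) >= sim_cut:
                theme_kw |= kw          # absorb the pattern into the theme
                members.append(user)
                break
        else:
            themes.append((set(kw), [user]))
    return themes

for kw, members in cluster(sessions):
    print(sorted(members), "->", sorted(kw))
```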

Method for Concept Labeling Based on Mapping between Ontology and Thesaurus

When designing information systems that deal with large amounts of domain knowledge, system designers need to consider the ambiguities of labeling terms in the domain vocabulary used for navigating users through the information space. The goal of this study is to develop a methodology that helps system designers label navigation items while taking into account ambiguities stemming from synonyms or polysemes among labeling terms. In this paper, we propose a method for concept labeling based on mappings between a domain ontology and a thesaurus, and report the results of an empirical evaluation.

Highly Scalable, Reversible and Embedded Image Compression System

A new method for low-complexity image coding is presented that permits different settings and great scalability in the generation of the final bit stream. It is a continuous-tone still image compression system that combines lossy and lossless compression by making use of finite-arithmetic reversible transforms: both the color-space transform and the wavelet transform are reversible. The transformed coefficients are coded by a coding system based on a subdivision into smaller components (CFDS), similar to bit-importance codification. The subcomponents thus obtained are reordered by a highly configurable, application-dependent alignment system, which makes it possible to reconfigure the elements of the image and obtain different levels of importance from which the bit stream is generated. The subcomponents of each importance level are coded with a variable-length entropy coding system (VBLm) that permits the generation of an embedded bit stream. This bit stream itself constitutes a coded, compressed still image. Moreover, applying a packing system to the bit stream after the VBLm stage yields a final, highly scalable bit stream consisting of a basic image level and one or several enhancement levels.
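
The paper's specific transforms are not reproduced here; as a concrete example of a finite-arithmetic reversible color transform of the kind described, the sketch below implements the well-known RCT used for lossless JPEG 2000, which is integer-only and bit-exactly invertible.

```python
def rct_forward(r, g, b):
    y = (r + 2 * g + b) >> 2   # floor division keeps everything integral
    u = b - g
    v = r - g
    return y, u, v

def rct_inverse(y, u, v):
    g = y - ((u + v) >> 2)     # recovers g exactly, since r+2g+b = 4g+u+v
    b = u + g
    r = v + g
    return r, g, b

pixel = (200, 120, 37)
assert rct_inverse(*rct_forward(*pixel)) == pixel  # bit-exact round trip
print(pixel, "->", rct_forward(*pixel), "-> recovered exactly")
```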

Discovery of Production Rules with Fuzzy Hierarchy

In this paper a novel algorithm is proposed that integrates fuzzy hierarchy generation and rule discovery for the automated discovery of Production Rules with Fuzzy Hierarchy (PRFH) in large databases. A frequency matrix (Freq) is introduced to summarize the large database; it helps to minimize the number of database accesses and to identify and remove irrelevant attribute values and weak classes during fuzzy hierarchy generation. Experimental results have established the effectiveness of the proposed algorithm.
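
A small illustrative sketch of a frequency matrix in the spirit described (the data, attribute names, and support threshold are hypothetical): one scan tallies (attribute value, class) counts, after which weak values can be pruned without further database access.

```python
from collections import defaultdict

records = [
    ({"color": "red", "size": "big"}, "A"),
    ({"color": "red", "size": "small"}, "A"),
    ({"color": "blue", "size": "big"}, "B"),
    ({"color": "red", "size": "big"}, "A"),
]

freq = defaultdict(lambda: defaultdict(int))  # freq[(attr, value)][class]
for attrs, label in records:                  # single database scan
    for attr, value in attrs.items():
        freq[(attr, value)][label] += 1

min_support = 2
kept = {key: dict(cls) for key, cls in freq.items()
        if sum(cls.values()) >= min_support}  # drop irrelevant values
print(kept)
```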

2D Bar Code Reading: Solutions for Camera Phones

Two-dimensional (2D) bar codes were designed to carry significantly more data, with higher information density and robustness, than their 1D counterparts. Given the popular combination of cameras and mobile phones, using the camera phone for 2D bar code reading naturally brings great commercial value. This paper addresses the problem of designing 2D bar codes specifically for mobile phones and introduces a low-level encoding method for matrix codes. We also propose an efficient scheme for decoding 2D bar codes, with the effort focused on overcoming the difficulties introduced by the low image quality that is very common in bar code images taken by a phone camera.

Application of Genetic Algorithms to Feature Subset Selection in a Farsi OCR

Dealing with hundreds of features in character recognition systems is not unusual, and this large number of features increases the computational workload of the recognition process. Many methods try to remove unnecessary or redundant features and reduce feature dimensionality. Moreover, because of the characteristics of Farsi script, it is not possible to apply algorithms developed for other languages to Farsi directly. In this paper, several methods for feature subset selection using genetic algorithms are applied to a Farsi optical character recognition (OCR) system. Experimental results show that applying genetic algorithms (GA) to feature subset selection in a Farsi OCR yields lower computational complexity and an enhanced recognition rate.
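
The sketch below shows generic GA-based feature subset selection, not the paper's Farsi OCR pipeline: chromosomes are binary feature masks and fitness is leave-one-out 1-NN accuracy on synthetic data, with a small per-feature penalty. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 60, 10
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # only features 0 and 3 matter

def fitness(mask):
    if not mask.any():
        return 0.0
    Z = X[:, mask.astype(bool)]
    correct = 0
    for i in range(n):                     # leave-one-out 1-NN accuracy
        dist = np.linalg.norm(Z - Z[i], axis=1)
        dist[i] = np.inf
        correct += y[np.argmin(dist)] == y[i]
    return correct / n - 0.01 * mask.sum()  # small penalty per feature

pop = rng.integers(0, 2, size=(20, d))
for gen in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]       # truncation selection
    cut = rng.integers(1, d, size=10)
    children = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 10][c:]])
                         for i, c in enumerate(cut)])  # one-point crossover
    flip = rng.random(children.shape) < 0.05           # bit-flip mutation
    children = np.where(flip, 1 - children, children)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```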

Study of Measures to Secure Video Phone Service Safety through a Preliminary Evaluation of the Information Security of the New IT Service

The rapid advance of communication technology is evolving the network environment into the broadband convergence network, and the IT services operated on individual networks are likewise quickly being converged in this environment. VoIP and IPTV are two examples of such new services, and efforts are being made to develop the video phone service, an advanced form of the voice-oriented VoIP service. However, the new IT services will suffer from stability and reliability vulnerabilities if the relevant security issues are not resolved while the existing IT services, currently operated on individual networks, are converged within the wider broadband network environment. To address such problems, this paper analyzes the possible threats and identifies the necessary security measures before the deployment of the new IT services. Furthermore, it measures the quality impact of applying candidate encryption algorithms in order to identify an appropriate algorithm, thereby presenting security technology that has no negative impact on the quality of the video phone service.

Image Magnification Using Adaptive Interpolation by Pixel Level Data-Dependent Geometrical Shapes

With the world now in the 21st century, computer graphics and digital camera technology are prevalent, and high-resolution displays and printers are widely available; high-resolution images are therefore needed to produce high-quality displayed images and high-quality prints. However, since high-resolution images are not usually available, the original images must be magnified. One common difficulty in previous magnification techniques is preserving details, i.e., edges, while at the same time smoothing the data so as not to introduce spurious artefacts; a definitive solution to this is still an open issue. In this paper, an image magnification method using adaptive interpolation by pixel-level data-dependent geometrical shapes is proposed that takes into account information about the edges (sharp luminance variations) and the smoothness of the image. It calculates a threshold, classifies the interpolation regions in the form of geometrical shapes, and then assigns suitable values to the undefined pixels inside each interpolation region while preserving both the sharp luminance variations and the smoothness. The results of the proposed technique have been compared qualitatively and quantitatively with five other techniques. The qualitative results show that the proposed method clearly outperforms nearest neighbour (NN), bilinear (BL), and bicubic (BC) interpolation, while the quantitative results are competitive and consistent with NN, BL, BC, and the others.
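
As a much-simplified, hedged sketch of data-dependent interpolation (not the paper's full geometrical-shape classification): when filling the pixel at the center of four known neighbors, the two diagonal differences are compared against a threshold, and interpolation proceeds along the smoother diagonal so that edges are not blurred across.

```python
import numpy as np

def magnify2x_edge_aware(img, threshold=10):
    h, w = img.shape
    out = np.zeros((2 * h - 1, 2 * w - 1), dtype=float)
    out[::2, ::2] = img                         # known pixels
    for i in range(1, out.shape[0], 2):         # centers of 2x2 cells
        for j in range(1, out.shape[1], 2):
            a, b = out[i - 1, j - 1], out[i - 1, j + 1]
            c, d = out[i + 1, j - 1], out[i + 1, j + 1]
            d1, d2 = abs(a - d), abs(b - c)     # the two diagonals
            if abs(d1 - d2) <= threshold:       # smooth region: use all four
                out[i, j] = (a + b + c + d) / 4
            elif d1 < d2:                       # edge along the a-d diagonal
                out[i, j] = (a + d) / 2
            else:                               # edge along the b-c diagonal
                out[i, j] = (b + c) / 2
    # remaining in-between pixels averaged from known row/column neighbors
    out[::2, 1::2] = (out[::2, :-1:2] + out[::2, 2::2]) / 2
    out[1::2, ::2] = (out[:-1:2, ::2] + out[2::2, ::2]) / 2
    return out

img = np.array([[0, 0, 255], [0, 0, 255], [0, 0, 255]], dtype=float)
print(magnify2x_edge_aware(img).astype(int))
```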

A Hybrid Feature Selection by Resampling, Chi-Squared and Consistency Evaluation Techniques

In this paper a combined feature selection method is proposed that takes advantage of sample-domain filtering, resampling, and feature subset evaluation methods to reduce the dimensionality of huge datasets and select reliable features. The method exploits both the feature space and the sample domain to improve the feature selection process, and uses a combination of Chi-squared and Consistency attribute evaluation methods to seek reliable features. It consists of two phases: the first phase filters and resamples the sample domain, and the second phase adopts a hybrid procedure that finds the optimal feature space by applying Chi-squared and Consistency subset evaluation methods together with genetic search. Experiments on datasets of various sizes from the UCI Repository of Machine Learning Databases show that the performance of five classifiers (Naïve Bayes, Logistic, Multilayer Perceptron, Best-First Decision Tree, and JRIP) improves simultaneously and that the classification error of these classifiers decreases considerably. The experiments also show that this method outperforms other feature selection methods.
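
A toy sketch of the two evaluation measures named above, on made-up categorical data: a chi-squared score ranks individual features, and a consistency measure scores a candidate subset as a whole (1.0 meaning the subset fully determines the class).

```python
from collections import Counter, defaultdict

X = [("red", "big"), ("red", "small"), ("blue", "big"), ("blue", "small")]
y = ["A", "A", "B", "B"]

def chi2_score(col, y):
    n = len(y)
    obs = Counter(zip(col, y))
    fv, cv = Counter(col), Counter(y)
    return sum((obs[(f, c)] - fv[f] * cv[c] / n) ** 2 / (fv[f] * cv[c] / n)
               for f in fv for c in cv)

def consistency(cols, y):
    groups = defaultdict(Counter)
    for pattern, label in zip(zip(*cols), y):
        groups[pattern][label] += 1
    majority = sum(max(cnt.values()) for cnt in groups.values())
    return majority / len(y)   # 1.0 means the subset separates the classes

col0 = [row[0] for row in X]
col1 = [row[1] for row in X]
print("chi2(color) =", chi2_score(col0, y), " chi2(size) =", chi2_score(col1, y))
print("consistency of {color}:", consistency([col0], y))
```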

Parametric Modeling Approach for Call Holding Times for IP-Based Public Safety Networks via the EM Algorithm

This paper presents parametric probability density models for call holding times (CHTs) in an emergency call center, based on actual data collected for over a week in the public Emergency Information Network (EIN) in Mongolia. When a set of candidate distributions from the Gamma family is fitted to the call holding time data, the empirical CHT histogram is underestimated over its whole range, owing to spikes of higher probability and long tails of lower probability in the histogram. We therefore provide a parametric model based on a mixture of lognormal distributions, with explicit analytical expressions, for modeling the CHTs of public safety networks (PSNs). Finally, we show that the CHTs of PSNs are fitted reasonably well by a mixture of lognormal distributions estimated via the expectation-maximization (EM) algorithm. This result is significant because it provides a useful mathematical tool in the form of an explicit lognormal-mixture expression.
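
A sketch of the fitting step under stated assumptions: a mixture of lognormals over holding times t is a Gaussian mixture over log t, so a short EM on the log-durations suffices. Synthetic data stands in for the EIN call records, and K = 2 components and all initial values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.concatenate([rng.lognormal(3.0, 0.4, 700),    # short routine calls
                    rng.lognormal(5.0, 0.6, 300)])   # long-tail calls
x = np.log(t)

# init: weights w, means mu, stds sigma for K = 2 components
w, mu, sigma = np.array([0.5, 0.5]), np.array([2.0, 6.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: responsibilities of each component for each log-duration
    pdf = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    r = w * pdf
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate the mixture parameters
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print("weights:", np.round(w, 2), "log-means:", np.round(mu, 2),
      "log-stds:", np.round(sigma, 2))
```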

Target Concept Selection by Property Overlap in Ontology Population

Ontologies are widely used in many kinds of applications as a knowledge representation tool for domain knowledge. However, even when an ontology schema is well prepared by domain experts, adding instances to the ontology is tedious and cost-intensive. The most reliable and trustworthy way to add instances to an ontology is to gather them from tables in related Web pages. In automatic instance population, the primary task is to find the most appropriate concept for a given table among all candidate concepts in the ontology. This paper proposes a novel method for this problem that defines the similarity between a table and a concept as the overlap of their properties. In a series of experiments, the proposed method achieves an accuracy of 76.98%, which implies that it is a plausible approach to automatic ontology population from Web tables.
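
A minimal sketch of the selection step (the ontology, the table headers, and the use of Jaccard overlap as the similarity are all illustrative assumptions): score each concept by the overlap between its properties and the table's column headers, then pick the best-scoring concept.

```python
ontology = {
    "Person":  {"name", "birthdate", "nationality"},
    "Company": {"name", "founded", "headquarters", "ceo"},
    "Movie":   {"title", "director", "release_date"},
}

table_headers = {"name", "founded", "ceo"}

def overlap(headers, props):
    return len(headers & props) / len(headers | props)   # Jaccard similarity

best = max(ontology, key=lambda c: overlap(table_headers, ontology[c]))
scores = {c: round(overlap(table_headers, p), 2) for c, p in ontology.items()}
print(scores, "->", best)   # Company wins on shared properties
```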

An Adaptive Fuzzy Clustering Approach for the Network Management

Chiu's method, which generates a Takagi-Sugeno fuzzy inference system (FIS), is a fuzzy rule extraction method in which each rule's output is a linear function of the inputs; in addition, such rules are not explicit to the expert. In this paper, we develop a method that generates a Mamdani FIS, in which the rule outputs are fuzzy. The method proceeds in two steps: first, it uses the subtractive clustering principle to estimate both the number of clusters and the initial locations of the cluster centers, with each resulting cluster corresponding to one Mamdani fuzzy rule; then, it optimizes the fuzzy model parameters by applying a genetic algorithm. The method is illustrated on a traffic network management application. We also suggest a Mamdani fuzzy rule generation method for the case where the expert wants to classify the output variables into predefined fuzzy classes.
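
A sketch of the first step, Chiu-style subtractive clustering, under illustrative radii and stopping ratio: each point's potential measures local density, the highest-potential point becomes a center (seeding one Mamdani rule), and the potential near it is suppressed before the next pick.

```python
import numpy as np

def subtractive_clustering(X, ra=1.0, stop_ratio=0.15):
    rb = 1.5 * ra
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    potential = np.exp(-4 * d2 / ra**2).sum(axis=1)   # density-like potential
    centers, p_first = [], potential.max()
    while potential.max() > stop_ratio * p_first:
        c = potential.argmax()
        centers.append(X[c])
        # suppress potential around the chosen center
        potential -= potential[c] * np.exp(-4 * ((X - X[c]) ** 2).sum(1) / rb**2)
    return np.array(centers)

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (40, 2)), rng.normal(3, 0.3, (40, 2))])
centers = subtractive_clustering(X)
print(len(centers), "rule centers found:\n", np.round(centers, 2))
```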

A Design-Based Cohesion Metric for Object-Oriented Classes

Class cohesion is an important object-oriented software quality attribute: it indicates how closely the members of a class are related. Assessing class cohesion, and improving class quality accordingly, during the object-oriented design phase allows for cheaper management of the later phases. In this paper, the notion of distance between pairs of methods and pairs of attribute types in a class is introduced and used as the basis for a novel class cohesion metric. The metric considers method-method, attribute-attribute, and attribute-method direct interactions. It is shown that the metric gives more sensitive values than other well-known design-based class cohesion metrics.
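
The paper's exact metric is not reproduced here; the sketch below illustrates the general distance-based idea for one interaction type (method-method): each method is represented by the attributes it uses, 1 − Jaccard serves as the pairwise distance, and cohesion is one minus the average distance. The class and its members are hypothetical.

```python
from itertools import combinations

# Attribute usage of a hypothetical class's methods.
uses = {
    "deposit":   {"balance", "history"},
    "withdraw":  {"balance", "history"},
    "statement": {"history", "owner"},
}

def distance(a, b):
    return 1 - len(a & b) / len(a | b)   # 0 if identical usage, 1 if disjoint

pairs = list(combinations(uses.values(), 2))
cohesion = 1 - sum(distance(a, b) for a, b in pairs) / len(pairs)
print(f"class cohesion = {cohesion:.2f}")   # 1.0 would be perfectly cohesive
```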

A Decision Boundary based Discretization Technique using Resampling

Many supervised induction algorithms require discrete data, even though real data often comes in both discrete and continuous formats. Quality discretization of continuous attributes is an important problem affecting the speed, accuracy, and understandability of induction models. Usually, discretization and other statistical processes are applied to subsets of the population, as the entire population is practically inaccessible; for this reason we argue that a discretization performed on a sample is only an estimate of one for the entire population. Most existing discretization methods partition the attribute range into two or more intervals using one or more cut points. In this paper, we introduce a technique that uses resampling (such as the bootstrap) to generate a set of candidate discretization points, improving discretization quality by providing a better estimate with respect to the entire population. The goal of this paper is thus to observe whether resampling can lead to better discretization points, which would open up a new paradigm for the construction of soft decision trees.
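
A hedged sketch of the resampling idea on synthetic two-class data: each bootstrap sample proposes a boundary cut point (here, simply the median of midpoints where the sorted values change class, a stand-in for the paper's decision-boundary criterion), and the candidates are aggregated into a more stable estimate.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(4, 1, 100)])
y = np.array([0] * 100 + [1] * 100)

def boundary_cut(xs, ys):
    order = np.argsort(xs)
    xs, ys = xs[order], ys[order]
    idx = np.flatnonzero(ys[:-1] != ys[1:])      # class changes in sorted order
    mids = (xs[idx] + xs[idx + 1]) / 2
    return np.median(mids)                       # one candidate cut per sample

cuts = []
for _ in range(200):                             # bootstrap resampling
    b = rng.integers(0, len(x), size=len(x))
    cuts.append(boundary_cut(x[b], y[b]))

print("aggregated cut point:", round(float(np.median(cuts)), 3))
```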

Search Engine Module in Voice Recognition Browser to Facilitate the Visually Impaired in Virtual Learning (MGSYS VISI-VL)

Nowadays, web-based technologies influence people's daily lives in areas such as education and business. Many web developers, however, are so eager to make their web applications look impressive with fully animated graphics that they forget about accessibility for their users. This paper therefore highlights the usability and accessibility of a voice recognition browser as a tool to facilitate visually impaired and blind learners in accessing a virtual learning environment. More specifically, the objectives of the study are (i) to explore the challenges faced by visually impaired learners in accessing virtual learning environments and (ii) to determine suitable guidelines for developing a voice recognition browser that is accessible to the visually impaired. The study is based on observations conducted with Malaysian visually impaired learners. Finally, the results of the study inform the development of an accessible voice recognition browser for the visually impaired.