Simulation of Activity Stream inside Energy Social Business Environment using Assemblage Theory and Simplicial Complex Tool

Social, mobility, and information aggregation inside the business environment need to converge to reach the next step of collaboration and to enhance interaction and innovation. The following article is based on the “Assemblage” concept, seen as a framework to formalize new user interfaces and applications. The area of research is the Energy Social Business Environment, especially Energy Smart Grids, which are considered the functional and technical foundations of tomorrow's Energy Sector. The assemblages are modeled by means of mereology and simplicial complexes. The objective is to offer end-users new tools for focusing attention and supporting decision-making.

Using Different Aspects of the Signings for Appearance-based Sign Language Recognition

Sign language is used by deaf and hard-of-hearing people for communication. Automatic sign language recognition is a challenging research area, since sign language is often the only means of communication for deaf people. Sign language consists of different components of visual actions made by the signer using the hands, the face, and the torso to convey meaning. To exploit these different aspects of signs, we combine different groups of features extracted from image frames recorded directly by a stationary camera. We combine the features at two levels, employing three techniques. At the feature level, an early feature combination can be performed by concatenating and weighting different feature groups, or by concatenating feature groups over time and using LDA to choose the most discriminant elements. At the model level, a late fusion of differently trained models can be carried out by a log-linear model combination. In this paper, we investigate these three combination techniques in an automatic sign language recognition system and show that the recognition rate can be significantly improved.
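
As a minimal illustration of the late-fusion step, the sketch below combines the class scores of several independently trained models with a weighted log-linear sum; the function name, score shapes, and weights are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def log_linear_fusion(log_scores, weights):
        # log_scores: one array of per-class log-probabilities per model;
        # weights: one scalar per model, typically tuned on held-out data.
        combined = sum(w * s for w, s in zip(weights, log_scores))
        return int(np.argmax(combined))  # class with the highest fused score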

Design Based Performance Prediction of Component Based Software Products

Component-based software engineering provides an opportunity for better quality and increased productivity in software development through reusable software components [10]. One of the most critical aspects of the quality of a software system is its performance. The systematic application of software performance engineering techniques throughout the development process can help to identify design alternatives that preserve desirable qualities such as extensibility and reusability while meeting performance objectives [1]. In the present scenario, software engineering methodologies focus strongly on the functionality of the system, while applying a “fix-it-later” approach to software performance aspects [3]. As a result, lengthy fine-tuning, expensive extra hardware, or even redesigns are necessary for the system to meet the performance requirements. In this paper, we propose a design-based, implementation-independent performance prediction approach to reduce the overhead associated with the later development phases while building a performance-guaranteed software product with the help of the Unified Modeling Language (UML).

SATA: A Web Based Scheduling Support System

Developing a university course schedule is difficult, due to the limited resources available. The process is made even harder by different faculties or departments having different ways of stating their schedule requirements. The person in charge of turning the schedule requirements into a proper course schedule is not only burdened with the task of allocating the appropriate classes and times to lecturers and students, but also needs to understand the schedule requirements themselves. Therefore, a scheduling support system named SATA was developed to assist ICRESS in the course scheduling process. SATA has been in use for several semesters and the results have been encouraging. It won a bronze medal at the 2008 Invention, Innovation and Design competition (IID-08), and a patent application was submitted in October 2008.

Prototype of a Federative Factory Data Management for the Support of Factory Planning Processes

Due to short product life cycles, an increasing variety of products, and short cycles of leap innovations, manufacturing companies have to increase the flexibility of their factory structures. Flexibility of factory structures is based on defined factory planning processes in which product, process, and resource data of various partial domains have to be considered. Factory planning processes can thus be characterized as iterative, interdisciplinary, and participative processes [1]. To support the interdisciplinary and participative character of planning processes, a federative factory data management (FFDM) is described as a holistic solution. FFDM has already been implemented in the form of a prototype, and this paper presents the interim results of its development. The principle is to extract product, process, and resource data from the documents of the various partial domains and provide them as web services on a server. The described data can then be requested by the factory planner using an FFDM browser.

Decision Maturity Framework: Introducing Maturity in Heuristic Search

Heuristics-based search methodologies normally search a problem space of possible solutions for a “satisfactory” solution, guided by “hints” estimated from problem-specific knowledge. Research communities use different types of such methodologies. Unfortunately, most of the time these hints are immature and can hinder the methodologies through premature convergence, caused by a decrease of diversity in the search space that leads to a total implosion and ultimately to fitness stagnation of the population. In this paper, a novel Decision Maturity Framework (DMF) is introduced as a solution to this problem. The framework improves the decision on the direction of the search by letting hints mature sufficiently before using them. Ideas from this framework are injected into the particle swarm optimization methodology. Results were obtained under both static and dynamic environments, and they show that decision maturity prevents premature convergence to a high degree.
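
The abstract does not spell out how maturity is injected into particle swarm optimization; the sketch below shows a standard PSO velocity update in which, purely as a hypothetical illustration, the social ("hint") term is gated by a maturity factor that grows over time.

    import numpy as np

    rng = np.random.default_rng(0)

    def pso_step(x, v, pbest, gbest, t, w=0.7, c1=1.5, c2=1.5, t_mature=50):
        # Standard PSO update; the maturity gate on the social term is a
        # hypothetical reading of 'materializing hints before using them'.
        maturity = min(1.0, t / t_mature)  # trust in the global-best hint grows
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + maturity * c2 * r2 * (gbest - x)
        return x + v, v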

Tablet Computer as a User Interface: Intelligent Solutions for Multifunctional Hardcopy Devices

Tablet computers and Multifunctional Hardcopy Devices (MHDs) are common devices in daily life; however, few scientific studies on combining them have been published. Tablet computers are straightforward to use, whereas MHDs are comparatively difficult to use. Thus, to assist users of different skill levels, we propose combining these two devices to achieve a straightforward, intelligent user interface (UI) and versatile What You See Is What You Get (WYSIWYG) document management and production. Our approach is to design an intelligent, user-dependent UI for an MHD using a tablet computer. Furthermore, we propose a hardware interconnection and versatile intelligent software between the two devices. In this study, we first provide a state-of-the-art survey on MHDs, tablet computers, and their interconnections. Second, we provide a comparative UI survey of two state-of-the-art MHDs, together with a proposal of a novel UI for MHDs evaluated using Jakob Nielsen's Ten Usability Heuristics.

3D Dense Correspondence for 3D Dense Morphable Face Shape Model

A realistic 3D face model is desired in various applications such as face recognition, games, avatars, and animations. Construction of a 3D face model is composed of 1) building a face shape model and 2) rendering the face shape model. Thus, building a realistic 3D face shape model is an essential step towards a realistic 3D face model. Recently, the 3D morphable model has been successfully introduced to deal with the variety of human face shapes. The 3D dense correspondence problem must first be resolved in order to construct a realistic 3D dense morphable face shape model. Several approaches to the 3D dense correspondence problem in 3D face modeling have been proposed previously; among them, optical-flow-based algorithms and TPS (Thin Plate Spline) based algorithms are representative. Optical-flow-based algorithms require texture information of faces, which is sensitive to variation of illumination. In the TPS-based algorithms proposed so far, the TPS process is performed on a 2D projection of the 3D face data in cylindrical coordinates, not directly on the 3D face data, so errors due to distortion of the data during the 2D TPS process may be inevitable. In this paper, we propose a new 3D dense correspondence algorithm for 3D dense morphable face shape modeling. The proposed algorithm does not need texture information and applies TPS directly to the 3D face data. Through the construction procedure, it is observed that the proposed algorithm constructs a realistic 3D morphable face model reliably and quickly.
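
For reference, the standard thin plate spline warp used in such correspondence methods interpolates control-point displacements with an affine part plus radial basis terms (a textbook formulation, not the paper's exact one):

    f(\mathbf{x}) = \mathbf{c} + A\mathbf{x} + \sum_{i=1}^{n} \mathbf{w}_i \, U\!\left(\lVert \mathbf{x} - \mathbf{p}_i \rVert\right),
    \qquad
    U(r) = \begin{cases} r^{2}\log r & \text{(2D)} \\ r & \text{(3D)} \end{cases}

subject to \sum_i \mathbf{w}_i = 0 and \sum_i \mathbf{w}_i \mathbf{p}_i^{\top} = 0, where the \mathbf{p}_i are the source control points. Applying TPS directly on the 3D data means solving this system with the 3D kernel, avoiding the cylindrical 2D projection.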

New Algorithms for Finding Short Reset Sequences in Synchronizing Automata

Finding synchronizing sequences for finite automata is a very important problem in many practical applications (part orienters in industry, the reset problem in biocomputing theory, network issues, etc.). The problem of finding the shortest synchronizing sequence is NP-hard, so polynomial algorithms can probably serve only as heuristics. In this paper we propose two versions of polynomial algorithms which work better than the well-known Eppstein's Greedy and Cycle algorithms.
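
To make the baseline concrete, here is a compact sketch in the spirit of the greedy approach used by Eppstein's algorithm: repeatedly pick two states of the current set, find a shortest word merging them by BFS over state pairs, and apply it. The transition-table representation and helper names are illustrative.

    from collections import deque

    def apply_word(delta, state, word):
        for symbol in word:
            state = delta[state][symbol]
        return state

    def greedy_synchronizing_word(states, alphabet, delta):
        current, word = set(states), []
        while len(current) > 1:
            p, q = sorted(current)[:2]  # any two distinct states
            seen, queue, merge = {(p, q)}, deque([((p, q), [])]), None
            while queue:  # BFS for a shortest word mapping p and q together
                (a, b), w = queue.popleft()
                if a == b:
                    merge = w
                    break
                for s in alphabet:
                    nxt = (delta[a][s], delta[b][s])
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, w + [s]))
            if merge is None:
                return None  # the automaton is not synchronizing
            word += merge
            current = {apply_word(delta, c, merge) for c in current}
        return word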

Robust Statistics Based Algorithm to Remove Salt and Pepper Noise in Images

In this paper, a robust-statistics-based filter to remove salt and pepper noise in digital images is presented. The algorithm first detects the corrupted pixels, since impulse noise affects only certain pixels in the image while the remaining pixels are uncorrupted. The corrupted pixels are then replaced by a value estimated using the proposed robust-statistics-based filter. The proposed method performs well in removing low to medium density impulse noise, with detail preservation up to a noise density of 70%, compared to the standard median filter, weighted median filter, recursive weighted median filter, progressive switching median filter, signal-dependent rank-ordered mean filter, adaptive median filter, and a recently proposed decision-based algorithm. The visual and quantitative results show that the proposed algorithm outperforms these methods in restoring the original image, with superior preservation of edges and better suppression of impulse noise.
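
A minimal sketch of the decision-based idea, with a plain neighbourhood median standing in for the paper's robust-statistics estimator: only pixels at the extreme values are treated as corrupted, and each is replaced by an estimate from its uncorrupted neighbours.

    import numpy as np

    def remove_salt_pepper(img):
        out = img.copy()
        noisy = (img == 0) | (img == 255)  # impulse noise hits the extremes
        for y, x in zip(*np.nonzero(noisy)):
            window = img[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
            good = window[(window != 0) & (window != 255)]
            if good.size:  # replace using uncorrupted neighbours only
                out[y, x] = np.median(good)
        return out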

Minimal Spanning Tree based Fuzzy Clustering

Most fuzzy clustering algorithms have some deficiencies: e.g., they are not able to detect clusters with non-convex shapes, the number of clusters must be known a priori, and they suffer from numerical problems such as sensitivity to initialization. This paper studies the synergistic combination of the hierarchical, graph-theoretic minimal spanning tree based clustering algorithm with the partitional Gath-Geva fuzzy clustering algorithm. The aim of this hybridization is to increase the robustness and consistency of the clustering results and to reduce the number of heuristically defined parameters of these algorithms, thereby decreasing the influence of the user on the clustering results. For the analysis of the resulting fuzzy clusters, a new tool based on a fuzzy similarity measure is presented. The calculated similarities of the clusters can be used for hierarchical clustering of the resulting fuzzy clusters, which is useful for cluster merging and for the visualization of the clustering results. As the examples used to illustrate the operation of the new algorithm will show, the proposed algorithm can detect clusters of arbitrary shape and does not suffer from the numerical problems of the classical Gath-Geva fuzzy clustering algorithm.
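
The MST side of such a hybrid can be sketched as follows: build the minimal spanning tree of the data, cut edges judged inconsistent, and use the resulting components, e.g. to initialize the Gath-Geva step. The cutting criterion here (edges longer than a multiple of the mean edge length) is an illustrative choice, not the paper's exact rule.

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
    from scipy.spatial.distance import pdist, squareform

    def mst_initial_clusters(X, factor=2.0):
        D = squareform(pdist(X))                  # pairwise distances
        mst = minimum_spanning_tree(D).toarray()
        edges = mst[mst > 0]
        mst[mst > factor * edges.mean()] = 0      # cut 'inconsistent' edges
        n, labels = connected_components(mst != 0, directed=False)
        return n, labels                          # components = initial clusters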

A Cumulative Learning Approach to Data Mining Employing Censored Production Rules (CPRs)

Knowledge is indispensable, but voluminous knowledge becomes a bottleneck for efficient processing. A great challenge for data mining is the large number of potential rules generated by the mining process; in fact, the result size is sometimes comparable to that of the original data. Traditional data mining pruning activities, such as support-based pruning, do not sufficiently reduce the huge rule space. Moreover, many practical applications are characterized by continual change of data and knowledge, making knowledge more voluminous with each change. The most predominant representation of the discovered knowledge is the standard Production Rule (PR) of the form If P Then D. Michalski & Winston proposed Censored Production Rules (CPRs) as an extension of production rules that exhibits variable precision and supports an efficient mechanism for handling exceptions. A CPR is an augmented production rule of the form If P Then D Unless C, where C (the Censor) is an exception to the rule. Such rules are employed in situations in which the conditional statement 'If P Then D' holds frequently and the assertion C holds rarely. With a rule of this type, we are free to ignore the exception condition when the resources needed to establish its presence are tight, or when there is simply no information available as to whether it holds or not. Thus the 'If P Then D' part of the CPR expresses the important information, while the Unless C part acts only as a switch that changes the polarity of D to ~D. In this paper, a scheme based on the Dempster-Shafer Theory (DST) interpretation of a CPR is suggested for discovering CPRs from the discovered flat PRs. The discovery of CPRs from flat rules results in a considerable reduction of the already discovered rules. The proposed scheme incrementally incorporates new knowledge and also reduces the size of the knowledge base considerably with each episode. Examples are given to demonstrate the behaviour of the proposed scheme. The suggested cumulative learning scheme would be useful in mining data streams.
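
The CPR semantics described above can be captured in a few lines; this sketch follows the abstract's reading directly (an unknown censor is simply ignored), while the paper's DST-based treatment is more refined.

    def evaluate_cpr(p_holds, censor_holds=None):
        # 'If P Then D Unless C': censor_holds is None when checking C is
        # too costly or no information about it is available.
        if not p_holds:
            return None              # the rule does not apply
        if censor_holds is None:
            return "D"               # ignore the rare exception
        return "~D" if censor_holds else "D"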

Spread Spectrum Image Watermarking for Secured Multimedia Data Communication

Digital watermarking provides secure multimedia data communication in addition to its copyright protection role. The spread spectrum (SS) modulation principle is widely used in digital watermarking to make multimedia signals robust against various signal-processing operations. Several SS watermarking algorithms have been proposed for multimedia signals, but very few works have discussed the issues responsible for secure data communication and the improvement of robustness. The current paper critically analyzes several such factors, namely the properties of the spreading codes, a signal decomposition suitable for data embedding, the security provided by the key, the successive bit cancellation method applied at the decoder (which has a great impact on detection reliability), and the secure communication of a significant signal under the camouflage of insignificant signals. Based on this analysis, a robust SS watermarking scheme for secure data communication is proposed in the wavelet domain, and improvements in secure communication and robustness performance are reported through experimental results. The results also show improved visual and statistical invisibility of the hidden data.
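
The core spread-spectrum embedding step can be sketched as below: a key-derived pseudo-noise sequence additively modulates one subband of a single-level wavelet decomposition. The 'haar' wavelet, single bit, and parameter names are illustrative assumptions, not the paper's full scheme.

    import numpy as np
    import pywt

    def embed_bit(img, bit, key, alpha=2.0):
        cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), 'haar')
        rng = np.random.default_rng(key)                # key -> spreading code
        pn = rng.choice([-1.0, 1.0], size=cH.shape)
        cH = cH + alpha * (1.0 if bit else -1.0) * pn   # c' = c + a*b*p
        return pywt.idwt2((cA, (cH, cV, cD)), 'haar')

Detection then correlates the same key-derived sequence with the received subband and thresholds the sign of the correlation.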

Recognition of Noisy Words Using the Time Delay Neural Networks Approach

This paper presents a recognition system for isolated words such as robot commands, carried out with Time Delay Neural Networks (TDNN), to teleoperate a robot for specific tasks such as 'turn' or 'close' in an industrial environment, taking into account the noise coming from the machinery. The choice of TDNN is based on its generalization in terms of accuracy; moreover, it acts as a filter that passes certain desirable frequency characteristics of speech. The goal is to determine the parameters of this filter so as to make the system adaptable to the variability of the speech signal and especially to noise; for this, the back-propagation technique was used in the learning phase. The approach was applied to commands pronounced in two languages separately, French and Arabic. The recognition rates for two test sets of 300 spoken words each are 87% and 97.6% in a quiet environment, and 77.67% and 92.67% when white Gaussian noise was added at an SNR of 35 dB.
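
The shift tolerance that motivates the TDNN choice comes from computing every output frame from a fixed window of input frames, i.e. a 1-D convolution over time; the sketch below shows one such layer (shapes and names are illustrative).

    import numpy as np

    def tdnn_layer(frames, weights, bias):
        # frames: (T, d_in); weights: (context, d_in, d_out); bias: (d_out,)
        context = weights.shape[0]
        T = frames.shape[0] - context + 1
        out = np.stack([
            sum(frames[t + k] @ weights[k] for k in range(context))
            for t in range(T)
        ]) + bias
        return np.tanh(out)  # the same weights are applied at every time step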

Virtual Scene based on VRML and Java

VRML (the Virtual Reality Modeling Language) is a standard language used to build 3D virtual models. The rapid development of Internet technology and computing has promoted the commercialization of virtual reality, and VRML is therefore expected to be the most effective framework for building virtual reality. This article studies how to build virtual scenes based on virtual reality technology and Java programs, and introduces how to perform real-time data exchange between VRML files and Java programs by applying the Script node, thereby strengthening the interactivity of VRML.

Multilevel Classifiers in Recognition of Handwritten Kannada Numerals

The recognition of handwritten numerals is an important area of research because of its applications in post offices, banks, and other organizations. This paper presents automatic recognition of handwritten Kannada numerals based on structural features. Five different types of features, namely a profile-based 10-segment string, water reservoir, vertical and horizontal strokes, end points, and average boundary length from the minimal bounding box, are used in the recognition of numerals. The effect of each feature and their combinations on numeral classification is analyzed using nearest neighbor classifiers. It is common to combine multiple categories of features into a single feature vector for classification. Instead, separate classifiers can be used to classify based on each visual feature individually, and the final classification can be obtained by combining the separate base classification results. One popular approach is to combine the classifier results into a feature vector and leave the decision to a next-level classifier. This method is extended here to extract better information, a possibility distribution, from the base classifiers for resolving conflicts among the classification results. We use the fuzzy k-Nearest Neighbor (fuzzy k-NN) algorithm as the base classifier for the individual feature sets; its results together form the feature vector for the final k-Nearest Neighbor (k-NN) classifier. Testing is done, using the different features individually and in combination, on a database containing 1600 samples of different numerals, and the results are compared with those of different existing methods.
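
A compact sketch of the base classifier and the multilevel idea: fuzzy k-NN produces per-class memberships for each feature group, and the concatenated memberships feed the final k-NN. The membership formula follows the common Keller-style fuzzy k-NN; all names are illustrative.

    import numpy as np

    def fuzzy_knn_memberships(X_train, y_train, x, k=5, m=2):
        d = np.linalg.norm(X_train - x, axis=1)
        idx = np.argsort(d)[:k]                        # k nearest neighbours
        w = 1.0 / np.maximum(d[idx], 1e-12) ** (2.0 / (m - 1))
        classes = np.unique(y_train)
        u = np.array([w[y_train[idx] == c].sum() for c in classes])
        return u / u.sum()                             # per-class memberships

    # Multilevel scheme: one fuzzy k-NN per feature group; the concatenated
    # membership vectors form the input of the final k-NN classifier, e.g.
    # final_input = np.concatenate([fuzzy_knn_memberships(Xg, y, xg)
    #                               for Xg, xg in feature_groups])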

Performance Analysis of the Subgroup Method for Collective I/O

As many scientific applications require large-scale data processing, the importance of parallel I/O has been increasingly recognized. Collective I/O is one of the most notable features of parallel I/O, and it enables application programmers to easily handle large data volumes. In this paper we measure and analyze the performance of the original collective I/O and of the subgroup method, a way of using MPI collective I/O effectively. The experimental results show that the subgroup method performs well for small data sizes.
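
A sketch of the subgroup idea using mpi4py: split the global communicator into small groups and let each group perform its own collective write. The group size, file naming, and offsets here are illustrative assumptions, not the paper's exact configuration.

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    group_size = 4                                   # illustrative subgroup size
    color = comm.rank // group_size
    subcomm = comm.Split(color=color, key=comm.rank)

    data = np.full(1024, comm.rank, dtype='i')
    fh = MPI.File.Open(subcomm, 'out.%d' % color,
                       MPI.MODE_CREATE | MPI.MODE_WRONLY)
    fh.Write_at_all(subcomm.rank * data.nbytes, data)  # collective in subgroup
    fh.Close()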

Variable Step-Size Affine Projection Algorithm With a Weighted and Regularized Projection Matrix

This paper presents a forgetting factor scheme for variable step-size affine projection algorithms (APA). The proposed scheme uses a forgetting-processed input matrix as the projection matrix of the pseudo-inverse to estimate the system deviation. This method introduces temporal weights into the projection matrix, which is typically a better model of the real error's behavior than homogeneous temporal weights. The regularization overcomes the ill-conditioning introduced by both the forgetting process and the increasing size of the input matrix. The algorithm is tested by independent trials with coloured input signals and various parameter combinations. Results show that the proposed algorithm is superior in terms of convergence rate and misadjustment compared to existing algorithms. As a special case, a variable step-size NLMS algorithm with a forgetting factor is also presented in this paper.
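
In the spirit of the abstract, one weighted and regularized APA iteration might look as follows; the exact placement of the forgetting weights in the paper may differ, so treat this as an illustrative form.

    import numpy as np

    def apa_update(w, X, d, mu=0.5, lam=0.95, delta=1e-2):
        # X: (N, K) with the newest input vector in the last column; d: (K,)
        K = X.shape[1]
        Xf = X * lam ** np.arange(K - 1, -1, -1)   # forget old columns
        e = d - X.T @ w                            # a-priori error vector
        G = Xf.T @ Xf + delta * np.eye(K)          # regularized Gram matrix
        return w + mu * Xf @ np.linalg.solve(G, e)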

Data Extraction of XML Files using Searching and Indexing Techniques

XML files contain data in a well-formatted manner, and studying the format and semantics of the grammar helps in fast retrieval of that data. There are many algorithms which describe how to search for data in XML files. A number of approaches either use data structures, in which case the user must know the structure of the document, or are related to the contents of the document, using information retrieval techniques based on NLP. In the latter case the results may be irrelevant or unsuccessful, and the search may take more time. This paper presents fast XML retrieval techniques using a new indexing technique and the concept of RXML. When indexing an XML document, the system takes into account both the document content and the document structure and assigns a value to each tag in the file. To query the system, the user is not constrained to a fixed query format.
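
A minimal version of such a combined structure/content index can be sketched with the Python standard library; the RXML-specific tag valuation is the paper's contribution and is not reproduced here.

    import xml.etree.ElementTree as ET
    from collections import defaultdict

    def build_index(path):
        index = defaultdict(list)
        for elem in ET.parse(path).getroot().iter():
            index[elem.tag].append(elem)           # structure-based lookup
            for word in (elem.text or '').split():
                index[word.lower()].append(elem)   # content-based lookup
        return index                               # query by tag or by word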

New Mitigating Technique to Overcome DDoS Attack

In this paper, we explore a new scheme for filtering spoofed packets (a DDoS attack countermeasure) which combines the path fingerprint and client puzzle concepts. In this scheme, each IP packet carries an embedded unique fingerprint that represents the route the packet has traversed, and the server maintains a mapping table containing each client IP address and its corresponding fingerprint. A client puzzle is placed at the ingress router: for each request, the puzzle issuer provides a puzzle which the source has to solve. Our design has the following advantages over prior approaches: 1) it reduces network traffic, as the client puzzle is placed at the ingress router; and 2) the mapping table at the server is lightweight and moderate in size.
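
The client puzzle half of the scheme is typically a hash-reversal puzzle: cheap to issue and verify, costly to solve. The sketch below is an illustrative construction, not the paper's exact protocol.

    import hashlib, os

    def issue_puzzle(difficulty=20):
        return os.urandom(16), difficulty          # fresh challenge per request

    def solve_puzzle(challenge, difficulty):
        nonce = 0                                  # brute force at the client
        while not verify(challenge, difficulty, nonce):
            nonce += 1
        return nonce

    def verify(challenge, difficulty, nonce):
        h = hashlib.sha256(challenge + nonce.to_bytes(8, 'big')).digest()
        return int.from_bytes(h, 'big') >> (256 - difficulty) == 0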