Robot Path Planning in 3D Space Using Binary Integer Programming

This paper presents a novel algorithm for path planning of mobile robots in known 3D environments using Binary Integer Programming (BIP). In this approach the path planning problem is formulated as a BIP with variables taken from the 3D Delaunay triangulation of the free configuration space and solved to obtain an optimal channel made of connected tetrahedra. The 3D channel is then partitioned into convex fragments, which are used to build safe, short paths from Start to Goal within the channel. The algorithm is simple, complete, does not suffer from local minima, and is applicable to different workspaces with convex and concave polyhedral obstacles. A notable feature of this algorithm is that it extends straightforwardly to n-D configuration spaces.
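As an illustration of how such a channel-selection step can be posed as a BIP, the sketch below formulates a shortest-path-style selection over the adjacency graph of tetrahedra using the PuLP solver. The adjacency data, costs and flow-based constraints are assumptions made for illustration, not the paper's exact formulation.

```python
# Minimal sketch (assumed formulation): select an optimal channel of connected
# tetrahedra as a shortest-path BIP over the adjacency graph of a Delaunay
# triangulation of the free space. Requires: pip install pulp
import pulp

# Hypothetical adjacency data: tetrahedron ids, shared-facet adjacencies and costs.
tets = ["t0", "t1", "t2", "t3", "t4"]
edges = {("t0", "t1"): 1.0, ("t1", "t2"): 1.2, ("t1", "t3"): 0.8,
         ("t3", "t4"): 1.1, ("t2", "t4"): 0.9}
start, goal = "t0", "t4"

# Treat the adjacency graph as undirected by adding both arc directions.
arcs = {**edges, **{(b, a): c for (a, b), c in edges.items()}}

prob = pulp.LpProblem("channel_selection", pulp.LpMinimize)
x = pulp.LpVariable.dicts("arc", arcs.keys(), cat=pulp.LpBinary)

# Objective: minimise the total traversal cost of the selected channel.
prob += pulp.lpSum(c * x[a] for a, c in arcs.items())

# Flow conservation: one unit leaves Start, one unit enters Goal, every other
# tetrahedron is balanced, which forces a connected chain of tetrahedra.
for t in tets:
    out_flow = pulp.lpSum(x[(s, u)] for (s, u) in arcs if s == t)
    in_flow = pulp.lpSum(x[(u, s)] for (u, s) in arcs if s == t)
    rhs = 1 if t == start else (-1 if t == goal else 0)
    prob += out_flow - in_flow == rhs

prob.solve(pulp.PULP_CBC_CMD(msg=False))
channel = [a for a in arcs if x[a].value() == 1]
print("selected channel arcs:", channel)
```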

Information Modelling for Adaptive Composition in Collaborative Work Environment

Extensive information is required within an R&D environment, and a considerable amount of time and effort is spent on finding the necessary information. An adaptive information-providing system would be beneficial in such an environment, and a conceptual model of the resources, people and context is a prerequisite for developing such applications. In this paper, an information model of the various contexts and resources is proposed, which enables effective adaptive information applications within an R&D project and meeting environment.

Image Authenticity and Perceptual Optimization via Genetic Algorithm and a Dependence Neighborhood

Information hiding for authenticating and verifying the content integrity of multimedia has been studied extensively in the last decade. We propose using a genetic algorithm and a non-deterministic dependence neighborhood that involves the un-watermarkable coefficients for digital image authentication. The genetic algorithm intelligently selects coefficients for watermarking in a DCT-based image authentication scheme, which also implicitly watermarks all the un-watermarkable coefficients in order to thwart different attacks. Experimental results show that such intelligent selection improves the imperceptibility of the watermarked image, and that implicit watermarking of all the coefficients improves security against attacks such as cover-up, vector quantization and transplantation.
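The sketch below illustrates the general idea of GA-driven coefficient selection for a DCT block, with a fitness function that rewards imperceptibility. The block data, the placeholder embedding rule and the mutation-only GA loop are simplified assumptions, not the scheme proposed in the paper.

```python
# Hypothetical sketch: each chromosome is a bitmask over the mid-frequency
# coefficients of an 8x8 DCT block; fitness rewards low distortion after embedding.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float)   # stand-in image block
candidates = [(u, v) for u in range(8) for v in range(8) if 3 <= u + v <= 6]

def embed(block, mask, strength=4.0):
    """Quantise each selected mid-frequency coefficient to an offset lattice
    (placeholder embedding, not the paper's watermark rule)."""
    coeffs = dctn(block, norm="ortho")
    for keep, (u, v) in zip(mask, candidates):
        if keep:
            coeffs[u, v] = strength * np.round(coeffs[u, v] / strength) + strength / 2
    return idctn(coeffs, norm="ortho")

def fitness(mask):
    """Higher is better: penalise distortion, require a minimum payload size."""
    if mask.sum() < 4:
        return -np.inf
    return -np.mean((embed(block, mask) - block) ** 2)

pop = rng.integers(0, 2, (20, len(candidates)))
for _ in range(50):                                   # generations
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]           # truncation selection
    children = parents[rng.integers(0, 10, 20)].copy()
    flip = rng.random(children.shape) < 0.05          # bit-flip mutation
    pop = np.where(flip, 1 - children, children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected coefficient positions:", [c for k, c in zip(best, candidates) if k])
```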

A Combination of Similarity Ranking and Time for Social Research Paper Searching

Nowadays social media are important tools for web resource discovery. The performance and capabilities of web search are vital, especially for search results from social research paper bookmarking. This paper proposes a new ranking algorithm, CSTRank, which combines similarity ranking with paper posted time. Paper posted time serves as a static ranking signal for improving search results. In this study it is combined with similarity ranking to produce a better ranking than methods such as similarity ranking alone (SimRank). The retrieval performance of the combined rankings is evaluated using mean NDCG values. The experiments indicate that CSTRank with a weight ratio of 90:10 can improve the efficiency of research paper searching on social bookmarking websites.
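A minimal sketch of this kind of linear score combination is shown below. The recency formula, field names and the 90:10 weights are illustrative assumptions, since the abstract does not specify the exact time-score function.

```python
# Hedged sketch: blend a similarity score with a posted-time (recency) score
# at a fixed weight ratio such as 90:10.
from datetime import datetime, timezone

def time_score(posted, now, half_life_days=365.0):
    """Map posted time to (0, 1]; newer papers score higher (assumed decay)."""
    age_days = (now - posted).total_seconds() / 86400.0
    return 0.5 ** (age_days / half_life_days)

def cst_score(similarity, posted, now, w_sim=0.9, w_time=0.1):
    """Combine similarity and posted-time scores, e.g. at a 90:10 ratio."""
    return w_sim * similarity + w_time * time_score(posted, now)

now = datetime(2012, 1, 1, tzinfo=timezone.utc)
papers = [
    {"id": "p1", "sim": 0.82, "posted": datetime(2011, 11, 20, tzinfo=timezone.utc)},
    {"id": "p2", "sim": 0.85, "posted": datetime(2008, 3, 2, tzinfo=timezone.utc)},
]
ranked = sorted(papers, key=lambda p: cst_score(p["sim"], p["posted"], now), reverse=True)
print([p["id"] for p in ranked])
```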

Multi-Dimensional Concerns Mining for Web Applications via Concept-Analysis

Web applications have become very complex and crucial, especially when combined with areas such as CRM (Customer Relationship Management) and BPR (Business Process Reengineering). Consequently, the scientific community has focused its attention on Web application design, development, analysis, and testing by studying and proposing methodologies and tools. This paper proposes an approach to automatic multi-dimensional concern mining for Web applications, based on concept analysis, impact analysis, and token-based concern identification. This approach lets the user analyse and traverse the Web software relevant to a particular concern (concept, goal, purpose, etc.) via multi-dimensional separation of concerns, in order to document, understand and test Web applications. The technique was developed in the context of the WAAT (Web Applications Analysis and Testing) project. A semi-automatic tool to support it is currently under development.
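As a rough illustration of the concept-analysis step, the sketch below derives formal concepts from a small, made-up page-by-token incidence table; the WAAT tool's actual representation and algorithm are not reproduced here.

```python
# Hypothetical page x token incidence relation extracted from a Web application.
from itertools import combinations

pages = {
    "login.jsp":    {"session", "user", "password"},
    "cart.jsp":     {"session", "item", "price"},
    "checkout.jsp": {"session", "item", "price", "payment"},
    "help.html":    {"faq"},
}

def intent(page_set):
    """Tokens common to every page in the set (Galois derivation operator)."""
    if not page_set:
        return set.union(*pages.values())
    return set.intersection(*(pages[p] for p in page_set))

def extent(token_set):
    """Pages that contain every token in the set."""
    return {p for p, toks in pages.items() if token_set <= toks}

# Enumerate formal concepts by closing every subset of pages (fine for tiny examples).
concepts = set()
for r in range(len(pages) + 1):
    for combo in combinations(pages, r):
        ext = frozenset(extent(intent(set(combo))))
        concepts.add((ext, frozenset(intent(ext))))

# Each concept pairs a set of pages with the tokens they share: a candidate concern.
for ext, shared in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(ext), "<->", sorted(shared))
```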

Sloshing Control in Tilting Phases of the Pouring Process

We propose a control design scheme that aims to prevent undesirable liquid outpouring and suppress sloshing during the forward and backward tilting phases of the pouring process, for the case of liquid containers carried by manipulators. The proposed scheme combines a partial inverse dynamics controller with a PID controller, tuned with the use of a “metaheuristic” search algorithm. The “metaheuristic” search algorithm tunes the PID controller based on simulation results of the plant's linearization around the operating point corresponding to the critical tilting angle, where outpouring initiates. Liquid motion is modeled using the well-known pendulum-type model. However, the proposed controller does not require measurements of the liquid's motion within the tank.
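A minimal sketch of PID tuning by a metaheuristic on a linearised pendulum-type model is given below. The plant parameters, disturbance profile and random-search loop are assumptions made for illustration, not the paper's plant or tuning algorithm.

```python
# Hedged sketch: tune a PID with a simple metaheuristic (random search) on a
# linearised pendulum-type sloshing model regulated around its operating point.
import numpy as np

rng = np.random.default_rng(1)
wn, zeta, dt, T = 3.0, 0.05, 0.002, 4.0           # assumed natural freq./damping

def cost(gains):
    """Integral of time-weighted absolute slosh angle under a step disturbance."""
    kp, ki, kd = gains
    theta, dtheta, integ, prev_err, J = 0.0, 0.0, 0.0, 0.0, 0.0
    for k in range(int(T / dt)):
        err = -theta                               # regulate slosh angle to zero
        integ += err * dt
        u = kp * err + ki * integ + kd * (err - prev_err) / dt
        prev_err = err
        # assumed linearised model: theta'' = -2*zeta*wn*theta' - wn^2*theta + u + d(t)
        d = 1.0 if k * dt < 0.5 else 0.0           # tilting-phase disturbance
        ddtheta = -2 * zeta * wn * dtheta - wn**2 * theta + u + d
        dtheta += ddtheta * dt
        theta += dtheta * dt
        J += k * dt * abs(theta) * dt
    return J

best, best_J = np.array([1.0, 0.1, 0.1]), np.inf
for _ in range(100):                               # random-search metaheuristic
    cand = np.abs(best + rng.normal(0, 0.5, 3))
    J = cost(cand)
    if J < best_J:
        best, best_J = cand, J
print("tuned (Kp, Ki, Kd):", best, "cost:", best_J)
```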

Classification and Analysis of Risks in Software Engineering

Despite the various methods that exist in software risk management, software projects have a high rate of failure. As project complexity and size increase, managing software development becomes more difficult. In such projects, thorough analysis and risk assessment are vital. In this paper, a classification for software risks is specified. Then the relations between these risks are presented using a risk tree structure. Analysis and assessment of these risks are done using probabilistic calculations. This analysis supports qualitative and quantitative assessment of the risk of failure and can aid the software risk management process. The classification and risk tree structure can also be applied in software tools.
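As a brief illustration of how probabilities can roll up through such a risk tree, the sketch below combines leaf risk probabilities through OR and AND nodes; the taxonomy and numbers are hypothetical, not taken from the paper.

```python
# Hedged sketch: leaf risks carry estimated probabilities and inner nodes combine
# children with OR (independent events) or AND semantics.
def risk(node):
    """Recursively compute the probability of a risk node."""
    if "p" in node:                       # leaf risk with an estimated probability
        return node["p"]
    child_ps = [risk(c) for c in node["children"]]
    if node["gate"] == "or":              # materialises if any child risk does
        p_none = 1.0
        for p in child_ps:
            p_none *= (1.0 - p)
        return 1.0 - p_none
    prod = 1.0                            # "and": all child risks must materialise
    for p in child_ps:
        prod *= p
    return prod

project_failure = {
    "gate": "or",
    "children": [
        {"gate": "or", "children": [{"p": 0.10}, {"p": 0.05}]},   # requirement risks
        {"gate": "and", "children": [{"p": 0.30}, {"p": 0.20}]},  # schedule risks
        {"p": 0.08},                                              # technology risk
    ],
}
print(f"estimated probability of project failure: {risk(project_failure):.3f}")
```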

Application of Neural Network in User Authentication for Smart Home System

Security has been an important issue and concern in smart home systems. Because smart home networks consist of a wide range of wired and wireless devices, there is a possibility of illegal access to restricted data or devices. Password-based authentication is widely used to identify authorized users, because this method is cheap, easy and quite accurate. In this paper, a neural network is trained to store the passwords instead of using a verification table. This approach helps address security problems that arise in some authentication systems. The conventional way to train the network, Backpropagation (BPN), requires a long training time. Hence, a faster training algorithm, Resilient Backpropagation (RPROP), is applied to the MLP neural network to accelerate the training process. For the data, 200 sets of UserIDs and passwords were created and encoded into binary as the input. Simulations were carried out to evaluate performance for different numbers of hidden neurons and combinations of transfer functions. Mean Square Error (MSE), training time and number of epochs are used to determine the network performance. From the results obtained, using Tansig in the hidden layer, Purelin in the output layer and 250 hidden neurons gave the best performance. As a result, a neural-network-based password authentication system for smart homes was developed successfully.
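A minimal sketch, not the authors' exact setup, is shown below: an MLP with a tanh hidden layer and a linear output (the Tansig/Purelin pairing mentioned above) trained to map binary-encoded UserIDs to binary-encoded passwords. Plain gradient descent is used here for brevity; the paper uses RPROP for faster convergence, and the dimensions are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, id_bits, pw_bits, hidden = 200, 16, 32, 64
X = rng.integers(0, 2, (n_users, id_bits)).astype(float)   # encoded UserIDs
Y = rng.integers(0, 2, (n_users, pw_bits)).astype(float)   # encoded passwords

W1 = rng.normal(0, 0.1, (id_bits, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.1, (hidden, pw_bits))
b2 = np.zeros(pw_bits)
lr = 0.05

for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)              # tansig hidden layer
    out = H @ W2 + b2                     # purelin output layer
    err = out - Y
    mse = np.mean(err ** 2)
    # backpropagation of the mean-squared error
    dW2 = H.T @ err / n_users
    db2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)
    dW1 = X.T @ dH / n_users
    db1 = dH.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

def verify(user_id_bits, claimed_pw_bits):
    """Accept only if the network's recalled password matches the claimed one."""
    recalled = np.round(np.tanh(user_id_bits @ W1 + b1) @ W2 + b2)
    return np.array_equal(recalled, claimed_pw_bits)

print("final MSE:", mse, "| user 0 verified:", verify(X[0], Y[0]))
```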

Performance Comparison for AODV, DSR and DSDV W.R.T. CBR and TCP in Large Networks

A Mobile Ad hoc Network (MANET) is a wireless, self-configuring network of mobile routers (and associated hosts) connected by wireless links, the union of which forms an arbitrary topology because of the random mobility of the nodes. In this paper, the three protocols DSDV, AODV and DSR are compared on a performance basis under different traffic types, namely CBR and TCP, in a large network. The simulation tool is NS2, and the scenarios are designed to study the effect of pause times. The results presented in this paper clearly indicate that the different protocols behave differently under different pause times. The results also show the main characteristics of the different traffic types operating on MANETs and thus help select the best protocol for each scenario.

Specialization-Based Parallel Processing without Memo-Trees

The purpose of this paper is to propose a framework for constructing correct parallel processing programs based on the Equivalent Transformation Framework (ETF). In ETF, a problem's domain knowledge and a query are described as definite clauses, and computation is regarded as transformation of these definite clauses. Their meaning is defined by a model of the set of definite clauses, and the transformation rules generated must preserve this meaning. We have previously proposed a parallel processing method based on “specialization”, an operation within these transformations that resembles substitution in logic programming. That method requires a “Memo-tree”, a history of specializations, to maintain correctness. In this paper we propose a new method for specialization-based parallel processing without Memo-trees.
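To make the specialization operation more concrete, the short sketch below applies a substitution to a definite clause; the term representation and the example rule are assumptions made only for illustration, not the ETF formalism itself.

```python
# Hedged sketch: specialization resembling substitution in logic programming.
def substitute(term, binding):
    """Apply a variable binding to a term (variables are strings starting with '?')."""
    if isinstance(term, str):
        return binding.get(term, term) if term.startswith("?") else term
    return tuple(substitute(t, binding) for t in term)   # compound term / atom

def specialize(clause, binding):
    """Specialize a definite clause (head, [body atoms]) by a substitution."""
    head, body = clause
    return substitute(head, binding), [substitute(a, binding) for a in body]

ancestor_rule = (("ancestor", "?x", "?z"),
                 [("parent", "?x", "?y"), ("ancestor", "?y", "?z")])
print(specialize(ancestor_rule, {"?x": "alice"}))
```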

An Implicit Representation of Spherical Product for Increasing the Shape Variety of Super-quadrics in Implicit Surface Modeling

Super-quadrics can represent a set of implicit surfaces, which can furthermore be used as primitive surfaces to construct a complex object via Boolean set operations in implicit surface modeling. In fact, super-quadrics were developed to create a parametric surface by performing the spherical product on two parametric curves, and some of the resulting parametric surfaces were also represented as implicit surfaces. However, because not every parametric curve can be redefined implicitly, only implicit super-elliptic and super-hyperbolic curves can be used to perform the spherical product, and so only implicit super-ellipsoids and super-hyperboloids are developed among the super-quadrics. To create implicit surfaces with more diverse shapes than super-quadrics, this paper proposes an implicit representation of the spherical product, which performs the spherical product on two implicit curves as super-quadrics do. By means of the implicit representation, many new implicit curves such as polygonal, star-shaped and rose-shaped curves can be used to develop new implicit surfaces with a greater variety of shapes than super-quadrics, such as polyhedrons, hyper-ellipsoids, super-hyperboloids and hyper-toroids containing star-shaped and rose-shaped major and minor circles. Besides, the newly developed implicit surfaces can also be used to define new primitive implicit surfaces for constructing a more complex implicit surface in implicit surface modeling.
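For reference, the sketch below shows the classical parametric spherical product the abstract refers to, applied to two super-elliptic curves; the implicit-curve version proposed in the paper is not reproduced here.

```python
# Classical (parametric) spherical product: two 2-D curves m(eta) and h(omega)
# combine into the surface S(eta, omega) = (m1(eta) h1(omega), m1(eta) h2(omega), m2(eta)).
import numpy as np

def superellipse(t, eps):
    """Super-elliptic curve used by classic super-quadrics."""
    c, s = np.cos(t), np.sin(t)
    return np.sign(c) * np.abs(c) ** eps, np.sign(s) * np.abs(s) ** eps

def spherical_product(m_curve, h_curve, eta, omega):
    """Combine a vertical profile curve m with a horizontal cross-section curve h."""
    m1, m2 = m_curve(eta)
    h1, h2 = h_curve(omega)
    return m1 * h1, m1 * h2, np.broadcast_to(m2, h1.shape)

eta, omega = np.meshgrid(np.linspace(-np.pi / 2, np.pi / 2, 32),
                         np.linspace(-np.pi, np.pi, 64), indexing="ij")
x, y, z = spherical_product(lambda t: superellipse(t, 0.5),
                            lambda t: superellipse(t, 1.0), eta, omega)
print(x.shape, y.shape, z.shape)   # sampled points on a super-ellipsoid
```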

Recurrent Radial Basis Function Network for Failure Time Series Prediction

An adaptive software reliability prediction model using an evolutionary connectionist approach based on a Recurrent Radial Basis Function architecture is proposed. Based on the currently available software failure time data, a Fuzzy Min-Max algorithm is used to globally optimize the number of Gaussian nodes, k. The corresponding optimized neural network architecture is iteratively and dynamically reconfigured in real time as new actual failure time data arrives. The performance of our proposed approach has been tested using sixteen real-time software failure datasets. Numerical results show that our proposed approach is robust across different software projects and has better next-step predictability compared to existing neural network models for failure time prediction.
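A hedged sketch of the underlying idea is given below: a Gaussian radial basis function network fitted by least squares to predict the next failure time from the previous one. Centre selection here is a simple grid and the data are synthetic, whereas the paper optimises the number of Gaussian nodes with a Fuzzy Min-Max algorithm and reconfigures the network online.

```python
import numpy as np

failure_times = np.cumsum(np.random.default_rng(0).exponential(5.0, 40))
X, y = failure_times[:-1], failure_times[1:]          # next-step prediction pairs

centers = np.linspace(X.min(), X.max(), 8)            # assumed k Gaussian nodes
width = (X.max() - X.min()) / 8

def design(x):
    """Gaussian RBF activations for each input sample."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

Phi = np.hstack([design(X), np.ones((len(X), 1))])    # RBF features + bias
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)           # least-squares output weights

pred = Phi @ w
print("next-step RMSE on the training series:", np.sqrt(np.mean((pred - y) ** 2)))
```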

An Analysis of the Social Network Structure of Knowledge Management Students at NTU

This paper maps the structure of the social network of the 2011 class of sixty graduate students of the Masters of Science (Knowledge Management) programme at the Nanyang Technological University, based on their friending relationships on Facebook. To ensure anonymity, actual names were not used. Instead, they were replaced with codes constructed from their gender, nationality, mode of study, year of enrollment and a unique number. The relationships between friends within the class, and among the seniors and alumni of the programme, were plotted. UCINet and Pajek were used to plot the sociogram; to compute the density, inclusivity, and degree, global, betweenness, and Bonacich centralities; to partition the students into two groups, namely active and peripheral; and to identify the cut-points. Homophily was investigated, and it was observed for nationality and study mode. The groups students formed on Facebook were also studied, and of fifteen groups, eight were classified as dead, which we defined as those that have been inactive for over two months.
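For readers who want to reproduce this kind of analysis with open-source tools, the sketch below computes density, centralities and cut-points with networkx on a made-up graph; the node codes only mimic the anonymisation convention described, and the edges are invented.

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("F-SG-FT-2011-01", "M-IN-PT-2011-02"),
    ("F-SG-FT-2011-01", "F-CN-FT-2011-03"),
    ("M-IN-PT-2011-02", "F-CN-FT-2011-03"),
    ("F-CN-FT-2011-03", "M-SG-FT-2010-04"),   # a senior of the programme
])

print("density:", nx.density(G))
print("degree centrality:", nx.degree_centrality(G))
print("betweenness centrality:", nx.betweenness_centrality(G))
print("cut-points (articulation points):", list(nx.articulation_points(G)))
```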

Concept Abduction in Description Logics with Cardinality Restrictions

Recently the usefulness of Concept Abduction, a novel non-monotonic inference service for Description Logics (DLs), has been argued in the context of ontology-based applications such as semantic matchmaking and resource retrieval. Based on tableau calculus, a method has been proposed to realize this reasoning task in ALN, a description logic that supports simple cardinality restrictions as well as other basic constructors. However, in many ontology-based systems the representation of the ontology requires more expressive formalisms to capture domain-specific constraints, and ALN is not sufficient. In order to increase the applicability of the abductive reasoning method in such contexts, this paper presents an extension of the tableau-based algorithm for dealing with concepts represented in ALCQ, the description logic that extends ALN with full concept negation and qualified number restrictions.

Portfolio Management: A Fuzzy Set Based Approach to Monitoring Size to Maximize Return and Minimize Risk

Fuzzy logic can be used when knowledge is incomplete or when ambiguity of data exists. The purpose of this paper is to propose a proactive fuzzy set-based model for reacting to the risk inherent in investment activities relative to a complete view of portfolio management. Fuzzy rules are given where, depending on the antecedents, the portfolio size may be slightly or significantly decreased or increased. The decision maker considers acceptable bounds on the proportion of acceptable risk and return. The Fuzzy Controller model allows learning to be achieved as 1) the firing strength of each rule is measured, 2) fuzzy output allows rules to be updated, and 3) new actions are recommended as the system continues to loop. An extension is given to the fuzzy controller that evaluates potential financial loss before adjusting the portfolio. An application is presented that illustrates the algorithm and extension developed in the paper.
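A toy sketch of the rule-firing and output-combination mechanics is shown below; the membership functions, the four rules and the crisp consequents are invented for illustration and are not the rule base described in the paper.

```python
# Hedged sketch: risk and return antecedents fire rules whose strengths are
# combined into a recommended change in portfolio size.
def tri(x, a, b, c):
    """Triangular membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def recommend_change(risk, ret):
    """Weighted-average (Sugeno-style) defuzzification over four example rules."""
    low_risk, high_risk = tri(risk, -0.1, 0.0, 0.5), tri(risk, 0.3, 1.0, 1.1)
    low_ret, high_ret = tri(ret, -0.1, 0.0, 0.08), tri(ret, 0.05, 0.15, 0.3)
    # rule firing strengths (min t-norm) and crisp consequents (% size change)
    rules = [
        (min(low_risk, high_ret), +0.10),   # slightly increase the portfolio
        (min(low_risk, low_ret),  +0.02),
        (min(high_risk, high_ret), -0.05),
        (min(high_risk, low_ret), -0.15),   # significantly decrease the portfolio
    ]
    total = sum(strength for strength, _ in rules)
    return sum(strength * out for strength, out in rules) / total if total else 0.0

print(f"recommended size change: {recommend_change(risk=0.6, ret=0.12):+.2%}")
```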

Object-Oriented Cognitive-Spatial Complexity Measures

Software maintenance, and mainly software comprehension, accounts for the largest costs in the software lifecycle. In order to assess the cost of software comprehension, various complexity measures have been proposed in the literature. This paper proposes new cognitive-spatial complexity measures, which combine the spatial and architectural aspects of the software to compute its complexity. The spatial aspect is captured using the lexical distances (in number of lines of code) between different program elements, and the architectural aspect is captured using the cognitive weights of the control structures present in the control flow of the program. The proposed measures are evaluated using standard axiomatic frameworks and then compared with the corresponding existing cognitive complexity measures as well as the spatial complexity measures for object-oriented software. This study establishes that the proposed measures are better indicators of the cognitive effort required for software comprehension than the other existing complexity measures for object-oriented software.
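The sketch below illustrates one way such a measure could be computed from lexical distances weighted by cognitive weights; the weight table, the analysed uses and the summation rule are assumptions, since the paper's exact formulation is not reproduced in the abstract.

```python
# Hypothetical cognitive weights per control structure (e.g. sequence=1, branch=2, loop=3).
COGNITIVE_WEIGHT = {"sequence": 1, "branch": 2, "loop": 3}

# Hypothetical static-analysis output: (element, defined_line, used_line, structure).
uses = [
    ("balance", 10, 42, "branch"),
    ("balance", 10, 57, "loop"),
    ("rate",    12, 42, "sequence"),
    ("log",      3, 90, "loop"),
]

def cognitive_spatial_complexity(uses):
    """Sum of cognitive-weighted lexical distances over all element uses."""
    return sum(COGNITIVE_WEIGHT[s] * abs(used - defined)
               for _, defined, used, s in uses)

print("cognitive-spatial complexity:", cognitive_spatial_complexity(uses))
```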

Diffusion Analysis of a Scalable Feistel Network

A generalization of the concept of Feistel Networks (FN), known as the Extended Feistel Network (EFN), is examined. EFN splits the input block into n > 2 sub-blocks. Like a conventional FN, an EFN consists of a series of rounds whereby at least one sub-block is subjected to an F function. The function plays a key role in the diffusion process due to its completeness property. It is also important to note that in an EFN the F-function is the most computationally expensive operation in a round. The aim of this paper is to determine a suitable type of EFN for a scalable cipher. This is done by analyzing the threshold number of rounds for different types of EFN to achieve the completeness property, as well as the number of F-functions required in the network. The work focuses on EFN-Type I, Type II and Type III only. The analysis finds that EFN-Type II and Type III diffuse at the same rate and both are faster than EFN-Type I. Since EFN-Type II uses fewer F-functions than EFN-Type III, Type II is the most suitable EFN for use in a scalable cipher.
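For reference, the sketch below shows one round of a Type-II generalised (extended) Feistel network over four sub-blocks, with a toy placeholder F-function. It illustrates why Type II needs only n/2 F-function calls per round, but it is not the cipher analysed in the paper and the F-function is not cryptographically sound.

```python
def toy_f(x, round_key):
    """Placeholder F-function on 16-bit sub-blocks (illustrative only)."""
    x = (x ^ round_key) & 0xFFFF
    return ((x * 0x9E37) ^ (x >> 7)) & 0xFFFF

def efn_type2_round(blocks, round_key):
    """One Type-II round: n/2 F-function calls, then a left rotation of sub-blocks."""
    out = list(blocks)
    for i in range(0, len(blocks), 2):
        out[(i + 1) % len(blocks)] ^= toy_f(blocks[i], round_key)
    return out[1:] + out[:1]                      # rotate sub-blocks

state = [0x0123, 0x4567, 0x89AB, 0xCDEF]          # n = 4 sub-blocks
for r in range(4):
    state = efn_type2_round(state, round_key=0x1111 * (r + 1))
print([hex(b) for b in state])
```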

Performance of QoS Parameters in MANET Application Traffics in Large Scale Scenarios

A mobile ad hoc network consists of wireless nodes communicating without the need for a centralized administration. A user can move at any time in an ad hoc scenario and, as a result, such a network needs routing protocols that can adapt to the dynamically changing topology. To accomplish this, a number of ad hoc routing protocols have been proposed and implemented, including DSR, OLSR and AODV. This paper presents a study of the QoS parameters for MANET application traffic in large-scale scenarios with 50 and 120 nodes. The application traffic analyzed in this study is File Transfer Protocol (FTP). In large-scale networks (120 nodes) OLSR shows better performance, while in smaller-scale networks (50 nodes) AODV shows a lower packet drop rate and OLSR shows better throughput.

A New Approach for Counting Passersby Utilizing Space-Time Images

Understanding the number of people and their flow is useful for improving institutional management and a company's sales. This paper introduces an automated method for counting passersby using virtual vertical measurement lines. The process of recognizing a passerby is carried out using an image sequence obtained from a USB camera. A space-time image represents the human regions, which are processed using segmentation. To handle the problem of mismatching, different color spaces are used to perform the template matching, automatically choosing the best match to determine passerby direction and speed. A relation between passerby speed and the human-pixel area is used to distinguish one passerby from two. In the experiment, the camera is fixed at the entrance door of the hall in a side-viewing position. Finally, experimental results verify the effectiveness of the presented method, which correctly detects passersby and counts them together with their direction at an accuracy of 97%.
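A small illustrative sketch of the matching idea is shown below: template matching on a space-time slice in more than one colour space, keeping whichever match scores best. The image data are random stand-ins and the pipeline is not the authors' full method.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
space_time = rng.integers(0, 256, (120, 400, 3), dtype=np.uint8)   # stand-in slice
template = space_time[40:80, 100:140].copy()                       # stand-in passerby region

best = None
for code in (None, cv2.COLOR_BGR2HSV, cv2.COLOR_BGR2YCrCb):
    img = space_time if code is None else cv2.cvtColor(space_time, code)
    tmpl = template if code is None else cv2.cvtColor(template, code)
    result = cv2.matchTemplate(img, tmpl, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if best is None or max_val > best[0]:
        best = (max_val, max_loc, code)            # keep the best-scoring colour space

print("best score:", best[0], "at", best[1], "colour-space code:", best[2])
```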

Hand Gesture Recognition: Sign to Voice System (S2V)

Hand gestures are one of the typical methods used in sign language for non-verbal communication. They are most commonly used by people who have hearing or speech problems to communicate among themselves or with other people. Various sign language systems have been developed by manufacturers around the globe, but they are neither flexible nor cost-effective for the end users. This paper presents a system prototype that is able to automatically recognize sign language to help hearing people communicate more effectively with the hearing- or speech-impaired. The Sign to Voice system prototype, S2V, was developed using a feed-forward neural network for two-sequence sign detection. Different sets of universal hand gestures were captured from a video camera and used to train the neural network for classification. The experimental results show that the neural network achieved satisfactory results for sign-to-voice translation.
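A hedged sketch of the classification stage only is given below, using scikit-learn's MLPClassifier on made-up gesture feature vectors; the feature extraction from video, the two-sequence handling and the voice output are not modelled here, and the dimensions are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_signs, n_samples, n_features = 5, 200, 30          # assumed feature dimensionality
labels = rng.integers(0, n_signs, n_samples)
features = rng.normal(0, 1, (n_samples, n_features)) + labels[:, None]  # toy clusters

# Feed-forward network mapping gesture features to sign labels.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(features[:150], labels[:150])                 # train on captured gestures
print("held-out accuracy:", clf.score(features[150:], labels[150:]))
```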