All-Pairs Shortest-Paths Problem for Unweighted Graphs in O(n^2 log n) Time

Given a simple connected unweighted undirected graph G = (V(G), E(G)) with |V(G)| = n and |E(G)| = m, we present a new algorithm for the all-pairs shortest-path (APSP) problem. The running time of our algorithm is O(n^2 log n). This bound improves on the previous best known O(n^2.376)-time bound of Raimund Seidel (1995) for general graphs. The algorithm presented does not rely on fast matrix multiplication. With slight modifications, our algorithm also solves the APSP problem for unweighted directed graphs in O(n^2 log n) time, improving on the previous best known O(n^2.575)-time bound of Uri Zwick (2002).
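
The abstract does not spell out the algorithm itself. Purely for context (this is the textbook baseline, not the authors' method), unweighted APSP can be solved by one breadth-first search per source in O(nm) time; a minimal sketch:

    from collections import deque

    def apsp_bfs(adj):
        """Unweighted APSP baseline: one BFS per source, O(n*m) overall.
        adj: dict mapping each vertex to an iterable of its neighbours."""
        dist = {}
        for s in adj:
            d = {s: 0}
            q = deque([s])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v not in d:
                        d[v] = d[u] + 1
                        q.append(v)
            dist[s] = d
        return dist

    # Example: path graph 0-1-2; distance from 0 to 2 is 2.
    print(apsp_bfs({0: [1], 1: [0, 2], 2: [1]})[0][2])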

WPRiMA Tool: Managing Risks in Web Projects

Risk management is an essential part of project management and plays a significant role in project success. Many failures associated with Web projects are the consequence of poor awareness of the risks involved and of the lack of process models that can serve as a guideline for the development of Web-based applications; contemporary process models have been devised for the development of conventional software. This paper introduces WPRiMA (Web Project Risk Management Assessment), a tool that implements RIAP, the risk identification architecture pattern model, which focuses on data from the proprietor's and vendor's perspectives. The paper also illustrates how the WPRiMA tool works and how it can be used to calculate the risk level for a given Web project, to generate recommendations that facilitate risk avoidance in a project, and to improve the prospects of early risk management.

Real-Time Hand Tracking and Gesture Recognition System Using Neural Networks

This paper introduces a hand gesture recognition system that recognizes gestures in real time in unconstrained environments. Efforts should be made to adapt computers to our natural means of communication: speech and body language. A simple and fast algorithm using orientation histograms is developed; it recognizes a subset of MAL static hand gestures. The pattern recognition system uses a transform that converts an image into a feature vector, which is then compared with the feature vectors of a training set of gestures. The final system is a Perceptron implementation in MATLAB. This paper includes experiments on 33 hand postures and discusses the results. Experiments show that the system achieves an average recognition rate of 90% and is suitable for real-time applications.
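
As a rough illustration of the orientation-histogram feature described above (a sketch with assumed parameters, not the paper's MATLAB implementation; the Euclidean comparison is also an assumption):

    import numpy as np

    def orientation_histogram(img, bins=36):
        """Histogram of gradient orientations for a grayscale image (2-D array)."""
        gy, gx = np.gradient(img.astype(float))
        angles = np.arctan2(gy, gx)                    # orientation in [-pi, pi]
        mags = np.hypot(gx, gy)
        hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi), weights=mags)
        return hist / (hist.sum() + 1e-9)              # normalized feature vector

    def nearest_gesture(feature, training):
        """training: dict mapping gesture label -> stored feature vector."""
        return min(training, key=lambda g: np.linalg.norm(feature - training[g]))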

Fast Codevector Search Algorithm for 3-D Vector Quantized Codebook

This paper presents a very simple and efficient algorithm for codebook search, which reduces computation a great deal compared to full codebook search. The algorithm is based on sorting and a centroid technique for the search. The results table shows the effectiveness of the proposed algorithm in terms of computational complexity. We also introduce a new performance parameter, the average fractional change in pixel value, since we feel it gives a better understanding of the closeness of the reconstructed image to the original, being related to perception. This new performance parameter takes into consideration the average fractional change in each pixel value.
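
The abstract does not give a formula for the proposed measure; one plausible reading (an assumption, not the authors' definition) is the mean of |original - reconstructed| / original over all pixels:

    import numpy as np

    def avg_fractional_change(original, reconstructed, eps=1e-9):
        """Mean fractional change per pixel between the original and the VQ-reconstructed image."""
        o = original.astype(float)
        r = reconstructed.astype(float)
        return float(np.mean(np.abs(o - r) / (o + eps)))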

Bi-Criteria Latency Optimization of Intra- and Inter-Autonomous System Traffic Engineering

Traffic Engineering (TE) is the process of controlling how traffic flows through a network in order to facilitate efficient and reliable network operations while simultaneously optimizing network resource utilization and traffic performance. TE improves the management of data traffic within a network and provides better utilization of network resources. Many research works consider intra-AS and inter-AS Traffic Engineering separately, but in reality each influences the other, and hence the network performance of both intra- and inter-Autonomous System (AS) traffic is not optimized properly. To achieve better joint optimization of intra- and inter-AS TE, we propose a joint optimization technique that considers intra-AS features during inter-AS TE and vice versa. This work considers an important criterion, latency, both within an AS and between ASes, and proposes a bi-criteria latency optimization model. Overall network performance in terms of latency can thus be improved by this joint optimization technique.
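
The abstract does not state the optimization model itself. One common way to formalize such a bi-criteria latency objective (an illustrative assumption, not the paper's formulation) is a weighted-sum scalarization: minimize lambda * L_intra(f) + (1 - lambda) * L_inter(f) over feasible routings f, where L_intra(f) and L_inter(f) are the intra-AS and inter-AS latencies induced by routing f and 0 <= lambda <= 1 trades the two criteria off; sweeping lambda traces out approximations of the Pareto front between the two latency objectives.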

Design and Simulation of a New Self-Learning Expert System for Mobile Robot

In this paper, we present a novel technique called the Self-Learning Expert System (SLES). Unlike an expert system, where an expert is needed to impart experience and knowledge to create the knowledge base, this technique tries to acquire the experience and knowledge automatically. To show the technique at work, a simulation of a mobile robot navigating through an environment with obstacles is implemented in Visual Basic. The mobile robot moves through this area without colliding with any obstacle and saves the path that it took. If the mobile robot has to go through a similar environment again, it applies this experience to move through more quickly without having to check for collisions.

Seamless Flow of Voluminous Data in High Speed Network without Congestion Using Feedback Mechanism

Continuously growing needs for Internet applications that transmit massive amounts of data have led to the emergence of high-speed networks. Data transfer must take place without congestion, and hence feedback parameters must be sent from the receiver to the sender so as to restrict the sending rate. Even though TCP tries to avoid congestion by restricting the sending rate and window size, it never informs the sender of how much data the path can carry, and it halves the window size at the time of congestion, resulting in decreased throughput, low utilization of the bandwidth, and maximum delay. In this paper, the XCP protocol is used; feedback parameters are calculated based on arrival rate, service rate, traffic rate, and queue size, and the receiver informs the sender about the throughput, the amount of data that can be sent, and the window size adjustment. As a result there is no drastic decrease in window size and a better increase in sending rate, so data flows continuously without congestion, giving a maximum increase in throughput, high utilization of the bandwidth, and minimum delay. The results of the proposed work are presented as graphs of throughput, delay, and window size. Thus, the XCP protocol is illustrated and the various parameters are thoroughly analyzed and presented.
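
For concreteness, the sketch below follows the efficiency controller of the original XCP proposal (Katabi et al.), where aggregate feedback is proportional to spare bandwidth minus a penalty for the standing queue; the constants and variable names come from that proposal and may differ from the exact parameters (arrival rate, service rate, traffic rate, queue size) used in this work:

    def xcp_aggregate_feedback(capacity, input_rate, queue_bytes, avg_rtt,
                               alpha=0.4, beta=0.226):
        """Aggregate feedback per control interval, as in the original XCP router:
        phi = alpha * d * spare_bandwidth - beta * persistent_queue."""
        spare = capacity - input_rate        # bytes/sec of unused link capacity
        return alpha * avg_rtt * spare - beta * queue_bytes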

RFU-Based Computational Unit Design for Reconfigurable Processors

Fully customized hardware provides high performance and low power consumption by specializing tasks in hardware, but it lacks design flexibility since any kind of change requires re-design and re-fabrication. Software-based solutions operate with software instructions, so great flexibility is achieved through the easy development and maintenance of the software code, but this execution of instructions introduces a high overhead in performance and area consumption. In the past few decades, the reconfigurable computing domain has emerged, overcoming the traditional trade-off between flexibility and performance and achieving high performance while maintaining good flexibility. The dramatic gains in chip performance and design flexibility achieved by reconfigurable computing systems depend greatly on the design of their computational units and on how those units are integrated with reconfigurable logic resources. The computational unit of any reconfigurable system plays a vital role in defining its strength. In this paper, an RFU-based computational unit design is presented using tightly coupled, multi-threaded reconfigurable cores. The proposed design has been simulated for VLIW-based architectures, and a high gain in performance has been observed compared to conventional computing systems.

A Novel Digital Watermarking Technique Based on ISB (Intermediate Significant Bit)

The Least Significant Bit (LSB) technique is the earliest technique developed in watermarking, and it is also the simplest, most direct, and most common one. It essentially involves embedding the watermark by replacing the least significant bit of the image data with a bit of the watermark data. The disadvantage of LSB is that it is not robust against attacks. In this study the intermediate significant bit (ISB) is used in order to improve the robustness of the watermarking system. The aim of this model is to replace the watermarked image pixels with new pixels that can protect the watermark data against attacks while keeping the new pixels very close to the original pixels, in order to preserve the quality of the watermarked image. The technique is based on testing the value of the watermark pixel according to the range of each bit plane.
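
As a baseline for comparison (this is the classic LSB embedding that the paper improves on, not the proposed ISB scheme, whose bit-plane range test is not detailed in the abstract):

    import numpy as np

    def embed_lsb(image, watermark_bits):
        """Replace the least significant bit of each pixel with a watermark bit (0 or 1)."""
        flat = image.flatten().astype(np.uint8)
        bits = np.resize(np.asarray(watermark_bits, dtype=np.uint8), flat.shape)
        return ((flat & 0xFE) | bits).reshape(image.shape)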

How Prior Knowledge Affects Users' Understanding of System Requirements

Requirements are critical to system validation as they guide all subsequent stages of systems development. Inadequately specified requirements generate systems that require major revisions or cause system failure entirely. Use cases have become the main vehicle for requirements capture in many current Object-Oriented (OO) development methodologies and a means for developers to communicate with different stakeholders. In this paper we present the results of a laboratory experiment that explored whether different types of use case format are equally effective in facilitating understanding for high-knowledge users. Results showed that the provision of diagrams along with the textual use case descriptions significantly improved user comprehension of system requirements in both familiar and unfamiliar application domains. However, when comparing groups that received textual descriptions accompanied by diagrams at different levels of detail (simple and detailed), we found no significant difference in performance.

Software Maintenance Severity Prediction with Soft Computing Approach

Since the majority of faults are found in a few modules, there is a need to investigate the modules that are severely affected compared to the others, so that proper maintenance can be done on time, especially for critical applications. In this paper, we apply different predictor models to a NASA public-domain defect dataset coded in the Perl programming language. Different machine learning algorithms belonging to the different learner categories of the WEKA project, including a Mamdani-based fuzzy inference system and a neuro-fuzzy system, have been evaluated for modeling maintenance severity, i.e., the impact of fault severity. The results are recorded in terms of accuracy, Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE). They show that the neuro-fuzzy model provides relatively better prediction accuracy than the other models and hence can be used for software maintenance severity prediction.
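
The evaluation measures are standard; for reference, a minimal sketch of accuracy, MAE, and RMSE as used to compare such models (generic definitions, not code from the study):

    import numpy as np

    def evaluate(actual, predicted):
        """Return accuracy (exact matches), MAE, and RMSE for two equal-length arrays."""
        a = np.asarray(actual, dtype=float)
        p = np.asarray(predicted, dtype=float)
        accuracy = float(np.mean(a == p))
        mae = float(np.mean(np.abs(a - p)))
        rmse = float(np.sqrt(np.mean((a - p) ** 2)))
        return accuracy, mae, rmse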

Analytical Study of Component Based Software Engineering

This paper is a survey of current component-based software technologies and a description of the promotion and inhibition factors in CBSE. The features that software components inherit are also discussed, and quality assurance issues in component-based software are catered to. Research on the quality model of a component-based system starts with the study of what components are, CBSE, its development life cycle, and the pros and cons of CBSE. Various attributes are studied and compared in view of existing quality models for general systems and for component-based systems. When describing the quality of a software component, an apt set of quality attributes for the description of the system (or components) should be selected. Finally, the research issues that can be extended are tabulated.

Navigation Pattern Mining Approach Based on the Expectation-Maximization Algorithm

Web usage mining algorithms have been widely utilized for modeling users' web navigation behavior. In this study we present a model for mining users' navigation patterns. The model builds a user model based on the expectation-maximization (EM) algorithm. The EM algorithm is used in statistics for finding maximum-likelihood estimates of parameters in probabilistic models that depend on unobserved latent variables. The experimental results show that with fewer clusters the log-likelihood converges toward lower values, and that the probability of the largest cluster decreases as the number of clusters increases in each treatment.
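
As an illustrative sketch (assuming sessions are encoded as numeric feature vectors; the paper's exact features and EM formulation are not given in the abstract), a Gaussian-mixture EM fit makes the reported quantities concrete: the per-sample log-likelihood and the weight of the largest cluster as the number of clusters varies.

    import numpy as np
    from sklearn.mixture import GaussianMixture   # EM for Gaussian mixture models

    sessions = np.random.rand(500, 8)             # placeholder navigation feature vectors

    for k in (2, 4, 8, 16):
        gm = GaussianMixture(n_components=k, random_state=0).fit(sessions)
        print(k,
              round(gm.score(sessions), 3),       # average log-likelihood per sample
              round(gm.weights_.max(), 3))        # probability mass of the largest cluster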

Combining Variable Ordering Heuristics for Improving Search Algorithms Performance

Variable ordering heuristics are used in constraint satisfaction algorithms. The characteristics of different variable ordering heuristics are complementary; therefore we have tried to combine the advantages of several heuristics to improve the performance of search algorithms for solving constraint satisfaction problems. This paper considers combinations based on products and quotients, and then a newer form of combination based on weighted sums of ratings from a set of base heuristics, some of which result in definite improvements in performance.
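
A minimal sketch of the weighted-sum combination described above (illustrative names and weights; the base heuristics shown, smallest remaining domain and highest degree, are common choices rather than the paper's exact set):

    def combined_score(var, heuristics, weights):
        """Weighted sum of ratings from several base variable-ordering heuristics.
        heuristics: list of functions mapping a variable to a rating (higher = pick sooner)."""
        return sum(w * h(var) for h, w in zip(heuristics, weights))

    def select_variable(unassigned, heuristics, weights):
        return max(unassigned, key=lambda v: combined_score(v, heuristics, weights))

    # Example base heuristics for a CSP variable object with .domain and .degree attributes
    dom = lambda v: 1.0 / len(v.domain)   # prefer the smallest remaining domain
    deg = lambda v: v.degree              # prefer the most constrained variable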

Semi-Automatic Approach for Semantic Annotation

The third phase of the Web, the Semantic Web, requires many web pages to be annotated with metadata; a crucial question is where to acquire these metadata. In this paper we propose a semi-automatic method that annotates the text of documents and web pages and employs a fairly comprehensive knowledge base to categorize instances with respect to an ontology. The approach is evaluated against manual annotations and against one of the most popular annotation tools, which works in the same way as our tool. The approach is implemented in the .NET Framework as an annotation tool for the Semantic Web and uses WordNet as its knowledge base.
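
The paper's tool is implemented on .NET; purely as an illustration of looking a term up in WordNet to obtain coarse category candidates (here via Python's NLTK interface, an assumption rather than the paper's code):

    from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')

    def candidate_categories(term):
        """Return hypernym names for each sense of a term, as coarse ontology candidates."""
        cats = {}
        for sense in wn.synsets(term):
            cats[sense.name()] = [h.lemma_names()[0] for h in sense.hypernyms()]
        return cats

    print(candidate_categories("jaguar"))   # e.g. the feline sense maps to 'big_cat'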

Inefficiency of Data Storing in Physical Memory

Memory forensics is important in digital investigation. The forensics is based on the data stored in physical memory, which involves memory management and processing time. However, current forensic tools do not consider efficiency in terms of storage management and processing time. This paper shows the high redundancy of data found in physical memory, which causes inefficiency in processing time and memory management. The experiment is done using the Borland C compiler on Windows XP with 512 MB of physical memory.
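
As an illustration of how such redundancy can be measured (a generic sketch, not the paper's Borland C experiment), one can hash fixed-size blocks of a raw memory dump and count how many duplicate an earlier block:

    import hashlib
    from collections import Counter

    def redundancy_ratio(dump_path, block_size=4096):
        """Fraction of fixed-size blocks in a raw memory dump that duplicate another block."""
        counts = Counter()
        with open(dump_path, "rb") as f:
            while True:
                block = f.read(block_size)
                if len(block) < block_size:
                    break
                counts[hashlib.sha1(block).hexdigest()] += 1
        total = sum(counts.values())
        return 0.0 if total == 0 else 1.0 - len(counts) / total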

Support Vector Machines for Understanding Lane Color and Sidewalks

Understanding road features such as lanes, the color of lanes, and sidewalks in live video captured from a moving vehicle is essential to building video-based navigation systems. In this paper, we present a novel idea for understanding these road features using support vector machines. Various feature vectors, including the color components of road markings and the differences between two regions (chosen areas of interest, AOIs), are fed into an SVM, which decides the colors of lanes and sidewalks robustly. Experimental results are provided to show the robustness of the proposed idea.
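
A minimal sketch of the classification stage (illustrative feature layout and labels; the paper's actual AOI features are not specified in the abstract):

    import numpy as np
    from sklearn.svm import SVC

    # Each row: mean R, G, B of a road-marking AOI plus its R, G, B difference to a reference AOI.
    X_train = np.array([[230, 230, 235,  90,  95, 100],   # white lane marking
                        [220, 200,  60,  80,  70, -20],   # yellow lane marking
                        [120, 120, 125, -10,  -5,  -5]])  # sidewalk / asphalt
    y_train = ["white_lane", "yellow_lane", "sidewalk"]

    clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
    print(clf.predict([[225, 228, 232, 88, 92, 98]]))     # nearest to the white-lane example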

3D Spatial Interaction with the Wii Remote for Head-Mounted Display Virtual Reality

This research investigates the design of a low-cost 3D spatial interaction approach using the Wii Remote for immersive Head-Mounted Display (HMD) virtual reality. Current virtual reality applications that incorporate the Wii Remote are either desktop virtual reality applications or systems that use large screen displays. However, the requirements of an HMD virtual reality system differ from those of such systems, mainly because in HMD virtual reality the display does not remain at a fixed location: the user views the virtual environment through screens placed in front of the user's eyes, and when the user moves his or her head, these screens move as well. This means that the display has to be updated in real time based on where the user is currently looking. Normal usage of the Wii Remote requires the controller to be pointed in a certain direction, typically towards the display, which is too restrictive for HMD virtual reality systems that ideally require the user to be able to turn around in the virtual environment. Previous work proposed a design to achieve this; however, it suffered from a number of drawbacks. The aim of this study is to find a suitable method of using the Wii Remote for 3D interaction in the space around the user for HMD virtual reality. This paper presents an overview of the issues that had to be considered, the system design, and experimental results.

The New AIMD Congestion Control Algorithm

Congestion control is one of the fundamental issues in computer networks. Without proper congestion control mechanisms there is the possibility of inefficient utilization of resources, ultimately leading to network collapse. Hence congestion control is an effort to adapt the performance of a network to changes in the traffic load without adversely affecting users' perceived utilities. AIMD (Additive Increase Multiplicative Decrease) is the best algorithm among the set of linear algorithms because it offers good efficiency as well as good fairness. Our control model is based on the assumptions of the original AIMD algorithm; we show that both the efficiency and the fairness of AIMD can be improved. We call our approach New AIMD. We present experimental results with TCP that match the expectations of our theoretical analysis.
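
For reference, the classic AIMD window update, the baseline this paper builds on (the proposed New AIMD modifications are not detailed in the abstract):

    def aimd_update(cwnd, loss, a=1.0, b=0.5):
        """Classic AIMD: add 'a' segments per RTT without loss, multiply by (1 - b) on loss."""
        return cwnd * (1.0 - b) if loss else cwnd + a

    cwnd = 10.0
    for loss_event in [False, False, True, False]:
        cwnd = aimd_update(cwnd, loss_event)
        print(round(cwnd, 1))    # 11.0, 12.0, 6.0, 7.0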

A Survey: Clustering Ensembles Techniques

Clustering ensembles combine multiple partitions generated by different clustering algorithms into a single clustering solution. They have emerged as a prominent method for improving the robustness, stability, and accuracy of unsupervised classification solutions. So far, many contributions have been made toward finding a consensus clustering, and one of the major problems in clustering ensembles is the consensus function. In this paper, we first introduce clustering ensembles, the representation of multiple partitions, their challenges, and a taxonomy of combination algorithms. Secondly, we describe consensus functions in clustering ensembles, including hypergraph partitioning, the voting approach, mutual information, co-association-based functions, and the finite mixture model, and explain their advantages, disadvantages, and computational complexity. Finally, we compare the characteristics of clustering ensemble algorithms, such as computational complexity, robustness, simplicity, and accuracy on different datasets, across previous techniques.
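
To make the co-association idea concrete, a minimal sketch of building the co-association matrix, which a consensus function (for example, hierarchical clustering on 1 minus the co-association values) can then partition:

    import numpy as np

    def co_association(labelings):
        """Fraction of base partitions in which each pair of points shares a cluster.
        labelings: list of label arrays, one per base clustering, all of length n."""
        labelings = [np.asarray(l) for l in labelings]
        n = len(labelings[0])
        co = np.zeros((n, n))
        for labels in labelings:
            co += (labels[:, None] == labels[None, :]).astype(float)
        return co / len(labelings)

    # Two base partitions of 4 points; points 0 and 1 co-cluster in both, so their entry is 1.0.
    print(co_association([[0, 0, 1, 1], [0, 0, 0, 1]])[0, 1])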