Study of Explicit Finite Difference Method in One-Dimensional System

One of the most important parameters in petroleum reservoirs is the pressure distribution along the reservoir, as pressure varies with time and location. A popular way to determine the pressure distribution in a reservoir under the unsteady-state flow regime is to apply Darcy's equation and solve it numerically; numerical reservoir simulation is based on such solutions of the partial differential equations (PDEs) representing multiphase fluid flow. In this work, the pressure profile in a one-dimensional system is obtained by solving Darcy's equation explicitly. Changes in the pressure profile are investigated in three situations: changes in section length, changes in time step, and time approaching infinity. The effects of these changes on the pressure profile are shown and discussed in the paper.
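As a rough illustration of the explicit approach (a sketch under assumed values, not the paper's reservoir data), the snippet below advances the 1D pressure diffusion equation p_t = η·p_xx, the form that Darcy's law combined with mass conservation yields for slightly compressible flow, using forward-time, centered-space differences. The diffusivity, grid, time step, and boundary pressures are all illustrative.

```python
import numpy as np

# Explicit (forward-time, centered-space) scheme for p_t = eta * p_xx.
# All physical values below are assumed for illustration.
eta = 1.0e-3              # hydraulic diffusivity, m^2/s (assumed)
L, nx = 100.0, 51         # reservoir length (m) and number of grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / eta    # keep r = eta*dt/dx^2 <= 0.5 for stability

p = np.full(nx, 20.0e6)               # initial reservoir pressure, Pa
p_left, p_right = 15.0e6, 20.0e6      # e.g., a producing well at x = 0

r = eta * dt / dx**2
for step in range(2000):
    p[0], p[-1] = p_left, p_right
    # new value depends only on the previous time level (explicit update)
    p[1:-1] = p[1:-1] + r * (p[2:] - 2.0 * p[1:-1] + p[:-2])

print(p[:5])   # pressure near the producing end after 2000 steps
```

Because the scheme is explicit, the time step is bounded by the stability condition r = η·Δt/Δx² ≤ 1/2, which is why varying the section length and the time step changes the computed profile; as t → ∞ the profile settles to the linear steady state between the boundary pressures.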

A New Efficient RNS Reverse Converter for the 4-Moduli Set {2^n, 2^n + 1, 2^n − 1, 2^(2n+1) − 1}

In this paper, we propose a new efficient reverse converter for the 4-moduli set {2^n, 2^n + 1, 2^n − 1, 2^(2n+1) − 1} based on a modified Chinese Remainder Theorem and Mixed Radix Conversion. The resulting architecture is further reduced to obtain a reverse converter that utilizes only carry-save adders, a multiplexer, and carry-propagate adders. The proposed converter has an area cost of (12n + 2) FAs and (5n + 1) HAs with a delay of (9n + 6)t_FA + t_MUX. Compared with the state of the art, our proposal is faster, at the expense of slightly more hardware resources. Further, the computed area-time-squared (AT^2) metric indicates that our proposed scheme outperforms the state-of-the-art reverse converter.
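For intuition, the behavioral sketch below performs reverse conversion for this moduli set with the textbook CRT; the paper's contribution is an adder-level architecture based on a modified CRT and MRC, which this high-level code does not model. The moduli are pairwise coprime for n ≥ 2, and the choice of n and the test value are arbitrary.

```python
# Behavioral sketch of RNS reverse conversion for
# {2^n, 2^n + 1, 2^n - 1, 2^(2n+1) - 1} via the classic CRT.

def moduli_set(n):
    return [2**n, 2**n + 1, 2**n - 1, 2**(2*n + 1) - 1]

def to_rns(x, moduli):
    return [x % m for m in moduli]

def from_rns(residues, moduli):
    # CRT: X = sum(r_i * M_i * inv(M_i) mod m_i) mod M, with M_i = M / m_i
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse (Python 3.8+)
    return x % M

n = 4
moduli = moduli_set(n)                 # [16, 17, 15, 511]
x = 123456
assert from_rns(to_rns(x, moduli), moduli) == x
print(to_rns(x, moduli), "->", from_rns(to_rns(x, moduli), moduli))
```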

Theoretical Analysis of the Effect of Accounting for Special Methods in Similarity-Based Cohesion Measurement

Class cohesion is an important object-oriented software quality attribute that refers to the degree of relatedness among class attributes and methods. Several class cohesion measures have been proposed in the literature, but for most of them the impact of considering special methods (i.e., constructors, destructors, and access and delegation methods) in cohesion calculation has not been thoroughly studied theoretically. In this paper, we address this issue for three popular similarity-based class cohesion measures. For each of the considered measures, we theoretically study the impact of including or excluding special methods on the values obtained by applying the measure. The study is based on analyzing the definitions and formulas proposed for the measures. The results show that including or excluding special methods has a considerable effect on the obtained cohesion values and that this effect varies from one measure to another. The study demonstrates the importance of specifying which types of methods are to be accounted for when proposing a similarity-based cohesion measure.
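To see why special methods matter, consider the toy computation below. The measure used, mean pairwise Jaccard similarity of the attribute sets referenced by methods, is a simplified stand-in in the spirit of similarity-based measures, not any of the three measures analyzed in the paper, and the class is invented.

```python
from itertools import combinations

# Toy illustration: cohesion as the mean pairwise Jaccard similarity of
# the attribute sets that methods reference (a simplified stand-in).

def cohesion(methods):
    pairs = list(combinations(methods.values(), 2))
    if not pairs:
        return 1.0
    sim = lambda a, b: len(a & b) / len(a | b) if a | b else 0.0
    return sum(sim(a, b) for a, b in pairs) / len(pairs)

methods = {
    "__init__": {"x", "y", "z"},   # constructor touches every attribute
    "get_x":    {"x"},             # access method
    "scale":    {"x", "y"},
    "report":   {"z"},
}

special = {"__init__", "get_x"}
regular = {k: v for k, v in methods.items() if k not in special}

print("including special methods:", round(cohesion(methods), 3))   # ~0.306
print("excluding special methods:", round(cohesion(regular), 3))   # 0.0
```

The constructor, which touches every attribute, inflates the pairwise similarities; excluding it and the accessor leaves only the two dissimilar regular methods, so the value drops sharply. The direction and size of such shifts are exactly what the paper analyzes measure by measure.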

A Variable Stiffness Approach to Vibration Control

This work introduces a new concept for controlling mechanical vibrations via a variable-stiffness coil spring. The concept relies on fitting a screw through the spring to change the number of active spring coils. A prototype has been built and tested, with promising results toward an innovation in the field of vibration control.
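The underlying relation can be sketched from the standard helical-spring rate formula k = G·d^4 / (8·D^3·N_a): engaging the screw reduces the number of active coils N_a and therefore raises the stiffness k. The material and geometry values below are illustrative, not the prototype's.

```python
import math

# Helical coil spring rate: k = G*d^4 / (8*D^3*N_a).
# Fewer active coils (screw engaged further) -> stiffer spring.
G = 79e9        # shear modulus of spring steel, Pa (assumed)
d = 0.004       # wire diameter, m (assumed)
D = 0.030       # mean coil diameter, m (assumed)

def stiffness(active_coils):
    return G * d**4 / (8 * D**3 * active_coils)

for n_active in (10, 8, 6, 4):
    k = stiffness(n_active)
    f_n = math.sqrt(k / 1.0) / (2 * math.pi)   # natural frequency, 1 kg mass
    print(f"N_a = {n_active:2d}:  k = {k/1e3:6.2f} kN/m,  f_n = {f_n:5.1f} Hz")
```

Halving the number of active coils doubles the rate and raises the natural frequency by a factor of √2, which indicates how such a device could detune a system away from an excitation frequency.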

The Use of an Ontology Framework for Automating Digital Forensics Investigation

One of the main goals of a computer forensic analyst is to determine the cause and effect of the acquired digital evidence in order to obtain information relevant to the case being handled. To get fast and accurate results, this paper discusses an approach known as the Ontology Framework. This model uses a structured hierarchy of layers that connects the various investigative search activities so that computer forensic analysis can be carried out automatically. Two main layers are used, namely Analysis Tools and Operating System. By using the concept of ontology, both layers are designed to help the investigator perform the acquisition of digital evidence automatically. The automation methodology of this research utilizes forward chaining, whereby the system searches through the investigative steps and structures them automatically in accordance with the rules of the ontology.
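A minimal forward-chaining loop of the kind described might look as follows; the rules and facts are hypothetical placeholders for ontology-derived investigative steps, not the paper's actual rule base.

```python
# Minimal forward chaining: a rule fires when all its premises are in the
# fact base, adding its conclusion, until no rule can fire (fixed point).
# Rule and fact names are hypothetical investigative steps.

rules = [
    ({"os_identified"},                      "select_analysis_tool"),
    ({"select_analysis_tool", "disk_image"}, "extract_artifacts"),
    ({"extract_artifacts"},                  "build_timeline"),
    ({"build_timeline"},                     "report_evidence"),
]

facts = {"os_identified", "disk_image"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            print("fired:", conclusion)
            changed = True
```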

Migration of a Relational Database (RDB) to an Object-Relational Database (ORDB)

This paper proposes an approach for translating an existing relational database (RDB) schema into an ORDB schema. The transition is done with methods that extract various constructs from an RDB: aggregations, associations between tables, and reflexive relationships. These methods can even extract inheritance, which previous reverse-engineering processes could not recognize as such; in this respect our approach goes beyond earlier studies of the RDB-to-ORDB transition. The approach creates a New Data Model (NDM) that stores the RDB in the form of a structured table; from the NDM we build a navigational model to simplify object implementation, from which we develop the different types. From these types we proceed to the last step, the creation of tables. None of these steps requires human intervention: everything is done automatically, and a prototype has already been created that demonstrates the effectiveness of this approach.
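One of the translation rules can be sketched as follows: a child table whose primary key is also a foreign key to a parent table is mapped to a subtype, which is the inheritance case the approach highlights. The table definitions and the Oracle-style DDL strings are illustrative assumptions, not the authors' NDM pipeline.

```python
# Hedged sketch of a single RDB -> ORDB rule: shared-primary-key tables
# become a supertype/subtype pair (the inheritance case).

def to_object_type(name, columns, supertype=None):
    cols = ",\n  ".join(f"{c} {t}" for c, t in columns)
    if supertype:
        return f"CREATE TYPE {name}_t UNDER {supertype}_t (\n  {cols}\n);"
    return f"CREATE TYPE {name}_t AS OBJECT (\n  {cols}\n) NOT FINAL;"

person = [("id", "NUMBER"), ("name", "VARCHAR2(50)")]
employee = [("salary", "NUMBER")]   # PK of EMPLOYEE is also an FK to PERSON

print(to_object_type("person", person))
print(to_object_type("employee", employee, supertype="person"))
print("CREATE TABLE employees OF employee_t;")
```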

Generation of Photo-Mosaic Images through Block Matching and Color Adjustment

Mosaic refers to a technique that creates an image by assembling many small materials in various colors. This paper presents an automatic algorithm that produces a photo-mosaic image from a collection of photos. The algorithm is composed of four steps: partition and feature extraction, block matching, redundancy removal, and color adjustment. The input image is partitioned into small blocks from which features are extracted. Each block is matched to the most similar photo in the database by comparing blocks using Euclidean distance. The intensity of each tile is adjusted to enhance similarity by replacing its lightness values with those of the corresponding block. Further, image quality is improved by minimizing the redundancy of tiles in adjacent blocks. Experimental results show that the proposed algorithm performs well in both quantitative and qualitative analysis.
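A compact sketch of the block-matching and color-adjustment steps (redundancy removal omitted for brevity) might look like this; the feature is simply the mean RGB of a block, and the random image and tile library stand in for a real photo database.

```python
import numpy as np

# Sketch: partition -> mean-RGB feature -> nearest tile by Euclidean
# distance -> brightness adjustment toward the block.

def build_mosaic(image, tiles, block=16):
    h = (image.shape[0] // block) * block
    w = (image.shape[1] // block) * block
    out = np.empty((h, w, 3))
    feats = np.array([t.mean(axis=(0, 1)) for t in tiles])  # mean RGB per tile
    for y in range(0, h, block):
        for x in range(0, w, block):
            blk = image[y:y+block, x:x+block].astype(float)
            mean = blk.mean(axis=(0, 1))
            i = np.argmin(((feats - mean)**2).sum(axis=1))   # block matching
            tile = tiles[i].astype(float)
            tile += mean - tile.mean(axis=(0, 1))            # color adjustment
            out[y:y+block, x:x+block] = np.clip(tile, 0, 255)
    return out.astype(np.uint8)

rng = np.random.default_rng(0)
image = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
tiles = [rng.integers(0, 256, (16, 16, 3), dtype=np.uint8) for _ in range(50)]
print(build_mosaic(image, tiles).shape)   # (64, 64, 3)
```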

High Capacity Reversible Watermarking through Interpolated Error Shifting

Reversible watermarking, which not only protects the copyright but also preserves the original quality of the digital content, has been intensively studied, and demand for it has increased. In this paper, we propose a reversible watermarking scheme based on interpolation-error shifting and error pre-compensation. The intensity of a pixel is interpolated from the intensities of neighboring pixels, and the histogram of differences between the interpolated and original intensities is obtained and modified to embed the watermark message. By restoring the difference histogram, the embedded watermark is extracted, and the original image is recovered by compensating for the interpolation error. Overflow and underflow are prevented by error pre-compensation. To demonstrate its performance, the proposed algorithm is compared with other methods on various test images.
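The histogram mechanics can be illustrated with the 1D toy below: each target pixel is predicted from its two (unmodified) neighbors, a zero interpolation error is expanded to embed one bit, and positive errors are shifted up to keep the bins distinct. A real scheme works in 2D and adds the pre-compensation step for overflow and underflow; the pixel values here are invented.

```python
import numpy as np

# 1-D toy of interpolation-error shifting (no pre-compensation shown).
def embed(pixels, bits):
    out, k = pixels.copy(), 0
    for i in range(1, len(pixels) - 1, 2):      # targets between fixed pixels
        pred = (int(pixels[i-1]) + int(pixels[i+1])) // 2
        e = int(pixels[i]) - pred
        if e == 0 and k < len(bits):
            out[i] = pred + bits[k]; k += 1     # expand the zero-error bin
        elif e > 0:
            out[i] = pixels[i] + 1              # shift positive errors up
    return out

def extract(marked):
    bits = []
    for i in range(1, len(marked) - 1, 2):
        pred = (int(marked[i-1]) + int(marked[i+1])) // 2
        e = int(marked[i]) - pred
        if e in (0, 1):                         # errors 0/1 carry a bit
            bits.append(e)
    return bits

pixels = np.array([50, 50, 50, 51, 50, 50, 50, 52, 50])
marked = embed(pixels, [1, 0])
print(extract(marked))    # [1, 0]; reversing the shifts restores `pixels`
```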

Multi-Objective Multi-Mode Resource-Constrained Project Scheduling Problem by Preemptive Fuzzy Goal Programming

This research proposes a preemptive fuzzy goal programming model for the multi-objective multi-mode resource-constrained project scheduling problem. The objectives are minimization of the total time and the total cost of the project. The objective in a multi-mode resource-constrained project scheduling problem is often minimization of makespan alone; however, time and cost should be considered together, with different priority levels. Moreover, conventional cost objective functions do not include all cost elements of a project, and an incomplete total project cost introduces error into the computed project schedule. In this research, preemptive fuzzy goal programming is presented to solve the multi-objective multi-mode resource-constrained project scheduling problem. It can find a compromise solution to the problem, and it is also flexible enough to be adjusted to find a variety of alternative solutions.
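The preemptive fuzzy-goal idea can be sketched as follows: each objective gets a linear membership function (1 when the aspiration level is met, 0 at the worst tolerable value), and candidates are compared lexicographically by priority level. The candidate schedules and goal limits below are invented for illustration; the actual model chooses activity modes subject to resource constraints.

```python
# Linear fuzzy membership for a minimization goal.
def membership(value, aspiration, tolerance):
    if value <= aspiration:
        return 1.0
    if value >= tolerance:
        return 0.0
    return (tolerance - value) / (tolerance - aspiration)

# (makespan in days, cost in $k) for candidate mode assignments (invented)
candidates = {"A": (20, 95), "B": (24, 80), "C": (20, 90)}

def score(tc):
    t, c = tc
    mu_time = membership(t, aspiration=18, tolerance=30)    # priority 1
    mu_cost = membership(c, aspiration=75, tolerance=110)   # priority 2
    return (mu_time, mu_cost)   # tuple order encodes preemptive priority

for name, tc in candidates.items():
    print(name, [round(m, 3) for m in score(tc)])
print("preemptively preferred:",
      max(candidates, key=lambda k: score(candidates[k])))
```

Candidate B has the best cost but loses on the priority-1 time goal, so the lexicographic comparison prefers C: the time goal is satisfied first, and only then is cost used to break the tie.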

Shopping Cart System: Load Balancing and Fault Tolerance in the OSGi Service Platform

The main purpose of this paper is to find a simple solution for load balancing and fault tolerance in OSGi. The challenge is to implement a highly available web application, such as a shopping cart system, with load balancing and fault tolerance, without having to change the core of OSGi.

Software Vulnerability Markets: Discoverers and Buyers

Some of the key aspects of vulnerability—discovery, dissemination, and disclosure—have received some attention recently. However, the role of interaction among the vulnerability discoverers and vulnerability acquirers has not yet been adequately addressed. Our study suggests that a major percentage of discoverers, a majority in some cases, are unaffiliated with the software developers and thus are free to disseminate the vulnerabilities they discover in any way they like. As a result, multiple vulnerability markets have emerged. In some of these markets, the exchange is regulated, but in others, there is little or no regulation. In recent vulnerability discovery literature, the vulnerability discoverers have remained anonymous individuals. Although there has been an attempt to model the level of their efforts, information regarding their identities, modes of operation, and what they are doing with the discovered vulnerabilities has not been explored. Reports of buying and selling of the vulnerabilities are now appearing in the press; however, the existence of such markets requires validation, and the natures of the markets need to be analyzed. To address this need, we have attempted to collect detailed information. We have identified the most prolific vulnerability discoverers throughout the past decade and examined their motivation and methods. A large percentage of these discoverers are located in Eastern and Western Europe and in the Far East. We have contacted several of them in order to collect firsthand information regarding their techniques, motivations, and involvement in the vulnerability markets. We examine why many of the discoverers appear to retire after a highly successful vulnerability-finding career. The paper identifies the actual vulnerability markets, rather than the hypothetical ideal markets that are often examined. The emergence of worldwide government agencies as vulnerability buyers has significant implications. We discuss potential factors that can impact the risk to society and the need for detailed exploration.

Project and Module Based Teaching and Learning

This paper proposes a new teaching and learning approach: project and module based teaching and learning (PMBTL). The PMBTL approach incorporates the merits of project/problem based and module based learning methods while overcoming their limitations. The correlation between teaching, learning, practice, and assessment is emphasized in this approach, and new methods have been proposed accordingly. The distinct features of these new methods differentiate the PMBTL approach from conventional teaching approaches. Evaluation of this approach in practical teaching and learning activities demonstrates its effectiveness and stability in improving the performance and quality of teaching and learning. The approach proposed in this paper is also applicable to the design of other teaching units.

Semantically Enriched Web Usage Mining for Personalization

The continuous growth in the size of the World Wide Web has resulted in intricate Web sites, demanding enhanced user skills and more sophisticated tools to help users find the desired information. To make the Web more user friendly, it is necessary to provide personalized services and recommendations to the user. Many Web usage mining techniques have been applied to discover interesting and frequent navigation patterns from Web server logs. The recommendation accuracy of usage-based techniques can be improved by integrating Web site content and site structure into the personalization process. Herein, we propose a semantically enriched Web Usage Mining method for Personalization (SWUMP), an extension of the solely usage-based technique that combines the fields of Web Usage Mining and the Semantic Web. In the proposed method, we enrich the undirected graph derived from usage data with rich semantic information extracted from the Web pages and the Web site structure. The experimental results show that SWUMP generates accurate recommendations, achieving 10-20% better accuracy than the solely usage-based model, and addresses the new-item problem inherent to solely usage-based techniques.
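A minimal sketch of the hybrid idea, under assumed scores and weights rather than SWUMP's actual formulation: blend a usage-based score with a semantic similarity score so that a page with no usage history can still be recommended.

```python
# Blend usage-based and semantic scores; pages, scores, and the 0.6/0.4
# weighting are hypothetical.
def recommend(current_page, usage_score, semantic_sim, alpha=0.6):
    pages = set(usage_score) | set(semantic_sim)
    blended = {
        p: alpha * usage_score.get(p, 0.0)
           + (1 - alpha) * semantic_sim.get(p, 0.0)
        for p in pages if p != current_page
    }
    return sorted(blended, key=blended.get, reverse=True)

usage_score = {"/laptops": 0.9, "/phones": 0.4}         # from server logs
semantic_sim = {"/laptops": 0.7, "/tablets-new": 0.8}   # from page semantics

# "/tablets-new" has no usage history (the new-item problem) but is still
# ranked because of its semantic similarity to the current page.
print(recommend("/computers", usage_score, semantic_sim))
```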

Software Effort Estimation Models Using Radial Basis Function Network

Software effort estimation is the process of estimating the effort required to develop software. From the estimated effort, the cost and schedule required to develop the software can be determined. An accurate estimate helps the developer allocate resources appropriately in order to avoid cost and schedule overruns. Several methods are available for estimating effort, among which soft-computing-based methods play a prominent role. Software cost estimation involves a great deal of uncertainty, and among soft computing methods, neural networks are good at handling uncertainty. In this paper, the Radial Basis Function Network (RBFN) is compared with the back-propagation network; the results are validated on six data sets, and RBFN is found to be better suited to effort estimation. The results are validated using two tests: an error test and a statistical test.
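A minimal RBFN sketch is shown below: Gaussian hidden units at fixed centers, with the output-layer weights fitted by linear least squares. The single-feature (size-in-KLOC to effort) data set is synthetic; the paper validates on six real data sets.

```python
import numpy as np

# RBFN: Gaussian hidden layer + linear output layer fit by least squares.
def rbf_design(X, centers, width):
    d2 = ((X[:, None, :] - centers[None, :, :])**2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width**2))

rng = np.random.default_rng(1)
X = rng.uniform(1, 100, (30, 1))                  # project size, KLOC (synthetic)
y = 2.5 * X[:, 0]**1.05 + rng.normal(0, 5, 30)    # effort, person-months

centers = np.linspace(1, 100, 8)[:, None]         # 8 fixed RBF centers
width = 15.0
H = rbf_design(X, centers, width)                 # hidden-layer activations
w, *_ = np.linalg.lstsq(H, y, rcond=None)         # output weights

X_new = np.array([[40.0]])
pred = rbf_design(X_new, centers, width) @ w
print(f"predicted effort for 40 KLOC: {pred[0]:.1f} person-months")
```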

Comparative Study on Swarm Intelligence Techniques for Biclustering of Microarray Gene Expression Data

Microarray gene expression data play a vital role in studying biological processes, gene regulation, and disease mechanisms. A bicluster in gene expression data is a subset of genes exhibiting consistent patterns under a subset of conditions, and finding biclusters is an optimization problem. In recent years, swarm intelligence techniques have become popular because many real-world problems are increasingly large, complex, and dynamic. Given the size and complexity of these problems, an optimization technique is needed that can find a near-optimal solution within a reasonable amount of time. In this paper, the algorithmic concepts of the Particle Swarm Optimization (PSO), Shuffled Frog Leaping (SFL), and Cuckoo Search (CS) algorithms are analyzed on four benchmark gene expression datasets. The experimental results show that CS outperforms PSO and SFL on three datasets, while SFL gives better performance on one dataset. This work also determines the biological relevance of the biclusters with Gene Ontology in terms of function, process, and component.
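As a sketch of one of the compared techniques, the binary-flavored PSO below searches for a row/column subset minimizing the Cheng-Church mean squared residue, a common bicluster fitness; the data matrix, particle encoding, and PSO constants are illustrative choices, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(0, 1, (30, 20))
data[:10, :8] += np.arange(8)          # plant a coherent (additive) bicluster

def msr(rows, cols):                   # Cheng-Church mean squared residue
    if rows.sum() < 2 or cols.sum() < 2:
        return np.inf
    s = data[np.ix_(rows, cols)]
    res = s - s.mean(1, keepdims=True) - s.mean(0, keepdims=True) + s.mean()
    return (res**2).mean()

n = data.shape[0] + data.shape[1]      # one probability per row and column
pos = rng.random((20, n))              # 20 particles
vel = np.zeros((20, n))
pbest, pbest_f = pos.copy(), np.full(20, np.inf)
gbest, gbest_f = pos[0].copy(), np.inf

for it in range(100):
    for i in range(20):
        sel = rng.random(n) < pos[i]   # sample a row/column bit-string
        f = msr(sel[:30], sel[30:])
        if f < pbest_f[i]:
            pbest_f[i], pbest[i] = f, pos[i].copy()
        if f < gbest_f:
            gbest_f, gbest = f, pos[i].copy()
    r1, r2 = rng.random((2, 20, n))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + 0.1 * vel, 0.0, 1.0)

print(f"best MSR found: {gbest_f:.3f}")   # lower is more coherent
```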

Restructuring of XML Documents in the Form of Ontologies

Intense use of the Web has made it a constantly changing environment whose content is in permanent evolution to meet demand. Standards have accompanied this evolution, moving from those that mix data with presentation without any structuring, such as HTML, to those that separate the two and give more importance to the structural aspect of the content, such as XML and its derivatives. Currently, with the appearance of the Semantic Web, ontologies are becoming increasingly present on the Web, and standards that allow their representation, such as OWL and RDF/RDFS, are gaining momentum. This paper provides an automatic method that converts an XML Schema document into an ontology represented in OWL.
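One family of mapping rules can be sketched as follows: each xs:complexType becomes an owl:Class, and each of its simple-typed xs:element children becomes an owl:DatatypeProperty, emitted here as Turtle. A full converter must also handle nesting, attributes, and object properties; the schema fragment and ontology namespace are invented.

```python
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"
xsd = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:complexType name="Book">
    <xs:sequence>
      <xs:element name="title" type="xs:string"/>
      <xs:element name="year"  type="xs:integer"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>"""

root = ET.fromstring(xsd)
lines = ["@prefix :     <http://example.org/onto#> .",
         "@prefix owl:  <http://www.w3.org/2002/07/owl#> .",
         "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .",
         "@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> ."]
for ct in root.iter(f"{XS}complexType"):
    cls = ct.get("name")
    lines.append(f":{cls} a owl:Class .")                 # complexType -> Class
    for el in ct.iter(f"{XS}element"):                    # element -> property
        dtype = el.get("type").replace("xs:", "xsd:")
        lines.append(f":{el.get('name')} a owl:DatatypeProperty ; "
                     f"rdfs:domain :{cls} ; rdfs:range {dtype} .")
print("\n".join(lines))
```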

A Distributed Approach to Extract High Utility Itemsets from XML Data

This paper investigates a new data mining capability: mining High Utility Itemsets (HUIs) in a distributed environment. Much existing research in data mining considers only the presence or absence of items and ignores semantic measures such as item weight or cost; HUI mining algorithms evolved to address this. HUI mining is a kind of utility mining that aims to identify itemsets whose utility satisfies a given threshold. However, mining HUIs in a distributed environment, and mining them from XML data, have not yet been explored. In this work, a novel approach is proposed to mine HUIs from XML data in a distributed environment. The work utilizes the Service Oriented Computing (SOC) paradigm, which provides Knowledge as a Service (KaaS): the interesting patterns are provided via web services, with the help of a knowledge server, to answer consumer queries. The performance of the approach is evaluated on various databases in terms of execution time and memory consumption.
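The utility computation at the heart of HUI mining can be sketched as below: the utility of an item in a transaction is its quantity times its external utility (e.g., unit profit), and an itemset is high utility when its summed utility across all transactions meets the threshold. This brute-force enumeration ignores the distributed and XML layers, and the transactions, profits, and threshold are invented.

```python
from itertools import combinations

transactions = [                  # item -> purchased quantity (invented)
    {"laptop": 1, "mouse": 2},
    {"mouse": 3, "pad": 2},
    {"laptop": 2, "pad": 1, "mouse": 1},
]
profit = {"laptop": 50, "mouse": 4, "pad": 3}   # external utility per unit

def utility(itemset, t):
    # utility of an itemset in one transaction; 0 if not fully contained
    if not all(i in t for i in itemset):
        return 0
    return sum(t[i] * profit[i] for i in itemset)

items, threshold = sorted(profit), 60
for r in range(1, len(items) + 1):
    for itemset in combinations(items, r):
        u = sum(utility(itemset, t) for t in transactions)
        if u >= threshold:        # high utility itemset
            print(set(itemset), "utility =", u)
```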

Level Set and Morphological Operation Techniques Applied to Dental Image Segmentation

Medical image analysis is one of the most important applications of computer image processing. Analyzing medical images involves several stages, of which segmentation is among the most challenging and important. In this paper, a segmentation method is proposed for dental radiograph images. Thresholding is applied to simplify the images, and morphological opening of the binary image is performed to eliminate unnecessary regions. Furthermore, horizontal and vertical integral projection techniques are used to extract each individual tooth from the radiographs. Segmentation is then performed by applying the level set method to each extracted tooth image. Experimental results with 90% accuracy demonstrate that the proposed method achieves high accuracy and promising results.
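The pre-segmentation stages can be sketched with OpenCV as follows; the synthetic "radiograph" (three bright ellipses plus noise) stands in for real data, and the final level-set step is omitted (scikit-image's morphological_chan_vese is one available implementation).

```python
import numpy as np
import cv2

# Synthetic stand-in for a dental radiograph: three bright "teeth" + noise.
img = np.zeros((100, 200), dtype=np.uint8)
for cx in (40, 100, 160):
    cv2.ellipse(img, (cx, 50), (20, 35), 0, 0, 360, 200, -1)
noise = np.random.default_rng(3).integers(0, 30, img.shape, dtype=np.uint8)
img = cv2.add(img, noise)

# 1) Otsu thresholding simplifies the image to a binary mask.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 2) Morphological opening removes small, unnecessary regions.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

# 3) Vertical integral projection: zero-sum columns separate the teeth.
v_proj = opened.sum(axis=0)
gaps = np.where(v_proj == 0)[0]
print("separating columns found:", gaps.size > 0)
```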

Survey on Energy Efficient Routing Protocols in Mobile Ad Hoc Networks

A Mobile Ad Hoc Network (MANET) is an infrastructure-less network formed dynamically by an autonomous system of mobile nodes connected via wireless links. Mobile nodes communicate with each other on the fly, and each node also acts as a router. Battery power and bandwidth are very scarce resources in this network, and the network lifetime and connectivity of nodes depend on battery power; therefore, energy is a valuable resource that should be used efficiently. In this paper, we survey various energy-efficient routing protocols, classified on the basis of the approaches they use to minimize energy consumption. The purpose of this paper is to facilitate research by bringing together existing solutions as a basis for developing more energy-efficient routing mechanisms.

Early Requirement Engineering for Design of Learner Centric Dynamic LMS

We present a modeling framework that supports the engineering of early requirements specifications for the design of a learner-centric dynamic Learning Management System (LMS). The framework is based on the i* modeling tool and means-end analysis, and adopts primitive concepts for modeling early requirements (such as actor, goal, and strategic dependency). We show how pedagogical and computational requirements for designing a learner-centric LMS can be adapted into early requirements engineering specifications. Finally, we present a model of Learner Quanta based adaptive courseware. Our early requirements analysis shows how means-end analysis reveals gaps and inconsistencies in early requirements specifications that are by no means trivial to discover without the help of a formal analysis tool.