Towards Development of Solution for Business Process-Oriented Data Analysis

This paper proposes a modeling methodology for the development of data analysis solutions. The author introduces an approach to addressing data warehousing issues at the enterprise level. The methodology covers the requirements elicitation and analysis stage as well as the initial design of the data warehouse. The paper reviews an extended business process model that satisfies the needs of data warehouse development. The author considers the use of business process models necessary, as they reflect both enterprise information systems and business functions, which are important for data analysis. The described approach divides development into three steps, each elaborating the models at a different level of detail, and makes it possible to gather requirements and present them to business users in an accessible manner.

Digital Image Watermarking in the Wavelet Transform Domain

In this paper, we first characterize the most important and distinguishing features of wavelet-based watermarking schemes, having studied the overwhelming number of algorithms proposed in the literature. One application scenario, copyright protection, is considered, and building on the experience gained, we implemented two distinct watermarking schemes. A detailed comparison and the obtained results are presented and discussed. We conclude that Joo's [1] technique is more robust against standard noise attacks than Dote's [2] technique.
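
As a concrete illustration of the wavelet-domain embedding that such schemes share, the sketch below adds a watermark to second-level detail coefficients. It is a generic minimal example, not Joo's or Dote's actual algorithm; the choice of subband, the strength parameter alpha, and the Haar wavelet are assumptions.

```python
import numpy as np
import pywt

def embed_watermark(image, watermark_bits, alpha=0.05, wavelet="haar"):
    """Embed a +/-1 watermark sequence into level-2 horizontal detail coefficients.

    Generic wavelet-domain embedding sketch; subband choice, alpha, and
    wavelet are assumptions, not a published scheme's parameters.
    """
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=2)
    cA, (cH2, cV2, cD2), (cH1, cV1, cD1) = coeffs
    flat = cH2.flatten()
    n = min(len(watermark_bits), len(flat))
    # Multiplicative-style embedding: c' = c * (1 + alpha * w)
    flat[:n] *= (1.0 + alpha * np.asarray(watermark_bits[:n], dtype=float))
    cH2 = flat.reshape(cH2.shape)
    return pywt.waverec2([cA, (cH2, cV2, cD2), (cH1, cV1, cD1)], wavelet)
```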

Improving the Effectiveness of Software Testing through Test Case Reduction

This paper proposes a new technique for improving the efficiency of software testing, based on a conventional attempt to reduce the test cases that have to be executed for any given software. The approach exploits the advantage in regression testing that fewer test cases lessen the time consumed by testing as a whole. The technique also offers a means to generate test cases automatically; compared with a technique in the literature where the tester has no option but to generate test cases manually, it provides a better option. For test case reduction, the technique uses simple algebraic conditions to assign fixed values to variables (maximum, minimum, and constant variables). In this way, variable values are limited to a definite range, resulting in fewer possible test cases to process. The technique can also be applied to program loops and arrays.
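
The reduction idea can be sketched as follows: each variable is restricted to its boundary values plus a representative constant, so the combinatorial space of test cases collapses. Variable names, ranges, and the choice of exactly three candidate values per variable are illustrative assumptions, not the paper's algorithm.

```python
from itertools import product

def reduced_test_cases(variables):
    """variables: dict mapping name -> (minimum, maximum, constant).

    Instead of enumerating every value in a variable's range, fix each
    variable to its bounds plus one representative constant.
    """
    candidate_values = {
        name: sorted({lo, hi, const})   # dedupe if the constant equals a bound
        for name, (lo, hi, const) in variables.items()
    }
    names = list(candidate_values)
    for combo in product(*(candidate_values[n] for n in names)):
        yield dict(zip(names, combo))

# A range of 0..100 collapses to at most 3 values per variable, so two
# variables yield at most 9 cases instead of 101 * 101.
cases = list(reduced_test_cases({"x": (0, 100, 50), "y": (0, 100, 1)}))
```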

Weight-Based Query Optimization System Using Buffer

Fast retrieval of data is a need of users in any database application. This paper introduces a buffer-based query optimization technique in which queries are assigned weights according to their number of executions in a query bank. These queries and their optimized execution plans are loaded into the buffer at the start of the database application. For every query, the system searches for a match in the buffer and executes the stored plan without creating a new one.
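
A minimal sketch of such a buffer is given below, assuming the query bank is a list of query strings, that `optimize` stands in for the database's plan generator, and that exact string matching decides a buffer hit; the paper's matching rule may be more sophisticated.

```python
from collections import Counter

class PlanBuffer:
    def __init__(self, query_bank, optimize, capacity=100):
        # Weight each query by how often it appears in the bank, then
        # preload optimized plans for the most frequently executed queries.
        weights = Counter(query_bank)
        top = [q for q, _ in weights.most_common(capacity)]
        self.plans = {q: optimize(q) for q in top}

    def get_plan(self, query, optimize):
        # Reuse the cached plan on a hit; fall back to the optimizer on a miss.
        if query in self.plans:
            return self.plans[query]
        return optimize(query)
```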

A Graph-Based Approach for Placement of Non-Replicated Databases in Grid

In a wide-area environment such as a Grid, data placement is an important aspect of distributed database systems. In this paper, we address the problem of the initial placement of non-replicated database fragments in a Grid architecture. We propose a graph-based approach that takes resource restrictions into account. The goal is to optimize the use of computing, storage, and communication resources. The proposed approach proceeds in two phases: in the first, we group fragments using knowledge about fragment dependencies; in the second, we determine an efficient placement of the fragment groups on the Grid. We also show, via experimental analysis, that our approach gives solutions that are close to optimal for different database and Grid configurations.
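
The two phases might look roughly like the following sketch, under heavy simplifying assumptions: fragment dependencies are pairwise affinity weights, groups are formed greedily, and placement considers a single storage resource per node rather than the full set of computing, storage, and communication constraints of the paper.

```python
def group_fragments(dependencies, threshold):
    """dependencies: dict mapping (frag_a, frag_b) -> access affinity weight."""
    groups = {}                                  # fragment -> group id
    next_id = 0
    for (a, b), weight in sorted(dependencies.items(), key=lambda kv: -kv[1]):
        if weight < threshold:
            break
        ga, gb = groups.get(a), groups.get(b)
        if ga is None and gb is None:            # start a new group
            groups[a] = groups[b] = next_id
            next_id += 1
        elif ga is None:
            groups[a] = gb
        elif gb is None:
            groups[b] = ga
        # merging two already-formed groups is omitted for brevity
    return groups

def place_groups(group_sizes, node_capacity):
    """Greedy placement: largest group first, onto the node with most free storage."""
    placement = {}
    for g, size in sorted(group_sizes.items(), key=lambda kv: -kv[1]):
        node = max(node_capacity, key=node_capacity.get)
        placement[g] = node
        node_capacity[node] -= size
    return placement
```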

Target Concept Selection by Property Overlap in Ontology Population

An ontology is widely used in many kinds of applications as a knowledge representation tool for domain knowledge. However, even when an ontology schema is well prepared by domain experts, adding instances to the ontology is tedious and cost-intensive. The most reliable and trustworthy way to add instances is to gather them from tables in related Web pages. In automatic instance population, the primary task is to find the most appropriate concept, among all possible concepts within the ontology, for a given table. This paper proposes a novel method for this problem that defines the similarity between the table and a concept using the overlap of their properties. In a series of experiments, the proposed method achieved an accuracy of 76.98%, implying that it is a plausible approach for automatic ontology population from Web tables.
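
A minimal sketch of overlap-based selection: compare the table's column headers against each concept's property names and pick the best-scoring concept. Exact lowercase string matching and the Jaccard form of the overlap are assumptions; the paper's similarity measure may differ.

```python
def property_overlap(table_headers, concept_properties):
    """Jaccard overlap between table column headers and concept properties."""
    headers = {h.lower() for h in table_headers}
    props = {p.lower() for p in concept_properties}
    if not headers or not props:
        return 0.0
    return len(headers & props) / len(headers | props)

def select_concept(table_headers, ontology):
    """ontology: dict mapping concept name -> set of property names."""
    return max(ontology,
               key=lambda c: property_overlap(table_headers, ontology[c]))

# select_concept(["title", "author", "year"],
#                {"Book": {"title", "author", "year", "isbn"},
#                 "Movie": {"title", "director", "year"}})  -> "Book"
```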

Constructing a Classifier for Face Recognition on the Basis of Conjugation Indexes

In this work, the possibility of constructing classifiers for face-recognition systems based on conjugation criteria is investigated. The linkage between bipartite conjugation, conjugation with a subspace, and conjugation with the null space is shown. A unified decision rule is investigated, which assigns a face to a class by taking the relationships between conjugation values into account. The described recognition method can be successfully applied to distributed video control and video surveillance systems.

A New Approach for Recoverable Timestamp Ordering Schedule

A new approach to the timestamp ordering problem in serializable schedules is presented. Since the number of database users is increasing rapidly, accuracy and the need for high throughput are major topics in the database area. Strict 2PL does not allow all possible serializable schedules and therefore does not yield high throughput. The main advantages of the approach are its ability to enforce recoverable transaction execution and the high achievable performance of concurrent execution in centralized databases. Compared with Strict 2PL, the general structure of the algorithm is simple and deadlock-free, and it allows all possible serializable schedules to execute, which results in high throughput. Various examples involving different orders of database operations are discussed.
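
For context, the textbook basic timestamp-ordering rules that any such scheduler builds on are sketched below; the paper's contribution, enforcing recoverability on top of these checks, is not reproduced here.

```python
class TOItem:
    """Per-item bookkeeping for basic timestamp ordering."""
    def __init__(self):
        self.read_ts = 0    # largest timestamp that has read the item
        self.write_ts = 0   # largest timestamp that has written the item

def try_read(item, ts):
    if ts < item.write_ts:
        return False                      # a newer write exists: abort reader
    item.read_ts = max(item.read_ts, ts)
    return True

def try_write(item, ts):
    if ts < item.read_ts or ts < item.write_ts:
        return False                      # a conflicting newer access: abort writer
    item.write_ts = ts
    return True
```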

Unsupervised Image Segmentation Based on Fuzzy Connectedness with Scale Space Theory

In this paper, we propose an approach to unsupervised segmentation based on fuzzy connectedness. Valid seeds are first specified by an unsupervised method based on scale space theory. A region is then extracted for each seed with a relative object extraction method based on fuzzy connectedness. Afterwards, regions are merged according to the values of an introduced measure between them. Theorems and propositions are also provided to show that the measure is a reasonable basis for merging. Experimental results of our method on a synthetic image, a color image, and a large number of MR images are reported.
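
The core of fuzzy connectedness is a max-min path-strength computation from each seed, which can be sketched with a Dijkstra-style propagation as below. The affinity function and the 4-neighbourhood are simplifying assumptions; the paper's relative object extraction and merging measure are not shown.

```python
import heapq

def fuzzy_connectedness(affinity, shape, seed):
    """affinity(p, q) in [0, 1]; returns a map of connectedness to the seed.

    Path strength is the weakest affinity along the path; connectedness
    is the strongest path, computed by best-first propagation.
    """
    rows, cols = shape
    conn = {seed: 1.0}
    heap = [(-1.0, seed)]                  # max-heap via negated strengths
    while heap:
        strength, p = heapq.heappop(heap)
        strength = -strength
        if strength < conn.get(p, 0.0):
            continue                       # stale heap entry
        r, c = p
        for q in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= q[0] < rows and 0 <= q[1] < cols:
                s = min(strength, affinity(p, q))   # weakest link on the path
                if s > conn.get(q, 0.0):
                    conn[q] = s
                    heapq.heappush(heap, (-s, q))
    return conn
```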

The Causation and Solution of Ringing Effect in DCT-based Video Coding

The ringing effect is one of the most annoying visual artifacts in digital video and a significant factor in subjective quality deterioration. However, there is a widely accepted misunderstanding of its cause. In this paper, we propose a reasonable interpretation of the cause of the ringing effect. Based on this interpretation, we further suggest two methods to reduce ringing in DCT-based video coding. The methods adaptively adjust quantizers according to video features. Our experiments show that the methods can efficiently improve subjective quality at an acceptable additional computing cost.
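
As a generic illustration of feature-adaptive quantization (not the paper's specific detector), the sketch below refines the quantizer for blocks whose statistics suggest an edge bordering a flat region, the classic ringing-prone configuration; the variance-based feature and the thresholds are assumptions.

```python
import numpy as np

def adaptive_qp(block, base_qp, delta=4, edge_thresh=900.0, flat_thresh=25.0):
    """Return a finer QP for blocks that look ringing-prone.

    Heuristic sketch: high overall variance combined with a flat half
    suggests a strong edge next to a smooth area.
    """
    var = float(np.var(block))
    halves = (block[:, : block.shape[1] // 2], block[:, block.shape[1] // 2:])
    if var > edge_thresh and min(float(np.var(h)) for h in halves) < flat_thresh:
        return max(base_qp - delta, 0)     # quantize such blocks more finely
    return base_qp
```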

Adaptive PID Controller based on Reinforcement Learning for Wind Turbine Control

A self-tuning PID control strategy using reinforcement learning is proposed in this paper for the control of wind energy conversion systems (WECS). Actor-Critic learning is used to tune the PID parameters adaptively, taking advantage of the model-free and online learning properties of reinforcement learning. To reduce storage demands and improve learning efficiency, a single RBF neural network is used to approximate both the policy function of the Actor and the value function of the Critic. The inputs of the RBF network are the system error and its first- and second-order differences. The Actor realizes the mapping from the system state to the PID parameters, while the Critic evaluates the outputs of the Actor and produces the TD error. Based on a TD error performance index and the gradient descent method, update rules for the RBF kernel functions and network weights are given. Simulation results show that the proposed controller is efficient for WECS, highly adaptable, and strongly robust, outperforming a conventional PID controller.
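
A skeleton of the shared-RBF Actor-Critic update is sketched below. Only the structure follows the abstract: one Gaussian RBF hidden layer feeding both the Actor (PID gains) and the Critic (value), with TD-error-driven gradient updates. The network size, learning rates, the exploration-noise form of the Actor update, and the omission of kernel center/width updates are all assumptions.

```python
import numpy as np

class RBFActorCritic:
    def __init__(self, n_hidden=10, gamma=0.95, lr_actor=0.01, lr_critic=0.05):
        self.centers = np.random.uniform(-1, 1, (n_hidden, 3))
        self.sigma = 0.5
        self.w_actor = np.zeros((n_hidden, 3))    # hidden -> (Kp, Ki, Kd)
        self.w_critic = np.zeros(n_hidden)        # hidden -> state value
        self.gamma, self.lr_a, self.lr_c = gamma, lr_actor, lr_critic

    def phi(self, state):
        # state = (e, delta_e, delta2_e): error and its 1st/2nd differences
        d = np.linalg.norm(self.centers - np.asarray(state), axis=1)
        return np.exp(-(d ** 2) / (2 * self.sigma ** 2))

    def pid_gains(self, state):
        return self.phi(state) @ self.w_actor     # Actor: state -> PID params

    def update(self, state, noise, reward, next_state):
        # `noise` is the Gaussian exploration added to the PID gains.
        phi, phi_next = self.phi(state), self.phi(next_state)
        v, v_next = phi @ self.w_critic, phi_next @ self.w_critic
        td_error = reward + self.gamma * v_next - v    # Critic evaluates Actor
        self.w_critic += self.lr_c * td_error * phi
        self.w_actor += self.lr_a * td_error * np.outer(phi, noise)
        return td_error
```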

Web Content Mining: A Solution to Consumer's Product Hunt

With the rapid growth in business size, today's businesses are orienting towards electronic technologies; Amazon.com and eBay.com are among the major stakeholders in this regard. Unfortunately, the enormous amount of largely unstructured data on the web, even for a single commodity, has become a source of ambiguity for consumers. Extracting valuable information from such ever-increasing data is an extremely tedious task and is fast becoming critical to the success of businesses. Web content mining can play a major role in solving these issues. It involves using efficient algorithmic techniques to search and retrieve the desired information from seemingly impossible-to-search unstructured data on the Internet. Applications of web content mining are promising in areas such as customer relations modeling, billing records, logistics investigations, product cataloguing, and quality management. In this paper we review some interesting, efficient, and implementable techniques from the field of web content mining and study their impact on business users' needs, focusing on both the customer and the producer. The techniques reviewed include mining by developing a knowledge-base repository of the domain, iterative refinement of user queries for personalized search, a graph-based approach for the development of a web crawler, and filtering information for personalized search using website captions. These techniques are analyzed and compared on the basis of their execution time and the relevance of the results they produce for a particular search.

Global Security Using Human Face Understanding under Vision Ubiquitous Architecture System

Different biometric methods are presented for eigenface-based face detection, recognition, identification, and verification. The theme of this research is to manage the critical processing stages (accuracy, speed, security, and monitoring) of face-related activities, with the flexibility to search and edit a secure, authorized database. In this paper we implement several techniques, such as eigenface vector reduction using texture and shape vectors to remove complexity, while density matching scores with Face Boundary Fixation (FBF) extract the most likely characteristics in the processed media content. We examine the development and performance efficiency of the database by applying our algorithms to both recognition and detection. Our results show encouraging gains in accuracy and security, with better performance than a number of previous approaches across all the above processes.

MAP-Based Image Super-resolution Reconstruction

Super-resolution image reconstruction obtains a high-resolution image from a set of shifted, blurred, and decimated images, and has therefore become an active research branch in the field of image restoration. In general, super-resolution image restoration is an ill-posed problem. Prior knowledge about the image can be incorporated to make the problem well-posed, which leads to regularization methods. In current regularization methods, however, the regularization parameter is in some cases selected by experience, while other techniques incur too heavy a computational cost in computing the parameter. In this paper, we construct a new super-resolution algorithm by transforming the solution of the underlying linear system into the solution of the matrix equation X + A*X⁻¹A = I, and propose an inverse iterative method.
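
One simple fixed-point form of inverse iteration for the matrix equation X + A*X⁻¹A = I (with A* the conjugate transpose) iterates X_{k+1} = I - A*X_k⁻¹A from X_0 = I, as sketched below. Whether this matches the paper's scheme, and the conditions on A required for convergence, are assumptions here.

```python
import numpy as np

def inverse_iteration(A, tol=1e-10, max_iter=500):
    """Fixed-point iteration for X + A* X^{-1} A = I, started at X = I.

    Convergence requires suitable conditions on A (assumed here).
    """
    n = A.shape[0]
    I = np.eye(n)
    X = I.copy()
    for _ in range(max_iter):
        X_new = I - A.conj().T @ np.linalg.inv(X) @ A
        if np.linalg.norm(X_new - X) < tol:
            return X_new
        X = X_new
    return X
```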

IFDewey: A New Insert-Friendly Labeling Schema for XML Data

XML has become a popular standard for information exchange via the web. Each XML document can be represented as a rooted, ordered, labeled tree, in which a node's label shows its exact position in the original document. Region and Dewey encoding are two well-known methods of labeling trees. In this paper, we propose a new insert-friendly labeling method named IFDewey, based on the recently proposed Extended Dewey scheme. In Extended Dewey, many labels must be modified when a new node is inserted into the XML tree. Our method eliminates this problem by reserving even numbers for future insertions: whereas the numbers generated by Extended Dewey may be even or odd, IFDewey generates only odd numbers, so that even numbers remain available for much easier node insertion.
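
The odd/even idea can be sketched directly: generation assigns only odd sibling ordinals, so the even number between two adjacent odd labels is always free for one insertion without relabeling. The dotted-path layout is ordinary Dewey convention; handling of exhausted gaps is out of scope for this sketch.

```python
def label_children(parent_label, n_children):
    """Assign odd ordinals 1, 3, 5, ... to the children of parent_label."""
    return [f"{parent_label}.{2 * i + 1}" for i in range(n_children)]

def insert_between(left_label, right_label):
    """Insert a new sibling between two existing ones, if a gap remains."""
    prefix, left = left_label.rsplit(".", 1)
    _, right = right_label.rsplit(".", 1)
    gap = (int(left) + int(right)) // 2    # even when both neighbors are odd
    if int(left) < gap < int(right):
        return f"{prefix}.{gap}"
    return None                            # gap exhausted; a deeper scheme is needed

# label_children("1", 3)       -> ["1.1", "1.3", "1.5"]
# insert_between("1.1", "1.3") -> "1.2", with no existing label modified
```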

Selective Intra Prediction Mode Decision for H.264/AVC Encoders

H.264/AVC offers considerably higher coding efficiency than other compression standards such as MPEG-2, but its computational complexity is significantly increased. In this paper, we propose selective mode decision schemes for fast intra prediction mode selection. The objective is to reduce the computational complexity of the H.264/AVC encoder without significant rate-distortion performance degradation. In our proposed schemes, intra prediction complexity is reduced by limiting the luma and chroma prediction modes using the directional information of the 16×16 prediction mode. Experimental results show that the proposed schemes reduce complexity by up to 78% while maintaining similar PSNR quality, with an average bit rate increase of about 1.46%.
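
Schematically, mode pruning of this kind can look like the sketch below: the best 16×16 mode selects a small candidate subset of 4×4 luma modes to evaluate. The candidate sets shown are hypothetical placeholders, not the paper's tables; only the mode numbering follows the H.264 convention (0 vertical, 1 horizontal, 2 DC, 3 plane for 16×16).

```python
# Hypothetical candidate sets mapping each 16x16 mode's direction to a
# reduced set of 4x4 luma modes (plus DC), instead of evaluating all nine.
CANDIDATE_4X4_MODES = {
    0: [0, 2, 5, 7],     # 16x16 vertical   -> vertical-leaning 4x4 modes
    1: [1, 2, 6, 8],     # 16x16 horizontal -> horizontal-leaning 4x4 modes
    2: [0, 1, 2],        # 16x16 DC         -> small neutral set
    3: [2, 3, 4, 5, 6],  # 16x16 plane      -> diagonal-leaning modes
}

def select_4x4_mode(best_16x16_mode, rd_cost):
    """Evaluate only the candidate subset; rd_cost(mode) returns the RD cost."""
    candidates = CANDIDATE_4X4_MODES[best_16x16_mode]
    return min(candidates, key=rd_cost)
```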

Using Data Clustering in Oral Medicine

The vast amount of information hidden in huge databases has created tremendous interest in the field of data mining. This paper examines the possibility of using data clustering techniques in oral medicine to identify functional relationships between different attributes and to classify similar patient examinations. Commonly used data clustering algorithms are reviewed, and several interesting results are presented.

Word Recognition and Learning based on Associative Memories and Hidden Markov Models

A word recognition architecture based on a network of neural associative memories and hidden Markov models has been developed. The input stream, composed of subword units such as word-internal triphones consisting of diphones and triphones, is provided to the network of neural associative memories by the hidden Markov models. The word recognition network derives words from this input stream. The architecture can handle ambiguities at the subword-unit level and can also add new words to the vocabulary at runtime. It has been implemented to perform the word recognition task in a language processing system for understanding simple command sentences such as "bot show apple".

3D Face Modeling based on 3D Dense Morphable Face Shape Model

A realistic 3D face model represents the pose, illumination, and expression of a face more precisely than a 2D model, so it can be used in various applications such as face recognition, games, avatars, and animation. In this paper, we propose a 3D face modeling method based on a 3D dense morphable shape model. The method first constructs the 3D dense morphable shape model from 3D face scan data obtained with a 3D scanner. Next, it extracts and matches facial landmarks from a 2D image sequence containing the face to be modeled, and reconstructs the 3D vertex coordinates of the landmarks using a factorization-based structure-from-motion (SfM) technique. The 3D dense shape model of the face is then obtained by fitting the constructed morphable shape model to the reconstructed 3D vertices. The method also builds a cylindrical texture map from the 2D face image sequence and finally generates the 3D face model by rendering the dense face shape model with the cylindrical texture map. The model building process shows that the proposed method is relatively easy, fast, and precise.
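
Of the pipeline steps above, the factorization-based SfM reconstruction is the most self-contained, and a minimal Tomasi-Kanade-style sketch is given below as one generic instance of it (orthographic camera assumed, affine ambiguity correction omitted); the paper's exact formulation may differ.

```python
import numpy as np

def factorize(points2d):
    """points2d: array of shape (2*F, P), stacking x- and y-rows per frame.

    Rank-3 factorization of centered landmark tracks into motion (2F x 3)
    and shape (3 x P), up to an affine ambiguity that is not resolved here.
    """
    centered = points2d - points2d.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])          # camera motion factor
    S = np.sqrt(s[:3])[:, None] * Vt[:3]   # 3D landmark coordinates
    return M, S
```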

Semi-Automatic Analyzer to Detect Authorial Intentions in Scientific Documents

Information retrieval studies models and systems that allow a user to find the documents relevant to his or her information need. Information search remains a difficult problem because of the difficulty of representing and processing natural language phenomena such as polysemy. Intentional structures promise to be a new paradigm for extending existing document structures and enhancing the different phases of document processing, such as creation, editing, search, and retrieval. Recognizing the intentions of a text's authors can reduce the scale of this problem. In this article, we present an intention recognition system based on a semi-automatic method for extracting intentional information from a corpus of texts. The system is also able to update an ontology of intentions, enriching a knowledge base containing all possible intentions of a domain. The approach relies on the construction of a semi-formal ontology, regarded as a conceptualization of the intentional information contained in a text. Experiments on scientific publications in the field of computer science were conducted to validate this approach.