Improved Feature Processing for Iris Biometric Authentication System

Iris-based biometric authentication has been gaining importance in recent times. Iris biometric processing, however, is complex and computationally expensive. Within the overall pipeline of an iris-based biometric authentication system, feature processing is an important step: it extracts the iris features that are ultimately used in matching. Since the number of iris features is large and computational time grows with the number of features, it is a challenge to develop an iris processing system that uses as few features as possible without compromising correctness. In this paper, we address this issue and present an approach to the feature extraction and feature matching process. We apply the Daubechies D4 wavelet with 4 decomposition levels to extract features from iris images. These features are encoded with 2 bits each by quantizing them into 4 quantization levels. With our proposed approach it is possible to represent an iris template with only 304 bits, whereas existing approaches require as many as 1024 bits. In addition, we assign different weights to different iris regions when comparing two iris templates, which significantly increases accuracy, and we match templates using a weighted similarity measure. Experimental results on several iris databases substantiate the efficacy of our approach.
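
For illustration, here is a minimal Python sketch of such a pipeline using PyWavelets, where the classical 4-tap Daubechies D4 filter is named 'db2'; the quartile-based quantizer and the region weights are assumptions for the sketch, not the paper's exact design:

```python
import numpy as np
import pywt

def extract_template(iris_signal, levels=4):
    """Daubechies D4 decomposition (named 'db2' in PyWavelets) followed
    by 2-bit quantization of every coefficient into 4 levels."""
    coeffs = np.concatenate(pywt.wavedec(iris_signal, 'db2', level=levels))
    edges = np.quantile(coeffs, [0.25, 0.5, 0.75])        # illustrative quantizer
    codes = np.digitize(coeffs, edges).astype(np.uint8)   # values in 0..3
    return np.unpackbits(codes[:, None], axis=1)[:, -2:].ravel()

def weighted_distance(t1, t2, weights):
    """Weighted Hamming distance; heavier weights mark more reliable regions."""
    return np.sum(weights * (t1 != t2)) / np.sum(weights)
```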

Algebraic Specification of Serializability for Partitioned Transactions

The usual correctness condition for a schedule of concurrent database transactions is some form of serializability of the transactions. For general forms, the problem of deciding whether a schedule is serializable is NP-complete. In those cases, other approaches to proving correctness are desirable, using proof rules that allow the steps of a serializability proof to be guided manually. Such an approach is possible in the case of conflict serializability, which is proved algebraically by deriving serial schedules using commutativity of non-conflicting operations. However, conflict serializability can be an unnecessarily strong form of serializability, restricting concurrency and thereby reducing performance. In practice, weaker, more general forms of serializability for extended transaction models are used. Currently, there are no known methods using proof rules for proving those general forms of serializability. In this paper, we define serializability for an extended model of partitioned transactions, which we show to be as expressive as serializability for general partitioned transactions. An algebraic method for proving general serializability is obtained by giving an initial-algebra specification of serializable schedules of concurrent transactions in the model. This demonstrates that it is possible to conduct algebraic proofs of correctness of concurrent transactions in general cases.
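
As a toy illustration of the algebraic idea behind the simpler special case of conflict serializability (not the paper's initial-algebra specification, which targets the more general partitioned model), the following sketch derives a serial schedule by commuting adjacent non-conflicting operations:

```python
def conflicts(a, b):
    """Operations (action, item, txn) conflict if they access the same
    item and at least one of them is a write."""
    return a[1] == b[1] and 'w' in (a[0], b[0])

def derive_serial(schedule):
    """Bubble toward the serial order T1, T2, ... by swapping adjacent
    non-conflicting operations of different transactions; succeeds when
    the schedule is conflict-serializable in that order."""
    s, changed = list(schedule), True
    while changed:
        changed = False
        for i in range(len(s) - 1):
            a, b = s[i], s[i + 1]
            if a[2] > b[2] and not conflicts(a, b):
                s[i], s[i + 1] = b, a
                changed = True
    return s

sched = [('r', 'x', 2), ('w', 'y', 1), ('w', 'x', 2), ('r', 'y', 2)]
print(derive_serial(sched))
# T1's write on y commutes ahead of T2's read of x, giving a serial schedule.
```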

Information Retrieval: Improving Question Answering Systems by Query Reformulation and Answer Validation

Question answering (QA) aims at retrieving precise information from a large collection of documents. Most question answering systems are composed of three main modules: question processing, document processing, and answer processing. The question processing module plays an important role in QA systems by reformulating questions. Moreover, the answer processing module is an emerging topic in QA systems, where systems are often required to rank and validate candidate answers. These techniques, which aim at finding short and precise answers, are often based on semantic relations and keyword co-occurrence. This paper discusses a new model for question answering that improves the two main modules, question processing and answer processing, both of which affect the evaluation of the system's operation. Two important components form the basis of question processing. The first is question classification, which specifies the types of the question and the answer. The second is reformulation, which converts the user's question into one understandable by the QA system in a specific domain. The objective of the answer validation task is then to judge the correctness of an answer returned by a QA system according to the text snippet given to support it. For validating answers, we apply candidate answer filtering and candidate answer ranking, followed by a final validation stage based on user voting. The paper also describes the new architecture of the question and answer processing modules, together with the modeling, implementation, and evaluation of the system. The system differs from most question answering systems in its answer validation model, which makes it better suited to finding exact answers. Evaluation of the model over a total of 50 asked questions shows a 92% improvement in the system's decisions.
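
A minimal sketch of the filtering-and-ranking idea under assumed scoring rules (keyword coverage of the supporting snippet; not the paper's implementation, which adds a user-voting stage):

```python
def validate_answer(candidate, snippet, keywords, threshold=0.5):
    """Keep a candidate answer only if its supporting snippet contains it
    and covers at least `threshold` of the question keywords."""
    tokens = set(snippet.lower().split())
    coverage = sum(k.lower() in tokens for k in keywords) / max(len(keywords), 1)
    return candidate.lower() in snippet.lower() and coverage >= threshold

def rank_candidates(candidates, snippets, keywords):
    """Rank the validated candidates by keyword coverage of their snippets."""
    scored = [(sum(k.lower() in s.lower() for k in keywords), c)
              for c, s in zip(candidates, snippets)
              if validate_answer(c, s, keywords)]
    return [c for _, c in sorted(scored, reverse=True)]
```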

Cursor Position Estimation Model for Virtual Touch Screen Using Camera

A virtual touch screen using a camera is an ordinary screen that imitates a touch screen by means of a camera: it takes a picture of an indicator, e.g., a finger, laid on the screen, converts the indicator tip position in the picture to a position on the screen, and moves the cursor on the screen to that position. In fact, the indicator is not laid on the screen directly but is separated from it by the cover at some interval. In spite of this gap, if the eye-indicator-camera angle is not large, the mapping from indicator tip positions in the image to the corresponding cursor positions on the screen is not difficult and can be done with little error. However, the larger the angle, the bigger the mapping error becomes. This paper proposes a cursor position estimation model for a camera-based virtual touch screen that can eliminate this kind of error. The proposed model (i) moves an on-screen pilot cursor to the screen position located just behind the indicator tip as seen from the camera position, and then (ii) converts that pilot cursor position to the desired cursor position (the screen position seen from the user's eye through the indicator tip) by using the bilinear transformation. Simulation results show the correctness of the cursor position estimated by the proposed model.
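
The bilinear transformation of step (ii) can be fitted from four point correspondences; a minimal sketch, assuming a four-point calibration procedure:

```python
import numpy as np

def fit_bilinear(src, dst):
    """Fit x' = a0 + a1*x + a2*y + a3*x*y (and likewise y') from four
    (pilot cursor -> desired cursor) calibration point pairs."""
    A = np.array([[1.0, x, y, x * y] for x, y in src])
    ax = np.linalg.solve(A, np.array([p[0] for p in dst], dtype=float))
    ay = np.linalg.solve(A, np.array([p[1] for p in dst], dtype=float))
    return ax, ay

def apply_bilinear(ax, ay, x, y):
    v = np.array([1.0, x, y, x * y])
    return float(ax @ v), float(ay @ v)

# Example: pilot-cursor positions at the four corners vs. true positions.
ax, ay = fit_bilinear([(0, 0), (1, 0), (0, 1), (1, 1)],
                      [(0.02, 0.01), (0.98, 0.03), (0.01, 0.97), (0.99, 0.99)])
```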

Development of a Health Literacy Scale for Chinese-Speaking Adults in Taiwan

Background: Measuring an individual's health literacy is gaining attention, yet no appropriate instrument is available in Taiwan. Measurement tools developed and used in western countries may not be appropriate for use in Taiwan due to the different language system. The purpose of this research was to develop a health literacy measurement instrument specific to Taiwanese adults. Methods: Several experts, including clinical physicians, healthcare administrators, and scholars, identified 125 commonly used health-related Chinese phrases from major medical knowledge sources easily accessible to the public. A five-point Likert scale was used to measure the understanding level of the target population, and this measurement was then compared with the correctness of the respondents' answers to a health knowledge test for validation. Samples: Samples were purposively taken from four groups of people in northern Pingtung: OPD patients, university students, community residents, and casual visitors to the central park. A 10-question health knowledge index was used to screen out false responses, after which 686 valid cases out of 776 were included to construct the scale. An independent t-test was used to examine each individual phrase, and the phrases with the highest significance were identified and retained to compose the scale. Results: A Taiwan Health Literacy Scale (THLS) was finalized with 66 health-related phrases under nine divisions. Cronbach's alpha for each division is at a satisfactory level of 0.89 or above. Conclusions: Factors that significantly differentiate levels of health literacy in this initial application are education, female gender, age, being a family member of a stroke victim, experience with patient care, and being a healthcare professional.
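
For reference, Cronbach's alpha for one division can be computed from the item responses with the standard formula (a generic sketch, not code from the study):

```python
import numpy as np

def cronbach_alpha(responses):
    """responses: (n_respondents, k_items) matrix of Likert scores for
    the phrases in one division of the scale."""
    r = np.asarray(responses, dtype=float)
    k = r.shape[1]
    item_variances = r.var(axis=0, ddof=1).sum()
    total_variance = r.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances / total_variance)
```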

Localization by DKF Multi-Sensor Fusion in Uncertain Environments for Mobile Robots

This paper presents an optimized algorithm for robot localization that improves the correctness and accuracy of the estimated position of a mobile robot to more than 150% of previous methods [1] in uncertain and noisy environments. In this method, the odometry and vision sensors are combined by an adapted, well-known discrete Kalman filter (DKF) [2]. The technique also reduces the computational load of the algorithm thanks to the DKF's simple implementation. The experimental trial of the algorithm was performed on a RoboCup middle-size soccer robot; the system can also be used in more general environments.
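
A generic predict/update cycle of a discrete Kalman filter of this kind, with odometry driving the prediction and the vision fix as the measurement; the model matrices below are placeholders for the robot's actual motion and sensor models:

```python
import numpy as np

def dkf_step(x, P, u, z, F, B, H, Q, R):
    """One predict/update cycle of a discrete Kalman filter."""
    # Predict from the odometry input u.
    x = F @ x + B @ u
    P = F @ P @ F.T + Q
    # Update with the vision measurement z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Example: 1-D position/velocity state; vision observes position only.
x, P = np.zeros(2), np.eye(2)
F = np.array([[1.0, 0.1], [0.0, 1.0]])   # constant-velocity motion, dt=0.1
B = np.array([[0.0], [0.1]])             # odometry as a velocity input
H = np.array([[1.0, 0.0]])
x, P = dkf_step(x, P, np.array([0.5]), np.array([0.02]),
                F, B, H, Q=0.01 * np.eye(2), R=np.array([[0.05]]))
```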

Detection and Classification of Faults on Parallel Transmission Lines Using Wavelet Transform and Neural Network

The protection of parallel transmission lines has been a challenging task due to the mutual coupling between the adjacent circuits of the line. This paper presents a novel scheme for detection and classification of faults on parallel transmission lines. The proposed approach uses a combination of wavelet transform and neural network to solve the problem. While the wavelet transform is a powerful mathematical tool that can be employed as a fast and very effective means of analyzing power system transient signals, an artificial neural network has the ability to classify non-linear relationships between measured signals by identifying the different patterns of the associated signals. The proposed algorithm consists of time-frequency analysis of fault-generated transients using the wavelet transform, followed by pattern recognition using an artificial neural network to identify the type of the fault. MATLAB/Simulink is used to generate fault signals and verify the correctness of the algorithm. The adaptive discrimination scheme is tested by simulating different types of fault and varying the fault resistance, fault location, and fault inception time on a given power system model. The simulation results show that the proposed fault diagnosis scheme is able to classify all faults on the parallel transmission line rapidly and correctly.
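
A hedged sketch of the two stages (the mother wavelet, the energy-based features, and the network topology below are assumptions; the abstract does not fix them):

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_features(signal, wavelet='db4', level=3):
    """Energy of each detail band of the fault transient: a compact
    time-frequency feature vector for the classifier."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([float(np.sum(c ** 2)) for c in coeffs[1:]])

# X: one feature row per simulated fault record, y: fault-type labels
# (random stand-in data here; the paper uses MATLAB/Simulink records).
rng = np.random.default_rng(0)
X = np.array([wavelet_features(rng.normal(size=256)) for _ in range(40)])
y = rng.integers(0, 4, size=40)
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(X, y)
```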

Double Reduction of Ada-ECATNet Representation using Rewriting Logic

One major difficulty facing developers of concurrent and distributed software is the analysis of concurrency-based faults such as deadlocks. Petri nets are used extensively in verifying the correctness of concurrent programs. ECATNets [2] are a category of algebraic Petri nets based on a sound combination of algebraic abstract data types and high-level Petri nets. ECATNets have 'sound' and 'complete' semantics because of their integration in rewriting logic [12] and its programming language Maude [13]. Rewriting logic is considered one of the most powerful logics for the description, verification, and programming of concurrent systems. We proposed in [4] a method for translating Ada-95 tasking programs into the ECATNets formalism (Ada-ECATNet). In this paper, we show that the ECATNets formalism provides a more compact translation of Ada programs than other approaches based on simple Petri nets or Colored Petri nets (CPNs). Such a translation not only reduces the size of the program but also the number of program states. We also show how this compact Ada-ECATNet can be reduced further by applying reduction rules to it. This double reduction of the Ada-ECATNet permits a considerable reduction of the memory space and run time of the corresponding Maude program.
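
Schematically, the second reduction applies structural rules until a fixpoint is reached; a language-neutral sketch (the actual rules operate on ECATNet terms in Maude):

```python
def reduce_to_fixpoint(net, rules):
    """Apply structural reduction rules until none fires; each rule returns
    a smaller net or None, so the result has fewer places/transitions and
    hence a smaller reachable state space."""
    changed = True
    while changed:
        changed = False
        for rule in rules:
            smaller = rule(net)
            if smaller is not None:
                net, changed = smaller, True
    return net
```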

Architecture of Speech-based Registration System

In this era of technology, fueled by the pervasive usage of the internet, security is a prime concern. The number of new attacks by so-called "bots", which are automated programs, is increasing at an alarming rate, and they are most likely to attack online registration systems. Technologies called CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) exist that can differentiate between automated programs and humans and prevent replay attacks. Traditionally, CAPTCHAs have been implemented with a challenge that involves recognizing textual images and reproducing the text. We propose an approach where the visual challenge has to be read out aloud; randomly selected keywords from it are used to verify the correctness of the spoken text and, in turn, detect the presence of a human. This is supplemented with a speaker recognition system that can also identify the speaker. Thus, the framework fulfills both objectives: it can determine whether the user is a human, and if so, it can verify the user's identity.
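
A minimal sketch of the keyword-verification step, assuming an external speech recognizer produces the transcript (the function names are illustrative):

```python
import secrets

def pick_keywords(challenge_text, n=3):
    """Randomly select verification keywords from the displayed challenge;
    the user must read the whole text aloud."""
    words = [w for w in challenge_text.lower().split() if len(w) > 3]
    return secrets.SystemRandom().sample(words, n)

def is_human(asr_transcript, keywords):
    """Accept if the speech recognizer heard every selected keyword."""
    spoken = set(asr_transcript.lower().split())
    return all(k in spoken for k in keywords)
```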

W3-Miner: Mining Weighted Frequent Subtree Patterns in a Collection of Trees

Mining frequent tree patterns has many useful applications in XML mining, bioinformatics, network routing, etc. Most frequent subtree mining algorithms (e.g., FREQT, TreeMiner, and CMTreeMiner) use the anti-monotone property in the candidate subtree generation phase. However, none of these algorithms has verified the correctness of this property for tree-structured data. In this research it is shown that anti-monotonicity does not generally hold when weighted support is used in tree pattern discovery. As a result, tree mining algorithms based on this property would probably miss some of the valid frequent subtree patterns in a collection of trees. In this paper, we investigate the correctness of the anti-monotone property for the problem of weighted frequent subtree mining. In addition, we propose W3-Miner, a new algorithm for the full extraction of frequent subtrees. The experimental results confirm that W3-Miner finds frequent subtrees that the previously proposed algorithms are not able to discover.
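
The failure of anti-monotonicity is easy to demonstrate even for itemsets (a stand-in for subtrees here), assuming one common definition of weighted support as raw support times the mean item weight:

```python
weights = {'A': 0.2, 'B': 1.0}
transactions = [{'A', 'B'}, {'A', 'B'}, {'A', 'B'}, {'A'}, {'A'}]

def wsupport(pattern):
    """Weighted support = raw support * mean item weight of the pattern."""
    count = sum(pattern <= t for t in transactions)
    return count * sum(weights[i] for i in pattern) / len(pattern)

print(wsupport({'A'}))       # 5 * 0.2 = 1.0
print(wsupport({'A', 'B'}))  # 3 * 0.6 = 1.8, exceeding its own subpattern,
# so pruning {'A'} at threshold 1.5 would wrongly discard {'A', 'B'}.
```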

Computer Proven Correctness of the Rabin Public-Key Scheme

We describe a formal specification and verification of the Rabin public-key scheme in the formal proof system Isabelle/HOL. The idea is to use the two views of cryptographic verification: the computational approach, relying on the vocabulary of probability theory and complexity theory, and the formal approach, based on ideas and techniques from logic and programming languages. The analysis presented uses a given database to prove formal properties of our implemented functions with computer support. The main task in designing a practical formalization of correctness as well as security properties is to cope with the complexity of cryptographic proofs. We reduce this complexity by exploring a lightweight formalization that enables both appropriate formal definitions and efficient formal proofs. This yields the first computer-proved implementation of the Rabin public-key scheme in Isabelle/HOL. Consequently, we obtain reliable proofs with a minimal error rate while augmenting the underlying database. This provides a formal basis for more computer proof constructions in this area.
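
For context, the scheme being verified is small; a standard textbook sketch of Rabin encryption and decryption follows (a Python toy, not the Isabelle/HOL formalization):

```python
def rabin_encrypt(m, n):
    return (m * m) % n

def rabin_decrypt(c, p, q):
    """The four square roots of c mod n = p*q, for primes p, q = 3 (mod 4)."""
    n = p * q
    mp = pow(c, (p + 1) // 4, p)             # square root of c mod p
    mq = pow(c, (q + 1) // 4, q)             # square root of c mod q
    yp, yq = pow(p, -1, q), pow(q, -1, p)    # modular inverses (Python 3.8+)
    r = (yp * p * mq + yq * q * mp) % n      # combine the +/- roots via CRT
    s = (yp * p * mq - yq * q * mp) % n
    return {r, n - r, s, n - s}

p, q = 7, 11                                 # toy primes, both = 3 (mod 4)
c = rabin_encrypt(20, p * q)                 # 20^2 mod 77 = 15
assert 20 in rabin_decrypt(c, p, q)          # the plaintext is among the roots
```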

A Novel Methodology for Synthesis of Fault Trees from a MATLAB/Simulink Model

Fault tree analysis is a well-known method for reliability and safety assessment of engineering systems. In the last three decades, a number of methods have been introduced in the literature for the automatic construction of fault trees, differing mainly in the starting model from which the tree is constructed. This paper presents a new methodology for the construction of static and dynamic fault trees from a system's Simulink model. The method is introduced and explained in detail, and its correctness and completeness are experimentally validated using an example taken from the literature. Advantages of the method are also discussed.
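
A fault tree, once synthesized, is essentially an AND/OR gate structure over basic events; a minimal evaluation sketch with hypothetical event names:

```python
from dataclasses import dataclass

@dataclass
class Gate:
    kind: str         # 'AND' or 'OR'
    children: list    # sub-gates or basic-event names

def evaluate(node, failed):
    """True if the top event occurs given the set of failed basic events."""
    if isinstance(node, str):
        return node in failed
    results = (evaluate(c, failed) for c in node.children)
    return all(results) if node.kind == 'AND' else any(results)

top = Gate('OR', ['pump_fail', Gate('AND', ['valve_fail', 'backup_fail'])])
print(evaluate(top, {'valve_fail', 'backup_fail'}))   # True
```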

Enhanced Conference Organization Based on Correlation of Web Information and Ontology-Based Expertise Search

Given the importance of conferences and their constructive role in scholarly discussion, strong organization is needed to exploit those discussions and open new horizons. The vast amount of information scattered across the web makes it difficult to find the experts who can play a prominent role in organizing conferences. In this paper we propose a new approach for extracting researchers' information from various Web resources and correlating it in order to confirm its correctness. To validate this approach, we propose a service that is useful for setting up a conference. Its main objective is to find appropriate experts, as well as the social events, for a conference. For this application we use Semantic Web technologies such as RDF and ontologies to represent the confirmed information, which is linked to another ontology (a skills ontology) used to present and compute the expertise.
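
A minimal sketch of the representation side using rdflib, with a hypothetical namespace and skill property standing in for the paper's ontologies:

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/conf#")   # hypothetical namespace
g = Graph()
g.add((EX.alice, RDF.type, EX.Researcher))
g.add((EX.alice, EX.hasSkill, Literal("semantic web")))

# Find candidate experts for a required skill via SPARQL.
results = g.query("""
    PREFIX ex: <http://example.org/conf#>
    SELECT ?r WHERE { ?r a ex:Researcher ; ex:hasSkill "semantic web" . }
""")
for (r,) in results:
    print(r)
```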

A Traffic Simulation Package Based on Travel Demand

In this paper we propose a new traffic simulation package, TDMSim, which supports both macroscopic and microscopic simulation of free-flowing and regulated traffic systems. Both simulators are based on travel demands, which specify the number of vehicles departing from origins to arrive at different destinations. The microscopic simulator implements a car-following model given the pre-defined routes of the vehicles, but also supports the rerouting of vehicles. We also propose a macroscopic simulator built in integration with the microscopic simulator, allowing the simulation to be scaled to larger networks without sacrificing the precision achievable with the microscopic simulator. The macroscopic simulator additionally enables the reuse of previous simulation results when simulating traffic on the same network at a later time. Validations have been conducted to show the correctness of both simulators.
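
A toy car-following update of the general kind used in microscopic simulators; the specific rule below (accelerate toward a target speed, capped by the gap to the leader) is an assumption for illustration, not TDMSim's model:

```python
def follow_step(pos, vel, dt=0.5, v_max=30.0, accel=2.0, gap_min=2.0):
    """One timestep of a toy single-lane car-following rule. pos is
    ordered front (index 0) to back; speeds never exceed the safe
    speed that avoids overrunning the leader."""
    new_vel = [min(vel[0] + accel * dt, v_max)]        # unobstructed leader
    for i in range(1, len(pos)):
        gap = pos[i - 1] - pos[i] - gap_min
        safe = max(0.0, gap / dt)                      # don't hit the leader
        new_vel.append(min(vel[i] + accel * dt, v_max, safe))
    new_pos = [p + v * dt for p, v in zip(pos, new_vel)]
    return new_pos, new_vel

pos, vel = follow_step([100.0, 80.0, 55.0], [20.0, 25.0, 25.0])
```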

Constraint Based Frequent Pattern Mining Technique for Solving GCS Problem

The Generalized Center String (GCS) problem generalizes the Common Approximate Substring and Common Substring problems. GCS is known to be NP-hard; the difficulty lies in the explosion of potential candidates, since the longest center string must be found even though it is not known in advance which sequences contain motifs in any particular biological gene process. GCS can be solved by frequent pattern mining techniques and is known to be fixed-parameter tractable with respect to the input sequence length and symbol set size. Efficient methods known as the Bpriori algorithms can solve GCS with reasonable time/space complexity; the Bpriori 2 and Bpriori 3-2 algorithms find center strings of any length together with the positions of all their instances in the input sequences. In this paper, we reduce the time/space complexity of the Bpriori algorithm with a Constraint-Based Frequent Pattern mining (CBFP) technique, which integrates the ideas of constraint-based mining and FP-tree mining. The CBFP technique solves the GCS problem not only for center strings of any length but also for the positions of all their mutated copies in the input sequences. It constructs a trie-like FP-tree to represent the mutated copies of center strings of any length, together with constraints that restrain the growth of the consensus tree. Complexity analysis of the CBFP technique and the Bpriori algorithm is carried out for both the worst case and the average case, and the algorithm's correctness is demonstrated by comparison with the Bpriori algorithm on artificial data.
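
A minimal sketch of the trie-with-support-constraint idea (exact substrings only; mismatch handling and the full CBFP constraint set are omitted, and the counts here are occurrence counts rather than per-sequence support):

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.count = 0

def prune(node, min_support):
    """The constraint: drop branches whose count falls below min_support."""
    node.children = {c: n for c, n in node.children.items()
                     if n.count >= min_support}
    for child in node.children.values():
        prune(child, min_support)

def build_trie(sequences, max_len, min_support):
    """Count every substring up to max_len in a prefix trie, then prune."""
    root = TrieNode()
    for s in sequences:
        for i in range(len(s)):
            node = root
            for ch in s[i:i + max_len]:
                node = node.children.setdefault(ch, TrieNode())
                node.count += 1
    prune(root, min_support)
    return root

root = build_trie(["acgtacg", "acgaacg"], max_len=3, min_support=2)
```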

A Distributed Topology Control Algorithm to Conserve Energy in Heterogeneous Wireless Mesh Networks

A considerable amount of energy is consumed during the transmission and reception of messages in a wireless mesh network (WMN). Reducing per-node transmission power would greatly increase the network lifetime via power conservation, in addition to increasing the network capacity via better spatial bandwidth reuse. In this work, the problem of topology control in a hybrid WMN of heterogeneous wireless devices with varying maximum transmission ranges is considered. A localized distributed topology control algorithm is presented that calculates the optimal transmission power so that (1) network connectivity is maintained, (2) node transmission power is reduced to cover only the nearest neighbours, and (3) the network's lifetime is extended. Simulations and analysis of the results are carried out in the NS-2 environment to demonstrate the correctness and effectiveness of the proposed algorithm.
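
A centralized toy analogue of the idea (the paper's algorithm is localized and distributed): keep the network connected with a minimum spanning tree and give each node just enough power to reach its farthest tree neighbour:

```python
import itertools, math

def assign_ranges(positions):
    """Kruskal's MST over pairwise distances; each node's transmission
    range is set to its longest incident tree edge."""
    parent = {v: v for v in positions}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    ranges = {v: 0.0 for v in positions}
    edges = sorted((math.dist(positions[a], positions[b]), a, b)
                   for a, b in itertools.combinations(positions, 2))
    for d, a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:                      # edge joins two components
            parent[ra] = rb
            ranges[a] = max(ranges[a], d)
            ranges[b] = max(ranges[b], d)
    return ranges

print(assign_ranges({'a': (0, 0), 'b': (5, 0), 'c': (5, 4)}))
# {'a': 5.0, 'b': 5.0, 'c': 4.0}: no node powers up to the ~6.4 diagonal.
```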

Gas Detection via Machine Learning

We present an Electronic Nose (ENose) aimed at identifying the presence of one out of two gases, possibly detecting the presence of a mixture of the two. Estimation of the concentrations of the components is also performed for a volatile organic compound (VOC) constituted by methanol and acetone, for the ranges 40-400 and 22-220 ppm (parts per million), respectively. Our system contains 8 sensors, 5 of them being gas sensors (of the TGS class from FIGARO USA, INC., whose sensing element is a tin dioxide (SnO2) semiconductor), the remaining being a temperature sensor (LM35 from National Semiconductor Corporation), a humidity sensor (HIH-3610 from Honeywell), and a pressure sensor (XFAM from Fujikura Ltd.). Our integrated hardware-software system uses machine learning principles and the least-squares regression principle to identify first a new gas sample, or a mixture, and then to estimate the concentrations. In particular, we adopt a training model using the Support Vector Machine (SVM) approach with a linear kernel to teach the system how to discriminate among the different gases, and then apply another training model, using least-squares regression, to predict the concentrations. The experimental results demonstrate that the proposed multi-classification and regression scheme is effective in the identification of the tested VOCs of methanol and acetone with 96.61% correctness. The concentration prediction is obtained with 0.979 and 0.964 correlation coefficients for the predicted versus real concentrations of methanol and acetone, respectively.
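
A minimal sketch of the two-stage scheme with scikit-learn, on synthetic stand-in data (the real system trains on the 8-channel sensor readings):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))             # 8 sensor channels per sample
y_gas = rng.integers(0, 3, size=60)      # methanol / acetone / mixture
y_ppm = rng.uniform(22, 400, size=60)    # concentration targets

clf = SVC(kernel='linear').fit(X, y_gas)      # stage 1: identify the gas
reg = LinearRegression().fit(X, y_ppm)        # stage 2: least-squares ppm
print(clf.predict(X[:1]), reg.predict(X[:1]))
```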

A New Voting Approach to Texture Defect Detection Based on Multiresolutional Decomposition

Wavelets have provided researchers with significant positive results in the texture defect detection domain. The weak point of wavelets is that they are one-dimensional by nature, so they are not efficient enough to describe and analyze two-dimensional functions. In this paper we present a new method to detect defects in texture images by using the curvelet transform. Simulation results of the proposed method on a set of standard texture images confirm its correctness. Comparison of the obtained results indicates the superior ability of the curvelet transform, relative to the wavelet transform, in describing discontinuities in two-dimensional functions.
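
A rough sketch of subband voting, using Haar wavelet subbands as a stand-in (PyWavelets offers no curvelet transform; a real implementation would substitute curvelet coefficients) and assuming a square image whose side is divisible by 2**level:

```python
import numpy as np
import pywt

def defect_votes(image, level=2, k=3.0):
    """Each detail subband votes for pixels whose coefficient magnitude is
    a statistical outlier; pixels with many votes are candidate defects."""
    votes = np.zeros(image.shape)
    coeffs = pywt.wavedec2(image, 'haar', level=level)
    for detail_level in coeffs[1:]:
        for band in detail_level:          # horizontal/vertical/diagonal
            mask = np.abs(band) > np.abs(band).mean() + k * band.std()
            scale = image.shape[0] // band.shape[0]
            votes += np.kron(mask, np.ones((scale, scale)))
    return votes

votes = defect_votes(np.random.default_rng(1).normal(size=(128, 128)))
```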

Specialization-Based Parallel Processing without Memo-Trees

The purpose of this paper is to propose a framework for constructing correct parallel processing programs based on the Equivalent Transformation Framework (ETF). In this framework, a problem's domain knowledge and a query are described as definite clauses, and computation is regarded as the transformation of those definite clauses. Their meaning is defined by a model of the set of definite clauses, and the transformation rules generated must preserve this meaning. We have previously proposed a parallel processing method based on "specialization", one of the operations used in the transformations, which resembles substitution in logic programming. That method requires a "memo-tree", a history of specializations, to maintain correctness. In this paper we propose a new method for specialization-based parallel processing without memo-trees.
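
A minimal sketch of specialization as clause instantiation under a binding, with terms encoded as tuples and variables as '?'-prefixed strings (an illustrative encoding, not the ETF implementation):

```python
def substitute(term, binding):
    """Apply a variable binding to a term; terms are nested tuples,
    variables are strings starting with '?'."""
    if isinstance(term, str):
        return binding.get(term, term)
    return tuple(substitute(t, binding) for t in term)

def specialize(clause, binding):
    """Specialization: instantiate a definite clause (head, *body) under a
    binding, yielding a more specific clause with the same model."""
    return tuple(substitute(atom, binding) for atom in clause)

clause = (('anc', '?x', '?y'), ('parent', '?x', '?y'))
print(specialize(clause, {'?y': 'alice'}))
```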

Detection of Pathogenic Escherichia coli Strain Contamination in Red Deer Meat in Latvia and Determination of the Presence of the VT1, VT2, and eaeA Genes in the Isolates

The tasks of this work were to study possible E. coli contamination of red deer meat, to identify pathogenic strains among the isolated E. coli, to determine their incidence in red deer meat, and to determine the presence of the VT1, VT2, and eaeA genes in the pathogenic E. coli. Eight (10%) samples were randomly selected from the 80 analysed E. coli isolates and PCR was performed on them. PCR was performed both on the initial material (samples of red deer meat) and on the already isolated cultures. Two of the analysed venison samples contained verotoxin-producing strains of E. coli, meaning that this meat is not safe for the consumer. Sequencing of the E. coli isolates and comparison of the results with the microorganism genome databases available on the internet proved that the isolated cultures correspond to the 16S rDNA region of E. coli, confirming the correctness of the microbiological methods.