ORank: An Ontology-Based System for Ranking Documents

The increasing volume of information on the Internet creates a growing need for new (semi)automatic methods for retrieving documents and ranking them according to their relevance to the user query. In this paper, after a brief review of ranking models, a new ontology-based approach for ranking HTML documents is proposed and evaluated in various circumstances. Our approach is a combination of conceptual, statistical and linguistic methods; this combination preserves the precision of ranking without losing speed. The approach exploits natural language processing techniques to extract phrases and stem words. An ontology-based conceptual method is then used to annotate documents and expand the query. To expand a query, the spreading activation algorithm is improved so that the expansion can be carried out along several dimensions. The annotated documents and the expanded query are processed with statistical methods to compute the relevance degree. The outstanding features of our approach are (1) combining conceptual, statistical and linguistic features of documents, (2) expanding the query with its related concepts before comparing it to documents, (3) extracting and using both words and phrases to compute the relevance degree, (4) improving the spreading activation algorithm so that expansion is based on a weighted combination of different conceptual relationships, and (5) allowing variable document vector dimensions. A ranking system called ORank has been developed to implement and test the proposed model. The test results are reported at the end of the paper.
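
As an illustration of the query-expansion step, the following Python sketch shows a weighted spreading activation pass over a toy ontology; the relation names, weights, decay factor and threshold below are hypothetical assumptions for illustration and are not the values used by ORank.

    # Minimal sketch of weighted spreading activation for query expansion.
    # Relation weights, decay and threshold are illustrative only.
    RELATION_WEIGHTS = {"is-a": 0.9, "part-of": 0.7, "related-to": 0.5}

    def spread_activation(ontology, query_concepts, decay=0.8, threshold=0.3, max_hops=2):
        """ontology: dict mapping a concept to a list of (neighbour, relation) edges."""
        activation = {c: 1.0 for c in query_concepts}
        frontier = dict(activation)
        for _ in range(max_hops):
            next_frontier = {}
            for concept, energy in frontier.items():
                for neighbour, relation in ontology.get(concept, []):
                    passed = energy * decay * RELATION_WEIGHTS.get(relation, 0.0)
                    if passed > activation.get(neighbour, 0.0):
                        activation[neighbour] = passed
                        next_frontier[neighbour] = passed
            frontier = next_frontier
        # Keep only concepts activated above the threshold as expansion terms.
        return {c: a for c, a in activation.items() if a >= threshold}

    ontology = {"car": [("vehicle", "is-a"), ("engine", "part-of")],
                "vehicle": [("transport", "is-a")]}
    print(spread_activation(ontology, ["car"]))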

An Embedded System for Artificial Intelligence Applications

Conventional approaches to implementing logic programming applications on embedded systems are purely software based. As a consequence, a compiler is needed that transforms the initial declarative logic program into its equivalent procedural one, which is then programmed onto the microprocessor. This approach increases the complexity of the final implementation and reduces the overall system's performance. Conversely, hardware implementations that are only capable of supporting logic programs cannot be used in applications where logic programs need to be intertwined with traditional procedural ones. We exploit HW/SW codesign methods to present a microprocessor capable of supporting hybrid applications that use both programming approaches. We take advantage of the close relationship between attribute grammar (AG) evaluation and knowledge engineering methods to present a programmable hardware parser that performs logic derivations, and we combine it with an extension of a conventional RISC microprocessor that performs the unification process to report the success or failure of those derivations. The extended RISC microprocessor is still capable of executing conventional procedural programs, so hybrid applications can be implemented. The presented implementation is programmable, supports the execution of hybrid applications, increases the performance of logic derivations (experimental analysis yields an approximate 1000% increase in performance) and reduces the complexity of the final implemented code. The proposed hardware design is supported by a proposed extended C language called C-AG.
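
For readers unfamiliar with the unification step that the extended RISC core carries out, the following Python sketch shows a minimal term-unification routine; it illustrates the concept only and is not a model of the hardware datapath or of C-AG.

    # Minimal term-unification sketch (occur check omitted for brevity).
    # Terms are strings (constants), '?x'-style strings (variables), or tuples (functors).
    def unify(a, b, subst=None):
        if subst is None:
            subst = {}
        a, b = subst.get(a, a), subst.get(b, b)   # dereference one level
        if a == b:
            return subst
        if isinstance(a, str) and a.startswith("?"):
            return {**subst, a: b}
        if isinstance(b, str) and b.startswith("?"):
            return {**subst, b: a}
        if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
            for x, y in zip(a, b):
                subst = unify(x, y, subst)
                if subst is None:
                    return None
            return subst
        return None  # failure

    print(unify(("parent", "?x", "mary"), ("parent", "john", "?y")))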

Simulation and Design of the Geometric Characteristics of the Oscillatory Thermal Cycler

Since its invention, the polymerase chain reaction (PCR) has emerged as a powerful tool in genetic analysis. PCR products are closely linked to the thermal cycles. Therefore, to reduce the reaction time and make the temperature distribution in the reaction chamber uniform, a novel oscillatory thermal cycler is designed. The sample is placed in a fixed chamber, and three constant isothermal zones are established and aligned in the system. The sample is oscillated so that it contacts the three different isothermal zones to complete the thermal cycles. This study presents the design of the geometric characteristics of the chamber. The commercial software CFD-ACE+ is used to investigate the influences of various materials, heating times, chamber volumes, and moving speeds of the chamber on the temperature distributions inside the chamber. The chamber moves at a specific velocity, and the time-varying boundary conditions are related to the moving speed. While the chamber moves, the boundary is specified with convection or uniform-temperature conditions. User subroutines written in FORTRAN are used to make the numerical results more realistic. When the reaction chamber, a rectangular prism, is heated on six faces, the effects of various moving speeds of the chamber on the temperature distributions are examined. Regarding the temperature profiles and the standard deviation of the temperature at the Y-cut cross section, the temperature inside the chamber is found to be non-uniform when the moving speed exceeds 0.01 m/s. By reducing the number of heated faces to four, the standard deviation of the temperature of the reaction chamber is kept under 1.4×10^-3 K for velocities between 0.0001 m/s and 1 m/s. When natural convective boundary conditions are set at all boundaries while the chamber moves between two heaters, the effects of various moving velocities of the chamber on the temperature distributions are negligible for the assigned time duration.

Key Factors of Curriculum Innovation in Language Teacher Education

The focus of the study is to understand the factors of curriculum innovation from the perspective of language teacher education. The overall aim of the study is to investigate language educators' perceptions of the factors of curriculum innovation. In the theoretical framework, the main focus is on a discussion of different curriculum approaches for language teacher education and of the limiting and facilitating factors of innovation. In order to achieve the aim of the study, observational research is employed. The empirical basis of the study consists of a questionnaire administered to sixty-three language teachers from eight Romanian higher education institutions. The findings reveal variation in language teachers' conceptions of the dominant factors of curricular innovation.

Efficient Hardware Realization of Truncated Multipliers using FPGA

The truncated multiplier is a good candidate for digital signal processing (DSP) applications, including the finite impulse response (FIR) filter and the discrete cosine transform (DCT). With a truncated multiplier, a significant reduction in Field Programmable Gate Array (FPGA) resources can be achieved. This paper presents, for the first time, a comparison of the resource utilization of Spartan-3AN and Virtex-5 implementations of standard and truncated multipliers described in the Very High Speed Integrated Circuit Hardware Description Language (VHDL). The Virtex-5 FPGA shows significant improvement compared to the Spartan-3AN device: the percentage ratio of occupied slices for the standard to the truncated multiplier increases from 40% to 73.86% on the Virtex-5, whereas on the Spartan-3AN it decreases from 68.75% to 58.78%. Results also show that the anomaly in average connection delay and maximum pin delay observed on the Spartan-3AN device is efficiently reduced on the Virtex-5 device.
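
To give a feel for what truncation does, the following Python sketch models only the numerical effect of dropping the least significant partial-product columns of an unsigned multiplier; the bit width and the number of dropped columns are illustrative assumptions and do not correspond to the specific VHDL designs synthesized in the paper.

    def truncated_multiply(a, b, n_bits=8, dropped_cols=6):
        """Behavioural model of an unsigned n_bits x n_bits truncated multiplier:
        partial-product bits falling in the dropped low-order columns are
        discarded before summation (no compensation constant)."""
        total = 0
        for i in range(n_bits):                              # one partial product per multiplier bit
            if (b >> i) & 1:
                pp = a << i                                   # shifted partial-product row
                total += (pp >> dropped_cols) << dropped_cols # zero the dropped columns
        return total

    a, b = 0xB7, 0x5C
    print(hex(a * b), hex(truncated_multiply(a, b)))          # exact vs. truncated product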

Post Colonial Socio-Cultural Reflections in Telugu Literature

The post-colonial society in India has witnessed the turmoil of coming out from under the widespread control and influence of colonialism. The socio-cultural life of a society, with all its dynamics, is reflected in realistic forms of literature. Social events and human experience are drawn into a new creative form and given to the reader as a new understanding and perspective on life. This enables the reader to grasp the essence of life and motivates him to prepare for positive change. After India became free from colonial rule in 1947, systematic efforts were made by central and state governments and institutions to limit the role of English and simultaneously enlarge the function of Indian languages through strategic planning. The eighteen languages recognized as national languages have very rich literatures. Telugu belongs to the Dravidian language family and is widely spoken. The post-colonial socio-cultural factors are very well reflected in Telugu literature. The anti-colonial, reform-oriented, progressive and post-modernist trends in Telugu literature are nothing but creative reflections of post-colonial society. This paper examines the major socio-cultural reflections in the Telugu literature of the post-colonial period.

Word Stemming Algorithms and Retrieval Effectiveness in Malay and Arabic Documents Retrieval Systems

Document retrieval in Information Retrieval Systems (IRS) is generally about understanding the information in the documents concerned. The more the system is able to understand the contents of documents, the more effective the retrieval outcomes will be. But understanding the contents is a very complex task. Conventional IRS apply algorithms that can only approximate the meaning of document contents through a keyword approach using the vector space model. Keywords may be unstemmed or stemmed. When keywords are stemmed and conflated in the retrieval process, we move a step forward in applying semantic technology in IRS. Word stemming is a process in morphological analysis under natural language processing, preceding syntactic and semantic analysis. We have developed stemming algorithms for Malay and Arabic and incorporated stemming in our experimental systems in order to measure retrieval effectiveness. The results show that retrieval effectiveness increases when stemming is used in the systems.
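
The following Python sketch illustrates the general idea of stemming followed by vector-space matching. The suffix list is a hypothetical stand-in: the paper's Malay and Arabic stemmers use language-specific affix rules not reproduced here.

    # Toy suffix-stripping stemmer plus cosine matching in a vector space model.
    import math
    from collections import Counter

    SUFFIXES = ["kan", "an", "i"]          # illustrative only, not the paper's rules

    def stem(word):
        for suf in SUFFIXES:
            if word.endswith(suf) and len(word) > len(suf) + 2:
                return word[:-len(suf)]
        return word

    def vectorise(text):
        return Counter(stem(w) for w in text.lower().split())

    def cosine(q, d):
        dot = sum(q[t] * d.get(t, 0) for t in q)
        norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in d.values()))
        return dot / norm if norm else 0.0

    docs = ["pengajaran bahasa", "ajaran agama"]
    query = vectorise("ajar")
    for doc in docs:
        print(doc, round(cosine(query, vectorise(doc)), 3))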

UML Modeling for Instruction Pipeline Design

The Unified Modeling Language (UML) is one of the important modeling languages used for the visual representation of a research problem. In the present paper, a UML model is designed for the instruction pipeline, which is used to evaluate the instructions of software programs. The class and sequence diagrams are designed, and the performance is evaluated for the instructions of a sample program through a case study.
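
As a rough illustration of the kind of structure such a class diagram captures, the following Python sketch models a pipeline as a class holding a sequence of stage objects; the five stage names are the classic RISC stages and are assumptions for illustration, not taken from the paper.

    class Stage:
        def __init__(self, name):
            self.name = name
            self.instruction = None

    class InstructionPipeline:
        def __init__(self):
            self.stages = [Stage(n) for n in ("IF", "ID", "EX", "MEM", "WB")]

        def tick(self, next_instruction):
            # Shift every instruction one stage forward, then fetch a new one.
            for i in reversed(range(1, len(self.stages))):
                self.stages[i].instruction = self.stages[i - 1].instruction
            self.stages[0].instruction = next_instruction

    pipeline = InstructionPipeline()
    for instr in ["ADD R1,R2,R3", "LW R4,0(R1)", "SW R4,4(R1)"]:
        pipeline.tick(instr)
    print([(s.name, s.instruction) for s in pipeline.stages])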

Tagging by Combining a Rule-Based Method and Memory-Based Learning

Many natural language expressions are ambiguous and need to draw on other sources of information to be interpreted. Interpreting the word تعاون as a noun or a verb depends on the presence of contextual cues. To interpret words, we need to be able to discriminate between different usages. This paper proposes a hybrid of a rule-based method and a machine learning method for tagging Arabic words. Because of the particularity of the Arabic word, which may be composed of a stem plus affixes and clitics, a small number of rules dominates the performance (affixes include inflexional markers for tense, gender and number; clitics include some prepositions, conjunctions and others). Tagging is closely related to the notion of word class used in syntax. The method is based first on rules (which consider the post-position, the ending of a word, and patterns); the anomalies are then corrected by adopting a memory-based learning (MBL) method. Memory-based learning is an efficient method for integrating various sources of information and for handling exceptional data in natural language processing tasks. In a second step, the exceptional cases of the rules are checked, and more information is made available to the learner for treating those exceptional cases. To evaluate the proposed method, a number of experiments have been run in order to assess the contribution of the various sources of information to learning.
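
The following Python sketch illustrates the memory-based (instance-based) correction step in its simplest form: training instances are stored, and a new word receives the majority tag of its nearest stored neighbours. The feature encoding and the instances are invented for illustration and are not the paper's feature set.

    from collections import Counter

    def overlap_distance(a, b):
        return sum(1 for x, y in zip(a, b) if x != y)

    def mbl_tag(instances, features, k=3):
        """instances: list of (feature_tuple, tag) pairs held in memory."""
        nearest = sorted(instances, key=lambda inst: overlap_distance(inst[0], features))[:k]
        return Counter(tag for _, tag in nearest).most_common(1)[0][0]

    # Features: (prefix, suffix, previous tag) -- a hypothetical encoding.
    training = [(("al", "un", "PREP"), "NOUN"),
                (("ta", "na", "NOUN"), "VERB"),
                (("al", "at", "VERB"), "NOUN"),
                (("ya", "un", "PART"), "VERB")]
    print(mbl_tag(training, ("al", "un", "NOUN")))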

Effects of Multimedia-based Instructional Designs for Arabic Language Learning among Pupils of Different Achievement Levels

The purpose of this study is to investigate the effects of modality principles in instructional software on first grade pupils' achievement in learning the Arabic language. Two modes of instructional software were systematically designed and developed: audio with images (AI) and text with images (TI). A quasi-experimental design was used in the study. The sample consisted of 123 male and female pupils from the IRBED Education Directorate, Jordan. The pupils were randomly assigned to one of the two modes. The independent variables comprised the two modes of the instructional software, the students' achievement levels in the Arabic language class, and gender. The dependent variable was the pupils' achievement on the Arabic language test. The theoretical framework of this study was based on Mayer's Cognitive Theory of Multimedia Learning. Four hypotheses were postulated and tested. Analyses of variance (ANOVA) showed that pupils using the AI mode performed significantly better than those using the TI mode. This study concluded that the audio-with-images mode was an important aid to learning compared to the text-with-images mode.

The Semantic Web: A New Approach for the Future World Wide Web

The purpose of Semantic Web research is to transform the Web from a linked document repository into a distributed knowledge base and application platform, thus allowing the vast range of available information and services to be exploited more efficiently. As a first step in this transformation, languages such as OWL have been developed. Although fully realizing the Semantic Web still seems some way off, OWL has already been very successful and has rapidly become a de facto standard for ontology development in fields as diverse as geography, geology, astronomy, agriculture, defence and the life sciences. The aim of this paper is to classify the key concepts of the Semantic Web and to introduce a new practical approach that uses these concepts to outperform the conventional World Wide Web.

Research on Strategy for Automated Scaleless-Map Compilation

As a tool for human spatial cognition and thinking, the map has long played an important role; maps are perhaps as fundamental to society as language and the written word. Economic and social development requires extensive and in-depth understanding of our living environment, from the global scope down to urban housing. This has brought unprecedented opportunities and challenges for traditional cartography. This paper first proposes the concept of the scaleless map and its basic characteristics, based on an analysis of existing multi-scale representation techniques. Some strategies are then presented for automated map compilation. Taking into account the demands of automated map compilation, the paper proposes in detail four technical features that the software, the WJ workstation, must have: generalization operators, symbol primitives, dynamic annotation, and mapping process templates. This paper provides a more systematic idea and solution for improving the intelligence and automation of scaleless cartography.

On Analysis of the Boundedness Property for ECATNets by Using Rewriting Logic

To analyze the behavior of Petri nets, the accessibility graph and Model Checking are widely used. However, if the analyzed Petri net is unbounded, then the accessibility graph becomes infinite and Model Checking cannot be used, even for small Petri nets. ECATNets [2] are a category of algebraic Petri nets. The main feature of ECATNets is their sound and complete semantics based on rewriting logic [8] and its language Maude [9]. ECATNet analysis may be done using the accessibility analysis and Model Checking techniques defined in Maude. However, these two techniques supported by Maude likewise do not work with infinite-state systems. As a category of Petri nets, ECATNets can be unbounded and thus infinite-state systems. In order to know whether the accessibility analysis and Model Checking of Maude can be applied to an ECATNet, we propose in this paper an algorithm for detecting whether the ECATNet is bounded or not. Moreover, we propose a rewriting-logic-based tool implementing this algorithm. We show that the development of this tool using the Maude system is facilitated by the reflectivity of rewriting logic. Indeed, the self-interpretation of this logic allows us both to model an ECATNet and to act on it.
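
For orientation, the classical unboundedness test for ordinary Petri nets is the Karp-Miller style domination check: if, during exploration, a marking strictly dominates one of its ancestors, tokens can grow without bound. The Python sketch below shows that classical check only, as background; it is not the rewriting-logic algorithm or the Maude tool proposed in this paper, and the example net is invented.

    # Classical domination-based unboundedness check for an ordinary Petri net.
    def fire(marking, transition):
        pre, post = transition
        if all(marking.get(p, 0) >= n for p, n in pre.items()):
            new = dict(marking)
            for p, n in pre.items():
                new[p] -= n
            for p, n in post.items():
                new[p] = new.get(p, 0) + n
            return new
        return None

    def strictly_dominates(m1, m2):
        places = set(m1) | set(m2)
        return (all(m1.get(p, 0) >= m2.get(p, 0) for p in places)
                and any(m1.get(p, 0) > m2.get(p, 0) for p in places))

    def is_unbounded(initial, transitions, limit=1000):
        stack = [(initial, [])]                 # (marking, ancestors on this path)
        explored = 0
        while stack and explored < limit:
            marking, ancestors = stack.pop()
            explored += 1
            if any(strictly_dominates(marking, a) for a in ancestors):
                return True                     # strictly growing marking found
            for t in transitions:
                nxt = fire(marking, t)
                if nxt is not None:
                    stack.append((nxt, ancestors + [marking]))
        return False

    # One transition that only produces tokens in place "p": clearly unbounded.
    print(is_unbounded({"p": 0}, [({}, {"p": 1})]))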

Automatic Lip Contour Tracking and Visual Character Recognition for Computerized Lip Reading

Computerized lip reading has been one of the most actively researched areas of computer vision in the recent past because of its crime-fighting potential and its invariance to the acoustic environment. However, several factors such as fast speech, bad pronunciation, poor illumination, movement of the face, moustaches and beards make lip reading difficult. In the present work, we propose a solution for automatically tracking the lip contour and recognizing letters of the English language spoken by speakers, using only the information available from lip movements. The level set method with a contour velocity model is used for tracking the lip contour, and a feature vector of lip movements is then obtained. Character recognition is performed using a modified k-nearest-neighbour algorithm which assigns more weight to nearer neighbours. The proposed system achieves an accuracy of 73.3% for character recognition with the speaker's lip movements as the only input and without using any speech recognition system in parallel. The approach is found to serve the purpose of lip reading well when the size of the database is small.
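
The following Python sketch illustrates the distance-weighted k-NN classification step described above: each of the k nearest training vectors votes for its character with weight inversely proportional to its distance, so nearer neighbours count more. The feature vectors are toy values, not the lip-movement features produced by the level set tracker.

    import math
    from collections import defaultdict

    def weighted_knn(train, query, k=3):
        """train: list of (feature_vector, label) pairs."""
        nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
        votes = defaultdict(float)
        for vec, label in nearest:
            d = math.dist(vec, query)
            votes[label] += 1.0 / (d + 1e-9)      # nearer neighbours get larger weight
        return max(votes, key=votes.get)

    train = [((0.1, 0.9), "A"), ((0.2, 0.8), "A"), ((0.9, 0.1), "O"), ((0.8, 0.2), "O")]
    print(weighted_knn(train, (0.15, 0.85)))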

Efficient DTW-Based Speech Recognition System for Isolated Words of Arabic Language

Despite the fact that Arabic is currently one of the most widely spoken languages worldwide, there has been relatively little research on Arabic speech recognition compared to other languages such as English and Japanese. Digital speech processing and voice recognition algorithms are of special importance for designing efficient, accurate and fast automatic speech recognition systems. The speech recognition process carried out in this paper is divided into three stages. First, the signal is preprocessed to reduce noise effects, then digitized and hearingized, and the voice activity regions are segmented using a voice activity detection (VAD) algorithm. Second, features are extracted from the speech signal using the Mel-frequency cepstral coefficients (MFCC) algorithm; delta and acceleration (delta-delta) coefficients are added to improve the recognition accuracy. Finally, each test word's features are compared to the training database using the dynamic time warping (DTW) algorithm. Using the best set-up of all parameters affecting the aforementioned techniques, the proposed system achieves a recognition rate of about 98.5%, which outperforms other HMM- and ANN-based approaches available in the literature.
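
The following Python sketch shows the core DTW comparison used in the final stage: a test word's frame sequence is aligned against a stored template and the cumulative alignment cost is returned; the recognized word is then the template with the smallest cost. The frame vectors are toy numbers, not real MFCC features, and the local distance is a plain Euclidean distance chosen for illustration.

    import math

    def dtw_distance(seq_a, seq_b):
        n, m = len(seq_a), len(seq_b)
        INF = float("inf")
        cost = [[INF] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = math.dist(seq_a[i - 1], seq_b[j - 1])     # local frame distance
                cost[i][j] = d + min(cost[i - 1][j],          # insertion
                                     cost[i][j - 1],          # deletion
                                     cost[i - 1][j - 1])      # match
        return cost[n][m]

    template = [(1.0, 0.2), (1.1, 0.3), (0.2, 0.9)]
    test     = [(1.0, 0.2), (1.0, 0.25), (1.1, 0.3), (0.25, 0.85)]
    print(round(dtw_distance(template, test), 3))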

Trimmed Mean as an Adaptive Robust Estimator of a Location Parameter for Weibull Distribution

One of the purposes of robust estimation is to reduce the influence of outliers in the data on the estimates. Outliers arise from gross errors or from contamination by distributions with long tails. The trimmed mean is a robust estimate, meaning that it is not sensitive to violations of the distributional assumptions of the data. It is called an adaptive estimate when the trimming proportion is determined from the data rather than being fixed a priori. The main objective of this study is to find out the robustness properties of adaptive trimmed means in terms of efficiency, high breakdown point and influence function. Specifically, it seeks to find the magnitude of the trimming proportion of the adaptive trimmed mean that yields efficient and robust estimates of the parameter for data which follow a modified Weibull distribution with parameter λ = 1/2, where the trimming proportion is determined by a ratio of two trimmed means defined as the tail length. Secondly, the asymptotic properties of the tail length and the trimmed means are investigated. Finally, a comparison is made of the efficiency of the adaptive trimmed means, in terms of the standard deviation, for the adaptive trimming proportions and for trimming proportions fixed a priori. The asymptotic tail lengths, defined as the ratio of two trimmed means, and the asymptotic variances were computed using the derived formulas, while the standard deviations of the derived tail lengths for samples of size 40 simulated from a Weibull distribution were computed over 100 iterations using a computer program written in the Pascal language. The findings of the study revealed that the tail lengths of the Weibull distribution increase in magnitude as the trimming proportions increase; that the measure of the tail length and the adaptive trimmed mean are asymptotically independent as the number of observations n becomes very large (approaches infinity); that the tail length is asymptotically distributed as the ratio of two independent normal random variables; and that the asymptotic variances decrease as the trimming proportions increase. The simulation study revealed empirically that the standard error of the adaptive trimmed mean based on the ratio of tail lengths is smaller, for different values of the trimming proportion, than that of its counterpart with the trimming proportion fixed a priori.
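
The Python sketch below illustrates the general idea of an adaptive trimmed mean: a tail-length statistic formed as a ratio of two trimmed means drives the choice of trimming proportion. The two trimming levels used in the ratio and the mapping from tail length to trimming proportion are hypothetical assumptions for illustration, not the rule derived in the paper.

    import numpy as np

    def trimmed_mean(x, proportion):
        x = np.sort(np.asarray(x, dtype=float))
        k = int(len(x) * proportion)
        return x.mean() if k == 0 else x[k:-k].mean()

    def adaptive_trimmed_mean(x):
        # Tail length as a ratio of two trimmed means (illustrative levels).
        tail_length = trimmed_mean(x, 0.05) / trimmed_mean(x, 0.25)
        # Hypothetical rule: trim more heavily when the tail looks long.
        proportion = 0.10 if tail_length < 1.1 else 0.25
        return trimmed_mean(x, proportion), proportion

    rng = np.random.default_rng(0)
    sample = rng.weibull(0.5, size=40)     # shape parameter 1/2, as in the study
    print(adaptive_trimmed_mean(sample))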

Access Policy Specification for SCADA Networks

Efforts to secure supervisory control and data acquisition (SCADA) systems must be guided by sound security policies and supported by mechanisms to enforce them. Critical elements of the policy must be systematically translated into a format that can be used by policy enforcement components. Ideally, the goal is to ensure that the enforced policy is a close reflection of the specified policy. However, security controls commonly used to enforce policies in the IT environment were not designed to satisfy the specific needs of the SCADA environment. This paper presents a language, based on the well-known XACML framework, for the expression of authorization policies for SCADA systems.

An NxM Version of the 5x5 Playfair Cipher for Any Natural Language (Urdu as a Special Case)

In this paper a modified NxM version of the traditional 5x5 Playfair cipher is introduced, which enables the user to encrypt a message in any natural language by choosing a matrix size appropriate to the size of that language's alphabet. The 5x5 matrix can hold only the letters of the English alphabet (with I and J sharing a cell) and cannot store the characters of any language with a larger alphabet. The NxM matrix is introduced to overcome this limitation. The special case of the Urdu language is discussed, where # is used to complete an odd pair and * is used to separate repeated letters.
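
The following Python sketch shows the core Playfair encryption step over a grid built from an arbitrary alphabet. The example uses the classic 25-letter Latin grid for brevity; for Urdu the same routine would be used with an NxM grid large enough for the Urdu alphabet, with # padding odd pairs and * separating repeated letters as described above. The sketch assumes the alphabet length is a multiple of the chosen column count.

    def build_grid(alphabet, cols):
        rows = [alphabet[i:i + cols] for i in range(0, len(alphabet), cols)]
        pos = {ch: (r, c) for r, row in enumerate(rows) for c, ch in enumerate(row)}
        return rows, pos

    def encrypt_pair(a, b, rows, pos):
        (ra, ca), (rb, cb) = pos[a], pos[b]
        n, m = len(rows), len(rows[0])
        if ra == rb:                                   # same row: take letters to the right
            return rows[ra][(ca + 1) % m] + rows[rb][(cb + 1) % m]
        if ca == cb:                                   # same column: take letters below
            return rows[(ra + 1) % n][ca] + rows[(rb + 1) % n][cb]
        return rows[ra][cb] + rows[rb][ca]             # rectangle: swap columns

    rows, pos = build_grid("ABCDEFGHIKLMNOPQRSTUVWXYZ", cols=5)   # classic 5x5, I/J merged
    print(encrypt_pair("H", "E", rows, pos))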

Dissertation by Portfolio - A Break from Traditional Approaches

Much has been written about the difficulties students have with producing traditional dissertations. This includes both native English speakers (L1) and students with English as a second language (L2). The main emphasis of these papers has been on the structure of the dissertation, but in all cases, even when electronic versions are discussed, the dissertation is still in what most would regard as a traditional written form. Master of Science degrees in computing disciplines require students to gain technical proficiency and apply their knowledge to a range of scenarios. The premise of this paper is that if a dissertation is a means of showing that such a student has met the criteria for a pass, which should be based on the learning outcomes of the dissertation module, does meeting those outcomes require the student to demonstrate their skills in a solely text-based form, particularly in a highly technical research project? Could a student instead produce a series of related artifacts that form a cohesive package meeting the learning outcomes of the dissertation?

Transliterating Methods of the Kazakh Onyms in the Arabic Language

Transliteration is frequently used, especially in writing geographic denominations, personal names (onyms), etc. Proper names (onyms) of all languages must sound similar in translated works as well as in scientific projects and in works written in the mother tongue, because we become acquainted with a nation, its history, culture, traditions and other spiritual values through its onyms. It is therefore necessary to systematize the different transliterations of the onyms of foreign languages. This paper is dedicated to the problem of devising a scheme for transliterating Kazakh onyms into Arabic. In order to achieve this goal we use the scientific, or practical, type of transliteration. Because this type of transliteration provides easy reading and writing of the source language's texts in the target language without any diacritical symbols, it is limited by the target language's alphabetic system.
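
As a rough illustration of practical (table-driven) transliteration without diacritical symbols, the following Python sketch maps each source letter to a target-alphabet string. The few Kazakh-to-Arabic correspondences shown are invented placeholders and are not the scheme proposed in the paper.

    # Table-driven practical transliteration sketch; the mapping is illustrative only.
    MAPPING = {"а": "ا", "б": "ب", "т": "ت", "й": "ي", "қ": "ق"}

    def transliterate(word, mapping=MAPPING):
        # Letters missing from the table are kept unchanged so gaps stay visible.
        return "".join(mapping.get(ch, ch) for ch in word.lower())

    print(transliterate("Абай"))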