A Visual Control Flow Language and Its Termination Properties

This paper presents the visual control flow support of the Visual Modeling and Transformation System (VMTS), which facilitates composing complex model transformations from simple transformation steps and executing them. The VMTS Visual Control Flow Language (VCFL) uses stereotyped activity diagrams to specify control flow structures and OCL constraints to choose between different control flow branches. This work discusses the termination properties of VCFL and provides an algorithm to support the termination analysis of VCFL transformations.

Specifying a Timestamp-based Protocol For Multi-step Transactions Using LTL

Most concurrent transactional protocols consider serializability as the correctness criterion for transaction execution. Usually, the proof of serializability relies on mathematical arguments for a fixed, finite number of transactions. In this paper, we introduce a protocol that deals with an infinite number of transactions which are iterated infinitely often. We specify serializability of the transactions and the protocol using a specification language based on temporal logics. It is worthwhile to use temporal logics such as LTL (Linear-time Temporal Logic) to specify transactions, as this enables fully automatic verification with model checkers.
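
As a minimal, hypothetical illustration of the kind of property a model checker can verify (not the paper's actual specification), assume atomic propositions start_i, commit_i, abort_i for a transaction T_i and ts_i for its timestamp; two candidate LTL properties might then read:

\mathbf{G}\big(\mathit{start}_i \rightarrow \mathbf{F}(\mathit{commit}_i \lor \mathit{abort}_i)\big)
\mathbf{G}\big((\mathit{start}_i \land \mathit{start}_j \land ts_i < ts_j) \rightarrow (\lnot\,\mathit{commit}_j \mathbin{\mathbf{W}} \mathit{commit}_i)\big)

The first formula states that every started transaction eventually terminates; the second sketches a timestamp-ordering constraint on commits, with W denoting weak until. Both are generic illustrations rather than the protocol's own rules.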

The Relationship between Learners' Motivation (Integrative and Instrumental) and English Proficiency among Iranian EFL Learners

The current study aims at investigating the relationship between learners' integrative and instrumental motivation and English proficiency among Iranian EFL learners. The participants in this study consisted of 128 undergraduate university students, including 64 males and 64 females, majoring in English as a foreign language at Shiraz Azad University. Two research instruments were used to gather the data needed for this study: 1) a language proficiency test, and 2) a motivation scale that determines the type of the EFL learners' motivation. Correlation coefficients and t-tests were used to analyze the collected data, and the main result was as follows: there is a significant relationship between both integrative and instrumental motivation and English proficiency among EFL learners of Shiraz Azad University.

BugCatcher.Net: Detecting Bugs and Proposing Corrective Solutions

Although achieving a zero-defect software release is practically impossible, the software industry should take maximum care to detect defects/bugs well ahead of time, allowing only a bare minimum to creep into the released version. This clearly indicates that time plays an important role in bug detection. In addition, software quality is a major factor in the software engineering process. Moreover, early detection can be achieved only through static code analysis, as opposed to conventional testing. BugCatcher.Net is a static analysis tool which detects bugs in .NET® languages through MSIL (Microsoft Intermediate Language) inspection. The tool utilizes a parser based on finite state automata to carry out bug detection. After being detected, bugs need to be corrected immediately. BugCatcher.Net facilitates correction by proposing a corrective solution for reported warnings/bugs to end users with minimum side effects. The tool is also capable of analyzing the bug trend of a program under inspection.
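
The abstract does not disclose BugCatcher.Net's internal automata, so the following is only a minimal Python sketch of the general idea of finite-state bug pattern detection over a simplified, hypothetical instruction stream (the token names are invented, not real MSIL opcodes):

# Minimal sketch: a finite state automaton that flags a hypothetical
# "opened but never disposed" pattern in a simplified instruction stream.
# Token names are illustrative, not real MSIL opcodes.

TRANSITIONS = {
    ("start", "open_resource"): "opened",
    ("opened", "dispose"): "start",      # resource released: back to start
    ("opened", "method_end"): "bug",     # method ends while resource still open
}

def scan(tokens):
    """Run the FSA over a token stream and report warnings."""
    state = "start"
    warnings = []
    for position, token in enumerate(tokens):
        state = TRANSITIONS.get((state, token), state)
        if state == "bug":
            warnings.append((position, "resource opened but never disposed"))
            state = "start"
    return warnings

print(scan(["open_resource", "call", "method_end"]))
# [(2, 'resource opened but never disposed')]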

Query Optimization Techniques for XML Databases

Over the past few years, XML (eXtensible Mark-up Language) has emerged as the standard for information representation and data exchange over the Internet. This paper provides a kick-start for new researchers venturing into the field of XML databases. We survey storage representations for XML documents and review XML query processing and optimization techniques with respect to the particular storage instance. Various optimization techniques have been developed to solve query retrieval and update problems. In recent years, most researchers have proposed hybrid optimization techniques. Hybrid systems open the possibility of covering each technique's weaknesses with another's strengths. This paper reviews the advantages and limitations of these optimization techniques.

The Model of Blended Learning and Its Use at Foreign Language Teaching

This article considers the model of Blended Learning, its advantages for foreign language teaching, and some problems that can arise during its use. Blended Learning is a special organization of learning that combines classroom work with modern technologies in an electronic distance teaching environment. Nowadays many European educational institutions and companies use this technology. Through this method, students get the opportunity to learn in a group (classroom) with a teacher and, additionally, at home at a convenient time; students set the optimal speed and intensity of the learning process themselves; and the method helps students discipline themselves and learn to work independently.

Modeling Language for Machine Learning

For a given specific problem, the design of an efficient algorithm has traditionally been the matter of study. However, an alternative approach orthogonal to this one exists, called reduction. In general, for a given specific problem, the reduction approach studies how to convert the original problem into subproblems. This paper proposes a formal modeling language to support this reduction approach. We show three examples from the wide area of learning problems. The benefit is fast prototyping of algorithms for a given new problem.
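
The proposed modeling language itself is not reproduced in the abstract; purely as an illustration of the reduction idea, the following Python sketch converts a multiclass learning problem into binary subproblems (a standard one-vs-rest reduction) and recombines their answers. The learner choice and the toy data are placeholders.

# Illustrative reduction: decompose a multiclass problem into binary
# subproblems (one-vs-rest), solve each with any binary learner, and
# recombine the answers.
from sklearn.linear_model import LogisticRegression
import numpy as np

def reduce_to_binary(X, y, classes):
    """Train one binary classifier per class (class vs. rest)."""
    return {c: LogisticRegression().fit(X, (y == c).astype(int)) for c in classes}

def predict(models, X):
    """Combine subproblem outputs: pick the class with the highest score."""
    scores = {c: m.decision_function(X) for c, m in models.items()}
    classes = list(scores)
    return [classes[int(np.argmax([scores[c][i] for c in classes]))]
            for i in range(len(X))]

X = np.array([[0.0], [0.1], [1.0], [1.1], [2.0], [2.1]])
y = np.array([0, 0, 1, 1, 2, 2])
models = reduce_to_binary(X, y, classes=[0, 1, 2])
print(predict(models, X))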

Thai Prosody Problems with First Year Students

The Thai language is difficult in all four language skills, especially reading. First year students may have different reading abilities, so a teacher needs to find out each student's reading level in order to help and support them until they can develop and resolve each problem themselves. This research aims to study prosody problems among Thai students, focusing on first year students in the second semester. A total of 58 students were involved in this study. Four obstacles were found: 1) interpreting what they read and write, 2) incorrect pronunciation of prosody, 3) incorrect rhythm of the poem, and 4) incorrect pronunciation of the Thai poem.

A Proposed Framework for Visualization to Teach Computer Science

Computer programming is considered a very difficult course by many computer science students. The reasons for the difficulty include the cognitive load involved in programming, the different learning styles of students, the instructional methodology, and the choice of programming language. To reduce these difficulties, approaches such as pair programming, program visualization, and accommodating different learning styles have been tried. However, these efforts have produced limited success. This paper reviews the problem and proposes a framework to help students overcome the difficulties involved.

Database Modelling Using WSML in the Specification of a Banking Application

We demonstrate through a sample application, Ebanking, that the Web Service Modelling Language Ontology component can be used as a very powerful object-oriented database design language with logic capabilities. Its conceptual syntax allows the definition of class hierarchies, and its logic syntax allows the definition of constraints in the database. Relations, which are available for modelling relationships among three or more concepts, can be connected to logical expressions, allowing the implicit specification of database content. Using a reasoning tool, logic queries can also be made against the database in simulation mode.

Free-Form Query for Cell Phones

It is a challenge to provide a wide range of queries in database query systems for small mobile devices such as PDAs and cell phones. Currently, due to the physical and resource limitations of these devices, most reported database querying systems developed for them offer only a small set of predetermined queries that users can pose. This can be resolved by allowing free-form queries to be entered on the devices. Hence, a query language that does not restrict the combination of query terms entered by users is proposed. This paper presents the free-form query language and the method used to translate free-form queries into their equivalent SQL statements.
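
The abstract does not give the translation rules, so the following is only a toy Python sketch, assuming a single hypothetical table products(name, price), of how unrestricted keyword input might be mapped to an equivalent SQL statement:

# Toy sketch: map unrestricted keyword input to SQL for one hypothetical
# table products(name, price). A real free-form translator would need a
# schema-aware parser; this only illustrates the general idea.
import re

def to_sql(free_form: str) -> str:
    conditions = []
    # numeric comparison such as "price under 100" / "price over 50"
    m = re.search(r"price\s+(under|below|over|above)\s+(\d+)", free_form, re.I)
    if m:
        op = "<" if m.group(1).lower() in ("under", "below") else ">"
        conditions.append(f"price {op} {m.group(2)}")
    # remaining words are treated as name keywords
    words = [w for w in re.findall(r"[a-zA-Z]+", free_form)
             if w.lower() not in {"price", "under", "below", "over", "above"}]
    conditions += [f"name LIKE '%{w}%'" for w in words]
    where = " AND ".join(conditions) or "1=1"
    return f"SELECT * FROM products WHERE {where};"

print(to_sql("camera price under 300"))
# SELECT * FROM products WHERE price < 300 AND name LIKE '%camera%';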

Making Computer Learn Color

Color categorization is shared among the members of a society. This allows communication about color, especially when using a natural language such as English. Hence a sociable robot, in order to coexist with humans in human society, must also share this color categorization. To achieve this, much previous work has relied on modeling human color perception and on considerable mathematical complexity. In contrast, in this work, the computer acting as the robot's brain learns color categorization through interaction with humans, without much mathematical complexity.
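
As a hedged sketch of this interactive idea (the sample colors and labels below are invented; in the actual system they would come from human interaction), color categories can be formed simply by averaging human-labeled RGB samples and assigning new colors to the nearest prototype:

# Minimal sketch: the "robot brain" accumulates human-labeled RGB samples
# and categorizes new colors by the nearest labeled prototype.
import numpy as np

class ColorLearner:
    def __init__(self):
        self.samples = {}                      # color name -> list of RGB vectors

    def teach(self, name, rgb):
        self.samples.setdefault(name, []).append(np.array(rgb, float))

    def categorize(self, rgb):
        rgb = np.array(rgb, float)
        prototypes = {n: np.mean(v, axis=0) for n, v in self.samples.items()}
        return min(prototypes, key=lambda n: np.linalg.norm(prototypes[n] - rgb))

robot = ColorLearner()
robot.teach("red", (250, 20, 30)); robot.teach("red", (200, 10, 40))
robot.teach("blue", (20, 30, 240)); robot.teach("green", (30, 220, 40))
print(robot.categorize((180, 40, 60)))        # nearest prototype: "red"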

Named Entity Recognition using Support Vector Machine: A Language Independent Approach

Named Entity Recognition (NER) aims to classify each word of a document into predefined target named entity classes and is nowadays considered fundamental for many Natural Language Processing (NLP) tasks such as information retrieval, machine translation, information extraction, question answering systems and others. This paper reports on the development of a NER system for Bengali and Hindi using a Support Vector Machine (SVM). Though this state-of-the-art machine learning technique has been widely applied to NER in several well-studied languages, its use for Indian languages (ILs) is very new. The system makes use of the different contextual information of the words along with a variety of features that are helpful in predicting the four different named entity (NE) classes: Person name, Location name, Organization name and Miscellaneous name. We have used annotated corpora of 122,467 tokens of Bengali and 502,974 tokens of Hindi tagged with twelve different NE classes, defined as part of the IJCNLP-08 NER Shared Task for South and South East Asian Languages (SSEAL). In addition, we have manually annotated 150K wordforms of the Bengali news corpus, developed from the web archive of a leading Bengali newspaper. We have also developed an unsupervised algorithm to generate lexical context patterns from a part of the unlabeled Bengali news corpus. These lexical patterns have been used as SVM features in order to improve system performance. The NER system has been tested with gold standard test sets of 35K and 60K tokens for Bengali and Hindi, respectively. Evaluation results have demonstrated recall, precision, and f-score values of 88.61%, 80.12%, and 84.15%, respectively, for Bengali and 80.23%, 74.34%, and 77.17%, respectively, for Hindi. Results show an improvement in the f-score of 5.13% with the use of context patterns. A statistical analysis (ANOVA) is also performed to compare the performance of the proposed NER system with that of an existing HMM-based system for both languages.
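
The paper's exact feature set and SVM toolchain are not reproduced here; the following Python sketch, with a tiny invented English sample standing in for the Bengali/Hindi corpora, only illustrates the general recipe of contextual word features fed to a linear SVM:

# Sketch of the general SVM-NER recipe: contextual word features -> linear SVM.
# The tagged sentences are invented stand-ins with a deliberately small feature set.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

train = [[("John", "PER"), ("lives", "O"), ("in", "O"), ("Delhi", "LOC")],
         [("Mary", "PER"), ("visited", "O"), ("Google", "ORG")]]

def features(sent, i):
    word = sent[i][0]
    return {"word": word.lower(), "is_title": word.istitle(),
            "prev": sent[i - 1][0].lower() if i else "<s>",
            "next": sent[i + 1][0].lower() if i + 1 < len(sent) else "</s>"}

X = [features(s, i) for s in train for i in range(len(s))]
y = [tag for s in train for _, tag in s]

vec = DictVectorizer()
clf = LinearSVC().fit(vec.fit_transform(X), y)

test = [("Sita", "?"), ("works", "?"), ("in", "?"), ("Mumbai", "?")]
print(clf.predict(vec.transform([features(test, i) for i in range(len(test))])))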

A Quantitative Approach to Strategic Design of Component-Based Business Process Models

A new paradigm for software design and development models software by its business process, translates the model into a process execution language, and has it run by a supporting execution engine. This process-oriented paradigm promotes modeling of software by less technical users or business analysts as well as rapid development. Since business process models may be shared by different organizations and sometimes even by different business domains, it is interesting to apply a technique used in traditional software component technology to the design of reusable business processes. This paper discusses an approach that applies a technique for software component fabrication to the design of process-oriented software units, called process components. These process components result from decomposing a business process of a particular application domain into subprocesses, with the aim that the process components be reusable in different process-based software models. The approach is quantitative because the quality of a process component design is measured from technical features of the process components. The approach is also strategic because the measured quality is assessed against business-oriented component management goals. A software tool has been developed to measure how good a process component design is according to the required managerial goals and in comparison to other designs. We also discuss how we benefit from reusable process components.
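
The paper's actual quality metrics are not stated in the abstract; purely as an invented illustration of scoring a decomposition against a managerial goal, a measure could weigh internal cohesion against coupling between process components, as in the Python sketch below (components, dependencies, and weights are all hypothetical):

# Toy sketch: score a candidate decomposition of a business process into
# process components by internal cohesion vs. external coupling.
components = {
    "CheckCredit": {"tasks": ["fetch_score", "evaluate_score"]},
    "ApproveLoan": {"tasks": ["review", "sign_off", "notify"]},
}
# directed task dependencies (from_task, to_task)
dependencies = [("fetch_score", "evaluate_score"),
                ("evaluate_score", "review"),
                ("review", "sign_off"), ("sign_off", "notify")]

def owner(task):
    return next(name for name, c in components.items() if task in c["tasks"])

def score(goal_weight_reuse=0.7):
    internal = sum(owner(a) == owner(b) for a, b in dependencies)
    external = len(dependencies) - internal
    cohesion = internal / len(dependencies)
    coupling = external / len(dependencies)
    # a higher reuse-oriented goal weight rewards low coupling more strongly
    return goal_weight_reuse * (1 - coupling) + (1 - goal_weight_reuse) * cohesion

print(f"design quality: {score():.2f}")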

A New Vector Quantization Front-End Process for Discrete HMM Speech Recognition System

The paper presents a complete discrete statistical framework based on a novel vector quantization (VQ) front-end process. This new VQ approach performs an optimal distribution of VQ codebook components over HMM states. This technique, which we name distributed vector quantization (DVQ) of hidden Markov models, succeeds in unifying acoustic micro-structure and phonetic macro-structure when HMM parameters are estimated. The DVQ technique is implemented through two variants. The first variant uses the K-means algorithm (K-means-DVQ) to optimize the VQ, while the second variant exploits the classification behavior of neural networks (NN-DVQ) for the same purpose. The proposed variants are compared with an HMM-based baseline system in experiments on the recognition of specific Arabic consonants. The results show that the distributed vector quantization technique increases the performance of the discrete HMM system.
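
The DVQ distribution of codebook components over HMM states is not detailed in the abstract; the Python sketch below shows only the conventional K-means VQ front-end that such a system builds on, turning continuous feature vectors into discrete observation symbols (random vectors stand in for real acoustic features, and the codebook size is arbitrary):

# Sketch of a plain K-means VQ front-end for a discrete HMM: continuous
# acoustic feature vectors are mapped to discrete codebook indices.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
train_frames = rng.normal(size=(500, 13))     # stand-in for MFCC vectors

codebook = KMeans(n_clusters=32, n_init=10, random_state=0).fit(train_frames)

def quantize(frames):
    """Map each feature frame to the index of its nearest codeword."""
    return codebook.predict(frames)

utterance = rng.normal(size=(40, 13))
symbols = quantize(utterance)                 # discrete observation sequence
print(symbols[:10])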

An HCI Template for Distributed Applications

Both software applications and their development environments are becoming more and more distributed. This trend impacts not only the way software computes, but also how it looks. This article proposes a Human Computer Interface (HCI) template derived from three representative applications we have developed: a Multi-Agent System based application, a 3D Internet computer game with distributed game world logic, and a programming language environment used in constructing distributed neural networks and their visualizations. HCI concepts that are common to these applications are described in abstract terms in the template. These include off-line presentation of global entities, entities inside a hierarchical namespace, communication and languages, reconfiguration of entity references in a graph, impersonation and access rights, etc. We believe that the metaphor underlying an HCI concept, as well as the relationships among a set of HCI concepts, are crucial to the design of software systems, and vice versa.

Key Frames Extraction for Sign Language Video Analysis and Recognition

In this paper we propose a method for finding video frames representing one sign of the finger alphabet. The method is based on determining hand location, segmentation, and the use of standard video quality evaluation metrics. Metric calculation is performed only in regions of interest. A sliding mechanism for finding local extrema and an adaptive threshold based on local averaging are used for key frame selection. The success rate is evaluated by recall, precision and the F1 measure. The method's effectiveness is compared with applying the metrics to all frames. The proposed method is fast, effective and relatively easy to realize through simple input video preprocessing and the subsequent use of tools designed for video quality measurement.
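
As a minimal sketch of the selection step only (hand detection, segmentation and the actual quality metrics are omitted, and the metric values below are invented), local maxima of a per-frame metric can be picked with a sliding window and an adaptive threshold derived from local averaging:

# Sketch of the key-frame selection step: pick frames whose metric value is
# a local maximum inside a sliding window and exceeds a threshold derived
# from the local average.
import numpy as np

def select_key_frames(metric, window=5, factor=1.2):
    """Return indices of frames that are local maxima above an adaptive threshold."""
    half = window // 2
    keys = []
    for i in range(half, len(metric) - half):
        local = metric[i - half:i + half + 1]
        threshold = factor * np.mean(local)        # adaptive, from local averaging
        if metric[i] == np.max(local) and metric[i] > threshold:
            keys.append(i)
    return keys

metric = np.array([0.1, 0.2, 0.9, 0.3, 0.2, 0.1, 0.2, 0.8, 0.2, 0.1, 0.1])
print(select_key_frames(metric))                   # indices of candidate key frames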

Novelist Calls Out Poemist: A Psycholinguistic and Contrastive Analysis of the Errors in Turkish EFL Learners' Interlanguage

This study is designed to investigate errors that emerged in written texts produced by 30 Turkish EFL learners, from an explanatory and thus qualitative perspective. Erroneous language elements were identified by the researcher first, and then their grammaticality and intelligibility were checked by five native speakers of English. The analysis of the data showed that it is difficult to claim that an error stems from a single factor, since different features of an error are triggered by different factors. Our findings revealed two types of errors: those which stem from the interference of L1 with L2 and those which are developmental. The former type contains more global errors, whereas the errors of the latter type are more intelligible.

A Multilingual Virtual Simulated Patient Framework for Training Primary Health Care Students

This paper describes the Multilingual Virtual Simulated Patient framework. It has been created to train the social skills and test the knowledge of primary health care medical students. The framework generates conversational agents which act in several languages as virtual simulated patients, helping to improve the communication and diagnosis skills of the students and complementing their training process.

Detecting Interactions between Behavioral Requirements with OWL and SWRL

High quality requirements analysis is one of the most crucial activities for ensuring the success of a software project, so requirements verification is becoming more and more important in Requirements Engineering (RE) and is one of the most helpful strategies for improving the quality of a software system. Related work shows that requirements elicitation and analysis can be facilitated by ontological approaches and semantic web technologies. In this paper, we propose a hybrid method which aims to verify requirements with structural and formal semantics in order to detect interactions. The proposed method is twofold: one part models requirements with the semantic web language OWL to construct a semantic context; the other is a set of interaction detection rules which are derived from scenario-based analysis and represented with the Semantic Web Rule Language (SWRL). The SWRL-based rules work with rule engines such as Jess to reason over the semantic context of the requirements and thus detect interactions. The benefits of the proposed method lie in three aspects: the method (i) provides systematic steps for modeling requirements with an ontological approach, (ii) offers synergy between requirements elicitation and domain engineering for knowledge sharing, and (iii) the proposed rules can systematically assist in requirements interaction detection.
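
OWL, SWRL and Jess are not reproduced here; the following toy Python sketch merely illustrates the flavor of rule-based interaction detection, with one hypothetical rule flagging requirements that drive the same device to contradictory actions:

# Toy illustration of rule-based interaction detection (not OWL/SWRL/Jess):
# requirements are facts, and a rule flags pairs that assign conflicting
# actions to the same device. The requirement facts are invented.
requirements = [
    {"id": "R1", "condition": "smoke_detected", "device": "window", "action": "open"},
    {"id": "R2", "condition": "rain_detected",  "device": "window", "action": "close"},
    {"id": "R3", "condition": "night",          "device": "light",  "action": "off"},
]

def conflicting_action_rule(reqs):
    """Flag pairs of requirements that drive the same device to contradictory actions."""
    conflicts = []
    for i, a in enumerate(reqs):
        for b in reqs[i + 1:]:
            if a["device"] == b["device"] and a["action"] != b["action"]:
                conflicts.append((a["id"], b["id"], a["device"]))
    return conflicts

print(conflicting_action_rule(requirements))
# e.g. [('R1', 'R2', 'window')]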