Exploring Perceptions and Practices About Information and Communication Technologies in Business English Teaching in Pakistan

Language reform and the potential use of ICTs have been focal areas for the Higher Education Commission of Pakistan. Efforts are being accelerated to incorporate fast-expanding ICTs to bring qualitative improvement to language instruction in higher education. This paper explores how university teachers benefit from ICTs to make their English classes effective and what problems they face in using ICTs during their lectures. An in-depth qualitative study was employed to understand why language teachers tend to use ICTs in their instruction and how they practice them. A sample of twenty teachers from five universities located in Islamabad, three from the public sector and two from the private sector, was selected on a non-random (snowball) sampling basis. An interview with 15 semi-structured items was used as the research instrument to collect data. The findings reveal that business English teaching is facilitated and improved through the use of ICTs, and that language teachers need special training in the practice and implementation of ICTs. It is recommended that initiatives be taken to equip university language teachers with modern methodology incorporating ICTs as a focal area, and that efforts be made to remove barriers to the training of language teachers and the proper use of ICTs.

Learning Bridge: A Reading Comprehension Platform with Rich Media

A Reading Comprehension (RC) Platform has been constructed and developed to facilitate children's English reading comprehension. Like a learning bridge, the RC Platform focuses on the integration of rich media and picture-book texts. This study examines the effects of the project within the RC Platform on children. Two classes of fourth graders were selected from a public elementary school in an urban area of central Taiwan. The survey findings showed that the students demonstrated high interest in the RC Platform; they benefited greatly from, and enjoyed reading via, the technology-enhanced project within the RC Platform. The Platform is a good reading bridge that enriches students' learning experiences and enhances their performance in English reading comprehension.

A Thai to English Machine Translation System Using Thai LFG Tree Structure as Interlingua

Machine translation (MT) between the Thai and English languages has been a challenging research topic in natural language processing. Most research has been done on English to Thai machine translation, but not the other way around. This paper presents a Thai to English machine translation system that translates a Thai sentence into an interlingua, a Thai LFG tree, using LFG grammar and a bottom-up parser. The Thai LFG tree is then transformed into the corresponding English LFG tree by pattern matching and node transformation. Finally, an equivalent English sentence is generated using the structural information prescribed by the English LFG tree. Experiments designed to evaluate the performance of the proposed system show that it is effective in providing useful translations from Thai to English.
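
For illustration, here is a minimal sketch of the tree-transfer step in Python, assuming a simplified LFG node representation; the labels, the single reordering rule, and the English glosses standing in for Thai words are hypothetical, not the paper's actual grammar.

```python
# Hypothetical transfer sketch: Thai places adjectives after the noun,
# English places them before, so an NP node's children are reordered.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LFGNode:
    label: str                      # e.g. "S", "NP", "N", "ADJ"
    word: str = ""                  # surface word for leaf nodes
    children: List["LFGNode"] = field(default_factory=list)

def transfer(node: LFGNode) -> LFGNode:
    """Recursively map a (toy) Thai LFG tree to its English counterpart."""
    children = [transfer(c) for c in node.children]
    if node.label == "NP" and [c.label for c in children] == ["N", "ADJ"]:
        children = [children[1], children[0]]   # N ADJ -> ADJ N
    return LFGNode(node.label, node.word, children)

def linearize(node: LFGNode) -> str:
    """Read the sentence off the leaves of the tree."""
    if not node.children:
        return node.word
    return " ".join(linearize(c) for c in node.children)

# Thai order "cat black" (glossed in English) becomes "black cat".
thai_np = LFGNode("NP", children=[LFGNode("N", "cat"), LFGNode("ADJ", "black")])
print(linearize(transfer(thai_np)))   # black cat
```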

Automatic Text Summarization

This work proposes an approach to automatic text summarization. The approach is a trainable summarizer that takes into account several features of each sentence, including sentence position, positive keywords, negative keywords, sentence centrality, sentence resemblance to the title, inclusion of named entities, inclusion of numerical data, relative sentence length, the bushy path of the sentence, and aggregated similarity, to generate summaries. We first investigate the effect of each sentence feature on the summarization task. We then use a score function over all features to train genetic algorithm (GA) and mathematical regression (MR) models and obtain a suitable combination of feature weights. The performance of the proposed approach is measured at several compression rates on a data corpus composed of 100 English religious articles. The results of the proposed approach are promising.
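
As an illustration of the scoring stage, the sketch below combines per-sentence feature values with learned weights; the feature names mirror those listed above, but the uniform weights, toy inputs, and the linear score function are assumptions, not the trained GA/MR models.

```python
# Sketch of extractive scoring: rank sentences by a weighted feature sum.
from typing import Dict, List

FEATURES = ["position", "positive_kw", "negative_kw", "centrality",
            "title_sim", "named_entity", "numeric", "rel_length",
            "bushy_path", "agg_similarity"]

def score(sent_features: Dict[str, float], weights: Dict[str, float]) -> float:
    # Linear combination of normalized feature values.
    return sum(weights[f] * sent_features.get(f, 0.0) for f in FEATURES)

def summarize(sentences: List[str], feats: List[Dict[str, float]],
              weights: Dict[str, float], compression: float = 0.3) -> List[str]:
    k = max(1, int(len(sentences) * compression))
    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(feats[i], weights), reverse=True)
    keep = sorted(ranked[:k])          # restore original document order
    return [sentences[i] for i in keep]

# Toy demo: two sentences, uniform weights, keep the top 50%.
w = {f: 1.0 for f in FEATURES}
sents = ["Intro sentence.", "Key finding sentence."]
feats = [{"position": 1.0}, {"position": 0.5, "title_sim": 1.0, "centrality": 0.8}]
print(summarize(sents, feats, w, compression=0.5))   # ['Key finding sentence.']
```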

Identification of Printed Punjabi Words and English Numerals Using Gabor Features

Script identification is one of the challenging steps in the development of optical character recognition systems for bilingual or multilingual documents. In this paper, an attempt is made to identify English numerals at the word level in Punjabi documents using Gabor features. A support vector machine (SVM) classifier with five-fold cross-validation is used to classify the word images. The results obtained are quite encouraging: average accuracy with the RBF, polynomial, and linear kernel functions exceeds 99%.
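
A minimal sketch of the classification stage using scikit-learn follows; it assumes Gabor feature vectors have already been extracted from the word images and uses random placeholder data in their place.

```python
# Sketch: SVM with five-fold cross-validation over three kernels,
# on placeholder "Gabor feature" vectors.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 48))        # 48-dim feature vectors (placeholder)
y = rng.integers(0, 2, size=200)      # 0 = Punjabi word, 1 = English numerals

for kernel in ("rbf", "poly", "linear"):
    scores = cross_val_score(SVC(kernel=kernel), X, y, cv=5)
    print(kernel, scores.mean())
```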

Information Filtering using Index Word Selection based on Topics

We propose an information filtering system that selects index words from a document set based on the topics the set contains. In information filtering, a document is often represented as a vector whose elements are the weights of index words, and the dimension of this vector grows as the number of documents increases, so words that are useless as index words may be included. To address this problem, the dimension needs to be reduced. Our proposal reduces the dimension by selecting index words based on the topics included in the document set; these topics are obtained by applying Sparse Non-negative Matrix Factorization to the document set, which narrows the selection down to the particularly characteristic words. Filtering is carried out based on the centroid of the learning document set, which is regarded as the user's interest and is represented as a document vector whose elements are the weights of the selected index words. Using the English test collection MEDLINE, we confirm the effectiveness of our proposal: when an appropriate number of index words is selected, recommendation accuracy improves over previous methods. In addition, examining the selected index words, we found that our proposal selects index words covering some of the minor topics included in the document set.
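
The pipeline might be sketched as follows, assuming a TF-IDF term-document representation; the toy corpus, topic count, and sparsity settings are illustrative and do not reproduce the paper's configuration.

```python
# Sketch: sparse NMF topics -> index word selection -> centroid filtering.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "gene expression in protein binding",
    "protein folding and gene regulation",
    "clinical trial of a new drug",
    "drug dosage in clinical treatment",
    "neuron activity in brain imaging",
    "brain imaging of neuron networks",
]
vec = TfidfVectorizer()
X = vec.fit_transform(docs)              # documents x terms

# Factorize with an L1 penalty on the topic-term matrix to encourage sparsity.
nmf = NMF(n_components=3, init="nndsvda", alpha_H=0.1, l1_ratio=1.0, max_iter=500)
W = nmf.fit_transform(X)                 # documents x topics
H = nmf.components_                      # topics x terms

# Select index words: the highest-weighted terms of each topic.
terms = np.array(vec.get_feature_names_out())
idx = sorted({i for topic in H for i in np.argsort(topic)[-4:]})
print("selected index words:", terms[idx])

# Filter against the centroid of the learning set (the user's interest),
# represented over the selected index words only.
X_sel = X[:, idx].toarray()
centroid = X_sel.mean(axis=0)
def relevance(doc_vec: np.ndarray) -> float:
    denom = np.linalg.norm(doc_vec) * np.linalg.norm(centroid) + 1e-12
    return float(doc_vec @ centroid) / denom
```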

Fingerprint Identification using Discretization Technique

Fingerprint-based identification is a well-known biometric approach in the area of pattern recognition and has long been studied for its important role in forensic science, where it assists the criminal justice community. In this paper, we propose an identification framework for individuals by means of fingerprints. Unlike most conventional fingerprint identification frameworks, the extracted geometrical element features (GEFs) go through a discretization process. The intention of discretization in this study is to obtain unique features that reflect individual variance and thus discriminate one person from another. Previously, discretization has been shown to be particularly effective for identification of English handwriting, with an accuracy of 99.9%, and for discrimination of twins' handwriting, with an accuracy of 98%. Due to its high discriminative power, this method is adopted into the framework as an independent method for assessing the accuracy of fingerprint identification. The experimental results show that the identification accuracy of the proposed system using discretization is 100% for FVC2000, 93% for FVC2002, and 89.7% for FVC2004, which is much better than conventional, existing fingerprint identification systems (72% for FVC2000, 26% for FVC2002, and 32.8% for FVC2004). The results indicate that the discretization approach boosts classification effectively and is therefore likely to be suitable for other biometric features besides handwriting and fingerprints.
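
As a rough illustration of the discretization idea, the sketch below cuts continuous feature values into equal-width bins and matches a query by its discrete code; the bin count, placeholder features, and Hamming-style matching are assumptions, not the paper's actual GEF processing.

```python
# Sketch: equal-width binning of continuous features into discrete codes.
import numpy as np

def discretize(features: np.ndarray, n_bins: int = 8) -> np.ndarray:
    """Map each feature column into integer bin indices 0..n_bins-1."""
    lo, hi = features.min(axis=0), features.max(axis=0)
    width = (hi - lo) / n_bins
    bins = np.floor((features - lo) / np.where(width == 0, 1, width)).astype(int)
    return np.clip(bins, 0, n_bins - 1)

# Two enrolled feature vectors and one query (placeholder data).
gallery = np.array([[0.12, 3.4, 7.7], [0.95, 1.1, 2.3]])
query = np.array([[0.10, 3.5, 7.5]])
codes = discretize(np.vstack([gallery, query]))

# Identify by the nearest discrete code (Hamming-style match).
match = np.argmin((codes[:-1] != codes[-1]).sum(axis=1))
print("matched identity:", match)   # 0
```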

Convergence and Divergence in Telephone Conversations: A Case of Persian

People usually have a telephone voice, meaning that they adjust their speech to fit particular situations and to blend in with other interlocutors. The question is: do we speak differently to different people? This possibility has been suggested by social psychologists within Accommodation Theory [1]. Converging toward the speech of another person can be regarded as a polite speech strategy, while choosing a language not used by the other interlocutor can be considered the clearest example of speech divergence [2]. The present study sets out to investigate such processes in the course of everyday telephone conversations. Using Joos's [3] model of formality in spoken English, the researchers explore convergence to or divergence from the addressee. The results show that lexical choice, and consequently patterns of style, vary intriguingly according to the person being addressed.

How Valid Are Our Language Test Interpretations? A Demonstrative Example

Validity is an overriding consideration in language testing. If a test score is to be used for a particular purpose, that use must be supported by empirical evidence. This article addresses the validity of a multiple-choice achievement test (MCT) administered at the end of each semester to decide about students' mastery of a course in general English. To provide empirical evidence pertaining to the validity of this test, two criterion measures were used: a Cloze test and a C-test, both reported to gauge general English proficiency. The results of the analyses show a statistically significant correlation among participants' scores on the MCT, the Cloze test, and the C-test. Drawing on the findings of the study, it can be cautiously deduced that these tests measure the same underlying trait. However, allowing for the limitations of using criterion measures to validate tests, we cannot make any absolute claim as to the validity of this MCT.
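
The criterion-related analysis amounts to correlating the MCT scores with each criterion measure; a minimal sketch with placeholder scores:

```python
# Sketch: Pearson correlation of MCT scores with two criterion measures.
from scipy.stats import pearsonr

mct   = [72, 65, 88, 54, 91, 77]     # placeholder scores
cloze = [70, 60, 85, 58, 89, 75]
ctest = [68, 63, 90, 50, 92, 74]

for name, criterion in (("Cloze", cloze), ("C-test", ctest)):
    r, p = pearsonr(mct, criterion)
    print(f"MCT vs {name}: r={r:.2f}, p={p:.3f}")
```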

Humanoid Personalized Avatar Through Multiple Natural Language Processing

There has been growing interest in implementing humanoid avatars in networked virtual environments. However, most existing avatar communication systems do not take avatars' social backgrounds into consideration. This paper proposes a novel humanoid avatar animation system that represents the personalities and facial emotions of avatars based on culture, profession, mood, age, taste, and so forth. We extract semantic keywords from the input text through natural language processing, and the animations of personalized avatars are then retrieved and displayed according to the order of the keywords. Our primary work is focused on giving avatars runtime instructions from multiple natural languages. Experiments with Chinese, Japanese, and English input based on the prototype show that interactive avatar animations can be displayed in real time and made available online. This system provides a more natural and interesting means of human communication and is therefore expected to be used for cross-cultural communication, multiuser online games, and other entertainment applications.
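
The keyword-to-animation lookup might be sketched as follows; the keyword extraction stand-in, animation names, and mapping are illustrative, not the system's actual NLP pipeline.

```python
# Sketch: retrieve and order animations by extracted semantic keywords.
ANIMATIONS = {
    "hello": "wave_arm", "bow": "formal_bow", "happy": "smile",
    "angry": "frown", "goodbye": "wave_both",
}

def extract_keywords(text: str) -> list[str]:
    # Stand-in for the NLP stage: keep words that have a known animation.
    return [w for w in text.lower().split() if w in ANIMATIONS]

def animate(text: str) -> list[str]:
    # Animations are played in the order the keywords appear in the text.
    return [ANIMATIONS[k] for k in extract_keywords(text)]

print(animate("Hello everyone, I am happy to see you"))  # ['smile']
print(animate("hello and goodbye"))                      # ['wave_arm', 'wave_both']
```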

Teaching English under the LMD Reform: The Algerian Experience

Since its independence in 1962, Algeria has struggled to establish an educational system tailored to the needs of its population. Given its historical connection with France, Algeria regarded the French language as a cultural imperative until late in the seventies. After the Arabization policy of 1971 and the socioeconomic changes taking place worldwide, the use of English as a vehicle of communication started to gain more space within globalized Algeria. Consequently, the prevalence of French began to fade, leaving more space for the teaching of English as a second foreign language. Moreover, the introduction of the Bologna Process and the European Credit Transfer System in higher education has necessitated innovations in the design and development of new curricula adapted to the socioeconomic market. In this paper, I will highlight the important historical steps Algeria has taken towards the implementation of an English language methodology and the status English has acquired, from second foreign language, to first foreign language, to "the language of knowledge and sciences". I will also propose new pedagogical perspectives for a better treatment of the English language in order to encourage independent and autonomous learning.

The Design of English Materials to Communicate the Identity of Amphawa District, Samut Songkram Province, for Sustainable Tourism

The main purpose of this research was to study how to communicate the identity of the Amphawa district, Samut Songkram province, for sustainable tourism. Qualitative data were collected by studying related materials, exploring the area, and conducting in-depth interviews with three groups of people: three directly responsible officers who were key informants for the district, twenty foreign tourists, and five Thai tourist guides. A content analysis was used to analyze the qualitative data. The two main findings of the study were as follows: 1. The identity of the Amphawa district, Samut Songkram province, is the area administered by the Amphawa subdistrict (sub-municipality), headed by the Amphawa mayor. The district serves as a resort for local people and tourists visiting the area near the Maekong River and offers rest accommodations; along the river there are restaurants serving food and drinks, rich mangrove forests, a learning center, and fireflies and cork trees. The Amphawa district honors and commemorates King Rama II and is where the greatest number of fireflies and cork trees can be seen in Thailand from May to October each year. 2. The communication of the identity of the Amphawa district, Samut Songkram province, which the researcher was able to design and present in English materials, can be summed up in five items: 1) the history of the Amphawa district, Samut Songkram province; 2) the history of King Rama II Memorial Park; 3) the identity of the Amphawa floating market; 4) the learning center for the ecosystem of fireflies and cork trees; and 5) how to preserve the Amphawa district, Samut Songkram province, for sustainable tourism.

Automatic Building of an Extensive Arabic FA Terms Dictionary

Field association (FA) terms are a limited set of discriminating terms that provide the knowledge to identify document fields; they are effective in document classification, similar-file retrieval, and passage retrieval. The problem lies in the lack of an effective method to automatically extract relevant Arabic FA terms to build a comprehensive dictionary. Moreover, all previous studies are based on FA terms in English and Japanese, and extending FA terms to other languages such as Arabic would definitely strengthen further research. This paper presents a new method to extract Arabic FA terms from domain-specific corpora using part-of-speech (POS) pattern rules and corpora comparison. Experimental evaluation was carried out for 14 different fields using 251 MB of domain-specific corpora obtained from Arabic Wikipedia dumps and Alhyah news, selecting an average of 2,825 FA terms (single and compound) per field. From the experimental results, recall and precision are 84% and 79%, respectively. The method therefore selects a large number of relevant Arabic FA terms with high precision and recall.
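
The POS-pattern extraction step can be sketched as below, assuming a POS-tagged corpus; the English tokens, tag set, and patterns are illustrative stand-ins for the paper's Arabic rules.

```python
# Sketch: collect candidate FA terms whose tag sequences match POS patterns.
tagged = [("economic", "ADJ"), ("growth", "NOUN"), ("rose", "VERB"),
          ("interest", "NOUN"), ("rate", "NOUN")]

# Pattern rules: tag sequences that qualify as candidate FA terms.
PATTERNS = [("NOUN",), ("ADJ", "NOUN"), ("NOUN", "NOUN")]

def extract_candidates(tokens):
    candidates = set()
    tags = [t for _, t in tokens]
    for i in range(len(tokens)):
        for pat in PATTERNS:
            if tuple(tags[i:i + len(pat)]) == pat:
                candidates.add(" ".join(w for w, _ in tokens[i:i + len(pat)]))
    return candidates

print(extract_candidates(tagged))
# Corpora comparison would then rank candidates by how much more frequent
# they are in the field-specific corpus than in a general corpus.
```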

Arabic Word Semantic Similarity

This paper is concerned with the production of an Arabic word semantic similarity (WSS) benchmark dataset. It is the first of its kind for Arabic, developed specifically to assess the accuracy of word semantic similarity measurements. Semantic similarity is an essential component of numerous applications in fields such as natural language processing, artificial intelligence, linguistics, and psychology. Most of the reported work has been done for English; to the best of our knowledge, there is no word similarity measure developed specifically for Arabic. In this paper, an Arabic benchmark dataset of 70 word pairs is presented. New methods and the best available techniques have been used in this study to produce the Arabic dataset, including selecting and creating materials, collecting human ratings from a representative sample of participants, and calculating the overall ratings. This dataset will make a substantial contribution to future work in the field of Arabic WSS and will hopefully be considered a reference basis from which to evaluate and compare different methodologies in the field.

Using Heuristic Rules from Sentence Decomposition of Experts' Summaries to Detect Students' Summarizing Strategies

Summarizing skills have been introduced into the English syllabus of secondary schools in Malaysia to evaluate students' comprehension of a given text; producing a summary requires students to employ several strategies. This paper reports on our effort to develop a computer-based summarization assessment system that detects the strategies used by students in producing their summaries. Sentence decomposition of expert-written summaries is used to analyze how experts produce their summary sentences. From the analysis, we identified seven summarizing strategies and their rules, which are then transformed into a set of heuristic rules for determining which strategies were used. We developed an algorithm based on the heuristic rules and performed experiments to evaluate and support the proposed technique.
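
As an illustration, the sketch below applies simple heuristic rules to label the strategy behind one summary sentence; the strategy names and thresholds are hypothetical, not the seven rules derived in the paper.

```python
# Sketch: rule-based strategy detection by comparing a summary sentence
# against its best-matching source sentence.
def detect_strategy(summary_sent: str, source_sent: str) -> str:
    s, src = summary_sent.lower().split(), source_sent.lower().split()
    if s == src:
        return "copy"                      # sentence reproduced verbatim
    if set(s) <= set(src):
        return "deletion"                  # words removed, none added
    overlap = len(set(s) & set(src)) / max(len(set(s)), 1)
    if overlap >= 0.5:
        return "paraphrase"                # substantial shared wording
    return "invention"                     # mostly new wording

print(detect_strategy("The cat sat", "The cat sat on the mat"))  # deletion
```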

Do C-Test and Cloze Procedure Measure What They Purport to Be Measuring? A Case of Criterion-Related Validity

This article investigated the validity of the C-test and Cloze test, which purport to measure general English proficiency. To provide empirical evidence pertaining to the validity of interpretations based on the results of these integrative language tests, their criterion-related validity was investigated. In doing so, the Test of English as a Foreign Language (TOEFL), an established, standardized, and internationally administered test of general English proficiency, was used as the criterion measure. Some 90 Iranian English majors participated in this study; they were seniors studying English at a university in Tehran, Iran. The results of the analyses showed a statistically significant correlation among participants' scores on the Cloze test, the C-test, and the TOEFL. Building on the findings of the study, and considering criterion-related validity as the evidential basis of the validity argument, it was cautiously deduced that these tests measure the same underlying trait. However, considering the limitations of using criterion measures to validate tests, no absolute claims can be made as to the construct validity of these integrative tests.

Correction of Frequent English Writing Errors by Using Coded Indirect Corrective Feedback and Error Treatment

The purposes of this study are 1) to study the frequent English writing errors of students registered in the course Reading and Writing English for Academic Purposes II, and 2) to find out the results of correcting writing errors by using coded indirect corrective feedback and error treatment. The sample comprised 28 second-year English major students from the Faculty of Education, Suan Sunandha Rajabhat University. The tool for the experimental study was the lesson plan of the course, and the tool for data collection was four writing tests of short texts. The research findings disclose that the frequent English writing errors found in this course comprise seven types of grammatical errors, namely sentence fragments, subject-verb agreement, wrong verb tense, singular or plural noun endings, run-on sentences, wrong verb patterns, and lack of parallel structure. Moreover, the results of correcting writing errors by using coded indirect corrective feedback and error treatment reveal an overall reduction in the frequent English writing errors and an increase in students' achievement in the writing of short texts, significant at the .05 level.

Skew Detection Technique for Binary Document Images based on Hough Transform

Document image processing has become an increasingly important technology in the automation of office documentation tasks. During document scanning, skew is inevitably introduced into the incoming document image. Since algorithms for layout analysis and character recognition are generally very sensitive to page skew, skew detection and correction in document images are critical steps before layout analysis. In this paper, a novel skew detection method is presented for binary document images. The method selects certain characters of the text, which are subjected to thinning and the Hough transform to estimate the skew angle accurately. Several experiments have been conducted on various types of documents, including English documents, journals, textbooks, documents in different languages, documents with different fonts, and documents at different resolutions, to demonstrate the robustness of the proposed method. The experimental results reveal that the proposed method is accurate compared with well-known existing methods.
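
A minimal sketch of Hough-based skew estimation with OpenCV follows; it omits the paper's character-selection and thinning steps and assumes a scanned page image on disk (the filename is a placeholder).

```python
# Sketch: estimate page skew from the dominant line angle in Hough space.
import cv2
import numpy as np

img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)
binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
edges = cv2.Canny(binary, 50, 150)

lines = cv2.HoughLines(edges, 1, np.pi / 180, 200)
# Each line is (rho, theta); horizontal text lines cluster around
# theta = 90 degrees, so the median deviation from 90 is the skew angle.
angles = [np.degrees(theta) - 90 for rho, theta in lines[:, 0]]
skew = float(np.median(angles))
print(f"estimated skew: {skew:.2f} degrees")
```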

An Approach of Quantum Steganography through Special SSCE Code

Sending encrypted messages frequently draws the attention of third parties, perhaps prompting attempts to break and reveal the original messages. Steganography is introduced to hide the existence of the communication by concealing a secret message in an appropriate carrier such as text, image, audio, or video. In quantum steganography, the sender (Alice) embeds her steganographic information into the cover and sends it to the receiver (Bob) over a communication channel. Alice and Bob share an algorithm and hide quantum information in the cover. An eavesdropper (Eve) without access to the algorithm cannot detect the existence of the quantum message. In this paper, a text quantum steganography technique is proposed, based on the use of the indefinite articles (a or an) in conjunction with nonspecific or non-particular nouns in the English language and a quantum gate truth table. The authors also introduce a new code representation technique (SSCE - Secret Steganography Code for Embedding) at both ends in order to achieve a high level of security. Before the embedding operation, each character of the secret message is converted to its SSCE value and then embedded into the cover text. Finally, the stego text is formed and transmitted to the receiver, where the reverse operations are carried out to recover the original information.
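
The article-based embedding idea can be sketched as below: each secret bit chooses between two synonymous nonspecific noun phrases, one taking "a" and one taking "an", so the cover stays grammatical. The phrase pairs and cover template are illustrative, and the SSCE encoding and quantum layers are omitted.

```python
# Sketch: hide bits in the choice between "a ..." and "an ..." phrases.
PAIRS = [("a car", "an automobile"), ("a movie", "an old film"),
         ("a drink", "an iced tea")]

def embed(bits: str) -> str:
    # bit 0 -> the "a ..." variant, bit 1 -> the "an ..." variant
    phrases = [PAIRS[i % len(PAIRS)][int(b)] for i, b in enumerate(bits)]
    return "Yesterday I saw " + ", then ".join(phrases) + "."

def extract(stego: str, n_bits: int) -> str:
    bits, pos = "", 0
    for i in range(n_bits):
        a_v, an_v = PAIRS[i % len(PAIRS)]
        pa, pan = stego.find(a_v, pos), stego.find(an_v, pos)
        if pan != -1 and (pa == -1 or pan < pa):
            bits += "1"; pos = pan + len(an_v)
        else:
            bits += "0"; pos = pa + len(a_v)
    return bits

stego = embed("101")
print(stego)                      # grammatical cover text
print(extract(stego, 3))          # 101
```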

N-Grams: A Tool for Repairing Word Order Errors in Ill-formed Texts

This paper presents an approach for repairing word order errors in English text by reordering the words in a sentence and choosing the version that maximizes the number of trigram hits according to a language model. A naive way to reorder the words is to try all permutations, but for a sentence of N words the number of permutations is N!. The novelty of this method is the use of an efficient confusion-matrix technique for reordering the words, designed to reduce the search space of permuted sentences. The limitation of the search space is achieved using the statistical inference of N-grams. The results of this technique are very interesting and show that the number of permuted sentences can be reduced by 98.16%. For experimental purposes, a test set of TOEFL sentences was used, and the results show that more than 95% of the sentences can be repaired using the proposed method.
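
The trigram-scoring criterion can be sketched as below; the trigram set is a tiny placeholder for a real language model, and the search is exhaustive for illustration, whereas the paper prunes the permutation space with its confusion-matrix technique.

```python
# Sketch: rank candidate word orders by the number of trigram hits.
from itertools import permutations

TRIGRAMS = {("the", "black", "cat"), ("black", "cat", "sat"),
            ("cat", "sat", "down")}   # placeholder language-model trigrams

def trigram_hits(words):
    return sum(tuple(words[i:i + 3]) in TRIGRAMS for i in range(len(words) - 2))

def repair(words):
    # Exhaustive search for illustration only; feasible for short sentences.
    return max(permutations(words), key=trigram_hits)

print(" ".join(repair(["cat", "the", "sat", "black", "down"])))
# -> the black cat sat down
```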