Abstract: Since its independence in 1962, Algeria has struggled to establish an educational system tailored to the needs of the population it addresses. Given its historical connection with France, Algeria regarded the French language as a cultural imperative until the late seventies. After the Arabization policy of 1971 and the socioeconomic changes taking place worldwide, the use of English as a vehicle of communication began to gain ground in a globalized Algeria. Consequently, the predominance of French started to fade, leaving more space for the teaching of English as a second foreign language. Moreover, the introduction of the Bologna Process and the European Credit Transfer System in higher education has necessitated innovations in the design and development of new curricula adapted to the socioeconomic market. In this paper, I highlight the important historical steps Algeria has taken towards the implementation of an English language methodology, and trace the status English has acquired, from second foreign language, to first foreign language, to "the language of knowledge and sciences". I also propose new pedagogical perspectives for a better treatment of the English language in order to encourage independent and autonomous learning.
Abstract: Investigating language acquisition is one of the most challenging problems in the study of language. Syllable learning, as a level of language acquisition, is of considerable significance since it plays an important role in language acquisition. Because it is impossible to study language acquisition directly in children, especially in its developmental phases, computer models are useful for examining it. In this paper, a computer model of early language learning for syllable learning is proposed. It is guided by a conceptual model of syllable learning, the Directions Into Velocities of Articulators (DIVA) model. The computer model uses simple associational and reinforcement learning rules within a neural network architecture inspired by neuroscience. Our simulation results verify the ability of the proposed computer model to produce phonemes during babbling and early speech. It also provides a framework for examining the neural basis of language learning and communication disorders.
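The reinforcement-learning flavour of such babbling can be illustrated with a toy loop: random motor variations are kept when they bring the produced sound closer to a phoneme target. This is a minimal sketch only; the scalar targets, the identity "vocal tract" model, and all parameters are illustrative assumptions, not part of the DIVA model itself.

```python
import random

random.seed(0)
targets = {"a": 0.2, "i": 0.7, "u": 0.9}            # desired acoustic value per phoneme
commands = {p: random.random() for p in targets}    # initial random motor commands

def produce(cmd):
    """Stand-in for the vocal-tract forward model (identity for simplicity)."""
    return cmd

for _ in range(500):                                 # babbling trials
    p = random.choice(list(targets))
    noise = random.gauss(0.0, 0.1)                   # exploratory motor variability
    reward = -abs(produce(commands[p] + noise) - targets[p])
    baseline = -abs(produce(commands[p]) - targets[p])
    if reward > baseline:                            # reinforce improving variations
        commands[p] += 0.5 * noise

errors = [abs(produce(commands[p]) - targets[p]) for p in targets]
```

Because a half-step in an improving direction never overshoots, the error for each phoneme only decreases across trials, mimicking how babbling gradually tunes motor commands.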
Abstract: This article gives a short overview of new software created especially for the palletizing process in automated production systems. Each part of the article addresses problem solving in the development of modules in the Java programming language. The first part describes the structure of the software, its modules, and the data flow between them. The second part describes all deployment methods implemented in the software. The next part covers the two-dimensional editor created for manipulating objects in each layer of the load and gives the calculations used for collision control. The virtual reality module used for three-dimensional preview and creation of the load is described in the fifth part. The last part of the article describes communication and data flow between the robot control system, the vision system, and the software.
Abstract: This paper presents a new methodology for selecting test cases from regression test suites. The selection strategy is based on analyzing the dynamic behavior of applications written in any programming language. Methods based on dynamic analysis are safer and more efficient. We design a technique that combines code-based and model-based techniques to allow comparison of the object-oriented design of an application written in any programming language. We have developed a prototype tool that detects changes and selects test cases from the test suite.
Abstract: Electronically available Urdu data exists mostly in image form, which is very difficult to process; printed Urdu data is the root cause of the problem. For the rapid progress of the Urdu language, we therefore need an OCR system that can make Urdu data available to the common person. Research has been carried out for years to automate the recognition of Arabic and Urdu script, but the biggest hurdle in the development of Urdu OCR is the challenge of recognizing the Nastalique script, which is taken as the standard for writing Urdu. Nastalique is written diagonally with no fixed baseline, which makes the script somewhat complex, and overlap is present not only in characters but in ligatures as well. This paper proposes a method that allows successful recognition of the Nastalique script.
Abstract: In this paper, FPGA implementations of four stream ciphers are presented. Two of the stream ciphers, MUGI and SNOW 2.0, were recently adopted by the International Organization for Standardization in the ISO/IEC 18033-4:2005 standard. The other two, MICKEY 128 and TRIVIUM, have been submitted and are under consideration for eSTREAM, the ECRYPT (European Network of Excellence for Cryptology) stream cipher project. All ciphers were coded in VHDL, and an FPGA device was used for the hardware implementation. The proposed implementations achieve throughputs ranging from 166 Mbps for MICKEY 128 to 6080 Mbps for MUGI.
Abstract: Formal specification languages are widely used for system specification and testing. Highly critical systems, such as real-time, avionics, and medical systems, are represented using formal specification languages. Formal-specification-based testing is mostly performed using black-box approaches, thus testing only the set of inputs and outputs of the system. A formal specification language such as VDM++ can also be used for white-box testing, as it provides as many constructs as any other high-level programming language. In this work, we perform data and control flow analysis of VDM++ class specifications. The proposed work is illustrated with a SavingAccount example.
Abstract: Source code retrieval is of immense importance in the software engineering field. The complex tasks of retrieving and extracting information from source code documents are vital in the development cycle of large software systems. The two main subtasks which result from these activities are code duplication prevention and plagiarism detection. In this paper, we propose a source code retrieval system based on a two-level fingerprint representation capturing, respectively, the structural and the semantic information within a source code. A sequence alignment technique is applied to these fingerprints in order to quantify the similarity between source code portions. The specific purpose of the system is to detect plagiarism and duplicated code between programs written in different programming languages belonging to the same class, such as C, C++, Java, and C#. These four languages are supported by the current version of the system, which is designed so that it may be easily adapted to any programming language.
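The sequence-alignment step mentioned above can be sketched with a classical Needleman-Wunsch global alignment over token sequences. The scoring parameters and token granularity here are illustrative assumptions; the paper's actual fingerprint format and scoring are not specified in this abstract.

```python
def alignment_score(a, b, match=2, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score between two token sequences,
    computed with a rolling one-row dynamic-programming table."""
    n, m = len(a), len(b)
    prev = [j * gap for j in range(m + 1)]          # aligning a[:0] against b[:j]
    for i in range(1, n + 1):
        cur = [i * gap] + [0] * m                   # aligning a[:i] against b[:0]
        for j in range(1, m + 1):
            hit = match if a[i - 1] == b[j - 1] else mismatch
            cur[j] = max(prev[j - 1] + hit,         # align a[i-1] with b[j-1]
                         prev[j] + gap,             # gap in b
                         cur[j - 1] + gap)          # gap in a
        prev = cur
    return prev[m]

# Identical fingerprints score highest; a single edit lowers the score.
same = alignment_score(["for", "i", "in", "range"], ["for", "i", "in", "range"])
edit = alignment_score(["for", "i", "in", "range"], ["for", "j", "in", "range"])
```

Dividing such a score by the score of a perfect self-alignment gives a normalized similarity that can be thresholded to flag duplicated or plagiarized fragments.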
Abstract: Regression testing is a maintenance activity applied to modified software to provide confidence that the changed parts are correct and that the unchanged parts have not been adversely affected by the modifications. Regression test selection techniques reduce the cost of regression testing by selecting a subset of an existing test suite to use in retesting modified programs. This paper presents the first general regression-test-selection technique, which is based on code and allows selecting test cases for programs written in any programming language; it also handles incomplete programs. We further describe RTSDiff, a regression-test-selection system that implements the proposed technique. The results of empirical studies performed on four programming languages (Java, C#, C++, and Visual Basic) show that the technique is efficient and effective in reducing the size of the test suite.
Abstract: This paper explores a new method to improve the teaching of algorithmics to beginners. It is well known that algorithmics is a difficult field for teachers to teach and complex for learners to assimilate. These difficulties are due to the intrinsic characteristics of the field and to the way most teachers apprehend its fundamentals. Moreover, in a Technology Enhanced Learning (TEL) environment, assessment, which is important and indispensable, is the most delicate phase to implement because of all the problems it generates (noise, etc.). Our objective lies at the confluence of these two axes. To this end, EASEL focuses essentially on elaborating an approach for assessing algorithmic competences in a TEL environment. This approach consists in modeling an algorithmic solution in terms of basic, elementary operations, which lets learners build their own solution steps with full autonomy, independently of any programming language. The approach supports a threefold assessment: summative, formative, and diagnostic.
Abstract: The performance of any continuous speech recognition system is highly dependent on the performance of its acoustic models. Generally, the development of robust spoken language technology relies on the availability of large amounts of data. A common way to cope with scarce data for training each state of a Markov model is tree-based state tying, a method that applies contextual questions to tie states. Manual question generation suffers from human error and is time-consuming, so various automatically generated questions are used to construct the decision tree. There are three approaches to generating questions for constructing decision-tree-based HMMs: one is based on misrecognized phonemes, another uses a feature table, and the third is based on state distributions corresponding to context-independent subword units. In this paper, all these methods of automatic question generation are applied to the decision tree on the FARSDAT corpus of the Persian language, and their results are compared with those of manually generated questions. The results show that automatically generated questions yield much better results and can replace manually generated questions for Persian.
Abstract: A highly optimized implementation of binary mixture
diffusion with no initial bulk velocity on graphics processors is
presented. The lattice Boltzmann model is employed for simulating
the binary diffusion of oxygen and nitrogen into each other with
different initial concentration distributions. Simulations have been
performed using the latest proposed lattice Boltzmann model that
satisfies both the indifferentiability principle and the H-theorem for
multi-component gas mixtures. Contemporary numerical
optimization techniques such as memory alignment and increasing
the multiprocessor occupancy are exploited along with some novel
optimization strategies to enhance the computational performance on
graphics processors using the C for CUDA programming language.
Speedup of more than two orders of magnitude over single-core
processors is achieved on a variety of Graphical Processing Unit
(GPU) devices ranging from conventional graphics cards to
advanced, high-end GPUs, while the numerical results are in
excellent agreement with the available analytical and numerical data
in the literature.
Abstract: This research aimed to create tactile-texture-designed media for the blind, to be used for extra learning outside classrooms in order to enhance blind students' imagination of Himmapan creatures; a further objective was to make the perception of the visually disabled equal to that of sighted people. The target group of the research was blind students in grades 4 to 6 at The Bangkok School for the Blind in the second semester of 2011 who were able to read Braille. The research methodology consisted of a field study and a documentary study related to the blind, tactile-texture-designed media, and Himmapan creatures. Ten pictures of tactile-texture-designed media were created in the design process, which began after an analysis based on the primary and secondary data. The works were presented to experts in the field of visual disability, who evaluated them; after approval, the works were used as prototypes to teach the blind. Keywords: Blind, Himmapan Creatures, Tactile Texture.
Abstract: CEMTool is a command-style design and analysis package for scientific and engineering algorithms and a matrix-based computation language. In this paper, we present new 2D and 3D finite element method (FEM) packages for CEMTool. We discuss the detailed structures and the important features of the pre-processor, solver, and post-processor of the CEMTool 2D and 3D FEM packages. In contrast to the existing MATLAB PDE Toolbox, our proposed FEM packages can deal with combinations of the reserved words. We can also control the mesh in a very effective way. With the introduction of a new mesh generation algorithm and a fast solving technique, our FEM packages guarantee shorter computation times than the MATLAB PDE Toolbox. Consequently, with our new FEM packages, we can overcome some disadvantages and limitations of the existing MATLAB PDE Toolbox.
Abstract: Various security APIs (Application Programming Interfaces) are used in a variety of application areas requiring information security functions. However, these standards are not compatible, and developers must choose among the APIs depending on the application environment or the programming language. To resolve this problem, we propose a draft standard for an information security component, and we have implemented SSL (Secure Sockets Layer) using the confidentiality and integrity component interface to verify the validity of the proposed standard. The implemented SSL uses the lower-level SSL component when establishing RMI (Remote Method Invocation) communication between components, as if the security algorithm had been implemented by adding one more layer on top of TCP/IP.
Abstract: Iris recognition technology is the most accurate, fast, and least invasive one compared to other biometric techniques based, for example, on fingerprint, face, retina, hand geometry, voice, or signature patterns. The system developed in this study has the potential to play a key role in high-risk security areas and can provide organizations with a fast and secure means of granting access to such areas only to authorized personnel. The aim of this paper is to perform iris region detection and localization of the inner and outer iris boundaries. The system was implemented on the Windows platform using the Visual C# programming language, an easy and efficient tool for image processing that yields high accuracy. The system includes two main parts: the first preprocesses the iris images using Canny edge detection and segments the iris region from the rest of the image, and the second determines the location of the iris boundaries by applying the Hough transform. The proposed system was tested on 756 iris images from 60 eyes in the CASIA iris database.
Abstract: The purpose of this study is to investigate the effects of modality principles in instructional software on first-grade pupils' achievement in learning the Arabic language. Two modes of instructional software were systematically designed and developed: audio with images (AI) and text with images (TI). A quasi-experimental design was used in the study. The sample consisted of 123 male and female pupils from the IRBED Education Directorate, Jordan. The pupils were randomly assigned to one of the two modes. The independent variables comprised the two modes of the instructional software, the students' achievement levels in the Arabic language class, and gender. The dependent variable was the pupils' achievement on the Arabic language test. The theoretical framework of this study was based on Mayer's Cognitive Theory of Multimedia Learning. Four hypotheses were postulated and tested. Analysis of variance (ANOVA) showed that pupils using the AI mode performed significantly better than those using the TI mode. The study concluded that the audio-with-images mode was an important aid to learning compared to the text-with-images mode.
Abstract: One purpose of robust estimation is to reduce the influence of outliers in the data on the estimates. Outliers arise from gross errors or contamination from distributions with long tails. The trimmed mean is a robust estimate, meaning that it is not sensitive to violations of distributional assumptions about the data. It is called an adaptive estimate when the trimming proportion is determined from the data rather than being fixed a priori.
The main objective of this study is to determine the robustness properties of adaptive trimmed means in terms of efficiency, high breakdown point, and influence function. Specifically, it seeks the magnitude of the trimming proportion of the adaptive trimmed mean that will yield efficient and robust estimates of the parameter for data following a modified Weibull distribution with parameter λ = 1/2, where the trimming proportion is determined by a ratio of two trimmed means defined as the tail length. Secondly, the asymptotic properties of the tail length and the trimmed means are investigated. Finally, a comparison is made of the efficiency of the adaptive trimmed means, in terms of the standard deviation, between data-determined trimming proportions and proportions fixed a priori.
The asymptotic tail lengths, defined as the ratio of two trimmed means, and the asymptotic variances were computed using the derived formulas, while the standard deviations of the derived tail lengths, for data of size 40 simulated from a Weibull distribution, were computed over 100 iterations using a computer program written in the Pascal language.
The findings of the study revealed that the tail lengths of the Weibull distribution increase in magnitude as the trimming proportions increase; that the measure of the tail length and the adaptive trimmed mean are asymptotically independent as the number of observations n approaches infinity; that the tail length is asymptotically distributed as the ratio of two independent normal random variables; and that the asymptotic variances decrease as the trimming proportions increase. The simulation study revealed empirically that the standard error of the adaptive trimmed mean using the ratio of tail lengths is relatively smaller, for different values of the trimming proportion, than its counterpart with trimming proportions fixed a priori.
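The idea of choosing the trimming proportion from a tail-length measure can be sketched as follows. This is a minimal illustration: the tail-length statistic below is a Hogg-style ratio of tail averages, and the cutoff values used to pick the proportion are illustrative assumptions, not the quantities derived in the paper.

```python
import statistics

def trimmed_mean(xs, p):
    """Mean after discarding a proportion p of observations from each tail."""
    xs = sorted(xs)
    k = int(len(xs) * p)
    return statistics.fmean(xs[k:len(xs) - k] if k else xs)

def tail_mean(xs_sorted, beta, upper):
    """Average of the outer beta fraction of a sorted sample."""
    k = max(1, int(len(xs_sorted) * beta))
    return statistics.fmean(xs_sorted[-k:] if upper else xs_sorted[:k])

def tail_length(xs):
    """Hogg-style tail length: spread of the extreme 5% averages relative
    to the spread of the upper/lower 50% averages."""
    xs = sorted(xs)
    return ((tail_mean(xs, 0.05, True) - tail_mean(xs, 0.05, False)) /
            (tail_mean(xs, 0.50, True) - tail_mean(xs, 0.50, False)))

def adaptive_trimmed_mean(xs):
    q = tail_length(xs)
    # heavier tails -> trim a larger proportion (cutoffs are illustrative)
    p = 0.05 if q < 2.0 else 0.10 if q < 2.6 else 0.25
    return trimmed_mean(xs, p)
```

For light-tailed data the rule trims little; a single gross outlier inflates the tail length and triggers heavier trimming, so the estimate stays near the bulk of the sample.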
Abstract: In this paper, a modified N×M version of the traditional 5×5 Playfair cipher is introduced, which enables the user to encrypt messages in any natural language by choosing a matrix size appropriate to that language's alphabet. A 5×5 matrix can store only the 26 characters of the English alphabet and cannot accommodate a language with more than 26 characters; the N×M matrix overcomes this limitation. The special case of the Urdu language is discussed, where # is used to complete an odd pair and * is used for repeated letters.
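A minimal sketch of the generalized N×M Playfair encryption follows, using the paper's # and * marks. The English alphabet laid out in a 4×7 grid is an illustrative assumption; the paper's Urdu matrix differs.

```python
def build_grid(alphabet, rows, cols):
    """Lay the alphabet out row by row in an N x M grid."""
    assert len(alphabet) == rows * cols
    pos = {ch: (i // cols, i % cols) for i, ch in enumerate(alphabet)}
    grid = [list(alphabet[r * cols:(r + 1) * cols]) for r in range(rows)]
    return grid, pos

def make_pairs(text):
    """Split into digraphs: '*' breaks repeated letters, '#' completes an odd pair."""
    pairs, i = [], 0
    while i < len(text):
        a = text[i]
        b = text[i + 1] if i + 1 < len(text) else '#'
        if a == b:
            pairs.append((a, '*')); i += 1
        else:
            pairs.append((a, b)); i += 2
    return pairs

def encrypt(text, alphabet, rows, cols):
    grid, pos = build_grid(alphabet, rows, cols)
    out = []
    for a, b in make_pairs(text):
        (ra, ca), (rb, cb) = pos[a], pos[b]
        if ra == rb:                               # same row: shift right
            out += [grid[ra][(ca + 1) % cols], grid[rb][(cb + 1) % cols]]
        elif ca == cb:                             # same column: shift down
            out += [grid[(ra + 1) % rows][ca], grid[(rb + 1) % rows][cb]]
        else:                                      # rectangle: swap columns
            out += [grid[ra][cb], grid[rb][ca]]
    return ''.join(out)

# 26 English letters plus the two marks fill a 4 x 7 grid (N=4, M=7).
ALPHA = "ABCDEFGHIJKLMNOPQRSTUVWXYZ#*"
```

The digraph rules are the classical Playfair rules with row and column shifts taken modulo M and N, so any alphabet whose size factors as N×M can be used.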
Abstract: The set covering problem is a classical problem in computer science and complexity theory. It has many applications, such as the airline crew scheduling problem, the facility location problem, vehicle routing, the assignment problem, etc. In this paper, three different techniques are applied to solve the set covering problem. First, a mathematical model of the set covering problem is introduced and solved using the optimization solver LINGO. Second, the Genetic Algorithm Toolbox available in MATLAB is used to solve the problem. Lastly, an ant colony optimization method is programmed in the MATLAB programming language. Results obtained from these methods are presented in tables. To assess the performance of the techniques used in this project, benchmark problems available in the open literature are used.
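For readers unfamiliar with the problem, its structure can be shown with the classical cost-greedy approximation. This is not one of the three methods compared in the paper, and the toy instance is illustrative.

```python
def greedy_set_cover(universe, subsets, costs):
    """Repeatedly pick the subset with the lowest cost per newly covered element.
    Raises ValueError if the remaining elements cannot be covered."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min((i for i in range(len(subsets)) if subsets[i] & uncovered),
                   key=lambda i: costs[i] / len(subsets[i] & uncovered))
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# Toy instance: cover {1..5} at minimum total cost.
subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
costs = [5.0, 10.0, 3.0, 1.0]
picked = greedy_set_cover({1, 2, 3, 4, 5}, subsets, costs)
```

The greedy rule carries a logarithmic approximation guarantee, which is why exact solvers and metaheuristics such as genetic algorithms and ant colony optimization are used when better solutions are needed.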