Abstract: This research study is an exploration of the self-directed
professional development of teachers who teach in public
schools in an era of democracy and educational change in South
Africa. Amidst an ever-changing educational system, the teachers in
this study position themselves as self-directed teacher-learners where
they adopt particular learning practices which enable change within
the broader discourses of public schooling. Life-story interviews
were used to enter into the private and public spaces of five teachers
which offer glimpses of how particular systems shaped their
identities, and how the meanings of self-directed teacher-learner
shaped their learning practices. Through the Multidimensional
Framework of Analysis and Interpretation the teachers’ stories were
analysed through three lenses: restorying the field texts (the self
through story); the teacher-learner in relation to social contexts; and
practices of self-directed learning. This study shows that as teacher-learners
learn for change through self-directed learning practices,
they develop their agency as transformative intellectuals, which is
necessary for the reworking of South African public schools.
Abstract: This paper presents an evolutionary algorithm, based on an
artificial neural network (ANN), for solving multi-objective
optimization problems. The multi-objective evolutionary algorithm used
in this study is a genetic algorithm, while the ANN used is a radial
basis function network (RBFN). The proposed algorithm is named the
memetic elitist Pareto non-dominated sorting genetic algorithm-based
RBFN (MEPGAN). The proposed algorithm is applied to medical disease
problems. The experimental results indicate that the
proposed algorithm is viable, and provides an effective means to
design multi-objective RBFNs with good generalization capability
and compact network structure. This study shows that MEPGAN
generates RBFNs with an appropriate balance between
accuracy and simplicity, compared to other algorithms found in the
literature.
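The elitist Pareto sorting at the heart of NSGA-II-style algorithms such as the one the abstract builds on can be sketched briefly. The following is a minimal illustration (not the authors' MEPGAN implementation), partitioning objective vectors into successive non-dominated fronts for a hypothetical error-versus-network-size trade-off:

```python
# Minimal sketch of non-dominated sorting as used by NSGA-II-style
# algorithms (illustrative only; not the authors' MEPGAN code).

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Partition objective vectors into Pareto fronts (front 0 is best)."""
    fronts = []
    remaining = list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Example: trade-off between model error and network size (both minimized).
pts = [(0.10, 12), (0.08, 20), (0.20, 5), (0.12, 13), (0.22, 6)]
print(non_dominated_sort(pts))  # → [[0, 1, 2], [3, 4]]
```

The first front contains the mutually non-dominated accuracy/complexity trade-offs; MEPGAN-style algorithms preserve such fronts elitistically across generations.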
Abstract: The aim of this study was to design and simulate a
particular type of Asynchronous State Machine (ASM), namely a
‘traffic light controller’ (TLC), operated at a frequency of 0.5 Hz.
The design task involved two main stages: firstly, designing a 4-bit
binary counter using J-K flip flops as the timing signal and,
subsequently, attaining the digital logic by deploying ASM design
process. The TLC was designed such that it showed a sequence of
three different colours, i.e. red, yellow and green, corresponding to
set thresholds by deploying the least number of AND, OR and NOT
gates possible. The software Multisim was used to design and simulate
the circuit, and to troubleshoot it so that it displayed the output
sequence of the three colours on the traffic light in the correct
order. A clock signal, an asynchronous 4-bit binary counter built from
J-K flip flops, and an ASM were used to complete this sequence, which was
programmed to be repeated indefinitely. Eventually, the circuit was
debugged and optimized, thus displaying the correct waveforms of
the three outputs through the logic analyser. However, hazards
occurred when the frequency was increased to 10 MHz; this was
attributed to excessive delays in the feedback path.
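The counter-plus-thresholds behaviour described above can be sketched in software. In this minimal Python sketch, the colour thresholds (green for counts 0-6, yellow for 7-8, red for 9-15) are illustrative assumptions, not the values used in the paper:

```python
# Behavioural sketch of the traffic-light ASM: a 4-bit counter advances
# on each clock tick and the light colour is decided by comparing the
# count against set thresholds (threshold values assumed, not the paper's).

def tlc_colour(count):
    """Map a 4-bit counter value to a light colour."""
    count &= 0xF  # counter wraps modulo 16, like a 4-bit J-K counter chain
    if count <= 6:
        return "green"
    if count <= 8:
        return "yellow"
    return "red"

def simulate(ticks):
    """Return the colour sequence over a number of clock ticks."""
    return [tlc_colour(t) for t in range(ticks)]

seq = simulate(16)
print(seq[0], seq[7], seq[9])  # → green yellow red
```

The sequence repeats indefinitely because the counter wraps modulo 16, mirroring the indefinitely repeating output of the hardware design.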
Abstract: The assembly line balancing problem aims to divide the tasks
of an assembly line among its stations while optimizing certain
objectives. Because task times differ, the workload differs from
station to station, and this imbalance can cause blockage or starvation
at some stations. Buffers are used to store semi-finished parts between
stations and can help to smooth assembly production. Both line
balancing and buffer sizing affect the throughput of an assembly line,
yet the two problems have mostly been studied separately in the
literature. Given their joint contribution to the throughput rate of
assembly lines, the two problems are considered concurrently in this
research, which aims simultaneously to maximize throughput, minimize
the total size of buffers in the line, and minimize workload variation
across stations. A multi-objective optimization model is formulated to
obtain better Pareto solutions from the Pareto front, and a simple
example problem of simultaneous line balancing and buffer sizing is
solved. This work is significant for assembly line balancing research,
and future work can introduce optimization approaches tailored to this
multi-objective problem.
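The three objectives named above can be evaluated jointly and the non-dominated configurations retained, as in this toy sketch. The candidate line configurations and their data are invented for illustration; throughput is negated so all three objectives are minimized:

```python
# Toy sketch: score candidate line configurations on (throughput, total
# buffer size, workload variation) and keep the Pareto-optimal ones.
# Candidate data are invented; negating throughput gives minimization form.

def objectives(throughput, buffers, station_loads):
    mean = sum(station_loads) / len(station_loads)
    variation = sum((x - mean) ** 2 for x in station_loads) / len(station_loads)
    return (-throughput, sum(buffers), variation)  # all minimized

def pareto_set(candidates):
    objs = [objectives(*c) for c in candidates]
    def dominated(i):
        return any(all(a <= b for a, b in zip(objs[j], objs[i])) and objs[j] != objs[i]
                   for j in range(len(objs)))
    return [i for i in range(len(objs)) if not dominated(i)]

candidates = [
    (95, [2, 2, 2], [10, 10, 10]),   # balanced, small buffers
    (97, [5, 5, 5], [12, 8, 10]),    # faster but bigger buffers, less balanced
    (90, [5, 5, 5], [12, 8, 10]),    # dominated by the candidate above
]
print(pareto_set(candidates))  # → [0, 1]
```

The first two configurations trade throughput against buffer size and balance, so both survive; the third is strictly worse than the second and is discarded.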
Abstract: The research was conducted to empirically validate
the proposed maturity model of e-Government implementation,
composed of four dimensions, further specified by 54 success factors
as attributes. To do so, two steps were performed. First, expert
judgment was used to test the model's content validity. Second, a
reliability study was performed to evaluate inter-rater agreement
using the Fleiss Kappa approach. The kappa statistic
(kappa coefficient) is the most commonly used method for testing the
consistency among raters. Fleiss Kappa generalizes Kappa to the case
of more than two raters with multi-categorical ratings. Our findings
show that most
attributes of the proposed model were related to their corresponding
dimensions. According to our results, the percentage of "agree"
answers given by the experts was 73.69% in dimension A, 89.76% in
B, 81.5% in C, and 60.37% in D. This means that more than half of
the attributes of each dimension were appropriate or relevant to the
dimensions they were supposed to measure, while 85% of attributes
were relevant enough to their corresponding dimensions. The inter-rater
reliability coefficient also showed a satisfactory result, interpreted
as substantial agreement among raters. Therefore, the proposed model
in this paper is valid and reliable for measuring the maturity of
e-Government implementation.
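The Fleiss Kappa statistic the study relies on has a compact closed form, sketched below for n subjects each rated by the same number of raters into k categories. The rating matrix in the example is invented for illustration, not the study's data:

```python
# Sketch of the Fleiss' kappa computation: agreement among multiple
# raters beyond what chance would produce. Example ratings are invented.

def fleiss_kappa(ratings):
    """ratings[i][j] = number of raters assigning subject i to category j."""
    n = len(ratings)                      # subjects
    r = sum(ratings[0])                   # raters per subject
    k = len(ratings[0])                   # categories
    # Per-subject agreement P_i and overall category proportions p_j.
    P = [(sum(c * c for c in row) - r) / (r * (r - 1)) for row in ratings]
    p = [sum(row[j] for row in ratings) / (n * r) for j in range(k)]
    P_bar = sum(P) / n                    # observed agreement
    P_e = sum(x * x for x in p)           # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# 4 subjects, 5 raters, 2 categories ("relevant" / "not relevant").
ratings = [[5, 0], [4, 1], [3, 2], [5, 0]]
print(round(fleiss_kappa(ratings), 3))  # → 0.02
```

With heavily skewed category proportions, chance agreement is high, so even mostly unanimous ratings can yield a low kappa; this is why kappa, rather than raw percentage agreement, is used to judge reliability.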
Abstract: The idea of asynchronous transmission in
wavelength division multiplexing (WDM) ring MANs is studied in
this paper. In particular, we present an efficient access technique to
coordinate the collision-free transmission of variable-size IP
traffic in WDM ring core networks. Each node is equipped with a
tunable transmitter and a tunable receiver. In this way, all the
wavelengths are exploited for both transmission and reception. In
order to evaluate the performance measures of average throughput,
queuing delay and packet dropping probability at the buffers, a
simulation model that assumes symmetric access rights among the
nodes is developed based on Poisson statistics. Extensive numerical
results show that, in addition to high bandwidth utilization over a
wide range of offered loads, the proposed protocol achieves fairness
in queuing delay and dropping events among the different packet-size
categories.
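The kind of Poisson-arrival buffer simulation used for such performance evaluation can be illustrated in miniature. The sketch below is an M/D/1/K-style toy (one node, finite FIFO, unit service time), not the paper's full ring model; the load values and buffer size are invented:

```python
# Toy sketch of packet-drop measurement at a finite buffer fed by Poisson
# arrivals with deterministic unit service (an M/D/1/K-style queue).
# Not the paper's ring protocol model; parameters are illustrative.
import random

def drop_probability(load, buffer_size, n_packets, seed=7):
    """Fraction of packets dropped at a finite FIFO queue."""
    random.seed(seed)
    t, departures, dropped = 0.0, [], 0   # departure times of queued packets
    for _ in range(n_packets):
        t += random.expovariate(load)              # next Poisson arrival
        departures = [d for d in departures if d > t]  # still in system
        if len(departures) >= buffer_size:
            dropped += 1                           # buffer full: packet lost
        else:
            start = departures[-1] if departures else t
            departures.append(max(t, start) + 1.0)  # unit service time
    return dropped / n_packets

print(drop_probability(0.5, 8, 20000))   # light load: drops are rare
print(drop_probability(1.5, 8, 20000))   # overload: drops are frequent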
Abstract: In the last few decades, many Southeast Asian women have
migrated to Taiwan through marriage, and it usually takes several years
for them to acquire Taiwanese citizenship. This study investigates the
relationship between their citizenship acquisition and whether they
develop Taiwanese identities, and how this affects their ethnic
identity toward their original ethnic groups. Furthermore, the present
study also explores whether citizenship acquisition helps the immigrant
women to explore the host society further and commit to it, or whether
their identification with mainstream Taiwanese society is only
symbolic and superficial. One hundred and ninety-two immigrant
women were measured using Multigroup Ethnic Identity
Measure-Revised and a global 10-point ethnic identity question.
Correlation tests, t-test, and hierarchical regression were performed to
answer the above questions. The results revealed that citizenship
acquisition does help immigrant women to identify with Taiwanese
society, but it does not affect how they identify with their own ethnic groups.
Furthermore, the results also indicated that acquiring citizenship
would not help these immigrant women become involved in deeper
cultural exploration of Taiwan nor would it encourage them to make
commitments to the host society.
Abstract: A Distributed Denial of Service (DDoS) attack is a
major threat to cyber security. It originates from the network layer or
the application layer of compromised/attacker systems which are
connected to the network. The impact of this attack ranges from minor
inconvenience in using a particular service to major failures at the
targeted server. When there is heavy traffic flow to a target server,
it is necessary to distinguish legitimate accesses from attacks. In
this paper, a novel method is proposed to detect DDoS
attacks from the traces of traffic flow. An access matrix is created
from the traces. As the access matrix is multi-dimensional, Principal
Component Analysis (PCA) is used to reduce the attributes used for
detection. Two classifiers, Naive Bayes and K-Nearest Neighbor,
are used to classify the traffic as normal or abnormal. The
performance of the classifiers with PCA-selected attributes and with
the actual attributes of the access matrix is compared in terms of
detection rate and False Positive Rate (FPR).
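The classification step can be sketched with a pure-Python k-nearest-neighbour rule. The two features (packets per second, distinct sources) and the training records below are invented for illustration; the paper additionally reduces the access-matrix attributes with PCA before classification:

```python
# Minimal sketch of the k-NN classification step: label traffic records
# as normal or attack by majority vote of the k nearest training records.
# Features and training data are invented for illustration.

def knn_classify(train, query, k=3):
    """train: list of (features, label); returns the majority label of
    the k nearest training records under Euclidean distance."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda rec: dist(rec[0], query))[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)

train = [
    ((10, 2), "normal"), ((12, 3), "normal"), ((8, 1), "normal"),
    ((900, 450), "attack"), ((1200, 600), "attack"), ((800, 380), "attack"),
]
print(knn_classify(train, (11, 2)))      # → normal
print(knn_classify(train, (1000, 500)))  # → attack
```

Running the same rule once on PCA-projected features and once on the raw attributes is what allows the detection-rate/FPR comparison the abstract describes.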
Abstract: We present a solution to the Maxmin u/E parameters
estimation problem of possibility distributions in m-dimensional
case. Our method is based on a geometrical approach in which a
minimal-area enclosing ellipsoid is constructed around the sample. We
also demonstrate that Maxmin u/E parameter estimation can improve the
results of well-known algorithms in the fuzzy model identification
task.
Abstract: Increasing the quality of experience (QoE) has recently
become an important issue. Since performance degradation at the cell
edge severely reduces QoE, several techniques are defined in the
LTE/LTE-A standards to remove inter-cell interference (ICI). However,
the conventional techniques have a drawback: there is a
trade-off between resource allocation and reliable communication.
The proposed scheme reduces the ICI more efficiently by using
channel state information (CSI) smartly. It is shown that the proposed
scheme can reduce the ICI with fewer resources.
Abstract: Wireless mesh networking is rapidly gaining in
popularity with a variety of users: from municipalities to enterprises,
from telecom service providers to public safety and military
organizations. This increasing popularity rests on two basic facts:
ease of deployment and an increase in network capacity expressed in
bandwidth per footage; WMNs do not rely on any fixed infrastructure.
Many efforts have been made to maximize the throughput of
multi-channel multi-radio wireless mesh networks. Current approaches
are based purely on either static or dynamic channel allocation. In
this paper, we use a hybrid multi-channel multi-radio wireless mesh
networking architecture in which both static and dynamic interfaces
are built into the nodes. The Dynamic Adaptive Channel Allocation
(DACA) protocol considers optimization of both throughput and delay in
the channel allocation. Channel assignment is made co-dependent with
the routing problem in the wireless mesh network and is based on the
traffic flow on every link. Temporal and spatial variations in traffic
require recomputing the channel assignment whenever the traffic
pattern in the mesh network changes. We also propose a path
computation that captures the available path bandwidth, and an
efficient routing protocol based on this new path metric that exploits
both static and dynamic links. The consistency property guarantees
that each node makes an appropriate packet forwarding decision while
balancing the control overhead of the network, so that a data packet
traverses the right path.
Abstract: Worldwide, most PILC MV underground cables in use
are approaching the end of their design life; hence, failures are likely
to increase. This paper studies the electric field and potential
distributions within a PILC insulated cable containing a common void
defect. A finite element model of the performance of the belted PILC
MV underground cable is presented. The variation of the electric field
stress within the cable is analysed using the Finite Element Method
(FEM), and the effects of a void defect within the insulation are
given. The outcomes will lead to a deeper understanding of the
modeling of Paper Insulated Lead Covered (PILC) cables and the
electric field response of a belted PILC insulated cable containing a
void defect.
Abstract: In this study, the data loss tolerance of a Support Vector Machine (SVM) based activity recognition model and its multi-activity classification performance when data are received over a lossy wireless sensor network are examined. Initially, the classification algorithm we use is evaluated in terms of resilience to random data loss with 3D acceleration sensor data for sitting, lying, walking and standing actions. The results show that the proposed classification method can recognize these activities successfully despite high data loss. Secondly, the effect of differentiated quality of service performance on activity recognition success is measured with activity data acquired from a multi-hop wireless sensor network, which introduces high data loss. The effect of the number of nodes on reliability and multi-activity classification success is demonstrated in a simulation environment. To the best of our knowledge, the effect of data loss in a wireless sensor network on the activity detection success rate of an SVM based classification algorithm has not been studied before.
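The data-loss experiment can be sketched in miniature: randomly drop a fraction of the 3-D acceleration samples and classify the surviving window. A nearest-centroid rule stands in here for the paper's SVM, and the activity centroids and sample windows are invented for illustration:

```python
# Sketch of the random-data-loss experiment. A nearest-centroid rule
# stands in for the paper's SVM; centroids and windows are invented.
import random

CENTROIDS = {            # assumed mean (x, y, z) acceleration per activity
    "lying":    (0.0, 0.0, 1.0),
    "standing": (0.0, 1.0, 0.0),
    "walking":  (0.3, 1.0, 0.3),
}

def classify(window):
    """Label a window of 3-D samples by its nearest activity centroid."""
    mean = tuple(sum(s[i] for s in window) / len(window) for i in range(3))
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(c, mean))
    return min(CENTROIDS, key=lambda k: dist(CENTROIDS[k]))

def drop(window, loss_rate, seed=3):
    """Randomly discard a fraction of the samples, as a lossy WSN would."""
    random.seed(seed)
    kept = [s for s in window if random.random() > loss_rate]
    return kept or window[:1]   # keep at least one sample

window = [(0.0, 0.95, 0.05)] * 40 + [(0.1, 1.0, 0.0)] * 10   # "standing"
print(classify(window))             # → standing
print(classify(drop(window, 0.6)))  # → standing (robust to 60% loss)
```

Because the classification depends on window-level statistics rather than individual samples, substantial random loss leaves the decision unchanged, which mirrors the robustness result reported in the abstract.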
Abstract: In-memory database systems are becoming popular
due to the availability and affordability of sufficiently large RAM and
processors in modern high-end servers with the capacity to manage
large in-memory database transactions. While fast and reliable
in-memory systems are still being developed to overcome cache misses,
CPU/IO bottlenecks and distributed transaction costs, disk-based data
stores still serve as the primary persistence. In addition, with the
recent growth in multi-tenancy cloud applications and associated
security concerns, many organisations consider the trade-offs and
continue to require fast and reliable transaction processing of
disk-based database systems as an available choice. For these
organizations, the only way of increasing throughput is by improving
the performance of disk-based concurrency control. This warrants a
hybrid database system with the ability to selectively apply an
enhanced disk-based data management within the context of in-memory
systems that would help improve overall throughput.
The general view is that in-memory systems substantially
outperform disk-based systems. We question this assumption and
examine how a modified variation of access invariance that we call
enhanced memory access (EMA) can be used to allow very high
levels of concurrency in the pre-fetching of data in disk-based
systems. We demonstrate how this prefetching in disk-based systems
can yield close to in-memory performance, which paves the way for
improved hybrid database systems. This paper proposes a novel EMA
technique and presents a comparative study between disk-based EMA
systems and in-memory systems running on hardware configurations
of equivalent power in terms of the number of processors and their
speeds. The results of the experiments conducted clearly substantiate
that when used in conjunction with all concurrency control
mechanisms, EMA can increase the throughput of disk-based systems
to levels quite close to those achieved by in-memory systems. The
promising results of this work show that enhanced disk-based
systems help improve hybrid data management within the
broader context of in-memory systems.
Abstract: In this paper we describe the Levenberg-Marquardt
(LM) algorithm for the identification and equalization of CDMA
signals received by an antenna array in communication channels.
The synthesis explains the digital separation and equalization of
signals after propagation through multipath channels generating intersymbol
interference (ISI). Exploiting the discrete transmitted data and three
diversities induced at the reception, the problem can be formulated
as the Block Component Decomposition (BCD) of a third-order tensor,
a new tensor decomposition generalizing the PARAFAC decomposition. We
optimize the BCD by the Levenberg-Marquardt method, which gives
encouraging results compared to the classical alternating least
squares (ALS) algorithm. In the equalization part, we use the Minimum
Mean Square Error (MMSE) criterion to complete the presented method.
The simulation results obtained using the LM algorithm are promising.
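The damping idea behind Levenberg-Marquardt can be made concrete on a much smaller problem than the paper's tensor BCD. The sketch below fits a one-parameter exponential curve; the model, data and damping schedule are illustrative assumptions, not the paper's setup:

```python
# Illustrative Levenberg-Marquardt iteration on a scalar curve-fitting
# problem (y = exp(a*x)), showing the damped Gauss-Newton step and the
# accept/reject update of the damping factor. Not the paper's tensor BCD.
import math

def lm_fit(xs, ys, a=0.0, lam=1e-2, iters=50):
    """Fit y = exp(a*x) by Levenberg-Marquardt with damping factor lam."""
    def cost(a):
        return sum((math.exp(a * x) - y) ** 2 for x, y in zip(xs, ys))
    for _ in range(iters):
        # Residuals and Jacobian of r_i = exp(a*x_i) - y_i w.r.t. a.
        r = [math.exp(a * x) - y for x, y in zip(xs, ys)]
        J = [x * math.exp(a * x) for x in xs]
        g = sum(j * ri for j, ri in zip(J, r))     # gradient J^T r
        H = sum(j * j for j in J)                  # Gauss-Newton Hessian J^T J
        step = -g / (H + lam)                      # damped normal equation
        if cost(a + step) < cost(a):
            a, lam = a + step, lam / 2             # accept: trust model more
        else:
            lam *= 4                               # reject: increase damping
    return a

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.7 * x) for x in xs]               # synthetic data, a = 0.7
print(round(lm_fit(xs, ys), 3))                    # → 0.7
```

Large damping makes the step behave like slow gradient descent; small damping recovers the fast Gauss-Newton step, which is why LM often outperforms fixed-schedule alternating least squares.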
Abstract: This paper presents the design and testing of
nanotechnology-based sequential circuits using multiplexer
conservative QCA (MX-CQCA) logic gates, which are easily testable
using only two vectors. This method has great potential for the
design of sequential circuits based on reversible conservative logic
gates and also surpasses sequential circuits implemented in
traditional gates in terms of testability. Reversible circuits are
similar to usual logic circuits except that they are built from
reversible gates. Designs of multiplexer conservative QCA logic based
two-vector testable double edge triggered (DET) sequential circuits in
the VHDL language are also presented here, which will also reduce the
complexity of testing. Other types of sequential circuits, such as D,
SR and JK latches, are also designed using the MX-CQCA logic gate. The
objective behind the proposed design methodologies is to combine
arithmetic and logic functional units while optimizing key metrics
such as garbage outputs, delay, area and power. The proposed MX-CQCA
gate outperforms other reversible gates in terms of complexity and delay.
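The "conservative, reversible" gate property the MX-CQCA design relies on can be illustrated with the classic Fredkin (controlled-swap) gate, a well-known multiplexer-style conservative gate; this is an illustration of the gate class, not the MX-CQCA gate itself:

```python
# Illustration of the conservative-reversible gate property using the
# classic Fredkin (controlled-swap) gate: the number of 1s is preserved
# and the mapping is invertible. Not the MX-CQCA gate itself.

def fredkin(c, a, b):
    """Controlled swap: if c == 1, swap a and b; outputs (c, a', b')."""
    return (c, b, a) if c else (c, a, b)

inputs = [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]

# Conservative: input and output always contain the same number of 1s.
assert all(sum(fredkin(*bits)) == sum(bits) for bits in inputs)

# Reversible: the gate is its own inverse, so every input is recoverable.
assert all(fredkin(*fredkin(*bits)) == bits for bits in inputs)

print("conservative and reversible on all 8 inputs")
```

Conservation of 1s is what makes such gates testable with very few vectors: any stuck-at fault disturbs the 1s count and is exposed by checking it on the all-0s and all-1s style test patterns.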
Abstract: Environmental impacts of six 3D printers using
various materials were compared to determine if material choice
drove sustainability, or if other factors such as machine type, machine
size, or machine utilization dominated. Cradle-to-grave life-cycle
assessments were performed, comparing a commercial-scale FDM
machine printing in ABS plastic, a desktop FDM machine printing in
ABS, a desktop FDM machine printing in PET and PLA plastics, a
polyjet machine printing in its proprietary polymer, an SLA machine
printing in its polymer, and an inkjet machine hacked to print in salt
and dextrose. All scenarios were scored using ReCiPe Endpoint H
methodology to combine multiple impact categories, comparing
environmental impacts per part made for several scenarios per
machine. Results showed that most printers’ ecological impacts were
dominated by electricity use, not materials, and the changes in
electricity use due to different plastics were not significant compared
to the variation from one machine to another. Variation in machine idle
time determined impacts per part most strongly. However, material
impacts were quite important for the inkjet printer hacked to print in
salt: in its optimal scenario, it had as little as 1/38th the impact
per part of the worst-performing machine in the same scenario. If salt
parts were infused with epoxy to make them more physically robust,
then much of this advantage disappeared, and material impacts
actually dominated or equaled electricity use. Future studies should
also measure DMLS and SLS processes and materials.
Abstract: Measurements and quantitative analysis of kinematic
parameters of human hand movements have an important role in
different areas such as hand function rehabilitation, modeling of
multi-digit robotic hands, and the development of man-machine
interfaces. In this paper, the assessment and evaluation of the
reach-to-grasp movement using a computerized, robot-assisted method
is described. The experiment involved measurements of the hand
positions of seven healthy subjects while grasping three objects of
different shapes and sizes. The results showed that three dominant
phases of reach-to-grasp movements could be clearly identified.
Abstract: A simulation-based VLSI implementation of the FELICS (Fast
Efficient Lossless Image Compression System) algorithm is proposed to
provide lossless image compression, implemented in simulation-oriented
VLSI (Very Large Scale Integration). The aims are to analyse the
performance of lossless image compression, to reduce the image size
without losing image quality, and to implement the FELICS algorithm in
VLSI. The FELICS algorithm uses a simplified adjusted binary code for
image compression; the compressed image is converted to pixels and
then implemented in the VLSI domain. These choices are used to achieve
high processing speed and to minimize area and power. The simplified
adjusted binary code reduces the number of arithmetic operations and
achieves high processing speed. Colour difference preprocessing is
also proposed to improve coding efficiency with simple arithmetic
operations. The VLSI-based FELICS algorithm provides an effective
hardware architecture with a regular, four-stage pipelined data flow.
With two-level parallelism, consecutive pixels can be classified into
even and odd samples, with an individual hardware engine dedicated to
each. This method can be further enhanced by multilevel parallelism.
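The coding idea behind the "simplified adjusted binary code" can be sketched briefly: values in a range of size n get either k or k+1 bits, where k = floor(log2 n). The sketch below shows plain truncated binary coding; FELICS additionally remaps values so the short codes fall in the middle of the range, which is omitted here for brevity:

```python
# Sketch of truncated binary coding, the basis of FELICS' adjusted
# binary code: a range of n values is covered by k-bit and (k+1)-bit
# codewords. FELICS' re-centering of short codes is omitted for brevity.

def truncated_binary(value, n):
    """Encode value in [0, n) with k or k+1 bits as a bit string."""
    k = n.bit_length() - 1          # floor(log2(n))
    u = (1 << (k + 1)) - n          # count of short (k-bit) codewords
    if value < u:
        return format(value, "b").zfill(k) if k else ""
    return format(value + u, "b").zfill(k + 1)

# Range of 6 values: k = 2, so two 2-bit codes and four 3-bit codes.
print([truncated_binary(v, 6) for v in range(6)])
# → ['00', '01', '100', '101', '110', '111']
```

Because encoding reduces to a comparison, an addition and a shift, the scheme maps naturally onto the low-arithmetic, high-speed pipeline described in the abstract.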
Abstract: An Artificial Neural Network (ANN) can be trained using
back propagation (BP). It is the most widely used algorithm for
supervised learning with multi-layered feed-forward networks.
Efficient learning by the BP algorithm is required for many practical
applications. The BP algorithm calculates the weight changes of
artificial neural networks, and a common approach is to use a
two-term algorithm consisting of a learning rate (LR) and a momentum
factor (MF). The major drawbacks of the two-term BP learning
algorithm are the problems of local minima and slow convergence
speeds, which limit its scope for real-time applications. Recently,
the addition of an extra term, called a proportional factor (PF), to
the two-term BP algorithm was proposed. The third term increases the
speed of the BP algorithm. However, the PF term can also degrade the
convergence of the BP algorithm, and criteria for evaluating
convergence are required to facilitate the application of the
three-term BP algorithm. Although these two aspects seem to be closely
related, as described later, we summarize various improvements to
overcome the drawbacks. Here we compare the different convergence
behaviours of the new three-term BP algorithm.
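The three-term update rule discussed above can be sketched on a single weight: the weight change combines a learning-rate term, a momentum term, and the extra proportional-factor term acting on the current error. The toy objective and the coefficient values below are illustrative assumptions, not tuned values from the literature:

```python
# Sketch of the three-term BP update: delta_w = -LR*grad + MF*delta_prev
# + PF*error, shown on a single weight minimizing a toy squared error.
# Coefficients are illustrative, not tuned values from the literature.

def three_term_bp(target, w=0.0, lr=0.1, mf=0.5, pf=0.05, steps=100):
    """Drive a single weight toward `target` (error e = target - w)."""
    dw_prev = 0.0
    for _ in range(steps):
        e = target - w                             # current output error
        grad = -e                                  # d/dw of 0.5 * e**2
        dw = -lr * grad + mf * dw_prev + pf * e    # three-term update
        w, dw_prev = w + dw, dw
    return w

print(round(three_term_bp(2.0), 4))  # → 2.0
```

The PF term adds a correction proportional to the raw error on top of the gradient and momentum terms, which speeds the approach to the target but, if made too large, can destabilize the iteration, illustrating the convergence trade-off the abstract describes.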