Abstract: Teachers form the backbone of any educational system, hence selecting qualified candidates is crucial. In Malaysia, decision making in the selection process involves a few stages: initial filtering through academic achievement, taking an entry examination, and going through an interview session. The last stage is the most challenging, since it depends heavily on human judgment. Therefore, this study sought to identify the selection criteria for teacher candidates that form the basis of an efficient multi-criteria teacher-candidate selection model for that last stage. The relevant criteria were determined from the literature and from expert input, namely from those involved in interviewing teacher candidates at a public university offering the formal training program. Three main competency criteria were identified: content knowledge, communication skills, and personality. Each main criterion was further divided into several sub-criteria. The Analytic Hierarchy Process (AHP) technique was employed to allocate weights to the criteria and was later integrated with a Simple Weighted Average (SWA) scoring approach to develop the selection model. Subsequently, a web-based Decision Support System was developed to assist in selecting qualified teacher candidates. The Teacher-Candidate Selection (TeCaS) system is able to assist the panel of interviewers during the selection process, which involves a large amount of complex qualitative judgment.
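The AHP-plus-SWA pipeline can be sketched as follows; the pairwise judgments and the candidate's interview ratings below are illustrative, not those elicited in the study.

```python
import numpy as np

# Hypothetical AHP pairwise-comparison matrix over the three main criteria
# (content knowledge, communication skills, personality); values are
# illustrative stand-ins for the experts' judgments.
A = np.array([
    [1.0, 2.0, 3.0],      # content knowledge vs. the others
    [1/2, 1.0, 2.0],      # communication skills
    [1/3, 1/2, 1.0],      # personality
])

def ahp_weights(matrix):
    """Criterion weights: the normalized principal eigenvector."""
    vals, vecs = np.linalg.eig(matrix)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    return principal / principal.sum()

def swa_score(weights, ratings):
    """Simple Weighted Average of one candidate's per-criterion ratings."""
    return float(np.dot(weights, ratings))

weights = ahp_weights(A)
score = swa_score(weights, [4.0, 3.5, 4.5])   # interview ratings on a 1-5 scale
```

Candidates can then be ranked by their SWA scores during the interview stage.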
Abstract: This paper aims to address the new trend of social
commerce as electronic commerce leverages Web 2.0 technologies
and online social media. The infusion of new technologies into the
World Wide Web connects users in their homes and workplaces,
thus transforming social formations and business transactions. An
in-depth study of the growth and success of a social commerce site,
Facebook, was conducted. The investigation is finalized with a triad
relational model which reflects socioeconomic life in the Internet
today. The following three concepts work jointly to form a global
community that has already started to take the place of traditional
commerce and socialization: Web 2.0 technology, E-commerce,
and online social media. A discussion of the research findings
indicates that social commerce networks are sustainable because of
the various incentives given to users as they collaborate with others
regardless of their identity and location. The focus of this article is
to increase understanding of the rapidly developing Web 2.0-based
social media and their subsequent effects on the emerging social
commerce.
Abstract: Landscape connectivity combines a description of the
physical structure of the landscape with a species' response to
that structure, which forms the theoretical background for applying
landscape connectivity principles in the practice of landscape
planning and design. In this study, a residential development project in
the southern United States was used to explore the meaning of
landscape connectivity and its application in town planning. The vast
rural landscape in the southern United States is conspicuously
characterized by the hedgerow trees or groves. The patchwork
landscape of fields surrounded by high hedgerows is a traditional and
familiar feature of the American countryside. Hedgerows are in effect
linear strips of trees, groves, or woodlands, which are often critical
habitats for wildlife and important for the visual quality of the
landscape. Based on a geographic information system (GIS) and
statistical analysis (FRAGSTATS), this study attempts to quantify the
landscape connectivity characterized by hedgerows in south Alabama,
where substantial areas of authentic hedgerow landscape are being
urbanized due to the ever-expanding real estate industry and high
demand for new residential development. The results of this study
shed light on how to balance the needs of new urban development and
biodiversity conservation by maintaining a higher level of landscape
connectivity, and will thus inform design interventions.
Abstract: A suspension bridge is the most suitable type of structure for a long-span bridge due to the rational use of structural materials. Increased deformability, caused by the appearance of elastic and kinematic displacements, is the major disadvantage of suspension bridges. The problem of increased kinematic displacements under non-symmetrical load can be solved by prestressing. A prestressed suspension bridge with a span of 200 m was considered as the object of investigation. A cable truss with a cross web was considered as the main load-carrying structure of the prestressed suspension bridge. The cable truss was optimized over 47 variable factors using a genetic algorithm and the FEM program ANSYS. It was found that the maximum total displacements are reduced by up to 29.9% by using the cable truss with the rational characteristics instead of a single cable, in the case of the most unfavorably placed load.
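The optimization loop can be sketched as a minimal genetic algorithm; the study evaluated each candidate with an ANSYS FE model over 47 design factors, so a placeholder objective and a reduced variable count stand in here, and all operators and parameters are generic choices.

```python
import random

random.seed(0)

N_VARS = 5            # stand-in for the 47 design factors (normalized to [0, 1])
POP, GENS = 30, 40

def displacement(x):
    # Hypothetical surrogate objective; the real fitness came from FEM
    # displacement results, not from this quadratic.
    return sum((xi - 0.3) ** 2 for xi in x)

def evolve():
    pop = [[random.random() for _ in range(N_VARS)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=displacement)            # elitist selection
        parents = pop[:POP // 2]
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_VARS)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:          # occasional mutation
                i = random.randrange(N_VARS)
                child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return min(pop, key=displacement)

best = evolve()
```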
Abstract: The tray/multi-tray distillation process is a topic that
has been investigated in great detail over the last decade by many
teams, such as Jubran et al. [1], Adhikari et al. [2], Mowla et al. [3],
Shatat et al. [4], and Fath [5], to name a few. A significant amount of
work and effort was spent on modeling and/or simulation of
specific distillation hardware designs. In this work, we have focused
our efforts on investigating and gathering experimental data on
several engineering and design variables to quantify their influence
on the yield of the multi-tray distillation process. Our goals are to
generate experimental performance data to bridge some existing gaps
in the design, engineering, optimization and theoretical modeling
aspects of the multi-tray distillation process.
Abstract: This paper deals with a novel approach to power
transformer diagnostics. This approach identifies the exact location
and the range of a fault in the transformer and helps to reduce
the operating costs related to handling the faulty transformer, its
disassembly, and repair. The advantage of the approach is the
possibility of simulating a healthy transformer, as well as all faults
that can occur in a transformer during its operation, without
disassembling it, which is very expensive in practice. The approach is
based on creating frequency dependent impedance of the transformer
by sweep frequency response analysis measurements and by 3D FE
parametrical modeling of the fault in the transformer. The parameters
of the 3D FE model are the position and the range of the axial short
circuit. Then, by comparing the frequency dependent impedances of
the parametrical models with the measured ones, the location and the
range of the fault is identified. The approach was tested on a real
transformer and showed close agreement between the real fault and
the simulated one.
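The matching step can be sketched as follows: the measured frequency-dependent impedance is compared, in a least-squares sense, against a library of curves indexed by hypothetical fault parameters. The impedance model below is an illustrative placeholder for the 3D FE parametric model, and the parameter grids are arbitrary.

```python
import numpy as np

freqs = np.logspace(2, 6, 200)                  # sweep from 100 Hz to 1 MHz

def simulated_impedance(position, extent):
    # Placeholder for the 3D FE parametric model: a smooth curve whose
    # shape depends on the axial position and range of the short circuit.
    return 50 + position * np.log10(freqs) + extent * np.sqrt(freqs) * 1e-2

# Library of simulated curves, one per hypothetical (position, range).
library = {
    (pos, ext): simulated_impedance(pos, ext)
    for pos in range(1, 6)                      # axial position of the fault
    for ext in range(1, 4)                      # range of the short circuit
}

# "Measurement": the (3, 2) fault curve plus small measurement noise.
measured = simulated_impedance(3, 2) + np.random.default_rng(0).normal(0, 0.05, freqs.size)

def identify_fault(measured, library):
    """Return the (position, range) whose simulated impedance curve is
    closest to the measured one in a least-squares sense."""
    return min(library, key=lambda k: np.sum((library[k] - measured) ** 2))

fault = identify_fault(measured, library)
```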
Abstract: The Boundary Representation of a 3D manifold contains
FACEs (connected subsets of a parametric surface S: R^2 → R^3).
In many science and engineering applications it is cumbersome
and algebraically difficult to deal with the polynomial set and
constraints (LOOPs) representing the FACE. For this reason, a
Piecewise Linear (PL) approximation of the FACE is needed, which is
usually represented in terms of triangles (i.e., 2-simplices). Solving the
problem of FACE triangulation requires producing quality triangles
which are: (i) independent of the arguments of S, (ii) sensitive to the
local curvatures, (iii) compliant with the boundaries of the FACE,
and (iv) topologically compatible with the triangles of the neighboring
FACEs. In the existing literature there are no guarantees for point
(iii). This article contributes to the topic of triangulations conforming
to the boundaries of the FACE by applying the concept of the
parameter-independent Gabriel complex, which improves the correctness
of the triangulation regarding aspects (iii) and (iv). In addition, the article
applies the geometric concept of tangent ball to a surface at a point to
address points (i) and (ii). Additional research is needed in algorithms
that (i) take advantage of the concepts presented in the heuristic
algorithm proposed and (ii) can be proved correct.
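The Gabriel criterion underlying the Gabriel complex can be sketched in the plane: an edge (p, q) is kept iff the ball having pq as its diameter contains no other sample point. The point set below is illustrative.

```python
def is_gabriel_edge(p, q, points, eps=1e-12):
    """True iff no point other than p and q lies strictly inside the
    disk whose diameter is the segment pq (the Gabriel criterion)."""
    center = ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
    radius2 = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) / 4.0
    for r in points:
        if r is p or r is q:
            continue
        d2 = (r[0] - center[0]) ** 2 + (r[1] - center[1]) ** 2
        if d2 < radius2 - eps:
            return False
    return True

pts = [(0.0, 0.0), (2.0, 0.0), (1.0, 0.1), (5.0, 5.0)]
```

Here the edge from (0, 0) to (2, 0) is rejected because (1, 0.1) lies inside its diametral disk, while the edge from (0, 0) to (1, 0.1) is a Gabriel edge.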
Abstract: In this paper, a two-factor scheme is proposed to
generate cryptographic keys directly from biometric data, which,
unlike passwords, are strongly bound to the user. The hash value of the
reference iris code is used as a cryptographic key and its length
depends only on the hash function, being independent of any other
parameter. The entropy of such keys is 94 bits, which is much higher
than that of any comparable system. The most important and distinct
feature of this scheme is that it regenerates the reference iris code by
providing a genuine iris sample and the correct user password. Since
iris codes obtained from two images of the same eye are not exactly
the same, error correcting codes (Hadamard code and Reed-Solomon
code) are used to deal with the variability. The scheme proposed here
can be used to provide keys for a cryptographic system and/or for
user authentication. The performance of this system is evaluated on
two publicly available databases for iris biometrics namely CBS and
ICE databases. The operating point of the system (values of False
Acceptance Rate (FAR) and False Rejection Rate (FRR)) can be set
by properly selecting the error correction capacity (ts) of the Reed-
Solomon codes, e.g., on the ICE database, at ts = 15, FAR is 0.096%
and FRR is 0.76%.
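A toy fuzzy-commitment sketch may clarify the regeneration idea: below, a repetition code stands in for the Hadamard/Reed-Solomon cascade, random bit lists stand in for iris codes, and the password binding of the two-factor scheme is omitted; all sizes are illustrative.

```python
import hashlib
import random

REP = 5                      # each key bit repeated 5x; corrects 2 flips per group

def encode(bits):
    return [b for b in bits for _ in range(REP)]

def decode(bits):
    groups = [bits[i:i + REP] for i in range(0, len(bits), REP)]
    return [1 if sum(g) > REP // 2 else 0 for g in groups]

def enroll(reference_code, key_bits):
    """Public helper data: reference iris code XOR encoded key."""
    return [r ^ c for r, c in zip(reference_code, encode(key_bits))]

def regenerate(helper, sample_code):
    """Decode helper XOR fresh sample; small errors are absorbed by the code."""
    return decode([h ^ s for h, s in zip(helper, sample_code)])

random.seed(1)
key = [random.randrange(2) for _ in range(16)]
reference = [random.randrange(2) for _ in range(16 * REP)]
helper = enroll(reference, key)

# A fresh sample of the same eye: a few bit flips mimic acquisition noise
# (at most 2 flips per repetition group, within the code's capacity).
sample = list(reference)
for i in (0, 1, 5, 10, 23, 47):
    sample[i] ^= 1

recovered = regenerate(helper, sample)
crypto_key = hashlib.sha256(bytes(recovered)).hexdigest()   # hash used as key
```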
Abstract: Video watermarking is usually considered as watermarking of a set of still images. In the frame-by-frame watermarking approach, each video frame is seen as a single watermarked image, so collusion attacks are more critical in video watermarking. If the same or a redundant watermark is used for embedding in every frame of a video, the watermark can be estimated and then removed by the watermark estimate remodulation (WER) attack. Also, if uncorrelated watermarks are used for every frame, these watermarks can be washed out with frame temporal filtering (FTF). The switching watermark system, the so-called SS-N system, has better performance against WER and FTF attacks. In this system, for each frame, the watermark is randomly picked from a finite pool of watermark patterns. First, the SS-N system is surveyed, and then a new collusion attack on the SS-N system is proposed, using a new algorithm for separating video frames based on their watermark pattern. Thus N sets are built, in which every set contains frames carrying the same watermark. After that, using the WER attack on every set, the N different watermark patterns are estimated and later removed.
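The frame-separation step can be sketched on synthetic data: each frame is assigned to the pool pattern it correlates with most, and averaging within each set yields a WER-style watermark estimate. For illustration the pool is assumed known, whereas the proposed attack must estimate the patterns blindly; the pool size N and the additive embedding model are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, FRAMES, SIZE = 3, 60, 256

pool = rng.choice([-1.0, 1.0], size=(N, SIZE))     # watermark pattern pool
assignments = rng.integers(0, N, size=FRAMES)      # hidden pattern per frame
frames = rng.normal(0.0, 1.0, size=(FRAMES, SIZE)) + pool[assignments]

def separate(frames, pool):
    """Group frame indices by the pool pattern each correlates with most."""
    corr = frames @ pool.T                         # FRAMES x N correlations
    labels = corr.argmax(axis=1)
    return [np.flatnonzero(labels == k) for k in range(N)]

sets = separate(frames, pool)
estimates = [frames[idx].mean(axis=0) for idx in sets]   # one WER estimate per set
```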
Abstract: Cheating on standardized tests has been a major
concern, as it potentially undermines measurement precision. One
major way to reduce cheating by collusion is to administer multiple
forms of a test. Even with this approach, the potential for collusion is still
quite large. A Latin-square treatment structure for distributing
multiple forms is proposed to further reduce the colluding potential.
An index to measure the extent of colluding potential is also
proposed. Finally, with a simple algorithm, the various Latin-squares
were explored to find the best structure to keep the colluding
potential to a minimum.
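A minimal sketch of the idea, assuming a cyclic Latin square tiled over a rectangular seating grid and measuring colluding potential simply as the count of adjacent same-form seat pairs (a simplified stand-in for the proposed index):

```python
N = 4                                           # number of test forms

def cyclic_latin_square(n):
    """n x n Latin square with entry (r + c) mod n."""
    return [[(r + c) % n for c in range(n)] for r in range(n)]

def tile(square, rows, cols):
    """Repeat the square over a rows x cols seating grid."""
    n = len(square)
    return [[square[r % n][c % n] for c in range(cols)] for r in range(rows)]

def colluding_potential(grid):
    """Count horizontally or vertically adjacent seats with the same form."""
    rows, cols = len(grid), len(grid[0])
    count = 0
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols and grid[r][c] == grid[r][c + 1]:
                count += 1
            if r + 1 < rows and grid[r][c] == grid[r + 1][c]:
                count += 1
    return count

seating = tile(cyclic_latin_square(N), 8, 8)
```

With the cyclic square, no two adjacent seats share a form, so this index is 0; handing a single form to everyone in the same 8x8 grid gives the maximum of 112 adjacent pairs.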
Abstract: This paper proposes a new algebraic scheme to design a PID controller for higher-order linear time-invariant continuous systems. Modified PSO (MPSO) based model order formulation techniques have been applied to obtain the effective formulated second-order system. A controller is tuned to meet the desired performance specifications using the pole-zero cancellation method. The proposed PID controller is attached to both the higher-order system and the formulated second-order system. The closed-loop response is observed for the stabilization process and compared with a general PSO based formulated second-order system. The proposed method is illustrated through a numerical example from the literature.
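The pole-zero cancellation step can be illustrated on a generic formulated second-order plant G(s) = b0 / (s^2 + a1*s + a0): choosing the PID C(s) = (Kd*s^2 + Kp*s + Ki)/s with (Kd, Kp, Ki) = K*(1, a1, a0) cancels the plant poles, so the loop becomes L(s) = K*b0/s and the closed loop is first order with its pole at s = -K*b0. The coefficients below are illustrative, not taken from the paper's example.

```python
import math

def cancellation_pid(a1, a0, b0, K):
    """PID gains that place the controller zeros on the plant poles."""
    return K * 1.0, K * a1, K * a0              # (Kd, Kp, Ki)

a1, a0, b0, K = 3.0, 2.0, 1.0, 5.0
Kd, Kp, Ki = cancellation_pid(a1, a0, b0, K)

# Closed loop after cancellation: K*b0 / (s + K*b0), a stable first-order lag.
closed_loop_pole = -K * b0
step_response = lambda t: 1.0 - math.exp(closed_loop_pole * t)
```

The single gain K then directly sets the closed-loop time constant 1/(K*b0).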
Abstract: Because of architectural conditions and structural applications, the mass center and the stiffness center sometimes do not coincide, and the structure is irregular. A structure might also be asymmetric, either through asymmetric bracing in plan, which leads to an unbalanced distribution of stiffness, or through an unbalanced distribution of mass; both conditions lead to eccentricity and torsion in the structure. The deficiency of ordinary codes in evaluating the seismic performance of steel structures has led to the use of performance-based design and the capacity spectrum method. Using these methods, it is possible to design a structure whose behavior under different earthquakes is predictable. In this article, 5-story buildings with different percentages of asymmetry due to stiffness changes have been designed. Static and dynamic nonlinear analyses under three acceleration records have been carried out. Finally, the performance level of the structure has been evaluated.
Abstract: This paper presents the benchmarking results and
performance evaluation of different clusters built at the National Center
for High-Performance Computing in Taiwan. The performance of the
processor, memory subsystem, and interconnect is a critical factor in the
overall performance of high performance computing platforms. The
evaluation compares different system architectures and software
platforms. Most supercomputers use HPL to benchmark their system
performance, in accordance with the requirements of the TOP500 List.
In this paper we consider system memory access factors that affect
benchmark performance, such as processor and memory
performance. We hope this work will provide useful information for
the future development and construction of cluster systems.
Abstract: Cybercrime is now becoming a big challenge in Nigeria, apart from traditional crime. The inability to identify perpetrators is one of the reasons for the growing menace. This paper proposes a design for monitoring internet users’ activities in order to curb cybercrime. It requires redefining the operations of Internet Service Providers (ISPs), which will now mandate users to be authenticated before accessing the internet. In implementing this work, which can be adapted to a larger scale, a virtual router application is developed and configured to mimic a real router device. A sign-up portal is developed to allow users to register with the ISP. The portal asks for identification information, which includes bio-data and government-issued identification data such as the National Identity Card number, et cetera. A unique username and password are chosen by the user to enable access to the internet; these credentials link the user to the Internet Protocol (IP) address of any system he uses on the internet, thereby associating him with any criminal act related to that IP address at that particular time. Questions such as “What happens when another user knows the password and uses it to commit a crime?” and other pertinent issues are addressed.
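The username-to-IP binding described above can be sketched as a minimal audit trail; the user store, usernames, and addresses below are illustrative, not part of the proposed system's actual implementation.

```python
import datetime

users = {"ade01": "s3cret"}           # registered via the sign-up portal
session_log = []                      # ISP-side audit trail

def authenticate(username, password, ip_address):
    """Grant access only on a correct password, recording who used which IP."""
    if users.get(username) != password:
        return False
    session_log.append({
        "user": username,
        "ip": ip_address,
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return True

def who_used(ip_address):
    """Trace which users were bound to a given IP address."""
    return [entry["user"] for entry in session_log if entry["ip"] == ip_address]

granted = authenticate("ade01", "s3cret", "10.0.0.7")
denied = authenticate("ade01", "wrong", "10.0.0.8")
```

A later investigation of activity from 10.0.0.7 can then query the log for the users bound to that address at the relevant time.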
Abstract: Electron multiplying charge coupled devices (EMCCDs) have revolutionized the world of low light imaging by introducing on-chip multiplication gain based on the impact ionization effect in silicon. They combine sub-electron readout noise with high frame rates. Signal-to-Noise Ratio (SNR) is an important performance parameter for low-light-level imaging systems. This work investigates the SNR performance of an EMCCD operated in Non-inverted Mode (NIMO) and in Inverted Mode (IMO). The theory of noise characteristics and operation modes is presented. The results show that the SNR is determined by dark current and clock-induced charge at high gain levels. The optimum SNR performance is provided by an EMCCD operated in NIMO in short-exposure and strong-cooling applications; otherwise, an IMO EMCCD is preferable.
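For reference, the per-pixel SNR of an EM-amplified image is commonly written in the following textbook form (not necessarily the exact model used in this work), with signal S, dark current D over exposure time t, clock-induced charge C, read noise sigma_r, multiplication gain G, and excess noise factor F (about sqrt(2) at high gain):

```latex
\mathrm{SNR} = \frac{S}{\sqrt{F^{2}\left(S + D\,t + C\right) + \left(\sigma_{r}/G\right)^{2}}}
```

At high gain the read-noise term (sigma_r/G)^2 becomes negligible, leaving dark current and clock-induced charge as the limiting noise sources, consistent with the result stated above.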
Abstract: Morgan's refinement calculus (MRC) is one of the
well-known methods allowing the formality present in the program
specification to be continued all the way to code. On the other hand,
Object-Z (OZ) is an extension of Z adding support for classes and
objects. There are a number of methods for obtaining code from OZ
specifications that can be categorized into refinement and animation
methods. As far as we know, only one refinement method exists
which refines OZ specifications into code. However, this method
does not have fine-grained refinement rules and thus cannot be
automated. On the other hand, existing animation methods do not
present mapping rules formally and do not support the mapping of
several important constructs of OZ, such as all cases of operation
expressions and most of the constructs in the global paragraph. In this paper,
with the aim of providing an automatic path from OZ specifications
to code, we propose an approach to map OZ specifications into their
counterparts in MRC in order to use fine-grained refinement rules of
MRC. In this way, having counterparts of our specifications in MRC,
we can refine them into code automatically using MRC tools such as
RED. Other advantages of our work pertain to proposing mapping
rules formally, supporting the mapping of all important constructs of
Object-Z, and considering dynamic instantiation of objects, which OZ
itself does not cover.
Abstract: In this paper, we consider the design of a pulse shaping
filter using orthogonal Hermite-Rodriguez basis functions. The pulse
shaping filter design problem has been formulated and solved as a
quadratic programming problem with linear inequality constraints.
Compared with the existing approaches reported in the literature, the
use of Hermite-Rodriguez functions offers an effective alternative to
solve the constrained filter synthesis problem. This is demonstrated
through a numerical example which is concerned with the design of
an equalization filter for a digital transmission channel.
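The synthesis problem has the generic quadratic-programming shape: minimize 0.5*x'Qx - c'x subject to Ax <= b, with x collecting the Hermite-Rodriguez basis coefficients. A toy two-variable instance with a single active constraint can be solved directly from the KKT system; the matrices below are illustrative stand-ins, not a filter design.

```python
import numpy as np

Q = 2.0 * np.eye(2)
c = np.array([2.0, 4.0])        # objective equals (x0-1)^2 + (x1-2)^2 - 5
A = np.array([[1.0, 1.0]])      # single linear inequality: x0 + x1 <= 2
b = np.array([2.0])

def solve_qp(Q, c, A, b):
    """Minimize 0.5 x'Qx - c'x subject to Ax <= b (one-constraint case)."""
    x = np.linalg.solve(Q, c)                   # unconstrained minimizer
    if np.all(A @ x <= b + 1e-12):
        return x
    # Constraint active: solve the KKT system [Q A'; A 0][x; lam] = [c; b].
    n, m = Q.shape[0], A.shape[0]
    kkt = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(kkt, np.concatenate([c, b]))
    return sol[:n]

x_opt = solve_qp(Q, c, A, b)
```

Here the unconstrained minimizer (1, 2) violates the constraint, and the KKT step returns the projected optimum (0.5, 1.5).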
Abstract: This paper presents an optimal design of a poly-phase induction motor using Quadratic Interpolation based Particle Swarm Optimization (QI-PSO). The optimization algorithm considers the efficiency, starting torque, and temperature rise as objective functions (which are considered separately) and ten performance-related items, including harmonic current, as constraints. The QI-PSO algorithm was implemented on a test motor and the results are compared with the Simulated Annealing (SA) technique, Standard Particle Swarm Optimization (SPSO), and a normal design. Some benchmark problems are used for validating QI-PSO. From the test results, QI-PSO gave better results and is more suitable for motor design optimization. Cµ code is used for implementing the entire algorithm.
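The underlying PSO loop can be sketched as follows, with a sphere function standing in for the motor objectives and without the quadratic-interpolation operator that distinguishes QI-PSO; all parameters are generic textbook choices.

```python
import random

random.seed(2)

DIM, SWARM, ITERS = 4, 20, 60
W, C1, C2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients

def objective(x):
    # Placeholder for the motor objective (efficiency / torque / temperature).
    return sum(xi * xi for xi in x)

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]                     # personal bests
gbest = min(pbest, key=objective)[:]            # global best

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            vel[i][d] = (W * vel[i][d]
                         + C1 * random.random() * (pbest[i][d] - pos[i][d])
                         + C2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i][:]
            if objective(pbest[i]) < objective(gbest):
                gbest = pbest[i][:]
```

QI-PSO additionally refines promising particles by fitting a quadratic through three swarm points and jumping to its minimum; that operator is omitted here.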
Abstract: Optimization of the extraction of phenolic compounds
from Avicennia marina using response surface methodology was
carried out in the present study. A five-level, three-factor central
composite rotatable design (CCRD) was utilized to examine the optimum
combination of extraction variables based on the total phenolic content
(TPC) of Avicennia marina leaves. The best combination of the response
function was a drying temperature of 78.41 °C, an extraction temperature
of 26.18 °C, and an extraction time of 36.53 minutes. Moreover, the
procedure can be promptly extended to the study of several other
pharmaceutical processes, such as the purification of bioactive substances,
the drying of extracts, and the development of pharmaceutical dosage
forms for the benefit of consumers.
Abstract: Modern applications realized on FPGAs exhibit high connectivity demands. In this paper we study the routing constraints of Virtex devices and propose a systematic methodology for designing a novel general-purpose interconnection network targeting reconfigurable architectures. This network consists of multiple segment wires and SB patterns, appropriately selected and assigned across the device. The goal of our proposed methodology is to maximize the hardware utilization of the fabricated routing resources. The derived interconnection scheme is integrated on a Virtex-style FPGA. This device is characterized both by high performance and by low energy requirements; accordingly, the design criterion that guided our architectural selections was the minimal Energy×Delay Product (EDP). The methodology is fully supported by three new software tools, which belong to the MEANDER Design Framework. Using a typical set of MCNC benchmarks, an extensive comparison study in terms of several critical parameters proves the effectiveness of the derived interconnection network. More specifically, we achieve an average Energy×Delay Product reduction of 63%, a performance increase of 26%, a reduction in leakage power of 21%, and a reduction in total energy consumption of 11%, at the expense of a 20% increase in channel width.