Abstract: Experience shows that one of the most prominent reasons some ERP implementations fail is the selection of an improper ERP package. Among the factors leading to inappropriate ERP selections, one is neglecting the preliminary activities that should precede the evaluation of ERP packages. Another is that organizations often employ selection processes so prolonged and costly that the process is never finalized, or the evaluation team performs many key final activities incompletely or inaccurately owing to exhaustion, loss of interest, or out-of-date data. This paper introduces a systematic approach for choosing an ERP package that recommends activities to be performed before and after the main selection phase. The proposed approach also incorporates ideas that accelerate the selection process while reducing the probability of an erroneous final selection.
Abstract: A number of studies have highlighted problems related to ERP systems, yet most of these studies focus on problems during the project and implementation stages rather than during post-implementation use. Problems encountered in the process of using ERP hinder the effective exploitation and the extended and continued use of ERP systems and their value to organisations. This paper investigates the different types of problems users (operational, supervisory and managerial) face in using ERP, and how 'feral systems' are used as a coping mechanism. The paper adopts a qualitative method and uses data collected from two cases and 26 interviews to inductively develop a causal network model of ERP usage problems and their coping mechanisms. The model classifies post-implementation ERP usage problems as data quality, system quality, interface and infrastructure problems. It also categorises the different coping mechanisms enacted through 'feral systems', including feral information systems, feral data and feral use of technology.
Abstract: This paper presents a new configurable decimation filter for sigma-delta modulators. The filter employs Pascal's triangle to build the coefficients of non-recursive decimation filters, and can be connected to the back-end of various modulators with different output accuracies. In this work, two methods are shown and then compared from an area occupation viewpoint: the first method stores the coefficients in memory, while the second generates them by the Pascal's-triangle method, aiming to reduce the required gate count. XILINX ISE v10 is used for implementation and verification of the filter.
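A hedged illustration of the coefficient-generation idea: an n-th order non-recursive comb stage (1 + z⁻¹)ⁿ has as its impulse response the binomial coefficients, i.e. row n of Pascal's triangle, which is presumably what allows the second method to generate coefficients on the fly instead of storing them. The paper's actual hardware mapping is not reproduced here; this is a minimal software sketch.

```python
def pascal_row(n):
    """Coefficients of (1 + z^-1)^n, i.e. row n of Pascal's triangle;
    these are the taps of an n-th order non-recursive comb decimation stage."""
    row = [1]
    for _ in range(n):
        # Each new row is the element-wise sum of the previous row
        # shifted right and shifted left (with zero padding).
        row = [a + b for a, b in zip([0] + row, row + [0])]
    return row

# Taps of a 4th-order stage: 1, 4, 6, 4, 1 -- no coefficient memory needed.
taps = pascal_row(4)
```

Generating a row this way needs only adders and the previous row, which is the kind of trade-off (logic versus memory) the area comparison above is about.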
Abstract: ERP systems are the largest software applications adopted by universities, and come with quite significant investments in their implementation. However, unlike for other applications, little research has been conducted regarding these systems in a university environment. This paper aims to provide a critical review of previous research on ERP systems in higher education, with a special focus on higher education in Australia. The research not only forms the basis of an evaluation of previous research and research needs, it also makes inroads in identifying the payoff of ERPs in the sector from different perspectives, with particular reference to the user. The paper is divided into two parts: the first focuses on ERP literature in higher education at large, while the second focuses on ERP literature in higher education in Australia.
Abstract: At present, web services are the first choice for reusing legacy systems in the implementation of SOA. Based on the status of the SOA implementation and of the legacy systems, we propose four encapsulation strategies. Building on these strategies, we propose a service-oriented encapsulation framework in which the legacy system is encapsulated by a service-oriented encapsulation layer in three aspects: communication protocols, data and programs. The reuse rate of legacy systems can be increased by using this framework.
Abstract: Support Vector Machine (SVM) is a recent class of statistical classification and regression techniques playing an increasing role in applications to detection problems in various engineering fields, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM is applied to an infrared (IR) binary communication system with different types of channel models, including Ricean multipath fading and a partially developed scattering channel with additive white Gaussian noise (AWGN) at the receiver. The structure and performance of SVM in terms of the bit error rate (BER) metric are derived and simulated for these stochastic channel models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of SVM is then compared to classical binary signal maximum likelihood detection using a matched filter driven by On-Off keying (OOK) modulation. We found that the performance of SVM is superior to that of the traditional optimal detection schemes used in statistical communication, especially for very low signal-to-noise ratio (SNR) ranges. For large SNR, the performance of the SVM is similar to that of the classical detectors. The implication of these results is that SVM can prove very beneficial to IR communication systems, which notoriously suffer from low SNR, at the cost of increased computational complexity.
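As a point of reference for the comparison above: for OOK over AWGN, the classical maximum likelihood detector reduces to thresholding the matched-filter output at half the pulse amplitude. The following Monte-Carlo sketch of that baseline estimates BER; the amplitude and noise values are illustrative only and are not taken from the paper.

```python
import random

def simulate_ook_ber(n_bits=20000, amplitude=1.0, noise_sigma=0.3, seed=0):
    """Monte-Carlo BER of OOK over an AWGN channel with the classical
    ML threshold detector (decision threshold at amplitude / 2)."""
    rng = random.Random(seed)
    threshold = amplitude / 2.0
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        # Received sample = transmitted level plus white Gaussian noise.
        received = bit * amplitude + rng.gauss(0.0, noise_sigma)
        decided = 1 if received > threshold else 0
        if decided != bit:
            errors += 1
    return errors / n_bits
```

Raising the noise level (lower SNR) degrades this baseline's BER, which is exactly the regime where the abstract reports SVM gaining its advantage.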
Abstract: This paper presents the design and prototype implementation of a new home automation system that uses WiFi technology as the network infrastructure connecting its parts. The proposed system consists of two main components. The first is the server (a web server), the system core that manages, controls and monitors users' homes; users and the system administrator can manage and control the system locally (LAN) or remotely (Internet). The second is the hardware interface module, which provides the appropriate interface to the sensors and actuators of the home automation system. Unlike most home automation systems available on the market, the proposed system is scalable: one server can manage many hardware interface modules as long as they are within WiFi network coverage. The system supports a wide range of home automation devices, such as power management and security components, and is better than the commercially available home automation systems from the scalability and flexibility point of view.
Abstract: This paper introduces and studies new indexing techniques for content-based queries in image databases. Indexing is the key to providing sophisticated, accurate and fast searches for queries in image data. This research describes a new indexing approach, which depends on linear modeling of signals, using bases for modeling. A basis is a set of chosen images, and modeling an image is a least-squares approximation of the image as a linear combination of the basis images. The coefficients of the basis images are taken together to serve as the index for that image. The paper describes the implementation of the indexing scheme, and presents the findings of our extensive evaluation that was conducted to optimize (1) the choice of the basis matrix (B), and (2) the size of the index A (N). Furthermore, we compare the performance of our indexing scheme with other schemes. Our results show that our scheme has significantly higher performance.
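To make the indexing idea concrete, here is a toy sketch under strong simplifying assumptions (two basis images, images as flat pixel lists; the paper's basis choice B and index size N are not reproduced): the index of an image is the least-squares coefficient vector of its projection onto the basis images, obtained by solving the 2x2 normal equations.

```python
def image_index(image, basis):
    """Least-squares coefficients of `image` on two basis images,
    solved via the 2x2 normal equations (Cramer's rule).
    Images are flat lists of pixel values of equal length."""
    b1, b2 = basis
    dot = lambda x, y: sum(p * q for p, q in zip(x, y))
    # Gram matrix of the basis and right-hand side of the normal equations.
    a11, a12, a22 = dot(b1, b1), dot(b1, b2), dot(b2, b2)
    r1, r2 = dot(b1, image), dot(b2, image)
    det = a11 * a22 - a12 * a12
    c1 = (r1 * a22 - r2 * a12) / det
    c2 = (a11 * r2 - a12 * r1) / det
    return [c1, c2]  # the (c1, c2) pair serves as the image's index
```

An image that is exactly a linear combination of the basis recovers its coefficients; for other images, the coefficients give the closest approximation in the least-squares sense, so nearby indexes correspond to similar images under this model.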
Abstract: The increasing complexity of software development based on peer-to-peer networks makes the creation of new frameworks necessary in order to simplify the developer's task. Additionally, some applications, e.g. fire detection or security alarms, may require real-time constraints, and a high-level definition of these features eases application development. In this paper, a service model based on a component model with real-time features is proposed. The high-level model abstracts developers from implementation tasks, such as discovery, communication, security or real-time requirements. The model is oriented to deploying services on small mobile devices, such as sensors, mobile phones and PDAs, where the computation is light-weight. Services can be composed with one another by means of the port concept to form complex ad-hoc systems, and their implementation is carried out using a component language called UM-RTCOM. To illustrate our proposals, a fire detection application is described.
Abstract: In this paper we consider the issue of distributed adaptive estimation over sensor networks. To deal with a more realistic scenario, a different observation noise variance is assumed for each sensor in the network. To handle these differing variances, the proposed method is divided into two phases: I) estimating each sensor's observation noise variance, and II) using the estimated variances to obtain the desired parameter. Our proposed algorithm is based on a diffusion least mean square (LMS) implementation with a linear combiner model. In the proposed algorithm, the step-size parameter and the coefficients of the linear combiner are adjusted according to the estimated observation noise variances. As the simulation results show, the proposed algorithm considerably improves on the diffusion LMS algorithm given in the literature.
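A hedged toy sketch of the two-phase idea, under assumptions that are ours, not the paper's (a scalar unknown, unit regressors, an invented step-size rule, and a simple inverse-variance combiner): each node first estimates its own noise variance, then runs LMS with a variance-dependent step size, and the network shares an inverse-variance-weighted combined estimate.

```python
import random

def diffusion_lms(true_w=2.0, variances=(0.1, 1.0, 0.25), iters=2000, seed=1):
    """Toy scalar diffusion LMS: sensor k observes d = w + noise with its
    own noise variance. Phase I estimates the variances; Phase II runs
    per-node LMS with variance-dependent step sizes plus a linear combiner."""
    rng = random.Random(seed)
    # Phase I: estimate each sensor's noise variance from the sample
    # spread of its raw observations (whose mean is the unknown w).
    samples = [[true_w + rng.gauss(0, v ** 0.5) for _ in range(500)]
               for v in variances]
    est_vars = []
    for s in samples:
        m = sum(s) / len(s)
        est_vars.append(sum((x - m) ** 2 for x in s) / (len(s) - 1))
    # Phase II: per-node LMS updates, then diffusion of a combined estimate.
    ests = [0.0] * len(variances)
    for _ in range(iters):
        for k, v in enumerate(variances):
            d = true_w + rng.gauss(0, v ** 0.5)
            mu = 0.05 / (1.0 + est_vars[k])   # noisier node -> smaller step
            ests[k] += mu * (d - ests[k])
        # Inverse-variance linear combiner, shared back to all nodes.
        combined = sum(e / ev for e, ev in zip(ests, est_vars)) \
                 / sum(1.0 / ev for ev in est_vars)
        ests = [combined] * len(ests)
    return combined
```

The weighting means a very noisy node neither drags the network estimate around (small step size) nor dominates the combiner (small combining weight), which is the intuition behind adjusting both quantities by the estimated variances.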
Abstract: Khao Yai National Park is the first national park in Thailand, and approximately 800,000 tourists visit it yearly. This study aimed to identify the perception of tourists in Khao Yai National Park regarding the introduction of eco-friendly cleansers during their leisure time in the campsites. Tourist activities in the park affect the quality of the environment, especially water resources; therefore, eco-friendly cleansers were used in the tourist campsites and restaurants during the high tourist season. The results indicated positive effects of the environmentally friendly cleansers on water quality in the Lam Ta Khong River, as well as a positive tourist perception of the eco-friendly cleansers.
Abstract: Permanent rivers are the main sources of renewable water supply for croplands under irrigation and drainage schemes. They are also the major source of the sediment loads transported into the storage reservoirs of hydro-electric dams, diversion weirs and regulating dams. Sedimentation results from soil erosion, which is related to poor watershed management and human intervention in the hydraulic regime of rivers. These can change the hydraulic behaviour and thus lead to riverbed and river bank scouring, the consequences of which are sediment load transport into the dams and a reduced flow discharge at water intakes. The present paper investigates the sedimentation process by varying the Manning coefficient "n", using the SHARC software along a watercourse of the Dez River. Results indicated that the optimum "n" within that river range is 0.0315, at which minimum sediment loads are transported into the Eastern intake. Comparison of the model results with those obtained from the SSIIM software within the same river reach showed very close agreement. This suggests a relative accuracy with which the model can simulate the hydraulic flow characteristics, and therefore its suitability as a powerful analytical tool for project feasibility studies and project implementation.
Abstract: The segmentation of endovascular tools in fluoroscopy images can be performed accurately, automatically or with minimal user intervention, using known modern techniques; this has been proven in the literature, but no clinical implementation exists so far because the computational time requirements of such technology have not yet been met. A classical segmentation scheme is composed of edge enhancement filtering, line detection, and segmentation. A new method is presented that consists of a vector that propagates in the image to track an edge as it advances. The filtering is performed progressively along the projected path of the vector, whose orientation allows for oriented edge detection, and only a minimal image area is filtered overall. Such an algorithm is rapidly computed and can be implemented in real-time applications. It was tested on medical fluoroscopy images from an endovascular cerebral intervention. Experiments showed that the 2D tracking was limited to guidewires without intersection crosspoints, while the 3D implementation was able to cope with such planar difficulties.
Abstract: Laboratory activities have produced benefits in student learning. With the current drive for new technology resources and an evolving era of education methods, the renewal of learning and teaching methods in the laboratory is in progress for both learners and educators. To enhance learning outcomes in laboratory work, particularly in engineering practice and testing, hands-on learning by instruction alone may not be sufficient. This paper describes and compares the techniques and implementation of the traditional (expository) and open-ended (problem-based) laboratory for two consecutive cohorts studying an environmental laboratory course in a civil engineering program. The effects of the transition from the traditional to the problem-based approach were investigated in terms of the course assessment student feedback survey, course outcome learning measurement and student performance grades. Students demonstrated better performance in their grades and a 12% increase in the course outcome (CO) under the problem-based open-ended laboratory style than under the traditional method, although in their perception, students responded less favorably in their feedback.
Abstract: This paper employs a new approach to regulate the blood glucose level of a type I diabetic patient under intensive insulin treatment. The closed-loop control scheme incorporates expert knowledge about the treatment by using reinforcement learning theory to maintain the normoglycemic average of 80 mg/dl and a normal free plasma insulin concentration from a severe initial state. The insulin delivery rate is obtained off-line using a Q-learning algorithm, without requiring an explicit model of the environment dynamics. The implementation of the insulin delivery rate therefore requires simple function evaluation and minimal online computation. Controller performance is assessed in terms of its ability to reject the effect of meal disturbances and to overcome the variability in glucose-insulin dynamics from patient to patient. Computer simulations are used to evaluate the effectiveness of the proposed technique and to show its superiority in controlling hyperglycemia over other existing algorithms.
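As a toy illustration of the off-line Q-learning idea, the sketch below learns a discrete dosing policy without an explicit model of the environment. It is emphatically NOT the paper's glucose-insulin model: the states (glucose bands 0-10, target band 5), actions, reward and dynamics are all invented for the example; only the tabular Q-learning update itself is standard.

```python
import random

def train_glucose_policy(episodes=400, seed=0):
    """Tabular Q-learning on a toy discretised 'glucose' chain:
    states 0..10 are glucose bands, band 5 is the target, and an action
    moves the band by -1/0/+1 (standing in for more/same/less insulin).
    Reward penalises distance from the target band."""
    rng = random.Random(seed)
    states, actions, target = range(11), (-1, 0, 1), 5
    q = {(s, a): 0.0 for s in states for a in actions}
    alpha, gamma, eps = 0.5, 0.9, 0.2
    for _ in range(episodes):
        s = rng.choice(list(states))
        for _ in range(30):
            # epsilon-greedy exploration
            a = rng.choice(actions) if rng.random() < eps \
                else max(actions, key=lambda x: q[(s, x)])
            s2 = min(10, max(0, s + a))
            r = -abs(s2 - target)
            # Q-learning update: model-free, off-policy
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, x)] for x in actions)
                                  - q[(s, a)])
            s = s2
    return q

def greedy(q, s):
    """The learned policy: the action with the highest Q-value in state s."""
    return max((-1, 0, 1), key=lambda a: q[(s, a)])
```

Once the table is learned off-line, applying the policy online is a single table lookup per step, which mirrors the abstract's point that the deployed controller needs only simple function evaluation.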
Abstract: This paper explains a project-based learning method in which autonomous mini-robots are developed for research, education and entertainment purposes. In remote systems, wireless sensors are deployed in critical areas; they collect data at specific time intervals and send the data to a central wireless node, which, based on certain preferred information, decides whether to turn a switch or control unit on or off. Such information transfers hardly amount to more than a few bytes, so low data rates suffice for such implementations. As a robot is a multidisciplinary platform, the interfacing issues involved are discussed in this paper, with a focus on power supply, grounding and decoupling issues.
Abstract: Mathematical, graphical and intuitive models are often
constructed in the development process of computational systems.
The Unified Modeling Language (UML) is one of the most popular
modeling languages used by practicing software engineers. This
paper critically examines UML models and suggests an augmented
use case view with the addition of new constructs for modeling
software. It also shows how a use case diagram can be enhanced. The
improved modeling constructs are presented with examples for
clarifying important design and implementation issues.
Abstract: The advent of multi-million gate Field Programmable Gate Arrays (FPGAs) with hardware support for multiplication opens an opportunity to recreate a significant portion of the front end of the human cochlea using this technology. In this paper we describe the implementation of the cochlear filter and show that it is entirely suited to a single-device XC3S500 FPGA implementation. The filter gave a good fit to real-time data with efficient hardware usage.
Abstract: In-core memory requirement is a bottleneck in solving large three-dimensional Navier-Stokes finite element problem formulations using sparse direct solvers. An out-of-core solution strategy is a viable alternative to reduce the in-core memory
requirements while solving large scale problems. This study
evaluates the performance of various out-of-core sequential solvers
based on multifrontal or supernodal techniques in the context of
finite element formulations for three dimensional problems on a
Windows platform. Here three different solvers, HSL_MA78,
MUMPS and PARDISO are compared. The performance of these
solvers is evaluated on a 64-bit machine with 16GB RAM for finite
element formulation of flow through a rectangular channel. It is
observed that using out-of-core PARDISO solver, relatively large
problems can be solved. The implementation of the Newton and modified Newton iterations is also discussed.
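For readers unfamiliar with the distinction just mentioned, the following scalar sketch contrasts full Newton iteration (derivative re-evaluated every step) with modified Newton iteration (derivative frozen at the starting point, mirroring the reuse of a single factorization in the finite element setting). The example function is illustrative only and is not taken from the study.

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Full Newton iteration: the derivative (Jacobian in the
    multidimensional case) is re-evaluated at every step."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    return x

def modified_newton(f, df, x0, tol=1e-10, max_iter=200):
    """Modified Newton: the derivative is frozen at x0, trading slower
    (linear) convergence for fewer derivative/factorization evaluations."""
    x, d0 = x0, df(x0)
    for _ in range(max_iter):
        step = f(x) / d0
        x -= step
        if abs(step) < tol:
            return x
    return x
```

In a sparse direct-solver context, each `df` evaluation stands in for an expensive matrix factorization, which is why the modified variant can win overall despite needing more iterations.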
Abstract: In this paper, we propose a Perceptually Optimized Foveation-based Embedded ZeroTree Image Coder (POEFIC) that applies perceptual weighting to the wavelet coefficients before SPIHT encoding, in order to reach a targeted bit rate with improved perceptual quality for a given bit rate and a fixation point that determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model integrating properties of the human visual system (HVS), which plays an important role in our POEFIC quality assessment. Our POEFIC coder is based on a vision model that incorporates various masking effects of HVS perception; it weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) foveation masking, to remove or reduce considerable high frequencies in peripheral regions, 2) luminance and contrast masking, and 3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique. However, the experimental results show that our coder demonstrates very good performance in terms of quality measurement.
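A hedged sketch of the foveated-weighting idea only: coefficients far from the fixation point receive smaller weights before encoding, so the embedded coder spends its bit budget on the region of interest. The exponential decay constant and the coefficient layout below are invented for illustration; the paper's actual foveation, masking and CSF models are not reproduced.

```python
import math

def foveation_weight(dx, dy, decay=0.05):
    """Illustrative foveation weight: decays with eccentricity, i.e.
    distance from the fixation point (decay rate is a made-up constant,
    not the paper's HVS model)."""
    return math.exp(-decay * math.hypot(dx, dy))

def weight_subband(coeffs, fixation):
    """Scale each wavelet coefficient by its foveation weight before
    encoding. `coeffs` maps spatial position (x, y) -> coefficient."""
    fx, fy = fixation
    return {(x, y): c * foveation_weight(x - fx, y - fy)
            for (x, y), c in coeffs.items()}
```

Because SPIHT transmits large-magnitude coefficients first, down-weighting peripheral coefficients this way implicitly prioritizes the ROI without changing the encoder itself, which is consistent with the claim that the codec's complexity matches the original SPIHT.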