Abstract: ERP systems are the largest software applications adopted by universities, and they entail significant investments in their implementation. However, unlike other applications, little research has been conducted on these systems in a university environment. This paper provides a critical review of previous research on ERP systems in higher education, with a special focus on higher education in Australia. The research not only forms the basis of an evaluation of previous research and research needs, it also makes inroads in identifying the payoff of ERPs in the sector from different perspectives, with particular reference to the user. The paper is divided into two parts: the first focuses on ERP literature in higher education at large, while the second focuses on ERP literature in higher education in Australia.
Abstract: Effective cooling of electronic equipment has emerged as a challenging and constraining problem of the new century. In the present work, the feasibility and effectiveness of jet impingement cooling of electronics were investigated numerically and experimentally. Studies were conducted to examine the effect of geometrical parameters such as jet diameter (D), jet-to-target spacing (Z) and the ratio of jet spacing to jet diameter (Z/D) on the heat transfer characteristics. The Reynolds numbers considered are in the range 7000 to 42000. The results of the numerical studies are validated by experiments. The studies show that the optimum value of the Z/D ratio is 5. For a given Reynolds number, the Nusselt number increases by about 28% if the nozzle diameter is increased from 1 mm to 2 mm. Correlations are proposed for the Nusselt number in terms of the Reynolds number, valid for air as the cooling medium.
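The paper's correlation coefficients are not reproduced in the abstract; as a minimal sketch of how such a correlation is typically obtained, the following fits a power law Nu = C·Re^m to hypothetical data points by linear least squares in log-log space (the values C = 0.2 and m = 0.7 are assumed for illustration only, not the paper's result):

```python
import numpy as np

# Hypothetical data: Nusselt numbers "measured" at several Reynolds numbers.
# The coefficients below are illustrative only, not the correlation from the paper.
Re = np.array([7000, 14000, 21000, 28000, 35000, 42000], dtype=float)
Nu = 0.2 * Re**0.7  # synthetic ground truth: Nu = C * Re**m with C=0.2, m=0.7

# Fit log(Nu) = log(C) + m*log(Re) by linear least squares.
m, logC = np.polyfit(np.log(Re), np.log(Nu), 1)
C = np.exp(logC)
print(C, m)  # recovers C ~ 0.2, m ~ 0.7
```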
Abstract: In this work we develop an object extraction method and propose efficient algorithms for object motion characterization. The set of proposed tools serves as a basis for developing object-based functionalities for the manipulation of video content. The estimates produced by the different algorithms are compared in terms of quality and performance and tested on real video sequences. The proposed method will be useful for the latest standards for encoding and description of multimedia content, MPEG-4 and MPEG-7.
Abstract: Fully customized hardware provides high performance and low power consumption by specializing tasks in hardware, but lacks design flexibility, since any change requires re-design and re-fabrication. Software-based solutions operate with software instructions, achieving great flexibility through the easy development and maintenance of software code; this execution of instructions, however, introduces high overhead in performance and area consumption. In recent decades, the reconfigurable computing domain has emerged, overcoming the traditional trade-off between flexibility and performance: it achieves high performance while maintaining good flexibility. The dramatic gains in chip performance and design flexibility achieved by reconfigurable computing systems depend greatly on the design of their computational units and their integration with reconfigurable logic resources. The computational unit of any reconfigurable system plays a vital role in defining its strength. This paper presents an RFU-based computational unit design using tightly coupled, multi-threaded reconfigurable cores. The proposed design has been simulated for VLIW-based architectures, and a high gain in performance has been observed compared to conventional computing systems.
Abstract: High speed PM generators driven by micro-turbines are widely used in smart grid systems. This paper therefore presents a comparative study of six classical, optimized and genetic analytical design cases for 400 kW output power at a tip speed of 200 m/s. The six design trials of high speed permanent magnet synchronous generators (HSPMSGs) are: classical sizing; unconstrained optimization minimizing total losses; and constrained optimization of total mass with bounded constraints introduced in the problem formulation. A genetic algorithm is then formulated to obtain maximum efficiency while minimizing machine size. In the second genetic problem formulation, we seek minimum mass, with the machine sizing constrained by a non-linear constraint function of machine losses. Finally, an optimum torque-per-ampere genetic sizing is predicted. All results are simulated with MATLAB, the Optimization Toolbox and its Genetic Algorithm. Finally, the six analytical design examples are compared through a study of machine waveforms, THD and rotor losses.
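As an illustration of the genetic sizing step described above, the following is a minimal sketch of a genetic algorithm minimizing a toy mass function under a loss bound handled by a penalty term; the surrogate `mass` and `losses` functions, the bounds and the 120 kW limit are all hypothetical stand-ins, not the paper's HSPMSG model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy surrogate functions (illustrative only -- not the paper's machine model):
# x = (stack length [m], rotor diameter [m]).
def mass(x):      # kg, grows with rotor volume
    L, D = x
    return 7800.0 * np.pi * (D / 2) ** 2 * L

def losses(x):    # kW, decreases as the machine gets bigger (hypothetical)
    L, D = x
    return 40.0 / (L * D)

LIMIT = 120.0     # kW loss bound (hypothetical)

def fitness(x):   # penalized objective: mass plus penalty for violating the bound
    return mass(x) + 1e5 * max(0.0, losses(x) - LIMIT)

lo, hi = np.array([0.1, 0.1]), np.array([1.0, 1.0])
pop = rng.uniform(lo, hi, size=(60, 2))
for _ in range(200):
    fit = np.array([fitness(x) for x in pop])
    # binary tournament selection
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = pop[np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # blend crossover plus Gaussian mutation, clipped to the bounds
    children = (parents + parents[::-1]) / 2 + rng.normal(0, 0.02, parents.shape)
    pop = np.clip(children, lo, hi)

best = min(pop, key=fitness)
print(best, mass(best), losses(best))
```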
Abstract: Corporate social responsibility (CSR) can be defined as the management of social, environmental, economic and ethical concepts and firms' sensitivity to the expectations of social stakeholders. CSR is seen as an important competitive advantage in the textile sector, because this sector has a significant impact on the environment and is labor-intensive. The textile sector has a strong advantage when compared with other sectors in Turkey due to its low labor costs and abundance of raw materials. Turkey was a producer and exporter of cotton, and an importer of fiber, clothes and dresses, until the 1950s. After the 1950s, Turkey began to export fiber and ready-made clothes, and it has recently become one of the most important textile producers in the world. This study presents the CSR practices of the textile firms quoted on the Istanbul Stock Exchange and these firms' sensitivity to their internal and external stakeholders and the environment.
Abstract: Support Vector Machines (SVMs) are a recent class of statistical classification and regression techniques playing an increasing role in detection problems across engineering, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, an SVM is applied to an infrared (IR) binary communication system with different channel models, including Ricean multipath fading and a partially developed scattering channel, with additive white Gaussian noise (AWGN) at the receiver. The structure and performance of the SVM in terms of the bit error rate (BER) are derived and simulated for these stochastic channel models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of the SVM is then compared to classical maximum likelihood detection of binary signals using a matched filter, driven by on-off keying (OOK) modulation. We found that the performance of the SVM is superior to that of the traditional optimal detection schemes used in statistical communication, especially for very low signal-to-noise ratio (SNR) ranges. For large SNR, the performance of the SVM is similar to that of the classical detectors. The implication of these results is that SVMs can prove very beneficial to IR communication systems, which notoriously suffer from low SNR, at the cost of increased computational complexity.
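The comparison described above can be sketched as follows. This minimal example covers an AWGN channel only (not the Ricean fading or scattering channels the paper also models): it trains a linear SVM on noisy OOK frames and compares its BER with that of a matched-filter (integrate-and-threshold) detector. All signal parameters are assumed for illustration:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
N, A, sigma = 8, 1.0, 1.5          # samples/bit, OOK amplitude, noise std (low SNR)

def ook_frames(n_bits):
    """Generate OOK bit frames: amplitude A for '1', zero for '0', plus AWGN."""
    bits = rng.integers(0, 2, n_bits)
    frames = A * bits[:, None] * np.ones(N) + rng.normal(0, sigma, (n_bits, N))
    return bits, frames

train_bits, train_x = ook_frames(2000)
test_bits, test_x = ook_frames(5000)

# Matched-filter (correlation) detector: integrate over the bit and threshold.
mf_decisions = (test_x.sum(axis=1) > A * N / 2).astype(int)
ber_mf = np.mean(mf_decisions != test_bits)

# SVM detector trained on labelled noisy frames.
svm = SVC(kernel="linear").fit(train_x, train_bits)
ber_svm = np.mean(svm.predict(test_x) != test_bits)
print(ber_mf, ber_svm)
```

In pure AWGN the matched filter is optimal, so the SVM can only approach it here; the paper's reported SVM advantage concerns the fading and scattering channels.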
Abstract: Residues are produced at all stages of human activity, and their composition and volume vary according to consumption practices and production methods. Significant harm to the environment is associated with the volume of generated material as well as with the improper disposal of solid wastes, whose negative effects are noticed more frequently in the long term. Solving this problem is a challenge to government, industry and society, because it involves economic, social and environmental aspects and, especially, the awareness of the population in general. The main concerns focus on the impact on human health and on the environment (soil, water, air and landscape). Hazardous wastes, produced mainly by industry, are particularly worrisome because, when improperly managed, they become a serious threat to the environment. In view of this issue, this study aimed to evaluate the solid waste management system of a company that co-processes industrial waste, in order to propose improvements to the management of reject generation in a specific step of the blending production process.
Abstract: Nowadays, people are going more and more mobile, both in terms of devices and associated applications, and the services these devices offer are becoming broader and much more complex. Even though current handheld devices have considerable computing power, their contexts of use differ: they are affected by the availability of connectivity, the high latency of wireless networks, battery life, screen size, on-screen or hardware keyboards, etc. Consequently, the development of mobile applications and their associated mobile Web services, if any, should follow a concise methodology so that they provide a high quality of service. The aim of this paper is to highlight and discuss the main issues to consider when developing mobile applications and mobile Web services, and then to propose a framework that leads developers through different steps and modules toward the development of efficient and secure mobile applications. First, the different challenges in developing such applications are elicited and discussed in depth. Second, a development framework is presented, with different modules addressing each of these challenges. Third, the paper presents an example of a mobile application, Eivom Cinema Guide, which benefits from following our development framework.
Abstract: In this paper, the decomposition-aggregation method is used to derive connective stability criteria for general linear composite systems via aggregation. The large scale system is decomposed into a number of subsystems. By associating directed graphs with dynamic systems in an essential way, we define the relation between system structure and stability in the sense of Lyapunov. The stability criteria are then expressed in terms of the stability and system matrices of the subsystems, as well as the interconnection terms among subsystems, using the concepts of vector differential inequalities and vector Lyapunov functions. We then show that the stability of each subsystem together with the stability of the aggregate model implies connective stability of the overall system. An example is reported, showing the efficiency of the proposed technique.
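The aggregation step above reduces the overall stability question to a low-order test on the aggregate (comparison) matrix built from subsystem decay rates and interconnection bounds. A minimal sketch, with a hypothetical three-subsystem aggregate matrix, checks the Hurwitz condition that implies connective stability:

```python
import numpy as np

# Hypothetical aggregate (comparison) matrix for three subsystems:
# diagonal entries are each subsystem's Lyapunov decay rate (negative),
# off-diagonal entries bound the interconnection strengths (non-negative).
W = np.array([[-2.0,  0.3,  0.2],
              [ 0.4, -1.5,  0.1],
              [ 0.2,  0.5, -3.0]])

# Connective stability test used in decomposition-aggregation analysis:
# the aggregate comparison system is stable when W is Hurwitz
# (all eigenvalues have negative real parts).
eigs = np.linalg.eigvals(W)
stable = bool(np.all(eigs.real < 0))
print(eigs.real, stable)  # diagonally dominant here, so stable is True
```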
Abstract: The increasing complexity of software development based on peer-to-peer networks makes it necessary to create new frameworks in order to simplify the developer's task. Additionally, some applications, e.g. fire detection or security alarms, may require real-time constraints, and a high-level definition of these features eases application development. In this paper, a service model based on a component model with real-time features is proposed. The high-level model abstracts developers from implementation tasks, such as discovery, communication, security or real-time requirements. The model is oriented toward deploying services on small mobile devices, such as sensors, mobile phones and PDAs, where computation is lightweight. Services can be composed by means of the port concept to form complex ad-hoc systems, and their implementation is carried out using a component language called UM-RTCOM. To apply our proposals, a fire detection application is described.
Abstract: Underpricing is one anomaly in the initial public offering (IPO) literature that has been widely observed across different stock markets, with different trends emerging over different time periods. This study seeks to determine how IPOs on the JSE performed on the first day, first week and first month over the period 1996-2011. Underpricing trends are documented for both hot and cold market periods in terms of four main sectors (cyclical, defensive, growth and interest rate sensitive stocks). Using a sample of 360 companies listed on the JSE, the empirical findings establish that IPOs on the JSE are significantly underpriced, with an average market-adjusted first day return of 62.9%. It is also established that hot market IPOs on the JSE are more underpriced than cold market IPOs. We also observe that as the offer price per share increases above the median price for any given period, the level of underpricing decreases substantially. While significant differences exist in the level of underpricing of IPOs in the four sectors in hot and cold market periods, interest rate sensitive stocks showed a different trend from the other sectors and thus require further investigation to explain this pattern.
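The market-adjusted first-day return used above is the raw IPO return less the contemporaneous market (index) return. A minimal sketch with hypothetical prices (the study's 62.9% average is a sample statistic and is not reproduced here):

```python
def market_adjusted_return(offer_price, close_price, index_open, index_close):
    """First-day market-adjusted return: raw IPO return minus the
    contemporaneous market (index) return over the same day."""
    raw = close_price / offer_price - 1.0
    market = index_close / index_open - 1.0
    return raw - market

# Hypothetical IPO: offered at 10.00, closes at 14.00; index rises 1% that day.
mar = market_adjusted_return(10.0, 14.0, 30000.0, 30300.0)
print(round(mar, 4))  # 0.40 raw return less 0.01 market return -> 0.39
```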
Abstract: Mobile IPv6 (MIPv6) describes how a mobile node can change its point of attachment from one access router to another. As demand for wireless mobile devices increases, many enhancements of macro-mobility (inter-domain) protocols have been proposed, designed and implemented in Mobile IPv6. Hierarchical Mobile IPv6 (HMIPv6) is one of them, designed to reduce the amount of signaling required and to improve handover speed for mobile connections. This is achieved by introducing a new network entity called the Mobility Anchor Point (MAP). This report presents a comparative study of the HMIPv6 and MIPv6 protocols, with the scope narrowed down to micro-mobility (intra-domain). The architecture and operation of each protocol are studied, and the protocols are evaluated based on one Quality of Service (QoS) parameter: handover latency. The simulation was carried out using Network Simulator-2, and its outcome is discussed. The results show that HMIPv6 outperforms MIPv6 under intra-domain mobility, with MIPv6 suffering from large handover latency. As an enhancement to HMIPv6, we propose locating the MAP in the middle of the domain with respect to all access routers. This gives approximately the same, and possibly shorter, distance between the MAP and the mobile node (MN) regardless of the MN's new location, reducing the delay since the distance is shorter. As future work, a performance analysis is to be carried out for the proposed enhancement and compared to standard HMIPv6.
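The proposed MAP placement, minimizing the worst-case distance to the access routers, is an instance of the 1-centre problem. A minimal sketch with hypothetical router coordinates approximates it by a coarse grid search:

```python
import numpy as np

# Hypothetical access-router coordinates within one HMIPv6 domain (arbitrary units).
routers = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0], [2.0, 5.0]])

# Place the MAP to minimise the worst-case distance to any access router
# (a 1-centre problem), approximated here by a grid search over the domain.
xs = np.linspace(routers[:, 0].min(), routers[:, 0].max(), 201)
ys = np.linspace(routers[:, 1].min(), routers[:, 1].max(), 201)
best, best_r = None, np.inf
for x in xs:
    for y in ys:
        r = np.max(np.hypot(routers[:, 0] - x, routers[:, 1] - y))
        if r < best_r:
            best, best_r = (x, y), r
print(best, best_r)  # the MAP location and its worst-case router distance
```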
Abstract: The epoxidation of soybean oil at a temperature of 60 °C provided the best result in terms of attaching the –OH functionality. At temperatures below and above 60 °C, it is likely that the attaching reaction did not proceed sufficiently fast. A considerable yield below 40% implies that the oil is not completely converted; complete conversion is not possible by conventional methods, because the epoxide decomposes at the temperature required. The objective of this work was the development of a catalyst for the conversion of epoxide to polyol at reaction temperatures of 50, 60 and 70 °C. The effects of different types of catalyst were studied, and the effect of alcohols with different molecular configurations was determined, which leads to selective addition of alcohols to the epoxidized oils.
Abstract: Laboratory activities have produced benefits in student learning. With the current drive of new technology resources and the evolving era of education methods, the renewal of learning and teaching in laboratory methods is in progress, for both learners and educators. To enhance learning outcomes in laboratory work, particularly in engineering practice and testing, learning via hands-on instruction alone may not be sufficient. This paper describes and compares the techniques and implementation of a traditional (expository) and an open-ended (problem-based) laboratory for two consecutive cohorts studying an environmental laboratory course in a civil engineering program. The findings and effects of the transition from traditional to problem-based were investigated in terms of course assessment, a student feedback survey, course outcome measurement and student grades. The results show that students demonstrated better performance in their grades and a 12% increase in the course outcome (CO) in the problem-based open-ended laboratory style compared with the traditional method, although in perception, students responded less favorably in their feedback.
Abstract: This paper employs a new approach to regulate the blood glucose level of a type I diabetic patient under intensive insulin treatment. The closed-loop control scheme incorporates expert knowledge about treatment by using reinforcement learning theory to maintain the normoglycemic average of 80 mg/dl and the normal condition for free plasma insulin concentration from a severe initial state. The insulin delivery rate is obtained off-line using the Q-learning algorithm, without requiring an explicit model of the environment dynamics. The implementation of the insulin delivery rate therefore requires simple function evaluation and minimal online computation. Controller performance is assessed in terms of its ability to reject the effect of meal disturbances and to overcome the variability in glucose-insulin dynamics from patient to patient. Computer simulations are used to evaluate the effectiveness of the proposed technique and to show its superiority in controlling hyperglycemia over other existing algorithms.
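The model-free character of Q-learning noted above, where no explicit model of the environment dynamics is required, can be illustrated with a minimal tabular sketch; the toy chain environment below is purely illustrative and is not the glucose-insulin model of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal tabular Q-learning sketch (illustrative toy chain, not the
# glucose-insulin model): states 0..4, actions 0=left / 1=right,
# reward only when reaching the rightmost state.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

for _ in range(2000):
    s = int(rng.integers(0, n_states))
    for _ in range(20):
        # epsilon-greedy action selection
        a = int(rng.integers(0, n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        # model-free update: only the sampled transition is used,
        # no transition model of the environment is needed
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)
print(policy)  # the learned policy prefers moving right in every state
```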
Abstract: Traffic management and information systems, which rely on a system of sensors, aim to describe traffic in urban areas in real time using a set of parameters and to estimate them. Although the state of the art focuses on data analysis, little is done in the sense of prediction. In this paper, we describe a machine learning system for traffic flow management and control, addressing the traffic flow prediction problem. The new algorithm is obtained by using the Random Forests algorithm as a weak learner inside the AdaBoost algorithm. We show that our algorithm performs relatively well on real data and enables us, according to the Traffic Flow Evaluation model, to estimate and predict whether or not there is congestion at a given time at road intersections.
Abstract: In recent years there has been a renewal of interest in the relation between green IT and cloud computing. The growing use of computers on cloud platforms has markedly increased energy consumption, putting negative pressure on the electricity costs of cloud data centers. This paper proposes an effective mechanism to reduce energy utilization in cloud computing environments. We present initial work on the integration of resource and power management aimed at reducing power consumption. Our mechanism relies on recalling virtualization services dynamically according to the user's virtualization requests and temporarily shutting down physical machines after tasks finish in order to conserve energy. Given the estimated energy consumption, this proposed effort has the potential to positively impact power consumption. The results of the experiment show that energy can indeed be saved by powering off idle physical machines on cloud platforms.
Abstract: This paper presents a reliability-based approach for selecting appropriate wind turbine types for a wind farm, considering site-specific wind speed patterns. An actual wind farm in the northern region of Iran, with one year of wind speed records, is studied in this paper. An analytic approach based on the total probability theorem is utilized to model the probabilistic behavior of both the turbines' availability and the wind speed. Well-known probabilistic reliability indices such as loss of load expectation (LOLE), expected energy not supplied (EENS) and incremental peak load carrying capability (IPLCC) for wind power integration into the Roy Billinton Test System (RBTS) are examined. The turbine type achieving the highest reliability level is chosen for the studied wind farm.
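The total probability theorem underlying the indices above enumerates the availability states of the generating units. A minimal sketch computes LOLE (hours) and EENS (MWh) for a hypothetical three-turbine farm against a toy load duration curve; all figures are illustrative and are not the RBTS study data:

```python
from itertools import product

# Hypothetical 3-unit farm: each turbine is a two-state unit with capacity
# 2 MW and forced outage rate (FOR) q = 0.05; four representative load hours.
cap, q, n = 2.0, 0.05, 3
load = [5.0, 4.0, 3.0, 1.0]  # MW

lole = eens = 0.0
for states in product([0, 1], repeat=n):   # 1 = unit available, 0 = on outage
    p = 1.0
    for s in states:
        p *= (1 - q) if s else q           # probability of this availability state
    available = cap * sum(states)
    for L in load:
        if available < L:
            lole += p                      # expected hours of load loss
            eens += p * (L - available)    # expected MWh not supplied
print(round(lole, 5), round(eens, 5))
```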
Abstract: The world's population continues to grow by a quarter of a million people per day, increasing energy consumption and confronting the world with an energy crisis. In response, the principles of renewable energy have gained popularity. Many advances have been made in developing wind and solar energy farms across the world, but these farms are not enough to meet the world's energy requirements, which has attracted investors to procure new substitute sources of energy. Among these sources, extraction of energy from waves is considered one of the best options: the world's oceans contain enough energy to meet the world's requirements, and significant advances in design and technology are being made to turn waves into a continuous source of energy. One major hurdle in launching wave energy devices in a developing country like Pakistan is the initial cost; a simple, reliable and cost-effective wave energy converter (WEC) is required to meet the nation's energy needs. This paper presents a novel design, proposed by team SAS, for harnessing wave energy, in three major sections. The first section gives a brief and concise view of ocean wave creation and propagation and the energy carried by waves. The second section explains the design of SAS-2, in which a gear chain mechanism transfers energy from the buoy to a rotary generator. The third section explains the manufacturing of a scaled-down model of SAS-2, with many modifications made in the troubleshooting stage. The design of SAS-2 is simple and requires very little maintenance. SAS-2 is producing electricity at Clifton, and its initial cost is very low, proving SAS-2 to be a cost-effective and reliable means of harnessing wave energy for developing countries.
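The energy carried by ocean waves, discussed in the first section above, is commonly estimated with the deep-water wave power formula P = ρ·g²·Hs²·Te / (64π) per metre of wave crest. A minimal sketch, using an illustrative sea state rather than measured data from Clifton:

```python
import math

def wave_power_per_metre(Hs, Te, rho=1025.0, g=9.81):
    """Deep-water wave energy flux per metre of wave crest (W/m):
    P = rho * g**2 * Hs**2 * Te / (64 * pi),
    with significant wave height Hs (m) and energy period Te (s)."""
    return rho * g**2 * Hs**2 * Te / (64 * math.pi)

# Illustrative sea state (assumed, not measured): Hs = 2 m, Te = 8 s.
print(wave_power_per_metre(2.0, 8.0) / 1000.0)  # about 15.7 kW per metre of crest
```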