Abstract: Based on 276 responses from academic staff in an
evaluation of an online learning environment (OLE), this paper
identifies those elements of the OLE that were most used and valued
by staff, those elements of the OLE that staff most wanted to see
improved, and those factors that most contributed to staff perceptions
that the use of the OLE enhanced their teaching. The most used and
valued elements were core functions, including accessing unit
information, accessing lecture/tutorial/lab notes, and reading online
discussions. The elements identified as most needing attention related
to online assessment: submitting assignments, managing assessment
items, and receiving feedback on assignments. Staff felt that using the
OLE enhanced their teaching when they were satisfied that their
students were able to access and use their learning materials, and
when they were satisfied with the professional development they
received and were confident with their ability to teach with the OLE.
Abstract: Extraction of laccase produced by L. polychrous in an
aqueous two-phase system, composed of polyethylene glycol (PEG)
and phosphate salt at pH 7.0 and 25 °C, was investigated. The effects
of PEG molecular weight, PEG concentration and phosphate
concentration were determined. Laccase partitioned preferentially to
the top phase, and good extraction of laccase to the top phase was
observed with PEG 4000. The optimum system contained
12% w/w PEG 4000 and 16% w/w phosphate salt, giving a partition
coefficient (KE) of 88.3, a purification factor of 3.0-fold and a 99.1% yield.
Some properties of the enzyme, such as thermal stability, the effect of
heavy metal ions and the kinetic constants, are also presented in this
work. Thermal stability decreased sharply at temperatures
above 60 °C. The enzyme was inhibited by Cd2+, Pb2+, Zn2+ and
Cu2+. The Vmax and Km values of the enzyme were 74.70
μmol/min/ml and 9.066 mM, respectively.
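The reported kinetic constants belong to the standard Michaelis-Menten rate law, v = Vmax·[S]/(Km + [S]). A minimal sketch of evaluating that law with the abstract's Vmax and Km (the substrate concentrations chosen below are illustrative):

```python
def michaelis_menten(s, vmax=74.70, km=9.066):
    """Michaelis-Menten rate v = Vmax*[S]/(Km+[S]).
    vmax in umol/min/ml, km and s in mM (values from the abstract)."""
    return vmax * s / (km + s)

# At [S] = Km the rate is exactly half of Vmax, by definition.
print(michaelis_menten(9.066))  # → 37.35
# At high substrate the rate approaches Vmax.
print(michaelis_menten(1000.0))
```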
Abstract: The convergence of heterogeneous wireless access technologies characterizes 4G wireless networks. In such converged systems, seamless and efficient handoff between
different access technologies (vertical handoff) is essential and remains a challenging problem. The heterogeneous co-existence of access technologies with largely different characteristics creates the decision problem of determining the “best” available network at
the “best” time so as to reduce unnecessary handoffs. This paper proposes a dynamic decision model to decide the “best” network at the “best”
time for handoff. The proposed model makes the right vertical handoff decisions by determining the “best”
network at the “best” time among the available networks, based on
dynamic factors such as the received signal strength (RSS) of the network and
the velocity of the mobile station, together with static factors such as usage expense, link capacity (offered bandwidth) and power
consumption. This model not only meets individual user needs but also improves overall system performance by reducing unnecessary handoffs.
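One common way such dynamic and static factors are combined is a weighted score per candidate network, with the handoff target chosen as the highest-scoring one. The sketch below is an illustration of that general idea only; the attribute names, normalized values and weights are hypothetical and are not the paper's actual decision model:

```python
# Hypothetical multi-factor score for vertical-handoff candidates.
# Benefits (RSS, bandwidth) add to the score; costs (expense, power) subtract.
def score(net, w):
    return (w["rss"] * net["rss_norm"]
            + w["bw"] * net["bw_norm"]
            - w["cost"] * net["cost_norm"]
            - w["power"] * net["power_norm"])

# Illustrative weights and normalized (0..1) attribute values.
weights = {"rss": 0.4, "bw": 0.3, "cost": 0.2, "power": 0.1}
candidates = {
    "WLAN": {"rss_norm": 0.9, "bw_norm": 0.8, "cost_norm": 0.2, "power_norm": 0.3},
    "UMTS": {"rss_norm": 0.6, "bw_norm": 0.4, "cost_norm": 0.6, "power_norm": 0.5},
}
best = max(candidates, key=lambda n: score(candidates[n], weights))
print(best)  # → WLAN  (0.4*0.9 + 0.3*0.8 - 0.2*0.2 - 0.1*0.3 = 0.53)
```

In a full decision model the weights themselves would vary with the dynamic factors, e.g. discounting small-coverage networks when the mobile station's velocity is high.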
Abstract: Managers, as key employees, have a very important
role in maintaining workforce performance, which is critical to
construction companies' future success. If motivated
employees start with motivated managers, it seems
plausible that de-motivated employees start with de-motivated managers.
This study aims to analyze the importance of motivated managers to
their own success and to that of their construction companies. In this study,
a quantitative method was used and the study area was Medan,
North Sumatera. A questionnaire survey was distributed directly to
construction companies in Medan which are listed in the
Construction Services Development Board. A total of 60 managers
responded and the completed questionnaires were analyzed using the
descriptive analysis. The results indicated that the respondents
acknowledged the importance of their own motivation to the success of
projects and construction companies, implying that it is vital
to maintain the motivation and good performance of the workforce.
Abstract: Petri nets are among the most useful graphical tools for modelling complex asynchronous systems, and we have used a Petri net to model a multi-track railway level crossing system. The roadway has been augmented with four half-size barriers. For better control, a three-stage control mechanism has been introduced to ensure that no road vehicle is trapped on the level crossing. A timed Petri net is used to capture the temporal nature of the signalling system. A safeness analysis is also included in the discussion section.
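The core Petri net semantics underlying such models is simple: places hold tokens, and a transition fires when every input place holds enough tokens, consuming them and producing tokens on its output places. A minimal sketch of that firing rule on a toy gate-lowering net (the places and transition below are illustrative, not the paper's multi-track crossing model):

```python
# Minimal Petri net firing rule: a transition is enabled when every
# input place has the required tokens; firing moves tokens from the
# pre-set to the post-set.
def enabled(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Toy net: a train approaching while the gate is up lowers the gate.
marking = {"train_approaching": 1, "gate_up": 1, "gate_down": 0}
pre = {"train_approaching": 1, "gate_up": 1}
post = {"gate_down": 1}
if enabled(marking, pre):
    marking = fire(marking, pre, post)
print(marking)  # → {'train_approaching': 0, 'gate_up': 0, 'gate_down': 1}
```

A timed Petri net extends this by attaching firing delays to transitions; safeness then means no place ever accumulates more than one token.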
Abstract: Starting with an analysis of the financial and
operational indicators that can be found in the specialised literature,
this study aims to contribute to improvements in the performance
measurement systems used when the unit of analysis is the
manufacturing plant. To this end, a search was conducted in the highest-impact
journals in Production and Operations Management and
Management Accounting, with the aim of determining the financial
and operational indicators used to evaluate performance when
Advanced Production Practices have been implemented, more
specifically when the practices implemented are Total Quality
Management, JIT/Lean Manufacturing and Total Productive
Maintenance. This has enabled us to obtain a classification of the two
types of indicators based on how much each is used. For the financial
indicators we have also prepared a proposal that can be adapted to the
accounting features of manufacturing plants. In the near future we will
propose a model that links the implementation of the practices with financial
and operational indicators, and these last two with each other. We aim
to test this model empirically with the data obtained in the High
Performance Manufacturing Project.
Abstract: We present a new numerical method for the computation of the steady-state solution of Markov chains. Theoretical analyses show that the proposed method, with a contraction factor α, converges to the one-dimensional null space of singular linear systems of the form Ax = 0. Numerical experiments are used to illustrate the effectiveness of the proposed method, with applications to a class of interesting models in the domain of tandem queueing networks.
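The stationary distribution π of a Markov chain satisfies π = πP, i.e. the singular system (Pᵀ − I)x = 0, which is the Ax = 0 form mentioned above. As a baseline illustration only (not the paper's α-contraction method), a standard power iteration converging to that one-dimensional null space looks like:

```python
# Standard power iteration for the stationary distribution pi = pi*P
# of an ergodic Markov chain. Baseline sketch, not the paper's method.
def steady_state(P, iters=1000):
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Illustrative 2-state chain.
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = steady_state(P)
print([round(x, 4) for x in pi])  # → [0.8333, 0.1667]
```

Balance check: 0.1·π₀ = 0.5·π₁ gives π₀ = 5π₁, so π = (5/6, 1/6), matching the printed result.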
Abstract: Data mining has been integrated into application systems to enhance the quality of the decision-making process. This study focuses on the integration of data mining technology and Knowledge Management Systems (KMS), owing to the ability of data mining technology to create useful knowledge from large volumes of data, while a KMS vitally supports the creation and use of knowledge. The integration of data mining technology and KMS is widely used in business for enhancing and sustaining organizational performance. However, there is a lack of studies applying data mining technology and KMS in the education sector, particularly to students' academic performance, even though this could reflect the performance of an institution of higher learning (IHL). Realizing its importance, this study seeks to integrate data mining technology and KMS to promote effective management of knowledge within IHLs. Several concepts from the literature are adapted to propose a new integrative data mining technology and KMS framework for an IHL.
Abstract: Database management systems that integrate user preferences promise better solutions for personalization, greater flexibility and higher-quality query responses. This paper presents tentative work that studies and investigates approaches to expressing user preferences in queries. We sketch an extension of the capabilities of the SQLf language, which uses fuzzy set theory to define user preferences. Two essential points are considered: the first concerns the expression of user preferences in SQLf by a set of so-called fuzzy commensurable predicates; the second concerns the bipolar way in which these user preferences are expressed, as mandatory and/or optional preferences.
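Fuzzy predicates of the kind SQLf builds on map an attribute value to a membership degree in [0, 1] instead of a crisp true/false. A minimal sketch of one such predicate, a trapezoidal "cheap" on price (the thresholds are hypothetical, chosen only to illustrate the shape):

```python
# Illustrative fuzzy predicate: degree to which a price is "cheap".
# Fully satisfied below `full`, rejected above `zero`, linear in between.
def cheap(price, full=200.0, zero=500.0):
    if price <= full:
        return 1.0
    if price >= zero:
        return 0.0
    return (zero - price) / (zero - full)

print(cheap(150))  # → 1.0   (fully cheap)
print(cheap(350))  # → 0.5   (halfway down the slope)
print(cheap(600))  # → 0.0   (not cheap at all)
```

In a fuzzy query the degrees of several such predicates are then combined (e.g. with min for conjunction) and rows are ranked by the resulting satisfaction degree.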
Abstract: Simulation is a very powerful method for high-performance,
high-quality design in distributed systems, and is currently
perhaps the only one, considering the heterogeneity, complexity and
cost of distributed systems. In Grid environments, for example, it is
hard and even impossible to perform scheduler performance
evaluation in a repeatable and controllable manner as resources and
users are distributed across multiple organizations with their own
policies. In addition, Grid test-beds are limited and creating an
adequately-sized test-bed is expensive and time consuming.
Scalability, reliability and fault-tolerance become important
requirements for distributed systems in order to support distributed
computation. A distributed system with such characteristics is called
dependable. Large environments, like the Cloud, offer unique
advantages, such as low cost, dependability and QoS guarantees for all
users. Resource management in large environments requires
performant scheduling algorithms guided by QoS constraints. This
paper presents a performance evaluation of scheduling heuristics
guided by different optimization criteria. The algorithms for
distributed scheduling are analyzed with respect to satisfying user
constraints while at the same time considering the independent capabilities of
the resources. This analysis acts as a profiling step for algorithm
calibration. The performance evaluation is based on simulation. The
simulator is MONARC, a powerful tool for large scale distributed
systems simulation. The novelty of this paper lies in the synthetic
analysis results, which offer guidelines for scheduler service
configuration and support empirically based decisions. The results
could be used in decisions regarding optimizations to existing Grid
DAG Scheduling and for selecting the proper algorithm for DAG
scheduling in various actual situations.
Abstract: Distributed generation (DG) has received increasing
attention in recent years. The impact of DG on various aspects of distribution system
operation, such as reliability and energy loss, depends highly on the DG
location in the distribution feeder. Optimal DG placement is an important
subject which has not yet been fully discussed.
This paper presents an optimization method to determine optimal DG
placement, based on a cost/worth analysis approach. The method
considers technical and economic factors such as energy loss, load-point
reliability indices and DG costs, and particularly the portability of DG. The
proposed method is applied to a test system and the impacts of different
parameters such as load growth rate and load forecast uncertainty (LFU)
on optimum DG location are studied.
Abstract: Automated storage and retrieval systems (AS/RS)
have become frequently used systems in warehouses. There has been a
transition from human-operated forklift applications to fast and safe
AS/RS applications in firms' warehouse systems. In this study, the basic
components and automation systems of the AS/RS are examined.
The automation components of the proposed system and their tasks in
the system control algorithm are stated, and the control system
structure is then obtained according to this control algorithm.
Abstract: Measuring the complexity of software has been an
insoluble problem in software engineering. Complexity measures can
be used to predict critical information about testability, reliability,
and maintainability of software systems from automatic analysis of
the source code. During the past few years, many complexity
measures have been invented based on the emerging Cognitive
Informatics discipline. These software complexity measures,
including cognitive functional size, lend themselves to the approach
of the total cognitive weights of basic control structures such as loops
and branches. This paper shows that the currently existing calculation
method can generate different results that are algebraically
equivalent. However, analysis of the combinatorial meaning of this
calculation method reveals a significant flaw in the measure, which also
explains why it does not satisfy Weyuker's properties. Based on the
findings, improvement directions, such as measures fusion, and
cumulative variable counting scheme are suggested to enhance the
effectiveness of cognitive complexity measures.
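In the cognitive-weight approach referred to above, each basic control structure (BCS) carries a weight; in the commonly cited scheme, weights of sequentially composed structures add while nesting multiplies them. The sketch below illustrates that calculation with the frequently quoted weights (sequence 1, branch 2, iteration 3); these particular weights and the tree encoding are an assumption for illustration, not necessarily the exact scheme critiqued in the paper:

```python
# Cognitive weight of a control-structure tree: sibling weights add,
# and a structure's weight multiplies the sum of its nested children.
# Weights follow the commonly cited scheme (illustrative assumption).
W = {"seq": 1, "branch": 2, "loop": 3}

def weight(node):
    kind, children = node
    base = W[kind]
    if not children:
        return base
    return base * sum(weight(c) for c in children)

# A loop whose body contains a branch followed by a plain sequence:
# 3 * (2 + 1) = 9.
prog = ("loop", [("branch", []), ("seq", [])])
print(weight(prog))  # → 9
```

The paper's point is precisely that such additive/multiplicative combination rules can assign the same total weight to structurally different programs, which is one source of the flaw discussed.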
Abstract: The experiment was conducted to study the effect of
rearing systems on fatty acid composition and cholesterol content of
Thai indigenous chicken meat. Three hundred and sixty chicks were
allocated to 2 different rearing systems: conventional, housing in an
indoor pen (5 birds/m2); free-range, housing in an indoor pen (5
birds/m2) with access to a grass paddock (1 bird/m2) from 8 wk of age
until slaughter. All birds were provided with the same diet during the
experimental period. At 16 wk of age, 24 birds per group were
slaughtered to evaluate the fatty acid composition and cholesterol
content of breast and thigh meat. The results showed that the
proportion of SFA, MUFA and PUFA in breast and thigh meat were
not different among groups (P>0.05). However, the proportion of n-3
fatty acids was higher and the ratio of n-6 to n-3 fatty acids was lower
in the free-range system than in the conventional system
(P<0.05). The data indicated that the free-range system
could increase the proportion of n-3 fatty acids, but had no effect on the
cholesterol content of Thai indigenous chicken meat.
Abstract: This paper proposes a delay-dependent leader-following consensus condition for multi-agent systems with both communication delay and probabilistic self-delay. The proposed method employs a suitable piecewise Lyapunov-Krasovskii functional and the average dwell time approach. A new consensus criterion for the systems is established in terms of linear matrix inequalities (LMIs), which can be easily solved by various effective optimization algorithms. A numerical example shows that the proposed method is effective.
Abstract: Recommender Systems act as personalized decision
guides, aiding users in decisions on matters related to personal taste.
Most previous research on Recommender Systems has focused on the
statistical accuracy of the algorithms driving the systems, with no
emphasis on the trustworthiness of the users. A Recommender System (RS)
depends on information provided by different users to gather its knowledge. We
believe that if a large group of users provides wrong information, it will
not be possible for the RS to arrive at an accurate conclusion. The
system described in this paper introduces the concept of testing the
knowledge of users to filter out these “bad users”.
This paper emphasizes the mechanism used to provide robust
and effective recommendations.
Abstract: In this paper, we have compared the performance of a Turbo and Trellis coded optical code division multiple access (OCDMA) system. The comparison of the two codes has been accomplished by employing optical orthogonal codes (OOCs). The Bit Error Rate (BER) performances have been compared by varying the code weights of address codes employed by the system. We have considered the effects of optical multiple access interference (OMAI), thermal noise and avalanche photodiode (APD) detector noise. Analysis has been carried out for the system with and without double optical hard limiter (DHL). From the simulation results it is observed that a better and distinct comparison can be drawn between the performance of Trellis and Turbo coded systems, at lower code weights of optical orthogonal codes for a fixed number of users. The BER performance of the Turbo coded system is found to be better than the Trellis coded system for all code weights that have been considered for the simulation. Nevertheless, the Trellis coded OCDMA system is found to be better than the uncoded OCDMA system. Trellis coded OCDMA can be used in systems where decoding time has to be kept low, bandwidth is limited and high reliability is not a crucial factor as in local area networks. Also the system hardware is less complex in comparison to the Turbo coded system. Trellis coded OCDMA system can be used without significant modification of the existing chipsets. Turbo-coded OCDMA can however be employed in systems where high reliability is needed and bandwidth is not a limiting factor.
Abstract: The asymmetry of traffic between the uplink and downlink
of recent mobile communication systems has become conspicuous because
of new communication services. This paper proposes
an asymmetric traffic accommodation scheme adopting a multihop
cooperative transmission technique for CDMA/FDD cellular networks.
The proposed scheme employs the cooperative transmission
technique in previously proposed downlink multihop transmissions
for the accommodation of the asymmetric traffic, which utilizes
the vacant uplink band for the downlink relay transmissions. The
proposed scheme reduces the transmission power at the downlink
relay transmissions and then suppresses the interference to the uplink
communications, and thus, improves the uplink performance. The
proposed scheme is evaluated by computer simulation and the results
show that it can achieve better throughput performance.
Abstract: We present a non-standard Euclidean vehicle
routing problem with an added level of clustering, and we revisit the use
of self-organizing maps as a tool which naturally handles such
problems. We show how they can be used as the main operator
of an evolutionary algorithm to address the two conflicting
objectives of minimizing route length and the distance from customers
to bus stops, and to deal with capacity constraints. We apply the
approach to a real-life case of combined clustering and vehicle
routing for the transportation of the 780 employees of an
enterprise. Based on a geographic information system, we
discuss the influence of road infrastructures on the solutions
generated.
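The self-organizing map operator at the heart of such approaches repeatedly presents a customer point, finds the closest neuron (the "winner") on a ring, and pulls the winner and its ring neighbours toward the point. A minimal sketch of one such update step (learning rate, radius and the neighbourhood decay are illustrative choices, not the paper's parameters):

```python
# One SOM-style update on a ring of neurons, as used in SOM-based
# vehicle routing: the winner and its ring neighbours move toward a
# presented customer point. Parameters are illustrative.
import math

def som_step(neurons, point, lr=0.5, radius=1):
    # Winner: the neuron closest to the presented point.
    win = min(range(len(neurons)),
              key=lambda i: math.dist(neurons[i], point))
    n = len(neurons)
    for i in range(n):
        d = min(abs(i - win), n - abs(i - win))  # distance along the ring
        if d <= radius:
            h = lr * (1 - d / (radius + 1))      # simple neighbourhood decay
            x, y = neurons[i]
            px, py = point
            neurons[i] = (x + h * (px - x), y + h * (py - y))
    return win

neurons = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
win = som_step(neurons, (0.2, 0.1))
print(win, neurons[win])  # winner 0 moves halfway toward (0.2, 0.1)
```

Iterating this over all customers while shrinking the learning rate and radius makes the ring of neurons unfold into a tour, which is what lets a single operator handle clustering and routing together.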
Abstract: The aim of this contribution is to present a new
approach in modeling the electrical activity of the human heart. A
recurrent artificial neural network is used to exhibit a
subset of the dynamics of the electrical behavior of the human heart.
The proposed model can also be used, when integrated, as a
diagnostic tool of the human heart system.
What makes this approach unique is the fact that every model is
developed from physiological measurements of an individual.
This kind of approach is very difficult to apply successfully in many
modeling problems, because of the complexity and entropy of the
free variables describing the complex system. Differences between
the modeled variables and the variables of an individual, measured at
specific moments, can be used for diagnostic purposes. The sensor
fusion used in order to optimize the utilization of biomedical sensors
is another point that this paper focuses on. Sensor fusion has been
known for its advantages in applications such as control and
diagnostics of mechanical and chemical processes.