Abstract: The aim of this study was to compare the
sensitometric properties of commonly used radiographic films
processed with chemical solutions in hospitals with different
workloads. The effect of different processing conditions on the
densities induced on radiologic films was investigated. Two widely
available double-emulsion films, Fuji and Kodak, were exposed with
an 11-step wedge and processed with Champion and CPAC processing
solutions. The films were processed in both high- and low-workload
centers. Our findings show that the speed and contrast of the Kodak
film-screen in both workload settings (high and low) are higher than
those of the Fuji film-screen for both processing solutions. However,
there were significant differences in film contrast for both workloads
when the CPAC solution was used (p=0.000 and 0.028). The results
showed that the base-plus-fog density of the Kodak film was lower
than that of the Fuji film. In general, the Champion processing
solution produced higher speed and contrast for the investigated
films under different conditions, and there was a significant
difference at the 95% confidence level between the two processing
solutions (p=0.01). The low base-plus-fog density of the Kodak films
provides better visibility and accuracy, and the higher contrast allows
lower exposure factors to be used to obtain better-quality radiographs.
This study thus found an economic advantage in using the Champion
solution with Kodak film, which also yields a lower patient dose.
Thus, in a radiologic facility, any change in the film processor,
processing cycle, or chemistry should be carefully investigated before
patient radiographs are acquired.
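For reference (a standard densitometric definition, not specific to this study), the optical density read from each step of the wedge is

D = \log_{10}\!\left(\frac{I_0}{I_t}\right),

the base-10 logarithm of incident to transmitted light intensity. Base-plus-fog is the density of processed but unexposed film, and film contrast is the average gradient (slope) of the characteristic curve of D versus log exposure.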
Abstract: I/O workload is a critical factor in analyzing I/O
patterns and maximizing file system performance. However,
measuring the I/O workload of a running distributed parallel file
system is non-trivial because of the collection overhead and the large
volume of data. In this paper, we measured and analyzed file system
activities on two large-scale cluster systems with TFlops-level
high-performance computation resources. By comparing file system
activities in 2009 with those in 2006, we analyzed how I/O
workloads changed with the development of system performance and
high-speed network technology.
Abstract: Thanks to advances in VR technology, many studies
have used VR to develop training systems. VR's characteristics make
it possible to simulate many kinds of situations to achieve a training
goal. However, a good training system must consider not only
realistic simulation but also the learner's motivation. Consequently,
many studies have begun to incorporate game features into VR
training systems; such a system is typically called a serious game,
which uses game features to engage the learner's motivation. VR and
serious games also share another important advantage: their
simulation capability can create any kind of pressured environment.
Because emergencies can occur in real environments, increasing
trainees' pressure during training is important. Most previous studies
investigated serious-game applications and learning performance;
few investigated how to increase the learner's mental workload
during training. In this study, we therefore introduce a real case
study, create two types of training environments, and compare the
learner's mental workload between VR training and a serious game.
Abstract: There are many views on how human decision makers behave. In this work, the Justices of the United States Supreme Court will be viewed in terms of constrained maximization and cognitive-cybernetic theory. This paper will integrate research in such fields as law, political science, psychology, economics and decision-making theory. It will be argued that due to its heavy workload, the Supreme Court is forced to make decisions in a boundedly rational manner. The ideas and theory put forward here will be tested in the area of the Court’s decisions involving religion. Therefore, the cases involving the U.S. Constitution’s Free Exercise Clause and Establishment Clause will be analyzed. Also, variables such as the U.S. government’s involvement in these cases will be considered. The years to be studied will be 1987-2011.
Abstract: This paper presents the design trade-offs and performance impact of
the number of pipeline phases of control-path signals in a wormhole-switched
network-on-chip (NoC). The number of pipeline phases of the control
paths varies between one and two cycles. The control paths
consist of the routing request paths for output selection and the arbitration
paths for input selection. Data communications between on-chip routers are
implemented synchronously, and for quality of service, the inter-router data
transports are controlled by a link-level congestion control to avoid
loss of data due to overflow. The trade-off between the area (logic
cell area) and the performance (bandwidth gain) of two proposed NoC router
microarchitectures is presented in this paper. The performance evaluation is
made using a traffic scenario with different numbers of workloads on a
2D mesh NoC topology with a static routing algorithm. Using a 130-nm
CMOS standard-cell technology, our NoC routers can be clocked at 1 GHz,
resulting in high-speed network links and a high router bandwidth capacity
of about 320 Gbit/s. Based on our experiments, the number of control-path
pipeline stages has a more significant impact on the NoC performance than
on the logic area of the NoC router.
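As a rough check on how such a figure can arise (the port count and link width below are assumptions for illustration, not taken from the paper), aggregate router bandwidth is the product of port count, link width, and clock rate:

B = N_{ports} \times w_{link} \times f_{clk} = 5 \times 64\,\text{bit} \times 1\,\text{GHz} = 320\,\text{Gbit/s},

which matches the quoted capacity for a five-port (north, south, east, west, local) mesh router with 64-bit links.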
Abstract: How to efficiently assign system resources to route
client demand through gateway servers is a difficult problem. In this
paper, we present an enhanced proposal for the autonomous
performance of gateway servers under highly dynamic traffic loads.
We devise a methodology to calculate queue length and waiting time
from gateway server information in order to reduce response-time
variance in the presence of bursty traffic.
The most widespread consideration is performance: because
gateway servers must offer cost-effective and high-availability
services over the long term, they have to be scaled to meet the
expected load. Performance measurements can be the basis for
performance modeling and prediction. With the help of performance
models, performance metrics (such as buffer estimation and waiting
time) can be determined during the development process.
This paper describes the queue models that can be applied to
estimate queue length and, from it, the required memory size. Both
simulation and experimental studies using synthesized workloads,
and an analysis of real-world gateway servers, demonstrate the
effectiveness of the proposed system.
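As a hedged illustration of the kind of estimate involved (the paper's specific queue model is not reproduced here; an M/M/1 queue is assumed purely for illustration), with arrival rate \lambda, service rate \mu, and utilization \rho = \lambda/\mu < 1, the mean queue length and waiting time are

L_q = \frac{\rho^2}{1-\rho}, \qquad W_q = \frac{L_q}{\lambda} \ \ \text{(Little's law)},

and a buffer (memory) estimate then scales L_q by the mean request size, with headroom for bursts.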
Abstract: In the current Grid environment, efficient workload
management presents a significant challenge, for which numerous
de facto standards exist, encompassing resource discovery,
brokerage, and data transfer, among others. In addition, the real-time
resource status, essential for an optimal resource allocation strategy,
is often not readily accessible. To address these issues and provide a
cleaner abstraction of the Grid, with the potential to generalize to
arbitrary resource-sharing environments, this paper proposes a new
Condor-based pilot mechanism applied in the PanDA architecture,
PanDA-PF WMS, with the goal of providing a more generic yet
efficient resource-allocation strategy. In this architecture, the PanDA
server primarily acts as a repository of user jobs, responding to pilot
requests from distributed, remote resources. Scheduling decisions are
subsequently made according to the real-time resource information
reported by pilots. Pilot Factory is a Condor-inspired solution for
scalable pilot dissemination and effectively functions as a resource
provisioning mechanism through which the user-job server, PanDA,
reaches out to candidate resources only on demand.
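A minimal sketch of the pilot-side pull model described above (the endpoint names, payload fields, and probe values are illustrative assumptions, not the actual PanDA API):

import subprocess
import time

import requests  # assumed HTTP client; any would do

PANDA_SERVER = "https://panda.example.org"  # hypothetical job-repository endpoint

def run_pilot(site, poll_interval=60):
    """Pilot loop: report local resource state, pull a job, execute, report back."""
    while True:
        # Real-time resource information the scheduler uses (illustrative probe).
        status = {"site": site, "free_slots": 4, "free_disk_gb": 100}
        job = requests.post(f"{PANDA_SERVER}/getJob", json=status).json()
        if job.get("command"):
            # Run the user job fetched from the repository, then report completion.
            subprocess.run(job["command"], shell=True, check=False)
            requests.post(f"{PANDA_SERVER}/jobDone", json={"id": job["id"]})
        else:
            time.sleep(poll_interval)  # no matching work: back off, poll again

The point of the pull model is that scheduling decisions are made against the state the pilot just reported, rather than against stale central resource information.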
Abstract: The aim of this study was to evaluate the sensitivity
of a range of EEG indices to time-on-task effects and to a workload
manipulation (cueing), during performance of a resource-limited
vigilance task. Effects of task period and cueing on performance and
subjective state response were consistent with previous vigilance
studies and with resource theory. Two EEG indices – the Task Load
Index (TLI) and global lower frequency (LF) alpha power – showed
effects of task period and cueing similar to those seen with correct
detections. Across four successive task periods, the TLI declined and
LF alpha power increased. Cueing increased TLI and decreased LF
alpha. Other indices – the Engagement Index (EI), frontal theta, and
upper-frequency (UF) alpha – failed to show these effects. However, EI
and frontal theta were sensitive to interactive effects of task period
and cueing, which may correspond to a stronger anxiety response to
the uncued task.
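For reference, these indices are conventionally computed as EEG band-power ratios (standard definitions from the literature, stated here as background rather than taken from this paper):

EI = \frac{\beta}{\alpha + \theta}, \qquad TLI = \frac{\theta_{\text{frontal}}}{\alpha_{\text{parietal}}}.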
Abstract: Use of the Internet and the World-Wide-Web
(WWW) has become widespread in recent years and mobile agent
technology has proliferated at an equally rapid rate. In this scenario,
load balancing becomes important for P2P systems. Moreover, P2P
systems can be highly heterogeneous; they may consist of peers
ranging from old desktops to powerful servers connected to the
Internet through high-bandwidth lines. Various load balancing
policies have been proposed. A primitive one is the Message
Passing Interface (MPI). Its wide availability and portability make it
an attractive choice; however, the communication requirements are
sometimes inefficient when implementing the primitives provided by
MPI. In this scenario, we use mobile agents, because the mobile
agent (MA) based approach has the merits of high flexibility,
efficiency, low network traffic, and low communication latency, as
well as being highly asynchronous. In this study, we present a
decentralized load balancing scheme using mobile agent technology,
in which tasks migrate from an overloaded node to less-utilized
nodes so as to share the workload. The decision of which nodes
receive a migrating task is made in real time by defined load
balancing policies. These policies are executed on PMADE (a
Platform for Mobile Agent Distribution and Execution) in a
decentralized manner using JuxtaNet, and various load balancing
metrics are discussed.
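A minimal sketch of a threshold-based migration policy of the kind described above (the thresholds and node model are illustrative assumptions; the paper's actual policies run as mobile agents on PMADE over JuxtaNet):

from dataclasses import dataclass, field

OVERLOAD, UNDERLOAD = 0.8, 0.5  # assumed utilization thresholds

@dataclass
class Node:
    load: float                 # current utilization in [0, 1]
    tasks: list = field(default_factory=list)

def balance(node, peers):
    """Local decision: if this node is overloaded, migrate one task to the
    least-loaded underloaded peer (stands in for MA-based task migration)."""
    if node.load > OVERLOAD and node.tasks:
        targets = sorted((p for p in peers if p.load < UNDERLOAD),
                         key=lambda p: p.load)
        if targets:
            targets[0].tasks.append(node.tasks.pop())

Because each node decides using only its own state and its peers' advertised loads, no central coordinator is needed, which is what makes the scheme decentralized.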
Abstract: In this paper, the direct kinematic model of a
multiple-application, three-degrees-of-freedom industrial manipulator
was developed using homogeneous transformation matrices and the
Denavit-Hartenberg parameters. The inverse kinematic model was
developed using the same method, and it was verified that the
inverse kinematics presents considerable errors at the workload
border. A genetic algorithm was therefore implemented to optimize
the model, greatly improving its efficiency.
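For reference, the standard Denavit-Hartenberg link transformation used in such a direct kinematic model is (standard convention; the manipulator's specific parameter values are not given in the abstract):

A_i = \mathrm{Rot}_z(\theta_i)\,\mathrm{Trans}_z(d_i)\,\mathrm{Trans}_x(a_i)\,\mathrm{Rot}_x(\alpha_i) =
\begin{pmatrix}
\cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\
\sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\
0 & \sin\alpha_i & \cos\alpha_i & d_i \\
0 & 0 & 0 & 1
\end{pmatrix},

and the direct model is the product T = A_1 A_2 A_3 over the three joints.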
Abstract: This study performs a comparative analysis of the 21 Greek Universities in terms of the public funding awarded to cover their operating expenditure. First, it introduces a DEA/MCDM model that allocates the funds across four expenditure factors in the way most favorable for each university. Then, it presents a common, consensual assessment model to reallocate the amounts while keeping the total public budget at the same level. The analysis shows that a number of universities cannot justify their public funding in terms of their size and operational workload. For these universities, a sufficient reduction of their public funding is estimated as a future target. Due to the lack of precise data for a number of expenditure criteria, the analysis is based on a mixed crisp-ordinal data set.
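As a hedged sketch of the most-favorable-weights idea behind such DEA models (the generic CCR ratio form is shown; the paper's actual DEA/MCDM formulation over four expenditure factors and mixed crisp-ordinal data is more elaborate), university o chooses the weights u, v that maximize its own efficiency ratio:

\max_{u,v} \frac{\sum_r u_r y_{ro}}{\sum_i v_i x_{io}} \quad \text{s.t.} \quad \frac{\sum_r u_r y_{rk}}{\sum_i v_i x_{ik}} \le 1 \ \ \forall k, \qquad u, v \ge 0,

where x are inputs (e.g., funding) and y are outputs (e.g., operational workload) of each university k.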
Abstract: Load balancing is the process of improving the
performance of a parallel and distributed system through a
redistribution of load among the processors [1], [5]. In this paper, we
present a performance analysis of various load balancing algorithms
based on different parameters, considering the two typical load
balancing approaches, static and dynamic. The analysis indicates
that both static and dynamic algorithms have advantages as well as
weaknesses relative to each other. The choice of algorithm to
implement depends on the type of parallel application to be solved.
The main purpose of this paper is to aid the future design of new
algorithms by studying the behavior of various existing ones.
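A minimal sketch contrasting the two approaches (round-robin as a representative static policy and least-loaded as a representative dynamic one; the paper surveys many more variants):

from itertools import cycle

def static_round_robin(task_costs, workers):
    """Static: the assignment pattern is fixed in advance, ignoring runtime load."""
    assignment = {w: [] for w in workers}
    for cost, worker in zip(task_costs, cycle(workers)):
        assignment[worker].append(cost)
    return assignment

def dynamic_least_loaded(task_costs, loads):
    """Dynamic: each task goes to the worker that is least loaded right now."""
    assignment = {w: [] for w in loads}
    for cost in task_costs:
        worker = min(loads, key=loads.get)
        assignment[worker].append(cost)
        loads[worker] += cost          # update the runtime load estimate
    return assignment

# Example: uneven task costs favor the dynamic policy.
print(static_round_robin([5, 1, 5, 1], ["w1", "w2"]))                # w1 carries 10, w2 carries 2
print(dynamic_least_loaded([5, 1, 5, 1], {"w1": 0.0, "w2": 0.0}))    # both workers carry 6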
Abstract: As air traffic increases at a hub airport, some
flights cannot land or depart at their preferred target times, because
the airport runways become occupied to near their capacity. This
results in extra costs for both passengers and airlines: missed
connecting flights, longer waits, more fuel consumption, crew
rescheduling, and so on. Hence, devising an appropriate scheduling
method that determines a suitable runway and time for each flight,
in order to use the hub capacity efficiently and minimize the related
costs, is of great importance. In this paper, we present a
mixed-integer zero-one model for scheduling a set of mixed landing
and departing flights (whereas most previous studies considered only
landings). Because flight cost is strongly affected by the airline's
class, we consider different airline categories in our model. The
model has a single objective minimizing the sum of three terms,
namely 1) the weighted deviation from targets, 2) the scheduled time
of the last flight (i.e., the makespan), and 3) the workload imbalance
across runways. We solve 10 simulated instances of different sizes,
up to 30 flights and 4 runways. Optimal solutions are obtained in
reasonable time and are satisfactory in comparison with the
traditional First-Come-First-Served (FCFS) rule, which is far from
optimal in most cases.
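A hedged sketch of such an objective (the notation is assumed for illustration; the paper's exact weights, variables, and constraints are not reproduced): with binary x_{ir} assigning flight i to runway r, t_i the scheduled and T_i the target time of flight i, w_i an airline-category weight, and C_{max} the makespan,

\min \ \sum_i w_i\,|t_i - T_i| \;+\; \beta\,C_{max} \;+\; \gamma\left(\max_r \textstyle\sum_i x_{ir} - \min_r \textstyle\sum_i x_{ir}\right)
\quad \text{s.t.} \quad \sum_r x_{ir} = 1 \ \ \forall i,

plus separation constraints between consecutive flights on the same runway.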
Abstract: Dealing with hundreds of features in character
recognition systems is not unusual, and this large number of features
increases the computational workload of the recognition process.
Many methods have been proposed to remove unnecessary or
redundant features and reduce feature dimensionality. Moreover,
because of the characteristics of Farsi script, algorithms developed
for other languages cannot be applied to Farsi directly. In this
paper, methods for feature subset selection using genetic
algorithms are applied to a Farsi optical character recognition (OCR)
system. Experimental results show that applying genetic
algorithms (GA) to feature subset selection in a Farsi OCR yields
lower computational complexity and an enhanced recognition rate.
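A minimal sketch of GA-based feature subset selection (bitmask encoding; the feature count, rates, and penalty weight are illustrative assumptions, and the fitness function here is a placeholder):

import random

N_FEATURES = 100                 # assumed feature count, for illustration
POP, GENS, MUT_RATE = 30, 50, 0.02
PENALTY = 0.1                    # assumed weight penalizing large subsets

def evaluate(mask, accuracy_fn):
    """Fitness = classifier accuracy on the selected features minus a size penalty."""
    return accuracy_fn(mask) - PENALTY * sum(mask) / N_FEATURES

def select_features(accuracy_fn):
    """GA over bitmasks: 1 keeps a feature, 0 drops it."""
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=lambda m: evaluate(m, accuracy_fn), reverse=True)
        survivors = pop[:POP // 2]                    # truncation selection
        children = []
        while len(survivors) + len(children) < POP:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_FEATURES)     # one-point crossover
            children.append([bit ^ (random.random() < MUT_RATE)  # bit-flip mutation
                             for bit in a[:cut] + b[cut:]])
        pop = survivors + children
    return max(pop, key=lambda m: evaluate(m, accuracy_fn))

In a real OCR system, accuracy_fn would train and validate a classifier restricted to the masked features; the penalty term is what drives down the computational workload of recognition.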
Abstract: A major requirement for Grid application developers is ensuring the performance and scalability of their applications. Predicting the performance of an application demands understanding its specific features. This paper discusses performance modeling and prediction of multi-agent based simulation (MABS) applications on the Grid. An experiment conducted using a synthetic MABS workload explains the key features to be included in the performance model. The results obtained from the experiment show that the prediction model developed for the synthetic workload can be used as a guideline for estimating the performance characteristics of real-world simulation applications.
Abstract: Context awareness is a capability whereby mobile
computing devices can sense their physical environment and adapt
their behavior accordingly. The term context-awareness, in
ubiquitous computing, was introduced by Schilit in 1994 and has
become one of the most exciting concepts in early 21st-century
computing, fueled by recent developments in pervasive computing
(i.e. mobile and ubiquitous computing). These include computing
devices worn by users, embedded devices, smart appliances, sensors
surrounding users and a variety of wireless networking technologies.
Context-aware applications use context information to adapt
interfaces, tailor the set of application-relevant data, increase the
precision of information retrieval, discover services, make the user
interaction implicit, or build smart environments. For example, a
context-aware mobile phone may know that the user is currently in a
meeting room and reject any unimportant calls. One of the major
challenges in providing users with context-aware services lies in
continuously monitoring their contexts using numerous sensors
connected to the context-aware system through wireless
communication. A number of sensor-based context-aware
frameworks have been proposed, but many of them have neglected
the fact that monitoring with sensors imposes heavy workloads on
ubiquitous devices with limited computing power and battery. In this
paper, we present CALEEF, a lightweight and energy-efficient
context-aware framework for resource-limited ubiquitous devices.
Abstract: With the increasing number of on-chip components and the critical requirement for processing power, the Chip Multiprocessor (CMP) has gained wide acceptance in both academia and industry during the last decade. However, conventional bus-based on-chip communication schemes suffer from very high communication delay and low scalability in large-scale systems. The Network-on-Chip (NoC) has been proposed to solve the bottleneck of parallel on-chip communications by applying different network topologies which separate the communication phase from the computation phase. Observing that the memory bandwidth of the communication between on-chip components and off-chip memory has become a critical problem even in NoC-based systems, in this paper we propose a novel 3D NoC with on-chip Dynamic Random Access Memory (DRAM) in which different layers are dedicated to different functionalities such as processors, cache, or memory. Results show that, with the proposed architecture, average link utilization is reduced by 10.25% for SPLASH-2 workloads, and the proposed design requires 1.12% fewer execution cycles than the traditional design on average.
Abstract: A flight management system (FMS) is a specialized
computer system that automates a wide variety of in-flight tasks,
reducing the workload on the flight crew to the point that modern
aircraft no longer carry flight engineers or navigators. The primary
function of an FMS is the in-flight management of the flight plan,
using various sensors (such as GPS and INS, often backed up by
radio navigation) to determine the aircraft's position. From the
cockpit, the FMS is normally controlled through a Control Display
Unit (CDU), which incorporates a small screen and keyboard or a
touch screen. This paper investigates the performance of GPS/INS
integration techniques in which the data fusion process is done using
Kalman filtering. This includes the importance of sensor calibration
as well as the alignment of the strapdown inertial navigation system.
The limitations of inertial navigation systems are investigated in
order to understand why an INS is often integrated with other
navigation aids rather than operating in stand-alone mode. Finally,
both the loosely coupled and tightly coupled configurations are
analyzed for several types of situations and operational conditions.
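A minimal sketch of the Kalman-filter predict/update cycle underlying such GPS/INS fusion (a generic linear filter; the paper's actual state vector, error models, and loose/tight coupling specifics are not shown here):

import numpy as np

def kf_predict(x, P, F, Q):
    """Propagate the state estimate and covariance with the INS motion model F."""
    x = F @ x
    P = F @ P @ F.T + Q                     # Q: process (INS error) noise covariance
    return x, P

def kf_update(x, P, z, H, R):
    """Correct the prediction with a GPS measurement z (R: measurement noise)."""
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)                 # state correction
    P = (np.eye(len(x)) - K @ H) @ P        # covariance update
    return x, P

In a loosely coupled configuration, z would be the GPS position/velocity solution, whereas a tightly coupled filter processes raw pseudoranges; either way, the INS drives the predict step between GPS updates.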
Abstract: Background: The widespread use of chemotherapeutic
drugs in the treatment of cancer has led to higher health hazards
among employees who handle and administer such drugs, so nurses
should know how to protect themselves, their patients, and their work
environment against the toxic effects of chemotherapy. The aim of
this study was to examine the effect of a chemotherapy safety
protocol for oncology nurses on their protective-measure practices.
Design: A quasi-experimental research design was utilized. Setting:
The study was carried out in the oncology department of Menoufia
University Hospital and the Tanta oncology treatment center.
Sample: A convenience sample of forty-five nurses in the Tanta
oncology treatment center and eighteen nurses in the Menoufia
oncology department. Tools: I. An interviewing questionnaire
covering sociodemographic data and assessing the unit and nurses'
knowledge about chemotherapy. II. An observational checklist to
assess nurses' actual practices in handling and administering
chemotherapy. Baseline data were assessed before implementing the
chemotherapy safety protocol; the protocol was then implemented,
and the nurses were assessed again after 2 months. Results: 88.9%
of study group I and 55.6% of study group II improved to good total
knowledge scores after education on the safety protocol, and 95.6%
of study group I and 88.9% of study group II had good total practice
scores after education on the safety protocol. Moreover, less than
half of group I (44.4%) reported that heavy workload was their main
barrier, while the majority of group II (94.4%) reported many
barriers to adhering to the safety protocol, such as not knowing the
protocol, heavy workload, and inadequate equipment.
Conclusions: The safety protocol for oncology nurses appeared to
have a positive effect on improving nurses' knowledge and practice.
Recommendation: A chemotherapy safety protocol should be
instituted for all oncology nurses working in any oncology unit
and/or center to enhance compliance, and training on the protocol
should be repeated at frequent intervals.
Abstract: Workload and resource management are two essential functions provided at the service level of the grid software infrastructure. To improve the global throughput of these software environments, workloads have to be evenly scheduled among the available resources. To achieve this goal, several load balancing strategies and algorithms have been proposed. Most strategies were developed with homogeneous sets of sites linked by homogeneous, fast networks in mind. For computational grids, however, we must address important new issues, namely heterogeneity, scalability, and adaptability. In this paper, we propose a layered algorithm which achieves dynamic load balancing in grid computing. Based on a tree model, our algorithm presents the following main features: (i) it is layered; (ii) it supports heterogeneity and scalability; and (iii) it is totally independent of any physical architecture of a grid.
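A minimal sketch of the layered, tree-based idea (a single aggregation level is assumed for illustration; the paper's algorithm applies this recursively over the tree that models the grid):

def balance_level(children):
    """Each non-leaf node of the tree evens out load among its children;
    any residual imbalance is escalated to the parent layer."""
    target = sum(c["load"] for c in children) / len(children)
    for c in children:
        c["transfer"] = c["load"] - target   # >0: send work, <0: receive work
    return target

# Illustrative level of the tree: two clusters under one site node.
clusters = [{"name": "clusterA", "load": 120}, {"name": "clusterB", "load": 40}]
balance_level(clusters)  # clusterA is asked to shed 40 units to clusterB

Because each layer balances only among its own children, the scheme scales with the tree's depth rather than the grid's total size, and no layer needs to know the grid's physical architecture.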