Abstract: We present a prioritized, limited multi-server processor-sharing (PS) system in which the servers have different capacities and each PS server admits N (≥2) priority classes. In each prioritized, limited server, a different service ratio is assigned to each class of request, and the number of requests processed simultaneously is kept below a certain limit. We also present routing strategies for such prioritized, limited multi-server PS systems that take the capacity of each server into account, and we discuss a performance evaluation procedure for these strategies. Performance measures of practical interest, such as the loss probability, mean waiting time, and mean sojourn time, are evaluated via simulation. In a PS server, at the arrival (or departure) of a request, the extension (or shortening) of the remaining sojourn time of each request in service can be calculated from the number of requests of each class and the priority ratio. Using a simulation program that executes these events and calculations, the performance of the proposed prioritized, limited multi-server PS rule can be analyzed. From the evaluation results, the most suitable routing strategy for the loss or waiting system is identified.
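The abstract above does not spell out the routing strategies, so the following is only an illustrative sketch of one plausible capacity-aware rule: route an arriving request to the server with the most spare weighted capacity C − (m·n1 + n2). The "least weighted load" rule, the dictionary layout, and the parameter names are all assumptions, not the authors' strategies.

```python
# Illustrative sketch (not the paper's exact strategies): capacity-aware
# routing of an arriving request to one of several prioritized, limited
# PS servers. m is the priority ratio; each server tracks n1 class-1 and
# n2 class-2 requests and has capacity C.

def weighted_load(n1, n2, m):
    """Weighted number of requests in one PS server: m*n1 + n2."""
    return m * n1 + n2

def route(servers, arriving_class, m):
    """Pick the server with the most spare capacity after admitting the
    arrival (weight m for class 1, weight 1 for class 2).

    servers: list of dicts like {"C": 10, "n1": 2, "n2": 3}.
    Returns the index of the chosen server, or None if every server's
    limit would be exceeded (the request is then queued or rejected).
    """
    add = m if arriving_class == 1 else 1
    best, best_spare = None, 0
    for i, s in enumerate(servers):
        spare = s["C"] - weighted_load(s["n1"], s["n2"], m) - add
        if spare >= 0 and (best is None or spare > best_spare):
            best, best_spare = i, spare
    return best
```

For example, with m = 2, a class-1 arrival adds weight 2, so a lightly loaded large server is preferred over one whose weighted load is already near its capacity.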
Abstract: Unlike the best-effort service provided by the Internet today, next-generation wireless networks will support real-time applications. This paper proposes an adaptive early packet discard (AEPD) policy to improve the performance of real-time TCP traffic over ATM networks and to avoid the packet fragmentation problem. The proposed policy incorporates three main aspects. First, it provides quality-of-service (QoS) guarantees for real-time applications by implementing priority scheduling. Second, it resolves the partially corrupted packet problem by differentiating the buffered cells of one packet from those of another. Third, it adapts the discard threshold dynamically, using fuzzy logic based on the traffic behavior, to maintain high throughput under a variety of load conditions. The simulation is run for two priority classes of input traffic: real-time and non-real-time. Simulation results show that the proposed AEPD policy improves throughput and fairness over a static-threshold policy under the same traffic conditions.
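The abstract does not give the AEPD membership functions or rule base, so the following is only a minimal sketch of the general idea of fuzzy threshold adaptation: two assumed triangular-style memberships ("low" and "high" occupancy) whose degrees weight an adjustment step. Every constant below is an illustrative assumption.

```python
# Hedged sketch of fuzzy-logic threshold adaptation (illustrative only;
# the paper's actual fuzzy controller is not described in the abstract).

def membership_low(x):
    """Degree to which the occupancy ratio x in [0, 1] is 'low'."""
    return max(0.0, min(1.0, (0.5 - x) / 0.5))

def membership_high(x):
    """Degree to which the occupancy ratio x in [0, 1] is 'high'."""
    return max(0.0, min(1.0, (x - 0.5) / 0.5))

def adapt_threshold(threshold, occupancy_ratio, step=5):
    """Raise the discard threshold under light load and lower it under
    heavy load, weighting the adjustment by the fuzzy membership degrees."""
    low = membership_low(occupancy_ratio)
    high = membership_high(occupancy_ratio)
    return threshold + step * (low - high)
```

At an occupancy ratio of 0.5 both memberships vanish and the threshold is held, which is the crossover point assumed here.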
Abstract: This paper focuses on cost and profit analysis of a single-server Markovian queuing system with two priority classes. Functions for the total expected cost, revenue, and profit of the system are constructed and optimized with respect to the service rates of the lower- and higher-priority classes. A computing algorithm based on a fast-converging numerical method has been developed to solve the system of nonlinear equations arising from the mathematical analysis. A novel performance measure for cost and profit analysis, together with its economic interpretation for a system with priority classes, is discussed. Based on the computed tables, observations are drawn to illustrate the effect of varying the model parameters.
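The abstract gives no explicit cost model, so the sketch below substitutes a standard single-class M/M/1 simplification (service cost c_s per unit rate, holding cost c_h per request in system, revenue r per served request) and maximizes profit over the service rate mu with a Newton iteration, in the spirit of the fast-converging numerical method mentioned above. The cost structure and all parameter names are assumptions.

```python
# Illustrative sketch: maximize profit(mu) = r*lam - c_s*mu - c_h*lam/(mu-lam)
# for an M/M/1 queue by Newton's method on the first-order condition
#   d(profit)/d(mu) = -c_s + c_h*lam/(mu - lam)**2 = 0.
# (The revenue term r*lam is constant in mu, so it drops out of the derivative.)

def optimal_service_rate(lam, c_s, c_h, mu0=None, tol=1e-10):
    """Newton iteration for the profit-maximizing service rate (mu > lam)."""
    mu = mu0 if mu0 is not None else lam + 1.0
    for _ in range(100):
        g = -c_s + c_h * lam / (mu - lam) ** 2      # first derivative
        dg = -2.0 * c_h * lam / (mu - lam) ** 3     # second derivative
        step = g / dg
        mu -= step
        if abs(step) < tol:
            break
    return mu
```

For this simplified model the optimum has the closed form mu* = lam + sqrt(c_h·lam/c_s), which makes the Newton result easy to check.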
Abstract: High-speed networks provide real-time variable-bit-rate service with diverse traffic-flow characteristics and quality requirements. Variable-bit-rate traffic has stringent delay and packet-loss requirements, and the burstiness of correlated traffic makes dynamic buffer management highly desirable for satisfying Quality of Service (QoS) requirements. This paper presents an algorithm for optimizing an adaptive buffer allocation scheme based on the loss of consecutive packets in the data stream and on the buffer occupancy level. The buffer is designed so that the input traffic is partitioned into different priority classes, and the threshold is controlled dynamically according to the input traffic behavior. The algorithm admits an input packet into the buffer only if the occupancy level is below the threshold for that packet's priority. The threshold is varied dynamically at run time based on the packet-loss behavior. The simulation is run for two priority classes of input traffic: real-time and non-real-time. The simulation results show that Adaptive Partial Buffer Sharing (ADPBS) outperforms Static Partial Buffer Sharing (SPBS) and a First-In First-Out (FIFO) queue under the same traffic conditions.
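The admission rule described above (admit a packet only while occupancy is below its class's threshold) can be sketched as follows. The per-class thresholds, the consecutive-loss trigger, and the direction and size of the adaptation step are all assumptions for illustration; the paper's actual adaptation policy is not specified in the abstract.

```python
# Minimal sketch of partial buffer sharing with a loss-driven adaptive
# threshold (illustrative assumptions throughout).

class AdaptiveBuffer:
    def __init__(self, size, thresholds):
        # thresholds[c]: occupancy below which class-c packets are admitted;
        # the highest-priority class may use the whole buffer (threshold=size).
        self.size = size
        self.thresholds = dict(thresholds)
        self.occupancy = 0
        self.consecutive_losses = 0

    def admit(self, priority_class):
        """Admit a packet iff occupancy is below that class's threshold."""
        if self.occupancy < self.thresholds[priority_class]:
            self.occupancy += 1
            self.consecutive_losses = 0
            return True
        self.consecutive_losses += 1
        # Assumed adaptation: after a run of consecutive losses, relax the
        # thresholds (clamped to the buffer size) to admit more traffic.
        if self.consecutive_losses >= 3:
            for c in self.thresholds:
                self.thresholds[c] = min(self.size, self.thresholds[c] + 1)
            self.consecutive_losses = 0
        return False

    def depart(self):
        """A packet leaves the buffer after service."""
        if self.occupancy > 0:
            self.occupancy -= 1
```

With thresholds {real-time: size, non-real-time: size // 2}, low-priority packets are shut out once the buffer is half full while high-priority packets are still admitted, which is the partial-buffer-sharing effect.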
Abstract: We propose a novel prioritized limited
processor-sharing (PS) rule and a simulation algorithm for the performance evaluation of this rule. The performance measures of practical interest are evaluated using this algorithm. Suppose that there
are two classes and that an arriving (class-1 or class-2) request encounters n1 class-1 and n2 class-2 requests (including the arriving
one) in a single-server system. According to the proposed rule, class-1
requests individually and simultaneously receive m/(m·n1 + n2) of the service-facility capacity, whereas class-2 requests receive 1/(m·n1 + n2) of it, provided m·n1 + n2 ≤ C. Otherwise (m·n1 + n2 > C), the arriving request is queued in the corresponding class waiting
room or rejected. Here, m (≥ 1) denotes the priority ratio and C (< ∞) the service-facility capacity. In this rule, when a request arrives at [or
departs from] the system, the extension [shortening] of the remaining
sojourn time of each request receiving service can be calculated using
the number of requests of each class and the priority ratio. Employing
a simulation program to execute these events and calculations enables
us to analyze the performance of the proposed prioritized limited PS
rule, which is realistic in a time-sharing system (TSS) with a
sufficiently small time slot. Moreover, this simulation algorithm is
expanded for the evaluation of the prioritized limited PS system with
N (≥ 3) priority classes.
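The share formula stated in this abstract is concrete enough to sketch directly: with n1 class-1 and n2 class-2 requests in service, each class-1 request receives m/(m·n1 + n2) of the capacity and each class-2 request 1/(m·n1 + n2), provided m·n1 + n2 ≤ C. The sojourn-time update at an arrival or departure then follows from conservation of remaining work. The helper names are ours; this is an illustration, not the authors' simulator.

```python
# Per-request capacity shares under the prioritized limited PS rule, and the
# remaining-sojourn-time update when the shares change at an arrival/departure.

def shares(n1, n2, m, C):
    """Return (class-1 share, class-2 share) per request, or None when the
    system is empty or the weighted load exceeds the capacity C (in which
    case an arrival is queued or rejected)."""
    w = m * n1 + n2
    if w == 0 or w > C:
        return None
    return m / w, 1.0 / w

def remaining_time_after_event(remaining, old_share, new_share):
    """At an arrival (or departure) the remaining sojourn time of a request
    in service stretches (or shrinks): the remaining work old_share*remaining
    is now served at rate new_share."""
    return remaining * old_share / new_share
```

For example, with m = 2 and one request of each class, the class-1 request gets 2/3 of the capacity and the class-2 request 1/3; if a later arrival halves a request's share, its remaining sojourn time doubles.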