Abstract: The original DEA model evaluates each DMU optimistically, whereas the interval DEA model proposed in this paper is formulated to obtain an efficiency interval consisting of evaluations from both the optimistic and the pessimistic viewpoints. DMUs are improved so that their lower bounds become large enough to attain the maximum value of one. The points obtained by this method are called ideal points, and the ideal PPS (production possibility set) is constructed from the ideal points of the efficient DMUs. The purpose of this paper is to rank DMUs by this ideal PPS. Finally, we extend the efficiency interval of a DMU to variable returns-to-scale (RTS) technology.
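The abstract does not reproduce the paper's interval DEA formulation, so the following is only a rough sketch under a strong simplifying assumption: each efficiency candidate is a single (output, input) ratio normalized by the best DMU on that ratio, and the interval is taken over those candidates. This is a stand-in for illustration, not the paper's exact model.

```python
def efficiency_interval(inputs, outputs, o):
    """Illustrative efficiency interval for DMU o.

    inputs[j][l] and outputs[j][k] hold DMU j's l-th input and k-th
    output. For every single (output, input) ratio, DMU o's ratio is
    normalized by the best ratio among all DMUs; the interval is the
    [min, max] over those normalized scores. A simplified stand-in,
    not the paper's exact interval DEA model.
    """
    scores = []
    for k in range(len(outputs[0])):
        for l in range(len(inputs[0])):
            best = max(out[k] / inp[l] for inp, out in zip(inputs, outputs))
            scores.append((outputs[o][k] / inputs[o][l]) / best)
    return min(scores), max(scores)

# Three DMUs, one input, two outputs (made-up data).
inputs = [[2.0], [4.0], [8.0]]
outputs = [[4.0, 2.0], [8.0, 8.0], [8.0, 4.0]]
lo, hi = efficiency_interval(inputs, outputs, 0)
```

In this reading, a DMU whose lower bound already equals one would correspond to an ideal point in the abstract's sense.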
Abstract: Let p ≥ 5 be a prime number and let F_p denote the finite field with p elements. In this work, we determine the number of rational points on the singular curves E_a : y^2 = x(x - a)^2 over F_p for some specific values of a.
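Because the curve is given explicitly, the closed-form point counts the paper determines can be checked for small p by brute-force enumeration, counting the point at infinity once:

```python
def count_points(p, a):
    """Count F_p-rational points on y^2 = x*(x - a)^2, including one
    point at infinity. For each x, the RHS contributes two points
    (x, +/-y) if it is a nonzero square, one point (x, 0) if zero."""
    squares = {(y * y) % p for y in range(p)}  # squares mod p, incl. 0
    total = 1                                  # point at infinity
    for x in range(p):
        rhs = (x * pow(x - a, 2, p)) % p
        if rhs == 0:
            total += 1
        elif rhs in squares:
            total += 2
    return total

print(count_points(7, 2))   # 7
print(count_points(11, 3))  # 11
```

Enumeration like this is only feasible for small p, but it makes the paper's formulas easy to sanity-check.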
Abstract: In this paper we propose a novel approach for searching eCommerce products using a mobile phone, illustrated by a prototype, eCoMobile. This approach aims to globalize mobile search by integrating user multilingualism into it. To demonstrate this, we deal in particular with the English and Arabic languages: the mobile user can formulate a query about a commercial product in either language. The description of the user's information need on commercial products relies on an ontology that represents the conceptualization of the product-catalogue knowledge domain, defined in both English and Arabic. A query expressed on the mobile device client specifies the concept corresponding to the name of the product, followed by a set of (property, value) pairs describing the characteristics of the product. Once a query is submitted, it is communicated to the server side, which analyses it and in turn performs an HTTP request to an eCommerce application server (such as Amazon). The latter responds by returning an XML file representing a set of elements, each of which defines an item of the searched product with its specific characteristics. The XML file is analysed on the server side, and the items are then displayed on the mobile device client along with their relevant characteristics in the chosen language.
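The server-side XML analysis step can be illustrated with a minimal sketch. The element names below are hypothetical; the real response schema of an eCommerce application server such as Amazon differs and is not specified in the abstract.

```python
import xml.etree.ElementTree as ET

# Hypothetical response document, stand-in for the server's XML reply.
SAMPLE = """<items>
  <item><name>phone</name><price>199</price><color>black</color></item>
  <item><name>phone</name><price>249</price><color>white</color></item>
</items>"""

def parse_items(xml_text):
    """Turn each <item> element into a dict of its child tags/values,
    ready to be rendered on the mobile client."""
    return [{child.tag: child.text for child in item}
            for item in ET.fromstring(xml_text)]

items = parse_items(SAMPLE)  # one dict per matching product
```

Each resulting dict mirrors the (property, value) pairs of the original query, which makes matching the response back to the user's request straightforward.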
Abstract: Fast delay estimation methods, as opposed to simulation techniques, are needed for incremental performance-driven layout synthesis. On-chip inductive effects are becoming predominant in deep-submicron interconnects due to increasing clock speeds and circuit complexity. Inductance causes noise in signal waveforms, which can adversely affect circuit performance and signal integrity. Several approaches have been put forward that consider inductance in on-chip interconnect modelling. At even higher frequencies, however, on the order of a few GHz, the shunt dielectric loss component becomes comparable to the other electrical parameters in high-speed VLSI design. To cope with this effect, on-chip interconnect has to be modelled as a distributed RLCG line. Elmore-delay-based methods, although efficient, cannot accurately estimate the delay of an RLCG interconnect line. In this paper, an accurate analytical delay model is derived, based on the first and second moments of RLCG interconnection lines. The proposed model considers the effects of both the inductance and conductance matrices. We have performed simulations at the 0.18 μm technology node, and an error as low as 5% is achieved with the proposed model when compared to SPICE. The importance of the conductance matrices in interconnect modelling is also discussed, and it is shown that if G is neglected in interconnect line modelling, the result is a delay error as high as 6% when compared to SPICE.
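The paper's two-moment RLCG model is not given in the abstract, but the Elmore (first-moment) baseline it improves on is standard: for an RC ladder, each segment's resistance is multiplied by the total capacitance downstream of it. A minimal sketch of that baseline, which, as the abstract notes, ignores L and G entirely and therefore misestimates RLCG delay:

```python
def elmore_delay(r_segments, c_segments):
    """Elmore (first-moment) delay of an RC ladder: each segment's
    resistance times the total capacitance downstream of it
    (including its own, lumped at the segment's far node).
    Inductance L and shunt conductance G are not modelled."""
    delay = 0.0
    downstream = sum(c_segments)
    for r, c in zip(r_segments, c_segments):
        delay += r * downstream
        downstream -= c
    return delay

# Four identical unit segments: delay = 4 + 3 + 2 + 1 = 10 (unitless demo)
print(elmore_delay([1.0] * 4, [1.0] * 4))
```

The first moment alone gives a monotone step-response estimate; the paper's contribution is to also use the second moment so that inductive ringing and shunt loss are reflected in the delay.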
Abstract: A fair-share objective has recently been incorporated into the goal-oriented parallel computer job scheduling policy. However, previous work only presented the overall scheduling performance, so the per-user performance of the policy is still lacking. In this work, the details of per-user fair-share performance under the Tradeoff-fs(Tx:avgX) policy are evaluated further. A basic fair-share priority backfill policy, namely RelShare(1d), is also studied. The performance of all policies is collected using an event-driven simulator with three real job traces as input. The experimental results show that high-demand users usually benefit under most policies, either because their jobs are large or because they have many jobs. In the large-job case, a single executed job may result in over-share during that period. In the other case, the jobs may be backfilled, which improves their performance. However, users with a mixture of jobs may suffer, because while their smaller jobs are executing, the priority of the remaining jobs from the same user is lowered. Further analysis shows no significant impact from users with many jobs or users with a large runtime-approximation error.
Abstract: The NGN (Next Generation Network), which can provide advanced multimedia services over an all-IP network, has been the subject of much attention for years. While there have been tremendous efforts to develop its architecture and protocols, especially for IMS, a key technology of the NGN, it is far from being widely deployed. The efforts to create an advanced signalling infrastructure realizing many requirements have resulted in a large number of functional components and interactions between those components. Thus, carriers are trying to explore effective ways to deploy IMS while offering value-added services. As one such approach, we have proposed a self-organizing IMS. A self-organizing IMS enables IMS functional components and the corresponding physical nodes to adapt dynamically and automatically to conditions such as network load and available system resources while continuing IMS operation. To realize this, service continuity for users is an important requirement when a reconfiguration occurs during operation. In this paper, we propose a mechanism that provides service continuity to users, focus on its implementation, and describe a performance evaluation in terms of the number of control signals and the processing time during reconfiguration.
Abstract: In the past decade, because of the wide application of hybrid systems, many researchers have considered the modeling and control of these systems. Since switching systems constitute an important class of hybrid systems, this paper describes a method for the optimal control of linear switching systems. The method is also applied to the two-tank system, which is a very appropriate system for analyzing different modeling and control techniques for hybrid systems. Simulation results show that, with this method, the control goals and the problem constraints can be satisfied by an appropriate selection of the cost function.
Abstract: The complexity of teaching English in higher institutions by non-native speakers within a second/foreign-language setting has generated continuous discussion and research about teaching approaches and practices, professional identities, and challenges. In addition, there is a growing awareness that teaching English within discipline-specific contexts adds to this existing complexity. This awareness has led to reassessments, discussions, and suggestions concerning course design, content, and teaching approaches and techniques. To meet the expectations of teaching at a university specializing in a particular discipline such as engineering, English language educators are required not only to teach students to communicate effectively in English but also to teach soft skills such as problem solving. This paper is part of a research project investigating how English language educators negotiate the complexities of teaching problem-solving skills through English language teaching at a technical university. It reports how one English language educator identified himself and how he approached his teaching in this institutional context.
Abstract: This paper investigates the inverse problem of determining the unknown time-dependent leading coefficient in a parabolic equation using the usual conditions of the direct problem and one additional condition. An algorithm is developed for solving the inverse problem numerically using a space-decomposition technique in a reproducing kernel space. The discrete values of the leading coefficient can then be obtained by solving a lower triangular linear system. Numerical experiments are presented to show the efficiency of the proposed method.
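The lower triangular system mentioned above can be solved cheaply by forward substitution. A generic sketch follows; the actual system assembled by the reproducing-kernel method is not shown in the abstract.

```python
def solve_lower_triangular(L, b):
    """Solve L x = b by forward substitution, where L is lower
    triangular: each x[i] uses only the already-computed x[0..i-1]."""
    x = []
    for i in range(len(b)):
        s = sum(L[i][j] * x[j] for j in range(i))
        x.append((b[i] - s) / L[i][i])
    return x

x = solve_lower_triangular([[2.0, 0.0], [1.0, 2.0]], [4.0, 6.0])  # [2.0, 2.0]
```

Forward substitution costs O(n^2) operations, which is why a triangular structure makes recovering the coefficient values inexpensive compared with a general linear solve.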
Abstract: In this study, a K-Means-like clustering technique with a hierarchical initial set (HKM) has been implemented. The goal of this study is to show that clustering document sets enhances precision in information retrieval systems, as Bellot & El-Beze demonstrated for the French language. A comparison is made between a traditional information retrieval system and the clustered one, and the effect of increasing the number of clusters on precision is also studied. The indexing technique is Term Frequency * Inverse Document Frequency (TF * IDF). It has been found that Hierarchical K-Means-like clustering (HKM) with 3 clusters over 242 Arabic abstract documents from the Saudi Arabian National Computer Conference yields significant improvements compared with a traditional information retrieval system without clustering. Additionally, it has been found that it is not necessary to increase the number of clusters to improve precision further.
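The TF*IDF indexing used in the study can be sketched as follows. The weighting variant chosen here (raw term counts and idf = log(N/df)) is an assumption, since the abstract does not specify the exact formula.

```python
import math

def tf_idf(docs):
    """TF*IDF weights per document; docs is a list of token lists.
    tf = raw term count, idf = log(N / df): one common variant,
    assumed here since the study's exact formula is not given."""
    n = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):               # document frequency of each term
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        tf = {}
        for term in doc:                    # raw term frequency in this doc
            tf[term] = tf.get(term, 0) + 1
        weights.append({t: c * math.log(n / df[t]) for t, c in tf.items()})
    return weights
```

A term that occurs in every document gets weight zero, which is exactly the behaviour that lets TF*IDF discount uninformative terms before clustering.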
Abstract: This paper proposes an innovative approach to the Connection Admission Control (CAC) problem. Starting from an abstract network model, the CAC problem is formulated in a technology-independent fashion, allowing the proposed concepts to be applied to any wireless or wired domain. The proposed CAC is decoupled from the other resource management procedures but cooperates with them in order to guarantee the desired QoS requirements. Moreover, it is based on suitable performance measurements which, by means of proper predictors, allow the domain dynamics to be forecast in the near future. Finally, the proposed CAC control scheme is based on a feedback loop aiming to maximize a suitable performance index that accounts for the domain throughput, whilst respecting a set of constraints accounting for the QoS requirements.
Abstract: The morphological parameters of a thin-film surface can be characterized by power spectral density (PSD) functions, which provide a better description of the topography than the RMS roughness and impart several useful pieces of information about the surface, including fractal and superstructure contributions. In the present study, nanoparticle copper/carbon composite films were prepared by co-deposition using RF sputtering and the RF-PECVD method, from acetylene gas and a copper target. The surface morphology of the thin films was characterized using atomic force microscopy (AFM). The carbon content of our films was obtained by Rutherford backscattering (RBS) and varied from 0.4% to 78%. The power values of the power spectral density (PSD) for the AFM data were determined by fast Fourier transform (FFT) algorithms. We investigate the effect of carbon on the roughness of the thin-film surfaces. Using such information, the roughness contributions of the surface have been successfully extracted.
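The PSD computation can be sketched in one dimension for a single height profile. The dx/N normalization below is one common convention and is an assumption, as is the use of a direct DFT in place of the optimized FFT the abstract mentions.

```python
import cmath

def psd_1d(profile, dx=1.0):
    """1-D power spectral density of a surface height profile via a
    direct DFT (an FFT yields the same values faster). Returns the
    PSD at the positive spatial frequencies k = 1 .. n//2."""
    n = len(profile)
    psd = []
    for k in range(1, n // 2 + 1):
        z = sum(h * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, h in enumerate(profile))
        psd.append(dx / n * abs(z) ** 2)
    return psd
```

A pure sinusoidal profile concentrates its power at a single spatial frequency, which is the property that lets PSD analysis separate fractal roughness from periodic superstructure in AFM data.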