Abstract: In this contribution a newly developed e-learning environment is presented, which incorporates Intelligent Agents and Computational Intelligence Techniques. The new e-learning environment consists of three parts: the E-learning platform Front-End, the Student Questioner Reasoning and the Student Model Agent. These parts are distributed across geographically dispersed computer servers; the main focus is on the design and development of these subsystems through the use of new and emerging technologies. The parts are interconnected in an interoperable way, using web services to integrate the subsystems, in order to enhance the user-modelling procedure and achieve the goals of the learning process.
Abstract: Statistics indicate that more than 1000 phishing attacks are launched every month. With 57 million people hit by the fraud in America alone, how do we combat phishing? This publication discusses strategies in the war against phishing. It examines and critiques the measures adopted at various levels to counter the crescendo of phishing attacks, as well as the new techniques being devised for them. The countermeasures taken up by popular mail servers and popular browsers are analysed in this study. This work intends to increase the understanding and awareness of Internet users across the globe and also discusses plausible countermeasures at both the user's and the developer's end. This conceptual paper will contribute to future research on similar topics.
Abstract: A new observer-based fault detection and diagnosis
scheme for predicting induction motor faults is proposed in this
paper. Prediction of incipient faults using different variants of
the Kalman filter is studied, and their relative performance is
evaluated. Only soft faults are considered in this work. Data
generation, filter convergence issues, hypothesis testing and
residue estimates are addressed. A Simulink model is used for data
generation, and various types of faults are considered. A
comparative assessment of the estimates produced by the different
observers for these faults is included.
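As an illustration of the residual-based scheme described above (not the paper's actual filter), a scalar Kalman filter can serve as the residual generator, with a simple threshold test flagging a soft, bias-type fault. All model values and thresholds below are hypothetical:

```python
# Minimal scalar Kalman filter used as a residual generator for fault
# detection.  Model x_k = a*x_{k-1}, y_k = c*x_k + noise; all numbers
# here are illustrative, not taken from the paper.

def kalman_residuals(measurements, a=1.0, c=1.0, q=1e-4, r=1e-2, x0=1.0):
    """Return the innovation (residual) sequence of a scalar Kalman filter.
    x0 is assumed to be the healthy operating point."""
    x, p = x0, 1.0                     # state estimate and covariance
    residuals = []
    for y in measurements:
        x, p = a * x, a * p * a + q    # predict
        e = y - c * x                  # innovation: measured minus predicted
        residuals.append(e)
        s = c * p * c + r              # innovation covariance
        k = p * c / s                  # Kalman gain
        x += k * e                     # update
        p *= (1 - k * c)
    return residuals

def detect_fault(residuals, threshold):
    """Simple hypothesis test: index of the first residual whose magnitude
    exceeds the threshold, else -1."""
    for i, e in enumerate(residuals):
        if abs(e) > threshold:
            return i
    return -1

# Healthy data tracks the model; a bias injected at sample 50 mimics a soft fault.
data = [1.0] * 50 + [1.5] * 50
res = kalman_residuals(data)
print(detect_fault(res, threshold=0.2))  # → 50
```

A real scheme would whiten the residual and use a statistical test (e.g. a chi-squared test on the normalised innovation) rather than a fixed threshold.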
Abstract: Power consumption in data centers is rising rapidly
because the number of data centers is growing and their scale is
becoming larger. Reducing power consumption in the data center is
therefore a key research topic. The peak power of a typical server is
around 250 watts. When a server is idle, it still draws around 60% of
the power consumed when in use, though vendors are putting effort into
reducing this "idle" power load. Servers tend to run at only around a
5% to 20% utilization rate, partly because of response-time concerns,
and on average about 10% of the servers in a data center are unused.
For these reasons, we propose a dynamic power management system to
reduce power consumption in a green data center. Experimental results
show that power consumption at idle time is reduced by about 55%.
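The figures quoted (250 W peak, about 60% of peak at idle, 5% to 20% utilization) make the case for consolidation. A minimal sketch of the arithmetic behind dynamic power management, using a linear power model; the consolidation policy and the toy workload are our assumptions, not the paper's system:

```python
import math

PEAK_W = 250        # peak power of a typical server (from the abstract)
IDLE_FRAC = 0.60    # idle power as a fraction of peak (from the abstract)

def cluster_power(utilisations, consolidate=False):
    """Total power under a linear model P = idle + (peak - idle) * u.
    With consolidation, load is packed onto as few servers as possible
    and the remainder are powered off (the essence of dynamic power
    management)."""
    idle_w = PEAK_W * IDLE_FRAC
    if consolidate:
        total_load = sum(utilisations)
        active = max(1, math.ceil(total_load))   # servers kept on
        utilisations = [total_load / active] * active
    return sum(idle_w + (PEAK_W - idle_w) * u for u in utilisations)

# Ten servers at 10% utilisation each: consolidation packs the load onto one.
before = cluster_power([0.10] * 10)
after = cluster_power([0.10] * 10, consolidate=True)
print(round(100 * (1 - after / before)))  # → 84  (percent saved in this toy case)
```

The toy saving (84%) is larger than the 55% the paper reports, since real systems must keep headroom for load spikes and pay for power-state transitions.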
Abstract: This paper discusses two observers used for the
estimation of parameters of a PMSM. The former, a reduced-order
observer, estimates the inaccessible parameters of the PMSM. The
latter, a full-order observer, estimates all the parameters of the
PMSM, even though some of them are directly available for
measurement, so as to achieve insensitivity to parameter variation.
The state-space model, however, contains some nonlinear terms,
i.e. products of different state variables. The asymptotic state
observer, which approximately reconstructs the state vector for
linear systems without uncertainties, was presented by Luenberger.
In this work, a modified form of such an observer is used by
including a nonlinear term involving the speed. Both observers are
therefore designed in the framework of nonlinear control, and their
stability and rate of convergence are discussed.
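For reference, the asymptotic observer mentioned above has the standard Luenberger form; for a linear plant (the PMSM model modifies this structure with a nonlinear speed-dependent term) it reads:

```latex
% Plant:    \dot{x} = A x + B u, \qquad y = C x
% Observer: a copy of the plant driven by the output estimation error
\dot{\hat{x}} = A \hat{x} + B u + L\,\bigl(y - C \hat{x}\bigr)
% Estimation error e = x - \hat{x} then obeys
\dot{e} = (A - L C)\, e
```

Choosing the gain $L$ so that $A - LC$ is Hurwitz guarantees $e \to 0$, and the eigenvalues of $A - LC$ set the rate of convergence, which is what the stability discussion in the abstract refers to.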
Abstract: Fisheries management all around the world is
hampered by the lack, or poor quality, of critical data on fish
resources and fishing operations. The main reasons for the chronic
inability to collect good-quality data during fishing operations are
the culture of secrecy common among fishers and the lack of modern
data-gathering technology onboard most fishing vessels. In response,
OLRAC-SPS, a South African company, developed fisheries data-logging
software (eLog for short) and named it Olrac. The Olrac eLog
solution is capable of collecting, analysing, plotting, mapping,
reporting, tracing and transmitting all data related to fishing
operations. Olrac can be used by skippers, fleet/company managers,
offshore mariculture farmers, scientists, observers, compliance
inspectors and fisheries management authorities. The authors believe
that using eLog onboard fishing vessels has the potential to
revolutionise the entire process of data collection and reporting
during fishing operations and, if properly deployed and utilised,
could transform the entire commercial fleet to a provider of good
quality data and forever change the way fish resources are managed.
In addition it will make it possible to trace catches back to the actual
individual fishing operation, to improve fishing efficiency and to
dramatically improve control of fishing operations and enforcement
of fishing regulations.
Abstract: Recently, content delivery services have grown rapidly
over the Internet. For ASPs (Application Service Provider) providing
content delivery services, P2P architecture is beneficial to reduce
outgoing traffic from content servers. On the other hand, ISPs are
suffering from the increase in P2P traffic. The P2P traffic is
unnecessarily redundant because the same content or the same
fractions of content are transferred through an inter-ISP link several
times. Subscriber ISPs have to pay a transit fee to upstream ISPs based
on the volume of inter-ISP traffic. In order to solve such problems,
several works have targeted P2P traffic reduction.
However, these existing works cannot control the traffic volume of a
particular link. To meet this ISP operational requirement, we
propose a method that keeps the traffic volume of a link within a
preconfigured upper bound. We evaluated the proposed method through
simulations at a 1,000-user scale and confirmed that the traffic
volume was kept below the upper bound under all evaluated
conditions. Moreover, our method kept link usage at 98.95% of the
target value.
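The abstract does not detail the control algorithm; as a minimal sketch, keeping a per-interval byte budget for the metered inter-ISP link and falling back to intra-ISP peers once the budget is spent captures the idea of a preconfigured upper bound. The class name and interval semantics below are hypothetical:

```python
class LinkBudget:
    """Cap on the volume sent over an inter-ISP link per accounting
    interval.  Hypothetical sketch, not the paper's algorithm."""

    def __init__(self, upper_bound_bytes):
        self.bound = upper_bound_bytes
        self.used = 0

    def try_send(self, nbytes):
        """Allow a transfer over the metered link only if it keeps this
        interval's volume within the preconfigured upper bound."""
        if self.used + nbytes <= self.bound:
            self.used += nbytes
            return True
        return False   # caller should fetch from an intra-ISP peer instead

link = LinkBudget(upper_bound_bytes=1000)
print([link.try_send(400) for _ in range(3)])  # → [True, True, False]
```

A deployed controller would additionally reset the budget each interval and steer peer selection (e.g. prefer same-ISP peers) rather than simply refusing transfers.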
Abstract: How to effectively allocate system resource to process
the Client request by Gateway servers is a challenging problem. In
this paper, we propose an improved scheme for autonomous
performance of Gateway servers under highly dynamic traffic loads.
We devise a methodology to calculate Queue Length and Waiting
Time utilizing Gateway Server information to reduce response time
variance in the presence of bursty traffic. The most widespread
consideration is performance: Gateway Servers must offer
cost-effective, high-availability services over the long term, so
they have to be scaled to meet the expected load. Performance
measurements can be the basis for performance modeling and
prediction. With the help of performance models, performance
metrics (such as buffer estimation and waiting time) can be
determined during the development process. This paper describes the
possible queue models that can be applied in the estimation of queue
length to estimate the final value of the memory size. Both simulation and
experimental studies using synthesized workloads and analysis of
real-world Gateway Servers demonstrate the effectiveness of the
proposed system.
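The abstract does not name which queue models are used; as an illustration, the classic M/M/1 model already links arrival and service rates to mean queue length (useful for buffer/memory sizing) and waiting time via Little's law:

```python
def mm1_metrics(arrival_rate, service_rate):
    """M/M/1 steady-state metrics: utilisation rho, mean number of
    requests in the system L, and mean sojourn time W from Little's
    law L = lambda * W.  Illustrative only; the paper does not specify
    its queue model."""
    rho = arrival_rate / service_rate
    if rho >= 1:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    L = rho / (1 - rho)      # mean number of requests in the system
    W = L / arrival_rate     # mean time a request spends in the system
    return rho, L, W

# A hypothetical gateway serving 80 req/s with a capacity of 100 req/s:
rho, L, W = mm1_metrics(80, 100)
print(round(rho, 2), round(L, 2), round(W, 3))  # → 0.8 4.0 0.05
```

Here L directly drives buffer (memory) sizing, and the sharp growth of L as rho approaches 1 is why the gateways described above are kept at modest utilisation.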
Abstract: Web usage currently generates a huge amount of data
reflecting user attention. In general, a proxy server is a system
that supports users' web access, and its operation can be managed
using hit rates. This research tries to improve the hit rate of a
proxy system by applying a data mining technique. The data sets were
collected from proxy servers in a university, and relationships were
investigated based on several features. The resulting model is used
to predict which websites will be accessed in the future. The
association rule technique is applied to extract the relations among
Date, Time, Main Group web, Sub Group web and Domain name to create
the model. The results showed that this technique can predict web
content for the next day; moreover, the hit rate for future website
accesses increased from 38.15% to 85.57%.
This model can predict web page accesses and thereby tends to
increase the efficiency of proxy servers. In addition, the
performance of Internet access will be improved, helping to reduce
traffic in networks.
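The abstract names the features but not the mining procedure; a tiny support/confidence pass over 1- and 2-itemsets (an Apriori-style sketch with made-up log entries, not the authors' data set) illustrates how rules such as "time bucket implies web group" emerge:

```python
from collections import Counter
from itertools import combinations

def association_rules(transactions, min_support=0.5, min_confidence=0.7):
    """Tiny Apriori-style pass over 1- and 2-itemsets, returning rules
    (lhs, rhs, confidence).  Sketch only; real miners handle larger
    itemsets and prune candidates."""
    n = len(transactions)
    item_count = Counter(i for t in transactions for i in set(t))
    pair_count = Counter(p for t in transactions
                         for p in combinations(sorted(set(t)), 2))
    rules = []
    for (a, b), c in pair_count.items():
        if c / n < min_support:              # support of the pair
            continue
        for lhs, rhs in ((a, b), (b, a)):
            conf = c / item_count[lhs]       # confidence of lhs -> rhs
            if conf >= min_confidence:
                rules.append((lhs, rhs, conf))
    return rules

# Each transaction holds features of one proxy-log entry (date bucket,
# time bucket, web group, domain); values are hypothetical.
logs = [
    ("Mon", "09h", "news", "example.com"),
    ("Mon", "09h", "news", "example.com"),
    ("Mon", "10h", "mail", "example.org"),
    ("Mon", "09h", "news", "example.com"),
]
for lhs, rhs, conf in association_rules(logs):
    print(f"{lhs} -> {rhs}  (confidence {conf:.2f})")
```

Rules mined this way over yesterday's log can then drive pre-fetching of the predicted content into the proxy cache, which is how the hit-rate improvement above would be obtained.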
Abstract: In order to make surfing the Internet faster, and to avoid the redundant processing load incurred by each request for the same web page, many caching techniques have been developed to reduce the latency of retrieving data on the World Wide Web. In this paper we give a quick overview of existing web caching techniques for dynamic web pages, then introduce a design and implementation model that takes advantage of the "URL Rewriting" feature in some popular web servers, e.g. Apache, to provide an effective approach to caching dynamic web pages.
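A sketch of the idea: a rewrite rule in front of the application maps each dynamic URL to a static cache file, so repeated requests are served without re-rendering. The Python below emulates that mapping; the file layout and hashing scheme are our assumptions, not the paper's design:

```python
import hashlib
import os
import tempfile

CACHE_DIR = tempfile.mkdtemp()   # stand-in for the web server's cache root

def rewritten_path(url):
    """Map a dynamic URL to a static cache file name, mimicking what a
    URL-rewriting rule would do in front of the application server."""
    digest = hashlib.sha1(url.encode()).hexdigest()
    return os.path.join(CACHE_DIR, digest + ".html")

def serve(url, generate_page):
    """Serve from the cache when the rewritten file exists; otherwise run
    the (expensive) page generator once and store its output."""
    path = rewritten_path(url)
    if os.path.exists(path):                 # cache hit: no re-rendering
        with open(path) as f:
            return f.read(), "HIT"
    body = generate_page(url)                # cache miss: render and store
    with open(path, "w") as f:
        f.write(body)
    return body, "MISS"

def page(url):
    return f"<html>rendered {url}</html>"    # stand-in page generator

print(serve("/news?id=7", page)[1])  # → MISS
print(serve("/news?id=7", page)[1])  # → HIT
```

In a real deployment the rewrite engine tests for the cached file itself and only falls through to the application on a miss; invalidation (deleting stale cache files when the underlying data changes) is the part this sketch omits.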
Abstract: The Internet is the global data communications
infrastructure based on the interconnection of both public and private
networks using protocols that implement Internetworking on a global
scale. Hence the control of protocol and infrastructure development,
resource allocation and network operation are crucial and interlinked
aspects. Internet Governance is the hotly debated and contentious
subject that refers to the global control and operation of key Internet
infrastructure such as domain name servers and resources such as
domain names. It is impossible to separate technical and political
positions as they are interlinked. Furthermore the existence of a
global market, transparency and competition impact upon Internet
Governance and related topics such as network neutrality and
security. Current trends and developments regarding Internet
governance with a focus on the policy-making process, security and
control have been observed to evaluate current and future
implications on the Internet. The multi-stakeholder approach to
Internet Governance discussed in this paper presents a number of
opportunities, issues and developments that will affect the future
direction of the Internet. Internet operation, maintenance and
advisory organisations such as the Internet Corporation for Assigned
Names and Numbers (ICANN) or the Internet Governance Forum
(IGF) are currently in the process of formulating policies for future
Internet Governance. Given the controversial nature of the issues at
stake and the current lack of agreement, it is predicted that
institutional as well as market governance will remain present for
network access and content.