Abstract: Groundwater is one of the most important water
resources in Fars province. Based on this study, 95 percent of the
total annual water consumption in Fars is used for agriculture,
whereas the percentages for domestic and industrial uses are 4 and 1
percent, respectively. Population growth, urban and industrial
growth, and agricultural development in Fars have created a
condition of water stress. In this province, farmers and other users are
pumping groundwater faster than its natural replenishment rate,
causing a continuous drop in groundwater tables and depletion of this
resource. In this research, variations in groundwater level, their effects,
and ways to help control groundwater levels in the aquifer of the Kavar-
Maharloo plains of Fars province were evaluated. Excessive
exploitation of groundwater in this aquifer has caused groundwater
levels to fall too fast or to unacceptable levels. The average drawdown
of the groundwater level in this plain was 17 meters from
1995 to 2006. The purpose of this study is to evaluate water level
changes in the Kavar-Maharloo Aquifer in Fars province in order
to determine the areas of greatest depletion, the causes of depletion,
and to predict the remaining life of the aquifer.
Abstract: Recently, data mining has been applied to scientific bibliographic databases to analyze the pathways of knowledge or the core scientific relevance of a Nobel laureate or a country. This specific case of data mining has been named citation mining, and it is the integration of citation bibliometrics and text mining. In this paper we present an improved web implementation of statistical physics algorithms to perform the text mining component of citation mining. In particular, we use an entropy-like distance between the compressed versions of texts as an indicator of the similarity between them. Finally, we have included the recently proposed index h to characterize scientific production. We have used this web implementation to identify the users, applications, and impact of the Mexican scientific institutions located in the State of Morelos.
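The two computable ingredients named above can be illustrated with a minimal sketch, assuming a zlib compressor and plain-text inputs; the paper's exact distance function and corpus are not reproduced here.

```python
import zlib

def ncd(x: str, y: str) -> float:
    """Normalized compression distance: an entropy-like similarity
    measure built from compressed sizes (smaller means more similar)."""
    cx = len(zlib.compress(x.encode()))
    cy = len(zlib.compress(y.encode()))
    cxy = len(zlib.compress((x + y).encode()))
    return (cxy - min(cx, cy)) / max(cx, cy)

def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h
```

Two identical texts yield a distance near zero, while unrelated texts approach one, which is what makes the measure usable as a similarity indicator between documents.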
Abstract: User-based Collaborative filtering (CF), one of the
most prevailing and efficient recommendation techniques, provides
personalized recommendations to users based on the opinions of other
users. Although the CF technique has been successfully applied in
various applications, it suffers from serious sparsity problems. The
cloud-model approach addresses the sparsity problem by
constructing the user's global preference, represented by a cloud
eigenvector. The user-based CF approach works well with dense
datasets while the cloud-model CF approach has a greater
performance when the dataset is sparse. In this paper, we present a
hybrid approach that integrates the predictions from both the
user-based CF and the cloud-model CF approaches. The experimental
results show that the proposed hybrid approach can ameliorate the
sparsity problem and provide an improved prediction quality.
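The abstract does not spell out the combination scheme; one common way to realize such a hybrid is a sparsity-weighted blend of the two predictors. The `density` and `threshold` parameters below are illustrative assumptions, not values from the paper.

```python
def hybrid_predict(p_user_cf: float, p_cloud_cf: float,
                   density: float, threshold: float = 0.05) -> float:
    """Blend two rating predictions: trust user-based CF on dense data,
    fall back toward the cloud-model prediction as the matrix gets sparse.

    density: fraction of the user-item rating matrix that is filled.
    threshold: density at which user-based CF is treated as fully reliable.
    """
    w = min(1.0, density / threshold)  # confidence in user-based CF
    return w * p_user_cf + (1.0 - w) * p_cloud_cf
```

With a dense dataset (density at or above the threshold) the blend reduces to the user-based prediction; with an empty matrix it reduces to the cloud-model prediction, mirroring the complementary strengths described above.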
Abstract: The aim of this research is to design a collaborative
framework that integrates risk analysis activities into the geospatial
database design (GDD) process. Risk analysis is rarely undertaken
iteratively as part of the present GDD methods in conformance to
requirement engineering (RE) guidelines and risk standards.
Accordingly, when risk analysis is performed during the GDD, some
foreseeable risks may be overlooked and not reach the output
specifications especially when user intentions are not systematically
collected. This may lead to ill-defined requirements and ultimately to
higher risks of geospatial data misuse. The adopted approach consists
of 1) reviewing risk analysis process within the scope of RE and
GDD, 2) analyzing the challenges of risk analysis within the context
of GDD, and 3) presenting the components of a risk-based
collaborative framework that improves the collection of the
intended/forbidden usages of the data and helps geo-IT experts to
discover implicit requirements and risks.
Abstract: Recently, services such as television and the
Internet have come to be received through a variety of terminals.
Greater convenience could be gained by receiving these
services through a cellular phone terminal while out, and then
continuing to receive the same services through a large-screen digital
television after coming home. However, it is currently necessary to go
through the same authentication processing again when switching to the
TV at home. In this study, we have developed an
authentication method that enables users to switch terminals in
environments in which the user receives service from a server through
a terminal. Specifically, the method simplifies the authentication of
the server side when switching from one terminal to another terminal
by using previously authenticated information.
Abstract: The sensitivity of orifice plate metering to disturbed
flow (either asymmetric or swirling) is a subject of great concern to
flow meter users and manufacturers. The distortions caused by pipe
fittings and pipe installations upstream of the orifice plate are major
sources of this type of non-standard flows. These distortions can alter
the accuracy of metering to an unacceptable degree. In this work, a
multi-scale object known as metal foam has been used to generate a
predetermined turbulent flow upstream of the orifice plate. The
experimental results showed that the combination of an orifice plate
and metal foam flow conditioner is broadly insensitive to upstream
disturbances. This metal foam demonstrated a good performance in
terms of removing swirl and producing a repeatable flow profile
within a short distance downstream of the device. The results of using
a combination of a metal foam flow conditioner and orifice plate under
non-standard flow conditions, including swirling and asymmetric
flows, show that this package can preserve metering accuracy up to
the level required by the standards.
Abstract: Testable software has two inherent properties – observability and controllability. Observability facilitates observation of the internal behavior of software to the required degree of detail. Controllability allows creation of difficult-to-achieve states prior to execution of various tests. In this paper, we describe COTT, a Controllability and Observability Testing Tool, used to create testable object-oriented software. COTT provides a framework that helps the user instrument object-oriented software to build the required controllability and observability. During testing, the tool facilitates the creation of difficult-to-achieve states required for testing difficult-to-test conditions and the observation of internal details of execution at the unit, integration, and system levels. The execution observations are logged in a test log file, which is used for post analysis and to generate test coverage reports.
Abstract: Cognitive models allow predicting some aspects of utility
and usability of human machine interfaces (HMI), and simulating
the interaction with these interfaces. The action of predicting is based
on a task analysis, which investigates what a user is required to do
in terms of actions and cognitive processes to achieve a task. Task
analysis facilitates the understanding of the system's functionalities.
Cognitive models are part of the analytical approaches that do not
involve users during the development process of the interface.
This article presents a study about the evaluation of a human
machine interaction with a contextual assistant's interface using the
ACT-R and GOMS cognitive models. The present work shows how these
techniques may be applied in the evaluation of HMI, design and
research by emphasizing firstly the task analysis and secondly the
time execution of the task. In order to validate and support our
results, an experimental study of user performance is conducted at
the DOMUS laboratory, during the interaction with the contextual
assistant's interface. The results of our models show that the GOMS
and ACT-R models give good and excellent predictions, respectively,
of users' performance at the task level, as well as at the object level.
Therefore, the simulated results are very close to the results obtained
in the experimental study.
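Time-execution predictions of the kind GOMS produces can be illustrated with the Keystroke-Level Model, whose operator times are standard published averages; the operator sequence in the example is hypothetical, not a task from the study above.

```python
# Standard Keystroke-Level Model operator times in seconds
# (Card, Moran & Newell): K = keystroke, P = point with mouse,
# H = home hands between devices, M = mental preparation, B = button press.
KLM_SECONDS = {"K": 0.28, "P": 1.10, "H": 0.40, "M": 1.35, "B": 0.10}

def predict_time(operators: str) -> float:
    """Predicted execution time for an operator sequence such as 'MPBK'."""
    return sum(KLM_SECONDS[op] for op in operators)
```

For example, the sequence "MPK" (mentally prepare, point, press a key) predicts 1.35 + 1.10 + 0.28 = 2.73 s, which can then be compared against measured user performance as in the experimental study.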
Abstract: This paper gives an overview of how an OWL
ontology has been created to represent template knowledge models
defined in CML that are provided by CommonKADS.
CommonKADS is a mature knowledge engineering methodology
which proposes the use of template knowledge models for knowledge
modelling. The aim of developing this ontology is to present the
template knowledge models in a knowledge representation language
that can be easily understood and shared in the knowledge
engineering community. Hence, OWL is used, as it has become a
standard for ontologies and already has user-friendly tools for
viewing and editing.
Abstract: The computer has become an essential tool in modern
life, and the combined use of a computer with a projector is very
common in teaching and presentations. However, as typical computer
operating devices involve a mouse or keyboard, when making
presentations, users often need to stay near the computer to execute
functions such as changing pages, writing, and drawing, thus, making
the operation time-consuming, and reducing interactions with the
audience. This paper proposes a laser pointer interaction system able
to simulate mouse functions so that users need not remain near the
computer, but can instead operate it directly with a laser pointer from
a distance. It can effectively reduce the time users spend at the
computer, allowing for greater interaction with the audience.
Abstract: The model-based approach to user interface design
relies on developing separate models capturing various aspects about
users, tasks, application domain, presentation and dialog structures.
This paper presents a task modeling approach for user interface
design and aims at exploring mappings between task, domain and
presentation models. The basic idea of our approach is to identify
typical configurations in task and domain models and to investigate
how they relate to each other. A special emphasis is put on
application-specific functions and mappings between domain objects and
operational task structures. In this respect, we will address two
layers in task decomposition: a functional (planning) layer and an
operational layer.
Abstract: The development of Internet technology in recent years has led to a more active role of users in creating Web content. This has significant effects both on individual learning and collaborative knowledge building. This paper will present an integrative framework model to describe and explain learning and knowledge building with shared digital artifacts on the basis of Luhmann's systems theory and Piaget's model of equilibration. In this model, knowledge progress is based on cognitive conflicts resulting from incongruities between an individual's prior knowledge and the information which is contained in a digital artifact. Empirical support for the model will be provided by 1) applying it descriptively to texts from Wikipedia, 2) examining knowledge-building processes using a social network analysis, and 3) presenting a survey of a series of experimental laboratory studies.
Abstract: This paper proposes a prototype of a lower-limb
rehabilitation system for recovering and strengthening patients'
injured lower limbs. The system is composed of traction motors for
each leg position, a treadmill as a walking base, tension sensors,
microcontrollers controlling motor functions and a main system with
graphic user interface. To derive reference (normal) velocity
profiles of the body segment points, a kinematic method is applied
based on a humanoid robot model using the reference joint angle data
of normal walking.
Abstract: With the development of virtual communities, there has been
an increase in the number of members of Virtual Communities (VCs).
Many join VCs with the objective of sharing their knowledge and
seeking knowledge from others. Despite this eagerness to share
knowledge and receive knowledge through VCs, there is no
standard for assessing one's knowledge sharing capabilities and
prospects of knowledge sharing. This paper develops a vector space
model to assess the knowledge sharing prospects of VC users.
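The abstract does not list the model's features; a generic vector-space sketch would represent a member's contributions as a term-frequency vector and score sharing prospects by cosine similarity against the topics being sought. The tokenization and feature choice below are assumptions, not the paper's method.

```python
import math
from collections import Counter

def tf_vector(text: str) -> Counter:
    """Bag-of-words term-frequency vector for a piece of text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

A member whose posted content scores high against a knowledge seeker's query vector would be assessed as a strong sharing prospect under this scheme.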
Abstract: In recent years, IT convergence technology has been developed to produce creative solutions by combining robotics or sports science with information technology. Object detection and recognition have mainly been applied in the sports science field by recognizing faces and tracking the human body. However, object detection and recognition using a vision sensor is a challenging task in the real world because of illumination. In this paper, object detection and recognition using a vision sensor applied to a sports simulator is introduced. Face recognition is used to identify the user and to update that person's athletic record automatically. The human body is tracked to offer the most accurate way of riding a horse simulator. Combined image processing is used to reduce the adverse effect of illumination, which causes low detection and recognition performance in real-world applications. Faces are recognized using a standard face graph, and the human body is tracked using a pose model composed of feature nodes generated from diverse face and pose images. Face recognition using Gabor wavelets and pose recognition using a pose graph are robust in real applications. We have run simulations using the ETRI database, which was constructed on a horse riding simulator.
Abstract: This paper deals with the modeling and evaluation of the influence of multiplicative phase noise on the bit error ratio in a general space communication system. Our research is focused on systems with multi-state phase shift keying modulation techniques, and it turns out that the phase noise significantly affects the bit error rate, especially at higher signal-to-noise ratios. These results come from a system model created in the Matlab environment and are shown in the form of constellation diagrams and bit error rate dependencies. The change of the user data bit rate is also considered and included in the simulation results. The obtained outcomes confirm theoretical presumptions.
Abstract: Network management techniques have long been of
interest to the networking research community. The queue size plays
a critical role in network performance. An adequate queue size
maintains Quality of Service (QoS) requirements within
limited network capacity for as many users as possible. The
appropriate estimation of the queuing model parameters is crucial for
both initial size estimation and during the process of resource
allocation. The accurate resource allocation model for the
management system increases the network utilization. The present
paper demonstrates the results of empirical observation of memory
allocation for packet-based services.
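The specific queuing model is not named in the abstract; the textbook starting point for such size estimation is the M/M/1 queue, sketched below. The arrival and service rates in the usage example are illustrative, not measurements from the paper.

```python
import math

def mm1_queue_stats(arrival_rate: float, service_rate: float) -> dict:
    """Steady-state M/M/1 metrics for initial queue-size estimation."""
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        raise ValueError("unstable queue: utilization must be below 1")
    return {
        "utilization": rho,
        "L": rho / (1.0 - rho),             # mean number in system
        "Lq": rho ** 2 / (1.0 - rho),       # mean number waiting
        "W": 1.0 / (service_rate - arrival_rate),  # mean time in system
    }

def buffer_for_loss(rho: float, target_loss: float) -> int:
    """Smallest buffer K whose geometric overflow tail rho**K is
    below the target loss probability (M/M/1 tail approximation)."""
    return math.ceil(math.log(target_loss) / math.log(rho))
```

At 80% utilization, for instance, the mean system occupancy is 4 packets, and holding the overflow probability below 0.1% requires a buffer of roughly 31 packets; this is the kind of relationship that ties queue sizing to QoS within limited capacity.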
Abstract: In this paper, we present C@sa, a multiagent system aiming at modeling, controlling and simulating the behavior of an intelligent house. The developed system aims at providing architects, designers and psychologists with a simulation and control tool for understanding the impact of embedded and pervasive technology on people's daily lives. In this vision, the house is seen as an environment made up of independent and distributed devices, controlled by agents, interacting to support the user's goals and tasks.
Abstract: In this paper, an accurate theoretical analysis of the achievable average channel capacity (in the Shannon sense) per user of a hybrid cellular direct-sequence/fast frequency hopping code-division multiple-access (DS/FFH-CDMA) system operating in a Rayleigh fading environment is presented. The analysis covers the downlink operation and leads to the derivation of an exact mathematical expression between the normalized average channel capacity available to each system user, under simultaneous optimal power and rate adaptation, and the system's parameters, such as the number of hops per bit, the applied processing gain, the number of users per cell, and the received signal-to-noise power ratio over the signal bandwidth. Finally, numerical results are presented to illustrate the proposed mathematical analysis.
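The paper's closed-form expression for the DS/FFH-CDMA system is not reproduced here; the normalized capacity it refers to is built on the Shannon bound, which a short helper makes concrete.

```python
import math

def shannon_capacity_per_hz(snr_db: float) -> float:
    """Normalized Shannon capacity C/B = log2(1 + SNR) in bit/s/Hz,
    taking the received signal-to-noise ratio in decibels."""
    snr_linear = 10.0 ** (snr_db / 10.0)
    return math.log2(1.0 + snr_linear)
```

At 0 dB (SNR = 1) the bound is exactly 1 bit/s/Hz, and at 20 dB it is about 6.66 bit/s/Hz; fading, hopping, and multi-user interference reduce the per-user capacity below this single-link bound.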
Abstract: A learning management system (commonly
abbreviated as LMS) is a software application for the administration,
documentation, tracking, and reporting of training programs,
classroom and online events, e-learning programs, and training
content (Ellis 2009). Hall (2003) defines an LMS as "software that
automates the administration of training events. All Learning
Management Systems manage the log-in of registered users, manage
course catalogs, record data from learners, and provide reports to
management". Evidence of the worldwide spread of e-learning in
recent years is easy to obtain. In April 2003, no fewer than 66,000
fully online courses and 1,200 complete online programs were listed
on the TeleCampus portal from TeleEducation (Paulsen 2003). According
to the report "The US Market in Self-paced eLearning Products and
Services: 2010-2015 Forecast and Analysis", the number of students
taking classes exclusively online will be nearly equal (1% less) to the
number taking classes exclusively on physical campuses, and the number
of students taking online courses will increase from 1.37 million in
2010 to 3.86 million in 2015 in the USA. In another report, by The Sloan
Consortium, three-quarters of institutions report that the economic
downturn has increased demand for online courses and programs.