Abstract: This study explores reading and library factors related to secondary school students' academic outcomes in rural areas of Uganda. This mixed-methods study utilized quantitative data collected as part of a more extensive project to explore six student factors in relation to students' school, library, and home environments. The Kitengesa Community Library in Uganda (www.kitengesalibrary.org) served as the site for this study. The factors explored in this study include reading frequency, library use frequency, library access, overall grade average (OGA), and the presence and type of reading materials in the home. Results indicated that both reading frequency and certain types of reading materials read for recreational purposes are correlated with higher OGA. Reading frequency was positively correlated with student OGA for all students.
Abstract: Cortisol is essential to the regulation of the immune
system, and pathological yawning is a symptom of multiple sclerosis
(MS). Electromyographic (EMG) activity in the jaw muscles typically
rises when the muscles are moved (extended or flexed), and yawning
has been shown to be highly correlated with cortisol levels in healthy
people, as described by the Thompson Cortisol Hypothesis. It is likely
that these elevated cortisol levels are also seen in people with MS.
The possible link between EMG activity in the jaw muscles and rises in
saliva cortisol levels during yawning was investigated in a randomized
controlled trial of 60 volunteers aged 18-69 years who were exposed
to conditions designed to elicit the yawning response.
Saliva samples were collected at the start and after yawning, or at the
end of the presentation of yawning-provoking stimuli in the absence
of a yawn, and EMG data were additionally collected during the rest and
yawning phases. Hospital Anxiety and Depression Scale, Yawning
Susceptibility Scale, and General Health Questionnaire scores, together
with demographic and health details, were collected, and the following
exclusion criteria were adopted: chronic fatigue, diabetes, fibromyalgia,
heart condition, high blood pressure, hormone replacement therapy,
multiple sclerosis, and stroke. Saliva cortisol rose significantly between
the rest and post-yawn samples for the yawners, t(23) = -4.263, p < 0.001,
whereas the corresponding change for the non-yawners between rest and
post-stimuli samples was non-significant. There were also significant
differences between yawners and non-yawners in the EMG
potentials, with the yawners having higher rest and post-yawning
potentials. Significant evidence was found to support the Thompson
Cortisol Hypothesis, suggesting that rises in cortisol levels are
associated with the yawning response. Further research is underway
to explore the use of cortisol as a potential diagnostic tool to assist
the early diagnosis of symptoms related to neurological disorders.
Bournemouth University Research & Ethics approval granted:
JC28/1/13-KA6/9/13. Professional code of conduct, confidentiality,
and safety issues have been addressed and approved in the Ethics
submission. Trials identification number: ISRCTN61942768.
http://www.controlled-trials.com/isrctn/
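The reported statistic is a paired-samples t-test. As a minimal, illustrative sketch (not the study's analysis code, and with made-up values rather than real cortisol data), the statistic can be computed as follows:

```python
import math

def paired_t(before, after):
    # Paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    # where d are the within-subject differences (before - after)
    # and sd is the sample standard deviation (n - 1 denominator).
    n = len(before)
    d = [b - a for b, a in zip(before, after)]
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)
    t = mean_d / math.sqrt(var_d / n)
    return t, n - 1  # t statistic and degrees of freedom

# Hypothetical rest vs. post-yawn values (arbitrary units, not study data):
t, df = paired_t([1.0, 2.0, 3.0], [2.0, 3.0, 5.0])
```

A negative t under this sign convention, as in the abstract's t(23) = -4.263, indicates that the post-yawn values exceeded the rest values.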
Abstract: The 5th generation of mobile networks (5G) is a term used in
various research papers and projects to identify the next major phase
of mobile telecommunications standards. 5G wireless networks will
support higher peak data rates and lower latency, and will provide
better connectivity with QoS guarantees.
In this article, we discuss various promising technologies for 5G
wireless communication systems, such as IPv6 support, World Wide
Wireless Web (WWWW), Dynamic Adhoc Wireless Networks
(DAWN), Beam Division Multiple Access (BDMA), cloud
computing, cognitive radio technology, and FBMC/OQAM.
This paper is organized as follows: first, we give an introduction
to 5G systems and present some goals and requirements of 5G. Next,
the basic differences between 4G and 5G are given; we then discuss
key technology innovations of 5G systems and finally
conclude in the last section.
Abstract: Latin hypercube designs (LHDs) are among the space-filling
designs found in the literature and have been applied in many computer
experiments. An LHD can be generated randomly, but a randomly
chosen LHD may have poor properties and thus perform poorly in
estimation and prediction. There is a connection between Latin
squares and orthogonal arrays (OAs). A Latin square of order s
is an arrangement of s symbols in s rows and s columns, such
that every symbol occurs exactly once in each row and once in each
column, and such a square exists for every positive integer s. In this paper, a
computer program was written to construct orthogonal array-based
Latin hypercube designs (OA-LHDs). Orthogonal arrays were
constructed from a Latin square of order s, and the OAs constructed
were afterward used to construct the desired Latin hypercube designs
for three input variables for use in computer experiments. The LHDs
constructed have better space-filling properties and they can be used
in computer experiments that involve only three input factors.
MATLAB 2012a computer package (www.mathworks.com/) was
used for the development of the program that constructs the designs.
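As an illustrative sketch of the construction described above (in Python rather than the authors' MATLAB program; the cyclic Latin-square construction, the Tang-style OA-based level expansion, and all function names are our assumptions):

```python
import random

def latin_square(s):
    # Cyclic Latin square of order s: row i is (i, i+1, ..., i+s-1) mod s.
    return [[(i + j) % s for j in range(s)] for i in range(s)]

def oa_from_latin_square(square):
    # An s x s Latin square yields an orthogonal array with s^2 runs,
    # 3 factors, s levels, strength 2: each run is (row, column, symbol).
    s = len(square)
    return [(i, j, square[i][j]) for i in range(s) for j in range(s)]

def oa_based_lhd(oa, s):
    # In each column, replace the s occurrences of level v with a random
    # permutation of the levels v*s, ..., v*s + s - 1, then map each
    # level to a cell midpoint in (0, 1). Each column becomes a
    # permutation of s^2 distinct levels, i.e. a Latin hypercube design.
    n, k = len(oa), len(oa[0])
    design = [[0.0] * k for _ in range(n)]
    for col in range(k):
        for lev in range(s):
            rows = [r for r in range(n) if oa[r][col] == lev]
            values = list(range(lev * s, (lev + 1) * s))
            random.shuffle(values)
            for r, v in zip(rows, values):
                design[r][col] = (v + 0.5) / n
    return design

# A 16-run, 3-factor OA-based LHD from a Latin square of order 4:
design = oa_based_lhd(oa_from_latin_square(latin_square(4)), 4)
```

The OA step guarantees two-dimensional stratification, which is what gives the resulting LHD its better space-filling properties compared with a purely random LHD.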
Abstract: Due to the large amount of information in the World
Wide Web (WWW, web) and the lengthy and usually linearly
ordered result lists of web search engines that do not indicate
semantic relationships between their entries, the search for topically
similar and related documents can become a tedious task. In particular,
the process of formulating queries with proper terms representing
specific information needs requires much effort from the user. This
problem becomes even bigger when the user's knowledge of a subject
and its technical terms is not sufficient to do so. This article
presents the new and interactive search application DocAnalyser, which
addresses this problem by enabling users to find similar and related
web documents based on automatic query formulation and state-of-the-art
search word extraction. Additionally, this tool can be used to
track topics across semantically connected web documents.
Abstract: Celiac disease is a permanent enteropathy caused by the ingestion of gluten, a protein occurring in wheat, rye, and barley. The only effective daily treatment is a strict gluten-free diet. From an investigation of products available in the local market, it was found that Latvian producers do not offer gluten-free products. The aim of this research was to study and analyze changes in celiac patients' attitudes toward the quality and availability of gluten-free products in the Latvian market, as well as their purchasing habits. The survey was designed using the website www.visidati.lv, and a questionnaire was sent to people suffering from celiac disease. Respondents were first asked to fill in the questionnaire in 2011, and then again from the beginning of September 2013 till the end of January 2014. The questionnaire was completed by 75 celiac patients from all Latvian regions, who answered 16 questions. One of the most important questions aimed to find out consumers' opinions about the quality of gluten-free products, consumption patterns of gluten-free products, and, moreover, their interest in products made in Latvia. Respondents were asked to name the gluten-free products they mainly buy, give specific purchase locations, evaluate the quality of the products, and rate the necessity for products produced in Latvia. The results of the questionnaire show that consumers are satisfied with the quality of the gluten-free flour, flour blends, sweets, and pasta, but are not satisfied with the quality of the bread and confectionery available in the Latvian markets.
Abstract: The paper discusses the design of a .NET Windows Service based agent system called MACS (Multi-Agent Classification System). MACS is a system that aims to accurately classify spreadsheet developers' competency over a network. It is designed to automatically and autonomously monitor spreadsheet users and gather their development activities based on the utilization of software multi-agent technology (MAS). This is accomplished in a way that enables management to efficiently tailor precise training activities for future spreadsheet development. The monitoring agents of MACS are intended to be distributed over the WWW in order to satisfy the monitoring and classification of multiple developers. The Prometheus methodology is used for the design of the agents of MACS. Prometheus has been used to undertake this phase of the system design because it was developed specifically for specifying and designing agent-oriented systems. Additionally, Prometheus also specifies the communication needed between the agents in order to coordinate and achieve their delegated tasks.
Abstract: The paper discusses the implementation of the MultiAgent classification System (MACS) and its utilization to provide an automated and accurate classification of end users developing applications in the spreadsheet domain. Different technologies have been brought together to build MACS. The strength of the system is the integration of the agent technology and the FIPA specifications together with other technologies, which are the .NET Windows service based agents, the Windows Communication Foundation (WCF) services, the Service Oriented Architecture (SOA), and Oracle Data Mining (ODM). Microsoft's .NET Windows service based agents were utilized to develop the monitoring agents of MACS, while the .NET WCF services together with the SOA approach allowed the distribution of and communication between agents over the WWW. The Monitoring Agents (MAs) were configured to execute automatically to monitor Excel spreadsheet development activities by content. Data gathered by the Monitoring Agents from various resources over a period of time was collected and filtered by a Database Updater Agent (DUA) residing in the .NET client application of the system. This agent then transfers and stores the data in an Oracle server database via Oracle stored procedures for further processing that leads to the classification of the end-user developers.
Abstract: Fine-grained data replication over the Internet allows duplication of frequently accessed data objects, as opposed to entire sites, at certain locations so as to improve the performance of large-scale content distribution systems. In a distributed system, agents representing their sites try to maximize their own benefit, since they are driven by different goals, such as minimizing their communication costs, latency, etc. In this paper, we use game-theoretical techniques, and in particular auctions, to identify a bidding mechanism that encapsulates the selfishness of the agents while keeping a controlling hand over them. In essence, the proposed game-theory-based mechanism is the study of what happens when independent agents act selfishly and how to control them to maximize the overall performance. A bidding mechanism asks how one can design systems so that the agents' selfish behavior results in the desired system-wide goals. Experimental results reveal that this mechanism provides excellent solution quality while maintaining fast execution time. The comparisons are recorded against some well-known techniques such as greedy, branch and bound, game-theoretical auctions, and genetic algorithms.
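The abstract does not spell out the paper's bidding rules, but as a minimal illustration of auction-based control of selfish agents, a second-price (Vickrey) auction, in which truthful bidding is each agent's best strategy, can be sketched as follows (all names and values here are our assumptions):

```python
def second_price_auction(bids):
    # bids: mapping from agent id to its bid (e.g. the benefit the agent
    # expects from hosting a replica of a data object).
    # The highest bidder wins but pays only the second-highest bid,
    # which removes the incentive to misreport one's true valuation.
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Agent "b" wins the replica and pays the runner-up's bid of 7.0:
winner, price = second_price_auction({"a": 5.0, "b": 9.0, "c": 7.0})
```

This is the sense in which an auction "encapsulates the selfishness of the agents": each agent maximizes its own benefit, yet the mechanism's payment rule steers the outcome toward the system-wide goal.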
Abstract: The Romanian government has been making
significant attempts to make its services and information available on
the Internet. According to the UN e-government survey conducted in
2008, Romania ranks among the mid-range countries in utilization of
e-government (41% utilization). Romania's national portal
www.e-guvernare.ro aims at progressively making all services and
information accessible through the portal. However, the success of
these efforts depends, to a great extent, on how well the targeted
users for such services, citizens in general, make use of them. For
this reason, the purpose of the presented study was to identify what
factors could affect the citizens' adoption of e-government services.
The study is an extension of the Technology Acceptance Model. The
proposed model was validated using data collected from 481 citizens.
The results provided substantial support for all proposed hypotheses
and showed the significance of the extended constructs.
Abstract: A serious problem on the WWW is finding reliable
information. Not everything found on the Web is true and the
Semantic Web does not change that in any way. The problem will be
even more crucial for the Semantic Web, where agents will be
integrating and using information from multiple sources. Thus, if an
incorrect premise is used due to a single faulty source, then any
conclusions drawn may be in error. Thus, statements published on
the Semantic Web have to be seen as claims rather than as facts, and
there should be a way to decide which among many possibly
inconsistent sources is most reliable. In this work, we propose a trust
model for the Semantic Web. The proposed model is inspired by the
use of trust in human society. Trust is a type of social knowledge that
encodes evaluations about which agents can be taken as reliable
sources of information or services. Our proposed model allows
agents to decide which among different sources of information to
trust and thus act rationally on the Semantic Web.
Abstract: Due to the tremendous amount of information provided
by the World Wide Web (WWW) developing methods for mining
the structure of web-based documents is of considerable interest. In
this paper we present a similarity measure for graphs representing
web-based hypertext structures. Our similarity measure is mainly
based on a novel representation of a graph as linear integer strings,
whose components represent structural properties of the graph. The
similarity of two graphs is then defined as the optimal alignment of
the underlying property strings. In this paper we apply the well-known
technique of sequence alignment to solve a novel and challenging
problem: measuring the structural similarity of generalized trees.
In other words, we first transform our graphs, considered as
high-dimensional objects, into linear structures. Then we derive similarity
values from the alignments of the property strings in order to
measure the structural similarity of generalized trees. Hence, we
transform a graph similarity problem into a string similarity problem to
develop an efficient graph similarity measure. We demonstrate that
our similarity measure captures important structural information by
applying it to two different test sets consisting of graphs representing
web-based document structures.
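As an illustrative sketch of the graph-to-string approach (the paper's actual property strings and alignment parameters are not reproduced here; the BFS degree string and the scoring values below are our assumptions):

```python
from collections import deque

def property_string(adj):
    # A hypothetical property string: vertex degrees in breadth-first
    # order from node 0. adj maps each node to its neighbor list.
    seen, order, queue = {0}, [], deque([0])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return [len(adj[v]) for v in order]

def alignment_score(a, b, match=1, mismatch=-1, gap=-1):
    # Needleman-Wunsch global alignment score of two integer strings.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, len(b) + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,
                           dp[i - 1][j] + gap,
                           dp[i][j - 1] + gap)
    return dp[-1][-1]

def graph_similarity(adj1, adj2):
    # Higher scores mean structurally more similar graphs; identical
    # graphs score the length of their common property string.
    return alignment_score(property_string(adj1), property_string(adj2))

# Two identical 3-node paths align perfectly:
path = {0: [1], 1: [0, 2], 2: [1]}
score = graph_similarity(path, path)
```

The point of the transformation is complexity: optimal string alignment is quadratic in string length, whereas general graph matching is much harder.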
Abstract: The development of wearable sensing technologies is a great challenge that is being addressed by the Proetex FP6 project (www.proetex.org). Its main aim is the development of wearable sensors to improve the safety and efficiency of emergency personnel. This will be achieved by continuous, real-time monitoring of vital signs, posture, activity, and external hazards surrounding emergency workers. We report here the development of a carbon dioxide (CO2) sensing boot, built by incorporating a commercially available CO2 sensor and a wireless platform into the boot assembly. Carefully selected commercially available sensors have been tested. Some of the key characteristics of the selected sensors are high selectivity and sensitivity, robustness, and power demand. This paper discusses some of the results of the CO2 sensor tests and of the sensor's integration with wireless data transmission.
Abstract: It has been recognized that, due to the autonomy and
heterogeneity of Web services and of the Web itself, new approaches
should be developed to describe and advertise Web services. The
most notable approaches rely on the description of Web services
using semantics. This new breed of Web services, termed semantic
Web services, will enable the automatic annotation, advertisement,
discovery, selection, composition, and execution of inter-organizational
business logic, making the Internet a common
global platform where organizations and individuals communicate
with each other to carry out various commercial activities and to
provide value-added services. This paper deals with two of the
hottest R&D and technology areas currently associated with the Web
– Web services and the semantic Web. It describes how semantic
Web services extend Web services as the semantic Web improves the
current Web, and presents three different conceptual approaches to
deploying semantic Web services, namely, WSDL-S, OWL-S, and
WSMO.
Abstract: With the tremendous growth of World Wide Web
(WWW) data, there is an emerging need for effective information
retrieval at the document level. Several query languages such as
XML-QL, XPath, XQL, Quilt, and XQuery have been proposed in recent
years to provide a faster way of querying XML data, but they still lack
generality and efficiency. Our approach towards evolving a framework
for querying semistructured documents is based on formal query
algebra. Two elements are introduced in the proposed framework:
first, a generic and flexible data model for logical representation of
semistructured data and second, a set of operators for the manipulation
of objects defined in the data model. In addition to accommodating
several peculiarities of semistructured data, our model offers novel
features such as bidirectional paths for navigational querying and
partitions for data transformation that are not available in other
proposals.
Abstract: With the explosive growth of data available on the
Internet, personalization of this information space has become a
necessity. At present, with the rapidly increasing popularity of the
WWW, websites are playing a crucial role in conveying knowledge and
information to end users. Discovering hidden and meaningful
information about Web users' usage patterns is critical to determining
effective marketing strategies and to optimizing Web server usage for
accommodating future growth. The task of mining useful information
becomes more challenging when the Web traffic volume is enormous
and keeps on growing. In this paper, we propose an intelligent model
to discover and analyze useful knowledge from the available Web
log data.
Abstract: Digital news on a wide variety of topics is abundant on the
Internet. The problem is to classify news by its appropriate
category so that users can find relevant news rapidly. A classifier
engine is used to assign each news item automatically to its respective
category. This research employs Support Vector Machines (SVM) to
classify Indonesian news. SVM is a robust method for
binary classification. The core of SVM is the formation of an
optimal separating hyperplane between the different classes. For the
multiclass problem, a mechanism called one-against-one is used to
combine the binary classification results. Documents were taken
from the Indonesian digital news site, www.kompas.com. The
experiment showed a promising result, with an accuracy rate of 85%.
This system is feasible to implement for Indonesian news
classification.
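As an illustrative sketch of the one-against-one combination step described above (pure Python, with a stub standing in for the trained pairwise SVMs; all names are our assumptions):

```python
from itertools import combinations

def one_against_one_predict(binary_predict, classes, x):
    # binary_predict(a, b, x) -> a or b is the decision of the binary
    # classifier trained on the class pair (a, b); here it stands in
    # for one pairwise SVM. Each of the k*(k-1)/2 pairwise classifiers
    # casts one vote, and the class with the most votes wins.
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        votes[binary_predict(a, b, x)] += 1
    return max(classes, key=lambda c: votes[c])

# A stub pairwise classifier that is always right when the true
# category is one of its two classes (for demonstration only):
def stub_predict(a, b, x):
    return x if x in (a, b) else a

categories = ["politics", "sports", "technology"]
predicted = one_against_one_predict(stub_predict, categories, "sports")
```

With k categories, one-against-one trains k(k-1)/2 binary SVMs on pairs of classes, which keeps each training problem small compared with the one-against-all alternative.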
Abstract: Information sharing and gathering are important in this era of rapid technological advancement. The existence of the WWW has caused rapid growth in the explosion of information. Readers are overloaded with too many lengthy text documents when they are more interested in shorter versions. The oil and gas industry could not escape this predicament. In this paper, we develop an Automated Text Summarization System, known as AutoTextSumm, to extract the salient points of oil and gas drilling articles by incorporating a statistical approach, keyword identification, synonyms, and sentence position. In this study, we conducted interviews with Petroleum Engineering experts and English Language experts to identify the list of most commonly used keywords in the oil and gas drilling domain. The system performance of AutoTextSumm is evaluated using the formulae for precision, recall, and F-score. Based on the experimental results, AutoTextSumm has produced satisfactory performance, with an F-score of 0.81.
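The evaluation measures mentioned above are standard. As a brief generic sketch (not the authors' evaluation code; inputs and names are our assumptions): precision is the fraction of extracted sentences that are salient, recall is the fraction of salient sentences extracted, and the F-score is their harmonic mean:

```python
def precision_recall_f(extracted, relevant):
    # extracted, relevant: sets of sentence ids (hypothetical inputs).
    hits = len(extracted & relevant)
    precision = hits / len(extracted) if extracted else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# Example: 4 of 5 extracted sentences are among 6 salient ones:
p, r, f = precision_recall_f({1, 2, 3, 4, 5}, {1, 2, 3, 4, 8, 9})
```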
Abstract: This research is designed to help a WAP-based mobile phone user analyze traffic logistics in a given area by designing the processes through which a mobile user accesses server databases. The design comprises a MySQL 4.1.8-nt database system serving as the server, with three sub-databases: traffic-light times at intersections during periods of the day, distances along the roads of the area blocks into which the main sample area is divided, and speeds of sample vehicles (motorcycle, personal car, and truck) during periods of the day. For interconnection between the server and the user, PHP is used to calculate distances and travelling times from the starting point to the destination, while XHTML is applied for receiving, sending, and displaying data from PHP on the user's mobile phone. In this research, the main sample area is the Huakwang-Ratchada area of Bangkok, Thailand, a usually congested point, together with a 6.25 km2 surrounding area that is split into 25 blocks of 0.25 km2 each. For simulating the results, the designed server database and all communication models of this research have been uploaded to www.utccengineering.com/m4tg, and a mobile phone supporting a WAP 2.0 XHTML/HTML multimode browser was used for observing values and displayed pictures. According to the simulated results, the user can check route pictures from the requested starting point to the destination, along with the estimated travel times when sample vehicles travel during various periods of the day.
Abstract: The paper investigates the feasibility of constructing a software multi-agent based monitoring and classification system and utilizing it to provide an automated and accurate classification of end users developing applications in the spreadsheet domain. The agents function autonomously to provide continuous and periodic monitoring of Excel spreadsheet workbooks. This resulted in the development of the MultiAgent classification System (MACS), which is in compliance with the specifications of the Foundation for Intelligent Physical Agents (FIPA). Different technologies have been brought together to build MACS. The strength of the system is the integration of the agent technology and the FIPA specifications together with other technologies, namely Windows Communication Foundation (WCF) services, Service Oriented Architecture (SOA), and Oracle Data Mining (ODM). Microsoft's .NET Windows service based agents were utilized to develop the monitoring agents of MACS, while the .NET WCF services together with the SOA approach allowed the distribution of and communication between agents over the WWW, in order to satisfy the monitoring and classification of multiple developers. ODM was used to automate the classification phase of MACS.