Abstract: For emergency and relief service providers such as pre-hospital emergency services, quick arrival at the scene of an accident or any EMS mission is one of the most important requirements of effective service delivery. EMS response time (the interval between the time of the call and the time of arrival on scene) is a critical factor in determining the quality of pre-hospital Emergency Medical Services (EMS). This is especially important for heart attack, stroke, or accident patients, for whom seconds are vital to saving their lives. Location-based e-services can be broadly defined as any service that provides information pertinent to the current location of an active mobile handset, or the precise address of a landline caller, within a specific time window, regardless of the underlying delivery technology used to convey the information. According to research, one effective method of meeting this goal is determining the location of the caller through the cooperation of the country's landline and mobile phone operators. Follow-up by the Communications Regulatory Authority (CRA) resulted in the receipt of two separate secured electronic web services. Thus, to ensure personal privacy, a secure technical architecture was required for launching these services in the pre-hospital EMS information management system. In addition, to speed medics' arrival at the patient's bedside, rescue vehicles should make use of an intelligent transportation system that estimates road traffic using a GPS-based mobile navigation system independent of the Internet. This paper illustrates the architecture of the practical national model used by the Iranian EMS organization.
Abstract: The success of education depends on evolution and adaptation. While the traditional system has worked before, one form of education that has evolved with the digital age is virtual education, which has improved the efficiency of today's learning environments. Virtual learning has indeed proved its ability to overcome the drawbacks of the physical environment, such as time, facilities, and location, but despite these accomplishments the educational system as a whole is not yet adequately productive. Earning a degree is no longer enough to obtain a career job; on its own it lacks the skills and creativity employers need. There are always two sides to a coin: a college degree and a specialized certificate each have their own merits, but having both can put you on a successful IT career path. For the many job-seeking individuals across the world to have a clear, meaningful goal for work and education and to contribute positively to the community, productive correlation and cooperation among employers and universities, alongside individual technical skills, is a must for generations to come. The proposed research, the "Entrepreneur Universal Education System", is an evolution that meets the needs of both employers and students and makes gaining vital, real-world experience in the chosen field easier than ever. The new vision is to empower education to serve organizations' needs, and thereby improve the world, as its primary goal: adopting the universal skills of effective thinking, effective action, and effective relationships, preparing students through real-world accomplishment, and encouraging them to serve their organizations and communities faster and more efficiently.
Abstract: Telemedicine services require correct computing-resource management to guarantee productivity and efficiency for medical and non-medical staff. The aim of this study was to examine web management strategies to ensure the availability of resources and services in telemedicine, so as to provide medical information management with an accessible strategy. In addition, to evaluate quality-of-service parameters, the following were measured: delay, throughput, jitter, latency, available bandwidth, and percentage of access and denial of services, based on a web-management performance map with profile permissions and database management. Across 24 different test scenarios, the results show 100% availability of medical information, with respect to medical staff's access to web services, and quality of service (QoS) of 99%, owing to network delay and the performance of the computer network. The findings of this study suggest that the proposed web management strategy is an ideal solution to guarantee the availability, reliability, and accessibility of medical information. Finally, this strategy offers seven user profiles used at a telemedicine center in Bogotá, Colombia, keeping QoS parameters suitable for telemedicine services.
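The QoS parameters listed above can be derived from simple per-request measurements. A minimal sketch, with hypothetical sample values (not data from the study), computing mean latency, jitter, throughput, and availability:

```python
# Illustrative sketch: deriving basic QoS metrics from latency samples and
# request outcomes. All input values below are hypothetical examples.

def qos_metrics(latencies_ms, bytes_received, window_s, outcomes):
    """latencies_ms: per-request latencies; outcomes: True if request served."""
    mean_latency = sum(latencies_ms) / len(latencies_ms)
    # Jitter approximated as mean absolute difference of consecutive latencies.
    jitter = (sum(abs(a - b) for a, b in zip(latencies_ms, latencies_ms[1:]))
              / (len(latencies_ms) - 1))
    throughput_kbps = bytes_received * 8 / 1000 / window_s
    availability = 100.0 * sum(outcomes) / len(outcomes)  # percent of access
    return mean_latency, jitter, throughput_kbps, availability

m, j, t, a = qos_metrics([40, 42, 39, 45], 1_500_000, 10, [True, True, True, True])
```

Here `a` plays the role of the "percent of access" figure and `j` the jitter reported per test scenario.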
Abstract: We have developed a distributed computing capability, the Digital Forensics Compute Cluster (DFORC2), to speed up the ingestion and processing of digital evidence resident on computer hard drives. DFORC2 parallelizes the evidence-ingestion and file-processing steps. It can be run on a standalone computer cluster or in the Amazon Web Services (AWS) cloud. When running in a virtualized computing environment, its cluster resources can be dynamically scaled up or down using Kubernetes. DFORC2 is an open source project that uses Autopsy, Apache Spark and Kafka, and other open source software packages. It extends the proven open source digital forensics capabilities of Autopsy to compute clusters and cloud architectures, so digital forensics tasks can be accomplished efficiently by a scalable array of cluster compute nodes. In this paper, we describe DFORC2 and compare it with a standalone version of Autopsy when both are used to process evidence from hard drives of different sizes.
Abstract: Energy production optimization has traditionally been very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there are many relevant variables to consider, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements have to adjust their performance depending on future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R and on web services built and deployed with different components of the Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service for easily building, deploying, and sharing predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and Power BI, a suite of business analytics tools for analyzing data and sharing insights. Our results show that the Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful for predicting hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity, and dew point) can play a crucial role in improving the accuracy of the forecasting solution. Data cleaning and feature engineering methods performed in R and different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile, and ARIMA) are presented, and results and performance metrics discussed.
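The feature-engineering step described above (the paper performs it in R) typically combines lagged consumption with the weather variables mentioned. A hedged Python sketch of that idea, with hypothetical field layout and toy values:

```python
# Hedged sketch of lagged-load + weather feature engineering for hourly
# short-term forecasting. The column layout and sample data are illustrative
# assumptions, not taken from the paper.

def build_features(load, weather, n_lags=3):
    """load: hourly consumption; weather: per-hour (temp, wind, humidity, dew_point).

    Returns rows of [lag_1..lag_n, temp, wind, humidity, dew_point, target]."""
    rows = []
    for t in range(n_lags, len(load)):
        lags = load[t - n_lags:t]              # previous n_lags hours of load
        rows.append(list(lags) + list(weather[t]) + [load[t]])
    return rows

load = [10, 12, 11, 13, 14, 15]
weather = [(20, 3, 60, 12)] * 6
rows = build_features(load, weather)
```

Rows in this shape can then be fed to a boosted-tree or quantile regressor; the target is the last column.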
Abstract: In this paper, a load-balancing scheme is applied to a Web service mobile host. The main idea of load balancing is to establish a one-to-many mapping: an entrance maps a request onto a plurality of processing nodes in order to divide and assign the processing. Because the mobile host is a resource-constrained environment, some Web service requests cannot be completed on the mobile host alone. When the mobile host's resources are not sufficient to complete a request, the load-balancing scheduler divides the request into a plurality of sub-requests and transfers them to different auxiliary mobile hosts. Each auxiliary mobile host executes its sub-requests and returns the results to the mobile host. A service-request integrator receives the results of the sub-requests from the auxiliary mobile hosts and integrates them. In the end, the complete response is returned to the client. Experimental results show that the technology adopted in this paper can complete requests with higher efficiency.
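The split/dispatch/integrate cycle the abstract describes can be sketched as follows. The chunking rule and the stand-in "processing" step are illustrative assumptions, not the paper's actual scheduler:

```python
# Minimal sketch of the load-balancing cycle: divide a request into
# sub-requests, execute them on auxiliary hosts, integrate the results.
# The round-robin split and uppercase "processing" are toy assumptions.

def split_request(items, n_hosts):
    """Scheduler: divide a request's work items round-robin across hosts."""
    return [items[i::n_hosts] for i in range(n_hosts)]

def execute_on_host(sub_request):
    """Stand-in for an auxiliary mobile host processing one sub-request."""
    return [item.upper() for item in sub_request]

def integrate(results):
    """Service-request integrator: merge sub-results into one response."""
    merged = []
    for r in results:
        merged.extend(r)
    return sorted(merged)

subs = split_request(["a", "b", "c", "d", "e"], 2)
response = integrate(execute_on_host(s) for s in subs)
```

In a real deployment `execute_on_host` would be a remote call to another device, and `integrate` would reassemble a Web service response for the client.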
Abstract: With the emergence of ubiquitous computing and the evolution of enterprises' needs, one of the main challenges is to build context-aware applications based on Web services. These applications have become particularly relevant in the pervasive computing domain. In this paper, we introduce an approach that optimizes the use of Web services in contextual environments by incorporating context notions. We focus particularly on making Web services autonomous and natively context-aware. We implement and evaluate the proposed approach with a pedagogical example of a context-aware Web service that processes temperature values.
Abstract: This paper explores efficient ways to implement various media-updating features such as news aggregation, video conversion, and bulk email handling. All of these jobs share the property that they are periodic in nature, and they all benefit from being handled in a distributed fashion. The data for these jobs also often comes from a social or collaborative source. We isolate the class of periodic, one-round MapReduce jobs as a useful setting in which to describe and handle media-updating tasks. As such tasks are simpler than general MapReduce jobs, programming them on a general MapReduce platform can easily become tedious. This paper presents a MediaUpdater module of the Yioop Open Source Search Engine Web Portal designed to handle such jobs via an extension of a PHP class. We describe how to implement various media-updating tasks in our system, as well as experiments carried out using these implementations on an Amazon Web Services cluster.
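A periodic, one-round MapReduce job of the kind isolated above can be sketched in a few lines. Yioop's actual MediaUpdater extends a PHP class; the Python analogue below, with hypothetical class and method names, only illustrates the pattern:

```python
# Sketch of a periodic, one-round map-reduce job: one map pass over the
# items, one reduce pass, run once per period. Names are illustrative.

class PeriodicJob:
    period_s = 3600  # run once per hour (assumed scheduling interval)

    def map(self, item):       # override per task
        raise NotImplementedError

    def reduce(self, mapped):  # override per task
        raise NotImplementedError

    def run_once(self, items):
        """One round: map every item, then reduce the mapped values."""
        return self.reduce([self.map(i) for i in items])

class WordCountJob(PeriodicJob):
    """Toy news-aggregation task: count words across fetched feed items."""
    def map(self, item):
        return len(item.split())
    def reduce(self, mapped):
        return sum(mapped)

total = WordCountJob().run_once(["breaking news today", "more news"])
```

Because there is exactly one map and one reduce round, no shuffle machinery is needed, which is what makes these jobs simpler than general MapReduce.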
Abstract: Web service adaptation involves the creation of adapters that resolve Web service incompatibilities known as mismatches. Since the importance of Web service adaptation is increasing with the frequent implementation and use of online Web services, this paper presents a literature review investigating the main methods of adaptation, their theoretical underpinnings, and the metrics used to measure adapter performance. Eighteen publications were reviewed independently by two researchers. We found that adaptation techniques are needed to solve different types of problems that may arise from incompatibilities in Web service interfaces, including protocols, messages, data, and semantics, which affect the interoperability of the services. Although adapters are non-invasive methods that can improve Web service interoperability, and there are current approaches for service adaptation, there is not yet one solution that fits all types of mismatches. Our results also show that only a few research projects incorporate theoretical frameworks, and that metrics to measure adapter performance are very limited. We conclude that further research on software adaptation should improve current adaptation methods at the different layers of service interoperability, and that an adaptation framework incorporating a theoretical underpinning and both qualitative and quantitative performance measures needs to be created.
Abstract: The increasing availability of information about Earth surface elevation (Digital Elevation Models, DEM) generated from different sources (remote sensing, aerial images, LiDAR) raises the question of how to integrate this huge amount of data and make it available to the widest possible audience. In order to exploit the potential of 3D elevation representation, the quality of data management plays a fundamental role. Due to high acquisition costs and the huge amount of generated data, high-resolution terrain surveys tend to be small or medium sized and available only for limited portions of the Earth. Hence the need to merge the large-scale height maps that are typically made available for free at a worldwide level with very specific high-resolution datasets. On the other hand, the third dimension improves the user experience and the quality of data representation, unlocking new possibilities in data analysis for civil protection, real estate, urban planning, environmental monitoring, etc. Open-source 3D virtual globes, a trending topic in Geovisual Analytics, aim at improving the visualization of geographical data provided by standard web services or in proprietary formats. Typically, however, 3D virtual globes do not offer an open-source tool that allows the generation of a terrain-elevation data structure starting from heterogeneous-resolution terrain datasets. This paper describes a technological solution aimed at setting up a so-called "Terrain Builder". This tool is able to merge heterogeneous-resolution datasets and to provide a multi-resolution worldwide terrain service fully compatible with CesiumJS and therefore accessible via the web using a traditional browser without any additional plug-in.
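One plausible merge rule for such a Terrain Builder, sketched here under stated assumptions (the tile/dataset structure and the "finest resolution wins" policy are illustrative, not the paper's actual algorithm), is to take, for each tile, the sample from the highest-resolution dataset covering it:

```python
# Hedged sketch: merge heterogeneous-resolution elevation datasets by
# preferring, per tile, the finest-resolution dataset that covers it.
# The (resolution, {tile: elevation}) structure is an illustrative assumption.

def merge_tiles(datasets, tile_keys):
    """datasets: list of (resolution_m, {tile_key: elevation}), any coverage."""
    ordered = sorted(datasets, key=lambda d: d[0])  # finest resolution first
    merged = {}
    for key in tile_keys:
        for res, tiles in ordered:
            if key in tiles:
                merged[key] = tiles[key]  # finest available sample wins
                break
    return merged

worldwide = (30.0, {"t1": 100, "t2": 200, "t3": 300})  # coarse, global coverage
lidar = (1.0, {"t2": 205})                             # fine, local survey
dem = merge_tiles([worldwide, lidar], ["t1", "t2", "t3"])
```

The coarse worldwide map fills every tile, while the local high-resolution survey overrides the tiles it covers.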
Abstract: In the corporate world, Web service technology has grown rapidly, and its significance for the development of web-based applications has gradually risen over time. The success of business-to-business integration relies on finding novel partners and their services in a global business environment. However, selecting the most suitable Web service from a list of services with identical functionality is vital. The customer's satisfaction and the provider's reputation depend primarily on the extent to which the Web service meets the customer's requirements. In many cases, the customer feels that he is paying for a service that is not delivered, because the real functionality of the web service never reaches him; this leads to frequent changes of service. In this paper, a framework is proposed to evaluate the Quality of Service (QoS) and its cost so as to find the optimal correlation between the two. In addition, this work proposes management decisions to address deviation of the web service from the functionality guaranteed at the time of selection.
Abstract: A digital reference service is a traditional library reference service provided electronically. In most cases users do not get full satisfaction from digital reference services, for a variety of reasons. This paper discusses the formal specification of web services applications for digital reference services (WSDRS). WSDRS is an informal model that claims to reduce the problems of digital reference services in libraries. It uses web services technology to provide an efficient digital way of satisfying users' needs in the reference section of libraries. An informal model is written in natural language, which can be inconsistent and ambiguous and may cause difficulties for the developers of the system. To solve this problem we decided to convert the informal specifications into formal specifications, which should reduce overall development time and cost. We use the Z language to develop the formal model and verify it with the Z/EVES theorem prover.
Abstract: The web services applications for digital reference service (WSDRS) model of LIS is an informal model that claims to reduce the problems of digital reference services in libraries. It uses web services technology to provide an efficient way of satisfying users' needs in the reference section of libraries. The formal WSDRS model consists of Z specifications of all the informal specifications of the model. This paper discusses the formal validation of the Z specifications of the WSDRS model. The authors formally verify, and thus validate, the properties of the model using the Z/EVES theorem prover.
Abstract: Quality of Service (QoS) attributes, as part of the service description, are an important factor in service selection. It is not easy to quantify the weight of each QoS condition exactly, since human judgments based on preference cause vagueness. As web service selection requires optimization, evolutionary computing, which uses heuristics to select an optimal solution, is adopted. In this work, the evolutionary computing technique Particle Swarm Optimization (PSO) is used to select suitable web services based on the user's weighting of each QoS value, by optimizing the QoS weight vector and thereby finding the best weight vectors for the best services to be selected. Finally, the results are compared and analyzed using the static inertia weight and the deterministic inertia weight of PSO.
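The PSO variant with a static inertia weight can be sketched compactly. The fitness function below is a toy quadratic stand-in for the paper's QoS scoring, and the parameter values are conventional defaults, not the authors' settings:

```python
import random

# Minimal PSO sketch for tuning a QoS weight vector. Static inertia weight w;
# the fitness function and "ideal" weights are illustrative assumptions.

def pso(fitness, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rnd = random.Random(seed)
    pos = [[rnd.uniform(0, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy QoS score: assumed ideal weights for, say, latency/cost/availability.
target = (0.5, 0.3, 0.2)
best, best_f = pso(lambda v: sum((x - t) ** 2 for x, t in zip(v, target)), 3)
```

A deterministic inertia weight, as compared in the paper, would replace the constant `w` with a value that decreases over the iterations.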
Abstract: This article discusses event monitoring options for heterogeneous event sources as found in today's heterogeneous distributed information systems. It follows the central assumption that a fully generic event monitoring solution cannot provide complete support for event monitoring; instead, event-source-specific semantics, such as certain event types or support for certain event monitoring techniques, have to be taken into account. Following from this, the core result of the work presented here is the extension of a configurable event monitoring (Web) service to a variety of event sources. A service approach allows us to trade genericity for the exploitation of source-specific characteristics. It thus delivers results for the areas of SOA, Web services, CEP, and EDA.
Abstract: Different tools and technologies have been implemented for Crisis Response and Management (CRM), generally relying on available network infrastructure for information exchange. Depending on the type of disaster or crisis, the network infrastructure may be affected and unable to provide reliable connectivity, so any tool or technology that depends on that connectivity cannot fulfill its functions. As a solution, a new message exchange framework has been developed. The framework provides an offline/online information exchange platform for CRM Information Systems (CRMIS); it uses XML compression and packet prioritization algorithms and is based on open-source web technologies. By introducing offline capabilities to web technologies, the framework is able to perform message exchange over unreliable networks. Experiments in a simulation environment give promising results on low-bandwidth networks (56 kbps and 28.8 kbps) with up to 50% packet loss: the solution successfully transferred all the information over these low-quality networks, where traditional 2- and 3-tier applications failed.
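The two techniques the framework combines, XML payload compression and packet prioritization, can be sketched together with an offline message queue. The class shape, priority convention, and sample messages are illustrative assumptions, not the framework's actual design:

```python
import heapq
import zlib

# Sketch of an offline message queue combining XML compression (zlib) with
# packet prioritization (a min-heap). Lower number = higher priority; the
# priority values and message contents below are toy assumptions.

class MessageQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker keeps FIFO order within one priority

    def push(self, xml_text, priority):
        payload = zlib.compress(xml_text.encode("utf-8"))  # shrink for low bandwidth
        heapq.heappush(self._heap, (priority, self._seq, payload))
        self._seq += 1

    def pop(self):
        priority, _, payload = heapq.heappop(self._heap)
        return zlib.decompress(payload).decode("utf-8")

q = MessageQueue()
q.push("<report severity='low'>road blocked</report>", priority=5)
q.push("<report severity='high'>medical help needed</report>", priority=1)
first = q.pop()  # highest-priority message is transmitted first
```

When connectivity returns, messages would be drained from the queue in priority order, so urgent reports survive a constrained link first.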
Abstract: This paper investigates a new data mining capability that entails mining High Utility Itemsets (HUIs) in a distributed environment. Existing research in data mining deals only with the presence or absence of items and does not consider semantic measures such as the weight or cost of the items; HUI mining algorithms evolved to address this. HUI mining is a kind of utility mining that aims to identify itemsets whose utility satisfies a given threshold. However, mining HUIs in a distributed environment, and mining them from XML data, have not been explored yet. In this work, a novel approach is proposed to mine HUIs from XML-based data in a distributed environment. The work adopts the Service Oriented Computing (SOC) paradigm, which provides Knowledge as a Service (KaaS): the interesting patterns are provided via web services, with the help of a knowledge server, to answer consumers' queries. The performance of the approach is evaluated on various databases in terms of execution time and memory consumption.
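The core utility computation behind HUI mining can be illustrated briefly: in the common formulation, an itemset's utility in a transaction is quantity times external utility (e.g. unit profit), summed over the transactions containing the whole itemset. A hedged sketch with toy data (not the paper's datasets or algorithm):

```python
# Sketch of the standard high-utility-itemset computation: utility of an
# itemset = sum over containing transactions of quantity * unit profit.
# Transactions and profit table are toy assumptions.

def itemset_utility(itemset, transactions, profit):
    """transactions: list of {item: quantity}; profit: {item: unit profit}."""
    total = 0
    for t in transactions:
        if all(i in t for i in itemset):        # itemset fully contained
            total += sum(t[i] * profit[i] for i in itemset)
    return total

transactions = [{"a": 2, "b": 1}, {"a": 1, "c": 3}, {"b": 4}]
profit = {"a": 5, "b": 2, "c": 1}
u = itemset_utility({"a"}, transactions, profit)          # 2*5 + 1*5 = 15
pair = itemset_utility({"a", "b"}, transactions, profit)  # 2*5 + 1*2 = 12
```

An itemset is a HUI when such a utility meets the minimum-utility threshold; note that, unlike frequent-itemset support, utility is not anti-monotone, which is what makes HUI mining algorithmically harder.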
Abstract: Certain sciences, such as physics, chemistry, or biology, have a strong computational aspect and use computing infrastructures to advance their scientific goals. Often, high-performance and/or high-throughput computing infrastructures such as clusters and computational Grids are applied to satisfy computational needs. In addition, these sciences are sometimes characterised by scientific collaborations requiring resource sharing, which is typically provided by Grid approaches. In this article, I discuss Grid computing approaches in High Energy Physics as well as in bioinformatics and highlight some of my experience in both scientific domains.
Abstract: The future of business intelligence (BI) is to integrate intelligence into operational systems that work in real time, continuously analyzing small chunks of data as requirements dictate. This is a move away from the traditional approach of doing analysis ad hoc or sporadically, in a passive, off-line mode, over huge amounts of data. Various AI techniques, such as expert systems, case-based reasoning, and neural networks, play an important role in building business intelligence systems. Since BI involves various tasks and models various types of problems, hybrid intelligent techniques can be a better choice. Intelligent systems accessible through web services are easier to integrate into existing operational systems to add intelligence to every business process. They can be built to be invoked in a modular and distributed way and to work in real time, and their functionality can be extended to take external inputs in formats like RSS. In this paper, we describe a framework that uses effective combinations of these techniques, is accessible through web services, and works in real time. We have successfully developed various prototype systems and completed a few commercial deployments in the area of personalization and recommendation on mobile devices and websites.
Abstract: Recently, web services are often accessed from many types of devices. We have developed a shortest-path planning system called "Bus-Net" in Tottori prefecture as a web application to sustain public transport. It originally used the same user interface for all supported devices, and to support them all, the interface could not use JavaScript and similar technologies. We therefore developed a method that uses an individual user interface for each device type in order to improve convenience. Concretely, we defined formats for the condition input to the path planning system and the result output from it, and separated the system into a request-processing part and user-interface parts that depend on device types. With this method, we have also developed a special device for Bus-Net named "Intelligent-Bus-Stop".
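The separation described above, one request-processing part with fixed input/output formats and per-device user-interface parts, can be sketched as follows. The field names, route, and rendering details are illustrative assumptions, not Bus-Net's actual formats:

```python
# Sketch of the request-processing / per-device UI separation. The dict
# formats and the fixed toy route below are illustrative assumptions.

def plan_path(request):
    """Request-processing part: fixed-format dict in, fixed-format dict out."""
    # A real system would run shortest-path planning over the bus network here.
    return {"from": request["from"], "to": request["to"],
            "route": [request["from"], "Station X", request["to"]]}

def render_pc(result):
    """UI part for full browsers: rich output (could freely use JavaScript)."""
    return " -> ".join(result["route"])

def render_simple(result):
    """UI part for simple devices, e.g. an intelligent bus-stop display."""
    return f'{result["from"]} to {result["to"]}: {len(result["route"]) - 1} legs'

result = plan_path({"from": "Tottori Sta.", "to": "University"})
```

Because every UI part consumes the same result format, adding a new device type (such as the Intelligent-Bus-Stop) only requires a new renderer, not changes to the planner.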