Abstract: The increasing availability of information about Earth surface elevation (Digital Elevation Models, DEMs) generated from different sources (remote sensing, aerial images, LiDAR) raises the question of how to integrate this huge amount of data and make it available to the widest possible audience. In order to exploit the potential of 3D elevation representation, the quality of data management plays a fundamental role. Due to high acquisition costs and the huge amount of generated data, high-resolution terrain surveys tend to be small or medium sized and available only for limited portions of the Earth. Hence the need to merge large-scale height maps, which are typically made available for free at a worldwide level, with very specific high-resolution datasets. On the other hand, the third dimension improves the user experience and the quality of data representation, unlocking new possibilities in data analysis for civil protection, real estate, urban planning, environmental monitoring, etc. Open-source 3D virtual globes, which are a trending topic in Geovisual Analytics, aim at improving the visualization of geographical data provided by standard web services or in proprietary formats. However, 3D virtual globes typically do not offer an open-source tool that allows the generation of a terrain elevation data structure starting from heterogeneous-resolution terrain datasets. This paper describes a technological solution aimed at setting up a so-called “Terrain Builder”. This tool is able to merge heterogeneous-resolution datasets and to provide a multi-resolution worldwide terrain service fully compatible with CesiumJS, and therefore accessible via the web using a traditional browser without any additional plug-ins.
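The core merging step a Terrain Builder of this kind performs can be sketched as overlaying a high-resolution local survey onto an upsampled coarse worldwide grid. The following minimal sketch is an illustrative assumption, not the paper's actual pipeline: grid layout, the nearest-neighbour resampling, and the use of `None` as a no-data marker are all choices made here for readability.

```python
# Sketch: merge a coarse worldwide elevation grid with a high-resolution
# local patch, as a Terrain Builder might do before tiling for CesiumJS.
# Grid layout, resampling method, and no-data handling are illustrative.

def upsample(grid, factor):
    """Nearest-neighbour upsampling of a 2D elevation grid."""
    return [[grid[i // factor][j // factor]
             for j in range(len(grid[0]) * factor)]
            for i in range(len(grid) * factor)]

def merge(coarse, fine, origin, factor):
    """Overlay a fine patch onto the upsampled coarse grid.

    origin: (row, col) of the patch's top-left corner in fine-grid units.
    factor: resolution ratio between the fine and coarse grids.
    """
    base = upsample(coarse, factor)
    r0, c0 = origin
    for i, row in enumerate(fine):
        for j, h in enumerate(row):
            if h is not None:          # None marks no-data cells
                base[r0 + i][c0 + j] = h
    return base

coarse = [[100, 200],
          [300, 400]]                  # low-res global heights (metres)
fine = [[110, None],
        [None, 115]]                   # high-res survey with no-data holes
merged = merge(coarse, fine, (0, 0), 2)
```

In a real tiling pipeline this merged grid would then be cut into the multi-resolution tiles that CesiumJS streams on demand.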
Abstract: The system for analyzing and eliciting public grievances serves the main purpose of receiving and processing all sorts of complaints from the public and responding to users. As the number of complaints grows, the data becomes big data, which is difficult to store and process. The proposed system uses HDFS to store the big data and MapReduce to process it. The concept of a cache was applied in the system to provide immediate response and timely action using big data analytics. A cache-enabled big data store improves the response time of the system. The unstructured data provided by the users is handled efficiently through the MapReduce algorithm. Complaints are processed in the order of the hierarchy of authority. The drawbacks of the traditional database system used in the existing system are addressed by our system through a cache-enabled Hadoop Distributed File System. MapReduce code can leak sensitive data through the computation process; we therefore propose a system that adds noise to the output of the reduce phase to avoid signaling the presence of sensitive data. If a complaint is not processed within the allotted time, it is automatically forwarded to a higher authority, which guarantees that every complaint is processed. A copy of the filed complaint is sent as a digitally signed PDF document to the user's email address, which serves as proof of filing. The system's reports provide essential data for making important decisions on legislation.
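The idea of adding noise to the reduce-phase output can be sketched with Laplace noise, the standard choice for masking exact counts. This is only an illustration of the general technique, not the paper's mechanism: the `epsilon` parameter, the complaint categories, and the helper names are assumptions made here.

```python
# Sketch: perturb reduce-phase counts with Laplace noise so the exact
# values do not signal the presence of any single sensitive record.
# Epsilon, category names, and helper names are illustrative assumptions.
import math
import random

def laplace_noise(scale, rng):
    """Draw one sample from a zero-mean Laplace distribution."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_reduce(counts, epsilon, rng):
    """Add Laplace(1/epsilon) noise to each reduced complaint count."""
    scale = 1.0 / epsilon
    return {k: v + laplace_noise(scale, rng) for k, v in counts.items()}

rng = random.Random(42)                  # fixed seed for reproducibility
counts = {"water": 120, "roads": 45, "power": 78}   # reduce-phase output
private = noisy_reduce(counts, epsilon=0.5, rng=rng)
```

A smaller `epsilon` yields more noise and stronger masking of individual records, at the cost of less accurate aggregate counts.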
Abstract: Big Data and analytics have gained a huge momentum
in recent years. Big Data feeds into the field of Learning Analytics
(LA) that may allow academic institutions to better understand the
learners’ needs and proactively address them. Hence, it is important
to have an understanding of Big Data and its applications. The
purpose of this descriptive paper is to provide an overview of Big
Data, the technologies used in Big Data, and some of the applications
of Big Data in education. Additionally, it discusses some of the
concerns related to Big Data and current research trends. While Big
Data can provide big benefits, it is important that institutions
understand their own needs, infrastructure, resources, and limitations
before jumping on the Big Data bandwagon.
Abstract: The aim of this paper is to understand the learning conditions that emerge when visual analytics is implemented and used in K-12 education. To date, little attention has been paid to the role visual analytics (digital media and technology that highlight visual data communication in order to support analytical tasks) can play in education, and to the extent to which these tools can process actionable data for young students. This study was conducted in three public K-12 schools, in four social science classes with students aged 10 to 13 years, over a period of two to four weeks at each school. Empirical data were generated using video observations and analyzed with the help of metaphors within Actor-Network Theory (ANT). The learning conditions are found to be distinguished by broad complexity, characterized by four dimensions, which emerge from the actors' deeply intertwined relations in the activities. In relation to these dimensions, the paper argues that novel approaches to teaching and learning could benefit students' knowledge building as they work with visual analytics, analyzing visualized data.
Abstract: Customer churn prediction is one of the most useful areas of study in customer analytics. Due to the enormous amount of data available for such predictions, machine learning and data mining have been heavily used in this domain. Many machine learning algorithms are directly applicable to the problem of customer churn prediction; here, we experiment with a novel approach, using a cognitive-learning-based technique in an attempt to improve the results of supervised learning methods by combining them with cognitive unsupervised learning methods.
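The general pattern of combining unsupervised and supervised learning for churn can be sketched as clustering customers first, then feeding each customer's cluster label to a classifier as an extra feature. The 1-D k-means, the toy usage data, and the feature layout below are all illustrative assumptions, not the paper's actual cognitive-learning method.

```python
# Sketch: augment supervised churn features with an unsupervised cluster
# label. The clustering variable (monthly minutes) and the two-cluster
# setup are illustrative assumptions.

def kmeans_1d(values, centroids, iters=20):
    """Lloyd's algorithm on scalars; returns the final centroids."""
    for _ in range(iters):
        groups = {i: [] for i in range(len(centroids))}
        for v in values:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(v - centroids[i]))
            groups[nearest].append(v)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in groups.items()]
    return centroids

def assign(v, centroids):
    """Index of the nearest centroid (the customer's segment label)."""
    return min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))

monthly_minutes = [5, 8, 11, 300, 320, 290]     # toy usage values
centroids = kmeans_1d(monthly_minutes, [0.0, 100.0])

# Append the cluster label to each customer's feature vector, ready to
# be consumed by any supervised learner in the second stage.
augmented = [[m, assign(m, centroids)] for m in monthly_minutes]
```

The supervised model then learns from both the raw behaviour and the segment the customer belongs to, which is where the hoped-for improvement over supervised learning alone would come from.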
Abstract: This paper addresses the problem of building secure computational services for encrypted information in cloud computing without decrypting the encrypted data. It thereby meets the aspiration for a computational encryption model that could enhance the security of big data with respect to the privacy, confidentiality, and availability of users' data. The cryptographic model applied to the computational processing of encrypted data is the Fully Homomorphic Encryption scheme. We contribute a theoretical presentation of high-level computational processes, based on number theory and algebra, that can easily be integrated and leveraged in cloud computing, together with detailed mathematical foundations for fully homomorphic encryption models. This contribution supports the full implementation of a big-data-analytics-oriented cryptographic security algorithm.
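The core idea of computing on encrypted data without decrypting it can be illustrated with the Paillier cryptosystem, which is additively homomorphic only; a fully homomorphic scheme of the kind the paper discusses also supports multiplication on ciphertexts. The tiny primes below are chosen for readability and are utterly insecure; the whole block is a textbook illustration, not the paper's construction.

```python
# Sketch: additive homomorphism in Paillier -- multiplying ciphertexts
# adds the underlying plaintexts, so a cloud can sum encrypted values
# it cannot read. Tiny insecure primes for illustration only.
import math
import random

p, q = 17, 19
n = p * q                      # public modulus
n2 = n * n
lam = math.lcm(p - 1, q - 1)   # Carmichael function of n
mu = pow(lam, -1, n)           # valid decryption helper since g = n + 1

def encrypt(m, rng):
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = rng.randrange(1, n)
    # g^m = (1 + n)^m = 1 + m*n  (mod n^2), since g = n + 1
    return (1 + m * n) * pow(r, n, n2) % n2

def decrypt(c):
    x = pow(c, lam, n2)
    return (x - 1) // n * mu % n

rng = random.Random(0)
a, b = 42, 100
c = encrypt(a, rng) * encrypt(b, rng) % n2   # multiply ciphertexts...
result = decrypt(c)                          # ...to add the plaintexts
```

The cloud only ever sees `c`; the key holder recovers `a + b` on decryption, which is the essence of the secure computational services the paper targets.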
Abstract: Health analytics (HA) is used in healthcare systems
for effective decision making, management and planning of
healthcare and related activities. However, user resistance, the unique
nature of medical data content and structure (including heterogeneous
and unstructured data), and ad-hoc HA projects have held up
progress in HA applications. Notably, the accuracy
of outcomes depends on the skills and the domain knowledge of the
data analyst working on the healthcare data. Success of HA depends
on having a sound process model, effective project management and
availability of supporting tools. Thus, to overcome these challenges
through an effective process model, we propose a HA process model
with features from rational unified process (RUP) model and agile
methodology.
Abstract: Over the past era, a lot of effort and many studies have been devoted to developing proficient tools for performing various tasks in big data. Recently, big data has gained a lot of publicity, for good reason. Large and complex collections of datasets are difficult to process with traditional data-processing applications, a concern that makes the development of dedicated big data tools all the more necessary. The main aim of big data analytics is to apply advanced analytic techniques to very large, heterogeneous datasets, ranging in size from terabytes to zettabytes and in type from structured to unstructured and from batch to streaming. Big data techniques are useful for datasets whose size or type is beyond the capability of traditional relational databases to capture, manage, and process with low latency. These challenges have led to the emergence of powerful big data tools. In this survey, a varied collection of big data tools is described and compared in terms of their salient features.
Abstract: There has been a lot of effort and research undertaken in developing efficient tools for performing several tasks in data mining. Due to the massive amount of information embedded in the huge data warehouses maintained in several domains, manual extraction of meaningful patterns is no longer feasible. This issue makes the development of data mining tools all the more necessary. Furthermore, the major aim of data mining software is to build resourceful predictive or descriptive models that handle large amounts of information efficiently and in a user-friendly way. Data mining mainly deals with excessive collections of data that impose rigorous computational constraints. These challenges have led to the emergence of powerful data mining technologies. In this survey, a diverse collection of data mining tools is exemplified and contrasted in terms of the salient features and performance behavior of each tool.
Abstract: Big Data has the potential to improve the quality of services; enable the infrastructure that businesses depend on to adapt continually and efficiently; improve the performance of employees; help organizations better understand customers; and reduce liability risks. The analytics and marketing models of fixed and mobile operators are falling short in combating churn and declining revenue per user. Big Data presents new methods to reverse this trend and improve profitability. The benefits of Big Data and next-generation networks, however, extend well beyond improved customer relationship management. Next-generation networks are in a prime position to monetize rich supplies of customer information, while being mindful of legal and privacy issues. Transforming data assets into new revenue streams will become integral to high performance.
Abstract: Due to the widespread adoption of mobile sensing, there is a strong need to handle the trails of moving objects, i.e., trajectories. This paper proposes three visual analytics approaches for extracting higher-order information from trajectory datasets, based on the higher-order Voronoi diagram data structure. The proposed approaches reveal geometrical, topological, and directional information. Experimental results demonstrate the applicability and usefulness of the three proposed approaches.
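The defining property of the order-k Voronoi diagram underlying this work can be illustrated by brute force: a point's order-k cell is determined by the set of its k nearest sites. The paper builds this structure efficiently; the sketch below only demonstrates the definition, with illustrative site coordinates.

```python
# Sketch: order-k Voronoi membership by brute force. Two points lie in
# the same order-k cell exactly when they share the same set of k
# nearest sites. Sites and query points are illustrative.

def k_nearest_sites(point, sites, k):
    """Frozenset of the indices of the k sites nearest to point."""
    px, py = point
    ranked = sorted(range(len(sites)),
                    key=lambda i: (sites[i][0] - px) ** 2 +
                                  (sites[i][1] - py) ** 2)
    return frozenset(ranked[:k])

sites = [(0, 0), (10, 0), (0, 10), (10, 10)]
# These two points share their 2-nearest-site set, so they fall in the
# same order-2 Voronoi cell.
cell_a = k_nearest_sites((1, 0.5), sites, 2)
cell_b = k_nearest_sites((3, 1), sites, 2)
```

For trajectory analytics, replacing a single nearest site with a set of k nearest sites is what exposes the higher-order geometrical and topological relations the paper visualizes.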
Abstract: The future of business intelligence (BI) is to integrate intelligence into operational systems that work in real time, analyzing small chunks of data on a continuous basis as required. This is a move away from the traditional approach of doing analysis on an ad-hoc basis, or sporadically in a passive, off-line mode, analyzing huge amounts of data. Various AI techniques, such as expert systems, case-based reasoning, and neural networks, play an important role in building business intelligence systems. Since BI involves various tasks and models various types of problems, hybrid intelligent techniques can be a better choice. Intelligent systems accessible through web services are easier to integrate into existing operational systems, adding intelligence to every business process. These systems can be built to be invoked in a modular and distributed way and to work in real time, and their functionality can be extended to accept external inputs in formats such as RSS. In this paper, we describe a framework that uses effective combinations of these techniques, is accessible through web services, and works in real time. We have successfully developed various prototype systems and carried out a few commercial deployments in the area of personalization and recommendation on mobile platforms and websites.
Abstract: The proliferation of user-generated content (UGC) results in huge opportunities to explore event patterns. However, existing event recommendation systems primarily focus on advanced information technology users. Little work has been done to address novice and low-literacy users. The next billion users providing and consuming UGC are likely to include communities from developing countries who are ready to use affordable technologies for subsistence goals. Therefore, we propose a design framework for providing event recommendations to address the needs of such users. Grounded in information integration theory (IIT), our framework advocates that effective event recommendation is supported by systems capable of (1) reliable information gathering through structured user input, (2) accurate sense making through spatial-temporal analytics, and (3) intuitive information dissemination through interactive visualization techniques. A mobile pest management application is developed as an instantiation of the design framework. Our preliminary study suggests a set of design principles for novice and low-literacy users.
Abstract: Advances in location-based data collection technologies such as GPS and RFID, and the rapid reduction of their costs, provide us with a huge and continuously increasing amount of data about the movement of vehicles, people, and goods in urban areas. This explosive growth of geospatially-referenced data has far outpaced planners' ability to utilize and transform the data into insightful information, creating an adverse impact on the return on the investment made to collect and manage this data. Addressing this pressing need, we designed and developed DIVAD, a dynamic and interactive visual analytics dashboard that allows city planners to explore and analyze a city's transportation data to gain valuable insights about the city's traffic flow and transportation requirements. We demonstrate the potential of DIVAD through the use of interactive choropleth and hexagon-binning maps to explore and analyze large taxi-transportation data of Singapore for different geographic and time zones.
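The hexagon-binning step behind maps like DIVAD's can be sketched as snapping each point to the axial coordinates of its containing hexagon and counting points per hexagon. The pointy-top orientation, hexagon size, and sample pick-up points below are illustrative assumptions, not details of DIVAD itself.

```python
# Sketch: hexagon binning -- snap each taxi pick-up to its hexagon's
# axial (q, r) coordinates, then count per hexagon to drive a colour
# scale. Pointy-top hexagons and sample points are illustrative.
import math
from collections import Counter

def hex_bin(x, y, size):
    """Axial (q, r) of the pointy-top hexagon of side `size` containing (x, y)."""
    q = (math.sqrt(3) / 3 * x - y / 3) / size
    r = (2 / 3 * y) / size
    # Cube rounding: round q, r, s = -q-r, then repair the component
    # with the largest rounding error so the three still sum to zero.
    s = -q - r
    rq, rr, rs = round(q), round(r), round(s)
    dq, dr, ds = abs(rq - q), abs(rr - r), abs(rs - s)
    if dq > dr and dq > ds:
        rq = -rr - rs
    elif dr > ds:
        rr = -rq - rs
    return (rq, rr)

pickups = [(0.1, 0.1), (-0.2, 0.1), (1.8, 0.0), (1.7, 0.1)]
counts = Counter(hex_bin(x, y, size=1.0) for x, y in pickups)
```

Compared with square grids, hexagons give each bin six equidistant neighbours, which is why they are a popular choice for density maps of point data such as taxi pick-ups.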
Abstract: Weblogs are a social resource for discovering and tracking the various types of information written by bloggers. In this paper, we propose a weblog mining technique for identifying influenza trends from posts in which bloggers have disseminated their opinions about the disease. To identify the trends, a web crawler performs a search and generates a list of visited links based on a set of influenza keywords. This information is used to implement an analytics reporting system for monitoring and analyzing the patterns and trends of influenza (H1N1). Statistical and graphical analysis reports are generated, and both types of report satisfactorily reflect Malaysians' awareness of the influenza outbreak as expressed through blogs.
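The keyword-matching step that turns crawled posts into a trend report can be sketched as counting, per day, the posts that mention any influenza keyword. The keyword list, post records, and field names below are illustrative assumptions about the crawler's output, not the paper's actual implementation.

```python
# Sketch: build a daily influenza trend report from crawled blog posts
# by counting posts that mention any keyword. Data and field names are
# illustrative assumptions.
from collections import Counter

KEYWORDS = {"influenza", "h1n1", "flu"}

def trend(posts):
    """Count posts per date that mention at least one influenza keyword."""
    hits = Counter()
    for post in posts:
        words = set(post["text"].lower().split())
        if words & KEYWORDS:
            hits[post["date"]] += 1
    return dict(sorted(hits.items()))

posts = [
    {"date": "2009-07-01", "text": "H1N1 cases reported near my town"},
    {"date": "2009-07-01", "text": "got the flu again this week"},
    {"date": "2009-07-02", "text": "weather is lovely today"},
    {"date": "2009-07-02", "text": "influenza screening at the clinic"},
]
report = trend(posts)
```

The resulting date-to-count mapping is the raw material for the statistical and graphical reports the paper describes.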
Abstract: This paper compares the search engine marketing strategies adopted in China and the Western countries through two illustrative cases, namely Google and Baidu. Marketers in the West use search engine optimization (SEO) to rank their sites higher for queries in Google. Baidu, however, offers paid search placement, i.e., the selling of engine results for particular keywords to the highest bidders. Whereas Google has been providing innovative services ranging from Google Maps to Google Blog, Baidu remains focused on search services, the one thing it does best. The challenges and opportunities that the Chinese Internet market offers to global entrepreneurs are also discussed in the paper.
Abstract: Traditional higher-education classrooms allow lecturers to observe students' behaviours and responses to a particular pedagogy during learning, in a way that can inform changes to the pedagogical approach. Within current e-learning systems it is difficult to perform continuous analysis of a cohort's behavioural tendencies, making real-time pedagogical decisions difficult. This paper presents a Virtual Learning Process Environment (VLPE) based on the Business Process Management (BPM) conceptual framework. Within the VLPE, course designers can model various educational pedagogies in the form of learning process workflows using an intuitive flow diagram interface. These diagrams are used to visually track the learning progress of a cohort of students. This helps assess the effectiveness of the chosen pedagogy and provides the information required to improve course design. A case scenario of a cohort of students is presented, and quantitative statistical analysis of their learning process performance is gathered and displayed in real time using dashboards.
Abstract: Web intelligence, if made personal, can fuel the process of building communications around the interests and preferences of each individual customer or prospect by providing specific behavioral insights about each individual. To become fully efficient, Web intelligence must reach a stage of high-level maturity, passing through a process that involves five steps: (1) Web site analysis; (2) Web site and advertising optimization; (3) Segment targeting; (4) Interactive marketing (online only); and (5) Interactive marketing (online and offline). Discussing these steps in detail, the paper uncovers the real gold mine that is personal-level Web intelligence.
Abstract: We discuss a theoretical conceptual framework to help understand how new business analytics technologies have diffused in firms. We draw on three theoretical perspectives for this purpose: innovation diffusion theory, IT business value, and the technology-organization-environment framework. We develop a conceptual framework that helps understand the interlinkages among the factors affecting the diffusion of business analytics and its impact on performance.