Abstract: The web has become a modern encyclopedia in which people share
their thoughts and ideas on the topics around them. Such an encyclopedia
is very useful for other people who are looking for answers to their
questions. However, with the growing popularity of social networking and
blogging and the ever-expanding range of network services, there is also
a growing diversity of technologies and of individual website
structures. It is therefore difficult for a common Internet user to find
a relevant answer directly. This paper presents a web application for
the real-time, end-to-end analysis of selected Internet trends, where a
trend can be anything people post online. The application integrates
fully configurable tools for data collection, for analysis using
selected webometric algorithms, and for the chronological visualization
of the results to the user. The application is expected to help users
evaluate the quality of various products that are mentioned online.
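The abstract does not describe the application's internals, so the following is only a minimal sketch of the kind of chronological trend analysis it mentions: posts are bucketed by date so that mentions of a trend can later be visualized over time. The sample posts and the simple case-insensitive substring match are hypothetical simplifications, not the tool's actual data model.

```python
from collections import Counter

def mentions_over_time(posts, trend):
    """Bucket mentions of a trend term by date for chronological display.

    `posts` is a list of (date_str, text) pairs -- a simplified,
    hypothetical stand-in for the data the application collects.
    """
    series = Counter()
    for date, text in posts:
        if trend.lower() in text.lower():
            series[date] += 1
    # Sort by date so the series can be plotted chronologically.
    return dict(sorted(series.items()))

# Hypothetical sample data.
posts = [
    ("2013-05-01", "The new phone looks great"),
    ("2013-05-01", "Not impressed by the phone battery"),
    ("2013-05-02", "phone camera review"),
]
print(mentions_over_time(posts, "phone"))
# → {'2013-05-01': 2, '2013-05-02': 1}
```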
Abstract: This article deals with the popularity of the candidates for President of the United States of America. Their popularity is assessed from public comments on the Web 2.0. Social networking, blogging, and online forums (collectively, Web 2.0) are the easiest way for common Internet users to share their personal opinions, thoughts, and ideas with the entire world. However, the diversity of web content, the variety of technologies, and the differences in website structure make the Web 2.0 a network of heterogeneous data in which things are difficult for common users to find. The introductory part of the article describes a methodology for gathering and processing data from the Web 2.0. The next part of the article focuses on the evaluation and content analysis of the obtained information that discusses the presidential candidates.
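The abstract does not state how popularity is computed; one naive proxy would be counting how many comments mention each candidate. The sketch below uses hypothetical comments and candidate names, not the article's corpus or method.

```python
from collections import Counter

def candidate_mentions(comments, candidates):
    """Count how many comments mention each candidate (case-insensitive).

    A deliberately naive popularity proxy: one comment contributes at
    most one mention per candidate, regardless of sentiment.
    """
    counts = Counter()
    for comment in comments:
        text = comment.lower()
        for name in candidates:
            if name.lower() in text:
                counts[name] += 1
    return counts

# Hypothetical sample comments.
comments = [
    "I think Obama has the stronger economic plan.",
    "Romney's debate performance was impressive.",
    "Obama and Romney both dodged the question.",
]
print(candidate_mentions(comments, ["Obama", "Romney"]))
# → Counter({'Obama': 2, 'Romney': 2})
```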
Abstract: Web 2.0 (social networking, blogging, and online
forums) can serve as a data source for social science research because
it contains a vast amount of information from many different users.
The volume of that information has been growing at a very high rate,
and Web 2.0 is becoming a network of heterogeneous data; this makes
content difficult to find and therefore of limited use. We have proposed
a novel theoretical model for gathering and processing data from
Web 2.0 that better reflects the semantic content of web pages. This
article deals with the analysis part of the model and its use for the
content analysis of blogs. The introductory part of the article
describes the methodology for gathering and processing data from blogs.
The next part of the article focuses on the evaluation and content
analysis of blogs that write about a specific trend.
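The abstract leaves the content-analysis step unspecified; one very simple form of it is term-frequency analysis over the gathered posts. The sketch below, with a hypothetical stopword list and sample posts, reports the most common non-stopword terms as a crude summary of what the blogs discuss.

```python
import re
from collections import Counter

# Hypothetical, deliberately tiny stopword list.
STOPWORDS = {"the", "a", "is", "and", "of", "to", "in"}

def top_terms(posts, n=3):
    """Naive content analysis: the most frequent non-stopword terms
    across a list of blog post texts."""
    words = Counter()
    for text in posts:
        for w in re.findall(r"[a-z']+", text.lower()):
            if w not in STOPWORDS:
                words[w] += 1
    return words.most_common(n)

# Hypothetical posts about a tracked trend.
posts = [
    "The tablet battery life is excellent",
    "Battery problems ruin the tablet experience",
]
print(top_terms(posts))
# → [('tablet', 2), ('battery', 2), ('life', 1)]
```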
Abstract: Software testing is an important stage of the software development cycle. The current testing process involves a tester and electronic documents with test case scenarios. In this paper we focus on a new approach to the testing process that uses automated test case generation and tester guidance through the system, both based on a model of the system. Test case generation and model-based testing are not possible without a proper system model. We aim to provide better feedback from the testing process and thus eliminate unnecessary paperwork.
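The paper's own model notation is not given in the abstract; as a general illustration of model-based test case generation, a system modelled as a finite state machine can yield one test case per reachable transition (all-transitions coverage). The login-form model below is a hypothetical example, not the paper's case study.

```python
from collections import deque

def generate_test_cases(transitions, start):
    """Breadth-first traversal of an FSM model; each reachable
    transition becomes one test case: the event sequence that leads to
    the source state and then fires the transition."""
    cases, seen = [], set()
    queue = deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        for src, event, dst in transitions:
            if src == state and (src, event) not in seen:
                seen.add((src, event))
                cases.append(path + [event])
                queue.append((dst, path + [event]))
    return cases

# Hypothetical login-form model: (source state, event, target state).
model = [
    ("start", "open_form", "form"),
    ("form", "submit_valid", "logged_in"),
    ("form", "submit_invalid", "form"),
]
for case in generate_test_cases(model, "start"):
    print(" -> ".join(case))
# → open_form
# → open_form -> submit_valid
# → open_form -> submit_invalid
```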