Development of a Complex Meteorological Support System for UAVs

The sensitivity of UAVs to atmospheric effects is apparent. Nevertheless, meteorological support for UAV missions is often inadequate or partly missing. In this paper we present a new complex meteorological support system for different types of UAV pilots, specialists and decision makers. The system has two main components based on different forecasting approaches, a statistical one and a dynamical one. The statistical approach is based on a large climatological database and a special analog method, which selects similar weather situations from this database and applies them during the forecasting procedure. The dynamical approach uses dedicated WRF model runs twice a day to produce 96-hour, high-resolution weather forecasts for UAV users over Hungary. An easy-to-use web-based system provides important weather information over the Carpathian Basin in Central Europe; the products can be reached via an internet connection.
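
As a rough illustration of the analog idea only (the actual features, weighting and climatological database of the described system are not specified in the abstract), a nearest-neighbour selection of similar past weather situations could look like this:

    # Minimal sketch of an analog-type selection step (illustrative only).
    import numpy as np

    def select_analogs(current_state, archive, k=10):
        """Return indices of the k archived weather situations closest to the
        current state, using plain Euclidean distance on standardized features."""
        archive = np.asarray(archive, dtype=float)
        current = np.asarray(current_state, dtype=float)
        # Standardize each feature so wind speed, temperature, pressure, etc. are comparable.
        mean, std = archive.mean(axis=0), archive.std(axis=0) + 1e-9
        d = np.linalg.norm((archive - current) / std, axis=1)
        return np.argsort(d)[:k]

    # Example with 3 hypothetical features: wind speed [m/s], temperature [C], pressure [hPa]
    past = [[4.0, 12.0, 1013.0], [9.5, 7.0, 998.0], [5.0, 11.0, 1010.0]]
    print(select_analogs([5.2, 11.5, 1011.0], past, k=2))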

Aspects to Motivate Users of a Design Engineering Wiki to Share Their Knowledge

Industrial design engineering is an information- and knowledge-intensive job. Although Wikipedia offers a lot of this information, design engineers are better served with a wiki tailored to their job, offering information in a compact manner and functioning as a design tool. For that reason WikID has been developed. However, for the viability of a wiki an active user community is essential. The main subject of this paper is a study of the influence of the communication and the contents of WikID on the users' willingness to contribute. First, the theory on a website's first impression, general usability guidelines and user motivation in online communities is studied. Using this theory, the aspects of the current site are analyzed for their suitability. These results have been verified with a questionnaire amongst 66 industrial design engineers (or industrial design engineering students). The main conclusion is that design engineers are enthusiastic about the existence of WikID and its knowledge structure (taxonomy), but this structure does not become clear without guidance. In other words, the knowledge structure is very helpful for inspiring and guiding design engineers through their tailored knowledge domain in WikID, but this taxonomy has to be communicated better on the main page. In addition, the main page needs to be fitted more closely to the preferences of the target group.

Visual Study on Flow Patterns and Heat Transfer during Convective Boiling Inside Horizontal Smooth and Microfin Tubes

The evaporator is an important and widely used heat exchanger in the air conditioning and refrigeration industries. Different methods have been used by investigators to increase the heat transfer rates in evaporators. One of the passive techniques to enhance the heat transfer coefficient is the application of microfin tubes. The mechanism of heat transfer augmentation in microfin tubes depends on the flow regime of the two-phase flow. Therefore many investigations of the flow patterns for in-tube evaporation have been reported in the literature. The gravitational force, the surface tension and the vapor-liquid interfacial shear stress are known as the three dominant factors controlling the vapor and liquid distribution inside the tube. A review of the existing literature reveals that previous investigations were concerned with the two-phase flow pattern for flow boiling in horizontal tubes [12], [9]. Therefore, the objective of the present investigation is to obtain information about the two-phase flow patterns for evaporation of R-134a inside horizontal smooth and microfin tubes. In addition, heat transfer during flow boiling of R-134a inside horizontal microfin and smooth tubes has been investigated experimentally. The heat transfer coefficients for annular flow in the smooth tube are shown to agree well with Gungor and Winterton's correlation [4]. All the flow patterns observed in the tests can be divided into three dominant regimes, i.e., stratified-wavy flow, wavy-annular flow and annular flow. The experimental data are plotted in two kinds of flow maps, i.e., a vapor Weber number versus liquid Weber number map and a mass flux versus vapor quality map. The transition from wavy-annular flow to annular or stratified-wavy flow is identified in these flow maps.
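
For reference, the Weber numbers used as the coordinates of the first flow map are commonly built from the mass flux G, vapor quality x, tube inner diameter d, surface tension and phase densities; the forms below are the usual superficial-velocity-based definitions and are given only as a sketch, since the exact definitions adopted in the paper are not stated in the abstract:

    \mathrm{We} = \frac{\rho u^2 L}{\sigma}, \qquad
    \mathrm{We}_v = \frac{(G x)^2 d}{\rho_v \sigma}, \qquad
    \mathrm{We}_l = \frac{\bigl(G (1 - x)\bigr)^2 d}{\rho_l \sigma}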

A Combination of Similarity Ranking and Time for Social Research Paper Searching

Nowadays social media are important tools for web resource discovery. The performance and capabilities of web searches are vital, especially for search results from social research paper bookmarking. This paper proposes a new ranking algorithm, CSTRank, which combines similarity ranking with the paper posting time. The posting time is a static ranking factor used to improve search results. In this study, the posting time is combined with similarity ranking to produce a better ranking than methods such as similarity ranking alone (SimRank). The retrieval performance of the combined rankings is evaluated using mean NDCG values. The experiments imply that CSTRank with a weight ratio of 90:10 can improve the efficiency of research paper searching on social bookmarking websites.
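
As a minimal sketch of the kind of combination described (the exact scoring and normalization of CSTRank are not reproduced here; the recency function and example data below are assumptions), a weighted blend of similarity and posting time could be written as:

    # Illustrative weighted combination of similarity and recency scores.
    from datetime import datetime

    def recency_score(posted, now, scale_days=365.0):
        """Map posting time to (0, 1]; newer papers score higher."""
        age_days = (now - posted).days
        return 1.0 / (1.0 + age_days / scale_days)

    def combined_score(similarity, posted, now, w_sim=0.9, w_time=0.1):
        """Weighted combination, e.g. the 90:10 ratio mentioned in the abstract."""
        return w_sim * similarity + w_time * recency_score(posted, now)

    now = datetime(2012, 1, 1)
    papers = [("A", 0.82, datetime(2005, 6, 1)), ("B", 0.78, datetime(2011, 9, 1))]
    ranked = sorted(papers, key=lambda p: combined_score(p[1], p[2], now), reverse=True)
    print([name for name, _, _ in ranked])   # B overtakes A thanks to recency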

Multi-Dimensional Concern Mining for Web Applications via Concept Analysis

Web applications have become very complex and crucial, especially when combined with areas such as CRM (Customer Relationship Management) and BPR (Business Process Reengineering), so the scientific community has focused its attention on Web application design, development, analysis and testing, by studying and proposing methodologies and tools. This paper proposes an approach to automatic multi-dimensional concern mining for Web applications, based on concept analysis, impact analysis and token-based concern identification. This approach lets the user analyse and traverse Web software relevant to a particular concern (concept, goal, purpose, etc.) via multi-dimensional separation of concerns, in order to document, understand and test Web applications. This technique was developed in the context of the WAAT (Web Applications Analysis and Testing) project. A semi-automatic tool to support this technique is currently under development.
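
As a tiny, purely illustrative sketch of the concept-analysis ingredient (the pages, tokens and the paper's actual token extraction and multi-dimensional analysis are not represented here), a formal concept groups a maximal set of pages with the maximal set of tokens they share:

    # Naive formal concept enumeration on a made-up page/token context.
    from itertools import combinations

    context = {                      # page -> tokens occurring in it (invented)
        "cart.jsp":     {"session", "cart", "price"},
        "checkout.jsp": {"session", "cart", "payment"},
        "login.jsp":    {"session", "user"},
    }

    def concepts(ctx):
        pages = list(ctx)
        found = set()
        for r in range(1, len(pages) + 1):
            for group in combinations(pages, r):
                common = set.intersection(*(ctx[p] for p in group))
                # closure: all pages that contain every common token
                extent = frozenset(p for p in pages if common <= ctx[p])
                found.add((extent, frozenset(common)))
        return found

    for extent, intent in sorted(concepts(context), key=lambda c: -len(c[0])):
        print(sorted(extent), "share", sorted(intent))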

Finding Authoritative Researchers on Academic Web Sites

In this paper, we present a methodology for finding authoritative researchers by analyzing academic Web sites. We show a case study in which we concentrate on the Web sites of a set of Czech computer science departments. We analyze the relations between them via hyperlinks and find the most important ones using several common ranking algorithms. We then examine the contents of the research papers present on these sites and determine the most authoritative Czech authors.
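
As an illustration of the link-analysis step only (the sites and links below are invented, and PageRank is just one of the common ranking algorithms the abstract refers to), the ranking of interlinked department sites could be computed as follows:

    # Rank sites by the hyperlinks between them using PageRank.
    import networkx as nx

    links = [("cs.siteA.cz", "cs.siteB.cz"),
             ("cs.siteB.cz", "cs.siteC.cz"),
             ("cs.siteC.cz", "cs.siteA.cz"),
             ("cs.siteB.cz", "cs.siteA.cz")]

    G = nx.DiGraph(links)
    scores = nx.pagerank(G, alpha=0.85)
    for site, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{site}: {score:.3f}")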

Implementing an Adaptive Behavior for Spread Spectrum Watermarking Procedures

The advances in multimedia and networking technologies have created opportunities for Internet pirates, who can easily copy multimedia contents and illegally distribute them on the Internet, thus violating the legal rights of content owners. This paper describes how a simple and well-known watermarking procedure, based on a spread spectrum method and watermark recovery by correlation, can be improved to effectively and adaptively protect MPEG-2 videos distributed on the Internet. In its simplest form, the procedure is vulnerable to a variety of attacks. However, its security and robustness have been increased, and its behavior has been made adaptive with respect to the video terminals used to open the videos and the network transactions carried out to deliver them to buyers. Such an adaptive behavior enables the proposed procedure to efficiently embed watermarks, and this characteristic makes the procedure well suited to web contexts, where watermarks, usually generated from fingerprinting codes, have to be inserted into the distributed videos "on the fly", i.e. during the purchase web transactions.
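
As a hedged illustration of the "on the fly" generation of buyer-specific watermarks from fingerprinting codes (the SHA-256 derivation and the code format below are assumptions, not the paper's actual construction), a reproducible spread spectrum sequence could be derived per purchase as follows:

    # Derive a buyer-specific +/-1 sequence from a fingerprinting code.
    import hashlib
    import numpy as np

    def watermark_from_fingerprint(fingerprint_code: str, length: int) -> np.ndarray:
        """Hash the fingerprint code into a seed and expand it to a +/-1 sequence."""
        seed = int.from_bytes(hashlib.sha256(fingerprint_code.encode()).digest()[:8], "big")
        rng = np.random.default_rng(seed)
        return rng.choice([-1.0, 1.0], size=length)

    wm_buyer_a = watermark_from_fingerprint("buyer-0001|order-789", 4096)
    wm_buyer_b = watermark_from_fingerprint("buyer-0002|order-790", 4096)
    # Sequences of different buyers are nearly uncorrelated, which is what makes
    # correlation-based tracing of a leaked copy possible.
    print(abs(float(np.dot(wm_buyer_a, wm_buyer_b)) / 4096))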

Tracking Activity of Real Individuals in Web Logs

This paper describes an enhanced cookie-based method for counting the visitors of web sites, using a web log processing system that aims at the ambitious goal of creating countrywide statistics about the browsing practices of real human individuals. The focus is on a new, more efficient way of detecting the human beings behind web users by placing different identifiers on the client computers. We briefly introduce our processing system, designed to handle the massive amount of data records continuously gathered from the most important content providers of Hungary. We conclude with statistics over different time spans that compare the efficiency of several visitor counting methods with the one presented here, together with some interesting charts about content providers and web usage based on real data recorded in 2007.
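
As a toy sketch of the counting idea only (the log format and field names are invented), distinct persistent client identifiers approximate real visitors, whereas raw requests or session cookies overcount:

    # Count requests, sessions and distinct persistent identifiers in a toy log.
    import csv
    from io import StringIO

    sample_log = """timestamp,client_id,session_id,url
    2007-03-01T10:00:00,uid-001,s-91,/index
    2007-03-01T10:05:00,uid-001,s-91,/news
    2007-03-02T08:00:00,uid-001,s-92,/index
    2007-03-02T09:00:00,uid-002,s-93,/index
    """

    rows = list(csv.DictReader(StringIO(sample_log)))
    print("requests:", len(rows))
    print("sessions:", len({r["session_id"] for r in rows}))
    print("visitors:", len({r["client_id"] for r in rows}))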

In Search of Excellence – Google vs Baidu

This paper compares the search engine marketing strategies adopted in China and in Western countries through two illustrative cases, namely Google and Baidu. Marketers in the West use search engine optimization (SEO) to rank their sites higher for queries in Google. Baidu, however, offers paid search placement, i.e. the selling of search results for particular keywords to the highest bidders. Whereas Google has been providing innovative services ranging from Google Maps to Google Blog, Baidu remains focused on search services, the one thing it does best. The challenges and opportunities that the Chinese Internet market offers to global entrepreneurs are also discussed in the paper.

A Web Oriented Spread Spectrum Watermarking Procedure for MPEG-2 Videos

In the last decade digital watermarking procedures have become increasingly applied to implement the copyright protection of multimedia digital contents distributed on the Internet. To this end, it is worth noting that a lot of watermarking procedures for images and videos proposed in the literature are based on spread spectrum techniques. However, some scepticism about the robustness and security of such watermarking procedures has arisen because of some documented attacks which claim to render the inserted watermarks undetectable. On the other hand, web content providers wish to exploit watermarking procedures characterized by flexible and efficient implementations which can be easily integrated into their existing web service frameworks or platforms. This paper shows how a simple spread spectrum watermarking procedure for MPEG-2 videos can be modified to be exploited in web contexts. To this end, the proposed procedure has been made secure and robust against some well-known and dangerous attacks. Furthermore, its basic scheme has been optimized by making the insertion procedure adaptive with respect to the terminals used to open the videos and the network transactions carried out to deliver them to buyers. Finally, two different implementations of the procedure have been developed: the former is a high-performance parallel implementation, whereas the latter is a portable Java- and XML-based implementation. Thus, the paper demonstrates that a simple spread spectrum watermarking procedure, with limited and appropriate modifications to the embedding scheme, can still represent a valid alternative to many other well-known and more recent watermarking procedures proposed in the literature.
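
As a minimal sketch of the underlying textbook scheme only (not the adaptive, attack-hardened procedure developed in the paper; the coefficients, strength and threshold below are placeholders), additive spread spectrum embedding and correlation detection look like this:

    # Textbook additive spread spectrum embedding and correlation detection.
    import numpy as np

    rng = np.random.default_rng(seed=7)        # the seed plays the role of the secret key
    coeffs = rng.normal(0.0, 5.0, size=4096)   # stand-in for mid-frequency coefficients
    watermark = rng.choice([-1.0, 1.0], size=coeffs.size)

    alpha = 1.0                                # embedding strength
    marked = coeffs + alpha * watermark        # additive embedding

    def detect(signal, wm, threshold=0.5):
        """Correlate the signal with the known watermark sequence."""
        corr = float(np.dot(signal, wm)) / signal.size
        return corr, corr > threshold

    print(detect(marked, watermark))           # correlation near alpha: watermark present
    print(detect(coeffs, watermark))           # correlation near zero: watermark absent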

Real-time ROI Acquisition for Unsupervised and Touch-less Palmprint

In this paper we propose a novel method to acquire the ROI (region of interest) of unsupervised and touch-less palmprints captured by a web camera in real time. We use the Viola-Jones approach and a skin model to get the target area in real time. Then an innovative coarse-to-fine approach to detect the key points on the hand is described. A new algorithm is used to find the candidate key points coarsely and quickly. In the fine stage, we verify the hand key points with the shape context descriptor. To make the use more comfortable, the method can process hand images in different poses, even when the hand is closed. Experiments show promising results of the proposed method under various conditions.
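
As a rough sketch of the first stage only (the cascade file name and the skin-colour thresholds are assumptions, and the coarse-to-fine key-point and shape-context stages are not reproduced), a Viola-Jones detection combined with a YCrCb skin model could be written as:

    # Narrow the search with a cascade, then keep skin-coloured pixels inside detections.
    import cv2
    import numpy as np

    def hand_mask(frame_bgr, cascade_path="hand_cascade.xml"):
        cascade = cv2.CascadeClassifier(cascade_path)   # hypothetical trained hand cascade
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
        ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
        skin = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))  # commonly used skin range
        mask = np.zeros_like(skin)
        for (x, y, w, h) in boxes:
            mask[y:y + h, x:x + w] = skin[y:y + h, x:x + w]       # skin pixels inside detections
        return mask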

The Open Knowledge Kernel

Web services are pieces of software that can be invoked via a standardized protocol. They can be combined via formalized taskflow languages. The Open Knowledge system is a fully distributed system using P2P technology that allows users to publish these taskflows, and programmers to register their web services, or publish implementations of them, for the roles described in these workflows. Besides this, the system offers the functionality to select a peer that coordinates such an interaction model and informs web services when it is their 'turn'. In this paper we describe the architecture and implementation of the Open Knowledge Kernel, which provides the core functionality of the Open Knowledge system.

A Unique Solution for Designing Low-Cost, Heterogeneous Sensor Networks Using a Middleware Integration Platform

Proprietary sensor network systems are typically expensive and rigid, and make it difficult to incorporate technologies from other vendors. When competing and incompatible technologies are used, creating a non-proprietary system is complex because it requires significant technical expertise and effort, which can be more expensive than a proprietary product. This paper presents the Sensor Abstraction Layer (SAL), which provides middleware architectures with a consistent and uniform view of heterogeneous sensor networks, regardless of the technologies involved. SAL abstracts and hides the hardware disparities and specificities related to accessing, controlling, probing and piloting heterogeneous sensors. SAL is a single software library containing a stable hardware-independent interface with consistent access and control functions to remotely manage the network. The end user has near-real-time access to the collected data via the network, which results in a cost-effective, flexible and simplified system suitable for novice users. SAL has been used to successfully implement several low-cost sensor network systems.
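
As a minimal sketch of the abstraction idea (class and method names are illustrative, not SAL's real API), a uniform sensor interface hides vendor-specific details from client code:

    # One hardware-independent interface, many technology-specific backends.
    from abc import ABC, abstractmethod

    class Sensor(ABC):
        @abstractmethod
        def probe(self) -> dict:
            """Describe the sensor (type, unit, underlying technology)."""
        @abstractmethod
        def read(self) -> float:
            """Return the latest measurement."""

    class OneWireThermometer(Sensor):       # one concrete, vendor-specific backend
        def probe(self):
            return {"type": "temperature", "unit": "C", "bus": "1-wire"}
        def read(self):
            return 21.4                     # placeholder for a real bus access

    def poll(sensors):
        """Client code stays identical regardless of the underlying hardware."""
        return {s.probe()["type"]: s.read() for s in sensors}

    print(poll([OneWireThermometer()]))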

Instructional Design and Development Utilizing Technology: A Student Perspective

The sequence Analyze, Design, Develop, Implement, and Evaluate (ADDIE) provides a powerful methodology for designing computer-based educational materials. Helping students to understand this design process sequence may be achieved by providing them with direct, guided experience. This article examines such help and guidance and the overall learning process from a student's personal experience.

A New Approach to Annotate the Texts of Websites and Documents with a Quite Comprehensive Knowledge Base

Machine-understandable data, when strongly interlinked, constitutes the basis for the Semantic Web. Annotating web documents is one of the major techniques for creating metadata on the Web. Annotating websites defines the data they contain in a form suitable for interpretation by machines. In this paper, we present a new approach to annotate websites and documents by raising the abstraction level of the annotation process to a conceptual level. By this means, we hope to solve some of the problems of current annotation solutions.

A Review of Web Resources in the Teaching of Geotechnical Engineering

The use of computer hardware and software in education and training dates back to the early 1940s, when American researchers developed flight simulators which used analog computers to generate simulated onboard instrument data. Computer software is widely used to help engineers and undergraduate students solve their problems quickly and more accurately. This paper presents a list of computer software used in geotechnical engineering.

Modeling User Behaviour by Planning

A model of user behaviour based on automated planning is introduced in this work. The behaviour of users of web interactive systems can be described in terms of a planning domain encapsulating the timed action patterns that represent the intended user profile. User behaviour recognition is then posed as a planning problem where the goal is to parse a given sequence of user logs of the observed activities while reaching a final state. A general technique for transforming a timed finite state automaton description of the behaviour into a numerical-parameter planning model is introduced. Experimental results show that the performance of a planning-based behaviour model is effective and scalable for real-world applications. A major advantage of the planning-based approach is that it represents plan recognition, plan synthesis and plan optimisation problems in a single automated reasoning framework.
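
As a toy sketch of the recognition idea only (the states, actions and time bounds are invented, and the paper's actual compilation of timed automata into a numerical planning domain is not shown), a timed profile and the parsing of a log sequence might look like this:

    # Timed-automaton-style user profile and log-sequence recognition.
    profile = {
        ("start", "login"):     ("browsing", 60.0),    # action, max seconds allowed
        ("browsing", "search"): ("browsing", 300.0),
        ("browsing", "buy"):    ("done", 600.0),
    }
    final_states = {"done"}

    def recognises(log):
        """log: list of (action, elapsed_seconds) pairs observed for one user."""
        state = "start"
        for action, elapsed in log:
            nxt = profile.get((state, action))
            if nxt is None or elapsed > nxt[1]:
                return False
            state = nxt[0]
        return state in final_states

    print(recognises([("login", 5.0), ("search", 40.0), ("buy", 120.0)]))  # True
    print(recognises([("login", 5.0), ("buy", 9999.0)]))                   # False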

Using a Semantic Self-Organising Web Page-Ranking Mechanism for Public Administration and Education

In the proposed method for Web page ranking, a novel theoretic model is introduced and tested on examples of order relationships among IP addresses. Ranking is induced using a convexity feature, which is learned from these examples using a self-organizing procedure. We consider the problem of self-organizing learning from IP data, represented by a semi-random convex polygon procedure in which the vertices correspond to IP addresses. Based on recent developments in our regularization theory for convex polygons and the corresponding Euclidean-distance-based methods for classification, we develop an algorithmic framework for learning ranking functions based on computational geometric theory. We show that our algorithm is generic, and present experimental results explaining the potential of our approach. In addition, we illustrate the generality of our approach by showing its possible use as a visualization tool for data obtained from diverse domains, such as Public Administration and Education.

New Methods for Designing E-Commerce Databases in Semantic Web Systems (Modern Systems)

The purpose of this paper is to study database models in order to use them efficiently in e-commerce websites. We look for a method that can store and retrieve information in e-commerce websites in a form that semantic web applications can work with, and we also study different technologies for e-commerce databases. Since one of the most important deficits of the semantic web is the shortage of semantic data, as most information is still stored in relational databases, we present an approach to map legacy data stored in relational databases into the Semantic Web using virtually any modern RDF query language, as long as it is closed within RDF. To achieve this goal we study XML structures for the relational databases of older websites, and eventually we go one level above XML and look for a mapping from the relational model (RDM) to RDF. Since a large number of semantic web applications take advantage of the relational model, opening ways to convert it to XML and RDF in modern (semantic web) systems is important.
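
As a hedged sketch of the basic row-to-triples mapping step (the namespace, table and column names are invented, and the paper's mapping rules and query-language constraints are richer than this), legacy relational rows could be exposed as RDF like this:

    # Map each relational row to an RDF resource and each column to a property.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    EX = Namespace("http://example.org/shop/")
    rows = [{"id": 1, "name": "USB cable", "price": 3.5},
            {"id": 2, "name": "Keyboard", "price": 17.0}]

    g = Graph()
    for row in rows:
        product = URIRef(EX[f"product/{row['id']}"])
        g.add((product, RDF.type, EX.Product))
        g.add((product, EX.name, Literal(row["name"])))
        g.add((product, EX.price, Literal(row["price"])))

    print(g.serialize(format="turtle"))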

A Comparison and Analysis of Name Matching Algorithms

Names are important in many societies, even in technologically oriented ones which use, e.g., ID systems to identify individual people. Names such as surnames are the most important, as they are used in many processes, such as the identification of people and genealogical research. On the other hand, variation in names can be a major problem for the identification of and search for people, e.g. in web search or for security reasons. Name matching presumes a priori that a recorded name written in one alphabet reflects the phonetic identity of two samples or some transcription error in copying a previously recorded name; we add to this the assumption that the two names refer to the same person. This paper describes name variations and gives a basic description of various name matching algorithms developed to overcome name variation and to find reasonable variants of names, which can be used to handle mismatches in record linkage and name search. The implementation contains algorithms for computing a range of fuzzy matches based on different types of algorithms, e.g. composite and hybrid methods, and allows us to test and measure the algorithms for accuracy. NYSIIS, LIG2 and Phonex have been shown to perform well and provide sufficient flexibility to be included in the linkage/matching process for optimising name searching.
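
As a minimal illustration of phonetic name matching (a simplified classic Soundex, not the NYSIIS, LIG2 or Phonex algorithms evaluated in the paper, and omitting some edge-case rules), spelling variants that sound alike map to the same code:

    # Simplified Soundex: keep the first letter, encode consonants, collapse repeats.
    CODES = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}

    def soundex(name: str) -> str:
        name = "".join(c for c in name.upper() if c.isalpha())
        if not name:
            return ""
        digits = [CODES.get(c, "") for c in name]
        code, prev = name[0], digits[0]
        for d in digits[1:]:
            if d and d != prev:          # skip vowels and collapse repeated codes
                code += d
            prev = d
        return (code + "000")[:4]

    print(soundex("Robert"), soundex("Rupert"))   # R163 R163
    print(soundex("Smith"), soundex("Smyth"))     # S530 S530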