GeNS: a Biological Data Integration Platform

The scientific achievements coming from molecular biology depend greatly on the capability of computational applications to analyze laboratory results. A comprehensive analysis of an experiment typically requires studying the obtained dataset together with data available in several distinct public databases. Nevertheless, developing centralized access to these distributed databases raises a set of challenges: what the best integration strategy is, how to resolve nomenclature clashes, how to handle data that overlaps across databases, and how to deal with huge datasets. In this paper we present GeNS, a system that uses a simple yet innovative approach to address several biological data integration issues. Compared with existing systems, the main advantages of GeNS are its simplicity of maintenance and its coverage and scalability in terms of the number of supported databases and data types. To support our claims we present the current use of GeNS in two concrete applications. GeNS currently contains more than 140 million biological relations and can be publicly downloaded or remotely accessed through SOAP web services.
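
As an illustration of the remote-access route the abstract mentions, the following sketch calls a SOAP endpoint with the Python zeep library; the WSDL location and the getGeneRelations operation are hypothetical placeholders, since the actual GeNS service interface is not described here.

```python
# Minimal sketch of calling a SOAP web service from Python (hypothetical
# WSDL location and operation name; the real GeNS interface may differ).
from zeep import Client

WSDL_URL = "http://example.org/gens/service?wsdl"  # placeholder, not the real endpoint

client = Client(WSDL_URL)

# Hypothetical operation: fetch relations for a gene identifier.
relations = client.service.getGeneRelations(geneId="BRCA1")
for rel in relations:
    print(rel)
```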

Web Application Security, Attacks and Mitigation

Today’s technology is heavily dependent on web applications, which users are adopting at a very rapid pace and which have made our work more efficient. Examples include webmail, online retail, online gaming, wikis, train and flight departure and arrival information, and many more. These applications are developed in languages and frameworks such as PHP, Python, C# and ASP.NET, together with client-side technologies such as HTML and JavaScript. Attackers develop tools and techniques to exploit web applications and legitimate websites. This has led to the rise of web application security, which can be broadly classified into declarative security and program security. The most common attacks on these applications are SQL injection and XSS, which can give unauthorized users access and allow them to damage or destroy the system. This paper presents a detailed literature review and analysis of web application security, examples of attacks, and steps to mitigate the vulnerabilities.
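
As a brief illustration of one common mitigation step of the kind this survey covers, the sketch below contrasts string-concatenated SQL with a parameterized query in Python; the database, table and column names are illustrative only.

```python
# Sketch: mitigating SQL injection with parameterized queries (illustrative
# table/column names; any Python DB-API driver works the same way).
import sqlite3

conn = sqlite3.connect("shop.db")
cur = conn.cursor()

user_input = "alice' OR '1'='1"  # a typical injection payload

# Vulnerable: attacker-controlled input is concatenated into the SQL string.
# cur.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

# Safer: the driver binds the value, so it is never parsed as SQL.
cur.execute("SELECT * FROM users WHERE name = ?", (user_input,))
print(cur.fetchall())
```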

Defining a Semantic Web-based Framework for Enabling Automatic Reasoning on CIM-based Management Platforms

CIM is the standard formalism for modeling management information developed by the Distributed Management Task Force (DMTF) in the context of its WBEM proposal, designed to provide a conceptual view of the managed environment. In this paper, we propose the inclusion of formal knowledge representation techniques, based on Description Logics (DLs) and the Web Ontology Language (OWL), in CIM-based conceptual modeling, and then we examine the benefits of such a decision. The proposal is specified as a CIM metamodel level mapping to a highly expressive subset of DLs capable of capturing all the semantics of the models. The paper shows how the proposed mapping provides CIM diagrams with precise semantics and can be used, as a design aid, for automatic reasoning about the management information models by means of new-generation CASE tools, thanks to the use of state-of-the-art automatic reasoning systems that support the proposed logic and use algorithms that are sound and complete with respect to the semantics. Such a CASE tool framework has been developed by the authors and its architecture is also introduced. The proposed formalization is not only useful at design time, but also at run time through the use of rational autonomous agents, in response to a need recently recognized by the DMTF.
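
To give a flavour of what a mapping from CIM model elements to OWL might produce (the concrete mapping rules are those defined in the paper and are not reproduced here), the sketch below declares an illustrative CIM class hierarchy and association as OWL axioms using the Python rdflib library.

```python
# Sketch: representing a CIM class and one of its associations as OWL axioms
# with rdflib (illustrative names; not the paper's actual mapping rules).
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL

CIM = Namespace("http://example.org/cim#")  # placeholder namespace
g = Graph()
g.bind("cim", CIM)
g.bind("owl", OWL)

# CIM_ManagedElement, CIM_System and CIM_LogicalDevice become OWL classes.
for cls in (CIM.CIM_ManagedElement, CIM.CIM_System, CIM.CIM_LogicalDevice):
    g.add((cls, RDF.type, OWL.Class))

# CIM_LogicalDevice is a subclass of CIM_ManagedElement.
g.add((CIM.CIM_LogicalDevice, RDFS.subClassOf, CIM.CIM_ManagedElement))

# A CIM association becomes an object property with domain and range.
g.add((CIM.SystemDevice, RDF.type, OWL.ObjectProperty))
g.add((CIM.SystemDevice, RDFS.domain, CIM.CIM_System))
g.add((CIM.SystemDevice, RDFS.range, CIM.CIM_LogicalDevice))

print(g.serialize(format="turtle"))
```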

The Impact of Website Personality on Consumers' Initial Trust towards Online Retailing Websites

E-tailing websites are often perceived to be static, impersonal and distant. However, with the movement of the World Wide Web to Web 2.0 in recent years, these online websites have been found to display personalities akin to 'humanistic' qualities and to project impressions much like their retailing counterparts, i.e. salespeople. This paper examines the personality of e-tailing websites and its impact on consumers' initial trust towards the sites. A total of 239 Internet users participated in this field experiment, which utilized six online book retailers' websites that the participants had not visited before. Analysis revealed that out of four website personalities (sincerity, competence, excitement and sophistication), only sincerity and competence are able to exert an influence in building consumers' trust upon their first visit to the website. The implications of the findings are further elaborated in this paper.

Semi-Automatic Trend Detection in Scholarly Repository Using Semantic Approach

Currently, the WWW is the first place scholars turn to when looking for information. However, analyzing and interpreting this volume of information can overload researchers in pursuing their research. Trend detection in scientific publication retrieval systems helps scholars find relevant, new and popular special areas by visualizing the trend of an input topic. However, there is little research on trend detection in scientific corpora, and the models proposed so far do not appear to be suitable. Previous works lack an appropriate representation scheme for research topics. This paper describes a method that combines Semantic Web technologies and ontologies to support advanced search functions such as trend detection in the context of a scholarly Semantic Web system (SSWeb).

An Integrated Biotechnology Database of the National Agricultural Information Center in Korea

The National Agricultural Biotechnology Information Center (NABIC) plays a leading role in the biotechnology information database for agricultural plants in Korea. Since 2002, we have concentrated on functional genomics of major crops, building an integrated biotechnology database for agro-biotech information that focuses on bioinformatics of major agricultural resources such as rice, Chinese cabbage, and microorganisms. The NABIC integrated biotechnology database provides useful information through a user-friendly web interface that allows analysis of genome infrastructure, multiple plants, microbial resources, and living modified organisms.

Protecting the Privacy and Trust of VIP Users on Social Network Sites

There is a real threat to VIPs' personal pages on Social Network Sites (SNS). The main threats to these pages are violation of privacy and identity theft through fake pages that exploit VIPs' names and pictures to attract victims and spread lies. In this paper, we propose a new secure architecture that improves trust, reduces fake pages, and makes genuine VIP pages recognizable on SNS. The proposed architecture works as a third party added to Facebook to provide a trust service for VIPs' personal pages. The mechanism verifies the real identity of the applicant through electronic authentication of personal information, which is stored within the content of their website. As a result, the proposed architecture secures and provides trust for VIPs' personal pages. Furthermore, it can help discover fake pages, protect privacy, reduce identity-theft crimes, and increase the trust and satisfaction of friends and admirers interacting with the SNS.

The Impact of Semantic Web on E-Commerce

Semantic Web technologies enable machines to interpret data published on the web in a machine-interpretable form. At present, only human beings are able to understand the product information published online. The emerging Semantic Web technologies have the potential to deeply influence the further development of the Internet economy. In this paper we propose a scenario-based research approach to predict the effects of these new technologies on electronic markets and on the business models of traders, intermediaries and customers. Over 300 million searches are conducted every day on the Internet by people trying to find what they need. A majority of these searches are in the domain of consumer e-commerce, where a web user is looking for something to buy. This represents a huge cost in terms of person-hours and an enormous drain of resources. Agent-enabled semantic search will have a dramatic impact on the precision of these searches. It will reduce and possibly eliminate the information asymmetry in which a better-informed buyer gets the best value. By affecting this key determinant of market prices, the Semantic Web will foster the evolution of different business and economic models. We submit that there is a need for developing these futuristic models based on our current understanding of e-commerce models and nascent Semantic Web technologies. We believe these business models will encourage mainstream web developers and businesses to join the “semantic web revolution”.

Performance Modeling for Web based J2EE and .NET Applications

When architecting an application, key non-functional requirements such as performance, scalability, availability and security, which influence the architecture of the system, are sometimes not adequately addressed. Performance of the application may not be considered until it becomes a concern. There are several problems with this reactive approach. If the system does not meet its performance objectives, the application is unlikely to be accepted by the stakeholders. This paper suggests an approach to performance modeling for web-based J2EE and .NET applications that addresses performance issues early in the development life cycle. It also includes a performance modeling case study, with proof-of-concept (PoC) and implementation details for the .NET and J2EE platforms.
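
Early-lifecycle performance models of web tiers are often expressed with simple analytic queueing formulas; as a hedged illustration (not necessarily the model used in this paper), the sketch below estimates the mean response time of a single application-server tier treated as an M/M/1 queue.

```python
# Sketch: a very simple analytic performance model (M/M/1 queue) of the kind
# often used to reason about web-tier response time early in design.
# This is an illustrative assumption, not the paper's specific model.

def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time R = 1 / (mu - lambda) for an M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("System is unstable: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

# Example: the server handles 200 requests/s, 150 requests/s arrive.
print(f"Utilization: {150 / 200:.0%}")
print(f"Mean response time: {mm1_response_time(150, 200) * 1000:.1f} ms")
```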

Web Usability: A Fuzzy Approach to the Navigation Structure Enhancement in a Website System, Case of Iranian Civil Aviation Organization Website

With the proliferation of the World Wide Web, the development of web-based technologies and the growth in web content, the structure of a website becomes more complex and web navigation becomes a critical issue for both web designers and users. In this paper we define content and web pages as two important and influential factors in website navigation, and we frame enhancing website navigation as making useful changes in the link structure of the website based on these factors. We then suggest a new method for proposing the changes, using a fuzzy approach to optimize the website architecture. Applying the proposed method to a real case, the Iranian Civil Aviation Organization (CAO) website, we discuss the results of the approach in the final section.
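
To illustrate the kind of fuzzy reasoning such a method might employ (the paper's actual membership functions and rule base are not reproduced here), the sketch below scores a page's importance from its visit frequency and content relevance using simple triangular membership functions and a min (AND) rule.

```python
# Sketch: a toy fuzzy scoring of a web page's importance from two inputs.
# Membership functions and rules are illustrative assumptions, not the
# paper's actual fuzzy model.

def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def page_importance(visit_ratio: float, relevance: float) -> float:
    """Combine 'frequently visited' and 'highly relevant' degrees (both in [0, 1])."""
    frequently_visited = triangular(visit_ratio, 0.3, 1.0, 1.7)
    highly_relevant = triangular(relevance, 0.3, 1.0, 1.7)
    # A simple min (AND) rule: a page is important if it is both
    # frequently visited AND highly relevant.
    return min(frequently_visited, highly_relevant)

print(page_importance(visit_ratio=0.8, relevance=0.6))
```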

From I.A. Richards to Web 3.0: Preparing Our Students for Tomorrow's World

This paper offers suggestions for educators at all levels about how to better prepare our students for the future by building on the past. The discussion begins with a summary of changes in the World Wide Web, especially as the term Web 3.0 is being heard. The bulk of the discussion is retrospective and concerned with an overview of traditional teaching and research approaches as they evolved during the 20th century, beginning with those grounded in the Cartesian reality of I.A. Richards' (1929) Practical Criticism. The paper concludes with a proposal of five strategies which incorporate timeless elements from the past as well as cutting-edge elements from today, in order to better prepare our students for the future.

Building Virtual Reality Environments for Distance Education on the Web: A Case Study in Medical Education

The paper presents an investigation into the role of virtual reality and web technologies in the field of distance education. Within this frame, special emphasis is given to the building of web-based virtual learning environments so that they successfully fulfill their educational objectives. In particular, basic pedagogical methods are studied, focusing mainly on the efficient preparation, approach and presentation of learning content, and specific design rules are presented considering the hypermedia, virtual and educational nature of this kind of application. The paper also aims to highlight the educational benefits arising from the use of virtual reality technology in medicine and to study the emerging area of web-based medical simulations. Finally, an innovative virtual reality environment for distance education in medicine is demonstrated. The proposed environment reproduces conditions of the real learning process and enhances learning through a real-time interactive simulator.

The Journey of a Malicious HTTP Request

SQL injection on web applications is a very popular kind of attack. Mechanisms such as intrusion detection systems exist to detect this attack, but these strategies often rely on techniques implemented at the higher layers of the application and do not consider the low level of system calls. The problem with considering only the high-level perspective is that an attacker can circumvent the detection tools using techniques such as URL encoding. One technique currently used for detecting low-level attacks on privileged processes is the tracing of system calls. System calls act as a single gate to the Operating System (OS) kernel; they allow catching the critical data at an appropriate level of detail. Our basic assumption is that any type of application, be it a system service, utility program or web application, “speaks” the language of system calls when having a conversation with the OS kernel. At this level we can see the actual attack while it is happening. We conduct an experiment to demonstrate the suitability of system call analysis for detecting SQL injection, and we are able to detect the attack. We therefore conclude that system calls are not only powerful in detecting low-level attacks but also enable us to detect high-level attacks such as SQL injection.
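
As a rough illustration of system-call tracing of the kind described above (this is an illustrative pipeline, not the authors' detection model), the sketch below attaches strace to a process and flags write/sendto payloads containing typical SQL injection patterns.

```python
# Sketch: observing a process's system calls with strace and flagging
# payloads that look like SQL injection. Illustrative only; not the
# detection model described in the paper.
import re
import subprocess

SQLI_PATTERN = re.compile(r"(UNION\s+SELECT|OR\s+'1'\s*=\s*'1|--|;\s*DROP\s+TABLE)", re.IGNORECASE)

def trace_and_scan(pid: int) -> None:
    """Attach strace to a process and scan traced write/sendto calls for SQL patterns."""
    proc = subprocess.Popen(
        ["strace", "-f", "-p", str(pid), "-e", "trace=write,sendto", "-s", "512"],
        stderr=subprocess.PIPE, text=True,
    )
    try:
        # strace prints its trace on stderr; runs until interrupted.
        for line in proc.stderr:
            if SQLI_PATTERN.search(line):
                print("Possible SQL injection in syscall payload:", line.strip())
    finally:
        proc.terminate()
```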

A New Version of Annotation Method with a XML-based Knowledge Base

Machine-understandable data, when strongly interlinked, constitutes the basis of the Semantic Web. Annotating web documents is one of the major techniques for creating metadata on the Web. Annotating websites defines the data they contain in a form suitable for interpretation by machines. In this paper, we present an approach, improved over previous work [1], for annotating the texts of websites based on an XML knowledge base.
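
As a hedged illustration of annotating text against an XML knowledge base (the XML layout, tag names and annotation markup below are assumptions, not the paper's actual schema), the following sketch wraps known terms in annotation tags that point back to knowledge-base URIs.

```python
# Sketch: annotating text against a small XML knowledge base (illustrative
# XML layout and tag names; not the paper's actual knowledge-base schema).
import re
import xml.etree.ElementTree as ET

KB_XML = """
<knowledgebase>
  <concept term="Semantic Web" uri="http://example.org/kb/SemanticWeb"/>
  <concept term="metadata" uri="http://example.org/kb/Metadata"/>
</knowledgebase>
"""

def annotate(text: str) -> str:
    """Wrap known terms from the knowledge base in simple annotation tags."""
    kb = ET.fromstring(KB_XML)
    for concept in kb.findall("concept"):
        term, uri = concept.get("term"), concept.get("uri")
        text = re.sub(
            re.escape(term),
            lambda m, uri=uri: f'<annotation uri="{uri}">{m.group(0)}</annotation>',
            text,
            flags=re.IGNORECASE,
        )
    return text

print(annotate("Metadata is the basis of the Semantic Web."))
```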

An Overview of Some High Order and Multi-Level Finite Difference Schemes in Computational Aeroacoustics

In this paper, we have combined some spatial derivatives with the optimised time derivative proposed by Tam and Webb in order to approximate the linear advection equation, $\partial u/\partial t + \partial f/\partial x = 0$. These spatial derivatives are as follows: a standard 7-point 6th-order central difference scheme (ST7), a standard 9-point 8th-order central difference scheme (ST9), and optimised schemes designed by Tam and Webb, Lockard et al., Zingg et al., Zhuang and Chen, and Bogey and Bailly. These seven different spatial derivatives have thus been coupled with the optimised time derivative to obtain seven different finite-difference schemes to approximate the linear advection equation. We have analysed the variation of the modified wavenumber and group velocity, both with respect to the exact wavenumber, for each spatial derivative. The problems considered are the 1-D propagation of a boxcar function, the propagation of an initial disturbance consisting of a sine and a Gaussian function, and the propagation of a Gaussian profile. It is known that the choice of the CFL number affects the quality of results in terms of dissipation and dispersion characteristics. Based on the numerical experiments solved and the numerical methods used to approximate the linear advection equation, it is observed in this work that the quality of results depends on the choice of the CFL number, even for optimised numerical methods. The errors from the numerical results have been quantified into dispersion and dissipation using a technique devised by Takacs. Also, the quantity Exponential Error for Low Dispersion and Low Dissipation (eeldld) has been computed from the numerical results. Moreover, based on this work, it has been found that the quantity eeldld can be used as a measure of the total error. In particular, the total error is a minimum when the eeldld is a minimum.
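
As a small illustration of the modified-wavenumber analysis mentioned above, the sketch below compares the modified wavenumber of the standard 7-point 6th-order central scheme (ST7) with the exact wavenumber; the stencil coefficients are the standard ones for that scheme, while the optimised schemes use different coefficients not reproduced here.

```python
# Sketch: modified wavenumber of the standard 7-point 6th-order central
# difference (ST7) compared with the exact wavenumber.
import numpy as np

# Antisymmetric stencil weights a_j for j = 1, 2, 3 (ST7):
#   f'_i ≈ (1/dx) * sum_j a_j * (f_{i+j} - f_{i-j})
a = np.array([45.0, -9.0, 1.0]) / 60.0

theta = np.linspace(0.01, np.pi, 200)   # exact wavenumber k*dx
j = np.arange(1, 4)
modified = 2.0 * (a[None, :] * np.sin(np.outer(theta, j))).sum(axis=1)

# Relative dispersion error: small while k*dx is well resolved, large as
# the wavelength approaches the grid spacing.
error = np.abs(modified - theta) / theta
print(f"error at k*dx = pi/4: {error[np.argmin(np.abs(theta - np.pi/4))]:.2e}")
print(f"error at k*dx = pi/2: {error[np.argmin(np.abs(theta - np.pi/2))]:.2e}")
```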

Efficient Web-Learning Collision Detection Tool on Five-Axis Machine

As networking has become widespread, web-learning has become a trend in tool design. Moreover, five-axis machining has recently been widely used in industry; however, it has potential axis/table collision problems. This paper therefore aims at proposing an efficient web-learning collision detection tool for five-axis machining. Collision detection consumes heavy resources that few devices can support, so this research uses a systematic web-based approach to detect collisions. The methodologies include the kinematic analysis of five-axis motions, the separating-axis method for collision detection, and computer simulation for verification. The machine structure is modeled in STL format in CAD software. The input to the detection system is the g-code part program, which describes the tool motions to produce the part surface. This research produced a simulation program in the C programming language and demonstrated a five-axis machining example with collision detection on a web site. The system simulates the five-axis CNC motion along the tool trajectory, detects any collisions according to the input g-codes, and also supports a high-performance web service benefiting from C. The results show that our method improves computational efficiency by a factor of 4.5 compared to the conventional detection method.
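
As an illustration of the separating-axis idea used for collision detection (the paper applies it in 3-D to machine components driven by g-code; this is only a 2-D toy version with illustrative geometry), the sketch below tests two convex polygons for overlap.

```python
# Sketch: the separating-axis test for two convex 2-D polygons.
import numpy as np

def project(poly: np.ndarray, axis: np.ndarray) -> tuple[float, float]:
    """Project a polygon's vertices onto an axis, returning (min, max)."""
    dots = poly @ axis
    return dots.min(), dots.max()

def collide(poly_a: np.ndarray, poly_b: np.ndarray) -> bool:
    """Return True if the convex polygons overlap (no separating axis exists)."""
    for poly in (poly_a, poly_b):
        for i in range(len(poly)):
            edge = poly[(i + 1) % len(poly)] - poly[i]
            axis = np.array([-edge[1], edge[0]])        # edge normal
            a_min, a_max = project(poly_a, axis)
            b_min, b_max = project(poly_b, axis)
            if a_max < b_min or b_max < a_min:          # gap found
                return False                            # separating axis
    return True

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
triangle = np.array([[0.5, 0.5], [2.0, 0.5], [0.5, 2.0]], dtype=float)
print(collide(square, triangle))   # True: they overlap
```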

An Intelligent System for Phish Detection, using Dynamic Analysis and Template Matching

Phishing, or the stealing of sensitive information on the web, has dealt a major blow to Internet security in recent times. Most of the existing anti-phishing solutions fail to handle the fuzziness involved in phish detection, leading to a large number of false positives. This fuzziness is attributed to the use of the highly flexible and, at the same time, highly ambiguous HTML language. We introduce a new perspective on phishing that tries to systematically prove whether a given page is phished or not, using the corresponding original page as the basis of comparison. It analyzes the layout of the pages under consideration to determine the percentage distortion between them, indicative of any form of malicious alteration. The system employs dynamic assessment, which accurately identifies brand-new phishing attacks and proves effective in reducing the number of false positives. This framework could potentially be used as a knowledge base in educating Internet users about phishing.
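
As a toy illustration of a "percentage distortion" between an original page and a suspect page (the feature set and formula are assumptions, not the paper's template-matching model), the sketch below compares two pages summarised as vectors of layout-element counts.

```python
# Sketch: a toy "percentage distortion" between two page layouts, each
# summarised as a vector of layout features. Illustrative assumptions only.
import numpy as np

def layout_features(counts: dict) -> np.ndarray:
    keys = ["forms", "password_fields", "images", "links", "iframes"]
    return np.array([counts.get(k, 0) for k in keys], dtype=float)

def percentage_distortion(original: np.ndarray, suspect: np.ndarray) -> float:
    """Relative L1 difference between the two layout vectors, in percent."""
    denom = np.abs(original).sum() or 1.0
    return 100.0 * np.abs(original - suspect).sum() / denom

original = layout_features({"forms": 1, "password_fields": 1, "images": 12, "links": 40})
suspect = layout_features({"forms": 1, "password_fields": 2, "images": 10, "links": 8})
print(f"Distortion: {percentage_distortion(original, suspect):.1f}%")
```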

Performance Evaluation of Para-virtualization on Modern Mobile Phone Platform

The emergence of smartphones brings to life the concept of converged devices with the availability of web amenities. This trend also challenges mobile device manufacturers and service providers in many respects, such as security on mobile phones, complex and lengthy design flows, and higher development costs. Among these aspects, security on mobile phones is getting more and more attention. Microkernel-based virtualization technology will play a critical role in addressing these challenges and meeting mobile market needs and preferences, since virtualization provides essential isolation for security reasons and allows multiple operating systems to run on one processor, accelerating development and cutting development cost. However, the benefits of virtualization do not come for free. As an additional software layer, it adds some inevitable virtualization overhead to the system, which may decrease system performance. In this paper we evaluate and analyze the performance cost of L4 microkernel-based virtualization on a competitive mobile phone by comparing L4Linux, a para-virtualized Linux running on top of the L4 microkernel, with native Linux performance using lmbench and a set of typical mobile phone applications.

Models to Customise Web Service Discovery Result using Static and Dynamic Parameters

This paper presents three models which enable the customisation of Universal Description, Discovery and Integration (UDDI) query results based on pre-defined and/or real-time changing parameters. These proposed models detail the requirements, design and techniques which make ranking of Web service discovery results from a service registry possible. Our contribution is twofold. First, we present an extension to the UDDI inquiry capabilities, which enables a private UDDI registry owner to customise or rank the query results based on its business requirements. Second, our proposal utilises existing technologies and standards, requiring minimal changes to existing UDDI interfaces or data structures. We believe these models will serve as a valuable reference for enhancing the service discovery methods within a private UDDI registry environment.
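
As a generic sketch of re-ranking discovered services with a weighted mix of static and dynamic parameters (the field names, weights and scoring formula are illustrative and not the UDDI extensions proposed in the paper), consider:

```python
# Sketch: re-ranking Web service discovery results with a weighted mix of
# static and dynamic parameters. Illustrative field names and weights only.
services = [
    {"name": "ShippingQuoteA", "static_rating": 0.9, "avg_latency_ms": 420},
    {"name": "ShippingQuoteB", "static_rating": 0.7, "avg_latency_ms": 150},
    {"name": "ShippingQuoteC", "static_rating": 0.8, "avg_latency_ms": 260},
]

def score(service: dict, w_static: float = 0.6, w_dynamic: float = 0.4) -> float:
    """Higher is better: static rating plus a bonus for low measured latency."""
    latency_score = 1.0 / (1.0 + service["avg_latency_ms"] / 100.0)
    return w_static * service["static_rating"] + w_dynamic * latency_score

for s in sorted(services, key=score, reverse=True):
    print(f"{s['name']}: {score(s):.3f}")
```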

WAF: an Interface Web Agent Framework

A trend in the agent community and in enterprises is the shift from closed to open architectures composed of a large number of autonomous agents. One implication is that interface agent frameworks are becoming more important in multi-agent systems (MAS), so that systems constructed for different application domains can share a common understanding of human-computer interface (HCI) methods, as well as human-agent and agent-agent interfaces. However, interface agent frameworks usually receive less attention than other aspects of MAS. In this paper, we propose an interface web agent framework based on our former project, WAF, and a distributed HCI template. A group of new functionalities and implications is discussed, such as web agent presentation, off-line agent reference, and a reconfigurable activation map of agents. Their enabling techniques and current standards (e.g. existing ontological frameworks) are also suggested and illustrated by examples from our own implementation of WAF.