Abstract: This paper addresses the fundamental requirements for
starting an online business. It covers the process of ideation,
conceptualization, formulation, and implementation of new venture
ideas on the Web. Using Facebook as an illustrative example, we show
how to turn an idea into a successful electronic business and execute
a business plan with IT skills, management expertise, a good
entrepreneurial attitude, and an understanding of Internet culture. The
personality traits and characteristics of a successful e-commerce
entrepreneur are discussed with reference to Facebook's founder,
Mark Zuckerberg. Facebook is both a social and an e-commerce success: it
provides a trusted environment in which participants can conduct
business with a social experience. People are able to discuss products
before, during, and after the sale within the Facebook environment. The
paper also highlights the challenges and opportunities for e-commerce
entrepreneurial startups in going public and in entering the Chinese market.
Abstract: A serious problem on the WWW is finding reliable
information. Not everything found on the Web is true and the
Semantic Web does not change that in any way. The problem will be
even more crucial for the Semantic Web, where agents will be
integrating and using information from multiple sources. Thus, if an
incorrect premise is used due to a single faulty source, then any
conclusions drawn may be in error. Thus, statements published on
the Semantic Web have to be seen as claims rather than as facts, and
there should be a way to decide which among many possibly
inconsistent sources is most reliable. In this work, we propose a trust
model for the Semantic Web. The proposed model is inspired by the
use of trust in human society. Trust is a form of social knowledge that
encodes evaluations of which agents can be taken as reliable
sources of information or services. Our proposed model allows
agents to decide which among different sources of information to
trust and thus act rationally on the Semantic Web.
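The abstract does not give the model's actual equations, so the following is only an illustrative sketch of the idea of trust-based source selection; the function name, the trust values, and the trust-weighted voting scheme are all our assumptions, not the paper's method.

```python
# Illustrative sketch of trust-based source selection (hypothetical;
# the paper's actual model is not specified in the abstract).
# Each source asserts a claim; the agent weights each assertion by its
# trust in the source and accepts the claim with the highest
# trust-weighted support.

def choose_claim(assertions, trust):
    """assertions: list of (source, claim) pairs;
    trust: dict mapping source -> trust value in [0, 1]."""
    support = {}
    for source, claim in assertions:
        support[claim] = support.get(claim, 0.0) + trust.get(source, 0.0)
    # Return the claim with the highest trust-weighted support.
    return max(support, key=support.get)

assertions = [("A", "capital=Paris"), ("B", "capital=Lyon"),
              ("C", "capital=Paris")]
trust = {"A": 0.9, "B": 0.4, "C": 0.7}
print(choose_claim(assertions, trust))  # trust-weighted majority wins
```

With inconsistent sources, a simple unweighted vote could be swayed by many untrusted agents; weighting by trust is what lets the agent prefer the few reliable ones.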
Abstract: The Internet is nowadays included in the National Curricula of elementary schools. A comparative study of their
goals leads to the conclusion that a complete curriculum should aim at students' acquisition of the abilities to navigate and search for
information, and should additionally emphasize the evaluation of the information provided by the World Wide Web. Within a constructivist knowledge framework, the design of a course has to take into
consideration the conceptual representations of students. This paper presents the conceptual representations about the World Wide Web held by eleven-year-old students attending the Sixth Grade of the Greek Elementary School, and their use in the design and
implementation of an innovative course.
Abstract: In this paper, we propose an effective system for digital music retrieval. The proposed system is divided into a client part and a server part. The client part consists of pre-processing and content-based feature extraction stages. In the pre-processing stage, we minimize the time-code gap that occurs among copies of the same music content. As content-based features, first-order differentiated MFCCs are used; these approximate the envelope of the music feature sequences. The server part includes the Music Server and the Music Matching stage. Features extracted from 1,000 digital music files are stored in the Music Server. In the Music Matching stage, retrieval results are found through a similarity measure based on DTW (Dynamic Time Warping). In the experiments, we used 450 queries, made by mixing different compression standards and sound qualities from 50 digital music files. Retrieval accuracy was 97%, and the average retrieval time was 15 ms per query. Our experiments show that the proposed system is effective in retrieving digital music and robust in various web user environments.
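The core of the matching stage is DTW, which tolerates timing differences between a query and a stored track. As a minimal sketch (the paper's exact feature extraction and matching configuration are not given in the abstract, so the 1-D sequences and local distance here are illustrative only):

```python
# Minimal DTW distance between two feature sequences (e.g. frames of a
# delta-MFCC contour, here simplified to 1-D values for illustration).

def dtw(a, b):
    """Dynamic time warping distance between 1-D sequences a and b."""
    INF = float("inf")
    n, m = len(a), len(b)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])              # local frame distance
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

query = [0.0, 1.0, 2.0, 1.0]
reference = [0.0, 0.0, 1.0, 2.0, 1.0]  # same shape, stretched in time
print(dtw(query, reference))  # small distance despite different lengths
```

Because DTW warps the time axis, the same melody at slightly different tempo (or with a leading time-code gap) can still match closely, which is what makes it robust to the mixed compression standards and qualities in the query set.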
Abstract: With the explosive increase in information published
on the Web, researchers have to filter information when searching for
conference-related information. To make it easier for users to search for
related information, this paper uses Topic Maps and social information
to implement an ontology, since ontologies can provide the formalisms and
knowledge structuring for the comprehensive and transportable machine
understanding that digital information requires. Besides enhancing
information in Topic Maps, this paper proposes a method of
constructing research Topic Maps that considers social information.
First, conference data are extracted from the Web. Then conference
topics and the relationships between them are extracted through the
proposed method. Finally, the result is visualized for users to search
and browse. This paper uses an ontology, containing an abundant
knowledge hierarchy structure, to help researchers get useful search
results. However, most previous ontology construction methods did
not take "people" into account. So this paper also analyzes the social
information which helps researchers find the possibilities of
cooperation/combination as well as associations between research
topics, and tries to offer better results.
Abstract: It has been recognized that, due to the autonomy and
heterogeneity of Web services and the Web itself, new approaches
should be developed to describe and advertise Web services. The
most notable approaches rely on the description of Web services
using semantics. This new breed of Web services, termed semantic
Web services, will enable the automatic annotation, advertisement,
discovery, selection, composition, and execution of inter-organizational
business logic, turning the Internet into a common
global platform where organizations and individuals communicate
with each other to carry out various commercial activities and to
provide value-added services. This paper deals with two of the
hottest R&D and technology areas currently associated with the Web
– Web services and the semantic Web. It describes how semantic
Web services extend Web services as the semantic Web improves the
current Web, and presents three different conceptual approaches to
deploying semantic Web services, namely, WSDL-S, OWL-S, and
WSMO.
Abstract: Ontologies play an important role in Semantic Web applications; they are often developed by different groups and continue to evolve over time. The knowledge in ontologies changes so rapidly that applications become outdated if they continue to use old versions, or unstable if they jump to new versions. Temporal frames using frame versioning and slot versioning are used to handle the dynamic nature of ontologies. The paper proposes new tags and a restructured OWL format that enable applications to work with either the old or the new version of an ontology. Gene Ontology, a very dynamic ontology, has been used as a case study to explain the OWL ontology with temporal tags.
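The abstract does not reproduce the proposed OWL tags, but the idea behind slot versioning can be sketched abstractly: each slot keeps time-stamped values, so an application bound to an older ontology version still resolves the value in force at that version. The class and names below are our illustration, not the paper's format.

```python
# Sketch of slot versioning: a slot keeps (version, value) pairs so an
# application can query the slot as of any ontology version.
# Names and the version scheme are illustrative assumptions.

class VersionedSlot:
    def __init__(self):
        self._versions = []   # append-only list of (version, value)

    def set(self, version, value):
        self._versions.append((version, value))

    def get(self, version):
        """Return the value in force at the requested version
        (the latest value set at or before that version)."""
        value = None
        for v, val in self._versions:
            if v <= version:
                value = val
        return value

slot = VersionedSlot()
slot.set(1, "regulates")              # meaning in ontology version 1
slot.set(3, "positively_regulates")   # refined in version 3
print(slot.get(2))  # 'regulates' (version 2 still sees the old value)
print(slot.get(3))  # 'positively_regulates'
```

This is what lets an old application stay stable (it keeps asking for its own version) while a new one sees the updated term, which is the behaviour the proposed temporal tags aim to provide in OWL itself.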
Abstract: Ontologies and tagging systems are two different ways to organize the knowledge present in the current Web. In this paper we propose a simple method to model folksonomies, as tagging systems, with ontologies. We show the scalability of the method using real data sets. The modeling method is composed of a generic ontology that represents any folksonomy and an algorithm to transform the information contained in folksonomies to the generic ontology. The method allows representing folksonomies at any instant of time.
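The transformation step can be sketched as follows; the abstract does not specify the generic ontology's classes, so the dict-based structure and all names here are illustrative assumptions standing in for ontology individuals and properties.

```python
# Sketch of transforming folksonomy tagging triples into a generic
# tag-centred structure (a stand-in for the paper's generic ontology;
# the real method would emit ontology individuals and properties).

def folksonomy_to_ontology(triples):
    """triples: iterable of (user, tag, resource) annotations.
    Returns a dict mapping each tag to the resources and users
    linked to it, mimicking a tag individual with two relations."""
    onto = {}
    for user, tag, resource in triples:
        entry = onto.setdefault(tag, {"resources": set(), "users": set()})
        entry["resources"].add(resource)
        entry["users"].add(user)
    return onto

triples = [
    ("alice", "python", "http://example.org/tutorial"),
    ("bob",   "python", "http://example.org/faq"),
    ("bob",   "web",    "http://example.org/tutorial"),
]
onto = folksonomy_to_ontology(triples)
print(sorted(onto["python"]["users"]))  # ['alice', 'bob']
```

Adding a timestamp field to each triple and carrying it into the structure would give the "any instant of time" representation the abstract mentions.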
Abstract: Nowadays computer worms, viruses and Trojan horses
have become widespread, and they are collectively called malware. A
decade ago, such malware merely spoiled computers by deleting or
rewriting important files. However, recent malware seems to be born to
earn money. Some malware works to collect personal information so
that malicious people can find secret information such as passwords for
online banking, evidence for a scandal, or contact addresses related to
the target. Moreover, the relation between money and malware has
become more complex. Many kinds of malware spawn bots to obtain
springboards. Meanwhile, for ordinary Internet users,
countermeasures against malware have come up against a blank wall.
Pattern matching wastes too many computer resources,
since matching tools have to deal with a lot of patterns derived from
subspecies. Virus-making tools can automatically generate subspecies of
malware. Moreover, metamorphic and polymorphic malware are no
longer special. Recently, malware-checking sites have appeared that
check contents in place of users' PCs. However, a new type of
malicious site has appeared that evades checks by malware-checking
sites. In this paper, existing protocols and methods related to the Web
are reconsidered in terms of protection from current attacks, and a new
protocol and method are presented for the purpose of Web security.
Abstract: Suspended cable structures are most preferable for covering large spans due to the rational use of structural materials, but the problem of suspended cable structures is the change of the initial shape under the action of non-symmetrical load. The problem can be solved by increasing the ratio of dead weight to imposed load, but this method causes increased material consumption. The use of a prestressed cable truss is another way the problem of shape change under the action of non-symmetrical load can be fixed. Better results can be achieved if the top chord is replaced with a cable truss with a cross web. A rational structure of the cable truss for the prestressed cable truss top chord was developed using optimization realized in the FEM program ANSYS 12. The behaviour of single-cable and cable-truss models was investigated. Analytical and model testing results indicate that the use of a cable truss with a cross web as the top chord of a prestressed cable truss, instead of a single cable, reduces total displacements by 13-16% in the case of non-symmetrical load. In the case of uniformly distributed load, a single cable is preferable.
Abstract: Web usage mining is an interesting application of data
mining which provides insight into customer behaviour on the Internet. An important technique to discover user access and navigation trails is based on sequential pattern mining. One of the
key challenges for web access pattern mining is tackling the problem
of mining richly structured patterns. This paper proposes a novel
model called Web Access Patterns Graph (WAP-Graph) to represent all of the access patterns from web mining graphically. WAP-Graph
also motivates the search for new structural relation patterns, i.e. Concurrent Access Patterns (CAP), to identify and predict more
complex web page requests. Corresponding CAP mining and modelling methods are proposed and shown to be effective in the
search for and representation of concurrency between access patterns
on the web. From experiments conducted on large-scale synthetic
sequence data as well as real web access data, it is demonstrated that
CAP mining provides a powerful method for structural knowledge discovery, which can be visualised through the CAP-Graph model.
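The WAP-Graph and CAP structures themselves are not specified in the abstract; the sketch below only illustrates the underlying idea of mining ordered page-request patterns from session logs, using a deliberately simple contiguous-subsequence count (the paper's mining method is richer than this).

```python
# Illustrative stand-in for sequential access-pattern mining: count
# contiguous page subsequences across sessions and keep the frequent
# ones. The session data and thresholds are made up for the example.

from collections import Counter

def frequent_patterns(sessions, length, min_support):
    """Return contiguous page subsequences of a given length that
    occur in at least min_support distinct sessions."""
    counts = Counter()
    for session in sessions:
        seen = set()
        for i in range(len(session) - length + 1):
            seen.add(tuple(session[i:i + length]))
        counts.update(seen)   # count each pattern once per session
    return {p: c for p, c in counts.items() if c >= min_support}

sessions = [
    ["home", "search", "product", "cart"],
    ["home", "search", "product"],
    ["home", "about"],
]
print(sorted(frequent_patterns(sessions, 2, 2)))
# [('home', 'search'), ('search', 'product')]
```

A graph model such as WAP-Graph would then link these patterns by shared prefixes and suffixes, which is what allows concurrency between patterns to be identified and visualised.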
Abstract: The purpose of semantic web research is to transform
the Web from a linked document repository into a distributed knowledge base and application platform, thus allowing the vast range of available information and services to be more efficiently
exploited. As a first step in this transformation, languages such as
OWL have been developed. Although fully realizing the Semantic Web still seems some way off, OWL has already been very
successful and has rapidly become a de facto standard for ontology
development in fields as diverse as geography, geology, astronomy,
agriculture, defence and the life sciences. The aim of this paper is to classify key concepts of the Semantic Web, as well as to introduce a new
practical approach which uses these concepts to improve upon the current World Wide Web.
Abstract: Recognizing the increasing importance of using the
Internet to conduct business, this paper looks at some related matters
associated with small businesses making a decision of whether or not
to have a Website and go online. Small businesses in Saudi Arabia
struggle with this decision. For organizations to fully go online,
conduct business and provide online information services, they need
to connect their database to the Web. Some issues related to doing
that might be beyond the capabilities of most small businesses in
Saudi Arabia, such as Website management, technical issues and
security concerns. Here we focus on a small business firm in Saudi
Arabia (Case Study), discussing the issues related to going online
decision and the firm's options of what to do and how to do it. The
paper suggests some valuable solutions for connecting databases to
the Web. It also discusses some of the important issues related to
online information services and e-commerce, mainly Web hosting
options and security issues.
Abstract: Machine-understandable data, when strongly
interlinked, constitutes the basis for the Semantic Web. Annotating
web documents is one of the major techniques for creating metadata
on the Web. Annotating websites describes the data they contain in a
form which is suitable for interpretation by machines. In this paper,
we present a new approach to annotate websites and documents by
promoting the abstraction level of the annotation process to a
conceptual level. By this means, we hope to solve some of the
problems of the current annotation solutions.
Abstract: In the proposed method for Web page-ranking, a
novel theoretic model is introduced and tested by examples of order
relationships among IP addresses. Ranking is induced using a
convexity feature, which is learned according to these examples
using a self-organizing procedure. We consider the problem of
self-organizing learning from IP data to be represented by a semi-random
convex polygon procedure, in which the vertices correspond to IP
addresses. Based on recent developments in our regularization
theory for convex polygons and corresponding Euclidean distance
based methods for classification, we develop an algorithmic
framework for learning ranking functions based on a Computational
Geometric Theory. We show that our algorithm is generic, and
present experimental results explaining the potential of our approach.
In addition, we explain the generality of our approach by showing its
possible use as a visualization tool for data obtained from diverse
domains, such as Public Administration and Education.
Abstract: Information technology managers nowadays face
tremendous pressure to plan, implement, and adopt new
technology solutions due to the rapidity of technological change.
Given the lack of studies on this topic, the
aim of this paper is to provide a comparative review of the tools
currently being used to respond to technological
changes. The study is based on an extensive review of
published works, the majority of which range from 2000 to the
first part of 2011. The works were gathered from journals, books,
and other information sources available on the Web. Findings show
that each tool has a different focus and none of the tools provides
a framework with a holistic view, which should include
technical, people, process, and business environment aspects. Hence,
this result provides useful information about currently available
tools that IT managers can use to manage changes in technology.
Further, the result reveals a research gap in the area, where the
industries are short of such a framework.
Abstract: One of the main advantages of the LO paradigm is to
allow the availability of good quality, shareable learning material
through the Web. The effectiveness of the retrieval process requires a
formal description of the resources (metadata) that closely fits the
user's search criteria; in spite of the huge international efforts in this
field, educational metadata schemata often fail to fulfil this
requirement. This work aims to improve the situation, by the
definition of a metadata model capturing specific didactic features of
shareable learning resources. It classifies LOs into "teacher-oriented"
and "student-oriented" categories, in order to describe the role a LO
is to play when it is integrated into the educational process. This
article describes the model and a first experimental validation process
that has been carried out in a controlled environment.
Abstract: Web applications have become complex and crucial for many firms, especially when combined with areas such as CRM (Customer Relationship Management) and BPR (Business Process Reengineering). The scientific community has focused attention on Web application design, development, analysis, and testing, by studying and proposing methodologies and tools. Static and dynamic techniques may be used to analyze existing Web applications. The use of traditional static source code analysis may be very difficult, due to the presence of dynamically generated code and the multi-language nature of the Web. Dynamic analysis may be useful, but it has an intrinsic limitation: the low number of program executions used to extract information. Our reverse engineering analysis, used in our WAAT (Web Applications Analysis and Testing) project, applies mutational techniques in order to exploit server-side execution engines to accomplish part of the dynamic analysis. This paper studies the effects of mutation source code analysis applied to Web software to build application models. Mutation-based generated models may contain more information than necessary, so we need a pruning mechanism.
Abstract: The tagging data of (users, tags and resources) constitute a folksonomy, which is a user-driven, bottom-up approach to organizing and classifying information on the Web. Tagging data stored in a folksonomy include a lot of very useful information and knowledge. However, an appropriate approach for analyzing tagging data and discovering hidden knowledge from them still remains one of the main problems in folksonomy mining research. In this paper, we propose a folksonomy data mining approach based on FCA (Formal Concept Analysis) for easily discovering hidden knowledge from a folksonomy. We also demonstrate through an experiment how our proposed approach can be applied in a collaborative tagging system. Our proposed approach can be applied to interesting areas such as social network analysis, semantic web mining, and so on.
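FCA groups resources that share exactly the same set of tags into formal concepts. The paper's algorithm is not given in the abstract, so the brute-force enumeration below is illustrative only (it is exponential and suited to tiny contexts); the resource/tag data are made up.

```python
# Minimal brute-force Formal Concept Analysis over a tiny
# resource-by-tag context, to illustrate the idea behind
# FCA-based folksonomy mining.

from itertools import combinations

def formal_concepts(context):
    """context: dict resource -> set of tags.
    Returns the set of formal concepts as (extent, intent) pairs."""
    resources = list(context)
    concepts = set()
    for r in range(len(resources) + 1):
        for subset in combinations(resources, r):
            # intent = tags shared by every resource in the subset
            intent = (set.intersection(*(context[x] for x in subset))
                      if subset else set.union(*context.values()))
            # extent = all resources carrying every tag of the intent
            extent = {x for x in context if intent <= context[x]}
            concepts.add((frozenset(extent), frozenset(intent)))
    return concepts

context = {
    "photo1": {"beach", "summer"},
    "photo2": {"beach"},
    "doc1":   {"summer", "work"},
}
for extent, intent in sorted(formal_concepts(context),
                             key=lambda c: (len(c[0]), sorted(c[1]))):
    print(sorted(extent), sorted(intent))
```

The resulting concept lattice (e.g. the concept pairing {photo1, photo2} with the shared tag {beach}) is the "hidden knowledge" the approach surfaces: maximal groups of resources and the exact tag sets that characterise them.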
Abstract: Multi-agent communication of Semantic Web
information cannot be realized without the ability to reason about
ontology and agent locations. This is because, for an agent to be able to
reason with an external Semantic Web ontology, it must know where
and how to access that ontology. Similarly, for an agent to be able to
communicate with another agent, it must know where and how to send
a message to that agent. In this paper we propose a framework of an
agent which can reason with ontology and agent locations in order to
perform reasoning with multiple distributed ontologies and perform
communication with other agents on the Semantic Web. The agent
framework and its communication mechanism are formulated entirely
in meta-logic.
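The paper formulates this in meta-logic, which the abstract does not reproduce; as a loose operational sketch only, the "know where and how" requirement amounts to resolving ontology and agent names to locations before reasoning or messaging. All names and address formats below are our assumptions.

```python
# Illustrative sketch of location resolution for ontologies and agents
# (hypothetical registry; the paper's meta-logic formulation is not
# shown in the abstract).

class LocationRegistry:
    def __init__(self):
        self._ontologies = {}   # ontology name -> URL
        self._agents = {}       # agent name -> address

    def register_ontology(self, name, url):
        self._ontologies[name] = url

    def register_agent(self, name, address):
        self._agents[name] = address

    def resolve_ontology(self, name):
        return self._ontologies[name]   # where/how to fetch the ontology

    def resolve_agent(self, name):
        return self._agents[name]       # where to send a message

registry = LocationRegistry()
registry.register_ontology("wine", "http://example.org/wine.owl")
registry.register_agent("broker", "tcp://host:5555")
print(registry.resolve_ontology("wine"))
```

In the meta-logic setting, these lookups would be meta-level predicates over object-level ontologies and agents rather than a mutable table, but the dependency is the same: resolution must succeed before distributed reasoning or communication can proceed.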