Abstract: This paper considers, from the viewpoint of the product platform, how a single competitive factor formed in the digital camera industry. To make product development easier and to increase the rate of product introduction, companies concentrate their development efforts on improving and strengthening certain product attributes, and in this process product platforms are formed continuously. We point out that the formation of these product platforms raises the product development efficiency of individual companies, but carries a trade-off: it causes competitive factors to become unified across the whole industry. This research analyzes product specification data collected from the web pages of digital camera companies. Specifically, we collected the specifications of all products released in Japan from 1995 to 2003, analyzed the composition of image sensors and optical lenses, identified product platforms shared by multiple products, and discussed their application. We found that product platform formation arose from the development of standard products for the major market segments. Every major company built product platforms for image sensors and optical lenses, and as a result competitive factors became unified across the entire industry. In other words, product platform formation improved the product development efficiency of individual firms, but it also caused the industry's competitive factors to become unified.
Abstract: In the design cycle, a main design task is to determine the external shape of the product. The external shape is one of the key factors that affect customers' preferences and their motivation to buy the product, especially for a consumer electronic product such as a mobile phone. The relationship between the external shape and customer preferences needs to be studied in order to enhance the customer's desire to purchase. In this research, a design-for-customer-preferences model is developed for investigating the relationships between the external shape of a product and customer preferences. In the first stage, the names of geometric features are collected and evaluated from the data of specified internet web pages using the developed text miner. A geometric feature is identified as key if its number of occurrences on the web pages is relatively high. For each key geometric feature, numerical values are then gathered from the web pages using the text miner. In the second stage, a cluster analysis model is developed to evaluate the numerical values of the key geometric features and divide the external shapes into several groups. Several design suggestion cases can then be proposed, for example, a large model, a mid-size model, and a mini model for designing a mobile phone. A customer preference index is developed by evaluating the numerical data of each key geometric feature of the design suggestion cases. The design suggestion case ranked highest by the customer preference index can be selected as the final design of the product. In this paper, an example product, a notebook computer, is illustrated. It shows that the external shape of a product can be used to drive customer preferences. The presented design-for-customer-preferences model is useful for determining a suitable external shape that increases customer preferences.
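The second-stage cluster analysis described above can be sketched as follows. This is a minimal illustration only: the (width, height, thickness) values are hypothetical phone dimensions, and plain k-means is an assumed choice of clustering algorithm, not necessarily the model used in the paper.

```python
# Minimal k-means sketch for grouping external shapes by key geometric
# features; the dimension data below are invented for illustration.
import random

def kmeans(points, k, iters=50, seed=0):
    """Cluster points (tuples of floats) into k groups with plain k-means."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid (squared distance)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for i, members in enumerate(clusters):
            if members:  # recompute the centroid as the member mean
                centroids[i] = tuple(sum(v) / len(members)
                                     for v in zip(*members))
    return centroids, clusters

# Hypothetical (width, height, thickness) values in mm:
# two "mini" shapes and two "large" shapes.
shapes = [(45.0, 90.0, 12.0), (46.0, 92.0, 11.0),
          (62.0, 120.0, 9.0), (60.0, 118.0, 10.0)]
centroids, clusters = kmeans(shapes, k=2)
```

Each resulting cluster would correspond to one design suggestion case (e.g., a mini model and a large model), whose centroid gives representative feature values.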
Abstract: One of the significant issues facing web users is the amount of noise in web data, which hinders the process of finding useful information related to their dynamic interests. Current research works consider noise to be any data that does not form part of the main web page, and propose noise web data reduction tools that mainly focus on eliminating noise with respect to the content and layout of web data. This paper argues that not all data that form part of the main web page are of interest to a user, and that not all noise data are actually noise to a given user. Therefore, learning the noise web data allocated to user requests ensures not only a reduction of the noisiness level in a web user profile, but also a decrease in the loss of useful information, and hence improves the quality of the web user profile. A Noise Web Data Learning (NWDL) tool/algorithm capable of learning noise web data in web user profiles is proposed. The proposed work considers the elimination of noise data in relation to dynamic user interest. In order to validate the performance of the proposed work, an experimental design setup is presented. The results obtained are compared with current algorithms applied in the noise web data reduction process. The experimental results show that the proposed work considers the dynamic change of user interest prior to eliminating noise data. The proposed work contributes towards improving the quality of a web user profile by reducing the amount of useful information eliminated as noise.
Abstract: This scientific work analytically explores and demonstrates techniques that can animate objects and geometric characters using the CSS3 language by applying proper formatting and positioning of elements. The paper presents examples of the optimal application of the CSS3 descriptive language when generating general web animations (e.g., billiards, movement of geometric characters, etc.). It analytically presents the optimal development and design of animations within the frames that contain the animated objects. The originally developed content is based on upgrading existing CSS3 animations with more complex syntax and project-oriented work. The purpose of the developed animations is to provide an overview of the interactive features of CSS3 design for computer games and for the animation of important analytical data in web views. It has been analytically demonstrated that CSS3, as a descriptive language, allows the insertion of various multimedia elements into websites for public and internal sites.
Abstract: The Information Retrieval community is facing the problem of effectively representing Web search results. When web search results are organized into clusters, it becomes easy for users to browse through them quickly. Traditional search engines organize search results into clusters for ambiguous queries, with each cluster representing one meaning of the query. The clusters are obtained according to the topical similarity of the retrieved search results, but it is possible for results to be totally dissimilar and still correspond to the same meaning of the query. People search is also one of the most common tasks on the Web nowadays, but when a particular person's name is queried, search engines return web pages related to different persons who share the queried name. Rather than placing the burden of disambiguating and collecting pages relevant to a particular person on the user, in this paper we develop an approach that clusters web pages based on their association with the different people, together with clusters based on generic entity search.
Abstract: In this paper, we determine the similarity of two HTML web applications. We use a genetic algorithm to determine the most significant web pages of each application, rather than using every web page of a site. Using these significant web pages, we find the similarity value between the two applications. The algorithm is efficient because it uses a reduced number of web pages for comparison, though it returns an approximate value of the similarity. Binary trees are used to store the tags from the significant pages. The algorithm was implemented in the Java language.
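One ingredient of such an approach, the tag-based similarity of two pages, can be sketched as follows. This is an assumed formulation: Jaccard overlap of tag counts stands in for the paper's binary-tree comparison, and the regular expression is a deliberately simple tag extractor.

```python
# Sketch of a page-level similarity from HTML tag multisets; a Counter
# stands in for the paper's binary trees, and Jaccard overlap is an
# assumed similarity measure for illustration.
from collections import Counter
import re

def tag_counts(html):
    """Count opening tag names in an HTML string (closing tags excluded)."""
    return Counter(m.lower()
                   for m in re.findall(r"<\s*([a-zA-Z][a-zA-Z0-9]*)", html))

def page_similarity(html_a, html_b):
    a, b = tag_counts(html_a), tag_counts(html_b)
    inter = sum((a & b).values())  # multiset intersection size
    union = sum((a | b).values())  # multiset union size
    return inter / union if union else 1.0

p1 = "<html><body><div><p>hello</p><p>world</p></div></body></html>"
p2 = "<html><body><div><p>hi</p></div></body></html>"
sim = page_similarity(p1, p2)  # 4 shared tags out of 5 total -> 0.8
```

A genetic algorithm would then search for the subset of pages whose pairwise similarities best approximate the full site-to-site comparison.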
Abstract: The growth of organic farming practices in the last few decades continues to stimulate the international debate about this alternative food market. As part of a PhD research project on embeddedness in Alternative Food Networks (AFNs), this paper focuses on the promotional aspects of organic farm websites from the Madrid region. As a theoretical tool, some knowledge categories drawn from the geographic studies literature are used to classify the many ideas expressed in the web pages. By analysing the texts and pictures of 30 websites, the study aims to examine how, and to what extent, actors from the organic world communicate to potential customers their personal beliefs about farming practices, product qualities, and ecological and social benefits. Moreover, the paper raises the question of whether organic farming laws and regulations are incomplete with regard to the social and cultural aspects of food.
Abstract: The web's increased popularity has brought a huge amount of information, due to which automated web page classification systems are essential for improving search engines' performance. Web pages have many features, such as HTML or XML tags, hyperlinks, URLs, and text contents, which can be considered during an automated classification process. It is known that web page classification is enhanced by hyperlinks, as they reflect web page linkages. The aim of this study is to reduce the number of features used, so as to improve the accuracy of web page classification. In this paper, a novel feature selection method using an improved Particle Swarm Optimization (PSO), based on principles of evolution, is proposed. The extracted features were tested on the WebKB dataset using a parallel Neural Network to reduce the computational cost.
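The feature-selection idea can be sketched with a standard binary PSO. Everything here is an illustrative assumption: the per-feature relevance scores and the simple fitness (relevance reward minus a size penalty) stand in for the paper's actual classifier-based fitness.

```python
# Minimal binary PSO sketch for feature selection; relevance scores and
# the fitness function are hypothetical, not the paper's.
import math
import random

random.seed(1)
N_FEATURES = 8
relevance = [0.9, 0.1, 0.8, 0.05, 0.7, 0.2, 0.85, 0.1]  # assumed scores

def fitness(mask):
    # reward selected relevance, small penalty per selected feature
    return sum(r for r, m in zip(relevance, mask) if m) - 0.15 * sum(mask)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def binary_pso(n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(n_particles)]
    vel = [[0.0] * N_FEATURES for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=fitness)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(N_FEATURES):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # binary update: bit is 1 with probability sigmoid(velocity)
                pos[i][d] = 1 if random.random() < sigmoid(vel[i][d]) else 0
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) > fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

selected = binary_pso()  # bit mask over the feature set
```

In the paper's setting, the fitness would instead be the validation accuracy of the classifier trained on the selected feature subset.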
Abstract: Search engines play an important role on the internet in retrieving relevant documents from among the huge number of web pages. However, a search engine retrieves a large number of documents, not all of which are relevant to the search topic. To retrieve the documents most meaningful to the search topic, ranking algorithms are used in information retrieval. Ranking the retrieved documents is one of the practical problems in information retrieval and an open issue in data mining. This paper surveys various page ranking and page segmentation algorithms and compares those used for information retrieval. Diverse PageRank-based algorithms such as Page Rank (PR), Weighted Page Rank (WPR), Weighted Page Content Rank (WPCR), Hyperlink-Induced Topic Search (HITS), Distance Rank, EigenRumor, Time Rank, Tag Rank, Relational-Based Page Rank, and Query-Dependent Ranking are discussed and compared.
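The classic Page Rank (PR) algorithm at the head of this list can be sketched by power iteration; the four-page link graph below is a hypothetical example, with the standard damping factor d = 0.85.

```python
# Minimal PageRank sketch by power iteration on a small example graph.
def pagerank(links, d=0.85, iters=100):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - d) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = rank[p] / len(outs)
                for q in outs:  # each out-link passes on an equal share
                    new[q] += d * share
            else:  # dangling page: spread its rank over all pages
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

# Hypothetical 4-page graph: C is linked to by A, B and D, so C ranks highest.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A", "B"], "D": ["C"]}
ranks = pagerank(graph)
```

WPR and WPCR differ mainly in how that equal out-link share is replaced by weights derived from link popularity or page content.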
Abstract: Web search engines are designed to retrieve and extract information from web databases and to return dynamic web pages. The Semantic Web is an extension of the current web that includes semantic content in web pages. The main goal of the Semantic Web is to improve the quality of the current web by changing its contents into a machine-understandable form; its milestone is therefore to have semantic-level information in the web. Nowadays, people use different keyword-based search engines to find the relevant information they need from the web, but many words are polysemous. When such words are used to query a search engine, it displays Search Result Records (SRRs) with different meanings. The SRRs with similar meanings are grouped together based on Word Sense Disambiguation (WSD). In addition, semantic annotation is performed to improve the efficiency of the search result records. Semantic annotation is the process of adding semantic metadata to web resources. The grouped SRRs are thus annotated to generate a summary that describes the information in the SRRs. However, automatic semantic annotation is a significant challenge in the Semantic Web. Here, ontology and knowledge-based representation are used to annotate the web pages.
Abstract: The use of Semantic Web technology increases day by day due to the rapid growth in the number of web pages. Many standard formats are available to store semantic web data; the most popular is the Resource Description Framework (RDF). Querying large RDF graphs becomes a tedious procedure as the amount of data increases greatly, and query optimization becomes an issue. Choosing the best query plan reduces the query execution time. To address this problem, nature-inspired algorithms can be used as an alternative to traditional query optimization techniques. In this research, the optimal query plan is generated by the proposed SAPSO algorithm, a hybrid of the Simulated Annealing (SA) and Particle Swarm Optimization (PSO) algorithms. The proposed SAPSO algorithm is able to find a near-optimal result while avoiding the problem of local minima. Experiments were performed on different datasets by varying the number of predicates and the amount of data. The proposed algorithm gives improved results compared to existing algorithms in terms of query execution time.
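The simulated-annealing half of such a hybrid can be sketched as a search over join orders. The pairwise join-cost matrix and the cost model (sum of adjacent join costs) are hypothetical stand-ins for a real RDF query cost estimator.

```python
# Minimal simulated-annealing sketch over join orders for a query plan;
# the cost matrix is invented for illustration.
import math
import random

random.seed(7)
# assumed symmetric per-pair join costs for 5 triple patterns
COST = [[0, 9, 2, 7, 5],
        [9, 0, 4, 3, 8],
        [2, 4, 0, 6, 1],
        [7, 3, 6, 0, 2],
        [5, 8, 1, 2, 0]]

def plan_cost(order):
    """Cost of executing joins in this order: sum of adjacent join costs."""
    return sum(COST[a][b] for a, b in zip(order, order[1:]))

def anneal(n, temp=50.0, cooling=0.95, iters=500):
    current = list(range(n))
    best = current[:]
    for _ in range(iters):
        # neighbour: swap two random positions in the join order
        i, j = random.sample(range(n), 2)
        cand = current[:]
        cand[i], cand[j] = cand[j], cand[i]
        delta = plan_cost(cand) - plan_cost(current)
        # accept improvements always; worse plans with Boltzmann probability
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = cand
            if plan_cost(current) < plan_cost(best):
                best = current[:]
        temp *= cooling
    return best

best_order = anneal(5)
```

In a SAPSO-style hybrid, this annealing acceptance step is what lets the swarm escape local minima while PSO drives the global search.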
Abstract: The continuous growth in the size of the World Wide Web has resulted in intricate Web sites, demanding enhanced user skills and more sophisticated tools to help the Web user find the desired information. In order to make the Web more user-friendly, it is necessary to provide personalized services and recommendations to the Web user. Many Web usage mining techniques have been applied to discover interesting and frequent navigation patterns from Web server logs. The recommendation accuracy of usage-based techniques can be improved by integrating Web site content and site structure into the personalization process.
Herein, we propose a semantically enriched Web Usage Mining method for Personalization (SWUMP), an extension of the solely usage-based technique. This approach combines the fields of Web Usage Mining and the Semantic Web. In the proposed method, we enrich the undirected graph derived from usage data with rich semantic information extracted from the Web pages and the Web site structure. The experimental results show that SWUMP generates accurate recommendations and achieves 10-20% better accuracy than the solely usage-based model. SWUMP also addresses the new-item problem inherent in solely usage-based techniques.
Abstract: The internet is growing larger and becoming the most popular platform for people to share their opinions on different interests. We choose the education domain, specifically comparing Malaysian universities against each other. This comparison produces a benchmark based on different criteria shared by online users in various online resources, including Twitter, Facebook, and web pages. The comparison is accomplished using an opinion mining framework to extract and process the unstructured text and to classify the result as positive, negative, or neutral (polarity). Hence, we divide our framework into three main stages: opinion collection (extraction), unstructured text processing, and polarity classification. The extraction stage includes web crawling, HTML parsing, sentence segmentation with punctuation classification, and Part-of-Speech (POS) tagging; the second stage processes the unstructured text with stemming and stop-word removal and prepares the raw text for classification using Named Entity Recognition (NER); the last stage classifies the polarity and presents the overall result of the comparison among the Malaysian universities. The final result is useful for those who are interested in studying in Malaysia, as our final output declares clear winners based on public opinion across the web.
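The final polarity-classification stage can be sketched with a simple lexicon-based scorer. The tiny word lists and the example opinions below are invented for illustration; the paper's pipeline would feed NER-prepared text into a trained classifier instead.

```python
# Minimal lexicon-based polarity sketch; the word lists are hypothetical.
POSITIVE = {"excellent", "good", "friendly", "affordable", "great"}
NEGATIVE = {"bad", "expensive", "poor", "slow", "crowded"}

def polarity(text):
    """Classify an opinion as 'positive', 'negative' or 'neutral'."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

opinions = [
    "The lecturers are excellent and the campus is great!",
    "Hostel food was bad and expensive.",
    "The semester starts in September.",
]
labels = [polarity(o) for o in opinions]
# labels -> ["positive", "negative", "neutral"]
```

Aggregating such labels per university is what produces the benchmark comparison the abstract describes.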
Abstract: This paper proposes a system to extract images from web pages and then detect the skin color regions of these images. As part of the proposed system, we built a toolbar named 'Filter Tool Bar (FTB)' using the BandObject control, by modifying the Pavel Zolnikov implementation. The Yahoo! team provides the Yahoo! SDK API, which also supports image search and proved useful. In the proposed system, we introduce three new methods for extracting images from web pages (after loading the web page using the proposed FTB, before loading the web page physically from the localhost, and before loading the web page from any server). These methods overcome the drawback of the regular-expressions method for extracting images suggested by Ilan Assayag. The second part of the proposed system is concerned with detecting the skin color regions of the extracted images. To this end, we studied two well-known skin color detection techniques: the first is based on the RGB color space, and the second on the YUV and YIQ color spaces. We modified the second technique, using the saturation parameter, to overcome its failure on images with complex backgrounds and to obtain accurate skin detection results. The performance of the proposed system in extracting images before and after loading the web page from the localhost or any server is evaluated in terms of the number of extracted images. Finally, the results of comparing the two skin detection techniques in terms of the number of pixels detected are presented.
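The first (RGB color space) technique can be sketched with the widely cited explicit RGB skin rule. The thresholds below are the common ones from the skin-detection literature, assumed here rather than taken from the paper.

```python
# Minimal RGB skin-detection sketch using the common explicit rule;
# thresholds are the standard literature values, assumed for illustration.
def is_skin_rgb(r, g, b):
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15   # enough colour spread
            and abs(r - g) > 15 and r > g and r > b)

def skin_mask(pixels):
    """pixels: rows of (R, G, B) tuples -> boolean mask of skin pixels."""
    return [[is_skin_rgb(*p) for p in row] for row in pixels]

image = [[(220, 170, 140), (30, 30, 30)],   # skin tone, dark background
         [(90, 50, 40), (210, 160, 130)]]   # too dark, skin tone
mask = skin_mask(image)
# mask -> [[True, False], [False, True]]
```

The YUV/YIQ technique works analogously but thresholds chrominance channels, which is where the paper's saturation-based modification applies.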
Abstract: In this study, a fuzzy similarity approach for Arabic web page classification is presented. The approach uses a fuzzy term-category relation, manipulating the membership degree for the training data and the degree value for a test web page. Six measures are used and compared in this study: the Einstein, Algebraic, Hamacher, MinMax, Special-case fuzzy, and Bounded Difference approaches. These measures are applied and compared using 50 different Arabic web pages. The Einstein measure gave the best performance among the measures. An analysis of these measures and concluding remarks are drawn in this study.
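Five of the named measures correspond to standard fuzzy conjunction (t-norm) operators, sketched below on a pair of membership degrees. The definitions are the textbook t-norms; the paper's exact formulation of each measure is assumed to match, and the example degrees are illustrative.

```python
# Standard t-norm definitions for five of the measures named above,
# applied to a term's membership a (in a category) and b (in a test page).
def algebraic(a, b):
    return a * b

def einstein(a, b):
    return (a * b) / (2 - (a + b - a * b))

def hamacher(a, b):
    return 0.0 if a == b == 0 else (a * b) / (a + b - a * b)

def minmax(a, b):
    return min(a, b)

def bounded_difference(a, b):
    return max(0.0, a + b - 1)

# A term with membership 0.8 in a category and 0.6 in a test page:
values = {f.__name__: round(f(0.8, 0.6), 4)
          for f in (algebraic, einstein, hamacher,
                    minmax, bounded_difference)}
```

Note that for the same inputs the Einstein product is the most conservative of the three product-style norms, which is one intuition for why it can discriminate categories more sharply.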
Abstract: Web sites are rapidly becoming the preferred media choice for our daily tasks such as information search, company presentation, shopping, and so on. At the same time, we live in a period in which visual appearance plays an increasingly important role in our daily life. In spite of designers' efforts to develop web sites that are both user-friendly and attractive, it is difficult to ensure the outcome's aesthetic quality, since visual appearance is a matter of individual perception and opinion. In this study, we attempt to develop an automatic system for the aesthetic evaluation of web pages, the building blocks of web sites. Based on image processing techniques and artificial neural networks, the proposed method categorizes an input web page according to its visual appearance and aesthetic quality. The employed features are multiscale/multidirectional textural and perceptual color properties of the web pages, fed to a perceptron ANN trained as the evaluator. The method was tested on university web sites, and the results suggest that it performs well in web page aesthetic evaluation tasks, with around 90% correct categorization.
Abstract: The vast amount of information on the World Wide Web is created and published by many different types of providers. Unlike books and journals, most of this information is not subject to editing or peer review by experts. This lack of quality control, together with the explosion of web sites, makes the task of finding quality information on the web especially critical. Meanwhile, new facilities for producing web pages, such as blogs, make this issue more significant, because blogs have simple content management tools enabling non-experts to build easily updatable web diaries or online journals. On the other hand, despite a decade of active research in information quality (IQ), there is as yet no framework for measuring information quality on blogs. This paper presents a novel experimental framework for ranking the quality of information on weblogs. The results of the data analysis revealed seven IQ dimensions for the weblog. For each dimension, variables and related coefficients were calculated, so that the presented framework is able to assess the IQ of weblogs automatically.
Abstract: This paper describes the design and development of a very low-cost and small electronic prototype, especially designed for monitoring and controlling existing home automation alarm systems (intruder, smoke, gas, flood, etc.) via TCP/IP with a typical web browser. It allows home owners to be immediately alerted when an alarm event occurs and to interact with their home automation alarm system, arming, disarming, and watching event alerts, with a personal wireless Wi-Fi PDA or smartphone logged on to a dedicated predefined web page, or with a PC or laptop.
Abstract: This paper proposes an auto-classification algorithm for Web pages using data mining techniques. We consider the problem of discovering association rules between terms in a set of Web pages belonging to a category in a search engine database, and present an auto-classification algorithm for solving this problem that is fundamentally based on the Apriori algorithm. The proposed technique has two phases. The first is a training phase, in which human experts determine the categories of different Web pages, and the supervised data mining algorithm combines these categories with appropriately weighted index terms according to the highest-support rules among the most frequent words. The second is the categorization phase, in which a web crawler crawls through the World Wide Web to build a database categorized according to the result of the data mining approach. This database contains URLs and their categories.
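The Apriori-based training phase can be sketched as follows. The toy pages (reduced to term sets), the support and confidence thresholds, and the restriction to two-term rules are all illustrative assumptions.

```python
# Minimal Apriori sketch over hypothetical pages of one category:
# frequent 1- and 2-term sets, then high-confidence term-to-term rules.
from itertools import combinations

pages = [  # each page reduced to its set of index terms (assumed input)
    {"python", "code", "tutorial"},
    {"python", "code", "snake"},
    {"python", "tutorial", "code"},
    {"snake", "reptile"},
]
MIN_SUPPORT = 0.5
MIN_CONFIDENCE = 0.8

def support(itemset):
    return sum(itemset <= page for page in pages) / len(pages)

# Apriori: frequent 1-item sets, then join step to candidate 2-item sets.
items = {t for page in pages for t in page}
f1 = {frozenset([t]) for t in items
      if support(frozenset([t])) >= MIN_SUPPORT}
f2 = {a | b for a, b in combinations(f1, 2)
      if support(a | b) >= MIN_SUPPORT}

rules = []
for pair in f2:
    for lhs in pair:
        rhs = next(iter(pair - {lhs}))
        conf = support(pair) / support(frozenset([lhs]))
        if conf >= MIN_CONFIDENCE:
            rules.append((lhs, rhs, conf))
```

In the full technique, the surviving high-support rules supply the weighted index terms that the crawler later matches against new pages during the categorization phase.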