A Study on Reversed Remediation between Old and New Media: The Case of the Smartphone and the PC in South Korea

After Apple introduced its smartphone, the iPhone, to Korea at the end of 2009, the number of Korean smartphone users grew so rapidly that half of the Korean population were smartphone users as of February 2012. Smartphones are now positioned as a major digital medium with powerful influence in Korea, and Koreans learn new information, enjoy games and communicate with other people anytime and anywhere. As the performance of smartphone devices increased, the number of usable services grew, and adequate GUI development was required to implement the various functions of smartphones. The strategy of providing familiar experiences on smartphones by borrowing the features of existing media, together with the iconic GUIs of smartphone devices, contributed greatly to the popularization of smartphones. The spread of smartphones increased mobile web access, so attempts to implement the PC web on the smartphone web continue to be made, and the mobile web GUI provides familiar experiences to users through designs that make adequate use of the smartphone's GUI. Now that many users have become familiar with smartphones and mobile web GUIs, the direction of adaptation has reversed: PCs are starting to adopt smartphone GUIs. This study defines this phenomenon as reversed remediation and reviews cases in which the characteristics of smartphone GUIs are remediated on PCs. For this purpose, the research questions are as follows: · What is reversed remediation? · What are the characteristics of smartphone GUIs? · What kind of interrelationship exists between smartphone and PC web sites? Understanding the characteristics of the paradigm change in PC and smartphone GUI design is meaningful for forecasting future changes in GUIs, and will also help to establish strategies for the development and design of digital devices.

Application Aspects of Public Relations by Nonprofit Organizations: A Case Study of Albania

The traditional public relations manager is usually responsible for maintaining and enhancing the reputation of the organization among key publics. While the principal focus of this effort is on support publics, it is clearly recognized that an organization's image has important effects on its own employees, its donors and volunteers, and its clients. The aim of this paper is to define the application aspects of public relations media and tools by nonprofit organizations in the Albanian context. Do nonprofit organizations actually use public relations media and tools, such as written material, audiovisual material, organizational identity media, news, interviews and speeches, events, and web sites, to attract donors? And if public relations media and tools are used, does a relationship exist between public relations media and fundraising?

Electronic Markets Have Weakened the "Trade-off between Reach and Richness" on the Internet

This paper has two main ideas. First, it describes Evans and Wurster's concept of "the trade-off between reach and richness" and relates it to the impact of technology on virtual markets. Evans and Wurster see the transfer of information as a trade-off between richness and reach: reach refers to the number of people who share particular information, while richness is a more complex concept combining bandwidth, customization, interactivity, reliability, security and currency. Traditional shopping limits the number of shops a shopper is able to visit due to time and other cost constraints; the time spent traveling consequently leaves the shopper with less time to evaluate the product. The paper concludes that although the Web provides reach, offering richness and the sense of community required for creating and sustaining relationships with potential clients can be difficult.

Digital Forensics for Electronic Commerce on the Web

In existing online shopping on the web, SSL and passwords are usually used to secure trades. SSL shields communication from third parties who are not involved in the trade, and indicates that the trader's web site has been authenticated by a certification authority. A password certifies a customer as the same person who has visited the trader's web site before, and protects the customer's privacy, such as what the customer has bought on the site. However, there are no forensics for the trades in the cases above. With existing methods, no one can prove what was ordered by customers, how many products were ordered, or even whether customers ordered at all. The reason is that a third party has to guess what was traded from logs held by traders and by customers, and these logs can easily be created, deleted and forged since they are stored electronically. To enhance security with digital forensics for electronic commerce on the web, this paper proposes a secure method using cellular phones.
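
The cellular-phone protocol itself is not detailed in this abstract, but the core forensic idea, making order records tamper-evident rather than freely forgeable, can be illustrated with digital signatures. The following sketch is a hypothetical illustration, not the author's method: the customer signs each order with a private key, so a third party holding the matching public key can later verify exactly what was ordered.

```python
# Hypothetical illustration: tamper-evident order records via digital
# signatures (not the paper's cellular-phone protocol).
import json
from cryptography.hazmat.primitives.asymmetric import ed25519

# The customer holds a private key; the trader (and any auditor) holds
# the matching public key.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

order = {"item": "book-1234", "quantity": 2, "timestamp": "2012-01-15T10:00:00Z"}
message = json.dumps(order, sort_keys=True).encode()

# The customer signs the canonical order record at purchase time.
signature = private_key.sign(message)

# Later, a third party can verify that exactly this order was placed;
# any edit to the stored log invalidates the signature.
public_key.verify(signature, message)  # raises InvalidSignature if forged
print("order record verified")
```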

A Framework for Ranking Quality of Information on Weblogs

The vast amount of information on the World Wide Web is created and published by many different types of providers. Unlike books and journals, most of this information is not subject to editing or peer review by experts. This lack of quality control and the explosion of web sites make the task of finding quality information on the web especially critical. Meanwhile, new facilities for producing web pages, such as blogs, make this issue even more significant, because blogs provide simple content management tools that enable non-experts to build easily updatable web diaries or online journals. On the other hand, despite a decade of active research in information quality (IQ), there is still no framework for measuring information quality on blogs. This paper presents a novel experimental framework for ranking the quality of information on weblogs. The results of the data analysis revealed seven IQ dimensions for the weblog. For each dimension, variables and related coefficients were calculated, so that the presented framework is able to assess the IQ of weblogs automatically.
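
The abstract does not list the seven dimensions or their coefficients, so the sketch below only illustrates the general scoring scheme the framework implies: each dimension is a weighted combination of measurable variables, and an overall IQ rank aggregates the dimension scores. The dimension names, variables and weights here are placeholders, not the paper's values.

```python
# Illustrative sketch of coefficient-based IQ scoring for a weblog.
# Dimension names, variables and weights are hypothetical placeholders;
# the paper derives seven dimensions and their coefficients from data.

# Measurable variables extracted from a blog (hypothetical).
blog_variables = {
    "spelling_error_rate": 0.02,
    "posts_per_month": 12,
    "avg_comments_per_post": 5,
    "broken_link_ratio": 0.01,
}

# Each IQ dimension is a weighted sum of variables (weights invented).
dimension_coefficients = {
    "accuracy":      {"spelling_error_rate": -10.0},
    "timeliness":    {"posts_per_month": 0.05},
    "popularity":    {"avg_comments_per_post": 0.1},
    "accessibility": {"broken_link_ratio": -20.0},
}

def dimension_score(coeffs, variables):
    return sum(w * variables.get(v, 0.0) for v, w in coeffs.items())

scores = {dim: dimension_score(c, blog_variables)
          for dim, c in dimension_coefficients.items()}
overall_iq = sum(scores.values()) / len(scores)  # simple aggregate
print(scores, overall_iq)
```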

Deep Web Content Mining

The rapid expansion of the web is causing constant growth of information, leading to several problems such as the increasing difficulty of extracting potentially useful knowledge. Web content mining confronts this problem by gathering explicit information from different web sites for access and knowledge discovery. Query interfaces of web databases share common building blocks. After extracting information with a parsing approach, we use a new data mining algorithm to match a large number of database schemas at a time. Using this algorithm increases the speed of information matching. In addition, instead of simple 1:1 matching, it performs complex (m:n) matching between query interfaces. In this paper we present a novel correlation mining algorithm that matches correlated attributes at lower cost. The algorithm uses the Jaccard measure to distinguish positively and negatively correlated attributes. The system then matches the user query against different query interfaces in a given domain and finally chooses the query interface nearest to the user query to answer it.
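
As a minimal sketch of how a Jaccard-style measure can separate correlated attributes, consider each attribute as the set of query interfaces it occurs in: in correlation-based schema matching, attributes that frequently co-occur score high (positive correlation, candidate grouping attributes), while attributes that rarely co-occur score near zero (negative correlation, candidate synonyms). The interfaces and attributes below are invented examples, not the paper's data.

```python
# Sketch: Jaccard measure over attribute occurrence sets across query
# interfaces. Interfaces and attributes are hypothetical examples.

interfaces = [
    {"title", "author", "isbn"},
    {"title", "author", "publisher"},
    {"title", "writer", "isbn"},      # "writer" used instead of "author"
    {"title", "writer", "publisher"},
]

def occurrence_set(attribute):
    """Indices of interfaces in which the attribute appears."""
    return {i for i, itf in enumerate(interfaces) if attribute in itf}

def jaccard(a, b):
    sa, sb = occurrence_set(a), occurrence_set(b)
    return len(sa & sb) / len(sa | sb)

# High Jaccard: the attributes tend to appear together (grouping).
print(jaccard("title", "isbn"))      # 0.5
# Zero/low Jaccard: the attributes rarely co-occur, which flags
# candidate synonyms in correlation-based schema matching.
print(jaccard("author", "writer"))   # 0.0
```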

A Multi-Level Web-Based Parallel Processing System: A Hierarchical Volunteer Computing Approach

Over the past few years, a number of efforts have been made to build parallel processing systems that utilize the idle power of the LANs and PCs available in many homes and corporations. The main advantage of these approaches is that they provide cheap parallel processing environments for those who cannot afford the expense of supercomputers and parallel processing hardware. However, most of the available solutions are not very flexible in their use of resources and are very difficult to install and set up. In this paper, a multi-level web-based parallel processing system (MWPS) is designed (see appendix). MWPS is based on the idea of volunteer computing; it is very flexible, easy to set up and easy to use. MWPS supports three types of subscribers: simple volunteers (single computers), super volunteers (full networks) and end users. All of these entities are coordinated transparently through a secure web site. Volunteer nodes provide the processing power needed by the system's end users. There is no limit on the number of volunteer nodes, so the system can grow indefinitely. Both volunteers and system users must register and subscribe. Once they subscribe, each entity is provided with the appropriate MWPS components, which are very easy to install. Super volunteer nodes are provided with special components that make it possible to delegate some of the load to their inner nodes; these inner nodes may in turn delegate some of the load to lower-level inner nodes, and so on. It is the responsibility of the parent super nodes to coordinate the delegation process and deliver the results back to the user. MWPS uses a simple behavior-based scheduler that takes into consideration the current load and previous behavior of processing nodes: nodes that fulfill their contracts within the expected time earn a high degree of trust, while nodes that fail to satisfy their contracts receive a lower degree of trust. MWPS is based on the .NET framework and provides the minimal level of security expected in distributed processing environments. Users and processing nodes are fully authenticated, and communications and messages between nodes are secured. The system has been implemented in C#. MWPS may be used by any group of people or companies to establish a parallel processing or grid environment.
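
The behavior-based scheduler is described only at a high level, so the following is a minimal sketch of one plausible reading, not the authors' C# implementation: each node carries a trust score that rises when a contract finishes within the expected time and falls when it does not, and new tasks go to the most trusted, least loaded node. The names and update constants are assumptions.

```python
# Hypothetical sketch of a behavior-based trust scheduler in the spirit
# of MWPS (constants and names are assumptions, not the paper's values).
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    trust: float = 0.5      # starts neutral, bounded to [0, 1]
    load: int = 0           # number of tasks currently assigned

    def record_contract(self, fulfilled_on_time: bool):
        # Reward on-time completion, penalize missed contracts.
        delta = 0.1 if fulfilled_on_time else -0.2
        self.trust = min(1.0, max(0.0, self.trust + delta))

def pick_node(nodes):
    # Prefer high trust; break ties toward lightly loaded nodes.
    return max(nodes, key=lambda n: (n.trust, -n.load))

nodes = [Node("volunteer-a"), Node("volunteer-b")]
nodes[0].record_contract(True)    # volunteer-a delivered on time
nodes[1].record_contract(False)   # volunteer-b missed its deadline
chosen = pick_node(nodes)
chosen.load += 1
print(chosen.name)  # volunteer-a
```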

A Novel Security Framework for the Web System

In this paper, a framework is presented that aims to build the most secure web system possible out of the available generic and web security technologies, and that can be used as a guideline by organizations building their web sites. The framework is designed to provide the necessary security services, to address known security threats, and to provide some cover against other security problems, especially unknown threats. The requirements that guided the design of the secure web system are discussed. The designed security framework is then simulated, and various quality of service (QoS) metrics are calculated to measure the performance of the system.

Information Quality Evaluation Framework: Extending ISO 25012 Data Quality Model

The World Wide Web, coupled with the ever-increasing sophistication of online technologies and software applications, puts greater emphasis on the need for even more sophisticated and consistent quality requirements modeling than traditional software applications do. Web sites and Web applications (WebApps) are becoming more information-driven and content-oriented, raising concerns about their information quality (InQ). The consistent and consolidated modeling of InQ requirements for WebApps at different stages of the life cycle still poses a challenge. This paper proposes an approach to specifying InQ requirements for WebApps by reusing and extending the ISO 25012:2008(E) data quality model. We also discuss the learnability aspect of information quality for WebApps. The proposed ISO 25012-based InQ framework is a step towards a standardized approach to evaluating the InQ of WebApps.
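
As a rough sketch of what reusing and extending the ISO 25012 model could look like in practice, the snippet below encodes a few of the standard's data quality characteristics (accuracy, completeness and currentness are among those ISO 25012 actually defines) plus a hypothetical "learnability" extension as per-WebApp requirement targets. The structure, metrics and thresholds are illustrative assumptions, not the paper's framework.

```python
# Illustrative encoding of InQ requirements per ISO 25012 characteristic,
# extended with a hypothetical "learnability" characteristic. Metrics,
# thresholds and the evaluation rule are assumptions for this sketch.
from dataclasses import dataclass

@dataclass
class InQRequirement:
    characteristic: str   # ISO 25012 characteristic or an extension
    metric: str           # how the characteristic is measured
    target: float         # minimum acceptable score in [0, 1]

requirements = [
    InQRequirement("accuracy",     "verified-facts ratio",       0.95),
    InQRequirement("completeness", "filled mandatory fields",    0.90),
    InQRequirement("currentness",  "content updated in 90 days", 0.80),
    InQRequirement("learnability", "user comprehension score",   0.75),  # extension
]

def evaluate(measured):
    """Check a WebApp's measured scores against each requirement."""
    return {r.characteristic: measured.get(r.characteristic, 0.0) >= r.target
            for r in requirements}

print(evaluate({"accuracy": 0.97, "completeness": 0.85,
                "currentness": 0.90, "learnability": 0.80}))
```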

Finding Authoritative Researchers on Academic Web Sites

In this paper, we present a methodology for finding authoritative researchers by analyzing academic Web sites. We show a case study in which we concentrate on the Web sites of a set of Czech computer science departments. We analyze the relations between them via hyperlinks and find the most important ones using several common ranking algorithms. We then examine the contents of the research papers present on these sites and determine the most authoritative Czech authors.
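
The abstract names only "common ranking algorithms"; PageRank is one standard choice for ranking sites by hyperlink structure, so the sketch below shows how such a ranking could be computed over an inter-department link graph. The department names and links are invented, and the paper may use different or additional algorithms.

```python
# Sketch: ranking department sites by hyperlink structure with PageRank,
# one of the "common ranking algorithms" such a study might use.
# Department names and links are invented for illustration.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("dept-a.example.cz", "dept-b.example.cz"),
    ("dept-a.example.cz", "dept-c.example.cz"),
    ("dept-b.example.cz", "dept-c.example.cz"),
    ("dept-c.example.cz", "dept-a.example.cz"),
])

scores = nx.pagerank(g, alpha=0.85)  # damping factor 0.85 is the usual default
for site, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {site}")
```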

Tracking Activity of Real Individuals in Web Logs

This paper describes an enhanced cookie-based method for counting the visitors of web sites, using a web log processing system that pursues the ambitious goal of creating countrywide statistics about the browsing practices of real human individuals. The focus is on describing a new, more efficient way of detecting the human beings behind web users by placing different identifiers on the client computers. We briefly introduce our processing system, designed to handle the massive amount of data records continuously gathered from the most important content providers in Hungary. We conclude by showing statistics over different time spans that compare the efficiency of multiple visitor counting methods with the one presented here, together with some interesting charts about content providers and web usage based on real data recorded in 2007.
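
The abstract contrasts the cookie-identifier approach with other counting methods; the sketch below illustrates the basic difference on made-up log records, counting "visitors" by persistent cookie identifier versus by IP address, which NAT and shared machines distort. Field names and records are hypothetical, not the paper's data format.

```python
# Sketch: counting unique visitors from web log records by two methods.
# Records and field names are hypothetical; a real system would also
# handle cookie churn, multiple identifiers per client, etc.

log_records = [
    {"ip": "10.0.0.1", "cookie_id": "c-111"},
    {"ip": "10.0.0.1", "cookie_id": "c-222"},  # two people behind one NAT
    {"ip": "10.0.0.2", "cookie_id": "c-111"},  # same person, new network
    {"ip": "10.0.0.3", "cookie_id": None},     # cookies disabled
]

by_ip = {r["ip"] for r in log_records}
by_cookie = {r["cookie_id"] for r in log_records if r["cookie_id"]}

print(f"unique by IP:     {len(by_ip)}")      # 3 -- conflates and splits users
print(f"unique by cookie: {len(by_cookie)}")  # 2 -- closer to real individuals
```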

A New Model of English-Vietnamese Bilingual Information Retrieval System

In this paper, we propose a new model of an English-Vietnamese bilingual information retrieval system. Although many CLIR systems have been researched and built, the accuracy of search results across the different languages that a CLIR system supports still needs improvement, especially in finding bilingual documents. The problems identified in this paper are the limitations of machine translation results and the extremely large collections of documents to be searched. We therefore try to establish a different model to overcome these problems.

Web-Based Real-Time Laboratory Applications for Analog and Digital Communication Courses with LabVIEW Access

Developments in science and technology lead to the use of new methods and techniques in education, as is the case in all fields. In particular, the internet contributes a variety of new methods for designing virtual and real-time laboratory applications in education. In this study, a real-time virtual laboratory is designed and implemented for analog and digital communications laboratory experiments using the LabVIEW program for the Electronics and Communication Department of Marmara University. In this application, students can access the virtual laboratory web site and perform their experiments without any limitation of time and location, so that they can observe the signals by changing the parameters of the experiment and evaluate the results.

Localizing and Experiencing Electronic Questionnaires in an Educational Web Site

One of the main research methods in humanistic studies is the collection and processing of data through questionnaires. This paper reports our experiences of localizing and adapting the phpESP package for electronic surveys, which led to a friendly on-line questionnaire environment offered through our department web site. After presenting the characteristics of this environment, we identify the expected benefits and present a questionnaire carried out in both the traditional and the electronic way. We present the respondents' feedback and then report the researchers' opinions. Finally, we propose ideas we intend to implement in order to further assist and enhance research based on this web-accessed electronic questionnaire environment.

Web Personalization to Build Trust in E-Commerce: A Design Science Approach

With the development of the Internet, E-commerce is growing at an exponential rate, and many online stores have been built to sell goods online. A major factor influencing the successful adoption of E-commerce is consumers' trust. For a new or unknown Internet business, consumers' lack of trust has been cited as a major barrier to proliferation. As web sites provide the key interface for consumers' use of E-commerce, we investigate the design of web sites to build trust in E-commerce from a design science approach. A conceptual model is proposed in this paper to describe the ontology of online transactions and human-computer interaction. Based on this conceptual model, we provide a personalized webpage design approach using a Bayesian network learning method. Experimental evaluations are designed to show the effectiveness of web personalization in improving consumers' trust in a new or unknown online store.
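
The abstract does not spell out the Bayesian network structure, so the snippet below is a deliberately simplified stand-in rather than the authors' method: it estimates, from hypothetical interaction data, the conditional probability that a user trusts the store given which trust cue was shown, and personalizes the page toward the cue with the higher estimate. A real Bayesian network learner would handle many variables and their dependencies.

```python
# Simplified stand-in for Bayesian network learning: estimate
# P(trust | cue shown) from hypothetical interaction data and pick the
# cue to display. Variable names and data are invented.
from collections import Counter

# (cue_shown, user_trusted) observations, e.g. from past sessions.
observations = [
    ("security_seal", True), ("security_seal", True), ("security_seal", False),
    ("customer_reviews", True), ("customer_reviews", True),
    ("customer_reviews", True), ("customer_reviews", False),
]

shown = Counter(cue for cue, _ in observations)
trusted = Counter(cue for cue, ok in observations if ok)

def p_trust_given_cue(cue, alpha=1.0):
    # Laplace-smoothed conditional probability estimate.
    return (trusted[cue] + alpha) / (shown[cue] + 2 * alpha)

cues = ["security_seal", "customer_reviews"]
best = max(cues, key=p_trust_given_cue)
print({c: round(p_trust_given_cue(c), 2) for c in cues})
print("personalize page with:", best)  # customer_reviews
```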

Folksonomy-based Recommender Systems with Users' Recent Preferences

Social bookmarking is an environment in which users' interests gradually change over time, so tag data associated with the current temporal period is usually more important than tag data temporally far from the current period. This implies that, in a social tagging system, items newly tagged by the user are more relevant than older items. This study proposes a novel recommender system that considers users' recent tag preferences. The proposed system includes the following stages: grouping similar users into clusters using an EM clustering algorithm, finding similar resources based on the user's bookmarks, and recommending the top-N items to the target user. The study examines the system's information retrieval performance using a dataset from del.icio.us, a well-known social bookmarking web site. Experimental results show that the proposed system is better and more effective than traditional approaches.
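
A common way to encode "recent tags matter more", offered here as a plausible reading of the abstract rather than the authors' exact formula, is exponential time decay on tag occurrences: the sketch below weights a user's tags by recency before any similarity computation. The half-life and the data are assumptions.

```python
# Sketch: time-decayed tag profile for a social bookmarking user.
# The exponential decay (30-day half-life) and the data are assumptions,
# not the paper's exact weighting scheme.

HALF_LIFE_DAYS = 30.0

def decay(age_days: float) -> float:
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

# (tag, days since the user applied it)
user_tags = [("python", 2), ("python", 5), ("travel", 200), ("ml", 10)]

profile = {}
for tag, age in user_tags:
    profile[tag] = profile.get(tag, 0.0) + decay(age)

# Recent tags dominate the profile even with fewer occurrences.
for tag, w in sorted(profile.items(), key=lambda kv: -kv[1]):
    print(f"{tag}: {w:.3f}")   # python > ml >> travel
```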

Efficient Web-Learning Collision Detection Tool for Five-Axis Machining

As networking has become popular, Web-learning tends to be a trend when designing a tool. Moreover, five-axis machining has been widely used in industry recently; however, it has potential axial-table collision problems. Thus this paper aims at proposing an efficient Web-learning collision detection tool for five-axis machining. Because collision detection consumes heavy resources that few devices can support, this research uses a systematic approach based on web knowledge to detect collisions. The methodologies include kinematics analyses of five-axis motions, the separating axis method for collision detection, and computer simulation for verification. The machine structure is modeled in STL format in CAD software. The input to the detection system is the G-code part program, which describes the tool motions that produce the part surface. This research produced a simulation program in the C programming language and demonstrated a five-axis machining example with collision detection on a web site. The system simulates the five-axis CNC motion of the tool trajectory, detects collisions according to the input G-codes, and also supports a high-performance web service benefiting from C. The results show that our method improves computational efficiency by a factor of 4.5 compared with the conventional detection method.
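
For readers unfamiliar with the separating axis method, the sketch below shows its core logic on toy 2-D convex polygons: project both shapes onto each candidate axis, and if the projection intervals fail to overlap on any axis, the shapes cannot collide. The paper's tool works on 3-D STL machine models in C; this simplified illustration is not its implementation.

```python
# Sketch of the separating axis test for two convex shapes (2-D here for
# brevity; five-axis machine parts would use 3-D meshes and more axes).
# Shapes are toy examples, not the paper's machine model.
import numpy as np

def project(vertices, axis):
    """Interval covered by a shape's vertices projected onto an axis."""
    dots = vertices @ axis
    return dots.min(), dots.max()

def separated(a, b):
    """True if some candidate axis separates convex polygons a and b."""
    # Candidate axes: normals of every edge of both polygons.
    for shape in (a, b):
        for i in range(len(shape)):
            edge = shape[(i + 1) % len(shape)] - shape[i]
            axis = np.array([-edge[1], edge[0]])  # perpendicular to edge
            amin, amax = project(a, axis)
            bmin, bmax = project(b, axis)
            if amax < bmin or bmax < amin:
                return True  # gap found: no collision possible
    return False  # projections overlap on every axis: shapes collide

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
far_square = square + np.array([3.0, 0.0])
near_square = square + np.array([0.5, 0.5])

print(separated(square, far_square))   # True  -> no collision
print(separated(square, near_square))  # False -> collision detected
```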

The Path to Web Intelligence Maturity

Web intelligence, if made personal, can fuel the process of building communications around the interests and preferences of each individual customer or prospect by providing specific behavioral insights about that individual. To become fully efficient, Web intelligence must reach a high level of maturity, passing through a process that involves five steps: (1) Web site analysis; (2) Web site and advertising optimization; (3) segment targeting; (4) interactive marketing (online only); and (5) interactive marketing (online and offline). Discussing these steps in detail, the paper uncovers the real gold mine that is personal-level Web intelligence.

Parental Attitudes as a Predictor of Cyber Bullying among Primary School Children

Problem Statement: Rapid technological developments of the 21st century have advanced our daily lives in various ways. Particularly in education, students frequently utilize technological resources to aid their homework and to access information; other common online activities include listening to the radio or watching television (26.9%) and e-mail (34.2%) [26]. Not surprisingly, the increase in the use of these technologies has also resulted in an increase in the use of e-mail, instant messaging, chat rooms, mobile phones, mobile phone cameras and web sites by adolescents to bully peers. As cyber bullying occurs in cyber space, less access to technologies would mean less cyber-harm. Therefore, the frequency of technology use is a significant predictor of cyber bullying and cyber victimization. Cyber bullies try to harm the victim using various media. These tools include sending derogatory texts via mobile phones, sending threatening e-mails and forwarding confidential e-mails to everyone on the contacts list. Another form of cyber bullying is to set up a humiliating web site and invite others to post comments. In other words, cyber bullies use e-mail, chat rooms, instant messaging, pagers, mobile texts and online voting tools to humiliate and frighten others and to create a sense of helplessness. No matter what type of bullying it is, it negatively affects its victims. Children who bully exhibit more emotional inhibition and attribute more negative self-statements to themselves compared to non-bullies. Students whose families are not sympathetic and who receive less emotional support are more prone to bully their peers. Bullies have authoritarian families and do not get along well with them. The family is the place where children's physical, social and psychological needs are satisfied and where their personalities develop. As the use of the internet became prevalent, so did parents' restrictions on their children's internet use. However, parents are unaware of the real harm. Studies that explain the relationship between parental attitudes and cyber bullying are scarce in the literature. Thus, this study aims to investigate the relationship between cyber bullying and parental attitudes in primary school. Purpose of Study: This study aimed to investigate the relationship between cyber bullying and parental attitudes. A second aim was to determine whether parental attitudes could predict cyber bullying and, if so, which variables could predict it significantly. Methods: The study had a cross-sectional and relational survey model. A demographic information form, questions about cyber bullying and a Parental Attitudes Inventory were administered to a total of 346 students (189 females and 157 males) registered at various primary schools. Data were analysed by multiple regression analysis using the software package SPSS 16.

Efficiency Evaluation of E-Commerce Websites

This study suggests a model with a new set of evaluation criteria to be used to measure the efficiency of real-world E-commerce websites. The evaluation criteria include the design, usability and performance of the websites, and the Data Envelopment Analysis (DEA) technique is used to measure website efficiency. An efficient web site is defined as a site that generates the most outputs using the smallest amount of inputs. Inputs are measurements representing the amount of effort required to build, maintain and operate the site. Output is the amount of traffic the site generates, measured as the average number of daily hits and the average number of daily unique visitors.
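
For readers unfamiliar with DEA, the sketch below solves the standard CCR multiplier model for each website with scipy: maximize the weighted outputs of the site under evaluation, with its weighted inputs normalized to 1 and no site allowed an efficiency above 1. The input/output figures are invented, and the paper's exact DEA formulation may differ.

```python
# Sketch: CCR (input-oriented) DEA multiplier model via linear
# programming. Website data are invented for illustration.
import numpy as np
from scipy.optimize import linprog

# Rows = websites; inputs = [build effort, maintenance effort],
# outputs = [avg daily hits, avg daily unique visitors].
X = np.array([[5.0, 3.0], [8.0, 4.0], [6.0, 7.0]])                 # inputs
Y = np.array([[900.0, 300.0], [1500.0, 420.0], [1000.0, 500.0]])   # outputs

def ccr_efficiency(k):
    n_in, n_out = X.shape[1], Y.shape[1]
    # Decision variables: output weights u (n_out), input weights v (n_in).
    c = np.concatenate([-Y[k], np.zeros(n_in)])     # maximize u . y_k
    A_eq = np.concatenate([np.zeros(n_out), X[k]]).reshape(1, -1)
    b_eq = [1.0]                                    # normalize v . x_k = 1
    A_ub = np.hstack([Y, -X])                       # u . y_j - v . x_j <= 0
    b_ub = np.zeros(X.shape[0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n_out + n_in))
    return -res.fun  # efficiency score in (0, 1]

for k in range(len(X)):
    print(f"site {k}: efficiency = {ccr_efficiency(k):.3f}")
```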