Abstract: Nowadays, computer worms, viruses and Trojan horses have become widespread; collectively, they are called malware. A decade ago, malware merely damaged computers by deleting or rewriting important files. Recent malware, however, appears designed to earn money. Some malware collects personal information so that attackers can obtain secrets such as online-banking passwords, evidence of a scandal, or contact addresses related to a target. Moreover, the relationship between money and malware has become more complex: many kinds of malware spawn bots to obtain springboards for further attacks. Meanwhile, countermeasures available to ordinary Internet users have come up against a blank wall. Pattern matching has become too wasteful of computer resources, since matching tools must handle the huge numbers of signatures derived from subspecies; virus-making tools can automatically generate such subspecies, and metamorphic and polymorphic malware are no longer exceptional. Recently, malware-checking sites have appeared that scan content in place of users' PCs, but a new type of malicious site has emerged that evades these checks. In this paper, existing web-related protocols and methods are reconsidered in terms of protection from current attacks, and a new protocol and method are proposed for securing the web.
Abstract: Independent spanning trees (ISTs) provide a number of advantages in data broadcasting, such as their use in fault-tolerant network protocols for distributed computing and in bandwidth allocation. However, constructing multiple ISTs is considered hard for arbitrary graphs. In this paper we present an efficient algorithm for constructing ISTs on hypercubes that requires minimum resources.
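The abstract's construction is not reproduced here, but the defining property of ISTs, internally vertex-disjoint paths to a common root, is easy to verify programmatically. The sketch below (a hypothetical checker, not the paper's algorithm) validates that property for candidate spanning trees given as parent maps, illustrated on the 2-dimensional hypercube Q2:

```python
def root_path(parent, v, root=0):
    """Follow parent pointers from v up to the root; return the path."""
    path = [v]
    while path[-1] != root:
        path.append(parent[path[-1]])
    return path

def are_independent(trees, n_nodes, root=0):
    """Check that, for each node v, the root paths of v in the different
    trees share no internal vertices (the IST property)."""
    for v in range(n_nodes):
        if v == root:
            continue
        internal = [set(root_path(p, v, root)[1:-1]) for p in trees]
        for i in range(len(internal)):
            for j in range(i + 1, len(internal)):
                if internal[i] & internal[j]:
                    return False
    return True

# Two spanning trees of Q2 (nodes 0..3, root 0) as parent maps;
# this particular pair happens to be independent.
t0 = {1: 0, 2: 0, 3: 2}
t1 = {1: 0, 2: 0, 3: 1}
```

Such a checker is also a convenient correctness test for any IST construction algorithm on small hypercubes.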
Abstract: In this paper we describe the design and implementation of a parallel algorithm for data assimilation with the ensemble Kalman filter (EnKF) for the oil reservoir history matching problem. The use of a large number of observations from time-lapse seismic data leads to a large turnaround time for the analysis step, in addition to the time-consuming simulations of the realizations. For efficient parallelization it is important to consider parallel computation at the analysis step. Our experiments show that parallelizing the analysis step in addition to the forecast step scales well, exploiting the same set of resources with some additional effort.
Abstract: Simulation is a very powerful method for high-performance and high-quality design of distributed systems, and at present perhaps the only practical one, given the heterogeneity, complexity and cost of such systems. In Grid environments, for example, it is hard or even impossible to evaluate scheduler performance in a repeatable and controllable manner, as resources and users are distributed across multiple organizations with their own policies. In addition, Grid test-beds are limited, and creating an adequately sized test-bed is expensive and time consuming. Scalability, reliability and fault tolerance are important requirements for distributed systems that support distributed computation; a distributed system with these characteristics is called dependable. Large environments, such as Clouds, offer unique advantages, including low cost, dependability and QoS satisfaction for all users. Resource management in such environments requires efficient scheduling algorithms guided by QoS constraints. This paper presents a performance evaluation of scheduling heuristics guided by different optimization criteria. The distributed scheduling algorithms are analyzed with respect to satisfying users' constraints while taking into account the independent capabilities of resources; this analysis acts as a profiling step for algorithm calibration. The performance evaluation is based on simulation with MONARC, a powerful tool for simulating large-scale distributed systems. The novelty of this paper lies in synthetic analysis results that offer guidelines for scheduler-service configuration and support empirically based decisions. The results can inform optimizations of existing Grid DAG scheduling and the selection of the proper DAG scheduling algorithm in various practical situations.
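As one concrete example of the class of criterion-guided heuristics such a simulation study might profile, a minimal min-min list-scheduling sketch is shown below. This is not the paper's own algorithm; the task/resource names and the simple speed-factor cost model are illustrative assumptions.

```python
def min_min_schedule(tasks, resources):
    """Greedy min-min heuristic: repeatedly assign the unscheduled task
    whose minimum completion time over all resources is smallest.

    `tasks` maps task id -> workload; `resources` maps resource id ->
    speed factor (execution time = workload / speed). Returns the
    task-to-resource assignment and the resulting makespan.
    """
    ready = {r: 0.0 for r in resources}   # time at which each resource frees up
    assignment = {}
    pending = dict(tasks)
    while pending:
        # pick the (task, resource) pair with the earliest completion time
        task, res, finish = min(
            ((t, r, ready[r] + w / resources[r])
             for t, w in pending.items() for r in resources),
            key=lambda x: x[2],
        )
        assignment[task] = res
        ready[res] = finish
        del pending[task]
    return assignment, max(ready.values())
```

In a simulator such as the one described, this heuristic would be one candidate to calibrate against makespan- or QoS-oriented criteria.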
Abstract: To provide a better understanding of the fair-share policies supported by current production schedulers and their impact on scheduling performance, this study evaluates a relative fair-share policy as supported in four well-known production job schedulers. The experimental results show that fair share indeed keeps heavy-demand users from dominating the system resources. However, detailed per-user performance analysis shows that some types of users may suffer unfairness under fair share, possibly due to the priority mechanisms used by current production schedulers. These are typically not heavy-demand users, but users with a mixture of jobs that are not spread out over time.
Abstract: In the micro- and nano-technology industry, the «clean rooms» dedicated to chip manufacturing are equipped with the most sophisticated equipment and tools. These consume a large number of resources according to strict specifications to achieve optimal operation and results. The distribution of «utilities» to production is ensured by teams using a supervision tool. Studies show the value of controlling the various parameters of production and/or distribution in real time through a reliable and effective supervision tool. This document covers a large part of the functions that the supervisor must provide, together with complementary diagnosis and simulation functionalities that prove very useful in our case, where the supervised installations are complex and constantly evolving.
Abstract: Next Generation Wireless Network (NGWN) is
expected to be a heterogeneous network which integrates all different
Radio Access Technologies (RATs) through a common platform. A major challenge is how to allocate users to the RAT most suitable for them. An optimized allocation can maximize the efficient use of radio resources, achieve better performance for service providers and provide Quality of Service (QoS) at low cost to users. Currently, Radio Resource Management (RRM) is implemented efficiently only for the individual RAT for which it was developed and is not suitable for a heterogeneous network. Common RRM (CRRM) has been proposed to manage radio resource utilization in such networks. This paper presents a user-level Markov model for a network of three co-located RATs. Load-balancing-based and service-based CRRM algorithms are studied using the presented Markov model, and their performance is compared in terms of traffic distribution, new-call blocking probability, vertical handover (VHO) call dropping probability and throughput.
Abstract: Technological innovation capability (TIC) is
defined as a comprehensive set of characteristics of a firm that
facilitates and supports its technological innovation strategies.
An audit to evaluate the TICs of a firm may trigger
improvement in its future practices. Such an audit can be used
by the firm for self assessment or third-party independent
assessment to identify problems of its capability status. This
paper attempts to develop such an auditing framework that
can help to determine the subtle links between innovation
capabilities and business performance; and to enable the
auditor to determine whether good practice is in place. The
seven TICs in this study include learning, R&D, resources
allocation, manufacturing, marketing, organization and
strategic planning capabilities. Empirical data was acquired
through a survey study of 200 manufacturing firms in the
Hong Kong/Pearl River Delta (HK/PRD) region. Structural
equation modelling was employed to examine the
relationships among TICs and various performance indicators:
sales performance, innovation performance, product
performance, and sales growth. The results revealed that
different TICs have different impacts on different
performance measures. Organization capability was found to
have the most influential impact. Hong Kong manufacturers
are now facing the challenge of high-mix-low-volume
customer orders. In order to cope with this change, good
capability in organizing different activities among various
departments is critical to the success of a company.
Abstract: Economic Load Dispatch (ELD) is a method of determining
the most efficient, low-cost and reliable operation of a power
system by dispatching available electricity generation resources to
supply load on the system. The primary objective of economic
dispatch is to minimize total cost of generation while honoring
operational constraints of available generation resources. In this paper
an intelligent water drop (IWD) algorithm has been proposed to
solve the ELD problem with the objective of minimizing the total cost of generation. The intelligent water drop algorithm is a swarm-based, nature-inspired optimization algorithm modeled on natural rivers. A natural river often finds good paths among the many possible paths on its way from source to destination, finally converging on a nearly optimal path. These ideas are embedded into the proposed algorithm for solving the economic load dispatch problem. The main advantages of the proposed technique are that it is easy to implement and capable of finding a feasible, near-globally-optimal solution with little computational effort. To illustrate the effectiveness of the proposed method, it has been tested on 6-unit and 20-unit test systems with incremental fuel cost functions that take valve-point loading effects into account. Numerical results show that the proposed method has good convergence properties and better solution quality than other algorithms reported in the recent literature.
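Independently of the IWD search itself, the objective such an algorithm minimizes can be sketched. Assuming the standard valve-point cost model F_i(P_i) = a_i + b_i P_i + c_i P_i^2 + |e_i sin(f_i (P_i^min - P_i))| and a simple penalty for violating the power balance (the coefficient values below are purely illustrative):

```python
import math

def unit_cost(p, a, b, c, e, f, pmin):
    """Quadratic fuel cost plus the rectified-sine valve-point term."""
    return a + b * p + c * p * p + abs(e * math.sin(f * (pmin - p)))

def dispatch_cost(powers, units, demand, penalty=1e6):
    """Total generation cost with a penalty for power-balance violation.

    `units` is a list of (a, b, c, e, f, pmin, pmax) coefficient tuples;
    `powers` is the candidate dispatch being evaluated.
    """
    cost = sum(unit_cost(p, *u[:5], u[5]) for p, u in zip(powers, units))
    imbalance = abs(sum(powers) - demand)
    return cost + penalty * imbalance
```

A metaheuristic such as IWD would evaluate many candidate `powers` vectors against this function and keep the cheapest feasible one.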
Abstract: Reliability Centered Maintenance (RCM) is one of the most widely used methods in modern power systems for scheduling maintenance cycles and determining inspection priorities. In order to apply the RCM method to the Smart Grid, a preliminary study of the rearranged system structure should be performed, owing to the introduction of additional installations such as renewable and sustainable energy resources, energy storage devices and advanced metering infrastructure. This paper proposes a new method to
evaluate the priority of maintenance and inspection of the power
system facilities in the Smart Grid using the Risk Priority Number. In
order to calculate that risk index, it is required that the reliability
block diagram should be analyzed for the Smart Grid system. Finally,
the feasible technical method is discussed to estimate the risk
potential as part of the RCM procedure.
Abstract: Grid computing provides a virtual framework for
controlled sharing of resources across institutional boundaries.
Recently, trust has been recognised as an important factor for
selection of optimal resources in a grid. We introduce a new method
that provides a quantitative trust value, based on the past interactions
and present environment characteristics. This quantitative trust value
is used to select a suitable resource for a job and eliminates run time
failures arising from incompatible user-resource pairs. The proposed work acts as a tool to calculate the trust values of the various components of the grid, thereby improving the success rate of the jobs submitted to resources on the grid. Access to a resource depends not only on the identity and behaviour of the resource but also on its transaction context, time of transaction, connectivity bandwidth, availability and load. The quality of a recommender is also evaluated, based on the accuracy of the feedback it provides about a resource. The jobs are submitted for
execution to the selected resource after finding the overall trust value
of the resource. The overall trust value is computed with respect to
the subjective and objective parameters.
Abstract: This paper proposes a novel methodology for enabling
debugging and tracing of production web applications without
affecting their normal flow and functionality. This method of debugging
enables developers and maintenance engineers to replace a set of
existing resources such as images, server side scripts, cascading
style sheets with another set of resources per web session. The new
resources will only be active in the debug session and other sessions
will not be affected. This methodology will help developers in tracing
defects, especially those that appear only in production environments
and in exploring the behaviour of the system. A realization of the
proposed methodology has been implemented in Java.
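While the paper's realization is in Java, the core per-session substitution idea can be sketched compactly. The resolver below (all names are illustrative, not the paper's API) serves replacement resources only to sessions registered for debugging, leaving every other session untouched:

```python
def resolve_resource(path, session, overrides):
    """Return the resource path to serve for `path` in this web session.

    `overrides` maps a debug-session id to a dict of
    {original_path: replacement_path}. Only sessions registered in
    `overrides` see replaced resources; all other sessions get the
    original resource unchanged.
    """
    session_map = overrides.get(session.get("id"), {})
    return session_map.get(path, path)
```

A request filter in the web tier would call such a resolver before dispatching each resource request, so the substitution is invisible to non-debug users.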
Abstract: Enterprises need a strategic plan to retain their skilled employees and manage their careers, to sustain their existence, to maintain growth and leadership, to reach objectives that increase the value of the enterprise, and to remain unaffected by changing demographic structures. When the long-term career expectations of skilled employees align with the enterprise's interests, the skill management process is directly related to career management. With a long-term plan, enterprises should cover future labor-force needs through systematic career development programs and be prepared for developments at all times. Skill management is considered a practice in which career mobility is planned so that skilled employees are prepared for high-level positions. Career planning is the planning of an employee's progress or promotion within the organization for which he works, by developing his knowledge, skills, abilities and motives; from the individual's perspective, it is planning one's future: the position one wants to hold, the area one wants to work in, and the objectives one wants to reach. With the aim of contributing to this discussion, this study examines the concept of career management and how it is perceived, in a comparative manner.
Abstract: The increasing demand for IT resources drives enterprises to use the cloud as a cheap and scalable solution. Cloud computing delivers on its promises by using the virtual machine as the basic unit of computation. However, a virtual machine's predefined settings may not be sufficient to satisfy jobs' QoS requirements. This paper addresses the problem of mapping jobs that have critical start deadlines onto virtual machines with predefined specifications. These virtual machines are hosted by physical machines and share a fixed amount of bandwidth. The paper proposes an algorithm that uses the bandwidth of idle virtual machines to increase the quota of the virtual machines nominated as executors of urgent jobs. An empirical study is presented to evaluate the impact of the proposed model on impatient jobs. The results show the importance of dynamic bandwidth allocation in virtualized environments and its effect on the throughput metric.
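A minimal sketch of such a reallocation step is shown below. The field names, the idle threshold, and the even split of surplus among urgent executors are all illustrative assumptions, not the paper's algorithm:

```python
def reallocate_bandwidth(vms, idle_threshold=0.1):
    """Redistribute bandwidth from idle VMs to urgent-job executors.

    `vms` maps a VM name to a dict with its allocated bandwidth `bw`,
    current `utilization` (0..1) and an `urgent` flag. Idle VMs keep a
    small reserve of their quota; the surplus is split evenly among the
    VMs nominated as executors of urgent jobs.
    """
    idle = [v for v in vms.values()
            if v["utilization"] < idle_threshold and not v["urgent"]]
    urgent = [v for v in vms.values() if v["urgent"]]
    if not idle or not urgent:
        return vms
    surplus = sum(v["bw"] * (1 - idle_threshold) for v in idle)
    for v in idle:
        v["bw"] *= idle_threshold        # keep only a small reserve
    share = surplus / len(urgent)
    for v in urgent:
        v["bw"] += share                 # boost the urgent executors
    return vms
```

Note that the total bandwidth of the hosting physical machine is conserved; only its division among the co-hosted VMs changes.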
Abstract: Our adaptive multimodal system aims at correctly presenting a mathematical expression to visually impaired users. Given an interaction context (i.e. a combination of user, environment and system resources), the complexity of the expression itself and the user's preferences, the suitability scores of different presentation formats are calculated. Unlike current state-of-the-art solutions, our approach takes the user's situation into account and does not impose a solution unsuited to his context and capacity. In this work, we present our methodology for calculating mathematical expression complexity and the results of our experiment. Finally, this paper discusses the concepts and principles applied in our system, as well as their validation through case studies. This work is our original contribution to ongoing research on making informatics more accessible to handicapped users.
Abstract: Understanding biological behavior and phenomena at the system level requires various elements such as gene sequences, protein structures, gene functions and metabolic pathways. Challenging problems include representing, learning and reasoning about these biochemical reactions, gene and protein structures, the relation between genotype and phenotype, and the expression system built on those interactions. The goal of our work is to understand the behavior of interaction networks and to model their evolution in time and space. In this study we propose an ontological meta-model for knowledge representation of genetic regulatory networks. In artificial intelligence, an ontology comprises the fundamental categories and relations that provide a framework for knowledge models. Domain ontologies are now commonly used to enable heterogeneous information resources, such as knowledge-based systems, to communicate with each other. The interest of our model is that it represents spatial, temporal and spatio-temporal knowledge. We validated our propositions on the genetic regulatory network of the Arabidopsis thaliana flower.
Abstract: In Virtual organization, Knowledge Discovery (KD)
service contains distributed data resources and computing grid nodes.
The computational grid is integrated with the data grid to form a Knowledge Grid, which implements the Apriori algorithm for mining association rules over the grid network. This paper describes the development of a parallel and distributed version of the Apriori algorithm on the Globus Toolkit, using the Message Passing Interface extended with Grid Services (MPICH-G2). The Knowledge Grid is built on top of the data and computational grids to support decision making in real-time applications. A case study describes the design and implementation of local and global mining of frequent itemsets. The
experiments were conducted on different configurations of grid
network, and the computation time was recorded for each operation. We analyzed our results across the various grid configurations; they show that the speedup in computation time is almost superlinear.
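The local/global split described above follows the familiar count-distribution pattern: each grid node counts candidate itemsets over its own data partition, and the local counts are then merged (the reduction MPI would perform) before applying the global support threshold. A minimal single-process sketch of that pattern, with illustrative function names, is:

```python
from itertools import combinations
from collections import Counter

def local_counts(partition, k):
    """Count candidate k-itemsets in one partition (one grid node)."""
    counts = Counter()
    for transaction in partition:
        for itemset in combinations(sorted(transaction), k):
            counts[itemset] += 1
    return counts

def global_frequent(partitions, k, min_support):
    """Merge the per-node counts and keep itemsets whose total count
    meets the global support threshold."""
    total = Counter()
    for part in partitions:
        total.update(local_counts(part, k))
    return {i: c for i, c in total.items() if c >= min_support}
```

In the distributed setting, `local_counts` runs concurrently on each node and the merge in `global_frequent` becomes a collective reduction over the grid.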
Abstract: This is a conceptual paper on the application of open
innovation in three case examples of Apple, Nintendo, and Nokia.
Utilizing key concepts from research into managerial and
organizational cognition, we describe how each company overcame
barriers to utilizing open innovation strategy in R&D and
commercialization projects. We identify three levels of barriers (cognitive, behavioral, and institutional) and describe how the companies balanced internal and external resources to launch products that were instrumental in reinventing themselves in mature markets.
Abstract: This paper explores the social and political imperatives in the sphere of public policy relating to social justice. In India, the colonial legacy and post-colonial social and political pressures sustained the appropriation of the 'caste' category in allocating public resources to the backward classes of citizens. For several reasons, an 'economic' category could not be adopted for allocating resources. This paper examines the reasons behind these deliberative exercises and policy formulations, and seeks an alternative framework for realizing social justice in terms of a unified category. This attempt can be viewed as a reconciliation of traditional and modern values toward a viable alternative in public policy making.
Abstract: Frauds in insurance industry are one of the major
sources of operational risk of insurance companies and constitute a
significant portion of their losses. Every sound company on the market aims to improve its processes for uncovering fraud and invests resources to reduce it. This article addresses the fraud management area through an extension of an existing Business Intelligence solution. We describe the framework of such a solution and share with readers the benefits it brings to insurance companies that adopt this approach in their fight against insurance fraud.