Abstract: Nowadays, the dissemination of information extends across the distributed world, where selecting the servers relevant to a user request is an important problem in distributed information retrieval. Over the last decade, several research studies on this issue have sought optimal solutions, and many collection selection approaches have been proposed. In this paper, we propose a new collection selection approach that takes into consideration the number of documents in a collection that contain the query terms and the weights of those terms in these documents. We tested our method, and our studies show that this technique can compete with the state-of-the-art algorithms chosen to evaluate the performance of our approach.
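The scoring idea described in this abstract, ranking collections by how many of their documents contain the query terms and by the weights of those terms, can be sketched as follows. The exact formula, the data layout, and the function names are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of a collection-selection score: each collection is
# ranked by the number of its documents containing each query term, weighted
# by the average weight of that term in those documents. This formula is an
# assumption for illustration only.

def score_collection(query_terms, collection):
    """collection: list of documents, each a dict mapping term -> weight."""
    score = 0.0
    for term in query_terms:
        docs_with_term = [d for d in collection if term in d]
        if docs_with_term:
            avg_weight = sum(d[term] for d in docs_with_term) / len(docs_with_term)
            score += len(docs_with_term) * avg_weight  # count x weight
    return score

def select_collections(query_terms, collections, k=1):
    """Return the ids of the k highest-scoring collections (servers)."""
    ranked = sorted(collections,
                    key=lambda c: score_collection(query_terms, collections[c]),
                    reverse=True)
    return ranked[:k]
```

A broker would evaluate this score against per-collection term statistics and forward the query only to the top-k servers.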
Abstract: Web application architecture is important for achieving the desired performance of an application. Performance analysis studies are conducted to evaluate existing or planned systems. Web applications are used by hundreds of thousands of users simultaneously, which increases the risk of server failure in real-time operations. We use Coloured Petri Nets (CPNs), a very powerful tool for modelling the dynamic behaviour of a web application system. CPNs extend the vocabulary of ordinary Petri nets and add features that make them suitable for modelling large systems. The major focus of this work is the server side of web applications. The presented work focuses on modelling restructuring aspects, with a major focus on concurrency and architecture, using CPNs. It also aims to determine the appropriate architecture for web and database servers given the number of concurrent users.
Abstract: Digital technologies offer many opportunities in the design and implementation of brand communication and advertising. Augmented reality (AR) is an innovative technology in marketing communication, built on the premise that virtual interaction with a product ad offers additional value to consumers. AR enables consumers to obtain (almost) real product experiences by way of virtual information even before the purchase of a certain product. The aim of AR applications in advertising is the in-depth examination of product characteristics to enhance product knowledge as well as brand knowledge. Interactive design of advertising provides observers with an intense examination of a specific advertising message and therefore leads to better brand knowledge. The elaboration likelihood model and the central route to persuasion strongly support this argumentation. Nevertheless, AR in brand communication is still at an initial stage, and scientific findings about the impact of AR on information processing and brand attitude are therefore rare. The aim of this paper is to empirically investigate the potential of AR applications in combination with traditional print advertising. To that effect, an experimental design with different levels of interactivity is built to measure the impact of the interactivity of an ad on different variables of advertising effectiveness.
Abstract: As the feature sizes of recent Complementary Metal Oxide Semiconductor (CMOS) devices decrease, the influence of static power comes to dominate their energy consumption. Thus, the power savings obtainable from Dynamic Voltage and Frequency Scaling (DVFS) are diminishing, and the temporary shutdown of cores or other microchip components becomes more worthwhile. A consequence of powering off unused parts of a chip is that the relative difference between idle and fully loaded power consumption increases. This means that future chips and whole server systems gain more power-saving potential through power-aware load balancing, whereas formerly this power-saving approach had only limited effect and thus was not widely adopted. While powering off complete servers has been used to save energy, it will be superfluous in many cases once cores can be powered down. An important advantage that comes with this is a greatly reduced time to respond to increased computational demand. We include the above developments in a server power model and quantify the advantage. Our conclusion is that datacenter strategies for when to power off server systems might in the future be used at the core level, while load balancing mechanisms previously used at the core level might in the future be used at the server level.
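The gap between idle and fully loaded power consumption discussed above can be illustrated with a simple linear server power model, a common simplification rather than the paper's exact model. All numeric values below are assumptions for illustration.

```python
import math

# Illustrative linear power model: a server draws p_idle watts when idle and
# rises linearly to p_peak at full load. Powering off idle servers (or cores)
# removes their idle draw entirely, which is where consolidation saves energy.
# All parameter values are illustrative assumptions.

def server_power(utilization, p_idle, p_peak):
    """Power draw (watts) at a utilization in [0, 1]."""
    return p_idle + (p_peak - p_idle) * utilization

def cluster_power(load, n_servers, p_idle, p_peak, consolidate=False):
    """Total power for a cluster handling `load` (in units of one server's
    capacity). With consolidation, unused servers are powered off and the
    load is packed onto the minimum number of active servers."""
    active = max(1, math.ceil(load)) if consolidate else n_servers
    per_server_load = load / active
    return active * server_power(per_server_load, p_idle, p_peak)
```

For example, with p_idle = 100 W and p_peak = 200 W, spreading one server's worth of load over four always-on servers costs 500 W, while consolidating it onto one server and powering off the rest costs 200 W, which is the power-aware load balancing advantage the abstract quantifies.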
Abstract: In this paper, we propose a computer-aided solution based on Genetic Algorithms to reduce the effort of drafting the reports required for a product launch (the FMEA analysis and the Control Plan) and to improve the knowledge base of development teams for future projects. The solution allows the design team to enter the data required for the FMEA. The actual analysis is performed using Genetic Algorithms to find an optimum between the RPN risk factor and the cost of production. A feature of Genetic Algorithms is that they can be used to find solutions to multi-criteria optimization problems. In our case, the three specific FMEA risk factors are considered alongside the reduction of production cost. The analysis tool generates final reports for all FMEA processes. The data obtained in the FMEA reports are automatically integrated, together with other entered parameters, into the Control Plan. The solution is implemented as an application running on an intranet on two servers: one containing the analysis and plan generation engine, and the other containing the database where the initial parameters and results are stored. The results can then be used as starting solutions in the synthesis of other projects. The solution was applied to welding, laser cutting, and bending processes for manufacturing bus chassis. The advantages of the solution are the efficient elaboration of documents in the current project, by automatically generating the FMEA and Control Plan reports using multi-criteria optimization of production, and the building of a solid knowledge base for future projects. The solution we propose is a cheap alternative to other solutions on the market, as it uses Open Source tools in its implementation.
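A Genetic Algorithm for the risk-versus-cost trade-off described above can be sketched as follows. The mitigation actions, their effects, and the weighted fitness function are hypothetical illustrations, not the paper's actual model or data.

```python
import random

# Minimal genetic-algorithm sketch for a multi-criteria FMEA trade-off:
# each gene decides whether an optional mitigation action is taken; actions
# reduce the RPN risk factor but add production cost. All numbers and the
# weighted fitness are illustrative assumptions.

ACTIONS = [  # (rpn_reduction, added_cost) per optional mitigation action
    (120, 300), (80, 100), (60, 250), (40, 50), (100, 400),
]
BASE_RPN, W_RISK, W_COST = 400, 1.0, 0.5

def fitness(genome):
    """Lower is better: weighted sum of residual RPN and added cost."""
    rpn = BASE_RPN - sum(r for (r, _), g in zip(ACTIONS, genome) if g)
    cost = sum(c for (_, c), g in zip(ACTIONS, genome) if g)
    return W_RISK * max(rpn, 0) + W_COST * cost

def evolve(pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in ACTIONS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]           # selection: keep best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(ACTIONS))   # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(len(ACTIONS))        # point mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)
```

In the real solution, the fitness would be derived from the three FMEA risk factors (severity, occurrence, detection) and the production cost entered by the design team.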
Abstract: Cloud computing is a new technology in industry and academia. The technology has grown and matured over the last half-decade and has proven its significant role in the changing environment of IT infrastructure, where cloud services and resources are offered over the network. Cloud technology enables users to use services and resources without being concerned about the technical implications of the technology. Substantial research work has been performed on the usage of cloud computing in educational institutes, and the majority of it provides cloud services over high-end blade servers or other high-end CPUs. This paper, however, proposes a new stack called “CiCKAStack” which provides cloud services over unutilized computing resources, namely commodity computers. “CiCKAStack” provides IaaS and PaaS using the underlying commodity computers. This not only increases the utilization of existing computing resources but also provides an organized file system, on-demand computing resources, and a design and development environment.
Abstract: In this paper we present some spamming techniques, their behaviour, and possible solutions. We have analyzed how spammers enter online social networking sites (OSNSs) to target them, and the diverse techniques they use for this purpose. Spamming is a very common issue in the present era of the Internet, especially on online social networking sites (such as Facebook, Twitter, and Google+). Spam messages waste Internet bandwidth and the storage space of servers. On social networking sites, spammers often disguise themselves by creating fake accounts and hijacking users’ accounts for personal gain. They behave like normal users and continually change their spamming strategies. The following spamming techniques are discussed in this paper: clickjacking, socially engineered attacks, cross-site scripting, URL shortening, and drive-by downloads. We have used the Elgg framework to demonstrate some of these spamming threats and to implement the corresponding solutions.
Abstract: In this paper, we propose an automatic verification technology for software patches in user virtual environments on IaaS Cloud, to decrease the verification costs of patches. Nowadays, IaaS services have spread, and many users can customize virtual machines on IaaS Cloud like their own private servers. Regarding software patches for the OS or middleware installed on virtual machines, users need to apply and verify these patches by themselves, which increases users' operation costs. Our proposed method replicates user virtual environments, extracts verification test cases for user virtual environments from a test case DB, distributes patches to virtual machines in the replicated environments, and conducts those test cases automatically in the replicated environments. We have implemented the proposed method on OpenStack using Jenkins and confirmed its feasibility. Using the implementation, we confirmed the reduction in test case creation effort achieved by our proposed idea of a 2-tier abstraction of software functions and test cases. We also evaluated the automatic verification performance of environment replication, test case extraction, and test case execution.
Abstract: In-memory database systems are becoming popular due to the availability and affordability of sufficiently large RAM and processors in modern high-end servers with the capacity to manage large in-memory database transactions. While fast and reliable in-memory systems are still being developed to overcome cache misses, CPU/IO bottlenecks, and distributed transaction costs, disk-based data stores still serve as the primary persistence layer. In addition, with the recent growth in multi-tenancy cloud applications and the associated security concerns, many organisations consider the trade-offs and continue to require the fast and reliable transaction processing of disk-based database systems as an available choice. For these organisations, the only way of increasing throughput is by improving the performance of disk-based concurrency control. This warrants a hybrid database system with the ability to selectively apply enhanced disk-based data management within the context of in-memory systems, which would help improve overall throughput. The general view is that in-memory systems substantially outperform disk-based systems. We question this assumption and examine how a modified variation of access invariance, which we call enhanced memory access (EMA), can be used to allow very high levels of concurrency in the pre-fetching of data in disk-based systems. We demonstrate how this prefetching in disk-based systems can yield close to in-memory performance, which paves the way for improved hybrid database systems. This paper proposes a novel EMA technique and presents a comparative study between disk-based EMA systems and in-memory systems running on hardware configurations of equivalent power in terms of the number of processors and their speeds. The results of the experiments conducted clearly substantiate that, when used in conjunction with all concurrency control mechanisms, EMA can increase the throughput of disk-based systems to levels quite close to those achieved by in-memory systems. The promising results of this work show that enhanced disk-based systems facilitate improved hybrid data management within the broader context of in-memory systems.
Abstract: The validity, integrity, and impacts of the IT systems of the US federal courts have been studied as part of the Human Rights Alert-NGO (HRA) submission for the 2015 Universal Periodic Review (UPR) of human rights in the United States by the Human Rights Council (HRC) of the United Nations (UN). The current report includes an overview of IT system analysis, data mining, and case studies. System analysis and data mining show: development and implementation with no lawful authority; servers of unverified identity; invalidity in the implementation of electronic signatures, authentication instruments and procedures, and authorities and permissions; discrimination in access against the public and unrepresented (pro se) parties and in favor of attorneys; and widespread publication of invalid judicial records and dockets, leading to their false representation and false enforcement. A series of case studies documents the impacts on individuals' human rights, on banking regulation, and on international matters. The significance is discussed in the context of various media and expert reports, which describe unprecedented corruption of the US justice system today and question whether the US Constitution has in fact been suspended. Similar findings were previously reported for the IT systems of the State of California and the State of Israel, and were incorporated, subject to professional HRC staff review, into the UN UPR reports (2010 and 2013). Solutions are proposed, based on the principles of publicity of the law and the separation of powers: reliance on US IT and legal experts accountable to the legislative branch, enhanced transparency, and ongoing vigilance by human rights and internet activists. IT experts should assume more prominent civic duties in safeguarding civil society in our era.
Abstract: The growth of wireless devices strains the availability of limited frequencies or spectrum bands, as spectrum is a natural resource that cannot be expanded. Meanwhile, the licensed frequencies are idle most of the time. Cognitive radio is one of the solutions to these problems. Cognitive radio is a promising technology that allows unlicensed users, known as secondary users (SUs), to access licensed bands without interfering with licensed users, or primary users (PUs). As cloud computing has become popular in recent years, cognitive radio networks (CRNs) can be integrated with cloud platforms. One of the important issues in CRNs is security. It is a problem because CRNs use radio frequencies as a transmission medium and thus share the same issues as wireless communication systems. Another critical issue in CRNs is performance. Security has an adverse effect on performance, and there are trade-offs between them. The goal of this paper is to investigate the performance-security trade-off in CRNs supported by cloud platforms. Furthermore, Queueing Network Models with preemptive resume and preemptive repeat identical priority are applied in this project to measure the impact of security on performance in CRNs with or without a cloud platform. The generalized exponential (GE) type distribution is used to reflect the bursty inter-arrival and service times at the servers. The results show that the best performance is obtained when security is disabled and the cloud platform is enabled.
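The generalized exponential (GE) distribution mentioned above is commonly constructed as a mixture: with some probability the inter-arrival time is zero (a batch arrival, which produces the burstiness), otherwise it is exponential. The sketch below samples from that standard GE construction; the queueing network model itself is not reproduced, and the parameter values are illustrative.

```python
import random

# Sampling from a generalized-exponential (GE) distribution with mean 1/nu
# and squared coefficient of variation scv > 1: with probability 1 - tau the
# delay is zero (bursty batch arrival), otherwise it is exponential with rate
# tau * nu, where tau = 2 / (scv + 1). This is the standard GE construction;
# parameter values below are illustrative assumptions.

def ge_sample(nu, scv, rng):
    tau = 2.0 / (scv + 1.0)
    if rng.random() > tau:
        return 0.0                      # zero inter-arrival time: batch
    return rng.expovariate(tau * nu)    # exponential tail

rng = random.Random(42)
samples = [ge_sample(nu=1.0, scv=4.0, rng=rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)      # should be close to 1/nu = 1.0
```

With scv = 4, tau = 0.4, so about 60% of inter-arrival times are zero, reproducing the bursty traffic the queueing model assumes at the servers.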
Abstract: Cloud outsourced storage is one of the important services in cloud computing. Cloud users upload data to cloud servers to reduce the cost of managing data and maintaining hardware and software. To ensure data confidentiality, users can encrypt their files before uploading them to a cloud system. However, it is difficult for a cloud server to retrieve exactly the target file from among the encrypted files. This study proposes a protocol for performing multi-keyword searches over encrypted cloud data by applying k-nearest neighbor technology. The protocol ranks the relevance scores of encrypted files and keywords, and prevents cloud servers from learning the search keywords submitted by a cloud user. To reduce file transfer communication costs, the cloud server returns the encrypted files in order of relevance. Moreover, when a cloud user inputs a misspelled keyword and the number of wrong letters does not exceed a given threshold, the user can still retrieve the target files from the cloud server. In addition, the proposed scheme satisfies the security requirements for outsourced data storage.
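The "k-nearest neighbor technology" referred to above is commonly realized with a secure kNN construction in which index and query vectors are encrypted with a secret invertible matrix so that inner products, and hence relevance ranking, are preserved. The toy 2-dimensional sketch below illustrates that idea; the vectors, the matrix, and the function names are assumptions for illustration, not the paper's exact protocol.

```python
# Toy sketch of matrix-based secure kNN for ranked multi-keyword search:
# the server only ever sees M^T * p (encrypted index) and M^{-1} * q
# (trapdoor), yet their inner product equals p . q, so it can rank files by
# relevance without learning the keywords. The 2-D vectors and the secret
# matrix M are illustrative assumptions.

M = [[2.0, 1.0], [1.0, 1.0]]            # secret invertible matrix (det = 1)
M_INV = [[1.0, -1.0], [-1.0, 2.0]]      # its inverse

def mat_vec(A, v):
    return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def encrypt_index(p):                    # stored per file on the server
    return mat_vec(transpose(M), p)

def encrypt_query(q):                    # trapdoor sent by the user
    return mat_vec(M_INV, q)

def relevance(enc_p, enc_q):             # computed blindly by the server
    return sum(a * b for a, b in zip(enc_p, enc_q))
```

Since (M^T p) . (M^{-1} q) = p^T M M^{-1} q = p . q, the server recovers the plaintext relevance score from ciphertext vectors alone and can return files in order of relevance.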
Abstract: Homemade HPC clusters are widely used in many small labs because they are easy to build and cost-effective. Even though incremental growth is an advantage of clusters, it inevitably results in heterogeneous systems. Instead of adding new nodes to the cluster, we can extend clusters to include other Internet servers working independently on the same LAN, so that we can make use of their idle times, especially during the night. However, extension across a firewall raises security problems with NFS. In this paper, we propose a method to solve this problem using SSH tunneling, and suggest a modified cluster structure that implements it.
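One way to realize the SSH tunneling described above is local port forwarding of the NFS port through a gateway reachable across the firewall (assuming NFSv4 over TCP on port 2049). The helper below only constructs and spawns the ssh command line; the host names, port numbers, and mount hint are illustrative assumptions, not the paper's implementation.

```python
import subprocess

# Illustrative helper for tunneling NFS across a firewall with SSH local port
# forwarding: connections to localhost:local_port are relayed through the
# gateway host to the NFS server's port 2049 (NFSv4 over TCP assumed).
# Host names and the local port are assumptions for illustration.

def nfs_tunnel_command(gateway, nfs_server, local_port=20049):
    """Build the argv for the ssh local port forward."""
    return [
        "ssh", "-N", "-f",                        # no remote command, background
        "-L", f"{local_port}:{nfs_server}:2049",  # local forwarding spec
        gateway,
    ]

def open_tunnel(gateway, nfs_server, local_port=20049):
    """Spawn the tunnel; the share could then be mounted on the client with
    something like: mount -t nfs -o port=20049 localhost:/export /mnt"""
    return subprocess.Popen(nfs_tunnel_command(gateway, nfs_server, local_port))
```

Because the NFS traffic now travels inside the encrypted SSH session, only the SSH port needs to be open in the firewall, which addresses the security problem the abstract raises.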
Abstract: The goal of the present paper is to model two classic lines of research in which employees have starred, organizational justice and organizational citizenship behavior (OCB), which have never been studied together when targeting customers. The suggestion is made that a hotel’s fair treatment of customers (in terms of distributive, procedural, and interactional justice) will be appreciated by the employees, who will reciprocate in kind by favoring the hotel with increased customer-oriented behaviors (COBs). Data were collected from 204 employees at eight upscale hotels in the Canary Islands (Spain). Unlike in the case of perceptions of distributive justice, the results of structural equation modeling demonstrate that employees substantively react to interactional and procedural justice toward guests by engaging in COBs. The findings offer new reasons why employees decide to engage in COBs, and they highlight the potentially beneficial effects that fair treatment of guests brings to hospitality by promoting COBs.
Abstract: Cloud computing technology is very useful in present-day life; it uses the internet and central remote servers to provide and maintain data as well as applications. Such applications can in turn be used by end users via cloud communications without any installation. Moreover, end users’ data files can be accessed and manipulated from any other computer using internet services. Despite the flexibility of data and application access and usage that cloud computing environments provide, many questions remain about how to attain a trusted environment that protects the data and applications in clouds from hackers and intruders. This paper surveys the key generation and management mechanisms and encryption/decryption algorithms used in cloud computing environments, and proposes a new security architecture for cloud computing environments that addresses the various security gaps as much as possible. A new cryptographic environment that applies quantum mechanics to achieve more trusted cloud communications with less computation is also presented.
Abstract: In this paper, our concern is the management of mobile transactions in the area shared among many servers when a mobile user moves from one cell to another in an online, partially replicated distributed mobile database environment. We define the concept of a transaction and classify the different types of transactions. Based on this analysis, we propose an algorithm that handles the disconnections caused by movement among sites.
Abstract: In this paper, we use the Generalized Hamiltonian systems approach to synchronize a modified sixth-order Chua's circuit, which generates hyperchaotic dynamics. Synchronization is obtained between the master and slave dynamics, with the slave given by an observer. We apply this approach to transmit private information (analog and binary), while the encoding remains potentially secure.
Abstract: Worm propagation profiles have changed significantly since 2003-2004: sudden worldwide outbreaks like Blaster or Slammer have progressively disappeared, and slower but stealthier worms have appeared since, most of them for botnet dissemination. Decreased worm virulence results in more difficult detection. In this paper, we describe a stealth worm propagation model which has been extensively simulated and analysed on a huge virtual network. The main feature of this model is its ability to infect any Internet-like network in a few seconds, whatever its size, while greatly limiting the overhead of reinfection attempts on already infected hosts. The main simulation results show that the combinatorial topology of routing may have a huge impact on worm propagation, and thus some servers play a more essential and significant role than others. The capability to identify them in real time may be essential to greatly hinder worm propagation.
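The reinfection-limiting idea described above can be illustrated with a toy round-based simulation: infected hosts probe their known peers but never repeat an attempt on an address already tried, so the overhead on already infected hosts is bounded. The random topology and all parameters are illustrative assumptions, not the paper's Internet-like simulation network.

```python
import random

# Toy stealth-propagation simulation: each host knows `degree` random peers
# (a stand-in for routing topology); infected hosts probe untried peers each
# round and record every attempt so no (source, target) pair is ever probed
# twice, limiting reinfection-attempt overhead. Parameters are illustrative.

def simulate(n_hosts=500, degree=4, seed=7):
    rng = random.Random(seed)
    peers = {h: rng.sample(range(n_hosts), degree) for h in range(n_hosts)}
    infected, tried = {0}, set()     # host 0 is patient zero
    rounds = 0
    while True:
        new = set()
        for h in infected:
            for p in peers[h]:
                if (h, p) not in tried:      # never repeat an attempt
                    tried.add((h, p))
                    if p not in infected:
                        new.add(p)
        if not new:
            break
        infected |= new
        rounds += 1
    return len(infected), rounds

size, rounds = simulate()
```

Because each round infects the whole reachable frontier at once, the infected set grows roughly geometrically and saturates the reachable part of the network in a handful of rounds, mirroring the few-seconds infection time claimed in the abstract.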
Abstract: Based on general proportional integral (GPI) observers and the sliding mode control technique, a robust control method is proposed for the master-slave synchronization of chaotic systems in the presence of parameter uncertainty and with a partially measurable output signal. Using a GPI observer, the master dynamics are reconstructed from observations of a measurable output within the differential algebraic framework. Driven by the signals provided by the GPI observer, a sliding mode control technique is used for the tracking control and synchronization of the master-slave dynamics. Convincing numerical results reveal that the proposed method is effective and successfully accommodates system uncertainties, disturbances, and noise corruption.
Abstract: Server provisioning is one of the most attractive topics in virtualization systems. Virtualization is a method of running multiple independent virtual operating systems on a single physical computer. It is a way of maximizing physical resources and the investment in hardware. Additionally, it can help to consolidate servers, improve hardware utilization, and reduce the consumption of power and physical space in the data center. However, the management of heterogeneous workloads, especially the resource utilization of servers, so-called provisioning, becomes a challenge. In this paper, a new concept for managing workloads based on user behavior is presented. The experimental results show that user behaviors differ with each type of service workload and over time. Understanding user behaviors may improve the efficiency of management in the provisioning concept. This preliminary study may be an approach to improving the management of data centers running heterogeneous workloads for provisioning in virtualization systems.