Abstract: Safety analysis for multi-agent systems is complicated by the potentially nonlinear interactions between agents. This paper proposes a method for analyzing the safety of multi-agent systems by focusing explicitly on interactions and on the accident data of systems that are similar in structure and function to the system being analyzed. The method creates a Bayesian network from the accident data of similar systems. A feature of our method is that the events in the accident data are labeled with HAZOP guide words. Our method uses an ontology to abstract away from the details of a multi-agent implementation. Using the ontology, our method then constructs an “Interaction Map,” a graphical representation of the patterns of interaction between agents and other artifacts. Interaction maps, combined with statistical data from accidents and the HAZOP classifications of events, can be converted into a Bayesian network. Bayesian networks allow designers to explore “what-if” scenarios and make design trade-offs that maintain safety. We show how to use the Bayesian networks and the interaction maps to improve multi-agent system designs.
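As a minimal illustration of the kind of “what-if” exploration such a Bayesian network enables, the sketch below enumerates a toy two-edge chain (anomalous interaction → hazard → accident). All probabilities are hypothetical placeholders, not values derived from real accident data.

```python
# Toy "what-if" query on a two-edge chain: anomaly -> hazard -> accident.
# All probabilities are hypothetical placeholders, not real accident data.

def p_accident(p_anomaly, p_hazard_given=(0.6, 0.05), p_acc_given=(0.3, 0.01)):
    """Marginal P(accident) by enumeration over the chain A -> H -> C."""
    total = 0.0
    for a in (True, False):
        pa = p_anomaly if a else 1.0 - p_anomaly
        for h in (True, False):
            ph = p_hazard_given[0] if a else p_hazard_given[1]
            ph = ph if h else 1.0 - ph
            pc = p_acc_given[0] if h else p_acc_given[1]
            total += pa * ph * pc
    return total

# "What-if": halving the interaction-anomaly rate lowers the accident risk.
baseline = p_accident(0.10)
improved = p_accident(0.05)
assert improved < baseline
```

A real interaction map would induce a much larger network, but the same enumeration (or a proper inference engine) answers the same style of design trade-off query.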
Abstract: In this paper, an approach is presented to investigate the performance of Pixel Factor Mapping (PFM) and the Extended Pixel Mapping Method (PMM) through qualitative and quantitative analysis. These methods are tested against a number of well-known image similarity metrics and statistical distribution techniques. PFM has been performed in both the spatial and frequency domains, and the Extended PMM has also been performed in the spatial domain, on a large set of images available on the Internet.
Abstract: The Information Retrieval community is facing the problem of effectively representing Web search results. Organizing web search results into clusters makes it easy for users to browse through them quickly. Traditional search engines organize search results into clusters for ambiguous queries, with one cluster for each meaning of the query. The clusters are obtained according to the topical similarity of the retrieved search results, but it is possible for results to be totally dissimilar and still correspond to the same meaning of the query. People search is also one of the most common tasks on the Web nowadays, but when a particular person’s name is queried, search engines return web pages related to different persons who share the queried name, placing the burden of disambiguating and collecting the pages relevant to a particular person on the user. In this paper, we develop an approach that clusters web pages based on their association with the different people, as well as clusters based on generic entity search.
Abstract: Due to the increasing growth of Internet users, the emerging applications of multicast are growing day by day, and there is a need for the design of high-speed switches/routers. A great deal of effort has gone into research on multicast switch fabric design and scheduling algorithms. Traffic scenarios are an influencing factor that affects the throughput and delay of the switch. Pointer-based multicast scheduling algorithms do not perform well under non-uniform traffic conditions. In this work, the performance of the switch is analyzed by applying the advanced multicast scheduling algorithms OQSMS (Optimal Queue Selection Based Multicast Scheduling Algorithm), MDDR (Multicast Due Date Round-Robin Scheduling Algorithm), and MDRR (Multicast Dual Round-Robin Scheduling Algorithm). The results show that OQSMS achieves better switching performance than the other algorithms under uniform, non-uniform, and bursty traffic conditions; because it estimates the optimal queue in each time slot, it achieves the maximum possible throughput.
Abstract: Nowadays, ontologies are used for achieving a common understanding within a user community and for sharing domain knowledge. However, the de-centralized nature of the web makes it inevitable that small communities will use their own ontologies to describe their data and to index their own resources. Accessing resources from various independently created ontologies is therefore an important challenge for answering end-user queries. Ontology mapping is thus required for combining ontologies. However, mapping complete ontologies at run time is a computationally expensive task. This paper proposes a system in which mappings between concepts may be generated dynamically as the concepts are encountered during user queries. In this way, the interaction itself defines the context in which small and relevant portions of ontologies are mapped. We illustrate the application of the proposed system in the context of Technology Enhanced Learning (TEL), where learners need to access learning resources covering specific concepts.
Abstract: Augmented and Virtual Reality is quickly becoming a hotbed of activity, with millions of dollars being spent on R&D and companies such as Google and Microsoft rushing to stake their claim. Augmented reality (AR) is, however, marching ahead due to the spread of the ideal AR device – the smartphone. Despite its potential, there remains a deep digital divide between developed and developing countries. The Technology Acceptance Model (TAM) and Hofstede’s cultural dimensions also predict that the behavioural intention to adopt AR in India will be large. This paper takes a quantitative approach, collecting 340 survey responses to AR scenarios and analyzing them statistically. The survey responses show that the Intention to Use, Perceived Usefulness, and Perceived Enjoyment dimensions are high among the urban population in India. This, along with exponential smartphone growth, indicates that India is on the cusp of a boom in the AR sector.
Abstract: Accounting for 40% of total world energy consumption, building systems are developing into technically complex, large energy consumers suitable for the application of sophisticated power management approaches that can greatly increase energy efficiency and even make buildings active energy market participants. A centralized control system for building heating and cooling, managed by economically-optimal model predictive control, shows promising results, with an estimated 30% increase in energy efficiency. The research focuses on the implementation of such a method in a case study performed on two floors of our faculty building, with the corresponding wireless sensor data acquisition, remote heating/cooling units, and central climate controller. Building walls are mathematically modeled with their corresponding material types, surface shapes, and sizes. The models are then exploited to predict thermal characteristics and changes in different building zones. Exterior influences such as environmental conditions and weather forecasts, people’s behavior, and comfort demands are all taken into account when deriving price-optimal climate control. Finally, a DC microgrid with photovoltaics, a wind turbine, a supercapacitor, batteries, and fuel cell stacks is added to make the building a unit capable of active participation in a price-varying energy market. The computational burden of applying model predictive control to such a complex system is relaxed through a hierarchical decomposition of the microgrid and climate control, where the former is designed as the higher hierarchical level with pre-calculated price-optimal power flow control, and the latter is designed as the lower-level control responsible for ensuring thermal comfort and exploiting the optimal supply conditions enabled by microgrid energy flow management. Such an approach is expected to enable the inclusion of more complex building subsystems in order to further increase energy efficiency.
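A first-order sketch of the wall/zone modeling idea is a single-zone RC (resistance-capacitance) model; the parameter values R, C and the time step below are hypothetical placeholders, not the identified values from the faculty-building case study.

```python
# Minimal one-zone RC thermal model (illustrative; R, C, dt are hypothetical,
# not identified from the faculty building described in the abstract).

def simulate(t_in, t_out, q_heat, steps, dt=60.0, R=0.05, C=1.0e6):
    """Forward-Euler simulation of dT/dt = (t_out - t_in)/(R*C) + q_heat/C."""
    temps = [t_in]
    for _ in range(steps):
        t_in = t_in + dt * ((t_out - t_in) / (R * C) + q_heat / C)
        temps.append(t_in)
    return temps

# With no heating, the zone temperature relaxes toward the exterior temperature.
trace = simulate(t_in=22.0, t_out=5.0, q_heat=0.0, steps=600)
assert 5.0 < trace[-1] < 22.0
```

A model predictive controller would optimize the heat input `q_heat` over a horizon of such predicted trajectories against a price signal; only the prediction step is sketched here.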
Abstract: A reconfigurable Wilkinson power divider is proposed in this paper. In the existing system, only a limited number of bands is available at the output ports; in the proposed Wilkinson power divider, different frequency bands are obtained by using a PIN diode. By tuning the PIN diode, different frequencies are achieved. The size of the power divider is reduced for the operating frequency, and the fractional bandwidth is increased.
Abstract: Radio Frequency Identification (RFID) has become a key technology in the emerging concept of the Internet of Things (IoT). Naturally, business applications require the deployment of various RFID systems developed by different vendors that use different data formats and structures. This heterogeneity poses a challenge in developing real-life IoT systems with RFID, as integration becomes very complex and challenging. Semantic integration is a key approach to dealing with this challenge. To this end, an ontology for RFID systems needs to be developed in order to semantically annotate RFID systems and hence facilitate their integration. Accordingly, in this paper, we propose an ontology for RFID systems. The proposed ontology can be used to semantically enrich RFID systems and hence improve their usage and reasoning.
Abstract: As computing technology advances, smartphone applications can assist student learning in a pervasive way. For example, using mobile apps for PA Common Trees, Pests, and Pathogens as a field reference tool allows middle school students to learn about trees and associated pests/pathogens without bringing a textbook. While working on the development of three heterogeneous mobile apps, we ran into numerous challenges. Both the traditional waterfall model and the more modern agile methodologies failed in practice. The waterfall model emphasizes planning the duration of each phase. When the duration of each phase is not consistent with the availability of developers, the waterfall model cannot be employed. When applying agile methodologies, we could not maintain the high frequency of the iterative development review process known as ‘sprints’. In this paper, we discuss these challenges and our solutions. We propose a hybrid model, the Relay Race Methodology (RRM), to reflect the concepts of racing and relaying during the process of software development in practice. Based on the development project, we observe that the modeling of the relay-race transition between any two phases arises naturally. Thus, we claim that the RRM can provide a de facto rather than a de jure basis for the core concept of the software development model. In this paper, the background of the project is introduced first. Then, the challenges are pointed out, followed by our solutions. Finally, the lessons learned and future work are presented.
Abstract: This paper analyzes the ranking of the University of Malaysia Terengganu (UMT) website on the World Wide Web. Only a few studies have compared the rankings of university websites, so this research can determine whether the existing UMT website is serving its purpose, which is to introduce UMT to the world. The ranking is based on hub and authority values, which are computed, according to the structure of the website, using two web-searching algorithms, HITS and SALSA. Three other universities’ websites are used as benchmarks: UM, Harvard, and Stanford. The results clearly show that more work has to be done on the existing UMT website, where important pages, according to the benchmarks, do not exist among UMT’s pages. The ranking of the UMT website will act as a guideline for web developers to build a more efficient website.
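For reference, the hub/authority computation at the heart of HITS can be sketched as a short power iteration; the tiny example graph below is hypothetical, not UMT’s actual link structure.

```python
# Minimal HITS power iteration on an adjacency list (links[page] = outlinks).
# The tiny graph is a hypothetical example, not a real university website.

def hits(links, iters=50):
    pages = set(links) | {q for outs in links.values() for q in outs}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iters):
        # Authority of q: sum of the hub scores of pages linking to q.
        auth = {q: sum(hub[p] for p in links if q in links[p]) for q in pages}
        # Hub of p: sum of the authority scores of pages p links to.
        hub = {p: sum(auth[q] for q in links.get(p, [])) for p in pages}
        # Normalize so the scores stay bounded.
        na = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        nh = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        auth = {q: v / na for q, v in auth.items()}
        hub = {p: v / nh for p, v in hub.items()}
    return hub, auth

links = {"home": ["staff", "research"], "index": ["staff", "research"]}
hub, auth = hits(links)
assert auth["staff"] > auth["home"]  # pages cited by good hubs rank higher
```

SALSA differs mainly in that each step normalizes per node (a random-walk formulation) rather than over the whole score vector.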
Abstract: Underwater acoustic networks have attracted great attention in the last few years because of their numerous applications. A high data rate can be achieved by efficiently modeling the physical layer in the network protocol stack. In an acoustic medium, the propagation speed of acoustic waves depends on many parameters, such as temperature, salinity, density, and depth. Acoustic propagation speed cannot be modeled using standard empirical formulas such as the Urick and Thorp descriptions. In this paper, we model the acoustic channel using real-time temperature, salinity, and speed data from the Bay of Bengal (Indian coastal region). We model the acoustic channel using the Mackenzie speed equation and real-time data obtained from the National Institute of Oceanography and Technology. It is found that the acoustic propagation speed varies between 1503 m/s and 1544 m/s as temperature and depth vary. The simulation results show that temperature, salinity, and depth play a major role in acoustic propagation, and the data rate increases with appropriate data sets substituted into the simulated model.
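The Mackenzie speed equation the abstract refers to is the standard nine-term formula for sound speed in seawater; the sample temperature/salinity/depth values below are illustrative, not the institute data set used in the paper.

```python
# Mackenzie (1981) nine-term equation for sound speed in seawater.
# T in deg C, S in parts per thousand, D in metres; valid roughly for
# 2-30 C, 25-40 ppt, 0-8000 m.

def mackenzie_speed(T, S, D):
    return (1448.96 + 4.591 * T - 5.304e-2 * T**2 + 2.374e-4 * T**3
            + 1.340 * (S - 35.0) + 1.630e-2 * D + 1.675e-7 * D**2
            - 1.025e-2 * T * (S - 35.0) - 7.139e-13 * T * D**3)

# Warm, shallow coastal water (illustrative inputs, not the measured data):
c = mackenzie_speed(T=20.0, S=33.0, D=50.0)
assert 1500.0 < c < 1550.0
```

Sweeping such a function over measured temperature/salinity/depth profiles yields the depth-dependent speed profile the channel model needs.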
Abstract: In this paper, a very simple and effective user administration view for computing cluster systems is implemented in order to provide, in a user-friendly way, the configuration and monitoring of distributed application executions. The user view, the administrator view, and an internal control module create an illusionary management environment for better system usability. The architecture, properties, performance, and a comparison with other cluster management software are briefly discussed.
Abstract: The obturator foramen is a specific structure in pelvic bone images, and its recognition is a new concept in medical image processing. Moreover, segmentation of bone structures such as the obturator foramen plays an essential role in clinical research in orthopedics. In this paper, we present a novel method to analyze the similarity between the substructures of the imaged region and a hand-drawn template as a preprocessing step for the computation of pelvic bone rotation on hip radiographs. This method consists of the integrated use of marker-controlled watershed segmentation and the Zernike moment feature descriptor, and it is used to detect the obturator foramen accurately. Marker-controlled watershed segmentation is applied to separate the obturator foramen from the background effectively. Then, the Zernike moment feature descriptor is used to match the binary template image against the segmented binary image for the final extraction of the obturator foramina. Finally, the pelvic bone rotation rate for each hip radiograph is calculated automatically to select and eliminate hip radiographs for further studies that depend on pelvic bone angle measurements. The proposed method is tested on 100 randomly selected hip radiographs. The experimental results demonstrate that the proposed method is able to segment the obturator foramen with 96% accuracy.
Abstract: Software testing has become a mandatory process in assuring software product quality. Hence, test management is needed in order to manage the test activities conducted in the software test life cycle. This paper discusses the challenges faced in the software test life cycle, and how the test processes and test activities, mainly test case creation, test execution, and test reporting, are managed and automated using several test automation tools, i.e., Jira, Robot Framework, and Jenkins.
Abstract: Recently, numerous documents containing large volumes of unstructured data and text have been created because of the rapid increase in the use of social media and the Internet. Usually, these documents are categorized for the convenience of users. However, the accuracy of manual categorization is not guaranteed, and such categorization requires a large amount of time and incurs huge costs. Many studies on automatic categorization have been conducted to help mitigate the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorize complex documents with multiple topics, because they work on the assumption that individual documents can be categorized into single categories only. Therefore, to overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, the learning process employed in these studies involves training on a multi-categorized document set. These methods therefore cannot be applied to the multi-categorization of most documents unless multi-categorized training sets built with traditional multi-categorization algorithms are provided. To overcome this limitation, in this study, we review our novel methodology for extending the category of a single-categorized document to multiple categories, and then introduce a survey-based verification scenario for estimating the accuracy of our automatic categorization methodology.
Abstract: Clustering is the process of grouping objects and data into clusters so that data objects in the same cluster are similar to each other. Clustering is one of the major areas in data mining, and clustering algorithms can be classified into partition-based, hierarchical, density-based, and grid-based approaches. In this paper, we survey and review four major hierarchical clustering algorithms: CURE, ROCK, CHAMELEON, and BIRCH. The resulting state of the art of these algorithms will help in eliminating current problems as well as in deriving more robust and scalable clustering algorithms.
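As background for the four surveyed algorithms, the basic bottom-up (agglomerative) scheme they all refine can be sketched in a few lines. This is plain single-link clustering on 1-D points, not an implementation of CURE, ROCK, CHAMELEON, or BIRCH.

```python
# Plain single-link agglomerative clustering on 1-D points: the basic
# bottom-up scheme that hierarchical algorithms such as CURE, ROCK,
# CHAMELEON and BIRCH refine with better merge criteria and summaries.

def single_link(points, k):
    clusters = [[p] for p in points]
    while len(clusters) > k:
        # Find the pair of clusters with the smallest minimum point distance.
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)  # merge the closest pair
    return clusters

clusters = single_link([1.0, 1.2, 1.1, 8.0, 8.3], k=2)
assert sorted(len(c) for c in clusters) == [2, 3]
```

The naive pairwise search above is O(n³); the surveyed algorithms exist precisely to cut this cost and to handle non-spherical clusters and outliers.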
Abstract: The McEliece cryptosystem is an asymmetric type of cryptography based on error-correcting codes. The classical McEliece system used an irreducible binary Goppa code, which is considered unbreakable to date, especially with the parameters [1024, 524, 101], but it suffers from a large public key matrix, which makes it difficult to use in practice. In this work, irreducible and separable Goppa codes with flexible parameters and dynamic error vectors are introduced, and a comparison between separable and irreducible Goppa codes in the McEliece cryptosystem is carried out. For the encryption stage, to obtain a better basis for comparison, two types of test were chosen: in the first, the random message is kept constant while the parameters of the Goppa code are changed; in the second, the parameters of the Goppa code are kept constant (m=8 and t=10) while the random message is changed. The results show that the time needed to calculate the parity check matrix is higher for the separable type than for the irreducible McEliece cryptosystem, an expected result due to the calculation of an extra parity check matrix for g2(z) in the decryption process for the separable type, while the time needed to execute the error locator in the decryption stage is lower for the separable type than for the irreducible type. The proposed implementation was done in C# with Visual Studio.
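The overall structure of the McEliece scheme (publish a scrambled generator matrix S·G·P, encrypt by encoding plus a random error vector, decrypt by unpermuting, decoding, and unscrambling) can be illustrated with a toy code. The sketch below substitutes the tiny Hamming(7,4) code for a Goppa code, and is in Python rather than the paper's C#, so it shows only the mechanics, not a secure or efficient implementation.

```python
# Toy McEliece over the tiny Hamming(7,4) code instead of a Goppa code,
# showing only the scheme's structure (S*G*P scrambling, error addition,
# then unpermute / decode / unscramble).  Not secure; illustration only.
import random

G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]            # systematic Hamming(7,4) generator
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]            # matching parity-check matrix
S = [[1, 1, 0, 0], [0, 1, 0, 0],
     [0, 0, 1, 1], [0, 0, 0, 1]]       # invertible scrambler (self-inverse)
S_INV = S
PERM = [3, 0, 6, 1, 5, 2, 4]           # secret column permutation P

def mat_vec(m, v, cols):               # row vector v times matrix m over GF(2)
    return [sum(v[i] * row[j] for i, row in enumerate(m)) % 2
            for j in range(cols)]

def encrypt(msg):
    code = mat_vec(G, mat_vec(S, msg, 4), 7)   # m * S * G
    cip = [code[PERM[i]] for i in range(7)]    # apply the permutation P
    cip[random.randrange(7)] ^= 1              # add one random error
    return cip

def decrypt(cip):
    code = [0] * 7
    for i in range(7):
        code[PERM[i]] = cip[i]                 # undo the permutation
    syn = [sum(H[r][j] * code[j] for j in range(7)) % 2 for r in range(3)]
    if any(syn):                               # correct the single error:
        cols = [[H[r][j] for r in range(3)] for j in range(7)]
        code[cols.index(syn)] ^= 1             # its position matches a column
    return mat_vec(S_INV, code[:4], 4)         # strip parity, unscramble

msg = [1, 0, 1, 1]
assert decrypt(encrypt(msg)) == msg
```

A real McEliece instance replaces Hamming(7,4) with a [1024, 524] Goppa code correcting t = 50 errors; the parity-check and error-locator computations timed in the paper live inside the decoding step sketched here as a one-line syndrome lookup.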
Abstract: This paper introduces the concept and principles of data cleaning, analyzes the types and causes of dirty data, and proposes several key steps of a typical cleaning process. It puts forward a data cleaning framework with good scalability and versatility and, for data with attribute dependency relations, designs several violation-data discovery algorithms expressed as formal formulas, which can find inconsistent data in all target columns with condition-attribute dependencies, whether the data is structured (SQL) or unstructured (NoSQL), and gives six data cleaning methods based on these algorithms.
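The violation-discovery idea for an attribute dependency can be sketched as a functional-dependency check: rows that agree on the condition attribute but disagree on the target column are flagged as inconsistent. The column names below are hypothetical examples, not from the paper.

```python
# Sketch of violation discovery for an attribute dependency A -> B:
# rows sharing the condition attribute A but disagreeing on the target
# column B are inconsistent.  (Column names here are hypothetical.)
from collections import defaultdict

def find_violations(rows, cond, target):
    groups = defaultdict(set)
    for row in rows:
        groups[row[cond]].add(row[target])
    bad_keys = {k for k, vals in groups.items() if len(vals) > 1}
    return [row for row in rows if row[cond] in bad_keys]

rows = [
    {"zip": "100", "city": "Springfield"},
    {"zip": "100", "city": "Shelbyville"},   # violates zip -> city
    {"zip": "200", "city": "Ogdenville"},
]
assert len(find_violations(rows, "zip", "city")) == 2
```

The same grouping works whether the rows come from a SQL result set or from parsed NoSQL documents, which is the framework's point about handling both.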
Abstract: A practical, efficient approach is suggested for estimating the instantaneous bounds of high-speed objects in C-OTDR monitoring systems. In the case of super-dynamic objects (trains, cars), it is difficult to obtain an adequate estimate of the instantaneous object localization because of estimation lag. In other words, reliable estimation of the coordinates of a monitored object requires some time for the C-OTDR system to collect observation data, and only once the required sample volume has been collected can a final decision be issued. But this is contrary to the requirements of many real applications. For example, in rail traffic management systems we need to obtain localization data for dynamic objects in real time. The way to solve this problem is to use a set of statistically independent parameters of C-OTDR signals to obtain the most reliable solution in real time. We call parameters of this type “signaling parameters” (SP). There are several SPs that carry information about the instantaneous localization of dynamic objects for each of the C-OTDR channels. The problem is that some of these parameters are very sensitive to the dynamics of seismoacoustic emission sources but are non-stable; on the other hand, if an SP is very stable, it is, as a rule, insensitive. This report describes a method for SP co-processing designed to obtain the most effective dynamic-object localization estimates within the C-OTDR monitoring framework.
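One simple way to picture SP co-processing is inverse-variance weighting of the per-channel coordinate estimates, so that stable signaling parameters dominate sensitive but noisy ones. This weighting scheme is an illustrative assumption for the sketch, not necessarily the report's exact method, and the numbers are made up.

```python
# Sketch of SP co-processing as inverse-variance weighting: each signaling
# parameter yields a coordinate estimate x_i with noise variance v_i, and
# the fused estimate favors the more stable channels.  (The weighting is an
# illustrative assumption, not the report's exact method.)

def fuse(estimates):
    """estimates: list of (x_i, v_i) pairs; returns the fused coordinate."""
    w = [1.0 / v for _, v in estimates]
    return sum(x * wi for (x, _), wi in zip(estimates, w)) / sum(w)

# A stable SP (small variance) dominates a sensitive but noisy one.
fused = fuse([(1200.0, 4.0), (1240.0, 100.0)])
assert abs(fused - 1200.0) < abs(fused - 1240.0)
```

Because the SPs are statistically independent, such a fusion can be updated as each new per-channel estimate arrives, which is what permits a real-time decision instead of waiting for a full sample.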