Automatic Checkpoint System Using Face and Card Information

In the deep south of Thailand, checkpoints for verifying people are necessary for the security management of risk zones, such as official buildings in the conflict area. In this paper, we propose an automatic checkpoint system that verifies persons using information from ID cards and facial features. Methods for abstracting and verifying a person's information are introduced, based on useful data such as the ID number and name extracted from official cards and on facial images taken from videos. The proposed system shows promising results and has a real impact on local society.
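
As a hedged illustration only (the abstract does not specify the matching procedure), the sketch below combines a card-field check with a cosine-similarity test on face embeddings. The helper data and the similarity threshold are invented placeholders, not part of the original system.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_person(card_fields, enrolled_record, live_embedding, threshold=0.6):
    """Accept only if the card data matches the enrolled record AND the
    live face embedding is close enough to the enrolled face embedding.
    All field names and the threshold are illustrative assumptions."""
    card_ok = (card_fields["id_number"] == enrolled_record["id_number"]
               and card_fields["name"].lower() == enrolled_record["name"].lower())
    face_ok = cosine_similarity(live_embedding,
                                enrolled_record["face_embedding"]) >= threshold
    return card_ok and face_ok

# Toy usage with random vectors standing in for a real face-embedding model.
rng = np.random.default_rng(0)
enrolled = {"id_number": "1234567890123", "name": "Somchai",
            "face_embedding": rng.normal(size=128)}
card = {"id_number": "1234567890123", "name": "somchai"}
live = enrolled["face_embedding"] + 0.05 * rng.normal(size=128)
print(verify_person(card, enrolled, live))
```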

AMBICOM: An Ambient Computing Middleware Architecture for Heterogeneous Environments

Ambient Computing or Ambient Intelligence (AmI) is an emerging area of computer science that aims to create intelligently connected environments and the Internet of Things. In this paper, we propose a communication middleware architecture for AmI. This middleware architecture addresses the problems of communication, networking, and application abstraction, although there are other aspects (e.g. HCI and security) within the general AmI framework. Within this middleware architecture, any application developer can address HCI and security issues through the extensibility features of the platform.

Using Analytic Hierarchy Process as a Decision-Making Tool in Project Portfolio Management

Project Portfolio Management (PPM) is an essential component of an organisation's strategic procedures, and it requires attention to several factors in order to envisage a range of long-term outcomes that support strategic project portfolio decisions. To evaluate overall efficiency at the portfolio level, it is essential to identify the functionality of specific projects as well as to aggregate those findings in a mathematically meaningful manner that indicates the strategic significance of the associated projects at a number of levels of abstraction. PPM success is directly associated with the quality of the decisions made, and poor judgment increases portfolio costs. Hence, various Multi-Criteria Decision Making (MCDM) techniques have been designed and employed to support decision-making functions. This paper reviews possible options for enhancing decision-making outcomes in organisational portfolio management processes using the Analytic Hierarchy Process (AHP), from both academic and practical perspectives, and examines the usability, certainty and quality of the technique. The results of the study also provide insight into the technical risk associated with the current decision-making model, to underpin initiative tracking and strategic portfolio management.
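
For readers unfamiliar with AHP, the following minimal sketch (not drawn from the paper) derives priority weights from a pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio; the example matrix values are invented.

```python
import numpy as np

# Saaty's random consistency index for matrix sizes 1..10.
RANDOM_INDEX = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49]

def ahp_priorities(pairwise):
    """Return (weights, consistency_ratio) for a reciprocal pairwise matrix."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)             # index of the principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                            # normalised priority weights
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)            # consistency index
    cr = ci / RANDOM_INDEX[n - 1]           # consistency ratio (< 0.1 is acceptable)
    return w, cr

# Invented example: three candidate projects compared pairwise on one criterion.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
weights, cr = ahp_priorities(A)
print("priorities:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
```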

Networking: The Biggest Challenge in Hybrid Cloud Deployment

Cloud computing has emerged as a promising direction for cost-efficient and reliable service delivery across data communication networks. The dynamic location of service facilities and the virtualization of hardware and software elements are stressing the communication networks and protocols, especially when data centres are interconnected through the internet. Although the computing aspects of cloud technologies have been investigated extensively, less attention has been devoted to the networking services. Cloud computing has enabled elastic and transparent access to infrastructure services without involving IT operating overhead. Virtualization has been a key enabler for cloud computing. While resource virtualization and service abstraction have been widely investigated, networking in the cloud remains a difficult puzzle. Even though the network plays a significant role in facilitating hybrid cloud scenarios, it had not received much attention from the research community until recently. We propose Network as a Service (NaaS), which forms the basis for unifying public and private clouds. In this paper, we identify various challenges in the adoption of hybrid clouds and discuss the design and implementation of a cloud platform.

Semantic Indexing Approach for a Corpus Based on Ontology

The growth in the volume of text data such as books and articles in libraries over the centuries has made it necessary to establish effective mechanisms to locate them. Early techniques such as abstraction, indexing and the use of classification categories marked the birth of a new field of research called "Information Retrieval". Information Retrieval (IR) can be defined as the task of defining models and systems whose purpose is to facilitate access to a set of documents in electronic form (a corpus) so that a user can find the documents relevant to him, that is to say, the content that matches the user's information needs. This paper presents a new semantic indexing approach for a documentary corpus. The indexing process starts with a term-weighting phase to determine the importance of the terms in the documents. Then the use of a thesaurus such as WordNet allows moving to the conceptual level. Each candidate concept is evaluated by determining its level of representation of the document, that is to say, the importance of the concept relative to the other concepts of the document. Finally, the semantic index is constructed by attaching to each concept of the ontology the documents of the corpus in which these concepts are found.
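
The abstract does not specify the weighting scheme; the sketch below uses plain TF-IDF as one common choice for the term-weighting phase, with a toy three-document corpus invented for illustration.

```python
import math
from collections import Counter

def tf_idf(corpus):
    """Return a list of {term: weight} dicts, one per document (TF-IDF weighting)."""
    tokenized = [doc.lower().split() for doc in corpus]
    n_docs = len(tokenized)
    # Document frequency: number of documents containing each term.
    df = Counter(term for doc in tokenized for term in set(doc))
    weights = []
    for doc in tokenized:
        tf = Counter(doc)
        weights.append({
            term: (count / len(doc)) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return weights

corpus = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "books and articles fill the library",
]
for doc_weights in tf_idf(corpus):
    print(doc_weights)
```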

Designing a Tool for Software Maintenance

The aim of software maintenance is to keep the software system in step with advances in software and hardware technology. One of the early tasks in software maintenance is to extract information at a higher level of abstraction. In this paper, we present the process of designing an information extraction tool for software maintenance. The tool can extract basic information from old programs, such as variables, base classes, derived classes, objects of classes, and functions. The tool has two main parts: the lexical analyzer module, which reads the input file character by character, and the searching module, through which users can obtain the basic information from existing programs. We implemented this tool for a patterned sub-C++ language as the input.
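
As a rough illustration of the kind of information such a tool recovers (the paper's own analyzer works character by character on a restricted sub-C++ language; the regular-expression approach here is an assumption made for brevity), the sketch below lists classes and their base classes from a C++ source string.

```python
import re

# Matches "class Derived : public Base1, private Base2 {" style declarations
# in a simplified sub-C++ input; templates, nesting, etc. are ignored here.
CLASS_RE = re.compile(r"class\s+(\w+)\s*(?::\s*([^{]+))?\{")

def extract_classes(source):
    """Return {class_name: [base_class_names]} from simplified C++ source."""
    classes = {}
    for name, bases in CLASS_RE.findall(source):
        base_names = []
        if bases:
            for part in bases.split(","):
                base_names.append(part.split()[-1])   # drop access specifiers
        classes[name] = base_names
    return classes

source = """
class Shape { };
class Circle : public Shape { double r; };
class Square : public Shape { double side; };
"""
print(extract_classes(source))
```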

Automatic Verification Technology of Virtual Machine Software Patch on IaaS Cloud

In this paper, we propose an automatic verification technology for software patches in user virtual environments on IaaS cloud, to decrease the cost of verifying patches. In recent years, IaaS services have spread, and many users can customize virtual machines on an IaaS cloud like their own private servers. Regarding software patches for the OS or middleware installed on virtual machines, users need to apply and verify these patches by themselves, which increases their operation costs. Our proposed method replicates user virtual environments, extracts verification test cases for the user virtual environments from a test case DB, distributes patches to virtual machines in the replicated environments, and runs those test cases automatically on the replicated environments. We have implemented the proposed method on OpenStack using Jenkins and confirmed its feasibility. Using the implementation, we confirmed the effectiveness of our proposed idea of a 2-tier abstraction of software functions and test cases in reducing test case creation effort. We also evaluated the automatic verification performance of environment replication, test case extraction, and test case execution.
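
The pipeline described above can be pictured roughly as follows; every function here (replicate_environment, extract_test_cases, apply_patch, run_test) is a hypothetical stand-in for OpenStack/Jenkins operations, not an API from the paper.

```python
# Illustration only: all of these are stubs standing in for cloud operations.

def replicate_environment(user_env):
    """Stand-in for cloning the user's virtual environment."""
    return dict(user_env, name=user_env["name"] + "-replica")

def extract_test_cases(test_case_db, user_env):
    """Stand-in for selecting test cases relevant to the installed software."""
    return [tc for tc in test_case_db if tc["software"] in user_env["software"]]

def apply_patch(env, patch):
    """Stand-in for distributing and installing a patch on the replica."""
    env.setdefault("patches", []).append(patch)

def run_test(env, test_case):
    """Stand-in for a CI job executing one test case; always passes here."""
    return True

def verify_patch(user_env, patch, test_case_db):
    replica = replicate_environment(user_env)              # 1. clone the environment
    tests = extract_test_cases(test_case_db, user_env)     # 2. pick relevant tests
    apply_patch(replica, patch)                            # 3. patch the replica only
    return all(run_test(replica, tc) for tc in tests)      # 4. run tests, report verdict

env = {"name": "web-server", "software": ["apache", "mysql"]}
db = [{"software": "apache", "name": "http-smoke-test"},
      {"software": "postgres", "name": "db-smoke-test"}]
print(verify_patch(env, {"id": "example-fix"}, db))
```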

Information Retrieval: A Comparative Study of Textual Indexing Using an Object-Oriented Database (db4o) and the Inverted File

The growth in the volume of text data such as books and articles in libraries over the centuries has made it necessary to establish effective mechanisms to locate them. Early techniques such as abstraction, indexing and the use of classification categories marked the birth of a new field of research called "Information Retrieval". Information Retrieval (IR) can be defined as the task of defining models and systems whose purpose is to facilitate access to a set of documents in electronic form (a corpus) so that a user can find the documents relevant to him, that is to say, the content that matches the user's information needs. Most models of information retrieval use a specific data structure to index a corpus, called an "inverted file" or "inverted index". This inverted file collects information about all the terms across the corpus documents, specifying the identifiers of the documents that contain the term in question, the frequency of each term in the documents of the corpus, the positions of the occurrences of the word... In this paper we use an object-oriented database (db4o) instead of the inverted file, that is to say, instead of searching for a term in the inverted file, we search for it in the db4o database. The purpose of this work is to make a comparative study to see whether object-oriented databases can compete with the inverted index in terms of access speed and resource consumption on a large volume of data.
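
To make the data structure concrete, here is a minimal in-memory inverted index sketch (not the paper's implementation) recording, for each term, the document identifiers, term frequencies, and positions the abstract mentions.

```python
from collections import defaultdict

def build_inverted_index(corpus):
    """Map each term to {doc_id: [positions]}; frequency is len(positions)."""
    index = defaultdict(dict)
    for doc_id, text in corpus.items():
        for position, term in enumerate(text.lower().split()):
            index[term].setdefault(doc_id, []).append(position)
    return index

def lookup(index, term):
    """Return postings for a term: doc id, frequency and positions."""
    return {doc_id: {"freq": len(pos), "positions": pos}
            for doc_id, pos in index.get(term.lower(), {}).items()}

corpus = {1: "the cat sat on the mat",
          2: "the dog chased the cat",
          3: "books and articles fill the library"}
index = build_inverted_index(corpus)
print(lookup(index, "cat"))
```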

Adopting Collaborative Business Processes to Prevent the Loss of Information in Public Administration Organisations

Recently, the use of Web 2.0 tools has increased in companies and public administration organisations. This phenomenon, known as "Enterprise 2.0", has de facto modified common organisational and operative practices. It has led "knowledge workers" to change their working practices through the use of Web 2.0 communication tools. Unfortunately, these tools have not been integrated with existing enterprise information systems, a situation that could potentially lead to a loss of information. This is an important problem in an organisational context, because knowledge of the information exchanged within the organisation is needed to increase its efficiency and competitiveness. In this article we demonstrate that it is possible to capture this knowledge using collaboration processes, which are processes of abstraction created in accordance with design patterns and applied to new organisational operative practices.

Unsupervised Feature Learning by Pre-Route Simulation of Auto-Encoder Behavior Model

This paper describes cycle-accurate simulation results for the weight values learned by an auto-encoder behavior model in a pre-route simulation. Given these results, we visualized the first-layer representations with natural images. Many common deep learning threads have focused on learning high-level abstractions of unlabeled raw data through unsupervised feature learning. However, when handling such huge amounts of data, the computational complexity and run time of these learning methods have limited advanced research. These limitations stem from the fact that the algorithms were computed using only single-core CPUs. For this reason, parallel hardware such as FPGAs was seen as a possible solution to overcome these limitations. We adopted and simulated a ready-made auto-encoder to design a behavior model in Verilog HDL before designing the hardware. With the auto-encoder behavior model pre-route simulation, we obtained cycle-accurate results for the parameters of each hidden layer using ModelSim. Cycle-accurate results are a very important factor in designing parallel digital hardware. Finally, this paper shows the appropriate operation of the behavior-model-based pre-route simulation. Moreover, we visualized the learned latent representations of the first hidden layer with the Kyoto natural image dataset.
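
For reference, the computation such a behavior model ultimately performs can be sketched in NumPy as a single-hidden-layer, tied-weight auto-encoder trained by gradient descent; the layer sizes, learning rate, and random data below are invented, and the paper's actual model is a Verilog HDL behavior model rather than this software version.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def autoencoder_step(x, W, b1, b2, lr=0.1):
    """One gradient step of a tied-weight auto-encoder on a batch x of shape (n, d)."""
    h = sigmoid(x @ W + b1)            # encoder: hidden activations (n, k)
    r = sigmoid(h @ W.T + b2)          # decoder: reconstruction (n, d)
    n = x.shape[0]
    dz2 = (r - x) * r * (1 - r) / n    # gradient at decoder pre-activation
    dz1 = (dz2 @ W) * h * (1 - h)      # back-propagated to encoder pre-activation
    grad_W = x.T @ dz1 + dz2.T @ h     # tied weights: encoder + decoder contributions
    W -= lr * grad_W
    b1 -= lr * dz1.sum(axis=0)
    b2 -= lr * dz2.sum(axis=0)
    return 0.5 * np.mean(np.sum((r - x) ** 2, axis=1))

rng = np.random.default_rng(0)
d, k = 64, 16                          # invented input / hidden sizes
W = 0.1 * rng.normal(size=(d, k))
b1, b2 = np.zeros(k), np.zeros(d)
x = rng.random((32, d))                # stand-in for natural image patches
for step in range(100):
    loss = autoencoder_step(x, W, b1, b2)
print("final reconstruction loss:", loss)
```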

Effects of Heavy Pumping and Artificial Groundwater Recharge Pond on the Aquifer System of Langat Basin, Malaysia

The paper aims at evaluating the effects of heavy groundwater withdrawal and artificial groundwater recharge from an ex-mining pond on the aquifer system of the Langat Basin through three-dimensional (3D) numerical modeling. Many mining sites were left behind by the massive mining exploitation in Malaysia during the British colonial era and over the last few decades. These sites can accommodate more than a million cubic meters of water from precipitation, runoff, groundwater, and rivers. Most of the time, the mining sites are turned into ponds for recreational activities. In the current study, artificial groundwater recharge from an ex-mining pond in the Langat Basin was proposed because of its capacity to store more than 50 million m3 of water. The pond is located near the Langat River and opposite a steel company that withdraws more than 4 million gallons of groundwater on a daily basis. The 3D numerical simulation was developed using the Groundwater Modeling System (GMS). The calibrated model (error of about 0.7 m) was used to simulate two scenarios: (1) Case 1, an artificial recharge pond with no pumping, and (2) Case 2, an artificial pond with pumping. The results showed that in Case 1 the pond played a very important role in supplying additional water to the aquifer and river: about 90,916 m3/d of water from the pond, 1,173 m3/d from the Langat River, and 67,424 m3/d from the direct recharge of precipitation infiltrated into the aquifer system. In Case 2, the abstraction of groundwater by the company caused a steep depression around the wells, river, and pond. The water budget showed increased rates of inflow from the pond and river of 92,493 m3/d and 3,881 m3/d, respectively. The outcome of the current study provides useful information about the behavior of the Langat Basin aquifer.

An Evaluation of Software Connection Methods for Heterogeneous Sensor Networks

The transfer rate of messages in distributed sensor network applications is a critical factor in a system's performance. The Sensor Abstraction Layer (SAL) is one such system. SAL is a middleware integration platform that abstracts sensor-specific technology in order to integrate heterogeneous types of sensors in a network. SAL uses Java Remote Method Invocation (RMI) as its connection method, which delivers unsatisfactory transfer rates, especially for streaming data. This paper analyses different connection methods to optimize data transmission in SAL by replacing RMI. Our results show that the most promising Java-based connections were frameworks for Java New Input/Output (NIO), including Apache MINA, JBoss Netty, and xSocket. A test environment was implemented to evaluate each framework based on transfer rate, resource usage, and scalability. Test results showed that the most suitable connection method to improve data transmission in SAL is JBoss Netty, which provides a performance enhancement of 68%.
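
The abstract does not detail the benchmark itself; as a rough, framework-agnostic sketch of how a transfer-rate measurement can be set up, the snippet below streams a fixed payload over a local TCP socket and reports throughput. It is only meant to illustrate the kind of metric compared across MINA, Netty, and xSocket, not the paper's test environment; the payload size and message count are invented.

```python
import socket, threading, time

PAYLOAD = b"x" * 1024          # 1 KiB message, repeated
N_MESSAGES = 50_000

def server(listener):
    """Accept one connection and drain all expected bytes."""
    conn, _ = listener.accept()
    received = 0
    while received < len(PAYLOAD) * N_MESSAGES:
        received += len(conn.recv(65536))
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=server, args=(listener,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
start = time.perf_counter()
for _ in range(N_MESSAGES):
    client.sendall(PAYLOAD)
client.close()
elapsed = time.perf_counter() - start
mib = len(PAYLOAD) * N_MESSAGES / (1024 * 1024)
print(f"transfer rate: {mib / elapsed:.1f} MiB/s")
```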

Optimising Data Transmission in Heterogeneous Sensor Networks

The transfer rate of messages in distributed sensor network applications is a critical factor in a system's performance. The Sensor Abstraction Layer (SAL) is one such system. SAL is a middleware integration platform that abstracts sensor-specific technology in order to integrate heterogeneous types of sensors in a network. SAL uses Java Remote Method Invocation (RMI) as its connection method, which delivers unsatisfactory transfer rates, especially for streaming data. This paper analyses different connection methods to optimize data transmission in SAL by replacing RMI. Our results show that the most promising Java-based connections were frameworks for Java New Input/Output (NIO), including Apache MINA, JBoss Netty, and xSocket. A test environment was implemented to evaluate each framework based on transfer rate, resource usage, and scalability. Test results showed that the most suitable connection method to improve data transmission in SAL is JBoss Netty, which provides a performance enhancement of 68%.

Context Modeling and Context-Aware Service Adaptation for Pervasive Computing Systems

Devices in a pervasive computing system (PCS) are characterized by their context-awareness, which permits them to proactively provide adapted services to the user and to applications. To do so, context must be well understood and modeled in an appropriate form that enhances its sharing between devices and provides a high level of abstraction. The most interesting methods for modeling context are those based on ontologies; however, the majority of the proposed methods fail to propose a generic ontology for context, which limits their usability and keeps them specific to a particular domain. The adaptation task must be performed automatically, without explicit intervention by the user. Devices in a PCS must acquire some intelligence that permits them to sense the current context and trigger the appropriate service, or provide a service in a more suitable form. In this paper we propose a generic service ontology for context modeling and a context-aware service adaptation based on a service-oriented definition of context.
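
As an invented, simplified illustration of a service-oriented view of context (not the ontology proposed in the paper), the sketch below represents context entries as attributes attached to service variants and selects the variant that matches the sensed context.

```python
from dataclasses import dataclass, field

@dataclass
class ContextEntry:
    """One sensed fact about the environment, e.g. location or light level."""
    attribute: str
    value: str

@dataclass
class ServiceVariant:
    """A concrete way of delivering a service, valid under given context conditions."""
    name: str
    required_context: dict = field(default_factory=dict)

def adapt(service_variants, sensed_context):
    """Pick the variant whose required context is satisfied by the sensed context."""
    sensed = {c.attribute: c.value for c in sensed_context}
    for variant in service_variants:
        if all(sensed.get(attr) == val for attr, val in variant.required_context.items()):
            return variant.name
    return "default"

notify = [ServiceVariant("silent-visual-alert", {"location": "meeting-room"}),
          ServiceVariant("audio-alert", {"location": "home"})]
context = [ContextEntry("location", "meeting-room"), ContextEntry("light", "low")]
print(adapt(notify, context))   # -> silent-visual-alert
```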

SystemC Modeling of Adaptive Least Mean Square Filter

In this paper, we demonstrate the modeling of an adaptive least-mean-square (LMS) filter using SystemC. SystemC is a modeling language that allows designers to model both hardware and software components and makes it possible to design from a high level of abstraction down to a low level of abstraction. We produced five adaptive least-mean-square filter models, classified into five abstraction levels in SystemC, proceeding from the most abstract model to the most concrete model.
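
For readers unfamiliar with the algorithm being modeled, the standard LMS weight update is w(n+1) = w(n) + mu * e(n) * x(n). A minimal software reference (in Python rather than the paper's SystemC) looks as follows; the filter length, step size, and test signal are invented for illustration.

```python
import numpy as np

def lms_filter(x, d, n_taps=4, mu=0.05):
    """Adaptive LMS filter tracking desired signal d from input x.

    Returns the output, the error signal, and the final tap weights.
    """
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        x_n = x[n - n_taps + 1:n + 1][::-1]   # x[n], x[n-1], ..., x[n-n_taps+1]
        y[n] = w @ x_n                        # filter output
        e[n] = d[n] - y[n]                    # estimation error
        w = w + mu * e[n] * x_n               # LMS weight update
    return y, e, w

# Toy system identification: learn a fixed unknown FIR filter from noisy data.
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
unknown = np.array([0.6, -0.3, 0.1, 0.05])
d = np.convolve(x, unknown)[:len(x)] + 0.01 * rng.normal(size=len(x))
_, _, w = lms_filter(x, d)
print("learned taps:", np.round(w, 2))
```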

Danger Theory and Intelligent Data Processing

The Artificial Immune System (AIS) is a relatively new paradigm for intelligent computation. The inspiration for AIS is drawn from the natural Immune System (IS). Classically, it is believed that the IS strives to discriminate between self and non-self, and most of the existing AIS research is based on this approach. Danger Theory (DT) argues against this approach and proposes that the IS fights against danger-producing elements and tolerates others. We, as computational researchers, are not concerned with the arguments among immunologists but try to extract from them novel abstractions for intelligent computation. This paper follows the DT inspiration for intelligent data processing. The approach may open a new avenue in intelligent processing. The data used are system call data, which are potentially significant in intrusion detection applications.

Concept Indexing using Ontology and Supervised Machine Learning

Nowadays, ontologies are the only widely accepted paradigm for the management of sharable and reusable knowledge in a way that allows its automatic interpretation. They are collaboratively created across the Web and used to index, search, and annotate documents. The vast majority of ontology-based approaches, however, focus on indexing texts at the document level. Recently, with the advances in ontological engineering, it became clear that information indexing can benefit greatly from the use of general-purpose ontologies, which aid the indexing of documents at the word level. This paper presents a concept indexing algorithm that adds ontology information to words and phrases and allows full text to be searched, browsed, and analyzed at different levels of abstraction. The algorithm uses a general-purpose ontology, OntoRo, and an ontologically tagged corpus, OntoCorp, both developed for the purpose of this research. OntoRo and OntoCorp are used in a two-stage supervised machine learning process aimed at generating ontology tagging rules. The first experimental tests show a tagging accuracy of 78.91%, which is encouraging with regard to further improvement of the algorithm.
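
As a loose, invented illustration of word-level concept tagging (OntoRo's real structure and the paper's learned rules are not reproduced here), the sketch below assigns each word its candidate concept codes from a toy lexicon and disambiguates by choosing the code most supported by the rest of the sentence.

```python
from collections import Counter

# Toy lexicon: word -> candidate concept codes (stand-in for an ontology lookup).
LEXICON = {
    "bank":  ["FINANCE", "RIVER"],
    "money": ["FINANCE"],
    "loan":  ["FINANCE"],
    "water": ["RIVER"],
    "fish":  ["RIVER", "FOOD"],
}

def tag_concepts(sentence):
    """Assign one concept code per known word, preferring codes shared by context words."""
    words = sentence.lower().split()
    # Count how often each candidate code appears anywhere in the sentence.
    support = Counter(code for w in words for code in LEXICON.get(w, []))
    tags = {}
    for w in words:
        candidates = LEXICON.get(w)
        if candidates:
            tags[w] = max(candidates, key=lambda code: support[code])
    return tags

print(tag_concepts("the bank approved the loan"))      # bank -> FINANCE
print(tag_concepts("fish swim near the water bank"))   # bank -> RIVER
```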

Design of Domain-Specific Software Systems with Parametric Code Templates

Domain-specific languages describe specific solutions to problems in the application domain. Traditionally they form a solution by composing black-box abstractions, which usually involves non-deep transformations over the target model. In this paper we argue that it is potentially powerful to operate with grey-box abstractions to build a domain-specific software system. We present parametric code templates as grey-box abstractions, together with conceptual tools to encapsulate and manipulate these templates. Manipulations introduce template-merging routines and can be defined in a generic way, which involves reasoning mechanisms at the level of code templates. We introduce the concept of the Neurath Modelling Language (NML), which operates with parametric code templates and specifies a visualisation mapping mechanism for target models. Finally, we provide an example of calculating a domain-specific software system with predefined NML elements.
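
The following toy sketch (invented for illustration; NML itself is not shown) hints at what a parametric code template and a simple merge routine might look like: each template exposes named parameters, and merging concatenates bodies while unifying the parameter sets, so the fragments stay visible as grey boxes rather than opaque black boxes.

```python
from string import Template

class CodeTemplate:
    """A grey-box code fragment whose parameters stay visible and editable."""
    def __init__(self, body, params):
        self.body = body            # template text with $placeholders
        self.params = set(params)   # parameters the caller must supply

    def merge(self, other):
        """Combine two templates into one, unifying their parameter sets."""
        return CodeTemplate(self.body + "\n" + other.body, self.params | other.params)

    def render(self, **values):
        missing = self.params - values.keys()
        if missing:
            raise ValueError(f"missing parameters: {missing}")
        return Template(self.body).substitute(values)

read = CodeTemplate("data = read_csv('$input_file')", {"input_file"})
plot = CodeTemplate("plot(data, title='$title')", {"title"})
pipeline = read.merge(plot)                       # grey-box composition
print(pipeline.render(input_file="sales.csv", title="Monthly sales"))
```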

A Probabilistic Reinforcement-Based Approach to Conceptualization

Conceptualization strengthens intelligent systems in generalization skill, effective knowledge representation, real-time inference, and the management of uncertain and indefinite situations, in addition to facilitating knowledge communication for learning agents situated in the real world. Concept learning introduces a form of abstraction by which the continuous state space is formed into entities called concepts; these are connected to the action space and thus provide, in some sense, a summary of the complex action space. Of the computational concept learning approaches, action-based conceptualization is favored because of its simplicity and its mirror neuron foundations in neuroscience. In this paper, a new biologically inspired concept learning approach based on a probabilistic framework is proposed. This approach exploits and extends the mirror neuron's role in conceptualization for a reinforcement learning agent in nondeterministic environments. In the proposed method, instead of building a huge numerical knowledge base, the concepts are learnt gradually from rewards through interaction with the environment. Moreover, the probabilistic formation of the concepts is employed to deal with the uncertain and dynamic nature of real problems, in addition to providing the ability to generalize. These characteristics as a whole distinguish the proposed learning algorithm from both a pure classification algorithm and typical reinforcement learning. Simulation results show the advantages of the proposed framework in terms of convergence speed as well as generalization and asymptotic behavior, because both successful and failed attempts are utilized through the received rewards. Experimental results, on the other hand, show the applicability and effectiveness of the proposed method in continuous and noisy environments for a real robotic task such as maze navigation, as well as the benefits of implementing an incremental learning scenario in artificial agents.
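
To give a flavor of reward-driven, probabilistic concept formation in general (this is a generic sketch under invented assumptions, not the algorithm of the paper), the snippet below softly assigns continuous states to concept prototypes and updates per-concept action values from both positive and negative rewards.

```python
import numpy as np

rng = np.random.default_rng(0)
N_CONCEPTS, STATE_DIM, N_ACTIONS = 4, 2, 3
prototypes = rng.normal(size=(N_CONCEPTS, STATE_DIM))   # concept "centres" in state space
q = np.zeros((N_CONCEPTS, N_ACTIONS))                   # per-concept action values

def concept_probabilities(state):
    """Soft (probabilistic) assignment of a continuous state to concepts."""
    d = np.linalg.norm(prototypes - state, axis=1)
    p = np.exp(-d)
    return p / p.sum()

def act(state, epsilon=0.1):
    """Pick an action from the concept-weighted action values (epsilon-greedy)."""
    values = concept_probabilities(state) @ q
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(values))

def learn(state, action, reward, lr=0.1):
    """Update concepts from the reward: successes and failures both shift the values."""
    p = concept_probabilities(state)
    q[:, action] += lr * p * (reward - q[:, action])                 # credit by responsibility
    prototypes += (lr * p * reward)[:, None] * (state - prototypes)  # move rewarded concepts

# Toy interaction loop with an invented reward rule (action 0 is best near the origin).
for _ in range(500):
    s = rng.normal(size=STATE_DIM)
    a = act(s)
    r = 1.0 if (a == 0 and np.linalg.norm(s) < 1.0) else -0.1
    learn(s, a, r)
print(np.round(q, 2))
```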

Suitability of Requirements Abstraction Model (RAM) Requirements for High-Level System Testing

The Requirements Abstraction Model (RAM) helps in managing abstraction in requirements by organizing them at four levels (product, feature, function and component). The RAM is adaptable and can be tailored to meet the needs of various organizations. Because software requirements are an important source of information for developing high-level tests, organizations willing to adopt the RAM need to know how suitable RAM requirements are for developing high-level tests. To investigate this suitability, test cases for twenty randomly selected requirements were developed, analyzed and graded. The requirements were selected from the requirements document of a Course Management System, a web-based software system that supports teachers and students in performing course-related tasks. This paper describes the results of the requirements document analysis. The results show that requirements at the lower levels of the RAM are suitable for developing executable tests, whereas it is hard to develop such tests from requirements at the higher levels.