Software Architectural Design Ontology

Software Architecture plays a key role in software development, but the absence of a formal description of Software Architecture creates various impediments in the development process. To cope with these difficulties, an ontology has been used as an artifact. This paper proposes an ontology for Software Architectural design based on the IEEE model for architecture description and the Kruchten 4+1 model for viewpoint classification. ISO/IEC 42010 has been used for the categorization of styles and views. The corpus method has been used to evaluate the ontology. The main aim of the proposed ontology is to classify and locate Software Architectural design information.
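To make the classification concrete, the sketch below (ours, not the paper's actual ontology) encodes a fragment of such an ontology with the Python rdflib library: the Kruchten 4+1 viewpoints become subclasses of a View concept alongside core IEEE/ISO 42010 terms. The namespace URL and all class names are illustrative assumptions.

```python
# A minimal sketch (not the paper's actual ontology): encoding the
# Kruchten 4+1 viewpoints as an RDF class hierarchy with rdflib.
from rdflib import Graph, Namespace, RDF, RDFS, Literal

SA = Namespace("http://example.org/sa#")  # hypothetical namespace
g = Graph()
g.bind("sa", SA)

# Core concepts from the IEEE/ISO 42010 vocabulary.
for concept in ("ArchitectureDescription", "View", "Viewpoint", "Style"):
    g.add((SA[concept], RDF.type, RDFS.Class))

# Kruchten 4+1 viewpoints modelled as subclasses of View.
for view in ("LogicalView", "ProcessView", "DevelopmentView",
             "PhysicalView", "ScenarioView"):
    g.add((SA[view], RDFS.subClassOf, SA.View))
    g.add((SA[view], RDFS.label, Literal(view)))

# Locating design information then reduces to graph queries, e.g.:
views = list(g.subjects(RDFS.subClassOf, SA.View))
print(len(views))  # -> 5
```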

An Inter-banking Auditing Security Solution for Detecting Unauthorised Financial Transactions entered by Authorised Insiders

Insider abuse has recently been reported as one of the most frequently occurring security incidents, suggesting that more security is required for detecting and preventing unauthorised financial transactions entered by authorised users. To address the problem, and based on the observation that all authorised inter-banking financial transactions trigger or are triggered by other transactions in a workflow, we have developed a security solution based on a redefined understanding of an audit workflow: one audit workflow corresponds to one log file containing the complete workflow activity of the financial transactions directly related to a single financial transaction (an electronic deal recorded in an e-trading system). The new security solution contemplates any two parties interacting on the basis of financial transactions recorded by their users in related but distinct automated financial systems. Under the new definition, inter-organizational and intra-organizational interactions can be described in one unique audit trail. This concept expands the current ideas of audit trails by adapting them to actual e-trading workflow activity, i.e. intra-organizational and inter-organizational activity. On this basis, a security auditing service is designed to detect integrity drifts within and between organizations in order to detect unauthorised financial transactions entered by authorised users.
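A minimal sketch of the core idea, under our own assumptions about log format and naming: all events that the two organizations record for one deal form a single audit workflow, and the auditing service detects integrity drifts by comparing each side's digest of that workflow.

```python
# A minimal sketch (our illustration, not the paper's system): one audit
# workflow groups every log event, from both organizations, that relates
# to a single deal; an auditing service detects integrity drifts by
# comparing the two sides' views of that workflow. Names are hypothetical.
import hashlib

def workflow_digest(events):
    """Hash the ordered event sequence of one deal's audit workflow."""
    h = hashlib.sha256()
    for ts, system, action in sorted(events):
        h.update(f"{ts}|{system}|{action}".encode())
    return h.hexdigest()

# Each bank extracts, from its own systems, all events tied to deal 4711.
bank_a_view = [(1, "etrading", "deal-entered"), (2, "settlement", "confirm")]
bank_b_view = [(1, "etrading", "deal-entered"), (2, "settlement", "confirm")]

# An integrity drift between the organizations shows up as a digest mismatch.
if workflow_digest(bank_a_view) != workflow_digest(bank_b_view):
    print("ALERT: possible unauthorised transaction in deal 4711")
```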

Machine Learning Methods for Environmental Monitoring and Flood Protection

More and more natural disasters are happening every year: floods, earthquakes, volcanic eruptions, etc. In order to reduce the risk of possible damage, governments all around the world are investing in the development of Early Warning Systems (EWS) for environmental applications. The most important task of an EWS is to identify the onset of critical situations affecting the environment and population early enough to inform the authorities and the general public. This paper describes an approach for monitoring flood protection systems based on machine learning methods. An Artificial Intelligence (AI) component has been developed for the detection of abnormal dike behaviour. The AI module has been integrated into the EWS platform of the UrbanFlood project (EU Seventh Framework Programme) and validated on real-time measurements from sensors installed in a dike.
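One plausible realization of such an AI component is sketched below in Python with scikit-learn (the UrbanFlood module's actual detector may differ): fit a one-class model on sensor windows from normal dike operation and score incoming real-time measurements, flagging outliers as abnormal behaviour. The sensor channels and values are synthetic stand-ins.

```python
# A sketch of one way such an AI component could flag abnormal dike
# behaviour: fit a one-class model on readings from normal operation,
# then score incoming real-time measurements. This is our illustration;
# the UrbanFlood module's actual detector may differ.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Training data: (pore pressure, inclination, temperature) readings
# from a healthy dike -- synthetic stand-ins here.
normal = rng.normal(loc=[100.0, 0.5, 10.0], scale=[2.0, 0.05, 1.0],
                    size=(500, 3))
detector = OneClassSVM(nu=0.01, gamma="scale").fit(normal)

# Real-time measurement far outside normal behaviour -> anomaly (-1).
reading = np.array([[130.0, 1.8, 10.2]])
if detector.predict(reading)[0] == -1:
    print("abnormal dike behaviour detected -> raise early warning")
```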

Improving Air Temperature Prediction with Artificial Neural Networks

The mitigation of crop loss due to damaging freezes requires accurate air temperature prediction models. Previous work established that the Ward-style artificial neural network (ANN) is a suitable tool for developing such models. The current research focused on developing ANN models with reduced average prediction error by increasing the number of distinct observations used in training, adding input terms that describe the date of an observation, increasing the duration of prior weather data included in each observation, and reexamining the number of hidden nodes used in the network. Models were created to predict air temperature at hourly intervals from one to 12 hours ahead. Each ANN model, consisting of a network architecture and a set of associated parameters, was evaluated by instantiating and training 30 networks and calculating the mean absolute error (MAE) of the resulting networks over a fixed set of input patterns. The inclusion of seasonal input terms, up to 24 hours of prior weather information, and a larger number of processing nodes were among the improvements that reduced average prediction error relative to previous research across all horizons. For example, the four-hour MAE of 1.40°C was 0.20°C, or 12.5%, less than that of the previous model. Prediction MAEs eight and 12 hours ahead improved by 0.17°C and 0.16°C, respectively, improvements of 7.4% and 5.9% over the existing model at these horizons. Networks instantiating the same model but with different initial random weights often produced different prediction errors. These results strongly suggest that ANN model developers should consider instantiating and training multiple networks with different initial weights to establish preferred model parameters.
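The evaluation protocol lends itself to a short sketch. The code below (our illustration, with scikit-learn's MLPRegressor standing in for the Ward-style ANN and synthetic data standing in for the weather observations) instantiates and trains 30 networks of one model with different initial weights and reports the spread of their MAEs.

```python
# A sketch of the evaluation protocol described above: train several
# networks sharing one architecture but starting from different random
# weights, then report each network's MAE. MLPRegressor stands in for
# the Ward-style ANN; the data below are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 24))       # e.g. 24 h of prior weather inputs
y = X[:, -6:].mean(axis=1) * 30 - 5   # synthetic "temperature 4 h ahead"

maes = []
for seed in range(30):                # 30 instantiations per model
    net = MLPRegressor(hidden_layer_sizes=(40,), max_iter=2000,
                       random_state=seed).fit(X[:250], y[:250])
    maes.append(mean_absolute_error(y[250:], net.predict(X[250:])))

print(f"mean MAE {np.mean(maes):.2f}, spread {np.ptp(maes):.2f} degC")
```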

Description and Analysis of Embedded Firewall Techniques

Since the turn of this century, many researchers have shown interest in Embedded Firewall (EF) implementations. These are not the usual firewalls used as checkpoints at network gateways; rather, they are deployed near the hosts that need protection. Hence, by using them, individual or grouped network components can be protected from inside attacks as well as external ones. This paper presents a study of EFs, looking at their architecture and problems. A comparative study assesses how practical each kind is, focusing in particular on the architecture, weak points, and portability of each. A look at their use by different categories of users is also presented.

Ant Colony Optimization for Feature Subset Selection

The Ant Colony Optimization (ACO) metaheuristic is inspired by the behavior of real ants in their search for the shortest paths to food sources. It has recently attracted a lot of attention and has been successfully applied to a number of different optimization problems. Due to the importance of the feature selection problem and the potential of ACO, this paper presents a novel method that utilizes the ACO algorithm to implement a feature subset search procedure. Initial results obtained on the classification of speech segments are very promising.
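The general scheme can be sketched compactly (our illustration of ACO-style subset search, not the paper's exact algorithm, and on a stock dataset rather than speech segments): each ant samples a feature subset with pheromone-biased probabilities, subsets are scored by a classifier, pheromone evaporates, and the features of the best subset are reinforced.

```python
# A condensed sketch of ACO-driven feature subset search (illustrative,
# not the paper's exact algorithm): pheromone biases which features each
# ant picks, subsets are scored by cross-validated classification, and
# pheromone is reinforced on the best subset's features.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)
n_feats, rng = X.shape[1], np.random.default_rng(0)
tau = np.ones(n_feats)                       # pheromone per feature

def score(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()

best_mask, best_score = None, -1.0
for _ in range(20):                          # iterations
    for _ in range(10):                      # ants per iteration
        p = tau / tau.sum()
        mask = rng.random(n_feats) < p * n_feats * 0.4
        s = score(mask)
        if s > best_score:
            best_mask, best_score = mask, s
    tau *= 0.9                               # evaporation
    tau[best_mask] += best_score             # reinforcement

print(best_score, best_mask.nonzero()[0])
```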

A Semantic Web Based Ontology in the Financial Domain

The paper describes the design of an ontology in the financial domain for mutual funds. The design of this ontology consists of four steps, namely specification, knowledge acquisition, implementation, and semantic query. Specification includes a description of the taxonomy, the different types of mutual funds, and their scope. Knowledge acquisition involves information extraction from heterogeneous resources. Implementation describes the conceptualization and encoding of this data. Finally, semantic querying permits complex queries over the integrated data by mapping database entities to ontological concepts.
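The semantic-query step can be illustrated with a small sketch (the namespace, fund classes, and property names below are our own assumptions; the real ontology is richer): fund instances extracted from heterogeneous sources are encoded as triples, after which complex questions become SPARQL queries.

```python
# A minimal sketch of the semantic-query step (illustrative names):
# mutual-fund instances are encoded as RDF triples, and questions
# become SPARQL queries over the integrated data.
from rdflib import Graph, Namespace, RDF, Literal
from rdflib.namespace import XSD

FIN = Namespace("http://example.org/fin#")  # hypothetical namespace
g = Graph()
g.add((FIN.AlphaGrowth, RDF.type, FIN.EquityFund))
g.add((FIN.AlphaGrowth, FIN.expenseRatio,
       Literal(0.45, datatype=XSD.decimal)))
g.add((FIN.SafeIncome, RDF.type, FIN.BondFund))
g.add((FIN.SafeIncome, FIN.expenseRatio,
       Literal(0.80, datatype=XSD.decimal)))

# "Which equity funds have an expense ratio below 0.5?"
q = """PREFIX fin: <http://example.org/fin#>
       SELECT ?fund WHERE {
         ?fund a fin:EquityFund ;
               fin:expenseRatio ?r .
         FILTER (?r < 0.5)
       }"""
for row in g.query(q):
    print(row.fund)   # -> http://example.org/fin#AlphaGrowth
```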

A Stereo Image Processing System for the Visually Impaired

This paper presents a review of vision-aided systems and proposes an approach for visual rehabilitation using stereo vision technology. The proposed system utilizes stereo vision, image processing methodology, and a sonification procedure to support blind navigation. The developed system includes a wearable computer, stereo cameras as vision sensors, and stereo earphones, all moulded into a helmet. The image of the scene in front of the visually handicapped user is captured by the vision sensors. The captured images are processed to enhance the important features of the scene for navigation assistance. The image processing is designed as a model of human vision, identifying obstacles and their depth information. The processed image is mapped onto musical stereo sound for the blind user's understanding of the scene ahead. The developed method has been tested in indoor and outdoor environments, and the proposed image processing methodology is found to be effective for object identification.
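The vision half of such a pipeline can be sketched with OpenCV (our illustration; the file names are placeholders and the sonification mapping is a simplified stand-in for the paper's musical mapping): compute a disparity map from the two helmet cameras, then convert nearest-obstacle distance and side into pitch and stereo pan.

```python
# A sketch of the vision stage: disparity from the two helmet cameras
# via OpenCV block matching, then a toy "nearest obstacle -> pitch/pan"
# mapping. Image files are placeholders for the camera frames.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # left camera frame
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # right camera frame

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point

h, w = disp.shape
near_left = disp[:, : w // 2].max()     # large disparity = close object
near_right = disp[:, w // 2 :].max()

# Closer obstacle -> higher pitch; side -> stereo pan (0=left, 1=right).
pitch_hz = 200 + 20 * max(near_left, near_right)
pan = 0.0 if near_left > near_right else 1.0
print(f"tone {pitch_hz:.0f} Hz, pan {pan}")
```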

A General Model for Acquiring Knowledge

In this paper, based on the work in [1], we give a general model for acquiring knowledge. The model first focuses on how and when the things involved in a problem come into being, and then describes the goals, the energy, and the time required, in order to give an optimal model for deciding how many related things should be involved. Finally, we acquire knowledge from this model, which records the attributes, actions, and connections of the things involved both at the time they come into being and throughout their lifetime. This model not only improves AI theories, but also brings greater effectiveness and accuracy to AI systems, because the systems are given more knowledge when reasoning or computing is used to produce results.

A New Similarity Measure Based On Edge Counting

In the field of concept similarity, the measure of Wu and Palmer [1] has the advantage of being simple to implement and of performing well compared to other similarity measures [2]. Nevertheless, the Wu and Palmer measure presents the following disadvantage: in some situations, the similarity of two elements of an IS-A ontology that are merely in each other's neighborhood exceeds the similarity of two elements contained in the same hierarchy. This situation is inadequate within the information retrieval framework. To overcome this problem, we propose a new similarity measure based on the Wu and Palmer measure. Our objective is to obtain realistic results for concepts that are not located in the same hierarchy. The results obtained show that, compared to the Wu and Palmer approach, our measure yields a gain in terms of relevance and execution time.
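For reference, the baseline measure that the proposal builds on is sim(c1, c2) = 2·depth(lcs) / (depth(c1) + depth(c2)), where lcs is the deepest common ancestor in the IS-A hierarchy. The toy taxonomy below (our own) reproduces both the measure and the contrast between same-hierarchy and neighborhood concepts discussed above.

```python
# Baseline Wu-Palmer measure on a toy IS-A taxonomy (our illustration):
# sim(c1, c2) = 2*depth(lcs) / (depth(c1) + depth(c2)).
parent = {"animal": None, "mammal": "animal", "bird": "animal",
          "dog": "mammal", "cat": "mammal", "eagle": "bird"}

def ancestors(c):
    chain = []
    while c is not None:
        chain.append(c)
        c = parent[c]
    return chain                    # from c up to the root

def depth(c):
    return len(ancestors(c))        # root has depth 1

def wu_palmer(c1, c2):
    a1, a2 = ancestors(c1), ancestors(c2)
    lcs = next(a for a in a1 if a in a2)   # deepest common ancestor
    return 2 * depth(lcs) / (depth(c1) + depth(c2))

print(wu_palmer("dog", "cat"))    # 2*2/(3+3) ≈ 0.67 (same hierarchy)
print(wu_palmer("dog", "eagle"))  # 2*1/(3+3) ≈ 0.33 (neighborhood)
```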

Reliability Evaluation using Triangular Intuitionistic Fuzzy Numbers Arithmetic Operations

In general, fuzzy sets are used to analyze fuzzy system reliability. Here, intuitionistic fuzzy set theory is used instead: the reliability of each component of the system is considered as a triangular intuitionistic fuzzy number. Triangular intuitionistic fuzzy numbers and their arithmetic operations are introduced, and expressions for computing the fuzzy reliability of a series system and a parallel system whose components follow triangular intuitionistic fuzzy numbers are described. As a case study, an imprecise reliability model of an electric network for a dark room is taken; to compute the imprecise reliability of this system, the reliability of each component is represented by a triangular intuitionistic fuzzy number. A corresponding numerical example is presented.
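A minimal sketch of the arithmetic involved, under common conventions (a TIFN is written here as a membership triangle (a1, a2, a3) plus a non-membership support triangle (a1', a2, a3'); the component values are illustrative, not the paper's): series reliability multiplies the component TIFNs, and parallel reliability composes complements.

```python
# Sketch of TIFN arithmetic for reliability (illustrative values).
# A TIFN is a pair of triangles: membership and non-membership support.
def mul(x, y):
    """Approximate TIFN product, componentwise on both triangles."""
    (xm, xn), (ym, yn) = x, y
    return (tuple(a * b for a, b in zip(xm, ym)),
            tuple(a * b for a, b in zip(xn, yn)))

def complement(x):
    """1 - x for a TIFN: reflect each triangle around 1."""
    m, n = x
    return (tuple(1 - v for v in reversed(m)),
            tuple(1 - v for v in reversed(n)))

def series(components):                     # R = R1 * R2 * ...
    r = components[0]
    for c in components[1:]:
        r = mul(r, c)
    return r

def parallel(components):                   # R = 1 - (1-R1)(1-R2)...
    r = complement(components[0])
    for c in components[1:]:
        r = mul(r, complement(c))
    return complement(r)

# Two components, e.g. lamp and switch of the dark-room network
# (values illustrative): (membership triangle, non-membership triangle).
c1 = ((0.80, 0.90, 0.95), (0.75, 0.90, 0.98))
c2 = ((0.85, 0.92, 0.96), (0.80, 0.92, 0.99))
print(series([c1, c2]))
print(parallel([c1, c2]))
```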

Selective Mutation for Genetic Algorithms

In this paper, we propose a selective mutation method for improving the performance of genetic algorithms. In selective mutation, individuals are first ranked, and then each is additionally mutated by one bit in a part of its string that is selected according to its rank. This selective mutation helps genetic algorithms approach the global optimum quickly and escape local optima, thereby improving their performance. We measured the effects of selective mutation on four function optimization problems. Extensive experiments showed that selective mutation can significantly enhance the performance of genetic algorithms.
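A compact sketch of the operator follows (one plausible reading of the rank-to-region mapping, not necessarily the authors' exact choice): individuals are ranked, and each receives one extra bit flip within the part of its string determined by its rank.

```python
# Sketch of selective mutation: after ranking, each individual gets one
# extra bit flip in a string region chosen by its rank (here: better
# rank -> mutate nearer the tail, a plausible reading of the scheme).
import random

def selective_mutation(population, fitness, parts=4):
    ranked = sorted(population, key=fitness, reverse=True)
    n = len(ranked[0])
    out = []
    for rank, ind in enumerate(ranked):
        part = (rank * parts) // len(ranked)   # rank decides the region
        lo = (parts - 1 - part) * n // parts   # better rank -> later part
        pos = random.randrange(lo, lo + n // parts)
        bits = list(ind)
        bits[pos] = "1" if bits[pos] == "0" else "0"   # flip one bit
        out.append("".join(bits))
    return out

random.seed(0)
pop = ["".join(random.choice("01") for _ in range(16)) for _ in range(8)]
ones = lambda s: s.count("1")      # toy fitness: maximize number of 1s
print(selective_mutation(pop, ones))
```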

Impact of Height of Silicon Pillar on Vertical DG-MOSFET Device

The Vertical Double Gate (DG) Metal Oxide Semiconductor Field Effect Transistor (MOSFET) is believed to suppress various short channel effects. The gate-to-channel coupling in a vertical DG-MOSFET is doubled, resulting in higher current density. With two gates, the channel is controlled from both sides, giving better electrostatic control. In order to ensure that the transistor possesses a good turn-off characteristic, the sub-threshold swing (SS) must be kept at a minimum value (60-90 mV/dec). Using SILVACO TCAD software, an n-channel vertical DG-MOSFET was designed while keeping the sub-threshold swing (SS) as low as possible. Observations show that the sub-threshold swing can be varied by adjusting the height of the silicon pillar. The minimum sub-threshold swing was found to be 64.7 mV/dec with a threshold voltage (VTH) of 0.895 V. The ideal height of the vertical DG-MOSFET pillar was found to be 0.265 µm.
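For context, the sub-threshold swing quoted above follows the standard textbook expression below (not derived in the abstract); the ideal room-temperature limit of about 60 mV/dec is why the reported 64.7 mV/dec indicates a near-ideal turn-off.

```latex
% Standard sub-threshold swing expression (textbook definition):
SS = \frac{\partial V_{GS}}{\partial(\log_{10} I_D)}
   = \ln(10)\,\frac{kT}{q}\left(1 + \frac{C_d}{C_{ox}}\right)
   \approx 60\ \text{mV/dec}\times\left(1 + \frac{C_d}{C_{ox}}\right)
   \quad\text{at } T = 300\ \text{K}.
```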

Decision Support System Based on Data Warehouse

A typical Intelligent Decision Support System is built on four components: a Data Warehouse, Online Analytical Processing (OLAP), Data Mining (DM), and model-based decision support; such a system is called a Decision Support System Based on Data Warehouse (DSSBDW). This approach takes ETL, OLAP, and DM as its implementation means and integrates traditional model-driven DSS and data-driven DSS into a whole. This paper analyzes the DSSBDW architecture and the DW model, and discusses the following key issues: ETL design and realization; metadata management using XML; SQL implementation, performance optimization, and data mapping in OLAP; and finally the design principles and methods of the DW in DSSBDW.
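A toy sketch of the ETL step feeding the DW, with the source-to-warehouse column mapping held in XML metadata as the abstract suggests (table names, columns, and the metadata schema are our own illustrative choices):

```python
# Toy ETL sketch: extract rows from an operational source, transform
# them per an XML metadata mapping, and load a star-schema fact table.
import sqlite3
import xml.etree.ElementTree as ET

mapping = ET.fromstring("""
<etl fact="sales_fact">
  <column src="amt" dst="amount"/>
  <column src="dt"  dst="date_key"/>
</etl>""")                      # metadata: source-to-DW column mapping

src = [{"amt": 120.0, "dt": 20240105}, {"amt": 80.5, "dt": 20240106}]

dw = sqlite3.connect(":memory:")
dw.execute("CREATE TABLE sales_fact (amount REAL, date_key INTEGER)")
cols = [(c.get("src"), c.get("dst")) for c in mapping]
for row in src:                 # transform + load driven by the metadata
    dw.execute("INSERT INTO sales_fact (amount, date_key) VALUES (?, ?)",
               [row[s] for s, _ in cols])
print(dw.execute("SELECT SUM(amount) FROM sales_fact").fetchone())
```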

Secure peerTalk Using the PEERTS System

Multiparty voice over IP (MVoIP) systems allow a group of people to communicate freely with each other via the Internet, and they have many applications such as online gaming, teleconferencing, and online stock trading. peerTalk is a peer-to-peer MVoIP system that is more feasible than existing approaches such as P2P overlay multicast and coupled distributed processing. Since stream mixing and distribution are done by the peers, it is vulnerable to major security threats such as node misbehavior, eavesdropping, Sybil attacks, Denial of Service (DoS), call tampering, and man-in-the-middle attacks. To thwart these threats, a security framework called PEERTS (PEEred Reputed Trustworthy System for peerTalk) is implemented so that efficient and secure communication can be carried out between peers.
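The reputation bookkeeping such a framework could use can be sketched as follows (our illustration; the abstract does not give PEERTS's exact trust model): peers rate each mixing or forwarding interaction, and a peer whose score falls below a threshold is excluded from the stream-mixing tree.

```python
# Sketch of reputation-based peer trust (illustrative constants, not
# PEERTS's actual model): ratings move a peer's score via exponential
# smoothing, and low-trust peers are dropped from the mixing tree.
class ReputationTable:
    def __init__(self, threshold=0.4):
        self.scores = {}            # peer id -> trust in [0, 1]
        self.threshold = threshold

    def report(self, peer, success, alpha=0.2):
        old = self.scores.get(peer, 0.5)   # newcomers start neutral
        self.scores[peer] = (1 - alpha) * old + alpha * (1.0 if success else 0.0)

    def trusted(self, peer):
        return self.scores.get(peer, 0.5) >= self.threshold

table = ReputationTable()
for ok in (True, False, False, False):    # a peer that starts misbehaving
    table.report("peer-17", ok)
print(table.trusted("peer-17"))           # -> False: drop from mixing tree
```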

Choosing R-tree or Quadtree Spatial Data Indexing in One Oracle Spatial Database System to Speed Up the Display of Geographical Maps in Mobile Geographical Information System Technology

The latest Geographic Information System (GIS) technology makes it possible to administer the spatial components of daily “business objects” in the corporate database and to apply suitable geographic analysis efficiently in a desktop-focused application. Wireless Internet technology can be used to transfer spatial data between server and client. However, wireless Internet suffers from system bottlenecks that can make the transfer process inefficient, owing to the large volume of spatial data; optimizing the transfer and retrieval of data is therefore an essential issue. An appropriate decision between the R-tree and Quadtree spatial data indexing methods can optimize this process. With the rapid proliferation of spatial databases over the past decade, extensive research has been conducted on the design of efficient data structures to enable fast spatial searching, and commercial database vendors like Oracle have started implementing spatial indexes to cater to large and diverse GIS workloads. This paper focuses on the decision between R-tree and Quadtree spatial indexing using an Oracle Spatial database in a mobile GIS application. Under our experimental conditions, choosing the appropriate indexing method (Quadtree or R-tree) in a single spatial database saved up to 42.5% of the time.
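The two indexing choices compared in the paper can be expressed as Oracle Spatial DDL (sketched below with hypothetical table and column names; fixed-level quadtree indexing via SDO_LEVEL applies to older Oracle releases, where the R-tree is otherwise the default):

```python
# Sketch of the two Oracle Spatial indexing choices as DDL strings
# (hypothetical table/column names; a DB-API cursor.execute(ddl)
# against an Oracle connection would create the index).
R_TREE_DDL = """\
CREATE INDEX city_map_rtree_idx ON city_map (geometry)
  INDEXTYPE IS MDSYS.SPATIAL_INDEX"""
# R-tree is the default when no SDO_LEVEL parameter is given.

QUADTREE_DDL = """\
CREATE INDEX city_map_quad_idx ON city_map (geometry)
  INDEXTYPE IS MDSYS.SPATIAL_INDEX
  PARAMETERS ('SDO_LEVEL=8')"""
# Fixed-level quadtree; the level trades tiles per geometry vs. precision.

for ddl in (R_TREE_DDL, QUADTREE_DDL):
    print(ddl, end="\n\n")
```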

Web Log Mining by an Improved AprioriAll Algorithm

This paper sets forth the possibility and importance of applying Data Mining to Web log mining and points out some problems in conventional search engines. It then offers an improved algorithm based on the original AprioriAll algorithm, which has been widely used in Web log mining. The new algorithm adds the User ID property at every step of producing the candidate set and every step of scanning the database, using it to decide whether an item in the candidate set should be put into the large set that will, in turn, be used to produce the next candidate set. Meanwhile, in order to reduce the number of database scans, the new algorithm uses the Apriori property to limit the size of the candidate set as soon as it is produced. Test results show that the improved algorithm has lower time and space complexity, suppresses noise better, and fits within memory capacity.
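The user-aware counting idea can be sketched compactly (our reading of the improvement, simplified to frequent 2-sequences): a candidate contributes to support only when its items occur, in order, within the same user's session, and the Apriori property bounds the candidate set as soon as it is generated.

```python
# Compact sketch of user-aware sequential counting (simplified to
# 2-sequences): support is counted per User ID session, and candidates
# are generated only from the previous large set (Apriori property).

# Web log grouped by User ID -> ordered page sequence.
logs = {"u1": ["A", "B", "C"], "u2": ["A", "C"],
        "u3": ["B", "C"], "u4": ["A", "B"]}
min_sup = 2

# L1: frequent single pages.
counts = {}
for seq in logs.values():
    for page in set(seq):
        counts[page] = counts.get(page, 0) + 1
L1 = {p for p, c in counts.items() if c >= min_sup}

# C2 built from L1 only, bounding the candidate set in time.
C2 = [(a, b) for a in L1 for b in L1 if a != b]

# One scan; support counted only within the SAME user's sequence.
sup = {c: 0 for c in C2}
for seq in logs.values():
    for a, b in C2:
        if a in seq and b in seq and seq.index(a) < seq.index(b):
            sup[(a, b)] += 1
L2 = {c for c, s in sup.items() if s >= min_sup}
print(L2)   # -> {('A', 'B'), ('A', 'C'), ('B', 'C')}
```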

A Multi-Level Web-Based Parallel Processing System: A Hierarchical Volunteer Computing Approach

Over the past few years, a number of efforts have been exerted to build parallel processing systems that utilize the idle power of LANs and PCs available in many homes and corporations. The main advantage of these approaches is that they provide cheap parallel processing environments for those who cannot afford the expense of supercomputers and parallel processing hardware. However, most of the solutions provided are not very flexible in the use of available resources and are very difficult to install and set up. In this paper, a multi-level web-based parallel processing system (MWPS) is designed (see the appendix). MWPS is based on the idea of volunteer computing; it is very flexible, easy to set up, and easy to use. MWPS allows three types of subscribers: simple volunteers (single computers), super volunteers (full networks), and end users. All of these entities are coordinated transparently through a secure web site. Volunteer nodes provide the processing power needed by the system's end users. There is no limit on the number of volunteer nodes, so the system can grow indefinitely. Both volunteers and system users must register and subscribe. Once they subscribe, each entity is provided with the appropriate MWPS components, which are very easy to install. Super volunteer nodes are provided with special components that make it possible to delegate some of the load to their inner nodes. These inner nodes may in turn delegate some of the load to other, lower-level inner nodes, and so on. It is the responsibility of the parent super nodes to coordinate the delegation process and deliver the results back to the user. MWPS uses a simple behavior-based scheduler that takes into consideration the current load and previous behavior of processing nodes. Nodes that fulfill their contracts within the expected time get a high degree of trust; nodes that fail to satisfy their contracts get a lower degree of trust. MWPS is based on the .NET framework and provides the minimal level of security expected in distributed processing environments. Users and processing nodes are fully authenticated, and communications and messages between nodes are secure. The system has been implemented using C#. MWPS may be used by any group of people or companies to establish a parallel processing or grid environment.
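The behavior-based scheduling rule can be paraphrased in code (constants and names below are illustrative, not taken from MWPS itself): on-time contracts raise a node's trust, late or failed contracts lower it, and the scheduler prefers lightly loaded, high-trust nodes.

```python
# Sketch of a behavior-based scheduler (illustrative constants): trust
# rises with on-time contracts, falls with failures, and node selection
# weighs trust against current load.
class VolunteerNode:
    def __init__(self, name):
        self.name, self.trust, self.load = name, 0.5, 0

    def contract_finished(self, on_time):
        self.trust = (min(1.0, self.trust + 0.1) if on_time
                      else max(0.0, self.trust - 0.2))

def pick_node(nodes):
    # Prefer high trust, penalize current load.
    return max(nodes, key=lambda n: n.trust - 0.1 * n.load)

a, b = VolunteerNode("lab-pc"), VolunteerNode("home-pc")
a.contract_finished(on_time=True)    # a: trust 0.6
b.contract_finished(on_time=False)   # b: trust 0.3
print(pick_node([a, b]).name)        # -> lab-pc
```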

Optimized Delay Constrained QoS Routing

QoS routing aims to find paths between senders and receivers that satisfy the QoS requirements of the application while using network resources efficiently; the underlying routing algorithm must be able to find low-cost paths that satisfy the given QoS constraints. The problem of finding least-cost routes under such constraints is known to be NP-complete, and several algorithms have been proposed to find near-optimal solutions. However, these heuristics either impose relationships among the link metrics to reduce the complexity of the problem, which may limit their general applicability, or are too costly in terms of execution time to be applicable to large networks. In this paper, we present an algorithm that finds a near-optimal solution quickly, named Optimized Delay Constrained Routing (ODCR). It uses an adaptive path weight function together with an additional constraint imposed on the path cost to restrict the search space, and hence finds a near-optimal solution in much less time.
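The underlying problem can be sketched generically (this is a plain delay-constrained least-cost search, not ODCR's adaptive weight function, and the example graph is ours): labels carrying (cost, delay) are expanded best-cost-first, pruned when dominated, and rejected when they would exceed the delay bound.

```python
# Generic delay-constrained least-cost path search (illustrative sketch
# of the problem ODCR addresses, not the authors' algorithm).
import heapq

def dclc(graph, src, dst, max_delay):
    """graph[u] = list of (v, cost, delay) edges."""
    heap = [(0, 0, src, [src])]            # (cost, delay, node, path)
    best_delay = {}                        # lowest delay expanded per node
    while heap:
        cost, delay, u, path = heapq.heappop(heap)
        if u == dst:
            return cost, delay, path       # cheapest feasible path
        if delay >= best_delay.get(u, float("inf")):
            continue                       # dominated label: skip
        best_delay[u] = delay
        for v, c, d in graph.get(u, []):
            if delay + d <= max_delay and v not in path:
                heapq.heappush(heap, (cost + c, delay + d, v, path + [v]))
    return None                            # no feasible path

g = {"s": [("a", 1, 5), ("b", 3, 1)],
     "a": [("t", 1, 5)],
     "b": [("t", 3, 1)]}
print(dclc(g, "s", "t", max_delay=4))      # -> (6, 2, ['s', 'b', 't'])
```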

Auto Classification for Search Intelligence

This paper proposes an auto-classification algorithm for Web pages using data mining techniques. We consider the problem of discovering association rules between terms in a set of Web pages belonging to a category in a search engine database, and present an auto-classification algorithm for solving this problem that is fundamentally based on the Apriori algorithm. The proposed technique has two phases. The first is a training phase, in which human experts determine the categories of different Web pages and a supervised data mining algorithm combines these categories with appropriately weighted index terms according to the highest-support rules among the most frequent words. The second is the categorization phase, in which a Web crawler traverses the World Wide Web to build a database categorized according to the result of the data mining approach. This database contains URLs and their categories.
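Both phases fit in a small sketch (illustrative only; the paper's weighting of index terms is more elaborate, and the training pages below are made up): training derives term-to-category rules by support from expert-labelled pages, and the categorization phase scores a crawled page against those rules.

```python
# Sketch of the two phases: mine frequent term->category rules from
# expert-labelled pages, then score crawled pages against the rules.
from collections import Counter, defaultdict

# Phase 1: expert-labelled training pages (category -> token lists).
train = {"finance": [["stock", "market", "fund"], ["fund", "bond"]],
         "sports":  [["match", "goal"], ["goal", "team", "match"]]}
min_support = 2

rules = defaultdict(dict)           # term -> {category: support}
for cat, pages in train.items():
    counts = Counter(t for page in pages for t in set(page))
    for term, sup in counts.items():
        if sup >= min_support:      # frequent term within the category
            rules[term][cat] = sup

# Phase 2: the crawler scores an unseen page against the rules.
def categorize(tokens):
    score = Counter()
    for t in set(tokens):
        for cat, sup in rules.get(t, {}).items():
            score[cat] += sup
    return score.most_common(1)[0][0] if score else None

print(categorize(["fund", "stock", "market"]))   # -> 'finance'
```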