Methods for Case Maintenance in Case-Based Reasoning

Case-Based Reasoning (CBR) is a machine learning approach to problem solving and learning that has attracted considerable attention in recent years. In general, CBR comprises four main phases: retrieve the most similar case or cases, reuse the case to solve the problem, revise or adapt the proposed solution, and retain the learned case in the case base for future learning. Unfortunately, in many systems this retention step causes uncontrolled case-base growth, which degrades both the competence and the performance of CBR systems. This paper proposes a competence-based maintenance method for CBR built on a deletion policy. The method has three main steps: first, formulate the problem; second, determine the coverage and reachability sets based on coverage values; third, reduce the case-base size. The results obtained show that the proposed method performs better than existing methods discussed in the literature.
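
As a purely illustrative sketch of the competence model behind such deletion policies (coverage and reachability sets in the sense of Smyth and Keane's competence model; the `solves` predicate and the ranking rule below are assumptions, not this paper's algorithm), one way to reduce a case base by deleting low-coverage, non-pivotal cases first could look like this:

```python
# Minimal sketch of competence-based case deletion.
# Assumption: solves(c, t) returns True when case c can be
# adapted to solve target case t.

def coverage(c, case_base, solves):
    """Cases in the case base that c can solve."""
    return {t for t in case_base if solves(c, t)}

def reachability(t, case_base, solves):
    """Cases in the case base that can solve t."""
    return {c for c in case_base if solves(c, t)}

def reduce_case_base(case_base, solves, target_size):
    """Delete lowest-coverage cases first, but never delete a case
    that is the only one able to solve some other case (a pivot)."""
    cb = set(case_base)
    while len(cb) > target_size:
        candidates = sorted(cb, key=lambda c: len(coverage(c, cb, solves)))
        deleted = False
        for c in candidates:
            pivotal = any(reachability(t, cb, solves) == {c}
                          for t in coverage(c, cb, solves) if t != c)
            if not pivotal:
                cb.remove(c)
                deleted = True
                break
        if not deleted:   # every remaining case is pivotal
            break
    return cb
```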

Image Segmentation Using Suprathreshold Stochastic Resonance

In this paper a new concept, the partial complement of a graph G, is introduced, and using it a new graph parameter, called the completion number of G and denoted by c(G), is defined. Some basic properties of the completion number are studied, upper bounds on the completion number are obtained for several classes of graphs, and a characterization is also included.

A Contribution to the Application of the Structural Analysis Method in Entrepreneurial Practice

Quantitative methods of economic decision-making, the methodological base of so-called operational research, represent an important set of tools for managing complex economic systems, both at the microeconomic level and on the macroeconomic scale. Mathematical models of controlled and controlling processes allow, by means of artificial experiments, obtaining information for optimal or near-optimal managerial decision-making. The quantitative methods of economic decision-making usually include a methodology known as structural analysis: an analysis of interdisciplinary production-consumption relations.
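
Structural analysis in this sense is closely related to Leontief input-output analysis, where gross sector outputs are obtained from final demand through a technical-coefficient matrix. A minimal numerical sketch (the coefficient matrix and demand vector below are invented for illustration only):

```python
import numpy as np

# Hypothetical technical coefficients a_ij: units of sector i's output
# needed to produce one unit of sector j's output.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
d = np.array([100.0, 50.0])   # hypothetical final demand per sector

# Leontief model: x = A x + d  =>  x = (I - A)^(-1) d
x = np.linalg.solve(np.eye(2) - A, d)
print(x)   # gross outputs required to satisfy final demand
```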

Bridging the Gap Between CBR and VBR for H264 Standard

This paper provides a flexible way of controlling the Variable Bit Rate (VBR) of compressed digital video, applicable to the new H.264 video compression standard. The entire video sequence is assessed in advance and the quantisation level is then set such that the bit rate (and thus the frame rate) remains within predetermined limits compatible with the bandwidth of the transmission system and the capabilities of the remote end, while at the same time providing constant quality similar to VBR encoding. A process for avoiding buffer starvation by selectively eliminating frames from the encoded output at times when the frame rate is slow (a large number of bits per frame) is also described. Finally, the problem of buffer overflow is solved by selectively eliminating frames from the input received by the decoder. The decoder detects the omission of frames and resynchronizes the transmission by monitoring time stamps and repeating frames if necessary.
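
A hedged sketch of the selective frame-dropping idea: when a large frame would let a simple model of the decoder buffer fall below a low watermark, a non-reference frame is dropped. The buffer model, thresholds and Frame structure below are illustrative assumptions, not the paper's implementation:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    bits: int
    is_reference: bool   # dropping reference frames would break decoding

def regulate(frames, channel_bits_per_frame, buffer_bits, low_watermark):
    """Drop droppable frames whenever decoding the next frame would
    push the (modelled) decoder buffer below the low watermark."""
    sent = []
    for f in frames:
        if (buffer_bits + channel_bits_per_frame - f.bits < low_watermark
                and not f.is_reference):
            buffer_bits += channel_bits_per_frame   # frame dropped, buffer refills
            continue
        buffer_bits += channel_bits_per_frame - f.bits
        sent.append(f)
    return sent
```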

The Open Knowledge Kernel

Web services are pieces of software that can be invoked via a standardized protocol, and they can be combined via formalized taskflow languages. The Open Knowledge system is a fully distributed system, using P2P technology, that allows users to publish these taskflows and allows programmers to register their web services, or publish implementations of them, for the roles described in these workflows. Besides this, the system offers the functionality to select a peer that can coordinate such an interaction model and inform web services when it is their 'turn'. In this paper we describe the architecture and implementation of the Open Knowledge Kernel, which provides the core functionality of the Open Knowledge system.
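
A purely illustrative sketch of the coordination idea (the interaction model, role names, peer identifiers and notification interface are all assumptions, not the Open Knowledge API): the coordinating peer steps through the published interaction model and notifies the peer playing each role when it is its turn:

```python
# Hypothetical interaction model: ordered (role, message) steps.
interaction_model = [("buyer", "ask_price"), ("seller", "quote"),
                     ("buyer", "accept")]
subscriptions = {"buyer": "peer-17", "seller": "peer-42"}  # role -> peer

def coordinate(model, subs, notify):
    for role, message in model:
        notify(subs[role], message)   # tell the peer it is its 'turn'

coordinate(interaction_model, subscriptions,
           lambda peer, msg: print(f"notify {peer}: {msg}"))
```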

RAPD Analysis of Genetic Diversity of Castor Bean

The aim of this work was to detect genetic variability among a set of 40 castor genotypes using 8 RAPD markers. Amplification of the genomic DNA of the 40 genotypes using RAPD analysis yielded 66 fragments, with an average of 8.25 polymorphic fragments per primer. The number of amplified fragments ranged from 3 to 13, with amplicon sizes ranging from 100 to 1200 bp. Polymorphic information content (PIC) values ranged from 0.556 to 0.895 with an average of 0.784, and diversity index (DI) values ranged from 0.621 to 0.896 with an average of 0.798. A dendrogram based on hierarchical cluster analysis using the UPGMA algorithm was prepared; the analyzed genotypes were grouped into two main clusters, and only two genotypes could not be distinguished. Knowledge of the genetic diversity of castor can be used in future breeding programs for increased oil production for industrial uses.
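
For readers unfamiliar with the statistic, one common formulation of PIC for multi-band marker data is PIC = 1 − Σ pᵢ², summed over band-pattern frequencies for a primer; whether this exact formula was used here is an assumption, and the frequencies below are illustrative:

```python
# Hedged sketch: polymorphic information content per primer, using the
# common formulation PIC = 1 - sum(p_i^2) over pattern frequencies.
def pic(frequencies):
    assert abs(sum(frequencies) - 1.0) < 1e-9
    return 1.0 - sum(p * p for p in frequencies)

print(pic([0.25, 0.25, 0.25, 0.25]))  # 0.75: four equally frequent patterns
```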

Comparison of MFCC and Cepstral Coefficients as a Feature Set for PCG Biometric Systems

Heart sound is an acoustic signal, and many techniques used nowadays for human recognition tasks borrow from speech recognition. One popular choice for feature extraction from acoustic signals is the Mel Frequency Cepstral Coefficients (MFCC), which map the signal onto a non-linear Mel scale that mimics human hearing. However, the Mel scale is almost linear in the frequency region of heart sounds and thus should produce results similar to the standard cepstral coefficients (CC). In this paper, MFCC is investigated to see whether it produces superior results for a PCG-based human identification system compared to CC. Results show that the MFCC system is still superior to CC despite the linear filter-banks in the lower frequency range, giving up to 95% correct recognition for MFCC and 90% for CC. Further experiments show that the high recognition rate is due to the implementation of filter-banks and not to Mel scaling.
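
The near-linearity claim can be checked directly with the standard mel-scale formula mel(f) = 2595·log₁₀(1 + f/700): in the low-frequency band where heart sounds concentrate (roughly tens to a couple of hundred hertz), the mapping deviates little from a straight line. A small check (the band limits are illustrative):

```python
import numpy as np

def hz_to_mel(f):
    # Standard mel-scale formula.
    return 2595.0 * np.log10(1.0 + f / 700.0)

f = np.linspace(20.0, 200.0, 50)          # typical heart-sound band (Hz)
mel = hz_to_mel(f)
# Compare against a straight line through the endpoints.
line = mel[0] + (mel[-1] - mel[0]) * (f - f[0]) / (f[-1] - f[0])
print(np.max(np.abs(mel - line)) / (mel[-1] - mel[0]))  # small relative error
```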

Extended Least Squares LS–SVM

Among neural models, Support Vector Machine (SVM) solutions are attracting increasing attention, mostly because they eliminate certain crucial questions involved in neural network construction. The main drawback of the standard SVM is its high computational complexity; to address this, a new technique, the Least Squares SVM (LS–SVM), has recently been introduced. In this paper we present an extended view of Least Squares Support Vector Regression (LS–SVR), which enables us to develop new formulations of and algorithms for this regression technique. Based on manipulating the linear equation set, which embodies all information about the regression in the learning process, some new methods are introduced to simplify the formulations, speed up the calculations and/or provide better results.
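
For reference, in the standard LS-SVR formulation the linear equation set referred to above is [[0, 1ᵀ], [1, K + I/γ]][b; α] = [0; y], where K is the kernel matrix. A minimal numpy sketch that solves this system (the RBF kernel, data and hyper-parameters are illustrative, not the paper's extended formulations):

```python
import numpy as np

def lssvr_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVR linear system for bias b and dual weights alpha."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))       # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma        # regularized kernel block
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                   # b, alpha

X = np.linspace(0, 1, 20)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
b, alpha = lssvr_fit(X, y)
```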

Optimal Prices under Revenue Sharing Contract in a Supply Chain with Direct Channel

We study a dual-channel supply chain in a decentralized setting in which the manufacturer sells to a retailer and also directly to customers through an online channel. A customer chooses the purchase channel based on price and service quality. In addition, to buy the product from the retail store, the customer incurs a transportation cost influenced by the fluctuating gasoline cost. Both companies operate under a revenue sharing contract: the retailer shares a portion of its revenue with the manufacturer, while the manufacturer charges a lower wholesale price. The numerical results show that the gasoline cost, the revenue sharing ratio and the wholesale price play important roles in determining the optimal prices. The results also show that when the gasoline price fluctuates, the optimal online price is relatively stable while the optimal retail price moves in the opposite direction to the gasoline price.
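
As a heavily hedged sketch of how such optimal prices can be found numerically: the linear demand functions, the transportation-cost term t, the revenue share φ, the wholesale price w and all parameter values below are invented for illustration, and for simplicity total chain profit is maximized here, whereas the paper studies a decentralized setting:

```python
from scipy.optimize import minimize

def total_profit(prices, w=5.0, phi=0.2, t=1.0):
    """Hypothetical dual-channel model: t is a per-unit transportation
    cost driven by the gasoline price; phi is the revenue-share ratio."""
    p_on, p_r = prices
    d_on = max(0.0, 20.0 - 1.5 * p_on + 0.5 * p_r)        # online demand
    d_r = max(0.0, 20.0 - 1.5 * (p_r + t) + 0.5 * p_on)   # retail demand
    retailer = (1 - phi) * p_r * d_r - w * d_r
    manufacturer = w * d_r + phi * p_r * d_r + p_on * d_on
    return retailer + manufacturer

res = minimize(lambda p: -total_profit(p), x0=[8.0, 8.0])
print(res.x)   # candidate optimal (online, retail) prices
```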

An Efficient Method for Load-Flow Solution of Radial Distribution Networks

This paper reports a new and accurate method for the load-flow solution of radial distribution networks with minimum data preparation. Unlike other available methods, the node and branch numbering need not be sequential, and the proposed method does not need sending-node, receiving-node and branch numbers if these are sequential. The method uses a simple equation to compute the voltage magnitude and can handle composite load modelling. It works with the sets of nodes of the feeder, lateral(s) and sub-lateral(s). The effectiveness of the proposed method is compared with that of other methods using two examples, and detailed load-flow results for different kinds of load modelling are also presented.
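
For context, a generic backward/forward sweep is a common way to solve radial load flow; the paper's own equation and data structures may differ, and the feeder data below are illustrative:

```python
import numpy as np

# Hedged sketch of a backward/forward sweep on a small radial feeder.
# branch: (sending node, receiving node, impedance); node 0 is the source.
branches = [(0, 1, 0.02 + 0.04j), (1, 2, 0.03 + 0.05j), (1, 3, 0.02 + 0.03j)]
loads = {1: 0.5 + 0.2j, 2: 0.8 + 0.3j, 3: 0.4 + 0.1j}  # per-unit S at nodes

V = {n: 1.0 + 0.0j for n in range(4)}   # flat start, slack at node 0
for _ in range(20):
    I_load = {n: np.conj(loads.get(n, 0) / V[n]) for n in range(4)}
    # Backward sweep: accumulate branch currents from the feeder ends.
    I_branch = {}
    for s, r, z in reversed(branches):
        I_branch[(s, r)] = I_load[r] + sum(
            I_branch[(s2, r2)] for s2, r2, _ in branches if s2 == r)
    # Forward sweep: update voltages from the source outwards.
    for s, r, z in branches:
        V[r] = V[s] - z * I_branch[(s, r)]

print({n: abs(V[n]) for n in V})   # converged voltage magnitudes
```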

Intuitionistic Fuzzy Points in Semigroups

The notion of intuitionistic fuzzy sets was introduced by Atanassov as a generalization of the notion of fuzzy sets. Y.B. Jun and S.Z. Song introduced the notion of intuitionistic fuzzy points. In this paper we establish some relations between the intuitionistic fuzzy ideals of a semigroup S and the set of all intuitionistic fuzzy points of S.
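
For readers unfamiliar with the background, Atanassov's definition, and the notion of an intuitionistic fuzzy point it specializes to, can be stated as follows; the notation is the standard one rather than necessarily that of this paper:

```latex
% Atanassov's intuitionistic fuzzy set A in a set X:
A = \{\, (x, \mu_A(x), \nu_A(x)) : x \in X \,\}, \qquad
\mu_A, \nu_A : X \to [0,1], \quad 0 \le \mu_A(x) + \nu_A(x) \le 1 .
% An intuitionistic fuzzy point with support a \in X and values
% (\alpha, \beta), where \alpha \in (0,1], \beta \in [0,1),
% \alpha + \beta \le 1:
a_{(\alpha,\beta)}(x) =
  \begin{cases}
    (\alpha, \beta) & \text{if } x = a, \\
    (0, 1)          & \text{otherwise.}
  \end{cases}
```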

Architecture Exception Governance

The article presents a complete model of IS/IT architecture exception governance. First, the assumptions of the presented model are set out. Next, a generic governance model is defined that serves as the basis for architecture exception governance. The definition of an architecture exception and its attributes follow. The model respects well-known approaches to the area, which are described in the text, but it adopts a higher granularity of description and expands the process view with the other necessary governance components, such as roles, principles and policies, and tools, to enable implementation of the model in organizations. The architecture exception process is decomposed into a set of processes related to the architecture exception lifecycle, which consists of a set of phases and architecture exception states. Finally, information about my future research in this area is given.

Simultaneous Saccharification and Fermentation (SSF) of Sugarcane Bagasse - Kinetics and Modeling

Simultaneous Saccharification and Fermentation (SSF) of sugarcane bagasse by cellulase and Pachysolen tannophilus MTCC *1077 was investigated in the present study. Important process variables for ethanol production from pretreated bagasse were optimized using Response Surface Methodology (RSM) based on central composite design (CCD) experiments. A 2³ five-level CCD with central and axial points was used to develop a statistical model for the optimization of the process variables: incubation temperature (25–45°C, X1), pH (5.0–7.0, X2) and fermentation time (24–120 h, X3). Data obtained from RSM on ethanol production were subjected to analysis of variance (ANOVA) and fitted with a second-order polynomial equation, and contour plots were used to study the interactions among the three relevant variables of the fermentation process. The fermentation experiments were carried out using an online-monitored modular fermenter of 2 L capacity. The maximum response for ethanol production was obtained at the optimum values of temperature (32°C), pH (5.6) and fermentation time (110 h). The maximum ethanol concentration (3.36 g/l) was obtained from 50 g/l pretreated sugarcane bagasse under the optimized process conditions in aerobic batch fermentation. Kinetic models, namely the Monod model, a modified logistic model, a modified logistic model incorporating the Luedeking–Piret model, and a modified logistic model incorporating a modified Luedeking–Piret model, were evaluated and their constants predicted.
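
The second-order polynomial fitted in such RSM studies has the standard general form below (the coefficients are symbolic; the paper's fitted values are not reproduced here):

```latex
% Quadratic response surface model in coded variables
% X_1 (temperature), X_2 (pH), X_3 (fermentation time):
Y = \beta_0 + \sum_{i=1}^{3}\beta_i X_i
  + \sum_{i=1}^{3}\beta_{ii} X_i^2
  + \sum_{i<j}\beta_{ij} X_i X_j
```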

A Study on Optimal Determination of Partial Transmission Ratios of Helical Gearboxes with Second-Step Double Gear-Sets

In this paper, a study on the application of optimization and regression techniques to the optimal calculation of the partial ratios of helical gearboxes with second-step double gear-sets, for minimal cross-sectional dimension, is introduced. From the moment equilibrium condition of a mechanical system comprising three gear units and their regular resistance condition, models for calculating the partial ratios of helical gearboxes with second-step double gear-sets were derived. In particular, explicit models for calculating the partial ratios are introduced by means of regression analysis. These models allow the partial ratios to be determined accurately and simply.

Markov Game Controller Design Algorithms

Markov games are a generalization of Markov decision processes to a multi-agent setting. The two-player zero-sum Markov game framework offers an effective platform for designing robust controllers. This paper presents two novel controller design algorithms that use ideas from the game-theory literature to produce reliable controllers able to maintain performance in the presence of noise and parameter variations. A more widely used approach for controller design is H∞ optimal control, which suffers from high computational demand and, at times, may be infeasible. Our approach generates an optimal control policy for the agent (controller) via a simple linear program, enabling the controller to learn about the unknown environment. The controller faces an unknown environment, and in our formulation this environment corresponds to the behaviour rules of the noise, modelled as the opponent. The proposed controller architectures attempt to improve controller reliability by gradually mixing algorithmic approaches drawn from the game-theory literature with the minimax-Q Markov game solution approach in a reinforcement-learning framework. We test the proposed algorithms on a simulated inverted-pendulum swing-up task and compare their performance against standard Q-learning.
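
The "simple linear program" at the heart of minimax-Q solves a matrix game at each state: find the agent's mixed policy that maximizes the worst-case value over the opponent's actions. A hedged sketch for a single state's Q-matrix (the payoff matrix is illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(Q):
    """Max-min mixed policy pi for the row player of payoff matrix Q:
    maximize v s.t. sum_a pi[a] * Q[a, o] >= v for every opponent
    action o, with pi a probability distribution."""
    n_a, n_o = Q.shape
    # Variables: [pi_1..pi_na, v]; linprog minimizes, so use -v.
    c = np.zeros(n_a + 1); c[-1] = -1.0
    A_ub = np.hstack([-Q.T, np.ones((n_o, 1))])   # v - pi^T Q[:, o] <= 0
    b_ub = np.zeros(n_o)
    A_eq = np.hstack([np.ones((1, n_a)), np.zeros((1, 1))])
    b_eq = np.array([1.0])                        # policy sums to one
    bounds = [(0, None)] * n_a + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds)
    return res.x[:n_a], res.x[-1]                 # policy, game value

Q = np.array([[1.0, -1.0], [-1.0, 1.0]])          # matching-pennies payoffs
pi, v = solve_matrix_game(Q)
print(pi, v)                                      # ~[0.5, 0.5], value ~0
```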

Moisture Diffusivity of AAC with Different Densities

A method for determining the moisture diffusivity of two types of autoclaved aerated concrete with different bulk densities is presented in this paper. One-dimensional water transport in the liquid phase only was measured on the specimens. Each evaluation was done from moisture profiles measured at specific times with a capacitance moisture meter. All values from the capacitance meter were converted to moisture content by mass. The moisture diffusivity was determined as a function of both moisture and temperature. The experimental temperatures were set at 55, 65, 75 and 85°C.

Emotion Classification using Adaptive SVMs

The study of the interaction between humans and computers has been growing over the last few years. This interaction will be more powerful if computers are able to perceive and respond to human non-verbal communication such as emotions. In this study, we present an image-based approach to emotion classification through lower-face expressions. We employ a set of feature points in the lower-face image, chosen according to the particular face model used, and consider their motion across images of each emotive expression. The vector of displacements of all feature points is input to an Adaptive Support Vector Machines (A-SVM) classifier that assigns it to one of seven basic emotions: neutral, angry, disgust, fear, happy, sad and surprise. The system was tested on the Japanese Female Facial Expression (JAFFE) dataset of frontal-view facial expressions [7]. Our experiments on emotion classification through lower-face expressions demonstrate the robustness of the Adaptive SVM classifier and verify the high efficiency of our approach.
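
A hedged sketch of the classification stage only: a standard SVM stands in for the A-SVM variant, and the displacement features are synthetic stand-ins rather than tracked face-model points:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

EMOTIONS = ["neutral", "angry", "disgust", "fear", "happy", "sad", "surprise"]
rng = np.random.default_rng(0)

# Synthetic stand-in: one displacement vector (dx, dy per feature point)
# per expression image; real features would come from a tracked face model.
n_points = 20
X = rng.normal(size=(700, 2 * n_points))
y = rng.integers(0, len(EMOTIONS), size=700)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", C=10.0).fit(X_tr, y_tr)   # one-vs-one multi-class
print(clf.score(X_te, y_te))   # chance level here; real data would differ
```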

Integrated Subset Split for Balancing Network Utilization and Quality of Routing

The overlay approach has been widely used by many service providers for Traffic Engineering (TE) in large Internet backbones. In the overlay approach, logical connections are set up between edge nodes to form a full-mesh virtual network on top of the physical topology, and IP routing is then run over the virtual network. Traffic engineering objectives are achieved by carefully routing logical connections over the physical links. Although the overlay approach has been implemented in many operational networks, it has a number of well-known scaling issues. This paper proposes a new approach that achieves traffic engineering without full-mesh overlaying, using an integrated approach and an equal subset split method. Traffic engineering needs to determine the optimal routing of traffic over the existing network infrastructure by efficiently allocating resources in order to optimize traffic performance on an IP network. Although constraint-based routing [1] in Multi-Protocol Label Switching (MPLS) was developed to address this need, it is not widely tested or debugged, so Internet Service Providers (ISPs) resort to TE methods under Open Shortest Path First (OSPF), the most commonly used intra-domain routing protocol. Determining OSPF link weights for optimal network performance is an NP-hard problem. As this problem cannot be solved exactly, we present a subset split method that improves efficiency and performance by minimizing the maximum link utilization in the network via a small number of link-weight modifications. The results of this method are compared against the results of the MPLS architecture [9] and other heuristic methods.
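
A hedged sketch of the generic heuristic idea of minimizing maximum link utilization via a few weight changes (this is a plain greedy local search, not the paper's subset split method; the topology, capacities and demands are illustrative):

```python
import itertools
import networkx as nx

def max_utilization(G, demands):
    """Route each demand on its shortest path and return the maximum
    link utilization (load divided by capacity)."""
    load = {e: 0.0 for e in G.edges}
    for (s, t), d in demands.items():
        path = nx.shortest_path(G, s, t, weight="weight")
        for e in zip(path, path[1:]):
            load[e] += d
    return max(load[e] / G.edges[e]["cap"] for e in load)

G = nx.DiGraph()
for u, v in [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)]:
    G.add_edge(u, v, weight=1, cap=10.0)
demands = {(0, 3): 8.0, (1, 3): 6.0}

best = max_utilization(G, demands)
for e, w in itertools.product(list(G.edges), [2, 3, 5]):
    old = G.edges[e]["weight"]
    G.edges[e]["weight"] = w
    u = max_utilization(G, demands)
    if u < best:
        best = u                      # keep the improving modification
    else:
        G.edges[e]["weight"] = old    # revert
print(best)
```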

A Real-time 4M Collecting Method for Production Information System

It can be said that the business sector faces a range of challenges - a rapidly changing business environment, an increase in and diversification of customers' demands, and the consequent need for quick response - in putting in place flexible management and production information systems. In fact, many manufacturers have adopted production information management systems such as MES and ERP. Nevertheless, managers have difficulty obtaining ever-changing production process information in real time, and responding quickly to any change in production-related needs on the basis of such information, because they rely on poor production information systems that are not capable of reflecting real-time factory settings. If the manufacturer does not have the capacity to collect or digitalize the 4Ms (Man, Machine, Material, Method), which are the resources for production, on a real-time basis, it may be difficult to effectively maintain information on the production process. In this regard, this paper introduces some new alternatives to the existing methods of collecting the 4Ms in real time that currently prevail in the production field.

Limitations of the Analytic Hierarchy Process Technique with Respect to Geographically Distributed Stakeholders

The selection of appropriate requirements for product releases can make a big difference to a product's success. The selection of requirements is done with various requirements prioritization techniques, which are based on pre-defined, systematic steps to calculate the relative weights of requirements. Prioritization is complicated by new development settings, as development shifts from traditional co-located development to geographically distributed development. Stakeholders connected to a project are distributed all over the world. This geographical distribution of stakeholders makes it hard to prioritize requirements, as each stakeholder has their own perception of and expectations for the requirements in a software project. This paper discusses the limitations of the Analytic Hierarchy Process (AHP) with respect to geographically distributed stakeholders' (GDS) prioritization of requirements. It also provides a solution, in the form of a modified AHP, for prioritizing requirements for GDS. We conduct two experiments and analyze the results in order to discuss the limitations of AHP with respect to GDS. The modified AHP variant is also validated in this paper.
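
For context, standard AHP derives priority weights from a stakeholder's pairwise comparison matrix via its principal eigenvector, with a consistency index to flag inconsistent judgments. A minimal sketch (the judgment values below are illustrative, and this is plain AHP, not the modified variant proposed in the paper):

```python
import numpy as np

# One stakeholder's reciprocal pairwise comparisons of 3 requirements.
A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)          # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                         # priority weights, summing to 1
CI = (eigvals.real[k] - len(A)) / (len(A) - 1)   # consistency index
print(w, CI)   # CI divided by the random index should stay below ~0.1
```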