Heuristic Continuous-time Associative Memories

In this paper, a novel associative memory model is proposed and applied to memory retrieval, building on the conventional continuous-time model. The conventional model suffers from very low memory capacity, and its retrieval process easily converges to equilibrium states that differ greatly from the stored patterns. Genetic Algorithms are well known for their ability to escape local optima on the way to a global optimum. Drawing on this idea, this work proposes a heuristic rule that applies a mutation whenever the state of the network is trapped in a spurious memory. The proposed heuristic associative memory exhibits a storage capacity that does not depend on the number of stored patterns, and its retrieval ability approaches 1.
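As a rough illustration of the escape rule, the sketch below uses a simplified discrete-time Hopfield-style network (the paper's model is continuous-time) with Hebbian weights: when asynchronous updating converges to a state matching no stored pattern, a GA-style mutation flips a few neurons to escape the spurious attractor. All parameters here are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def store(patterns):
    """Hebbian outer-product weights with zero self-connections."""
    P = np.array(patterns)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, cue, patterns, max_epochs=50, mutation_bits=2):
    """Asynchronous recall; on converging to a spurious equilibrium,
    flip a few neurons (GA-style mutation) and keep iterating."""
    s = cue.copy()
    for _ in range(max_epochs):
        prev = s.copy()
        for i in rng.permutation(len(s)):        # asynchronous update
            s[i] = 1 if W[i] @ s >= 0 else -1
        if np.array_equal(s, prev):              # reached an equilibrium
            # In practice a genuine memory would be detected by, e.g., an
            # energy threshold; comparing to the stored patterns keeps
            # this sketch short.
            if any(np.array_equal(s, p) or np.array_equal(s, -p) for p in patterns):
                return s
            flip = rng.choice(len(s), mutation_bits, replace=False)
            s[flip] *= -1                        # mutate to escape the trap
    return s
```

Starting from a noisy cue, the mutation step gives the dynamics repeated chances to leave a spurious basin and settle on a genuine memory.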

Shift Invariant Support Vector Machines Face Recognition System

In this paper, we present a new method for incorporating global shift invariance in support vector machines. Unlike other approaches, which incorporate a feature extraction stage, we first scale the image and then classify it using the modified support vector machine classifier. Shift invariance is achieved by replacing the dot products between patterns used by the SVM classifier with the maximum cross-correlation value between them. Unlike the usual approach, in which the patterns are treated as vectors, in our approach the patterns are treated as matrices (or images). Cross-correlation is computed using computationally efficient techniques such as the fast Fourier transform. The method has been tested on the ORL face database, and the tests indicate that it can improve the recognition rate of an SVM classifier.
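The kernel substitution described above can be sketched in a few lines: replace the dot product with the maximum of the 2-D cross-correlation, computed via the FFT. This sketch uses circular cross-correlation (zero-padding would give linear shifts); note that a Gram matrix built from such a max is not guaranteed to be positive semidefinite, a known caveat of max-based kernels.

```python
import numpy as np

def shift_kernel(a, b):
    """Maximum circular cross-correlation between two equal-size images,
    computed with the FFT (O(n log n) instead of O(n^2) per shift)."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    cc = np.fft.ifft2(A * np.conj(B)).real   # correlation over all shifts
    return cc.max()
```

Because every circular shift of an image yields the same set of correlation values, `shift_kernel(a, np.roll(a, 3, axis=1))` equals `shift_kernel(a, a)`, which is exactly the invariance the classifier exploits.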

An Agent-based Model for Analyzing Interaction of Two Stable Social Networks

In this research, the authors analyze network stability using agent-based simulation. First, they analyze large networks (eight agents) formed by connecting two different stable small social networks (each stable small network consists of four agents). Second, they analyze the shape of the network (eight agents) obtained by adding one agent to a stable seven-agent network. Third, they analyze interpersonal comparison of utility. The star network was not found among the results of interaction between two stable small networks; on the other hand, decentralized networks were formed from several combinations. When one agent was added to a stable seven-agent network, the larger the value of c (the maintenance cost per link), the larger the number of stable network patterns. In this case, the authors identified the characteristics of a large stable network. They also discovered cases in which personal utility decreased while total utility increased.
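The payoff structure in such link-formation models can be illustrated with a connections-model-style utility (an assumption for illustration; the paper's exact payoff function is not reproduced here): each agent gains delta**d for every other agent reachable at shortest-path distance d and pays the maintenance cost c per direct link.

```python
from collections import deque

def utility(i, adj, delta=0.5, c=0.2):
    """Connections-model-style utility for agent i on an undirected
    network given as an adjacency dict (illustrative parameters)."""
    # BFS shortest-path distances from agent i
    dist = {i: 0}
    q = deque([i])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    benefit = sum(delta ** d for j, d in dist.items() if j != i)
    return benefit - c * len(adj.get(i, ()))
```

With delta=0.5 and c=0.2 in a four-agent star, the center's utility is 3(0.5) - 3(0.2) = 0.9 while a leaf's is 0.5 + 2(0.25) - 0.2 = 0.8, the kind of asymmetry that drives the stability and interpersonal-comparison results above.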

Classifier-Based Text Mining for Neural Networks

Text mining, also termed knowledge discovery in text (KDT) or text data mining, applies knowledge discovery techniques to unstructured text. In neural networks that address classification problems, the training set, the testing set, and the learning rate are key elements: the training and testing sets are collections of input/output patterns used to train the network and to assess its performance, while the learning rate sets the rate of weight adjustment. This paper describes a proposed back-propagation neural network classifier that performs cross-validation on the original neural network in order to optimize classification accuracy and reduce training time. The feasibility and benefits of the proposed approach are demonstrated on five data sets: contact-lenses, cpu, weather.symbolic, weather, and labor-neg-data. It is shown that, compared to the existing neural network, training is more than 10 times faster when the data set is larger than cpu or the network has many hidden units, while accuracy (percent correct) was the same for all data sets except contact-lenses, the only one with missing attributes. For contact-lenses, the accuracy of the proposed neural network was on average around 0.3% lower than that of the original neural network. The algorithm is independent of specific data sets, so many of its ideas and solutions can be transferred to other classifier paradigms.
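The combination of back-propagation training with k-fold cross-validation can be sketched as follows (a minimal numpy illustration with made-up data and hyperparameters, not the paper's classifier):

```python
import numpy as np

rng = np.random.default_rng(1)

def train_mlp(X, y, hidden=8, lr=0.5, epochs=2000):
    """Tiny one-hidden-layer back-propagation net (sigmoid units)."""
    W1 = rng.normal(0, 1, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    sig = lambda z: 1 / (1 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)
        out = sig(h @ W2 + b2)
        d_out = (out - y[:, None]) * out * (1 - out)   # output-layer delta
        d_h = d_out @ W2.T * h * (1 - h)               # hidden-layer delta
        W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
        W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(0)
    return lambda Xn: (sig(sig(Xn @ W1 + b1) @ W2 + b2)[:, 0] > 0.5).astype(int)

def cross_validate(X, y, k=5):
    """k-fold cross-validation: train on k-1 folds, test on the held-out
    fold, and report the mean accuracy over the k rotations."""
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        predict = train_mlp(X[train], y[train])
        accs.append((predict(X[test]) == y[test]).mean())
    return float(np.mean(accs))
```

Cross-validation gives an accuracy estimate that does not depend on one lucky train/test split, which is what makes the timing and accuracy comparisons above meaningful.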

FEA for Teeth Preparations Marginal Geometry

Knowledge of the factors that influence stress and its distribution is of key importance to the successful production of durable restorations. One of these factors is the marginal geometry. The objective of this study was to evaluate, by finite element analysis (FEA), the influence of different marginal designs on the stress distribution in teeth prepared for cast metal crowns. Five margin designs were taken into consideration: shoulderless, chamfer, shoulder, sloped shoulder, and shoulder with bevel. For each kind of preparation, a three-dimensional finite element analysis was performed. Maximal equivalent stresses were calculated, and stress patterns were represented in order to compare the marginal designs. Within the limitations of this study, the shoulder and beveled shoulder margin preparations are preferred for cast metal crowns from a biomechanical point of view.

Some Results on Sign Patterns Allowing Simultaneous Unitary Diagonalizability

Allowing diagonalizability of sign patterns is still an open problem. In this paper, we carefully discuss the allowance of simultaneous unitary diagonalizability by a pair of sign patterns. Some necessary and sufficient conditions for allowing unitary diagonalizability are also obtained.
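For context, a brief illustration of the underlying notions (standard definitions, not the paper's results): a sign pattern \(A\) allows unitary diagonalizability if some real matrix \(B\) with the sign pattern of \(A\) is unitarily diagonalizable, which for real \(B\) means \(B\) is normal (\(BB^{T}=B^{T}B\)). Symmetric patterns give an easy sufficient condition:

```latex
% A sign pattern is a matrix over \{+, -, 0\}. The symmetric pattern
A = \begin{pmatrix} + & - \\ - & 0 \end{pmatrix}
% allows unitary (orthogonal) diagonalizability, since any symmetric
% realization, e.g.
B = \begin{pmatrix} 1 & -2 \\ -2 & 0 \end{pmatrix},
% is real symmetric, hence normal and orthogonally diagonalizable.
% Two sign patterns A_1, A_2 allow \emph{simultaneous} unitary
% diagonalizability if realizations B_1, B_2 exist that a single
% unitary U diagonalizes:
U^{*} B_1 U = D_1, \qquad U^{*} B_2 U = D_2,
% which requires in particular that B_1 and B_2 commute.
```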

A Local Decisional Algorithm Using Agent-Based Management in a Constrained Energy Environment

Energy efficiency management lies at the heart of a worldwide problem. The capability of multi-agent systems as a technology for managing micro-grid operation has already been proved. This paper deals with the implementation of a decisional pattern applied to a multi-agent system that provides intelligence to a distributed local energy network considered at the local consumer level. The development of a multi-agent application involves agent specification, analysis, design, and realization, and it can be implemented by following several decisional patterns. The purpose of the present article is to suggest a new approach to a decisional pattern involving a multi-agent system that controls a distributed local energy network in a decentralized competitive system. The proposed solution is the result of a dichotomous approach based on environment observation. It uses an iterative process to solve automatic learning problems and converges monotonically, and very fast, to the system's attracting operation point.

Quantifying the Stability of Software Systems via Simulation in Dependency Networks

The stability of a software system is one of the most important quality attributes affecting maintenance effort. Many techniques have been proposed to support the analysis of software stability at the architecture, file, and class levels of software systems, but little effort has been made at the feature (i.e., method and attribute) level, and the assumptions on which the existing techniques are based often do not match practice. Considering this, in this paper we present a novel metric, Stability of Software (SoS), which measures the stability of object-oriented software systems by simulating software change propagation in feature-level dependency networks. The approach is evaluated by case studies on eight open-source Java programs, using different software structures (one that employs design patterns versus one that does not) for the same object-oriented program. The results of the case studies validate the effectiveness of the proposed metric. The approach has been fully automated by a tool written in Java.
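One hypothetical reading of such a simulation-based metric (our sketch, not the paper's exact SoS definition): repeatedly pick a feature, propagate a change to its dependents with some probability, and subtract the mean impacted fraction from 1, so that a score of 1 means changes never spread.

```python
import random

def simulate_sos(deps, trials=2000, p=0.3, seed=42):
    """Monte-Carlo stability score over a dependency dict mapping each
    feature to the features it depends on. Illustrative parameters."""
    rnd = random.Random(seed)
    # Invert edges: a change in b impacts everything that depends on b.
    rdeps = {}
    for a, targets in deps.items():
        for b in targets:
            rdeps.setdefault(b, []).append(a)
    nodes = list(deps)
    total = 0.0
    for _ in range(trials):
        start = rnd.choice(nodes)
        impacted, frontier = {start}, [start]
        while frontier:
            f = frontier.pop()
            for d in rdeps.get(f, ()):
                if d not in impacted and rnd.random() < p:
                    impacted.add(d)          # change propagates upstream
                    frontier.append(d)
        total += (len(impacted) - 1) / max(len(nodes) - 1, 1)
    return 1.0 - total / trials
```

A fully decoupled system scores exactly 1.0, while any dependency chain lowers the score, matching the intuition that tangled structures are harder to maintain.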

An Efficient Approach for Optimal Placement of TCSC in Double Auction Power Market

This paper proposes an efficient and fast sequential optimization approach, based on investment cost recovery, to the optimal allocation of a thyristor controlled series compensator (TCSC) in a competitive power market. The optimization technique is used with the objective of maximizing social welfare and minimizing the device installation cost through suitable location and rating of the TCSC in the system. The effectiveness of the proposed approach to TCSC location has been compared with some existing methods of TCSC placement in terms of its impact on social welfare, TCSC investment recovery, and optimal generation as well as load patterns. The results have been obtained on a modified IEEE 14-bus system.

Data Mining for Cancer Management in Egypt Case Study: Childhood Acute Lymphoblastic Leukemia

Data mining aims at discovering knowledge in data and presenting it in a form that is easily comprehensible to humans. One useful application in Egypt is cancer management, especially the management of Acute Lymphoblastic Leukemia (ALL), the most common type of cancer in children. This paper discusses the process of designing a prototype that can help in the management of childhood ALL, which has great significance in the health care field. Besides its social impact on decreasing the rate of infection in children in Egypt, it also provides valuable information about the distribution and segmentation of ALL in Egypt, which may be linked to possible risk factors. Undirected knowledge discovery is used since, in this research project, there is no target field, as the data provided are mainly subjective; this is done in order to quantify the subjective variables. The computer is therefore asked to identify significant patterns in the provided medical data about ALL. This is achieved by collecting the data necessary for the system, determining the data mining technique to be used, and choosing the most suitable implementation tool for the domain. The research makes use of a data mining tool, Clementine, to apply the decision tree technique. We feed it with data extracted from real-life cases taken from specialized cancer institutes. Relevant medical case details, such as patient medical history and diagnosis, are analyzed, classified, and clustered in order to improve disease management.
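As a rough illustration of the decision-tree technique (a generic ID3-style sketch on fabricated toy records; the attribute names are hypothetical and this is not the Clementine model or real patient data):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def best_attribute(rows, labels, attrs):
    """Pick the attribute with the highest information gain."""
    base = entropy(labels)
    def gain(a):
        rem = 0.0
        for v in set(r[a] for r in rows):
            sub = [l for r, l in zip(rows, labels) if r[a] == v]
            rem += len(sub) / len(rows) * entropy(sub)
        return base - rem
    return max(attrs, key=gain)

def id3(rows, labels, attrs):
    """Recursively grow a tree; leaves are class labels."""
    if len(set(labels)) == 1:
        return labels[0]
    if not attrs:
        return Counter(labels).most_common(1)[0][0]
    a = best_attribute(rows, labels, attrs)
    node = {'attr': a, 'branches': {}}
    for v in set(r[a] for r in rows):
        sub = [(r, l) for r, l in zip(rows, labels) if r[a] == v]
        srows, slabels = zip(*sub)
        node['branches'][v] = id3(list(srows), list(slabels),
                                  [x for x in attrs if x != a])
    return node

def classify(node, row):
    while isinstance(node, dict):
        node = node['branches'][row[node['attr']]]
    return node
```

The tree splits on whichever attribute best separates the classes, which is what makes the resulting rules readable to clinicians.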

Antibiotic Resistance Profile of Bacterial Isolates from Animal Farming Aquatic Environments and Meats in a Peri-Urban Community in South Korea

The increasing use of antibiotics in the animal farming industry is an emerging worldwide problem that contributes to the development of antibiotic resistance. The purpose of this work was to investigate the prevalence and antibiotic resistance profile of bacterial isolates collected from aquatic environments and meats in a peri-urban community in Daejeon, Korea. In an antibacterial susceptibility test, the bacterial isolates showed a high incidence of resistance (~26.04%) to cefazolin, tetracycline, gentamicin, norfloxacin, erythromycin, and vancomycin. The results of a test for multiple antibiotic resistance indicated that the isolates displayed an approximately 5-fold higher incidence of multiple resistance to combinations of two different antibiotics than to combinations of three or more antibiotics. Most of the isolates showed multi-antibiotic resistance, and the resistance patterns were similar among the sampling groups. Sequence analysis of 16S rRNA showed that most of the resistant isolates were dominated by the classes Betaproteobacteria and Gammaproteobacteria in the phylum Proteobacteria.

A Decision Support System for Predicting Hospitalization of Hemodialysis Patients

Hemodialysis patients may suffer from unhealthy care behaviors or long-term dialysis treatments and ultimately need to be hospitalized. If the hospitalization rate of a hemodialysis center is high, its quality of service is considered low, so decreasing the hospitalization rate is a crucial problem in health care. In this study we combined temporal abstraction with data mining techniques to analyze dialysis patients' biochemical data and develop a decision support system. The mined temporal patterns help clinicians predict the hospitalization of hemodialysis patients and suggest timely treatments to avoid hospitalization.
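The temporal-abstraction step can be illustrated as follows (a basic state-abstraction sketch with hypothetical thresholds, not the paper's full pattern-mining pipeline): each biochemical reading is mapped to a Low/Normal/High symbol, and consecutive identical symbols are merged into intervals that downstream mining can search for recurring patterns.

```python
def abstract_states(values, low, high):
    """Map each reading to L/N/H by thresholds, then merge consecutive
    equal states into (state, start_idx, end_idx) intervals."""
    states = ['L' if v < low else 'H' if v > high else 'N' for v in values]
    intervals = []
    for i, s in enumerate(states):
        if intervals and intervals[-1][0] == s:
            intervals[-1] = (s, intervals[-1][1], i)   # extend the interval
        else:
            intervals.append((s, i, i))                # open a new interval
    return intervals
```

For example, a series that climbs from below-normal to above-normal yields an `L → N → H` interval sequence, the kind of symbolic trend a rule such as "L followed by H within two months" can match.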

A Pattern Language for Software Debugging

In spite of all the advancements in software testing, debugging remains a labor-intensive, manual, time-consuming, and error-prone process. A candidate solution for enhancing the debugging process is to fuse it with the testing process. To achieve this integration, one possible approach is to categorize common software tests and errors, followed by an effort to fix the errors through general solutions for each test/error pair. Our approach to this issue is based on Christopher Alexander's pattern and pattern language concepts. The patterns in this language are grouped into three major sections and connect the three concepts of test, error, and debug. These patterns and their hierarchical relationships shape a pattern language that introduces a solution for solving software errors in a known testing context. Finally, we introduce our framework, ADE, as a sample implementation that supports a pattern of the proposed language and aims to automate the whole process of evolving software design via evolutionary methods.

Repairing and Strengthening Earthquake Damaged RC Beams with Composites

The dominant judgment for earthquake-damaged reinforced concrete (RC) structures is to demolish and rebuild them. Consequently, this paper investigates whether there is a chance to repair earthquake-damaged RC beams and thereby obtain an economical contribution to modern society. To that end, RC beams totally damaged in shear under cyclic load were repaired and strengthened with externally bonded carbon fibre reinforced polymer (CFRP) strips in this study. Four specimens, apart from the reference beam, were divided into two groups. The two beams in the first group were first tested up to failure and then repaired and strengthened with CFRP strips. The two undamaged specimens in the second group were not repaired but were strengthened with the identical strengthening scheme for comparison. This study examines whether earthquake-damaged RC beams that have been repaired and strengthened exhibit strength and behavior similar to equally strengthened, undamaged RC beams. Accordingly, the repaired and strengthened specimens attained a strength corresponding to that of the strengthened-only specimens. The test results confirmed that the repair and strengthening were effective for specimens with the cracking patterns considered in the experimental program.

Spatial Distribution and Risk Assessment of As, Hg, Co and Cr in Kaveh Industrial City, Using Geostatistics and GIS

The concentrations of As, Hg, Co, Cr, and Cd were measured for each soil sample, and their spatial patterns were analyzed by the semivariogram approach of geostatistics and by geographical information system technology. Multivariate statistical approaches (principal component analysis and cluster analysis) were used to identify heavy metal sources and their spatial patterns. Principal component analysis, coupled with correlations between the heavy metals, showed that the primary inputs of As, Hg, and Cd were anthropogenic, while Co and Cr were associated with pedogenic factors. Ordinary kriging was carried out to map the spatial patterns of the heavy metals. The high-pollution sources identified were related to the use of urban and industrial wastewater. The results of this study are helpful for risk assessment of environmental pollution and for decision making on industrial adjustment and soil pollution remediation.
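The semivariogram estimation underlying such an analysis can be sketched with the classical Matheron estimator (a minimal numpy illustration; the study's actual computation used geostatistical/GIS software):

```python
import numpy as np

def empirical_semivariogram(coords, values, lags, tol):
    """Classical estimator: gamma(h) = half the mean squared difference
    over point pairs whose separation falls within tol of lag h."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    # pairwise distances and squared value differences (upper triangle)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    iu = np.triu_indices(len(values), k=1)
    dist = d[iu]
    sq = (values[:, None] - values[None, :])[iu] ** 2
    gamma = []
    for h in lags:
        m = np.abs(dist - h) <= tol
        gamma.append(0.5 * sq[m].mean() if m.any() else np.nan)
    return np.array(gamma)
```

Fitting a model (spherical, exponential, ...) to these empirical points then supplies the weights that ordinary kriging uses to interpolate concentrations at unsampled locations.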

Moving Data Mining Tools toward a Business Intelligence System

Data mining (DM) is the process of finding and extracting frequent patterns that can describe data or predict unknown or future values. These goals are achieved by using various learning algorithms, and each algorithm may produce a mining result completely different from the others; some algorithms may find millions of patterns. It is thus a difficult job for data analysts to select appropriate models and interpret the discovered knowledge. In this paper, we describe the framework of an intelligent and complete data mining system called SUT-Miner. Our system comprises a full complement of major DM algorithms together with pre-DM and post-DM functionalities. It is the post-DM packages that ease DM deployment for business intelligence applications.

Quality Properties of Fermented Mugworts and Rapid Pattern Analysis of Their Volatile Flavor Components by Electronic Nose Based on SAW (Surface Acoustic Wave) Sensor in GC System

The changes in quality properties and nutritional components of two fermented mugworts (Artemisia capillaris Thunberg, Artemisia asiatica Nakai) were characterized, followed by rapid pattern analysis of their volatile flavor compounds by an electronic nose based on a SAW (surface acoustic wave) sensor in a GC system. There were remarkable decreases in pH and small changes in total soluble solids after fermentation. The L (lightness) and b (yellowness) values in Hunter's color system decreased, whilst the a (redness) value increased with fermentation. HPLC analysis demonstrated that total amino acids increased in quantity and that essential amino acids were more abundant in A. asiatica Nakai than in A. capillaris Thunberg. While the total polyphenol contents were not affected by fermentation, the total sugar contents decreased dramatically. Scopoletin was highly abundant in A. capillaris Thunberg but was not detected in A. asiatica Nakai. Electronic-nose analysis of the volatile flavor compounds showed that the intensity of several peaks increased substantially and that seven additional flavor peaks were newly produced after fermentation. The flavor differences of the two mugworts were clearly distinguished in the VaporPrint™ image patterns, which indicates that fermentation gives the two mugworts subtle flavor differences.

Bandwidth Estimation Algorithms for the Dynamic Adaptation of Voice Codec

In recent years, multimedia traffic, and in particular VoIP services, has been growing dramatically. We present a new algorithm to control resource utilization and optimize voice codec selection during SIP call setup, based on the traffic conditions estimated on the network path. The most suitable methodologies and tools for real-time evaluation of the available bandwidth on a network path have been integrated with our proposed algorithm, which selects the best codec for a VoIP call as a function of the instantaneous available bandwidth on the path. The algorithm does not require any explicit feedback from the network, which makes it easily deployable over the Internet. We have also performed intensive tests on real network scenarios with a software prototype, verifying the algorithm's efficiency with different network topologies and traffic patterns between two SIP PBXs. The promising results obtained during the experimental validation of the algorithm are now the basis for extending it towards a larger set of multimedia services and for integrating our methodology with existing PBX appliances.
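The codec-selection step can be sketched as follows (illustrative only: the codec table lists approximate per-call RTP/UDP/IP bandwidth figures for 20 ms packetization, and the real algorithm additionally incorporates the bandwidth-estimation tools mentioned above):

```python
# Hypothetical codec table: (name, approximate required kbps per call,
# including RTP/UDP/IP overhead at 20 ms packetization).
CODECS = [("G.711", 87), ("G.726-32", 55), ("G.729", 31), ("G.723.1", 21)]

def select_codec(available_kbps, codecs=CODECS):
    """Pick the highest-bandwidth (best quality) codec that fits the
    estimated available bandwidth; fall back to the leanest codec."""
    feasible = [c for c in codecs if c[1] <= available_kbps]
    return max(feasible, key=lambda c: c[1])[0] if feasible else codecs[-1][0]
```

The SDP offer built at SIP call setup would then list only the selected codec (or place it first), so both PBXs negotiate a codec the path can actually sustain.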

Classifying Bio-Chip Data Using an Ant Colony System Algorithm

Bio-chips are used for experiments on genes and contain various kinds of information, such as genes and samples. Two-dimensional bio-chips, in which one axis represents genes and the other represents samples, are now widely used. Instead of experimenting with real genes, which costs a great deal of money and time, bio-chips are used for biological experiments, and extracting data from them with high accuracy and finding patterns or useful information in that data is very important. Bio-chip analysis systems extract data from various kinds of bio-chips and mine the data to obtain useful information. One of the commonly used mining methods is classification. The algorithm used to classify the data can vary depending on the data type, the number of characteristics, and so on. Considering that bio-chip data are extremely large, an algorithm that imitates an ecosystem, such as the ant algorithm, is well suited to classification. This paper focuses on finding classification rules in bio-chip data using the Ant Colony algorithm, which imitates an ecosystem. The developed system takes into consideration the accuracy of the discovered rules when applying them to the bio-chip data in order to predict the classes.
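The rule-construction idea can be sketched as follows (a heavily simplified single-term variant of Ant-Miner-style rule discovery with fabricated data and parameters; the real algorithm builds multi-term rules and prunes them):

```python
import random

def ant_miner(rows, labels, n_ants=30, n_iter=20, seed=7):
    """Each ant probabilistically picks one attribute=value term by
    pheromone (roulette wheel), the rule's quality (precision x coverage)
    reinforces that term's pheromone, and the best rule found is kept."""
    rnd = random.Random(seed)
    terms = sorted({(a, r[a]) for r in rows for a in r})
    tau = {t: 1.0 for t in terms}                   # pheromone per term

    def quality(term):
        covered = [l for r, l in zip(rows, labels) if r[term[0]] == term[1]]
        if not covered:
            return 0.0, None
        cls, cnt = max(((c, covered.count(c)) for c in set(covered)),
                       key=lambda x: x[1])
        return (cnt / len(covered)) * (len(covered) / len(rows)), cls

    best = (0.0, None, None)
    for _ in range(n_iter):
        for _ in range(n_ants):
            # roulette-wheel selection proportional to pheromone
            r, acc = rnd.random() * sum(tau.values()), 0.0
            for t in terms:
                acc += tau[t]
                if acc >= r:
                    break
            q, cls = quality(t)
            tau[t] = 0.9 * tau[t] + q               # evaporation + deposit
            if q > best[0]:
                best = (q, t, cls)
    return best[1], best[2]                         # (term, predicted class)
```

Terms that yield accurate, well-covering rules accumulate pheromone and attract more ants, which is the positive-feedback mechanism the abstract borrows from ant ecosystems.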