Iteration Acceleration for Nonlinear Coupled Parabolic-Hyperbolic System

A Picard-Newton iteration method is studied to accelerate the numerical solution of a class of two-dimensional nonlinear coupled parabolic-hyperbolic systems. The Picard-Newton iteration is designed by adding higher-order small-quantity terms to an existing Picard iteration. Discrete functional analysis and inductive-hypothesis reasoning techniques are used to overcome the difficulties arising from the nonlinearity and coupling, and a theoretical analysis of the convergence and approximation properties of the iteration scheme is carried out. The Picard-Newton iteration has a quadratic convergence rate, and its solution is a second-order spatial and first-order temporal approximation to the exact solution of the original problem. Numerical tests verify the theoretical analysis and show that the Picard-Newton iteration is more efficient than the Picard iteration.
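
As a purely illustrative aside (not the paper's PDE scheme), the difference between the linearly convergent Picard fixed-point iteration and the quadratically convergent Newton iteration can be seen on a scalar nonlinear equation; the Python sketch below solves x = cos(x) both ways.

import math

def picard(x0, tol=1e-12, max_iter=200):
    """Fixed-point (Picard) iteration x_{k+1} = cos(x_k); converges linearly."""
    x = x0
    for k in range(max_iter):
        x_new = math.cos(x)
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

def newton(x0, tol=1e-12, max_iter=50):
    """Newton iteration for f(x) = x - cos(x) = 0; converges quadratically."""
    x = x0
    for k in range(max_iter):
        f = x - math.cos(x)
        df = 1.0 + math.sin(x)
        x_new = x - f / df
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

print(picard(1.0))  # roughly seventy iterations to machine precision
print(newton(1.0))  # a handful of iterations to the same precision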

Accurate Fault Classification and Section Identification Scheme in TCSC Compensated Transmission Line using SVM

This paper presents a new approach for the protection of a Thyristor-Controlled Series Compensator (TCSC) compensated transmission line using Support Vector Machines (SVMs). One SVM is trained for fault classification and another for section identification. The method uses three-phase current measurements, which yields better speed and accuracy than other SVM-based methods that use single-phase current measurements, making it suitable for real-time protection. The method was tested on 10,000 data instances with a very wide variation in system conditions such as compensation level, source impedance, fault location, fault inception angle, load angle at the source bus, and fault resistance. The proposed method requires only local current measurements.
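
A minimal sketch of the two-SVM arrangement described above, using scikit-learn; the features, labels and parameter values are placeholders for illustration, not the paper's dataset or settings.

import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder features standing in for local three-phase current measurements (Ia, Ib, Ic).
X = rng.normal(size=(1000, 3))
y_fault_type = rng.integers(0, 10, size=1000)  # fault class labels (placeholder)
y_section = rng.integers(0, 2, size=1000)      # section before/after the TCSC (placeholder)

fault_classifier = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
section_identifier = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
fault_classifier.fit(X, y_fault_type)
section_identifier.fit(X, y_section)

sample = X[:1]
print(fault_classifier.predict(sample), section_identifier.predict(sample))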

Globalisation, ICTs and National Identity: The Consequences of ICT Policy in Malaysia

For the past thirty years the Malaysian economy has been said to contribute well to the progress of the nation. However, the intensification of global economic activity and the extensive use of Information and Communication Technologies (ICTs) in recent years are challenging the government's effort to further develop Malaysian society. The competition posed by low-wage economies such as China and Vietnam has made the government realise the importance of engaging in high-skill and high-technology industries. It is hoped this will be the basis of attracting more foreign direct investment (FDI) in order to help the country compete in a globalised world. Using Vision 2020 as its targeted vision, the government has decided to engage in the use of ICTs and has introduced many policies pertaining to them. Based mainly on a secondary-analysis approach, the findings show that ICT policy in Malaysia contributes to economic growth, but its consequences have resulted in greater division within society. Although some of the divisions, such as gender and ethnicity, are narrowing, the gap in important areas such as region and class is becoming wider. The widespread use of ICTs might contribute to the further establishment of democracy in Malaysia, but the increasing number of foreign entities such as FDI and foreign workers, cultural hybridisation and, to some extent, cultural domination are contributing to neocolonialism in Malaysia. This has obvious consequences for the government's effort to create a Malaysian national identity. An important finding of this work is that there are contradictions within ICT policy between the effort to develop the economy and the effort to develop society.

Effective Class of Discrete Programming Problems

We present herein a concise overview of discrete programming models and methods, together with an analysis of these models and methods. On the basis of discrete programming models, a new class of problems is elaborated and proposed, namely block-symmetry models and methods for the statement and solution of applied tasks.

Refined Buckling Analysis of Rectangular Plates Under Uniaxial and Biaxial Compression

In the traditional buckling analysis of rectangular plates the classical thin-plate theory is generally applied, thereby neglecting plate shear deformation. This method is clearly not fully appropriate for the analysis of thick plates, so in the following the two-variable refined plate theory proposed by Shimpi (2006), which takes the transverse shear effects into account, is applied to the buckling analysis of simply supported isotropic rectangular plates compressed in one and two orthogonal directions. The results are compared with the classical ones and, for rectangular plates under uniaxial compression, a new direct expression, similar to the classical Bryan's formula, is proposed for the Euler buckling stress. As buckling analysis is a widely studied topic for a variety of structures, such as ship structures, some applications for plates uniformly compressed in one and two orthogonal directions are presented, and the theoretical results are compared with those obtained from a FEM analysis, carried out in ANSYS, to show the feasibility of the presented method.
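
For reference, the classical Bryan formula mentioned above gives the elastic buckling stress of a simply supported thin rectangular plate under uniaxial compression; the short sketch below evaluates it (the refined-theory expression proposed in the paper is not reproduced here).

import math

def bryan_buckling_stress(E, nu, t, a, b, m_max=10):
    """Minimum critical stress over the number of longitudinal half-waves m."""
    best = float("inf")
    for m in range(1, m_max + 1):
        k = (m * b / a + a / (m * b)) ** 2  # buckling coefficient
        best = min(best, k * math.pi ** 2 * E / (12.0 * (1.0 - nu ** 2)) * (t / b) ** 2)
    return best

# Example: steel plate, E = 206 GPa, nu = 0.3, 10 mm thick, 2 m x 1 m.
print(bryan_buckling_stress(206e9, 0.3, 0.010, 2.0, 1.0) / 1e6, "MPa")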

Evolutionary Cobreeding of Cooperative and Competitive Subcultures

Neoclassical and functionalist explanations of self organization in multiagent systems have been criticized on several accounts including unrealistic explication of overadapted agents and failure to resolve problems of externality. The paper outlines a more elaborate and dynamic model that is capable of resolving these dilemmas. An illustrative example where behavioral diversity is cobred in a repeated nonzero sum task via evolutionary computing is presented.

Generating Class-Based Test Cases for Interface Classes of Object-Oriented Black Box Frameworks

An application framework provides a reusable design and implementation for a family of software systems. Application developers extend the framework to build their particular applications using hooks. Hooks are the places identified to show how to use and customize the framework. Hooks define the Framework Interface Classes (FICs) and their possible specifications, which helps in building reusable test cases for the implementations of these classes. This paper introduces a novel technique called all paths-state to generate state-based test cases to test the FICs at the class level. The technique is experimentally evaluated. The empirical evaluation shows that the all paths-state technique produces test cases with a higher degree of coverage for the specifications of the implemented FICs compared to test cases generated using the round-trip path and all-transition techniques.

A Hybridization of Constructive Beam Search with Local Search for Far From Most Strings Problem

The Far From Most Strings Problem (FFMSP) is to obtain a string which is far from as many as possible of a given set of strings. All the input and output strings are of the same length, and two strings are said to be far if their Hamming distance is greater than or equal to a given positive integer. FFMSP belongs to the class of sequence consensus problems, which have applications in molecular biology. The problem is NP-hard; it does not admit a constant-ratio approximation either, unless P = NP. Therefore, in addition to exact and approximate algorithms, (meta)heuristic algorithms have been proposed for the problem in recent years. Hybrid algorithms have also been proposed and successfully used for many hard problems in a variety of domains. In this paper, a new metaheuristic algorithm, called Constructive Beam and Local Search (CBLS), is investigated for the problem; it is a hybridization of constructive beam search and local search algorithms. More specifically, the proposed algorithm consists of two phases: the first obtains several candidate solutions via the constructive beam search, and the second applies local search to the candidate solutions obtained by the first phase. The best solution found is returned as the final solution to the problem. The proposed algorithm is also similar to memetic algorithms in the sense that both use local search to further improve individual solutions. The CBLS algorithm is compared with the most recently published algorithm for the problem, GRASP, with significantly positive results; the improvement is by orders of magnitude in most cases.
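
The objective underlying FFMSP follows directly from the definition above; a minimal sketch in Python (the definition only, not the CBLS algorithm itself) is:

def hamming(u, v):
    """Number of positions at which equal-length strings u and v differ."""
    return sum(a != b for a, b in zip(u, v))

def ffmsp_objective(candidate, strings, t):
    """Count how many input strings the candidate is 'far' from (distance >= t)."""
    return sum(hamming(candidate, s) >= t for s in strings)

strings = ["ACGT", "AGGT", "TCGA"]
print(ffmsp_objective("TTTT", strings, t=3))  # 3: far from all three strings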

A Probabilistic Reinforcement-Based Approach to Conceptualization

Conceptualization strengthens intelligent systems in generalization skill, effective knowledge representation, real-time inference, and managing uncertain and indefinite situations, in addition to facilitating knowledge communication for learning agents situated in the real world. Concept learning introduces a way of abstraction by which the continuous state is formed into entities called concepts, which are connected to the action space and thus partly characterize the complex action space. Of the computational concept learning approaches, action-based conceptualization is favored because of its simplicity and its mirror-neuron foundations in neuroscience. In this paper, a new biologically inspired concept learning approach based on a probabilistic framework is proposed. This approach exploits and extends the mirror neuron's role in conceptualization for a reinforcement learning agent in nondeterministic environments. In the proposed method, instead of building a huge numerical knowledge base, the concepts are learnt gradually from rewards through interaction with the environment. Moreover, the probabilistic formation of the concepts is employed to deal with the uncertain and dynamic nature of real problems, in addition to providing the ability to generalize. These characteristics as a whole distinguish the proposed learning algorithm from both a pure classification algorithm and typical reinforcement learning. Simulation results show advantages of the proposed framework in terms of convergence speed as well as generalization and asymptotic behavior, because both successful and failed attempts are utilized through the received rewards. Experimental results, on the other hand, show the applicability and effectiveness of the proposed method in continuous and noisy environments for a real robotic task such as maze traversal, as well as the benefits of implementing an incremental learning scenario in artificial agents.

A Content Vector Model for Text Classification

As a popular rank-reduced vector space approach, Latent Semantic Indexing (LSI) has been used in information retrieval and other applications. In this paper, an LSI-based content vector model for text classification is presented, which constructs multiple augmented category LSI spaces and classifies text by their content. The model integrates the class discriminative information from the training data and is equipped with several pertinent feature selection and text classification algorithms. The proposed classifier has been applied to email classification and its experiments on a benchmark spam testing corpus (PU1) have shown that the approach represents a competitive alternative to other email classifiers based on the well-known SVM and naïve Bayes algorithms.
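
A generic illustration of LSI-based classification (a simplified sketch with toy data, not the paper's multiple augmented category spaces): project TF-IDF vectors into a rank-reduced LSI space and assign a document to the nearest category centroid.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = ["cheap meds buy now", "meeting agenda attached", "win money now", "project status report"]
labels = np.array([1, 0, 1, 0])  # 1 = spam, 0 = legitimate (toy data)

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)
lsi = TruncatedSVD(n_components=2, random_state=0)  # rank-reduced LSI space
Z = lsi.fit_transform(X)

centroids = np.vstack([Z[labels == c].mean(axis=0) for c in (0, 1)])  # one centroid per class

def classify(text):
    z = lsi.transform(tfidf.transform([text]))
    return int(cosine_similarity(z, centroids).argmax())

print(classify("buy cheap meds"))  # expected to land on the spam centroid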

Learning Based On Computer Science Unplugged in Computer Science Education: Design, Development, and Assessment

Although all high school students in Japan are required to learn informatics, many of them do not learn this topic sufficiently. In response to this situation, we propose a support package for high school informatics classes. To examine what students learned and whether they sufficiently understood the content of the lessons, a questionnaire survey was distributed to 186 students. We analyzed the results of the questionnaire and identified the weakest units, which were “basic computer configuration” and “memory and secondary storage”. We then developed a package for teaching these units. We propose that our package be applied in high school classrooms.

A System for Performance Evaluation of Embedded Software

Developers need to evaluate software performance to make software efficient. This paper suggests a performance evaluation system for embedded software. The suggested system consists of a code analyzer, testing agents, a data analyzer, and a report viewer. The code analyzer inserts additional, target-system-dependent code into the source code and compiles it. The testing agents execute the performance tests. The data analyzer translates raw-level result data into class-level APIs for the report viewer. The report viewer offers users graphical report views by using these APIs. We hope the suggested tool will be useful for embedded software development, because developers can easily and intuitively analyze software performance and resource utilization.

Binarization of Text Region based on Fuzzy Clustering and Histogram Distribution in Signboards

In this paper, we present a novel approach to accurately detect text regions, including shop names, in signboard images with complex backgrounds for mobile system applications. The proposed method is based on the combination of text detection using edge profiles and region segmentation using the fuzzy c-means method. In the first step, we apply a Canny edge operator to extract all possible object edges. Edge profile analysis in the vertical and horizontal directions is then performed on these edge pixels to detect potential text regions containing the shop name in a signboard. The edge profile and geometrical characteristics of each object contour are carefully examined to construct candidate text regions and separate the main text region from the background. Finally, the fuzzy c-means algorithm is applied to segment and binarize the detected text region. Experimental results show that our proposed method is robust in text detection with respect to different character sizes and colors and can provide reliable text binarization results.
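
A rough sketch of the final binarization step only (assuming OpenCV is available and a text region has already been located; the image path is hypothetical): cluster the pixel intensities of the cropped region into two fuzzy c-means clusters and keep the darker one as text.

import numpy as np
import cv2

def fuzzy_cmeans_binarize(gray_region, m=2.0, iters=50):
    """Two-cluster fuzzy c-means on pixel intensities; returns a binary image."""
    x = gray_region.reshape(-1).astype(np.float64)
    centers = np.array([x.min(), x.max()])  # initial cluster centers
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)  # membership degrees
        centers = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
    labels = u.argmax(axis=1)
    text_cluster = centers.argmin()  # assume text pixels are darker than background
    return (labels == text_cluster).reshape(gray_region.shape).astype(np.uint8) * 255

img = cv2.imread("signboard.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
if img is not None:
    edges = cv2.Canny(img, 100, 200)      # edge map feeding the profile analysis (not shown)
    binary = fuzzy_cmeans_binarize(img)   # binarized text/background of the detected region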

Dynamic Clustering using Particle Swarm Optimization with Application in Unsupervised Image Classification

A new dynamic clustering approach (DCPSO), based on Particle Swarm Optimization, is proposed and applied to unsupervised image classification. The proposed approach automatically determines the "optimum" number of clusters and simultaneously clusters the data set with minimal user interference. The algorithm starts by partitioning the data set into a relatively large number of clusters to reduce the effects of initial conditions. Using binary particle swarm optimization, the "best" number of clusters is selected. The centers of the chosen clusters are then refined via the K-means clustering algorithm. The experiments conducted show that the proposed approach generally finds the "optimum" number of clusters on the tested images.
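
A much-simplified stand-in for DCPSO (the binary PSO search over candidate clusters is replaced here by a brute-force scan of the cluster count using a silhouette validity index; only the final K-means refinement step matches the abstract's description):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Placeholder data standing in for image pixel features (e.g. RGB values).
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(200, 3)) for c in (0, 3, 6)])

best_k, best_score = None, -1.0
for k in range(2, 10):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score

refined = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit(X)  # refine chosen centres
print(best_k, refined.cluster_centers_.round(2))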

Anomaly Detection and Characterization to Classify Traffic Anomalies Case Study: TOT Public Company Limited Network

This paper presents four unsupervised clustering algorithms, namely sIB, RandomFlatClustering, FarthestFirst, and FilteredClusterer, that previous works have not used for network traffic classification. The methodology, the results, the products of the clustering and an evaluation of these algorithms in terms of accuracy are presented. The efficiency of these algorithms is also considered in terms of the time each takes to generate the clusters quickly and correctly. Our work studies and tests these algorithms to find the best one by classifying traffic anomalies in network traffic using attributes that have not been used before. We analyse the algorithm with the best efficiency, or the best learning, and compare it to the previously used K-Means. Our research will be used to develop a more efficient anomaly detection system in the future.

The Effect of Ethylene Glycol on Soy Polyurethane Foam Classifications

Soy polyol, obtained from the hydroxylation of soy epoxide with ethylene glycol, was prepared as a pre-polyurethane. A two-step process was applied in the polyurethane synthesis. The blending of soy polyol with synthetic polyol was then carried out simultaneously with TDI (2,4):MDI (4,4') (80:20), a blowing agent, and a surfactant. Ethylene glycol did not take part in this polyurethane synthesis; a formulation including ethylene glycol was used as a control. Characterization of the polyurethane foams through impact resilience, indentation deflection, and density illustrates the polyurethane classifications.

Authentic Learning for Computer Network with Mobile Device-Based Hands-On Labware

Computer network courses are essential parts of the college computer science curriculum, and hands-on networking experience is well recognized as an effective approach to help students better understand network concepts, the layered architecture of network protocols, and the dynamics of networks. However, existing networking labs are usually server-based and relatively cumbersome, requiring a certain level of specialty and resources to set up and maintain the lab environment. Many universities and colleges lack the resources and infrastructure in this field and have difficulty providing students with hands-on practice labs. A new affordable and easily adoptable approach to networking labs is desirable to enhance network teaching and learning. In addition, current network labs fall short in providing hands-on practice for modern wireless and mobile network learning. With the prevalence of smart mobile devices, wireless and mobile networks are permeating various aspects of our information society. Emerging and modern mobile technology provides computer science students with more authentic learning opportunities, especially in network learning. A mobile device-based hands-on labware can provide an excellent 'real world' authentic learning environment for computer networks, especially for wireless network study. In this paper, we present our mobile device-based hands-on labware (a series of lab modules) for computer network learning, guided by authentic learning principles to immerse students in a real-world, relevant learning environment. We have been using this labware in teaching computer network, mobile security, and wireless network classes. The student feedback shows that students learn more when they have hands-on authentic learning experience.

Induced Acyclic Graphoidal Covers in a Graph

An induced acyclic graphoidal cover of a graph G is a collection ψ of open paths in G such that every path in ψ has at least two vertices, every vertex of G is an internal vertex of at most one path in ψ, every edge of G is in exactly one path in ψ, and every member of ψ is an induced path. The minimum cardinality of an induced acyclic graphoidal cover of G is called the induced acyclic graphoidal covering number of G and is denoted by ηia(G) or ηia. Here we find induced acyclic graphoidal covers for some classes of graphs.
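
The "induced path" condition in the definition can be checked mechanically; a small Python helper (using networkx, written here for illustration only) verifies that the subgraph induced by a path's vertices contains no chord.

import networkx as nx

def is_induced_path(G, path):
    """True if `path` is an open induced path in G."""
    if len(path) < 2 or len(set(path)) != len(path):
        return False
    if not all(G.has_edge(u, v) for u, v in zip(path, path[1:])):
        return False  # consecutive vertices must be adjacent
    return G.subgraph(path).number_of_edges() == len(path) - 1  # no chords allowed

C5 = nx.cycle_graph(5)
print(is_induced_path(C5, [0, 1, 2]))                    # True: no chord between 0 and 2
print(is_induced_path(nx.complete_graph(4), [0, 1, 2]))  # False: chord 0-2 exists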

Using Non-Linear Programming Techniques in Determination of the Most Probable Slip Surface in 3D Slopes

Among the many methods used for optimizing engineering problems, mathematical (numerical) optimization techniques are very important because they can easily be used and are consistent with most engineering problems. Many studies have been carried out on the stability analysis of three-dimensional (3D) slopes, the related probable slip surfaces, and the determination of factors of safety, but in most of them the force equilibrium equations, as in simplified 2D methods, are considered in only two directions. In other words, to decrease the mathematical calculations and for simplification, the force equilibrium equation in the third direction is omitted. This point is considered in only a few previous studies, and most of them have only given a factor of safety without making enough effort to find the most probable slip surface. In this study the shapes of the slip surfaces are modeled, safety factors are calculated considering the force equilibrium equations in all three directions, the moment equilibrium equation is also satisfied in the slip direction, and nonlinear programming techniques are used to determine the shape of the most probable slip surface. The model used in this study is a 3D model composed of three upper surfaces which can cover all defined and probable slip surfaces. The meshing is done in such a way that all elements are prismatic with quadrilateral cross sections, and the safety factor is defined on the quadrilateral surface at the base of each element, which is part of the whole slip surface. The method used in this study to find the most probable slip surface is nonlinear programming, in which the objective function to be optimized is the factor of safety, a function of the soil properties and the coordinates of the nodes on the probable slip surface. The main reason for using the nonlinear programming method in this research is its quick convergence to the desired responses. The final results show good compatibility with the previously used classical and 2D methods, as well as a reasonable convergence speed.
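
A schematic sketch of the optimization set-up only (the factor-of-safety function below is a smooth placeholder, not the paper's 3D limit-equilibrium formulation): the slip-surface node coordinates are the design variables, and a nonlinear programming solver searches for the surface that minimises the factor of safety.

import numpy as np
from scipy.optimize import minimize

def factor_of_safety(z_nodes):
    """Placeholder FoS as a smooth function of slip-surface node depths (illustrative only)."""
    z = np.asarray(z_nodes)
    return 1.5 + 0.1 * np.sum((z - 2.0) ** 2) - 0.05 * np.sum(z)

z0 = np.full(6, 1.0)           # initial guess for six node depths (m)
bounds = [(0.5, 5.0)] * 6      # geometric limits on each node depth
res = minimize(factor_of_safety, z0, method="SLSQP", bounds=bounds)
print(res.x.round(3), round(res.fun, 3))  # most critical surface and its factor of safety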

A New Robust Stability Criterion for Dynamical Neural Networks with Mixed Time Delays

In this paper, we investigate the existence, uniqueness and global asymptotic stability of the equilibrium point for a class of neural networks in which the neutral-type system has mixed time delays and parameter uncertainties. Under the assumption that the activation functions are globally Lipschitz continuous, we derive a new criterion for the robust stability of this class of neural networks with time delays by utilizing Lyapunov stability theorems and the homeomorphic mapping theorem. Numerical examples are given to illustrate the effectiveness and advantages of the proposed main results.
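
For orientation, a representative neutral-type neural network with mixed (discrete and distributed) delays is written below; this is an assumed illustrative model, since the abstract does not reproduce the system equations.

\dot{x}(t) - E\,\dot{x}(t-h) = -C x(t) + A f(x(t)) + B f(x(t-\tau(t))) + D \int_{t-\sigma}^{t} f(x(s))\,\mathrm{d}s + u,

where C, A, B, D and E are (possibly uncertain) weight matrices, f(·) is a globally Lipschitz activation function, h, τ(t) and σ are the neutral, discrete and distributed delays, and u is a constant external input.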