Artificial Neural Networks Application to Improve Shunt Active Power Filter

Active Power Filters (APFs) are today the most widely used systems for eliminating harmonics, compensating power factor, and correcting unbalance problems in industrial power plants. We propose to improve the performance of conventional APFs by using artificial neural networks (ANNs) for harmonic estimation. This new method combines the strategy for extracting the three-phase reference currents for active power filters with the DC link voltage control method. The learning capabilities of ANNs are exploited to adaptively track the power system parameters, both to compute the reference currents and to regulate the DC link capacitor voltage (VDC) so as to ensure a suitable transfer of power to the inverter. To investigate the performance of this identification method, the study has been carried out in simulation with the MATLAB Simulink Power System Toolbox. Compared with similar methods, the simulation results of the new shunt active power filter (SAPF) identification technique are quite satisfactory, ensuring good filtering characteristics and high system stability.
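
As one concrete way to realize the harmonic-estimation step, the sketch below uses an Adaline (a single-neuron ANN trained online with the LMS rule) to track the fundamental of a distorted load current; the residual then serves as the harmonic compensation reference. The architecture, the 50 Hz fundamental, the learning rate, and the synthetic current are all assumptions, since the abstract does not specify them.

```python
import numpy as np

f0, fs = 50.0, 10_000.0            # assumed fundamental and sampling frequency (Hz)
t = np.arange(0, 0.2, 1 / fs)      # 200 ms of samples

# Synthetic distorted load current: fundamental plus 5th and 7th harmonics.
i_load = (10 * np.sin(2 * np.pi * f0 * t)
          + 2.0 * np.sin(2 * np.pi * 5 * f0 * t)
          + 1.5 * np.sin(2 * np.pi * 7 * f0 * t))

w = np.zeros(2)                    # Adaline weights for the sin/cos basis
eta = 0.01                         # LMS learning rate (assumed)
i_ref = np.empty_like(i_load)

for k, tk in enumerate(t):
    x = np.array([np.sin(2 * np.pi * f0 * tk),
                  np.cos(2 * np.pi * f0 * tk)])   # fundamental-frequency basis
    e = i_load[k] - w @ x          # error of the fundamental estimate
    w += eta * e * x               # online LMS weight update
    i_ref[k] = e                   # residual harmonics = compensation reference

print("estimated fundamental amplitude:", np.hypot(*w))  # converges near 10
```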

An Efficient 3D Animation Data Reduction Using Frame Removal

Existing methods in which the animation data of all frames are stored and reproduced, as with vertex animation, cannot be used in mobile device environments because they consume large amounts of memory. 3D animation data reduction methods aimed at solving this problem have therefore been studied extensively, and we propose a new method as follows. First, among all animation frames, we find and remove the frames in which motion changes are small, and store only the animation data of the remaining frames (those involving large motion changes). When playing the animation, the removed frame areas are reconstructed by interpolating the remaining frames. Our key contribution is to calculate the accelerations of the joints of individual frames, and the standard deviations of those accelerations, from the joint locations of the relevant 3D model in order to find and delete frames in which motion changes are small. Our method can reduce data size by approximately 50% or more while providing quality close to that of the original animation. It is therefore expected to be useful in mobile device environments and other environments in which memory is limited.
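
A minimal sketch of this selection-and-playback scheme follows, assuming joint positions arrive as a (frames, joints, 3) array; the finite-difference acceleration, the thresholding rule, and the linear playback interpolation are our assumptions rather than the paper's exact formulas.

```python
import numpy as np

def reduce_animation(pos, thresh_scale=1.0):
    """pos: (F, J, 3) joint positions. Returns kept frame indices and their data."""
    # Second-order finite differences approximate per-joint acceleration.
    acc = np.zeros(pos.shape[:2])                      # (F, J) acceleration magnitudes
    acc[1:-1] = np.linalg.norm(pos[2:] - 2 * pos[1:-1] + pos[:-2], axis=2)
    score = acc.std(axis=1)                            # per-frame spread of accelerations
    thresh = thresh_scale * score.mean()               # assumed thresholding rule
    keep = np.flatnonzero(score > thresh)
    keep = np.union1d(keep, [0, len(pos) - 1])         # always keep the end frames
    return keep, pos[keep]

def play_frame(keep, kept_pos, f):
    """Reconstruct frame f by linear interpolation between the kept frames."""
    i = np.searchsorted(keep, f)
    if i < len(keep) and keep[i] == f:
        return kept_pos[i]                             # frame was stored verbatim
    lo, hi = keep[i - 1], keep[i]
    w = (f - lo) / (hi - lo)
    return (1 - w) * kept_pos[i - 1] + w * kept_pos[i]

rng = np.random.default_rng(0)
pos = np.cumsum(rng.standard_normal((120, 20, 3)) * 0.05, axis=0)  # toy clip
keep, kept = reduce_animation(pos)
print(f"kept {len(keep)}/{len(pos)} frames; frame 10, joint 0 ->",
      play_frame(keep, kept, 10)[0])
```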

Analysis of Textual Data Based On Multiple 2-Class Classification Models

This paper proposes a new method for analyzing textual data. The method deals with items of textual data, where each item is described from various viewpoints. It acquires 2-class classification models of the viewpoints by applying an inductive learning method to items with multiple viewpoints, and uses the models to infer whether the viewpoints should be assigned to new items. It then extracts expressions from the new items classified into each viewpoint and identifies characteristic expressions corresponding to the viewpoints by comparing expression frequencies among them. This paper also applies the method to questionnaire data given by guests at a hotel and verifies its effect through numerical experiments.
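
An illustrative sketch of the pipeline is given below; since the paper does not name its inductive learner, logistic regression over bag-of-words features stands in for it, and the items, viewpoints, and labels are toy values.

```python
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

items = ["the room was clean and quiet", "breakfast was cold",
         "staff were friendly and helpful", "the bed was uncomfortable"]
labels = {"facilities": [1, 0, 0, 1], "service": [0, 1, 1, 0]}  # assumed viewpoints

vec = CountVectorizer()
X = vec.fit_transform(items)

# One 2-class classification model per viewpoint.
models = {v: LogisticRegression().fit(X, y) for v, y in labels.items()}

new_items = ["the pillows were uncomfortable", "reception staff were helpful"]
Xn = vec.transform(new_items)

for view, model in models.items():
    # Infer which new items belong to this viewpoint.
    hits = [t for t, p in zip(new_items, model.predict(Xn)) if p == 1]
    # Characteristic expressions: the most frequent tokens among the hits.
    freq = Counter(w for t in hits for w in t.split())
    print(view, "->", hits, freq.most_common(3))
```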

Multiresolution Approach to Subpixel Registration by Linear Approximation of PSF

Linear approximation of the point spread function (PSF) is a new method for determining subpixel translations between images. The problem with the original algorithm is its inability to determine translations larger than one pixel. In this paper, a multiresolution technique is proposed to deal with this problem; its performance is evaluated by comparison with two other well-known registration methods. In the proposed technique, the images are downsampled in order to obtain a wider view. By progressively decreasing the downsampling rate back to the initial resolution and applying the linear approximation technique at each step, the algorithm is able to determine translations of several pixels with subpixel accuracy.
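
The coarse-to-fine loop can be sketched as follows; the per-level estimator here is a generic first-order (Taylor expansion) least-squares shift estimator that stands in for the paper's linear PSF approximation step, and the smooth test image is an assumption.

```python
import numpy as np

def estimate_shift(a, b):
    """Subpixel translation t = (ty, tx) such that b(x) ~ a(x - t)."""
    gy, gx = np.gradient(a)
    # First-order expansion: a - b ~ ty * gy + tx * gx, solved by least squares.
    A = np.column_stack([gy.ravel(), gx.ravel()])
    t, *_ = np.linalg.lstsq(A, (a - b).ravel(), rcond=None)
    return t

def register_multiresolution(a, b, levels=3):
    total = np.zeros(2)
    for lvl in range(levels, -1, -1):
        s = 2 ** lvl
        a_s, b_s = a[::s, ::s], b[::s, ::s]               # downsample for a wider view
        shift_px = np.round(total / s).astype(int)        # integer part at this scale
        b_c = np.roll(b_s, tuple(-shift_px), axis=(0, 1)) # undo it before refining
        resid = estimate_shift(a_s, b_c)                  # residual subpixel shift
        total = (shift_px + resid) * s                    # refined estimate
    return total

y, x = np.mgrid[0:128, 0:128]
img = np.sin(x / 9.0) + np.cos(y / 7.0) + np.sin((x + y) / 13.0)  # smooth toy image
shifted = np.roll(img, (5, -3), axis=(0, 1))              # integer shift for the demo
print(register_multiresolution(img, shifted))             # expect approx [5, -3]
```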

Restarted GMRES Method Augmented with the Combination of Harmonic Ritz Vectors and Error Approximations

Restarted GMRES methods augmented with approximate eigenvectors are widely used for solving large sparse linear systems. Recently, a new scheme that augments with error approximations was proposed. The main aim of this paper is to develop a restarted GMRES method augmented with a combination of harmonic Ritz vectors and error approximations. We demonstrate that the resulting combined method gains the advantages of both approaches: it (i) effectively deflates the eigenvalues smallest in magnitude, which may hamper the convergence of the method, and (ii) partially recovers the global optimality lost due to restarting. The effectiveness and efficiency of the new method are demonstrated through various numerical examples.
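
To make the augmentation explicit, the sketch below states the minimization solved at each restart, in notation we assume here (m Krylov steps, k harmonic Ritz vectors y_i carried over from the previous cycle, and one error approximation z); the paper's own symbols may differ.

```latex
% After a restart with residual r_0 = b - A x_0, the cycle minimizes over the
% Krylov space enlarged by the harmonic Ritz vectors and the error approximation:
\[
  x_{\text{new}} \;=\; \operatorname*{arg\,min}_{x \,\in\, x_0 + \mathcal{S}}
  \;\lVert b - A x \rVert_2,
  \qquad
  \mathcal{S} \;=\; \mathcal{K}_m(A, r_0)
  \;+\; \operatorname{span}\{y_1,\dots,y_k\}
  \;+\; \operatorname{span}\{z\}.
\]
```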

The Assessment of Reforms in Different Countries by Social-Economic Development Integral Index

The purpose of this report is to suggest a new methodology for assessing, by an integral index, the comparative efficiency of the reforms made in different countries. We have highlighted the reforms made in the post-crisis period in 21 former socialist countries. The integral index describes the social-economic development level and comprises six indexes reported by different international organizations: the Global Competitiveness Index, Doing Business, the Corruption Perceptions Index, the Index of Economic Freedom, the Human Development Index, and the Democracy Index. With the help of our methodology, we first aggregated the above-mentioned six indexes into one general index; moreover, the new method enables us to assess the comparative efficiency of the reforms made in different countries by analyzing it. The purpose is to reveal the opportunities and threats of social-economic reforms in different directions.
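
One plausible reading of the aggregation is sketched below: min-max normalize each component index across countries (after orienting all six so that higher is better) and average with equal weights. The paper's actual normalization and weighting are not stated, so both are assumptions, and the numbers are toy values.

```python
import numpy as np

# rows = countries, columns = the six component indexes (toy numbers),
# each already oriented so that a higher value means a better outcome.
scores = np.array([
    [4.1, 62.0, 45.0, 68.2, 0.78, 6.5],
    [3.8, 48.0, 30.0, 58.0, 0.71, 4.2],
    [4.5, 74.0, 60.0, 72.5, 0.83, 7.8],
])

lo, hi = scores.min(axis=0), scores.max(axis=0)
normalized = (scores - lo) / (hi - lo)        # each index mapped to [0, 1]
integral_index = normalized.mean(axis=1)      # equal-weight aggregation (assumed)
print(integral_index)  # compare countries, or track one country across years
```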

Highly Scalable, Reversible and Embedded Image Compression System

A new method for low-complexity image coding is presented that permits different settings and great scalability in the generation of the final bit stream. It is a continuous-tone still image compression system that combines lossy and lossless compression by making use of finite-arithmetic reversible transforms. Both the color space transformation and the wavelet transformation are reversible. The transformed coefficients are coded by a coding system based on a subdivision into smaller components (CFDS), similar to bit-significance codification. The subcomponents so obtained are reordered by a highly configurable, application-dependent alignment system, which makes it possible to rearrange the elements of the image and to obtain different levels of importance from which the bit stream is generated. The subcomponents of each level of importance are coded using a variable-length entropy coding system (VBLm) that permits the generation of an embedded bit stream. This bit stream by itself already codes a compressed still image; moreover, applying a packing system after the VBLm yields a final, highly scalable bit stream composed of a basic image level and one or several enhancement levels.

Fast Dummy Sequence Insertion Method for PAPR Reduction in WiMAX Systems

In the literature, many studies have proposed various methods to reduce the peak-to-average power ratio (PAPR). Among these, dummy sequence insertion (DSI) is one of the most attractive methods for WiMAX systems because it does not require side information to be transmitted along with user data. However, conventional DSI methods find the dummy sequence by iterating until the PAPR falls under a desired threshold. This causes a significant delay in finding the dummy sequence and also affects the overall performance of WiMAX systems. In this paper, a new DSI-based method is proposed that finds the dummy sequence without an iterative procedure. The fast DSI method can reduce PAPR without delays and without side information. The simulation results confirm that the proposed method achieves PAPR performance similar to the other methods without any delay. In addition, simulations of a WiMAX system with adaptive modulation are investigated to assess the use of the proposed method under various fading schemes. The results suggest that WiMAX designers should adopt a new signal-to-noise ratio (SNR) criterion for adaptation.
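
The sketch below shows the mechanics shared by DSI variants: computing the PAPR of an OFDM symbol and inserting a dummy sequence into reserved subcarriers in a single pass. The subcarrier layout and the chirp-like dummy are illustrative assumptions; the paper's fast rule for deriving the dummy (and any actual PAPR gain) is not reproduced here.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a time-domain symbol, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

N, N_DUMMY = 256, 16                  # assumed FFT size and dummy subcarrier count
rng = np.random.default_rng(0)
data = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N - N_DUMMY)  # QPSK data

# Placeholder dummy: a chirp-like sequence, computed once with no iteration.
dummy = np.exp(1j * np.pi * np.arange(N_DUMMY) ** 2 / N_DUMMY)

X = np.concatenate([data, dummy])     # dummy subcarriers appended, no side info
x = np.fft.ifft(X)                    # OFDM time-domain symbol

print("PAPR with dummy: %.2f dB" % papr_db(x))
print("PAPR data only : %.2f dB" % papr_db(np.fft.ifft(data)))
```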

Analysis and Circuit Modeling of APDs

In this paper, a new method for increasing the speed of the SAGCM-APD is proposed. Utilizing carrier rate equations in the different regions of the structure, a circuit model of the device is obtained. In addition to the frequency response, the effect of the newly added charge layer on transient parameters such as slew rate and rise and fall times has been considered. Finally, by trading off physical parameters such as the widths and dopings of the different layers, a noticeable decrease in breakdown voltage has been achieved. The simulation results illustrate the improvements of the proposed structure in comparison with conventional SAGCM-APD structures.

Evaluating Complexity – Ethical Challenges in Computational Design Processes

Complexity, as a theoretical background, has made it easier to understand and explain the features and dynamic behavior of various complex systems. As this common theoretical background has confirmed, borrowing terminology from the natural sciences has helped designers to control and understand urban complexity. Phenomena like self-organization, evolution, and adaptation are appropriate for describing the formerly inaccessible characteristics of the complex environment in unpredictable bottom-up systems. Increased computing capacity has been a key element in capturing the chaotic nature of these systems. A paradigm shift in urban planning and architectural design has forced us to give up the illusion of total control over the urban environment, and consequently to seek novel methods for steering its development. New methods using dynamic modeling have offered a real option for a more thorough understanding of complexity and urban processes. At best, new approaches may renew design processes so that we get a better grip on the complex world through more flexible processes, support urban environmental diversity, and respond to our needs beyond basic welfare by liberating ourselves from standardized minimalism. A complex system and its features are, as such, beyond human ethics. Self-organization or evolution is neither good nor bad; their mechanisms are by nature devoid of reason. They are common in urban dynamics and in natural processes alike. They are features of a complex system, and they cannot be prevented, yet their dynamics can be studied and supported. The paradigm of complexity and the new design approaches have been criticized for a lack of humanity and morality, but the ethical implications of scientific or computational design processes have not been much discussed. It is important to distinguish the (unexciting) ethics of the theory and tools from the ethics of computer-aided processes based on ethical decisions. Urban planning and architecture cannot be based on the survival of the fittest; however, the natural dynamics of the system cannot be impeded on the grounds of being “non-human”. In this paper, the ethical challenges of using dynamic models are contemplated in light of a few examples from new architecture, dynamic urban models, and the literature. It is suggested that the ethical challenges in computational design processes could be reframed under the concepts of responsibility and transparency.

Automatically Driven Vector for Guidewire Segmentation in 2D and Biplane Fluoroscopy

The segmentation of endovascular tools in fluoroscopy images can be performed accurately, automatically or with minimal user intervention, using known modern techniques. This has been proven in the literature, but no clinical implementation exists so far because the computational time requirements of such technology have not yet been met. A classical segmentation scheme is composed of edge-enhancement filtering, line detection, and segmentation. A new method is presented here that consists of a vector that propagates in the image to track an edge as it advances. The filtering is performed progressively along the projected path of the vector, whose orientation allows for oriented edge detection, so that only a minimal image area is filtered overall. Such an algorithm is rapidly computed and can be implemented in real-time applications. It was tested on medical fluoroscopy images from an endovascular cerebral intervention. Experiments showed that the 2D tracking was limited to guidewires without intersection crosspoints, while the 3D implementation was able to cope with such planar difficulties.
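
One way to realize such a propagating vector is sketched below: the tip advances one pixel along its current direction and is then re-centred on the strongest gradient response along the normal. The window size, the direction smoothing, and the full-image gradient (computed up front here for brevity, whereas the paper filters only along the projected path) are all our assumptions.

```python
import numpy as np

def track(image, tip, direction, steps=60, half_width=3):
    """Follow an edge: advance one pixel along the current direction, then
    re-centre the tip on the strongest gradient response along the normal."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gy, gx)                        # edge response (whole image here)
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    path = [np.asarray(tip, float)]
    for _ in range(steps):
        n = np.array([-d[1], d[0]])               # unit normal to the direction
        pred = path[-1] + d                       # predicted next tip position
        offs = np.arange(-half_width, half_width + 1)
        cand = pred[None, :] + offs[:, None] * n[None, :]
        rc = np.round(cand).astype(int)
        if (rc < 0).any() or (rc >= image.shape).any():
            break                                 # tip left the image
        resp = mag[rc[:, 0], rc[:, 1]]
        new = cand[resp.argmax()]                 # re-centre on the edge
        step = new - path[-1]
        d = 0.8 * d + 0.2 * step / np.linalg.norm(step)  # smooth the direction
        d /= np.linalg.norm(d)
        path.append(new)
    return np.array(path)

img = np.zeros((64, 64))
img[:, 32:] = 1.0                                 # toy vertical step edge
path = track(img, tip=(5.0, 31.5), direction=(1.0, 0.0))
print(f"tracked {len(path)} points, ending near {np.round(path[-1], 1)}")
```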

Ranking Genes from DNA Microarray Data of Cervical Cancer by a Local Tree Comparison

The major objective of this paper is to introduce a new method for selecting genes from DNA microarray data. As a selection criterion, we suggest measuring the local changes in the correlation graph of each gene and selecting those genes whose local changes are largest. More precisely, we calculate correlation networks from DNA microarray data of cervical cancer, where each network represents a tissue of a certain tumor stage and each node in the network represents a gene. From these networks we extract one tree per gene by a local decomposition of the correlation network. The interpretation of a tree is that its n-th level contains the n-nearest-neighbor genes of the root, measured by the Dijkstra distance; hence it gives the local embedding of a gene within the correlation network. For the obtained trees, we measure the pairwise similarity between trees rooted at the same gene from normal to cancerous tissues. This evaluates the modification of the tree topology due to tumor progression. Finally, we rank the obtained similarity values from all tissue comparisons and select the top-ranked genes. For these genes, the local neighborhood in the correlation networks changes most between normal and cancerous tissues. As a result, we find that the top-ranked genes are candidates suspected of being involved in tumor growth. This indicates that our method captures essential information from the underlying DNA microarray data of cervical cancer.
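
A sketch of the pipeline under stated assumptions: the correlation network is a thresholded |r| graph, the per-gene tree is summarized by its BFS levels (Dijkstra distance with unit edge weights), and tree similarity is the mean Jaccard overlap of corresponding levels; the paper's exact similarity measure may differ, and the expression data are toy values.

```python
import numpy as np
from collections import deque

def network(expr, thresh=0.3):
    """expr: (samples, genes). Boolean adjacency of the correlation graph."""
    r = np.corrcoef(expr.T)
    adj = np.abs(r) >= thresh
    np.fill_diagonal(adj, False)
    return adj

def levels(adj, root, depth=3):
    """BFS levels around root: level n holds the n-nearest-neighbor genes."""
    dist = {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        if dist[u] == depth:
            continue
        for v in np.flatnonzero(adj[u]):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return [{g for g, d in dist.items() if d == n} for n in range(1, depth + 1)]

def tree_similarity(la, lb):
    js = [len(sa & sb) / len(sa | sb) if (sa | sb) else 1.0 for sa, sb in zip(la, lb)]
    return float(np.mean(js))

rng = np.random.default_rng(1)
normal, tumor = rng.standard_normal((40, 30)), rng.standard_normal((40, 30))
a, b = network(normal), network(tumor)
scores = [1 - tree_similarity(levels(a, g), levels(b, g)) for g in range(30)]
print("top ranked genes:", np.argsort(scores)[::-1][:5])  # largest local change
```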

Semi-Automatic Artifact Rejection Procedure Based on Kurtosis, Renyi's Entropy and Independent Component Scalp Maps

Artifact rejection plays a key role in many signal processing applications. Artifacts are disturbances that can occur during signal acquisition and that can alter the analysis of the signals themselves. Our aim is to remove artifacts automatically, in particular from electroencephalographic (EEG) recordings. A technique for automatic artifact rejection, based on Independent Component Analysis (ICA) for artifact extraction and on higher-order statistics such as kurtosis and Shannon's entropy, was proposed in the literature some years ago. In this paper, we try to enhance this technique by proposing a new method based on Renyi's entropy. The performance of our method was tested and compared to that of the method in the literature, and the former proved to outperform the latter.
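
The statistical marking step can be sketched as follows, assuming ICA has already produced the source components (rows of the array below); the kurtosis and order-2 Renyi entropy estimators are standard, while the rejection thresholds and the injected artifact are illustrative.

```python
import numpy as np

def kurtosis(x):
    """Excess kurtosis of a 1-D signal."""
    x = (x - x.mean()) / x.std()
    return np.mean(x ** 4) - 3.0

def renyi_entropy(x, alpha=2.0, bins=64):
    """Renyi entropy H_alpha = log(sum p^alpha) / (1 - alpha), histogram estimate."""
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / len(x)
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

rng = np.random.default_rng(0)
sources = rng.standard_normal((5, 2000))      # stand-in for ICA components
sources[2, 900:950] += 8.0                     # injected blink-like artifact

for i, s in enumerate(sources):
    k, h = kurtosis(s), renyi_entropy(s)
    reject = abs(k) > 2.0 or h < 2.5           # assumed rejection thresholds
    print(f"IC{i}: kurtosis={k:+.2f}  H2={h:.2f}  reject={reject}")
```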

Educational Quiz Board Games for Adaptive E-Learning

Internet computer games are becoming more and more attractive within the context of technology-enhanced learning. Educational games such as quizzes and quests have had significant success in appealing to and motivating learners to study in a different way, and they provoke steadily increasing interest in new methods of application. Board games are a specific group of games in which figures are manipulated in a competitive play mode, with race conditions, on a surface according to predefined rules. This article presents a new, formalized model of traditional quizzes, puzzles, and quests rendered as multimedia board games, which facilitates the construction process of such games. The authors provide different examples of quizzes and their models in order to demonstrate that the model is quite general and supports not only quizzes, mazes, and quests but any set of teaching activities. The execution process of such models is explained, as well as how they can be useful for the creation and delivery of adaptive e-learning courseware.

An ACO Based Algorithm for Distribution Networks Including Dispersed Generations

With the movement of power systems toward restructuring, and with factors such as environmental pollution, the difficulties of transmission expansion, and advances in the construction technology of small generation units, it is expected that small units such as wind turbines, fuel cells, photovoltaics, etc., which most of the time connect to distribution networks, will play an essential role in the electric power industry. With the increasing use of small generation units, the management of distribution networks should be reviewed. The target of this paper is to present a new method for the optimal management of active and reactive power in distribution networks, with regard to the costs of the various types of dispersed generation and capacitors and the cost of electric energy obtained from the network. In other words, the method endeavors to select optimal sources of active and reactive power generation and control equipment, such as dispersed generations, capacitors, under-load tap-changer transformers, and substations, in such a way that, first, the associated costs are minimized and, second, technical and physical constraints are respected. Because the optimal management of distribution networks is an optimization problem with continuous and discrete variables, a new evolutionary method based on the ant colony algorithm has been applied. The method has been tested on two cases containing 23 and 34 buses; the simulation results are shown in later sections.
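
As an illustration of the discrete side of the problem, the sketch below selects capacitor placements with a compact pheromone-update scheme in the spirit of ant colony optimization; the cost function is a stand-in, since a real study would evaluate a load flow with DG, tap-changer, and substation constraints.

```python
import numpy as np

rng = np.random.default_rng(0)
n, ants, iters, rho = 10, 20, 60, 0.1   # candidate buses, colony size, evaporation
benefit = rng.uniform(0.5, 2.0, n)      # assumed loss reduction per capacitor bank
cost = rng.uniform(0.4, 1.0, n)         # assumed installation cost per bank

def objective(sol):
    # Stand-in cost: installation minus loss reduction, plus a budget penalty.
    return cost @ sol - benefit @ sol + 0.2 * max(0.0, sol.sum() - 4)

tau = np.full(n, 0.5)                    # pheromone = probability of choosing "1"
best, best_val = None, np.inf
for _ in range(iters):
    sols = (rng.random((ants, n)) < tau).astype(float)   # ants sample placements
    vals = np.array([objective(s) for s in sols])
    i = vals.argmin()
    if vals[i] < best_val:
        best, best_val = sols[i], vals[i]
    tau = (1 - rho) * tau + rho * best   # evaporate, then reinforce the best ant
    tau = np.clip(tau, 0.05, 0.95)       # keep some exploration alive

print("best placement:", best.astype(int), "cost:", round(best_val, 3))
```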

New Wavelet-Based Superresolution Algorithm for Speckle Reduction in SAR Images

This paper describes a novel projection algorithm, the Projection Onto Span Algorithm (POSA), for wavelet-based superresolution and for removing speckle of unknown variance (in the wavelet domain) from Synthetic Aperture Radar (SAR) images. Although POSA works well as a superresolution algorithm for image enhancement, image metrology, and biometric identification, here it is used as a despeckling tool; this is the first time a superresolution algorithm has been used for despeckling SAR images. Specifically, the speckled SAR image is decomposed into wavelet subbands, POSA is applied to the high-frequency subbands, and the SAR image is reconstructed from the modified detail coefficients. Experimental results demonstrate that the new method compares favorably to several other despeckling methods on test SAR images.
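
The subband pipeline can be sketched with PyWavelets as follows; since POSA itself is not reproduced here, a standard soft threshold stands in for the projection step applied to the detail subbands, and the wavelet choice and noise estimate are assumptions.

```python
import numpy as np
import pywt

def despeckle(img, wavelet="db4", level=2):
    """Decompose, modify the detail subbands, reconstruct."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    out = [coeffs[0]]                                 # keep the approximation band
    for (cH, cV, cD) in coeffs[1:]:
        thr = np.median(np.abs(cD)) / 0.6745          # robust noise-level estimate
        # Placeholder for POSA: soft-threshold each high-frequency subband.
        out.append(tuple(pywt.threshold(c, thr, mode="soft") for c in (cH, cV, cD)))
    return pywt.waverec2(out, wavelet)

speckled = np.abs(np.random.default_rng(0).standard_normal((128, 128))) + 1.0
clean = despeckle(speckled)
print(clean.shape)
```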

Fault Detection and Isolation using RBF Networks for Polymer Electrolyte Membrane Fuel Cell

This paper presents a new method of fault detection and isolation (FDI) for polymer electrolyte membrane (PEM) fuel cell (FC) dynamic systems under an open-loop scheme. The method uses a radial basis function (RBF) neural network to perform fault identification, classification, and isolation. The novelty is that an RBF model in independent mode is used to predict the future outputs of the FC stack. One actuator fault, one component fault, and three sensor faults, with fault sizes between -7% and +10%, have been introduced into the PEMFC system in real-time operation. To validate the results, a benchmark model developed by Michigan University is used in the simulation to investigate the effect of these five faults. The developed independent RBF model is tested in the MATLAB R2009a/Simulink environment. The simulation results confirm the effectiveness of the proposed method for FDI under an open-loop condition: the RBF networks are able to detect and isolate all five faults accurately.
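
A minimal sketch of the detection idea follows: a Gaussian RBF model is fitted to healthy input/output data and run in independent (simulation) mode, and a residual threshold flags faults. The toy scalar stack output, kernel width, threshold, and injected +10% sensor fault are assumptions for illustration.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF design matrix for inputs X against the given centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))                  # healthy operating points
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]               # toy stack output

centers = X[rng.choice(len(X), 20, replace=False)]
Phi = rbf_design(X, centers, width=0.5)
w = np.linalg.lstsq(Phi, y, rcond=None)[0]        # least-squares output weights

# On-line FDI: predict independently, compare with measurement, threshold.
x_new = np.array([[0.8, 0.6]])
y_pred = rbf_design(x_new, centers, 0.5) @ w
y_meas = (np.sin(0.8) + 0.5 * 0.6) * 1.10         # +10% sensor fault injected
residual = abs(y_meas - y_pred[0])
print("fault detected:", residual > 0.05)         # assumed detection threshold
```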

Elliptical Features Extraction Using Eigen Values of Covariance Matrices, Hough Transform and Raster Scan Algorithms

In this paper, we introduce a new method for elliptical object identification. The proposed method adopts a hybrid scheme consisting of the eigenvalues of covariance matrices, the circular Hough transform, and Bresenham's raster scan algorithm. In this approach, we use the fact that the large and small eigenvalues of the covariance matrix are associated with the major and minor axial lengths of the ellipse. The centre location of the ellipse is identified using the circular Hough transform (CHT), which is performed with a sparse matrix technique: since sparse matrices squeeze out zero elements and store only a small number of nonzero elements, they offer savings in storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate positions of the circumference pixels are identified using a raster scan algorithm that exploits the geometrical symmetry property. The method does not require the evaluation of tangents or the curvature of edge contours, which are generally very sensitive to noisy working conditions. The proposed method has the advantages of small storage, high speed, and accuracy in identifying the features. It has been tested on both synthetic and real images, and several experiments have been conducted on images with considerable background noise to reveal its efficacy and robustness. Experimental results on the accuracy of the proposed method, with comparisons against the Hough transform, its variants, and other tangent-based methods, are reported.
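
The eigenvalue step can be illustrated as follows: for edge pixels sampled along an ellipse, the covariance eigenvalues scale with the squared semi-axes (for points uniform in parameter angle, the variance along the major axis is a^2/2). The synthetic contour, the noise level, and the scale factor are our assumptions.

```python
import numpy as np

a_true, b_true, theta = 40.0, 15.0, np.deg2rad(30)
t = np.linspace(0, 2 * np.pi, 400)
pts = np.stack([a_true * np.cos(t), b_true * np.sin(t)], axis=1)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
pts = pts @ R.T + np.random.default_rng(0).normal(0, 0.3, (len(t), 2))  # noisy edges

cov = np.cov(pts.T)                       # 2x2 covariance of edge coordinates
evals, evecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
b_est = np.sqrt(2 * evals[0])             # small eigenvalue -> minor semi-axis
a_est = np.sqrt(2 * evals[1])             # large eigenvalue -> major semi-axis
angle = np.arctan2(evecs[1, 1], evecs[0, 1])  # orientation of the major axis
print(f"a={a_est:.1f} (true 40), b={b_est:.1f} (true 15), "
      f"angle={np.degrees(angle):.1f} deg")
```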

A Novel Approach for Coin Identification using Eigenvalues of Covariance Matrix, Hough Transform and Raster Scan Algorithms

In this paper, we present a new method for coin identification. The proposed method adopts a hybrid scheme using the eigenvalues of the covariance matrix, the circular Hough transform (CHT), and Bresenham's circle algorithm. The statistical and geometrical properties of the small and large eigenvalues of the covariance matrix of a set of edge pixels over a connected region of support are exploited for circular object detection. A sparse matrix technique is used to perform the CHT; since sparse matrices squeeze out zero elements and contain only a small number of nonzero elements, they offer savings in storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks, and the accurate positions of the circumference pixels are identified using a raster scan algorithm that exploits the geometrical symmetry property. After finding the circular objects, the proposed method uses the texture on the surface of the coins, called textons, which are unique properties of coins and refer to the fundamental microstructures of generic natural images. The method has been tested on several real-world images, including coin and non-coin images, and its performance is also evaluated in terms of its noise-withstanding capability.
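
The sparse-accumulator idea can be sketched as below for a known radius: every edge pixel votes for candidate centres, and duplicate (row, column) votes are summed when the COO matrix is compressed, so only nonzero accumulator cells are ever stored. The toy edge map and single-radius setup are simplifications.

```python
import numpy as np
from scipy.sparse import coo_matrix

H, W, radius = 100, 100, 20
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 300)
edges = np.stack([50 + radius * np.cos(t), 50 + radius * np.sin(t)], 1).astype(int)

# Each edge pixel votes for centres one radius away, over 64 sampled angles.
phis = np.linspace(0, 2 * np.pi, 64, endpoint=False)
cy = (edges[:, 0, None] - radius * np.cos(phis)).round().astype(int).ravel()
cx = (edges[:, 1, None] - radius * np.sin(phis)).round().astype(int).ravel()
ok = (cy >= 0) & (cy < H) & (cx >= 0) & (cx < W)

# COO construction sums the repeated votes; only nonzero cells are stored.
acc = coo_matrix((np.ones(ok.sum()), (cy[ok], cx[ok])), shape=(H, W)).tocsr()
peak = np.unravel_index(acc.toarray().argmax(), (H, W))
print("detected centre:", peak)   # expected near (50, 50)
```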

Strategy Analysis and Creation by Simulation in the General Game

In this paper, the General Game problem is described. In this problem, the dilemma between competition and cooperation arises as the two basic types of strategies. The strategy possibilities have been analyzed in search of a winning strategy in uncertain situations (with no information about the number of players or their strategy types). A winning strategy does not exist, but a good solution can be found by simulation, by varying the ratio of the two types of strategies. This new method has been used in a real contest with human players, where the strategies created by simulation reached very good ranks. The construction can be applied to other real social games as well.
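
A toy version of such a simulation study might look as follows; the prisoner's-dilemma-style payoff matrix and the random pairing of players are our assumptions, used only to show how sweeping the cooperator ratio identifies a good strategy mix.

```python
import numpy as np

# Assumed payoffs for (my move, opponent move): C = cooperate, D = compete.
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
rng = np.random.default_rng(0)

def mean_score(p_coop, players=50, rounds=2000):
    """Average pairwise payoff when a fraction p_coop of players cooperate."""
    types = np.where(rng.random(players) < p_coop, "C", "D")
    total = 0.0
    for _ in range(rounds):
        i, j = rng.choice(players, 2, replace=False)  # random encounter
        total += payoff[(types[i], types[j])]
    return total / rounds

for p in np.linspace(0, 1, 6):
    print(f"cooperator ratio {p:.1f}: mean pairwise payoff {mean_score(p):.2f}")
```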