STLF Based on Optimized Neural Network Using PSO

The quality of short-term load forecasting (STLF) can improve the efficiency of planning and operation of electric utilities. Artificial Neural Networks (ANNs) are employed for nonlinear short-term load forecasting owing to their powerful nonlinear mapping capabilities. At present, there is no systematic methodology for the optimal design and training of an artificial neural network, and one often has to resort to trial and error. This paper describes the process of developing three-layer feed-forward large neural networks for short-term load forecasting and then presents a heuristic search algorithm for an important task in this process, namely optimal network structure design. Particle Swarm Optimization (PSO) is used to develop the optimal large neural network structure and connection weights for the one-day-ahead electric load forecasting problem. PSO is a stochastic optimization method based on swarm intelligence with a strong capability for global optimization. Employing PSO for the design and training of ANNs allows the ANN architecture and parameters to be optimized easily. The proposed method is applied to STLF of a local utility. Data are clustered according to differences in their characteristics, and special days are extracted from the normal training sets and handled separately. In this way, a solution is provided for all load types, including working days, weekends and special days. The experimental results show that the proposed PSO-optimized method speeds up network learning and improves forecasting precision compared with the conventional Back Propagation (BP) method. Moreover, it is not only simple to compute but also practical and effective: it provides a greater degree of accuracy in many cases and consistently gives lower percentage errors for the STLF problem than the BP method. Thus, it can be applied to automatically design an optimal load forecaster based on historical data.
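
A minimal sketch of the core idea, PSO searching the weights of a small feed-forward network, is given below. The synthetic load data, network size and all PSO hyper-parameters are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Minimal sketch: global-best PSO searching the weights of a small feed-forward net.
# Synthetic data and all hyper-parameters below are illustrative only.
rng = np.random.default_rng(0)
X = rng.random((200, 24))            # e.g. 24 hourly loads of the previous day
y = X.mean(axis=1, keepdims=True)    # toy target standing in for the next-day load

n_in, n_hid, n_out = 24, 8, 1
dim = n_in * n_hid + n_hid + n_hid * n_out + n_out   # total number of weights

def unpack(w):
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    return W1, b1, W2, w[i:]

def mse(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    return float(np.mean((h @ W2 + b2 - y) ** 2))

# Standard PSO update with inertia, cognitive and social terms.
n_particles, iters = 30, 200
pos = rng.normal(0, 0.5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best training MSE:", pbest_f.min())
```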

Pattern Recognition of Partial Discharge by Using Simplified Fuzzy ARTMAP

This paper presents the effectiveness of an artificial intelligence technique applied to the pattern recognition and classification of Partial Discharge (PD). The characteristics of the PD signal used for pattern recognition and classification are computed from the relation between the voltage phase angle, the discharge magnitude and the recurrence of partial discharges, using statistical and fractal methods. The Simplified Fuzzy ARTMAP (SFAM) serves as the artificial intelligence technique for pattern recognition and classification. The PD quantities, 13 parameters obtained from the statistical and fractal methods, are input to the Simplified Fuzzy ARTMAP to train the system for pattern recognition and classification. The results confirm the effectiveness of the proposed technique.
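
The sketch below illustrates the kind of phase-resolved statistical PD features the abstract refers to (skewness and kurtosis of a pulse-height distribution). The synthetic discharge data, the window count and the particular feature subset are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import skew, kurtosis

# Illustrative phase-resolved statistical features from synthetic PD data.
rng = np.random.default_rng(1)
phase = rng.uniform(0, 360, 5000)          # discharge phase angles (degrees)
magnitude = rng.gamma(2.0, 1.0, 5000)      # discharge magnitudes (arbitrary units)

# Mean pulse-height distribution over 36 phase windows of 10 degrees each
bins = np.linspace(0, 360, 37)
idx = np.digitize(phase, bins) - 1
hq_mean = np.array([magnitude[idx == k].mean() if np.any(idx == k) else 0.0
                    for k in range(36)])

features = {
    "skewness_pos_half": skew(hq_mean[:18]),     # 0-180 degrees
    "skewness_neg_half": skew(hq_mean[18:]),     # 180-360 degrees
    "kurtosis_pos_half": kurtosis(hq_mean[:18]),
    "kurtosis_neg_half": kurtosis(hq_mean[18:]),
}
print(features)
```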

Human Face Detection and Segmentation using Eigenvalues of Covariance Matrix, Hough Transform and Raster Scan Algorithms

In this paper we propose a novel method for human face segmentation using the elliptical structure of the human head. It makes use of the information present in the edge map of the image. In this approach we use the fact that the eigenvalues of the covariance matrix represent the elliptical structure: the large and small eigenvalues of the covariance matrix are associated with the major and minor axial lengths of an ellipse. The other elliptical parameters are used to identify the centre and orientation of the face. Since an elliptical Hough Transform requires a 5D Hough space, the Circular Hough Transform (CHT) is used to evaluate the elliptical parameters. A sparse matrix technique is used to perform the CHT; because it stores only the small number of non-zero elements and discards the zeros, it has the advantage of less storage space and computational time. A neighborhood suppression scheme is used to identify the valid Hough peaks. The accurate positions of the circumference pixels for occluded and distorted ellipses are identified using Bresenham's raster scan algorithm, which exploits geometrical symmetry properties. This method does not require the evaluation of tangents to curvature contours, which are very sensitive to noise. The method has been evaluated on several images with different face orientations.
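
A short sketch of the covariance-eigenvalue idea follows: for points sampled on an ellipse, the eigenvectors and eigenvalues of their covariance matrix recover the orientation and axial lengths. The ellipse parameters and sampling below are assumptions used purely for illustration.

```python
import numpy as np

# Recover ellipse centre, axes and orientation from the covariance of edge points.
rng = np.random.default_rng(2)
a, b, theta = 40.0, 25.0, np.deg2rad(30)        # assumed semi-axes and tilt
t = rng.uniform(0, 2 * np.pi, 2000)
pts = np.stack([a * np.cos(t), b * np.sin(t)], axis=1)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
pts = pts @ R.T + np.array([100.0, 80.0])       # rotate and shift the centre

centre = pts.mean(axis=0)
cov = np.cov((pts - centre).T)
eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order

# For points uniform in the ellipse parameter, variance along an axis = (semi-axis)^2 / 2
minor = np.sqrt(2 * eigvals[0])
major = np.sqrt(2 * eigvals[1])
orientation = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1]))

print(f"centre ~ {centre}, major ~ {major:.1f}, minor ~ {minor:.1f}, angle ~ {orientation:.1f} deg")
```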

Pseudo-polynomial Motion Commands for Vibration Suppression of Belt-driven Rotary Platforms

The motion planning technique described in this paper has been developed to eliminate or reduce the residual vibrations of belt-driven rotary platforms, while leaving the motion time and the total angular displacement of the platform unchanged. The proposed approach is based on a suitable choice of the motion command given to the servomotor that drives the mechanical device; this command is defined by a set of numerical coefficients which determine the shape of the displacement, velocity and acceleration profiles. Using a numerical optimization technique, these coefficients can be changed without altering the continuity conditions imposed on the displacement and its time derivatives at the initial and final time instants. The proposed technique can be easily and quickly implemented on an actual device, since it requires only a simple modification of the motion command profile stored in the memory of the electronic motion controller.
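
The sketch below illustrates the general principle of a tunable polynomial motion command whose free coefficient does not disturb the boundary continuity conditions. The quintic base profile and the perturbation term are assumptions, not the paper's exact pseudo-polynomial command.

```python
import numpy as np

# Rest-to-rest motion command with a free shaping coefficient c.
def motion_command(t, T, total_angle, c=0.0):
    """Angular displacement at time t for a move of duration T.
    Displacement, velocity and acceleration stay zero at t = 0 and t = T
    for any value of c, so only the profile shape changes."""
    tau = np.clip(t / T, 0.0, 1.0)
    base = total_angle * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
    shape = c * tau**3 * (1 - tau)**3   # vanishes with its first two derivatives at both ends
    return base + shape

T, theta = 0.5, np.pi / 2               # assumed move time and total rotation
t = np.linspace(0, T, 1001)
for c in (0.0, 0.3, -0.3):
    s = motion_command(t, T, theta, c)
    print(f"c={c:+.1f}: s(0)={s[0]:.3f}, s(T)={s[-1]:.3f}, target={theta:.3f}")
```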

Comparison of BER Performances for Conventional and Non-Conventional Mapping Schemes Used in OFDM

Orthogonal Frequency Division Multiplexing (OFDM) is one of the key techniques for high-speed data communication and a main candidate for 4G and 5G systems. In OFDM, several mapping schemes provide a way of parallel transmission. In this paper, the mapping schemes used by some standards are compared, and the performance of a non-conventional modulation technique is discussed. The Bit Error Rate (BER) performances of conventional and non-conventional modulation schemes are compared using MATLAB. The schemes used in an OFDM system can then be selected on the basis of power or spectrum efficiency requirements and the BER analysis.
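
As a minimal illustration of the kind of BER measurement described above, the sketch below simulates QPSK-mapped OFDM over an AWGN channel. It is a NumPy stand-in for the MATLAB study (no cyclic prefix, fading or non-conventional scheme), and the subcarrier count, symbol count and Eb/N0 value are assumptions.

```python
import numpy as np

# QPSK/OFDM BER over AWGN, illustrative parameters only.
rng = np.random.default_rng(3)
n_sub, n_sym, ebno_db = 64, 2000, 6.0

bits = rng.integers(0, 2, (n_sym, n_sub, 2))
qpsk = ((2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)) / np.sqrt(2)   # unit-energy QPSK

tx = np.fft.ifft(qpsk, axis=1) * np.sqrt(n_sub)       # OFDM modulation (unitary scaling)

ebno = 10 ** (ebno_db / 10)
noise_var = 1 / (2 * ebno)                            # Es = 1, 2 bits/symbol -> Eb = 1/2
noise = np.sqrt(noise_var / 2) * (rng.standard_normal(tx.shape) + 1j * rng.standard_normal(tx.shape))
rx = np.fft.fft(tx + noise, axis=1) / np.sqrt(n_sub)  # OFDM demodulation

bits_hat = np.stack([(rx.real > 0).astype(int), (rx.imag > 0).astype(int)], axis=-1)
print(f"simulated BER at Eb/N0 = {ebno_db} dB: {np.mean(bits_hat != bits):.4f}")
```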

Big Bang – Big Crunch Learning Method for Fuzzy Cognitive Maps

Modeling complex dynamic systems for which mathematical models are very difficult to establish requires new and modern methodologies that exploit existing expert knowledge, human experience and historical data. Fuzzy cognitive maps are very suitable, simple and powerful tools for the simulation and analysis of such dynamic systems. However, human experts are subjective and can handle only relatively simple fuzzy cognitive maps; therefore, there is a need for new approaches to the automated generation of fuzzy cognitive maps from historical data. In this study, a new learning algorithm, called Big Bang-Big Crunch, is proposed for the first time in the literature for the automated generation of fuzzy cognitive maps from data. Two real-world examples, namely a process control system and a radiation therapy process, and one synthetic model are used to demonstrate the effectiveness and usefulness of the proposed methodology.
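
A small sketch of the Big Bang-Big Crunch idea applied to learning an FCM weight matrix is shown below: candidates are scattered around a centre (Big Bang), contracted to a fitness-weighted centre of mass (Big Crunch), and the scatter shrinks each iteration. The toy FCM update rule, the synthetic "historical" data and all hyper-parameters are assumptions, not the study's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(4)
n_concepts, n_steps = 4, 20
target = np.tanh(rng.random((n_steps, n_concepts)))      # stand-in historical concept values

def simulate(W, a0):
    a, out = a0.copy(), []
    for _ in range(n_steps):
        a = np.tanh(a @ W)                               # simple FCM update rule
        out.append(a)
    return np.array(out)

def fitness(W_flat):
    W = W_flat.reshape(n_concepts, n_concepts)
    return np.mean((simulate(W, target[0]) - target) ** 2)

dim, n_cand, iters = n_concepts * n_concepts, 40, 100
centre = rng.uniform(-1, 1, dim)
for k in range(1, iters + 1):
    # Big Bang: candidates scattered around the centre, radius shrinking as 1/k
    cands = np.clip(centre + rng.standard_normal((n_cand, dim)) / k, -1, 1)
    costs = np.array([fitness(c) for c in cands])
    # Big Crunch: centre of mass weighted by inverse cost
    w = 1.0 / (costs + 1e-12)
    centre = (w[:, None] * cands).sum(axis=0) / w.sum()

print("final mean-squared error:", fitness(centre))
```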

Risk Classification of SMEs by Early Warning Model Based on Data Mining

One of the biggest problems of SMEs is their tendency toward financial distress because of an insufficient financial background. In this study, an Early Warning System (EWS) model based on data mining for financial risk detection is presented. The CHAID algorithm has been used for the development of the EWS. Thanks to its automated nature, the developed EWS can serve as a tailor-made financial advisor in the decision-making process of firms that lack an adequate financial background. In addition, an application of the model was implemented covering 7,853 SMEs based on Turkish Central Bank (TCB) 2007 data. Using the EWS model, 31 risk profiles, 15 risk indicators, 2 early warning signals and 4 financial road maps have been determined for financial risk mitigation.
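
The sketch below only gestures at the classification step: scikit-learn has no CHAID implementation, so a CART decision tree stands in for it, and the synthetic financial ratios and risk labels are invented for illustration; they do not reflect the TCB data set.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(5)
n = 1000
X = np.column_stack([
    rng.normal(1.5, 0.6, n),    # hypothetical current ratio
    rng.normal(0.5, 0.2, n),    # hypothetical debt ratio
    rng.normal(0.08, 0.05, n),  # hypothetical return on assets
])
# Toy rule generating "financially distressed" labels
y = ((X[:, 0] < 1.0) & (X[:, 1] > 0.6)) | (X[:, 2] < 0.0)

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X, y)
print(export_text(tree, feature_names=["current_ratio", "debt_ratio", "roa"]))
```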

Microwave LNA Design Based On Adaptive Network Fuzzy Inference and Evolutionary Optimization

This paper presents a novel approach for the design of microwave circuits using an Adaptive Network Fuzzy Inference Optimizer (ANFIO). The method takes advantage of the direct synthesis of subsections of the amplifier using very fast and accurate ANFIO models based on exact simulations in ADS. A mapping from the coarse space to the fine space, known as space mapping, is also used. The proposed synthesis approach takes into account the noise and scattering parameters due to parasitic elements in order to achieve optimal results. The overall ANFIO system is capable of designing different LNAs under different noise and scattering criteria. This approach offers a significantly reduced design time for microwave amplifiers within the validity range of the ANFIO system. The method has been proven to work efficiently on a 2.4 GHz LNA example: an S21 of 10.1 dB and a noise figure (NF) of 2.7 dB were achieved with ANFIO, while an S21 of 9.05 dB and an NF of 2.6 dB were achieved with an ANN.

Study of Aluminum, Copper and Molybdenum Pollution in Groundwater Sources Surrounding the Miduk (Shahr-e-Babak) Copper Complex Tailings Dam

Interpolated contour maps drawn for aluminum, copper and molybdenum in the downstream monitoring boreholes of the water dam in the Miduk Copper Complex, together with the values of pH, redox potential (Eh) and distance from the water dam, indicate different trends of variation and behavior of these three elements in the downstream groundwater resources. As these maps show, aluminum is dominant in the most alkaline borehole (MB5, pH = 9-11). The highest concentration of molybdenum is found in the borehole nearest to the water dam (MB6). The main concentration of copper is observed in the most oxidized borehole (MB3, Eh = 293.2 mV). The spatial differences among sampling stations can be attributed to the existence of faults and diaclases in the geologic structure of the Miduk region, which causes the groundwater sampling sites to be affected by different contamination sources (toe seepage and upper seepage water originating from different zones of the tailings dump).

A Novel Neighborhood Defined Feature Selection on Phase Congruency Images for Recognition of Faces with Extreme Variations

A novel feature selection strategy to improve recognition accuracy on faces affected by non-uniform illumination, partial occlusions and varying expressions is proposed in this paper. The technique is applicable especially in scenarios where the possibility of obtaining a reliable intra-class probability distribution is minimal due to a small number of training samples. Phase congruency features in an image are defined as the points where the Fourier components of that image are maximally in phase. These features are invariant to the brightness and contrast of the image under consideration, a property that makes lighting-invariant face recognition achievable. Phase congruency maps of the training samples are generated and a novel modular feature selection strategy is implemented. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features, which are arranged in order of increasing distance between the sub-regions involved in the merging. The assumption behind the proposed region merging and arrangement strategy is that local dependencies among the pixels are more important than global dependencies. The obtained feature sets are then arranged in decreasing order of discriminating capability using a criterion function, the ratio of the between-class variance to the within-class variance of the sample set, computed in the PCA domain. The results indicate a large improvement in classification performance compared to baseline algorithms.
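
A short sketch of the ranking criterion follows: the ratio of between-class to within-class variance computed on a PCA projection of a feature set. The synthetic data, class count and dimensionalities are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
n_per_class, n_feat = 20, 50
X = np.vstack([rng.normal(m, 1.0, (n_per_class, n_feat)) for m in (0.0, 1.0, 2.0)])
labels = np.repeat([0, 1, 2], n_per_class)

Z = PCA(n_components=10).fit_transform(X)       # project the feature set into the PCA domain

def criterion(Z, labels):
    """Between-class variance divided by within-class variance."""
    overall_mean = Z.mean(axis=0)
    between, within = 0.0, 0.0
    for c in np.unique(labels):
        Zc = Z[labels == c]
        between += len(Zc) * np.sum((Zc.mean(axis=0) - overall_mean) ** 2)
        within += np.sum((Zc - Zc.mean(axis=0)) ** 2)
    return between / within

print("between/within class variance ratio:", criterion(Z, labels))
```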

Building a Personalized Multidimensional Intelligent Learning System

Currently, most distance learning courses can only deliver standard material to students. Students receive course content passively, which neglects the goal of education: to suit the teaching to the ability of the students. Providing appropriate course content according to students' ability is the main goal of this paper. Besides offering a series of conventional learning services, abundant information and instant message delivery, a complete online learning environment should be able to distinguish between students' abilities and provide learning courses that best suit those abilities. If a distance learning site contains well-designed course content but fails to provide adaptive courses, students will gradually lose their interest and confidence in learning, resulting in ineffective or discontinued learning. In this paper, an intelligent tutoring system is proposed; it consists of several modules working cooperatively to build an adaptive learning environment for distance education. The operation of the system is based on the result of a Self-Organizing Map (SOM), which divides students into different groups according to their learning ability and learning interests and then provides them with suitable course content. Accordingly, the problems of information overload and Internet traffic can be alleviated, because the amount of traffic accessing the same content is reduced.
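
The sketch below shows the grouping step in principle: a small SOM clusters student feature vectors so that each map cell can be linked to a course-content level. It uses the third-party MiniSom package (the paper does not specify an implementation), and the student features and map size are synthetic assumptions.

```python
import numpy as np
from minisom import MiniSom   # third-party package, assumed here

rng = np.random.default_rng(7)
# e.g. [quiz score, reading speed, interest in topic A, interest in topic B]
students = rng.random((120, 4))

som = MiniSom(3, 3, input_len=4, sigma=1.0, learning_rate=0.5, random_seed=7)
som.random_weights_init(students)
som.train_random(students, 1000)

groups = {}
for i, s in enumerate(students):
    groups.setdefault(som.winner(s), []).append(i)   # map cell -> student indices

for cell, members in sorted(groups.items()):
    print(f"group {cell}: {len(members)} students")
```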

Plant Varieties Selection System

Meteorological and environmental data are now widely used in applications such as plant variety selection systems. Selecting the right variety for a planted area is of utmost importance for all crops, including sugarcane, which has many varieties. A variety that is not selected for the conditions of the planted area may not be adapted to its climate or soil; poor growth, bloom drop, poor fruit and low prices typically result from varieties that were not recommended for that area. This paper presents a plant variety selection system for planted areas in Thailand based on meteorological and environmental data, using decision tree techniques. The software developed as an environmental data analysis tool makes such analyses easier and faster. It is a front end to WEKA that provides fundamental data mining functions such as classification, clustering and analysis, and also supports pre-processing, analysis and decision tree output with result export. Finally, the software can export the results to the Google Maps API in order to display them and plot plant icons effectively.

Feature Extraction from Aerial Photos

In Geographic Information Systems, one source of the needed geographic data is the digitization of analog maps and the evaluation of aerial and satellite photos. In this study, a method is discussed that can be used to extract vector features from aerial photos and create vectorized drawing files; software was also developed for this purpose. Converting from raster to vector, also known as vectorization, is the most important step when creating vectorized drawing files. In the developed algorithm, preprocessing is first performed on the aerial photo: converting to grayscale if necessary, reducing noise, applying some filters and detecting the edges of the objects. After these steps, every pixel of the photo is traversed from upper left to lower right, examining its neighborhood relationships, and one-pixel-wide lines or polylines are obtained. The traced lines then have to be marked to prevent confusion as vectorization continues: if they are not erased they can be perceived as new lines, but erasing them can cause discontinuities in the vector drawing, so the image is converted from 2-bit to 8-bit and the detected pixels are expressed with a different bit value. In conclusion, the aerial photo can be converted to a vector form consisting of lines and polylines, which can be opened in any CAD application.
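
A minimal sketch of the preprocessing stage (grayscale conversion, noise reduction, edge detection) is given below using OpenCV. The input file name, filter sizes, Canny thresholds and the "visited" marker value are all assumptions, not the developed software's settings.

```python
import cv2
import numpy as np

photo = cv2.imread("aerial_photo.tif")                 # hypothetical input file name
if photo is None:
    raise SystemExit("aerial_photo.tif not found")

gray = cv2.cvtColor(photo, cv2.COLOR_BGR2GRAY)         # convert to grayscale if necessary
denoised = cv2.GaussianBlur(gray, (5, 5), 0)           # reduce noise
edges = cv2.Canny(denoised, 50, 150)                   # binary edge map of the objects

# The pixel-following step works on this edge map; flagging visited pixels with a
# distinct value avoids both re-detection and broken polylines.
visited_value = 128
traced = edges.copy()
ys, xs = np.nonzero(traced == 255)
if len(ys):
    traced[ys[0], xs[0]] = visited_value               # example: mark the first edge pixel as processed
cv2.imwrite("edges.png", edges)
```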

Application of Remote Sensing in Development of Green Space

One of the most important tasks in developing and managing urban areas is the appropriate selection of land for green spaces. In this study, in order to identify the most appropriate sites and cultivated areas for ornamental species in Jiroft, Landsat Enhanced Thematic Mapper Plus (ETM+) images were used to extract the most important climatic and edaphic parameters affecting the growth of ornamental species. After geometric and atmospheric corrections were applied, the Landsat multispectral (XS) bands were fused with the IRS-1D panchromatic band (PAN) to enhance their accuracy. After field sampling, the correlation between different factors at the surface-soil sampling locations and the digital numbers (DN) of the corresponding ETM+ bands at the same points was evaluated; correlation tables were formed using the best computational model, and maps of the physical and chemical parameters of the soil were produced. Their accuracy was then assessed using the kappa coefficient. Finally, according to the produced maps, the best areas for the cultivation of the recommended species were identified.
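
The sketch below illustrates the accuracy check mentioned above: the kappa coefficient between a classified map and ground-truth samples. The class labels here are synthetic placeholders, not the Jiroft data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(8)
ground_truth = rng.integers(0, 3, 200)                       # field-sampled classes
predicted = np.where(rng.random(200) < 0.8, ground_truth,    # ~80% agreement plus random confusion
                     rng.integers(0, 3, 200))
print("kappa coefficient:", cohen_kappa_score(ground_truth, predicted))
```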

Preoperative to Intraoperative Space Registration for Management of Head Injuries

A registration framework for image-guided robotic surgery is proposed for three emergency neurosurgical procedures, namely Intracranial Pressure (ICP) monitoring, External Ventricular Drainage (EVD) and evacuation of a Chronic Subdural Haematoma (CSDH). The registration paradigm uses CT and white light as modalities. This paper presents two simulation studies for a preliminary evaluation of the registration protocol: (1) the loci of the Target Registration Error (TRE) in the patient's axial, coronal and sagittal views were simulated based on a Fiducial Localisation Error (FLE) of 5 mm, and (2) the actual framework was simulated using projected views from a surface-rendered CT model to represent white-light images of the patient. Craniofacial features were employed as the registration basis to map the CT space onto the simulated intraoperative space. Photogrammetry experiments on an artificial skull were also performed to benchmark the results obtained from the second simulation. The results of both simulations show that the proposed protocol can provide 5 mm accuracy for these neurosurgical procedures.
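
A sketch of the kind of study described in (1) is given below: craniofacial fiducials are perturbed by an FLE-sized noise, a rigid transform is recovered by a least-squares (Kabsch/SVD) fit, and the TRE is read off at a target. The fiducial and target coordinates, the noise model and the Monte Carlo size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
fiducials_ct = np.array([[80.0, 120.0, 40.0],   # hypothetical craniofacial landmarks (mm)
                         [60.0, 140.0, 55.0],
                         [100.0, 140.0, 55.0],
                         [50.0, 100.0, 70.0],
                         [110.0, 100.0, 70.0]])
target_ct = np.array([85.0, 110.0, 90.0])        # hypothetical surgical target

def rigid_fit(src, dst):
    """Least-squares rotation/translation mapping src onto dst (Kabsch/SVD)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dc - R @ sc

fle_sigma = 5.0 / np.sqrt(3)        # ~5 mm FLE split over three axes
tres = []
for _ in range(1000):
    fiducials_op = fiducials_ct + rng.normal(0, fle_sigma, fiducials_ct.shape)
    R, t = rigid_fit(fiducials_ct, fiducials_op)
    tres.append(np.linalg.norm(R @ target_ct + t - target_ct))
print(f"mean TRE over 1000 trials: {np.mean(tres):.2f} mm")
```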

Tracking Control of a Linear Parabolic PDE with In-domain Point Actuators

This paper addresses the problem of asymptotic tracking control of a linear parabolic partial differential equation with in-domain point actuation. As the considered model is a non-standard partial differential equation, we first develop a map that transforms this problem into a standard boundary control problem, to which existing infinite-dimensional system control methods can be applied. Then, a combination of the energy multiplier and differential flatness methods is used to design an asymptotic tracking controller. This control scheme consists of a stabilizing state feedback derived from the energy multiplier method and a feed-forward control based on the flatness property of the system. The approach represents a systematic procedure for designing tracking control laws for a class of partial differential equations with in-domain point actuation. Its applicability and the system performance are assessed by simulation studies.
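
For orientation, a generic instance of the class of systems discussed is written below; the diffusion coefficient, boundary conditions and actuator location are assumptions, not necessarily the paper's exact model.

```latex
% Linear parabolic PDE with in-domain point actuation at x = x_0 (illustrative form)
\[
\begin{aligned}
  &\partial_t u(x,t) = \kappa\,\partial_{xx} u(x,t) + \delta(x - x_0)\,v(t),
      && x \in (0,1),\ t > 0,\\
  &u(0,t) = 0,\quad u(1,t) = 0,\qquad u(x,0) = u_0(x),
\end{aligned}
\]
% where v(t) is the point control input applied at the in-domain location x_0.
```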

Particle Filter Applied to Noisy Synchronization in Polynomial Chaotic Maps

Polynomial maps offer analytical properties that can be exploited to obtain better performance in chaos synchronization over noisy channels. This paper presents a new method to simplify the equations of the Exact Polynomial Kalman Filter (ExPKF) given in [1]. This faster algorithm is compared to other estimators, showing that the performance of all considered observers degrades rapidly with the channel noise, making the application of chaos synchronization intractable. Simulation of the ExPKF shows that the saturation applied to the emitter to keep it stable badly impacts performance at low channel noise. We therefore propose a particle filter that outperforms all other Kalman-structured observers in the case of noisy channels.
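
A minimal bootstrap particle filter synchronizing with a polynomial (logistic) chaotic map observed through an additive-noise channel is sketched below. The map, noise levels and particle count are illustrative assumptions, not the paper's setup or its ExPKF.

```python
import numpy as np

rng = np.random.default_rng(10)
T, n_particles = 200, 500
proc_std, obs_std = 1e-3, 0.05

def f(x):
    return 3.9 * x * (1 - x)          # logistic map in its chaotic regime

# "Emitter" trajectory and the noisy channel output
x = np.empty(T); x[0] = 0.3
for k in range(1, T):
    x[k] = np.clip(f(x[k - 1]) + rng.normal(0, proc_std), 1e-6, 1 - 1e-6)
y = x + rng.normal(0, obs_std, T)

# Bootstrap particle filter at the receiver
particles = rng.uniform(0, 1, n_particles)
estimates = np.empty(T)
for k in range(T):
    if k > 0:
        particles = np.clip(f(particles) + rng.normal(0, proc_std, n_particles), 1e-6, 1 - 1e-6)
    weights = np.exp(-0.5 * ((y[k] - particles) / obs_std) ** 2) + 1e-300
    weights /= weights.sum()
    estimates[k] = np.sum(weights * particles)
    particles = rng.choice(particles, size=n_particles, p=weights)   # multinomial resampling

print("synchronisation RMSE:", np.sqrt(np.mean((estimates - x) ** 2)))
```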

Studying the Trend of Drought in Fars Province (Iran) Using the SPI Method

Drought is a natural, climatic phenomenon and in fact forms part of the climate of a region, with significant environmental, social and economic consequences. Drought differs from other natural disasters in that it is a creeping phenomenon: it progresses gradually, and it is difficult to determine the time of its onset and termination. Most drought definitions are based on a precipitation shortage and, consequently, a shortage of water for activities that depend on it, such as agriculture. In this research, drought conditions in Fars province were evaluated using the SPI method over a 37-year statistical period (1974-2010), and drought maps were prepared for each year of the period. According to the results, the years 1974, 1976, 1975 and 1982, with SPI values of -1.03, 0.39, -1.05 and -1.49 respectively, were the driest years, and 1996, 1997 and 2000, with SPI values of 2.49, 1.49, 1.46 and 1.04 respectively, were the most humid within the studied time series; the remaining years were closer to normal conditions in terms of drought.
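
A short sketch of the SPI calculation follows: a gamma distribution is fitted to the precipitation series and its cumulative probabilities are mapped onto a standard normal variable. The annual precipitation values below are random stand-ins, not the Fars province station data.

```python
import numpy as np
from scipy.stats import gamma, norm

rng = np.random.default_rng(11)
precip = rng.gamma(shape=3.0, scale=150.0, size=37)       # e.g. 37 annual totals (mm), 1974-2010

shape, loc, scale = gamma.fit(precip, floc=0)             # fit with location fixed at zero
cdf = gamma.cdf(precip, shape, loc=loc, scale=scale)
spi = norm.ppf(cdf)                                       # equi-probability transform to N(0, 1)

for year, value in zip(range(1974, 2011), spi):
    label = "drought" if value <= -1.0 else ("wet" if value >= 1.0 else "near normal")
    print(year, f"SPI = {value:+.2f}", label)
```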

Automatic Map Simplification for Visualization on Mobile Devices

The visualization of geographic information on mobile devices has become popular with the widespread use of the mobile Internet. The mobility of these devices brings much convenience to people's lives: through their add-on location-based services, people have access to timely information relevant to their tasks. However, the visual analysis of geographic data on mobile devices presents several challenges due to the small display and restricted computing resources. These limitations on screen size and resources may impair the usability of visualization applications. In this paper, a variable-scale visualization method is proposed to handle the challenge of the small mobile display. By merging multiple scales of information into a single image, the viewer is able to focus on the region of interest while keeping a good grasp of the surrounding context; this is essentially viewing the map through a fisheye lens. However, the fisheye lens induces undesirable geometric distortion in the periphery, which renders the information there meaningless. The proposed solution is to apply map generalization, which removes excessive information around the periphery, together with an automatic smoothing process that corrects the distortion while keeping the local topology consistent. The proposed method is applied to both artificial and real geographical data for evaluation.
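
The sketch below illustrates the fisheye magnification behind the variable-scale display: map points near the focus are expanded and the periphery is compressed. The distortion function follows the common graphical-fisheye form g(r) = (d+1)r/(dr+1), which is an assumption here, as are the parameters.

```python
import numpy as np

def fisheye(points, focus, d=3.0, radius=1.0):
    """Map 2-D points toward/away from the focus with magnification factor d."""
    vec = points - focus
    r = np.linalg.norm(vec, axis=1, keepdims=True)
    r_norm = np.clip(r / radius, 0, 1)
    r_new = (d + 1) * r_norm / (d * r_norm + 1)           # monotone, fixes 0 and 1
    scale = np.divide(r_new * radius, r, out=np.ones_like(r), where=r > 0)
    return focus + vec * scale

pts = np.array([[0.1, 0.0], [0.3, 0.2], [0.8, 0.6], [1.0, 0.0]])
print(fisheye(pts, focus=np.array([0.0, 0.0])))
```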

Numerical Study of Iterative Methods for the Solution of the Dirichlet-Neumann Map for Linear Elliptic PDEs on Regular Polygon Domains

A generalized Dirichlet-to-Neumann map is one of the main ingredients of a recently introduced method for analyzing linear elliptic PDEs, through which it became possible to couple the known and unknown components of the solution on the boundary of the domain without solving in its interior. For its numerical solution, a well-conditioned, quadratically convergent Sine-Collocation method was developed, which yields a linear system of equations whose associated coefficient matrix has point-diagonal diagonal blocks. This structural property, among others, motivated the use of iterative methods for its solution. In this work we present a conclusive numerical study of the behavior of classical (Jacobi and Gauss-Seidel) and Krylov subspace (GMRES and Bi-CGSTAB) iterative methods when applied to the solution of the Dirichlet-to-Neumann map associated with Laplace's equation on regular polygons with the same boundary conditions on all edges.
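
For orientation only, the sketch below runs the two Krylov solvers named above on a random, diagonally dominant sparse system (with SciPy's default tolerances) standing in for the Sine-Collocation matrix, whose special block structure is specific to the method.

```python
import numpy as np
from scipy.sparse import random as sparse_random, identity
from scipy.sparse.linalg import gmres, bicgstab

rng = np.random.default_rng(12)
n = 500
A = sparse_random(n, n, density=0.01, random_state=12, format="csr") + 5 * identity(n, format="csr")
b = rng.random(n)

for name, solver in (("GMRES", gmres), ("Bi-CGSTAB", bicgstab)):
    x, info = solver(A, b)
    residual = np.linalg.norm(A @ x - b)
    print(f"{name}: converged={info == 0}, residual={residual:.2e}")
```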