STLF Based on Optimized Neural Network Using PSO

The quality of short-term load forecasting (STLF) can improve the efficiency of planning and operation of electric utilities. Artificial Neural Networks (ANNs) are employed for nonlinear short-term load forecasting owing to their powerful nonlinear mapping capabilities. At present there is no systematic methodology for the optimal design and training of an artificial neural network; one often has to resort to trial and error. This paper describes the process of developing large three-layer feed-forward neural networks for short-term load forecasting and then presents a heuristic search algorithm for an important task in this process, namely optimal network structure design. Particle Swarm Optimization (PSO) is used to develop the optimum large neural network structure and connection weights for the one-day-ahead electric load forecasting problem. PSO is a random optimization method based on swarm intelligence with a powerful global optimization ability. Employing PSO algorithms in the design and training of ANNs allows the ANN architecture and parameters to be optimized easily. The proposed method is applied to STLF for a local utility. Data are clustered according to differences in their characteristics, and special days are extracted from the normal training sets and handled separately. In this way, a solution is provided for all load types, including working days, weekends, and special days. The experimental results show that the proposed PSO-optimized method can quicken the learning speed of the network and improve the forecasting precision compared with the conventional Back Propagation (BP) method. Moreover, it is not only simple to compute but also practical and effective. It provides a greater degree of accuracy in many cases and consistently gives lower percentage errors for the STLF problem than the BP method. Thus, it can be applied to automatically design an optimal load forecaster from historical data.
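
The following is a minimal sketch, not the authors' implementation, of the core idea: a global-best PSO searching the flat weight vector of a small three-layer feed-forward network against a mean-squared forecasting error. The network size, swarm parameters, and data are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 24, 10, 24                      # e.g. 24 hourly loads in/out
dim = n_in * n_hid + n_hid + n_hid * n_out + n_out   # total weight count

def forward(w, X):
    """Unpack a flat weight vector and run the three-layer network."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = w[i:i + n_out]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w, X, y):
    return np.mean((forward(w, X) - y) ** 2)

# Toy data standing in for normalized historical load curves.
X = rng.standard_normal((200, n_in))
y = rng.standard_normal((200, n_out))

# Standard global-best PSO over the flat weight vector.
n_particles, iters, inertia, c1, c2 = 30, 200, 0.7, 1.5, 1.5
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([mse(p, X, y) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(p, X, y) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best training MSE:", pbest_f.min())
```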

SUPAR: System for User-Centric Profiling of Association Rules in Streaming Data

With the surge of stream processing applications, novel techniques are required for the generation and analysis of association rules in streams. Traditional rule mining solutions cannot handle streams because they generally require multiple passes over the data and do not guarantee results in a predictably small time. Although researchers have proposed algorithms for generating rules from streams, there has not been much focus on their analysis. We propose association rule profiling, a user-centric process for analyzing association rules and attaching suitable profiles to them depending on their changing frequency behavior over a previous snapshot of time in a data stream. Association rule profiles provide insights into the changing nature of associations and can be used to characterize them. We discuss the importance of characteristics such as the predictability of linkages present in the data and propose a metric to quantify it. We also show how association rule profiles can aid in the generation of user-specific, more understandable and actionable rules. The framework is implemented as SUPAR: System for User-centric Profiling of Association Rules in streaming data. The proposed system offers the following capabilities: (i) continuous monitoring of the frequency of streaming item-sets and detection of significant changes therein for association rule profiling; (ii) computation of metrics for quantifying the predictability of associations present in the data; and (iii) user-centric control of the characterization process: the user can control the framework through (a) constraint specification and (b) non-interesting rule elimination.
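
As a hypothetical simplification of capability (i), the sketch below tracks item-set frequencies over a sliding window of the stream and flags item-sets whose frequency changed significantly since the previous snapshot, which could then drive profile assignment. The window size and change threshold are made-up parameters, not values from the paper.

```python
from collections import deque, Counter

WINDOW = 1000           # transactions per snapshot (illustrative)
CHANGE_THRESHOLD = 0.3  # relative frequency change treated as "significant"

window = deque(maxlen=WINDOW)
counts = Counter()
previous_freq = {}

def observe(itemset):
    """Consume one transaction's item-set (a frozenset) from the stream."""
    if len(window) == WINDOW:        # oldest transaction is about to be evicted
        counts[window[0]] -= 1
    window.append(itemset)
    counts[itemset] += 1

def significant_changes():
    """Compare current window frequencies against the previous snapshot."""
    global previous_freq
    freq = {s: c / len(window) for s, c in counts.items() if c > 0}
    changed = {}
    for s, f in freq.items():
        old = previous_freq.get(s, 0.0)
        if old and abs(f - old) / old > CHANGE_THRESHOLD:
            changed[s] = (old, f)    # candidate for re-profiling
    previous_freq = freq
    return changed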

Network Anomaly Detection using Soft Computing

One main drawback of intrusion detection systems is their inability to detect new attacks that do not have known signatures. In this paper we discuss an intrusion detection method that uses independent component analysis (ICA)-based feature selection heuristics and rough-fuzzy clustering of the data. ICA separates the independent components (ICs) underlying the monitored variables, rough set theory reduces the amount of data and removes redundancy, and fuzzy methods allow objects to belong to several clusters simultaneously with different degrees of membership. Our approach recognizes not only known attacks but also activity that may be the result of a new, unknown attack. Experimental results are reported on the Knowledge Discovery and Data Mining (KDD Cup 1999) dataset.
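
Below is a minimal sketch of such a pipeline: ICA-based feature extraction followed by soft clustering. A plain fuzzy c-means stands in for the paper's rough-fuzzy step, and the data shapes are illustrative placeholders.

```python
import numpy as np
from sklearn.decomposition import FastICA

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means: returns the membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = Um.T @ X / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return U

# X: connection records (rows) over monitored variables (columns); placeholder.
X = np.random.default_rng(1).standard_normal((500, 20))
ics = FastICA(n_components=5, random_state=0).fit_transform(X)
U = fuzzy_cmeans(ics, c=2)    # soft memberships, e.g. "normal" vs "anomalous"
labels = U.argmax(axis=1)     # harden only for reporting
```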

Mining Sequential Patterns Using I-PrefixSpan

In this paper, we propose I-PrefixSpan, an improvement of the pattern-growth-based PrefixSpan algorithm. The general idea of I-PrefixSpan is to use an efficient data structure for the Seq-Tree framework and a separator database to reduce execution time and memory usage; with I-PrefixSpan, no in-memory database is stored after the index set is constructed. Experimental results in Java 2 show that this method improves the speed of PrefixSpan by up to almost two orders of magnitude and reduces memory usage by more than one order of magnitude.
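
For reference, here is a compact version of the classic PrefixSpan baseline that I-PrefixSpan improves upon (the Seq-Tree and separator-database optimizations themselves are not shown). Sequences are lists of single items and min_sup is an absolute support count.

```python
def prefixspan(db, min_sup, prefix=None, out=None):
    prefix, out = prefix or [], out if out is not None else []
    # Count the support of each item in the (projected) database.
    counts = {}
    for seq in db:
        for item in set(seq):
            counts[item] = counts.get(item, 0) + 1
    for item, sup in counts.items():
        if sup < min_sup:
            continue
        pattern = prefix + [item]
        out.append((pattern, sup))
        # Project: keep the suffix after the first occurrence of item.
        projected = [seq[seq.index(item) + 1:] for seq in db if item in seq]
        prefixspan(projected, min_sup, pattern, out)
    return out

db = [['a', 'b', 'c'], ['a', 'c'], ['a', 'b', 'c', 'b'], ['b', 'c']]
print(prefixspan(db, min_sup=2))
```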

A Study on Removal Characteristics of Mn2+ from Aqueous Solution by CNT

It is important to remove manganese from water because of its effects on humans and the environment. Human activities are among the biggest contributors to excessive manganese concentrations in the environment. The proposed method removes manganese from aqueous solution by adsorption onto carbon nanotubes (CNT), studied over four parameters: CNT dosage, pH, agitation speed and contact time. The pH values are 6.0, 6.5, 7.0, 7.5 and 8.0; the CNT dosages are 5 mg, 6.25 mg, 7.5 mg, 8.75 mg and 10 mg; the contact times are 10 min, 32.5 min, 55 min, 87.5 min and 120 min; and the agitation speeds are 100 rpm, 150 rpm, 200 rpm, 250 rpm and 300 rpm. The parameter settings were chosen by experimental design using Central Composite Design in Design Expert 6.0, with 4 parameters, 5 levels and 2 replications. Based on the results, the condition set at pH 7.0, agitation speed of 300 rpm, CNT dosage of 7.5 mg and contact time of 55 minutes gives the highest removal, at 75.5%. ANOVA analysis in Design Expert 6.0 shows that the residual concentration is most strongly affected by pH and CNT dosage. The initial manganese concentration is 1.2 mg/L, while the lowest residual concentration achieved is 0.294 mg/L, which almost satisfies the DOE Malaysia Standard B requirement. Therefore, further experiments must be done to bring the manganese concentration down to the required standard (0.2 mg/L), with the initial concentration set to 0.294 mg/L.
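
A quick check of the reported figures ties the headline removal percentage to the quoted concentrations:

```python
# Removal efficiency from the initial and lowest residual concentrations above.
c0, ce = 1.2, 0.294                 # mg/L
removal = (c0 - ce) / c0 * 100
print(f"removal efficiency = {removal:.1f} %")   # 75.5 %
```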

Simulation of Fluid Flow and Heat Transfer in the Inclined Enclosure

Mixed convection in a two-dimensional shallow rectangular enclosure is considered. The top hot wall moves with constant velocity while the cold bottom wall is stationary. Simulations are performed for Richardson numbers ranging from Ri = 0.001 to 100, with the Reynolds number kept fixed at Re = 408.21. Under these conditions the cavity encompasses three regimes: dominant forced, mixed, and free convection flow. The Prandtl number is set to 6, and the effects of cavity inclination on the flow and heat transfer are studied for different Richardson numbers. With increasing inclination angle, interesting behavior of the flow and thermal fields is observed. Streamline and isotherm plots and the variation of the Nusselt number on the hot wall are presented. The average Nusselt number is found to increase with cavity inclination for Ri ≥ 1. It is also shown that the average Nusselt number changes mildly with cavity inclination in the dominant forced convection regime but increases considerably in the regime dominated by natural convection.

Monotonicity of Dependence Concepts from Independent Random Vector into Dependent Random Vector

When the failure function is monotone, monotonic reliability methods can greatly simplify and facilitate reliability computations. However, these methods often work in a transformed iso-probabilistic space, so a monotonic transformation is needed in order that the transformed failure function remains monotone. This note first proves that the output distribution of the failure function is invariant under the transformation. It then presents some conditions under which the transformed function is still monotone in the newly obtained space; these conditions concern copulas and dependence concepts. In many engineering applications, Gaussian copulas are used to approximate the real-world copulas when the available information on the random variables is limited to the set of marginal distributions and the covariances. This note therefore focuses on the conditional monotonicity of the commonly used transformation from an independent random vector into a dependent random vector with Gaussian copulas.
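
The sketch below illustrates the kind of transformation discussed: independent standard normals are mapped through a Gaussian copula (via a Cholesky factor) into a dependent vector with prescribed marginals. The correlation value and the exponential/lognormal marginals are illustrative choices, not taken from the note.

```python
import numpy as np
from scipy import stats

R = np.array([[1.0, 0.6],               # copula correlation matrix (example)
              [0.6, 1.0]])
L = np.linalg.cholesky(R)

def transform(u):
    """Map independent standard normals u to a dependent vector (X1, X2)."""
    z = L @ u                            # correlated standard normals
    p = stats.norm.cdf(z)                # Gaussian copula: uniform scores
    x1 = stats.expon.ppf(p[0])           # marginal 1: exponential (example)
    x2 = stats.lognorm.ppf(p[1], s=0.5)  # marginal 2: lognormal (example)
    return np.array([x1, x2])

# With nonnegative Cholesky entries, each output coordinate is increasing in
# each input coordinate -- the sort of monotonicity property at issue here.
sample = transform(np.random.default_rng(0).standard_normal(2))
```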

Differences in Goal Scoring and Passing Sequences between Winning and Losing Team in UEFA-EURO Championship 2012

The objective of the current study is to investigate the differences between winning and losing teams in terms of goal scoring and passing sequences. A total of 31 matches from UEFA-EURO 2012 were considered, of which five were excluded from the analysis because they ended in a draw. Two groups of variables were used in the study: (i) goal scoring variables and (ii) passing sequence variables. Data were analyzed using the Wilcoxon matched-pairs signed-rank test with the significance level set at p < 0.05. The study found that the timing of goals scored was significantly higher for the winning team in both the 1st half (Z=-3.416, p=.001) and the 2nd half (Z=-3.252, p=.001). Scoring frequency was also found to increase as time progressed, and the last 15 minutes of the game was the interval in which the most goals were scored. The indicators that differed significantly between winning and losing teams were goals scored (Z=-4.578, p=.000), headed goals (Z=-2.500, p=.012), right-foot goals (Z=-3.788, p=.000), corners (Z=-2.126, p=.033), open play (Z=-3.744, p=.000), goals inside the penalty box (Z=-4.174, p=.000), attackers (Z=-2.976, p=.003) and midfielders (Z=-3.400, p=.001). Regarding the passing sequences, there was a significant difference between the teams in short passing sequences (Z=-4.141, p=.000), while for long passing sequences there was no significant difference (Z=-1.795, p=.073). The data gathered in the present study can be used by coaches to construct detailed training programs based on their objectives.
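
The statistical procedure itself is standard; as a hypothetical illustration (the match data below are made up, not the EURO 2012 values), a paired indicator can be compared across winning and losing teams like this:

```python
import numpy as np
from scipy.stats import wilcoxon

winners = np.array([2, 3, 1, 2, 4, 2, 3, 1, 2, 3])   # e.g. goals per match
losers  = np.array([0, 1, 0, 1, 1, 0, 2, 0, 1, 1])

# Wilcoxon matched-pairs signed-rank test at the 0.05 level.
stat, p = wilcoxon(winners, losers)
print(f"W={stat:.1f}, p={p:.3f}", "significant" if p < 0.05 else "n.s.")
```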

Assessing and Visualizing the Stability of Feature Selectors: A Case Study with Spectral Data

Feature selection plays an important role in applications with high-dimensional data. Assessing the stability of feature selection/ranking algorithms becomes an important issue when the dataset is small and the aim is to gain insight into the underlying process by analyzing the most relevant features. In this work, we propose a graphical approach that makes it possible to analyze the similarity between feature ranking techniques as well as their individual stability. Moreover, it works with any stability metric (Canberra distance, Spearman's rank correlation coefficient, Kuncheva's stability index, etc.). We illustrate this visualization technique by evaluating the stability of several feature selection techniques on a spectral binary dataset. Experimental results with a neural-based classifier show that stability and ranking quality may not be linked, and both issues have to be studied jointly in order to offer answers to the domain experts.
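
As a short sketch of one of the metrics named above, Kuncheva's stability index compares two equal-size feature subsets against the overlap expected by chance:

```python
def kuncheva_index(a, b, n):
    """Kuncheva's index for feature sets a, b with |a| == |b| == k out of n."""
    k = len(a)
    if k == 0 or k == n:
        return 0.0                  # index is undefined at the extremes
    r = len(a & b)                  # observed overlap
    expected = k * k / n            # overlap expected by chance
    return (r - expected) / (k - expected)

# Example: two runs agreeing on 8 of their top-10 features out of 100.
print(kuncheva_index(set(range(10)), set(range(2, 12)), n=100))  # ~0.78
```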

No one Set of Parameter Values Can Simulate the Epidemics Due to SARS Occurring at Different Localities

A mathematical model for the transmission of SARS is developed. In addition to dividing the population into susceptible (high and low risk), exposed, infected, quarantined, diagnosed and recovered classes, we include a class called untraced. The model simulates the Gompertz curves that best represent the cumulative numbers of probable SARS cases in Hong Kong and Singapore. The parameter values that produce the best fit of the observed data for each city are obtained using a differential evolution algorithm. The parameter values needed to simulate the observed daily behaviors of the two epidemics turn out to be different, so no single set of values can reproduce both.
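
A minimal sketch of the fitting procedure, with a synthetic data series standing in for the actual SARS case counts: a Gompertz curve is fitted by minimizing the squared error with SciPy's differential evolution optimizer.

```python
import numpy as np
from scipy.optimize import differential_evolution

t = np.arange(60, dtype=float)                   # days since outbreak start
cases = 1700 * np.exp(-np.exp(1.8 - 0.12 * t))   # synthetic "observed" data

def gompertz(t, K, b, c):
    """Cumulative Gompertz curve: K * exp(-exp(b - c*t))."""
    return K * np.exp(-np.exp(b - c * t))

def sse(params):
    return np.sum((gompertz(t, *params) - cases) ** 2)

bounds = [(100, 5000), (0, 5), (0.01, 1.0)]      # search ranges for K, b, c
result = differential_evolution(sse, bounds, seed=0)
print("fitted (K, b, c):", result.x)
```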

Hydrothermal Behavior of G-S Magnetically Stabilized Beds Consisting of Magnetic and Non-Magnetic Admixtures

The hydrothermal behavior of a bed consisting of an admixture of magnetic and shale oil particles under the effect of a transverse magnetic field is investigated. The phase diagram and bed void fraction are studied over a wide range of operating conditions, i.e., gas velocity, magnetic field intensity and fraction of magnetic particles. It is found that the range of the stabilized regime shrinks as the magnetic fraction decreases. In addition, the bed voidage at the onset of fluidization decreases as the magnetic fraction decreases. On the other hand, the Nusselt number, and consequently the heat transfer coefficient, is found to increase as the magnetic fraction decreases. An empirical equation is developed to relate the effects of gas velocity, magnetic field intensity and fraction of magnetic particles to the heat transfer behavior in the bed.

Terrain Evaluation Method for Hexapod Robot

In this paper a simple terrain evaluation method for a hexapod robot is introduced. The method is based on evaluating the feet coordinates when all feet are on the ground; from the differences in these coordinates, local terrain evaluation is possible. Terrain evaluation is necessary for correct gait selection and/or body position correction. For terrain roughness evaluation, three planes are constructed: two of them are defined by the coordinates of opposite feet, while the third coincides with the robot body plane. The lean angle of the body plane is evaluated by measuring the gravity vector with a three-axis accelerometer. The terrain roughness evaluation method is based on estimating the angles between the normal vectors of these planes. The aim of this work is to present a simple method for an embedded robot controller that allows the best settings for further movement to be found.
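
The core geometric step can be sketched as follows: build a plane from foot contact points, take the body-plane normal from an accelerometer gravity reading, and use the angle between the normals as a roughness measure. All coordinates below are illustrative, not the robot's actual geometry.

```python
import numpy as np

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three points."""
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

def angle_between(n1, n2):
    """Angle (rad) between two unit normals, folded to [0, pi/2]."""
    return np.arccos(np.clip(abs(np.dot(n1, n2)), 0.0, 1.0))

feet = np.array([[ 0.2,  0.1,  0.00],   # three supporting feet (x, y, z), m
                 [-0.2,  0.1,  0.03],
                 [ 0.0, -0.2, -0.02]])
terrain_n = plane_normal(*feet)

gravity = np.array([0.1, 0.0, -9.7])    # accelerometer reading, m/s^2
body_n = -gravity / np.linalg.norm(gravity)   # body "up" direction

roughness_angle = np.degrees(angle_between(terrain_n, body_n))
print(f"terrain/body plane angle: {roughness_angle:.1f} deg")
```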

Consumer Insolvency in the Czech Republic

The Czech Republic is a country whose economy has undergone a transformation since 1989. Since joining the EU it has been striving to reduce the differences in its economic standard and the quality of its institutional environment in comparison with developed countries. According to an assessment carried out by the World Bank, the Czech Republic was long classed as a country whose institutional development was seen as problematic, and for many years one of the things it was rated most poorly on was its bankruptcy law. The new Insolvency Act, a modern law in its treatment of bankruptcy, was first adopted in the Czech Republic in 2006. This law, together with other regulatory measures, offers debt-ridden Czech economic subjects legal instruments which are well established and in common practice in developed market economies. Since then, analyses performed by the World Bank and the London-based EBRD have shown significant steps forward in the quality of Czech bankruptcy law. The Czech Republic still lacks an analytical apparatus which can offer a structured characterisation of the general and specific conditions of Czech company and household debt subject to current changes in the global economy; this area has so far not been given the attention it deserves. The lack of research is particularly clear as regards the analysis of household debt and householders' ability to settle their debts in a reasonable manner using legal and other state means of regulation. We assume that Czech households have recourse to a modern insolvency law, yet the effective application of this law is hampered by inconsistencies in the formal and informal institutions involved in resolving debt. This in turn is based on the assumption that this lack of consistency is more marked in cases of personal bankruptcy. Our aim is to identify the symptoms indicating that, for some time, the effective application of bankruptcy law in the Czech Republic will be hindered by factors originating in householders' relative inability to identify the risks of falling into debt.

Efficient Realization of an ADFE with a New Adaptive Algorithm

Decision feedback equalizers are commonly employed to reduce the error caused by intersymbol interference. Here, an adaptive decision feedback equalizer is presented with a new adaptation algorithm. The algorithm follows a block-based approach to the normalized least mean square (NLMS) algorithm with set-membership filtering, and achieves significantly lower computational complexity than its conventional NLMS counterpart with set-membership filtering. The results show that the proposed algorithm yields similar bit error rate performance over a reasonable range of signal-to-noise ratios in comparison with the latter.
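
The complexity saving in set-membership filtering comes from updating only when the error exceeds a bound. The sketch below shows the standard sample-by-sample set-membership NLMS update (not the authors' block-based variant); the channel, bound, and filter order are illustrative.

```python
import numpy as np

def sm_nlms(x, d, order=8, gamma=0.05, eps=1e-8):
    """Set-membership NLMS: returns weights and the number of updates made."""
    w = np.zeros(order)
    updates = 0
    for n in range(order, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # regressor, most recent sample first
        e = d[n] - w @ u                   # a-priori error
        if abs(e) > gamma:                 # update only outside the error bound
            mu = 1.0 - gamma / abs(e)      # data-dependent step size
            w += mu * e * u / (u @ u + eps)
            updates += 1
    return w, updates

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
d = np.convolve(x, [0.9, -0.4, 0.2], mode="full")[:len(x)]   # unknown channel
w, updates = sm_nlms(x, d)
print(f"updates performed: {updates}/{len(x) - 8}")   # far fewer than NLMS
```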

Turbine Follower Control Strategy Design Based on Developed FFPP Model

In this paper a comprehensive model of a fossil-fueled power plant (FFPP) is developed in order to evaluate the performance of a newly designed turbine follower controller. Considering the drawbacks of previous works, an overall model is developed to minimize the error between each subsystem model output and the experimental data obtained at the actual power plant. The developed model is organized into two main subsystems, namely the boiler and the turbine. Considering the characteristics of each FFPP subsystem, different modeling approaches are adopted. For the economizer, evaporator, superheater and reheater, first-order models are determined based on the principles of mass and energy conservation, and simulations verify their accuracy. Due to the nonlinear characteristics of the attemperator, a new model based on a genetic-fuzzy system utilizing the Pittsburgh approach is developed, showing promising performance compared with models derived by other methods such as ANFIS; the optimization constraints are handled using penalty functions. The effect of increasing the number of rules and membership functions on the performance of the proposed model is also studied and evaluated. The turbine model is developed based on the equation of adiabatic expansion, and the parameters of all evaluated models are tuned by means of evolutionary algorithms. Based on the developed model, a fuzzy PI controller is designed and successfully implemented in the turbine follower control strategy of the plant. In this control strategy, instead of keeping the controller parameters constant, they are adjusted online according to the error and the error rate. It is shown that the response of the system improves significantly and that fuel consumption decreases considerably.
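
A hypothetical miniature of the online gain-adjustment idea (not the paper's actual rule base): a PI controller whose gains are scheduled from the error and error rate through simple fuzzy-style membership weights. Gain ranges and membership shapes are made up.

```python
import numpy as np

def membership_large(v, scale):
    """Degree to which |v| is 'large' (saturating ramp in [0, 1])."""
    return float(np.clip(abs(v) / scale, 0.0, 1.0))

def scheduled_gains(e, de, kp_range=(0.5, 2.0), ki_range=(0.05, 0.4)):
    """Blend small/large gain values by the activation of 'error is large'."""
    w = max(membership_large(e, 1.0), membership_large(de, 0.5))
    kp = kp_range[0] + w * (kp_range[1] - kp_range[0])   # raise Kp when far
    ki = ki_range[1] - w * (ki_range[1] - ki_range[0])   # back off Ki when far
    return kp, ki

class FuzzyPI:
    def __init__(self, dt=0.1):
        self.dt, self.integral, self.prev_e = dt, 0.0, 0.0

    def step(self, setpoint, measurement):
        e = setpoint - measurement
        de = (e - self.prev_e) / self.dt
        kp, ki = scheduled_gains(e, de)    # gains adjusted online
        self.integral += e * self.dt
        self.prev_e = e
        return kp * e + ki * self.integral
```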

Anti-Money Laundering Requirements – Perceived Effectiveness

Anti-money laundering is commonly recognized as a set of procedures, laws or regulations designed to reduce the practice of generating income through illegal actions. In Malaysia, the government and law enforcement agencies have stepped up their capacities and efforts to curb money laundering since 2001. One of these measures was the enactment of the Anti-Money Laundering Act (AMLA) in 2001. The costs of implementing anti-money laundering requirements (AMLR) can be burdensome to those who are involved in enforcing them. The objective of this paper is to explore the perceived effectiveness of AMLR from the enforcement agencies' perspective. This is a preliminary study whose findings will help to give direction to further AML research in Malaysia. In addition, the results of this study provide empirical evidence on the perceived effectiveness of AMLR, prior to further investigations into barriers to and improvements of the implementation of the anti-money laundering regime in Malaysia.

Automatic Segmentation of Thigh Magnetic Resonance Images

Purpose: To develop a method for the automatic segmentation of adipose and muscular tissue in thighs from magnetic resonance images. Materials and methods: Thirty obese women were scanned on a Siemens Impact Expert 1T resonance machine, and 1500 images were finally used in the tests. The developed segmentation method is a recursive, multilevel process that makes use of several concepts such as shaped histograms, adaptive thresholding and connectivity. The segmentation process was implemented in Matlab and operates without the need for any user interaction. The whole set of images was segmented with the developed method. An expert radiologist segmented the same set of images following a manual procedure with the aid of the SliceOmatic software (Tomovision); these constituted our 'gold standard'. Results: The number of pixels on which the automatic and manual segmentation procedures coincided was measured, and the average agreement was above 90% in most of the images. Conclusions: The proposed approach allows effective automatic segmentation of thigh MRIs, comparable to expert manual performance.
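
As a minimal sketch of two of the ingredients named above, the snippet below applies histogram-based (Otsu) thresholding plus connected-component analysis to a single slice. It is far simpler than the recursive multilevel method of the paper (and in Python rather than Matlab); the input image is a placeholder.

```python
import numpy as np
from skimage.filters import threshold_otsu
from scipy import ndimage

def segment_slice(img):
    """Threshold a slice and keep only the largest connected region."""
    t = threshold_otsu(img)                  # histogram-based threshold
    mask = img > t                           # candidate foreground tissue
    labels, n = ndimage.label(mask)          # connected-component labeling
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)  # largest component only

img = np.random.default_rng(0).random((256, 256))   # placeholder MR slice
mask = segment_slice(img)
```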

Approximation Algorithm for the Shortest Approximate Common Superstring Problem

The Shortest Approximate Common Superstring (SACS) problem is: given a set of strings f = {w1, w2, ..., wn}, where no wi is an approximate substring of wj, i ≠ j, find a shortest string Sa such that every string of f is an approximate substring of Sa. When the number of strings n > 2, the SACS problem becomes NP-complete. In this paper, we present a greedy approximation algorithm for SACS. Our algorithm is a 1/2-approximation for the SACS problem and runs in O(n²·(l² + log n)) time, where n is the number of strings and l is the length of a string. It is based on computing the Length of the Approximate Longest Overlap (LALO).
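
A sketch in the spirit of the greedy scheme (not the paper's exact algorithm or its complexity bound): repeatedly merge the pair of strings with the longest approximate overlap, where the overlap tolerates up to max_mm Hamming mismatches as a stand-in for LALO.

```python
def approx_overlap(a, b, max_mm):
    """Longest suffix of a matching a prefix of b with <= max_mm mismatches."""
    for length in range(min(len(a), len(b)), 0, -1):
        mm = sum(x != y for x, y in zip(a[-length:], b[:length]))
        if mm <= max_mm:
            return length
    return 0

def greedy_sacs(strings, max_mm=1):
    strings = list(strings)
    while len(strings) > 1:
        best = (0, 0, 1)                      # (overlap, i, j)
        for i, a in enumerate(strings):
            for j, b in enumerate(strings):
                if i != j:
                    ov = approx_overlap(a, b, max_mm)
                    if ov > best[0]:
                        best = (ov, i, j)
        ov, i, j = best
        merged = strings[i] + strings[j][ov:]  # merge the best pair
        strings = [s for k, s in enumerate(strings) if k not in (i, j)]
        strings.append(merged)
    return strings[0]

print(greedy_sacs(["ACGTT", "CGTAA", "TAAGC"]))
```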

Intelligent Agents for Distributed Intrusion Detection System

This paper presents a distributed intrusion detection system (IDS) based on the concept of a community of specialized distributed agents, i.e., agents sharing the purpose of detecting distributed attacks. The semantics of intrusion events occurring in a predetermined network are defined. Correlation rules describe the process by which the proposed IDS combines captured events that are distributed both spatially and temporally; the IDS then tries to extract significant and broad patterns for a set of well-known attacks. The primary goal of our work is to provide intrusion detection and real-time prevention capability against insider attacks in distributed and fully automated environments.

Effect of Clustering on Energy Efficiency and Network Lifetime in Wireless Sensor Networks

A wireless sensor network is a multi-hop, self-configuring wireless network consisting of sensor nodes. The deployment of wireless sensor networks in many application areas, e.g., aggregation services, requires self-organization of the network nodes into clusters. An efficient way to enhance the lifetime of the system is to partition the network into distinct clusters, each with a high-energy node as cluster head. The various node clustering techniques that have appeared in the literature roughly fall into two families: those based on the construction of a dominating set and those based solely on energy considerations. Energy-optimized cluster formation for a set of randomly scattered wireless sensors is presented; sensors within a cluster are expected to communicate with the cluster head only. The energy constraints and limited computing resources of the sensor nodes present the major challenges in gathering the data. In this paper we propose a framework to study how partially correlated data affect the performance of clustering algorithms. The total energy consumption and network lifetime are analyzed by combining random geometry techniques and rate-distortion theory. We also present the relation between compression distortion and data correlation.
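
For concreteness, the snippet below uses a common first-order radio energy model (a textbook model, not necessarily the authors' exact formulation) to account for the per-round cost at a cluster head that receives from its members and forwards one aggregated packet to the sink. All constants and distances are illustrative.

```python
E_ELEC = 50e-9      # J/bit, electronics energy per bit sent or received
E_AMP  = 100e-12    # J/bit/m^2, amplifier energy (free-space d^2 path loss)

def tx_energy(bits, d):
    """Energy to transmit `bits` over distance d (meters)."""
    return bits * (E_ELEC + E_AMP * d ** 2)

def rx_energy(bits):
    """Energy to receive `bits`."""
    return bits * E_ELEC

def cluster_head_cost(k, packet_bits, d_out):
    """Per-round cost: receive from k members, forward one packet to sink."""
    return k * rx_energy(packet_bits) + tx_energy(packet_bits, d_out)

print(cluster_head_cost(k=10, packet_bits=4000, d_out=80.0))  # joules/round
```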