Combating Money Laundering in the Banking Industry: Malaysian Experience

Money laundering has been described by many as the lifeblood of crime and is a major threat to the economic and social well-being of societies. It has been recognized that the banking system has long been a central element of money laundering, due in part to the complexity and confidentiality of the banking system itself. It is generally accepted that effective anti-money laundering (AML) measures adopted by banks make it tougher for criminals to get their "dirty money" into the financial system. In fact, for law enforcement agencies, banks are considered an important source of valuable information for the detection of money laundering. From the banks' perspective, however, the main reason for their existence is to make as much profit as possible, so their cultural and commercial interests are quite distinct from those of the law enforcement authorities. Undoubtedly, AML laws create a major dilemma for banks, as they produce a significant shift in the way banks interact with their customers. Furthermore, the implementation of these laws not only creates significant compliance problems for banks but also has the potential to adversely affect their operations. As such, it is legitimate to ask whether these laws are effective in preventing money launderers from using banks, or whether they simply place an unreasonable burden on banks and their customers. This paper addresses these issues and analyzes them against the background of the Malaysian AML laws. It argues that effective coordination between the AML regulator and the banking industry is vital to minimize the problems faced by banks and thereby to ensure effective implementation of the laws in combating money laundering.

Design and Operation of a Multicarrier Energy System Based on a Multi-Objective Optimization Approach

Multi-energy systems can enhance system reliability and power quality. This paper presents an integrated approach to the design and operation of distributed energy resource (DER) systems based on energy hub modeling. A multi-objective optimization model is developed that takes an integrated view of the electricity and natural gas networks to determine the optimal design and operating conditions of DER systems under two conflicting objectives: minimization of total cost and minimization of environmental impact, assessed in terms of CO2 emissions. The mathematical model considers the energy demands of the site, local climate data, and the utility tariff structure, as well as the technical and financial characteristics of the candidate DER technologies. To meet the energy demands, photovoltaic and co-generation systems, a boiler, and the central power grid are considered. As an illustrative example, a hotel in Iran demonstrates the potential applications of the proposed method. The results show that increasing the satisfaction degree of the environmental objective leads to an increase in total cost.
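
The cost-versus-CO2 trade-off described above can be illustrated with a small weighted-sum scalarization. The Python sketch below dispatches a single electrical demand between the grid and a local CHP unit and sweeps the weight between the two objectives; the prices, emission factors, and capacities are illustrative assumptions, not values from the paper.

    # Toy weighted-sum sketch of the cost-vs-CO2 trade-off for dispatching a demand
    # between the grid and a local CHP unit. All numbers are illustrative assumptions.
    import numpy as np
    from scipy.optimize import linprog

    demand = 100.0                      # kWh of electricity to be supplied
    cost   = np.array([0.12, 0.18])     # $/kWh: [grid, CHP]  (assumed)
    co2    = np.array([0.60, 0.35])     # kg CO2/kWh: [grid, CHP]  (assumed)
    cap    = np.array([100.0, 80.0])    # kWh capacity of each source (assumed)

    pareto = []
    for w in np.linspace(0.0, 1.0, 11):          # w = weight on the cost objective
        # Normalize both objectives so the weights are comparable.
        c = w * cost / cost.max() + (1 - w) * co2 / co2.max()
        res = linprog(c, A_eq=[[1.0, 1.0]], b_eq=[demand],
                      bounds=list(zip([0.0, 0.0], cap)))
        x = res.x
        pareto.append((w, float(cost @ x), float(co2 @ x)))

    for w, c_tot, e_tot in pareto:
        print(f"w_cost={w:.1f}  cost=${c_tot:6.2f}  CO2={e_tot:6.1f} kg")

Sweeping the weight toward the environmental objective shifts generation to the cleaner but more expensive unit, reproducing the qualitative behavior reported in the abstract.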

Performance Evaluation of Neural Network Prediction for Data Prefetching in Embedded Applications

Embedded systems must respect stringent real-time constraints. Various hardware components included in such systems, such as cache memories, exhibit variability and therefore affect execution time. Indeed, a cache memory access from an embedded microprocessor may result in a cache hit, where the data is available, or a cache miss, where the data must be fetched from external memory with an additional delay. It is therefore highly desirable to predict future memory accesses during execution in order to prefetch data without incurring delays. In this paper, we evaluate the potential of several artificial neural networks for the prediction of instruction memory addresses. Neural networks can capture the nonlinear behavior observed in memory accesses during program execution, and their numerous demonstrated hardware implementations favor this choice over traditional forecasting techniques for inclusion in embedded systems. However, embedded applications execute millions of instructions, and therefore millions of addresses must be predicted. This very challenging problem of neural-network-based prediction of large time series is approached in this paper by evaluating various architectures based on the recurrent neural network paradigm, with pre-processing based on the Self-Organizing Map (SOM) classification technique.
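
As a rough software illustration of the two-stage idea (SOM-based pre-processing followed by neural prediction), the Python sketch below quantizes a synthetic instruction-address-delta trace with a tiny 1-D SOM and trains a feed-forward network, standing in for the recurrent architectures evaluated in the paper, to predict the next delta class. The trace, SOM size, and network are all illustrative assumptions.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Synthetic address-delta trace: a tight loop pattern with occasional branches.
    pattern = np.array([4, 4, 4, 4, -16], dtype=float)
    deltas = np.tile(pattern, 1000)
    jumps = rng.choice(len(deltas), size=200, replace=False)
    deltas[jumps] = rng.choice([128.0, -64.0], size=200)

    # Stage 1: a tiny 1-D SOM quantizes the deltas into a small symbol alphabet.
    def train_som(x, n_nodes=5, epochs=10, lr=0.5, radius=1.0):
        w = rng.uniform(x.min(), x.max(), n_nodes)           # node weights
        for _ in range(epochs):
            for v in rng.permutation(x):
                bmu = np.argmin(np.abs(w - v))               # best-matching unit
                dist = np.abs(np.arange(n_nodes) - bmu)
                w += lr * np.exp(-dist ** 2 / (2 * radius ** 2)) * (v - w)
            lr, radius = lr * 0.7, radius * 0.7
        return w

    som_w = train_som(deltas)
    symbols = np.argmin(np.abs(som_w[None, :] - deltas[:, None]), axis=1)

    # Stage 2: predict the next symbol from a sliding window of past symbols.
    win = 8
    X = np.array([symbols[i:i + win] for i in range(len(symbols) - win)])
    y = symbols[win:]
    split = int(0.8 * len(X))
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
    clf.fit(X[:split], y[:split])
    print("next-delta-class accuracy:", round(clf.score(X[split:], y[split:]), 3))

A correctly predicted delta class would let a prefetcher request the corresponding line ahead of the actual access.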

How Does Prior Knowledge Affect Users' Understanding of System Requirements?

Requirements are critical to system validation, as they guide all subsequent stages of systems development. Inadequately specified requirements generate systems that require major revisions or cause outright system failure. Use cases have become the main vehicle for requirements capture in many current object-oriented (OO) development methodologies, and a means for developers to communicate with different stakeholders. In this paper we present the results of a laboratory experiment that explored whether different use case formats are equally effective in facilitating understanding by high-knowledge users. Results showed that providing diagrams along with the textual use case descriptions significantly improved user comprehension of system requirements in both familiar and unfamiliar application domains. However, when comparing groups that received textual descriptions accompanied by diagrams at different levels of detail (simple and detailed), we found no significant difference in performance.

Conditions on Blind Source Separability of Linear FIR-MIMO Systems with Binary Inputs

In this note, we investigate the blind source separability of linear FIR-MIMO systems. The concept of semi-reversibility of a system is presented. It is shown that for a semi-reversible system, if the input signals belong to a binary alphabet, then the source data can be blindly separated. A sufficient condition for a system to be semi-reversible is obtained. It is also shown that the proposed criterion is weaker than those in the literature, which require the channel matrix to be irreducible/invertible or reversible.
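
For context, a generic linear FIR-MIMO observation model with binary sources (the specific notation and assumptions of the note are not reproduced here) can be written as

    x(n) = \sum_{k=0}^{L} H_k \, s(n-k) + v(n), \qquad s_i(n) \in \{-1, +1\},

where x(n) is the received vector, the matrices H_k form the FIR channel, s(n) is the binary source vector, and v(n) is additive noise; blind separation asks for the recovery of s(n) from the observations {x(n)} alone, without knowledge of the H_k.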

A Study of Grounding Grid Characteristics with Conductive Concrete

The purpose of this paper is to improve the electromagnetic characteristics of a grounding grid by applying conductive concrete. The conductive concrete in this study is applied under an extra-high-voltage (EHV, 345 kV) system located in a high-tech industrial park or science park. Replacing the soil surrounding the grounding grid with conductive concrete can reduce equipment damage and personal injury caused by switching surges. Two cases on the EHV distribution system in a high-tech industrial park are analyzed for four soil material configurations. The comparison of these configurations shows that conductive concrete can effectively reduce the damage caused by electromagnetic transients. Placing the grounding grid 1.0 m underground, with conductive concrete extending from the ground surface to 1.25 m underground, clearly improves the electromagnetic characteristics and thus enhances protective efficiency.

Analysis of the Feature Space for a 2D/3D Vision-Based Emotion Recognition Method

In modern human-computer interaction (HCI) systems, emotion recognition is becoming an essential capability. The quest for effective and reliable emotion recognition in HCI has resulted in a need for better face detection, feature extraction, and classification. In this paper we present the results of a feature space analysis, after briefly explaining our fully automatic vision-based emotion recognition method. We demonstrate the compactness of the feature space and show how the 2D/3D-based method yields superior features for the purpose of emotion classification. We also show that feature normalization creates a largely person-independent feature space. As a consequence, the classifier architecture has only a minor influence on the classification result, which we illustrate with confusion matrices. For this purpose, advanced classification algorithms such as Support Vector Machines and Artificial Neural Networks are employed, as well as the simple k-Nearest Neighbor classifier.
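
A minimal sketch of the classifier comparison described above is given below in Python: normalize the feature vectors, then train SVM, ANN, and k-NN classifiers and inspect their confusion matrices. The synthetic features merely stand in for the paper's 2D/3D facial features; all names and parameters here are illustrative assumptions.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import confusion_matrix, accuracy_score

    # 6 emotion classes, 20-dimensional feature vectors (illustrative).
    X, y = make_classification(n_samples=1200, n_features=20, n_informative=12,
                               n_classes=6, n_clusters_per_class=1, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    scaler = StandardScaler().fit(X_tr)          # feature normalization
    X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

    classifiers = {
        "SVM": SVC(kernel="rbf", C=1.0),
        "ANN": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
        "k-NN": KNeighborsClassifier(n_neighbors=5),
    }
    for name, clf in classifiers.items():
        clf.fit(X_tr, y_tr)
        y_hat = clf.predict(X_te)
        print(name, "accuracy:", round(accuracy_score(y_te, y_hat), 3))
        print(confusion_matrix(y_te, y_hat))

With well-normalized, discriminative features, the three classifiers typically produce similar confusion matrices, which is the effect the abstract highlights.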

An FPGA Implementation of Intelligent Visual Based Fall Detection

Falling is one of the major concerns and threats to the independence of the elderly in their daily lives. With the significant worldwide growth of the aging population, a promising fall detection solution is needed that operates with high accuracy in real time and supports large-scale deployment using multiple cameras. The Field Programmable Gate Array (FPGA) is a highly promising tool for use as a hardware accelerator in many emerging embedded vision-based systems. The main objective of this paper is therefore to present an FPGA-based solution for visual fall detection that meets stringent real-time requirements with high accuracy. A hardware architecture for visual fall detection that exploits pixel locality to reduce memory accesses is proposed. By exploiting the parallel and pipelined architecture of the FPGA, our hardware implementation achieves a performance of 60 fps for a series of video analytic functions at VGA resolution (640x480). The results of this work show that the FPGA has great potential for enabling large-scale vision systems in the future healthcare industry, due to its flexibility and scalability.
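
For readers unfamiliar with the kind of video analytic functions such a pipeline computes, the Python/OpenCV sketch below shows a common software reference for visual fall detection (background subtraction followed by a bounding-box aspect-ratio heuristic). It is not the paper's FPGA architecture; the camera index, thresholds, and the aspect-ratio rule are illustrative assumptions.

    import cv2

    cap = cv2.VideoCapture(0)                       # or a video file path
    bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = bg.apply(frame)                      # foreground (moving person) mask
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < 2000:           # ignore small blobs
                continue
            x, y, w, h = cv2.boundingRect(c)
            fallen = w > 1.3 * h                    # wide, low silhouette: possible fall
            color = (0, 0, 255) if fallen else (0, 255, 0)
            cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
        cv2.imshow("fall-detection sketch", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()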

Housing Defects of Newly Completed Houses: An Analysis Using the Condition Survey Protocol (CSP) 1 Matrix

Housing is a basic human right. A newly delivered house should be free from any defects, even those that people normally regard as 'cosmetic'. This paper studies the building defects of 72 newly completed double-storey terraced houses located in Bangi, Selangor. The building survey was carried out using Protocol 1 (visual inspection). For new houses, the survey is very stringent in determining defect condition and priority. The survey and reporting procedures follow the CSP1 Matrix, which involves a scoring system, photographs, and plan tagging. The analysis was done using the Statistical Package for the Social Sciences (SPSS). The findings reveal that 2,119 defects were recorded in the 72 terraced houses. The cumulative score obtained was 27,644, and the overall rating is 13.05 (consistent with the cumulative score divided by the number of defects). These results indicate that the construction quality of the newly completed terraced houses is low and not up to the standard that a new house should meet.

Neural Network Optimal Power Flow (NN-OPF) Based on IPSO with a Developed Load Clustering Method

An Optimal Power Flow based on Improved Particle Swarm Optimization (OPF-IPSO) with a generator capability curve constraint is used by the NN-OPF as a reference to obtain generator scheduling patterns. There are three stages in designing the NN-OPF. The first stage is the design of the OPF-IPSO with the generator capability curve constraint. The second stage is clustering the load into specific ranges and calculating its index. The third stage is training the NN-OPF using the constructive back-propagation method. In the training process, the total load and load index are used as inputs, and the generator scheduling pattern as the output. The data used in this paper are from the Java-Bali power system, and the simulations are carried out in MATLAB.
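
The second and third stages can be sketched as follows in Python: cluster total loads into ranges, then train a neural network mapping (total load, load index) to a generator schedule. The reference schedules below come from a trivial proportional dispatch rule, standing in for the OPF-IPSO solutions used in the paper, and all numbers are illustrative assumptions (the paper itself uses the Java-Bali system and MATLAB).

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    total_load = rng.uniform(800.0, 2200.0, size=500)            # MW, assumed range

    # Stage 2: cluster the load into ranges and compute a load index per sample.
    km = KMeans(n_clusters=4, n_init=10, random_state=0)
    load_index = km.fit_predict(total_load.reshape(-1, 1))

    # Stand-in "reference schedules": split load between 3 generators in fixed shares.
    shares = np.array([0.5, 0.3, 0.2])
    schedule = total_load[:, None] * shares[None, :]              # (500, 3) MW

    # Stage 3: train the NN on (total load, load index) -> generator schedule.
    X = np.column_stack([total_load, load_index])
    nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    nn.fit(X, schedule)

    idx = int(km.predict([[1500.0]])[0])
    print("predicted dispatch for 1500 MW (cluster", idx, "):",
          nn.predict([[1500.0, idx]]).round(1))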

Designing Early Warning System: Prediction Accuracy of Currency Crisis by Using k-Nearest Neighbour Method

Developing a stable early warning system (EWS) model capable of giving accurate predictions is a challenging task. This paper introduces the k-nearest neighbour (k-NN) method, which has not previously been applied to predicting currency crises, with the aim of increasing prediction accuracy. The performance of the proposed k-NN depends on the choice of distance metric; in our analysis we consider the Euclidean and Manhattan distances. For comparison, we employ three other methods: logistic regression analysis (logit), back-propagation neural network (NN), and sequential minimal optimization (SMO). The analysis, using datasets from 8 countries with 13 macroeconomic indicators per country, shows that the proposed k-NN method with k = 4 and the Manhattan distance performs better than the other methods.
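
The best-performing configuration reported above (k = 4, Manhattan distance) can be sketched in Python as follows; the synthetic data merely stands in for the 13 macroeconomic indicators per country and is not the paper's dataset.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.neighbors import KNeighborsClassifier

    # Crisis episodes treated as the minority class (illustrative assumption).
    X, y = make_classification(n_samples=600, n_features=13, n_informative=8,
                               weights=[0.85, 0.15], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    scaler = StandardScaler().fit(X_tr)          # indicators on very different scales
    knn = KNeighborsClassifier(n_neighbors=4, metric="manhattan")
    knn.fit(scaler.transform(X_tr), y_tr)
    print("crisis-prediction accuracy:", knn.score(scaler.transform(X_te), y_te))

Scaling the indicators before applying a distance-based method matters, since macroeconomic variables differ by orders of magnitude.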

Using Keystroke Dynamics for Personal Security Systems

This paper presents an approach to biometric authentication through keystroke dynamics, which aims to identify a person from his or her habitual typing rhythm on a conventional keyboard. Seven experiments were carried out, varying the number of prototypes, the threshold, the features, and the choice of the timing features in the feature vector. The results show that the use of keystroke dynamics is simple and efficient for personal authentication; the best result was obtained using 90% of the features, with a 4.44% FRR and 0% FAR.
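
A minimal Python sketch of keystroke-dynamics verification is shown below: build a timing feature vector (dwell and flight times) from key press/release timestamps, average enrollment samples into a prototype, and accept a new sample if its distance to the prototype is below a threshold. The timings and the threshold are illustrative assumptions, not the paper's values.

    import numpy as np

    def timing_features(events):
        """events: list of (key, press_time, release_time) for a fixed phrase."""
        dwell = [r - p for _, p, r in events]                       # key hold times
        flight = [events[i + 1][1] - events[i][2]                   # release -> next press
                  for i in range(len(events) - 1)]
        return np.array(dwell + flight)

    # Enrollment: several typings of the same phrase (times in seconds, assumed).
    enroll = [
        [("s", 0.00, 0.09), ("e", 0.15, 0.22), ("c", 0.30, 0.41)],
        [("s", 0.00, 0.10), ("e", 0.16, 0.24), ("c", 0.31, 0.40)],
        [("s", 0.00, 0.08), ("e", 0.14, 0.21), ("c", 0.29, 0.39)],
    ]
    prototype = np.mean([timing_features(s) for s in enroll], axis=0)

    def verify(sample, threshold=0.08):
        dist = np.linalg.norm(timing_features(sample) - prototype)
        return dist < threshold

    genuine  = [("s", 0.00, 0.09), ("e", 0.15, 0.23), ("c", 0.30, 0.40)]
    impostor = [("s", 0.00, 0.20), ("e", 0.45, 0.60), ("c", 0.80, 1.05)]
    print("genuine accepted :", verify(genuine))
    print("impostor accepted:", verify(impostor))

Raising the threshold lowers the false rejection rate (FRR) at the cost of a higher false acceptance rate (FAR), which is the trade-off tuned in the experiments.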

An Improved STBC Structure and Transmission Scheme for High Rate and Reliability in OFDMA Cooperative Communication

Space-time block codes (STBC) have been studied to achieve full diversity and full rate in multiple-input multiple-output (MIMO) systems. Achieving full rate is difficult in cooperative communications because each user consumes time slots to transmit information in the cooperation phase. Combining MIMO systems with cooperative communications has therefore been investigated to obtain both full diversity and full rate. In orthogonal frequency division multiple access (OFDMA) systems, an alternative is for each user to share its allocated subchannels, instead of using a MIMO system, to improve the transmission rate. In this paper, a decode-and-forward (DF) based cooperative communication scheme is proposed. The proposed scheme improves transmission rate and reliability in the multi-path fading channel of the OFDMA uplink through a modified STBC structure and subchannel sharing.
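
For background only (the paper's modified structure is not reproduced here), the classical full-rate Alamouti STBC for two transmit antennas or two cooperating users sends, over two symbol periods,

    \mathbf{X} = \begin{pmatrix} s_1 & s_2 \\ -s_2^{*} & s_1^{*} \end{pmatrix},

where rows index time slots and columns index antennas (or users); this code achieves full diversity at rate one for two transmitters, which is the kind of full-rate, full-diversity behavior discussed above.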

A Supervisory Scheme for Step-Wise Safe Switching Controllers

A supervisory scheme is proposed that implements Stepwise Safe Switching Logic. The functionality of the supervisory scheme is organized into the following eight functional units: the stepwise safe switching unit, the common controllers design unit, the experimentation unit, the simulation unit, the identification unit, the trajectory cruise unit, the operating points unit, and the expert system unit. The supervisory scheme orchestrates both the off-line preparative actions and the on-line actions that implement the Stepwise Safe Switching Logic. The proposed scheme is a generic tool that may easily be applied to a variety of industrial control processes and may be implemented as an automation software system using a high-level programming environment such as MATLAB.

Direct Torque Control (DTC) of an Induction Motor Driving a Centrifugal Pump Supplied by a Photovoltaic Generator

In this paper we study a centrifugal pump control system driven by a three-phase induction motor supplied by a photovoltaic (PV) generator. The system includes a solar panel, a DC/DC converter with its MPPT control, a three-phase pulse width modulation (PWM) voltage inverter, and a centrifugal pump driven by a three-phase induction motor. In order to control the flow of the centrifugal pump, direct torque control (DTC) of the induction machine is used. To illustrate the performance of the control, simulation results are presented using Matlab/Simulink.
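
The abstract does not specify which MPPT algorithm drives the DC/DC converter; the Python sketch below shows the common perturb-and-observe (P&O) approach as one possibility, with an assumed PV curve and step size.

    # Perturb-and-observe (P&O) MPPT sketch. P&O is shown here as one common
    # choice; the simple PV curve and the step size are illustrative assumptions.
    def pv_power(v):
        """Toy PV P-V curve with a single maximum (assumed)."""
        i = max(0.0, 8.0 * (1.0 - (v / 36.0) ** 7))   # crude current model
        return v * i

    def perturb_and_observe(v0=20.0, dv=0.5, steps=60):
        v, p_prev, direction = v0, pv_power(v0), +1
        for _ in range(steps):
            v += direction * dv                        # perturb the operating voltage
            p = pv_power(v)
            if p < p_prev:                             # power dropped: reverse direction
                direction = -direction
            p_prev = p
        return v, p_prev

    v_mpp, p_mpp = perturb_and_observe()
    print(f"operating point after P&O: V = {v_mpp:.1f} V, P = {p_mpp:.1f} W")

The controller climbs the P-V curve and then oscillates around the maximum power point, which sets the DC-link power available to the inverter and the DTC-controlled motor.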

Bio-inspired Audio Content-Based Retrieval Framework (B-ACRF)

Content-based music retrieval generally involves analyzing, searching, and retrieving music based on low- or high-level features of a song, which are normally used to represent artists, songs, or music genres. Identifying them normally involves feature extraction and classification tasks. In theory, the more features that are analyzed, the better the classification accuracy that can be achieved, but at the cost of longer execution time. A technique for selecting significant features is therefore important, as it reduces the dimensionality of the features used in classification and contributes to accuracy. An Artificial Immune System (AIS) approach is investigated and applied to the classification task. A bio-inspired audio content-based retrieval framework (B-ACRF) is proposed at the end of this paper, which embraces the issues that need further consideration in music retrieval performance.

Comparative Study of Evolutionary Model and Clustering Methods in Circuit Partitioning Pertaining to VLSI Design

Partitioning is a critical area of VLSI CAD. In order to build complex digital logic circuits, it is often essential to sub-divide multi-million-transistor designs into manageable pieces. This paper examines various partitioning techniques in VLSI CAD targeted at different applications. We propose an evolutionary time-series model and a statistical glitch prediction system that uses a neural network with global feature selection by means of clustering, for partitioning a circuit. For the evolutionary time-series model, we make use of genetic, memetic, and neuro-memetic techniques. Our work focuses on the clustering methods K-means and EM. A comparative study is provided for all techniques applied to the problem of circuit partitioning in VLSI design. The performance of all approaches is compared using the MCNC standard-cell placement benchmark netlists. Analysis of the experimental results shows that the neuro-memetic model achieves greater performance than the other models in recognizing sub-circuits with a minimum number of interconnections between them.
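
The two clustering methods mentioned above (K-means and EM) can be compared on a toy two-way partitioning task as in the Python sketch below, where cells are given synthetic 2-D coordinates and random two-pin nets, and the quality metric is the number of cut nets. This is an illustrative sketch, not the paper's MCNC benchmark setup.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    # Two loosely separated groups of cells, as if pre-placed (assumed layout).
    cells = np.vstack([rng.normal([0, 0], 1.0, (50, 2)),
                       rng.normal([4, 1], 1.0, (50, 2))])
    nets = rng.integers(0, len(cells), size=(300, 2))     # random two-pin nets

    def cut_size(labels):
        """Number of nets whose two pins fall in different partitions."""
        return int(np.sum(labels[nets[:, 0]] != labels[nets[:, 1]]))

    km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(cells)
    em_labels = GaussianMixture(n_components=2, random_state=0).fit_predict(cells)

    print("K-means cut nets :", cut_size(km_labels))
    print("EM (GMM) cut nets:", cut_size(em_labels))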

Robot Map Building from Sonar and Laser Information using DSmT with Discounting Theory

In this paper, a new method of information fusion, DSmT (Dezert and Smarandache Theory), is applied to managing and handling the uncertain information arising in robot map building. Here we build a grid map from sonar sensors and a laser range finder (LRF); the uncertainty mainly comes from these two sensors. To address the uncertainty in a static environment, we adopt the Classic DSm (DSmC) model for the sonar sensors and the laser range finder, and construct the general basic belief assignment function (gbbaf) for each. Since evidence sources are generally unreliable in a physical system, discounting theory must be considered before applying DSmT. Finally, a Pioneer II mobile robot serves as the simulation platform. We build a 3D grid map of the belief layout and compare the maps built with DSmT and with DST. The simulation experiment shows that DSmT is effective, especially in dealing with highly conflicting information. In short, this study not only provides a new method for map building in a static environment, but also supplies a theoretical foundation for further applying Hybrid DSmT (DSmH) to dynamic unknown environments and cooperative multi-robot map building.
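
The discounting step mentioned above is commonly written, in Shafer's classical form (the exact rule used in the paper is not reproduced here), for a source with reliability factor \alpha \in [0,1] as

    m^{\alpha}(A) = \alpha \, m(A) \quad \text{for } A \neq \Theta, \qquad
    m^{\alpha}(\Theta) = 1 - \alpha + \alpha \, m(\Theta),

where m is the (generalized) basic belief assignment and \Theta the frame of discernment; a fully unreliable source (\alpha = 0) contributes nothing but total ignorance.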

Finding Sparse Features in Face Detection Using Genetic Algorithms

Although face detection is not a new activity in the field of image processing, it is still an open area for research. A landmark in this field is the work reported by Viola, and a recent analogue is that of Huang et al. Both use similar features and a similar training process. The former detects only upright faces, while the latter can detect multi-view faces in still grayscale images using new features called 'sparse features'. Finding these features with the proposed methods is very time-consuming and inefficient. Here, we propose a new approach for finding sparse features using a genetic algorithm. This method requires less computational cost and obtains more effective features during the learning process for face detection, resulting in higher accuracy.
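
A minimal genetic-algorithm sketch for selecting a sparse subset of candidate features is given below in Python. The fitness is the cross-validated accuracy of a simple classifier on synthetic data; the paper's actual sparse features and detector training are not reproduced, and all GA parameters are illustrative assumptions.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=400, n_features=40, n_informative=8,
                               random_state=0)
    n_feat, pop_size, n_gen, p_mut = X.shape[1], 30, 20, 0.02

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        clf = DecisionTreeClassifier(max_depth=4, random_state=0)
        acc = cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()
        return acc - 0.002 * mask.sum()              # prefer sparser feature sets

    pop = rng.integers(0, 2, size=(pop_size, n_feat))    # bitstring chromosomes
    for gen in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        # Tournament selection
        parents = pop[[max(rng.choice(pop_size, 2), key=lambda i: scores[i])
                       for _ in range(pop_size)]]
        # One-point crossover
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, n_feat)
            children[i, cut:] = parents[i + 1, cut:].copy()
            children[i + 1, cut:] = parents[i, cut:].copy()
        # Bit-flip mutation
        flips = rng.random(children.shape) < p_mut
        children[flips] = 1 - children[flips]
        pop = children

    best = max(pop, key=fitness)
    print("selected features:", np.flatnonzero(best),
          " fitness:", round(fitness(best), 3))

The sparsity penalty in the fitness plays the role that efficiency plays in the paper: feature subsets that are both small and discriminative are favored.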

Hybrid Algorithm for Hammerstein System Identification Using Genetic Algorithm and Particle Swarm Optimization

This paper presents a method for model selection and identification of Hammerstein systems by hybridization of the genetic algorithm (GA) and particle swarm optimization (PSO). The unknown nonlinear static part to be estimated is approximately represented by an automatic choosing function (ACF) model. The weighting parameters of the ACF and the system parameters of the linear dynamic part are estimated by the linear least-squares method. The adjusting parameters of the ACF model structure, on the other hand, are selected by the hybrid GA-PSO algorithm, with the Akaike information criterion used as the evaluation function. Simulation results demonstrate the effectiveness of the proposed hybrid algorithm.
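
The linear least-squares step can be illustrated with the short Python sketch below. Here the static nonlinearity is represented by a simple polynomial basis, standing in for the ACF model used in the paper, and the GA/PSO structure-selection stage is omitted; all system values are assumed.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 500
    u = rng.uniform(-1.0, 1.0, N)

    # "True" Hammerstein system (assumed): static nonlinearity f(u), then
    # first-order linear dynamics y(k) = a1*y(k-1) + f(u(k)) + noise.
    f_true = lambda u: u + 0.5 * u ** 2 - 0.3 * u ** 3
    a1_true = 0.7
    y = np.zeros(N)
    for k in range(1, N):
        y[k] = a1_true * y[k - 1] + f_true(u[k]) + 0.01 * rng.standard_normal()

    # Over-parameterized linear regression:
    #   y(k) = a1*y(k-1) + c1*u(k) + c2*u(k)^2 + c3*u(k)^3
    Phi = np.column_stack([y[:-1], u[1:], u[1:] ** 2, u[1:] ** 3])
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    print("estimated [a1, c1, c2, c3]:", theta.round(3))
    print("true      [a1, c1, c2, c3]:", [a1_true, 1.0, 0.5, -0.3])

Because both the dynamic and nonlinearity parameters enter linearly in this parameterization, a single least-squares solve recovers them; the hybrid GA-PSO in the paper is then responsible for choosing the structural (non-linear-in-the-parameters) quantities.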