Digital Paradoxes in Learning Theories

As a learning theory tries to borrow from science a framework on which to found its method, it reveals paradoxes and paralysing contradictions. These result, on the one hand, from adopting a learning/teaching model as if it were a mere “transfer of data” (the mechanical learning approach), and on the other hand from borrowing complexity theory (an indeterministic and non-linear model), which risks nullifying every educational effort. This work aims at describing the existing criticism and unveiling the antinomic nature of such paradoxes, focussing on a view in which neither the mechanical learning perspective nor the chaotic and nonlinear model can threaten and jeopardize the educational work. The author intends to retrace the steps that led to these paradoxes and to unveil their antinomic nature. This could serve to explain some current misunderstandings about the real usefulness of ICT within young people's learning process and growth.

Speed Control of a Permanent Magnet Synchronous Machine (PMSM) Fed by an Inverter Voltage Fuzzy Control Approach

This paper deals with the synthesis of a fuzzy controller applied to a permanent magnet synchronous machine (PMSM) with a guaranteed H∞ performance. To design this fuzzy controller, the nonlinear model of the PMSM is approximated by a Takagi-Sugeno fuzzy model (T-S fuzzy model), and the so-called parallel distributed compensation (PDC) is employed. Next, we derive the H∞-norm property. The latter is cast in terms of linear matrix inequalities (LMIs) while minimizing the H∞ norm of the transfer function T_ev between the disturbance and the error. Experimental and simulation results on a permanent magnet synchronous machine illustrate the effects of the fuzzy modelling and of the controller design via the PDC.
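
As a minimal sketch of the PDC idea only: a two-rule T-S model whose local state-feedback gains F1, F2 are blended by the rule memberships. The matrices and gains below are illustrative placeholders; the actual LMI-based H∞ synthesis of the gains described in the abstract is not reproduced here.

```python
import numpy as np

# Illustrative two-rule T-S model; A1, A2, B, F1, F2 are placeholders,
# not the PMSM model or the LMI-synthesised gains of the paper.
A1 = np.array([[0.0, 1.0], [-2.0, -1.0]])
A2 = np.array([[0.0, 1.0], [-4.0, -1.5]])
B = np.array([[0.0], [1.0]])
F1 = np.array([[3.0, 2.0]])   # hypothetical local gain for rule 1
F2 = np.array([[5.0, 2.5]])   # hypothetical local gain for rule 2

def memberships(z, z_min=-1.0, z_max=1.0):
    """Normalised membership grades h1, h2 of the premise variable z."""
    z = np.clip(z, z_min, z_max)
    h1 = (z_max - z) / (z_max - z_min)
    return np.array([h1, 1.0 - h1])

def pdc_control(x, z):
    """PDC law: u = -sum_i h_i(z) * F_i @ x, sharing the T-S model's rules."""
    h = memberships(z)
    return -(h[0] * F1 + h[1] * F2) @ x

x = np.array([0.2, -0.1])
print("u =", pdc_control(x, z=0.3))
```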

Determining Optimal Demand Rate and Production Decisions: A Geometric Programming Approach

In this paper a nonlinear model is presented to demonstrate the relation between the production and marketing departments. By introducing functions such as a pricing cost function and a market-share loss function, we try to capture aspects of market modelling that have not been considered before. The proposed model is a constrained signomial geometric programming model. To solve the model, after suitable variable modifications, an iterative technique based on the concept of the geometric mean is introduced for the resulting non-standard posynomial model; the technique can be applied to a wide variety of models in non-standard posynomial geometric programming form. Finally, a numerical analysis is presented to verify the validity of the proposed model.
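
For orientation only, a toy standard posynomial geometric program solved with CVXPY's log-log convex (gp=True) mode. The price/quantity variables, coefficients and constraints are invented for illustration; the paper's model is a non-standard signomial program solved with a geometric-mean-based iteration, which is not reproduced here.

```python
import cvxpy as cp

# Toy posynomial GP (illustrative, not the paper's model): choose price p and
# production quantity q to minimise cost subject to demand and capacity limits.
p = cp.Variable(pos=True)   # price
q = cp.Variable(pos=True)   # production quantity

cost = 40 * q + 100 * q**-1 * p**0.5        # posynomial objective
constraints = [
    2 * p**-1.2 <= q,                        # demand (monomial) must be met
    q <= 50,                                 # capacity limit
]
problem = cp.Problem(cp.Minimize(cost), constraints)
problem.solve(gp=True)                       # solve as a geometric program
print("p* =", p.value, "q* =", q.value, "cost* =", problem.value)
```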

Analyzing the Factors Affecting Passenger Car Breakdowns Using a Com-Poisson GLM

The number of breakdowns experienced by a machine is a highly under-dispersed count random variable, and its value can be attributed to factors related to the mechanical input and output of that machine. Analyzing such under-dispersed count observations as a function of explanatory factors has been a challenging problem. In this paper, we aim at estimating the effects of various factors on the number of breakdowns experienced by a passenger car, based on a study performed in Mauritius over a year. We observe that the number of passenger car breakdowns is highly under-dispersed. These data are therefore modelled and analyzed using a Com-Poisson regression model. We use a quasi-likelihood estimation approach to estimate the parameters of the model. The under-dispersion parameter is estimated to be 2.14, justifying the appropriateness of the Com-Poisson distribution for modelling the under-dispersed count responses recorded in this study.
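
A minimal sketch of fitting a Conway-Maxwell (Com-)Poisson distribution to under-dispersed counts by maximum likelihood, assuming simulated data; the paper instead uses quasi-likelihood estimation of a regression model with covariates, which is more involved and not shown here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def com_poisson_logZ(lam, nu, terms=200):
    """log of the normalising constant Z(lam, nu) = sum_j lam^j / (j!)^nu."""
    j = np.arange(terms)
    log_terms = j * np.log(lam) - nu * gammaln(j + 1)
    m = log_terms.max()
    return m + np.log(np.exp(log_terms - m).sum())

def neg_loglik(params, y):
    lam, nu = np.exp(params)             # log-parameterisation keeps lam, nu > 0
    logZ = com_poisson_logZ(lam, nu)
    return -np.sum(y * np.log(lam) - nu * gammaln(y + 1) - logZ)

# Illustrative under-dispersed counts (e.g. yearly breakdowns per car).
y = np.array([1, 2, 2, 1, 2, 3, 2, 2, 1, 2, 2, 3, 2, 1, 2])

res = minimize(neg_loglik, x0=np.log([2.0, 1.0]), args=(y,), method="Nelder-Mead")
lam_hat, nu_hat = np.exp(res.x)
print(f"lambda = {lam_hat:.3f}, nu = {nu_hat:.3f}  (nu > 1 indicates under-dispersion)")
```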

Indexing and Searching of Image Data in Multimedia Databases Using Axial Projection

This paper introduces and studies new indexing techniques for content-based queries in image databases. Indexing is the key to providing sophisticated, accurate and fast searches for queries on image data. This research describes a new indexing approach based on linear modeling of signals, using bases for modeling. A basis is a set of chosen images, and modeling an image is a least-squares approximation of the image as a linear combination of the basis images. The coefficients of the basis images are taken together to serve as the index for that image. The paper describes the implementation of the indexing scheme and presents the findings of our extensive evaluation, which was conducted to optimize (1) the choice of the basis matrix (B) and (2) the size (N) of the index. Furthermore, we compare the performance of our indexing scheme with other schemes. Our results show that our scheme performs significantly better.
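
A minimal sketch of the indexing idea, assuming the basis images are stacked as columns of a matrix B: the index of an image is its least-squares coefficient vector with respect to that basis. The data below are random and purely illustrative.

```python
import numpy as np

def build_index(image, basis_images):
    """Least-squares index: coefficients a minimising ||B a - image||_2.

    basis_images : list of N basis images (all the same shape).
    Returns a length-N coefficient vector used as the image's index.
    """
    B = np.column_stack([b.ravel() for b in basis_images])   # pixels x N
    a, *_ = np.linalg.lstsq(B, image.ravel(), rcond=None)
    return a

# Illustrative data: 4 random "basis" images and one query image.
rng = np.random.default_rng(0)
basis = [rng.random((32, 32)) for _ in range(4)]
query = 0.7 * basis[0] + 0.2 * basis[3] + 0.05 * rng.random((32, 32))

index = build_index(query, basis)
print("index (basis coefficients):", np.round(index, 3))

# Retrieval then compares index vectors (e.g. by Euclidean distance)
# instead of comparing raw images.
```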

Multi-Layered Perceptron Neural Networks for the Prediction of Daily Solar Radiation

Multi-Layered Perceptron (MLP) neural networks have been very successful in a number of signal processing applications. In this work we study the possibilities of, and the difficulties encountered in, applying MLP neural networks to the prediction of daily solar radiation data. We use the Polak-Ribière algorithm for training the neural networks. A comparison, in terms of statistical indicators, with a linear model widely used in the literature is also performed, and the obtained results show that the neural networks are more efficient and give the best results.
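
A minimal sketch of the MLP-versus-linear-model comparison on a synthetic daily-radiation-like series (seasonal cycle plus noise), for illustration only. The paper trains its MLP with the Polak-Ribière conjugate-gradient algorithm, whereas this sketch uses scikit-learn's default optimiser; the lag structure and data are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Synthetic "daily solar radiation" series: seasonal cycle + noise (illustrative).
rng = np.random.default_rng(1)
days = np.arange(2 * 365)
radiation = 5 + 3 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 0.5, days.size)

# Predict day d from the two previous days.
X = np.column_stack([radiation[:-2], radiation[1:-1]])
y = radiation[2:]
X_train, X_test = X[:500], X[500:]
y_train, y_test = y[:500], y[500:]

mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X_train, y_train)
lin = LinearRegression().fit(X_train, y_train)

print("MLP    RMSE:", mean_squared_error(y_test, mlp.predict(X_test)) ** 0.5)
print("Linear RMSE:", mean_squared_error(y_test, lin.predict(X_test)) ** 0.5)
```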

Adaptive MPC Using a Recursive Learning Technique

A model predictive controller based on recursive learning is proposed. In this SISO adaptive controller, a model is automatically updated using simple recursive equations. The identified models are then stored in memory for future re-use. The decision to update the model is based on a new control performance index. The new controller allows simple linear model predictive controllers to be used in the control of nonlinear, time-varying processes.
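
One plausible form of such "simple recursive equations" is a recursive least squares (RLS) update of an ARX model, sketched below on invented data; the performance-index-based decision to switch or store models described in the abstract is not shown.

```python
import numpy as np

class RecursiveLeastSquares:
    """RLS with forgetting factor for an ARX model y_k = phi_k . theta + e_k."""

    def __init__(self, n_params, forgetting=0.98):
        self.theta = np.zeros(n_params)
        self.P = 1e3 * np.eye(n_params)
        self.lam = forgetting

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)    # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)  # parameter update
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

# Illustrative use: identify y_k = 0.8*y_{k-1} + 0.3*u_{k-1} from noisy data.
rng = np.random.default_rng(0)
rls = RecursiveLeastSquares(n_params=2)
y_prev, theta = 0.0, None
for _ in range(500):
    u = rng.normal()
    y = 0.8 * y_prev + 0.3 * u + rng.normal(scale=0.01)
    theta = rls.update([y_prev, u], y)
    y_prev = y
print("estimated parameters:", np.round(theta, 3))
```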

Validity Domains of Beam Behavioural Models: Efficiency and Reduction with Artificial Neural Networks

In a particular case of behavioural model reduction by ANNs, a shortening of the validity domain has been found. In mechanics, as in other domains, the notion of validity domain allows the engineer to choose a valid model for a particular analysis or simulation. In the study of the mechanical behaviour of a cantilever beam (using linear and non-linear models), Multi-Layer Perceptron (MLP) backpropagation (BP) networks have been applied as a model reduction technique. This reduced model is constructed to be more efficient than the non-reduced model. Within a less extended domain, the ANN reduced model correctly estimates the non-linear response, at a lower computational cost. It has been found that the neural network model is not able to approximate the linear behaviour, while it approximates the non-linear behaviour very well. The details of the case are provided with an example of cantilever beam behaviour modelling.

Model Predictive Control of Gantry Crane with Input Nonlinearity Compensation

This paper proposes a nonlinear model predictive control (MPC) method for the control of a gantry crane. One of the main motivations for applying MPC to gantry crane control is its ability to handle control constraints for multivariable systems. A pre-compensator is constructed to compensate for the input nonlinearity (a nonsymmetric dead zone with saturation) by using its inverse function. By properly tuning the weighting matrices, the control system can suitably trade off the control of the crane position against the swing angle. The proposed control algorithm was implemented on the gantry crane system in the System Control Lab of the University of Technology, Sydney (UTS), and achieved the desired experimental results.
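
A minimal sketch of a dead-zone-with-saturation inverse pre-compensator: the compensator shifts the desired control past the dead-zone break-points so that the actuator reproduces it (up to saturation). The break-points and saturation level below are hypothetical, not the crane's identified values.

```python
import numpy as np

# Hypothetical nonsymmetric dead zone with saturation (actuator nonlinearity):
# output is zero inside [-D_NEG, D_POS], then linear, then saturates at +/- U_SAT.
D_POS, D_NEG, U_SAT = 0.4, 0.6, 5.0

def dead_zone_saturation(v):
    if v > D_POS:
        return min(v - D_POS, U_SAT)
    if v < -D_NEG:
        return max(v + D_NEG, -U_SAT)
    return 0.0

def pre_compensator(u_desired):
    """Approximate inverse: choose v so that dead_zone_saturation(v) ~= u_desired."""
    u = np.clip(u_desired, -U_SAT, U_SAT)   # inverse only exists up to saturation
    if u > 0:
        return u + D_POS
    if u < 0:
        return u - D_NEG
    return 0.0

for u_des in (-3.0, -0.2, 0.0, 0.2, 3.0, 7.0):
    v = pre_compensator(u_des)
    print(f"desired {u_des:5.1f} -> actuator input {v:5.2f} -> actual {dead_zone_saturation(v):5.2f}")
```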

Periodic Control of a Wastewater Treatment Process to Improve Productivity

In this paper, periodic forced operation of a wastewater treatment process is studied with the aim of improving process performance. A previously developed dynamic model of the process is used to conduct the performance analysis. The static version of the model is first used to determine the optimal productivity conditions for the process. Then the feed flow rate, expressed in terms of the dilution rate D, is made to follow a sinusoidal function. A nonlinear model predictive control algorithm is used to adjust the amplitude and period of this sinusoidal function. The parameters of the cyclic feed function are determined so that productivity is improved over the optimal productivity under steady-state conditions. The improvement in productivity is found to be marginal, while substrate conversion is satisfactory compared with both the optimal condition and the steady-state condition corresponding to the average value of the periodic function. Successful results were also obtained in the presence of modeling errors and external disturbances.
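
A minimal sketch of the forcing idea on a toy Monod chemostat, not the paper's wastewater model: the dilution rate oscillates around its mean value and the time-averaged productivity D·X is compared with the constant-feed case. All parameters are invented, and the NMPC layer that tunes the amplitude and period is not shown.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy Monod chemostat (illustrative parameters, not the paper's model).
MU_MAX, KS, YIELD, S_IN = 0.4, 1.0, 0.5, 10.0
D_MEAN, AMP, PERIOD = 0.2, 0.05, 20.0         # candidate cyclic feed parameters

def dilution(t):
    """Sinusoidal dilution rate D(t) oscillating around its mean value."""
    return D_MEAN + AMP * np.sin(2 * np.pi * t / PERIOD)

def chemostat(t, y, periodic):
    x, s = y                                   # biomass, substrate
    d = dilution(t) if periodic else D_MEAN
    mu = MU_MAX * s / (KS + s)
    return [(mu - d) * x, d * (S_IN - s) - mu * x / YIELD]

t_eval = np.linspace(0, 500, 5001)
for periodic in (False, True):
    sol = solve_ivp(chemostat, (0, 500), [1.0, 5.0], args=(periodic,), t_eval=t_eval)
    d = dilution(sol.t) if periodic else D_MEAN
    productivity = np.mean((d * sol.y[0])[-1000:])   # average D*X over the last cycles
    print("periodic" if periodic else "steady  ", "average productivity:", round(productivity, 4))
```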

An Algorithm for Autonomous Aerial Navigation using MATLAB® Mapping Tool Box

In the present era of aviation technology, autonomous navigation and control have emerged as a prime area of active research. Owing to the tremendous developments in the field, autonomous control has led today’s engineers to claim that the future of aerospace vehicles is unmanned. Developing guidance and navigation algorithms for an unmanned aerial vehicle (UAV) is an extremely challenging task, which requires efforts to meet strict, and at times conflicting, goals of guidance and control. In this paper, aircraft altitude and heading controllers and an efficient algorithm for self-governing navigation using the MATLAB® Mapping Toolbox are presented; the algorithm also enables loitering of a fixed-wing UAV over a specified area. For this purpose, a nonlinear mathematical model of a UAV is used. The nonlinear model is linearized around a stable trim point and decoupled for controller design. The linear controllers are tested on the nonlinear aircraft model, and a navigation algorithm is subsequently developed for autonomous flight of the UAV. Results are presented for the trajectory controllers and for waypoint-based navigation. Our investigation reveals that the MATLAB® Mapping Toolbox can be exploited to deliver an efficient algorithm for autonomous aerial navigation of a UAV.
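
A minimal waypoint-switching sketch in plain Python/NumPy (the paper itself uses the MATLAB® Mapping Toolbox). Bearing and distance are computed with a flat-Earth approximation, and the waypoints, switching radius and route are hypothetical.

```python
import numpy as np

EARTH_R = 6371000.0  # metres

def bearing_and_distance(lat1, lon1, lat2, lon2):
    """Flat-Earth approximation of bearing (deg) and distance (m) between two points."""
    d_north = np.radians(lat2 - lat1) * EARTH_R
    d_east = np.radians(lon2 - lon1) * EARTH_R * np.cos(np.radians(lat1))
    return np.degrees(np.arctan2(d_east, d_north)) % 360.0, np.hypot(d_north, d_east)

def next_waypoint(position, waypoints, current_idx, switch_radius=200.0):
    """Heading set-point for the active waypoint; advance once it is within switch_radius."""
    hdg, dist = bearing_and_distance(*position, *waypoints[current_idx])
    if dist < switch_radius and current_idx + 1 < len(waypoints):
        current_idx += 1
        hdg, dist = bearing_and_distance(*position, *waypoints[current_idx])
    return hdg, dist, current_idx

# Hypothetical route (lat, lon in degrees).
route = [(33.62, 73.10), (33.70, 73.20), (33.75, 73.05)]
heading, distance, idx = next_waypoint((33.60, 73.08), route, current_idx=0)
print(f"fly heading {heading:.1f} deg, {distance/1000:.1f} km to waypoint {idx}")
```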

Stability Issues on an Implemented All-Pass Filter Circuitry

The so-called all-pass filter circuits are commonly used in the fields of signal processing, control and measurement. When connected to capacitive loads, these circuits tend to lose their stability; an elaborate analysis of their dynamic behavior is therefore necessary. Compensation methods intended to increase the stability of such circuits are discussed in this paper, with the so-called lead-lag compensation technique treated in detail. For the dynamic modeling, a two-port network model of the all-pass filter is derived. The results of the model analysis show that effective lead-lag compensation can be achieved by optimizing the circuit parameters alone; therefore no additional electrical components are needed to fulfil the stability requirement.
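
A minimal sketch of the phase-lead portion of a lead-lag network, with invented corner frequencies rather than the implemented circuit's values: the network contributes its maximum phase lead at the geometric mean of its corner frequencies, which is the quantity a designer places near the loop crossover to restore stability.

```python
import numpy as np
from scipy import signal

# Phase-lead network C(s) = (1 + s/wz) / (1 + s/wp), with wz < wp.
# Corner frequencies are illustrative; in the paper the equivalent effect is
# obtained by optimising the circuit's own parameters, not by adding components.
wz, wp = 2 * np.pi * 10e3, 2 * np.pi * 100e3

comp = signal.TransferFunction([1 / wz, 1], [1 / wp, 1])
w = np.logspace(3, 8, 2000)
w, mag, phase = signal.bode(comp, w)

# Maximum phase lead occurs at the geometric mean of the corner frequencies.
phi_max_analytic = np.degrees(np.arcsin((wp - wz) / (wp + wz)))
print("max phase lead (Bode):     %.1f deg at %.3g rad/s" % (phase.max(), w[np.argmax(phase)]))
print("max phase lead (analytic): %.1f deg at %.3g rad/s" % (phi_max_analytic, np.sqrt(wz * wp)))
```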

Identification of the Temperature Condition of Aircraft Gas Turbine Engines

It is shown that probability-statistical methods are poorly grounded at the early stage of diagnosing the technical condition of an aviation GTE, when the available information is fuzzy, limited and uncertain, and that soft-computing technologies, namely fuzzy logic and neural network methods, can be applied efficiently at these diagnosing stages. Multiple linear and nonlinear models (regression equations) obtained on the basis of statistical fuzzy data are trained with high accuracy. When sufficient information is available, a recursive algorithm is proposed for identifying the aviation GTE technical condition from measurements of the input and output parameters of the multiple linear and nonlinear generalized models in the presence of measurement noise (a new recursive least squares method (LSM)). As an application of the technique, the technical condition of an in-service D30KU-154 aviation engine was estimated at an altitude of H = 10600 m.

Jitter Transfer in High Speed Data Links

Phase-locked loops for data links operating at 10 Gb/s or faster are low-phase-noise devices designed to operate with a low-jitter reference clock. Characterization of their jitter transfer function is difficult because the intrinsic noise of the device is comparable to the random noise level in the reference clock signal. A linear model is proposed to account for the intrinsic noise of a PLL. Intrinsic noise data of a PLL for 10 Gb/s links are presented. The jitter transfer function of a PLL in a test chip for 12.8 Gb/s data links was determined in experiments using the 400 MHz reference clock as the source of simultaneous excitations over a wide range of frequencies. The results show that the PLL jitter transfer function can be approximated by a second-order linear model.
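
A minimal sketch of the standard second-order linear approximation of a PLL jitter transfer function, H(s) = (2ζωn·s + ωn²)/(s² + 2ζωn·s + ωn²). The natural frequency and damping ratio below are illustrative assumptions, not the fitted values of the test chip.

```python
import numpy as np
from scipy import signal

# Second-order linear approximation of a PLL jitter transfer function.
wn = 2 * np.pi * 2e6          # natural frequency, rad/s (illustrative)
zeta = 0.9                    # damping ratio (illustrative)

H = signal.TransferFunction([2 * zeta * wn, wn**2], [1, 2 * zeta * wn, wn**2])
freqs = np.logspace(4, 8, 400)                 # 10 kHz .. 100 MHz
w, mag, _ = signal.bode(H, 2 * np.pi * freqs)

# Jitter peaking and -3 dB bandwidth of the low-pass jitter transfer.
print("jitter peaking: %.2f dB" % mag.max())
bw = freqs[np.argmin(np.abs(mag + 3.0))]
print("-3 dB jitter bandwidth: %.3g Hz" % bw)
```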

A Mathematical Modeling and Experimental Approach to Friction at the Tool-Chip Interface of Multicoated Carbide Turning Inserts

The importance of the machining process in today's industry requires the establishment of more practical approaches to clearly represent the intimate and severe contact at the tool-chip-workpiece interfaces. Mathematical models are developed from the measured force signals to relate each of the tool-chip friction components on the rake face to the operating cutting parameters in rough turning with multilayer coated carbide inserts. Nonlinear modeling proved highly capable of detecting the nonlinear functional variability embedded in the experimental data. While the feed rate is found to be the most influential parameter on the friction coefficient and its related force components, both cutting speed and depth of cut are found to have only a slight influence. A greater deformed chip thickness is found to lower the value of the friction coefficient, as the sliding length on the tool-chip interface is reduced.

Mixtures of Monotone Networks for Prediction

In many data mining applications, it is known a priori that the target function should satisfy certain constraints imposed by, for example, economic theory or a human decision maker. In this paper we consider partially monotone prediction problems, where the target variable depends monotonically on some of the input variables but not on all. We propose a novel method to construct prediction models in which monotone dependences with respect to some of the input variables are preserved by construction. Our method belongs to the class of mixture models. The basic idea is to convolute monotone neural networks with weight (kernel) functions to make predictions. We demonstrate the application of our method by means of simulation and real case studies. To obtain a sound assessment of the performance of our approach, we use standard neural networks with weight decay and partially monotone linear models as benchmark methods for comparison. The results show that our approach outperforms partially monotone linear models in terms of accuracy. Furthermore, the incorporation of partial monotonicity constraints not only leads to models that are in accordance with the decision maker's expertise, but also considerably reduces the model variance in comparison with standard neural networks with weight decay.

Local Linear Model Tree (LOLIMOT) Reconfigurable Parallel Hardware

Local Linear Neuro-Fuzzy Models (LLNFM), like other neuro-fuzzy systems, are adaptive networks that provide robust learning capabilities and are widely utilized in various applications such as pattern recognition, system identification, image processing and prediction. The local linear model tree (LOLIMOT) is a type of Takagi-Sugeno-Kang neuro-fuzzy algorithm which has proven its efficiency, compared with other neuro-fuzzy networks, in learning nonlinear systems and in pattern recognition. In this paper, dedicated reconfigurable and parallel processing hardware for the LOLIMOT algorithm and its applications is presented. This hardware realizes on-chip learning, which gives it the capability to work as a standalone device in a system. The synthesis results on FPGA platforms show its potential to run at least 250 times faster than software implementations of the algorithm.

Using Different Aspects of the Signings for Appearance-based Sign Language Recognition

Sign language is used by deaf and hard-of-hearing people for communication. Automatic sign language recognition is a challenging research area, since sign language is often the only means of communication for deaf people. Sign language comprises different components of visual actions made by the signer using the hands, the face, and the torso to convey meaning. To exploit these different aspects of signs, we combine different groups of features extracted from image frames recorded directly by a stationary camera. We combine the features at two levels, employing three techniques. At the feature level, an early feature combination can be performed by concatenating and weighting different feature groups, or by concatenating feature groups over time and using LDA to choose the most discriminant elements. At the model level, a late fusion of differently trained models can be carried out by a log-linear model combination. In this paper, we investigate these three combination techniques in an automatic sign language recognition system and show that the recognition rate can be significantly improved.
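
A minimal sketch of the model-level (late fusion) idea: class posteriors from independently trained models are combined log-linearly, p(c|x) ∝ ∏ᵢ pᵢ(c|x)^λᵢ. The two "models", their posteriors and the weights λᵢ below are hypothetical, purely for illustration.

```python
import numpy as np

def log_linear_combination(posteriors, weights):
    """Combine class posteriors p_i(c|x) of several models:
       p(c|x) proportional to prod_i p_i(c|x)**lambda_i."""
    log_p = sum(lam * np.log(p + 1e-12) for lam, p in zip(weights, posteriors))
    p = np.exp(log_p - log_p.max())           # subtract max for numerical stability
    return p / p.sum()

# Illustrative posteriors over 3 sign classes from two differently trained models
# (e.g. a hand-feature model and a face/torso-feature model), with hypothetical weights.
p_hand = np.array([0.70, 0.20, 0.10])
p_face = np.array([0.40, 0.50, 0.10])
weights = [1.0, 0.5]

print("combined posterior:", np.round(log_linear_combination([p_hand, p_face], weights), 3))
```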

Control and Navigation with Knowledge Bases

In this paper, we focus on the use of knowledge bases in two different application areas: control of systems with unknown or strongly nonlinear models (i.e. systems difficult to control by classical methods), and robot motion planning in eight directions. The first area deals with fuzzy logic, and the paper presents approaches for setting and aggregating the rules of a knowledge base. The second is concerned with a case-based reasoning strategy for finding a path in a planar scene with obstacles.
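
To make the eight-direction planning setting concrete, the sketch below runs a standard A* grid search with eight movement directions on a tiny invented scene with obstacles. This is plain A*, shown only as a baseline illustration of the setting; it is not the paper's case-based reasoning strategy.

```python
import heapq

def astar_8dir(grid, start, goal):
    """Standard A* on a grid with 8 movement directions (1 = obstacle)."""
    moves = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    rows, cols = len(grid), len(grid[0])
    heur = lambda p: max(abs(p[0] - goal[0]), abs(p[1] - goal[1]))  # Chebyshev distance
    frontier, came_from, cost = [(heur(start), start)], {start: None}, {start: 0}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for dr, dc in moves:
            nxt = (current[0] + dr, current[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                new_cost = cost[current] + 1
                if nxt not in cost or new_cost < cost[nxt]:
                    cost[nxt] = new_cost
                    came_from[nxt] = current
                    heapq.heappush(frontier, (new_cost + heur(nxt), nxt))
    return None

scene = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 0, 1, 0],
         [0, 0, 0, 0]]
print(astar_8dir(scene, (0, 0), (3, 3)))
```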

Chaos Theory and Application in Foreign Exchange Rates vs. IRR (Iranian Rial)

The daily production of information, and the importance of the sequence of these data in forecasting future market performance, turns the analysis of data behaviour into a time-series analysis problem. Very complicated time series are usually considered random, and their changes are therefore regarded as unpredictable. However, such series might be the product of a deterministic, nonlinear dynamical (chaotic) process and therefore be predictable. From the point of view of chaos theory, complicated systems only appear chaotic; they may seem irregular and random, yet they can obey a specific mathematical rule. In this article, the possibility of chaos in several foreign exchange rates against the IRR (Iranian Rial) is investigated using the strange attractor test and the largest Lyapunov exponent. The results show that the data in this market exhibit complex chaotic behaviour with a large number of degrees of freedom.
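
A minimal illustration of the largest-Lyapunov-exponent criterion (positive exponent indicates chaos) on the logistic map, a system with a known analytic derivative. This is not the exchange-rate analysis itself: estimating the exponent from an observed series requires a delay-embedding method such as Rosenstein's, which is not shown here.

```python
import numpy as np

def logistic_lyapunov(r, x0=0.3, n_iter=10000, n_transient=1000):
    """Largest Lyapunov exponent of the logistic map x_{n+1} = r*x_n*(1-x_n),
    computed as the average of log|f'(x_n)| along the orbit."""
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        acc += np.log(abs(r * (1 - 2 * x)))
    return acc / n_iter

for r in (3.2, 3.9):                      # periodic vs chaotic regime
    lam = logistic_lyapunov(r)
    print(f"r = {r}: largest Lyapunov exponent = {lam:.3f}",
          "(chaotic)" if lam > 0 else "(not chaotic)")
```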