A Neurofuzzy Learning Approach and Its Application to Control Systems

A neurofuzzy approach for learning from a given set of input-output training data is proposed, proceeding in two phases. Firstly, the data set is partitioned automatically into a set of clusters, and a fuzzy if-then rule is extracted from each cluster to form a fuzzy rule base. Secondly, a fuzzy neural network is constructed accordingly and its parameters are tuned to increase the precision of the fuzzy rule base. This network is able to learn and optimize the rule base of a Sugeno-like fuzzy inference system using a hybrid learning algorithm that combines gradient descent with the least-mean-square algorithm. The proposed neurofuzzy system has the advantages of determining the number of rules automatically, reducing the number of rules, decreasing computational time, learning faster, and consuming less memory. The authors also investigate how neurofuzzy techniques can be applied in the area of control theory to design fuzzy controllers for linear and nonlinear dynamic systems modelled from a set of input/output data. Simulation analysis is carried out on a wide range of processes, including the online identification of nonlinear components in a control system and a benchmark problem involving the prediction of a chaotic time series. Furthermore, well-known examples of linear and nonlinear systems are also simulated under the Matlab/Simulink environment. The combination is also illustrated by modeling the relationship between automobile trips and demographic factors.
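
To make the hybrid learning step concrete, the following is a minimal sketch assuming a scalar input, two rules with Gaussian membership functions, and a first-order Sugeno model; the rule count, membership shape, and the numerical premise gradient are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Sketch of ANFIS-style hybrid learning for a first-order Sugeno system
# with Gaussian memberships on a scalar input. Rule count, membership
# shape and the numerical premise gradient are illustrative assumptions.

def normalized_firing(x, c, s):
    # Gaussian membership per rule, normalized across rules.
    w = np.exp(-0.5 * ((x[:, None] - c) / s) ** 2)
    return w / w.sum(axis=1, keepdims=True)

def sse(x, y, c, s):
    # Forward pass: consequents y_r = a_r*x + b_r fitted by least squares.
    wn = normalized_firing(x, c, s)
    A = np.hstack([wn * x[:, None], wn])
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((y - A @ theta) ** 2), theta

def hybrid_epoch(x, y, c, s, lr=0.01, eps=1e-5):
    # Backward pass: premise parameters move down a numerical gradient
    # while the consequents always come from the least-squares solve.
    for p in (c, s):
        for i in range(p.size):
            base = sse(x, y, c, s)[0]
            p[i] += eps
            g = (sse(x, y, c, s)[0] - base) / eps
            p[i] -= eps + lr * g      # undo the probe, then descend
    np.clip(s, 0.1, None, out=s)      # keep membership widths positive
    return sse(x, y, c, s)

# Toy usage: two rules learning y = sin(x).
x = np.linspace(-3, 3, 100)
y = np.sin(x)
c, s = np.array([-1.5, 1.5]), np.array([1.0, 1.0])
for _ in range(50):
    err, theta = hybrid_epoch(x, y, c, s)
print("final SSE:", round(err, 4))
```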

Panoramic Sensor Based Blind Spot Accident Prevention System

There are many automotive accidents due to blind spots and driver inattentiveness. A blind spot is the area that is invisible from the driver's viewpoint without head rotation. Several methods are available for assisting drivers. The simplest are rear-view mirrors and wide-angle lenses, but these have the disadvantage of requiring human attention, so their accuracy depends on the driver. Another, automated, approach makes use of sensors such as sonar or radar to gather range information, which is then processed to detect impending collisions. The disadvantages of such systems are low angular resolution and limited sensing volumes. This paper presents a panoramic sensor based automotive vehicle monitoring system.

A New Perturbation Technique in Numerical Study on Buckling of Composite Shells under Axial Compression

A numerical study is presented on the buckling and post-buckling behaviour of laminated carbon fiber reinforced plastic (CFRP) thin-walled cylindrical shells under axial compression using an asymmetric meshing technique (AMT). AMT is a perturbation technique that introduces disturbance without changing geometry, boundary conditions or loading conditions, and it affects the predicted buckling load, buckling mode shape and post-buckling behaviour. Linear (eigenvalue) and nonlinear (Riks) analyses have been performed to study the effect of asymmetric meshing in the form of a patch on buckling behaviour. The reduction in the buckling load using AMT was observed to be about 15%. An isolated dimple formed near the bifurcation point, and its size increased until reaching a stable state in the post-buckling region. The load-displacement curve obtained with asymmetric meshing is quite similar to the curve obtained using an initial geometric imperfection in the shell model.

Enhancing Camera Operator Performance with Computer Vision Based Control

Cameras are often mounted on platforms that can move, such as rovers, booms, gantries and aircraft. People operate such platforms to capture desired views of a scene or target. To avoid collisions with the environment and occlusions, such platforms often possess redundant degrees-of-freedom. As a result, manipulating such platforms demands much skill. Visual-servoing some degrees-of-freedom may reduce operator burden and improve tracking performance. This concept, which we call human-in-the-loop visual-servoing, is demonstrated in this paper by applying an α-β-γ filter and feedforward controller to a broadcast camera boom.
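
For reference, here is a minimal sketch of an α-β-γ tracking filter of the kind that could smooth and predict a target's position for visual servoing; the gains, frame rate, and scalar state are illustrative assumptions, not the paper's tuned design.

```python
# Minimal alpha-beta-gamma tracking filter sketch, of the kind that could
# smooth and predict a target's image-plane position for visual servoing.
# Gains and time step are illustrative, not the paper's tuned values.

class AlphaBetaGamma:
    def __init__(self, alpha=0.5, beta=0.4, gamma=0.1, dt=1 / 30):
        self.a_, self.b_, self.g_, self.dt = alpha, beta, gamma, dt
        self.x = self.v = self.acc = 0.0   # position, velocity, acceleration

    def update(self, z):
        dt = self.dt
        # Predict constant-acceleration motion one frame ahead.
        xp = self.x + self.v * dt + 0.5 * self.acc * dt ** 2
        vp = self.v + self.acc * dt
        r = z - xp                          # innovation (measurement residual)
        # Correct each state with its fixed gain.
        self.x = xp + self.a_ * r
        self.v = vp + (self.b_ / dt) * r
        self.acc += (2 * self.g_ / dt ** 2) * r
        return self.x                       # filtered target position

# Usage: feed noisy target centroids frame by frame.
f = AlphaBetaGamma()
for z in [0.0, 0.11, 0.19, 0.32, 0.41]:
    print(round(f.update(z), 3))
```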

An Exploration of On-line Mass Collaboration: Focusing on Its Motivation Structure

The Internet has become an indispensable part of our lives. Witnessing recent web-based mass collaboration, e.g. Wikipedia, people are questioning whether the Internet has made fundamental changes to society or whether it is merely a hyperbolic fad. It has long been assumed that collective action toward a certain goal yields the problem of free-riding, due to its non-excludable and non-rival characteristics. Yet, thanks to recent technological advances, the on-line space has experienced the following changes that enabled it to produce public goods: 1) a decrease in the cost of production or coordination, 2) externalities from the networked structure, and 3) a production function which integrates both self-interest and altruism. However, this research doubts the homogeneity of on-line mass collaboration and argues that a more sophisticated and systematic approach is required. The alternative that we suggest is to connect the characteristics of the goal to the motivation. Despite various approaches, previous literature fails to recognize that motivation can be structurally restricted by the characteristics of the goal. First we draw a typology of on-line mass collaboration from 'the extent of expected beneficiary' and 'the existence of externality', and then we examine each combination of motivations using Benkler's framework. Finally, we explore and connect this typology with its possible dominant participating motivations.

Quality of Concrete of Recent Development Projects in Libya

Numerous concrete structures projects are currently running in Libya as part of a US$50 billion government funding program. The quality of concrete used in 20 different construction projects was assessed, based mainly on the concrete compressive strength achieved. The projects are scattered all over the country and are at various levels of completeness. For most of these projects, the concrete compressive strength was obtained from test results of 150 mm standard cube molds. Statistical analysis of the collected concrete compressive strengths reveals that the data in general followed a normal distribution pattern. The study covers comparison and assessment of concrete quality aspects such as: quality control, strength range, data standard deviation, data scatter, and the ratio of minimum strength to design strength. Site quality control for these projects ranged from very good to poor according to ACI 214 criteria [1]. The range (Rg) of the strength (maximum strength minus minimum strength) divided by the average strength runs from 34% to 160%. Data scatter is measured as the range (Rg) divided by the standard deviation (σ) and is found to be 1.82 to 11.04; for normally distributed data a value of about 6 is expected, since nearly all observations fall within ±3σ of the mean. International construction companies working in Libya follow different assessment criteria for concrete compressive strength in lieu of a national unified procedure. The study reveals that assessments of concrete quality conducted by these construction companies usually meet their adopted (internal) standards, but sometimes fail to meet internationally known standard requirements. The assessment of concrete presented in this paper is based on ACI, British standards and proposed Libyan concrete strength assessment criteria.
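
As a worked illustration of these quality measures, the snippet below computes the range ratio and scatter statistics for one set of cube strengths; the values are made-up placeholders, not data from the surveyed projects.

```python
import numpy as np

# Sketch of the quality measures described above for one project's cube
# strengths. The values are made-up placeholders, not project data.
strengths = np.array([28.5, 31.0, 25.2, 33.4, 29.8, 27.1])  # MPa

rg = strengths.max() - strengths.min()          # range Rg
range_ratio = rg / strengths.mean()             # Rg / average strength
scatter = rg / strengths.std(ddof=1)            # Rg / standard deviation

print(f"Rg/mean = {range_ratio:.0%}, Rg/sigma = {scatter:.2f}")
```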

On Fractional (k,m)-Deleted Graphs with Constraint Conditions

Let G be a graph of order n, and let k ≥ 2 and m ≥ 0 be two integers. Let h : E(G) → [0, 1] be a function. If Σ_{e∋x} h(e) = k holds for each x ∈ V(G), then we call G[F_h] a fractional k-factor of G with indicator function h, where F_h = {e ∈ E(G) : h(e) > 0}. A graph G is called a fractional (k,m)-deleted graph if there exists a fractional k-factor G[F_h] of G with indicator function h such that h(e) = 0 for any e ∈ E(H), where H is any subgraph of G with m edges. In this paper, it is proved that G is a fractional (k,m)-deleted graph if δ(G) ≥ k + m + m/(k+1), n ≥ 4k² + 2k − 6 + ((4k² + 6k − 2)m − 2)/(k − 1), and max{d_G(x), d_G(y)} ≥ n/2 for any vertices x and y of G with d_G(x, y) = 2. Furthermore, it is shown that the result in this paper is best possible in some sense.

A 3.125 Gb/s Clock and Data Recovery Circuit Using a 1/4-Rate Technique

This paper describes the design and fabrication of a clock and data recovery (CDR) circuit. We propose a new clock and data recovery scheme based on a 1/4-rate frequency detector (QRFD). The proposed frequency detector helps reduce the VCO frequency and is thus advantageous for high-speed applications. It achieves low-jitter operation and extends the pull-in range without using a reference clock. The proposed CDR was implemented using a 1/4-rate bang-bang type phase detector (PD) and a ring voltage controlled oscillator (VCO). The CDR circuit has been fabricated in a standard 0.18 μm CMOS technology. It occupies an active area of 1 × 1 mm² and consumes 90 mW from a single 1.8 V supply.
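
To illustrate the bang-bang principle, here is a behavioral sketch of the early/late decision, shown full-rate (Alexander style) for clarity; the paper's detector is quarter-rate, so this simplified model is our illustration, not the fabricated circuit.

```python
# Behavioral sketch of a bang-bang (Alexander) phase detector decision,
# shown full-rate for clarity; the paper's detector is quarter-rate and
# this simplified model is an illustration, not the fabricated circuit.

def bang_bang_pd(d_prev, edge, d_cur):
    """Return +1 (clock early), -1 (clock late), 0 (no data transition)."""
    if d_prev == d_cur:
        return 0                    # no transition: no phase information
    # With a transition, the edge sample reveals timing: if it still
    # equals the previous bit, the clock sampled before the transition
    # (early); if it equals the new bit, it sampled after (late).
    return +1 if edge == d_prev else -1

# Usage: sampled triplets (previous bit, edge sample, current bit).
for triplet in [(0, 0, 1), (0, 1, 1), (1, 1, 1)]:
    print(triplet, "->", bang_bang_pd(*triplet))
```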

Multiple Model and Neural based Adaptive Multi-loop PID Controller for a CSTR Process

Multi-loop (decentralized) Proportional-Integral-Derivative (PID) controllers have been used extensively in the process industries, due to their simple structure, for the control of multivariable processes. The objective of this work is to design a multiple-model adaptive multi-loop PID strategy (Multiple Model Adaptive-PID) and a neural network based multi-loop PID strategy (Neural Net Adaptive-PID) for the control of multivariable systems. The first method combines the outputs of multiple linear PID controllers, each describing the process dynamics at a specific level of operation; the global output is an interpolation of the individual multi-loop PID controller outputs, weighted based on the current value of the measured process variable. In the second method, a neural network is used to calculate the PID controller parameters based on the scheduling variable that corresponds to major shifts in the process dynamics. The proposed control schemes are simple in structure with low computational complexity. Their effectiveness has been demonstrated on a CSTR process, which exhibits dynamic nonlinearity.
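
The first method's blending step can be sketched as follows: several fixed PID controllers, each tuned for one operating level, are mixed with weights derived from the measured process variable. The gains, levels, and inverse-distance weighting are our illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

# Sketch of the multiple-model idea: several fixed PID controllers, each
# tuned for one operating level, blended by weights that depend on the
# current measured process variable. Gains and levels are illustrative.

class PID:
    def __init__(self, kp, ki, kd, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i = self.e_prev = 0.0

    def step(self, e):
        self.i += e * self.dt
        d = (e - self.e_prev) / self.dt
        self.e_prev = e
        return self.kp * e + self.ki * self.i + self.kd * d

# One local controller per operating level of the scheduling variable.
levels = np.array([0.2, 0.5, 0.8])
ctrls = [PID(2.0, 0.5, 0.1), PID(1.2, 0.3, 0.05), PID(0.8, 0.2, 0.02)]

def blended_output(setpoint, y):
    # Inverse-distance weights centred on each operating level.
    w = 1.0 / (np.abs(levels - y) + 1e-6)
    w /= w.sum()
    e = setpoint - y
    return sum(wi * c.step(e) for wi, c in zip(w, ctrls))

print(blended_output(0.6, 0.45))
```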

Transmission Planning – A Probabilistic Load Flow Perspective

Perhaps no single issue has been cited as either the root cause of, or the greatest challenge to, the restructured power system more than the lack of adequate, reliable transmission, and probabilistic transmission planning has become increasingly necessary and important in recent years. The transmission planning analysis carried out by the authors spans a 10-year horizon, assuming a load increase of 2% per year at each consumer. Taking this increased load into consideration, a probabilistic power flow was carried out, with all the system components regarded from a probabilistic point of view. Several contingencies were generated to assess the security of the power system, and the results were analyzed and several important conclusions drawn. The objective is to achieve a network that works without limit violations for all (or most) scenario realizations. The case study is the IEEE 14-bus test power system.
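
The load projection and the probabilistic treatment of demand can be sketched as below: 2% per year compounded over the 10-year horizon, with each bus load then drawn from a distribution around its forecast. The bus values and the 5% spread are illustrative assumptions; each sample row would feed one power-flow solve.

```python
import numpy as np

# Sketch of the load projection and probabilistic treatment: 2 %/year
# compounded growth over the 10-year horizon, with each bus load drawn
# from a normal distribution around its forecast (spread is assumed).

rng = np.random.default_rng(0)
base_load = np.array([21.7, 94.2, 47.8])         # MW, illustrative buses
horizon, growth = 10, 0.02

forecast = base_load * (1 + growth) ** horizon   # deterministic forecast
samples = rng.normal(forecast, 0.05 * forecast, size=(1000, 3))
# Each row would feed one power-flow solve in a Monte Carlo PLF study.
print(forecast.round(1), samples.mean(axis=0).round(1))
```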

The Sizes of Large Hierarchical Long-Range Percolation Clusters

We study a long-range percolation model on the hierarchical lattice Ω_N of order N, where the probability of connection between two nodes separated by hierarchical distance k is of the form min{αβ^(−k), 1}, with α ≥ 0 and β > 0. The parameter α is the percolation parameter, while β describes the long-range nature of the model. Ω_N is an example of a so-called ultrametric space, which exhibits remarkable qualitative differences from Euclidean-type lattices. In this paper, we characterize the sizes of large clusters for this model along the lines of some prior work. The proof involves a stationary embedding of Ω_N into Z. The phase diagram of this long-range percolation is well understood.
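
To make the connection rule concrete, the sketch below computes the hierarchical (ultrametric) distance between two nodes, represented as finite coordinate tuples, and the resulting connection probability; the tuple encoding is our illustrative convention.

```python
# Nodes of the order-N hierarchical lattice are represented here as
# coordinate tuples; the distance between two nodes is the largest index
# at which they differ, which makes the space ultrametric.

def hier_dist(x, y):
    diffs = [i + 1 for i, (a, b) in enumerate(zip(x, y)) if a != b]
    return max(diffs, default=0)

def p_connect(x, y, alpha, beta):
    k = hier_dist(x, y)
    return min(alpha * beta ** (-k), 1.0) if k > 0 else 0.0

# Two nodes differing only in their 4th coordinate are at distance 4.
u, v = (0, 1, 2, 0), (0, 1, 2, 1)
print(hier_dist(u, v), p_connect(u, v, alpha=1.0, beta=2.0))
```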

Improving Intrusion Detection for Malware Activities in a Dual-Stack Network Environment

Malware is software invented to do harm to computers, and it is becoming a significant threat in computer networks nowadays. A malware attack not only involves financial loss but can also cause fatal errors that may cost lives in some cases. When the new Internet Protocol version 6 (IPv6) emerged, many people believed this protocol could solve most malware propagation issues due to its broader addressing scheme. As IPv6 is still new compared to native IPv4, some transition mechanisms have been introduced to promote a smoother migration. Unfortunately, these transition mechanisms allow some malware to propagate its attack from the IPv4 to the IPv6 network environment. In this paper, a proof of concept is presented to show that some existing IPv4 malware detection techniques need to be improved in order to detect malware attacks in dual-stack networks more efficiently. A testbed of a dual-stack network environment was deployed, and genuine malware samples were released to observe their behaviors. The results from the different scenarios are analyzed and discussed further in terms of behaviors and propagation methods. The results show that malware behaves differently on IPv6 than on the IPv4 network protocol in the dual-stack network environment, and a new detection technique is called for to address this problem in the near future.

Latent Topic Based Medical Data Classification

This paper discusses the classification process for medical data, using the data from the ACM KDD Cup 2008 to demonstrate a classification process based on latent topic discovery. In this data set, the target set and the outliers are quite different in nature: the target set makes up only 0.6% of the data, while the outliers constitute the remaining 99.4%. We use this data set as an example of how to deal with an extremely biased data set through latent topic discovery and noise reduction techniques. Our experiment faces two major challenges: (1) extremely distributed outliers, and (2) far fewer positive samples than negative ones. We propose a suitable process flow to deal with these issues and achieve a best AUC result of 0.98.
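
The overall pipeline shape, latent topic features feeding a classifier weighted against extreme imbalance, might look like the sketch below; the synthetic count data, topic count, and logistic classifier are our assumptions standing in for the paper's actual feature set and noise-reduction steps.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Sketch of the pipeline shape: latent topic features feeding a
# classifier weighted for extreme class imbalance. Synthetic count data
# stands in for the KDD Cup 2008 features; the topic count is assumed.

rng = np.random.default_rng(1)
X = rng.poisson(2.0, size=(2000, 50))             # pseudo "token" counts
y = (rng.random(2000) < 0.006).astype(int)        # ~0.6 % positives

topics = LatentDirichletAllocation(n_components=8, random_state=1)
Z = topics.fit_transform(X)                       # samples -> topic mixtures

clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(Z, y)
print("AUC:", roc_auc_score(y, clf.predict_proba(Z)[:, 1]))
```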

Efficiency Evaluation of E-Commerce Websites

This study suggests a model with a new set of evaluation criteria for measuring the efficiency of real-world e-commerce websites. The evaluation criteria include design, usability and performance, and the Data Envelopment Analysis (DEA) technique is used to measure website efficiency. An efficient website is defined as a site that generates the most outputs using the smallest amount of inputs. Inputs are measurements representing the amount of effort required to build, maintain and operate the site; outputs represent the amount of traffic the site generates, measured as the average number of daily hits and the average number of daily unique visitors.
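
For one common DEA formulation, the input-oriented CCR model in multiplier form, each site's efficiency is the optimum of a small linear program, as in the sketch below; the data matrices are placeholders, and CCR itself is an assumption, since the paper does not name its DEA variant.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of the CCR (constant returns) DEA model in multiplier form for
# one website: maximise weighted outputs with its weighted inputs fixed
# at 1, with no site allowed an efficiency above 1. Data are placeholders
# (inputs: build/maintenance effort; outputs: daily hits, unique visitors).

X = np.array([[3.0, 2.0], [5.0, 4.0], [2.0, 3.0]])        # inputs per site
Y = np.array([[900, 300], [1200, 350], [800, 400]])       # outputs per site

def ccr_efficiency(o):
    n, m = X.shape                                         # sites, inputs
    s = Y.shape[1]                                         # outputs
    c = np.concatenate([-Y[o], np.zeros(m)])               # maximise u.y_o
    A_ub = np.hstack([Y, -X])                              # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]    # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun                                        # efficiency score

for o in range(3):
    print(f"site {o}: efficiency {ccr_efficiency(o):.3f}")
```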

Sensor Optimisation via H∞ Applied to a MAGLEV Suspension System

In this paper a systematic method based on H∞ control design is proposed to select a sensor set that satisfies a number of input criteria for a MAGLEV suspension system. The proposed method uses evolutionary algorithms to recover a number of optimised controllers for each possible sensor set that satisfies the performance and constraint criteria.
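
The outer search loop might be organised as in the sketch below, which enumerates candidate sensor sets and keeps the smallest feasible one; the sensor names and the feasibility stub are illustrative assumptions, with the H∞ synthesis and evolutionary tuning abstracted into a single check.

```python
from itertools import combinations

# Sketch of the outer search loop only: enumerate candidate sensor sets
# and keep the smallest one that a (separately run) H-infinity synthesis
# plus evolutionary tuning accepts. Sensor names and the scoring stub
# are illustrative assumptions, not the paper's actual criteria.

SENSORS = ("gap", "velocity", "acceleration", "current", "flux")

def meets_criteria(subset):
    # Placeholder for the expensive step: synthesise an H-infinity
    # controller for this measurement set and check the performance
    # and constraint criteria. Here: pretend the gap sensor is required.
    return "gap" in subset

candidates = [c for r in range(1, len(SENSORS) + 1)
              for c in combinations(SENSORS, r) if meets_criteria(c)]
best = min(candidates, key=len)        # prefer the smallest feasible set
print(best)
```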

Influence of Rolling Temperature on Microstructure and Mechanical Properties of Cryorolled Al-Mg-Si Alloy

The effect of rolling temperature on the mechanical properties and microstructural evolution of an Al-Mg-Si alloy was studied. The material was rolled up to a true strain of ~0.7 at three different temperatures, viz. room temperature, liquid propanol temperature and liquid nitrogen temperature. The liquid-nitrogen-rolled sample exhibited superior properties, with yield and tensile strengths of 332 MPa and 364 MPa, respectively, and a reasonably good ductility of ~9%. It showed an increase of around 54 MPa in tensile strength, without much reduction in ductility, compared to the as-received T6-condition alloy. The microstructural details revealed equiaxed grains in the annealed and solutionized samples and elongated grains in the rolled samples. In addition, the cryorolled samples exhibited a finer grain structure than the room-temperature-rolled samples.

A Multi-model Approach for Describing and Verifying Constraint-Based Interactive Systems

Requirements analysis, modeling, and simulation have consistently been among the main challenges in the development of complex systems. Scenarios and state machines are two successful models for describing the behavior of an interactive system. Scenarios represent examples of system execution, in the form of sequences of messages exchanged between objects, and give a partial view of the system; state machines, in contrast, can represent the overall system behavior. Automating the translation of scenarios into state machines provides answers to various problems such as system behavior validation and scenario consistency checking. In this paper, we propose a method for translating scenarios into state machines represented by the Discrete EVent System Specification (DEVS), together with a procedure to detect implied scenarios. Each induced DEVS model represents the behavior of one object of the system. The global system behavior is described by coupling the atomic DEVS models and is validated through simulation. We improve the validation process by integrating formal methods to eliminate logical inconsistencies in the global model; to that end, we use the Z notation.
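
For readers unfamiliar with DEVS, the sketch below shows the four characteristic functions of an atomic model on a one-place buffer; the model itself is an illustrative assumption, not one of the paper's induced object models.

```python
# Minimal atomic-DEVS sketch: the four characteristic functions of an
# atomic model, here a one-place buffer. Names follow DEVS convention,
# but the model itself is illustrative, not one of the paper's objects.

INF = float("inf")

class Buffer:
    def __init__(self):
        self.state, self.sigma = "idle", INF   # phase and time advance

    def ext_transition(self, event):           # delta_ext: on input
        if self.state == "idle":
            self.state, self.sigma = "busy", 1.0

    def int_transition(self):                  # delta_int: after sigma
        self.state, self.sigma = "idle", INF

    def output(self):                          # lambda: emitted at sigma
        return "done"

    def time_advance(self):                    # ta
        return self.sigma

# A coordinator would couple such atomic models and drive the simulation
# by repeatedly picking the model with the smallest time advance.
b = Buffer()
b.ext_transition("job")
print(b.time_advance(), b.output())
```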

Expert Witness Testimony in the Battered Woman Syndrome

Expert witness testimony (EWT) is a kind of information given by an expert specialized in the field (here, battered woman syndrome, BWS) to the jury in order to help the court better understand the case. EWT does not always work in favor of battered women. Two main decision-making models are discussed in the paper: the mathematical model and the explanation model. In the first model, the jurors calculate "the importance and strength of each piece of evidence", whereas in the second model they try to integrate the EWT with the evidence and create a coherent story that would describe the crime. The jury often misunderstands and misjudges battered women for their action (or, in this case, inaction), assuming that these women are masochists who accept being mistreated, for if a man abuses a woman constantly, she could and should divorce him or simply leave at any time. Research in the domain has found that expert witness testimony indeed has a powerful influence on jurors' decisions, so its quality needs to be further explored. One of the important factors that needs further study is a bias called the dispositionist worldview (a belief that what happens to people is of their own doing). This kind of attributional bias represents a tendency to think that a person's behavior is due to his or her disposition, even when the behavior is clearly attributable to the situation. Hypothesis: if a juror has a dispositionist worldview, then he or she will blame the rape victim for triggering the assault; the juror would therefore commit the fundamental attribution error and believe that the victim's disposition, and not the situation she was in, caused the rape. Methods: the subjects in the study were 500 randomly sampled undergraduate students from McGill, Concordia, Université de Montréal and UQAM. Dispositionist worldview was scored on the Dispositionist Worldview Questionnaire. After reading the rape scenarios, each student was asked to play the role of a juror and answer a questionnaire consisting of 7 questions about the responsibility, causality and fault of the victim. Results: the results confirm the hypothesis; jurors with a dispositionist worldview blamed the rape victim for triggering the assault, committing the fundamental attribution error by believing that the victim's disposition, and not the constraints or opportunities of the situation, caused the rape scenario.

Application of Build-up and Wash-off Models for an East-Australian Catchment

Estimation of stormwater pollutants is a prerequisite for the protection and improvement of the aquatic environment and for appropriate management options. The usual practice for stormwater quality prediction is water quality modeling; however, the accuracy of the predictions depends on the proper estimation of model parameters. This paper presents the estimation of model parameters for a catchment water quality model developed for the continuous simulation of stormwater pollutants from a catchment to the catchment outlet. The model is capable of simulating the accumulation and transportation of the stormwater pollutants suspended solids (SS), total nitrogen (TN) and total phosphorus (TP) from a particular catchment. Rainfall and water quality data were collected for the Hotham Creek Catchment (HTCC), Gold Coast, Australia. Runoff calculations from the developed model were compared with the discharges calculated by the widely used hydrological models WBNM and DRAINS. Based on the measured water quality data, the model's water quality parameters were calibrated for the above-mentioned catchment. The calibrated parameters are expected to be helpful for best management practices (BMPs) in the region. Sensitivity analyses of the estimated parameters were performed to assess the impact of the model parameters on the overall model estimates of runoff water quality.
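
A common functional form for such models is the exponential build-up / wash-off pair, sketched below; the coefficients are illustrative assumptions, not the calibrated HTCC parameters reported by the paper.

```python
import numpy as np

# Sketch of the classic exponential build-up / wash-off pair often used
# in such catchment models (SWMM-style); coefficients are illustrative
# assumptions, not the calibrated HTCC parameters reported by the paper.

def buildup(days, b_max=50.0, k_b=0.4):
    """Pollutant mass on the surface after `days` dry days (kg/ha)."""
    return b_max * (1.0 - np.exp(-k_b * days))

def washoff(b0, intensity, hours, k_w=0.18):
    """Mass washed off by rain of `intensity` (mm/h) over `hours`."""
    return b0 * (1.0 - np.exp(-k_w * intensity * hours))

b0 = buildup(days=7)                      # dry period before the storm
print(round(b0, 1), round(washoff(b0, intensity=10.0, hours=2.0), 1))
```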

Power Line Carrier for Power Telemetering

This paper presents an application of power line carrier (PLC) for electrical power telemetering. The system has the special capability of transmitting the measured values to a centralized computer via power lines. The PLC modem, designed around a passive high-pass filter, handles transmitting and receiving information; its function is to send the information carrier, together with the transmitted data, by superimposing it on the 50 Hz power frequency signal. A microcontroller serves as the main processor of the modem and is programmed for PLC control and for interfacing with other devices. Each power meter, connected via a PLC modem, is assigned a unique identification number (address) for distinguishing each device from the others.
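
The addressing idea can be sketched as a simple reply frame in which each meter tags its reading with its unique address; the frame layout, scaling, and checksum below are our illustrative assumptions, not the system's actual protocol.

```python
# Sketch of the addressing idea: each meter reply carries its unique
# address so the central computer can tell devices apart. The frame
# layout, value scaling and checksum are illustrative assumptions.

def build_frame(address: int, reading: float) -> bytes:
    payload = int(reading * 100).to_bytes(4, "big")      # e.g. kWh * 100
    body = bytes([address]) + payload
    checksum = sum(body) & 0xFF
    return b"\x02" + body + bytes([checksum]) + b"\x03"  # STX ... ETX

frame = build_frame(address=0x17, reading=1234.56)
print(frame.hex(" "))
```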