Improving Co-integration Trading Rule Profitability with Forecasts from an Artificial Neural Network

Co-integration models the long-term equilibrium relationship between two or more related financial variables. Even if co-integration is found, there may be short-run deviations from the long-run equilibrium relationship. The aim of this work is to forecast these deviations using neural networks and to build a trading strategy based on them. A case study is used: co-integration residuals from Australian Bank Bill futures are forecast and traded using various exogenous input variables combined with neural networks. The selection of optimal exogenous input variables for each neural network, undertaken in previous work [1], is validated by comparing the forecasts and the corresponding profitability of each under a trading strategy.
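A minimal sketch of the general workflow described above, on simulated data: two cointegrated series are generated, the residual of an OLS cointegrating regression is forecast one step ahead with a small feed-forward network, and a simple directional trading rule is applied to the forecasts. The lagged residuals stand in for the paper's exogenous inputs, and the data, network size, and trading rule are all illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Simulated cointegrated pair: y = 2*x + stationary AR(1) residual
n = 600
x = np.cumsum(rng.normal(size=n))              # random-walk common trend
resid = np.zeros(n)
for t in range(1, n):
    resid[t] = 0.8 * resid[t - 1] + rng.normal(scale=0.5)
y = 2.0 * x + resid

# Step 1: cointegrating regression y ~ x; keep the residual (the "deviation")
beta = np.polyfit(x, y, 1)
z = y - np.polyval(beta, x)

# Step 2: forecast the next deviation from its own lags
lags = 3
X = np.column_stack([z[i:n - lags + i] for i in range(lags)])
target = z[lags:]
split = int(0.7 * len(target))
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X[:split], target[:split])
pred = net.predict(X[split:])

# Step 3: toy trading rule -- trade in the direction the forecast predicts
current = target[split - 1:-1]                 # deviation known at decision time
actual_change = np.diff(target[split - 1:])
position = np.sign(pred - current)
pnl = position * actual_change
print("directional hit rate:", np.mean(position == np.sign(actual_change)))
print("cumulative P&L of toy rule:", pnl.sum())
```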

Condition Monitoring in the Management of Maintenance in a Large Scale Precision CNC Machining Manufacturing Facility

The manufacture of large-scale precision aerospace components using CNC requires a highly effective maintenance strategy to ensure that the required accuracy can be achieved over many hours of production. This paper reviews a strategy for a maintenance management system based on Failure Mode Avoidance, which uses advanced techniques and technologies to underpin a predictive maintenance strategy. It is shown how condition monitoring (CM) is important for predicting potential failures in high-precision machining facilities and achieving intelligent and integrated maintenance management. There are two distinct ways in which CM can be applied. One is to monitor key process parameters and observe trends which may indicate a gradual deterioration of accuracy in the product. The other is to use CM techniques to monitor high-status machine parameters, so that trends can be observed and corrected before machine failure and downtime occur. It is concluded that the key to developing a flexible and intelligent maintenance framework in any precision manufacturing operation is the ability to evaluate machine tool condition reliably and routinely using condition monitoring techniques within a framework of Failure Mode Avoidance.

An Autonomous Collaborative Forecasting System Implementation – The First Step towards a Successful CPFR System

In the past decade, artificial neural networks (ANNs) have been regarded as instruments for problem-solving and decision-making, and they have already delivered substantial improvements in efficiency and effectiveness in industry and business. In this paper, back-propagation networks (BPNs) are organized in a modular fashion to demonstrate the performance of the collaborative forecasting (CF) function of a Collaborative Planning, Forecasting and Replenishment (CPFR®) system. CPFR balances sufficient product supply against necessary customer demand in a Supply and Demand Chain (SDC). Several standard BPNs are grouped, made to collaborate, and exploited so that the proposed modular ANN framework can be implemented easily on the topology of an SDC. Each individual BPN is applied as a modular tool to forecast the SKU (stock-keeping unit) levels that are managed and supervised at a point of sale (POS), a wholesaler, and a manufacturer in an SDC. The proposed modular BPN-based CF system is exemplified and experimentally verified using numerous datasets from a simulated SDC. The experimental results show that a complex CF problem can be divided into a group of simpler sub-problems, one for each independent trading partner distributed over the SDC, and that SKU forecasting accuracy is satisfactory when the forecast values are compared with the original simulated SDC data. The primary task of implementing autonomous CF involves the study of supervised ANN learning methodology, which aims at making "knowledgeable" decisions for the best SKU sales plan and stock management.

Application of Machine Learning Methods to Online Test Error Detection in Semiconductor Test

Since test costs in today's semiconductor industry can account for up to 50 percent of total production costs, efficient test error detection is becoming increasingly important. In this paper, we present a new machine learning approach to test error detection that should provide faster recognition of test system faults as well as improved test error recall. The key idea is to learn a classifier ensemble that detects typical test error patterns in wafer test results immediately after these tests finish. Since test error detection has not yet been discussed in the machine learning community, we define the central problem-relevant terms and provide an analysis of important domain properties. Finally, we present comparative studies reflecting the failure detection performance of three individual classifiers and three ensemble methods based upon them. As base classifiers we chose a decision tree learner, a support vector machine and a Bayesian network, while the compared ensemble methods were simple and weighted majority vote as well as stacking. For the evaluation, we used cross validation and a specially designed practical simulation. By implementing our approach in a semiconductor test department for the observation of two products, we demonstrated its practical applicability.
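A minimal sketch of the ensemble idea in scikit-learn rather than the authors' implementation: a decision tree, an SVM, and Gaussian naive Bayes (used here as a simple stand-in for the Bayesian network) are combined by hard majority vote and by stacking, and recall is estimated by cross validation. The synthetic, imbalanced data set is an assumption standing in for wafer test results.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Hypothetical stand-in for per-die wafer test results: the rare positive
# class plays the role of a test-system-induced error pattern.
X, y = make_classification(n_samples=3000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)

base = [("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("nb", GaussianNB())]          # naive Bayes as a simple Bayesian stand-in

voter = VotingClassifier(estimators=base, voting="hard")   # simple majority vote
stack = StackingClassifier(estimators=base,
                           final_estimator=LogisticRegression(max_iter=1000))

for name, model in [("majority vote", voter), ("stacking", stack)]:
    recall = cross_val_score(model, X, y, cv=5, scoring="recall")
    print(f"{name:>13}: mean recall = {recall.mean():.3f}")
```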

HIV Treatment Planning on a Case-by-Case Basis

This study presents a mathematical modeling approach to the planning of HIV therapies on an individual basis. The model replicates clinical data from typical progressors to AIDS across all stages of the disease with good agreement. Clinical data from rapid progressors and long-term non-progressors are also matched by estimating immune system parameters only. The ability of the model to reproduce these phenomena validates the formulation, a fact which is exploited in the investigation of effective therapies. The therapy investigation suggests that, unlike continuous therapy, structured treatment interruptions (STIs) are able to control the increase of both the drug-sensitive and the drug-resistant virus populations and, hence, prevent the ultimate progression from HIV infection to AIDS. The optimization results further suggest that even patients characterised by the same progression type can respond very differently to the same treatment and that the latter should be designed on a case-by-case basis. Such a methodology is presented here.
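For background, a minimal sketch of the standard three-compartment in-host viral dynamics model commonly used in such studies (target cells T, infected cells I, free virus V), with a toy on/off drug schedule mimicking a structured treatment interruption. This is not the paper's patient-specific formulation; the parameter values and the efficacy schedule are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Standard basic viral-dynamics model; parameter values are illustrative only.
lam, d, beta, delta, p, c = 1e4, 0.01, 2e-7, 0.7, 100.0, 3.0

def efficacy(t, period_on=60.0, period_off=30.0):
    """Structured treatment interruption: drug on, then off, repeating."""
    return 0.9 if (t % (period_on + period_off)) < period_on else 0.0

def rhs(t, y):
    T, I, V = y
    eps = efficacy(t)
    dT = lam - d * T - (1.0 - eps) * beta * T * V
    dI = (1.0 - eps) * beta * T * V - delta * I
    dV = p * I - c * V
    return [dT, dI, dV]

y0 = [1e6, 0.0, 1e-3]          # near-uninfected steady state plus a tiny inoculum
sol = solve_ivp(rhs, (0.0, 365.0), y0, max_step=0.5)
print("viral load at day 365:", sol.y[2, -1])
```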

The Direct and Indirect Effects of the Achievement Motivation on Nurturing Intellectual Giftedness

Achievement motivation is believed to promote giftedness, attracting investment in many programs that adopt gifted students and provide them with challenging activities. Intellectual giftedness is founded on fluid intelligence and extends to more specific abilities through growth and input from achievement motivation. Acknowledging the roles played by motivation in the development of giftedness leads to more effective nurturing of gifted individuals. However, no study has investigated the direct and indirect effects of achievement motivation and fluid intelligence on intellectual giftedness. Thus, this study investigated the contribution of motivational factors to giftedness development by conducting tests of fluid intelligence using the Cattell Culture Fair Test (CCFT) and of analytical abilities using culture-reduced test items covering problem solving, pattern recognition, audio-logic, audio-matrices, and artificial language, together with a self-report questionnaire for the motivational factors. A sample of 180 high-scoring students was selected using the CCFT from a leading university in Malaysia. Structural equation modeling was employed using Amos V.16 to determine the direct and indirect effects of achievement motivation factors (self-confidence, success, perseverance, competition, autonomy, responsibility, ambition, and locus of control) on intellectual giftedness. The findings showed that the hypothesized model fitted the data, supporting the model postulates, and showed significant and strong direct and indirect effects of motivation and fluid intelligence on intellectual giftedness.

A Fuzzy Logic Based Navigation of a Mobile Robot

One of the long-standing challenges in mobile robotics is the ability to navigate autonomously, avoiding modeled and unmodeled obstacles, especially in crowded and unpredictably changing environments. A successful way of structuring the navigation task to deal with this problem is through behavior-based navigation approaches. In this study, issues of individual behavior design and action coordination of the behaviors are addressed using fuzzy logic. A layered approach is employed, in which a supervision layer makes a context-based decision as to which behavior(s) to process (activate), rather than processing all behaviors and then blending the appropriate ones; as a result, time and computational resources are saved.
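A minimal sketch of the layered idea, not the paper's controller: two fuzzy behaviors (obstacle avoidance and goal seeking) built from triangular membership functions, and a supervision layer that activates only the behavior relevant to the current context. Membership functions, rules, and the activation threshold are illustrative assumptions.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def obstacle_avoidance(dist, bearing):
    """Fuzzy rule: the nearer the obstacle, the harder we steer away from it."""
    near = tri(dist, 0.0, 0.0, 1.0)            # membership of "obstacle is near"
    return -np.sign(bearing) * near * 1.0       # steering rate (rad/s), away from obstacle

def goal_seeking(goal_bearing):
    """Fuzzy rule: steer proportionally to how far the goal is off-centre."""
    off = tri(abs(goal_bearing), 0.0, np.pi, np.pi)
    return np.sign(goal_bearing) * off * 0.5

def supervisor(dist, obstacle_bearing, goal_bearing):
    """Context-based layer: activate only the relevant behavior instead of
    blending all of them."""
    if dist < 1.0:                              # obstacle context
        return obstacle_avoidance(dist, obstacle_bearing)
    return goal_seeking(goal_bearing)           # free-space context

print(supervisor(dist=0.4, obstacle_bearing=0.3, goal_bearing=-1.2))
print(supervisor(dist=5.0, obstacle_bearing=0.0, goal_bearing=-1.2))
```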

Ensembling Classifiers – An Application to Image Data Classification from a Cherenkov Telescope Experiment

Ensemble learning algorithms such as AdaBoost and Bagging have been actively researched and have shown improved classification results on several benchmark data sets, mainly with decision trees as their base classifiers. In this paper we experiment with applying these meta-learning techniques to classifiers such as random forests, neural networks and support vector machines. The data sets come from MAGIC, a Cherenkov telescope experiment. The task is to classify gamma signals against the overwhelming hadron and muon signals, a rare-class classification problem. We compare the individual classifiers with their ensemble counterparts and discuss the results. WEKA, a versatile machine learning toolkit, was used to carry out the experiments.
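A minimal sketch of the comparison in scikit-learn rather than WEKA: a single SVM against a bagged SVM, a random forest, and AdaBoost, evaluated by cross-validated ROC AUC. The synthetic imbalanced data set stands in for the MAGIC gamma/hadron data and is an assumption made purely for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the MAGIC data: the rare positive class plays the
# role of the gamma signal among overwhelming hadron/muon background.
X, y = make_classification(n_samples=4000, n_features=10, weights=[0.9, 0.1],
                           random_state=1)

models = {
    "single SVM": SVC(),
    "bagged SVM": BaggingClassifier(SVC(), n_estimators=10, random_state=1),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=1),
    "AdaBoost (stumps)": AdaBoostClassifier(n_estimators=200, random_state=1),
}

for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name:>18}: mean ROC AUC = {auc.mean():.3f}")
```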

An Investigation into Effective Parameters on the Damage of Dual Phase Steels by Acoustic Emission Using Energy Ratio

Dual-phase steels (DPSs) have a microstructure consisting of a hard second phase, martensite, in a soft ferrite matrix. In recent years there has been growing interest in dual-phase steels because of their widespread application, particularly in the automotive sector. The composite microstructure of DPSs exhibits characteristic mechanical properties such as continuous yielding, a low yield stress to tensile strength ratio (YS/UTS), and relatively high formability, which offer advantages over conventional high-strength low-alloy steels (HSLAS). This research deals with the characterization of damage in DPSs. In this study, after reviewing the failure mechanisms associated with the volume fraction of the martensite second phase, a new method is introduced for identifying the failure mechanisms in the various phases of these steels. In this method the acoustic emission (AE) technique is used to detect damage progression. The failure mechanisms consist of ferrite-martensite interface decohesion and/or fracture of the martensite phase. To this end, dual-phase steels with different volume fractions of martensite (Vm) were produced by various heat treatments of a low-carbon steel (0.1% C), and AE monitoring was then used during tensile testing of these DPSs. From the AE measurements, an energy ratio curve was derived from the AE energy values (obtained as the ratio of strain energy to acoustic energy), which allows important events, corresponding to sudden drops in the curve, to be detected. The AE signal events associated with the various failure mechanisms are classified for ferrite and for DPSs with different amounts of Vm and different martensite morphologies. It is found that AE energy increases with increasing Vm, because martensite fracture contributes more to the failure of samples with higher Vm. The final results show a good relationship between the AE signals and the mechanisms of failure.
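The abstract defines the energy ratio as strain energy over acoustic energy; the sketch below assumes cumulative quantities and entirely synthetic data, purely to illustrate how sudden drops in such a ratio curve could be flagged as candidate failure events. The hardening curve, AE record, and drop threshold are all invented for illustration.

```python
import numpy as np

# Hypothetical tensile-test record: engineering stress/strain plus the AE
# energy captured in each sampling interval (all values are made up).
strain = np.linspace(0.0, 0.20, 400)
stress = 600 * (1 - np.exp(-25 * strain))             # MPa, toy hardening curve
ae_energy = np.abs(np.random.default_rng(3).normal(1.0, 0.3, size=400))
ae_energy[[150, 280]] += 50.0                         # two large AE bursts

# Cumulative strain energy (area under the stress-strain curve) and
# cumulative AE energy; their ratio drops sharply at large AE events.
strain_energy = np.concatenate(([0.0], np.cumsum(np.diff(strain) *
                                                 (stress[1:] + stress[:-1]) / 2)))
cum_ae = np.cumsum(ae_energy)
energy_ratio = strain_energy / (cum_ae + 1e-12)

# Flag relative drops of more than 5 percent between consecutive samples
drops = np.where(np.diff(energy_ratio) < -0.05 * energy_ratio[:-1])[0]
print("sudden drops (candidate failure events) at indices:", drops)
```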

Development of a Kinetic Model for the Photodegradation of 4-Chlorophenol using a XeBr Excilamp

Excilamps are new UV sources with great potential for application in wastewater treatment. In the present work, a XeBr excilamp emitting radiation at 283 nm has been used for the photodegradation of 4-chlorophenol within a range of concentrations from 50 to 500 mg L-1. Total removal of 4-chlorophenol was achieved for all concentrations assayed. The two main intermediate photoproducts formed during the photodegradation process, benzoquinone and hydroquinone, although not completely removed, remain at very low residual concentrations. Such concentrations are insignificant compared with the initial 4-chlorophenol concentrations and are non-toxic. In order to simulate the process and its scale-up, a kinetic model has been developed and validated against the experimental data.
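For illustration only, a minimal sketch of fitting a pseudo-first-order photodegradation law C(t) = C0·exp(-kt) to a concentration-time series; the data points are invented, and the paper's actual kinetic model may well include intermediate species and photon-flux dependence.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical irradiation-time/concentration data for one initial
# concentration of 4-chlorophenol (values invented for illustration).
t = np.array([0, 10, 20, 40, 60, 90, 120], dtype=float)        # min
c = np.array([100.0, 78.0, 61.0, 37.0, 22.0, 10.0, 4.5])       # mg L-1

def first_order(t, c0, k):
    return c0 * np.exp(-k * t)

(c0_fit, k_fit), _ = curve_fit(first_order, t, c, p0=(100.0, 0.02))
half_life = np.log(2) / k_fit
print(f"k = {k_fit:.4f} min^-1, c0 = {c0_fit:.1f} mg L-1, t1/2 = {half_life:.1f} min")
```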

A Note on the Global GMRES for Solving the Matrix Equation AXB = F

In the present work, we propose a new projection method for solving the matrix equation AXB = F. For implementing the new method, generalized forms of the block Krylov subspace and of the global Arnoldi process are presented. The new method can be considered an extended form of the well-known global generalized minimum residual (Gl-GMRES) method for solving multiple linear systems, and it will be called the extended Gl-GMRES (EGl-GMRES). Some new theoretical results are established for the proposed method by employing the Schur complement. Finally, some numerical results are given to illustrate the efficiency of the new method.
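For context, a minimal sketch of the problem being solved: using the identity vec(AXB) = (B^T ⊗ A) vec(X) with column-major vectorization, AXB = F becomes an ordinary linear system that a standard GMRES can handle. This is not the EGl-GMRES of the paper, which works on the matrix equation directly and never forms the Kronecker product; the sketch only illustrates the equivalent vectorized system on a small random example.

```python
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(0)
n, p = 30, 20
A = rng.standard_normal((n, n)) + n * np.eye(n)     # diagonally dominated, nonsingular
B = rng.standard_normal((p, p)) + p * np.eye(p)
X_true = rng.standard_normal((n, p))
F = A @ X_true @ B

K = np.kron(B.T, A)                  # (B^T kron A), size (n*p) x (n*p)
f = F.ravel(order="F")               # vec(F), column-major stacking

x, info = gmres(K, f)                # info == 0 means GMRES converged
X = x.reshape((n, p), order="F")
print(info, np.linalg.norm(A @ X @ B - F) / np.linalg.norm(F))
```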

Evolutionary Dynamics on Small-World Networks

We study how the outcome of evolutionary dynamics on graphs depends on the randomness of the graph structure. We gradually change the underlying graph from completely regular (e.g. a square lattice) to completely random. We find that the fixation probability increases as the randomness increases; nevertheless, the increase is not significant, and thus the fixation probability can still be estimated by the known formulas for the underlying regular graphs.
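For context, the reference value usually meant by "the known formulas" for regular graphs is the Moran-process fixation probability of a single mutant with relative fitness r in a population of size N, which by the isothermal theorem also holds on any regular graph (this is standard background, not a result of the paper):

ρ = (1 − 1/r) / (1 − 1/r^N).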

Reliable Capacitated Facility Location Problem Considering Maximal Covering

This paper provides a framework that simultaneously incorporates reliability, as a response to disruption in distribution systems, and partial covering theory, as a response to limited coverage radii and economic preferences, into the traditional literature on capacitated facility location problems. As a result, we develop a bi-objective model, based on discrete scenarios, for expected cost minimization and demand coverage maximization in a three-echelon supply chain network, by allowing multiple capacity levels for the provider-side layers and imposing a gradual coverage function for the distribution centers (DCs). Additionally, besides aggregating the objectives to solve the model with the LINGO software, a branch of the LP-metric method called the min-max approach is proposed, and different aspects of the corresponding model are explored.
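As generic background (the abstract does not give the authors' exact formulation), one common form of the min-max scalarization, i.e. the LP-metric with p tending to infinity, for a bi-objective model is

min over x of max over i of  w_i · (f_i(x) − f_i*) / f_i*,

where f_i* is the ideal (individually optimal) value of objective i and w_i its weight; in this setting, cost minimization and coverage maximization would first be normalized to a common minimization sense before being aggregated.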

Toward Community-Based Personal Cloud Computing

This paper proposes a new form of cloud computing that enables individual computer users to share applications within distributed communities, called community-based personal cloud computing (CPCC). The paper also presents a prototype design and implementation of CPCC. The users of CPCC are able to share their computing applications with other users of the community, and any member of the community is able to execute remote applications shared by other members. The remote applications behave in the same way as their local counterparts, allowing the user to enter input and receive output, as well as providing access to the user's local data. CPCC provides a peer-to-peer (P2P) environment in which each peer provides applications that can be used by the other peers connected to CPCC.

Enhanced-Delivery Overlay Multicasting Scheme by Optimizing Bandwidth and Latency Discrepancy Ratios

With optimized bandwidth and latency discrepancy ratios, Node Gain Scores (NGSs) are determined and used as the basis for shaping a max-heap overlay. The NGSs, determined as the respective bandwidth-latency products, govern the construction of max-heap-form overlays. Each NGS is derived as a synergy of the discrepancy ratio of the requested bandwidth with respect to the estimated available bandwidth and the latency discrepancy ratio between a node and the source node. The tree leads to enhanced-delivery overlay multicasting, increasing packet delivery which could otherwise be hindered by the induced packet loss that occurs in schemes that do not consider the synergy of these parameters when placing nodes on the overlays. The NGS is a function of four main parameters: estimated available bandwidth, Ba; the individual node's requested bandwidth, Br; the proposed node latency to its prospective parent, Lp; and the suggested best latency as advised by the source node, Lb. The bandwidth discrepancy ratio (BDR) and latency discrepancy ratio (LDR) carry weights of α and (1,000 − α), respectively, with α arbitrarily chosen between 0 and 1,000 to ensure that the NGS values, used as node IDs, maintain a good likelihood of uniqueness and a balance between the BDR and the LDR, whichever is the more critical factor. A max-heap-form tree is constructed under the assumption that all nodes possess an NGS less than that of the source node. To maintain load balance, children of each level's siblings are evenly distributed, such that a node cannot accept a second child until all of its siblings able to do so have acquired the same number of children, and so on; this is done logically from left to right in a conceptual overlay tree. Records of the pairwise approximate available bandwidths, as measured by the pathChirp scheme at individual nodes, are maintained. Evaluations comparing the scheme with other schemes – Bandwidth Aware multicaSt architecturE (BASE), Tree Building Control Protocol (TBCP), and Host Multicast Tree Protocol (HMTP) – have been conducted. The new scheme generally performs better in terms of the trade-off among packet delivery ratio, link stress, control overhead, and end-to-end delay.
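A minimal sketch of the max-heap construction keyed on a node gain score. The abstract does not fully specify how the BDR and LDR are combined, nor the exact orientation of each ratio, so the weighted-sum score and the ratio definitions below are assumptions made purely for illustration, not the authors' formula.

```python
import heapq

ALPHA = 700          # arbitrary weight in (0, 1000), as in the abstract

def node_gain_score(Ba, Br, Lp, Lb, alpha=ALPHA):
    """One possible reading of the NGS: a weighted combination of the bandwidth
    discrepancy ratio (BDR) and the latency discrepancy ratio (LDR)."""
    bdr = Br / Ba                        # requested vs. estimated available bandwidth
    ldr = Lb / Lp                        # suggested best vs. proposed latency
    return alpha * bdr + (1000 - alpha) * ldr

# Build a max-heap of joining nodes keyed on NGS (heapq is a min-heap, so
# scores are negated); the source node is assumed to outrank all of them.
nodes = {"n1": (10.0, 4.0, 80.0, 40.0),   # (Ba Mbps, Br Mbps, Lp ms, Lb ms)
         "n2": (6.0, 5.0, 50.0, 40.0),
         "n3": (8.0, 2.0, 120.0, 40.0)}

heap = [(-node_gain_score(*v), name) for name, v in nodes.items()]
heapq.heapify(heap)
while heap:
    score, name = heapq.heappop(heap)
    print(name, round(-score, 1))        # nodes emerge in descending NGS order
```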

A Formative Assessment Model within the Competency-Based-Approach for an Individualized E-learning Path

E-learning is not restricted to the use of new technologies for online content; it also induces the adoption of new approaches to improve the quality of education. This quality depends on the ability of these approaches (technical and pedagogical) to provide an adaptive learning environment. Thus, the environment should include features that convey intentions and meet the educational needs of learners by providing a customized learning path for acquiring the competency concerned. In our proposal, we believe that an individualized learning path requires knowledge of the learner. Therefore, it must pass through a personalized diagnosis to identify precisely the competency gaps to fill and to reduce the cognitive load. To personalize the diagnosis and pertinently measure the competency gap, we suggest implementing formative assessment in the e-learning environment, and we propose the introduction of a pre-regulation process in the area of formative assessment, involving its individualization and implementation in e-learning.

An Efficient VLSI Design Approach to Reduce Static Power using Variable Body Biasing

In CMOS integrated circuit design there is a trade-off between static power consumption and technology scaling. Recently, power density has increased due to the combination of higher clock speeds, greater functional integration, and smaller process geometries. As a result, static power consumption is becoming more dominant. This is a challenge for circuit designers. Designers do have a few methods to reduce this static power consumption, but all of these methods have drawbacks: in order to achieve lower static power consumption, one has to sacrifice design area and circuit performance. In this paper, we propose a new method to reduce static power in CMOS VLSI circuits using a Variable Body Biasing technique, without penalties in area requirement or circuit performance.

Stagnation in Brownfield Redevelopment

The purpose of this paper is twofold. First, it explains the major problems causing stagnation in brownfield redevelopment. In addition, these problems, given the context of the present multi-actor built environment, are becoming more complex to observe. Therefore, this paper also suggests a prospective decision-making approach that is most appropriate for observing and reacting to the given stagnation problems. Such an approach should be regarded as a prescriptive-interactive decision-making approach, a scarcely established branch. This approach should offer models that have a prescriptive as well as an interactive component, enabling them to cope successfully with the multi-actor environment. Overall, this paper provides up-to-date insight into brownfield stagnation by gradually introducing today's major problems, and offers a prospective decision-making approach for tackling them.

The Research Approaches on Crisis and its Management

The paper structures research approaches to crisis and its management, focusing on psychological, sociological, economic, ethical and technological approaches. Furthermore, it describes the basic features of models chosen according to those approaches. By comparing them, it shows how crisis influences organizations and individuals, and their mutual interaction.

EHW from Consumer Point of View: Consumer-Triggered Evolution

Evolvable Hardware (EHW) has been regarded as an adaptive system with a wide application market. The consumer market for any good requires diversity in order to satisfy consumers' preferences. Adaptation of EHW is a key technology that could provide an individual approach for every particular user. This situation raises a question: how should the target for the evolutionary algorithm be set? Existing techniques do not allow the consumer to influence the evolutionary process; at the moment, only the designer is capable of influencing the evolution. The proposed consumer-triggered evolution overcomes this problem by introducing new features into EHW that help the adaptive system obtain its targets during the consumer stage. A classification of EHW is given according to responsiveness, imitation of human behavior, and target circuit response. An intelligent home water heating system is considered as an example.