Cross Layer Optimization for Fairness Balancing Based on Adaptively Weighted Utility Functions in OFDMA Systems

Cross-layer optimization based on utility functions has recently been studied extensively, and numerous types of utility functions have been examined in the corresponding literature. However, a major drawback is that most utility functions take a fixed mathematical form or are based on simple combining, which cannot fully exploit the available information. In this paper, we formulate a framework for cross-layer optimization based on Adaptively Weighted Utility Functions (AWUF) for fairness balancing in OFDMA networks. Under this framework, a two-step allocation algorithm is provided as a sub-optimal solution, whose control parameters can be updated in real time to accommodate instantaneous QoS constraints. The simulation results show that the proposed algorithm achieves high throughput while balancing fairness among multiple users.
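
A rough sketch of this two-step idea follows; the log-utility, gains, and weight-update rule below are illustrative assumptions, not the paper's actual AWUF formulation. Each iteration greedily assigns subcarriers by weighted marginal utility, then adapts the weights toward users missing their QoS targets:

```python
import numpy as np

rng = np.random.default_rng(0)

N_USERS, N_SUBCARRIERS = 4, 32
rates = rng.uniform(0.5, 2.0, size=(N_USERS, N_SUBCARRIERS))  # achievable rate per user/subcarrier
qos_target = np.array([8.0, 10.0, 6.0, 12.0])                 # illustrative per-user rate targets

def allocate(weights):
    """Step 1: greedily give each subcarrier to the user with the largest
    weighted marginal utility (log-utility here, purely for illustration)."""
    alloc_rate = np.zeros(N_USERS)
    for k in range(N_SUBCARRIERS):
        marginal = weights * rates[:, k] / (1.0 + alloc_rate)  # d/dr of w*log(1+r)
        u = np.argmax(marginal)
        alloc_rate[u] += rates[u, k]
    return alloc_rate

weights = np.ones(N_USERS)
for it in range(20):
    achieved = allocate(weights)
    # Step 2: adapt weights toward users missing their QoS target
    weights *= np.exp(0.1 * (qos_target - achieved) / qos_target)
    weights /= weights.mean()

print("achieved rates:", np.round(achieved, 2))
print("final weights :", np.round(weights, 2))
```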

Dimensionality Reduction of PSSM Matrix and its Influence on Secondary Structure and Relative Solvent Accessibility Predictions

State-of-the-art methods for secondary structure prediction (Porter, Psi-PRED, SAM-T99sec, Sable) and solvent accessibility prediction (Sable, ACCpro) use evolutionary profiles represented by the position-specific scoring matrix (PSSM). It has been demonstrated that evolutionary profiles are the most important features in the feature space for these predictions. Unfortunately, using the PSSM matrix leads to high-dimensional feature spaces that may create problems with parameter optimization and generalization. Several recently published studies suggested that applying feature extraction to the PSSM matrix may result in improvements in secondary structure predictions. However, none of the top-performing methods considered here utilizes dimensionality reduction to improve generalization. In the present study, we used simple and fast feature selection methods (t-statistics, information gain) that allow us to decrease the dimensionality of the PSSM matrix by 75% and improve generalization in the case of secondary structure prediction compared with the Sable server.
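
A minimal sketch of this style of feature selection with scikit-learn, on random stand-in data rather than real windowed PSSM profiles (the window size, class count, and exact 75% cut are assumptions for illustration):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Stand-in for windowed PSSM features: 15-residue window x 20 scores = 300 dims
X = rng.normal(size=(2000, 300))
y = rng.integers(0, 3, size=2000)   # 3 secondary-structure classes (H/E/C)

# Keep the top 25% of columns, i.e. a 75% reduction as in the abstract
selector = SelectKBest(score_func=f_classif, k=75)
X_red = selector.fit_transform(X, y)

clf = LogisticRegression(max_iter=1000)
print("CV accuracy (reduced):", cross_val_score(clf, X_red, y, cv=3).mean())
```

Swapping `f_classif` (a t-statistic-like ANOVA criterion) for `mutual_info_classif` gives the information-gain variant mentioned above.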

Comparative Analysis of Vibration between Laminated Composite Plates with and without Holes under Compressive Loads

In this study, a numerical vibration analysis was carried out on symmetric angle-ply laminated composite plates with and without a square hole when subjected to compressive loads. A buckling analysis was also performed to determine the buckling load of the laminated plates. For each fibre orientation, the compression load was taken equal to 50% of the corresponding buckling load. In the analysis, the finite element method (FEM) was applied to perform parametric studies, and the effects of the degree of orthotropy and the stacking sequence on the fundamental frequencies and buckling loads are discussed. The results show that the presence of a constant compressive load tends to reduce the natural frequencies uniformly for materials with a low degree of orthotropy. However, this reduction becomes non-uniform for materials with a higher degree of orthotropy.
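
The underlying computation can be pictured with a toy eigenvalue problem: under a compressive load P, the free-vibration frequencies follow from (K - P*Kg) phi = omega^2 M phi, and the buckling load is the smallest P that makes K - P*Kg singular. The matrices below are made-up 2-DOF stand-ins, not the plate's actual FE matrices:

```python
import numpy as np
from scipy.linalg import eigh

# Toy 2-DOF stand-ins for the plate's FE matrices (illustrative numbers)
K  = np.array([[4.0, -1.0], [-1.0, 3.0]])   # elastic stiffness
Kg = np.array([[1.0,  0.0], [ 0.0, 0.5]])   # geometric stiffness per unit load
M  = np.eye(2)                              # mass matrix

# Buckling: smallest P making K - P*Kg singular, i.e. min eigenvalue of (K, Kg)
P_cr = eigh(K, Kg, eigvals_only=True)[0]

for frac in (0.0, 0.25, 0.5):               # 0%, 25%, 50% of the buckling load
    omega2 = eigh(K - frac * P_cr * Kg, M, eigvals_only=True)
    print(f"P = {frac:.0%} of Pcr -> omega_1 = {np.sqrt(omega2[0]):.3f}")
```

As expected, the fundamental frequency drops as the compressive load approaches the buckling load.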

Application of Feed-Forward Neural Networks Autoregressive Models with Genetic Algorithm in Gross Domestic Product Prediction

In this paper we present a Feed-Forward Neural Network Autoregressive (FFNN-AR) model with genetic-algorithm training optimization in order to predict the gross domestic product growth of six countries. Specifically, we propose a kind of weighted regression, which can be used for econometric purposes, where the initial inputs are multiplied by the neural network's final optimum weights from the input-hidden layer of the training process. The forecasts are compared with those of the ordinary autoregressive model, and we conclude that the proposed regression's forecasting results significantly outperform those of the autoregressive model. Moreover, this technique can be used in Autoregressive Moving Average models, with and without exogenous inputs, and the training process with genetic-algorithm optimization can be replaced by the error back-propagation algorithm.
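
One plausible reading of this weighted-regression idea, sketched on synthetic data: the lag order, the way the input-hidden weights are aggregated, and the stand-in series are all assumptions, and a gradient-trained MLP substitutes for the paper's GA training (a substitution the authors themselves note is possible):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
g = rng.normal(0.5, 1.0, 120).cumsum() * 0.01 + rng.normal(0, 0.3, 120)  # stand-in growth series

P = 4                                             # AR order
X = np.column_stack([g[i:len(g) - P + i] for i in range(P)])  # lagged inputs
y = g[P:]

# Train a small FFNN on the lagged inputs
nn = MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0).fit(X, y)

# Summarize each input's input-to-hidden weights and rescale the regressors
w_in = np.abs(nn.coefs_[0]).sum(axis=1)           # one weight per lag
X_weighted = X * w_in

ols = LinearRegression().fit(X_weighted, y)
print("weighted-AR in-sample R^2:", round(ols.score(X_weighted, y), 3))
```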

The Decentralized Nonlinear Controller of Robot Manipulator with External Load Compensation

This paper describes a newly designed decentralized nonlinear control strategy for a robot manipulator. Based on nonlinear state feedback theory and the decentralized concept, it is developed to overcome the drawbacks of previous works, namely complicated intelligent control and the need for costly sensors. The control methodology is derived in the sense of the Lyapunov theorem, so that the stability of the control system is guaranteed. The decentralized algorithm does not require the angle and velocity information of the other joints. Each individual joint controller is implemented on a digital processor placed near its actuator, making it possible to achieve good dynamics and modularity. Computer simulations have been conducted to validate the effectiveness of the proposed control scheme under possible uncertainties and different reference trajectories. The merit of the proposed control system is indicated in comparison with a classical control system.
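
A minimal single-joint sketch of the decentralized idea, assuming a sliding-surface PD law with a smooth robust term; the plant model, gains, and disturbance are all illustrative stand-ins, not the paper's derivation. Each joint controller uses only its own angle and velocity:

```python
import numpy as np

# Single-joint model J*qdd = u + d(t); gains are illustrative
J, dt, T = 0.5, 1e-3, 3.0
Kp, Kd, k_rob = 60.0, 12.0, 2.0

q = qd = 0.0
q_ref = lambda t: np.sin(t)                   # reference trajectory
qd_ref = lambda t: np.cos(t)
disturbance = lambda t: 1.5 * np.sin(3 * t)   # unknown external load

for step in range(int(T / dt)):
    t = step * dt
    e, ed = q_ref(t) - q, qd_ref(t) - qd
    s = ed + 5.0 * e                          # sliding-type surface
    u = Kp * e + Kd * ed + k_rob * np.tanh(s / 0.05)  # smooth robust term
    qdd = (u + disturbance(t)) / J            # plant dynamics
    qd += qdd * dt
    q += qd * dt

print("final tracking error:", abs(q_ref(T) - q))
```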

A General Regression Test Selection Technique

This paper presents a new methodology for selecting test cases from regression test suites. The selection strategy is based on analyzing the dynamic behavior of applications written in any programming language. Methods based on dynamic analysis are safer and more efficient. We design a technique that combines the code-based technique and the model-based technique, to allow comparing the object-oriented models of an application written in any programming language. We have developed a prototype tool that detects changes and selects test cases from the test suite.
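
The selection step itself can be pictured with a tiny language-agnostic sketch (the test names and trace format are hypothetical): keep any test whose recorded dynamic trace intersects the set of changed entities.

```python
# Minimal sketch of trace-based selection: re-run any test whose dynamic
# trace touches an entity that the change-detection step marked as changed.
traces = {
    "test_login":    {"auth.check", "db.read"},
    "test_report":   {"report.build", "db.read"},
    "test_settings": {"config.load"},
}
changed = {"auth.check", "report.build"}   # output of change detection

selected = [t for t, trace in traces.items() if trace & changed]
print("re-run:", selected)                 # -> ['test_login', 'test_report']
```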

Implementation of RSA Blind Signature on CryptO-0N2 Protocol

Blind signatures were introduced by Chaum. In this scheme, a signer can “sign” a document without knowing the document's content. This is particularly important in electronic voting. CryptO-0N2 is an electronic voting protocol developed from CryptO-0N. During its development, this protocol was not furnished with the blind signature requirement, so the choices of voters could be determined by the counting center. In this paper, an implementation of blind signatures using the RSA algorithm is presented.
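
The underlying construction is Chaum's classic RSA blind signature. A toy sketch of the blind/sign/unblind/verify steps (deliberately tiny, insecure parameters and a stand-in hash, for illustration only):

```python
import math, secrets

# Toy RSA parameters -- far too small for real use
p, q = 1009, 1013
n, e = p * q, 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                      # signer's private exponent

def H(msg: bytes) -> int:                # stand-in "hash" reduced mod n
    return int.from_bytes(msg, "big") % n

m = H(b"ballot:candidate-42")

# Voter blinds: m' = m * r^e mod n, with gcd(r, n) = 1
while True:
    r = secrets.randbelow(n - 2) + 2
    if math.gcd(r, n) == 1:
        break
m_blind = (m * pow(r, e, n)) % n

# Signer signs the blinded message without seeing m
s_blind = pow(m_blind, d, n)

# Voter unblinds: s = s' * r^-1 mod n; anyone can verify s^e == m (mod n)
s = (s_blind * pow(r, -1, n)) % n
assert pow(s, e, n) == m
print("valid blind signature:", s)
```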

Reducing Energy Consumption and GHG Emission by Integration of Flare Gas with Fuel Gas Network in Refinery

Gas flaring is one of the largest sources of GHG emissions in the oil and gas industries. It is also a major way of wasting energy that could be better utilized and even generate revenue. Minimizing flaring is an effective approach to reducing GHG emissions and conserving energy in flaring systems, and integrating waste and flared gases into the fuel gas networks (FGN) of refineries is an efficient tool for this purpose. A fuel gas network collects fuel gases from various source streams, mixes them in an optimal manner, and supplies them to different fuel sinks such as furnaces, boilers, and turbines. In this article we use the fuel gas network model proposed by Hasan et al. as a base model, modify some of its features, and add constraints on pollution from gas flaring to reduce GHG emissions as far as possible. Results for a refinery case study show that integrating the flare gas stream with waste and natural gas streams to construct an optimal FGN can significantly reduce the total annualized cost and flaring emissions.
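
The flavor of such an FGN optimization can be shown with a small blending LP; all the numbers and the two-sink topology below are made up, and the actual Hasan et al. model includes many more constraints (e.g. quality and pressure specifications):

```python
import numpy as np
from scipy.optimize import linprog

sources = ["flare", "waste", "natural"]
cost    = np.array([0.0, 0.5, 4.0])     # $/unit: flared gas is "free"
supply  = np.array([30.0, 20.0, 1e3])   # availability of each source
heat    = np.array([0.8, 0.6, 1.0])     # heating value per unit
demand  = np.array([35.0, 25.0])        # energy demand of two sinks

ns, nk = len(sources), len(demand)
c = np.tile(cost, nk)                   # decision: flow[sink, source], flattened

A_ub, b_ub = [], []
# Energy balance per sink: sum_s heat[s] * flow[k,s] >= demand[k]
for k in range(nk):
    row = np.zeros(ns * nk)
    row[k * ns:(k + 1) * ns] = -heat
    A_ub.append(row); b_ub.append(-demand[k])
# Supply cap per source: sum_k flow[k,s] <= supply[s]
for s in range(ns):
    row = np.zeros(ns * nk)
    row[s::ns] = 1.0
    A_ub.append(row); b_ub.append(supply[s])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, None))
print("cost:", round(res.fun, 2))
print("flows:\n", np.round(res.x.reshape(nk, ns), 2))  # rows: sinks, cols: sources
```

In this toy instance the cheap flare and waste streams are used first, and natural gas only makes up the shortfall, which is the intuition behind the reported cost and emission reductions.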

How to Integrate Sustainability in Technological Degrees: Robotics at UPC

Embedding sustainability in technological curricula has become a crucial factor for educating engineers with competences in sustainability. In 2008, the Technical University of Catalonia (UPC) designed the Sustainable Technology Excellence Program STEP 2015 in order to ensure successful sustainability embedding. This program takes advantage of the opportunity offered by the redesign of all Bachelor and Master degrees in Spain by 2010 under the European Higher Education Area (EHEA) framework. The STEP program goals are: to design compulsory courses in each degree; to develop the conceptual base and identify reference models in sustainability for all specialties at UPC; to create an internal interdisciplinary network of faculty from all the schools; to initiate new transdisciplinary research activities in technology-sustainability-education; to spread the know-how attained; to achieve international scientific excellence in technology-sustainability-education; and to graduate the first engineers/architects of the new EHEA bachelor degrees with sustainability as a generic competence. Specifically, in this paper the authors explain their experience in leading the STEP program, and two examples are presented: the Industrial Robotics subject and the curriculum of the School of Architecture.

Model Discovery and Validation for the QSAR Problem using Association Rule Mining

There are several approaches to solving the Quantitative Structure-Activity Relationship (QSAR) problem. These approaches are based either on statistical methods or on predictive data mining. Among the statistical methods, one should consider regression analysis, pattern recognition (such as cluster analysis, factor analysis and principal components analysis) or partial least squares. Predictive data mining techniques use neural networks, genetic programming, or neuro-fuzzy knowledge. These approaches have low explanatory capability or none at all. This paper attempts to establish a new approach to solving QSAR problems using descriptive data mining. This way, the relationship between the chemical properties and the activity of a substance can be comprehensibly modeled.
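
A minimal sketch of descriptive mining on discretized descriptors, using the mlxtend implementations of Apriori and rule generation (the descriptor names, bins, and thresholds are illustrative, not drawn from the paper):

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Toy discretized descriptor table: each row is a compound, each boolean
# column a binned property or the activity flag
df = pd.DataFrame({
    "logP_high":   [1, 1, 0, 1, 0, 1],
    "MW_low":      [1, 0, 1, 1, 1, 0],
    "ring_count3": [1, 1, 0, 1, 0, 0],
    "active":      [1, 1, 0, 1, 0, 0],
}).astype(bool)

itemsets = apriori(df, min_support=0.4, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.8)

# Keep only comprehensible property -> activity rules
prop_to_activity = rules[rules["consequents"] == frozenset({"active"})]
print(prop_to_activity[["antecedents", "support", "confidence"]])
```

Each surviving rule reads directly as "compounds with these properties tend to be active", which is exactly the comprehensible modeling the descriptive approach aims for.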

Experimental Analysis of a Diesel Hydrotreating Reactor to Develop a Simplified Tool for Process Real-time Optimization

In this research, a systematic investigation was carried out to determine the optimum conditions of an HDS (hydrodesulfurization) reactor, and a suitable model was developed for a rigorous RTO (real-time optimization) loop of the HDS process. A systematic series of experiments was designed based on CCD (central composite design) and carried out in the related pilot plant to tune the developed model. The design variables in the experiments were temperature, LHSV and pressure, while the hydrogen-to-fresh-feed ratio was kept constant. The ranges of these variables were 320-380 °C, 1-2 hr⁻¹ and 50-55 bar, respectively. A power-law kinetic model was also developed for our further research. The reaction order (the power of the reactant concentration), activation energy and frequency factor of this model were 1.4, 92.66 kJ/mol and k0 = 2.7×10^9, respectively.
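
With these reported constants, the model is a standard Arrhenius power law, -r = k0 · exp(-Ea/RT) · C^1.4. A small sketch evaluating it over the experimental temperature range (the units of k0 and of the concentration are assumed consistent with the authors' data):

```python
import numpy as np

# Power-law HDS rate with the reported constants
k0, Ea, n_order = 2.7e9, 92.66e3, 1.4       # frequency factor, J/mol, order
R = 8.314                                   # J/(mol*K)

def rate(C_s, T_kelvin):
    """-r_HDS = k0 * exp(-Ea/RT) * C_s^1.4 (C_s: sulfur concentration)."""
    return k0 * np.exp(-Ea / (R * T_kelvin)) * C_s ** n_order

for T_c in (320, 350, 380):                 # the experimental temperature range
    print(f"T = {T_c} C -> relative rate {rate(1.0, T_c + 273.15):.3f}")
```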

Benchmarking Cleaner Production Performance of Coal-fired Power Plants Using Two-stage Super-efficiency Data Envelopment Analysis

Benchmarking cleaner production performance is an effective way to control pollution and reduce emissions in the coal-fired power industry. A benchmarking method using two-stage super-efficiency data envelopment analysis (DEA) for coal-fired power plants is proposed: first, the cleaner production performance of DEA-inefficient or weakly DEA-efficient plants is improved, and then the benchmark is selected from the performance-improved power plants. An empirical study is carried out with survey data from 24 coal-fired power plants. The results show that in the first stage the performance of 16 plants is DEA-efficient and that of 8 plants is relatively inefficient. The target values for improving the DEA-inefficient plants are acquired by projection analysis. In the second stage, efficient performance is achieved for all 24 power plants and the benchmark plant is identified. The two-stage benchmarking method is practical for selecting the optimal benchmark in cleaner production for the coal-fired power industry and can continuously improve plants' cleaner production performance.
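
For reference, first-stage scores of this kind can be computed with a standard input-oriented CCR envelopment LP, one per plant; in the super-efficiency variant the evaluated plant is additionally removed from its own reference set. A sketch on random stand-in data (not the surveyed plants):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n = 8                                        # toy stand-in for the 24 plants
X = rng.uniform(1, 5, size=(2, n))           # inputs, e.g. coal and water use
Y = rng.uniform(1, 5, size=(2, n))           # outputs, e.g. power generated

def ccr_efficiency(o):
    """Input-oriented CCR score of plant o: min theta s.t. a composite
    plant (lambda-weights) uses <= theta * inputs_o and >= outputs_o."""
    c = np.r_[1.0, np.zeros(n)]              # z = [theta, lambda_1..lambda_n]
    A = np.vstack([
        np.c_[-X[:, [o]], X],                # X @ lam <= theta * x_o
        np.c_[np.zeros((Y.shape[0], 1)), -Y] # Y @ lam >= y_o
    ])
    b = np.r_[np.zeros(X.shape[0]), -Y[:, o]]
    res = linprog(c, A_ub=A, b_ub=b,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

scores = [ccr_efficiency(o) for o in range(n)]
print(np.round(scores, 3))                   # 1.0 = DEA-efficient plant
```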

Structural Modelling of the LiCl Aqueous Solution Using the Hybrid Reverse Monte Carlo (HRMC) Simulation

The Reverse Monte Carlo (RMC) simulation is applied to the study of an aqueous electrolyte, LiCl·6H2O. On the basis of the available experimental neutron scattering data, RMC computes the pair radial distribution functions in order to explore the structural features of the system. The results obtained include some unrealistic features. To overcome this problem, we use the Hybrid Reverse Monte Carlo (HRMC) method, which incorporates an energy constraint in addition to the constraints commonly derived from experimental data. Our results show good agreement between the experimental and computed partial distribution functions (PDFs) as well as a significant improvement in the pair partial distribution curves. This kind of study can be considered a useful test of a given interaction model for conventional simulation techniques.
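
The move test that distinguishes HRMC from plain RMC can be caricatured in a few lines: the acceptance criterion mixes the experimental misfit (a chi-squared term) with a model energy. Everything below is a 1-D toy, not the LiCl·6H2O system:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "structure": a target pair-distance histogram stands in for the
# experimental g(r); all numbers are illustrative
pos = rng.uniform(0, 10, 30)
bins = np.linspace(0, 10, 21)
centers = 0.5 * (bins[:-1] + bins[1:])
g_target = np.exp(-0.5 * (centers - 3.0) ** 2)

def pair_dist(p):
    return np.abs(p[:, None] - p[None, :])[np.triu_indices(len(p), 1)]

def chi2(p):                         # misfit against the "experimental" data
    g, _ = np.histogram(pair_dist(p), bins=bins, density=True)
    return np.sum((g - g_target) ** 2)

def energy(p):                       # crude repulsive penalty as the "model"
    return np.sum(1.0 / (pair_dist(p) + 1e-6) ** 12)

w_E, kT = 1e-3, 1.0
cost = chi2(pos) + w_E * energy(pos)
for step in range(5000):
    trial = pos.copy()
    trial[rng.integers(len(pos))] += rng.normal(0, 0.2)
    trial = np.clip(trial, 0, 10)
    new = chi2(trial) + w_E * energy(trial)
    # HRMC move test: experimental misfit plus the energy constraint together
    if new < cost or rng.random() < np.exp((cost - new) / kT):
        pos, cost = trial, new

print("final combined cost:", round(cost, 4))
```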

Flow Properties of Commercial Infant Formula Powders

The objective of this work was to investigate the flow properties of powdered infant formula samples. The samples were purchased at a local pharmacy and differed in composition. A lactose-free infant formula, a gluten-free infant formula, and infant formulas containing dietary fibers and probiotics were tested and compared with a regular infant formula sample which did not contain any of these supplements. Particle size and bulk density were determined, and their influence on flow properties is discussed. There were no significant differences in the bulk densities of the samples; therefore, a connection between flow properties and bulk density could not be established. The lactose-free infant formula showed flow properties different from those of the standard supplement-free sample. The gluten-free infant formula with added probiotic microorganisms and dietary fiber had the narrowest particle size distribution range and exhibited the best flow properties. All the other samples exhibited the same tendency of decreasing compaction coefficient with increasing flow speed, which means they all become freer-flowing at higher flow speeds.

ISC (Intelligent Subspace Clustering): A Density-Based Clustering Approach for High-Dimensional Datasets

Many real-world data sets have a very high-dimensional feature space. Most clustering techniques use the distance or similarity between objects as a measure to build clusters, but in high-dimensional spaces distances between points become relatively uniform. In such cases, density-based approaches may give better results. Subspace clustering algorithms automatically identify lower-dimensional subspaces of the higher-dimensional feature space in which clusters exist. In this paper, we propose a new clustering algorithm, ISC (Intelligent Subspace Clustering), which tries to overcome three major limitations of the existing state-of-the-art techniques. First, ISC determines input parameters such as the ε-distance at various levels of subspace clustering, which helps in finding meaningful clusters, since a uniform-parameter approach is not suitable for different kinds of databases. Second, ISC implements dynamic and adaptive determination of meaningful clustering parameters based on a hierarchical filtering approach. Third, and most important, ISC is capable of incremental learning and of dynamic inclusion and exclusion of subspaces, which leads to better cluster formation.
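
The first of these ideas, per-subspace parameter determination, can be sketched by estimating ε from the k-distance curve of each candidate subspace instead of fixing it globally; DBSCAN stands in for the density-based step, and the subspaces and thresholds below are illustrative:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(5)
# 50-D data with a cluster living only in dims (0, 1): the subspace case
X = rng.normal(0, 1, (300, 50))
X[:100, :2] = rng.normal(5, 0.3, (100, 2))

def adaptive_eps(D, k=5, quantile=0.9):
    """Estimate eps per subspace from the k-distance curve rather than
    using one uniform parameter for every subspace."""
    d, _ = NearestNeighbors(n_neighbors=k + 1).fit(D).kneighbors(D)
    return np.quantile(d[:, k], quantile)

for dims in [(0, 1), (10, 11)]:              # candidate subspaces
    D = X[:, list(dims)]
    labels = DBSCAN(eps=adaptive_eps(D), min_samples=5).fit_predict(D)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print(f"subspace {dims}: {n_clusters} cluster(s)")
```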

A Hypermap for Supply Chain Management

We present a prototype interactive (hyper) map of strategic, tactical, and logistic options for supply chain management. The map comprises an anthology of options, broadly classified within the strategic spectrum of efficiency versus responsiveness, and according to logistic and cross-functional drivers. The options are exemplified by cases in diverse industries. We seek to organize all this information and these ideas to help supply chain managers identify effective choices for specific business environments. The key and innovative linkage we introduce is the configuration of competitive forces. Instead of going through seemingly endless and isolated cases and wondering how one can borrow from them, we aim to provide a guide based on force comparisons. The premise is that best practices in a different industry facing similar forces may be a most productive resource in supply chain design and planning. A prototype template is demonstrated.

A Revisited View to the Paced Auditory Serial Addition Test (PASAT) in Female and Male Normal Subjects

The Paced Auditory Serial Addition Test (PASAT) has been used as a common research tool for different neurological disorders such as Multiple Sclerosis. Recently, technology has allowed researchers to introduce a new version of the test, the Paced Visual Serial Addition Test (PVSAT). In this paper, a computerized version of these two tests is introduced. Besides interpreting the number of correct responses, the software also calculates the reaction time of each subject. We hypothesize that paying attention to the reaction time may be valuable. For this purpose, sixty-eight female and fifty-eight male normal subjects were enrolled in the study. We investigated the similarity between PASAT3 and PVSAT3 in the number of correct responses and in the new criterion (the average reaction time of each subject). The hypothesis that the two tests are similar was rejected (p-value = 0.000), which means that the two tests differ. No effect of sex on the tests was found, since the p-values for the difference between PASAT3 and PVSAT3 were the same for both sexes (p-value = 0.000), meaning that male and female subjects performed the tests at no different level of performance. The new criterion shows a negative correlation with age, which suggests that aged normal subjects may give the same number of correct responses as young subjects but with more latent responses. This supports the importance of reaction time.

Motor Imagery Signal Classification Using Adaptive Recursive Bandpass Filter and Adaptive Autoregressive Models for Brain Machine Interface Designs

A noteworthy point in the advancement of Brain Machine Interface (BMI) research is the ability to accurately extract features of brain signals and to classify them into targeted control actions with the simplest possible procedures, since the expected beneficiaries are disabled people. In this paper, a new feature extraction method using the combination of adaptive bandpass filters and adaptive autoregressive (AAR) modelling is proposed and applied to the classification of right and left motor imagery signals extracted from the brain. The introduction of the adaptive bandpass filter improves the characterization of the autocorrelation functions of the AAR models, as it enhances and strengthens the EEG signal, which is noisy and stochastic in nature. The experimental results on the Graz BCI data set show that, with the proposed feature extraction method, LDA and SVM classifiers outperform other AAR approaches of the BCI 2003 competition in terms of mutual information, the competition criterion, and misclassification rate.
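
A compact sketch of the AAR-plus-classifier pipeline, with an LMS update tracking the AR coefficients and noisy sinusoids standing in for bandpass-filtered EEG trials; the model order, step size, and frequencies are illustrative assumptions, not the Graz data pipeline:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)

def aar_features(x, p=6, mu=0.005):
    """Track time-varying AR coefficients with an LMS update; the final
    coefficient vector serves as the trial's feature vector."""
    a = np.zeros(p)
    for t in range(p, len(x)):
        past = x[t - p:t][::-1]
        err = x[t] - a @ past
        a += mu * err * past          # LMS update of the AAR coefficients
    return a

def trial(freq):
    """Noisy sinusoid standing in for a bandpass-filtered EEG trial."""
    t = np.arange(512) / 128.0
    return np.sin(2 * np.pi * freq * t) + 0.5 * rng.normal(size=512)

# Two classes with different dominant rhythms mimic left/right imagery
X = np.array([aar_features(trial(f)) for f in [10] * 20 + [22] * 20])
y = np.array([0] * 20 + [1] * 20)

acc = LinearDiscriminantAnalysis().fit(X[::2], y[::2]).score(X[1::2], y[1::2])
print("LDA accuracy on held-out trials:", acc)
```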

Colour Stability of Wild Cactus Pear Juice

Prickly pear (Opuntia spp.) fruit has received renewed interest since it contains betalain pigments that give an attractive purple colour for the production of juice. Prickly pear juice was prepared by homogenizing the fruit and treating the pulp with 48 g of pectinase from Aspergillus niger. Titratable acidity was determined by diluting 10 ml of prickly pear juice with 90 ml of deionized water and titrating to pH 8.2 with 0.1 N NaOH. Brix was measured using a refractometer, and the ascorbic acid content was assayed spectrophotometrically. Colour variation was determined colorimetrically (Hunter L.a.b.). Hunter L.a.b. analysis showed that the red-purple colour of prickly pear juice was affected by the juice treatments. This was indicated by low colour-difference-meter lightness (CDML*), hue, CDMa* and CDMb* values. It was observed that non-treated prickly pear juice had a high CDML* of 3.9 compared with the treated juices (range 3.29 to 2.14). The CDML* significantly (p

W3-Miner: Mining Weighted Frequent Subtree Patterns in a Collection of Trees

Mining frequent tree patterns has many useful applications in XML mining, bioinformatics, network routing, etc. Most frequent subtree mining algorithms (e.g. FREQT, TreeMiner and CMTreeMiner) use the anti-monotone property in the candidate subtree generation phase. However, none of these algorithms has verified the correctness of this property for tree-structured data. In this research it is shown that anti-monotonicity does not generally hold when weighted support is used in tree pattern discovery. As a result, tree mining algorithms based on this property would probably miss some of the valid frequent subtree patterns in a collection of trees. In this paper, we investigate the correctness of the anti-monotone property for the problem of weighted frequent subtree mining, and we propose W3-Miner, a new algorithm for the full extraction of frequent subtrees. The experimental results confirm that W3-Miner finds some frequent subtrees that the previously proposed algorithms are not able to discover.
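
The failure of anti-monotonicity is easy to see with a toy weighting. Below, itemsets stand in for subtrees, and "weighted support" multiplies plain support by pattern size, which is one common choice, not necessarily W3-Miner's exact definition:

```python
# Toy check that weighted support need not be anti-monotone
db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"a", "b", "c"}]

def wsupport(pattern):
    plain = sum(pattern <= t for t in db) / len(db)
    return len(pattern) * plain          # size-weighted support

small, big = {"a"}, {"a", "b"}
print("wsup(a)  =", wsupport(small))     # 1 * 4/4 = 1.0
print("wsup(ab) =", wsupport(big))       # 2 * 3/4 = 1.5 > 1.0
# The superpattern scores higher than its subpattern, so pruning based on
# the subpattern's score would wrongly discard valid candidates.
```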