Authentication Analysis of the 802.11i Protocol

The IEEE designed the 802.11i protocol to address the security issues in wireless local area networks. Formal analysis is important to ensure that protocols work properly without resorting to tedious testing and debugging, which can only show the presence of errors, never their absence. In this paper, we present the formal verification of an abstract protocol model of 802.11i. We translate the 802.11i protocol into the Strand Space Model and then prove the authentication property of the resulting model using the Strand Space formalism. The intruder in our model is given powerful capabilities, and the repercussions of possible attacks are evaluated. Our analysis proves that the authentication of 802.11i is not compromised in the presented model. We further demonstrate how changes to our model would yield a successful man-in-the-middle attack.
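The authentication goal at stake can be sketched with a toy model of the 802.11i 4-way handshake. This is a simplified illustration only: real 802.11i derives the PTK with a specific PRF over MAC addresses and sorted nonces, and uses per-frame MICs; the derivation and message names below are assumptions for the sketch.

```python
import hmac, hashlib, os

def derive_ptk(pmk: bytes, anonce: bytes, snonce: bytes) -> bytes:
    # Simplified PTK derivation: HMAC over both nonces under the shared PMK.
    # (Real 802.11i uses a PRF over MAC addresses and nonces in sorted order.)
    return hmac.new(pmk, b"PTK" + anonce + snonce, hashlib.sha256).digest()

def mic(ptk: bytes, message: bytes) -> bytes:
    return hmac.new(ptk, message, hashlib.sha256).digest()

# Shared secret (PMK) known to both parties; fresh nonces per handshake.
pmk = b"\x01" * 32
anonce, snonce = os.urandom(32), os.urandom(32)

# Msg 1: authenticator -> supplicant carries ANonce (unprotected).
# Msg 2: supplicant -> authenticator carries SNonce plus a MIC under the PTK.
ptk_supplicant = derive_ptk(pmk, anonce, snonce)
msg2_mic = mic(ptk_supplicant, b"msg2" + snonce)

# The authenticator derives the same PTK and checks the MIC; a forged
# SNonce or MIC from an intruder without the PMK fails this check.
ptk_authenticator = derive_ptk(pmk, anonce, snonce)
assert hmac.compare_digest(msg2_mic, mic(ptk_authenticator, b"msg2" + snonce))
```

The authentication property hinges on the MIC: an intruder can replay or inject nonces, but cannot produce a valid MIC without the PMK, which is essentially what the Strand Space proof establishes for the abstract model.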

Development of Gas Chromatography Model: Propylene Concentration Using Neural Network

Gas chromatography (GC) is the most widely used technique in analytical chemistry. However, GC has a high initial cost and requires frequent maintenance. This paper examines the feasibility and potential of using a neural network model as an alternative whenever GC is unavailable. It can also serve as part of system verification of GC performance for preventive maintenance activities. The paper presents the performance of a MultiLayer Perceptron (MLP) with a backpropagation structure. Results demonstrate that a neural network model trained using this structure provides adequate results and is suitable for this purpose.
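An MLP trained by backpropagation, as used above, can be sketched minimally as follows. The two inputs and the target function are illustrative stand-ins, not the paper's GC data; the network size and learning rate are likewise assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: map two hypothetical operating inputs to a
# "propylene concentration" target (purely illustrative).
X = rng.uniform(-1, 1, size=(200, 2))
y = (0.6 * X[:, 0] - 0.3 * X[:, 1] ** 2).reshape(-1, 1)

# One-hidden-layer MLP with tanh activation.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, out0 = forward(X)
loss0 = np.mean((out0 - y) ** 2)      # initial mean squared error

for _ in range(500):
    h, out = forward(X)
    err = out - y                      # dLoss/dout (up to a constant factor)
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)   # backpropagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, out1 = forward(X)
loss1 = np.mean((out1 - y) ** 2)       # training error after backpropagation
```

In practice the inputs would be the measurable process variables surrounding the GC and the target its reported concentration, so the trained network can stand in when the instrument is offline.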

On the Multiplicity of Discriminants of Relative Quadratic Extensions of Quintic Fields

According to Hermite, there exist only finitely many number fields with a given degree and a given value of the discriminant; nevertheless, this number is not known in general. The purpose of this work is to determine the maximum number of number fields of degree 10 with a given discriminant that contain a subfield of degree 5 with fixed class number, narrow class number and Galois group. We construct lists of the first coincidences of 52 (resp. 50, 40, 48, 22, 6) nonisomorphic number fields of degree 10 and signature (6,2) (resp. (4,3), (8,1), (2,4), (0,5), (10,0)) with the same discriminant, each containing a quintic field. For each field in the lists, we give its discriminant, the discriminant of its subfield, a relative polynomial generating the field over its quintic subfield together with its relative discriminant, and the corresponding polynomial over Q and its Galois closure, followed by concluding remarks.

Robust Human Rights Governance: Developing International Criteria

Many states are now committed to implementing international human rights standards domestically. In terms of practical governance, how might effectiveness be measured? A face-value answer can be found in domestic laws and institutions relating to human rights. However, this article provides two further tools to help states assess their status on the spectrum from robust to fragile human rights governance. The first recognises that each state has its own 'human rights history' and that the ideal end stage is robust human rights governance; the second is a set of criteria developed to assess robustness. Although a New Zealand case study is used to illustrate these tools, the widespread adoption of human rights standards by many states inevitably means that the issues are relevant to other countries, even though there will always be varying degrees of similarity and difference in constitutional background and in developed or emerging human rights systems.

Screening Wheat Parents of Mapping Populations for Heat and Drought Tolerance: Detection of Wheat Genetic Variation

To evaluate the genetic variation of wheat (Triticum aestivum) affected by heat and drought stress, eight Australian wheat genotypes that are parents of Doubled Haploid (DH) mapping populations were examined at the vegetative stage. The water stress experiment was conducted at 65% field capacity in a growth room, and the heat stress experiment was conducted in the research field under irrigation over summer. Results show that water stress decreased dry shoot weight and RWC but increased osmolarity and mean Fv/Fm values in all varieties except Krichauff. Krichauff and Kukri had the maximum RWC under drought stress. The Trident variety showed the maximum WUE, osmolarity (610 mM/kg), dry matter, quantum yield and Fv/Fm (0.815) under water stress conditions. The recovery of quantum yield was apparent between 4 and 7 days after stress in all varieties; however, a further increase in water stress thereafter led to a strong decrease in quantum yield. There was genetic variation in leaf pigment content among varieties under heat stress. Heat stress significantly decreased the total chlorophyll content as measured by SPAD. Krichauff had the maximum anthocyanin content (2.978 A/g FW), chlorophyll a+b (2.001 mg/g FW) and chlorophyll a (1.502 mg/g FW). The maximum chlorophyll b (0.515 mg/g FW) and carotenoid (0.234 mg/g FW) contents belonged to Kukri. The quantum yield of all varieties decreased significantly when the air temperature increased from 28 °C to 36 °C over 6 days; however, recovery of quantum yield was apparent after the 8th day in all varieties, with the maximum decrease and recovery observed in Krichauff. The drought- and heat-tolerant and moderately tolerant wheat genotypes included Trident, Krichauff, Kukri and RAC875, while Molineux, Berkut and Excalibur clustered as the most sensitive and moderately sensitive genotypes.
Finally, the results show that there was significant genetic variation among the eight varieties studied under heat and water stress.

Predicting Bankruptcy using Tabu Search in the Mauritian Context

Throughout this paper, a relatively new technique, the Tabu search variable selection model, is elaborated, showing how it can be efficiently applied within the financial world whenever researchers face the selection of a subset of variables from a whole set of descriptive variables under analysis. In the field of financial prediction, researchers often have to select a subset of variables from a larger set to solve different types of problems such as corporate bankruptcy prediction, personal bankruptcy prediction, mortgage and credit scoring, and the Arbitrage Pricing Model (APM). Consequently, to demonstrate how the method operates and to illustrate its usefulness as well as its superiority over other commonly used methods, the Tabu search algorithm for variable selection is compared to two main alternative search procedures, namely stepwise regression and the maximum R² improvement method. The Tabu search is then applied in finance, where it attempts to predict corporate bankruptcy by selecting the most appropriate financial ratios and thus creating its own prediction score equation. In comparison to other methods, notably the Altman Z-Score model, the Tabu search model produces a higher success rate in correctly predicting the failure of firms or the continued operation of existing entities.
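The Tabu search for variable selection described above can be sketched on synthetic data. The neighbourhood (flip one variable), tabu tenure, objective (uncentred R² of a least-squares fit) and the "financial ratios" themselves are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "financial ratios": only variables 0 and 3 carry signal.
n, p = 120, 8
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=n)

def r2(subset):
    """Uncentred R^2 of an OLS fit on the chosen variable subset."""
    if not subset:
        return 0.0
    A = X[:, sorted(subset)]
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - resid @ resid / (y @ y)

def tabu_search(iters=60, tenure=4):
    # Move: toggle one variable in/out. Recently toggled variables are
    # tabu for `tenure` iterations unless they beat the best score
    # found so far (aspiration criterion).
    current = set(); best = set(); best_score = r2(best)
    tabu = {}  # variable -> iteration index until which it is tabu
    for it in range(iters):
        candidates = []
        for j in range(p):
            move = current ^ {j}
            score = r2(move)
            if tabu.get(j, -1) >= it and score <= best_score:
                continue  # tabu move with no aspiration: skip
            candidates.append((score, j, move))
        score, j, current = max(candidates)   # best admissible neighbour
        tabu[j] = it + tenure
        if score > best_score:
            best_score, best = score, set(current)
    return best, best_score

best, score = tabu_search()
```

Unlike stepwise regression, the tabu list lets the search accept temporarily worse subsets and so escape the local optima where a pure greedy procedure would stop.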

Kinematic and Dynamic Analysis of a Lower Limb Exoskeleton

This paper provides the kinematic and dynamic analysis of a lower limb exoskeleton. The forward and inverse kinematics of the proposed exoskeleton are derived using the Denavit-Hartenberg method. The torques required from the actuators are calculated using the Lagrangian formulation. This research can be used to design the controller of the proposed exoskeleton.
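The Denavit-Hartenberg forward kinematics mentioned above amount to chaining one standard link transform per joint. The sketch below uses a hypothetical planar two-link leg (hip plus knee) with illustrative link lengths; the actual exoskeleton's DH table is not given in the abstract.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform A_i(theta, d, a, alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Hypothetical planar 2-link leg: thigh and shank lengths in metres.
a1, a2 = 0.40, 0.42

def forward_kinematics(q1, q2):
    # Chain the two link transforms; both joints are revolute in-plane.
    T = dh_transform(q1, 0.0, a1, 0.0) @ dh_transform(q2, 0.0, a2, 0.0)
    return T[:3, 3]  # ankle position expressed in the hip frame

p = forward_kinematics(0.0, 0.0)  # fully extended leg: x = a1 + a2
```

Inverse kinematics then solves these same equations for (q1, q2) given a desired ankle position, and the Lagrangian dynamics build on the same frame assignments.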

Predicting and Mitigating Dredging Dispersion Impact: A Case of Phuket Port, Thailand

Dredging activities inevitably cause sediment dispersion. In locations with ecologically important areas such as mangroves or coral reefs, careful planning of the dredging can significantly reduce negative impacts. This article uses the dredging at Phuket port, Thailand, as a case study to demonstrate how computer simulations can help protect existing coral reefs. The MIKE21 software package was applied, and the information required by the simulations was gathered. After calibrating and verifying the model, various dredging scenarios were simulated to predict spoil movement. The simulation results were used as guidance for setting up an environmental protection measure. Finally, the recommendation was made to dredge during flood tide with silt curtains installed.

Mixture Design Experiment on Flow Behaviour of O/W Emulsions as Affected by Polysaccharide Interactions

The interaction effects of xanthan gum (XG), carboxymethyl cellulose (CMC), and locust bean gum (LBG) on the flow properties of oil-in-water emulsions were investigated using a mixture design experiment. Blends of XG, CMC and LBG were prepared according to an augmented simplex-centroid mixture design (10 points) and used at 0.5% (wt/wt) in the emulsion formulations. An appropriate mathematical model was fitted to express each response as a function of the proportions of the blend components, enabling empirical prediction of the response for any blend of the components. The synergistic interaction effect of the ternary XG:CMC:LBG blends at approximately 33-67% XG was shown to be much stronger than that of the binary XG:LBG blend at 50% XG (p < 0.05). Nevertheless, an antagonistic interaction effect became significant when the CMC level in blends exceeded 33% (p < 0.05). The yield stress and apparent viscosity (at 10 s⁻¹) responses were successfully fitted with a special quartic model, while the flow behaviour index and consistency coefficient were fitted with a full quartic model (adjusted R² ≥ 0.90). This study found that a mixture design approach can serve as a valuable tool for elucidating and predicting interaction effects beyond conventional two-component blends.
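The mixture-model fitting step can be sketched with a Scheffé quadratic model on simplex-centroid-style points. The design points, coefficients and responses below are synthetic illustrations (the paper fits special and full quartic models to measured rheological data); positive interaction coefficients play the role of synergy, negative ones of antagonism.

```python
import numpy as np

# Simplex-centroid-style design points for a 3-component blend
# (XG, CMC, LBG proportions summing to 1): vertices, binary midpoints,
# overall centroid, and three axial check points.
points = np.array([
    [1, 0, 0], [0, 1, 0], [0, 0, 1],
    [.5, .5, 0], [.5, 0, .5], [0, .5, .5],
    [1/3, 1/3, 1/3],
    [2/3, 1/6, 1/6], [1/6, 2/3, 1/6], [1/6, 1/6, 2/3],
])

def scheffe_quadratic(x):
    x1, x2, x3 = x
    # Scheffé quadratic mixture model: linear blending terms plus
    # pairwise interaction (synergy/antagonism) terms; no intercept.
    return [x1, x2, x3, x1 * x2, x1 * x3, x2 * x3]

# Illustrative responses generated from known coefficients: the x1*x2
# term mimics synergy, the x1*x3 term antagonism.
true_beta = np.array([2.0, 1.0, 1.5, 4.0, -3.0, 0.5])
A = np.array([scheffe_quadratic(x) for x in points])
y = A @ true_beta

# Least-squares fit recovers the blending and interaction coefficients.
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
```

The fitted coefficients then predict the response for any blend proportion, which is exactly the empirical-prediction use described above; the quartic models in the paper simply add higher-order terms to the same regression structure.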

An Immersive Motion Capture Environment

Motion capture technology has been in use for quite a while, and considerable research has been done in this area. Nevertheless, we discovered open issues within current motion capture environments. In this paper we provide a state-of-the-art overview of the relevant research areas and show issues with current motion capture environments. Observations, interviews and questionnaires were used to reveal the challenges actors currently face in a motion capture environment. Furthermore, we introduce the idea of creating a more immersive motion capture environment as a potential solution to improve acting performances and motion capture outcomes. The goal is to explain the identified open issues and the developed ideas, which shall serve as a basis for further research. Moreover, a methodology to address the interaction and systems design issues is proposed. A future outcome could be that motion capture actors are able to perform more naturally, especially when using a non-body-worn solution.

Bridging the Mental Gap between Convolution Approach and Compartmental Modeling in Functional Imaging: Typical Embedding of an Open Two-Compartment Model into the Systems Theory Approach of Indicator Dilution Theory

Functional imaging procedures for the non-invasive assessment of tissue microcirculation are in high demand, but they require a mathematical approach describing the trans- and intercapillary passage of tracer particles. Up to now, two hitherto distinct theoretical concepts have been established for tracer kinetic modeling of contrast agent transport in tissues: pharmacokinetic compartment models, which are usually written as coupled differential equations, and the indicator dilution theory, which can be generalized in accordance with the theory of linear time-invariant (LTI) systems by using a convolution approach. Based on mathematical considerations, it can be shown that, also in the case of an open two-compartment model well known from functional imaging, the concentration-time course in tissue is given by a convolution. This allows a separation of the arterial input function from a system function, the impulse response function, which summarizes the available information on tissue microcirculation. For this reason, it is possible to integrate the open two-compartment model into the system-theoretic concept of indicator dilution theory (IDT), so that results known from IDT remain valid for the compartment approach. Given the long history of applications of compartmental analysis, similar solutions of the so-called forward problem, even in a more general context, can already be found in the extensive literature of the seventies and early eighties. Nevertheless, to this day, within the field of biomedical imaging (though not from the mathematical point of view) there seems to be a gulf between the two approaches, which the author would like to bridge by an exemplary analysis of the well-known model.
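One common form of the embedding can be sketched as follows; the rate constants $K_1, k_2, k_3, k_4$ are generic symbols for a standard open two-compartment (two-tissue) model, not values taken from the paper. With arterial input $C_a(t)$ and compartment concentrations $C_1, C_2$:

```latex
\frac{dC_1}{dt} = K_1\,C_a(t) - (k_2+k_3)\,C_1(t) + k_4\,C_2(t),
\qquad
\frac{dC_2}{dt} = k_3\,C_1(t) - k_4\,C_2(t).
```

Because the system is linear and time-invariant, solving it yields the tissue curve as a convolution of the input with a biexponential impulse response:

```latex
C_T(t) = C_1(t) + C_2(t) = \bigl(C_a * R\bigr)(t),
\qquad
R(t) = A_1 e^{-\alpha_1 t} + A_2 e^{-\alpha_2 t},
```

```latex
\alpha_{1,2} = \tfrac{1}{2}\Bigl[(k_2+k_3+k_4) \pm
\sqrt{(k_2+k_3+k_4)^2 - 4\,k_2 k_4}\Bigr],
```

with the amplitudes $A_1, A_2$ fixed by partial-fraction expansion in the rate constants. This is precisely the LTI separation discussed above: $C_a$ is the input function and $R$ the system (impulse response) function of indicator dilution theory.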

Internal Force State Recognition of Jiujiang Bridge Based on Cable Force-displacement Relationship

The nearly 21-year-old Jiujiang Bridge, which suffers from an uneven line shape, persistent large downward deflection of the main beam and cracking of the box girder, needs reinforcement and cable adjustment. It has undergone cable adjustment twice, with incomplete data. Therefore, identifying the initial internal force state of the Jiujiang Bridge is the key to the cable adjustment project. Based on parameter identification from static load test data, this paper suggests determining the initial internal force state of the cable-stayed bridge using a cable force-displacement relationship parameter identification method: by measuring the displacement and the change in cable forces twice, one can identify the parameters concerned by means of optimization. This method is applied to the cable adjustment, replacement and reinforcement project for the Jiujiang Bridge as guidance for the cable adjustment and reinforcement of the bridge.
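The optimization step in the identification can be sketched on a deliberately simplified linearized model: assume measured displacement changes respond linearly to applied cable-force changes through unknown influence coefficients. The model, dimensions and numbers are illustrative assumptions, not the bridge's actual data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linearized model: the measured girder displacement change
# is delta = F @ theta, where each row of F holds the cable-force changes
# applied in one test and theta collects the unknown influence
# coefficients to be identified.
n_cables, n_tests = 4, 12
theta_true = np.array([0.8, -0.3, 0.5, 0.2])        # mm per kN (illustrative)
F = rng.uniform(-50, 50, size=(n_tests, n_cables))  # applied force changes, kN
delta = F @ theta_true + 0.05 * rng.normal(size=n_tests)  # measured displ., mm

# Identify the parameters by least-squares optimization:
# minimize || F @ theta - delta ||^2 over theta.
theta_hat, *_ = np.linalg.lstsq(F, delta, rcond=None)
```

With the coefficients identified, the measured cable-force and displacement pairs can be propagated back to an estimate of the initial internal force state; the real procedure replaces this toy linear map with the bridge's finite-element response.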

A Voltage Based Maximum Power Point Tracker for Low Power and Low Cost Photovoltaic Applications

This paper describes the design of a voltage-based maximum power point tracker (MPPT) for photovoltaic (PV) applications. Of the various MPPT methods, the voltage-based method is considered the simplest and most cost-effective. Its major disadvantage is that the PV array must be disconnected from the load to sample its open-circuit voltage, which inevitably results in power loss. Another disadvantage arises under rapid irradiance variation: if the duration between two successive samplings, called the sampling period, is too long, there is a considerable loss, because the output voltage of the PV array follows an unchanged reference during the sampling period. Once a maximum power point (MPP) is tracked and a change in irradiation occurs between two successive samplings, the new MPP is not tracked until the next sampling of the PV array voltage. This paper proposes an MPPT circuit in which the sampling interval of the PV array voltage and the sampling period have been shortened, and the sample-and-hold circuit has also been simplified. The proposed circuit does not use a microcontroller or a digital signal processor and is thus suitable for low-cost and low-power applications.
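The voltage-based (fractional open-circuit voltage) principle can be sketched numerically. The power-curve shape, the K = 0.76 factor and all panel numbers below are illustrative assumptions (K in roughly the 0.71-0.78 range is commonly cited), not values from the paper.

```python
import numpy as np

# Illustrative PV power curve P(V); the exponent M just shapes the knee
# of the curve near the open-circuit voltage.
V_OC, I_SC, M = 21.0, 3.0, 9
def pv_power(v):
    return v * I_SC * (1.0 - (v / V_OC) ** M)

# Voltage-based MPPT: periodically disconnect the load, sample the
# open-circuit voltage, then regulate the array at Vref = K * Voc.
K = 0.76
def mppt_reference(v_oc_sample):
    return K * v_oc_sample

v_ref = mppt_reference(V_OC)
grid = np.linspace(0.0, V_OC, 2001)
p_max = pv_power(grid).max()   # true maximum power on this curve
p_tracked = pv_power(v_ref)    # power delivered at the K*Voc reference
```

The sketch shows why the method is cheap: tracking reduces to one sample and one multiplication, which is implementable with a sample-and-hold and analog circuitry, and also why the sampling period matters: v_ref stays frozen between Voc samples even if irradiance moves the true MPP.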

On the Seismic Response of Colliding Structures

This study examines the inelastic behavior of adjacent planar reinforced concrete (R.C.) frames subjected to strong ground motions. The investigation focuses on the effect of vertical ground motion on seismic pounding. The examined structures are modeled and analyzed with the RUAUMOKO dynamic nonlinear analysis program using reliable hysteretic models for both structural members and contact elements. It is found that vertical ground motion only mildly affects the seismic response of adjacent buildings subjected to structural pounding and can therefore be neglected in the assessment of displacements and interstorey drifts. However, structural damage is moderately affected by the vertical component of earthquakes.

Simulation of Voltage Controlled Tunable All Pass Filter Using LM13700 OTA

In recent years, high-frequency integrated circuits, filters and systems based on the Operational Transconductance Amplifier (OTA) have been widely investigated. The usefulness of OTAs over conventional op-amps in the design of both first-order and second-order active filters is well documented. This paper discusses some tunability issues, previously unreported for any commercial OTA, using the Matlab/Simulink® software. Based on the simulation results, two first-order voltage-controlled all-pass filters with phase tuning capability are proposed.
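As a hedged sketch of the tuning mechanism (the paper's exact circuit topology is not specified here; this is one standard first-order OTA-C all-pass realization), the transfer function and its control law can be written as:

```latex
H(s) = \frac{g_m/C - s}{\,g_m/C + s\,},
\qquad
|H(j\omega)| = 1,
\qquad
\varphi(\omega) = -2\arctan\!\left(\frac{\omega C}{g_m}\right).
```

For a bipolar OTA such as the LM13700, the transconductance is set by the amplifier bias current $I_{ABC}$, approximately

```latex
g_m \approx \frac{I_{ABC}}{2V_T}, \qquad V_T \approx 26\ \mathrm{mV}
\ \text{at room temperature},
```

so sweeping $I_{ABC}$ moves the pole-zero frequency $g_m/C$ and hence the phase response, while the unity magnitude is preserved. This is the basis of the voltage-controlled phase tuning investigated in the paper.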

A Case Study on Appearance Based Feature Extraction Techniques and Their Susceptibility to Image Degradations for the Task of Face Recognition

Over the past decades, automatic face recognition has become a highly active research area, mainly due to the countless application possibilities in both the private and the public sector. Numerous algorithms have been proposed in the literature to cope with the problem of face recognition; nevertheless, a group of methods commonly referred to as appearance based has emerged as the dominant solution to the face recognition problem. Many comparative studies of the performance of appearance based methods have already been presented in the literature, not infrequently with inconclusive and often contradictory results. No consensus has been reached within the scientific community regarding the relative ranking of the efficiency of appearance based methods for the face recognition task, let alone regarding their susceptibility to appearance changes induced by various environmental factors. To tackle these open issues, this paper assesses the performance of the three dominant appearance based methods, principal component analysis, linear discriminant analysis and independent component analysis, and compares them on an equal footing (i.e., with the same preprocessing procedure, with parameters optimized for the best possible performance, etc.) in face verification experiments on the publicly available XM2VTS database. In addition to the comparative analysis on the XM2VTS database, ten degraded versions of the database are also employed in the experiments to evaluate the susceptibility of the appearance based methods to various image degradations which can occur in "real-life" operating conditions. Our experimental results suggest that linear discriminant analysis ensures the most consistent verification rates across the tested databases.
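The appearance based pipeline that all three compared methods share can be sketched with PCA ("eigenfaces") on tiny synthetic stand-ins for face images; the data, dimensions and nearest-neighbour matcher below are illustrative assumptions, not the XM2VTS protocol.

```python
import numpy as np

rng = np.random.default_rng(3)

# Tiny synthetic stand-in for face images: 20 "subjects", 64-pixel
# vectors, several samples per subject around a subject-specific mean.
n_sub, dim, per = 20, 64, 4
means = rng.normal(0, 5, size=(n_sub, dim))
train = np.vstack([m + rng.normal(0, 0.5, size=(per, dim)) for m in means])
labels = np.repeat(np.arange(n_sub), per)

# PCA: centre the data and take the leading right singular vectors as
# the projection basis (the "eigenfaces").
mu = train.mean(0)
_, _, Vt = np.linalg.svd(train - mu, full_matrices=False)
W = Vt[:15].T                       # keep 15 principal components
proj = (train - mu) @ W             # gallery templates in PCA space

def identify(probe):
    """Nearest-neighbour identity decision in the PCA subspace."""
    q = (probe - mu) @ W
    return labels[np.argmin(np.linalg.norm(proj - q, axis=1))]

probe = means[7] + rng.normal(0, 0.5, size=dim)  # a degraded new sample
pred = identify(probe)
```

LDA and ICA differ only in how the projection matrix W is computed (class-discriminative directions and statistically independent components, respectively); the template-matching stage, and hence the sensitivity to image degradations acting on the probe, is common to all three.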

Operational Risk Classification for Information Systems with Service-Oriented Architecture (Including a Loss Calculation Example)

This article presents the results of a study conducted to identify operational risks for information systems (IS) with service-oriented architecture (SOA). Analysis of current approaches to risk and system error classification revealed that existing system error classes have never been used for SOA risk estimation; additionally, they are not normally supported experimentally with real enterprise error data. In this study, several categories from various existing error classification systems are applied, and three new error categories with sub-categories are identified. As part of the treatment of operational risks, a new error classification scheme is proposed for SOA applications. It is based on errors of real information systems that act as service providers for applications with service-oriented architecture. The proposed classification approach has been used to classify SOA system errors for two different enterprises (in the oil and gas industry and in the metal and mining industry). In addition, we conducted research to identify possible losses from operational risks.

A Real-time 4M Collecting Method for Production Information System

It can be said that the business sector faces a range of challenges (a rapidly changing business environment, an increase in and diversification of customers' demands, and the consequent need for quick response) that call for flexible management and production information systems. Indeed, many manufacturers have adopted production information management systems such as MES and ERP. Nevertheless, managers have difficulty obtaining ever-changing production process information in real time, or responding quickly to changes in production-related needs on the basis of such information, because they rely on poor production information systems that cannot provide real-time factory data. If a manufacturer cannot collect and digitalize the 4Ms (Man, Machine, Material, Method), which are the resources for production, on a real-time basis, it may be difficult to effectively maintain production process information. In this regard, this paper introduces some new alternatives to the existing real-time 4M collection methods currently used on the production floor.

Interpreting the Out-of-Control Signals of Multivariate Control Charts Employing Neural Networks

Multivariate quality control charts offer advantages for monitoring several variables over the simultaneous use of univariate charts; nevertheless, they also have disadvantages. The main problem is how to interpret the out-of-control signal of a multivariate chart. For example, in the case of control charts designed to monitor the mean vector, the chart signals that a shift in the vector must be accepted, but gives no indication of which variables produced the shift. The MEWMA quality control chart is a very powerful scheme for detecting small shifts in the mean vector, yet there is no previous work specifically on the interpretation of its out-of-control signal. In this paper, neural networks are designed to interpret the out-of-control signal of the MEWMA chart, and the percentage of correct classifications is studied for different cases.
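The MEWMA statistic whose signal the networks must interpret can be sketched as follows. The dimension, smoothing constant, known identity covariance and the injected shift are illustrative assumptions; note how the statistic flags a shift without saying that variable 0 caused it, which is the interpretation gap described above.

```python
import numpy as np

rng = np.random.default_rng(4)

p, lam = 3, 0.1
sigma = np.eye(p)                  # in-control covariance (assumed known)
sigma_inv = np.linalg.inv(sigma)

def mewma_t2(xs, lam=lam):
    """MEWMA statistics T_i^2 using the asymptotic covariance of Z_i."""
    z = np.zeros(p)
    t2 = []
    for x in xs:
        z = lam * x + (1 - lam) * z            # Z_i = lam*X_i + (1-lam)*Z_{i-1}
        # Cov(Z_i) -> (lam / (2 - lam)) * Sigma as i grows, so
        # T_i^2 = ((2 - lam)/lam) * Z_i' Sigma^{-1} Z_i.
        t2.append((2 - lam) / lam * z @ sigma_inv @ z)
    return np.array(t2)

# 100 in-control observations, then a sustained mean shift in variable 0.
in_control = rng.normal(size=(100, p))
shifted = rng.normal(size=(50, p)) + np.array([2.0, 0.0, 0.0])
t2 = mewma_t2(np.vstack([in_control, shifted]))
```

A classifier for signal interpretation would take the vector Z (or a window of recent observations) at the alarm instant as input and output which variable(s) shifted, turning the scalar alarm into a diagnosis.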

Wheat Bran Carbohydrates as Substrate for Bifidobacterium lactis Development

The present study addresses problems and solutions related to new functional food production. Wheat (Triticum aestivum L) bran obtained from the industrial mill company “Dobeles dzirnavieks” was used as a raw material providing nutrients for Bifidobacterium lactis Bb-12. Enzymatic hydrolysis of wheat bran starch was carried out with α-amylase from Bacillus amyloliquefaciens (Sigma-Aldrich), and Viscozyme L (Sigma-Aldrich) was used to release reducing sugars. Bifidobacterium lactis Bb-12 (Probio-Tec®, CHR Hansen) was cultivated in the enzymatically hydrolysed wheat bran mash. All procedures ensured that the number of active Bifidobacterium lactis Bb-12 in the final product reached 10⁵ CFU g⁻¹. After the enzymatic and bacterial fermentations, samples were freeze-dried for analysis of chemical compounds. All experiments were performed at the Faculty of Food Technology of the Latvia University of Agriculture in January-March 2013. The obtained results show that both types of wheat bran (enzymatically treated and non-treated) influenced the fermentative activity and the number of viable Bifidobacterium lactis Bb-12 in the wheat bran mash, and acidity increased strongly during fermentation of the mash. The main objective of this work was to create a low-energy, enzymatically and bacterially treated functional food from wheat bran using enzymatic hydrolysis of carbohydrates followed by cultivation of Bifidobacterium lactis Bb-12.