Experimental and Numerical Simulation of Fire in a Scaled Underground Station

The objective of this study is to investigate fire behavior, experimentally and numerically, in a scaled version of an underground station. The effect of ventilation velocity on the fire is examined. Fire experiments are simulated by burning 10 ml of isopropyl alcohol in a fire pool with dimensions of 5 cm × 10 cm × 4 mm at the center of a 1/100-scale underground station model. The commercial CFD program FLUENT was used for the numerical simulations, with the k-ω SST turbulence model for the airflow simulations and the non-premixed combustion model for the combustion simulations. This study showed that, as the ventilation velocity is increased from 1 m/s to 3 m/s, the maximum temperature in the station rises; the lowest maximum temperature is found at a ventilation velocity of 1 m/s. The reason for this experimental result lies in the relative dominance of the oxygen supply effect over the cooling effect. Without the piston effect, the maximum temperature occurs above the fuel pool. However, when the ventilation velocity is increased, the flame tilts in the direction of ventilation and the location of the maximum temperature moves along the flow direction. The velocities measured experimentally at different locations in the station are well matched by the CFD simulation results. The predicted general flow pattern agrees satisfactorily with the smoke visualization tests. The backlayering in the velocity field is also well predicted. However, throughout the station, the CFD simulations predicted higher temperatures than the experimental measurements.

Cold Hardiness in Near Isogenic Lines of Bread Wheat (Triticum aestivum L. em. Thell.)

Low temperature (LT) is one of the most important abiotic stresses causing loss of yield in wheat (T. aestivum). Four major genes in wheat (Triticum aestivum L.), with the dominant alleles designated Vrn-A1, Vrn-B1, Vrn-D1, and Vrn4, are known to have large effects on the vernalization response, but their effects on cold hardiness are ambiguous. Poor cold tolerance has restricted winter wheat production in regions of high winter stress [9]. It is known that nearly all wheat chromosomes [5], or at least 10 of the 21 chromosome pairs, are important in winter hardiness [15]. The objective of the present study was to clarify the role of each chromosome in cold tolerance. For this purpose we used 20 isogenic lines of wheat; in each of these lines, a single chromosome from the 'Bezostaya' variety (a winter-habit cultivar) was substituted into the 'Cappelle Desprez' variety. The plant materials were grown under controlled conditions (20 °C and a 16 h day length) at the Karaj Agricultural Research Station, in a moderately cold area of Iran, in 2006-07, and an acclimation period of about 4 weeks was completed in a cold room at 4 °C. The cold hardiness of these isogenic lines was measured by the LT50 (the temperature at which 50% of the plants are killed by freezing stress). The experimental design was a randomized complete block design (RCBD) with three replicates. The results showed that chromosome 5A had a major effect on freezing tolerance, while chromosomes 1A and 4A had smaller effects on this trait. Further studies are essential to understand the importance of each chromosome in controlling cold hardiness in wheat.
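
As a minimal sketch, an LT50 of this kind can be estimated by fitting a logistic survival curve to freeze-test data and solving for the temperature at 50% survival; the fitting method is not stated in the abstract, and the data below are hypothetical placeholders.

    import numpy as np
    from scipy.optimize import curve_fit

    def survival(t, lt50, k):
        """Logistic survival fraction at test temperature t; equals 0.5 at t = lt50."""
        return 1.0 / (1.0 + np.exp(-k * (t - lt50)))

    temps = np.array([-3.0, -6.0, -9.0, -12.0, -15.0, -18.0])  # test temperatures, deg C
    surv = np.array([1.00, 0.95, 0.80, 0.45, 0.15, 0.02])      # surviving fraction

    (lt50, k), _ = curve_fit(survival, temps, surv, p0=(-10.0, 1.0))
    print(f"Estimated LT50: {lt50:.1f} deg C")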

Fault Classification of a Doubly Fed Induction Machine Using Neural Network

Rapid progress in process automation and tightening quality standards result in a growing demand for fault detection and diagnostic methods that provide both speed and reliability in motor quality testing. Doubly fed induction generators are used mainly for wind energy conversion in MW power plants. This paper presents the detection of inter-turn stator faults and open-phase faults in a doubly fed induction machine whose stator and rotor are supplied by two pulse width modulation (PWM) inverters. The method used in this article to detect these faults is based on Park's Vector Approach, combined with a neural network.
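
A minimal, hypothetical sketch of the Park's Vector Approach combined with a neural network: the three phase currents are projected onto the (d, q) plane (a healthy machine traces a circle there, while faults distort the pattern), simple shape features are extracted, and a small multilayer perceptron classifies the condition. The feature choice and the placeholder data are illustrative assumptions, not the authors' exact pipeline.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def park_vector(ia, ib, ic):
        """Park's vector components of the three-phase stator currents."""
        i_d = np.sqrt(2.0 / 3.0) * ia - ib / np.sqrt(6.0) - ic / np.sqrt(6.0)
        i_q = (ib - ic) / np.sqrt(2.0)
        return i_d, i_q

    def features(ia, ib, ic):
        """Shape features of the Park pattern (a circle for a healthy machine)."""
        i_d, i_q = park_vector(ia, ib, ic)
        r = np.hypot(i_d, i_q)
        return [r.mean(), r.std(), r.max() - r.min()]

    # X: feature rows from recorded current windows; y: labels
    # (0 = healthy, 1 = inter-turn fault, 2 = open phase) -- placeholders here.
    X, y = np.random.rand(60, 3), np.random.randint(0, 3, 60)
    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X, y)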

Job Satisfaction of Employees and Organization Retention at the Metropolitan Waterworks Authority, Bangkhen

This research aimed to study the correlation between the work satisfaction and the organization core values of officers in the Metropolitan Waterworks Authority, Bangkhen Branch. The sample group of the study was 112 officers who worked in the Waterworks Authority, Bangkhen Branch. Questionnaires were employed as the research tool, and percentage, mean, standard deviation, t-test, one-way ANOVA, and Pearson product-moment correlation were the statistics used in this study. The researcher found that the overall and individual aspects of work satisfaction, namely work characteristics, work progress, and colleagues, significantly correlated with the organization core value of perception of choice of work at the .05, .01, and .01 levels, respectively. These aspects were also significantly correlated with income at the .05 level; the correlations were low and moderately low, in the same, same, opposite, and same directions, respectively.

Universities Strategic Evaluation Using Balanced Scorecard

Defining the strategic position of an organization within its industry environment is one of the basic and most important phases of strategic planning, to the extent that one of the fundamental schools of strategic planning is the strategic positioning school. In today's knowledge-based economy and dynamic environment, strategic positioning is essential for universities as the centers of education, knowledge creation, and knowledge-worker development. To date, various models with different approaches to strategic positioning have been deployed for defining strategic position within various industries. The Balanced Scorecard (BSC), one of the most powerful models for strategic positioning, analyzes all aspects of an organization evenly. In this paper, in consideration of the BSC's strength in strategic evaluation, it is used to analyze the environmental position of the best Iranian business schools. The results could be used in developing strategic plans for these schools as well as for other Iranian management and business schools.

An Agent-based Model for Analyzing Interaction of Two Stable Social Networks

In this research, the authors analyze network stability using agent-based simulation. First, the authors analyze large networks (eight agents) formed by connecting two different stable small social networks (each stable small network consists of four agents). Second, the authors analyze the shape of the network (eight agents) obtained by adding one agent to a stable network (seven agents). Third, the authors analyze interpersonal comparisons of utility. The "star network" was not found as a result of interaction between the two stable small networks; on the other hand, "decentralized networks" were formed from several combinations. In the case of adding one agent to a stable network (seven agents), the larger the value of c (the maintenance cost per link), the larger the number of stable network patterns. In this case, the authors identified the characteristics of a large stable network. The authors also discovered cases in which personal utility decreases while total utility increases.
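
The abstract does not give the agents' utility function; as an assumed stand-in, the sketch below uses the classic Jackson-Wolinsky connections model, in which agent i gains delta**d for every agent at network distance d and pays the maintenance cost c for each direct link.

    import networkx as nx

    def utility(G, i, delta=0.5, c=0.4):
        """Connections-model utility of agent i (assumed form, not the paper's)."""
        lengths = nx.single_source_shortest_path_length(G, i)
        benefit = sum(delta ** d for j, d in lengths.items() if j != i)
        return benefit - c * G.degree(i)

    # Example: total utility of a 4-agent star versus a 4-agent line.
    star, line = nx.star_graph(3), nx.path_graph(4)
    print(sum(utility(star, i) for i in star),
          sum(utility(line, i) for i in line))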

Robust UKF Insensitive to Measurement Faults for Pico Satellite Attitude Estimation

Under normal operating conditions of a pico satellite, the conventional Unscented Kalman Filter (UKF) gives sufficiently good estimation results. However, if the measurements are not reliable because of some malfunction in the estimation system, the UKF gives inaccurate results and diverges over time. This study introduces Robust Unscented Kalman Filter (RUKF) algorithms with filter gain correction for the case of measurement malfunctions. By the use of defined variables called measurement noise scale factors, the faulty measurements are taken into consideration with a small weight, and the estimates are corrected without affecting the characteristics of the accurate ones. Two different RUKF algorithms, one with a single scale factor and one with multiple scale factors, are proposed and applied to the attitude estimation process of a pico satellite. The results of these algorithms are compared for different types of measurement faults in different estimation scenarios, and recommendations about their applications are given.
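
A minimal sketch of the gain-correction idea behind such robust filters, shown for a linear measurement model for brevity (in a UKF the same scaling is applied to the innovation covariance built from the sigma points). The chi-square threshold and the inflation rule below are illustrative assumptions, not the authors' exact RUKF.

    import numpy as np

    def robust_update(x, P, z, H, R, chi2_threshold=7.88):
        """Measurement update with a single adaptive measurement noise scale factor."""
        innov = z - H @ x
        S = H @ P @ H.T + R
        nis = float(innov.T @ np.linalg.inv(S) @ innov)  # normalized innovation squared
        if nis > chi2_threshold:                         # measurement fault suspected
            scale = nis / chi2_threshold                 # one simple choice of scale factor
            S = H @ P @ H.T + scale * R                  # faulty measurement weighted less
        K = P @ H.T @ np.linalg.inv(S)                   # corrected filter gain
        x_new = x + K @ innov
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new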

Treatment of Eutrophic-lake Water by Free Water Surface Wetland

In China, rapid urbanization and industrialization and highly accelerated economic development have resulted in the degradation of water resources. The deterioration of water quality usually results from eutrophication in most cases, so how to treat this type of polluted water more efficiently is an urgent task. However, unlike traditional technologies, constructed wetlands are effective treatment systems that can be very useful because they are a simple technology with low operational costs. A pilot-scale treatment system including constructed wetlands was built at Xingyun Lake, Yuxi, China, and operated as a primary treatment measure before the eutrophic lake water drained to the riverine landscape. Water quality indices were determined during the experiment. The results indicated that the removal efficiencies were high for nitrate nitrogen, chlorophyll-a, and algae, with final removal efficiencies of 95.20%, 93.33%, and 99.87%, respectively, but the removal efficiencies of total phosphorus and total nitrogen reached only 68.83% and 50.00%, respectively.
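
For reference, percentages of this kind are presumably computed with the standard removal-efficiency formula (an assumption; the abstract does not state it), sketched here:

    def removal_efficiency(c_in: float, c_out: float) -> float:
        """Percentage of a pollutant removed between inlet and outlet concentrations."""
        return (c_in - c_out) / c_in * 100.0

    print(removal_efficiency(10.0, 0.48))  # 95.2 for these hypothetical concentrations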

Propagation of a Generalized Beam in ABCD System

For a generalized Hermite sinusoidal/hyperbolic Gaussian beam passing through an ABCD system with a finite aperture, the propagation properties are derived using the Collins integral. The results are obtained in the form of intensity graphs, indicating that the previously demonstrated rules of reciprocity remain applicable, while the presence of the aperture accelerates this transformation.
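
For reference, the one-dimensional Collins diffraction integral for an ABCD system with a hard aperture of half-width a can be written, in one common sign convention (prefactor and sign conventions vary between texts), as:

    E_2(x_2) = \sqrt{\frac{i}{\lambda B}} \int_{-a}^{a} E_1(x_1)
               \exp\!\left[-\frac{i\pi}{\lambda B}\left(A x_1^2 - 2 x_1 x_2 + D x_2^2\right)\right] dx_1

where E_1 and E_2 are the input and output fields, lambda is the wavelength, and A, B, D are elements of the system's ray transfer matrix; the finite aperture enters through the limits of integration.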

CFD Simulation of Dense Gas Extraction through Polymeric Membranes

In this study, a general methodology is presented to predict the performance of a continuous near-critical fluid extraction process for removing compounds from aqueous solutions using hollow fiber membrane contactors. A comprehensive 2D mathematical model was developed to study the Porocritical extraction process. The system studied in this work is a membrane-based extractor of ethanol and acetone from aqueous solutions using near-critical CO2. The extraction percentages predicted by the simulations have been compared with the experimental values reported by Bothun et al. [5]. The simulated extraction percentages of ethanol and acetone show average differences of 9.3% and 6.5% from the experimental data, respectively. The more accurate predictions for acetone could be explained by a better estimation of the transport properties in the aqueous phase, which controls the extraction of this solute.

Grid Computing for the Bi-CGSTAB Applied to the Solution of the Modified Helmholtz Equation

The problem addressed herein is the efficient management of the intense Grid/Cluster computation involved when the preconditioned Bi-CGSTAB Krylov method is employed for the iterative solution of the large and sparse linear system arising from the discretization of the Modified Helmholtz-Dirichlet problem by the Hermite Collocation method. Taking advantage of the Collocation matrix's red-black ordered structure, we organize the whole computation efficiently and map it onto a pipeline architecture with master-slave communication. The implementation, using MPI programming tools, is realized on a SUN V240 cluster interconnected through a 100 Mbps and 1 Gbps Ethernet network, and its performance is demonstrated by the speedup measurements included.
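
As a small serial illustration (not the paper's MPI pipeline or its Hermite collocation discretization), the sketch below solves the Modified Helmholtz-Dirichlet problem, Laplacian(u) - k^2 u = f with u = 0 on the boundary, using a standard 5-point finite-difference discretization and SciPy's Bi-CGSTAB solver.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import bicgstab

    n, k2 = 64, 10.0                                   # interior grid points per side, k^2
    h = 1.0 / (n + 1)
    T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
    L = sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))  # 2D Laplacian, Dirichlet boundary
    A = L - k2 * sp.eye(n * n)                         # modified Helmholtz operator
    f = np.ones(n * n)                                 # right-hand side

    u, info = bicgstab(A, f)                           # info == 0 means convergence
    print("converged" if info == 0 else f"info = {info}")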

Optimization of Energy Consumption in Sequential Distillation Column

The distillation column is one of the most common operations in the process industries and, at the same time, the unit with the highest energy consumption. Many ideas have been presented in the related literature for optimizing energy consumption in distillation columns. This paper studies different heat integration methods in a distillation column that separates benzene, toluene, xylene, and C9+. Three heat integration schemes, the indirect sequence (IQ), the indirect sequence with forward energy integration (IQF), and the indirect sequence with backward energy integration (IQB), have been studied in this paper. Using the shortcut method, these heat integration schemes were simulated with the Aspen HYSYS software and compared with each other with regard to economic considerations. The results show that energy consumption is reduced by 33% in IQF and by 28% in IQB in comparison with the IQ scheme. The economic results also show that the total annual cost is reduced by 12% in IQF and by 8% in IQB relative to the IQ scheme. Therefore, the IQF scheme is more economical than the IQB and IQ schemes.

Web Application Security, Attacks and Mitigation

Today’s technology is heavily dependent on web applications, which are being accepted by users at a very rapid pace and have made our work efficient. They include webmail, online retail, online gaming, wikis, train and flight departure and arrival boards, and the list is very long. They are developed in different languages such as PHP, Python, C#, and ASP.NET, together with client-side technologies such as HTML and JavaScript. Attackers develop tools and techniques to exploit web applications and legitimate websites. This has led to the rise of web application security, which can be broadly classified into declarative security and program security. The most common attacks on web applications are SQL injection and XSS, which give access to unauthorized users who can damage or destroy the system entirely. This paper presents a detailed literature description and analysis of web application security, examples of attacks, and steps to mitigate the vulnerabilities.
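
As one concrete mitigation example for the SQL injection attacks mentioned above: never interpolate user input into SQL strings; pass it as a bound parameter instead. The sketch below uses Python's built-in sqlite3 module, and the same pattern applies to other database drivers.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "alice' OR '1'='1"  # a classic injection payload

    # Vulnerable: string interpolation lets the payload rewrite the query.
    # rows = conn.execute("SELECT * FROM users WHERE name = '%s'" % user_input)

    # Safe: the driver binds user_input as data, not as SQL.
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
    print(rows.fetchall())  # [] -- the payload matches no user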

Influence of Taguchi Selected Parameters on Properties of CuO-ZrO2 Nanoparticles Produced via Sol-gel Method

The present paper discusses the selection of process parameters for obtaining the optimal nanocrystallite size in a CuO-ZrO2 catalyst. Several parameters that change the inorganic structure have an influence on the hydrolysis and condensation reactions. A statistical design-of-experiments method is implemented in order to optimize the experimental conditions of CuO-ZrO2 nanoparticle preparation; the experiments follow the standard L16 orthogonal array. The crystallite size is taken as the index and is used for the analysis under conditions in which the parameters vary. The effects of pH, H2O/precursor molar ratio (R), time and temperature of calcination, chelating agent, and alcohol volume are particularly investigated among all the parameters. In accordance with the Taguchi results, it is found that temperature has the greatest impact on the particle size, while the pH and the H2O/precursor molar ratio have low influence compared with temperature. The alcohol volume, as well as the time, has almost no effect compared with the other parameters. Temperature also has an influence on the morphology and the amorphous structure of the zirconia. The optimal conditions are determined using the Taguchi method. The nanocatalyst is characterized by DTA-TG, XRD, EDS, SEM, and TEM. The results of this research indicate that it is possible to vary the structure, morphology, and properties of the sol-gel product by controlling the above-mentioned parameters.
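
A minimal sketch of the main-effects analysis that underlies such Taguchi studies: for each factor in the orthogonal-array design, the response (here, crystallite size) is averaged at each factor level, and the factor whose level means span the widest range has the greatest impact. The factors, levels, and responses below are hypothetical placeholders, not the paper's L16 data.

    import pandas as pd

    # Factor levels from an orthogonal-array design plus the measured response.
    df = pd.DataFrame({
        "temperature": [500, 500, 600, 600, 500, 500, 600, 600],
        "pH":          [3,   5,   3,   5,   3,   5,   3,   5],
        "size_nm":     [12,  13,  22,  24,  11,  14,  23,  21],
    })

    for factor in ["temperature", "pH"]:
        means = df.groupby(factor)["size_nm"].mean()
        print(factor, "level means:", means.to_dict(),
              "range:", means.max() - means.min())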

Performance Comparison of Particle Swarm Optimization with Traditional Clustering Algorithms used in Self-Organizing Map

The self-organizing map (SOM) is a well-known data reduction technique used in data mining. It can reveal structure in data sets through data visualization that is otherwise hard to detect from the raw data alone. However, interpretation through visual inspection is prone to errors and can be very tedious. There are several techniques for the automatic detection of clusters among the code vectors found by a SOM, but they generally do not take into account the distribution of the code vectors; this may lead to unsatisfactory clustering and poor definition of cluster boundaries, particularly where the density of data points is low. In this paper, we propose the use of an adaptive heuristic particle swarm optimization (PSO) algorithm for finding cluster boundaries directly from the code vectors obtained from the SOM. The application of our method to several standard data sets demonstrates its feasibility. The PSO algorithm utilizes the so-called U-matrix of the SOM to determine cluster boundaries; the results of this novel automatic method compare very favorably with boundary detection using the traditional algorithms, namely k-means and a hierarchical approach, which are normally used to interpret the output of a SOM.
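
A minimal generic PSO sketch (the paper's adaptive heuristic variant and its U-matrix-based objective are not reproduced here) showing the core velocity and position update on which any PSO-based boundary search is built:

    import numpy as np

    def pso(objective, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
        """Plain global-best PSO minimizing `objective` over R^dim."""
        rng = np.random.default_rng(0)
        x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # positions
        v = np.zeros_like(x)                             # velocities
        pbest = x.copy()                                 # personal bests
        pbest_f = np.apply_along_axis(objective, 1, x)
        gbest = pbest[pbest_f.argmin()].copy()           # global best
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            f = np.apply_along_axis(objective, 1, x)
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest

    print(pso(lambda p: np.sum(p**2), dim=2))  # converges toward the origin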

A New Composition Method of Admissible Support Vector Kernel Based on Reproducing Kernel

The kernel function, which allows the formulation of nonlinear variants of any algorithm that can be cast in terms of dot products, is what has enabled Support Vector Machines (SVMs) to be successfully applied in many fields, e.g. classification and regression. The importance of the kernel has motivated many studies on its composition. It is well known that the reproducing kernel (R.K.) is a useful kernel function that possesses many properties, e.g. positive definiteness, the reproducing property, and the possibility of composing complex R.K.s by simple operations. There are two popular ways to compute an R.K. in explicit form. One is to construct and solve a specific differential equation with boundary values, whose handicap is that it cannot yield a unified form of the R.K. The other is to use a piecewise integral of the Green's function associated with a differential operator L. The latter benefits the computation of an R.K. with a unified explicit form and its theoretical analysis, although the studies of it are relatively more recent and the practical computations fewer. In this paper, a new algorithm for computing an R.K. is presented. It can obtain the unified explicit form of the R.K. in a general reproducing kernel Hilbert space. It avoids constructing and solving complex differential equations manually and enables an automatic, flexible, and rigorous computation for more general RKHSs. In order to validate that the R.K. computed by the algorithm can be used well in SVMs, some illustrative examples and a comparison between the R.K. and the Gaussian kernel (RBF) in support vector regression are presented. The results show that the performance of the R.K. is close, or slightly superior, to that of the RBF kernel.
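
A small sketch of plugging a reproducing kernel into support vector regression for comparison with the Gaussian (RBF) kernel. As an illustrative stand-in for the kernels computed by the paper's algorithm, it uses K(x, y) = 1 + min(x, y), a classical reproducing kernel of a simple Sobolev-type space on [0, 1].

    import numpy as np
    from sklearn.svm import SVR

    def rk_kernel(X, Y):
        """Gram matrix of K(x, y) = 1 + min(x, y) for 1-D inputs in [0, 1]."""
        return 1.0 + np.minimum(X, Y.T)

    X = np.random.rand(80, 1)
    y = np.sin(2 * np.pi * X).ravel() + 0.1 * np.random.randn(80)

    svr_rk = SVR(kernel=rk_kernel).fit(X, y)
    svr_rbf = SVR(kernel="rbf").fit(X, y)
    print(svr_rk.score(X, y), svr_rbf.score(X, y))  # in-sample R^2 of each model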

Effect of Laser Power and Powder Flow Rate on Properties of Laser Metal Deposited Ti6Al4V

Laser Metal Deposition (LMD) is an additive manufacturing process whose capabilities include producing a new part directly from a three-dimensional Computer-Aided Design (3D CAD) model, building a new part on an existing old component, and repairing existing high-value component parts that would have been discarded in the past. Despite all these capabilities and its advantages over other additive manufacturing techniques, the underlying physics of the LMD process is yet to be fully understood, probably because of the strong interaction between the processing parameters; studying many parameters at the same time makes the process even more complex to understand. In this study, the effects of laser power and powder flow rate on the physical properties (deposition height and deposition width), a metallurgical property (microstructure), and a mechanical property (microhardness) of laser-deposited Ti6Al4V, the most widely used aerospace alloy, are studied. Also, because Ti6Al4V is very expensive and LMD is capable of reducing the buy-to-fly ratio of aerospace parts, the material utilization efficiency is also studied. Four sets of experiments were performed and repeated to establish repeatability, using laser powers of 1.8 kW and 3.0 kW and powder flow rates of 2.88 g/min and 5.67 g/min, while keeping the gas flow rate and scanning speed constant at 2 l/min and 0.005 m/s, respectively. The deposition height and width are found to increase with increasing laser power and increasing powder flow rate. Material utilization is favoured by higher laser power, while a higher powder flow rate reduces it. The results are presented and fully discussed.
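
One common definition of material (powder) utilization efficiency in LMD is the ratio of deposited mass to delivered powder mass; this is an assumed formulation, since the abstract does not define the metric, and the values below are hypothetical.

    def powder_efficiency(deposited_mass_g, flow_rate_g_min, deposit_time_min):
        """Fraction of the delivered powder captured in the deposit."""
        delivered = flow_rate_g_min * deposit_time_min
        return deposited_mass_g / delivered

    print(f"{powder_efficiency(1.9, 2.88, 1.0):.0%}")  # ~66% for these assumed values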

Kernel Matching versus Inverse Probability Weighting: A Comparative Study

A recent quasi-experimental evaluation of the Canadian Active Labour Market Policies (ALMP) by Human Resources and Skills Development Canada (HRSDC) has provided an opportunity to examine alternative methods of estimating the incremental effects of Employment Benefits and Support Measures (EBSMs) on program participants. The focus of this paper is to assess the efficiency and robustness of inverse probability weighting (IPW) relative to kernel matching (KM) in the estimation of program effects. To accomplish this objective, the authors compare 1,080 pairs of estimates, along with their associated standard errors, to assess which type of estimate is generally more efficient and robust. In the interest of practicality, the authors also document the computational time it took to produce the IPW and KM estimates, respectively.
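
A minimal sketch of the inverse probability weighting estimator of the average treatment effect on the treated (ATT), one of the two estimators compared above. The propensity model and the simulated data are illustrative; the HRSDC evaluation details are not reproduced here.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 3))                        # covariates
    d = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))  # treatment indicator
    y = X[:, 0] + 2.0 * d + rng.normal(size=500)         # outcome, true effect = 2

    p = LogisticRegression().fit(X, d).predict_proba(X)[:, 1]  # propensity scores
    w = p / (1.0 - p)                                    # odds weights for controls
    att = y[d == 1].mean() - np.average(y[d == 0], weights=w[d == 0])
    print(f"IPW ATT estimate: {att:.2f}")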

A Fuzzy Logic Based Model to Predict Surface Roughness of a Machined Surface in Glass Milling Operation Using CBN Grinding Tool

Nowadays, the demand for high product quality has focused extensive attention on the quality of machined surfaces. CNC milling machines provide a wide variety of parameter set-ups, making machining on glass excellent for manufacturing complicated special products compared with other machining processes; applying a grinding process on the CNC milling machine could be an ideal solution to improve product quality, but adopting the right machining parameters is required. In glass milling operations, several machining parameters are considered significant in affecting surface roughness: the lubrication pressure, spindle speed, feed rate, and depth of cut. In this research work, a fuzzy logic model is proposed to predict the surface roughness of a machined surface in a glass milling operation using a CBN grinding tool. Four membership functions are allocated to each input of the model. The results predicted by the fuzzy logic model are compared with the experimental results, and the comparison demonstrates agreement between the fuzzy model and the experiment, with 93.103% accuracy.
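
A minimal fuzzy-inference sketch (a generic illustration, not the paper's exact model, which allocates four membership functions to each of its four inputs): triangular membership functions fuzzify a single input, and a zero-order Sugeno rule base yields a crisp roughness prediction as the membership-weighted average of the rule outputs. Ranges and rule outputs are assumed.

    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with feet at a, c and peak at b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def predict_roughness(feed_rate):
        mu = np.array([
            tri(feed_rate, 0, 50, 100),     # membership in "low feed" (mm/min, assumed)
            tri(feed_rate, 50, 100, 150),   # "medium feed"
            tri(feed_rate, 100, 150, 200),  # "high feed"
        ])
        ra = np.array([0.2, 0.5, 0.9])      # rule outputs: Ra in microns (assumed)
        return float(mu @ ra / mu.sum())    # weighted-average defuzzification

    print(predict_roughness(80.0))  # 0.38 for this assumed rule base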

Public R&D Risk and Risk Management Policy

R&D risk management has been suggested as one of the management approaches for accomplishing the goals of public R&D investment. Investment in basic science and core technology development is an essential role of government in securing the social base needed for continuous economic growth. It is also an important role of the science and technology policy sector to create a positive environment in which the outcomes of public R&D can be diffused in a stable fashion, by controlling in advance the uncertainties and risk factors that may arise when such achievements are applied to society and industry. Various policies have already been implemented to manage the uncertainties and variables that may negatively affect the accomplishment of public R&D investment goals. However, new policy measures for complementing the existing policies and for exploring the direction of progress may be derived by analyzing them as a policy package from the viewpoint of R&D risk management.