Distortion of Flow Measurement and Cavitation Caused by Orifice Inlet Velocity Profiles

This analysis investigates the distortion of flow measurement and the increase of cavitation in an orifice flowmeter. Using a numerical (CFD) method, the distortion of flow measurement caused by the inlet velocity profile was examined, with convergence and grid dependency taken into account. The realizable k-ε turbulence model was selected, and y+ was about 50 in the numerical analysis. The analysis also estimated the vulnerability to cavitation caused by the inlet velocity profile. The investigation concludes that an inclined inlet velocity profile can alter the pressure measured at a pressure tap near the pipe wall, distorting the pressure values near the orifice plate by -3.8% to 5.3% and increasing cavitation. It therefore recommends a fully developed inlet velocity profile for accurate flow measurement with an orifice flowmeter.

Influence of Dilution and Lean Premixing on MILD Combustion in an Industrial Burner

Understanding how and where NOx forms in an industrial burner is essential for the efficient and clean operation of utility burners. The problem is important largely because of the pollutants produced by the many burners used in gas-turbine thermal power plants and in the glass and steel industries. In this article, a numerical model of an industrial burner operating in MILD combustion is validated against experimental data. The influence of air flow rate and air temperature on combustor temperature profiles and NOx production is then investigated. The study further reports the effects of fuel and air dilution (with the inert gases H2O, CO2 and N2), and of lean premixing of the fuel, on the temperature profiles and NOx emission. Conservation equations of mass, momentum and energy, together with transport equations for species concentrations, turbulence, combustion and radiation modeling, as well as NO modeling equations, were solved to obtain the temperature and NO distributions inside the burner. The results show that dilution reduces the temperature and NOx emission, suppresses flame propagation inside the furnace and renders the flame invisible. Dilution with H2O lowers NOx further than dilution with N2 or CO2. In addition, as the lean-premix level rises, the local burner temperature and NOx production decrease, because premixing prevents the local “hot spots” within the combustor volume that can cause significant NOx formation. Lean premixing of the fuel with air also supplies the reaction zone with more air than is actually needed to burn the fuel, which further limits NOx formation.

A Finite Difference Calculation Procedure for the Navier-Stokes Equations on a Staggered Curvilinear Grid

A new numerical method for solving the two-dimensional, steady, incompressible, viscous flow equations on a curvilinear staggered grid is presented in this paper. The proposed methodology is finite difference based, but essentially takes advantage of the best features of two well-established numerical formulations, the finite difference and finite volume methods. Some weaknesses of the finite difference approach are removed by exploiting the strengths of the finite volume method. In particular, the issue of velocity-pressure coupling is dealt with in the proposed finite difference formulation by developing a pressure correction equation in a manner similar to the SIMPLE approach commonly used in finite volume formulations. However, since this is purely a finite difference formulation, numerical approximation of fluxes is not required. Results obtained from the present method are based on the first-order upwind scheme for the convective terms, but the methodology can easily be modified to accommodate higher-order differencing schemes.
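The first-order upwind treatment of convection mentioned above can be illustrated on the simplest model problem, 1D linear advection. This is a generic sketch of the scheme, not the authors' staggered-grid solver; the grid, time step and step-profile data are illustrative.

```python
import numpy as np

def upwind_advect(phi, u, dx, dt, steps):
    """Advance a 1D advected scalar with first-order upwind differencing.

    For u > 0 the convective term is approximated with the upstream (left)
    neighbour, which keeps the scheme stable for u*dt/dx <= 1 (the CFL
    condition) at the cost of some numerical diffusion.
    """
    phi = phi.astype(float).copy()
    c = u * dt / dx  # Courant number
    for _ in range(steps):
        # Backward difference for u > 0; the RHS is evaluated before assignment.
        phi[1:] = phi[1:] - c * (phi[1:] - phi[:-1])
    return phi

# Transport a step profile to the right by 0.2 units (40 steps at c = 0.5).
x = np.linspace(0.0, 1.0, 101)
phi0 = np.where(x < 0.2, 1.0, 0.0)
phi = upwind_advect(phi0, u=1.0, dx=0.01, dt=0.005, steps=40)
```

The step front moves downstream and is smeared by the scheme's numerical diffusion, but the solution remains bounded between 0 and 1, which is the monotonicity property that makes first-order upwinding a robust default.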

Examining the Value of Attribute Scores for Author-Supplied Keyphrases in Automatic Keyphrase Extraction

Automatic keyphrase extraction is useful for efficiently locating specific documents in online databases. While several techniques have been introduced over the years, improvements in the accuracy rate have been minimal. This research examines attribute scores for author-supplied keyphrases to better understand how the scores affect the accuracy rate of automatic keyphrase extraction. Five attributes are chosen for examination: Term Frequency, First Occurrence, Last Occurrence, Phrase Position in Sentences, and Term Cohesion Degree. The results show that First Occurrence is the most reliable attribute. Term Frequency, Last Occurrence and Term Cohesion Degree display a wide range of variation but are still usable with the suggested tweaks. Only Phrase Position in Sentences shows a totally unpredictable pattern. The results imply that the commonly used ranking approach, which directly extracts the top-ranked candidate phrases as the keyphrases, may not be reliable.
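The first three of these attributes can be computed with a few lines of code. The sketch below is a generic illustration of such attribute scores on a toy document; the exact formulas and normalisations used in the study may differ, and the function name and sample text are invented for the example.

```python
def phrase_attributes(text, phrase):
    """Compute simple keyphrase attributes for one candidate phrase.

    First/Last Occurrence are normalised word offsets of the phrase's first
    and last appearance (0.0 = document start); Term Frequency is the raw
    count of appearances. Returns None if the phrase never occurs.
    """
    words = text.lower().split()
    p = phrase.lower().split()
    n, k = len(words), len(p)
    hits = [i for i in range(n - k + 1) if words[i:i + k] == p]
    if not hits:
        return None
    return {
        "term_frequency": len(hits),
        "first_occurrence": hits[0] / n,
        "last_occurrence": hits[-1] / n,
    }

doc = ("keyphrase extraction locates key terms automatically ; "
       "automatic keyphrase extraction ranks candidate phrases by attribute scores")
attrs = phrase_attributes(doc, "keyphrase extraction")
```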

A Novel Pareto-Based Meta-Heuristic Algorithm to Optimize Multi-Facility Location-Allocation Problem

This article proposes a novel Pareto-based multi-objective meta-heuristic algorithm named the non-dominated ranking genetic algorithm (NRGA) to solve the multi-facility location-allocation problem. In NRGA, a fitness value representing rank is assigned to each individual of the population. Moreover, two rank-based roulette wheel selection operators are used: one to select a front and one to choose solutions from that front. The proposed solution methodology is validated using several examples taken from the specialized literature. The results show that the NRGA algorithm is able to generate true and well-distributed Pareto optimal solutions.
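The two-stage ranked selection described above can be sketched as follows: non-dominated sorting assigns each individual to a Pareto front, and a linear rank-based roulette wheel then biases selection toward better ranks. This is a minimal illustration of the idea, not the authors' implementation, and the helper names and sample points are invented for the example.

```python
import random

def dominates(a, b):
    """a dominates b (minimisation): no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fronts(points):
    """Sort objective vectors into Pareto fronts.

    Front 0 holds the non-dominated points; each later front is what becomes
    non-dominated once the earlier fronts are removed.
    """
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

def ranked_roulette(items, rng):
    """Linear rank-based roulette wheel over items ordered best-first:
    the item at rank k gets weight proportional to n - k."""
    n = len(items)
    weights = [n - k for k in range(n)]
    return rng.choices(items, weights=weights, k=1)[0]

pts = [(1, 5), (2, 2), (5, 1), (4, 4), (6, 6)]
fronts = nondominated_fronts(pts)  # [[0, 1, 2], [3], [4]]
```

In NRGA the same ranked wheel would be applied twice, first over the list of fronts and then over the members of the chosen front.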

An Enhanced Distributed System to Improve the Time Complexity of Binary Indexed Trees

Distributed computing systems are usually considered the most suitable model for the practical solution of many parallel algorithms. In this paper an enhanced distributed system is presented to improve the time complexity of Binary Indexed Trees (BIT). The proposed system uses multiple uniform processors with identical architectures and a specially designed distributed memory system. Analysis shows that the system reduces the time complexity of the read query to O(log(log(N))) and the update query to constant time, while the naive solution has a time complexity of O(log(N)) for both queries. The system was implemented and simulated using the VHDL and Verilog hardware description languages, with Xilinx ISE 10.1 as the development environment and ModelSim 6.1c as the simulation tool. The simulation has shown that the overhead resulting from the wiring and communication between the system fragments can safely be neglected, which makes it practical to approach the maximum speedup offered by the proposed model.
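For reference, the naive sequential baseline mentioned above — a standard Binary Indexed (Fenwick) tree with O(log N) point update and prefix-sum query — can be sketched as follows. The distributed memory layout that improves on these bounds is not shown; this is only the data structure being accelerated.

```python
class BIT:
    """Binary Indexed (Fenwick) tree: O(log N) point update and prefix query."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)  # 1-based internal indexing

    def update(self, i, delta):
        """Add delta at position i (1-based)."""
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i  # jump to the next node responsible for i

    def prefix_sum(self, i):
        """Sum of positions 1..i."""
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i  # strip the lowest set bit
        return s

bit = BIT(8)
for pos, val in enumerate([3, 1, 4, 1, 5, 9, 2, 6], start=1):
    bit.update(pos, val)
```

Both loops walk at most log2(N) levels of the implicit tree, which is the O(log N) cost the proposed distributed system reduces.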

A Study on the Mode of Collapse of Metallic Shells with Combined Tube-Frusta Geometry Subjected to Axial Compression

The present paper deals with an experimental and computational study of the axial collapse of aluminum shells having combined tube-frusta geometry between two parallel plates. The bottom two-thirds of each shell's length was a frustum and the remaining top one-third a tube. The shells were compressed to identify their modes of collapse and the associated energy absorption capability. An axisymmetric finite element computational model of the collapse process is presented and analysed using the non-linear FE code FORGE2. Six-noded isoparametric triangular elements were used to discretize the deforming shell. The shell material was idealized as rigid visco-plastic. To validate the computational model, experimental and computed results for the deformed shapes and their corresponding load-compression and energy-compression curves were compared. With the help of the obtained results, the progress of the axisymmetric mode of collapse is presented, analysed and discussed.

The Effect of Carbon on Molybdenum in the Preparation of Microwave-Induced Molybdenum Carbide

This study shows the effect of carbon on molybdenum carbide alloy when exposed to microwaves, a technique also known as Microwave Induced Alloying (MIA), for the preparation of molybdenum carbide. In this study, ammonium heptamolybdate solution and carbon black powder were heterogeneously mixed and exposed to microwave irradiation for 2 minutes. The effect of the amount of carbon on the morphology and oxidation states of the produced alloy during microwave irradiation is presented. In this experiment, carbon is expected to act as a reducing agent, with a 2:7 molybdenum-to-carbon ratio as the optimum for the production of molybdenum carbide alloy. All morphological transformations and changes were followed and characterized using X-ray diffraction and FESEM.

Computational Intelligence Hybrid Learning Approach to Time Series Forecasting

Time series forecasting is an important and widely popular topic in the research of system modeling. This paper describes how to apply a hybrid PSO-RLSE neuro-fuzzy learning approach to the problem of time series forecasting. The PSO algorithm is used to update the premise parameters of the proposed prediction system, and the RLSE is used to update the consequence parameters. Thanks to this hybrid learning (HL) approach for the neuro-fuzzy system, the prediction performance is excellent and learning converges much faster than in the compared approaches. In the experiments, we use the well-known Mackey-Glass chaotic time series. According to the experimental results, the prediction performance and accuracy in time series forecasting by the proposed approach are much better than those of the compared approaches, as shown in Table IV.
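The Mackey-Glass benchmark series mentioned above can be generated by integrating its delay differential equation, dx/dt = βx(t−τ)/(1+x(t−τ)^p) − γx(t). The sketch below uses simple Euler integration and the conventional parameter values (τ = 17 for the chaotic regime), which may not match the paper's exact setup.

```python
def mackey_glass(n, tau=17, beta=0.2, gamma=0.1, p=10, dt=1.0, x0=1.2):
    """Generate n samples of the Mackey-Glass delay series via Euler steps.

    dx/dt = beta*x(t-tau)/(1 + x(t-tau)**p) - gamma*x(t), with a constant
    initial history x(t) = x0 for t <= 0.  tau = 17 gives the chaotic
    regime commonly used as a forecasting benchmark.
    """
    hist = [x0] * (tau + 1)  # constant initial history covering the delay
    for _ in range(n):
        x_tau = hist[-(tau + 1)]  # delayed value x(t - tau)
        x = hist[-1]
        hist.append(x + dt * (beta * x_tau / (1.0 + x_tau ** p) - gamma * x))
    return hist[tau + 1:]

series = mackey_glass(500)
```

A forecasting model such as the PSO-RLSE system would then be trained to predict x(t+1) (or x(t+85)) from a few delayed samples of this series.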

Machine Learning Techniques for Short-Term Rain Forecasting System in the Northeastern Part of Thailand

This paper presents a methodology based on machine learning approaches for a short-term rain forecasting system. Decision Tree, Artificial Neural Network (ANN), and Support Vector Machine (SVM) techniques were applied to develop classification and prediction models for rainfall forecasts. The goals of this work are to demonstrate (1) how feature selection can be used to identify the relationships between rainfall occurrences and other weather conditions and (2) what models can be developed and deployed for predicting accurate rainfall estimates to support decisions to launch cloud seeding operations in the northeastern part of Thailand. Datasets were collected during 2004-2006 from the Chalermprakiat Royal Rain Making Research Center at Hua Hin, Prachuap Khiri Khan, the Chalermprakiat Royal Rain Making Research Center at Pimai, Nakhon Ratchasima, and the Thai Meteorological Department (TMD). A total of 179 records with 57 features were merged and matched by unique date. There are three main parts in this work. Firstly, a decision tree induction algorithm (C4.5) was used to classify the rain status as either rain or no-rain. The overall accuracy of the classification tree reaches 94.41% with five-fold cross validation. The C4.5 algorithm was also used to classify the rain amount into three classes, no-rain (0-0.1 mm), few-rain (0.1-10 mm), and moderate-rain (>10 mm), for which the overall accuracy of the classification tree reaches 62.57%. Secondly, an ANN was applied to predict the rainfall amount, and the root mean square error (RMSE) was used to measure the training and testing errors of the ANN. The ANN yields a lower RMSE, 0.171, for daily rainfall estimates than for next-day and next-2-day estimation. Thirdly, the ANN and SVM techniques were also used to classify the rain amount into the same three classes. The ANN and SVM models achieved overall accuracies of 68.15% and 69.10%, respectively, for same-day prediction. The obtained results illustrate the comparative predictive power of the different methods for rainfall estimation.
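The three-class binning and the overall-accuracy measure used throughout the evaluation can be sketched as follows. The thresholds come from the class definitions above, while the function names and sample amounts are illustrative.

```python
def rain_class(mm):
    """Bin a daily rainfall amount (mm) into the three classes defined above."""
    if mm <= 0.1:
        return "no-rain"        # 0-0.1 mm
    if mm <= 10.0:
        return "few-rain"       # 0.1-10 mm
    return "moderate-rain"      # > 10 mm

def overall_accuracy(y_true, y_pred):
    """Fraction of matching labels: the 'overall accuracy' reported above."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

amounts = [0.0, 0.05, 2.3, 9.9, 15.0, 31.4]
labels = [rain_class(a) for a in amounts]
```

A classifier such as C4.5, an ANN or an SVM would be trained to predict these labels from the 57 weather features, and `overall_accuracy` scores its predictions against the observed classes.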

Design and Implementation of Project Time Management Risk Assessment Tool for SME Projects using Oracle Application Express

The Risk Assessment Tool (RAT) is an expert system that assesses, monitors, and gives preliminary treatments automatically based on the project plan. In this paper, current project time management risk assessment tools for SME software development projects are reviewed; risk assessment parameters, conditions and scenarios are analyzed; and finally a risk assessment tool (RAT) model is proposed to assess, treat, and monitor risks. A prototype implementation is developed to validate the model.

Antioxidant and Antibacterial Activity of Alkaloid and Terpene Extracts from Euphorbia granulata

In order to enhance phytochemical knowledge of certain Algerian plants that are widely used in traditional medicine, and to exploit their therapeutic potential in modern medicine, we performed a selective extraction of terpenes and alkaloids from the leaves of Euphorbia granulata to evaluate the antioxidant and antibacterial activity of these extracts. The terpene extract gave the highest yield (59.72%) compared with the alkaloid extracts. The disc diffusion method was used to determine the antibacterial activity against different bacterial strains: Escherichia coli (ATCC25922), Pseudomonas aeruginosa (ATCC27853) and Staphylococcus aureus (ATCC25923). All extracts inhibited bacterial growth, with inhibition zones varying from 7 to 10 mm depending on the extract concentration used. Testing the antiradical activity on DPPH-TLC plates indicated the presence of substances with potent anti-free-radical activity. In contrast, BC-TLC showed that only the terpene extract reacted positively. These results support the importance of Euphorbia granulata in traditional medicine.

Assessing Drought Vulnerability of Bulgarian Agriculture through Model Simulations

This study assesses the vulnerability of Bulgarian agriculture to drought using the WINISAREG model and the seasonal standardized precipitation index SPI(2) for the period 1951-2004. The model was previously validated for maize on soils of different water holding capacity (TAW) in various locations. Simulations are performed for Plovdiv, Stara Zagora and Sofia. Results for Plovdiv show that in soils of large TAW (180 mm m-1) the net irrigation requirements (NIRs) range from 0-40 mm in wet years to 350-380 mm in dry years. In soils of small TAW (116 mm m-1), NIRs reach 440 mm in the very dry year. NIRs in Sofia are about 80 mm smaller. Rainfed maize is associated with great yield variability (29%

A Decision Boundary based Discretization Technique using Resampling

Many supervised induction algorithms require discrete data, even though real-world data often comes in both discrete and continuous formats. Quality discretization of continuous attributes is an important problem that affects the speed, accuracy and understandability of induction models. Usually, discretization and other statistical processes are applied to subsets of the population, as the entire population is practically inaccessible. For this reason we argue that a discretization performed on a sample of the population is only an estimate for the entire population. Most existing discretization methods partition the attribute range into two or more intervals using a single cut point or a set of cut points. In this paper, we introduce a technique that uses resampling (such as the bootstrap) to generate a set of candidate discretization points, thereby improving the discretization quality through a better estimate with respect to the entire population. The goal of this paper is thus to observe whether resampling can lead to better discretization points, which opens up a new paradigm for the construction of soft decision trees.
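The idea of generating candidate cut points by resampling can be sketched as follows. A deliberately crude cut estimator (the midpoint between the two class means) stands in for a real decision-boundary criterion, and bootstrap resamples approximate the cut's distribution over the inaccessible full population; all names and data are invented for the example.

```python
import random

def boundary_cut(xs, ys):
    """One cut estimate: the midpoint between the two class means
    (a crude stand-in for a decision-boundary criterion)."""
    m0 = sum(x for x, y in zip(xs, ys) if y == 0) / ys.count(0)
    m1 = sum(x for x, y in zip(xs, ys) if y == 1) / ys.count(1)
    return (m0 + m1) / 2.0

def bootstrap_cuts(xs, ys, n_boot, rng):
    """Re-estimate the cut on bootstrap resamples of the sample,
    approximating its distribution over the full population."""
    cuts = []
    n = len(xs)
    for _ in range(n_boot):
        sample = [rng.randrange(n) for _ in range(n)]
        sy = [ys[i] for i in sample]
        if 0 in sy and 1 in sy:  # skip degenerate resamples missing a class
            cuts.append(boundary_cut([xs[i] for i in sample], sy))
    return cuts

rng = random.Random(0)
xs = [0.1, 0.3, 0.4, 0.6, 1.9, 2.1, 2.4, 2.6]  # continuous attribute values
ys = [0, 0, 0, 0, 1, 1, 1, 1]                  # class labels
cuts = bootstrap_cuts(xs, ys, 200, rng)
```

The spread of `cuts` indicates how sensitive the discretization point is to sampling, and an aggregate of the candidates (e.g. their mean or mode) can serve as a more stable cut than the single-sample estimate.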

Parametric Modeling Approach for Call Holding Times in IP-Based Public Safety Networks via the EM Algorithm

This paper presents parametric probability density models for the call holding times (CHTs) of calls into an emergency call center, based on actual data collected for over a week in the public Emergency Information Network (EIN) in Mongolia. When a set of chosen candidates from the Gamma distribution family is fitted to the call holding time data, the whole area of the empirical CHT histogram is underestimated, owing to spikes of higher probability and long tails of lower probability in the histogram. Therefore, we propose a parametric model based on a mixture of lognormal distributions, with explicit analytical expressions, for modeling the CHTs of public safety networks (PSNs). Finally, we show that the CHTs for PSNs are fitted reasonably well by a mixture of lognormal distributions via the expectation-maximization (EM) algorithm. This result is significant because it provides a useful mathematical tool in the form of an explicit mixture of lognormal distributions.
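Since a lognormal in t is a normal in log t, a mixture of lognormals can be fitted by running ordinary Gaussian-mixture EM on the log-durations. The sketch below illustrates this on synthetic data; it is a simplified stand-in for the paper's estimation procedure, and all names and parameter values are invented for the example.

```python
import numpy as np

def em_lognormal_mixture(t, k=2, iters=200):
    """Fit a k-component lognormal mixture to durations t via EM
    on x = log(t).  Returns (weights, log-means, log-stds)."""
    x = np.log(np.asarray(t, dtype=float))
    mu = np.quantile(x, np.linspace(0.2, 0.8, k))  # spread the initial means
    sigma = np.full(k, x.std() + 1e-6)
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = (w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
                / (sigma * np.sqrt(2 * np.pi)))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted weights, means and spreads
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
    return w, mu, sigma

# Synthetic call holding times: 70% short calls (log-mean 2), 30% long (log-mean 4).
rng = np.random.default_rng(1)
t = np.concatenate([np.exp(rng.normal(2.0, 0.3, 700)),
                    np.exp(rng.normal(4.0, 0.3, 300))])
w, mu, sigma = em_lognormal_mixture(t)
```

A two-component fit of this kind captures exactly the histogram features the abstract describes: the spike from the short-call component and the long tail from the long-call component.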

Schmitt Trigger Based SRAM Using FinFET Technology - Shorted-Gate Mode

The most widely used semiconductor memory types are Dynamic Random Access Memory (DRAM) and Static Random Access Memory (SRAM). Competition among memory manufacturers drives the need to decrease power consumption and reduce the probability of read failure. FinFET technology is relatively new and has not yet been fully explored. In this paper, a single-cell Schmitt-trigger-based static RAM using FinFET technology is proposed and analyzed. The accuracy of the result is validated by means of HSPICE simulations with 32nm FinFET technology, and the results are then compared with a 6T SRAM using the same technology.

Towards a Systematic, Cost-Effective Approach for ERP Selection

Experience indicates that one of the most prominent reasons some ERP implementations fail is the selection of an improper ERP package. Among the important factors leading to inappropriate ERP selections, one is ignoring the preliminary activities that should be completed before ERP packages are evaluated. Another factor is that organizations often employ prolonged and costly selection processes, to such an extent that sometimes the process is never finalized, or the evaluation team performs many key final activities incompletely or inaccurately because of exhaustion, lack of interest or out-of-date data. In this paper, a systematic approach for choosing an ERP package is introduced that recommends activities to be performed before and after the main selection phase. The proposed approach also incorporates ideas that accelerate the selection process while reducing the probability of an erroneous final selection.

MATLAB/SIMULINK Based Model of Single-Machine Infinite-Bus with TCSC for Stability Studies and Tuning Employing GA

Given constraints on data availability, it is adequate for power system stability studies to model the synchronous generator with the field circuit and one equivalent damper winding on the q-axis, known as model 1.1. This paper presents a systematic procedure for the modelling and simulation of a single-machine infinite-bus power system equipped with a thyristor controlled series compensator (TCSC), where the synchronous generator is represented by model 1.1, so that the impact of the TCSC on power system stability can be evaluated more reasonably. The model of the example power system is developed using MATLAB/SIMULINK and can be used both for teaching power system stability phenomena and for research, especially for developing generator controllers using advanced technologies. Further, the parameters of the TCSC controller are optimized using a genetic algorithm. Non-linear simulation results are presented to validate the effectiveness of the proposed approach.

Adaptive Kernel Principal Component Analysis for Online Feature Extraction

Their batch nature limits standard kernel principal component analysis (KPCA) methods in numerous applications, especially for dynamic or large-scale data. In this paper, an efficient adaptive approach is presented for the online extraction of kernel principal components (KPC). The contribution of this paper is twofold. First, the kernel covariance matrix is correctly updated to adapt to the changing characteristics of the data. Second, the KPC are recursively formulated to overcome the batch nature of standard KPCA. This formulation is derived from the recursive eigen-decomposition of the kernel covariance matrix and captures the KPC variation caused by new data. The proposed method not only alleviates the sub-optimality of KPCA for non-stationary data, but also maintains constant update speed and memory usage as the data size increases. Experiments on simulated data and real applications demonstrate that our approach yields improvements in both computational speed and approximation accuracy.
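For contrast with the adaptive method, the batch KPCA baseline works by centring the kernel matrix in feature space, eigendecomposing it, and projecting onto the leading components. The sketch below illustrates that baseline; the RBF kernel, its width and the sample data are illustrative choices, not the paper's setup.

```python
import numpy as np

def kpca(X, n_components, gamma=1.0):
    """Batch kernel PCA with an RBF kernel.

    Returns the projections of the training points onto the leading kernel
    principal components.  This is the batch procedure whose full
    recomputation per new sample the adaptive method avoids.
    """
    # RBF (Gaussian) kernel matrix K_ij = exp(-gamma * ||x_i - x_j||^2)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    # Centre the kernel matrix in feature space
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecompose (eigh returns ascending order) and keep the largest
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, order] / np.sqrt(np.maximum(vals[order], 1e-12))
    return Kc @ alphas  # projections of the training points

# Two well-separated clusters; the first KPC should separate them.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(2, 0.1, (20, 2))])
Z = kpca(X, n_components=2)
```

The cost that motivates the adaptive approach is visible here: every update requires rebuilding and re-eigendecomposing the full n-by-n kernel matrix, i.e. O(n^3) work and O(n^2) memory per batch.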

Strengthening of RC Beams with Large Openings in Shear by CFRP Laminates: 2D Nonlinear FE Analysis

To date, theoretical studies of the Carbon Fiber Reinforced Polymer (CFRP) strengthening of RC beams with openings have been rather limited. In addition, the various numerical analyses presented so far have effectively simulated the behaviour of solid beams strengthened with FRP material. In this paper, a two-dimensional nonlinear finite element analysis is presented and validated against laboratory test results for six RC beams. All beams had the same rectangular cross-section geometry and were loaded in four-point bending. The crack patterns of the finite element model show good agreement with those of the experimental beams. The load versus midspan deflection curves of the finite element models were stiffer than those of the experimental beams, possibly because of the perfect-bond assumption used between the concrete and the steel reinforcement.