Effect of Different Fertilization Methods on Soil Biological Indexes

Fertilization plays an important role in crop growth and soil improvement. This study was conducted to determine the best fertilization system for wheat production. Experiments were arranged in a randomized complete block design with three replications over two years. Treatments consisted of six fertilization methods: (N1) farmyard manure; (N2) compost; (N3) chemical fertilizers; (N4) farmyard manure + compost; (N5) farmyard manure + compost + chemical fertilizers; and (N6) an unfertilized control. The addition of compost or farmyard manure significantly increased soil microbial biomass carbon in comparison with the chemical fertilizer. The dehydrogenase, phosphatase, and urease activities in the N3 treatment were significantly lower than in the farmyard manure and compost treatments.

Physical Parameters for Reliability Evaluation

This paper presents ageing experiments monitored through the evolution of junction parameters. The deterioration of the device is related to high-injection effects, which modify the transport mechanisms in the space-charge region of the junction. Physical phenomena linked to the degradation of junction parameters that affect device reliability are reported and discussed. To extract the electrical parameters, we use a method based on numerical analysis of the experimental current-voltage characteristic of the junction. The simultaneous follow-up of the evolution of the series resistance and of the transition voltage allows us to introduce a new parameter for reliability evaluation.
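
As an illustration of this kind of parameter extraction, the sketch below applies Cheung's dV/d(ln I) method, one standard numerical route to the series resistance and ideality factor from a forward I-V curve; the sample data points are hypothetical, and the authors' exact fitting procedure may differ.

    import numpy as np

    kT_q = 0.02585  # thermal voltage at 300 K (V)

    # Hypothetical measured forward I-V points of a junction
    V = np.array([0.40, 0.45, 0.50, 0.55, 0.60, 0.65, 0.70])
    I = np.array([1.2e-6, 6.0e-6, 2.9e-5, 1.3e-4, 5.0e-4, 1.6e-3, 4.0e-3])

    # Cheung's method: for I = Is*exp((V - I*Rs)/(n*Vt)),
    # dV/d(ln I) = n*Vt + Rs*I, which is linear in I.
    dV_dlnI = np.gradient(V, np.log(I))
    Rs, nVt = np.polyfit(I, dV_dlnI, 1)   # slope = Rs, intercept = n*Vt
    n = nVt / kT_q
    print(f"series resistance Rs ~ {Rs:.1f} ohm, ideality factor n ~ {n:.2f}")

A drift of Rs or n across successive ageing readouts is then the kind of junction-parameter evolution the reliability indicator tracks.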

A Parameter-Tuning Framework for Metaheuristics Based on Design of Experiments and Artificial Neural Networks

In this paper, a framework for simplifying and standardizing metaheuristic parameter-tuning is presented, applying a four-phase methodology that utilizes Design of Experiments and Artificial Neural Networks. Metaheuristics are multipurpose problem solvers used on computational optimization problems for which no efficient problem-specific algorithm exists. Their successful application to concrete problems requires finding a good initial parameter setting, which is a tedious and time-consuming task. Recent research reveals the lack of a systematic approach to this so-called parameter-tuning process. In the majority of publications, researchers provide weak motivation for their respective choices, if any. Because initial parameter settings have a significant impact on solution quality, this course of action can lead to suboptimal experimental results, and thereby an unsound basis for drawing conclusions.
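
A minimal sketch of the general idea follows, assuming scikit-learn: a small factorial design samples the parameter space, an ANN surrogate is fitted to the observed solution quality, and the surrogate is then queried for a promising setting. The evaluate() function and the two parameter names are placeholders, not the paper's actual benchmark.

    import itertools
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def evaluate(pop_size, mutation_rate):
        """Placeholder: run the metaheuristic and return solution quality."""
        return -((pop_size - 80) ** 2) / 1000 - (mutation_rate - 0.05) ** 2 * 100

    # Phases 1-2: a small full-factorial design over two parameters
    levels = {"pop_size": [20, 60, 100], "mutation_rate": [0.01, 0.05, 0.2]}
    X = np.array(list(itertools.product(*levels.values())), dtype=float)
    y = np.array([evaluate(p, m) for p, m in X])

    # Phase 3: fit an ANN surrogate of quality = f(parameters)
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                         random_state=0).fit(X, y)

    # Phase 4: query the surrogate on a fine grid to suggest a setting
    grid = np.array(list(itertools.product(np.linspace(20, 100, 41),
                                           np.linspace(0.01, 0.2, 41))))
    best = grid[np.argmax(model.predict(grid))]
    print("suggested setting:", dict(zip(levels, best)))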

A Text Mining Technique Using Association Rules Extraction

This paper describes a text mining technique for automatically extracting association rules from collections of textual documents. The technique, called Extracting Association Rules from Text (EART), relies on keyword features to discover association rules among the keywords labeling the documents. The EART system ignores the order in which the words occur, focusing instead on the words and their statistical distributions in documents. The main contributions of the technique are that it integrates XML technology with an Information Retrieval scheme (TF-IDF) for keyword/feature selection, automatically selecting the most discriminative keywords for use in association rule generation, and uses a data mining technique for association rule discovery. It consists of three phases: a Text Preprocessing phase (transformation, filtration, stemming, and indexing of the documents), an Association Rule Mining (ARM) phase (applying our algorithm for Generating Association Rules based on a Weighting scheme, GARW), and a Visualization phase (visualization of results). Experiments were applied to web news documents related to the outbreak of the bird flu disease. The extracted association rules contain important features and describe the informative news contained in the document collection. The performance of the EART system was compared with that of a system using the Apriori algorithm, in terms of execution time and the quality of the extracted association rules.
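
The sketch below illustrates the flavour of the pipeline, not the GARW algorithm itself: TF-IDF scores select the discriminative keywords, and simple support/confidence counting over keyword pairs yields candidate rules. The toy documents and thresholds are hypothetical.

    from itertools import combinations
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = ["bird flu outbreak reported in asia",
            "health officials monitor bird flu cases",
            "new vaccine trial for flu announced"]

    # Keyword/feature selection: keep the top-k terms by TF-IDF weight
    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform(docs)
    terms = np.array(vec.get_feature_names_out())
    top = terms[np.argsort(tfidf.sum(axis=0).A1)[::-1][:5]]

    # Rule generation over keyword pairs by support and confidence
    contains = {t: {i for i, d in enumerate(docs) if t in d.split()} for t in top}
    min_sup, min_conf = 2, 0.6
    for a, b in combinations(top, 2):
        sup = len(contains[a] & contains[b])
        if sup >= min_sup and contains[a]:
            conf = sup / len(contains[a])
            if conf >= min_conf:
                print(f"{a} -> {b}  (support={sup}, confidence={conf:.2f})")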

Complex-Valued Neural Network in Signal Processing: A Study on the Effectiveness of the Complex-Valued Generalized Mean Neuron Model

A complex-valued neural network is a neural network whose inputs, weights, thresholds, and/or activation functions are complex-valued. Complex-valued neural networks have been widening their scope of applications not only in electronics and informatics but also in social systems. One of the most important applications of the complex-valued neural network is signal processing. Among neuron models, the generalized mean neuron (GMN) model is often discussed and studied. The GMN includes a new aggregation function based on the concept of the generalized mean of all the inputs to the neuron. This paper presents exhaustive results of using the generalized mean neuron model in a complex-valued neural network that uses the back-propagation algorithm (called Complex-BP) for learning. Our experimental results demonstrate the effectiveness of the generalized mean neuron model in the complex plane for signal processing over a real-valued neural network. We also report observations on the effects of the learning rate, the range from which initial weights are randomly selected, the error function used, and the number of iterations required for the convergence of error in the generalized mean neural network model. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.
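
For concreteness, a minimal sketch of a forward pass through such a neuron is given below, assuming a weighted generalized mean of order r as the aggregation and a split tanh activation (applied to real and imaginary parts separately), a common choice in complex-valued networks; the paper's exact activation and weighting details may differ.

    import numpy as np

    def split_tanh(z):
        # Split-type activation: tanh applied to real and imaginary parts
        return np.tanh(z.real) + 1j * np.tanh(z.imag)

    def gmn_forward(x, w, r=2.0):
        """Generalized mean neuron: aggregate inputs by a weighted
        generalized mean of order r instead of a plain weighted sum."""
        agg = (np.sum(w * x ** r) / np.sum(w)) ** (1.0 / r)
        return split_tanh(agg)

    rng = np.random.default_rng(0)
    x = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # complex inputs
    w = rng.uniform(0.1, 1.0, 4) + 0j                         # complex-capable weights
    print(gmn_forward(x, w))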

The Resource Description Framework (RDF) as a Modern Structure for Medical Data

The amount and heterogeneity of data in biomedical research, notably in interdisciplinary fields, require new methods for the collection, presentation, and analysis of information. Important data from laboratory experiments as well as patient trials are available but scattered across distributed resources. The Charité - University Hospital Berlin has established, together with the German Research Foundation (DFG), a new information service centre for kidney diseases and transplantation (Open European Nephrology Science Centre - OpEN.SC). Besides the collaborative aspect of creating new research groups, every partner institution of this science information centre that makes its own data available is allowed to search the whole data pool of the participating centres. A core task is the implementation of a non-restricting, open data structure for the various different data sources. We decided to use a modern RDF model and, in a first phase, transformed original data coming from the web-based Electronic Patient Record database TBase©.
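
A small sketch of what such an RDF representation can look like with the rdflib library is shown below; the namespace, class, and property names are illustrative only, not the actual OpEN.SC or TBase© vocabulary.

    from rdflib import Graph, Literal, Namespace, RDF

    # Illustrative namespace; not the actual OpEN.SC vocabulary
    EX = Namespace("http://example.org/nephrology#")

    g = Graph()
    patient = EX["patient/12345"]
    g.add((patient, RDF.type, EX.TransplantPatient))
    g.add((patient, EX.creatinine, Literal(1.4)))
    g.add((patient, EX.transplantDate, Literal("2006-03-17")))

    # Because RDF is schema-flexible, a new source can contribute its own
    # predicates without altering existing data.
    g.add((patient, EX.donorType, Literal("living related")))

    print(g.serialize(format="turtle"))

This schema-last property is what makes RDF attractive as a non-restricting structure for heterogeneous clinical sources.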

Identification of Wideband Sources Using Higher Order Statistics in Noisy Environment

This paper deals with the localization of wideband sources. We develop a new approach for estimating wideband source parameters. The method is based on higher-order statistics of the recorded data, in order to eliminate the Gaussian components from the signals received on the various hydrophones; in fact, the sea-bottom noise is regarded as Gaussian. Thanks to the coherent signal-subspace algorithm, based on the cumulant matrix of the received data instead of the cross-spectral matrix, the wideband correlated sources are accurately located even in a very noisy environment. We demonstrate the performance of the proposed algorithm on real data recorded during an underwater acoustics experiment.
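
The sketch below shows the core ingredient, a sample estimate of a fourth-order cumulant matrix that, unlike the covariance matrix, is asymptotically blind to Gaussian noise; the array geometry, source signal, and matrix arrangement are illustrative, not the paper's exact algorithm.

    import numpy as np

    rng = np.random.default_rng(0)
    T, N = 5000, 4                                # snapshots, hydrophones
    s = rng.choice(np.array([1, -1, 1j, -1j]), T) # non-Gaussian (QPSK-like) source
    a = np.exp(-1j * np.pi * np.arange(N) * 0.3)  # hypothetical steering vector
    X = np.outer(a, s) + 0.5 * (rng.standard_normal((N, T))
                                + 1j * rng.standard_normal((N, T)))

    def cum4(p, q, r, t):
        """Sample fourth-order cumulant cum(p, q*, r, t*) of zero-mean data."""
        m = lambda u, v: np.mean(u * v)
        return (np.mean(p * q.conj() * r * t.conj())
                - m(p, q.conj()) * m(r, t.conj())
                - m(p, t.conj()) * m(r, q.conj())
                - m(p, r) * m(q.conj(), t.conj()))

    # Cumulant matrix: shares the signal subspace with the covariance matrix
    # but suppresses the Gaussian noise contribution.
    C = np.array([[sum(cum4(X[i], X[j], X[m], X[m]) for m in range(N))
                   for j in range(N)] for i in range(N)])
    print(np.round(C, 2))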

Analytical Model Prediction: Micro-Cutting Tool Forces with the Effect of Friction on Machining Titanium Alloy (Ti-6Al-4V)

In this paper, a methodology for predicting tool forces in oblique machining is introduced by adopting the orthogonal cutting technique. The applied analytical calculation is mostly based on the Devries model, and some parts of the methodology are adopted from the Amareggo-Brown model. Model validation is performed by comparing the predictions with experimental data on machining titanium alloy (Ti-6Al-4V) from a micro-cutting tool perspective. Good agreement with the experiments is observed. A detailed friction model affecting the tool forces has also been examined, with reasonable results obtained.

Genetic Algorithm Parameters Optimization for Bi-Criteria Multiprocessor Task Scheduling Using Design of Experiments

Multiprocessor task scheduling is an NP-hard problem, and the Genetic Algorithm (GA) has been revealed as an excellent technique for finding an optimal solution. In the past, several GA-based methods have been considered for the solution of this problem, but all of them address a single criterion. In the present work, minimization of a bi-criteria multiprocessor task scheduling objective, the weighted sum of makespan and total completion time, is considered. Efficiency and effectiveness of a genetic algorithm can be achieved by optimizing its parameters, such as the crossover and mutation operators, crossover probability, and selection function. The effects of the GA parameters on minimization of the bi-criteria fitness function, and the subsequent setting of the parameters, have been accomplished by the central composite design (CCD) approach of response surface methodology (RSM) from Design of Experiments. The experiments were performed with different levels of the GA parameters, and analysis of variance was performed to identify the parameters significant for minimizing makespan and total completion time simultaneously.
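
As a sketch of the CCD/RSM step, the code below generates a two-factor central composite design in coded units and fits a second-order response surface by least squares; run_ga() is a placeholder for an actual GA run, and the real study uses more factors.

    import itertools
    import numpy as np

    # Central composite design for two coded GA parameters,
    # e.g., crossover probability and mutation probability.
    alpha = np.sqrt(2)
    factorial = np.array(list(itertools.product([-1, 1], repeat=2)), float)
    axial = np.array([[alpha, 0], [-alpha, 0], [0, alpha], [0, -alpha]])
    center = np.zeros((3, 2))
    design = np.vstack([factorial, axial, center])

    def run_ga(x1, x2):
        """Placeholder for the bi-criteria fitness obtained from a GA run."""
        return 5 + x1**2 + 0.5*x2**2 - 0.3*x1*x2 + 0.1*np.random.randn()

    y = np.array([run_ga(*row) for row in design])

    # Fit y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
    x1, x2 = design.T
    A = np.column_stack([np.ones(len(y)), x1, x2, x1**2, x2**2, x1*x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print("fitted RSM coefficients:", np.round(coef, 3))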

An Experimental and Numerical Investigation on Gas Hydrate Plug Flow in the Inclined Pipes and Bends

Gas hydrates can agglomerate and block multiphase oil and gas pipelines when water is present at hydrate-forming conditions. The aim of "Cold Flow Technology" is to condition gas hydrates so that they can be transported as a slurry mixture without risk of agglomeration. During pipeline shutdown, however, hydrate particles may settle in bends and build hydrate plugs. An experimental setup has been designed and constructed to study the flow of such plugs during start-up operations. Experiments have been performed using a model fluid and model hydrate particles. The propagation of initial plugs through a bend was recorded with impedance probes along the pipe. The experimental results show a dispersion of the plug front. A peak in pressure drop was also recorded when the plugs were passing the bend. The evolution of the plugs has been simulated by numerical integration of the incompressible mass balance equations, with an imposed mixture velocity. The slip between particles and carrier fluid has been calculated using a drag relation together with a particle-fluid force balance.
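
A minimal sketch of the particle-fluid force balance mentioned above follows, assuming the Schiller-Naumann drag correlation; the correlation choice and the fluid/particle properties are illustrative, not necessarily those of the experiments.

    import numpy as np

    def slip_velocity(d_p, rho_p, rho_f, mu, g=9.81, tol=1e-10):
        """Terminal slip velocity of a hydrate particle from a drag-buoyancy
        force balance, iterated because Cd depends on Reynolds number."""
        v = 1e-4  # initial guess (m/s)
        for _ in range(200):
            Re = max(rho_f * v * d_p / mu, 1e-12)
            Cd = 24.0 / Re * (1.0 + 0.15 * Re**0.687)  # Schiller-Naumann, Re < 1000
            v_new = np.sqrt(4.0 * g * d_p * abs(rho_p - rho_f) / (3.0 * Cd * rho_f))
            if abs(v_new - v) < tol:
                break
            v = v_new
        return v

    # Hypothetical model-hydrate particle in a model fluid
    print(f"slip velocity ~ {slip_velocity(5e-4, 1100.0, 900.0, 2e-3)*1000:.2f} mm/s")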

Signals from the Rocks

There is increasing evidence that earthquakes produce electromagnetic signals observable at the surface in the extremely low to very low frequency (ELF-VLF) range, often in advance of the main event. These precursors are candidates for prediction purposes. Laboratory experiments confirm that material under load emits an electromagnetic signature; the detailed generation mechanisms, however, are not yet well understood.

Approximate Range-Sum Queries over Data Cubes Using Cosine Transform

In this research, we propose to use the discrete cosine transform to approximate the cumulative distributions of data cube cells' values. The cosine transform is known to have a good energy compaction property and thus can approximate data distribution functions easily with a small number of coefficients. The derived estimator is accurate and easy to update. We perform experiments to compare its performance with a well-known technique, the (Haar) wavelet. The experimental results show that the cosine transform performs much better than the wavelet in estimation accuracy, speed, space efficiency, and ease of update.
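
A minimal sketch of the idea in one dimension, using SciPy's DCT: the cell values' cumulative distribution is transformed, truncated to a few coefficients, and reconstructed, after which any range-sum is a difference of two CDF values. The data and coefficient budget are hypothetical.

    import numpy as np
    from scipy.fft import dct, idct

    rng = np.random.default_rng(1)
    cells = rng.poisson(20, size=256).astype(float)   # one-dimensional data cube cells
    F = np.cumsum(cells)                              # cumulative distribution

    # Keep only the first k DCT coefficients (energy compaction)
    k = 16
    coef = dct(F, norm="ortho")
    coef[k:] = 0.0
    F_hat = idct(coef, norm="ortho")

    def range_sum(lo, hi, cdf):
        # SUM over cells [lo, hi] from a cumulative distribution
        return cdf[hi] - (cdf[lo - 1] if lo > 0 else 0.0)

    exact = range_sum(50, 120, F)
    approx = range_sum(50, 120, F_hat)
    print(f"exact={exact:.0f}  approx={approx:.0f}  rel.err={(approx-exact)/exact:.2%}")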

An Appraisal of Coal Fly Ash Soil Amendment Technology (FASAT) of Central Institute of Mining and Fuel Research (CIMFR)

Coal will continue to be the predominant source of global energy for several decades to come. The huge generation of fly ash (FA) from the combustion of coal in thermal power plants (TPPs) raises concerns about its disposal and utilization. FA application as a soil ameliorant for agriculture and forestry, based on its typical characteristics, is a promising area and hence a global pursuit. The inferences drawn so far suffer from variations in ash characteristics, soil types, and agro-climatic conditions, which make it difficult to correlate the effects of ash across plant species and soil types. Indian FAs have low bulk density, high water-holding capacity and porosity, abundant silt-sized particles, alkaline nature, negligible solubility, and reasonable plant nutrients. Findings from more than two decades of demonstration trials, from lab/pot studies to long-term field experiments, have been developed into the fly ash soil amendment technology (FASAT) by the Central Institute of Mining and Fuel Research (CIMFR), Dhanbad. The performance of different crops and plant species in cultivable and problematic soils is encouraging and eco-friendly, and the technology is being adopted by farmers. FA application includes ash alone and in combination with inorganic/organic amendments; combination treatments, including those with bio-solids, perform better than FA alone. The optimum dose is up to 100 t/ha for cultivable land and up to or above 200 t/ha for waste/degraded land and mine refuse, depending on the characteristics of the ash and soil. Elemental toxicity in Indian FA is usually not of much concern owing to the alkaline nature of the ashes, the oxide forms of the elements, and elemental concentrations within the threshold limits for soil application. Combating toxicity, if any, is possible through combination treatments with organic materials and phytoremediation. Government initiatives through extension programmes involving farmers and ash-generating organizations need to be accelerated.

Synthesis of Logic Circuits Using Fractional-Order Dynamic Fitness Functions

This paper analyses the performance of a genetic algorithm using a new concept, namely a fractional-order dynamic fitness function, for the synthesis of combinational logic circuits. The experiments reveal superior results in terms of speed and convergence toward a solution.
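
The paper's exact definition is not reproduced here, but one standard building block of a fractional-order dynamic term is sketched below: Grünwald-Letnikov coefficients weighting a memory of past fitness values. The blending form and parameter values are illustrative assumptions.

    import numpy as np

    def gl_coefficients(alpha, K):
        """Grünwald-Letnikov coefficients c_k = (-1)^k * binom(alpha, k),
        computed by the stable recurrence c_k = c_{k-1} * (k - 1 - alpha) / k."""
        c = np.empty(K + 1)
        c[0] = 1.0
        for k in range(1, K + 1):
            c[k] = c[k - 1] * (k - 1 - alpha) / k
        return c

    def dynamic_fitness(history, alpha=0.5, K=8):
        """Blend the current raw fitness with fractionally weighted memory of
        past generations (illustrative form, not the paper's exact definition)."""
        c = gl_coefficients(alpha, K)
        recent = history[::-1][: K + 1]   # newest first
        return sum(ck * fk for ck, fk in zip(c, recent))

    raw_fitness_per_generation = [0.2, 0.35, 0.5, 0.58, 0.6, 0.61]
    print(dynamic_fitness(raw_fitness_per_generation))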

A New Fuzzy Mathematical Model in Recycling Collection Networks: A Possibilistic Approach

Attention to environmental issues, including the reduction of scrap and consumer residuals, along with recovery of the economic value remaining in the life cycle of goods/products, gives companies an important competitive advantage. The aim of this paper is to present a new mixed nonlinear facility location-allocation model for recycling collection networks that considers multiple echelons, multiple suppliers, multiple collection centers, and multiple facilities in the recycling network. To make decisions that reflect reality, demands, returns, capacities, costs, and distances are regarded as uncertain in our model. For this purpose, a fuzzy mathematical programming-based possibilistic approach from the recent literature is introduced as the solution methodology for the proposed mixed nonlinear programming (MNLP) model. Computational experiments are provided to illustrate the applicability of the designed model in a supply chain environment and to help decision makers facilitate their analysis.
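
One common building block of such possibilistic approaches is sketched below: each uncertain parameter is modelled as a triangular fuzzy number and replaced by a crisp possibilistic expected value. The numbers are hypothetical, and the paper's full chance-constrained treatment is richer than this.

    from dataclasses import dataclass

    @dataclass
    class TriangularFuzzy:
        a: float  # pessimistic
        b: float  # most likely
        c: float  # optimistic

        def expected_value(self) -> float:
            # Possibilistic (credibilistic) expected value of a triangular number
            return (self.a + 2 * self.b + self.c) / 4

    # Uncertain demand and return fraction at a collection centre (hypothetical)
    demand = TriangularFuzzy(800, 1000, 1300)
    ret = TriangularFuzzy(0.10, 0.15, 0.25)

    print(f"defuzzified demand = {demand.expected_value():.0f} units, "
          f"return rate = {ret.expected_value():.3f}")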

Influence of Deep Cold Rolling and Low Plasticity Burnishing on Surface Hardness and Surface Roughness of AISI 4140 Steel

Deep cold rolling (DCR) and low plasticity burnishing (LPB) are cold-working processes that readily produce a smooth and work-hardened surface by plastic deformation of surface irregularities. The present study focuses on the surface roughness and surface hardness of AISI 4140 work material, using a fractional factorial design of experiments. The surface integrity aspects of the work material were assessed in order to identify the predominant factors among the selected parameters. These were then ranked in order of significance, followed by setting the factor levels to minimize surface roughness and/or maximize surface hardness. In the present work, the influence of the main process parameters (force, feed rate, number of tool passes/overruns, initial roughness of the workpiece, ball material, ball diameter, and lubricant used) on the surface roughness and hardness of AISI 4140 steel was studied for both the LPB and DCR processes, and the results are compared. It was observed that surface hardness improved by 167% with the LPB process and by 442% with the DCR process. It was also found that the force, ball diameter, number of tool passes, and initial roughness of the workpiece are the most pronounced parameters, having a significant effect on the workpiece surface during deep cold rolling and low plasticity burnishing.
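
As a sketch of the screening design, the code below constructs a 2^(7-4) resolution-III fractional factorial covering seven parameters in eight runs and estimates one main effect; the generators and response values are illustrative, not the study's actual design matrix.

    import itertools
    import numpy as np

    # 2^(7-4) resolution-III fractional factorial: 7 factors in 8 runs.
    # Base factors A, B, C; generators D=AB, E=AC, F=BC, G=ABC.
    base = np.array(list(itertools.product([-1, 1], repeat=3)))
    A, B, C = base.T
    design = np.column_stack([A, B, C, A*B, A*C, B*C, A*B*C])

    factors = ["force", "feed", "passes", "initial_Ra",
               "ball_mat", "ball_dia", "lubricant"]
    for run in design:
        print(dict(zip(factors, run)))

    # Main-effect estimate for one factor from measured responses (hypothetical):
    y = np.array([2.1, 1.7, 1.9, 1.5, 2.0, 1.6, 1.8, 1.4])  # e.g., roughness Ra
    effect_force = y[design[:, 0] == 1].mean() - y[design[:, 0] == -1].mean()
    print("main effect of force on Ra:", round(effect_force, 3))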

Heat Transfer Coefficients for Particulate Airflow in Shell and Coiled Tube Heat Exchangers

In this work, we experimentally study heat transfer from the particulate-laden exhaust air of a detergent spray-drying tower to water, using a shell and coiled tube heat exchanger. Water flows in the coiled tubes, while air loaded with detergent particles 43 micrometers in diameter flows within the shell. Four coiled tubes with different coil pitches are used in a counter-current flow configuration. We investigate the heat transfer coefficients inside and outside the heat transfer surfaces through 400 experiments. Correlations between the Nusselt number and the Reynolds number, Prandtl number, ratio of particulate mass flow rate to air mass flow rate, and coiled tube pitch parameter are proposed. The correlations obtained can be used to predict heat transfer between the tube and shell sides of the heat exchanger.
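
To illustrate how such a correlation is used once fitted, the sketch below evaluates a power-law Nusselt correlation of the stated form and converts it to a convective coefficient; the constant and exponents are placeholders, not the fitted values from the 400 experiments.

    # Hypothetical power-law form of the proposed shell-side correlation;
    # the constant and exponents here are placeholders, not the fitted values.
    def nusselt_shell(Re, Pr, particle_loading, pitch_ratio,
                      a=0.2, b=0.7, c=0.33, d=0.1, e=-0.15):
        return a * Re**b * Pr**c * particle_loading**d * pitch_ratio**e

    k_air, D_h = 0.028, 0.05   # air conductivity (W/m.K), hydraulic diameter (m)
    Nu = nusselt_shell(Re=15000, Pr=0.7, particle_loading=0.02, pitch_ratio=1.5)
    h = Nu * k_air / D_h       # convective coefficient, W/(m^2.K)
    print(f"Nu = {Nu:.1f}, h = {h:.1f} W/(m2.K)")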

Jitter Transfer in High Speed Data Links

Phase-locked loops for data links operating at 10 Gb/s or faster are low phase noise devices designed to operate with a low-jitter reference clock. Characterization of their jitter transfer function is difficult because the intrinsic noise of the device is comparable to the random noise level in the reference clock signal. A linear model is proposed to account for the intrinsic noise of a PLL. Intrinsic noise data for a PLL for 10 Gb/s links are presented. The jitter transfer function of a PLL in a test chip for 12.8 Gb/s data links was determined in experiments using the 400 MHz reference clock as the source of simultaneous excitations over a wide frequency range. The results show that the PLL jitter transfer function can be approximated by a second-order linear model.
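
A minimal sketch of such a second-order model follows; the natural frequency and damping values are hypothetical, not the test chip's measured parameters.

    import numpy as np

    def jitter_transfer_mag(f, fn, zeta):
        """Magnitude of the second-order PLL jitter transfer model
        H(s) = (2*zeta*wn*s + wn^2) / (s^2 + 2*zeta*wn*s + wn^2)."""
        wn = 2 * np.pi * fn
        s = 1j * 2 * np.pi * f
        H = (2 * zeta * wn * s + wn**2) / (s**2 + 2 * zeta * wn * s + wn**2)
        return np.abs(H)

    f = np.logspace(5, 8, 4)   # 100 kHz to 100 MHz
    for fi, m in zip(f, jitter_transfer_mag(f, fn=4e6, zeta=0.8)):
        print(f"{fi/1e6:8.2f} MHz : |H| = {m:.3f}")

Below the loop bandwidth the magnitude is near unity (reference jitter passes through), and well above it the transfer rolls off, which is the behaviour the measured transfer function is fitted against.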

An Efficient Heuristic for the Minimum Connected Dominating Set Problem on Ad Hoc Wireless Networks

The connected dominating set (CDS) problem in unit disk graphs has a significant impact on the efficient design of routing protocols in wireless sensor networks, where the search space for a route is reduced to the nodes in the set. A set is dominating if every node in the system is either in the set or a neighbor of a node in the set. In this paper, a simple and efficient heuristic method is proposed for finding a minimum connected dominating set (MCDS) in ad hoc wireless networks, based on a new parameter, the support of vertices. With this parameter, the proposed heuristic approach effectively finds the MCDS of a graph. Extensive computational experiments show that the proposed approach outperforms recently proposed heuristics from the literature for the MCDS problem.
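
The sketch below shows a greedy CDS construction driven by a vertex-support parameter (taken here as the sum of the neighbours' degrees); it conveys the flavour of a support-based heuristic but is not the paper's exact algorithm, and it assumes a connected graph.

    def support(v, adj):
        """Support of a vertex: sum of the degrees of its neighbours."""
        return sum(len(adj[u]) for u in adj[v])

    def greedy_cds(adj):
        """Grow a connected dominating set, always extending with the
        frontier vertex that dominates the most new nodes, with support
        as the tie-breaker (illustrative greedy)."""
        start = max(adj, key=lambda v: support(v, adj))
        cds = {start}
        dominated = {start} | adj[start]
        while dominated != set(adj):
            frontier = {u for v in cds for u in adj[v]} - cds
            best = max(frontier,
                       key=lambda u: (len(adj[u] - dominated), support(u, adj)))
            cds.add(best)
            dominated |= adj[best] | {best}
        return cds

    # Small unit-disk-style graph given as adjacency sets
    adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 5}, 4: {2, 6}, 5: {3, 6}, 6: {4, 5}}
    print(sorted(greedy_cds(adj)))

Because new members are always drawn from the neighbourhood of the current set, the result stays connected by construction.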

Attribute Weighted Class Complexity: A New Metric for Measuring Cognitive Complexity of OO Systems

In general, class complexity is measured based on factors such as Lines of Code (LOC), Function Points (FP), Number of Methods (NOM), Number of Attributes (NOA), and so on. Several new techniques, methods, and metrics based on different factors have been developed by researchers for calculating the complexity of a class in Object Oriented (OO) software. Earlier, Arockiam et al. proposed a complexity measure named Extended Weighted Class Complexity (EWCC), an extension of the Weighted Class Complexity proposed by Mishra et al. EWCC is the sum of the cognitive weights of the attributes and methods of the class and of the classes derived from it. In EWCC, the cognitive weight of each attribute is taken to be 1. The main problem with the EWCC metric is that every attribute holds the same weight, whereas in general the cognitive load of understanding different types of attributes cannot be the same. Hence, we propose a new metric named Attribute Weighted Class Complexity (AWCC). In AWCC, cognitive weights are assigned to the attributes based on the effort needed to understand their data types. The proposed metric has been shown to be a better measure of the complexity of a class with attributes through case studies and experiments.
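
A small sketch of how AWCC-style scoring can be computed is given below; the per-type weights are illustrative assumptions, not the weights proposed in the paper.

    # Assumed cognitive weights per attribute data type (illustrative values,
    # not the weights from the paper).
    TYPE_WEIGHT = {"int": 1, "float": 1, "string": 2, "array": 3, "object": 4}

    def awcc(attribute_types, method_cognitive_weights):
        """Attribute Weighted Class Complexity: type-weighted attributes
        plus the cognitive weights of the class's methods."""
        attr_part = sum(TYPE_WEIGHT[t] for t in attribute_types)
        method_part = sum(method_cognitive_weights)
        return attr_part + method_part

    # A class with four attributes and two methods whose control-flow
    # cognitive weights were measured as 3 and 7
    print(awcc(["int", "string", "array", "object"], [3, 7]))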