A 1.8 V RF CMOS Active Inductor in 0.18 µm CMOS Technology

An active inductor in CMOS technology with a supply voltage of 1.8 V is presented. The value of the inductance L can range from 0.12 nH to 0.25 nH at high frequency (HF). The proposed active inductor is designed in TSMC 0.18 µm CMOS technology. The power dissipation of this inductor remains constant across all operating frequency bands, consuming around 20 mW from the 1.8 V power supply. Inductors implemented as integrated circuits occupy a much smaller area and have therefore attracted researchers' attention for more than a decade. In this design, Advanced Design System (ADS) was used to simulate the circuit.

A Model for Estimation of Efforts in Development of Software Systems

Software effort estimation is the process of predicting the most realistic amount of effort required to develop or maintain software based on incomplete, uncertain, and/or noisy input. Effort estimates may be used as input to project plans, iteration plans, and budgets. Various models, such as the Halstead, Walston-Felix, Bailey-Basili, Doty, and GA-based models, have already been used to estimate software effort for projects. In this study, statistical models, a Fuzzy-GA hybrid, and Neuro-Fuzzy (NF) inference systems are evaluated for estimating software project effort. The performance of the developed models was tested on NASA software project datasets, and the results were compared with the Halstead, Walston-Felix, Bailey-Basili, Doty, and genetic-algorithm-based models reported in the literature. The NF model achieved the lowest MMRE and RMSE values, outperforming the Fuzzy-GA hybrid inference system and the other existing models used for effort prediction.
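For reference, the two evaluation criteria used to rank the models, MMRE and RMSE, can be computed as in the following sketch; the effort values are hypothetical, not NASA data:

```python
# Minimal sketch of the two evaluation metrics; inputs are hypothetical.
import numpy as np

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error: mean(|a - p| / a)."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs(actual - predicted) / actual)

def rmse(actual, predicted):
    """Root Mean Squared Error: sqrt(mean((a - p)^2))."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.sqrt(np.mean((actual - predicted) ** 2))

# Hypothetical efforts in person-months
actual    = [24.0, 62.0, 11.1, 170.0]
predicted = [30.0, 58.0, 15.0, 155.0]
print(f"MMRE = {mmre(actual, predicted):.3f}, RMSE = {rmse(actual, predicted):.2f}")
```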

An Experimental Design Approach to Determine the Effects of Operating Parameters on the Rate of Ru-Promoted Ir Carbonylation of Methanol

Carbonylation of methanol in the homogeneous phase is one of the major routes for the production of acetic acid. Among the group VIII metal catalysts used in this process, iridium has displayed the best capabilities. To investigate the effects of operating parameters such as temperature, pressure, and the concentrations of methyl iodide, methyl acetate, iridium, ruthenium, and water on the reaction rate, an experimental design based on a central composite design (CCD) was utilized. The statistical rate equation developed by this method contained the individual, interaction, and curvature effects of the parameters on the reaction rate. The model, with a p-value less than 0.0001 and R² values greater than 0.9, confirmed a satisfactory fit between the experimental and theoretical studies. In other words, the developed model and the experimental data passed all diagnostic tests, establishing the model as statistically significant.
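For illustration, a self-contained sketch (not the authors' code) of how a CCD matrix is constructed in coded units: 2^k factorial points, 2k axial (star) points at distance alpha, and replicated center points. The rotatable alpha and the four center points are assumptions:

```python
# Construct a central composite design (CCD) in coded units.
import itertools
import numpy as np

def central_composite(k, alpha=None, n_center=4):
    if alpha is None:
        alpha = (2 ** k) ** 0.25          # rotatable design
    factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i], axial[2 * i + 1, i] = -alpha, alpha
    center = np.zeros((n_center, k))
    return np.vstack([factorial, axial, center])

# e.g. three coded factors such as temperature, pressure, [MeI]
print(central_composite(3))
```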

A Technique for Execution of Written Values on Shared Variables

The current paper conceptualizes the technique of release consistency together with the indispensable concept of user-defined synchronization. A programming model built on objects and classes is illustrated and demonstrated. The essence of the paper is phases, events, and parallel computing execution. The technique by which written values become visible on shared variables is implemented. The second part of the paper consists of the implementation of user-defined high-level synchronization primitives and of a system architecture with memory protocols. Techniques are proposed that are central in deciding whether to validate or invalidate a stale page.
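As an illustration of the underlying idea (our sketch, not the paper's system), writes to shared variables can be buffered per thread and made visible to others only at user-defined synchronization points:

```python
# Release-consistency sketch: each thread buffers writes locally; they are
# published to the shared store on release(), and acquire() pulls a fresh view.
import threading

class ReleaseConsistentStore:
    def __init__(self):
        self._shared = {}
        self._lock = threading.Lock()
        self._local = threading.local()

    def _buf(self):
        if not hasattr(self._local, "writes"):
            self._local.writes, self._local.view = {}, {}
        return self._local

    def write(self, key, value):          # buffered, not yet visible
        self._buf().writes[key] = value

    def read(self, key):                  # sees the view from the last acquire
        return self._buf().view.get(key)

    def release(self):                    # publish buffered writes
        with self._lock:
            self._shared.update(self._buf().writes)
        self._buf().writes.clear()

    def acquire(self):                    # refresh the view of shared memory
        with self._lock:
            self._buf().view = dict(self._shared)
```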

Categorical Missing Data Imputation Using Fuzzy Neural Networks with Numerical and Categorical Inputs

There are many situations where input feature vectors are incomplete, and methods to tackle this problem have been studied for a long time. A commonly used procedure is to replace each missing value with an imputation. This paper presents a method to perform categorical missing data imputation from numerical and categorical variables. The imputations are based on Simpson's fuzzy min-max neural networks, where the input variables for learning and classification are numerical only. The proposed method extends the input to categorical variables by introducing new fuzzy sets, a new operation, and a new architecture. The procedure is tested and compared with other methods using opinion poll data.
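For reference, a minimal sketch of the hyperbox membership idea behind Simpson's fuzzy min-max networks for numerical inputs (a simplified membership function; the paper's categorical extension with new fuzzy sets, a new operation, and a new architecture is not reproduced here). The hyperbox bounds and sensitivity parameter are hypothetical:

```python
# Simplified fuzzy min-max hyperbox membership for numerical inputs.
import numpy as np

def ramp(r, gamma):
    """Two-sided threshold f(r, gamma) = max(0, min(1, r * gamma))."""
    return np.clip(r * gamma, 0.0, 1.0)

def hyperbox_membership(x, v, w, gamma=4.0):
    """Degree to which pattern x falls in the hyperbox [v, w], per dimension."""
    x, v, w = map(np.asarray, (x, v, w))
    return np.mean(1.0 - ramp(x - w, gamma) - ramp(v - x, gamma))

print(hyperbox_membership([0.4, 0.6], v=[0.3, 0.5], w=[0.5, 0.7]))  # -> 1.0
```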

Application of Data Envelopment Analysis and Performance Indicators to Irrigation Systems in Thessaloniki Plain (Greece)

In this paper, a benchmarking framework is presented for the performance assessment of irrigation systems. First, data envelopment analysis (DEA) is applied to measure the technical efficiency of irrigation systems. This method, based on linear programming, aims to determine a consistent efficiency ranking of irrigation systems in which known inputs, such as the water volume supplied and the total irrigated area, and a given output corresponding to the total value of irrigation production are taken into account simultaneously. Second, in order to examine irrigation efficiency in more detail, a cross-system comparison is elaborated using a set of performance indicators selected by IWMI. The above methodologies were applied to the Thessaloniki plain, located in northern Greece, and the results of the application are presented and discussed. The conjunctive use of DEA and performance indicators seems to be a very useful tool for efficiency assessment and for the identification of best practices in irrigation systems management.
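For readers unfamiliar with DEA, the sketch below (ours, with hypothetical data) shows the standard input-oriented CCR envelopment model solved per irrigation system with linear programming: each system's efficiency theta is the smallest factor by which its inputs could be scaled while a combination of peer systems still matches its output:

```python
# Input-oriented CCR DEA via linear programming (illustrative sketch).
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y):
    """X: inputs (n_dmus, n_in); Y: outputs (n_dmus, n_out). Returns thetas."""
    n = X.shape[0]
    thetas = []
    for o in range(n):
        # decision variables: [theta, lambda_1 .. lambda_n], all >= 0
        c = np.r_[1.0, np.zeros(n)]
        A_in = np.c_[-X[o][:, None], X.T]                # sum l_j x_ij <= theta x_io
        A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]   # sum l_j y_rj >= y_ro
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]])
        thetas.append(res.x[0])
    return np.array(thetas)

# hypothetical data: inputs = [water supplied, irrigated area], output = production value
X = np.array([[5.0, 2.0], [8.0, 3.0], [6.0, 4.0]])
Y = np.array([[10.0], [11.0], [12.0]])
print(ccr_efficiency(X, Y))
```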

Value Engineering and Its Effect on the Reduction of Industrial Organizations' Energy Expenses

A review of the condition of energy consumption and rates in Iran shows that, unfortunately, the optimization and conservation of energy in the country's active industries lacks a practical and effective method, and in most factories energy consumption and rates are higher than in similar industries of industrialized countries. The increasing demand for electrical energy and the overhead costs it imposes on the organization force companies to search for suitable approaches to optimizing energy consumption and demand management. The application of value engineering techniques is among these approaches. Value engineering is considered a powerful tool for improving profitability. These tools are used for reducing expenses, increasing profits, improving quality, increasing market share, performing work in shorter durations, utilizing resources more efficiently, and so on. This article reviews value engineering and its capability to create effective transformations in industrial organizations in order to reduce energy costs. The effects of the tasks performed in optimizing energy consumption by means of value engineering techniques are investigated and described in a case study at the Mazandaran wood and paper industries, the biggest consumer of energy in northern Iran.

Extended Deductive Databases with Uncertain Information

The paper presents an approach for handling uncertain information in deductive databases using multivalued logics. Uncertainty means that database facts may be assigned logical values other than the conventional true and false. The logical values represent various degrees of truth, which may be combined and propagated by applying the database rules. A corresponding multivalued database semantics is defined. We show that it extends successful conventional semantics, such as the well-founded semantics, and has polynomial-time data complexity.
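As a hedged illustration of how degrees of truth can be combined and propagated through rules (the paper's multivalued semantics extending the well-founded semantics is richer than this), a common min/max fixpoint evaluation looks as follows; the facts, rules, and degrees are invented:

```python
# Propagate degrees of truth: min across a rule body (conjunction),
# max across alternative derivations (disjunction), iterated to a fixpoint.
def evaluate(facts, rules):
    """facts: {atom: degree}; rules: list of (head, [body atoms])."""
    truth = dict(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            degree = min(truth.get(b, 0.0) for b in body)   # conjunction
            if degree > truth.get(head, 0.0):               # disjunction
                truth[head] = degree
                changed = True
    return truth

facts = {"rain": 0.8, "sprinkler": 0.4}
rules = [("wet", ["rain"]), ("wet", ["sprinkler"]), ("slippery", ["wet"])]
print(evaluate(facts, rules))  # wet: 0.8, slippery: 0.8
```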

The Same or Not the Same - On the Variety of Mechanisms of Path Dependence

In association with path dependence, researchers often speak of institutional "lock-in", thereby indicating that far-reaching path deviation or path departure is to be regarded as an exceptional case. This article submits the alleged general inclination toward stability of path-dependent processes to a critical review. The different reasons for path dependence found in the literature indicate that different continuity-ensuring mechanisms are at work when people speak of path dependence ("increasing returns", complementarity, sequences, etc.). As these mechanisms are susceptible to fundamental change in different ways and to different degrees, the path dependence concept alone is of only limited explanatory value. It is therefore indispensable to also identify the underlying continuity-ensuring mechanism if a statement's empirical value is to go beyond the trivial, always true "history matters".

Working Capital Management, Firms' Performance and Market Valuation in Nigeria

This study examines the impact of working capital management on firms' performance and the market value of firms in Nigeria. A sample of fifty-four non-financial quoted firms listed on the Nigerian Stock Exchange was used. Data were collected from the annual reports of the sampled firms for the period 1995-2009. The results show a significant negative relationship between the cash conversion cycle and both market valuation and firm performance. They also show that the debt ratio is positively related to market valuation and negatively related to firm performance. The findings confirm a significant relationship between market valuation, profitability, and the working capital components, in line with previous studies. This means that Nigerian firms should ensure adequate management of working capital, especially the cash conversion cycle components of accounts receivable, accounts payable, and inventories, as efficient working capital management is expected to contribute positively to a firm's market value.
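For concreteness, the cash conversion cycle referred to above is days of inventory plus days of receivables minus days of payables; the figures in this sketch are hypothetical:

```python
# Cash conversion cycle (CCC) = DIO + DSO - DPO, with hypothetical figures.
def cash_conversion_cycle(inventory, receivables, payables, cogs, sales, days=365):
    dio = inventory / cogs * days       # days inventory outstanding
    dso = receivables / sales * days    # days sales outstanding
    dpo = payables / cogs * days        # days payables outstanding
    return dio + dso - dpo

# hypothetical annual figures (in millions of naira)
print(cash_conversion_cycle(inventory=40, receivables=60, payables=35,
                            cogs=300, sales=420))  # ~58 days
```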

Implementing an Intuitive Reasoner with a Large Weather Database

In this paper, the implementation of a rule-based intuitive reasoner is presented. The implementation comprises two parts: the rule induction module and the intuitive reasoner. A large weather database was acquired as the data source. Twelve weather variables from those data were chosen as the "target variables" whose values were to be predicted by the intuitive reasoner. A "complex" situation was simulated by making only subsets of the data available to the rule induction module. As a result, the induced rules were based on incomplete information with variable levels of certainty. The certainty level was modeled by a metric called "Strength of Belief", which was assigned to each rule or datum as ancillary information about the confidence in its accuracy. Two techniques were employed to induce rules from the data subsets: decision trees for the discrete target variables and multi-polynomial regression for the continuous ones. The intuitive reasoner was tested for its ability to use the induced rules to predict the classes of the discrete target variables and the values of the continuous target variables. The reasoner implemented two types of reasoning, fast and broad, where, by analogy to human thought, the former corresponds to quick decision making and the latter to deeper contemplation. For reference, a weather data analysis approach that had been applied to similar tasks was adopted to analyze the complete database and create predictive models for the same 12 target variables. The values predicted by the intuitive reasoner and by the reference approach were compared with actual data. The intuitive reasoner reached near-100% accuracy for two continuous target variables. For the discrete target variables, the intuitive reasoner predicted at least 70% as accurately as the reference reasoner. Since the intuitive reasoner operated on rules derived from only about 10% of the total data, it demonstrated potential advantages in dealing with sparse data sets compared with conventional methods.
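As one plausible reading (our sketch, not the paper's code), rule induction from a data subset with an ancillary confidence score can look as follows; here "Strength of Belief" is stood in for by leaf purity weighted by leaf support, and the features, target, and subset size are hypothetical:

```python
# Induce decision-tree rules from ~10% of the data and tag each leaf-rule
# with a confidence score (one plausible stand-in for "Strength of Belief").
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                     # hypothetical weather features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # hypothetical discrete target

subset = rng.choice(len(X), size=100, replace=False)   # "complex" situation
tree = DecisionTreeClassifier(max_depth=3).fit(X[subset], y[subset])

leaf = tree.apply(X[subset])                       # leaf id of each training point
for leaf_id in np.unique(leaf):
    mask = leaf == leaf_id
    purity = np.bincount(y[subset][mask]).max() / mask.sum()
    support = mask.sum() / len(subset)
    print(f"rule (leaf {leaf_id}): strength of belief ~ {purity * support:.3f}")
```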

Statistical Reliability Based Modeling of Series and Parallel Operating Systems using Extreme Value Theory

This paper presents a new method for computing the reliability of a system whose components are arranged in a series or parallel configuration. In this method, the life distribution function of the whole structure is estimated using the asymptotic Extreme Value (EV) distribution of Type I (Gumbel theory). The EV distribution is used in its minimal form to estimate the life distribution function of a series structure and in its maximal form for a parallel system. All parameters are estimated by the method of moments. The reliability function, the failure (hazard) rate, and the p-th percentile point of each function are determined. Other important indexes, such as the Mean Time to Failure (MTTF) and the Mean Time to Repair (MTTR), for non-repairable and renewal systems in both series and parallel structures are also computed.
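As an illustration of the quantities involved (ours, with hypothetical location and scale values standing in for the moment estimates), the sketch below evaluates the reliability function, hazard rate, MTTF, and a percentile point for the minimal (series) and maximal (parallel) Type I EV distributions using SciPy:

```python
# Type I extreme value (Gumbel) reliability quantities for series/parallel systems.
from scipy.stats import gumbel_l, gumbel_r

loc, scale = 1000.0, 120.0           # hypothetical moment estimates (hours)
series, parallel = gumbel_l(loc, scale), gumbel_r(loc, scale)

for name, d in [("series", series), ("parallel", parallel)]:
    t = 900.0
    reliability = d.sf(t)                    # R(t) = 1 - F(t)
    hazard = d.pdf(t) / d.sf(t)              # failure (hazard) rate h(t)
    mttf = d.mean()                          # mean time to failure
    p90 = d.ppf(0.9)                         # 90th percentile point
    print(f"{name}: R({t})={reliability:.3f}, h={hazard:.5f}, "
          f"MTTF={mttf:.1f}, t_0.9={p90:.1f}")
```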

Development of a Comprehensive Electricity Generation Simulation Model Using a Mixed Integer Programming Approach

This paper presents the development of an electricity simulation model that takes electrical network constraints into account, applied to the Belgian power system. The core of the model is the optimization of an extensive Unit Commitment (UC) problem by means of Mixed Integer Linear Programming (MILP). Electrical constraints are incorporated through the implementation of a DC load flow. The model covers the Belgian power system at the 220-380 kV high-voltage level (i.e., 93 power plants and 106 nodes). The model features the use of pumped-storage facilities as well as the inclusion of spinning reserves in a single optimization process. Solution times of the model remain within reasonable values.
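To make the UC core concrete, here is a deliberately tiny sketch (two hypothetical plants, three periods, no network, storage, or reserves) of the commitment-plus-dispatch MILP structure that the paper scales up to 93 plants with a DC load flow, written with the PuLP modeling library:

```python
# Toy unit-commitment MILP: commitment binaries, output limits tied to
# commitment, and a per-period demand balance. All numbers are hypothetical.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

plants = {"A": dict(pmin=50, pmax=200, cost=20),
          "B": dict(pmin=20, pmax=100, cost=35)}
demand = [120, 250, 180]                       # MW per period

prob = LpProblem("toy_unit_commitment", LpMinimize)
u = {(g, t): LpVariable(f"u_{g}_{t}", cat=LpBinary)
     for g in plants for t in range(len(demand))}
p = {(g, t): LpVariable(f"p_{g}_{t}", lowBound=0)
     for g in plants for t in range(len(demand))}

prob += lpSum(plants[g]["cost"] * p[g, t] for g, t in p)   # fuel cost objective
for t, d in enumerate(demand):
    prob += lpSum(p[g, t] for g in plants) == d            # demand balance
    for g in plants:
        prob += p[g, t] <= plants[g]["pmax"] * u[g, t]     # max output if committed
        prob += p[g, t] >= plants[g]["pmin"] * u[g, t]     # min output if committed

prob.solve()
print({k: v.value() for k, v in p.items()})
```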

Exons and Introns Classification in Human and Other Organisms

In the paper, the relative performances on spectral classification of short exon and intron sequences of the human and eleven model organisms are studied. In the simulations, all combinations of sixteen one-sequence numerical representations, four threshold values, and four window lengths are considered. Sequences 150 bases in length are chosen, and for each organism a total of 16,000 sequences is used for training and testing. Results indicate that an appropriate combination of one-sequence numerical representation, threshold value, and window length is essential for arriving at top spectral classification results. For fixed-length sequences, the precisions on exon and intron classification obtained for different organisms are not the same because of their genomic differences. In general, precision increases as sequence length increases.
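As background, a minimal sketch of the kind of spectral pipeline such classifiers build on: a DNA string is mapped to a numerical representation (here, the common Voss binary-indicator representation, one of many one-sequence representations), the DFT is taken, and the period-3 spectral power is compared against a threshold. The sequence below is hypothetical:

```python
# Period-3 spectral power of a DNA sequence via the Voss representation.
import numpy as np

def period3_power(seq):
    n = len(seq)
    power = 0.0
    for base in "ACGT":
        indicator = np.array([1.0 if b == base else 0.0 for b in seq])
        spectrum = np.abs(np.fft.fft(indicator)) ** 2
        power += spectrum[n // 3]          # component at frequency k = N/3
    return power / n

seq = "ATGGCC" * 25                        # hypothetical 150-base sequence
print("period-3 power:", period3_power(seq))
```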

Correlations between Cleaning Frequency of Reservoir and Water Tower and Parameters of Water Quality

This study sampled and analyzed the water quality in water reservoirs and water towers installed in two kinds of residential buildings and in school facilities. Water quality data were collected for correlation analysis with the sanitization frequency of the water reservoirs, which was obtained by questioning building managers about the inspection charts kept for the reservoir equipment. The statistical software package SPSS was applied to the two data groups (cleaning frequency and water quality) for regression analysis to determine the optimal sanitization frequency. The correlation coefficient R in this paper represents the degree of correlation, with values of R ranging from +1 to -1. After investigating three categories of drinking water users, this study found that the sanitization frequency of the water reservoir significantly influenced the quality of the drinking water: a higher frequency of sanitization (more than four times per year) implied a higher drinking water quality. The results indicate that water reservoirs and water towers should be sanitized at least twice annually to achieve safe drinking water.
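The analysis step (performed with SPSS in the study) amounts to correlation and simple regression; here is a small sketch with invented numbers, not the survey data:

```python
# Correlate cleaning frequency with a water-quality measure and fit a line.
from scipy.stats import linregress

cleanings_per_year = [1, 1, 2, 2, 3, 4, 4, 6]
turbidity_ntu      = [2.9, 2.6, 2.1, 2.3, 1.7, 1.2, 1.4, 0.9]   # hypothetical

fit = linregress(cleanings_per_year, turbidity_ntu)
print(f"R = {fit.rvalue:.2f}, slope = {fit.slope:.2f} NTU per extra cleaning")
```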

Advanced Neural Network Learning Applied to Pulping Modeling

This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly eliminate the burden of having to specify the structure of the model completely. Three-layer feed-forward neural networks, trained with Preconditioned Conjugate Gradient (PCG) methods, were applied to the pulping problem in this investigation. Preconditioning is a method to improve convergence by lowering the condition number and increasing the clustering of the eigenvalues. The idea is to solve the modified problem M⁻¹Ax = M⁻¹b, where M is a positive-definite preconditioner that is closely related to A. We focused mainly on PCG-based training methods originating from optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves update (PCGF), Preconditioned Conjugate Gradient with Polak-Ribiere update (PCGP), and Preconditioned Conjugate Gradient with Powell-Beale restarts (PCGB). In the simulations, the behavior of the PCG methods proved to be robust against phenomena such as oscillations due to large step sizes.
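For illustration, a minimal NumPy sketch of preconditioned conjugate gradient iteration on M⁻¹Ax = M⁻¹b with a simple Jacobi (diagonal) preconditioner; the paper applies PCG variants to network training rather than to a fixed linear system, and the matrix and vector below are hypothetical:

```python
# Jacobi-preconditioned conjugate gradient for a symmetric positive-definite A.
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=100):
    M_inv = 1.0 / np.diag(A)              # Jacobi preconditioner M = diag(A)
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r                         # preconditioned residual
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = M_inv * r_new
        beta = (r_new @ z_new) / (r @ z)  # Fletcher-Reeves-style update
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(pcg(A, b))                          # ~ [0.0909, 0.6364]
```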

Properties of Carrageenan Extracted from Eucheuma cottonii, Indonesia

The effect of the extraction solvent on the properties of carrageenan from Eucheuma cottonii was studied. Distilled water and KOH solutions (concentrations of 0.1-0.5 N) were used as solvents. The extraction was carried out in a water bath equipped with a stirrer at a constant speed of 275 rpm, with a constant ratio of seaweed weight to solvent volume (1:50 g/mL), at 86 °C for 45 minutes. The extract was then precipitated in 3 volumes of 90% ethanol and oven-dried at 60 °C. Based on the experimental data, alkali significantly influenced the yield and properties of the extracted carrageenan. The extracted carrageenan was found to have essentially identical FTIR spectra to reference samples of kappa-carrageenan. Increasing the KOH concentration led to carrageenan with lower sulfate content and intrinsic viscosity, while the gel strength increased with increasing KOH concentration. The decrease in intrinsic viscosity indicates that polymer degradation occurs during alkali extraction.

Extracting Single Trial Visual Evoked Potentials using Selective Eigen-Rate Principal Components

In single-trial analysis, when using Principal Component Analysis (PCA) to extract Visual Evoked Potential (VEP) signals, the selection of principal components (PCs) is an important issue. We propose a new method that selects only the appropriate PCs, denoted selective eigen-rate (SER). In this method, the VEP is reconstructed based on the rate of the eigenvalues of the PCs. When the technique is applied to emulated VEP signals with added background electroencephalogram (EEG), with a focus on extracting the evoked P3 parameter, it is found to be feasible. The improvement in signal-to-noise ratio (SNR) is superior to that of two other existing methods of PC selection: Kaiser (KSR) and Residual Power (RP). Although another PC selection method, Spectral Power Ratio (SPR), gives a comparable SNR under high noise (i.e., strong EEG), SER gives more impressive results in such cases. Next, we applied the SER method to real VEP signals to analyse the P3 responses for matched and non-matched stimuli. The P3 parameters extracted by the proposed SER method showed a higher P3 response for the matched stimulus, which conforms to existing neuroscience knowledge. Single-trial PCA using the KSR and RP methods failed to indicate any difference between the stimuli.
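The following is an illustrative sketch, not the authors' exact SER criterion: principal components are ranked by the rate each eigenvalue contributes to the total, components above a threshold rate are retained, and the trials are reconstructed from them. The emulated signals and threshold are hypothetical:

```python
# Select PCs by eigenvalue rate and reconstruct single-trial epochs.
import numpy as np

def reconstruct_by_eigen_rate(trials, rate_threshold=0.1):
    """trials: (n_trials, n_samples) matrix of VEP+EEG epochs."""
    X = trials - trials.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    rates = eigvals / eigvals.sum()            # eigen-rate of each PC
    keep = rates >= rate_threshold             # retain only high-rate PCs
    scores = X @ eigvecs[:, keep]
    return scores @ eigvecs[:, keep].T + trials.mean(axis=0)

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 256)
vep = np.exp(-((t - 0.3) / 0.05) ** 2)            # emulated P3-like component
trials = vep + 0.8 * rng.normal(size=(40, 256))   # EEG as background noise
print(reconstruct_by_eigen_rate(trials).shape)    # (40, 256)
```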

Health Care Ethics in Vulnerable Populations: Clinical Research through the Patient's Eyes

Chronic conditions carry with them strong emotions and often lead to charged relationships between patients and their health providers and, by extension, between patients and health researchers. Persons are both autonomous and relational, and a purely cognitive model of autonomy neglects the social and relational basis of chronic illness. Ensuring genuine informed consent in research requires a thorough understanding of how participants perceive a study and their reasons for participation. Surveys may not capture the complexities of reasoning that underlie study participation. Contradictory reasons for participation, for instance an initial claim of altruism as the rationale followed by a subsequent claim of personal benefit (therapeutic misconception), affect the quality of informed consent. Individuals apply principles through the filter of personal values and lived experience. Authentic autonomy, and hence authentic consent to research, occurs within the context of patients' unique life narratives and illness experiences.

Control Technology for a Daily Load-following Operation in a Nuclear Power Plant

In Korea, the technology for the load-following operation of a nuclear power plant has been under development. An automatic controller able to control the reactor average temperature and the axial power distribution was developed. It consists of an identification algorithm and a model predictive controller: the former transforms the nuclear reactor status into numerical model parameters, and the latter uses them to generate manipulated values, such as the positions of two kinds of control rods. With this automatic controller, the performance of a daily load-following operation was evaluated. As a result, the automatic controller, using the generated model parameters of the nuclear reactor, controlled the nuclear reactor average temperature and the axial power distribution to the desired targets during a daily load-following operation.
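As a heavily simplified illustration of the model-predictive step (our sketch, not the plant controller): given identified parameters of a hypothetical first-order model, the controller solves a short-horizon tracking problem and applies only the first manipulated value. The real controller handles two controlled variables and rod constraints:

```python
# Receding-horizon sketch: track a setpoint with an identified first-order model
# x[k+1] = a*x[k] + b*u[k]; solve for the input sequence, apply the first move.
import numpy as np

def mpc_first_move(a, b, x0, target, horizon=5):
    # predictions: x = f + G u, with f the free response and G the input map
    f = np.array([a ** k * x0 for k in range(1, horizon + 1)])
    G = np.array([[a ** (k - 1 - j) * b if j < k else 0.0
                   for j in range(horizon)] for k in range(1, horizon + 1)])
    u, *_ = np.linalg.lstsq(G, target - f, rcond=None)
    return u[0]                            # receding horizon: first move only

a, b = 0.9, 0.2                            # hypothetical identified parameters
x, target = 305.0, np.full(5, 300.0)       # e.g. average temperature setpoint
for _ in range(3):
    u = mpc_first_move(a, b, x, target)
    x = a * x + b * u
    print(f"u = {u:+.2f}, temperature -> {x:.2f}")
```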