Suppressing Turbine-Generator Supersynchronous Torsional Torques Using Virtual Inertia

The single-pole switching scheme is widely used in extra-high-voltage systems. However, the substantial negative-sequence current injected into the turbine-generators imposes an electromagnetic (E/M) torque component at double the system frequency during the dead time (between single-pole clearing and line reclosing). This can induce supersynchronous resonance (SPSR) torque amplification in the low-pressure turbine-generator blades and even lead to fatigue damage. This paper proposes the design of a mechanical filter (MF) with a natural frequency close to double the system frequency. The simulation results show that such a filter not only successfully damps the resonant effect but is also feasible and compact.
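
As a rough illustration of the tuning idea (not the authors' filter design), the sketch below sizes a one-degree-of-freedom mass-spring filter so that its undamped natural frequency lands at twice a 60 Hz system frequency; the mass value is a hypothetical placeholder.

```python
import math

F_SYSTEM = 60.0                      # grid frequency in Hz
f_target = 2.0 * F_SYSTEM            # double system frequency (120 Hz)

m = 50.0                             # hypothetical filter mass in kg
# Natural frequency of a mass-spring filter: f_n = sqrt(k / m) / (2*pi).
# Solve for the stiffness k that places f_n at the target frequency.
k = m * (2.0 * math.pi * f_target) ** 2

f_n = math.sqrt(k / m) / (2.0 * math.pi)
print(f"required stiffness k = {k:.3e} N/m, giving f_n = {f_n:.1f} Hz")
```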

Advanced Neural Network Learning Applied to Pulping Modeling

This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear phenomena underlying the data and to partly eliminate the burden of having to specify the structure of the model completely. Two different types of neural networks were applied to the pulping problem. Three-layer feed-forward neural networks trained with Preconditioned Conjugate Gradient (PCG) methods were used in this investigation. Preconditioning improves convergence by lowering the condition number and increasing the clustering of the eigenvalues: the idea is to solve the modified problem M^{-1}Ax = M^{-1}b, where M is a positive-definite preconditioner that is closely related to A. We focused mainly on PCG-based training methods originating from optimization theory, namely the Preconditioned Conjugate Gradient with Fletcher-Reeves update (PCGF), with Polak-Ribiere update (PCGP), and with Powell-Beale restarts (PCGB). In the simulations, the PCG methods proved robust against phenomena such as oscillations due to large step sizes.
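
For context, the preconditioned system M^{-1}Ax = M^{-1}b is typically solved iteratively. The sketch below is a minimal PCG loop with a simple Jacobi (diagonal) preconditioner, not the authors' training code; the Fletcher-Reeves-style beta update mirrors the PCGF variant named above.

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, max_iter=200):
    """Minimal preconditioned conjugate gradient sketch with a Jacobi
    (diagonal) preconditioner M = diag(A); A must be symmetric
    positive-definite."""
    x = np.zeros_like(b)
    r = b - A @ x                      # residual
    Minv = 1.0 / np.diag(A)            # apply M^{-1} cheaply
    z = Minv * r                       # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)          # step size along search direction p
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        beta = rz_new / rz             # Fletcher-Reeves-style update
        p = z + beta * p
        rz = rz_new
    return x

# Usage: solve a small SPD system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(jacobi_pcg(A, b))                # ~ [0.0909, 0.6364]
```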

Advance in Monitoring and Process Control of Surface Roughness

This paper presents an advance in the monitoring and process control of surface roughness on CNC machines for the turning and milling processes. An integration of in-process monitoring and process control of the surface roughness during machining, based on the cutting force ratio, is proposed and developed. The author's previously developed surface roughness models for turning and milling, which take as inputs the cutting speed, the feed rate, the tool nose radius, the depth of cut, the rake angle, and the cutting force ratio, are adopted to predict the in-process surface roughness. The cutting force ratios obtained from turning and milling are utilized to estimate the in-process surface roughness. Dynamometers are installed on the tool turret of the CNC turning machine and on the table of a 5-axis machining center to monitor the cutting forces. In-process control of the surface roughness is developed and proposed to control the predicted surface roughness. Cutting tests prove that the proposed integrated system of in-process monitoring and process control can be used to check the surface roughness during cutting by utilizing the cutting force ratio.
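
The abstract does not give the functional form of the roughness models; as a purely illustrative sketch, the code below uses a power-law form common in empirical surface roughness work. All coefficients and exponents are made up, and the rake angle enters through an exponential so the expression stays defined for negative angles.

```python
import math

def predicted_roughness(v, f, r, d, gamma, force_ratio,
                        k=1.0, a=-0.2, b=0.8, c=-0.3, e=0.1, g=-0.05, h=0.5):
    """Hypothetical power-law surface roughness model (micrometres).
    v: cutting speed, f: feed rate, r: tool nose radius, d: depth of cut,
    gamma: rake angle, force_ratio: measured cutting force ratio.
    The coefficients are illustrative placeholders, not the paper's values."""
    return (k * v**a * f**b * r**c * d**e
              * math.exp(g * gamma) * force_ratio**h)

# Example call with plausible turning parameters.
print(predicted_roughness(v=150, f=0.2, r=0.8, d=1.0,
                          gamma=6.0, force_ratio=0.6))
```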

Improved Robust Stability and Stabilization Conditions for Discrete-Time Delayed Systems

The problem of robust stability and robust stabilization for a class of discrete-time uncertain systems with time delay is investigated. Based on the Tchebychev inequality and by constructing a new augmented Lyapunov function, some improved sufficient conditions ensuring exponential stability and stabilization are established. These conditions are expressed in the form of linear matrix inequalities (LMIs), whose feasibility can easily be checked using the Matlab LMI Toolbox. Compared with some previous results in the literature, the new criteria are less conservative. Two numerical examples are provided to demonstrate the improvement and effectiveness of the proposed method.
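
Outside Matlab, LMI feasibility of this kind can be checked with a convex solver. The sketch below tests the basic delay-free discrete-time Lyapunov LMI P > 0, A^T P A - P < 0 with cvxpy, a far simpler condition than the paper's augmented delayed-system LMIs; the example matrix A is made up.

```python
import numpy as np
import cvxpy as cp

A = np.array([[0.5, 0.1],
              [0.0, 0.8]])            # stable example system x(k+1) = A x(k)
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                   # P positive definite
               A.T @ P @ A - P << -eps * np.eye(n)]    # Lyapunov decrease

prob = cp.Problem(cp.Minimize(0), constraints)          # pure feasibility test
prob.solve()
print("LMI feasible:", prob.status == cp.OPTIMAL)
```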

Extracting Single Trial Visual Evoked Potentials using Selective Eigen-Rate Principal Components

In single trial analysis, when using Principal Component Analysis (PCA) to extract Visual Evoked Potential (VEP) signals, the selection of principal components (PCs) is an important issue. We propose a new method here that selects only the appropriate PCs, which we denote selective eigen-rate (SER). In the method, the VEP is reconstructed based on the rate of the eigenvalues of the PCs. When this technique is applied to emulated VEP signals with added background electroencephalogram (EEG), with a focus on extracting the evoked P3 parameter, it is found to be feasible. The improvement in signal-to-noise ratio (SNR) is superior to two other existing methods of PC selection: Kaiser (KSR) and Residual Power (RP). Although another PC selection method, Spectral Power Ratio (SPR), gives a comparable SNR at high noise levels (i.e., strong background EEG), SER gives more impressive results in such cases. Next, we applied the SER method to real VEP signals to analyse the P3 responses for matched and non-matched stimuli. The P3 parameters extracted through our proposed SER method showed a higher P3 response for the matched stimulus, which conforms to existing neuroscience knowledge. Single trial PCA using the KSR and RP methods failed to indicate any difference between the stimuli.
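
The abstract does not spell out the SER selection rule in detail; the sketch below is a minimal reading of it, assuming the rule keeps PCs whose eigenvalue share exceeds a threshold. The 0.1 threshold and the synthetic P3-like data are placeholders.

```python
import numpy as np

def ser_reconstruct(trials, rate_threshold=0.1):
    """trials: (n_trials, n_samples) single-trial epochs.
    Keep PCs whose eigenvalue share (eigen-rate) exceeds the threshold;
    the 0.1 value is a hypothetical placeholder, not the paper's rule."""
    mean = trials.mean(axis=0)
    X = trials - mean                            # center across trials
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)           # ascending eigenvalues
    rates = evals / evals.sum()                  # eigen-rate of each PC
    W = evecs[:, rates > rate_threshold]         # dominant PCs only
    return (X @ W) @ W.T + mean                  # project and reconstruct

# Emulated P3-like component with trial-to-trial amplitude variability
# buried in background "EEG" noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
p3 = np.exp(-((t - 0.3) / 0.05) ** 2)
amps = 1 + rng.standard_normal((40, 1))
trials = amps * p3 + 0.3 * rng.standard_normal((40, 200))
clean = ser_reconstruct(trials)
print(clean.shape)                                # (40, 200)
```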

A New Edit Distance Method for Finding Similarity in DNA Sequences

The P-Bigram method is a string comparison method based on an internal two-character similarity measure. The edit distance between two strings is the minimal number of elementary editing operations required to transform one string into the other; the elementary editing operations are deletion, insertion, and substitution of two characters. In this paper, we apply the P-Bigram method to solve the similarity problem in DNA sequences. The method provides an efficient algorithm that locates all minimum operations in a string. We implemented the algorithm and found that our program calculates a smaller distance than the single-character approach. We develop the P-Bigram edit distance, show how it measures similarity, and implement it using dynamic programming. The performance of the proposed approach is evaluated using the number of edits and percentage-similarity measures.
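
For reference, the single-character edit distance that P-Bigram generalizes is the classic dynamic program below; the bigram-specific operations are not detailed in the abstract, so this sketch shows only the standard Levenshtein recurrence on DNA strings.

```python
def edit_distance(s, t):
    """Classic Levenshtein distance via dynamic programming:
    dp[i][j] = minimum edits to turn s[:i] into t[:j]."""
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                          # delete all of s[:i]
    for j in range(n + 1):
        dp[0][j] = j                          # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution/match
    return dp[m][n]

print(edit_distance("ACGTAG", "ACTTAG"))      # 1 (one substitution)
```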

Reduction of Chlorine Dioxide in Paper Bleaching Using Peroxide Activation

Around the world, pulp and paper industries are among the biggest production plants, with environmental pollution as the biggest challenge facing pulp manufacturing operations. The concern among these industries is to produce a high volume of paper of high quality standard at low cost without affecting the environment. The results obtained from this bleaching study show that peroxide activation is an effective method of reducing the total applied charge of chlorine dioxide, which is harmful to the environment, and that softwood and hardwood Kraft pulps respond linearly to the peroxide treatments. During the bleaching process, the production plant produces chlorine compounds. In the trial stages, the chlorine dioxide charge was reduced by 3 kg/ton, lowering the pulp brightness from 65% ISO to 60% ISO, and the dosing point was returned to the E-stage charges by pre-treating the Kraft pulps with hydrogen peroxide. The pulp and paper industry has developed elemental chlorine free (ECF) and totally chlorine free (TCF) bleaching in its quest to be environmentally friendly, and it has been looking at ways to turn the ECF process into a TCF process while remaining competitive. This prompted the research to investigate the capability of hydrogen peroxide as a catalyst to reduce chlorine dioxide.

Industrial Development, Environment and Occupational Problems: The Case of Iran

There are three distinct stages in the evolution of economic thought. 1. In the first stage, the major concern was to accelerate economic growth and increase the availability of material goods, especially in developing economies with very low living standards, because poverty eradication required faster economic growth. 2. In the second stage, economists drew a distinction between growth and development: development was seen as going beyond economic growth, bringing certain changes in the structure of the economy and a more equitable distribution of the benefits of growth, with growth becoming automatic and sustained. 3. The third stage has now been reached. Our concern is now with "sustainable development", that is, development not only for the present but also for the future. Thus the focus has changed from "sustained growth" to "sustained development", which brings to the fore the long-term relationship between ecology and economic development. Since its creation in 1972, UNEP has worked for development without destruction, for environmentally sound and sustained development. It was realised that the environment cannot be viewed in a vacuum; it is not separate from development, nor is it competing with it. UNEP suggested the integration of the environment with development, whereby ecological factors enter development planning, socio-economic policies, cost-benefit analysis, trade, technology transfer, waste management, education and other specific areas. Industrialisation has contributed to the growth of the economies of several countries. It has improved the standards of living of their people and provided benefits to society. It has also created, in the process, great environmental problems such as climate change, forest destruction and denudation, soil erosion and desertification. On the other hand, industry has provided jobs and improved the prospects of wealth for the industrialists, while working-class communities have simply had to put up with high levels of pollution in order to keep their jobs and their income. The environmental problem has many roots; they may lie in the political, economic, cultural and technological conditions of modern society. The experts concede that industrial growth lies somewhere close to the heart of the matter. Therefore, the objective of this paper is not to document all the roots of the environmental crisis but rather to discuss the effects of industrial growth and development. We have come to the conclusion that although public intervention is often unnecessary to ensure that perfectly competitive markets function in society's best interests, such intervention is necessary when firms or consumers pollute.

Thermo-mechanical Behavior of Pressure Tube of Indian PHWR at 20 bar Pressure

In a nuclear reactor, a loss of coolant accident (LOCA) covers a wide range of postulated damage or rupture of pipes in the heat transport piping system. In the case of a LOCA with or without failure of the emergency core cooling system in a Pressurised Heavy Water Reactor, the pressure tube (PT) temperature can rise significantly due to fuel heat-up and a gross mismatch between heat generation and heat removal in the affected channel. The extent and nature of the deformation is important from the reactor safety point of view. Experimental set-ups have been designed and fabricated to simulate the ballooning (radial deformation) of the PT for 220 MWe IPHWRs. Experiments were conducted on voided PTs, first with the calandria tube (CT) covered with ceramic fibers and then with the CT submerged in water. In both experiments, it is observed that ballooning initiates at a temperature of around 665 °C and that complete contact between the PT and the CT occurs at approximately 700 °C. The strain rate is found to be 0.116% per second. The structural integrity of the PT is retained (no breach) in all the experiments. The PT heat-up is found to be arrested after contact between the PT and the CT, establishing that the moderator acts as an efficient heat sink for IPHWRs.

Closing the Achievement Gap Within Reading and Mathematics Classrooms by Fostering Hispanic Students' Educational Resilience

While many studies have examined the achievement gap between groups of students in school districts, few studies have utilized resilience research to investigate achievement gaps within classrooms. This paper summarizes and discusses some recent studies by Waxman, Padrón, and their colleagues, in which they examined learning environment differences between resilient and nonresilient students in reading and mathematics classrooms. The classes consisted of predominantly Hispanic elementary school students from low-income families. These studies all incorporated learning environment questionnaires and systematic observation methods. Significant differences were found between resilient and nonresilient students in their classroom learning environments and classroom behaviors. The observation results indicate that the amount and quality of teacher-student academic interaction are two of the most influential variables promoting student outcomes. This paper concludes by suggesting the following teacher practices to promote resiliency in schools: (a) using feedback from classroom observation and learning environment measures, (b) employing explicit teaching practices, and (c) understanding students on a social and personal level.

Numerical Analysis and Experimental Validation of Detector Pressure Housing Subject to HPHT

Reservoirs with high pressures and temperatures (HPHT) that were considered atypical in the past are now frequent targets for exploration. For downhole oilfield drilling tools and components, temperature and pressure affect the mechanical strength. To address this issue, a finite element analysis (FEA) at 206.84 MPa (30 ksi) pressure and 165 °C has been performed on the pressure housing of the measurement-while-drilling/logging-while-drilling (MWD/LWD) density tool. The density tool is an MWD/LWD sensor that measures the density of the formation; the pressure housing is one of its components and is positioned in the tool. The FEA results are compared with an experimental test performed on the pressure housing of the density tool. The results show a close match between the numerical results and the experimental test. This FEA model can be used for extreme-HPHT and ultra-HPHT analyses and/or optimal design changes.

Increasing Energy Efficiency Based on Ancient Persian Architectural Patterns in Desert Regions (Case Study of Traditional Houses in Kashan)

In general, architecture means the art of creating space: a comprehensive and complete body created by creative and purposeful thought to respond to human needs. Professionally, architecture is the art of designing and comprehensively planning physical spaces created for human productivity. The purpose of architectural design is to respond to human needs as expressed in physical form. In responding to their needs, humans have always sought comfort. Throughout the history of human civilization, this relative comfort has been inspired by nature, assimilating natural facilities and achievements into artificial patterns based on nature, so that this level of comfort is achieved through the invention of these factors. All physical factors, such as regional, social and economic factors, are made available to humans in order to achieve a specific goal: an ideal architecture that responds to functional needs, considers aesthetics and elemental principles, and pays attention to residents' comfort. In this study, Persian architecture is investigated with respect to exploiting and transforming natural energies into the energies required by architectural spaces, rather than importing fuel products, utilities, etc., in order to achieve a relative comfort level. The study of the structural and physical specialties of traditional houses in the desert regions and the Central Plateau of Iran gave us the opportunity to become more familiar with the important features of energy productivity in the architectural body of traditional houses in these regions, especially the traditional houses of Kashan, in order to use these principles to create modern architecture in these regions.

Off-Line Handwritten Thai Character Recognition Using the Ant-Miner Algorithm

Much research into handwritten Thai character recognition has been proposed, such as comparing the heads of characters, fuzzy logic, and structure trees. This paper presents a handwritten Thai character recognition system based on the Ant-Miner algorithm (data mining based on ant colony optimization). Zoning is initially used to partition each character. Then three distinct features (also called attributes) of each character in each zone are extracted: head zone, end point, and feature code. All attributes are used to construct the classification rules with the Ant-Miner algorithm in order to classify 112 Thai characters. For this experiment, the Ant-Miner algorithm is adapted with a small change to increase the recognition rate. The result of this experiment is a 97% recognition rate on the training set (11,200 characters) and an 82.7% recognition rate on the unseen test data (22,400 characters).
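
The zoning step can be pictured as partitioning the character bitmap into a grid and extracting one attribute per zone. The sketch below uses ink density as a stand-in feature, since the exact head-zone, end-point, and feature-code encodings are not given in the abstract.

```python
import numpy as np

def zone_densities(bitmap, rows=3, cols=3):
    """Split a binary character bitmap into rows x cols zones and
    return the ink density of each zone (illustrative feature only)."""
    h, w = bitmap.shape
    feats = []
    for i in range(rows):
        for j in range(cols):
            zone = bitmap[i * h // rows:(i + 1) * h // rows,
                          j * w // cols:(j + 1) * w // cols]
            feats.append(zone.mean())      # fraction of "on" pixels
    return np.array(feats)

# Usage on a random stand-in for a 30x30 thresholded character image.
char = (np.random.default_rng(1).random((30, 30)) > 0.7).astype(float)
print(zone_densities(char))                # 9 per-zone density attributes
```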

Using Malolactic Fermentation with an Acid- and Ethanol-Adapted Oenococcus oeni Strain to Improve the Quality of Wine from the Champs Bourcin Grape in Sapa, Lao Cai

The Champs Bourcin black grape, originating from Aquitaine, France, and planted in Sapa, Lao Cai province, exhibited high total acidity (11.72 g/L). After 9 days of alcoholic fermentation at 25 °C using the Saccharomyces cerevisiae UP3OY5 strain, the ethanol concentration of the wine was 11.5% v/v; however, a sharply sour taste was found. Malolactic fermentation (MLF) was carried out by the Oenococcus oeni ATCC BAA-1163 strain, which had been pre-adapted to acid (pH 3-4) and ethanol (8-12% v/v) conditions. We obtained the highest viability (83.2%) upon malolactic fermentation after 5 days at 22 °C with early-stationary-phase O. oeni cells pre-adapted to pH 3.5 and 8% v/v ethanol in MRS medium. The malic acid content of the wine decreased from 5.82 g/L to 0.02 g/L after MLF (21 days at 22 °C). The sensory quality of the wine was significantly improved.

Effects of a Nectandra membranacea Extract on the Labeling of Blood Constituents with Technetium-99m and on the Morphology of Red Blood Cells

The aim of this in vitro study was to evaluate the possible interference of a Nectandra membranacea extract (i) on the labeling of blood cells (BC), (ii) on the labeling process of BC and plasma (P) proteins with technetium-99m (Tc-99m), and (iii) on the morphology of red blood cells (RBC). Blood samples were incubated with a Nectandra membranacea crude extract and with stannous chloride; Tc-99m (sodium pertechnetate) was added, and soluble (SF) and insoluble (IF) fractions were isolated. Morphometry studies were performed on blood samples incubated with the Nectandra membranacea extract. The results show that the Nectandra membranacea extract does not promote significant alteration of the labeling of BC, IF-P and IF-BC. The Nectandra membranacea extract was able to alter erythrocyte membrane morphology, but these morphological changes were not capable of interfering with the labeling of blood constituents with Tc-99m.

Near-Field Robust Adaptive Beamforming Based on Worst-Case Performance Optimization

The performance of adaptive beamforming degrades substantially in the presence of steering vector mismatches. This degradation is especially severe in the near field, since the 3-dimensional source location is more difficult to estimate than the 2-dimensional direction of arrival of the far-field case. As a solution, a novel approach to near-field robust adaptive beamforming (RABF) is proposed in this paper. It is a natural extension of traditional far-field RABF and belongs to the class of diagonal loading approaches, with the loading level determined by worst-case performance optimization. However, unlike methods that solve for the optimal loading by iteration, it offers a simple closed-form solution after some approximations, and consequently the optimal weight vector can be expressed in closed form. Besides simplicity and low computational cost, the proposed approach reveals how different factors affect the optimal loading as well as the weight vector. Its excellent performance in the near field is confirmed via a number of numerical examples.
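
The closed-form loading level is the paper's contribution and is not reproduced in the abstract; the sketch below shows only the generic diagonal loading step the method builds on, with a hand-picked eps standing in for the worst-case-optimal value, and a made-up steering vector.

```python
import numpy as np

def diagonally_loaded_mvdr(R, a, eps=0.1):
    """MVDR/Capon weights with diagonal loading:
    w = (R + eps*I)^{-1} a / (a^H (R + eps*I)^{-1} a).
    eps is a hand-picked loading level, a placeholder for the
    worst-case-optimal value derived in the paper."""
    Rl = R + eps * np.eye(R.shape[0])
    Ri_a = np.linalg.solve(Rl, a)            # (R + eps*I)^{-1} a
    return Ri_a / (a.conj() @ Ri_a)

# Usage: 4-sensor array, sample covariance from noisy snapshots.
rng = np.random.default_rng(0)
snap = rng.standard_normal((4, 100)) + 1j * rng.standard_normal((4, 100))
R = snap @ snap.conj().T / 100
a = np.exp(1j * np.pi * np.arange(4) * 0.3)  # presumed steering vector
w = diagonally_loaded_mvdr(R, a)
print(np.abs(w.conj() @ a))                   # distortionless response ~ 1
```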

An Innovative Intermittent Algorithm for Networks-on-Chip (NOC)

Every day, human life encounters new equipment that is more automatic and more capable, so the need for faster processors does not seem to end. Despite new architectures and higher frequencies, a single processor is not adequate for many applications. Parallel processing and networks are earlier solutions to this problem. The newer solution, putting a network of resources on a chip, is called a network-on-chip (NOC). The most usual topology for NOC is the mesh topology. There are several routing algorithms suitable for this topology, such as XY, fully adaptive, etc. In this paper we suggest a new algorithm named Intermittent X,Y (IX/Y). We have implemented the new algorithm in a simulation environment to compare its delay and power consumption with those of older algorithms.
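
For background, the baseline XY algorithm that IX/Y departs from routes packets deterministically, first along the X dimension until the column matches and then along Y. A minimal sketch follows; this is plain XY, not the IX/Y variant, whose alternation rule the abstract does not detail.

```python
def xy_route(src, dst):
    """Deterministic XY routing on a 2D mesh NOC: move along X first,
    then along Y. Returns the list of hops from src to dst."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                      # X dimension first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                      # then Y dimension
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 3)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (2, 3)]
```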

Knowledge Representation and Retrieval in Design Project Memory

Knowledge sharing in general, and contextual access to knowledge in particular, still represent a key challenge in the knowledge management framework. Researchers in the semantic web and human-machine interface fields study techniques to enhance this access. For instance, in the semantic web, information retrieval is based on domain ontologies; in human-machine interfaces, keeping track of the user's activity provides some elements of the context that can guide access to information. We suggest an approach based on these two key guidelines, whilst avoiding some of their weaknesses. The approach permits a representation of both the context and the design rationale of a project for efficient access to knowledge. In fact, the method consists of an information retrieval environment that, on the one hand, can infer knowledge, modeled as a semantic network, and on the other hand, is based on the context and the objectives of a specific activity (the design). The environment we defined can also be used to gather similar project elements in order to build classifications of the tasks, problems, arguments, etc. produced in a company. These classifications can show the evolution of design strategies in the company.

Image Compression with Back-Propagation Neural Network using Cumulative Distribution Function

Image compression using artificial neural networks is a topic where research is being carried out in various directions toward achieving a generalized and economical network. Feed-forward networks using the back-propagation algorithm, adopting the method of steepest descent for error minimization, are popular, widely adopted, and directly applied to image compression. Various research works are directed toward achieving quick convergence of the network without loss of quality in the restored image. In general, the images used for compression are of different types, such as dark images, high-intensity images, etc. When these images are compressed using a back-propagation network, it takes a long time to converge, because the image may contain a number of distinct gray levels with narrow differences from their neighborhood pixels. If the gray levels of the pixels in an image and their neighbors are mapped in such a way that the difference in the gray levels of the neighbors with the pixel is minimized, then both the compression ratio and the convergence of the network can be improved. To achieve this, a cumulative distribution function is estimated for the image and used to map the image pixels. When the mapped image pixels are used, the back-propagation neural network yields a high compression ratio and converges quickly.
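
Read this way, the CDF mapping step resembles histogram equalization of the gray levels before training; a minimal sketch under that assumption:

```python
import numpy as np

def cdf_map(image):
    """Map pixel gray levels through the image's empirical CDF
    (essentially histogram equalization), spreading neighboring gray
    levels more evenly before feeding the network."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = np.cumsum(hist) / image.size            # empirical CDF in [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)    # lookup table per gray level
    return lut[image]

# Usage on a random stand-in for a narrow-range (dark) 8-bit image.
img = np.random.default_rng(0).integers(0, 64, (8, 8), dtype=np.uint8)
print(cdf_map(img))                                # remapped toward 0..255
```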

Harmonic Elimination of Hybrid Multilevel Inverters Using Particle Swarm Optimization

This paper presents harmonic elimination for hybrid multilevel inverters (HMI), which can increase the number of output voltage levels. Total harmonic distortion (THD) is one of the most important performance indices. Because the HMI has many output levels, eliminating the undesired individual harmonics and minimizing THD leads to a set of nonlinear equations with numerous unknown variables. The optimized harmonic stepped waveform (OHSW) technique is the conventional method for solving the switching angles, but it becomes very complicated as levels are added. Artificial intelligence techniques are therefore considered for solving this problem. This paper presents a particle swarm optimization (PSO) technique for solving the switching angles of a 15-level hybrid multilevel inverter to obtain minimum THD and eliminate the undesired individual harmonics. Consequently, many variables can be handled and numerous harmonics eliminated. Both advantages, the high level count of the inverter and PSO, are used as powerful tools for harmonic elimination.
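
As a rough illustration (not the paper's formulation), the sketch below runs a plain global-best PSO to minimize the THD of an idealized equal-step staircase waveform with seven switching angles, the count a 15-level inverter would use under quarter-wave symmetry; the hybrid DC-source specifics and the individual-harmonic elimination constraints are omitted, and all PSO constants are hand-picked.

```python
import numpy as np

S = 7                                    # switching angles of a 15-level HMI
HARMONICS = np.arange(3, 32, 2)          # odd non-fundamental harmonics

def thd(theta):
    """THD of an equal-step staircase waveform with quarter-wave symmetry:
    the n-th harmonic is proportional to sum(cos(n * theta_i)) / n."""
    h1 = np.cos(theta).sum()
    hn = np.array([np.cos(n * theta).sum() / n for n in HARMONICS])
    return np.sqrt((hn ** 2).sum()) / (abs(h1) + 1e-12)

# Plain global-best PSO over the angle vector.
rng = np.random.default_rng(0)
n_particles, iters, w, c1, c2 = 30, 300, 0.7, 1.5, 1.5
x = rng.uniform(0, np.pi / 2, (n_particles, S))       # positions (angles)
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([thd(p) for p in x])
gbest = pbest[pbest_f.argmin()]

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, S))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0, np.pi / 2)                  # keep angles feasible
    f = np.array([thd(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()]

print("best THD found:", thd(gbest))
```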