Robust Sensorless Speed Control of Induction Motor with DTFC and Fuzzy Speed Regulator

Recent developments in soft computing techniques, power electronic switches and low-cost computational hardware have made it possible to design and implement sophisticated control strategies for sensorless speed control of AC motor drives. Such an attempt has been made in this work for sensorless speed control of an induction motor (IM) by means of Direct Torque Fuzzy Control (DTFC), a PI-type fuzzy speed regulator and an MRAS speed estimator, a strategy that is inherently nonlinear. Direct torque control is known to produce a quick and robust response in AC drive systems. However, during steady state, torque, flux and current ripple occur. The performance of conventional DTC with a PI speed regulator can therefore be improved by applying fuzzy logic techniques. Important design issues, including the space-vector-modulated (SVM) three-phase voltage source inverter, the DTFC design, the generation of the reference torque using the PI-type fuzzy speed regulator and the sensorless speed estimator, have been resolved. The proposed scheme is validated through extensive numerical simulations in MATLAB. The simulation results indicate that sensorless speed control of the IM with DTFC and the PI-type fuzzy speed regulator provides satisfactory dynamic and static performance compared to conventional DTC with a PI speed regulator.
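To make the reference-torque generation concrete, the sketch below shows a minimal PI-type fuzzy speed regulator of the usual two-input form (speed error and its change as inputs, a torque increment as output, accumulated like a PI controller). The membership functions, rule table and scaling gains here are illustrative assumptions, not the tuning used in this work.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

class FuzzyPISpeedRegulator:
    """PI-type fuzzy regulator: inputs are the speed error e and its change de,
    output is an increment of the reference torque (accumulated like a PI)."""
    def __init__(self, ke=1.0, kde=0.1, ku=2.0, t_max=20.0):
        self.ke, self.kde, self.ku, self.t_max = ke, kde, ku, t_max
        self.prev_e = 0.0
        self.torque_ref = 0.0
        # linguistic terms N, Z, P on the normalized universe [-1, 1] (assumed)
        self.mf = {'N': (-2.0, -1.0, 0.0), 'Z': (-1.0, 0.0, 1.0), 'P': (0.0, 1.0, 2.0)}
        # rule table: (e term, de term) -> output singleton on [-1, 1] (assumed)
        self.rules = {('N', 'N'): -1.0, ('N', 'Z'): -0.5, ('N', 'P'): 0.0,
                      ('Z', 'N'): -0.5, ('Z', 'Z'):  0.0, ('Z', 'P'): 0.5,
                      ('P', 'N'):  0.0, ('P', 'Z'):  0.5, ('P', 'P'): 1.0}

    def step(self, w_ref, w_meas):
        e_raw = w_ref - w_meas
        e = float(np.clip(self.ke * e_raw, -1.0, 1.0))
        de = float(np.clip(self.kde * (e_raw - self.prev_e), -1.0, 1.0))
        self.prev_e = e_raw
        num = den = 0.0
        for (te, tde), out in self.rules.items():
            w = min(tri(e, *self.mf[te]), tri(de, *self.mf[tde]))  # Mamdani min
            num += w * out
            den += w
        du = self.ku * (num / den if den > 0 else 0.0)
        # accumulate the increment and limit the reference torque
        self.torque_ref = float(np.clip(self.torque_ref + du, -self.t_max, self.t_max))
        return self.torque_ref
```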

Performance Evaluation of AOMDV-PAMAC Protocols for Ad Hoc Networks

Power consumption of nodes in ad hoc networks is a critical issue as they predominantly operate on batteries. In order to improve the lifetime of an ad hoc network, all the nodes must be utilized evenly and the power required for connections must be minimized. In this project, a link layer algorithm known as the Power Aware Medium Access Control (PAMAC) protocol is proposed, which enables the network layer to select the route with the minimum total power requirement among the possible routes between a source and a destination, provided all nodes in the route have battery capacity above a threshold. When the battery capacity of a node drops below this predefined threshold, routes passing through it are avoided and the node acts only as a source or destination. Further, the first few nodes whose battery power is drained to the threshold value are pushed to the exterior of the network, and nodes in the exterior are brought to the interior; as a result, less total power is required to forward packets for each connection. The network layer protocol AOMDV is essentially an extension of the AODV routing protocol. AOMDV is designed to form multiple routes to the destination and also avoids loop formation, thereby reducing unnecessary congestion on the channel. In this project, the performance of AOMDV is evaluated using PAMAC as the MAC layer protocol; the average power consumption, throughput and average end-to-end delay of the network are calculated, and the results are compared with those of the other network layer protocol, AODV.
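As an illustration of the PAMAC route-selection idea described above, the following sketch picks the minimum-total-power route among candidates whose intermediate nodes are all above the battery threshold; the routes, link powers and threshold are hypothetical values, not data from the project.

```python
def select_route(routes, link_power, battery, threshold):
    """Pick the route with minimum total transmission power among candidate
    routes whose intermediate nodes all have battery capacity above threshold.
    `routes` is a list of node lists [src, ..., dst]; `link_power[(a, b)]` is
    the power needed for hop a->b; `battery[n]` is the remaining capacity of n."""
    feasible = []
    for route in routes:
        # nodes below the threshold may only act as source or destination
        if all(battery[n] > threshold for n in route[1:-1]):
            cost = sum(link_power[(a, b)] for a, b in zip(route, route[1:]))
            feasible.append((cost, route))
    return min(feasible)[1] if feasible else None

# hypothetical example topology
routes = [['S', 'A', 'B', 'D'], ['S', 'C', 'D']]
link_power = {('S', 'A'): 2, ('A', 'B'): 3, ('B', 'D'): 2, ('S', 'C'): 4, ('C', 'D'): 5}
battery = {'S': 90, 'A': 55, 'B': 20, 'C': 70, 'D': 80}
print(select_route(routes, link_power, battery, threshold=30))  # -> ['S', 'C', 'D']
```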

Optimization of the Process of Osmo-Convective Drying of Edible Button Mushrooms Using Response Surface Methodology (RSM)

Simultaneous effects of temperature, immersion time, salt concentration, sucrose concentration, pressure and convective dryer temperature on the combined osmotic dehydration - convective drying of edible button mushrooms were investigated. Experiments were designed according to a Central Composite Design with six factors, each at five different levels. Response Surface Methodology (RSM) was used to determine the optimum processing conditions that yield maximum water loss and rehydration ratio and minimum solid gain and shrinkage in osmotic-convective drying of edible button mushrooms. Using response surface and contour plots, the optimum operating conditions were found to be a temperature of 39 °C, immersion time of 164 min, salt concentration of 14%, sucrose concentration of 53%, pressure of 600 mbar and drying temperature of 40 °C. At these optimum conditions, water loss, solid gain, rehydration ratio and shrinkage were found to be 63.38 (g/100 g initial sample), 3.17 (g/100 g initial sample), 2.26 and 7.15%, respectively.
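The sketch below illustrates the RSM step under common assumptions: a full second-order (quadratic) model is fitted by least squares to the CCD runs and then optimized numerically over the coded factor ranges. The function names and the coded bounds are illustrative, not the exact procedure used in the study.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import minimize

def quad_features(x):
    """Second-order RSM model terms: intercept, linear, squared and interaction terms."""
    x = np.asarray(x, dtype=float)
    inter = [x[i] * x[j] for i, j in combinations(range(len(x)), 2)]
    return np.concatenate(([1.0], x, x**2, inter))

def fit_rsm(X, y):
    """Least-squares fit of the quadratic response surface to the CCD runs."""
    A = np.array([quad_features(row) for row in X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def optimise(beta, bounds, maximise=True):
    """Find the factor setting that maximises (or minimises) the fitted response."""
    sign = -1.0 if maximise else 1.0
    x0 = np.array([(lo + hi) / 2 for lo, hi in bounds])
    res = minimize(lambda x: sign * (quad_features(x) @ beta), x0, bounds=bounds)
    return res.x

# hypothetical usage with coded CCD runs for the six factors:
# X: (n_runs, 6) design matrix in coded units, y: measured water loss
# beta = fit_rsm(X, y)
# x_opt = optimise(beta, bounds=[(-2, 2)] * 6)
```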

Dynamic Bayesian Networks Modeling for Inferring Genetic Regulatory Networks by Search Strategy: Comparison between Greedy Hill Climbing and MCMC Methods

Using Dynamic Bayesian Networks (DBN) to model genetic regulatory networks from gene expression data is one of the major paradigms for inferring the interactions among genes. Averaging over a collection of models for predicting the network is preferable to relying on a single high-scoring model. In this paper, two kinds of model search approaches are compared: Greedy hill-climbing Search with Restarts (GSR) and Markov Chain Monte Carlo (MCMC) methods. GSR is preferred in many papers, but there has been no comparative study of which approach is better for DBN models. Different types of experiments have been carried out to benchmark these approaches. Our experimental results demonstrate that, on average, the MCMC methods outperform GSR in the accuracy of the predicted network while having comparable time efficiency. By proposing different variations of MCMC and employing a simulated annealing strategy, the MCMC methods become more efficient and stable. Apart from comparing these approaches, another objective of this study is to investigate the feasibility of using DBN modeling approaches for inferring gene networks from a few snapshots of high-dimensional gene profiles. Through experiments on synthetic data as well as systematic data, the results reveal how the performance of these approaches is influenced as the target gene network varies in network size, data size and system complexity.
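For reference, the sketch below shows a generic greedy hill-climbing search with random restarts for a first-order DBN, of the kind compared here with MCMC; since all edges point from time t-1 to time t, no acyclicity check is needed. The score function, move set and limits are illustrative assumptions rather than the exact settings used in the experiments.

```python
import random

def greedy_hill_climb(genes, score, max_parents=3, restarts=10, seed=0):
    """Greedy structure search with random restarts for a first-order DBN.
    An edge (i -> j) means gene i at time t-1 regulates gene j at time t.
    `score(structure)` is assumed to return a network score (e.g. BIC or BDe)
    computed from the expression data; `structure` maps each gene to its parent set."""
    rng = random.Random(seed)
    best, best_score = None, float('-inf')
    for _ in range(restarts):
        # random initial structure
        structure = {j: set(rng.sample(genes, rng.randint(0, min(max_parents, len(genes)))))
                     for j in genes}
        current = score(structure)
        improved = True
        while improved:
            improved = False
            for j in genes:
                for i in genes:
                    parents = structure[j]
                    if i in parents:
                        move = parents - {i}            # try deleting edge i -> j
                    elif len(parents) < max_parents:
                        move = parents | {i}            # try adding edge i -> j
                    else:
                        continue
                    candidate = dict(structure)
                    candidate[j] = move
                    s = score(candidate)
                    if s > current:                      # keep only improving moves
                        structure, current, improved = candidate, s, True
        if current > best_score:
            best, best_score = structure, current
    return best, best_score
```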

Improvement of Passengers' Ride Comfort in Rail Vehicles Equipped with Air Springs

In rail vehicles, air springs are very important isolating components, which guarantee good ride comfort for passengers during their trip. In most new rail-vehicle models developed by researchers, the thermodynamical effects of air springs are ignored and the secondary suspension is modeled by simple springs and dampers. As the performance of suspension components has significant effects on rail-vehicle dynamics and passenger ride comfort, a complete nonlinear thermodynamical air spring model, which is a combination of two different models, is introduced. Results from a field test show remarkable agreement between the proposed model and experimental data. The effects of air suspension parameters on system performance are investigated here, and these parameters are then tuned to minimize the Sperling ride comfort index during the trip. The results show that by modifying the air suspension parameters, passenger comfort is improved and the ride comfort index is reduced by about 10%.

Influence of Combined Drill Coulters on Seedbed Compaction under Conservation Tillage Technologies

All over the world, including the Middle and East European countries, sustainable tillage and sowing technologies are applied increasingly broadly with a view to optimising soil resources, mitigating soil degradation processes, saving energy resources, preserving biological diversity, etc. As a result, altered conditions of tillage and sowing technological processes are inevitably faced. The purpose of this study is to determine the seedbed topsoil hardness when using a combined sowing coulter in different sustainable tillage technologies. The research involved a combined coulter consisting of two dissected blade discs and a shoe coulter. In order to determine soil hardness in the seedbed area, a multipenetrometer was used. Experimental studies found that in loosened soil, a combined sowing coulter compacts the furrow bottom, walls and soil near the furrow equally; therefore, soil hardness was similar at all researched depths and no significant differences were established. In loosened and compacted (double-rolled) soil, the impact of a combined coulter on the hardness of the seedbed soil surface was more considerable at a depth of 2 mm. Soil hardness at the furrow bottom and walls up to a distance of 26 mm was 1.1 MPa. At a depth of 10 mm, the greatest hardness was established at the furrow bottom. In loosened and heavily compacted (rolled six times) soil, at depths of 2 and 10 mm, a combined coulter most of all compacted the furrow bottom, which had a hardness of 1.8 MPa. At a depth of 20 mm, soil hardness within the whole investigated area varied insignificantly and fluctuated around 2.0 MPa. The hardness of the furrow walls and of the soil near the furrow was approximately 1.0 MPa lower than that at the furrow bottom.

Computational Evaluation of a C-A Heat Pump

The compression-absorption heat pump (C-A HP), a promising heat recovery technology that produces process hot water using the low-temperature heat of wastewater, was evaluated by computer simulation. A simulation program was developed based on the continuity equation and the first and second laws of thermodynamics. Both the absorber and desorber were modeled using the UA-LMTD method. In order to prevent an unfeasible temperature profile and to reduce calculation errors arising from the curved temperature profile of a mixture, the heat loads were divided into many segments. A single-stage compressor was considered, and the compressor cooling load was also taken into account. The isentropic efficiency was computed from map data. Simulation conditions were based on a system consisting of ordinarily designed components. The simulation results show that most of the total entropy generation occurs during the compression and cooling process, suggesting that system performance can be enhanced if a rectifier is introduced.
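As a small illustration of the UA-LMTD modeling of each absorber/desorber segment, the sketch below evaluates Q = UA x LMTD for one counter-flow segment; the UA value and temperatures are hypothetical, not taken from the simulated system.

```python
import math

def lmtd(dt_in, dt_out):
    """Log-mean temperature difference of one heat-exchanger segment."""
    if abs(dt_in - dt_out) < 1e-9:          # avoid 0/0 when the profiles are parallel
        return dt_in
    return (dt_in - dt_out) / math.log(dt_in / dt_out)

def segment_heat_load(ua_segment, t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Q = UA * LMTD for a counter-flow segment of the absorber or desorber."""
    return ua_segment * lmtd(t_hot_in - t_cold_out, t_hot_out - t_cold_in)

# hypothetical segment: hot stream 95 -> 90 degC, cold stream 60 -> 70 degC
print(segment_heat_load(ua_segment=0.8, t_hot_in=95, t_hot_out=90,
                        t_cold_in=60, t_cold_out=70))   # kW, given UA in kW/K
```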

Harris Extraction and SIFT Matching for Correlation of Two Tablets

This article presents the development of efficient algorithms for comparing tablet copies. Image recognition has specialized uses in digital systems such as medical imaging, computer vision, defense and communication. Comparing two images that look indistinguishable is a formidable task: two images taken from different sources might look identical, but due to different digitizing properties they are not, while small variations in image information, such as cropping, rotation and slight photometric alteration, make direct matching techniques unreliable. In this paper we introduce different matching algorithms designed to help art centers identify real painting images from fake ones. Different vision algorithms for local image features are implemented using MATLAB. In this framework, a Tablet Comparison Computer Tool (TCCT) is designed to facilitate our research. The TCCT is a Graphical User Interface (GUI) tool used to identify images by their shapes and objects. The parameters of the vision system are fully accessible to the user through this graphical user interface. For matching, it applies different description techniques that can identify the exact figures of objects.
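A minimal Harris/SIFT matching pipeline of the kind discussed above can be sketched with OpenCV as below (Lowe's ratio test on SIFT descriptors, with a Harris response map available for corner pre-filtering); the file paths and parameters are placeholders, and this is a generic sketch rather than the TCCT implementation itself.

```python
import cv2
import numpy as np

def match_images(path_a, path_b, ratio=0.75):
    """Detect keypoints, describe them with SIFT and match with Lowe's ratio test."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_a, des_b, k=2)
    good = []
    for pair in knn:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])            # keep only unambiguous matches
    return len(good), len(kp_a), len(kp_b)

# a Harris response map can be used instead of (or to pre-filter) the keypoints:
# harris = cv2.cornerHarris(np.float32(gray_image), blockSize=2, ksize=3, k=0.04)
```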

Optimal and Generalized Multiple Description Image Coding Transform in the Wavelet Domain

In this paper we propose a Multiple Description Image Coding (MDIC) scheme that generates two compressed descriptions with balanced rates in the wavelet domain (Daubechies biorthogonal (9, 7) wavelet), using an optimal pairwise correlating transform, together with a method for applying Generalized Multiple Description Coding (GMDC) to image coding in the wavelet domain. The GMDC produces statistically correlated streams such that lost streams can be estimated from the received data. Our performance tests show that the proposed method yields greater improvement and better quality of the reconstructed image when the wavelet coefficients are normalized by the Gaussian Scale Mixture (GSM) model rather than the Gaussian one.
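To illustrate the pairwise correlating transform idea, the sketch below rotates pairs of wavelet coefficients to produce two correlated descriptions, inverts the transform exactly when both arrive, and falls back to a simple linear estimate of the lost stream otherwise. The fixed rotation angle and the variance-based estimator (valid for theta = pi/4 and zero-mean coefficients) are illustrative assumptions, not the optimized transform of the proposed scheme.

```python
import numpy as np

def correlating_transform(c1, c2, theta=np.pi / 4):
    """Pair two wavelet coefficient streams and rotate each pair so the two
    output streams (the descriptions) become statistically correlated."""
    R = np.array([[np.cos(theta),  np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
    y = R @ np.vstack([c1, c2])
    return y[0], y[1], R

def central_decode(y1, y2, R):
    """Both descriptions received: invert the transform exactly."""
    c = np.linalg.solve(R, np.vstack([y1, y2]))
    return c[0], c[1]

def side_decode(y_received, R, var1, var2, lost=2):
    """One description lost: estimate it linearly from the received one using
    the coefficient variances (theta = pi/4 case), then invert the transform."""
    var_y = (var1 + var2) / 2           # Var(y1) = Var(y2) at theta = pi/4
    cov_y = (var2 - var1) / 2           # Cov(y1, y2) at theta = pi/4
    y_est = (cov_y / var_y) * y_received
    pair = (y_received, y_est) if lost == 2 else (y_est, y_received)
    c = np.linalg.solve(R, np.vstack(pair))
    return c[0], c[1]
```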

Modeling and Analysis of Twelve-Phase (Multi-Phase) DSTATCOM for Multi-Phase Load Circuits

This paper presents the modeling and analysis of a twelve-phase distribution static compensator (DSTATCOM), which is capable of balancing the source currents in spite of unbalanced loading and phase outages. In addition to balancing the supply current, the power factor can be set to a desired value. The theory of instantaneous symmetrical components is used to generate the twelve-phase reference currents. These reference currents are then tracked using a current-controlled voltage source inverter operated with a hysteresis band control scheme. An ideal compensator is used in place of a physical realization of the compensator. The performance of the proposed DSTATCOM is validated through MATLAB simulation, and detailed simulation results are given.
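The hysteresis band tracking mentioned above can be sketched per phase as below: each inverter leg switches when the current error leaves the band and otherwise keeps its previous state. The twelve-phase current values and band width are hypothetical.

```python
def hysteresis_controller(i_ref, i_meas, prev_states, band):
    """Per-phase hysteresis band current control for the VSI legs.
    A leg switches high when the current error exceeds the band, low when it
    falls below the negative band, and otherwise keeps its previous state."""
    states = []
    for ref, meas, prev in zip(i_ref, i_meas, prev_states):
        err = ref - meas
        if err > band:
            states.append(1)       # connect the phase to +Vdc/2
        elif err < -band:
            states.append(0)       # connect the phase to -Vdc/2
        else:
            states.append(prev)    # stay within the band: keep the previous state
    return states

# hypothetical twelve-phase example: reference vs. measured currents in amperes
i_ref  = [10.0] * 12
i_meas = [10.4, 9.3, 10.1, 9.9, 11.2, 8.7, 10.0, 9.5, 10.6, 9.8, 10.2, 9.1]
print(hysteresis_controller(i_ref, i_meas, prev_states=[0] * 12, band=0.5))
```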

Feasibility of Integrating Heating Valve Drivers with KNX-standard for Performing Dynamic Hydraulic Balance in Domestic Buildings

The increasing demand for sufficient and clean energy forces industrial and service companies to align their strategies towards efficient consumption. This trend also applies to the residential building sector, where large amounts of energy are consumed by house and facility heating. Many of the operated hot-water heating systems lack hydraulically balanced working conditions for heat distribution and transmission and therefore heat inefficiently. Through hydraulic balancing of heating systems, significant savings of primary and secondary energy can be achieved. This paper addresses the use of KNX technology (smart buildings) in residential buildings to ensure a dynamic adaptation of the hydraulic system's performance, in order to increase the heating system's efficiency. The procedure of segmenting the heating system into hydraulically independent units (meshes) is presented. Within these meshes, the heating valves are addressed and controlled by a central facility server. Feasibility criteria for such valve drivers are specified. The dynamic hydraulic balance is achieved by positioning these valves according to heating loads that are derived from the temperature settings in the corresponding rooms. The energetic advantages of single-room heating control procedures, based on the application FacilityManager, are presented.

ECA-SCTP: Enhanced Cooperative ACK for SCTP Path Recovery in Concurrent Multiple Transfer

The Stream Control Transmission Protocol (SCTP) has been proposed to provide reliable transport for real-time communications. Due to its attractive features, such as multi-streaming and multi-homing, SCTP is often expected to be an alternative protocol to TCP and UDP. In the original SCTP standard, the secondary path is mainly regarded as a redundancy. Recently, much research has focused on extending SCTP to enable a host to send its packets to a destination over multiple paths simultaneously. In order to transfer packets concurrently over multiple paths, SCTP should be carefully designed to avoid unnecessary fast retransmission and mis-estimation of the congestion window size on the paths. Therefore, we propose an Enhanced Cooperative ACK SCTP (ECA-SCTP) to improve the path recovery efficiency of a multi-homed host operating in concurrent multiple transfer mode. We evaluated the performance of the proposed scheme using ns-2 simulation in terms of cwnd variation, path recovery time and goodput. Our scheme provides better performance in lossy and path-asymmetric networks.

Onset Velocity Profiles Evolution in Microchannels

The present microfluidic study emphasizes the flow behavior within a Y-shaped micro-bifurcation in two similar flow configurations. We report a numerical and experimental investigation of the evolution of velocity profiles and secondary flows, manifested at different Reynolds numbers (Re) and for two different boundary conditions. The experiments are performed using a specially designed setup based on optical microscopic devices. With this setup, direct visualizations and quantitative measurements of the path-lines are obtained. A micro-PIV measurement system is used to obtain the spatial evolution of velocity profile distributions in the main flow domains. The experimental data are compared with numerical simulations performed with the commercial computational code FLUENT in a 3D geometry with the same dimensions as the experimental one. The numerical flow patterns are found to be in good agreement with the experimental observations.

Photo-Fenton Treatment of 1,3-Dichloro-2-propanol Aqueous Solutions Using UV Radiation and H2O2 – A Kinetic Study

The photochemical and photo-Fenton oxidation of 1,3-dichloro-2-propanol was performed in a batch reactor, at room temperature, using UV radiation, H2O2 as oxidant, and Fenton's reagent. The effect of the initial concentration of the oxidizing agent was investigated, as well as the effect of the initial concentration of Fe(II), by following the degradation of the target compound, the removal of total organic carbon and the production of chloride ions. From the kinetic analysis conducted and the proposed reaction scheme, it was also deduced that the addition of Fe(II) significantly increases the production and the further oxidation of the chlorinated intermediates.
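For readers unfamiliar with such kinetic analyses, the degradation of the target compound in photo-Fenton systems is commonly reported with an apparent (pseudo-)first-order rate law; the form below states that generic assumption and is not necessarily the reaction scheme proposed in this work.

$$-\frac{dC}{dt} = k_{\mathrm{app}}\, C \quad\Longrightarrow\quad \ln\frac{C_0}{C} = k_{\mathrm{app}}\, t$$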

3D Dynamic Representation System for the Human Head

Human head representations are usually based on the morphological and structural components of a real model. Over time it has become increasingly necessary to build fully virtual models that comply rigorously with the specifications of human anatomy. Still, making and using a model perfectly fitted to the real anatomy is a difficult task, because it requires large hardware resources and significant processing time. That is why it is necessary to choose the best compromise solution, which keeps the right balance between the perfection of detail and the consumption of resources, in order to obtain facial animations with real-time rendering. We present here the way in which we achieved such a 3D system, which we intend to use as a starting point for creating facial animations with real-time rendering, used in medicine to find and identify different types of pathologies.

Intelligent Vision System for Human-Robot Interface

This paper addresses the development of an intelligent vision system for human-robot interaction. The two novel contributions of this paper are 1) detection of human faces and 2) localization of the eyes. The method is based on the visual attributes of human skin colors and a geometrical analysis of the face skeleton. This paper introduces a spatial-domain filtering method named the 'fuzzily skewed filter', which incorporates fuzzy rules for deciding the gray level of pixels from their neighborhoods and takes advantage of both the median and averaging filters. The effectiveness of the method has been demonstrated by implementing eye-tracking commands on an entertainment robot named AIBO.
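The filter below is only a rough illustration of blending median and averaging behavior in a neighborhood, with a single membership value standing in for the fuzzy rules described above; it is not the fuzzily skewed filter itself, whose rule base is defined in the paper.

```python
import numpy as np

def median_mean_blend_filter(img, ksize=3):
    """Illustrative neighborhood filter that blends the median and the mean of
    each window, weighting the median more when the center pixel deviates
    strongly from its neighborhood (a rough stand-in for fuzzy rules)."""
    pad = ksize // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = np.empty(img.shape, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            win = padded[r:r + ksize, c:c + ksize]
            med, mean = np.median(win), win.mean()
            spread = win.max() - win.min() + 1e-9
            # membership of the center pixel in "outlier": 0 (smooth) .. 1 (impulsive)
            mu = min(abs(float(img[r, c]) - med) / spread, 1.0)
            out[r, c] = mu * med + (1.0 - mu) * mean
    return out.astype(img.dtype)
```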

User Experience Evolution Lifecycle Framework

Perceptions of quality from both the designers' and users' perspectives have now stretched beyond traditional usability, incorporating abstract and subjective concepts. This has led to a shift in the focus of the human-computer interaction research community: a shift towards achieving user experience (UX) by not only fulfilling conventional usability needs but also those that go beyond them. The term UX, although widespread and given significant importance, lacks a consensual, unified definition. In this paper, we survey various UX definitions and modeling frameworks and examine them as the foundation for proposing a UX evolution lifecycle framework for understanding UX in detail. In the proposed framework we identify the building blocks of UX and discuss how UX evolves in various phases. The framework can be used as a tool to understand experience requirements and evaluate them, resulting in better UX design and hence improved user satisfaction.

Covering-Based Rough Sets Based on the Refinement of Covering-Elements

Covering-based rough sets are an extension of rough sets, based on a covering instead of a partition of the universe; they are therefore more powerful than rough sets in describing some practical problems. However, by extending rough sets, covering-based rough sets can increase the roughness of each model in recognizing objects. How to obtain better approximations from the models of covering-based rough sets is an important issue. In this paper, two concepts, determinate elements and indeterminate elements in a universe, are proposed and given precise definitions. This research makes a reasonable refinement of the covering-elements from a new viewpoint, and the refinement may generate better approximations for covering-based rough sets models. To verify the theory, it is applied to eight major covering-based rough sets models adapted from the literature. The result is that, in all these models, the lower approximation increases effectively. Correspondingly, in all models the upper approximation decreases, with the exception of two models in some special situations. Therefore, the roughness in recognizing objects is reduced. This research provides a new approach to the study and application of covering-based rough sets.
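For concreteness, the sketch below shows one common pair of approximation operators on a covering (lower approximation: union of blocks contained in the target set; upper approximation: union of blocks intersecting it); the eight models studied in the paper use different operators, so this is only an illustrative baseline, and the universe and covering are hypothetical.

```python
def lower_approximation(covering, target):
    """Union of covering elements entirely contained in the target set."""
    result = set()
    for block in covering:
        if block <= target:        # the block is a subset of the target concept
            result |= block
    return result

def upper_approximation(covering, target):
    """Union of covering elements that intersect the target set."""
    result = set()
    for block in covering:
        if block & target:         # the block overlaps the target concept
            result |= block
    return result

# hypothetical universe {1..6} and covering (note the overlapping blocks)
covering = [frozenset({1, 2}), frozenset({2, 3, 4}), frozenset({4, 5}), frozenset({5, 6})]
target = {2, 3, 4, 5}
print(lower_approximation(covering, target))   # -> {2, 3, 4, 5}
print(upper_approximation(covering, target))   # -> {1, 2, 3, 4, 5, 6}
```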

BEM Formulations Based on Kirchhoff's Hypothesis to Perform Linear Bending Analysis of Plates Reinforced by Beams

In this work, two formulations of the boundary element method (BEM) for the linear bending analysis of plates reinforced by beams are discussed. Both formulations are based on Kirchhoff's hypothesis and are obtained from the reciprocity theorem applied to zoned plates, where each sub-region defines a beam or a slab. In the first model, the problem values are defined along the interfaces and the external boundary. Then, in order to reduce the number of degrees of freedom, kinematic hypotheses are assumed along the beam cross-section, leading to a second formulation in which the collocation points are defined along the beam skeleton instead of being placed on the interfaces. In these formulations, no approximation of the generalized forces along the interface is required. Moreover, compatibility and equilibrium conditions along the interface are automatically imposed by the integral equation. Thus, these formulations require fewer approximations and the total number of degrees of freedom is reduced. The numerical examples discuss the differences between these two BEM formulations and compare the results with those of a well-known finite element code.

Novel Hybrid Method for Gene Selection and Cancer Prediction

Microarray data profile gene expression on a whole-genome scale and therefore provide a good way to study associations between gene expression and the occurrence or progression of cancer. More and more researchers have realized that microarray data are helpful for predicting cancer samples. However, the dimension of the gene expression data is much larger than the sample size, which makes this task very difficult. Therefore, identifying the significant genes causing cancer has become an urgent as well as hard research topic. Many feature selection algorithms have been proposed in the past, focusing on improving cancer predictive accuracy at the expense of ignoring the correlations between features. In this work, a novel framework (named SGS) is presented for stable gene selection and efficient cancer prediction. The proposed framework first performs a clustering algorithm to find the gene groups in which the genes have high correlation coefficients, then selects the significant genes in each group with the Bayesian Lasso and the important gene groups with the group Lasso, and finally builds a prediction model on the shrunken gene space with an efficient classification algorithm (such as SVM, 1NN or regression). Experimental results on real-world data show that the proposed framework often outperforms existing feature selection and prediction methods, such as SAM, IG and Lasso-type prediction models.
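A rough sketch of the three stages is given below using standard scikit-learn components (assuming scikit-learn >= 1.2 for the `metric='precomputed'` clustering argument); plain Lasso stands in for the Bayesian Lasso and group Lasso of the actual framework, so this is an approximation of the pipeline, not the proposed SGS method.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def select_and_classify(X, y, n_groups=20, alpha=0.05, top_k=3):
    """Sketch of the three stages: (1) cluster correlated genes, (2) select
    informative genes inside each cluster with a sparse linear model,
    (3) train a classifier on the reduced gene space."""
    # 1) group genes by correlation (clustering on a correlation-based distance)
    corr_dist = 1.0 - np.abs(np.corrcoef(X.T))
    groups = AgglomerativeClustering(n_clusters=n_groups, metric='precomputed',
                                     linkage='average').fit_predict(corr_dist)
    # 2) within each group, keep the genes with the largest sparse-regression weights
    selected = []
    for g in range(n_groups):
        idx = np.where(groups == g)[0]
        coef = Lasso(alpha=alpha, max_iter=5000).fit(X[:, idx], y).coef_
        selected.extend(idx[np.argsort(-np.abs(coef))[:top_k]])
    selected = sorted(set(selected))
    # 3) classifier on the shrunken gene space
    clf = make_pipeline(StandardScaler(), SVC(kernel='linear')).fit(X[:, selected], y)
    return selected, clf
```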