Evaluation of Solid Phase Micro-extraction with Standard Testing Method for Formaldehyde Determination

In this study, solid phase micro-extraction (SPME) was optimized to improve the sensitivity and accuracy of formaldehyde determination for plywood panels. Further work was carried out to compare the newly developed technique with the existing method, in which formaldehyde collected in desiccators reacts with acetylacetone reagent (DC-AA). In SPME, formaldehyde was first derivatized with O-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine hydrochloride (PFBHA), and analysis was then performed by gas chromatography coupled with mass spectrometry (GC-MS). SPME measurements on various wood species gave satisfactory results, with relative standard deviations (RSDs) in the range of 3.1-10.3%. The SPME results also correlated well with the DC-AA values, giving a correlation coefficient (R²) of 0.959. Quantitative analysis of formaldehyde by SPME is therefore a promising alternative for the wood industry.

An Optimal Control Problem for Rigid Body Motions on Lie Group SO(2, 1)

In this paper, smooth trajectories are computed in the Lie group SO(2, 1) as a motion planning problem by assigning a Frenet frame to the rigid body system and optimizing the cost function of the elastic energy spent to track a timelike curve in Minkowski space. A method is proposed for solving a motion planning problem that minimizes the integral of the square norm of the Darboux vector of a timelike curve. This method uses the coordinate-free Maximum Principle of optimal control and leads to the theory of integrable Hamiltonian systems. The presence of several conserved quantities inherent in these Hamiltonian systems aids in the explicit computation of the rigid body motions.
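
The abstract does not write the variational problem out explicitly; a compact statement of it, with the horizon T, the frame g(t) and the Darboux vector ω(t) introduced here purely for notation, is:

```latex
\[
\min_{\omega(\cdot)}\; J \;=\; \frac{1}{2}\int_{0}^{T} \bigl\lVert \omega(t)\bigr\rVert^{2}\,\mathrm{d}t
\qquad \text{subject to} \qquad
\dot g(t) \;=\; g(t)\,\widehat{\omega}(t), \qquad g(t)\in SO(2,1),
\]
```

where \widehat{\omega}(t) denotes the Lie-algebra (matrix) form of the Darboux vector of the tracked timelike curve.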

A Graph-Based Approach for Placement of Non-Replicated Databases in a Grid

In a wide-area environment such as a Grid, data placement is an important aspect of distributed database systems. In this paper, we address the problem of the initial placement of non-replicated database fragments in a Grid architecture. We propose a graph-based approach that takes resource restrictions into account. The goal is to optimize the use of computing, storage and communication resources. The proposed approach proceeds in two phases: in the first phase, we group fragments using knowledge of fragment dependencies and, in the second phase, we determine an efficient placement of the fragment groups on the Grid. We also show, via experimental analysis, that our approach gives solutions that are close to optimal for different databases and Grid configurations.
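
The abstract does not detail the grouping or placement rules, so the Python sketch below only illustrates the two-phase idea. The use of connected components for grouping, the largest-group-first placement rule, and all names (fragments, dependencies, node capacities) are assumptions made for this illustration, not the paper's algorithm.

```python
# Sketch of the two-phase idea: group dependent fragments, then place groups on nodes.
from collections import defaultdict

def group_fragments(fragments, dependencies):
    """Phase 1: merge fragments connected by dependency edges (connected components)."""
    adj = defaultdict(set)
    for a, b in dependencies:
        adj[a].add(b)
        adj[b].add(a)
    seen, groups = set(), []
    for f in fragments:
        if f in seen:
            continue
        stack, comp = [f], []
        while stack:
            x = stack.pop()
            if x in seen:
                continue
            seen.add(x)
            comp.append(x)
            stack.extend(adj[x] - seen)
        groups.append(comp)
    return groups

def place_groups(groups, sizes, capacities):
    """Phase 2: greedily place the largest groups first on the node with most free storage."""
    free = dict(capacities)
    placement = {}
    for comp in sorted(groups, key=lambda c: sum(sizes[f] for f in c), reverse=True):
        need = sum(sizes[f] for f in comp)
        node = max(free, key=free.get)
        if free[node] < need:
            raise ValueError("no node can hold group %s" % comp)
        free[node] -= need
        placement[tuple(comp)] = node
    return placement

if __name__ == "__main__":
    frags = ["f1", "f2", "f3", "f4"]
    deps = [("f1", "f2"), ("f3", "f4")]
    sizes = {"f1": 10, "f2": 5, "f3": 8, "f4": 2}
    caps = {"n1": 20, "n2": 12}
    print(place_groups(group_fragments(frags, deps), sizes, caps))
```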

IMLFQ Scheduling Algorithm with a Combinational Fault-Tolerant Method

Scheduling algorithms are used in operating systems to optimize the usage of processors. One of the most efficient scheduling algorithms is the Multi-Level Feedback Queue (MLFQ) algorithm, which uses several queues with different quanta. The most important weakness of this method is the inability to determine the optimal number of queues and the quantum of each queue, both of which directly affect the response time. This weakness is addressed by the IMLFQ scheduling algorithm. In this paper, we review the IMLFQ algorithm, which solves these problems and minimizes the response time. In this algorithm, a recurrent neural network is used to find both the number of queues and the optimal quantum of each queue. In addition, to prevent probable faults in the computation of process response times, a new fault-tolerant approach based on combinational software redundancy is presented. The experimental results show that the IMLFQ algorithm yields better response times than other scheduling algorithms, and that the fault-tolerant mechanism further improves its performance.
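
As a point of reference, the Python sketch below simulates a basic multilevel feedback queue; in IMLFQ the number of queues and the per-queue quanta would be supplied by the recurrent neural network, whereas here they are simply given as inputs. The job set and quanta are illustrative.

```python
# Minimal multilevel-feedback-queue simulation sketch.
from collections import deque

def mlfq(jobs, quanta):
    """jobs: dict name -> remaining burst time; quanta: quantum per queue level."""
    queues = [deque() for _ in quanta]
    for name in jobs:
        queues[0].append(name)              # every job starts in the highest-priority queue
    remaining = dict(jobs)
    time, finish = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)
        name = queues[level].popleft()
        run = min(quanta[level], remaining[name])
        time += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = time             # job done: record completion time
        else:                               # demote unfinished job to the next level
            queues[min(level + 1, len(queues) - 1)].append(name)
    return finish

if __name__ == "__main__":
    print(mlfq({"A": 7, "B": 3, "C": 12}, quanta=[2, 4, 8]))
```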

Effect of a Linear-Exponential Penalty Function on the GA's Efficiency in Optimization of a Laminated Composite Panel

A stiffened laminated composite panel (1 m length × 0.5 m width) was optimized for minimum weight and deflection under several constraints using a genetic algorithm (GA). A detailed study of the performance of a penalty function with two kinds of penalty factors, static and dynamic, was conducted. The results show that linear dynamic penalty factors are more effective than static ones. Moreover, a specially combined linear-exponential penalty function performed better than both of the aforementioned penalty functions and made the GA less sensitive to the magnitude of the penalty factor.
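
The abstract does not give the exact functional form of the penalty; one plausible way of combining a linearly growing dynamic factor with an exponential term, written here only for illustration (the objective f, constraints g_j, coefficient k_0, exponent β and generation index t are all assumptions), is:

```latex
\[
F(\mathbf{x}, t) \;=\; f(\mathbf{x}) \;+\; k(t)\sum_{j=1}^{m} g_j^{+}(\mathbf{x})
\;+\; \sum_{j=1}^{m}\Bigl(e^{\beta\, g_j^{+}(\mathbf{x})} - 1\Bigr),
\qquad k(t) = k_0\, t,
\qquad g_j^{+}(\mathbf{x}) = \max\bigl\{0,\, g_j(\mathbf{x})\bigr\},
\]
```

where the linear term grows with the generation t and the exponential term penalizes large constraint violations more heavily.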

Hybrid Prefix Adder Architecture for Minimizing the Power Delay Product

Parallel prefix addition is a technique for improving the speed of binary addition. Due to ever-increasing integration density and the growing needs of portable devices, low-power and high-performance designs are of prime importance. The classical parallel prefix adder structures presented in the literature over the years optimize for logic depth, area, fan-out and interconnect count of logic circuits. In this paper, a new architecture for performing 8-bit, 16-bit and 32-bit parallel prefix addition is proposed. The proposed prefix adder structures are compared with several classical adders of the same bit width in terms of power, delay and number of computational nodes. The results reveal that the proposed structures have the lowest power-delay product among their peer prefix adder structures. The Tanner EDA tool was used to simulate the adder designs in the TSMC 180 nm and TSMC 130 nm technologies.
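
For readers unfamiliar with parallel prefix addition, the Python sketch below shows the generate/propagate prefix recurrence using the classical Kogge-Stone network; it is only a software illustration of the principle, not the paper's proposed hybrid hardware architecture.

```python
def kogge_stone_add(a, b, n=8):
    """Add two n-bit integers with a Kogge-Stone parallel-prefix carry network."""
    ab = [(a >> i) & 1 for i in range(n)]
    bb = [(b >> i) & 1 for i in range(n)]
    g = [ab[i] & bb[i] for i in range(n)]    # generate signals
    p = [ab[i] ^ bb[i] for i in range(n)]    # propagate signals (also the half-sum)
    half_sum = p[:]                          # keep the bit-wise half sum for the final XOR
    d = 1
    while d < n:                             # log2(n) prefix levels
        for i in range(n - 1, d - 1, -1):    # descending i uses previous-level values at i-d
            g[i] = g[i] | (p[i] & g[i - d])
            p[i] = p[i] & p[i - d]
        d *= 2
    carry = [0] + g[:-1]                     # carry into bit i is the group-generate of bits 0..i-1
    s = 0
    for i in range(n):
        s |= (half_sum[i] ^ carry[i]) << i
    return s                                 # result modulo 2**n (carry-out dropped)

if __name__ == "__main__":
    print(kogge_stone_add(200, 100))         # -> 44, i.e. (200 + 100) mod 256
```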

An Adaptive Fuzzy Clustering Approach for Network Management

Chiu's method, which generates a Takagi-Sugeno Fuzzy Inference System (FIS), is a fuzzy rule extraction method. The rule outputs are linear functions of the inputs and, in addition, these rules are not readily interpretable by an expert. In this paper, we develop a method which generates a Mamdani FIS, whose rule outputs are fuzzy. The method proceeds in two steps: first, it uses the subtractive clustering principle to estimate both the number of clusters and the initial locations of the cluster centers. Each obtained cluster corresponds to a Mamdani fuzzy rule. Then, it optimizes the fuzzy model parameters by applying a genetic algorithm. The method is illustrated on a traffic network management application. We also propose a Mamdani fuzzy rule generation method for the case where the expert wants to classify the output variables into predefined fuzzy classes.
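
A minimal Python sketch of the first step (subtractive clustering) is given below. The cluster radius, the acceptance threshold and the simplified stopping rule are common default choices assumed only for illustration and are not necessarily those used in the paper.

```python
# Sketch of subtractive clustering used to estimate the number of clusters and
# their initial centers from data normalized to [0, 1].
import numpy as np

def subtractive_clustering(X, ra=0.5, eps=0.15):
    """X: (n_samples, n_features) array; returns an array of cluster centers."""
    alpha = 4.0 / ra ** 2
    beta = 4.0 / (1.5 * ra) ** 2
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    potential = np.exp(-alpha * d2).sum(axis=1)           # density potential of each point
    centers = []
    first_peak = potential.max()
    while True:
        k = int(np.argmax(potential))
        if potential[k] < eps * first_peak:                # remaining peaks too weak: stop
            break
        centers.append(X[k])
        # subtract the influence of the accepted center from every point's potential
        potential = potential - potential[k] * np.exp(-beta * ((X - X[k]) ** 2).sum(-1))
    return np.array(centers)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.2, 0.05, (30, 2)), rng.normal(0.8, 0.05, (30, 2))])
    print(subtractive_clustering(X))   # expected: roughly one center per cloud
```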

MATLAB/SIMULINK Based Model of Single-Machine Infinite-Bus with TCSC for Stability Studies and Tuning Employing GA

Given constraints on data availability, for the study of power system stability it is adequate to model the synchronous generator with a field circuit and one equivalent damper winding on the q-axis, known as model 1.1. This paper presents a systematic procedure for modelling and simulation of a single-machine infinite-bus power system installed with a thyristor controlled series compensator (TCSC), where the synchronous generator is represented by model 1.1, so that the impact of the TCSC on power system stability can be more reasonably evaluated. The model of the example power system is developed using MATLAB/SIMULINK and can be used for teaching power system stability phenomena, as well as for research, especially for developing generator controllers using advanced technologies. Further, the parameters of the TCSC controller are optimized using a genetic algorithm. Non-linear simulation results are presented to validate the effectiveness of the proposed approach.

Enhancing the Energy Effectiveness of Convectors Used for Heating or Cooling

The objective of this paper is to present a research study of convectors used for heating or cooling of living rooms and industrial halls. The key points are experimental measurement and comprehensive numerical simulation of the flow through the components of the convector, such as the heat exchanger and the fan inlet. Based on the obtained results, the components of the convector are optimized so as to increase thermal power efficiency by improving heat convection or reducing aerodynamic drag. Both improvements lead to more effective service conditions and to energy savings. A significant part of the convector research is the design of a dedicated measurement laboratory and the adoption of suitable measurement techniques. The new laboratory makes it possible to measure thermal power efficiency and other relevant parameters under the specific service conditions of the convectors.

Cash Flow Optimization on Synthetic CDOs

Collateralized Debt Obligations (CDOs) are not as widely used nowadays as they were before the 2007 subprime crisis. Nonetheless, optimizing the cash flows associated with synthetic CDOs remains an enthralling challenge. A Gaussian-based model is used here, in which default correlation and unconditional probabilities of default are highlighted. Numerous simulations are then performed with this model for different scenarios in order to evaluate the associated cash flows, given a specific number of defaults at different periods of time. Cash flows are not calculated on a single bought or sold tranche but on a combination of bought and sold tranches. Under some assumptions, the simplex algorithm gives a way to find the maximum cash flow as a function of default correlation and maturities. The Gaussian model used is not realistic in crisis situations, and the present system handles only the purchase or sale of whole tranches rather than portions of a tranche. Nevertheless, the work provides the investor with relevant guidance on what to buy or sell and when.
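
To make the simulation step more concrete, the Python sketch below draws correlated defaults from a one-factor Gaussian copula and counts the defaults observed by several horizons. The correlation, hazard rate, portfolio size and horizons are illustrative assumptions rather than the paper's calibration, and the tranche cash-flow and simplex steps are not shown.

```python
# One-factor Gaussian copula default simulation sketch.
import numpy as np
from scipy.stats import norm

def simulate_default_counts(n_names=100, rho=0.3, hazard=0.02,
                            horizons=(1, 3, 5), n_paths=10_000, seed=0):
    """Average number of defaults by each horizon (years) under a Gaussian copula."""
    rng = np.random.default_rng(seed)
    M = rng.standard_normal((n_paths, 1))            # common (systemic) factor
    Z = rng.standard_normal((n_paths, n_names))      # idiosyncratic factors
    X = np.sqrt(rho) * M + np.sqrt(1.0 - rho) * Z    # latent variables, pairwise corr. rho
    out = {}
    for T in horizons:
        p_T = 1.0 - np.exp(-hazard * T)              # unconditional default prob. by time T
        defaults = (X <= norm.ppf(p_T)).sum(axis=1)  # number of defaults per simulated path
        out[T] = float(defaults.mean())
    return out

if __name__ == "__main__":
    print(simulate_default_counts())                 # ~ n_names * p_T on average
```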

Preparation of Polylactic Acid Graft Polyvinyl Acetate Compatibilizers for 50/50 Starch/PLLA Blending

Polylactic acid-g-polyvinyl acetate (PLLA-g-PVAc) was used as a compatibilizer for a 50/50 starch/PLLA blend. PLLA-g-PVAc with different mol% PVAc contents was prepared by grafting PVAc onto the PLLA backbone via free-radical polymerization in solution. Various conditions, such as the type and amount of initiator, the monomer concentration, and the polymerization time and temperature, were studied. Results showed that the highest mol% of PVAc grafting (16 mol%) was achieved by conducting the graft copolymerization in toluene at 110°C for 10 h using DCP as the initiator. The chemical structure of the PVAc-grafted PLLA was confirmed by 1H NMR. Modified starch and PLLA were blended in the presence of the compatibilizer, at different loadings and mol% PVAc, using an internal mixer at 160°C for 15 min. The effects of the PVAc content and the amount of compatibilizer on the mechanical properties of the polymer blend were studied. The results revealed that blends containing the compatibilizer with a higher PVAc grafting content showed better tensile strength and tensile modulus than those with a lower PVAc grafting content. The optimal amount of compatibilizer was found to be in the range of 0.5-1.0 wt%, depending on the mol% PVAc.

Dynamic Routing to Multiple Destinations in IP Networks using Hybrid Genetic Algorithm (DRHGA)

In this paper, we propose a novel dynamic least-cost multicast routing protocol for IP networks using a hybrid genetic algorithm. Our protocol finds the multicast tree with minimum cost subject to delay, degree and bandwidth constraints. The proposed protocol has the following features: (i) a heuristic local search function is devised and embedded within the normal genetic operations to increase speed and obtain an optimized tree; (ii) it efficiently handles the dynamic situations that arise from changes in multicast group membership or from node/link failures; (iii) two different crossover and mutation probabilities are used to maintain solution diversity and achieve quick convergence. Simulation results show that the proposed protocol generates dynamic multicast trees with lower cost. The results also show that the proposed algorithm has a better convergence rate, a higher dynamic request success rate and a lower execution time than existing algorithms. The effects of the degree and delay constraints on the multicast tree have also been analyzed in terms of search success rate.
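
The abstract does not give the fitness used by the GA; one simple penalized-cost fitness consistent with the stated constraints is sketched below in Python. The edge attributes, the path representation and the penalty weight are assumptions made only for this illustration.

```python
# Illustrative penalized fitness for a candidate multicast tree: total tree cost
# plus penalties for violated delay, degree and bandwidth constraints.
def tree_fitness(tree_edges, paths_to_destinations, node_degree,
                 max_delay, max_degree, min_bandwidth, penalty=1_000.0):
    """tree_edges: dict (u, v) -> {'cost': c, 'delay': d, 'bw': b};
    paths_to_destinations: list of edge lists, one per destination;
    node_degree: dict node -> degree in the candidate tree."""
    cost = sum(e['cost'] for e in tree_edges.values())
    violations = 0
    for path in paths_to_destinations:                       # end-to-end delay per receiver
        if sum(tree_edges[uv]['delay'] for uv in path) > max_delay:
            violations += 1
    violations += sum(1 for d in node_degree.values() if d > max_degree)
    violations += sum(1 for uv in tree_edges if tree_edges[uv]['bw'] < min_bandwidth)
    return cost + penalty * violations                        # lower is fitter
```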

Six Analytical 400 kW High Speed Generator Designs for Smart Grid Systems

High-speed PM generators driven by micro-turbines are widely used in Smart Grid systems. This paper therefore presents a comparative study of six classical, optimized and genetic analytical design cases for 400 kW output power at a tip speed of 200 m/s. The six design trials of High Speed Permanent Magnet Synchronous Generators (HSPMSGs) are: classical sizing; unconstrained optimization minimizing the total losses; constrained optimization of the total mass with bounded constraints; a genetic algorithm formulation for maximum efficiency and minimum machine size; a second genetic formulation for minimum mass, with the machine sizing constrained by a non-linear constraint function of the machine losses; and, finally, an optimum torque-per-ampere genetic sizing. All results are simulated in MATLAB using the Optimization Toolbox and its genetic algorithm. Finally, the six analytical design examples are compared, with a study of the machine waveforms, THD and rotor losses.

Simulation of Robotic Arm using Genetic Algorithm and AHP

In this paper, we propose a low-cost optimized solution for the movement of a three-arm manipulator using a Genetic Algorithm (GA) and the Analytic Hierarchy Process (AHP). A scheme is given for optimizing the movement of the robotic arm with the help of the GA so that the minimum-energy-consumption criterion can be achieved. Unlike direct kinematics, inverse kinematics yields two solutions, of which the best-fit solution is selected with the help of the GA and kept in the search space for future use. Tasks such as inverse kinematics, fitness value evaluation and binary encoding are simulated and tested. Three factors, namely movement, friction and least settling time (or minimum vibration), are used to construct the fitness function, although further factors could also be considered.
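
As an illustration of how such a fitness might be evaluated, the Python sketch below scores a candidate joint configuration of a three-link planar arm by combining positioning error with a movement (energy) term. The link lengths, the weighting, and the omission of the friction and settling-time terms are simplifications assumed here, not the paper's actual fitness function.

```python
# Forward kinematics of a three-link planar arm and a simple GA-style fitness.
import math

LINKS = (1.0, 0.8, 0.5)            # link lengths (assumed)

def forward_kinematics(thetas):
    """Planar forward kinematics: joint angles (rad) -> end-effector (x, y)."""
    x = y = acc = 0.0
    for L, th in zip(LINKS, thetas):
        acc += th
        x += L * math.cos(acc)
        y += L * math.sin(acc)
    return x, y

def fitness(thetas, prev_thetas, target, w_move=0.1):
    """Lower is better: distance to the target plus a penalty on joint movement."""
    x, y = forward_kinematics(thetas)
    err = math.hypot(x - target[0], y - target[1])
    move = sum(abs(a - b) for a, b in zip(thetas, prev_thetas))
    return err + w_move * move

if __name__ == "__main__":
    print(fitness((0.3, 0.2, 0.1), (0.0, 0.0, 0.0), target=(1.8, 0.9)))
```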

Indexing and Searching of Image Data in Multimedia Databases Using Axial Projection

This paper introduces and studies new indexing techniques for content-based queries in image databases. Indexing is the key to providing sophisticated, accurate and fast searches for queries on image data. This research describes a new indexing approach which relies on linear modeling of signals using bases. A basis is a set of chosen images, and modeling an image is a least-squares approximation of that image as a linear combination of the basis images. The coefficients of the basis images are taken together to serve as the index for that image. The paper describes the implementation of the indexing scheme and presents the findings of our extensive evaluation, which was conducted to optimize (1) the choice of the basis matrix (B) and (2) the size (N) of the index A. Furthermore, we compare the performance of our indexing scheme with other schemes. Our results show that our scheme achieves significantly higher performance.
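
A minimal Python sketch of the indexing idea follows: each image is flattened, a least-squares problem is solved against the basis images B, and the coefficient vector serves as the index A. The plain nearest-neighbour search over indices and the random data are assumptions made only to keep the example self-contained.

```python
# Least-squares indexing of images against a basis, with a simple index search.
import numpy as np

def build_index(image, basis_images):
    """Least-squares coefficients of `image` as a combination of the basis images."""
    B = np.column_stack([b.ravel().astype(float) for b in basis_images])  # basis matrix B
    a, *_ = np.linalg.lstsq(B, image.ravel().astype(float), rcond=None)
    return a                                                # index vector A of size N

def search(query_index, database_indices):
    """Return the position of the database image whose index is closest to the query."""
    d = [np.linalg.norm(query_index - idx) for idx in database_indices]
    return int(np.argmin(d))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    basis = [rng.random((8, 8)) for _ in range(4)]          # N = 4 basis images
    db = [rng.random((8, 8)) for _ in range(5)]
    db_idx = [build_index(img, basis) for img in db]
    print(search(build_index(db[2], basis), db_idx))        # -> 2
```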

Development of Genetic-based Machine Learning for Network Intrusion Detection (GBML-NID)

Society has grown to rely on Internet services, and the number of Internet users increases every day. As more and more users become connected to the network, the window of opportunity for malicious users to do damage becomes larger and more lucrative. The objective of this paper is to incorporate different techniques into a classifier system to detect and classify intrusions versus normal network packets. Among several techniques, a Steady State Genetic-based Machine Learning algorithm (SSGBML) is used to detect intrusions; the Steady State Genetic Algorithm (SSGA), the Simple Genetic Algorithm (SGA), a Modified Genetic Algorithm and the Zeroth Level Classifier System are investigated in this research. SSGA is used as the discovery mechanism instead of SGA, because SGA replaces all old rules with newly produced rules, preventing good old rules from participating in the next rule generation. The Zeroth Level Classifier System plays the role of detector by matching incoming environment messages against classifiers to determine whether the current message is normal or an intrusion, and by receiving feedback from the environment. Finally, in order to attain the best results, a modified SSGA enhances the discovery engine by using fuzzy logic to optimize the crossover and mutation probabilities. The experiments and evaluations of the proposed method were performed with the KDD 99 intrusion detection dataset.
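
For intuition only, the Python sketch below shows the kind of ternary condition matching a zeroth-level classifier system performs on a binary-encoded packet; the rule set, the encoding and the fall-through behaviour are made up for this illustration and do not reflect the paper's detector.

```python
# Tiny sketch of classifier matching: a rule's condition is a string over
# {0, 1, #} (# = don't care) matched against a binary-encoded network message;
# the action labels the message as normal or intrusion.
def matches(condition, message):
    """condition: string over '0', '1', '#'; message: bit string of equal length."""
    return all(c == '#' or c == m for c, m in zip(condition, message))

rules = [("1#0#", "intrusion"), ("0###", "normal")]

def classify(message):
    for cond, action in rules:
        if matches(cond, message):
            return action
    return "unknown"        # would trigger covering / rule discovery in a full classifier system

if __name__ == "__main__":
    print(classify("1101"), classify("0111"))
```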

Optimization of GAMM Francis Turbine Runner

Nowadays, the challenge in hydraulic turbine design is the multi-objective design of the turbine runner to reach higher efficiency. The hydraulic performance of a turbine strongly depends on the shape of the runner blades. The present paper focuses on the application of a multi-objective optimization algorithm to the design of a small Francis turbine runner. The optimization exercise focuses on efficiency improvement at the best efficiency operating point (BEP) of the GAMM Francis turbine. A global optimization method based on artificial neural networks (ANN) and genetic algorithms (GA), coupled with a 3D Navier-Stokes flow solver, has been used to improve the performance of an initial Francis runner geometry. The results show the good capability of the optimization algorithm, and the final geometry has better efficiency than the initial geometry. The goal was to optimize the geometry of the blades of the GAMM turbine runner for maximum total efficiency by changing the design parameters of the camber line in at least five sections of a blade. The efficiency of the optimized geometry is improved from 90.7% to 92.5%. Finally, the design parameters and the way they were selected are considered and discussed.

A Perceptually Optimized Foveation Based Wavelet Embedded Zero Tree Image Coding

In this paper, we propose a Perceptually Optimized Foveation based Embedded ZeroTree Image Coder (POEFIC) that applies a perceptual weighting to the wavelet coefficients before the SPIHT encoding algorithm, in order to reach a targeted bit rate with improved perceptual quality for a given bit rate and a fixation point which determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS) and plays an important role in our POEFIC quality assessment. Our POEFIC coder is based on a vision model that incorporates various masking effects of HVS perception. Thus, our coder weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on (1) foveation masking, to remove or reduce high frequencies in peripheral regions; (2) luminance and contrast masking; and (3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique. The experimental results show that our coder demonstrates very good performance in terms of quality measurement.
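
A minimal Python sketch of the subband-weighting step is given below, using the PyWavelets package. The per-level weights are placeholders assumed for illustration; in POEFIC they would be derived from the foveation, masking and CSF models described above, and the subsequent SPIHT encoding is not shown.

```python
# Apply level-dependent perceptual weights to wavelet detail subbands prior to
# embedded (SPIHT-style) encoding.
import numpy as np
import pywt

def weight_subbands(image, wavelet="bior4.4", level=3, weights=(1.0, 0.8, 0.5)):
    """Return wavelet coefficients with level-dependent perceptual weights applied."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    out = [coeffs[0]]                                  # approximation band left unweighted
    for k, (cH, cV, cD) in enumerate(coeffs[1:]):      # coarsest detail level first
        w = weights[k]
        out.append((w * cH, w * cV, w * cD))
    return out

if __name__ == "__main__":
    img = np.random.default_rng(0).random((64, 64))
    weighted = weight_subbands(img)
    print([tuple(c.shape for c in lvl) if isinstance(lvl, tuple) else lvl.shape
           for lvl in weighted])
```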

Micro-Controller Based Oxy-Fuel Profile Cutting System

In today's era of plasma and laser cutting, machines using an oxy-acetylene flame are still meritorious due to their simplicity and cost effectiveness. The objective of devising a computer-controlled oxy-fuel profile cutting machine arose from the increasing demands on metal cutting with respect to edge quality, circularity and reduced formation of redeposited material. The system is built around an 8-bit microcontroller-based embedded system, which assures the stipulated time response. New Windows-based application software was devised which takes a standard CAD (.DXF) file as input and converts it into the numerical data required by the controller. It uses VB6 as the front end, with MS Access and AutoCAD as the back end. The system is designed around the AT89C51RD2, a powerful 8-bit ISP microcontroller from Atmel, and is optimized for cost effectiveness while maintaining the required accuracy and reliability for complex shapes. The backbone of the system is a carefully designed mechanical assembly which, together with the embedded system, results in an accuracy of about 10 microns while maintaining linearity in the cut. This yields a substantial increase in productivity. The observed results also indicate a reduced interlamellar spacing of pearlite and an increase in the hardness of the edge region.

A “Greedy” Czech Manufacturing Case

The article describes a case study of a Czech medium-sized manufacturing enterprise (ME) in which, due to the European financial crisis, production lines had to be redesigned and optimized in order to minimize the total cost of producing the goods. The problem is treated as an optimization problem of minimizing the total cost of the workload, according to the costs of the possible locations of the workplaces, with an application of a greedy algorithm and a partial analogy to the Set Packing Problem. The placement of the working tables in the company should follow a one-to-one monotone increasing function in order for the total cost of producing the goods to be at a minimum. We use a heuristic greedy approach to solve this linear optimization problem, regardless of the possible greediness that may appear, and we apply it in a Czech ME.
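
The greedy idea can be illustrated with the Python sketch below: workplaces with the heaviest workload are paired with the cheapest available locations, one-to-one, so the matching stays monotone. The data and the load-times-unit-cost formulation are made up for this sketch and are not taken from the case study.

```python
# Greedy one-to-one pairing of workplaces to locations.
def greedy_placement(workloads, location_costs):
    """workloads: dict workplace -> load; location_costs: dict location -> unit cost.
    Returns a one-to-one assignment and its total cost sum(load * unit_cost)."""
    w_sorted = sorted(workloads, key=workloads.get, reverse=True)   # heaviest workplaces first
    l_sorted = sorted(location_costs, key=location_costs.get)       # cheapest locations first
    assignment = dict(zip(w_sorted, l_sorted))
    total = sum(workloads[w] * location_costs[assignment[w]] for w in assignment)
    return assignment, total

if __name__ == "__main__":
    print(greedy_placement({"W1": 40, "W2": 10, "W3": 25},
                           {"L1": 3.0, "L2": 1.0, "L3": 2.0}))
```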