A Hybrid Genetic Algorithm for the Sequence Dependent Flow-Shop Scheduling Problem

The flow-shop scheduling problem (FSP) deals with the scheduling of a set of jobs that visit a set of machines in the same order. The FSP is NP-hard, which means that no efficient algorithm for solving the problem to optimality is known. To meet time requirements and to minimize the makespan of large permutation flow-shop scheduling problems with sequence-dependent setup times on each machine, this paper develops a hybrid genetic algorithm (HGA). The proposed HGA applies a modified approach to generate the initial population of chromosomes and uses an improved heuristic, called the iterated swap procedure, to improve the initial solutions. Three genetic operators are also used to produce good new offspring. The results are compared to some recently developed heuristics, and computational experiments show that the proposed HGA performs very competitively with respect to accuracy and efficiency of solution.
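As a hedged illustration of two ingredients named above, the sketch below computes the makespan of a permutation schedule with sequence-dependent setup times and applies a generic swap-based improvement pass. The paper's actual iterated swap procedure and setup-time convention may differ, and the instance data are hypothetical.

```python
def makespan(seq, proc, setup):
    """Makespan of job sequence `seq` on a permutation flow shop.
    proc[m][j]: processing time of job j on machine m.
    setup[m][i][j]: sequence-dependent setup on machine m when j follows i
    (assumed anticipatory: setup may run before the job arrives)."""
    finish = [0.0] * len(proc)
    prev = None
    for j in seq:
        for m in range(len(proc)):
            ready = finish[m] + (setup[m][prev][j] if prev is not None else 0.0)
            arrive = finish[m - 1] if m > 0 else 0.0
            finish[m] = max(ready, arrive) + proc[m][j]
        prev = j
    return finish[-1]

def iterated_swap(seq, proc, setup):
    """Swap-based improvement pass (a stand-in for the paper's
    iterated swap procedure): try all pairwise swaps, keep improvements."""
    best, best_c = list(seq), makespan(seq, proc, setup)
    improved = True
    while improved:
        improved = False
        for i in range(len(best) - 1):
            for k in range(i + 1, len(best)):
                cand = list(best)
                cand[i], cand[k] = cand[k], cand[i]
                c = makespan(cand, proc, setup)
                if c < best_c:
                    best, best_c, improved = cand, c, True
    return best, best_c
```

In a full HGA, a pass like this would be applied to the initial chromosomes before the genetic operators take over.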

Design of Medical Information Storage System – ECG Signal

This paper presents the design, implementation and results of a storage system for medical information associated with the ECG (electrocardiography) signal. The system includes the signal acquisition modules, the preprocessing and signal processing, followed by a module for transmission and reception of the signal, along with the storage and web display system of the medical platform. The tests were initially performed with this signal, with the aim of including more biosignals under the same system in the future.

Feature Point Reduction for Video Stabilization

Corner detection and optical flow are common techniques for feature-based video stabilization. However, these algorithms are computationally expensive, so they should be performed at a reasonable rate. This paper presents an algorithm for discarding irrelevant feature points and maintaining the relevant ones for future use so as to reduce the computational cost. The algorithm starts by initializing a maintained set. The feature points in the maintained set are examined for their accuracy in modeling. Corner detection is required only when the feature points are insufficiently accurate for future modeling. Then, optical flows are computed from the maintained feature points toward the consecutive frame. After that, a motion model is estimated based on the simplified affine motion model and the least-squares method, with outliers belonging to moving objects present. Studentized residuals are used to eliminate such outliers. The model estimation and elimination processes repeat until no more outliers are identified. Finally, the entire algorithm repeats along the video sequence with the points remaining from the previous iteration used as the maintained set. As a practical application, efficient video stabilization can be achieved by exploiting the computed motion models. Our study shows that the number of times corner detection needs to be performed is greatly reduced, thus significantly reducing the computational cost. Moreover, optical flow vectors are computed for only the maintained feature points, not for outliers, which also reduces the computational cost. In addition, the feature points after reduction are sufficient for background object tracking, as demonstrated in the simple video stabilizer based on our proposed algorithm.
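A minimal sketch of the outlier-elimination step, assuming the simplified affine model is the four-parameter similarity x' = a·x - b·y + tx, y' = b·x + a·y + ty (the paper's exact parameterization may differ); the demo points and transform are hypothetical.

```python
import numpy as np

def fit_model(pts, pts_next):
    """Least-squares fit of the simplified affine model
    x' = a*x - b*y + tx,  y' = b*x + a*y + ty."""
    rows, rhs = [], []
    for (x, y), (u, v) in zip(pts, pts_next):
        rows.append([x, -y, 1.0, 0.0]); rhs.append(u)
        rows.append([y,  x, 0.0, 1.0]); rhs.append(v)
    A, b = np.asarray(rows), np.asarray(rhs)
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    e = b - A @ theta
    s2 = float(e @ e) / (len(b) - 4)                              # residual variance
    h = np.einsum('ij,jk,ik->i', A, np.linalg.inv(A.T @ A), A)    # leverages
    return theta, e, s2, h

def remove_outliers(pts, pts_next, thresh=3.0, var_floor=1e-12):
    """Repeatedly fit the model and drop the point with the largest
    absolute studentized residual until none exceeds `thresh`."""
    keep = list(range(len(pts)))
    while True:
        theta, e, s2, h = fit_model([pts[i] for i in keep],
                                    [pts_next[i] for i in keep])
        if s2 < var_floor or len(keep) <= 5:
            break
        t = np.abs(e) / np.sqrt(s2 * (1.0 - h))        # studentized residuals
        per_point = t.reshape(-1, 2).max(axis=1)       # two equations per point
        worst = int(np.argmax(per_point))
        if per_point[worst] <= thresh:
            break
        keep.pop(worst)
    return keep, theta

# hypothetical demo: 12 tracked inliers under a known transform plus one outlier
pts = [(0, 0), (10, 0), (0, 10), (10, 10), (20, 5), (5, 20), (15, 15),
       (25, 0), (0, 25), (30, 10), (10, 30), (20, 20)]
a, b, tx, ty = 1.02, 0.05, 3.0, -2.0
nxt = [(a * x - b * y + tx, b * x + a * y + ty) for x, y in pts]
pts.append((12.0, 12.0)); nxt.append((200.0, 200.0))   # gross outlier
keep, theta = remove_outliers(pts, nxt)
```

The surviving points would then serve as the maintained set for the next frame.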

High Performance in Parallel Data Integration: An Empirical Evaluation of the Ratio Between Processing Time and Number of Physical Nodes

Many studies have shown that parallelization decreases efficiency [1], [2]. There are many reasons for these decrements. This paper investigates those which appear in the context of parallel data integration. Integration processes generally cannot be divided into packages of identical size (i.e., tasks of identical complexity). The reason for this is unknown, heterogeneous input data, which results in variable task lengths. Process delay is determined by the slowest processing node and has a detrimental effect on the total processing time. With a real-world example, this study will show that while process delay does initially increase with the introduction of more nodes, it ultimately decreases again after a certain point. The example will make use of the cloud computing platform Hadoop and be run inside Amazon's EC2 compute cloud. A stochastic model will be set up which can explain this effect.
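The claim that process delay is set by the slowest node can be illustrated with a small Monte Carlo sketch (not the paper's stochastic model): tasks with random lengths are dealt round-robin to nodes, and the delay is the gap between the slowest and the average node.

```python
import random

def process_delay(total_tasks, n_nodes, trials=500, seed=1):
    """Monte Carlo estimate of process delay: the gap between the slowest
    node's load and the average load, with i.i.d. exponential task lengths
    dealt round-robin to the nodes (an illustrative model, not the paper's)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        loads = [0.0] * n_nodes
        for t in range(total_tasks):
            loads[t % n_nodes] += rng.expovariate(1.0)
        total += max(loads) - sum(loads) / n_nodes
    return total / trials
```

Sweeping `n_nodes` for a fixed number of tasks shows the delay first growing and later shrinking again, qualitatively matching the effect the paper reports.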

Contribution of Vitaton (β-Carotene) to the Rearing Factors, Survival Rate and Visual Flesh Color of Rainbow Trout in Comparison with Astaxanthin

In this study, Vitaton (an organic supplement which contains fermentative β-carotene) and synthetic astaxanthin (CAROPHYLL® Pink) were evaluated as pro-growth factors in the rainbow trout diet. An 8-week feeding trial was conducted to determine the effects of Vitaton versus astaxanthin on the rearing factors, survival rate and visual flesh color of rainbow trout (Oncorhynchus mykiss) with an initial weight of 196±5. Four practical diets were formulated to contain 50 and 80 ppm of β-carotene and astaxanthin, and a control diet was prepared without any pigment. Each diet was fed to triplicate groups of fish reared in fresh water. Fish were fed twice daily. The water temperature fluctuated from 12 to 15 °C, and the dissolved oxygen content was between 7 and 7.5 mg/L during the experimental period. At the end of the experiment, growth and food utilization parameters and survival rate were unaffected by dietary treatments (p>0.05). There was also no significant difference in carcass yield between treatments (p>0.05). No significant difference was recognized between the visual flesh color (SalmoFan score) of fish fed Vitaton-containing diets. On the contrary, feeding on diets containing 50 and 80 ppm of astaxanthin increased the SalmoFan score (flesh astaxanthin concentration) from

Improving Convergence of the Parameter Tuning Process of the Additive Fuzzy System by a New Learning Strategy

An additive fuzzy system comprising m rules with n inputs and p outputs in each rule has at least m(2n + 2p + 1) parameters to be tuned. The system consists of a large number of if-then fuzzy rules and takes a long time to tune its parameters, especially in the case of a large amount of training data. In this paper, a new learning strategy is investigated to cope with this obstacle: parameters that tend toward constant values during the learning process are fixed early and are not tuned again until the end of the learning time. Experiments on applications of the additive fuzzy system to function approximation demonstrate that the proposed approach reduces the learning time and hence improves convergence speed considerably.
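A toy sketch of the freezing idea under stated assumptions: plain gradient tuning of two parameters of a quadratic stand-in cost, where a parameter is frozen once its updates stay below a threshold for several consecutive steps. The thresholds and the cost function are illustrative, not from the paper.

```python
def tune_with_freezing(params, grad_fn, lr=0.1, eps=1e-4, patience=5, iters=500):
    """Gradient tuning in which a parameter is frozen (no longer updated)
    once its update magnitude stays below `eps` for `patience` steps."""
    frozen = [False] * len(params)
    calm = [0] * len(params)
    for _ in range(iters):
        g = grad_fn(params)
        for i in range(len(params)):
            if frozen[i]:
                continue
            step = lr * g[i]
            params[i] -= step
            calm[i] = calm[i] + 1 if abs(step) < eps else 0
            if calm[i] >= patience:
                frozen[i] = True
        if all(frozen):
            break
    return params, frozen

def grad(p):
    # gradient of the stand-in cost (p0 - 2)^2 + (p1 + 1)^2
    return [2.0 * (p[0] - 2.0), 2.0 * (p[1] + 1.0)]

params, frozen = tune_with_freezing([0.0, 0.0], grad)
```

Every frozen parameter is one fewer update per iteration, which is where the time saving comes from when the rule base is large.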

Effects of Material Properties of Warhead Casing on Natural Fragmentation Performance of High Explosive (HE) Warhead

This research paper presents numerical studies of the characteristics of warhead fragmentation in terms of the initial velocities, spray angles and fragment mass distribution of a high explosive (HE) warhead. The behavior of warhead fragmentation depends on the shape and size of the warhead, the thickness of the casing, the type of explosive, and the number and position of detonators. This paper focuses on the effects of the material properties of the warhead casing, i.e., failure strain, initial yield strength and ultimate strength, on the characteristics of warhead fragmentation. It was found that the initial yield and ultimate strength of the casing have minimal effects on the initial velocities and spray angles of fragments. Moreover, a brittle warhead casing with low failure strain tends to produce a higher number of fragments with a lower average fragment mass.

Enhanced Traveling Salesman Problem Solving by Genetic Algorithm Technique (TSPGA)

The well-known NP-complete Traveling Salesman Problem (TSP) is coded in genetic form. A software system is proposed to determine the optimum route for a traveling salesman using a genetic algorithm. The system starts from a matrix of the calculated Euclidean distances between the cities to be visited and a randomly chosen city order as the initial population. New generations are then created repeatedly until a stopping criterion is reached. The search is guided by a solution evaluation function.
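A compact GA sketch along these lines, with order crossover and swap mutation as assumed operators (the abstract does not specify which operators the system uses); the city coordinates in the test are hypothetical.

```python
import math, random

def tour_length(tour, cities):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def order_crossover(p1, p2, rng):
    """OX: copy a slice from p1, fill the rest in the order they appear in p2."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = p1[a:b + 1]
    fill = iter(c for c in p2 if c not in p1[a:b + 1])
    return [c if c is not None else next(fill) for c in child]

def tsp_ga(cities, pop_size=30, generations=200, seed=0):
    rng = random.Random(seed)
    n = len(cities)
    # random initial orders (plus one deterministic order, kept via elitism)
    pop = [list(range(n))] + [rng.sample(range(n), n) for _ in range(pop_size - 1)]
    best = min(pop, key=lambda t: tour_length(t, cities))
    for _ in range(generations):
        new = [best[:]]                                    # elitism
        while len(new) < pop_size:
            p1, p2 = (min(rng.sample(pop, 3),              # tournament selection
                          key=lambda t: tour_length(t, cities))
                      for _ in range(2))
            child = order_crossover(p1, p2, rng)
            if rng.random() < 0.2:                         # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            new.append(child)
        pop = new
        best = min(pop + [best], key=lambda t: tour_length(t, cities))
    return best
```

`tour_length` plays the role of the solution evaluation function that guides the search.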

Analysis of Flow in Cylindrical Mixing Chamber

The article deals with a numerical investigation of an axisymmetric subsonic air-to-air ejector. An analysis of the flow and mixing processes in the cylindrical mixing chamber is made. Several modes with different velocity and ejection ratios are presented. The mixing processes are described, as are the differences between the flow in the initial region of mixing and in the main region of mixing. The lengths of both regions are evaluated, and the transition point and the point where the mixing processes are finished are identified. It was found that the length of the initial region of mixing is strongly dependent on the velocity ratio, while the length of the main region of mixing depends on the velocity ratio only slightly.

Removal of Methylene Blue from Aqueous Solution by Using Gypsum as a Low Cost Adsorbent

Removal of methylene blue (MB) from aqueous solution by adsorption on gypsum was investigated by the batch method. The studies were conducted at 25°C and included the effects of pH and the initial concentration of methylene blue. The adsorption data were analyzed using the Langmuir, Freundlich and Temkin isotherm models. The maximum monolayer adsorption capacity was found to be 36 mg of the dye per gram of gypsum. The data were also analyzed in terms of their kinetic behavior and were found to obey the pseudo-second-order equation.
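For illustration, the Langmuir analysis can be reproduced on hypothetical equilibrium data using the common linearization Ce/qe = Ce/q_max + 1/(K·q_max); the data below are synthetic, generated from the 36 mg/g capacity reported above, and the K value is an assumption.

```python
import numpy as np

# Hypothetical equilibrium data generated from a Langmuir isotherm with
# q_max = 36 mg/g (the capacity reported in the abstract) and K = 0.5 L/mg.
q_max_true, K_true = 36.0, 0.5
Ce = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 40.0])       # equilibrium conc., mg/L
qe = q_max_true * K_true * Ce / (1.0 + K_true * Ce)    # adsorbed amount, mg/g

# Linearized Langmuir: Ce/qe = Ce/q_max + 1/(K*q_max)  -> fit a straight line
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
q_max_fit = 1.0 / slope
K_fit = slope / intercept
```

With real batch data, the quality of this straight-line fit is what decides between the Langmuir, Freundlich and Temkin models.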

A Hybrid Approach Using Particle Swarm Optimization and Simulated Annealing for N-queen Problem

This paper presents a hybrid approach for solving the n-queen problem by combining PSO and SA. PSO is a population-based heuristic method that sometimes gets trapped in local maxima; SA can be used to overcome this problem. Although SA suffers from many iterations and slow convergence on some problems, with good tuning of initial parameters such as the temperature and the length of the temperature stages, SA guarantees convergence. In this article we use discrete PSO (owing to the nature of the n-queen problem) to reach a good local maximum, and then use SA to escape from it. The experimental results show that our hybrid method converges to a result faster than the SA method, especially for high-dimensional n-queen problems.
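A sketch of the SA stage alone (the discrete PSO part is omitted), using the common permutation encoding with swap moves and geometric cooling; all parameter values are illustrative, not taken from the paper.

```python
import math, random

def conflicts(board):
    """Attacking pairs; board[i] = column of the queen in row i (a permutation,
    so row/column conflicts are impossible and only diagonals are checked)."""
    n = len(board)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if abs(board[i] - board[j]) == j - i)

def sa_queens(n, seed=0, t0=2.0, cooling=0.995, t_min=1e-3, steps=20000):
    """SA with swap moves and geometric cooling; restarts from a fresh random
    board until a conflict-free placement is found."""
    rng = random.Random(seed)
    while True:
        board = rng.sample(range(n), n)
        t, e = t0, conflicts(board)
        for _ in range(steps):
            if e == 0:
                return board
            i, j = rng.sample(range(n), 2)
            board[i], board[j] = board[j], board[i]
            e2 = conflicts(board)
            if e2 <= e or rng.random() < math.exp(-(e2 - e) / t):
                e = e2                                   # accept the move
            else:
                board[i], board[j] = board[j], board[i]  # undo
            t = max(t * cooling, t_min)
```

In the hybrid, the starting board would come from the discrete PSO's best particle rather than from a random permutation.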

Formulation, Analysis and Validation of Takagi-Sugeno Fuzzy Modeling for Robotic Manipulators

This paper proposes a methodology for the analysis of the dynamic behavior of a robotic manipulator in continuous time. Initially, this nonlinear system is decomposed into linear submodels and analyzed in the context of linear parameter-varying (LPV) systems. The obtained linear submodels, which represent the local dynamic behavior of the robotic manipulator at selected operating points, are grouped in a Takagi-Sugeno fuzzy structure. The resulting fuzzy model is analyzed and validated through analog simulation as a universal approximator of the robotic manipulator.

Multiobjective Optimization Solution for Shortest Path Routing Problem

The shortest path routing problem is a multiobjective nonlinear optimization problem with constraints. This problem has been addressed by considering the quality-of-service parameters, delay and cost, as objectives either separately or as a weighted sum. Multiobjective evolutionary algorithms can find multiple Pareto-optimal solutions in a single run, and this ability makes them attractive for solving problems with multiple, conflicting objectives. This paper uses an elitist multiobjective evolutionary algorithm based on the Non-dominated Sorting Genetic Algorithm (NSGA) for solving the dynamic shortest path routing problem in computer networks. A priority-based encoding scheme is proposed for population initialization. Elitism ensures that the best solution does not deteriorate in subsequent generations. Results for a sample test network demonstrate the capability of the proposed approach to generate well-distributed Pareto-optimal solutions of the dynamic routing problem in a single run. The results obtained by the NSGA are compared with the single-objective weighting-factor method, for which a genetic algorithm (GA) was applied.
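The non-dominated sorting at the core of NSGA can be sketched as follows for a small set of (delay, cost) points; this is the generic procedure for minimization, not the paper's priority-based encoding or routing-specific machinery.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    """Fast non-dominated sorting: partition solutions into Pareto fronts."""
    n = len(objs)
    S = [[] for _ in range(n)]    # S[i]: solutions that i dominates
    cnt = [0] * n                 # cnt[i]: how many solutions dominate i
    for i in range(n):
        for j in range(n):
            if dominates(objs[i], objs[j]):
                S[i].append(j)
            elif dominates(objs[j], objs[i]):
                cnt[i] += 1
    fronts = [[i for i in range(n) if cnt[i] == 0]]   # front 1: non-dominated
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in S[i]:
                cnt[j] -= 1
                if cnt[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    fronts.pop()                  # drop the trailing empty front
    return fronts

objs = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]   # hypothetical (delay, cost)
fronts = non_dominated_sort(objs)
```

Front 1 here contains the three mutually non-dominated routes; the elitist selection in NSGA fills the next population front by front.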

Influence of Surface-Treated Coarse Recycled Concrete Aggregate on Compressive Strength of Concrete

This paper reports on the influence of surface-treated coarse recycled concrete aggregate (RCA) on the development of the compressive strength of concrete. The coarse RCA was initially treated by separately impregnating it in calcium metasilicate (CM, i.e., wollastonite) or nanosilica (NS) prepared at various concentrations. The effects of both treatment materials on concrete properties (e.g., slump, density and compressive strength) were evaluated. Scanning electron microscopy (SEM) analysis was performed to examine the microstructure of the resulting concrete. Results show that the effective use of treated coarse RCA significantly enhances the compressive strength of concrete. This result is supported by the SEM analysis, which indicates the formation of a dense interface between the treated coarse RCA and the cement matrix. Coarse RCA impregnated in CM solution results in better concrete strength than NS does, and the optimum concentration of CM solution recommended for treating coarse RCA is 10%.

Fifth Order Variable Step Block Backward Differentiation Formulae for Solving Stiff ODEs

Implicit block methods based on the backward differentiation formulae (BDF) for the solution of stiff initial value problems (IVPs) using a variable step size are derived. We construct variable step size block methods which store all the coefficients of the method, with a simplified strategy for controlling the step size, with the intention of optimizing performance in terms of precision and computation time. The strategy involves keeping the step size constant, halving it, or increasing it to 1.9 times the previous step size. The decision to change the step size is determined by the local truncation error (LTE). Numerical results are provided to support the enhanced performance of the proposed method.
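The halve / constant / 1.9x strategy can be illustrated with a first-order stand-in (implicit Euler rather than the paper's fifth-order block BDF), where the LTE is estimated by comparing a full step with two half steps; tolerances and the growth criterion are assumptions.

```python
def implicit_euler_step(y, h, lam):
    # exact implicit solve for the linear stiff test problem y' = lam * y
    return y / (1.0 - h * lam)

def adaptive_bdf1(y0, lam, t_end, h0=0.1, tol=1e-4):
    """Variable-step integration where the step size is kept constant,
    halved, or increased to 1.9x based on an LTE estimate."""
    t, y, h = 0.0, y0, h0
    while t < t_end:
        h = min(h, t_end - t)
        y_full = implicit_euler_step(y, h, lam)
        y_half = implicit_euler_step(implicit_euler_step(y, h / 2, lam),
                                     h / 2, lam)
        lte = abs(y_half - y_full)          # local truncation error estimate
        if lte > tol:
            h *= 0.5                        # reject the step: halve and retry
            continue
        t, y = t + h, y_half                # accept the more accurate value
        if lte < tol / 10.0:
            h *= 1.9                        # comfortably accurate: grow step
        # otherwise keep h constant
    return y
```

On y' = -50y the controller takes tiny steps through the fast initial transient and then grows the step by 1.9x once the solution flattens, which is the behavior the variable-step block BDF is designed to exploit.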

Development of Rotational Smart Lighting Control System for Plant Factory

The rotational smart lighting control system can supply the quantity of light required to grow plants by rotating a few LED and fluorescent lamps instead of the many fixed lamps used in existing plant factories. Because the initial installation of an existing plant factory is expensive, the smart lighting control system was developed to solve this problem. The light intensity required for the growth of crops, the Photosynthetic Photon Flux Density (PPFD), is calculated, and the number of LEDs installed on the blades is set using the lighting simulation program Relux. With this, it is possible to confirm the difference in beam intensity between the center and the outer part of the lighting system while the lighting device is rotating.

One Scheme of Transition Probability Evaluation

In the present work, a scheme for evaluating the transition probability in a quantum system is considered. It is based on the path integral representation of the transition probability amplitude and its evaluation by means of a saddle point method applied to part of the integration variables. The whole integration process is reduced to solving initial value problems for Hamilton's equations with a random initial phase point. The scheme is related to the semiclassical initial value representation approaches, which use a great number of trajectories. In contrast to them, only one path for each initial coordinate value is selected from the total set of generated phase paths in the Monte Carlo process.

Absorption of CO2 in EAF Reducing Slag from Stainless Steel Making Process by Wet Grinding

In the current study, we conducted an experimental investigation on the utilization of electric arc furnace (EAF) reducing slag for the absorption of CO2 via a wet grinding method, carried out under various grinding conditions. The slag was ground in a vibrating ball mill in the presence of CO2 and pure water at ambient temperature. The reaction behavior was monitored with the constant pressure method, and the changes in the volume of the experimental system were measured as a function of grinding time. It was found that CO2 absorption occurred as soon as the grinding started and stopped when the grinding was stopped. CO2 absorption was significantly higher in the case of wet grinding compared to dry grinding. Generally, the amount of CO2 absorbed increased as the amount of water, the weight of slag and the initial pressure increased. However, it decreased when the amount of water exceeded 200 ml and when smaller balls were used. According to this research, the CO2 reacted with the CaO in the slag, forming CaCO3.

Application of Lattice Boltzmann Methods in Heat and Moisture Transfer in Frozen Soil

Although water accounts for only a small percentage of the total mass of soil, it plays an important role in the strength of the structure. Moisture transfer can be carried out by many different mechanisms which may involve heat and mass transfer, thermodynamic phase change, and the interplay of various forces such as viscous, buoyancy, and capillary forces. Continuum models are not well suited for describing phenomena in which the connectivity of the pore space or of the fracture network, or that of a fluid phase, plays a major role. Lattice Boltzmann methods (LBMs), however, are especially well suited to simulating flows around complex geometries. Lattice Boltzmann methods were initially invented for solving fluid flows; recently, multicomponent fluids and phase change have also been included in the equations. By comparing the numerical results with experimental results, the Lattice Boltzmann method with phase change will be optimized.
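A minimal single-component D2Q9 BGK step on a periodic grid, far simpler than the multicomponent phase-change model discussed above, illustrates the method's two-phase structure of collision plus streaming; grid size and relaxation time are illustrative.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann equilibrium distribution."""
    cu = np.einsum('kd,dxy->kxy', c, np.array([ux, uy]))
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau=0.8):
    """One LBM time step: BGK collision, then periodic streaming."""
    rho = f.sum(axis=0)
    ux = np.einsum('kxy,k->xy', f, c[:, 0]) / rho
    uy = np.einsum('kxy,k->xy', f, c[:, 1]) / rho
    f = f + (equilibrium(rho, ux, uy) - f) / tau          # collision
    for k in range(9):                                    # streaming
        f[k] = np.roll(np.roll(f[k], c[k, 0], axis=0), c[k, 1], axis=1)
    return f
```

Both sub-steps conserve mass exactly, which is one reason LBMs handle complex pore geometries robustly; multicomponent and phase-change variants add extra distribution functions and forcing terms on top of this skeleton.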

Complex-Valued Neural Network in Image Recognition: A Study on the Effectiveness of Radial Basis Function

A complex-valued neural network is a neural network which consists of complex-valued inputs and/or weights and/or thresholds and/or activation functions. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of the most important applications of the complex-valued neural network is in image and vision processing. In neural networks, radial basis functions are often used for interpolation in multidimensional space. A radial basis function is a function which has built into it a distance criterion with respect to a centre. Radial basis functions have often been applied in the area of neural networks, where they may be used as a replacement for the sigmoid hidden-layer transfer characteristic in the multilayer perceptron. This paper aims to present exhaustive results of using RBF units in a complex-valued neural network model that uses the back-propagation algorithm (called 'Complex-BP') for learning. Our experimental results demonstrate the effectiveness of a radial basis function in a complex-valued neural network for image recognition over a real-valued neural network. We have studied and stated various observations on a neural network model with RBF units, such as the effect of learning rates, the ranges of the randomly selected initial weights, the error functions used, and the number of iterations required for the convergence of error. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.
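As a rough sketch of complex-valued RBF units (a Gaussian of the complex distance |z - c| is one common choice; the paper's exact activation and its Complex-BP training are not reproduced here), the hidden-to-output weights, being linear, are solved in closed form on a hypothetical target:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical complex centres for 8 RBF units
centers = rng.normal(size=8) + 1j * rng.normal(size=8)

def rbf_features(z, sigma=1.0):
    """Gaussian RBF of the complex distance |z - c| for each centre c."""
    return np.exp(-np.abs(z[:, None] - centers[None, :])**2 / sigma**2)

# hypothetical complex-valued target mapping to approximate
z = rng.normal(size=200) + 1j * rng.normal(size=200)
y = 0.5 * z**2 - 1j * z

# The hidden-to-output weights enter linearly, so they can be solved in
# closed form here (the paper tunes the whole network with Complex-BP).
Phi = rbf_features(z).astype(complex)
w_out, *_ = np.linalg.lstsq(Phi, y, rcond=None)
err = np.mean(np.abs(Phi @ w_out - y)**2)
```

In the full Complex-BP setting, the centres, widths, and output weights would all be adapted by gradient descent on a complex error function rather than fitted once.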