A Heuristic Algorithm Approach for Scheduling of Multi-criteria Unrelated Parallel Machines

In this paper we address a multi-objective scheduling problem for unrelated parallel machines. In unrelated parallel systems, the processing cost/time of a given job may differ from machine to machine. The objective of scheduling is to simultaneously determine the job-machine assignment and the job sequence on each machine such that the total cost of the schedule is minimized. The cost function consists of three components, namely machining cost, earliness/tardiness penalties, and makespan-related cost. The problem is combinatorial in nature; therefore, a Simulated Annealing approach is employed to provide good solutions within reasonable computational times. Computational results show that the proposed approach can efficiently solve such complicated problems.
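
For concreteness, the following is a minimal simulated annealing sketch under assumed data structures (lists of job indices per machine, processing time/cost matrices, due dates) and illustrative cost weights; it mirrors the cost components named above but is not the authors' exact formulation.

```python
# Sketch only: data layout and weights are assumptions, not the paper's model.
import copy
import math
import random

def schedule_cost(seq_by_machine, proc_time, proc_cost, due, w_et=1.0, w_mk=1.0):
    """Machining cost + earliness/tardiness penalties + makespan-related cost."""
    total, makespan = 0.0, 0.0
    for m, seq in enumerate(seq_by_machine):
        t = 0.0
        for j in seq:
            t += proc_time[j][m]
            total += proc_cost[j][m] + w_et * abs(t - due[j])   # E/T penalty around due date
        makespan = max(makespan, t)
    return total + w_mk * makespan

def neighbour(seq_by_machine):
    """Move one randomly chosen job to a random position on a random machine."""
    s = copy.deepcopy(seq_by_machine)
    src = random.choice([m for m in range(len(s)) if s[m]])
    job = s[src].pop(random.randrange(len(s[src])))
    dst = random.randrange(len(s))
    s[dst].insert(random.randint(0, len(s[dst])), job)
    return s

def simulated_annealing(n_jobs, n_machines, cost_fn, T=50.0, alpha=0.99, steps=5000):
    current = [[] for _ in range(n_machines)]
    for j in range(n_jobs):                              # random initial assignment
        current[random.randrange(n_machines)].append(j)
    cur_cost = cost_fn(current)
    best, best_cost = copy.deepcopy(current), cur_cost
    for _ in range(steps):
        cand = neighbour(current)
        cand_cost = cost_fn(cand)
        if cand_cost <= cur_cost or random.random() < math.exp((cur_cost - cand_cost) / T):
            current, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = copy.deepcopy(current), cur_cost
        T = max(T * alpha, 1e-3)                         # geometric cooling with a floor
    return best, best_cost

if __name__ == "__main__":
    random.seed(0)
    n_jobs, n_machines = 8, 3
    proc_time = [[random.randint(1, 9) for _ in range(n_machines)] for _ in range(n_jobs)]
    proc_cost = [[random.randint(5, 20) for _ in range(n_machines)] for _ in range(n_jobs)]
    due = [random.randint(5, 20) for _ in range(n_jobs)]
    print(simulated_annealing(n_jobs, n_machines,
                              lambda s: schedule_cost(s, proc_time, proc_cost, due)))
```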

A High-Performance Technique for Harmonic Elimination Based on Predictive Current Control of a Shunt Active Power Filter

The proper operation of conventional active filters depends on how accurately the system distortion is identified. Using a suitable method for current injection and reactive power compensation also increases filter performance. Accordingly, this paper presents a method based on predictive current control theory for shunt active filter applications. The harmonics of the load current are identified by transforming the load current into the 0–d–q reference frame and eliminating the DC part of the d–q components. The remaining components are then delivered to the predictive current controller as a three-phase reference current via the inverse Park transformation. The system is modeled in the discrete time domain. The proposed method has been tested using a MATLAB model for a nonlinear load (with Total Harmonic Distortion = 20%). The simulation results indicate that the proposed filter causes a nearly sinusoidal current (THD = 0.15%) to flow through the source. In addition, the results show that the filter tracks the reference current accurately.
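
As an illustration of the harmonic-extraction step described above, the sketch below applies a 0–d–q (Park) transformation, removes the DC part of the d–q components with a one-cycle moving average, and maps the remainder back to a three-phase reference; the filter choice, test signal, and data layout are assumptions, and the code is not the authors' predictive controller.

```python
# Numerical sketch only; a moving-average filter stands in for the DC extraction.
import numpy as np

def park(theta):
    """3x3 abc -> 0-d-q transformation matrix at synchronising angle theta."""
    return (2.0 / 3.0) * np.array([
        [0.5,             0.5,                         0.5],
        [np.cos(theta),   np.cos(theta - 2*np.pi/3),   np.cos(theta + 2*np.pi/3)],
        [-np.sin(theta),  -np.sin(theta - 2*np.pi/3),  -np.sin(theta + 2*np.pi/3)]])

def harmonic_reference(i_abc, theta, fs, f1=50.0):
    """Three-phase reference containing only the harmonic content of the load current.

    i_abc: (3, N) load currents, theta: (N,) grid angle, fs: sampling rate in Hz."""
    n = i_abc.shape[1]
    odq = np.column_stack([park(theta[k]) @ i_abc[:, k] for k in range(n)])
    # The fundamental maps to the DC part of the d-q components; estimate that DC
    # part with a one-cycle moving average and keep only the harmonic (AC) part.
    win = int(round(fs / f1))
    kernel = np.ones(win) / win
    dc = np.vstack([np.convolve(row, kernel, mode="same") for row in odq])
    ac = odq - dc
    return np.column_stack([np.linalg.inv(park(theta[k])) @ ac[:, k] for k in range(n)])

# Example: a distorted three-phase load current with a 5th-harmonic component.
fs, f1 = 10_000.0, 50.0
t = np.arange(0.0, 0.1, 1.0 / fs)
theta = 2 * np.pi * f1 * t
phases = (0.0, -2 * np.pi / 3, 2 * np.pi / 3)
i_abc = np.vstack([np.sin(theta + p) + 0.2 * np.sin(5 * (theta + p)) for p in phases])
i_ref = harmonic_reference(i_abc, theta, fs)   # compensating reference for the inverter
```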

Probabilistic Model Development for Project Performance Forecasting

In this paper, a model for forecasting project cost performance is developed based on past project cost and time performance. The study presents a probabilistic project control concept to ensure an acceptable forecast of project cost performance. In this concept, project activities are classified into sub-groups called control accounts. A Stochastic S-Curve (SS-Curve) is then obtained for each sub-group, and the project SS-Curve is obtained by summing the sub-groups' SS-Curves. In this model, project cost uncertainty is represented by Beta distribution functions of the activity costs required to complete the project at selected time sections during project execution; these distributions are extracted from a variety of sources. Based on this model, once a portion of the project has been completed, project performance is measured via Earned Value Management and used to adjust the initial cost probability distribution functions. The future project cost performance is then predicted using the Monte Carlo simulation method.
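
A minimal Monte Carlo sketch of the cost-aggregation step is shown below; the Beta shape parameters and cost ranges per activity are hypothetical placeholders standing in for the distributions the model would extract from project data.

```python
# Sketch with invented inputs: each activity's remaining cost is a scaled Beta
# variable; sampled costs are summed and percentiles give the probabilistic forecast.
import numpy as np

rng = np.random.default_rng(0)

# (alpha, beta, low, high) of the scaled Beta distribution for each activity's cost.
activities = {
    "A1": (2.0, 5.0, 8_000, 14_000),
    "A2": (3.0, 3.0, 20_000, 30_000),
    "A3": (4.0, 2.0, 5_000, 9_000),
}

def sample_project_cost(n_samples=10_000):
    totals = np.zeros(n_samples)
    for a, b, low, high in activities.values():
        totals += low + (high - low) * rng.beta(a, b, size=n_samples)
    return totals

costs = sample_project_cost()
print("P50 estimate at completion:", np.percentile(costs, 50))
print("P80 estimate at completion:", np.percentile(costs, 80))
```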

Quasi Multi-Pulse Back-to-Back Static Synchronous Compensator Employing Line Frequency Switching 2-Level GTO Inverters

A back-to-back static synchronous compensator (BtBSTATCOM) consists of two voltage-source converters (VSCs) connected back-to-back through a common DC link in a substation. This configuration extends the capabilities of the conventional STATCOM in that bidirectional active power transfer from one bus to another becomes possible. In this paper, the VSCs are designed in quasi multi-pulse form, in which the GTOs are triggered only once per cycle, and are modeled in PSCAD/EMTDC. The design details of the VSCs as well as the gate switching circuits and controllers are presented in full. The regulation modes of the BtBSTATCOM are verified and tested on a multi-machine power system through different simulation cases. The results, presented in the form of typical time responses, show that the practical PI controllers remain robust and stable during start-up, set-point changes, and line faults.

Identifying Significant Factors of Brick Laying Process through Design of Experiment and Computer Simulation: A Case Study

Improving performance measures in construction processes has been a major concern for managers and decision makers in the industry. They seek ways to identify the key factors that have the largest effect on the process; identifying such factors can guide them to focus on the right parts of the process in order to obtain the best possible result. In the present study, design of experiments (DOE) has been applied to a computer simulation model of the bricklaying process to determine the significant factors, with productivity chosen as the response of the experiment. To this end, four controllable factors and their interactions have been examined, and the best level has been determined for each factor. The results indicate that three factors, namely brick labor, mortar labor, and the inter-arrival time of mortar, along with the interaction between brick labor and mortar labor, are significant.
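
The sketch below illustrates how such factorial effects could be estimated from simulation output; the coded 2-level design and the productivity values are invented example data, not the case-study results.

```python
# Hypothetical 2^3 factorial analysis; the response values are made up so that the
# brick-labor, mortar-labor and interaction effects stand out, mirroring the DOE step.
import numpy as np

# Coded levels (-1/+1) for: brick labor (A), mortar labor (B), mortar inter-arrival time (C).
design = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
productivity = np.array([45, 41, 47, 43, 51, 47, 65, 61], dtype=float)  # bricks/hour (example)

def effect(column):
    """Average response at the high level minus average at the low level."""
    return productivity[column == 1].mean() - productivity[column == -1].mean()

A, B, C = design.T
print("A  (brick labor)        :", effect(A))       # 12.0
print("B  (mortar labor)       :", effect(B))       #  8.0
print("C  (mortar arrival time):", effect(C))       # -4.0
print("A x B interaction       :", effect(A * B))   #  6.0
```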

Technique for Processing and Preservation of Human Amniotic Membrane for Ocular Surface Reconstruction

Human amniotic membrane (HAM) is a useful biological material for the reconstruction of a damaged ocular surface. The processing and preservation of HAM are critical to protect patients undergoing amniotic membrane transplantation (AMT) from cross-infection. For HAM preparation, a human placenta is obtained after an elective cesarean delivery. Before collection, the donor is screened for seronegativity for HCV, HBsAg, HIV, and syphilis. After collection, the placenta is washed in balanced salt solution (BSS) in a sterile environment. The amniotic membrane is then separated from the placenta and chorion while the preparation is kept in BSS. Scraping of the HAM is then carried out manually until all debris is removed and a clear, transparent membrane is obtained. Nitrocellulose membrane filters are then placed on the stromal side of the HAM and cut around the edges, with a small portion of membrane folded towards the other side to make it easy to separate during surgery. The HAM is finally stored in a 1:1 solution of glycerine and Dulbecco's Modified Eagle Medium (DMEM) containing antibiotics. The capped Borosil vials containing the HAM are kept at -80°C until use. At the time of surgery, a vial is thawed to room temperature and opened under sterile operating theatre conditions.

Measurement of Small PDs in Compressed SF6 (10%)–N2 (90%) Gas Mixture

Partial discharge (PD) measurement is a very important means of assessing the integrity of insulation systems in high-voltage apparatus. In compressed gas insulation systems, floating particles can initiate partial discharge activity, which adversely affects the performance of the insulation. Partial discharges below the inception voltage also play a crucial role in degrading the insulation over a period of time. This paper discusses the effect of loose and fixed copper and nichrome wire particles on the PD characteristics of SF6–N2 (10:90) gas mixtures at a pressure of 0.4 MPa. The partial discharge statistical parameters and their correlation with the observed results are discussed.

Discrete Vector Control for Induction Motor Drives with the Rotor Time Constant Update

In this paper, we investigate vector control of an induction machine while taking into account the discretization of the control. The purpose is to show how to build a discrete model of the current control that includes an update of the rotor time constant. The simulation results obtained are very satisfactory; this was made possible by a good choice of the regulator parameter values, which confirms the soundness of the method used for selecting the parameters of the discrete regulators. The simulation results are presented at the end of this paper.

Data Preprocessing for Supervised Learning

Many factors affect the success of Machine Learning (ML) on a given task. The representation and quality of the instance data are first and foremost: if there is much irrelevant and redundant information, or noisy and unreliable data, then knowledge discovery during the training phase becomes more difficult. It is well known that data preparation and filtering steps take a considerable amount of processing time in ML problems. Data pre-processing includes data cleaning, normalization, transformation, feature extraction and selection, and so on, and its product is the final training set. It would be convenient if a single sequence of data pre-processing algorithms gave the best performance for every data set, but this is not the case. We therefore present the best-known algorithms for each step of data pre-processing so that practitioners can achieve the best performance on their data sets.
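
As an illustration, one possible pre-processing chain (cleaning, normalization, feature selection) can be assembled with scikit-learn as sketched below; the concrete steps and the downstream learner are illustrative choices, not a universally best sequence.

```python
# Illustrative pipeline; step choices are examples, not a recommendation for all data sets.
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

preprocess_and_learn = Pipeline([
    ("clean", SimpleImputer(strategy="median")),           # data cleaning: fill missing values
    ("scale", StandardScaler()),                           # normalization / transformation
    ("select", SelectKBest(score_func=f_classif, k=10)),   # feature selection
    ("model", LogisticRegression(max_iter=1000)),          # downstream learner
])

# preprocess_and_learn.fit(X_train, y_train) would then produce the final training
# representation and the fitted model in a single, repeatable step.
```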

Integrating Big Island Layout with Pull System for Production Optimization

Lean manufacturing is a production philosophy made popular by Toyota Motor Corporation (TMC). It is globally known as the Toyota Production System (TPS) and has the ultimate aim of reducing cost by thoroughly eliminating wastes, or muda. TPS embraces just-in-time (JIT) manufacturing, achieving cost reduction through lead-time reduction. JIT manufacturing can be achieved by implementing a pull system in production. Furthermore, TPS aims to improve productivity and create continuous flow in production by arranging machines and processes in cellular configurations, known as cellular manufacturing systems (CMS). This paper studies the integration of CMS with the pull system to establish a Big Island-Pull system production for high-mix low-volume (HMLV) products in an automotive component industry. The paper uses the built-in JIT system steps adapted from TMC to create the pull-system production and also to create a shojinka line which, based on takt time, has the flexibility to adapt to demand changes simply by adding or removing manpower. This leads to optimization of production.
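
The takt-time arithmetic that drives the shojinka manpower adjustment can be sketched as follows; all figures are made-up example values.

```python
# Example numbers only; real values come from the line's demand and work content.
import math

def takt_time(available_seconds_per_shift, demand_per_shift):
    """Takt time: the pace at which the pull system must deliver one unit."""
    return available_seconds_per_shift / demand_per_shift

def operators_required(total_manual_work_seconds, takt):
    """Manpower needed so that the total manual work content fits within the takt time."""
    return math.ceil(total_manual_work_seconds / takt)

takt = takt_time(available_seconds_per_shift=27_000, demand_per_shift=450)   # 60 s per unit
print(operators_required(total_manual_work_seconds=210, takt=takt))          # -> 4 operators
```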

A Codebook-based Redundancy Suppression Mechanism with Lifetime Prediction in Cluster-based WSN

A wireless sensor network (WSN) comprises sensor nodes designed to sense the environment and transmit the sensed data back to the base station via multi-hop routing so that physical phenomena can be reconstructed. Since physical phenomena exhibit significant temporal and spatial redundancy, sensor nodes need redundancy suppression algorithms (RSAs) to lower energy consumption by reducing the transmission of redundant data. A conventional RSA is the threshold-based RSA, which sets a threshold to suppress redundant data. Although many temporal and spatial RSAs have been proposed, temporal-spatial RSAs are rarely proposed because it is difficult to determine when to apply the temporal or the spatial scheme. In this paper, we propose a novel temporal-spatial redundancy suppression algorithm, the Codebook-based Redundancy Suppression Mechanism (CRSM). CRSM adopts vector quantization to generate a codebook, which makes a temporal-spatial RSA easy to implement. CRSM not only achieves power saving and reliability for the WSN but also provides predictability of the network lifetime. Simulation results show that CRSM extends the network lifetime by at least 23% compared with other RSAs.
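
A minimal vector-quantization sketch of the codebook idea is given below (it is not CRSM itself): sensor readings are mapped to codeword indices, and an index is reported only when it changes, which suppresses temporally redundant transmissions.

```python
# Toy sketch: an 8-entry codebook built offline with k-means stands in for the
# shared codebook; the node then transmits codeword indices instead of raw data.
import numpy as np
from scipy.cluster.vq import kmeans, vq

rng = np.random.default_rng(1)
history = rng.normal(25.0, 2.0, size=(500, 2))       # past (temperature, humidity) samples
codebook, _ = kmeans(history, 8)                     # codebook shared with the cluster head

def indices_to_send(new_samples, codebook):
    """Return (time, index) pairs; repeated indices are suppressed before transmission."""
    codes, _ = vq(new_samples, codebook)
    sent, last = [], None
    for t, c in enumerate(codes):
        if c != last:                                # transmit only when the codeword changes
            sent.append((t, int(c)))
            last = c
    return sent

print(indices_to_send(rng.normal(25.0, 2.0, size=(20, 2)), codebook))
```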

Multi-Agent Simulation of Wayfinding for Rescue Operation during Building Fire

Recent research on human wayfinding has focused mainly on mental representations rather than on the processes of wayfinding. The objective of this paper is to demonstrate the rationale behind applying the multi-agent simulation paradigm to the modeling of rescue-team wayfinding, in order to develop a computational theory of perceptual wayfinding in crisis situations using image schemata and affordances, which explains how people find a specific destination in an unfamiliar building such as a hospital. The hypothesis of this paper is that successful navigation is possible if the agents are able to make the correct decisions through well-defined cues in critical cases, so the design of the building signage is evaluated through multi-agent-based simulation. In addition, a special case of wayfinding in a building, finding one's way through three hospitals, is used to demonstrate the model, and the total rescue time for a rescue operation during a building fire is computed. The paper discusses the computed rescue times for various signage placements and provides experimental results for the optimization of building signage design; the most appropriate signage design is the one that yields the shortest total rescue time across the various situations.

Analysis of Aiming Performance for Games Using Mapping Method of Corneal Reflections Based on Two Different Light Sources

The fundamental motivation of this paper is how gaze estimation can be utilized effectively in games. In games, precise point-of-gaze estimation is not always required for aiming at targets; the ability to move a cursor onto an aiming target accurately is just as significant. From a game production point of view, expressing head movement and gaze movement separately is sometimes advantageous for conveying a sense of presence; panning a background image with head movement while moving a cursor according to gaze movement is a representative example. The widely used technique for point-of-gaze (POG) estimation is based on the relative position between the center of the corneal reflection of infrared light sources and the center of the pupil. However, calculating the pupil center requires relatively complicated image processing, so the resulting delay is a concern, since minimizing input delay is one of the most significant requirements in games. In this paper, a method is proposed for estimating head movement using only the corneal reflections of two infrared light sources at different locations, together with a method for controlling a cursor using gaze movement as well as head movement. The proposed methods are evaluated using game-like applications; performance similar to that of conventional methods is confirmed, and aiming control with lower computational load and stress-free, intuitive operation is obtained.
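
A geometric toy sketch of the head-movement idea follows: the two corneal glints produced by the two light sources give a head reference (their midpoint), and cursor motion is driven by the glint displacement; the gain and the coordinates are hypothetical, uncalibrated values, and the cursor mapping is a stand-in for the proposed combined head/gaze control.

```python
# Toy geometry only; real use would calibrate the gain and combine gaze movement.
import numpy as np

def head_and_cursor(glint_left, glint_right, baseline_mid, gain=8.0):
    """glint_*: (x, y) image coordinates of the two corneal reflections."""
    left = np.asarray(glint_left, dtype=float)
    right = np.asarray(glint_right, dtype=float)
    mid = (left + right) / 2.0                       # head reference: midpoint of the glints
    head_shift = mid - np.asarray(baseline_mid, dtype=float)
    cursor_delta = gain * head_shift                 # map head shift to screen-space motion
    return head_shift, cursor_delta

head, cursor = head_and_cursor((310, 242), (330, 240), baseline_mid=(318, 238))
print("head shift:", head, "cursor delta:", cursor)
```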

Protein Secondary Structure Prediction

Protein structure determination and prediction have been a focal research subject in the field of bioinformatics due to the importance of protein structure in understanding the biological and chemical activities of organisms. The experimental methods used by biotechnologists to determine the structures of proteins demand sophisticated equipment and time. A host of computational methods have been developed to predict the location of secondary structure elements in proteins, complementing or providing insights into experimental results. However, the prediction accuracies of these methods rarely exceed 70%.

Oscillation Effect of the Multi-stage Learning for the Layered Neural Networks and Its Analysis

This paper proposes an efficient learning method for layered neural networks based on the selection of training data and on the input characteristics of an output layer unit. Compared with more recent neural networks, such as pulse neural networks and quantum neuro-computation, the multilayer network is widely used due to its simple structure. When the learning objects are complicated, however, problems such as unsuccessful learning or excessively long learning times remain unsolved. Focusing on the input data during the learning stage, we undertook an experiment to identify the data that produce large errors and interfere with the learning process. Our method divides the learning process into several stages. In general, the input characteristics of an output layer unit oscillate during the learning process for complicated problems. The multi-stage learning method proposed by the authors for function approximation problems classifies the learning data in a phased manner, focusing on their learnability prior to learning in the multilayered neural network, and its validity is demonstrated. Specifically, this paper verifies by computer experiments that both learning accuracy and learning time are improved when the BP method is used as the learning rule within the multi-stage learning method. In learning, the oscillatory phenomena of the learning curve play an important role in learning performance. The authors also discuss the mechanisms by which these oscillatory phenomena occur and, by observing behavior during learning, the reasons why the errors of some data remain large even after learning.
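
A hedged sketch of the staged-training idea is given below (it is not the authors' exact procedure): the data are ranked by how hard they currently are for the network and are introduced in stages, easiest first, using incremental backpropagation updates.

```python
# Staged training sketch with a synthetic function-approximation task; the stage
# count, fractions and network size are arbitrary illustration values.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(400, 1))
y = np.sin(3.0 * X[:, 0]) + 0.05 * rng.normal(size=400)

net = MLPRegressor(hidden_layer_sizes=(20,), learning_rate_init=0.01)
net.partial_fit(X, y)                                   # initialise the network

for stage in (1, 2, 3):                                 # three learning stages
    errors = (net.predict(X) - y) ** 2
    keep = np.argsort(errors)[: int(len(X) * stage / 3)]    # easiest fraction first
    for _ in range(200):                                # incremental BP passes on this stage
        net.partial_fit(X[keep], y[keep])
    mse = float(np.mean((net.predict(X) - y) ** 2))
    print(f"stage {stage}: mean squared error = {mse:.4f}")
```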

Web Pages Aesthetic Evaluation Using Low-Level Visual Features

Web sites are rapidly becoming the preferred medium for daily tasks such as information search, company presentation, and shopping. At the same time, we live in a period in which visual appearance plays an increasingly important role in daily life. In spite of designers' efforts to develop web sites that are both user-friendly and attractive, it is difficult to guarantee the outcome's aesthetic quality, since visual appearance is a matter of individual perception and opinion. In this study, we attempt to develop an automatic system for the aesthetic evaluation of web pages, the building blocks of web sites. Based on image processing techniques and artificial neural networks, the proposed method categorizes an input web page according to its visual appearance and aesthetic quality. The employed features are multiscale/multidirectional textural and perceptual color properties of the web pages, fed to a perceptron ANN trained as the evaluator. The method is tested on university web sites, and the results suggest that it performs well in web page aesthetic evaluation tasks, with around 90% correct categorization.

An Improved Quality Adaptive Rate Filtering Technique Based on the Level Crossing Sampling

Most systems deal with time-varying signals. Power efficiency can be achieved by adapting the system activity to the input signal variations. In this context, an adaptive-rate filtering technique based on level-crossing sampling is devised. It adapts the sampling frequency and the filter order to the local variations of the input signal, thereby correlating the processing activity with the signal variations. Interpolation is required in the proposed technique; a drastic reduction in the interpolation error is achieved by exploiting symmetry during the interpolation process. The processing error of the proposed technique is calculated, and its computational complexity is deduced and compared with that of the classical approach. The results promise a significant gain in computational efficiency and hence in power consumption.
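
The level-crossing sampling idea that drives the adaptive activity can be sketched as follows: a sample is kept only when the signal crosses one of a set of uniformly spaced levels, so the local sampling rate follows the signal variations; the quantization step and the test signal are arbitrary.

```python
# Sketch of a uniform-level crossing sampler; delta and the test signal are examples.
import numpy as np

def level_crossing_sample(x, t, delta):
    """Return the (times, values) retained by a level-crossing sampler with step delta."""
    kept_t, kept_x = [t[0]], [x[0]]
    last_level = np.round(x[0] / delta)
    for ti, xi in zip(t[1:], x[1:]):
        level = np.round(xi / delta)
        if level != last_level:                  # a quantization level was crossed
            kept_t.append(ti)
            kept_x.append(level * delta)         # record the crossed level
            last_level = level
    return np.array(kept_t), np.array(kept_x)

t = np.linspace(0.0, 1.0, 4000)
x = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 40 * t) * (t > 0.5)
ts, xs = level_crossing_sample(x, t, delta=0.05)
print(len(ts), "samples kept out of", len(t))    # more samples where the signal is active
```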

Photodegradation of Optically Trapped Polystyrene Beads at 442 nm

Polystyrene particles of different sizes are optically trapped with a Gaussian beam from a He-Cd laser operating at 442 nm. The particles are observed to exhibit luminescence after a certain trapping time, followed by escape from the optical trap. The observed luminescence is explained in terms of photodegradation of the polystyrene backbone. It is speculated that these chemical modifications also play a role in the escape of the particles from the trap. Varying the particle size and the laser power shows that these parameters have a great influence on the observed phenomena.

Dynamic Action Induced by a Walking Pedestrian

The main focus of this paper is on human-induced forces. Almost all existing force models for this type of load (defined either in the time or the frequency domain) are developed under the assumption of perfect periodicity of the force and are based on force measurements conducted on rigid (i.e., high-frequency) surfaces. To verify the conclusions of different authors, vertical pressure measurements during walking were performed using pressure gauges in various configurations, and the obtained forces are analyzed using the Fourier transform. This load is often decisive in the design of footbridges. Design criteria and load models proposed by widely used standards and by other researchers are introduced and compared.
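
A minimal sketch of the Fourier step is shown below: a (here synthetic) vertical walking force is decomposed into harmonics of the pacing rate to obtain dynamic load factors; the weight, pacing rate, and DLF values used to synthesize the signal are examples only.

```python
# Synthetic example; in practice `force` would be the measured pressure-gauge record.
import numpy as np

fs, fp, G = 200.0, 2.0, 700.0                        # sample rate (Hz), pacing rate (Hz), weight (N)
t = np.arange(0.0, 10.0, 1.0 / fs)
force = G * (1 + 0.4 * np.sin(2 * np.pi * fp * t) + 0.1 * np.sin(2 * np.pi * 2 * fp * t + 0.5))

spectrum = np.fft.rfft(force) / len(force)
freqs = np.fft.rfftfreq(len(force), d=1.0 / fs)
for k in (1, 2, 3):                                  # first three harmonics of the pacing rate
    idx = np.argmin(np.abs(freqs - k * fp))
    dlf = 2 * np.abs(spectrum[idx]) / G              # dynamic load factor of harmonic k
    print(f"harmonic {k}: DLF = {dlf:.2f}")
```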

Influence of Static Pressure on Viability of Entomopathogenic Nematodes – Steinernema feltiae

The entomopathogenic nematode Steinernema feltiae is a component of many biological pesticides, which are applied by means of spraying machines. The influence of the duration of high-pressure exposure on nematode viability has been experimentally investigated in order to determine whether the static pressure inside the sprayer installation is able to destroy the nematodes. The applied pressure was 55 MPa and the maximum exposure time was 3 hours. Changes in viability were found in the pressurized samples of nematodes mixed with water.