Six Sigma Solutions and its Benefit-Cost Ratio for Quality Improvement

This is an applied research study presenting the improvement of production quality using six sigma solutions together with an analysis of the benefit-cost ratio. The case of interest is the production of concrete tiles. The production had faced a high rate of nonconforming products caused by inappropriate surface coating, and a low process capability with respect to the strength of the tiles. Surface coating and tile strength are the characteristics most critical to the quality of this product. The improvement followed the five stages of six sigma (define, measure, analyze, improve, control). After the improvement, the production yield rose to the 80% target, and the defect rate of the coating process was markedly reduced from 29.40% to 4.09%. The process capability index for tile strength, defined from the customer specification, increased from 0.87 to 1.08. The improvement saved material losses of 3.24 million baht (about 0.11 million US dollars). The benefits of the improvement were assessed from (1) the reduction in the number of nonconforming tiles, valued at factory price, attributable to the coating improvement and (2) the material saved through the increase in process capability. The benefit-cost ratio of the overall improvement was as high as 7.03. The investment showed no return during the define, measure, and analyze stages and at the beginning of the improve stage, after which the ratio kept increasing. This is because the first three stages mainly determine the causes of the problem and their effects rather than improve the process, so the benefit-cost ratio only begins to exist in the improve stage and onward. Within each stage, the individual benefit-cost ratio was much higher than the cumulative one, since costs accumulate from the first stage of six sigma onward. Monitoring the benefit-cost ratio during the improvement project supports cost-saving decisions for similar activities within the project and for new projects. In conclusion, determining the behavior of the benefit-cost ratio throughout the six sigma implementation period provides useful data for managing quality improvement with optimal effectiveness; this is an additional outcome beyond the regular six sigma procedure.
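
As a minimal illustration of the distinction drawn above (the stage index k and the benefit and cost symbols are our own notation, not the authors'), the individual and cumulative benefit-cost ratios of stage k can be written as:

```latex
\mathrm{BCR}_k = \frac{B_k}{C_k},
\qquad
\mathrm{BCR}^{\mathrm{cum}}_k = \frac{\sum_{i=1}^{k} B_i}{\sum_{i=1}^{k} C_i}
```

Because the benefits of the define, measure, and analyze stages are essentially zero while their costs already enter the cumulative denominator, the cumulative ratio necessarily lags the individual one.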

Electrical Resistivity of Subsurface: Field and Laboratory Assessment

The objective of this paper is to study the complexity of electrical resistivity between field and laboratory measurements, in order to improve the effectiveness of data interpretation for geophysical ground resistivity surveys. A geological outcrop in Penang, Malaysia with an obvious layering contact was chosen as the study site. Two-dimensional geoelectrical resistivity imaging was used to map the resistivity distribution of the subsurface, while a few subsurface samples were obtained for laboratory work. In this study, the resistivity of the samples in their original condition was measured in the laboratory, using a time-domain low-voltage technique for the granite core samples and a soil resistivity measuring set for the soil samples. The experimental results from both schemes are studied, analyzed, calibrated and verified, including their basis and correlation, degree of tolerance and material characteristics. Consequently, the significant difference between the two schemes is explained comprehensively in this paper.

Concurrent Approach to Data Parallel Model using Java

Parallel programming models exist as an abstraction of hardware and memory architectures. Several parallel programming models are in common use: the shared memory model, the thread model, the message passing model, the data parallel model, the hybrid model, Flynn's models, the embarrassingly parallel computations model, and the pipelined computations model. These models are not specific to a particular type of machine or memory architecture. This paper presents a model program for a concurrent approach to the data parallel model using Java.
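
As a minimal sketch of the data parallel model in Java (the class and variable names below are our own illustration, not the model program presented in the paper), the same operation is applied concurrently to disjoint chunks of an array and the partial results are then combined:

```java
import java.util.*;
import java.util.concurrent.*;

// Minimal data-parallel sketch: each worker applies the same operation to its
// own chunk of a shared array, and the partial results are combined at the end.
public class DataParallelSum {
    public static void main(String[] args) throws Exception {
        double[] data = new double[1_000_000];
        Arrays.fill(data, 1.0);

        int workers = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        int chunk = (data.length + workers - 1) / workers;

        List<Future<Double>> partials = new ArrayList<>();
        for (int w = 0; w < workers; w++) {
            final int lo = w * chunk;
            final int hi = Math.min(lo + chunk, data.length);
            partials.add(pool.submit(() -> {
                double s = 0.0;
                for (int i = lo; i < hi; i++) s += data[i];   // same operation, disjoint data
                return s;
            }));
        }

        double total = 0.0;
        for (Future<Double> f : partials) total += f.get();   // combine partial results
        pool.shutdown();
        System.out.println("sum = " + total);
    }
}
```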

A New Design of Mobile Thermoelectric Power Generation System

This paper presents a compact thermoelectric power generation system based on the temperature difference across the elements. The system transfers the heat energy of burning directly into electric energy. The proposed system consists of a thermoelectric generator and a power control box. In the generator, there are 4 thermoelectric modules (TEMs), each of which uses 2 thermoelectric chips (TEs) and 2 cold sinks, 1 thermal absorber, and 1 thermal conduction flat board. In the power control box, there are 1 energy storage device, 1 converter, and 1 inverter. The total net generated power is about 11 W. The system uses a commercial portable gas stove, or burns timber or coal, as the heat source, which is easily obtained. It adopts solid-state thermoelectric chips as the heat-to-electricity conversion parts. The system has the advantages of being lightweight, quiet, and mobile, requiring no maintenance, and having an easily supplied heat source. The system can be used as long as burning is allowed. It works well in highly mobile outdoor situations by providing power for illumination, entertainment equipment or wireless equipment at a refuge. During heavy storms such as typhoons, when solar panels become ineffective and wind-powered machines malfunction, the thermoelectric power generator can continue providing vital power.

An Overview of Islanding Detection Methods in Photovoltaic Systems

The issue of unintentional islanding in PV grid interconnection still remains a challenge in grid-connected photovoltaic (PV) systems. This paper presents an overview of the commonly used anti-islanding detection methods practically applied in grid-connected PV systems. Anti-islanding methods can generally be classified into four major groups: passive methods, active methods, hybrid methods and communication-based methods. Active methods have been the preferred detection technique over the years due to their very small non-detection zone (NDZ) in small-scale distributed generation. Passive methods are comparatively simpler than active methods in terms of circuitry and operation; however, they suffer from a large NDZ that significantly reduces their performance. Communication-based methods inherit the advantages of active and passive methods with reduced drawbacks. The hybrid method, which evolved from the combination of active and passive methods, has been shown by many researchers to achieve accurate anti-islanding detection. For each of the studied anti-islanding methods, the operation is analyzed and the advantages and disadvantages are compared and discussed. It is difficult to pinpoint a generic method for a specific application, because most of the methods discussed are governed by the nature of the application and system-dependent elements. This study concludes that setup and operating cost is the vital factor in anti-islanding method selection in order to achieve a minimal compromise between cost and system quality.

Combining Color and Layout Features for the Identification of Low-resolution Documents

This paper proposes a method combining color and layout features for identifying documents captured with low-resolution handheld devices. On the one hand, the document image color density surface is estimated and represented by an equivalent ellipse; on the other hand, the document's shallow layout structure is computed and represented hierarchically. The combined color and layout features are arranged in a symbolic file, which is unique for each document and is called the document's visual signature. Our identification method first uses the color information in the signatures to narrow the search space to documents having a similar color distribution, and then selects the document with the most similar layout structure in the remaining search space. Finally, our experiments consider slide documents, which are often captured using handheld devices.

European Radical Right Parties as Actors in Securitization of Migration

This study reveals that anti-immigrant policies in Europe result from a process of securitization, and that, within this process, radical right parties have been formulating discourses and approaches through a construction process by using some common security themes. These security themes can be classified as national security, economic security, cultural security and internal security. The frequency with which radical right parties use these themes may vary according to the specific historical, social and cultural characteristics of a particular country.

A Novel Arabic Text Steganography Method Using Letter Points and Extensions

This paper presents a new steganography approach suitable for Arabic texts. It can be classified among steganography feature-coding methods. The approach hides secret information bits within the letters, benefiting from their inherited points. To mark the specific letters holding secret bits, the scheme relies on two features: the existence of points in the letters and the redundant Arabic extension character (kashida). We use pointed letters with an extension to hold the secret bit 'one' and un-pointed letters with an extension to hold 'zero'. This steganography technique is also attractive for other languages with scripts similar to Arabic, such as Persian and Urdu.
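
A simplified sketch of this encoding rule is given below; the letter classification, method names and example text are our own illustration, not the authors' implementation. A '1' bit is carried by a pointed letter followed by the extension character (kashida, U+0640), and a '0' bit by an un-pointed letter followed by kashida:

```java
// Simplified sketch of the point-and-extension embedding rule (our own
// illustration, not the authors' implementation).
public class KashidaStego {
    // partial list of pointed Arabic letters, for illustration only
    private static final String POINTED = "بتثجخذزشضظغفقنةي";
    private static final char KASHIDA = '\u0640';

    static String embed(String cover, String bits) {
        StringBuilder out = new StringBuilder();
        int k = 0;
        for (char c : cover.toCharArray()) {
            out.append(c);
            if (k >= bits.length()) continue;
            if (c < '\u0621' || c > '\u064A') continue;   // only Arabic letters carry bits
            boolean pointed = POINTED.indexOf(c) >= 0;
            if ((bits.charAt(k) == '1') == pointed) {     // letter class matches next bit
                out.append(KASHIDA);
                k++;
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(embed("بسم الله الرحمن الرحيم", "101"));
    }
}
```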

Modeling of Pulsatile Blood Flow in a Weak Magnetic Field

The blood pulse is an important human physiological signal commonly used for understanding an individual's physical health. Current methods of non-invasive blood pulse sensing require direct contact with, or access to, the human skin. As such, the performance of these devices tends to vary with time and is affected by human body fluids (e.g. blood, perspiration and skin oil) and environmental contaminants (e.g. mud, water, etc.). This paper proposes a simulation model for a novel method of non-invasive acquisition of the blood pulse using the disturbance created by blood flowing through a localized magnetic field. The simulation geometry represents a blood vessel, a permanent magnet, a magnetic sensor, surrounding tissues and air in two dimensions. In this model, the velocity and pressure fields in the blood stream are described by the Navier-Stokes equations, and the walls of the blood vessel are assumed to have a no-slip condition. The blood assumes a parabolic velocity profile, consistent with laminar flow in a major artery near the skin, and the inlet velocity follows a sinusoidal function. This allows the computational software to compute the interactions between the magnetic vector potential generated by the permanent magnet and the magnetic nanoparticles in the blood. These interactions are simulated based on Maxwell's equations at the location where the magnetic sensor is placed. The simulated magnetic field at the sensor location is found to assume a sinusoidal waveform similar to the inlet velocity of the blood. The amplitudes of the simulated waveforms at the sensor location are compared with physical measurements on human subjects and found to be highly correlated.
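
One common way to combine the parabolic profile and sinusoidal pulsation described above is an inlet condition of the following form (this specific expression and its symbols U_0, R and f are our illustrative assumption, not necessarily the authors' exact boundary condition):

```latex
u(r,t) = U_0\left[1-\left(\frac{r}{R}\right)^{2}\right]\left[1+\sin(2\pi f t)\right],
\qquad 0 \le r \le R
```

where R is the vessel radius, U_0 a reference velocity and f the pulse frequency.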

Polyurethane Nanofibers Obtained By Electrospinning Process

Electrospinning is a broadly used technology for obtaining polymeric nanofibers ranging from several micrometers down to several hundred nanometers, for a wide range of applications. It offers unique capabilities to produce nanofibers with a controllable porous structure. With smaller pores and a higher surface area than regular fibers, electrospun fibers have been successfully applied in various fields, such as nanocatalysis, tissue engineering scaffolds, protective clothing, filtration, biomedical and pharmaceutical applications, optical electronics, healthcare, biotechnology, defense and security, and environmental engineering. In this study, polyurethane nanofibers were obtained under different electrospinning parameters. The fiber morphology and diameter distribution were investigated as a function of the process parameters.

An Evolutionary Statistical Learning Theory

Statistical learning theory was developed by Vapnik. It is a learning theory based on the Vapnik-Chervonenkis dimension, and it has been used as a good analytical tool in learning models. In general, learning models suffer from several problems, among them local optima and over-fitting. Statistical learning theory has the same problems, because the kernel type, kernel parameters, and regularization constant C are determined subjectively, by the art of the researcher. We therefore propose an evolutionary statistical learning theory to address the problems of the original statistical learning theory. Our theory is constructed by combining evolutionary computing with statistical learning theory. We verify the improved performance of the evolutionary statistical learning theory using data sets from the KDD Cup.
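
To make the idea concrete, the sketch below shows one way a genetic algorithm could search the hyper-parameters named above (regularization constant C and a kernel parameter gamma). It is a minimal illustration under our own assumptions: the encoding and the crossValAccuracy placeholder are hypothetical and stand in for a real cross-validated SVM evaluation; this is not the authors' algorithm.

```java
import java.util.*;

// Minimal sketch of evolving SVM hyper-parameters (C, gamma) with a GA.
public class EvoSvmSearch {

    // Placeholder fitness: replace with a k-fold cross-validation accuracy
    // of an SVM trained with (c, gamma). Peaks near C = 10, gamma = 0.01.
    static double crossValAccuracy(double c, double gamma) {
        return 1.0 / (1.0 + Math.abs(Math.log10(c) - 1.0)
                          + Math.abs(Math.log10(gamma) + 2.0));
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        int popSize = 20, generations = 30;
        double[][] pop = new double[popSize][2];          // individual = {log10 C, log10 gamma}
        for (double[] ind : pop) {
            ind[0] = rnd.nextDouble() * 6.0 - 3.0;        // C in [1e-3, 1e3]
            ind[1] = rnd.nextDouble() * 6.0 - 5.0;        // gamma in [1e-5, 1e1]
        }

        Comparator<double[]> byFitness = Comparator.comparingDouble(
            (double[] ind) -> -crossValAccuracy(Math.pow(10, ind[0]), Math.pow(10, ind[1])));

        for (int g = 0; g < generations; g++) {
            Arrays.sort(pop, byFitness);                  // best individuals first
            for (int i = popSize / 2; i < popSize; i++) { // replace the worse half
                double[] parent = pop[rnd.nextInt(popSize / 2)];
                pop[i][0] = parent[0] + rnd.nextGaussian() * 0.3;   // Gaussian mutation
                pop[i][1] = parent[1] + rnd.nextGaussian() * 0.3;
            }
        }
        Arrays.sort(pop, byFitness);
        System.out.printf("best: C = %.4g, gamma = %.4g%n",
                Math.pow(10, pop[0][0]), Math.pow(10, pop[0][1]));
    }
}
```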

CBCTL: A Reasoning System of Temporal Epistemic Logic with Communication Channel

This paper introduces CBCTL, a temporal epistemic logic based on computation tree logic (CTL) that updates agents' belief states through communications between them. In practical environments, communication channels between agents may not be secure, and in bad cases agents might suffer blackouts. In this study, we provide an inform* protocol based on the FIPA ACL and declare the presence of secure channels between two agents as time-dependent. Thus, the belief state of each agent is updated along with the progress of time. We present a prover, that is, a reasoning system which, for a given formula in a given situation of an agent, returns the proof if the formula is directly provable or if it can be validated through chains of communications.

A Proposed Technique for Software Development Risks Identification by using FTA Model

Software Development Risks Identification (SDRI), using Fault Tree Analysis (FTA), is a proposed technique to identify not only the risk factors but also the causes of the appearance of the risk factors in the software development life cycle. The method is based on analyzing the probable causes of software development failures before they become problems and adversely affect a project. It uses fault tree analysis (FTA) to determine the probability of particular system-level failures, defined by a taxonomy for sources of software development risk, by combining series of lower-level events with Boolean logic until an undesired state of the system is deduced. The major purpose of this paper is to use the probabilistic calculations of the fault tree analysis approach to determine all possible causes that lead to the occurrence of software development risk.
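
As a hedged illustration of how Boolean gate logic turns lower-level event probabilities into a top-event probability (the classes, event names and numbers below are our own example, not the paper's model), assuming independent basic events an OR gate gives P = 1 - prod(1 - p_i) and an AND gate gives P = prod(p_i):

```java
// Minimal sketch of fault-tree probability propagation for independent events.
public class FaultTreeSketch {
    interface Node { double probability(); }

    record BasicEvent(double p) implements Node {
        public double probability() { return p; }
    }
    record AndGate(Node... children) implements Node {
        public double probability() {
            double p = 1.0;
            for (Node c : children) p *= c.probability();   // all children must occur
            return p;
        }
    }
    record OrGate(Node... children) implements Node {
        public double probability() {
            double q = 1.0;
            for (Node c : children) q *= 1.0 - c.probability();  // none occurs
            return 1.0 - q;
        }
    }

    public static void main(String[] args) {
        // hypothetical example: schedule slip if (unstable requirements AND
        // weak change control) OR key-staff turnover
        Node top = new OrGate(
                new AndGate(new BasicEvent(0.30), new BasicEvent(0.20)),
                new BasicEvent(0.05));
        System.out.println("P(top event) = " + top.probability());
    }
}
```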

Computational Fluid Dynamics Expert System using Artificial Neural Networks

The design of a modern aircraft is based on three pillars: theoretical results, experimental tests and computational simulations. As a result, computational fluid dynamics (CFD) solvers are widely used in the aeronautical field. These solvers require the correct selection of many parameters in order to obtain successful results, and the computational time spent on a simulation depends on the proper choice of these parameters. In this paper we create an expert system capable of making an accurate prediction of the number of iterations and the time required for the convergence of a CFD solver. An artificial neural network (ANN) has been used to design the expert system. It is shown that the developed expert system is capable of accurately predicting the number of iterations and the time required for the convergence of a CFD solver.

X-ray Crystallographic Analysis of MinC N-Terminal Domain from Escherichia coli

MinC plays an important role in the bacterial cell division system by inhibiting FtsZ assembly. However, the molecular mechanism of this action is poorly understood. The E. coli MinC N-terminal domain was purified and crystallized using 1.4 M sodium citrate pH 6.5 as a precipitant. X-ray diffraction data were collected from a native crystal and processed to 2.3 Å. The crystal belonged to space group P2₁2₁2₁, with unit cell parameters a = 52.7, b = 54.0, c = 64.7 Å. Assuming the presence of two molecules in the asymmetric unit, the Matthews coefficient is 1.94 Å³ Da⁻¹, which corresponds to a solvent content of 36.5%. The overall structure of the MinC N-terminal domain is observed to be a dimer formed through an anti-parallel β-strand interaction.
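
For reference, the reported solvent content follows from the standard Matthews coefficient relations (the symbols below are the usual crystallographic ones, not notation taken from the paper):

```latex
V_M = \frac{V_{\mathrm{cell}}}{Z\,M_r},
\qquad
\text{solvent fraction} \approx 1 - \frac{1.23}{V_M}
```

Here V_cell is the unit-cell volume, Z the number of protein molecules in the cell (two per asymmetric unit times the four symmetry operators of P2₁2₁2₁), and M_r the molecular mass; V_M = 1.94 Å³ Da⁻¹ gives 1 - 1.23/1.94 ≈ 36.6%, consistent with the reported 36.5%.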

Artificial Intelligence Model to Predict Surface Roughness of Ti-15-3 Alloy in EDM Process

Conventionally, the selection of parameters depends heavily on the operator's experience or on conservative technological data provided by the EDM equipment manufacturers, which yields inconsistent machining performance; the parameter settings given by the manufacturers are only relevant to common steel grades, and a single parameter change influences the process in a complex way. Hence, the present research proposes artificial neural network (ANN) models for the prediction of surface roughness in the electrical discharge machining (EDM) of Ti-15-3 alloy, which is addressed here for the first time. The proposed models use peak current, pulse-on time, pulse-off time and servo voltage as input parameters. Multilayer perceptron (MLP) feedforward networks with three hidden layers are applied, and an assessment is carried out among models with distinct hidden-layer configurations. The models are trained with data from an extensive series of experiments using a copper electrode with positive polarity. The predictions of the developed models have been verified with another set of experiments and are found to be in good agreement with the experimental results. Besides this, the models can serve as valuable tools for EDM process planning.
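
A minimal forward-pass sketch of such a network is given below, using the four inputs named in the abstract. The layer sizes, weights and example values are illustrative placeholders, not the trained model reported in the paper; in practice the inputs would also be normalized before training.

```java
import java.util.Random;

// Forward pass of a small feed-forward MLP with the four inputs named in the
// abstract (peak current, pulse-on time, pulse-off time, servo voltage).
public class EdmRoughnessMlp {
    static double[] layer(double[] in, double[][] w, double[] b, boolean hidden) {
        double[] out = new double[b.length];
        for (int j = 0; j < b.length; j++) {
            double s = b[j];
            for (int i = 0; i < in.length; i++) s += w[j][i] * in[i];
            out[j] = hidden ? Math.tanh(s) : s;   // tanh on hidden layers, linear output
        }
        return out;
    }

    static double[][] rand(int rows, int cols, Random r) {
        double[][] m = new double[rows][cols];
        for (double[] row : m) for (int i = 0; i < cols; i++) row[i] = r.nextGaussian() * 0.1;
        return m;
    }

    public static void main(String[] args) {
        Random r = new Random(1);
        double[] x = {8.0, 100.0, 50.0, 80.0};                       // A, us, us, V (example, unnormalized)
        double[] h = layer(x, rand(6, 4, r), new double[6], true);   // hidden layer 1
        h = layer(h, rand(6, 6, r), new double[6], true);            // hidden layer 2
        h = layer(h, rand(6, 6, r), new double[6], true);            // hidden layer 3
        double[] ra = layer(h, rand(1, 6, r), new double[1], false); // predicted Ra (placeholder weights)
        System.out.println("predicted Ra = " + ra[0]);
    }
}
```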

Image Similarity: A Genetic Algorithm Based Approach

The paper proposes an approach using a genetic algorithm for computing region-based image similarity. An image is represented by a set of segmented regions reflecting its color and texture properties, and is thus associated with a family of image features corresponding to the regions. The resemblance of two images is then defined as the overall similarity between two families of image features, quantified by a similarity measure which integrates the properties of all the regions in the images. A genetic algorithm is applied to decide the most plausible matching. The performance of the proposed method is illustrated using examples from a database of general-purpose images and is shown to produce good results.

Routing Capability and Blocking Analysis of Dynamic ROADM Optical Networks (Category II) for Dynamic Traffic

Reconfigurable optical add/drop multiplexers (ROADMs) can be classified into three categories based on their underlying switching technologies. Category I consists of a single large optical switch; category II is composed of a number of small optical switches aligned in parallel; and category III has a single optical switch with only one wavelength being added/dropped. In this paper, to evaluate the wavelength-routing capability of category-II ROADMs in dynamic optical networks, dynamic traffic models are designed based on Bernoulli and Poisson distributions for smooth and regular types of traffic. Through analytical and simulation results, the routing power of category-II ROADM networks is determined for the two traffic models.
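
For the Poisson case, a standard reference point in such blocking analysis is the classical Erlang B formula (quoted here for context; it is not necessarily the exact analytical model used by the authors), giving the blocking probability on a link with W wavelengths offered a load of ρ erlangs:

```latex
P_B(W,\rho) = \frac{\rho^{W}/W!}{\sum_{k=0}^{W}\rho^{k}/k!}
```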

Numerical Simulation of a Single Air Bubble Rising in Water with Various Models of Surface Tension Force

Different numerical methods are employed and developed for simulating interfacial flows. A large range of applications belongs to this group, e.g. two-phase flows of air bubbles in water or water drops in air. In such problems, surface tension effects often play a dominant role. In this paper, various models of the surface tension force for interfacial flows (the CSF, CSS, PCIL and SGIP models) have been applied to simulate the motion of small air bubbles in water, and the results are compared and reviewed. It is shown that by using the SGIP or PCIL models we are able to simulate bubble rise and obtain results in close agreement with the experimental data.

Managing Iterations in Product Design and Development

The inherently iterative nature of product design and development poses a significant challenge to reducing the product design and development (PD) time. In order to shorten the time to market, organizations have adopted concurrent development, where multiple specialized tasks and design activities are carried out in parallel. The iterative nature of the work, coupled with the overlap of activities, can result in unpredictable time to completion and significant rework, and many products have missed the time-to-market window due to unanticipated, or rather unplanned, iteration and rework. The iterative and often overlapped processes introduce greater ambiguity into design and development, where the traditional methods and tools of project management provide less value. In this context, identifying critical metrics to understand iteration probability is an open research area in which a significant contribution can be made, given that iteration has been the key driver of cost and schedule risk in PD projects. Two important questions that the proposed study attempts to address are: Can we predict and identify the number of iterations in a product development flow? Can we provide managerial insights for better control over iteration? The proposal introduces the concept of decision points and, using this concept, intends to develop metrics that can provide managerial insights into iteration predictability. By characterizing the product development flow as a network of decision points, the proposed research intends to delve further into iteration probability and to provide more clarity.