DEA-ANN Approach in a Supplier Evaluation System

In Supply Chain Management (SCM), strengthening partnerships with suppliers is a significant factor in enhancing competitiveness, and firms therefore increasingly emphasize supplier evaluation processes. Supplier evaluation systems are typically built around criteria such as quality, cost, delivery, and flexibility. Because many variables must be analyzed, the process is difficult to execute and requires expertise. For this reason, this study aims to develop an expert system for the supplier evaluation process by designing an Artificial Neural Network (ANN) supported by Data Envelopment Analysis (DEA). The methods are applied to data on 24 suppliers that have long-term relationships with a medium-sized company in the German iron and steel industry. The supplier data consist of variables such as material quality (MQ), discount of amount (DOA), discount of cash (DOC), payment term (PT), delivery time (DT), and annual revenue (AR). The efficiency scores generated by DEA are added to the supplier evaluation system to serve as its outputs.
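
As context for how such efficiency scores are obtained, the sketch below solves the classical input-oriented CCR multiplier model of DEA with a linear programming solver. It is a minimal illustration under assumed data; the supplier inputs, outputs, and values are placeholders, not the study's variables.

```python
# A minimal sketch of computing DEA (CCR, multiplier form) efficiency
# scores of the kind that could serve as ANN training targets.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0.
    X: (n, m) inputs, Y: (n, s) outputs."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: output weights u (s of them), input weights v (m).
    c = np.concatenate([-Y[j0], np.zeros(m)])          # maximize u . y0
    A_ub = np.hstack([Y, -X])                          # u.yj - v.xj <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[j0]])[None]  # v . x0 = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

# Illustrative data: 4 suppliers, inputs (cost, delivery time), output (quality).
X = np.array([[3.0, 5.0], [2.5, 4.5], [4.0, 6.0], [3.5, 5.5]])
Y = np.array([[8.0], [7.0], [9.0], [6.0]])
scores = [ccr_efficiency(X, Y, j) for j in range(len(X))]
print("DEA efficiency scores:", np.round(scores, 3))
```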

A Review of Methanol Production from Methane Oxidation via Non-Thermal Plasma Reactor

Direct conversion of methane to methanol by partial oxidation in a thermal reactor has a poor yield of about 2%, well below the roughly 10% considered economical. Plasma reactors have been proposed as a promising replacement for conventional thermal catalytic reactors, since the electrical energy they supply can break the C-H bonds of methane. Among the plasma techniques, the non-thermal dielectric barrier discharge (DBD) plasma chemical process is one of the most promising technologies for synthesizing methanol. The purpose of this paper is to present a brief review of CH4 oxidation with O2 in DBD plasma reactors based on recent investigations. To this end, the effects of various parameters, including reactor configuration, feed ratio, applied voltage, residence time (gas flow rate), type of applied catalyst, pressure, and reactor wall temperature, on methane conversion and methanol selectivity are discussed.
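
For reference, the two performance measures discussed throughout such studies are conventionally defined as follows (these are the standard definitions, not specific to any one investigation):

```latex
X_{\mathrm{CH_4}} = \frac{n_{\mathrm{CH_4,in}} - n_{\mathrm{CH_4,out}}}{n_{\mathrm{CH_4,in}}}, \qquad
S_{\mathrm{CH_3OH}} = \frac{n_{\mathrm{CH_3OH}}}{n_{\mathrm{CH_4,in}} - n_{\mathrm{CH_4,out}}}, \qquad
Y_{\mathrm{CH_3OH}} = X_{\mathrm{CH_4}} \, S_{\mathrm{CH_3OH}}
```

where n denotes molar flows, X is the methane conversion, S is the methanol selectivity, and Y is the overall methanol yield.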

On Two Control Approaches for the Output Voltage Regulation of a Boost Converter

This paper compares two proposed control strategies for a DC-DC boost converter. The first is a classical Sliding Mode Control (SMC) and the second is a distance-based Fuzzy Sliding Mode Control (FSMC). The SMC is an analytical control approach based on the mathematical model of the boost converter, whereas the FSMC is a non-conventional approach that does not need a mathematical model of the controlled system; it requires only measurements of the output voltage to generate the control signal. The simulation results show that both proposed control methods are robust to variations in load resistance and input voltage. However, the proposed FSMC gives a better step voltage response than that obtained with the SMC.
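
To make the SMC side concrete, the sketch below simulates a boost converter under a discontinuous switching law on a sliding surface combining current and voltage errors. The converter parameters and surface gain are illustrative assumptions, not the paper's values.

```python
# A minimal simulation sketch of sliding mode control of a boost converter.
import numpy as np

Vin, Vref = 12.0, 24.0           # input and desired output voltage (V)
L, C, R = 1e-3, 470e-6, 30.0     # inductance (H), capacitance (F), load (ohm)
lam = 0.5                        # sliding-surface gain (assumed)
dt, T = 1e-6, 0.05               # simulation step and horizon (s)

iL, vC = 0.0, 0.0
iref = Vref**2 / (R * Vin)       # steady-state inductor current of an ideal boost
for _ in range(int(T / dt)):
    # Sliding surface combining current and voltage errors.
    s = (iL - iref) + lam * (vC - Vref)
    u = 1.0 if s < 0 else 0.0    # discontinuous switching law u = (1 - sign(s)) / 2
    # Boost converter state equations (continuous conduction mode).
    diL = (Vin - (1 - u) * vC) / L
    dvC = ((1 - u) * iL - vC / R) / C
    iL, vC = iL + diL * dt, vC + dvC * dt
print(f"final output voltage: {vC:.2f} V")
```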

Meteorological Data Study and Forecasting Using Particle Swarm Optimization Algorithm

Weather systems use enormously complex combinations of numerical tools for study and forecasting. Unfortunately, due to phenomena in the world climate, such as the greenhouse effect, classical models may become insufficient, largely because they lack adaptability. The weather forecasting problem is therefore well suited to heuristic approaches, such as Evolutionary Algorithms. Experimentation with heuristic methods like the Particle Swarm Optimization (PSO) algorithm can lead to new insights or promising models that can be fine-tuned with more focused techniques. This paper describes a PSO approach to data analysis and prediction and provides experimental results of the method on real-world meteorological time series.
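
The sketch below shows the PSO mechanics applied to a small parameter-fitting task on a synthetic seasonal series; the model form, swarm size, and coefficients (w, c1, c2) are illustrative assumptions, not the paper's configuration.

```python
# A minimal particle swarm optimization (PSO) sketch: fitting the parameters
# of a simple seasonal model to a time series.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200.0)
series = 15 + 10 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 1, t.size)

def sse(p):
    """Sum of squared errors of the candidate seasonal model."""
    a, b, phase = p
    return np.sum((series - (a + b * np.sin(2 * np.pi * t / 365 + phase))) ** 2)

n, dim = 30, 3
w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration terms
x = rng.uniform(-20, 20, (n, dim))              # particle positions
v = np.zeros((n, dim))                          # particle velocities
pbest, pbest_f = x.copy(), np.array([sse(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = np.array([sse(p) for p in x])
    better = f < pbest_f                        # update personal bests
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()      # update global best
print("fitted parameters:", gbest)
```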

Discontinuous Feedback Linearization of an Electrically Driven Fast Robot Manipulator

A multivariable discontinuous feedback linearization approach is proposed for position control of an electrically driven fast robot manipulator. The desired performance is achieved by selecting a suitable controller and sampling rate and by accounting for actuator saturation. The proposed control approach can be applied flexibly to different electrically driven manipulators, and it guarantees stability and satisfactory tracking performance. A PUMA 560 robot driven by geared permanent-magnet DC motors is simulated. The simulation results show the desired performance of the control system within the stated technical specifications.
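
For context, the core feedback linearization construction for a rigid manipulator (the standard computed-torque form; the electrical motor dynamics treated in the paper add a further loop not shown here) is:

```latex
M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = \tau, \qquad
\tau = M(q)\bigl(\ddot{q}_d + K_d\dot{e} + K_p e\bigr) + C(q,\dot{q})\dot{q} + g(q)
```

where e = q_d - q is the tracking error; substituting the control law into the dynamics yields the linear, decoupled error equation \ddot{e} + K_d\dot{e} + K_p e = 0.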

Optimal Manufacturing Scheduling for Dependent Details Processing

Increasing competitiveness in the manufacturing industry is forcing manufacturers to seek effective processing schedules. The paper presents an optimal manufacturing scheduling approach for dependent details (parts) processing with given processing sequences and times on multiple machines. By defining the decision variables as the start and end moments of details processing, straightforward variable restrictions can be used to satisfy different technological requirements and to formulate optimization tasks that are easy to understand and solve for any number of details and machines. A case study is solved for seven base moldings for CNC metalworking machines processed on five different machines, with a given processing order among details and machines and known processing times. Solving the resulting linear optimization task yields the optimal manufacturing schedule minimizing the overall processing time. The schedule defines the delivery moments of the moldings, thus minimizing storage costs, and ensures that mounting due times are met. The proposed optimization approach is based on a real manufacturing plant problem. Schedule variants for different technological restrictions were defined and implemented in practice at the Bulgarian company RAIS Ltd. The proposed approach can be generalized to other job shop scheduling problems in different applications.
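
The sketch below illustrates the formulation on a toy instance: with processing sequences fixed, start-moment variables plus a makespan variable give a plain linear program. The two-detail, two-machine data are illustrative, not the case study's moldings.

```python
# A minimal sketch of the scheduling idea: with processing sequences given,
# start times and a makespan variable form a linear program.
import numpy as np
from scipy.optimize import linprog

p = {(0, 0): 3, (0, 1): 2,       # processing time of detail d on machine m
     (1, 0): 2, (1, 1): 4}
# Variables: start moments s[d,m] and the makespan Cmax, in the order
# x = [s00, s01, s10, s11, Cmax]; all constraints have the form A x <= b.
idx = {(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 3}
A, b = [], []

def leq(i, j, gap):              # encode s_i + gap <= s_j
    row = np.zeros(5); row[i], row[j] = 1, -1
    A.append(row); b.append(-gap)

# Routing: each detail visits machine 0, then machine 1.
leq(idx[0, 0], idx[0, 1], p[0, 0])
leq(idx[1, 0], idx[1, 1], p[1, 0])
# Given machine order: detail 0 precedes detail 1 on both machines.
leq(idx[0, 0], idx[1, 0], p[0, 0])
leq(idx[0, 1], idx[1, 1], p[0, 1])
# Every end moment is bounded by the makespan Cmax (variable 4).
for (d, m), i in idx.items():
    leq(i, 4, p[d, m])

c = np.array([0, 0, 0, 0, 1.0])  # minimize Cmax
res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=[(0, None)] * 5)
print("optimal makespan:", res.fun, "start moments:", res.x[:4])
```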

The Effect of TV and Online Shopping Value on Online Patronage Intention in a Multi-channel Retail Context

With the proliferation of multi-channel retailing, developing a better understanding of the factors that affect customers' purchase behaviors within a multi-channel retail context has become an important topic for practitioners and academics. While many studies have investigated the various customer behaviors associated with brick-and-mortar retailing, online retailing, and brick-and-click retailing, little research has explored how customer shopping value perceptions influence online purchase behaviors within the TV-and-online retail environment. The main purpose of this study is to investigate the influence of TV and online shopping values on online patronage intention. Data collected from 116 respondents in Taiwan are tested against the research model using the partial least squares (PLS) approach. The results indicate that utilitarian and hedonic TV shopping values have indirect, positive influences on online patronage intention through their online counterparts in the TV-and-online retail context. The findings provide several important theoretical and practical implications for multi-channel retailing.

Bayesian Belief Networks for Test Driven Development

Testing accounts for a major share of the technical effort in the software development process; typically, it consumes more than 50 percent of the total cost of developing a piece of software. The selection of software tests is a very important activity within this process to ensure that the software reliability requirements are met. Generally, tests are run to achieve maximum coverage of the software code, and very little attention is given to the achieved reliability of the software. Using an existing methodology, this paper describes how to use Bayesian Belief Networks (BBNs) to select unit tests based on their contribution to the reliability of the module under consideration. In particular, the work examines how the approach can enhance test-first development by assessing the quality of test suites resulting from this development methodology and by providing insight into additional tests that can significantly improve the achieved reliability. In this way the method can produce an optimal selection of test inputs, and the order in which the tests are executed, to maximize the software reliability. To illustrate the approach, a belief network is constructed for a modern software system, incorporating expert opinion, expressed through probabilities, of the relative quality of the elements of the software and the potential effectiveness of the software tests. The steps involved in constructing the Bayesian network are explained, as is a method for accommodating the test suite resulting from test-driven development.
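
As a flavor of the probabilistic machinery involved, the sketch below performs the elementary Bayes-rule update that a belief network generalizes: revising the belief that a module is reliable as unit test results arrive. The probabilities are illustrative placeholders for expert opinion, not values from the paper.

```python
# A minimal hand-rolled sketch of Bayesian updating of module reliability
# from unit test outcomes.
p_reliable = 0.5                 # prior belief that the module is reliable
p_pass_given_reliable = 0.95     # a test passes if the module is reliable
p_pass_given_faulty = 0.60       # a weak test may pass even on a faulty module

def update(prior, passed):
    """One Bayes-rule update of P(reliable) after observing a test result."""
    like_r = p_pass_given_reliable if passed else 1 - p_pass_given_reliable
    like_f = p_pass_given_faulty if passed else 1 - p_pass_given_faulty
    evidence = like_r * prior + like_f * (1 - prior)
    return like_r * prior / evidence

belief = p_reliable
for outcome in [True, True, True, False, True]:   # results of five unit tests
    belief = update(belief, outcome)
    print(f"test passed={outcome}: P(reliable) = {belief:.3f}")
```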

Formosa3: A Cloud-Enabled HPC Cluster at NCHC

This paper proposes a new approach to offering a private cloud service on HPC clusters. In particular, our approach relies on automatically scheduling users' customized environment requests as normal jobs in the batch system. When a virtualization request job finishes, its guest operating systems are dismissed so that the compute nodes are released again for computing. We present initial work on the integration of an HPC batch system and virtualization tools, aimed at their coexistence with the minimal interference required of a traditional HPC cluster. Given the initial infrastructure design, the proposed effort has the potential to positively impact this synergy model. The experimental results show that the goal of provisioning customized cluster environments can indeed be fulfilled using virtual machines, and that efficiency can be improved with proper setup and arrangement.
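
The sketch below conveys the integration idea in miniature: a user's customized-environment request is wrapped as an ordinary batch job that boots a guest image on the allocated node. The scheduler directives and VM invocation are generic Torque/PBS- and QEMU-style placeholders, not NCHC's actual configuration.

```python
# A minimal sketch: submit a VM-provisioning request as a normal batch job.
import subprocess, textwrap

job_script = textwrap.dedent("""\
    #!/bin/bash
    #PBS -N vm-provision
    #PBS -l nodes=1,walltime=02:00:00
    # Boot the user's guest image on the allocated compute node; when the
    # walltime expires or the VM shuts down, the node returns to the pool.
    qemu-system-x86_64 -enable-kvm -m 4096 -smp 4 \\
        -drive file=$HOME/images/user_env.qcow2 -nographic
""")

with open("vm_job.sh", "w") as f:
    f.write(job_script)
subprocess.run(["qsub", "vm_job.sh"], check=True)   # hand it to the batch system
```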

Stress Intensity Factors for Plates with Collinear and Non-Aligned Straight Cracks

Multi-site damage (MSD) has been a challenge for aircraft, civil, and power plant structures. In real life, components are subject to cracking at many vulnerable locations, such as bolt holes, yet analyses often do not account for the presence of multiple cracks. Unlike components with a single crack, the behavior of such components is difficult to predict: when two cracks approach one another, their stress fields influence each other and produce an enhancing or shielding effect depending on the relative position of the cracks. In the present study, numerical fracture analyses have been conducted using a developed code based on the modified virtual crack closure integral (MVCCI) technique, together with the finite element analysis (FEA) software ABAQUS, to compute the stress intensity factor (SIF) of plates with multiple cracks. Various parametric studies have been carried out, and the results have been compared with the literature wherever available and also with solutions obtained using ABAQUS. From these extensive numerical studies, SIF expressions have been obtained for collinear cracks and non-aligned cracks.
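
For reference, in its basic mode-I form the MVCCI technique estimates the strain energy release rate from the nodal force at the crack tip and the crack-opening displacement just behind it, and converts it to an SIF (standard relations, stated here for a plate of thickness t and crack-tip element length Δa):

```latex
G_I = \frac{F_y\,\Delta v}{2\,\Delta a\,t}, \qquad
K_I = \sqrt{E'\,G_I}, \qquad
E' = \begin{cases} E & \text{(plane stress)} \\ E/(1-\nu^{2}) & \text{(plane strain)} \end{cases}
```

where F_y is the crack-tip nodal force and Δv the opening displacement of the node pair behind the tip.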

A Comparison of Experimental Data with Monte Carlo Calculations for Optimisation of the Source-to-Detector Distance in Determining the Efficiency of a LaBr3:Ce (5%) Detector

Cerium-doped lanthanum bromide LaBr3:Ce (5%) crystals are considered among the most advanced scintillator materials used in PET scanning, combining a high light yield, fast decay time, and excellent energy resolution. Apart from the correct choice of scintillator, it is also important to optimise the detector geometry, not least the source-to-detector distance, in order to obtain reliable and efficient measurements. In this study a commercially available 25 mm x 25 mm BrilLanCe™ 380 LaBr3:Ce (5%) detector was characterised in terms of its efficiency at varying source-to-detector distances. Gamma-ray spectra of 22Na, 60Co, and 137Cs were acquired separately at distances of 5, 10, 15, and 20 cm. As a result of the change in solid angle subtended by the detector, the geometric efficiency decreased with increasing distance. High efficiencies at short distances can cause pulse pile-up, in which subsequent photons are detected before previously detected events have decayed. To reduce this systematic error, the source-to-detector distance should balance efficiency against pulse pile-up suppression; otherwise pile-up corrections become necessary at short distances. In addition to the experimental measurements, Monte Carlo simulations have been carried out for the same setup, allowing a comparison of results. The advantages and disadvantages of each approach are highlighted.
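
The solid-angle effect can be quantified directly: for an on-axis point source and a disc-shaped crystal face, the geometric efficiency at each distance follows from the standard solid-angle formula. The sketch below assumes the quoted 25 mm crystal diameter and ignores intrinsic efficiency.

```python
# A minimal sketch of geometric (solid-angle) efficiency vs. distance for a
# disc detector viewed by an on-axis point source.
import numpy as np

r = 12.5e-3                      # crystal radius (m): 25 mm diameter
for d_cm in [5, 10, 15, 20]:
    d = d_cm * 1e-2
    # Solid angle of a disc subtended at an on-axis point source.
    omega = 2 * np.pi * (1 - d / np.sqrt(d**2 + r**2))
    print(f"d = {d_cm:2d} cm: geometric efficiency = {omega / (4 * np.pi):.4f}")
```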

X-Ray Intensity Measurement Using Frequency Output Sensor for Computed Tomography

The quality of the 2D and 3D cross-sectional images produced by computed tomography depends primarily on the precision of primary and secondary X-ray intensity detection, and traditional methods of primary intensity detection are prone to error. Our group recently developed an X-ray intensity measurement system with smart X-ray sensors that detects primary X-ray intensity reliably. In this study a new smart X-ray sensor is developed using the TSL230 light-to-frequency converter from Texas Instruments, which has numerous advantages in terms of noiseless data acquisition and transmission. The TSL230 is built around a silicon photodiode that converts incoming X-ray radiation into a proportional current signal; a current-to-frequency converter on the same monolithic CMOS integrated circuit then outputs a pulse train whose frequency is proportional to that current. The frequency count is delivered to a PICDEM FS USB demo board with a PIC18F4550 microcontroller mounted on it; with its highly compact electronic hardware, this demo board efficiently reads the smart sensor output data. The frequency-output approach overcomes the nonlinear behavior of sensors with analog outputs, so unattenuated X-ray intensities can be measured precisely and better normalization can be achieved in order to attain high resolution.
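
The measurement principle reduces to counting pulses over a fixed gate time, since the converter's output frequency is linear in photodiode current. The sketch below shows this conversion; the counts, gate time, and scale factor are illustrative assumptions.

```python
# A minimal sketch of the frequency-output measurement principle: pulses
# from the light-to-frequency converter are counted over a gate time, and
# intensity is taken as proportional to the measured frequency.
gate_time_s = 0.1                                # counting window per sample
counts_per_gate = [48210, 47995, 12034, 11876]   # pulses counted (illustrative)
k = 1.0                                          # intensity per Hz (assumed scale)

for n in counts_per_gate:
    freq_hz = n / gate_time_s                    # frequency linear in current
    intensity = k * freq_hz
    print(f"{n} pulses -> {freq_hz:.0f} Hz -> relative intensity {intensity:.0f}")
```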

A New Heuristic Approach for Large Size Zero-One Multi Knapsack Problem Using Intercept Matrix

This paper presents a heuristic to solve the large-size 0-1 multi-constrained knapsack problem (01MKP), which is NP-hard. Many researchers have used heuristic operators to identify the redundant constraints of a linear programming problem before applying the regular solution procedure. We use the intercept matrix to identify the zero-valued variables of the 01MKP, known as redundant variables. In this heuristic, the dominance property of the constraints' intercept matrix is first exploited to reduce the search space for optimal or near-optimal solutions of the 01MKP; second, the solution is improved using the pseudo-utility ratio based on the surrogate constraint of the 01MKP. The heuristic is tested on benchmark problems of sizes up to 2500 taken from the literature, and the results are compared with the optimum solutions. The space and computational complexity of solving the 01MKP using this approach are also presented. The encouraging results, especially for relatively large test problems, indicate that this heuristic can successfully be used to find good solutions for highly constrained NP-hard problems.
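
The sketch below shows the surrogate-constraint greedy idea on a toy instance: items are ranked by a pseudo-utility ratio and packed while every constraint allows. The surrogate multipliers are simply taken as 1/b_i (an assumption), and the paper's intercept-matrix reduction step is not reproduced.

```python
# A minimal greedy sketch for the 0-1 multi-constrained knapsack problem
# using a pseudo-utility ratio p_j / sum_i(w_i * a_ij).
import numpy as np

p = np.array([10.0, 13.0, 7.0, 4.0])            # profits (illustrative)
A = np.array([[3.0, 4.0, 2.0, 1.0],             # constraint matrix a_ij
              [2.0, 3.0, 4.0, 2.0]])
b = np.array([6.0, 7.0])                        # capacities

w = 1.0 / b                                     # surrogate multipliers (assumed)
utility = p / (w @ A)                           # pseudo-utility ratio per item
order = np.argsort(-utility)                    # most attractive items first

x = np.zeros(len(p), dtype=int)
used = np.zeros_like(b)
for j in order:
    if np.all(used + A[:, j] <= b):             # feasible in every constraint?
        x[j] = 1
        used += A[:, j]
print("solution:", x, "profit:", p @ x)
```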

Packing Theory for Natural and Crushed Aggregate to Obtain the Best Mix of Aggregate: Research and Development

Concrete performance is strongly affected by the degree of particle packing, since packing determines the distribution of the cementitious component and the interaction of mineral particles. Using packing theory, designers can select optimal aggregate materials for preparing concrete with a low cement content, which is beneficial in terms of cost. Optimum particle packing implies minimizing porosity and thereby reducing the amount of cement paste needed to fill the voids between the aggregate particles, while also taking the rheology of the concrete into consideration; superplasticizers are required to reach good fluidity. The results from pilot tests at Luleå University of Technology (LTU) illustrate various forms of the proposed theoretical models, and the empirical approach taken in the study seems to provide a safer basis for developing new, improved packing models.
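
For context, one widely used ideal grading curve in particle-packing work is the modified Andreasen and Andersen model (quoted here as background; the study's own models may differ):

```latex
P(d) = \frac{d^{\,q} - d_{\min}^{\,q}}{d_{\max}^{\,q} - d_{\min}^{\,q}}
```

where P(d) is the cumulative volume fraction of particles finer than size d, d_min and d_max are the smallest and largest particle sizes, and q is the distribution modulus that tunes the balance between coarse and fine material.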

Cost Optimization of Concentric Braced Steel Building Structures

Seismic design may require a non-conventional concept, since the stiffness and layout of the structure have a great effect on the overall structural behaviour, on the seismic load intensity, and on the internal force distribution. To find an economical and optimal structural configuration, the key issue is the optimal design of the lateral load resisting system. This paper focuses on the optimal design of regular, concentric braced frame (CBF) multi-storey steel building structures. The optimal configurations are determined by a numerical method using a genetic algorithm approach developed by the authors, with the aim of finding structural configurations with minimum structural cost. The design constraints of the optimization are assigned in accordance with Eurocode 3 and Eurocode 8 guidelines. Results are presented for various building geometries, different seismic intensities, and levels of energy dissipation.
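
The sketch below shows the shape of such a genetic algorithm search: a bit string encodes discrete design choices and fitness is cost plus constraint penalties. The encoding, placeholder cost model, and GA settings are illustrative; a real run would evaluate Eurocode checks on a structural model.

```python
# A minimal genetic algorithm sketch for discrete configuration search.
import random

N_GENES, POP, GENS = 12, 40, 100
random.seed(1)

def cost(bits):
    # Placeholder: heavier sections (1-bits) cost more; a fixed penalty
    # stands in for violated code checks that a real model would compute.
    weight = sum(bits)
    penalty = 50 if weight < 4 else 0           # "too flexible" penalty (assumed)
    return weight * 10 + penalty

def tournament(pop):
    a, b = random.sample(pop, 2)                # binary tournament selection
    return a if cost(a) < cost(b) else b

pop = [[random.randint(0, 1) for _ in range(N_GENES)] for _ in range(POP)]
for _ in range(GENS):
    nxt = []
    for _ in range(POP):
        p1, p2 = tournament(pop), tournament(pop)
        cut = random.randrange(1, N_GENES)      # one-point crossover
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.05:              # bit-flip mutation
            i = random.randrange(N_GENES); child[i] ^= 1
        nxt.append(child)
    pop = nxt
best = min(pop, key=cost)
print("best configuration:", best, "cost:", cost(best))
```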

Classifying Maize Inbred Lines into Heterotic Groups Using Diallel Analysis

The selection of parents and breeding strategies for successful maize hybrid production is facilitated by heterotic grouping of parental lines and determination of their combining abilities. Fourteen maize inbred lines used in maize breeding programs in Iran were crossed in a diallel mating design. The 91 F1 hybrids and the 14 parental lines were studied over two years at four locations in Iran to investigate the combining ability of the genotypes for grain yield (GY) and to determine heterotic patterns among germplasm sources, using both Griffing's method and the biplot approach to diallel analysis. The graphical representation offered by biplot analysis allowed a rapid and effective overview of the general combining ability (GCA) and specific combining ability (SCA) effects of the inbred lines, their performance in crosses, and the grouping patterns of similar genotypes. GCA and SCA effects were significant for GY. Based on significant positive GCA effects, the lines derived from LSC could be used as parents in crosses to increase GY. The maximum best-parent heterosis values and highest SCA effects for GY resulted from the crosses B73 × MO17 and A679 × MO17. The best heterotic pattern was LSC × RYD, which would be potentially useful in maize breeding programs for obtaining high-yielding hybrids in similar climates of Iran.
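
For reference, Griffing's analysis rests on the linear decomposition of each cross mean into general and specific combining ability effects (the standard model; design details such as reciprocals vary by method):

```latex
\bar{x}_{ij} = \mu + g_i + g_j + s_{ij} + \bar{e}_{ij}, \qquad
\sum_i g_i = 0, \quad \sum_{j \ne i} s_{ij} = 0
```

where μ is the overall mean, g_i and g_j are the GCA effects of parents i and j, s_ij is the SCA effect of their cross, and ē_ij is the mean error.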

An Adequate Choice of Initial Sample Size for a Selection Approach

In this paper, we consider the effect of the initial sample size on the performance of a sequential approach used to select a good enough simulated system when the number of alternatives is very large. We implement the sequential approach on an M/M/1 queuing system under several parameter settings, with different choices of the initial sample size, to explore the impact on the performance of the approach. The results show that the choice of the initial sample size does affect the performance of our selection approach.
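
To illustrate the experimental ingredient, the sketch below simulates M/M/1 waiting times via the Lindley recursion and shows how an initial sample of size n0 produces the first performance estimate that a sequential selection procedure would then refine. The rates and n0 values are illustrative assumptions.

```python
# A minimal sketch: estimate the mean M/M/1 waiting time from an initial
# sample of n customers.
import random

def mm1_waits(lam, mu, n, seed=0):
    """Average waiting time in queue over n simulated customers."""
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n):
        inter = rng.expovariate(lam)            # interarrival time
        service = rng.expovariate(mu)           # service time
        w = max(0.0, w + service - inter)       # Lindley recursion
        total += w
    return total / n

for n0 in [10, 50, 200]:                        # candidate initial sample sizes
    est = mm1_waits(lam=0.8, mu=1.0, n=n0)
    print(f"n0 = {n0:3d}: estimated mean wait = {est:.3f}")
```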

Image Search by Features of a Sorted Gray-Level Histogram Polynomial Curve

Image searching has always been a problem, especially when images are not properly managed or are distributed over different locations. Different techniques are currently used for image search. At one extreme, many features of the image are captured and stored to obtain better results, but storing and managing such features is itself a time-consuming job. At the other extreme, if fewer features are stored, the accuracy rate is not satisfactory; the same image stored with different visual properties can reduce accuracy further. In this paper we present a new concept: using polynomials fitted to the sorted gray-level histogram of the image. This approach needs less overhead and can cope with differences in the visual features of images.
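
The sketch below shows the idea in miniature: fit a low-order polynomial to the sorted gray-level histogram and compare images by their coefficient vectors. The polynomial order and distance measure are illustrative choices, not the paper's exact feature design.

```python
# A minimal sketch of a sorted-histogram polynomial signature for image search.
import numpy as np

def histogram_signature(gray_img, order=5):
    """Polynomial coefficients of the sorted 256-bin gray-level histogram."""
    hist, _ = np.histogram(gray_img, bins=256, range=(0, 256))
    hist = np.sort(hist) / hist.sum()            # sort and normalize the bins
    x = np.linspace(0.0, 1.0, 256)
    return np.polyfit(x, hist, order)            # compact feature vector

def distance(sig_a, sig_b):
    return float(np.linalg.norm(sig_a - sig_b))  # smaller means more similar

# Illustrative usage with two synthetic "images".
rng = np.random.default_rng(0)
img1 = rng.integers(0, 256, (64, 64))
img2 = np.clip(img1 + 10, 0, 255)                # brightened copy of img1
print("distance:", distance(histogram_signature(img1), histogram_signature(img2)))
```

Sorting the histogram before fitting makes the signature insensitive to which gray levels carry the mass, which is one way such a feature can tolerate differing visual properties of the same image.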

A Perceptually Optimized Wavelet Embedded Zero Tree Image Coder

In this paper, we propose a Perceptually Optimized Embedded ZeroTree Image Coder (POEZIC) that applies perceptual weighting to wavelet transform coefficients prior to SPIHT encoding, in order to reach a targeted bit rate with improved perceptual quality relative to the coding quality obtained using the SPIHT algorithm alone. The paper also introduces a new objective quality metric based on a psychovisual model that integrates properties of the human visual system (HVS); this metric plays an important role in our POEZIC quality assessment. Our POEZIC coder is based on a vision model that incorporates various masking effects of HVS perception: the coder weights the wavelet coefficients according to that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) luminance masking and contrast masking, 2) the contrast sensitivity function (CSF), which provides the perceptual decomposition weighting, and 3) the wavelet error sensitivity (WES), used to reduce perceptual quantization errors. The new perceptually optimized codec has the same complexity as the original SPIHT technique, yet the experimental results show that our coder achieves very good performance in terms of measured quality.
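
The sketch below illustrates the weighting step in isolation: wavelet coefficients are scaled per subband before embedded coding so that bits flow first to perceptually important subbands, and the decoder inverts the weights. The per-level weights are illustrative stand-ins for CSF-derived values, not the paper's model.

```python
# A minimal sketch of perceptual subband weighting around a wavelet codec.
import numpy as np
import pywt

img = np.random.default_rng(0).random((256, 256))   # placeholder image
coeffs = pywt.wavedec2(img, "bior4.4", level=3)

# One weight per decomposition level for the detail subbands; values are
# assumptions, not CSF fits (coarser levels weighted more heavily).
level_weights = [1.0, 0.8, 0.5]                     # levels 3, 2, 1

weighted = [coeffs[0]]                              # approximation kept as is
for w, (cH, cV, cD) in zip(level_weights, coeffs[1:]):
    weighted.append((w * cH, w * cV, w * cD))
# The weighted coefficients would now feed the zerotree (SPIHT) encoder.

# Decoder side: divide by the same weights, then inverse transform.
recon = pywt.waverec2([weighted[0]] + [tuple(c / w for c in trip)
                                       for w, trip in zip(level_weights, weighted[1:])],
                      "bior4.4")
print("max reconstruction error:", np.abs(recon - img).max())
```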

Signature Recognition Using Conjugate Gradient Neural Networks

There are two common methodologies for verifying signatures: the functional approach and the parametric approach. This paper presents a new approach to dynamic handwritten signature verification (HSV) using a Conjugate Gradient Neural Network (NN). It is yet another avenue for HSV, and it is found to produce excellent results when compared with other dynamic methods. Experimental results show that the system is insensitive to the order of the base classifiers and achieves a high verification ratio.
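
As an illustration of the training idea, the sketch below fits a tiny feed-forward network for a binary genuine/forged decision by running SciPy's conjugate gradient optimizer over the flattened weights. The features, labels, and network size are synthetic placeholders; real HSV systems use dynamic pen features such as speed and pressure.

```python
# A minimal sketch of conjugate-gradient training of a small neural network.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.random((100, 6))                       # 6 signature features (placeholder)
y = (X.sum(axis=1) > 3).astype(float)          # synthetic genuine/forged labels

n_in, n_hid = 6, 8
shapes = [(n_in, n_hid), (n_hid,), (n_hid, 1), (1,)]
sizes = [int(np.prod(s)) for s in shapes]

def unpack(theta):
    """Slice the flat parameter vector back into weight/bias arrays."""
    out, i = [], 0
    for s, n in zip(shapes, sizes):
        out.append(theta[i:i + n].reshape(s)); i += n
    return out

def loss(theta):
    """Cross-entropy of a 1-hidden-layer network on the training set."""
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2).ravel()))   # sigmoid output
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

theta0 = rng.normal(0, 0.1, sum(sizes))
res = minimize(loss, theta0, method="CG")       # conjugate gradient training
W1, b1, W2, b2 = unpack(res.x)
h = np.tanh(X @ W1 + b1)
pred = (1 / (1 + np.exp(-(h @ W2 + b2).ravel())) > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```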