Cost-Based Warranty Optimisation Using a Genetic Algorithm

Warranty is a powerful marketing tool for the manufacturer and a good protection for both the manufacturer and the customer. However, warranty always involves additional costs to the manufacturer, which depend on the product's reliability characteristics and the warranty parameters. This paper presents an approach to optimising warranty parameters for a known product failure distribution in order to reduce the warranty costs to the manufacturer while retaining the promotional function of the warranty. A combined free-replacement and pro-rata warranty policy is chosen as the model, and the lengths of the free-replacement period and the pro-rata period are varied, as well as the coefficients that define the pro-rata cost function. The multiparametric warranty optimisation is performed using a genetic algorithm. The obtained results serve as a guideline for the manufacturer in choosing the warranty policy that minimises the costs and maximises the profit.
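As a rough illustration of the approach, the sketch below minimises a simplified expected warranty cost over the free-replacement length w1, the pro-rata length w2 and a rebate coefficient alpha with a basic genetic algorithm. The Weibull failure law, the one-claim cost approximation and the promotional penalty term are illustrative assumptions, not the paper's cost model.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not from the paper): Weibull(beta, eta) failure
# times, unit replacement cost c, at most one claim per item sold.
beta, eta, c = 2.0, 3.0, 100.0

def F(t):
    # Weibull failure-time CDF
    return 1.0 - np.exp(-(np.asarray(t, dtype=float) / eta) ** beta)

def expected_cost(w1, w2, alpha):
    # Free replacement on [0, w1]: a failure there costs the full price c.
    frw = c * F(w1)
    # Pro-rata rebate on [w1, w1 + w2]: refund decays linearly to zero,
    # scaled by the coefficient alpha of the pro-rata cost function.
    t = np.linspace(w1, w1 + w2, 201)
    y = alpha * c * (1.0 - (t - w1) / max(w2, 1e-9)) * np.gradient(F(t), t)
    prw = float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)  # trapezoid rule
    # Promotional penalty: a short warranty hurts sales (assumed form).
    promo = 50.0 / (1.0 + w1 + w2)
    return frw + prw + promo

def genetic_search(pop_size=60, generations=150):
    lo, hi = np.array([0.1, 0.1, 0.1]), np.array([3.0, 3.0, 1.0])
    pop = rng.uniform(lo, hi, (pop_size, 3))
    for _ in range(generations):
        cost = np.array([expected_cost(*ind) for ind in pop])
        i, j = rng.integers(0, pop_size, (2, pop_size))
        parents = pop[np.where(cost[i] < cost[j], i, j)]   # binary tournament
        mates = parents[rng.permutation(pop_size)]
        mix = rng.random((pop_size, 1))                    # blend crossover
        pop = np.clip(mix * parents + (1.0 - mix) * mates
                      + rng.normal(0.0, 0.05, (pop_size, 3)), lo, hi)
    return pop[np.argmin([expected_cost(*ind) for ind in pop])]

w1, w2, alpha = genetic_search()
print(f"w1 = {w1:.2f}, w2 = {w2:.2f}, alpha = {alpha:.2f}")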

The Current Awareness of Just-In-Time Techniques within the Libyan Textile Private Industry: A Case Study

Almost all Libyan industries (both private and public) have struggled with many difficulties during the past three decades. These problems have had a strongly negative impact on the productivity and capacity utilization of many companies within Libya. This paper studies the current awareness and implementation levels of Just-In-Time (JIT) within the Libyan textile private industry. A survey was conducted using an intensive, detailed questionnaire. The analysis of the survey responses shows that the management body within the surveyed companies has only a modest strategy towards most of the areas considered crucial to any successful implementation of JIT. The results also show variation within the implementation levels of the JIT elements, which range between low and acceptable. The paper also identifies limitations within the investigated areas of this industry, and points to areas where senior managers within the Libyan textile industry should take immediate action in order to achieve effective implementation of JIT within their companies.

Using Mixed Amine Solution for Gas Sweetening

The use of amine mixtures employing methyldiethanolamine (MDEA), monoethanolamine (MEA), and diethanolamine (DEA) has been investigated for a variety of cases using the process simulation program HYSYS. The results show that, at high pressures, amine mixtures have little or no advantage in the cases studied. As the pressure is lowered, it becomes more difficult for MDEA to meet residual gas requirements, and mixtures can usually improve plant performance. Since the CO2 reaction rate with primary and secondary amines is much faster than with MDEA, the addition of small amounts of a primary or secondary amine to an MDEA-based solution should greatly improve the overall reaction rate of CO2 with the amine solution. The addition of MEA caused the CO2 to be absorbed more strongly in the upper portion of the column than with MDEA alone; when the MEA concentration was raised to 11 wt%, the CO2 was almost completely absorbed in the lower portion of the column, and the addition of MEA was most advantageous. Thus, in areas where MDEA cannot meet the residual gas requirements, the use of amine mixtures can usually improve plant performance.

The Benefit of Pseudo-Noise Code Sequences of Different Lengths for Reducing Interference between Users of a CDMA Network

The third generation (3G) of cellular systems adopted spread spectrum as the solution for data transmission in the physical layer. Unlike IS-95 or CDMAOne (spread-spectrum systems of the preceding generation), the new standard, called the Universal Mobile Telecommunications System (UMTS), uses long codes in the downlink. The system is designed for voice communication and data transmission. The downlink is particularly important because of the asymmetry of data demand, i.e., more traffic is downloaded towards the mobiles than uploaded towards the base station. Moreover, the UMTS downlink uses orthogonal spreading with a variable spreading factor (OVSF, for Orthogonal Variable Spreading Factor). This characteristic makes it possible to increase the data rate of one or more users by reducing their spreading factor without changing the spreading factor of the other users. In the current UMTS standard, two techniques have been proposed to increase downlink performance: transmit antenna diversity and space-time codes. These two techniques combat only fading. The receiver proposed for the mobile station is the RAKE receiver, but one can imagine a more sophisticated receiver, able to reduce multi-user interference as well as the impact of coloured noise and narrowband interference. In this context, where users have synchronized long codes with variable spreading factors and the mobile has no knowledge of the other active codes/users, the use of pseudo-noise code sequences of different lengths is presented as one of the most appropriate solutions.
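For concreteness, the sketch below builds the OVSF code tree used in the downlink and checks the orthogonality property it relies on, including between codes of different lengths on disjoint branches; it is a minimal construction, independent of any particular UMTS parameterization.

import numpy as np

def ovsf(sf):
    # OVSF code set for spreading factor sf (a power of two): each code c
    # spawns the children (c, c) and (c, -c) one level down the tree.
    codes = np.array([[1]])
    while codes.shape[1] < sf:
        up, down = np.hstack([codes, codes]), np.hstack([codes, -codes])
        codes = np.empty((2 * codes.shape[0], 2 * codes.shape[1]), dtype=int)
        codes[0::2], codes[1::2] = up, down  # children of row i: rows 2i, 2i+1
    return codes

c4, c8 = ovsf(4), ovsf(8)
print(c8 @ c8.T)    # 8 * identity: codes of equal length are orthogonal
# A SF-4 user and a SF-8 user remain orthogonal over the longer period
# provided the SF-8 code does not descend from the SF-4 code:
print(np.dot(np.tile(c4[0], 2), c8[2]))   # disjoint branches -> 0
print(np.dot(np.tile(c4[0], 2), c8[0]))   # descendant code   -> nonzero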

Causes of Rotor Distortions and Applicable Common Straightening Methods for Turbine Rotors and Shafts

Different problems may cause distortion of a rotor, and hence vibration, which is the most severe form of damage to turbine rotors. Over the years, different techniques have been developed for the straightening of bent rotors. The straightening method can be selected according to initial information from preliminary inspections and tests, such as nondestructive tests, chemical analysis and run-out tests, together with knowledge of the shaft material. This article covers the various causes of excessive bends, and then some common applicable straightening methods are reviewed. Finally, hot spotting is selected for a particular bent rotor. A 325 MW steam turbine rotor is modeled and finite element analyses are performed to investigate this straightening process. Experimental data show that performing the hot-spot straightening process precisely reduced the bending of the rotor significantly.

Diagnosing the Cause and Timing of Changes in the Multivariate Process Mean Vector from Quality Control Charts Using an Artificial Neural Network

Quality control charts are very effective in detecting out-of-control signals, but when a control chart signals an out-of-control condition of the process mean, searching for a special cause in the vicinity of the signal time does not always lead to prompt identification of the source(s) of the condition, since the change point in the process parameter(s) is usually different from the signal time. It is very important for the manufacturer to determine at what point in the past, and in which parameters, the change that caused the signal occurred. Early warning of a process change expedites the search for the special causes and enhances quality at lower cost. In this paper, the quality variables under investigation are assumed to follow a multivariate normal distribution with known mean vector and variance-covariance matrix; the process means after a one-step change are assumed to remain at the new level until the special cause is identified and removed; and only one variable is assumed to change at a time. This research applies an artificial neural network (ANN) to identify the time at which the change occurred and the parameter which caused the change or shift. The performance of the approach was assessed through a computer simulation experiment. The results show that the neural network performs effectively and equally well over the whole range of shift magnitudes considered.
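As a minimal sketch of this idea (the window length, network sizes and identity covariance are assumptions for illustration, not the paper's settings), two networks can be trained on simulated windows that each contain a single sustained mean shift:

import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(1)
p, n = 3, 30                 # 3 quality variables, windows of 30 samples

def make_sample():
    tau = rng.integers(5, n - 5)      # true change point
    var = rng.integers(0, p)          # the one variable that shifts
    x = rng.standard_normal((n, p))   # in-control process: N(0, I)
    x[tau:, var] += rng.uniform(1.0, 3.0)   # sustained one-step mean shift
    return x.ravel(), tau, var

X, taus, shifted = zip(*(make_sample() for _ in range(5000)))
X = np.array(X)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, shifted)
reg = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500).fit(X, taus)

xt, tau, var = make_sample()     # a fresh out-of-control window
print("true change point", tau, "variable", var)
print("estimated       ", round(float(reg.predict([xt])[0])),
      "variable", int(clf.predict([xt])[0]))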

Delay-Distribution-Dependent Stability Criteria for BAM Neural Networks with Time-Varying Delays

This paper is concerned with delay-distribution-dependent stability criteria for bidirectional associative memory (BAM) neural networks with time-varying delays. Based on a Lyapunov-Krasovskii functional and a stochastic analysis approach, a delay-probability-distribution-dependent sufficient condition is derived for the considered BAM neural networks to be globally asymptotically stable in the mean square. The criteria are formulated in terms of a set of linear matrix inequalities (LMIs), which can be checked efficiently using standard numerical packages. Finally, a numerical example and its simulation are given to demonstrate the usefulness and effectiveness of the proposed results.
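A generic form of the model class considered, with notation assumed here for illustration rather than quoted from the paper, is

\[
\dot{x}(t) = -A\,x(t) + W_1 f\big(y(t)\big) + W_2 f\big(y(t-\tau(t))\big) + I,\qquad
\dot{y}(t) = -B\,y(t) + V_1 g\big(x(t)\big) + V_2 g\big(x(t-\sigma(t))\big) + J,
\]

where the time-varying delay \(\tau(t)\) takes values in \([0,\tau_0]\) with probability \(\rho\) and in \((\tau_0,\tau_{\max}]\) with probability \(1-\rho\). Introducing a Bernoulli variable \(\delta(t)\) with \(\Pr\{\delta(t)=1\}=\rho\) turns this delay distribution into model data, and the stochastic Lyapunov-Krasovskii argument then yields mean-square stability conditions in LMI form.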

Environmental Capacity and Sustainability of European Regional Airports: A Case Study

Airport capacity has traditionally been perceived as the number of aircraft operations during a specified time corresponding to a tolerable level of average delay, and it mostly depends on the airside characteristics, on fleet mix variability and on the ATM. The adoption of Directive 2002/30/EC in the EU countries, however, drives stakeholders to conceive of airport capacity in a different way. Airport capacity in this sense is fundamentally driven by environmental criteria, and since acoustical externalities represent the most important factors, they could pose a serious threat to the growth of airports and to the aviation market itself in the short-to-medium term. The importance of regional airports in the deregulated market grew quickly during the last decade, since they serve as spokes for network carriers and as a preferred destination for low-fares carriers. Regional airports have witnessed not only fast and unexpected growth in traffic but also fast growth in complaints about the nuisance from people living near those airports. In this paper, the results of a study conducted in cooperation with the airport of Bologna G. Marconi are presented in order to investigate airport acoustical capacity as a de facto constraint on airport growth.

Complex-Valued Neural Network in Signal Processing: A Study on the Effectiveness of the Complex-Valued Generalized Mean Neuron Model

A complex-valued neural network is a neural network whose inputs and/or weights and/or thresholds and/or activation functions are complex-valued. Complex-valued neural networks have been widening their scope of application, not only in electronics and informatics but also in social systems. One of the most important applications of complex-valued neural networks is signal processing. Among neural network architectures, the generalized mean neuron (GMN) model is often discussed and studied. The GMN includes an aggregation function based on the concept of the generalized mean of all the inputs to the neuron. This paper presents exhaustive results of using the generalized mean neuron model in a complex-valued neural network trained with the back-propagation algorithm (called Complex-BP). Our experimental results demonstrate the effectiveness of the generalized mean neuron model in the complex plane for signal processing, compared with a real-valued neural network. We have studied various factors, such as the effect of the learning rate, the range of the randomly selected initial weights, the error function used, and the number of iterations required for error convergence in the generalized mean neural network model. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.
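To make the aggregation concrete, the sketch below evaluates a single complex-valued GMN neuron; the split-tanh activation and the principal branch of the complex power are illustrative choices, and the back-propagation step is omitted, so this is not the paper's exact formulation.

import numpy as np

rng = np.random.default_rng(0)

def split_tanh(z):
    # Split-complex activation: tanh applied to real and imaginary parts
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def gmn_forward(x, w, r):
    # Generalized-mean aggregation u = (sum_k w_k * x_k**r)**(1/r);
    # r = 1 recovers the ordinary weighted sum of a conventional neuron.
    u = np.sum(w * x ** r) ** (1.0 / r)
    return split_tanh(u)

x = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # complex inputs
w = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # complex weights
for r in (1.0, 2.0, 3.0):
    print(f"r = {r}:", gmn_forward(x, w, r))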

The Resource Description Framework (RDF) as a Modern Structure for Medical Data

The amount and heterogeneity of data in biomedical research, notably in interdisciplinary fields, require new methods for the collection, presentation and analysis of information. Important data from laboratory experiments as well as patient trials are available but reside in distributed resources. The Charité - University Hospital Berlin, together with the German Research Foundation (DFG), has established a new information service centre for kidney diseases and transplantation (Open European Nephrology Science Centre - OpEN.SC). Besides the collaborative aspect of creating new research groups, every partner or institution of this science information centre that makes its own data available is allowed to search the whole data pool of the various participating centres. A core task is the implementation of a non-restricting, open data structure for the various different data sources. We decided to use a modern RDF model, and in a first phase we transformed original data coming from the web-based Electronic Patient Record database TBase©.
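As an illustration of what such a structure looks like in practice, the sketch below expresses a toy patient record as RDF triples with the rdflib library; the namespace and property names are hypothetical and do not reproduce OpEN.SC's actual vocabulary or the TBase© schema.

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# Hypothetical vocabulary for illustration only
OPENSC = Namespace("http://opensc.example.org/schema#")

g = Graph()
g.bind("opensc", OPENSC)

patient = URIRef("http://opensc.example.org/patient/12345")
g.add((patient, RDF.type, OPENSC.Patient))
g.add((patient, OPENSC.diagnosis, Literal("chronic kidney disease")))
g.add((patient, OPENSC.serumCreatinine, Literal("1.8", datatype=XSD.decimal)))

print(g.serialize(format="turtle"))

Because every statement is just a subject-predicate-object triple, records from heterogeneous sources can be merged into one queryable pool without forcing them into a common relational schema.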

Optimum Time Coordination of Overcurrent Relays Using the Two-Phase Simplex Method

Overcurrent (OC) relays are the major protection devices in a distribution system. The operating times of the OC relays must be properly coordinated to avoid mal-operation of the backup relays. OC relay time coordination in ring-fed distribution networks is a highly constrained optimization problem which can be stated as a linear programming problem (LPP). The purpose is to find optimum relay settings that minimize the operating times of the relays while keeping the relays properly coordinated to avoid mal-operation. This paper presents the two-phase simplex method for optimum time coordination of OC relays. The method is based on the simplex algorithm, which is used to find the optimum solution of an LPP. It introduces artificial variables to obtain an initial basic feasible solution (IBFS). The artificial variables are removed in the iterative process of the first phase, which minimizes an auxiliary objective function. The second phase minimizes the original objective function and gives the optimum time coordination of the OC relays.
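A minimal sketch of the LPP structure follows; the relay data and coordination pairs are invented for illustration, and scipy's LP solver stands in for the paper's hand-coded two-phase simplex (both first find a basic feasible solution, then optimize it).

import numpy as np
from scipy.optimize import linprog

# For a fixed fault, a relay's operating time is linear in its time-multiplier
# setting (TMS): t = a * TMS. A relay backing up a remote fault sees a smaller
# current and therefore has a larger coefficient a (assumed numbers).
a_primary = np.array([2.0, 2.5, 3.0])   # s per unit TMS, close-in fault
a_backup  = np.array([6.0, 7.0, 8.0])   # s per unit TMS, remote fault
CTI = 0.3                               # coordination time interval (s)
pairs = [(1, 0), (2, 1), (0, 2)]        # (backup relay, primary relay)

c = a_primary                           # minimize total primary operating time

# Coordination: a_backup[b]*TMS_b - a_primary[p]*TMS_p >= CTI for every pair,
# rewritten as A_ub @ x <= b_ub for linprog.
A_ub = np.zeros((len(pairs), 3))
for k, (b, p) in enumerate(pairs):
    A_ub[k, p] = a_primary[p]
    A_ub[k, b] = -a_backup[b]
b_ub = -CTI * np.ones(len(pairs))

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.025, 1.2)] * 3)
print("optimal TMS:", res.x, "| total primary time:", res.fun)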

Detection of Ultrasonic Images in the Presence of a Random Number of Scatterers: A Statistical Learning Approach

The Support Vector Machine (SVM) is a statistical learning tool that was initially developed by Vapnik in 1979 and later evolved into the more general concept of structural risk minimization (SRM). SVMs are playing an increasing role in detection applications across various engineering problems, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, an SVM was applied to the detection of medical ultrasound images in the presence of partially developed speckle noise. The simulation was carried out for single-look and multi-look speckle models to give a complete overview of, and insight into, the newly proposed SVM-based detector. The structure of the SVM was derived and applied to clinical ultrasound images, and its performance was measured in terms of the mean square error (MSE). We show that the SVM-detected ultrasound images have a very low MSE and are of good quality, and that the quality of the processed speckled images improves for the multi-look model. Furthermore, the contrast of the SVM-detected images was higher than that of the original noise-free images, indicating that the SVM approach increases the distance between the pixel reflectivity levels (the detection hypotheses) in the original images.
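The toy sketch below illustrates the single-look versus multi-look effect with an RBF-kernel SVM on synthetic pixel intensities; it is a stand-in for, not a reproduction of, the clinical images and the exact detector structure of the paper.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def speckle(clean, looks=1):
    # Multiplicative speckle: gamma noise with unit mean; more looks,
    # less variance (the multi-look model).
    return clean * rng.gamma(looks, 1.0 / looks, clean.shape)

# Toy scene: bright target pixels (2.0) on a dark background (0.5)
n = 2000
labels = rng.integers(0, 2, n)
clean = np.where(labels == 1, 2.0, 0.5)

for name, looks in (("single-look", 1), ("multi-look", 4)):
    X = speckle(clean, looks).reshape(-1, 1)
    svm = SVC(kernel="rbf").fit(X[:1000], labels[:1000])
    detected = np.where(svm.predict(X[1000:]) == 1, 2.0, 0.5)
    mse = np.mean((detected - clean[1000:]) ** 2)
    print(name, "MSE:", round(float(mse), 4))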

Biosynthesis and In vitro Studies of Silver Bionanoparticles Synthesized from Aspergillus Species and Their Antimicrobial Activity against Multi-Drug-Resistant Clinical Isolates

Antimicrobial resistance is becoming a major factor in virtually all hospital-acquired infections and may soon make many of them untreatable, which is a serious public health problem. These concerns have led to a major research effort to discover alternative strategies for the treatment of bacterial infections. Nanobiotechnology is an emerging and fast-developing field with potential applications for human welfare. An important area of nanotechnology is the development of reliable and environmentally friendly processes for the synthesis of nanoscale particles through biological systems. The present study reports on the use of the fungal strain Aspergillus species for the extracellular synthesis of bionanoparticles from a 1 mM silver nitrate (AgNO3) solution. The work focuses on the synthesis of metallic silver bionanoparticles through the reduction of aqueous Ag+ ions by the culture supernatants of the microorganism. The bio-reduction of the Ag+ ions in solution was monitored in the aqueous component, and the spectrum of the solution was measured with a UV-visible spectrophotometer. The bionanoscale particles were further characterized by Atomic Force Microscopy (AFM), Fourier Transform Infrared Spectroscopy (FTIR) and thin-layer chromatography. The synthesized bionanoscale particles showed a maximum absorption at 385 nm in the visible region. Atomic Force Microscopy of the silver bionanoparticles showed that they ranged in size from 250 nm to 680 nm. The work also analyzed the antimicrobial efficacy of the silver bionanoparticles against various multi-drug-resistant clinical isolates. The study emphasizes the applicability of synthesizing metallic nanostructures and the need to understand the biochemical and molecular mechanisms of nanoparticle formation by the cell filtrate, in order to achieve better control over the size and polydispersity of the nanoparticles. This would help in developing nanomedicines against various multi-drug-resistant human pathogens.

Pulsed Multi-Layered Image Filtering: A VLSI Implementation

Image convolution emulating the receptive fields found in mammalian visual pathways has long been used in conventional image processing in the form of Gabor masks. However, no VLSI implementation of parallel, multi-layered pulsed processing that emulates this property has been put forward. We present a technical realization of such a pulsed image processing scheme. The discussed IC also serves as a general testbed for VLSI-based pulsed information processing, which is of particular interest with regard to the robustness of representing an analog signal in the phase or duration of a pulsed, quasi-digital signal, as well as the possibility of direct digital manipulation of such an analog signal. The network connectivity and processing properties are reconfigurable, allowing adaptation to various processing tasks.
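For reference, the conventional software counterpart of the operation the chip performs in the pulsed domain is a bank of Gabor-mask convolutions, one mask orientation per processing layer; a minimal version (parameters chosen arbitrarily) looks like this:

import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    # Real-valued Gabor mask: an oriented sinusoid under a Gaussian envelope,
    # the receptive-field model referred to in the text.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
            * np.cos(2.0 * np.pi * xr / wavelength))

image = np.random.default_rng(0).random((64, 64))
for theta in (0.0, np.pi / 4, np.pi / 2):      # one "layer" per orientation
    response = convolve2d(image, gabor_kernel(theta=theta), mode="same")
    print(f"theta = {theta:.2f}, peak response = {response.max():.3f}")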

Application of Artificial Neural Network for Predicting Maintainability Using Object-Oriented Metrics

The importance of software quality is increasing, leading to the development of new, sophisticated techniques which can be used to construct models for predicting quality attributes. One such technique is the Artificial Neural Network (ANN). This paper examines the application of ANNs to software quality prediction using Object-Oriented (OO) metrics. Quality estimation here means estimating the maintainability of software. The dependent variable in our study was maintenance effort; the independent variables were principal components of eight OO metrics. The results showed that the Mean Absolute Relative Error (MARE) of the ANN model was 0.265. We therefore found the ANN method useful for constructing software quality models.
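A minimal sketch of the modelling pipeline on synthetic stand-in data follows; the paper's actual metric suite, dataset and network configuration are not reproduced here.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in data: 200 classes described by eight OO metrics and a
# maintenance-effort value (a noisy linear combination, for illustration).
X = rng.random((200, 8))
effort = X @ rng.uniform(0.5, 1.5, 8) + 0.1 * rng.standard_normal(200)

Xs = StandardScaler().fit_transform(X)
pcs = PCA(n_components=4).fit_transform(Xs)   # principal components as inputs

ann = MLPRegressor(hidden_layer_sizes=(9,), max_iter=2000)
ann.fit(pcs[:150], effort[:150])
pred = ann.predict(pcs[150:])

# Mean Absolute Relative Error, the metric reported in the abstract
mare = np.mean(np.abs(pred - effort[150:]) / np.abs(effort[150:]))
print("MARE:", round(float(mare), 3))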

Site Inspection and Evaluation of the Behavior of the Qing Shan Concrete Bridge

It is often necessary to evaluate the condition of bridges and to strengthen them or parts of them. The reasons why reinforcement may be necessary can be summarized as follows. First, a change in the use of a bridge can produce internal forces in part of the structure which exceed the existing cross-sectional capacity. Second, bridges may need reinforcement because of damage due to external factors which has reduced the cross-sectional resistance to external loads. Another factor is misdesign of some details, affecting the safety of the bridge or parts of it. This article identifies the design demands of the Qing Shan bridge, located on the Hegang - Nenjiang Road (provincial highway 303) in the Wudalianchi area of Heilongjiang Province, China; it is an important bridge in the urban area. The investigation program included the observation and evaluation of damage in the T-section concrete beams and the prestressed concrete box girder sections, in addition to evaluating the overall state of the bridge, including the piers, abutments, bridge decks, wings, bearings, capping beams, joints, etc. The test results show that the general structural condition of the bridge is good. In T-beam span No. 10, a crack was observed extending upward along the ribbed T-beam and continuing into the T-beam flange, with widths varying between 0.1 mm and 0.4 mm and a maximum of about 0.4 mm. The flexural bending strength of the bridge needs to be improved, especially for the T-beam section.

Optimization of the Characteristic Straight Line Method by a "Best Estimate" of Observed, Normal Orthometric Elevation Differences

In this paper, to optimize the "Characteristic Straight Line Method", which is used in soil displacement analysis, a "best estimate" of the geodetic leveling observations has been achieved by taking into account the concept of height systems. This concept, and consequently the concept of "height" itself, is discussed in detail. In landslide dynamic analysis, the soil is considered as a mosaic of rigid blocks. The soil displacement has been monitored and analyzed using the "Characteristic Straight Line Method", whose characteristic components have been defined and constructed from a "best estimate" of the topometric observations. In the measurement of elevation differences, we used the most modern leveling equipment available, and observational procedures were designed to provide the most effective method of acquiring data. In addition, systematic errors which cannot be sufficiently controlled by instrumentation or observational techniques are minimized by applying appropriate corrections to the observed data: the level collimation correction minimizes the error caused by non-horizontality of the leveling instrument's line of sight for unequal sight lengths; the refraction correction is modeled to minimize the refraction error caused by temperature (density) variation of the air strata; the rod temperature correction accounts for variation in the length of the leveling rod's Invar/LO-VAR® strip resulting from temperature changes; the rod scale correction ensures a uniform scale which conforms to the international length standard; and the concept of height systems is introduced, whereby all types of height (orthometric, dynamic, normal, gravity correction, and equipotential surface) have been investigated. The "Characteristic Straight Line Method" is slightly more convenient than the "Characteristic Circle Method": it permits the evaluation of a displacement of very small, even infinitesimal, magnitude. The inclination of the landslide is given by the inverse of the distance from the reference point O to the "Characteristic Straight Line", and its direction is given by the bearing of the normal directed from point O to that line (Fig. 6). A "best estimate" of the topometric observations was used to measure the elevation of carefully selected points before and after the deformation. Gross errors were eliminated by statistical analyses and by comparing the heights within local neighborhoods. The results of a test in an area where very interesting land surface deformation occurs are reported. Monitoring with different options and a qualitative comparison of results, based on a sufficient number of check points, are presented.
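Illustrative standard forms of the simpler corrections mentioned above, with notation assumed here rather than quoted from the paper, are

\[
C_{\mathrm{collim}} = c\,(s_B - s_F),\qquad
C_{\mathrm{temp}} = \alpha\,(T_m - T_0)\,\Delta h,\qquad
C_{\mathrm{scale}} = (k - 1)\,\Delta h,
\]

where \(c\) is the residual collimation angle, \(s_B\) and \(s_F\) are the backsight and foresight lengths, \(\alpha\) is the thermal expansion coefficient of the Invar strip, \(T_m\) the mean rod temperature, \(T_0\) the calibration temperature, \(k\) the calibrated rod scale factor, and \(\Delta h\) the observed elevation difference; the refraction correction has no comparably simple closed form, being modelled from the vertical temperature gradient of the air along each sight line.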

Estimation of Time-Varying Linear Regression with Unknown Time-Volatility via Continuous Generalization of the Akaike Information Criterion

The problem of estimating time-varying regression inevitably involves choosing the appropriate level of model volatility, ranging from the full stationarity of instant regression models to their absolute independence of each other. In the stationary case the number of regression coefficients to be estimated equals that of the regressors, whereas the absence of any smoothness assumptions augments the dimension of the unknown vector by the factor of the time-series length. The Akaike Information Criterion is a commonly adopted means of adjusting a model to the given data set within a succession of nested parametric model classes, but its crucial restriction is that the classes are rigidly defined by the growing integer-valued dimension of the unknown vector. To make the Kullback information maximization principle underlying the classical AIC applicable to the problem of time-varying regression estimation, we extend it to a wider class of data models in which the dimension of the parameter is fixed, but the freedom of its values is softly constrained by a family of continuously nested a priori probability distributions.
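In symbols (notation assumed for illustration), the model family can be written as

\[
y_t = \mathbf{x}_t^{\top}\boldsymbol{\beta}_t + \varepsilon_t,\qquad
\boldsymbol{\beta}_t = \boldsymbol{\beta}_{t-1} + \boldsymbol{\xi}_t,\quad
\boldsymbol{\xi}_t \sim \mathcal{N}\big(\mathbf{0},\,\lambda^{-1}I\big),\qquad t=1,\dots,N,
\]

where \(\lambda\to\infty\) recovers the fully stationary regression and \(\lambda\to 0\) leaves the instant models mutually unconstrained; the proposed continuous generalization of the AIC selects the volatility parameter \(\lambda\) by maximizing a penalized likelihood over this continuously nested family of priors rather than over integer-valued model dimensions.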

Screening Wheat Parents of a Mapping Population for Heat and Drought Tolerance: Detection of Wheat Genetic Variation

To evaluate the genetic variation of wheat (Triticum aestivum) affected by heat and drought stress, eight Australian wheat genotypes that are parents of Doubled Haploid (DH) mapping populations were studied at the vegetative stage. The water stress experiment was conducted at 65% field capacity in a growth room; the heat stress experiment was conducted in the research field under irrigation over summer. Results show that water stress decreased dry shoot weight and RWC but increased osmolarity and mean Fv/Fm values in all varieties except Krichauff. Krichauff and Kukri had the maximum RWC under drought stress. The Trident variety showed the maximum WUE, osmolarity (610 mM/kg), dry matter, quantum yield and Fv/Fm (0.815) under water stress conditions. The recovery of quantum yield was apparent between 4 and 7 days after stress in all varieties; nevertheless, a further increase in water stress led to a strong decrease in quantum yield. There was genetic variation in leaf pigment content among varieties under heat stress. Heat stress significantly decreased the total chlorophyll content as measured by SPAD. Krichauff had the maximum anthocyanin content (2.978 A/g FW), chlorophyll a+b (2.001 mg/g FW) and chlorophyll a (1.502 mg/g FW). The maximum chlorophyll b (0.515 mg/g FW) and carotenoid (0.234 mg/g FW) contents belonged to Kukri. The quantum yield of all varieties decreased significantly when the air temperature increased from 28 °C to 36 °C over 6 days; however, recovery of quantum yield was apparent after the 8th day in all varieties. The maximum decrease and recovery in quantum yield were observed in Krichauff. The drought- and heat-tolerant and moderately tolerant wheat genotypes were Trident, Krichauff, Kukri and RAC875. Molineux, Berkut and Excalibur clustered into the most sensitive and moderately sensitive genotypes. Finally, the results show that there was significant genetic variation among the eight varieties studied under heat and water stress.

MDA of Hexagonal Honeycomb Plates Used for Space Applications

The purpose of this paper is to perform a multidisciplinary design and analysis (MDA) of the honeycomb panels used in satellite structural design. All the analyses are based on clamped-free boundary conditions. In the present work, detailed finite element models of honeycomb panels are developed and analysed. Experimental tests were carried out on a honeycomb specimen, the goal being to compare the modal analysis obtained by the finite element method, as well as by the existing equivalent approaches, with measurements. The obtained results show good agreement between the finite element analysis, the equivalent approach and the test results; the difference in the first two frequencies is less than 4%, and less than 10% for the third frequency. The results of the equivalent model presented in this analysis are thus obtained with good accuracy. Moreover, the investigations carried out in this research concern honeycomb plate modal analysis under several aspects, including geometrical variation of the structure, by studying the influence of the dimensional parameters on the modal frequencies, and variation of the core and skin materials of the honeycomb. The various results obtained in this paper are promising and show that the geometric parameters and the type of material affect the modal frequencies of the honeycomb plate.