Prioritizing Service Quality Dimensions: A Neural Network Approach

One of the determinants of a firm's prosperity is customers' perceived service quality and satisfaction. While service quality is wide in scope and consists of various dimensions, these dimensions may differ in their relative importance for customers' overall satisfaction with service quality. Identifying the relative rank of the different dimensions of service quality is very important, as it can help managers find out which service dimensions have a greater effect on customers' overall satisfaction. Such an insight leads in turn to more effective resource allocation and ultimately to higher levels of customer satisfaction. Despite its criticality, this issue has not received enough attention so far. Therefore, using a sample of 240 bank customers in Iran, an artificial neural network is developed to address this gap in the literature. As customers' evaluation of service quality is a subjective process, artificial neural networks, as a brain metaphor, appear well suited to model such a complicated process. Proposing a neural network that is able to predict customers' overall satisfaction with service quality with a promising level of accuracy is the first contribution of this study. Prioritizing the service quality dimensions by their effect on customers' overall satisfaction, using sensitivity analysis of the neural network, is the second important finding of this paper.
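
To make the sensitivity-analysis step concrete, here is a minimal sketch of perturbation-based importance ranking on a trained network, assuming a scikit-learn MLPRegressor as a stand-in for the paper's network; the five dimensions, their weights, and the synthetic ratings are illustrative, not the study's data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical data: 240 customers rating 5 illustrative service quality
# dimensions (1-5 Likert scale) plus an overall satisfaction score.
rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(240, 5)).astype(float)
y = X @ np.array([0.35, 0.10, 0.25, 0.20, 0.10]) + rng.normal(0, 0.3, 240)

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X, y)

# Perturbation-based sensitivity: nudge one input at a time and measure
# the mean absolute change in the predicted overall satisfaction.
base = net.predict(X)
sensitivity = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] += X[:, j].std()          # perturb dimension j by one std dev
    sensitivity.append(np.mean(np.abs(net.predict(Xp) - base)))

ranking = np.argsort(sensitivity)[::-1]  # most influential dimension first
print("dimension priority (most to least influential):", ranking)
```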

A New Spectral-based Approach to Query-by-Humming for MP3 Songs Database

In this paper, we propose a new approach to query-by-humming, focusing on an MP3 song database. Since melody representation is much more difficult for MP3 songs than for symbolic performance data, we choose to extract feature descriptors from the vocal part of the songs. Our approach is based on signal filtering, sub-band spectral processing, MDCT coefficient analysis, and peak energy detection, while ignoring the background music as much as possible. Finally, we apply a dual dynamic programming algorithm for feature similarity matching. Experiments demonstrate its online performance in terms of precision and efficiency.
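
The abstract does not spell out the dual dynamic programming matcher, so the sketch below shows the standard single dynamic-time-warping (DTW) alignment that such matchers build on, applied to toy pitch contours; in the actual system the contours would come from MDCT-domain peak energies of the vocal band.

```python
import numpy as np

def dtw_distance(query, candidate):
    """Dynamic time warping distance between two 1-D feature sequences
    (e.g. pitch contours of the hummed query and a candidate song)."""
    n, m = len(query), len(candidate)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(query[i - 1] - candidate[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# Toy contours in semitones; real contours would be extracted features.
hum  = np.array([60, 62, 64, 62, 60, 57], dtype=float)
song = np.array([60, 60, 62, 64, 64, 62, 60, 57], dtype=float)
print("DTW distance:", dtw_distance(hum, song))
```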

Electronic System Design for Respiratory Signal Processing

This paper presents the design of an electronic system for respiratory signal processing, covering the processing phase, followed by the transmission and reception of this signal, and finally its display. The processing of this signal is added to that of the ECG and temperature signals implemented last year. Under this scheme, it is proposed that the blood pressure signal also be conditioned in the future on the same final printed circuit.

Data-Oriented Model of Image as a Framework for Image Processing

This paper presents a new data-oriented model of the image. A representation of this model, the ADBT, is then introduced. The ADBT supports clustering, segmentation, similarity measurement of images, and related operations, with the desired precision and a corresponding speed.

Active Contours with Prior Corner Detection

Deformable active contours are widely used in computer vision and image processing applications for image segmentation, especially in biomedical image analysis. The active contour, or "snake", deforms towards a target object under the control of internal, image, and constraint forces. However, if the contour is initialized with a small number of control points, there is a high probability of surpassing the sharp corners of the object during deformation. In this paper, a new technique is proposed to construct the initial contour by incorporating prior knowledge of significant corners of the object detected using the Harris operator. This reconstructed contour then deforms, attracting the snake towards the targeted object without missing the corners. Experimental results with several synthetic images show the ability of the new technique to deal with sharp corners with higher accuracy than traditional methods.
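
As an illustration of the corner-detection step, here is a minimal sketch using OpenCV's Harris operator to collect candidate control points for the initial contour; the file name, threshold, and operator parameters are illustrative assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("shape.png")          # placeholder synthetic test image
gray = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))

# Harris response; blockSize, ksize and k are common default settings.
response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

# Keep the strongest responses as mandatory control points for the snake.
ys, xs = np.where(response > 0.01 * response.max())
corner_points = list(zip(xs.tolist(), ys.tolist()))

# A coarse initial contour (e.g. a circle around the object) would then be
# augmented with these corner points before deformation begins.
print(f"{len(corner_points)} corner control points detected")
```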

Three-dimensional Finite Element Analysis of the Front Cross Member of the Peugeot 405

Undoubtedly, the chassis is one of the most important parts of a vehicle. The chassis produced for today's vehicles are made up of four parts, which are joined together by screws; the transverse parts are called cross members. This study reviews the stress generated by cyclic laboratory loads in the front cross member of the Peugeot 405. The finite element method is used to simulate the welding process and to determine the physical response of the spot-welded joints, with the analysis carried out in the Abaqus software. The stresses generated in the cross member structure fall into two groups: the stresses that remain as residual stresses after the welding process, and the mechanical stresses generated by the cyclic load. Accordingly, the total stress must be obtained by determining the residual stress and the mechanical stress separately and then summing them according to the superposition principle. To improve accuracy, the material properties, including physical, thermal, and mechanical properties, were assumed to be temperature-dependent. The simulation shows that the maximum von Mises stresses are located at specific points. The model results are then compared to the experimental results reported by the manufacturing plant, and good agreement is observed.
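
A minimal sketch of the superposition step described above: the residual and mechanical stress tensors are summed component-wise before the equivalent von Mises stress is evaluated. The tensor values are illustrative, not results from the paper.

```python
import numpy as np

def von_mises(s):
    """Equivalent von Mises stress from a 3x3 Cauchy stress tensor (MPa)."""
    sx, sy, sz = s[0, 0], s[1, 1], s[2, 2]
    txy, tyz, tzx = s[0, 1], s[1, 2], s[2, 0]
    return np.sqrt(0.5 * ((sx - sy)**2 + (sy - sz)**2 + (sz - sx)**2)
                   + 3.0 * (txy**2 + tyz**2 + tzx**2))

# Illustrative tensors at one weld-adjacent node (MPa).
residual   = np.array([[120, 30, 0], [30, 80, 0], [0, 0, 40]], dtype=float)
mechanical = np.array([[60, 10, 0], [10, 90, 5], [0, 5, 20]], dtype=float)

# Superposition principle: total stress = residual + cyclic mechanical stress.
total = residual + mechanical
print(f"von Mises (total): {von_mises(total):.1f} MPa")
```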

Rheological Modeling for Production of High Quality Polymeric Articles

The fundamental defect inherent in thermoforming technology is the wall-thickness variation of the products due to inadequate thermal processing during production. A nonlinear viscoelastic rheological model is implemented to develop the process model, which describes the deformation of a sheet during the thermoforming process. Owing to the relaxation pause after the plug-assist stage, together with the implementation of a two-stage thermoforming process, the products show minor wall-thickness variation and consequently better mechanical properties. For model validation, a comparative analysis of the theoretical and experimental data is presented.
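
The abstract does not give the paper's nonlinear constitutive law; as a stand-in, the sketch below integrates a single linear Maxwell element, the simplest viscoelastic building block, to show how sheet stress decays during the relaxation pause after the plug-assist stage. All parameter values are illustrative.

```python
# Single Maxwell element: d(sigma)/dt = E * d(eps)/dt - sigma / lam,
# where lam = eta / E is the relaxation time.
E, lam = 2.0e6, 1.5          # modulus (Pa) and relaxation time (s), illustrative
dt, t_end = 1e-3, 5.0

sigma = 1.0e5                # stress at the end of the plug-assist stage (Pa)
eps_rate = 0.0               # sheet is held fixed during the relaxation pause

t = 0.0
while t < t_end:
    sigma += dt * (E * eps_rate - sigma / lam)   # explicit Euler step
    t += dt

print(f"stress after {t_end:.0f} s pause: {sigma:.0f} Pa")
# Expect roughly sigma0 * exp(-t/lam) = 1e5 * exp(-5/1.5), about 3.6e3 Pa.
```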

On the Dynamic Model of Service Innovation in Manufacturing Industry

As manufacturing increasingly comes to depend on services, products and processes are more and more intertwined with sophisticated services. This research therefore starts with a discussion of the integration of product, process, and service in the innovation process. In particular, this paper sets out some foundations for a theory of service innovation in the field of manufacturing and proposes a dynamic model of service innovation related to product and process. Two dynamic models of service innovation, co-innovation and sequential innovation, are suggested to investigate major tendencies and dynamic variations during the innovation cycle. To structure the dynamic models of product, process, and service innovation, the innovation stages in which the two models are mainly realized are identified. This research should encourage manufacturers to formulate strategy and planning for service development together with product and process.

Modeling and Analysis of a Simple Open Cycle Gas Turbine Using Graph Networks

This paper presents a unified approach based on graph theory and system theory postulates for the modeling and analysis of a simple open-cycle gas turbine system. The system is modeled down to its subsystem level, and system variables are identified to develop the process subgraphs. Theorems and algorithms of graph theory are used to represent behavioural properties of the system, such as heat and work transfer rates, pressure drops, and temperature drops in the processes involved. The processes are represented as edges of the process subgraphs and their limits as the vertices. The across and through variables of the system are used to develop the terminal equations of the process subgraphs. The set of equations developed for the vertices and edges of the network graph is then solved for the process variables of the system.
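
One plausible way to encode such a process subgraph is sketched below: processes are edges, their limits are vertices carrying the across variables (pressure, temperature), and the mass flow acts as the through variable in a simple terminal equation. The state values are illustrative, not the paper's.

```python
# Vertices: thermodynamic states at the limits of each process.
# Edges: the processes of the simple open-cycle gas turbine.
vertices = {
    1: {"p_kPa": 101.3, "T_K": 288.0},   # compressor inlet
    2: {"p_kPa": 810.0, "T_K": 560.0},   # compressor outlet / combustor inlet
    3: {"p_kPa": 790.0, "T_K": 1400.0},  # combustor outlet / turbine inlet
    4: {"p_kPa": 101.3, "T_K": 830.0},   # turbine exhaust
}
edges = [  # (tail vertex, head vertex, process name)
    (1, 2, "compression"),
    (2, 3, "heat addition"),
    (3, 4, "expansion"),
]

cp, m_dot = 1.005, 20.0  # kJ/(kg K) and kg/s (through variable), illustrative

for a, b, name in edges:
    dT = vertices[b]["T_K"] - vertices[a]["T_K"]
    dp = vertices[b]["p_kPa"] - vertices[a]["p_kPa"]
    rate = m_dot * cp * dT  # kW; terminal equation for this process edge
    print(f"{name:13s}: dT={dT:+7.1f} K, dp={dp:+7.1f} kPa, "
          f"energy rate={rate:+9.1f} kW")
```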

Fast Painting with Different Colors Using Cross Correlation in the Frequency Domain

In this paper, a new technique for fast painting with different colors is presented. The idea of painting relies on applying masks with different colors to the background. Fast painting is achieved by applying these masks in the frequency domain instead of the spatial (time) domain. New colors can be generated automatically as a result of the cross correlation operation. This idea has been applied successfully to faster detection of specific data (faces, objects, patterns, and codes) using neural algorithms. Here, instead of performing cross correlation between the input data (e.g., an image or a stream of sequential data) and the weights of neural networks, the cross correlation is performed between the colored masks and the background. Furthermore, the approach is developed to reduce the computation steps required by the painting operation. The principle of the divide and conquer strategy is applied through background decomposition: each background is divided into small sub-backgrounds, and each sub-background is then processed separately using a single fast painting algorithm. Moreover, the fastest painting is achieved by using parallel processing techniques to paint the resulting sub-backgrounds with the same number of fast painting algorithms. In contrast to using only the fast painting algorithm, the speed-up ratio increases with the size of the background when the fast painting algorithm is combined with background decomposition. Simulation results show that painting in the frequency domain is faster than painting in the spatial domain.
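
A minimal sketch of the frequency-domain painting idea, assuming SciPy is available: cross-correlating a mask with the background is computed as FFT-based convolution with the flipped mask, and the divide-and-conquer variant simply repeats this on sub-backgrounds (ignoring the border overlap a full implementation would handle).

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)
background = rng.random((512, 512))   # one channel of an illustrative background
mask = rng.random((32, 32))           # one channel of an illustrative color mask

# Cross-correlation in the frequency domain: correlating with `mask` equals
# convolving with the flipped mask, which fftconvolve computes via the FFT.
painted = fftconvolve(background, mask[::-1, ::-1], mode="same")

# Divide-and-conquer variant: split into sub-backgrounds and paint each one
# separately (these calls could run in parallel), then reassemble.  A full
# implementation would overlap the seams by the mask size to avoid edge error.
top, bottom = np.vsplit(background, 2)
painted_split = np.vstack([
    fftconvolve(top, mask[::-1, ::-1], mode="same"),
    fftconvolve(bottom, mask[::-1, ::-1], mode="same"),
])
print(painted.shape, painted_split.shape)
```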

An Optimization Analysis on an Automotive Component with Fatigue Constraint Using HyperWorks Software for Environmental Sustainability

The finite element analysis (FEA) software HyperWorks is utilized in re-designing an automotive component to reduce its mass. Reduction of component mass contributes towards environmental sustainability by saving the world's valuable metal resources and by reducing carbon emissions through improved overall vehicle fuel efficiency. A shape optimization analysis was performed on a rear spindle component. Pre-processing and solving were performed using HyperMesh and RADIOSS, respectively, and shape variables were defined using HyperMorph. The optimization solver OptiStruct was then utilized with fatigue life set as a design constraint. Since stress-life (S-N) theory deals with uniaxial stress, the signed von Mises stress on the component was used to look up damage on the S-N curve, with the Gerber criterion applied for mean stress correction. The optimization analysis resulted in a mass reduction of 24% of the original mass. The study showed that the adopted approach has high potential for use in environmental sustainability.
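
To illustrate the fatigue setup, here is a small sketch of the Gerber mean stress correction feeding a Basquin-type S-N life estimate; the stress cycle, ultimate strength, and S-N coefficients are illustrative assumptions, not values from the study.

```python
def gerber_equivalent_amplitude(sigma_a, sigma_m, sigma_u):
    """Fully reversed stress amplitude per the Gerber criterion:
    sigma_a / sigma_ar + (sigma_m / sigma_u)**2 = 1."""
    return sigma_a / (1.0 - (sigma_m / sigma_u) ** 2)

def basquin_life(sigma_ar, sigma_f=900.0, b=-0.09):
    """Cycles to failure from a Basquin-type S-N curve:
    sigma_ar = sigma_f * (2N)**b. Coefficients are illustrative."""
    return 0.5 * (sigma_ar / sigma_f) ** (1.0 / b)

# Illustrative signed von Mises stress cycle at the critical spindle location.
sigma_max, sigma_min, sigma_u = 320.0, 40.0, 600.0   # MPa
sigma_a = 0.5 * (sigma_max - sigma_min)              # stress amplitude
sigma_m = 0.5 * (sigma_max + sigma_min)              # mean stress

sigma_ar = gerber_equivalent_amplitude(sigma_a, sigma_m, sigma_u)
print(f"equivalent amplitude: {sigma_ar:.1f} MPa, "
      f"life: {basquin_life(sigma_ar):.2e} cycles")
```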

Analysis of Delay and Throughput in MANET for DSR Protocol

A wireless ad-hoc network consists of wireless nodes communicating without the need for a centralized administration, in which all nodes potentially contribute to the routing process. In this paper, we report the simulation results of four different scenarios for wireless ad-hoc networks with thirty nodes. The performance of the proposed networks is evaluated in terms of number of hops per route, delay, and throughput with the help of the OPNET simulator. A channel speed of 1 Mbps and a simulation time of 600 simulated seconds were used for all scenarios, and the DSR routing protocol was used throughout the analysis. The throughput obtained for the four scenarios is compared in Figure 3, and the average media access delay at node_20 for two routes and for the four different scenarios is compared in Figures 4 and 5. It is observed that the throughput degrades when different hop counts are followed for the same source-destination pair: it drops from 1.55 Mbps to 1.43 Mbps, a reduction of around 7.7%, and then to 0.48 Mbps, roughly a third of the previous value.

Self-Organizing Analysis Platform for Wear Particles

The main focus of this paper is the integration of system process information obtained through an image processing system with an evolving knowledge database to improve the accuracy and predictability of wear particle analysis. The objective is to intelligently automate the wear particle analysis process using classification via self-organizing maps. This is achieved using relationship measurements among corresponding attributes of various wear particle measurements. Finally, a visualization technique is proposed that helps the viewer understand and utilize these relationships, enabling accurate diagnostics.
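
A minimal NumPy sketch of the classification step: a small self-organizing map trained on hypothetical wear particle attributes (size, aspect ratio, edge roughness), with a particle's cluster given by its best matching unit on the map.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical wear particle attributes: [size, aspect ratio, edge roughness].
data = rng.random((200, 3))

grid = (6, 6)                                  # 6x6 map of weight vectors
weights = rng.random(grid + (data.shape[1],))
coords = np.stack(np.meshgrid(range(grid[0]), range(grid[1]),
                              indexing="ij"), axis=-1)

steps = 2000
for t in range(steps):
    lr = 0.5 * (1 - t / steps)                 # decaying learning rate
    radius = 0.5 + 2.5 * (1 - t / steps)       # decaying neighbourhood radius
    x = data[rng.integers(len(data))]
    bmu = np.unravel_index(np.linalg.norm(weights - x, axis=-1).argmin(), grid)
    # Gaussian neighbourhood pulls map nodes near the winner toward the sample.
    g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
               / (2 * radius ** 2))
    weights += lr * g[..., None] * (x - weights)

# A new particle is labelled by its best matching unit (map cluster).
sample = data[0]
bmu = np.unravel_index(np.linalg.norm(weights - sample, axis=-1).argmin(), grid)
print("best matching unit:", bmu)
```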

Building Relationship Network for Machine Analysis from Wear Debris Measurements

The main focus of this paper is the integration of system process information obtained through an image processing system with an evolving knowledge database to improve the accuracy and predictability of wear debris analysis. The objective is to intelligently automate the wear debris analysis process using classification via self-organizing maps. This is achieved using relationship measurements among corresponding attributes of various wear debris measurements. Finally, a visualization technique is proposed that helps the viewer understand and utilize these relationships, enabling accurate diagnostics.

Bridging Quantitative and Qualitative Glaucoma Detection

Glaucoma diagnosis involves extracting three features from the fundus image: the optic cup, the optic disc, and the vasculature. Manual diagnosis is currently expensive, tedious, and time-consuming. A number of studies have been conducted to automate this process, but the variability between the diagnostic capability of an automated system and that of an ophthalmologist has yet to be established. This paper discusses the efficiency of, and the variability between, ophthalmologist opinion and a digital thresholding technique. The efficiency and variability measures are based on image quality grading: poor, satisfactory, or good. The images are separated into four channels: gray, red, green, and blue. A scientific investigation was conducted with three ophthalmologists, who graded the images based on image quality. The images were then thresholded using multi-thresholding and graded in the same manner as by the ophthalmologists, and the two sets of grades were compared. The results show only a small variability between the ophthalmologists' results and the digital thresholding technique.
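
A sketch of the digital side of this comparison, assuming scikit-image is available: multi-Otsu thresholding applied per channel, with two thresholds splitting each channel into three intensity regions. The file name is a placeholder, and mapping the regions to grades or anatomical structures is a simplification of the paper's procedure.

```python
import numpy as np
from skimage import io, color
from skimage.filters import threshold_multiotsu

fundus = io.imread("fundus.png")[..., :3]   # placeholder file; keep RGB only
channels = {
    "gray":  color.rgb2gray(fundus),
    "red":   fundus[..., 0],
    "green": fundus[..., 1],
    "blue":  fundus[..., 2],
}

for name, img in channels.items():
    # Two Otsu thresholds split each channel into three intensity regions.
    thresholds = threshold_multiotsu(np.asarray(img), classes=3)
    regions = np.digitize(img, bins=thresholds)
    print(name, "thresholds:", thresholds,
          "region pixel counts:", np.bincount(regions.ravel()))
```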

Algebraic Approach for the Reconstruction of Linear and Convolutional Error Correcting Codes

In this paper we present a generic approach to the problem of blind estimation of the parameters of linear and convolutional error correcting codes. In a non-cooperative context, an adversary has access only to the noisy transmission he has intercepted and has no knowledge of the parameters used by the legitimate users. So, before gaining access to the information, he first has to blindly estimate the parameters of the error correcting code of the communication. The main advantage of the presented approach is that the problem of reconstructing such codes can be expressed in a very simple way. This allows us to evaluate theoretical bounds on the complexity of the reconstruction process as well as bounds on the estimation rate. We show that some classical reconstruction techniques are optimal, and we also explain why some of them have theoretical complexities greater than those observed experimentally.
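
A common building block in blind reconstruction of linear block codes is sketched below: the intercepted bitstream is reshaped into matrices of candidate widths, and a rank deficiency over GF(2) appears when the width matches the code length n, since codewords satisfy n - k parity relations. This is the textbook rank test, not necessarily the paper's exact procedure; a noiseless (7,4) Hamming stream is used for illustration.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rows, cols = M.shape
    rank = 0
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]      # move pivot row up
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]                  # eliminate column c
        rank += 1
    return rank

# Noiseless stream of (7,4) Hamming codewords as the intercepted bits.
G = np.array([[1,0,0,0,1,1,0],[0,1,0,0,0,1,1],
              [0,0,1,0,1,1,1],[0,0,0,1,1,0,1]])
rng = np.random.default_rng(0)
stream = (rng.integers(0, 2, (200, 4)) @ G % 2).ravel()

# Rank deficiency shows up only when the matrix width equals n = 7.
for n in range(3, 12):
    rows = len(stream) // n
    M = stream[:rows * n].reshape(rows, n)
    print(f"width {n:2d}: rank {gf2_rank(M)} / {n}")
```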

Efficient System for Speech Recognition using General Regression Neural Network

In this paper we present an efficient system for speaker-independent speech recognition based on a neural network approach. The proposed architecture comprises two phases: a preprocessing phase, which consists of segmental normalization and feature extraction, and a classification phase, which uses a neural network based on nonparametric density estimation, namely the general regression neural network (GRNN). The performance of the proposed model is compared to similar recognition systems based on the Multilayer Perceptron (MLP), the Recurrent Neural Network (RNN), and the well-known discrete Hidden Markov Model (HMM-VQ), which we have also implemented. Experimental results obtained with Arabic digits show that the use of nonparametric density estimation with an appropriate smoothing factor (spread) improves the generalization power of the neural network, and the word error rate (WER) is reduced significantly over the baseline HMM method. GRNN computation is thus a successful alternative to the other neural networks and the DHMM.
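
A minimal sketch of the GRNN itself, which is Nadaraya-Watson kernel regression with the spread as the smoothing factor; one-hot targets make the output approximate class posteriors. The 2-D toy features stand in for the real speech feature vectors.

```python
import numpy as np

def grnn_predict(X_train, Y_train, x, spread=0.5):
    """General regression neural network: a kernel-weighted average of the
    training targets (Nadaraya-Watson). `spread` is the smoothing factor."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * spread ** 2))
    return w @ Y_train / w.sum()

# Toy setup: 2-D feature vectors for three spoken-digit classes, with
# one-hot targets so the output approximates class posteriors.
rng = np.random.default_rng(0)
centers = np.array([[0, 0], [3, 0], [0, 3]], dtype=float)
X = np.vstack([c + rng.normal(0, 0.4, (30, 2)) for c in centers])
Y = np.repeat(np.eye(3), 30, axis=0)

x_new = np.array([2.8, 0.2])
posteriors = grnn_predict(X, Y, x_new, spread=0.5)
print("class scores:", posteriors.round(3), "-> digit", posteriors.argmax())
```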

Performance Evaluation of Powder Metallurgy Electrode in Electrical Discharge Machining of AISI D2 Steel Using Taguchi Method

In this paper an attempt has been made to assess the usefulness of electrodes made through powder metallurgy (PM) in comparison with a conventional copper electrode during electric discharge machining. Experimental results are presented for the electric discharge machining of AISI D2 steel in kerosene with a copper-tungsten (30% Cu and 70% W) tool electrode made through the PM technique and a Cu electrode. An L18 (2¹ × 3⁷) orthogonal array of the Taguchi methodology was used to identify the effect of the process input factors (viz. current, duty cycle, and flushing pressure) on the output factors, viz. material removal rate (MRR) and surface roughness (SR). It was found that the CuW electrode (made through PM) gives a better surface finish, whereas the Cu electrode is better for a higher material removal rate.
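
For reference, the Taguchi S/N ratios used for the two response types are sketched below: larger-the-better for MRR and smaller-the-better for SR. The replicated measurements are illustrative, not the study's data.

```python
import numpy as np

def sn_larger_better(y):
    """Taguchi S/N ratio for larger-the-better responses (e.g. MRR)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

def sn_smaller_better(y):
    """Taguchi S/N ratio for smaller-the-better responses (e.g. SR)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Illustrative replicated measurements at one L18 run.
mrr = [4.2, 4.5, 4.1]   # mm^3/min
sr  = [2.8, 3.0, 2.7]   # um Ra
print(f"S/N(MRR) = {sn_larger_better(mrr):.2f} dB")
print(f"S/N(SR)  = {sn_smaller_better(sr):.2f} dB")
```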

Issues in Spectral Source Separation Techniques for Plant-wide Oscillation Detection and Diagnosis

In the last few years, three multivariate spectral analysis techniques, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Non-negative Matrix Factorization (NMF), have emerged as effective tools for oscillation detection and isolation. While the first method is used to determine the number of oscillatory sources, the latter two are used to identify source signatures by formulating the detection problem as a source identification problem in the spectral domain. In this paper, we present a critical drawback of the underlying linear (mixing) model which strongly limits the ability of the associated source separation methods to determine the number of sources and/or identify the physical source signatures. It is shown that the assumed mixing model is only valid if each unit of the process gives equal weighting (all-pass filter) to all oscillatory components in its inputs. This is in contrast to the fact that each unit, in general, acts as a filter with a non-uniform frequency response. Thus, the model can only facilitate correct identification of a source with a single frequency component, which is again unrealistic. To overcome this deficiency, an iterative post-processing algorithm that correctly identifies the physical source(s) is developed. An additional issue with the existing methods is that they lack a procedure to pre-screen non-oscillatory or noisy measurements, which obscure the identification of oscillatory sources. In this regard, a pre-screening procedure based on the notion of a sparseness index is prescribed to eliminate the noisy and non-oscillatory measurements from the data set used for analysis.
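
The sparseness index is not defined in the abstract; a plausible reading is Hoyer's measure from the NMF literature, sketched here on the power spectrum of each measurement so that spectra dominated by a few peaks pass the pre-screen. The cut-off value is an illustrative assumption.

```python
import numpy as np

def sparseness(x):
    """Hoyer's sparseness index: 1 for a single spike, 0 for a flat vector."""
    x = np.abs(np.asarray(x, dtype=float))
    n = x.size
    return (np.sqrt(n) - x.sum() / np.linalg.norm(x)) / (np.sqrt(n) - 1)

rng = np.random.default_rng(0)
t = np.arange(1024) / 100.0
oscillatory = np.sin(2 * np.pi * 0.8 * t) + 0.1 * rng.normal(size=t.size)
noisy = rng.normal(size=t.size)

for name, sig in [("oscillatory", oscillatory), ("noisy", noisy)]:
    spectrum = np.abs(np.fft.rfft(sig - sig.mean())) ** 2
    s = sparseness(spectrum)
    keep = s > 0.5   # illustrative pre-screening cut-off
    print(f"{name:11s}: sparseness={s:.2f}, keep={keep}")
```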

Experimental Design and Performance Analysis in Plasma Arc Surface Hardening

In this paper, an experimental design using the Taguchi method is employed to optimize the processing parameters in the plasma arc surface hardening process. The processing parameters evaluated are arc current, scanning velocity, and the carbon content of the steel. In addition, other significant effects, such as interactions between the processing parameters, are also investigated. An orthogonal array, the signal-to-noise (S/N) ratio, and analysis of variance (ANOVA) are employed to investigate the effects of these processing parameters. Through this study, not only are the hardened depth increased and the surface roughness improved, but the parameters that significantly affect the hardening performance are also identified. Experimental results are provided to verify the effectiveness of this approach.
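
A minimal sketch of the ANOVA step, assuming SciPy is available: a one-way test of whether the arc current level significantly affects hardened depth. The measurements are illustrative, not the study's data.

```python
from scipy.stats import f_oneway

# Illustrative hardened-depth measurements (mm) grouped by arc current level.
depth_low  = [0.42, 0.45, 0.40, 0.44]
depth_mid  = [0.58, 0.61, 0.57, 0.60]
depth_high = [0.71, 0.69, 0.74, 0.72]

# One-way ANOVA: does arc current significantly affect hardened depth?
F, p = f_oneway(depth_low, depth_mid, depth_high)
print(f"F = {F:.1f}, p = {p:.2e}")  # small p => arc current is significant
```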