Evolution of Quality Function Deployment (QFD) via Fuzzy Concepts and Neural Networks

Quality Function Deployment (QFD) is a structured, multi-step planning method for delivering products, services, and processes to customers, both external and internal to an organization. It is a way to translate between the diverse customer languages expressing demands (the Voice of the Customer) and the organization's languages expressing results that satisfy those demands. The approach is to establish one or more matrices that inter-relate producer and consumer expectations. Because of its visual appearance, the central matrix is called the “House of Quality” (HOQ). In this paper, we cast the HOQ as a multi-attribute decision making (MADM) problem and rank the technical specifications through a proposed MADM method. We then compute the satisfaction degree of the customer requirements, applying fuzzy set theory to handle the vagueness and uncertainty inherent in decision making. Finally, the approach employs a supervised neural network (perceptron) for solving the MADM problem.
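As a rough illustration of this pipeline, the sketch below ranks technical specifications by a fuzzy weighted average of triangular customer ratings, followed by centroid defuzzification; the ratings, weights, and ranking rule are hypothetical stand-ins, not the paper's method.

```python
# Minimal sketch of fuzzy MADM ranking for HOQ technical specifications.
# The relationship strengths and weights below are hypothetical examples.
import numpy as np

# Triangular fuzzy numbers (low, mid, high) rating how strongly each
# technical specification (column) serves each customer requirement (row).
ratings = np.array([
    [[3, 5, 7], [1, 3, 5], [7, 9, 9]],
    [[5, 7, 9], [3, 5, 7], [1, 3, 5]],
])
cr_weights = np.array([0.6, 0.4])          # customer-requirement importances

# Fuzzy weighted average per specification, then centroid defuzzification.
fuzzy_scores = np.einsum("i,ijk->jk", cr_weights, ratings)
crisp_scores = fuzzy_scores.mean(axis=1)   # centroid of (l, m, u) is their mean
ranking = np.argsort(-crisp_scores)
print("specification ranking (best first):", ranking, crisp_scores)
```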

Neural Network Based Determination of Splice Junctions by ROC Analysis

The gene, the principal unit of inheritance, is an ordered sequence of nucleotides. The genes of eukaryotic organisms include alternating segments of exons and introns. The region of deoxyribonucleic acid (DNA) within a gene that contains the instructions for coding a protein is called an exon. The non-coding regions, called introns, are the parts of DNA that regulate gene expression and are removed from the messenger ribonucleic acid (RNA) during splicing. This paper proposes to determine splice junctions, the exon-intron boundaries, by analyzing DNA sequences. A splice junction can be either exon-intron (EI) or intron-exon (IE). Because of the popularity and suitability of artificial neural networks (ANN) in genetic fields, various ANN models are applied in this research. Multi-Layer Perceptron (MLP), Radial Basis Function (RBF), and Generalized Regression Neural Network (GRNN) models are used to analyze and detect the splice junctions of gene sequences. 10-fold cross validation is used to estimate the accuracy of the networks, and their real performance is assessed by Receiver Operating Characteristic (ROC) analysis.
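A minimal sketch of one such experiment, using scikit-learn rather than the paper's toolchain: an MLP on one-hot encoded DNA windows, evaluated with stratified 10-fold cross-validation and ROC AUC. The data are random placeholders, not the real splice-junction dataset.

```python
# MLP splice-junction classifier scored by 10-fold CV and ROC AUC.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 60 * 4)).astype(float)  # 60-base window, one-hot A/C/G/T
y = rng.integers(0, 2, size=200)                          # toy labels: 1 = EI, 0 = IE

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
print("cross-validated ROC AUC:", roc_auc_score(y, proba))
```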

Examining Corporate Tax Evaders: Evidence from the Finalized Audit Cases

This paper aims to (1) analyze the profiles of transgressors (detected evaders); (2) examine the reason(s) that triggered a tax audit, the causes of tax evasion, the audit timeframe, and the tax penalty charged; and (3) assess whether tax auditors followed the guidelines stated in the 'Tax Audit Framework' when conducting tax audits. In 2011, the Inland Revenue Board of Malaysia (IRBM) audited and finalized 557 company cases. With official permission, data on all 557 cases were obtained from the IRBM; of these, 421 cases with complete information were analyzed. About 58.1% were small and medium corporations, and the largest share came from the construction industry (32.8%). Selection for tax audit was based on risk analysis (66.8%), information from third parties (11.1%), and firms with low profitability or fluctuating profit patterns (7.8%). The three persistent causes of tax evasion by firms were overclaimed expenses (46.8%), fraudulent reporting of income (38.5%), and overstated purchases (10.5%). These findings are consistent with past literature. Results showed that tax auditors took six to 18 months to close audit cases. More than half of the tax evaders were fined 45% of the additional tax raised during the audit for a first offence. The study found that tax auditors did follow the guidelines in the 'Tax Audit Framework' in audit selection, settlement, and penalty imposition.

An Experimental Investigation of Thermoelectric Air-Cooling Module

This article experimentally investigates the thermal performance of a thermoelectric air-cooling module comprising a thermoelectric cooler (TEC) and an air-cooling heat sink. The influences of input current and heat load are determined, and the performance in each situation is quantified by thermal resistance analysis. Since the TEC generates Joule heat, constructing a thermal resistance network is difficult. To simplify the analysis, this article focuses on the resistance that the heat load encounters when passing through the device; the thermal resistances in this paper are therefore defined as temperature differences divided by the heat load. According to the results, there exists an optimum input current for every heating power; in this case it is around 6 A or 7 A. The heat sink performance improves with the TEC under certain heating powers and input currents, especially at low heat loads, where the device can even cool the heat source below ambient. However, the TEC is not effective at every heat load and input current; in some situations the device performs worse than the heat sink without the TEC. To determine the availability of the TEC, this study identifies the effective operating region in which the TEC air-cooling module works better than the heat sink alone. The results show that the TEC is more effective at lower heat loads; if the heat load is too high, the heat sink with the TEC performs worse than without it. The limit of this device is 57 W. Moreover, the TEC is not helpful if the input current is too high or too low: there is an effective range of input current, and this range narrows as the heat load grows.
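For concreteness, the resistance definition used throughout is simply a temperature difference divided by the heat load, as in this small sketch (the temperatures are illustrative, not the measured data):

```python
# Thermal-resistance bookkeeping: each resistance is dT / Q, in K/W.
def thermal_resistance(t_high_c: float, t_low_c: float, heat_load_w: float) -> float:
    """R = dT / Q."""
    return (t_high_c - t_low_c) / heat_load_w

q = 40.0                                       # heat load in W
r_module = thermal_resistance(55.0, 25.0, q)   # heat source to ambient
print(f"module resistance at {q} W: {r_module:.3f} K/W")
# A negative resistance would mean the TEC cooled the source below ambient.
```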

Enhanced Efficacy of Kinetic Power Transform for High-Speed Wind Field

A three-time-scale plant model of a wind power generator, including a wind turbine, a flexible vertical shaft, a Variable Inertia Flywheel (VIF) module, an Active Magnetic Bearing (AMB) unit, and the applied wind sequence, is constructed. So that the wind power generator can still operate when the spindle speed exceeds its rated value, the VIF is included to slow the spindle down appropriately whenever a stronger wind field is exerted. To prevent potential damage from the shaft colliding with conventional bearings, the AMB unit is proposed to regulate the shaft position deviation. Using a singular perturbation order-reduction technique, a lower-order plant model is established for the synthesis of the feedback controller. Two major system parameter uncertainties, an additive uncertainty and a multiplicative uncertainty, are constituted by the wind turbine and the VIF, respectively. A Frequency Shaping Sliding Mode Control (FSSMC) loop is proposed to account for these uncertainties and to suppress the unmodeled higher-order plant dynamics. Finally, the efficacy of the FSSMC is verified by intensive computer simulations and experiments on regulating the shaft position deviation and counter-balancing unpredictable wind disturbances.
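As a rough illustration of the sliding mode idea underlying FSSMC, the sketch below applies a classical (unshaped) sliding mode controller to a double integrator with a bounded disturbance; all gains and the disturbance model are illustrative assumptions, not the paper's design.

```python
# Classical sliding mode control of a double integrator x'' = u + d.
import numpy as np

lam, K, dt = 2.0, 5.0, 1e-3                 # surface slope, switching gain, step
x, xdot = 1.0, 0.0                          # initial position deviation, velocity
for k in range(5000):
    d = 0.5 * np.sin(2 * np.pi * 0.5 * k * dt)   # unknown bounded disturbance, |d| < K
    s = lam * x + xdot                      # sliding surface s = lam*e + e_dot
    u = -lam * xdot - K * np.sign(s)        # yields s_dot = -K*sign(s) + d
    xdot += (u + d) * dt
    x += xdot * dt
print(f"final deviation: {x:.4f}")          # driven near zero despite d
```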

Direct Numerical Simulation of Oxygen Transfer at the Air-Water Interface in a Convective Flow Environment and Comparison to Experiments

A two-dimensional Direct Numerical Simulation (DNS) of high-Schmidt-number mass transfer in a convective flow environment (Rayleigh-Bénard) is carried out and the results are compared to experimental data. A fourth-order accurate WENO scheme has been used for the scalar transport in order to achieve high accuracy in areas of steep concentration gradients. The typical spatial distance between downward plumes of cold, high-concentration water and the eddy size were found to be in good agreement with experiments that used a combined PIV-LIF technique for simultaneous and spatially synoptic measurements of the 2D velocity and concentration fields.
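As a much-simplified stand-in for the scalar transport solve, the sketch below advances a 1D advection-diffusion equation with first-order upwinding; it is not the paper's fourth-order WENO scheme or 2D DNS, and only illustrates why a small diffusivity (high Schmidt number) produces the steep gradients that demand high-order schemes.

```python
# 1D advection-diffusion of a concentration blob on a periodic domain.
import numpy as np

n, L, u, D, dt = 200, 1.0, 1.0, 1e-4, 1e-4   # low D mimics a high Schmidt number
dx = L / n
c = np.exp(-((np.linspace(0, L, n) - 0.3) ** 2) / 0.002)  # initial concentration blob
for _ in range(2000):
    adv = -u * (c - np.roll(c, 1)) / dx               # upwind advection (u > 0)
    diff = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx ** 2
    c += dt * (adv + diff)
print("peak concentration after transport:", c.max())
```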

Using Automated Database Reverse Engineering for Database Integration

An important problem in today's organizations is the existence of non-integrated information systems, with inconsistency and a lack of suitable correlation between legacy and modern systems. One main solution is to transfer the local databases into a global one, which requires extracting the data structures from the legacy systems and integrating them with the new-technology systems. In legacy systems, huge amounts of data are stored in legacy databases; these require particular attention, since considerable effort is needed to normalize, reformat, and move them to modern database environments. Designing the new integrated (global) database architecture and applying reverse engineering require data normalization. This paper proposes the use of database reverse engineering to integrate legacy and modern databases in organizations. The suggested approach consists of methods and techniques for generating the data transformation rules needed for data structure normalization.
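A minimal sketch of what rule generation might look like, here splitting a repeating group in a hypothetical legacy table into a child table to reach first normal form; the schema and rule syntax are illustrative, not the paper's actual rule language.

```python
# Generate normalization rules from an extracted legacy schema (toy example).
legacy = {"CUSTREC": {"CNAME": "CHAR(40)", "CPHONE1": "CHAR(12)",
                      "CPHONE2": "CHAR(12)"}}

def generate_rules(legacy_schema):
    """Split numbered repeating groups (CPHONE1/CPHONE2) into a child table."""
    rules = []
    for table, cols in legacy_schema.items():
        repeating = [c for c in cols if c.rstrip("0123456789") != c]
        for col in repeating:
            base = col.rstrip("0123456789")
            rules.append(f"MOVE {table}.{col} -> {table}_{base}(value)")
        for col in set(cols) - set(repeating):
            rules.append(f"COPY {table}.{col} -> {table}.{col}")
    return rules

for rule in generate_rules(legacy):
    print(rule)
```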

An Analytical Electron Mobility Model based on Particle Swarm Computation for Silicon-based Devices

The study of transport coefficients in electronic devices is currently carried out with analytical and empirical models. This requires several simplifying assumptions, generally necessary to arrive at analytical expressions, in order to study the different characteristics of silicon-based electronic devices. Further progress in the development, design, and optimization of silicon-based devices necessarily requires new theory and modeling tools. In our study, we use the Particle Swarm Optimization (PSO) technique as a computational tool to develop analytical models of electron transport in crystalline silicon as a function of temperature and doping concentration. Good agreement between our results and measured data has been found. The optimized analytical models can also be incorporated into circuit simulators to study Si-based devices without impacting computational time or data storage.
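A minimal sketch of the approach, assuming the well-known Caughey-Thomas doping-dependence form as the analytical model and fitting its parameters with a plain PSO; the "measured" points are synthetic placeholders, not the paper's data.

```python
# PSO fit of Caughey-Thomas mobility parameters to synthetic reference data.
import numpy as np

def mobility(N, p):                      # p = (mu_min, mu_max, N_ref, alpha)
    mu_min, mu_max, n_ref, alpha = p
    return mu_min + (mu_max - mu_min) / (1.0 + (N / n_ref) ** alpha)

N = np.logspace(15, 20, 30)              # doping concentration, cm^-3
target = mobility(N, (68.5, 1414.0, 9.2e16, 0.711))  # synthetic "measurements"

rng = np.random.default_rng(1)
lo = np.array([10.0, 800.0, 1e16, 0.3]); hi = np.array([200.0, 2000.0, 1e18, 1.2])
pos = rng.uniform(lo, hi, size=(40, 4)); vel = np.zeros_like(pos)
pbest = pos.copy()
pcost = np.array([np.mean((mobility(N, p) - target) ** 2) for p in pos])
for _ in range(300):
    g = pbest[pcost.argmin()]            # global best particle
    vel = (0.7 * vel
           + 1.5 * rng.random(pos.shape) * (pbest - pos)
           + 1.5 * rng.random(pos.shape) * (g - pos))
    pos = np.clip(pos + vel, lo, hi)
    cost = np.array([np.mean((mobility(N, p) - target) ** 2) for p in pos])
    improved = cost < pcost
    pbest[improved], pcost[improved] = pos[improved], cost[improved]
print("fitted parameters:", pbest[pcost.argmin()])
```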

Design of a DCT-based Image Compression with Efficient Enhancement Filter

The algorithm rearranges the DCT coefficients to concentrate signal energy and eliminates the correlation within same-level subbands when encoding DCT-based images. This work adopts the DCT and modifies the SPIHT algorithm to encode the DCT coefficients. The proposed algorithm also provides an enhancement function at low bit rates in order to improve perceptual quality. Experimental results indicate that the proposed technique improves the quality of the reconstructed image, with both PSNR and perceptual results close to JPEG2000 at the same bit rate.
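For reference, the first stage of such a codec is a blockwise 2D DCT; the sketch below computes it with SciPy (the modified SPIHT coder and enhancement filter are not reproduced here).

```python
# 8x8 block 2D DCT and inverse, the front end of a DCT-based codec.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coef):
    return idct(idct(coef, axis=0, norm="ortho"), axis=1, norm="ortho")

block = np.random.default_rng(0).uniform(0, 255, (8, 8))
coef = dct2(block)
# Energy compaction: most signal energy lands in the low-frequency corner.
print("DC term:", coef[0, 0], "reconstruction error:",
      np.abs(idct2(coef) - block).max())
```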

Neural Network Based Approach for Face Detection cum Face Recognition

Automatic face detection is a complex problem in image processing. Many methods exist to solve it, such as template matching, the Fisher Linear Discriminant, neural networks, SVMs, and MRC, and each has achieved success to varying degrees and complexities. The proposed algorithm uses upright, frontal faces in single grayscale images with decent resolution taken under good lighting conditions. For face recognition, a single face is matched against single faces from the training dataset. The author proposes a neural network based face detection algorithm for photographs; whenever new test data appears, it is checked against the scanned training dataset online. Experimental results show that the algorithm detects faces with up to 95% accuracy.
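As a simple stand-in for the matching step, the sketch below performs nearest-neighbour recognition with cosine similarity on flattened grayscale crops; the paper's neural network matcher is not reproduced, and the gallery is synthetic.

```python
# Nearest-neighbour face matching by cosine similarity (toy gallery).
import numpy as np

def best_match(face, gallery):
    """Return (index, similarity) of the most similar training face."""
    f = face.ravel() / np.linalg.norm(face)
    sims = [float(g.ravel() @ f / np.linalg.norm(g)) for g in gallery]
    return int(np.argmax(sims)), max(sims)

rng = np.random.default_rng(0)
gallery = [rng.uniform(0, 1, (32, 32)) for _ in range(10)]  # training faces
probe = gallery[3] + 0.05 * rng.normal(size=(32, 32))       # noisy test face
print(best_match(probe, gallery))   # expected: index 3, similarity near 1
```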

Enhancing Cache Performance Based on Improved Average Access Time

A high performance computer includes a fast processor and millions of bytes of memory. During data processing, huge amounts of information are shuffled between the memory and the processor. Because of its small size and its speed, cache has become a common feature of high performance computers, and enhancing cache performance has proved essential in speeding up cache-based computers. Most enhancement approaches can be classified as either software based or hardware controlled. Cache performance is quantified in terms of the hit ratio or the miss ratio. In this paper, we optimize cache performance by enhancing the cache hit ratio. The optimum performance is obtained by modifying the cache hardware so that mismatched line tags are rejected quickly in the hit-or-miss comparison stage, thus achieving a low hit time for the wanted line in the cache. In the proposed technique, which we call Even-Odd Tabulation (EOT), the cache lines coming from main memory into the cache are classified into two types, even line tags and odd line tags, depending on their Least Significant Bit (LSB). EOT exploits this division to reject mismatched line tags in very little time compared to the time spent by the main comparator in the cache, giving an optimum hit time for the wanted cache line. Simulation results show the high performance of the EOT technique against the familiar FAM mapping technique.
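A minimal functional sketch of the EOT idea (not a cycle-accurate cache model): tags are split by their LSB, so a lookup only ever compares against the matching half of the stored tags.

```python
# Even-Odd Tabulation: partition tags by LSB to halve the comparison set.
class EOTCache:
    def __init__(self):
        self.banks = {0: set(), 1: set()}   # even-tag bank, odd-tag bank

    def insert(self, tag: int):
        self.banks[tag & 1].add(tag)

    def lookup(self, tag: int) -> bool:
        # Lines whose tag LSB differs are rejected without any comparison.
        return tag in self.banks[tag & 1]

cache = EOTCache()
for t in (0b1010, 0b0111, 0b1100):
    cache.insert(t)
print(cache.lookup(0b0111), cache.lookup(0b0110))  # True False
```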

Image Segmentation Based on Graph Theoretical Approach to Improve the Quality of Image Segmentation

Graph based image segmentation techniques are considered among the most efficient segmentation techniques and are mainly used as time- and space-efficient methods for real-time applications. However, the quality of the segmented images obtained from the earlier graph based methods needs improvement. This paper proposes an improvement to the graph based image segmentation methods already described in the literature. We contribute to the existing method by proposing the use of a weighted Euclidean distance to calculate the edge weights, which are the key element in building the graph. We also propose a slight modification of the segmentation method already described in the literature, which results in the selection of more prominent edges in the graph. The experimental results show an improvement in segmentation quality compared to the existing methods, with a slight compromise in efficiency.
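A minimal sketch of the proposed edge weight: a weighted Euclidean distance between the colour vectors of 4-connected neighbouring pixels; the channel weights are illustrative assumptions.

```python
# Weighted Euclidean edge weights for a 4-connected image grid graph.
import numpy as np

def grid_edges(img, w=(0.5, 0.3, 0.2)):
    """Yield (p, q, weight) for right/down neighbours of an HxWx3 image."""
    w = np.asarray(w, dtype=float)
    h, wd, _ = img.shape
    for y in range(h):
        for x in range(wd):
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < wd:
                    d = img[y, x].astype(float) - img[ny, nx].astype(float)
                    yield (y, x), (ny, nx), float(np.sqrt(np.sum(w * d * d)))

img = np.random.default_rng(0).integers(0, 256, (4, 4, 3))
edges = sorted(grid_edges(img), key=lambda e: e[2])  # ascending-weight ordering
print(edges[0])
```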

Non-Invasive Capillary Blood Flow Measurement: Laser Speckle and Laser Doppler

Microcirculation is essential for the proper supply of oxygen and nutritive substances to biological tissue and for the removal of the waste products of metabolism. The determination of blood flow in the capillaries is therefore of great interest to clinicians. A comparison between a developed non-invasive, non-contact, whole-field laser speckle contrast imaging (LSCI) technique and a commercially available laser Doppler blood flowmeter (LDF) for evaluating blood flow at the fingertip and elbow is presented here. The LSCI technique gives more quantitative information on the velocity of blood than the perfusion values obtained with the LDF. Measuring capillary blood flow can be of great interest to clinicians in diagnosing vascular diseases of the upper extremities.
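For reference, the core LSCI quantity is the local speckle contrast K = sigma/mean computed over a sliding window (lower K indicates faster flow); a minimal sketch with a synthetic frame follows.

```python
# Local speckle contrast K = sigma/mean over a sliding window.
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(frame, win=7):
    mean = uniform_filter(frame, win)
    mean_sq = uniform_filter(frame * frame, win)
    sigma = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    return sigma / (mean + 1e-12)

frame = np.random.default_rng(0).gamma(2.0, 1.0, (128, 128))  # synthetic speckle
K = speckle_contrast(frame)
print("mean contrast:", K.mean())
```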

A 1.8 V RF CMOS Active Inductor in 0.18 um CMOS Technology

An active inductor in CMOS technology with a supply voltage of 1.8 V is presented. The inductance L ranges from 0.12 nH to 0.25 nH at high frequencies (HF). The proposed active inductor is designed in TSMC 0.18-um CMOS technology. Its power dissipation remains constant over all operating frequency bands, consuming around 20 mW from the 1.8 V supply. Inductors realized as integrated circuits occupy a much smaller area than passive ones and have therefore attracted researchers' attention for more than a decade. In this design, the Advanced Design System (ADS) was used to simulate the circuit.
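For intuition, active inductors are commonly sized with the gyrator-C relation L = C/(gm1*gm2); the sketch below plugs in illustrative values that land inside the reported inductance range, not the paper's transistor sizing.

```python
# Gyrator-C active inductor sizing: L = C / (gm1 * gm2).
C = 50e-15             # capacitance at the internal gyrator node, F (assumed)
gm1, gm2 = 20e-3, 20e-3  # transconductances of the gyrator pair, S (assumed)
L = C / (gm1 * gm2)
print(f"equivalent inductance: {L * 1e9:.3f} nH")  # 0.125 nH, inside 0.12-0.25 nH
```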

The PARADIGMA Approach for Cooperative Work in the Medical Domain

PARADIGMA (PARticipative Approach to DIsease Global Management) is a pilot project which aims to develop and demonstrate an Internet based reference framework to share scientific resources and findings in the treatment of major diseases. PARADIGMA defines and disseminates a common methodology and optimised protocols (Clinical Pathways) to support service functions directed to patients and individuals on matters like prevention, post-hospitalisation support and awareness. PARADIGMA will provide a platform of information services - user oriented and optimised against social, cultural and technological constraints - supporting the Health Care Global System of the Euro-Mediterranean Community in a continuous improvement process.

Securing Message in Wireless Sensor Network by using New Method of Code Conversions

Recently, wireless sensor networks have attracted increasing interest; they are widely used in many commercial and military applications and may be deployed in critical scenarios (e.g., where a malfunctioning network results in danger to human life or great financial loss). Such networks must be protected against intrusion by using secret keys to encrypt the messages exchanged between communicating nodes. Both symmetric and asymmetric methods have their own drawbacks for key management. We therefore avoid the weaknesses of these two cryptosystems and exploit their advantages to establish a secure environment, developing a new encryption method based on the idea of code conversion. The code-conversion equations are used as the key in designing the proposed system, which is built on basic logic-gate principles. Using our security architecture, we show how to mitigate significant attacks on wireless sensor networks.
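As an illustrative toy (not the paper's actual conversion equations or key schedule), the sketch below builds a reversible cipher from the classic XOR-based binary-to-Gray code conversion plus a shared key:

```python
# Toy cipher built on a code conversion (binary <-> Gray) and an XOR key.
def to_gray(b: int) -> int:
    return b ^ (b >> 1)                 # XOR-based conversion equation

def from_gray(g: int) -> int:
    b = 0
    while g:                            # inverse: XOR of all right shifts
        b ^= g
        g >>= 1
    return b

def encrypt(byte: int, key: int) -> int:
    return to_gray(byte) ^ key          # convert, then mask with the key

def decrypt(cipher: int, key: int) -> int:
    return from_gray(cipher ^ key)

msg, key = 0b1011_0010, 0b0101_1100
assert decrypt(encrypt(msg, key), key) == msg
print(bin(encrypt(msg, key)))
```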

Digital Image Watermarking in the Wavelet Transform Domain

In this paper, we first characterize the most important and distinguishing features of wavelet-based watermarking schemes and survey the overwhelming number of algorithms proposed in the literature. The application scenario of copyright protection is considered and, building on the experience gained, two distinct watermarking schemes were implemented. A detailed comparison and the obtained results are presented and discussed. We conclude that Joo's [1] technique is more robust to standard noise attacks than Dote's [2] technique.
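A minimal sketch of wavelet-domain embedding in the spirit of the schemes studied, using PyWavelets; it is not Joo's or Dote's exact algorithm, and the host image, strength, and detector are illustrative.

```python
# Additive watermarking in one detail subband of a single-level DWT.
import numpy as np
import pywt

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (128, 128))          # placeholder host image
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
wm = rng.choice([-1.0, 1.0], size=cH.shape)    # binary watermark
alpha = 2.0                                    # embedding strength (assumed)
marked = pywt.idwt2((cA, (cH + alpha * wm, cV, cD)), "haar")

# Non-blind detection: correlate the re-extracted subband with the watermark.
_, (cH2, _, _) = pywt.dwt2(marked, "haar")
print("detector correlation:", float(np.mean((cH2 - cH) * wm)))  # approx alpha
```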

Computational Tool for Techno-Economical Evaluation of Steam/Oxygen Fluidized Bed Biomass Gasification Technologies

The paper presents a computational tool developed for evaluating the technical and economic advantages of an innovative cleaning and conditioning technology for the outlet product gas of fluidized bed steam/oxygen gasifiers. This technology integrates the steam gasification of biomass and the hot gas cleaning and conditioning system into a single unit. Both components of the computational tool, the process flowsheet and the economic evaluator, were developed with the IPSEpro software. The economic model provides information that can help potential users, especially small and medium size enterprises active in the renewable energy field, decide the optimal scale of a plant and better understand both the potential and the limits of the system when applied to a wide range of conditions.

A Technique for Execution of Written Values on Shared Variables

This paper conceptualizes the technique of release consistency, which is indispensable to user-defined synchronization. A programming model built on objects and classes is illustrated and demonstrated. The essence of the paper lies in phases, events, and parallel execution: the technique by which written values become visible on shared variables is implemented. The second part of the paper consists of the implementation of user-defined high-level synchronization primitives and a system architecture with memory protocols. Techniques are proposed that are central to deciding when to validate or invalidate a stale page.
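As an illustrative analogue of the visibility rule (not the paper's system), the Python threading sketch below makes writes to shared variables visible to a reader only after the writer's release and the reader's acquire:

```python
# Release-consistency-style visibility with an acquire/release pairing.
import threading

shared = {"x": 0, "y": 0}
released = threading.Event()

def writer():
    shared["x"], shared["y"] = 1, 2   # ordinary writes
    released.set()                    # release: publish the writes

def reader():
    released.wait()                   # acquire: wait for the release
    print("reader sees:", shared)     # guaranteed to see x=1, y=2

t1, t2 = threading.Thread(target=writer), threading.Thread(target=reader)
t2.start(); t1.start(); t1.join(); t2.join()
```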

Efficient Sensor Selection Algorithm in Cyber Physical Systems

Cyber physical systems (CPS) for target tracking, military surveillance, human health monitoring, and vehicle detection all require maximizing utility while saving energy. Sensor selection is one of the most important parts of a CPS: the sensor selection problem (SSP) concentrates on balancing the tradeoff between the number of sensors used and the utility obtained. In this paper, we propose a performance constrained sliding window (PCSW) based algorithm for the SSP in CPS. We present the results of extensive simulations carried out to test and validate the PCSW algorithm when tracking a target; experiments show that the PCSW based algorithm improves performance, including the selection time and the number of communications needed for selection.
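As a rough illustration of utility-driven selection (the full PCSW algorithm, with its sliding-window constraint, is not reproduced), the sketch below greedily picks sensors by marginal utility per unit energy until a utility target is met; all numbers are illustrative.

```python
# Greedy utility-per-energy sensor selection (toy stand-in for PCSW).
def select_sensors(utilities, energies, target):
    chosen, total = [], 0.0
    remaining = set(range(len(utilities)))
    while total < target and remaining:
        best = max(remaining, key=lambda i: utilities[i] / energies[i])
        remaining.remove(best)
        chosen.append(best)
        total += utilities[best]
    return chosen, total

u = [0.9, 0.4, 0.7, 0.2, 0.6]     # per-sensor tracking utility
e = [3.0, 1.0, 2.0, 1.0, 2.5]     # per-sensor energy cost
print(select_sensors(u, e, target=1.5))
```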