Neural Network Based Determination of Splice Junctions by ROC Analysis

A gene, the principal unit of inheritance, is an ordered sequence of nucleotides. The genes of eukaryotic organisms include alternating segments of exons and introns. The region of deoxyribonucleic acid (DNA) within a gene that contains the instructions for coding a protein is called an exon. Introns, on the other hand, are non-coding regions of DNA that regulate gene expression and are removed from the messenger ribonucleic acid (RNA) during splicing. This paper determines splice junctions, the boundaries between exons and introns, by analyzing DNA sequences. A splice junction can be either exon-intron (EI) or intron-exon (IE). Because of the popularity and suitability of artificial neural networks (ANNs) in genetic applications, several ANN models are applied in this research. Multi-Layer Perceptron (MLP), Radial Basis Function (RBF), and Generalized Regression Neural Network (GRNN) models are used to analyze gene sequences and detect their splice junctions. Ten-fold cross-validation is used to estimate the accuracy of the networks, and their true performance is assessed by Receiver Operating Characteristic (ROC) analysis.
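A minimal sketch of this evaluation scheme, not the paper's exact setup: an MLP classifier under 10-fold cross-validation with per-fold ROC analysis, assuming sequences are one-hot encoded windows of fixed length around the junction (dataset size, window length, and network size below are illustrative assumptions).

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
n, window = 600, 60                     # hypothetical dataset size / window length
X = rng.integers(0, 2, size=(n, window * 4)).astype(float)  # one-hot A/C/G/T
y = rng.integers(0, 2, size=n)          # 1 = exon-intron (EI), 0 = intron-exon (IE)

aucs = []
for train, test in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=500, random_state=0)
    clf.fit(X[train], y[train])
    scores = clf.predict_proba(X[test])[:, 1]
    fpr, tpr, _ = roc_curve(y[test], scores)
    aucs.append(auc(fpr, tpr))

print(f"mean ROC AUC over 10 folds: {np.mean(aucs):.3f}")  # ~0.5 on random data
```

The same loop applies unchanged to RBF or GRNN models; only the estimator changes.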

A Study of Panel Logit Model and Adaptive Neuro-Fuzzy Inference System in the Prediction of Financial Distress Periods

The purpose of this paper is to present two different approaches to financial distress pre-warning models appropriate for risk supervisors, investors, and policy makers. We examine a sample of the financial institutions and electronics companies listed on the Taiwan Security Exchange (TSE) from 2002 through 2008. We present a binary logistic regression with panel data analysis. With the pooled binary logistic regression we can include more variables in the regression than with random effects, while the in-sample and out-of-sample forecasting performance is higher under random-effects estimation than under the pooled regression. We also estimate an Adaptive Neuro-Fuzzy Inference System (ANFIS) with Gaussian and Generalized Bell (Gbell) membership functions and find that ANFIS significantly outperforms the logit regressions in both in-sample and out-of-sample periods, indicating that ANFIS is a more appropriate tool for financial risk managers and for economic policy makers in central banks and national statistical services.
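A hedged sketch of the pooled variant on stacked firm-year data: pooling simply ignores firm-specific effects, which is what allows extra regressors. The distress indicator and the financial ratios below are hypothetical stand-ins, not the paper's variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400                                           # firm-year observations, stacked
panel = pd.DataFrame({
    "distress":  rng.integers(0, 2, n),           # 1 = financial distress period
    "leverage":  rng.normal(0.5, 0.2, n),         # illustrative financial ratios
    "roa":       rng.normal(0.03, 0.05, n),
    "liquidity": rng.normal(1.2, 0.4, n),
})
X = sm.add_constant(panel[["leverage", "roa", "liquidity"]])
pooled = sm.Logit(panel["distress"], X).fit(disp=0)  # pooling ignores firm effects
print(pooled.params)

pred = (pooled.predict(X) > 0.5).astype(int)
print("in-sample hit rate:", (pred == panel["distress"]).mean())
```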

Examining Corporate Tax Evaders: Evidence from the Finalized Audit Cases

This paper aims to (1) analyze the profiles of transgressors (detected evaders); (2) examine the reason(s) that triggered a tax audit, the causes of tax evasion, the audit timeframe, and the tax penalty charged; and (3) assess whether tax auditors followed the guidelines stated in the 'Tax Audit Framework' when conducting tax audits. In 2011, the Inland Revenue Board Malaysia (IRBM) audited and finalized 557 company cases. With official permission, data on all 557 cases were obtained from the IRBM, of which 421 cases with complete information were analyzed. About 58.1% were small and medium-sized corporations, and the largest share came from the construction industry (32.8%). Selection for tax audit was based on risk analysis (66.8%), information from third parties (11.1%), and low profitability or a fluctuating profit pattern (7.8%). The three most persistent causes of tax evasion by firms were overclaimed expenses (46.8%), fraudulent reporting of income (38.5%), and overstated purchases (10.5%). These findings are consistent with past literature. Results showed that tax auditors took six to eighteen months to close audit cases. More than half of the tax evaders were fined 45% of the additional tax raised during the audit for a first offence. The study found that tax auditors did follow the guidelines in the 'Tax Audit Framework' in audit selection, settlement, and penalty imposition.

An Experimental Investigation of Thermoelectric Air-Cooling Module

This article experimentally investigates the thermal performance of a thermoelectric air-cooling module comprising a thermoelectric cooler (TEC) and an air-cooling heat sink. The influences of input current and heat load are determined, and the performance in each situation is quantified by thermal resistance analysis. Since the TEC generates Joule heat, constructing a conventional thermal resistance network is difficult. To simplify the analysis, this article focuses on the resistance that the heat load encounters when passing through the device; the thermal resistances in this paper are therefore defined as temperature differences divided by the heat load. The results show that an optimum input current exists for every heating power; in this case it is around 6 A to 7 A. The TEC improves heat-sink performance at certain combinations of heating power and input current, especially at a low heat load, where the device can even cool the heat source below ambient temperature. However, the TEC is not effective at every heat load and input current; in some situations the device performs worse than the heat sink without a TEC. To determine the availability of the TEC, this study identifies the effective operating region in which the TEC air-cooling module outperforms the heat sink alone. The results show that the TEC is more effective at lower heat loads; if the heat load is too high, the heat sink with the TEC performs worse than without it, and for this device the limit is 57 W. Moreover, the TEC is not helpful if the input current is too high or too low: there is an effective range of input current, and this range narrows as the heat load grows.
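In symbols, the working definition of thermal resistance used above can be written as follows (a restatement in our own notation, with Q_h the applied heat load):

```latex
% Thermal resistance as a temperature difference divided by the heat load,
% e.g. the overall resistance from heat source to ambient air.
R = \frac{\Delta T}{Q_h}, \qquad
R_{\mathrm{total}} = \frac{T_{\mathrm{source}} - T_{\mathrm{ambient}}}{Q_h}
```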

Enhanced Efficacy of Kinetic Power Transform for High-Speed Wind Field

A three-time-scale plant model of a wind power generator, including a wind turbine, a flexible vertical shaft, a Variable Inertia Flywheel (VIF) module, an Active Magnetic Bearing (AMB) unit, and the applied wind sequence, is constructed. To keep the wind power generator operating even when the spindle speed exceeds its rated value, the VIF is installed so that the spindle speed can be appropriately slowed whenever a stronger wind field is exerted. To prevent potential damage from the shaft colliding with conventional bearings, the AMB unit is proposed to regulate the shaft position deviation. Using a singular perturbation order-reduction technique, a lower-order plant model is established for the synthesis of the feedback controller. Two major system parameter uncertainties, an additive uncertainty and a multiplicative uncertainty, arise from the wind turbine and the VIF, respectively. A Frequency-Shaped Sliding Mode Control (FSSMC) loop is proposed to account for these uncertainties and suppress the unmodeled higher-order plant dynamics. Finally, the efficacy of the FSSMC in regulating the shaft position deviation and counterbalancing unpredictable wind disturbances is verified by intensive computer simulations and experiments.
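For readers unfamiliar with the control family involved, the sketch below shows a plain (non-frequency-shaped) sliding mode regulator on a double-integrator stand-in for the shaft position dynamics; the plant, gains, and disturbance are all illustrative assumptions, and the frequency shaping of the sliding surface that distinguishes FSSMC is omitted.

```python
import numpy as np

def smc_step(e, e_dot, c=5.0, k=2.0, phi=0.05):
    """Sliding mode control from position error e and its rate e_dot."""
    s = c * e + e_dot                      # sliding variable
    sat = np.clip(s / phi, -1.0, 1.0)      # boundary layer instead of sign(s)
    return -k * sat                        # k must dominate the disturbance bound

# crude simulation: drive shaft deviation x -> 0 under a wind-like disturbance
x, v, dt = 0.01, 0.0, 1e-3
for i in range(5000):
    u = smc_step(x, v)
    d = 0.5 * np.sin(2 * np.pi * 1.0 * i * dt)   # unpredictable-disturbance stand-in
    v += (u + d) * dt
    x += v * dt
print(f"final deviation: {x:.2e} m")
```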

Analysis of FWM Penalties in DWDM Systems Based on G.652, G.653, and G.655 Optical Fibers

This paper presents an investigation of the power penalties imposed by four-wave mixing (FWM) on G.652 (Single-Mode Fiber, SMF), G.653 (Dispersion-Shifted Fiber, DSF), and G.655 (Non-Zero Dispersion-Shifted Fiber, NZDSF) compliant fibers, considering the DWDM grids suggested by ITU-T Recommendations G.692 and G.694.1, with uniform channel spacings of 100, 50, 25, and 12.5 GHz. The mathematical/numerical model assumes undepleted pumping and shows very clearly the deleterious effect of FWM on the performance of DWDM systems, measured by the signal-to-noise ratio (SNR). The results make it evident that non-uniform channel spacing is practically mandatory for WDM systems based on DSF fibers.
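The root of the problem can be seen with simple arithmetic on the channel frequencies: each triple of channels generates a mixing product at f_i + f_j - f_k, and on a uniform grid these products land exactly on other channels. The toy count below illustrates this; the grid values are our own illustrative choices, not the paper's simulation setup.

```python
from itertools import product

def in_band_fwm_hits(freqs_ghz, tol=1e-6):
    """Count FWM products f_i + f_j - f_k that coincide with active channels."""
    channels = sorted(freqs_ghz)
    hits = 0
    for fi, fj, fk in product(channels, repeat=3):
        if fk in (fi, fj):
            continue                       # trivial self-terms excluded
        f_mix = fi + fj - fk
        hits += any(abs(f_mix - f) < tol for f in channels)
    return hits

uniform = [193_100 + 100 * n for n in range(8)]           # 100 GHz uniform grid
nonuniform = [193_100, 193_250, 193_475, 193_800, 194_225, 194_750]
print(in_band_fwm_hits(uniform))      # many in-band products
print(in_band_fwm_hits(nonuniform))   # none for this unequal-spacing choice
```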

Effect of Geum Kokanicum Total Extract on Induced Nociception and Inflammation in Male Mice

The aim of this study is to evaluate the antinociceptive and anti-inflammatory activity of Geum kokanicum. After determination of the LD50 of the total extract, different doses of extract were chosen for intraperitoneal injection. In the inflammation test, male NMRI mice were divided into six groups: control (normal saline), positive control (dexamethasone, 15 mg/kg), and total extract (0.025, 0.05, 0.1, and 0.2 g/kg). Inflammation was produced by xylene-induced edema. To evaluate the antinociceptive effect of the total extract, the formalin test was used. Mice were divided into six groups: control, positive control (morphine, 10 mg/kg), and four groups that received the total extract, followed by formalin. The animals were observed for their reaction to pain. Data were analyzed using one-way ANOVA followed by the Tukey-Kramer multiple comparison test. The LD50 was 1 g/kg. The data indicated that the 0.05, 0.1, and 0.2 g/kg doses of the total extract have notable antinociceptive and anti-inflammatory effects in comparison with the control (P
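A minimal sketch of the reported statistical pipeline, one-way ANOVA followed by a Tukey-type post-hoc comparison; the group values below are fabricated placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
groups = {
    "control":  rng.normal(60, 8, 10),   # e.g. pain-response time, illustrative
    "morphine": rng.normal(25, 8, 10),
    "ext_0.1":  rng.normal(40, 8, 10),
}
f, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f:.2f}, p={p:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels))  # pairwise comparisons vs. each group
```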

Almost Periodic Sequence Solutions of a Discrete Cooperation System with Feedback Controls

In this paper, we consider the almost periodic solutions of a discrete cooperation system with feedback controls. Assuming that the coefficients in the system are almost periodic sequences, we establish the existence and uniqueness of an almost periodic solution that is uniformly asymptotically stable.

Using Automated Database Reverse Engineering for Database Integration

An important problem in today's organizations is the existence of non-integrated information systems, inconsistency, and the lack of suitable correlation between legacy and modern systems. One main solution is to transfer the local databases into a global one. To this end, we need to extract the data structures from the legacy systems and integrate them with the newer systems. In legacy systems, huge amounts of data are stored in legacy databases; these require particular attention, since considerable effort is needed to normalize, reformat, and move them to modern database environments. Designing the new integrated (global) database architecture and applying reverse engineering both require data normalization. This paper proposes the use of database reverse engineering to integrate legacy and modern databases in organizations. The suggested approach consists of methods and techniques for generating the data transformation rules needed for data structure normalization.
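As a concrete illustration of the first reverse-engineering step, the sketch below recovers table and column structure from a toy legacy database using SQLite's catalog; the schema, with its denormalized address field and repeating phone columns, is a hypothetical example of the normalization targets such transformation rules would address.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE CUST_MST (
    CUST_NO  TEXT PRIMARY KEY,
    CUST_NM  TEXT,
    ADDR_TXT TEXT,                 -- denormalized: street+city+zip in one field
    PHONE_1  TEXT, PHONE_2 TEXT    -- repeating group, a normalization target
)""")

# extract the data structures: table names, then column metadata per table
schema = {}
for (table,) in con.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    cols = con.execute(f"PRAGMA table_info({table})").fetchall()
    schema[table] = [(c[1], c[2], bool(c[5])) for c in cols]  # name, type, is_pk

for table, cols in schema.items():
    print(table)
    for name, ctype, is_pk in cols:
        print(f"  {name:10s} {ctype:6s} {'PK' if is_pk else ''}")
```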

Towards Development of Solution for Business Process-Oriented Data Analysis

This paper proposes a modeling methodology for the development of a data analysis solution. The author introduces an approach to addressing data warehousing issues at the enterprise level. The methodology covers the requirements elicitation and analysis stage as well as the initial design of the data warehouse. The paper reviews an extended business process model that satisfies the needs of data warehouse development. The author considers the use of business process models necessary, as they reflect both the enterprise information systems and the business functions that are important for data analysis. The described approach divides development into three steps with different levels of model elaboration, making it possible to gather requirements and present them to business users in an accessible manner.

CAD Based Predictive Models of the Undeformed Chip Geometry in Drilling

Twist drills are geometrically complex tools, and various researchers have therefore adopted different mathematical and experimental approaches to their simulation. The present paper builds on the increasing use of modern CAD systems: drilling simulations are carried out through the API (Application Programming Interface) of a CAD system. The developed DRILL3D software routine creates parametrically controlled tool geometries and, for different cutting conditions, generates solid models of all the relevant items involved (drilling tool, cut workpiece, undeformed chip). The derived data constitute a platform for further direct simulations to determine cutting forces, tool wear, drilling optimizations, etc.
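As a sanity check on what such a simulation should reproduce, the textbook relation below gives the undeformed chip thickness per lip of a two-flute twist drill from the feed and point angle; this is a standard approximation, not DRILL3D's actual model.

```python
import math

def undeformed_chip_thickness(feed_mm_rev, point_angle_deg, flutes=2):
    """Each lip removes feed/flutes, thinned by the half point angle."""
    half_angle = math.radians(point_angle_deg / 2.0)
    return (feed_mm_rev / flutes) * math.sin(half_angle)

t = undeformed_chip_thickness(feed_mm_rev=0.2, point_angle_deg=118.0)
print(f"undeformed chip thickness ~ {t:.4f} mm")   # ~0.0857 mm
```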

Enhancing Cache Performance Based on Improved Average Access Time

A high-performance computer includes a fast processor and millions of bytes of memory. During data processing, huge amounts of information are shuffled between the memory and the processor. Because of its small size and high speed, the cache has become a common feature of high-performance computers, and enhancing cache performance has proved essential to speeding up cache-based computers. Most enhancement approaches can be classified as either software based or hardware controlled, and cache performance is quantified in terms of the hit ratio or miss ratio. In this paper, we optimize cache performance by enhancing the cache hit ratio. Optimum performance is obtained by modifying the cache hardware so that mismatched line tags are rejected quickly in the hit-or-miss comparison stage, thus achieving a low hit time for the wanted line in the cache. In the proposed technique, which we call Even-Odd Tabulation (EOT), the cache lines coming from main memory into the cache are classified into two types, even line tags and odd line tags, depending on their Least Significant Bit (LSB). The EOT technique exploits this division to reject mismatched line tags in far less time than the cache's main comparator would need, giving an optimum hit time for the wanted cache line. Simulation results show the high performance of the EOT technique against the familiar FAM mapping technique.
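A behavioral sketch of the EOT idea as described above: tags are binned by their least significant bit, so a lookup compares only against tags of matching parity. This is a software model of what the paper proposes in hardware, and the class and identifiers are our own.

```python
class EOTCache:
    """Even-Odd Tabulation: tag store split into two parity banks."""

    def __init__(self):
        self.banks = {0: set(), 1: set()}   # even-tag bank, odd-tag bank

    def insert(self, tag: int):
        self.banks[tag & 1].add(tag)

    def lookup(self, tag: int) -> bool:
        # Tags of the wrong parity are rejected immediately; only the
        # matching bank ever reaches the comparator stage.
        return tag in self.banks[tag & 1]

cache = EOTCache()
for t in (0b1010, 0b0111, 0b1100):
    cache.insert(t)
print(cache.lookup(0b0111), cache.lookup(0b0110))   # True False
```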

An Interval-Based Multi-Attribute Decision Making Approach for Electric Utility Resource Planning

This paper presents an interval-based multi-attribute decision making (MADM) approach to support the decision process under imprecise information. The proposed decision methodology is based on the linear additive utility function model but extends the problem formulation with a measure of composite utility variance. A sample study concerning the evaluation of electric generation expansion strategies is provided, showing how imprecise data may affect the choice of the best solution and how a set of alternatives acceptable to the decision maker (DM) may be identified with a certain confidence.
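One way to make this concrete (our illustration, not necessarily the paper's exact formulation): score each attribute as an interval, treat each interval as a uniform random utility, and propagate mean and variance through the additive model.

```python
import numpy as np

weights = np.array([0.5, 0.3, 0.2])                   # attribute weights, sum to 1
# rows: alternatives; per attribute an interval utility [lo, hi]
intervals = np.array([
    [[0.6, 0.8], [0.4, 0.7], [0.5, 0.9]],             # expansion strategy A
    [[0.5, 0.6], [0.6, 0.8], [0.7, 0.8]],             # expansion strategy B
])

mid = intervals.mean(axis=2)                          # uniform mean = midpoint
var = (intervals[..., 1] - intervals[..., 0])**2 / 12 # uniform variance
U_mean = mid @ weights                                # composite additive utility
U_var = var @ weights**2                              # assumes independent attributes
for name, m, v in zip("AB", U_mean, U_var):
    print(f"{name}: composite utility {m:.3f} +/- {np.sqrt(v):.3f}")
```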

CFD Analysis of Two Phase Flow in a Horizontal Pipe – Prediction of Pressure Drop

In the design of condensers, the prediction of pressure drop is as important as the prediction of the heat transfer coefficient. Modeling two-phase flow, particularly liquid-vapor flow under diabatic conditions inside a horizontal tube, is difficult with the two-phase models available in FLUENT because of the continuously changing flow patterns. In the present analysis, a CFD analysis of two-phase refrigerant flow inside a horizontal tube of 0.0085 m inner diameter and 1.2 m length is carried out using the homogeneous model under adiabatic conditions. The refrigerants considered are R22, R134a, and R407C. The analysis is performed at different saturation temperatures and flow rates to evaluate the local frictional pressure drop. In the homogeneous model, average properties are obtained for each refrigerant, which is then treated as a single-phase pseudo-fluid. The pressure drop data so obtained are compared with the separated-flow models available in the literature.
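For reference, a minimal homogeneous-model calculation of the frictional pressure gradient looks like the sketch below; the mixture property rules (quality-weighted density, McAdams viscosity) are the usual textbook ones, and the R134a property values are rough figures used only for illustration.

```python
D, G, x = 0.0085, 400.0, 0.5          # tube ID [m], mass flux [kg/m^2 s], quality
rho_l, rho_g = 1147.0, 50.0           # liquid / vapor density [kg/m^3], approx.
mu_l, mu_g = 1.6e-4, 1.3e-5           # liquid / vapor viscosity [Pa s], approx.

rho_h = 1.0 / (x / rho_g + (1 - x) / rho_l)   # homogeneous mixture density
mu_h = 1.0 / (x / mu_g + (1 - x) / mu_l)      # McAdams mixture viscosity
Re = G * D / mu_h                             # pseudo-fluid Reynolds number
f = 0.079 * Re ** -0.25                       # Blasius (Fanning) friction factor
dp_dz = 2.0 * f * G**2 / (rho_h * D)          # frictional gradient [Pa/m]
print(f"Re = {Re:.0f}, dP/dz = {dp_dz:.0f} Pa/m")
```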

Improvement in Performance and Emission Characteristics of a Single Cylinder S.I. Engine Operated on Blends of CNG and Hydrogen

This paper presents the experimental results of a single-cylinder Enfield engine fitted with an electronically controlled fuel injection system, developed to carry out exhaustive tests using neat CNG and mixtures of hydrogen in compressed natural gas (HCNG) at 0, 5, 10, 15, and 20% hydrogen by energy. Experiments were performed at 2000 and 2400 rpm with wide-open throttle and varying equivalence ratio. Hydrogen, which has a fast burning rate, enhances the flame propagation rate of compressed natural gas when added to it. Emissions of HC and CO decreased with an increasing percentage of hydrogen, but NOx was found to increase. The results indicated a marked improvement in brake thermal efficiency as the percentage of added hydrogen increased, and the improvement was clearly greater in the lean region than in the rich region. This approach is expected to reduce vehicular emissions while increasing thermal efficiency, and thus to help curb further environmental degradation.
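Since the blends are specified by energy share, it may help to see how that converts to a mass share; the sketch below uses typical lower heating values (H2 about 120 MJ/kg, CNG about 50 MJ/kg), which are rough textbook figures rather than the paper's fuel data.

```python
LHV_H2, LHV_CNG = 120.0, 50.0   # lower heating values [MJ/kg], typical figures

def h2_mass_fraction(energy_fraction):
    """Mass fraction of H2 giving the requested share of blend energy."""
    m_h2 = energy_fraction / LHV_H2           # kg H2 per MJ of blend energy
    m_cng = (1 - energy_fraction) / LHV_CNG   # kg CNG per MJ of blend energy
    return m_h2 / (m_h2 + m_cng)

for pct in (5, 10, 15, 20):
    print(f"{pct}% by energy -> {100 * h2_mass_fraction(pct / 100):.1f}% by mass")
```

The small mass fractions (about 9% at 20% by energy) reflect hydrogen's much higher energy content per kilogram.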

Non-Invasive Capillary Blood Flow Measurement: Laser Speckle and Laser Doppler

Microcirculation is essential for the proper supply of oxygen and nutritive substances to biological tissue and for the removal of the waste products of metabolism; the determination of blood flow in the capillaries is therefore of great interest to clinicians. A comparison between the developed non-invasive, non-contact, whole-field laser speckle contrast imaging (LSCI) technique and a commercially available laser Doppler blood flowmeter (LDF), both used to evaluate blood flow at the fingertip and elbow, is presented here. The LSCI technique gives more quantitative information on the velocity of blood than the perfusion values obtained using the LDF. Such capillary blood flow measurements can assist clinicians in the diagnosis of vascular diseases of the upper extremities.
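The quantity at the heart of LSCI is the local speckle contrast K = sigma/mean computed over a small sliding window, with faster flow blurring the speckle and lowering K. A minimal sketch, where the window size and the synthetic frame are illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(frame, win=7):
    """Local contrast K = std/mean over a win x win sliding window."""
    mean = uniform_filter(frame, win)
    mean_sq = uniform_filter(frame * frame, win)
    var = np.clip(mean_sq - mean**2, 0, None)
    return np.sqrt(var) / (mean + 1e-12)

rng = np.random.default_rng(3)
frame = rng.exponential(scale=100.0, size=(128, 128))  # fully developed speckle
K = speckle_contrast(frame)
print(f"mean contrast: {K.mean():.2f}")   # ~1 for static, fully developed speckle
```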

Identifying the Objectives of Outsourcing Logistics Services as a Basis for Measuring Its Financial and Operational Performance

Logistics outsourcing is a growing trend, and measuring its performance is a challenge. Performance measurement must be consistent with the objectives set for logistics outsourcing, yet we have found no objective-based performance measurement system. We have conducted a comprehensive review of the specialist literature to cover this gap, which has led us to identify and define these objectives. The outcome is a list of the most relevant objectives and their descriptions. This will enable us to analyse in a future study whether the indicators used for measuring logistics outsourcing performance are consistent with the objectives pursued. If they are not, we will propose a set of financial and operational indicators for measuring logistics outsourcing performance that takes the goals being pursued into account.

Modified Fast and Exact Algorithm for Fast Haar Transform

The wavelet transform, or wavelet analysis, is a recently developed mathematical tool of applied mathematics; in numerical analysis, wavelets also serve as a Galerkin basis for solving partial differential equations. The Haar transform, or Haar wavelet transform, is the simplest and earliest example of an orthonormal wavelet transform, and owing to its popularity in wavelet analysis there are several definitions and various generalizations and algorithms for calculating it. The fast Haar transform (FHT) is one of the algorithms that reduce the tedious calculations of the Haar transform. In this paper, we present a modified fast and exact algorithm for the FHT, namely the Modified Fast Haar Transform (MFHT). The proposed algorithm allows certain calculations in the decomposition process to be skipped without affecting the results.
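For orientation, one common averaging-and-differencing formulation of the FHT is sketched below; normalization conventions vary across the definitions the abstract alludes to, and the MFHT's skipped calculations are not reproduced here.

```python
import numpy as np

def fht(signal):
    """Fast Haar transform: each level maps pairs (a, b) to (a+b)/2, (a-b)/2."""
    x = np.asarray(signal, dtype=float)
    assert x.size and x.size & (x.size - 1) == 0, "length must be a power of 2"
    out = x.copy()
    n = x.size
    while n > 1:
        a, b = out[:n:2], out[1:n:2]                    # even/odd samples
        out[:n] = np.concatenate([(a + b) / 2, (a - b) / 2])
        n //= 2                                          # recurse on the averages
    return out

print(fht([1, 2, 3, 4, 5, 6, 7, 8]))
# [ 4.5 -2.  -1.  -1.  -0.5 -0.5 -0.5 -0.5]
```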

The PARADIGMA Approach for Cooperative Work in the Medical Domain

PARADIGMA (PARticipative Approach to DIsease Global Management) is a pilot project that aims to develop and demonstrate an Internet-based reference framework for sharing scientific resources and findings in the treatment of major diseases. PARADIGMA defines and disseminates a common methodology and optimised protocols (Clinical Pathways) to support service functions directed to patients and individuals on matters such as prevention, post-hospitalisation support, and awareness. PARADIGMA will provide a platform of information services, user-oriented and optimised against social, cultural, and technological constraints, supporting the Health Care Global System of the Euro-Mediterranean Community in a continuous improvement process.

A Model for Estimation of Efforts in Development of Software Systems

Software effort estimation is the process of predicting the most realistic amount of effort required to develop or maintain software based on incomplete, uncertain, and/or noisy input. Effort estimates may be used as input to project plans, iteration plans, and budgets. Various models, such as the Halstead, Walston-Felix, Bailey-Basili, Doty, and GA-based models, have already been used to estimate software effort for projects. In this study, statistical models, a Fuzzy-GA hybrid, and Neuro-Fuzzy (NF) inference systems are tested for estimating software project effort. The performance of the developed models was evaluated on NASA software project datasets, and the results are compared with the Halstead, Walston-Felix, Bailey-Basili, Doty, and genetic algorithm based models reported in the literature. The NF model achieves the lowest MMRE and RMSE values, showing the best results compared with the Fuzzy-GA based hybrid inference system and the other existing models used for effort prediction.
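For context, the classical baselines named above are usually quoted as simple KLOC-to-person-months power laws; the sketch below applies those commonly cited forms and computes MMRE and RMSE. The coefficients are as typically quoted in the effort estimation literature, and the project data are placeholders, not the NASA datasets.

```python
import numpy as np

models = {
    "Halstead":      lambda k: 0.7 * k**1.50,
    "Walston-Felix": lambda k: 5.2 * k**0.91,
    "Bailey-Basili": lambda k: 5.5 + 0.73 * k**1.16,
    "Doty (k>9)":    lambda k: 5.288 * k**1.047,     # valid for KLOC > 9
}

kloc = np.array([10.0, 46.5, 21.5])        # placeholder project sizes [KLOC]
actual = np.array([24.0, 96.0, 79.0])      # placeholder actual efforts [PM]

for name, f in models.items():
    pred = f(kloc)
    mmre = np.mean(np.abs(actual - pred) / actual)   # mean magnitude of rel. error
    rmse = np.sqrt(np.mean((actual - pred) ** 2))
    print(f"{name:14s} MMRE={mmre:.2f} RMSE={rmse:.1f}")
```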