The Impact of ERP Systems on Accounting Processes

Advances in information technology and recent changes in the business environment, such as globalization, deregulation, and privatization, have made running a successful business more difficult than ever before. To remain successful and competitive, companies have been forced to react to these changes in order to survive and succeed. The implementation of an Enterprise Resource Planning (ERP) system improves information flow, reduces costs, establishes linkages with suppliers, and reduces response time to customer needs. Focusing on a sample of Greek companies, this paper investigates the ERP market in Greece, the reasons why Greek companies invest in ERP systems, the benefits users have achieved, and the influence of ERP systems on the use of new accounting practices. The results indicate a greater level of information integration, flexibility in information access, and greater functionality provided by ERP systems, but little influence on the use of new accounting practices.

Damage of Tubular Equipment in Process Industry

Tubular process equipment is often damaged in industrial processes. The damage occurs both on devices working at high temperatures and on less exposed devices. In the case of sudden damage to key equipment, a shutdown of the whole production unit and the resulting significant economic losses are imminent. This paper presents failure analyses of several types of tubular process equipment. In all cases, the causes of damage are discussed and corrective actions are suggested. An important part of this work is the analysis of operating conditions and the identification of unfavourable working states that decrease the lifetime of the devices. Numerical methods, which have recently become very popular, are used for the analysis of the equipment.

Novel Approach for Promoting the Generalization Ability of Neural Networks

A new approach to promoting the generalization ability of neural networks is presented. It is based on the point of view of fuzzy theory. The approach is implemented by shrinking or magnifying the input vector, thereby reducing the difference between the training set and the testing set. It is called the "shrinking-magnifying approach" (SMA). A new algorithm, the α-algorithm, is also presented to find the appropriate shrinking-magnifying factor (SMF) α and obtain better generalization ability of neural networks. A number of simulation experiments were conducted to study the effect of the SMA and the α-algorithm. The experimental results are discussed in detail, and the operating principle of the SMA is analyzed theoretically. The results of the experiments and analyses show that the new approach is not only simpler and easier to apply, but is also effective for many neural networks and many classification problems. In our experiments, the proportion of cases in which the generalization ability of the neural networks improved reached as high as 90%.
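
To make the idea concrete, the following is a minimal Python sketch of the SMA: the input vectors are multiplied by a factor α before training and testing, and a plain grid search stands in for the paper's α-algorithm, whose details are not given here. The network, the toy data, and the candidate α values are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def evaluate_alpha(alpha, X_train, y_train, X_test, y_test):
    """Train on inputs scaled by alpha (shrink if alpha < 1, magnify if
    alpha > 1) and return the test accuracy."""
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    net.fit(alpha * X_train, y_train)
    return net.score(alpha * X_test, y_test)

# Toy data; a grid search stands in for the alpha-algorithm.
X = np.random.rand(200, 4)
y = (X.sum(axis=1) > 2).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
best_alpha = max([0.5, 0.75, 1.0, 1.25, 1.5],
                 key=lambda a: evaluate_alpha(a, X_train, y_train, X_test, y_test))
print("best SMF alpha:", best_alpha)
```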

Performance Analysis of a Series of Adaptive Filters in Non-Stationary Environment for Noise Cancelling Setup

Noise cancellation is an essential component of many DSP applications. Real-time signals change rapidly and abruptly. In noise cancellation, a reference signal, which is an approximation of the noise signal that corrupts the original information signal, is obtained and then subtracted from the noise-bearing signal to obtain a noise-free signal. This approximation of the noise signal is obtained through adaptive filters, which are self-adjusting. Since the changes in real-time signals are abrupt, an adaptive algorithm is needed that converges quickly and is stable. Least mean square (LMS) and normalized LMS (NLMS) are two widely used algorithms because of their computational simplicity and ease of implementation, but their convergence rates are low. Adaptive averaging filters (AFA) are also used because they converge quickly, but they are less stable. This paper provides a comparative study of LMS, NLMS, AFA, and new enhanced averaged adaptive (Average NLMS, ANLMS) filters for a noise-cancelling application using speech signals.
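
As an illustration of the noise-cancelling setup described above, here is a minimal NLMS sketch in Python. The filter order, step size, and toy signals are assumptions for demonstration; the paper's speech-signal experiments are not reproduced.

```python
import numpy as np

def nlms_cancel(noisy, reference, order=8, mu=0.5, eps=1e-6):
    """NLMS adaptive noise canceller: estimate the corrupting noise from
    the reference input and subtract it from the noisy signal. The error
    signal e(n) is the estimate of the clean signal."""
    w = np.zeros(order)                       # adaptive filter weights
    out = np.zeros(len(noisy))
    for n in range(order - 1, len(noisy)):
        x = reference[n - order + 1:n + 1][::-1]   # latest reference samples
        e = noisy[n] - w @ x                       # desired-signal estimate
        w += mu * e * x / (eps + x @ x)            # normalized weight update
        out[n] = e
    return out

# Toy usage: a sinusoid corrupted by a filtered version of the reference noise.
rng = np.random.default_rng(0)
t = np.arange(4000)
reference = rng.standard_normal(t.size)
signal = np.sin(2 * np.pi * t / 50)
noisy = signal + np.convolve(reference, [0.6, 0.3], mode="same")
clean = nlms_cancel(noisy, reference)
```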

Hourly Electricity Load Forecasting: An Empirical Application to the Italian Railways

Due to the liberalization of countless electricity markets, load forecasting has become crucial to all public utilities for which electricity is a strategic variable. With the goal of contributing to the forecasting process inside public utilities, this paper addresses the application of the Holt-Winters exponential smoothing technique and time series analysis to forecasting the hourly electricity load curve of the Italian railways. The results of the analysis confirm the accuracy of the two models and therefore the relevance of forecasting inside public utilities.
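
A minimal sketch of Holt-Winters forecasting for an hourly load series, using statsmodels; the additive trend, the additive daily (24-hour) seasonality, and the synthetic data are assumptions, since the abstract does not give the paper's exact specification.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic stand-in for an hourly load series with a daily cycle.
rng = np.random.default_rng(0)
hours = np.arange(24 * 60)
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

# Holt-Winters with additive trend and additive 24-hour seasonality.
model = ExponentialSmoothing(load, trend="add", seasonal="add",
                             seasonal_periods=24).fit()
next_day = model.forecast(24)   # the next 24 hourly load values
```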

An Investigation into Kanji Character Discrimination Process from EEG Signals

The frontal area of the brain is known to be involved in behavioral judgement. Because a Kanji character can be discriminated visually and linguistically from other characters, we hypothesized that, in Kanji character discrimination, frontal event-related potential (ERP) waveforms reflect two discrimination processes in separate time periods: one based on visual analysis and the other based on lexical access. To examine this hypothesis, we recorded ERPs while subjects performed a Kanji lexical decision task. In this task, either a known Kanji character, an unknown Kanji character, or a symbol was presented, and the subject had to report whether the presented character was a known Kanji character or not. The same response was required for unknown Kanji trials and symbol trials. As a signal preprocessing step, we examined the performance of an artifact-rejection method based on independent component analysis (ICA), found it effective, and therefore adopted it. In the ERP results, there were two time periods in which the frontal ERP waveforms differed significantly between the unknown Kanji trials and the symbol trials: around 170 ms and around 300 ms after stimulus onset. This result supports our hypothesis. In addition, it suggests that Kanji character lexical access may be fully completed by around 260 ms after stimulus onset.
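
The ICA-based artifact rejection can be sketched as follows, here with scikit-learn's FastICA rather than whatever implementation the authors used: decompose the multichannel EEG into independent components, zero out the artifact components, and reconstruct the channels. Identifying which components are artifacts (e.g., eye blinks) is assumed to be done separately, by inspection or by correlation with EOG channels.

```python
import numpy as np
from sklearn.decomposition import FastICA

def reject_artifacts(eeg, bad_components):
    """eeg: array of shape (n_samples, n_channels). Returns the EEG
    reconstructed with the listed independent components removed."""
    ica = FastICA(n_components=eeg.shape[1], random_state=0)
    sources = ica.fit_transform(eeg)        # (n_samples, n_components)
    sources[:, bad_components] = 0.0        # drop artifact components
    return ica.inverse_transform(sources)   # back to channel space
```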

A Bi-Objective Model for Location-Allocation Problem within Queuing Framework

This paper proposes a bi-objective model for the facility location problem under a congestion system. The model is motivated by applications such as locating servers for bank automated teller machines (ATMs) and communication networks. It is specifically suited to situations in which fixed service facilities are congested by stochastic demand within a queuing framework. We formulate the model from two perspectives simultaneously: (i) the customers and (ii) the service provider. The objectives are to minimize (i) the total expected travelling and waiting time and (ii) the average facility idle time. The model is a mixed-integer nonlinear programming problem that belongs to the class of NP-hard problems. To solve it, two metaheuristics are proposed: the non-dominated sorting genetic algorithm (NSGA-II) and the non-dominated ranking genetic algorithm (NRGA). To evaluate the performance of the two algorithms, several numerical examples are generated and analyzed with performance metrics to determine which algorithm works better.
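
As a sketch of the machinery shared by NSGA-II and NRGA, the following Python function performs fast non-dominated sorting of candidate solutions by the two objectives (expected travel-plus-waiting time and average facility idle time, both minimized). It is a generic textbook routine, not the authors' implementation.

```python
def non_dominated_sort(points):
    """Sort objective vectors (minimization) into Pareto fronts.
    Returns a list of fronts, each a list of indices into points."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b
    n = len(points)
    dominated_by = [set() for _ in range(n)]  # solutions each point dominates
    counts = [0] * n                          # number of points dominating i
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                dominated_by[i].add(j)
            elif dominates(points[j], points[i]):
                counts[i] += 1
    fronts, current = [], [i for i in range(n) if counts[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

# Example with (waiting time, idle time) pairs: the last point is dominated.
print(non_dominated_sort([(3, 5), (4, 4), (2, 7), (5, 6)]))  # [[0, 1, 2], [3]]
```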

Impact of Faults in Different Software Systems: A Survey

Software maintenance is an extremely important activity in the software development life cycle. It involves considerable human effort, cost, and time. Software maintenance may be further subdivided into different activities such as fault prediction, fault detection, fault prevention, and fault correction. This topic has gained substantial attention due to sophisticated and complex applications, commercial hardware, clustered architectures, and artificial intelligence. In this paper, we survey the work done in the field of software maintenance. Software fault prediction has been studied in the context of fault-prone modules, self-healing systems, developer information, maintenance models, and more. However, many questions, such as how to model and weight the impact of different kinds of faults in the various types of software systems, still need to be explored in the field of fault severity.

A New Quantile Based Fuzzy Time Series Forecasting Model

Time series models have been used to make predictions about academic enrollments, weather, road accident casualties, stock prices, and so on. Based on the concepts of quantile regression models, we have developed a simple time-variant quantile-based fuzzy time series forecasting method. The proposed method bases the forecast on a prediction of the future trend of the data. In place of the actual quantiles of the data at each point, we convert the statistical concept into a fuzzy one by using fuzzy quantiles derived from an ensemble of fuzzy membership functions. We provide a fuzzy metric that uses the trend forecast to calculate the future value. The proposed model is applied to TAIFEX forecasting. It is shown that the proposed method works best among the compared models with respect to model complexity and forecasting accuracy.
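
Since the abstract does not spell out the membership-function ensemble, the following is only a generic Python sketch of the underlying idea: partition the universe at empirical quantiles, assign each observation to the nearest fuzzy-set center, and forecast the next value from the average trend of the set centers. All numerical choices are illustrative.

```python
import numpy as np

def fuzzify_by_quantiles(series, n_sets=5):
    """Partition the data range at empirical quantiles and label each
    observation with the index of its nearest fuzzy-set center."""
    qs = np.quantile(series, np.linspace(0, 1, n_sets + 1))
    centers = (qs[:-1] + qs[1:]) / 2
    labels = np.argmin(np.abs(series[:, None] - centers[None, :]), axis=1)
    return centers, labels

def trend_forecast(series, centers, labels):
    """Next value = current set center shifted by the mean center-to-center
    step observed so far (a simple proxy for the future trend)."""
    steps = np.diff(centers[labels])
    return centers[labels[-1]] + steps.mean()

series = np.array([95.0, 100.0, 105.0, 103.0, 110.0, 115.0, 118.0])
centers, labels = fuzzify_by_quantiles(series)
print(trend_forecast(series, centers, labels))
```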

Intragenic MicroRNA Binding Sites in mRNAs of Genes Involved in Carcinogenesis

MiRNAs participate in the regulation of gene translation. Some studies have investigated the interactions between genes and intragenic miRNAs, and it is important to study the miRNA binding sites of genes involved in carcinogenesis. The RNAHybrid 2.1 and ERNAhybrid programmes were used to compute the hybridization free energy of miRNA binding sites. Of the 54 mRNAs studied, 22.6%, 37.7%, and 39.7% of miRNA binding sites were located in the 5'UTRs, CDSs, and 3'UTRs, respectively. The density of miRNA binding sites in the 5'UTR was 1.6 to 43.2 times greater than in the CDS and 1.8 to 8.0 times greater than in the 3'UTR. Three types of miRNA interactions with mRNAs were revealed: 5'-dominant canonical, 3'-compensatory, and complementary binding sites. MiRNAs regulate gene expression, and information on the interactions between miRNAs and mRNAs could be useful in molecular medicine. We recommend that the newly described sites be validated experimentally.
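
The density comparisons above normalize binding-site counts by region length, since the 5'UTR, CDS, and 3'UTR differ greatly in size. A small Python sketch with hypothetical numbers:

```python
def site_density(site_counts, region_lengths):
    """Binding sites per nucleotide for each mRNA region."""
    return {region: site_counts[region] / region_lengths[region]
            for region in site_counts}

# Hypothetical counts and lengths for a single mRNA (illustrative only).
counts = {"5'UTR": 6, "CDS": 10, "3'UTR": 12}
lengths = {"5'UTR": 150, "CDS": 1200, "3'UTR": 900}
density = site_density(counts, lengths)
print(density["5'UTR"] / density["CDS"])   # 5'UTR-to-CDS density ratio
```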

Application of Pearson Parametric Distribution Model in Fatigue Life Reliability Evaluation

The aim of this paper is to introduce a parametric distribution model for fatigue life reliability analysis that deals with variation in material properties. Service loads, in the form of a response-time history signal recorded on Belgian pavé, were replicated on a multi-axial spindle-coupled road simulator, and the stress-life method was used to estimate the fatigue life of an automotive stub axle. A PSN curve was obtained from a monotonic tension test, and a two-parameter Weibull distribution function was used to acquire the mean life of the component. A Pearson system was developed to evaluate the fatigue life reliability by treating the stress range intercept and the slope of the PSN curve as random variables. Assuming a normal distribution of fatigue strength, the fatigue life of the stub axle is found to have the highest reliability between 10,000 and 15,000 cycles. By taking into account the variation in material properties associated with the size effect and with machining and manufacturing conditions, the method described in this study can be effectively applied to determine the probability of failure of mass-produced parts.
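
A minimal Monte Carlo sketch of the reliability idea, assuming a Basquin-type S-N relation N = (S/A)^(1/b) with a normally distributed stress-range intercept A and slope b; the distribution parameters and stress level below are illustrative, not the paper's fitted Pearson-system values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim = 100_000

# Basquin relation S = A * N**b, inverted to N = (S / A)**(1 / b);
# intercept and slope treated as normal random variables (illustrative).
A = rng.normal(1000.0, 50.0, n_sim)     # stress-range intercept, MPa
b = rng.normal(-0.10, 0.005, n_sim)     # S-N curve slope
S = 400.0                               # applied stress range, MPa
life = (S / A) ** (1.0 / b)             # sampled cycles to failure

# Reliability at n cycles = fraction of sampled lives exceeding n.
for n in (10_000, 15_000, 100_000):
    print(f"R({n} cycles) = {np.mean(life > n):.3f}")
```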

On Standardizing the Metal Die of Punch and Matrix with Mechanical Desktop Software

In industry, one of the most important subjects is the die and its characteristics: various punch-and-matrix metal dies are used for cutting and forming different mechanical pieces. Since the common parts that form the main frame of a die are often not proportioned to the pieces and dies, using so-called common parts for frames within specified dimension ranges can decrease design time, occupied warehouse space, and manufacturing costs. Die parts become common parts when their shapes and dimensions are made uniform. The common parts of a punch-and-matrix metal die are the bolster, guide bush, guide pillar, and shank. In this paper, the common parts and the parameters affecting the selection of each of them are studied as the primary information; then the selection and design of these mechanical parts are implemented and investigated in the Mechanical Desktop software. By developing this software, the common metal parts of punches and matrices can be standardized. These studies will be very useful for designers in their design work, and their use offers great advantages to manufacturers by reducing the space occupied by dies.

Information Technology Application for Knowledge Management in Medium-Size Businesses

The results of this study of knowledge management systems in businesses show that most of these businesses provide internet access for their employees so that they can acquire new knowledge via the internet, corporate websites, electronic mail, and electronic learning systems. These business organizations use information technology applications for knowledge management because of their convenience, time savings, ease of use, accuracy of information, and knowledge usefulness. The results also indicate the following requirements for improving corporate knowledge management systems: 1) management must support the corporate knowledge management system; 2) the goal of corporate knowledge management must be clear; 3) the corporate culture should facilitate the exchange and sharing of knowledge within the organization; 4) the cooperation of personnel at all levels must be obtained; 5) an information technology infrastructure must be provided; and 6) the system must be developed regularly and constantly.

A Genetic Algorithm with Priority Selection for the Traveling Salesman Problem

A conventional GA combined with a local search algorithm, such as 2-OPT, forms a hybrid genetic algorithm (HGA) for the traveling salesman problem (TSP). However, geometric properties, which are problem-specific knowledge, can be used to improve the search process of the HGA. Some tour segments (edges) of a TSP are good, while others may be too long to appear in a short tour. This knowledge can constrain the GA to work with good tour segments and to consider long tour segments less often. Consequently, a new algorithm, called the intelligent-OPT hybrid genetic algorithm (IOHGA), is proposed to improve the GA and the 2-OPT algorithm and reduce the search time for the optimal solution. Based on the geometric properties, all tour segments are assigned two-level priorities to distinguish between good and bad genes. A simulation study was conducted to evaluate the performance of the IOHGA. The experimental results indicate that, in general, the IOHGA obtains near-optimal solutions in less time and with better accuracy than a hybrid genetic algorithm with simulated annealing (HGA(SA)).
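
For reference, the 2-OPT local search that the HGA and IOHGA build upon can be sketched in a few lines of Python; the priority scheme of the IOHGA itself is not reproduced here.

```python
import math

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Classic 2-OPT: reverse a tour segment whenever that shortens the
    tour, and repeat until no improving move remains."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, pts) < tour_length(tour, pts):
                    tour, improved = cand, True
    return tour

pts = [(0, 0), (0, 2), (3, 2), (3, 0), (1, 1)]
print(two_opt(list(range(len(pts))), pts))
```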

A New Method for Extracting Ocean Wave Energy Utilizing the Wave Shoaling Phenomenon

Fossil fuels are the major source for meeting the world's energy requirements, but their rapidly diminishing reserves and adverse effects on our ecological system are of major concern. Renewable energy utilization is needed to meet future challenges, and ocean energy is one of these promising energy resources. Three-fourths of the earth's surface is covered by the oceans. This enormous energy resource is contained in the oceans' waters, the air above the oceans, and the land beneath them. The renewable energy of the ocean is mainly contained in waves, ocean currents, and offshore solar energy. Very few efforts have been made to harness this reliable and predictable resource. Harnessing ocean energy requires detailed knowledge of the underlying mathematical governing equations and their analysis. With the advent of extraordinary computational resources, it is now possible to predict the wave climatology in laboratory simulation. Several techniques have been developed, most of which stem from the numerical analysis of the Navier-Stokes equations. This paper presents a brief overview of such mathematical models and of tools for understanding and analyzing the wave climatology. Models of the 1st, 2nd, and 3rd generations have been developed to estimate wave characteristics and assess the power potential. A brief overview of available wave energy technologies is also given. A novel concept for an on-shore wave energy extraction method is presented at the end. The concept is based on total energy conservation, where the energy of the wave is transferred to a flexible converter to increase its kinetic energy. A squeezing action exerted by external pressure on the converter body results in increased velocities at the discharge section. The high velocity head can then be used for energy storage or directly for power generation. This converter utilizes both the potential and kinetic energy of the waves and is designed for on-shore or near-shore application. The increased wave height at the shore due to shoaling effects increases the potential energy of the waves, which is converted into renewable energy. This approach will result in an economical wave energy converter, owing to near-shore installation and the denser waves produced by shoaling, and the method will be more efficient because it taps both the potential and kinetic energy of the waves.
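
The shoaling amplification invoked here can be quantified with linear (Airy) wave theory: solving the dispersion relation ω² = gk·tanh(kh) for the wavenumber gives the local group velocity, and the shoaling coefficient Ks = sqrt(Cg0/Cg) scales the deep-water wave height. A small Python sketch, with an illustrative wave period and depth:

```python
import math

def wavenumber(omega, h, g=9.81):
    """Solve omega**2 = g * k * tanh(k * h) by damped fixed-point iteration."""
    k = omega ** 2 / g                     # deep-water initial guess
    for _ in range(50):
        k = 0.5 * (k + omega ** 2 / (g * math.tanh(k * h)))
    return k

def shoaling_coefficient(T, h, g=9.81):
    """Ks = sqrt(Cg0 / Cg): wave height grows as H = Ks * H0 near shore."""
    omega = 2 * math.pi / T
    k = wavenumber(omega, h, g)
    c = omega / k
    cg = 0.5 * (1 + 2 * k * h / math.sinh(2 * k * h)) * c  # local group velocity
    cg0 = g / (2 * omega)                                  # deep-water value
    return math.sqrt(cg0 / cg)

# A 10 s swell arriving in 3 m of water: height amplified by roughly 1.2.
print(shoaling_coefficient(T=10.0, h=3.0))
```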

Electronic Voting System using Mobile Terminal

Electronic voting (e-voting) over the internet has recently been performed in some nations and regions. It removes the spatial restriction of having to visit a polling place in person, but internet e-voting requires a computer with an internet connection, and it requires an access code issued through prior registration of the voter. To minimize these disadvantages, we propose a method in which a voter who holds a wireless certificate issued in advance uses his or her own cellular phone for e-voting, without any special registration to vote. Our proposal allows a voter to cast a vote in a simple and convenient way without limits of time and location, thereby increasing the voting rate, while also ensuring confidentiality and anonymity.
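
At its core, the proposal relies on the voter's phone signing the ballot with the key behind its wireless certificate, so the election server can verify origin and integrity. A minimal sketch with the pyca/cryptography package is shown below; certificate issuance, ballot encryption, and the anonymity mechanism are all out of scope here, and the ballot encoding is hypothetical.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# The key pair stands in for the voter's wireless certificate; in a real
# system the public key would be certified by the election authority.
voter_key = ec.generate_private_key(ec.SECP256R1())
ballot = b"ballot-id=123;choice=2"     # hypothetical ballot encoding

# The phone signs the ballot; the server verifies with the certified key.
signature = voter_key.sign(ballot, ec.ECDSA(hashes.SHA256()))
voter_key.public_key().verify(signature, ballot, ec.ECDSA(hashes.SHA256()))
print("ballot signature verified")     # verify() raises if invalid
```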

A Hidden Markov Model for Modeling Pavement Deterioration under Incomplete Monitoring Data

In this paper, the potential use of an exponential hidden Markov model to model a hidden pavement deterioration process, i.e. one that is not directly measurable, is investigated. It is assumed that the evolution of the physical condition, which is the hidden process, and the evolution of the values of pavement distress indicators can be adequately described using discrete condition states and modeled as Markov processes. It is also assumed that condition data can be collected through visual inspections over time and represented continuously using an exponential distribution. The advantage of using such a model in the decision-making process is illustrated through an empirical study using real-world data.
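
To illustrate the model class, here is a Python sketch of the scaled forward algorithm for an HMM with discrete condition states and exponentially distributed emissions; the transition matrix, rates, and observations are illustrative, not estimated from the paper's data.

```python
import numpy as np

def forward_loglik(obs, pi, A, rates):
    """Log-likelihood of observed distress-indicator values under an HMM
    with exponential emissions (one rate parameter per hidden state)."""
    def emis(x):
        return rates * np.exp(-rates * x)   # exponential density per state
    alpha = pi * emis(obs[0])
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for x in obs[1:]:
        alpha = (alpha @ A) * emis(x)       # predict, then weight by emission
        s = alpha.sum()                     # scale to avoid underflow
        loglik += np.log(s)
        alpha /= s
    return loglik

# Three condition states (good, fair, poor); deterioration is irreversible.
pi = np.array([0.8, 0.15, 0.05])
A = np.array([[0.90, 0.08, 0.02],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])
rates = np.array([2.0, 1.0, 0.4])   # worse states emit larger values on average
print(forward_loglik(np.array([0.2, 0.5, 1.1, 2.3]), pi, A, rates))
```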

Parallel Direct Integration Variable Step Block Method for Solving Large System of Higher Order Ordinary Differential Equations

The aim of this paper is to investigate the performance of a developed two-point block method, designed for two processors, for directly solving large non-stiff systems of higher order ordinary differential equations (ODEs). The method calculates the numerical solution at two points simultaneously and produces two new equally spaced solution values within a block, so the computational tasks at each time step can each be assigned to a single processor. The algorithm was implemented in the C language, and the parallel computation was carried out in a shared-memory environment. Numerical results are given comparing the efficiency of the developed method with its sequential timing. For large problems, the parallel implementation achieved a speed-up of 1.95 and an efficiency of 98% on the two processors.
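
The reported figures follow from the standard definitions of parallel speed-up and efficiency, as the following trivial Python check shows (the timings are placeholders):

```python
def speedup_and_efficiency(t_seq, t_par, n_proc):
    """Speed-up S = T_seq / T_par; efficiency E = S / p for p processors."""
    s = t_seq / t_par
    return s, s / n_proc

# A speed-up of 1.95 on two processors gives 1.95 / 2 = 0.975, i.e. ~98%.
s, e = speedup_and_efficiency(t_seq=100.0, t_par=100.0 / 1.95, n_proc=2)
print(f"speed-up = {s:.2f}, efficiency = {e:.1%}")
```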

Process and Supply-Chain Optimization for Testing and Verification of Formation Tester/Pressure-While-Drilling Tools

Applying a rigorous process to optimize the elements of a supply-chain network resulted in reduced waiting time for both the service provider and the customer. Different sources of downtime of the hydraulic pressure controller/calibrator (HPC) were causing interruptions in operations. The process examined all of these issues to drive greater efficiency. The issues included inherent design problems with the HPC pump, contamination of the HPC with impurities, and the lead time required for annual calibration in the USA. The HPC is used for the mandatory testing and verification of formation tester/pressure measurement/logging-while-drilling tools by oilfield service providers, including Halliburton. After a market study and analysis, it was concluded that the current HPC model is best suited to the oilfield industry. To use the existing HPC model effectively, the design and contamination issues were addressed through design and process improvements. An optimum network is proposed after comparing different supply-chain models for calibration lead-time reduction.

From Hype to Ignorance – A Review of 30 Years of Lean Production

Lean production (or lean management) has gained popularity in several waves. The last three decades have been filled with numerous attempts to apply these concepts in companies, but this has been only partially successful. The roots of lean production can be traced back to Toyota's just-in-time production. This concept, which according to the MIT research of Womack, Jones, and Roos was employed by Japanese car manufacturers, became popular under its international names "lean production" and "lean manufacturing" and was termed "Schlanke Produktion" in Germany. This contribution reviews lean production in Germany over the last thirty years: its development, trial and error, and implementation.