Parallel Direct Integration Variable Step Block Method for Solving Large Systems of Higher Order Ordinary Differential Equations

The aim of this paper is to investigate the performance of a two-point block method, designed for two processors, for directly solving non-stiff large systems of higher order ordinary differential equations (ODEs). The method calculates the numerical solution at two points simultaneously, producing two new equally spaced solution values within a block, so the computational task for each point at a given time step can be assigned to its own processor. The algorithm was developed in the C language, and the parallel computation was carried out in a shared-memory environment. Numerical results are given comparing the parallel implementation against the sequential timing. For large problems, the parallel implementation achieved a speed-up of 1.95 and 98% efficiency on two processors.
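
As a sanity check on those figures, speed-up and parallel efficiency follow their usual definitions; a minimal sketch (in Python rather than the paper's C, with placeholder timings):

```python
# Speed-up and parallel efficiency as used in the abstract:
#   speed-up   S = T_sequential / T_parallel
#   efficiency E = S / p   (p = number of processors)
def speedup(t_seq: float, t_par: float) -> float:
    return t_seq / t_par

def efficiency(t_seq: float, t_par: float, p: int) -> float:
    return speedup(t_seq, t_par) / p

# With two processors, a speed-up of 1.95 corresponds to
# 1.95 / 2 = 0.975, i.e. roughly the 98% efficiency reported.
print(efficiency(1.95, 1.0, 2))  # 0.975
```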

Gradual Shot Boundary Detection and Classification Based on Fractal Analysis

Shot boundary detection is a fundamental step in organizing large video collections. In this paper, we propose a new method for detecting and classifying gradual shot transitions in video, exploiting fractal analysis and an AIS-based classifier. The proposed features are the "vertical intercept" and "fractal dimension" of each video frame, computed from Fourier transform coefficients. We also use a classifier based on the Clonal Selection Algorithm. We implemented our solution and evaluated it on the TRECVID 2006 benchmark dataset.
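
The abstract does not detail the computation, but a common way to derive a fractal dimension and a log-log intercept from Fourier coefficients is to fit a line to the radially averaged power spectrum; the sketch below assumes that approach and the fractional-Brownian-surface relation D = (8 - beta)/2, which may differ from the authors' exact formulation:

```python
import numpy as np

def spectral_fractal_features(frame: np.ndarray):
    """Estimate the fractal dimension and 'vertical intercept' of a
    grayscale frame from the slope/intercept of a line fitted to its
    log-log radially averaged power spectrum."""
    f = np.fft.fftshift(np.fft.fft2(frame))
    power = np.abs(f) ** 2
    h, w = frame.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - w // 2, y - h // 2).astype(int)
    # Radially average the power spectrum (the DC bin is skipped below).
    radial = np.bincount(r.ravel(), power.ravel()) / np.maximum(
        np.bincount(r.ravel()), 1)
    freqs = np.arange(1, min(h, w) // 2)
    slope, intercept = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)
    # For a fractional Brownian surface, P(f) ~ f^(-beta) and D = (8 - beta)/2.
    beta = -slope
    return (8.0 - beta) / 2.0, intercept
```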

Data Preprocessing for Supervised Learning

Many factors affect the success of Machine Learning (ML) on a given task. The representation and quality of the instance data are first and foremost. If there is much irrelevant and redundant information present, or the data are noisy and unreliable, then knowledge discovery during the training phase is more difficult. It is well known that data preparation and filtering steps take a considerable amount of processing time in ML problems. Data pre-processing includes data cleaning, normalization, transformation, feature extraction and selection, among others; its product is the final training set. It would be convenient if a single sequence of pre-processing algorithms performed best on every data set, but this is not the case. Thus, we present the most well-known algorithms for each step of data pre-processing, so that practitioners can achieve the best performance for their data sets.
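
As a concrete illustration of such a sequence (cleaning, normalization, feature selection, then learning), a minimal scikit-learn pipeline might look like the following; the step choices here are generic examples, not the algorithms the survey ranks highest:

```python
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.tree import DecisionTreeClassifier

# One possible pre-processing sequence: impute missing values (cleaning),
# standardize features (normalization), keep the k most relevant features
# (feature selection), then train the actual learner.
preprocess_and_learn = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=10)),
    ("learn", DecisionTreeClassifier()),
])
# preprocess_and_learn.fit(X_train, y_train); the best sequence is
# data-set dependent, which is exactly the point the survey makes.
```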

Integrating Big Island Layout with Pull System for Production Optimization

Lean manufacturing is a production philosophy made popular by Toyota Motor Corporation (TMC). It is known globally as the Toyota Production System (TPS) and has the ultimate aim of reducing cost by thoroughly eliminating waste, or muda. TPS embraces Just-in-Time (JIT) manufacturing, achieving cost reduction through lead-time reduction. JIT manufacturing can be achieved by implementing a Pull system in production. Furthermore, TPS aims to improve productivity and create continuous flow by arranging machines and processes in cellular configurations, known as Cellular Manufacturing Systems (CMS). This paper studies the integration of CMS with the Pull system to establish a Big Island-Pull system for High Mix Low Volume (HMLV) products in an automotive component industry. The paper uses the built-in JIT system steps adapted from TMC to create the Pull system production and also to create a shojinka line, which has the flexibility to adapt to demand changes, according to takt time, simply by adding or removing manpower. This leads to production optimization.
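
Shojinka staffing follows directly from takt time, which is a simple ratio; a small sketch using the standard definitions (the shift time and demand figures are illustrative placeholders):

```python
import math

def takt_time(available_time_per_shift: float, demand_per_shift: float) -> float:
    """Takt time = available production time / customer demand."""
    return available_time_per_shift / demand_per_shift

def operators_needed(total_manual_cycle_time: float, takt: float) -> int:
    """Shojinka: headcount flexes with demand via takt time."""
    return math.ceil(total_manual_cycle_time / takt)

takt = takt_time(27000, 450)        # 27,000 s per shift, 450 units -> 60 s/unit
print(operators_needed(300, takt))  # 300 s of manual work -> 5 operators
```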

A New Heuristic Approach for the Stock-Cutting Problem

This paper addresses a stock-cutting problem with rotation of items and without the guillotine cutting constraint. To solve large-scale instances effectively and efficiently, we propose a simple but fast heuristic algorithm, and we show that it outperforms the latest published algorithms on large-scale problem instances.
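
The abstract does not describe the heuristic itself; purely for orientation, a generic shelf-packing baseline for the same problem class (rectangular items, 90-degree rotation allowed, no guillotine constraint) might look like this sketch, which is not the algorithm proposed in the paper:

```python
def shelf_pack(strip_width, items):
    """Pack (w, h) items into a strip of fixed width using shelves,
    rotating an item 90 degrees when that lowers the shelf height.
    A simple baseline, not the heuristic proposed in the paper."""
    placements, shelf_y, shelf_h, x = [], 0.0, 0.0, 0.0
    for w, h in sorted(items, key=lambda it: -max(it)):
        if h > w and h <= strip_width:       # prefer the flatter orientation
            w, h = h, w
        if x + w > strip_width:              # open a new shelf
            shelf_y, x, shelf_h = shelf_y + shelf_h, 0.0, 0.0
        placements.append((x, shelf_y, w, h))
        x, shelf_h = x + w, max(shelf_h, h)
    return placements, shelf_y + shelf_h     # layout and used strip height
```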

A New Method of Adaptation in an Integrated Learning Environment

A new method of adaptation in a partially integrated learning environment combining an electronic textbook (ET) and an integrated tutoring system (ITS) is described. The adaptation algorithm is described in detail. It includes: establishing the interconnections between operations and concepts; estimating the mastering level of each concept; estimating the student's non-mastering level, at the current learning step, of the information on each page of the ET; and creating a rank-ordered list of links to the ET pages containing information that requires repeated work.
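
A schematic of the final ranking step might look as follows; the data structures and the scoring rule (summing the non-mastery of the concepts a page covers) are illustrative assumptions, not the paper's exact formulas:

```python
# mastery: concept -> estimated mastering level in [0, 1]
# pages:   ET page id -> set of concepts presented on that page
def rank_pages_for_review(mastery, pages):
    """Rank links to ET pages by how much non-mastered material they cover."""
    def non_mastering(page_concepts):
        return sum(1.0 - mastery.get(c, 0.0) for c in page_concepts)
    return sorted(pages, key=lambda p: non_mastering(pages[p]), reverse=True)

mastery = {"loops": 0.9, "recursion": 0.3, "pointers": 0.1}
pages = {"et/p12": {"loops"}, "et/p47": {"recursion", "pointers"}}
print(rank_pages_for_review(mastery, pages))  # ['et/p47', 'et/p12']
```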

Trust and Security in Electronic Payments: What We Have and What We Need to Know

The growth of open networks has created commercial interest in exploiting them. The establishment of an electronic business mechanism must be accompanied by a digital electronic payment system to transfer the value of transactions. Financial organizations are asked to offer a secure e-payment scheme with levels of trust and security equivalent to those of conventional paper-based payment transactions. The paper addresses the challenge of the first-trade problem in e-commerce, provides a brief literature review on electronic payment, and attempts to explain the underlying concept and method of trust in relation to electronic payment.

Performance Enhancement of Cellular OFDM-Based Wireless LANs by Exploiting Spatial Diversity Techniques

This paper investigates how the use of multiple transmit antennas by OFDM-based wireless LAN subscribers can reduce the physical-layer error rate. By comparing wireless LANs that utilize spatial diversity techniques with conventional ones, it shows how PHY and TCP throughput behavior is improved. It then assesses the same issues in a cellular operating context, introduced as a novel solution that, besides a multi-cell operation scenario, also benefits from spatio-temporal signaling schemes. The presented simulations shed light on the improved performance of the wide-range, high-quality wireless LAN services provided by the proposed approach.
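
The abstract does not name the signaling scheme; the classic two-transmit-antenna example is Alamouti space-time block coding, sketched below over a synthetic flat-fading channel as an illustration:

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Two-antenna Alamouti block: time 1 sends (s1, s2),
    time 2 sends (-conj(s2), conj(s1))."""
    return np.array([[s1, s2], [-np.conj(s2), np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining at a single receive antenna with known
    flat-fading gains h1, h2; recovers diversity order two."""
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    return s1_hat, s2_hat

h1, h2 = (np.random.randn(2) + 1j * np.random.randn(2)) / np.sqrt(2)
s1, s2 = 1 + 1j, -1 + 1j                  # example QPSK symbols
block = alamouti_encode(s1, s2)
r1 = h1 * block[0, 0] + h2 * block[0, 1]  # received at time 1 (noiseless)
r2 = h1 * block[1, 0] + h2 * block[1, 1]  # received at time 2
print(alamouti_combine(r1, r2, h1, h2))   # ~ (|h1|^2+|h2|^2) * (s1, s2)
```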

Operational Risk – Scenario Analysis

This paper focuses on operational risk measurement techniques and on economic capital estimation methods. A data sample of operational losses provided by an anonymous Central European bank is analyzed using several approaches: the Loss Distribution Approach and the scenario analysis method. Custom plausible loss events defined in a particular scenario are merged with the original data sample, and their impact on capital estimates and on the financial institution is evaluated. Two main questions are addressed: what is the most appropriate statistical method to measure and model the operational loss distribution, and what is the impact of hypothetical plausible events on the financial institution? The g-and-h distribution was found to be the most suitable for operational risk modeling. The method based on combining historical loss event modeling with scenario analysis provides reasonable capital estimates and allows the impact of extreme events on banking operations to be measured.
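
For reference, the g-and-h family is obtained by transforming a standard normal variate; a sampling sketch using the usual Tukey form (the location, scale, g and h values are illustrative, not the bank's fitted parameters):

```python
import numpy as np

def g_and_h_sample(n, a=0.0, b=1.0, g=0.5, h=0.2, rng=None):
    """Tukey g-and-h: X = a + b * ((exp(g*Z) - 1) / g) * exp(h * Z**2 / 2),
    Z ~ N(0, 1); g controls skewness, h controls tail heaviness."""
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal(n)
    core = z if g == 0 else (np.exp(g * z) - 1.0) / g
    return a + b * core * np.exp(h * z ** 2 / 2.0)

losses = g_and_h_sample(100_000)
print(np.quantile(losses, 0.999))  # heavy right tail drives capital estimates
```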

Enhancing Learning Experiences in Outcome-Based Higher Education: A Step towards Student-Centered Learning

The Bologna process has driven the enhancement of student-centered learning in Estonian higher education since 2009, but there is no information about what helps or hinders students in achieving learning outcomes, or how the quality of student-centered learning might be improved. The purpose of this study is to analyze two questions from the outcome-based course evaluation questionnaire used at Estonian Entrepreneurship University of Applied Sciences. In this qualitative research, 384 students from 22 different courses described what helped and what hindered them in achieving learning outcomes. The analysis showed that the aspects hindering students are mostly personal: time management, family and personal matters, motivation, and non-academic activities. The results indicate that students' learning is commonly supported by the school, where the teacher, the teaching, and the characteristics of the teaching methods contribute most to achieving learning outcomes; learning materials, practical assignments, and independent study were also brought up as key elements.

Study of Sugarcane Bagasse Pretreatment with Sulfuric Acid as a Step in Obtaining Cellulose

To produce sugar and ethanol, sugarcane processing generates several agricultural residues, straw and bagasse being the main ones. What to do with these residues has been the subject of many studies and experiments in an industry that, in recent years, has stood out for its ability to transform waste into valuable products such as electric power. Cellulose is the main component of these materials. It is the most common organic polymer, represents about 1.5 × 10^12 tons of the total biomass production per year, and is considered an almost inexhaustible source of raw material. Pretreatment with mineral acids is one of the most widely used stages of cellulose extraction from lignocellulosic materials, solubilizing most of the hemicellulose content. The goal of this study was to find the best reaction time for sugarcane bagasse pretreatment with sulfuric acid, minimizing the loss of cellulose while removing as much hemicellulose and lignin as possible. The best reaction time was found to be 40 minutes, at which a hemicellulose loss of around 70% was reached, with lignin and cellulose losses of around 15%. Beyond this time, cellulose loss increased while no further lignin or hemicellulose was removed.

Load Discontinuity in Shock Response and Its Remedies

It has been shown that a load discontinuity at the end of an impulse results in an extra impulse, and hence extra amplitude distortion, if a step-by-step integration method is employed to compute the shock response. To overcome this difficulty, three remedies are proposed to reduce the extra amplitude distortion. The first remedy is to solve the momentum equation of motion instead of the force equation of motion in the step-by-step solution of the shock response, using an external momentum in the solution of the momentum equation of motion. Since the external momentum is the time integral of the external force, the load discontinuity disappears automatically. The second remedy is to perform a single small time step immediately upon termination of the applied impulse, while the other time steps can still use the step size determined from general considerations; this works because the extra impulse caused by a load discontinuity at the end of an impulse is almost linearly proportional to the step size. Finally, the third remedy is to use the average of the two different load values at the integration point of the discontinuity, instead of either one alone, as the loading input. The motivation for this remedy is the concept of zero loading-input error at the integration point of the load discontinuity. The feasibility of the three remedies is analytically explained and numerically illustrated.
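
To make the third remedy concrete, the sketch below integrates an undamped single-degree-of-freedom system under a rectangular pulse with the constant-average-acceleration method, feeding the integrator the average of the two one-sided load values at the discontinuity; the system parameters are illustrative, not taken from the paper:

```python
import numpy as np

def sdof_response(m, k, load, t_end, dt):
    """Constant-average-acceleration (Newmark, gamma=1/2, beta=1/4)
    response of m*u'' + k*u = p(t), starting from rest."""
    n = int(round(t_end / dt))
    u = v = 0.0
    a = load(0.0) / m
    hist = [u]
    for i in range(1, n + 1):
        p1 = load(i * dt)
        # Implicit update: u1 = u + dt*v + dt^2/4*(a + a1), a1 = (p1 - k*u1)/m.
        rhs = u + dt * v + dt ** 2 / 4.0 * (a + p1 / m)
        u1 = rhs / (1.0 + dt ** 2 * k / (4.0 * m))
        a1 = (p1 - k * u1) / m
        v = v + dt / 2.0 * (a + a1)
        u, a = u1, a1
        hist.append(u)
    return np.array(hist)

T_PULSE = 0.05  # pulse duration (s); chosen to land on an integration point

def rectangular_pulse(t, p0=1.0):
    if abs(t - T_PULSE) < 1e-12:   # remedy 3: average the two one-sided
        return p0 / 2.0            # load values at the discontinuity
    return p0 if t < T_PULSE else 0.0

resp = sdof_response(m=1.0, k=(2 * np.pi * 10) ** 2, load=rectangular_pulse,
                     t_end=0.2, dt=0.005)
```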

A New Face Recognition Method Using PCA, LDA and Neural Network

In this paper, a new face recognition method based on PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis) and neural networks is proposed. The method consists of four steps: i) preprocessing, ii) dimension reduction using PCA, iii) feature extraction using LDA, and iv) classification using a neural network. The combination of PCA and LDA improves the capability of LDA when only a few sample images are available, and the neural classifier reduces the number of misclassifications caused by classes that are not linearly separable. The proposed method was tested on the Yale face database. Experimental results on this database demonstrated the effectiveness of the proposed method for face recognition, with fewer misclassifications than previous methods.
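
A minimal pipeline in the spirit of the four steps, using scikit-learn as a stand-in; the component counts and network size are placeholder choices, not the paper's settings (14 LDA components matches the 15 subjects of the Yale database):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier

# i) preprocessing, ii) PCA dimension reduction, iii) LDA features,
# iv) neural-network classification. Applying PCA first keeps LDA
# well-posed when few images per subject are available.
face_recognizer = Pipeline([
    ("prep", StandardScaler()),
    ("pca", PCA(n_components=50)),
    ("lda", LinearDiscriminantAnalysis(n_components=14)),
    ("net", MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000)),
])
# face_recognizer.fit(X_train, y_train)  # rows = flattened face images
```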

A Method of Planar-Template-Based Camera Self-Calibration for a Single View

Camera calibration is an important step in 3D reconstruction. Calibration methods may be classified into two major types, traditional calibration and self-calibration; a checkerboard-based method is intermediate between the two. In this paper, a self-calibration method based on a square is proposed: with only a square in the planar template, camera self-calibration can be completed from a single view. In the proposed algorithm, a virtual circle and straight lines are established from the square on the planar template, and the circular points, the vanishing points of the straight lines, and the relations between them are used to obtain the image of the absolute conic (IAC) and establish the camera intrinsic parameters. The calibration template is simpler than that of Zhang Zhengyou's method. Simulated and real experiments show that the algorithm is feasible and practical, and achieves a certain precision and robustness.
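
For background, the generic route from orthogonal vanishing points to the IAC and then to the intrinsic matrix K (assuming zero skew and square pixels) can be sketched as follows; this is the standard construction, not the paper's specific square-based algorithm:

```python
import numpy as np

def iac_from_orthogonal_vps(vp_pairs):
    """Solve for the image of the absolute conic (IAC) omega from pairs of
    vanishing points of orthogonal directions, assuming zero skew and
    square pixels, so omega = [[w1,0,w2],[0,w1,w3],[w2,w3,w4]].
    Needs at least three pairs (four unknowns up to scale)."""
    rows = []
    for u, v in vp_pairs:  # homogeneous 3-vectors; constraint u^T omega v = 0
        rows.append([u[0]*v[0] + u[1]*v[1],
                     u[0]*v[2] + u[2]*v[0],
                     u[1]*v[2] + u[2]*v[1],
                     u[2]*v[2]])
    w = np.linalg.svd(np.asarray(rows))[2][-1]   # null-space solution
    return np.array([[w[0], 0.0, w[1]],
                     [0.0, w[0], w[2]],
                     [w[1], w[2], w[3]]])

def intrinsics_from_iac(omega):
    """omega = (K K^T)^(-1): recover K from the Cholesky factor of omega."""
    if omega[0, 0] < 0:            # null-space sign is arbitrary; flip to PD
        omega = -omega
    L = np.linalg.cholesky(omega)  # omega = L @ L.T
    K = np.linalg.inv(L).T         # K is upper triangular
    return K / K[2, 2]
```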

Pulse Skipping Modulated DC-to-DC Step-Down Converter Under Discontinuous Conduction Mode

Reduced switching loss favours the Pulse Skipping Modulation mode of switching DC-to-DC converters at light loads. Under certain conditions the converter operates in discontinuous conduction mode (DCM), where the inductor current starts from zero in each switching cycle because the switching frequency is constant and not adequately high. A DC-to-DC buck converter is modelled and simulated in this paper under DCM. The effect of the filter capacitor's ESR on the frequency components of the input current is studied. The converter is studied under input voltage and load variation. The operating frequency is selected to be close to, and above, the audio range.
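
For context, the textbook steady-state relations for an ideal buck converter in DCM can be coded directly; the notation follows the usual K = 2*L*f_s/R treatment, and the component values are illustrative:

```python
import math

def buck_dcm_ratio(duty, L, R_load, f_sw):
    """Steady-state conversion ratio M = V_out / V_in of an ideal buck.
    With K = 2*L*f_sw/R, the converter is in DCM when K < 1 - D, and then
    M = 2 / (1 + sqrt(1 + 4*K/D^2)); in CCM, M = D."""
    K = 2.0 * L * f_sw / R_load
    if K >= 1.0 - duty:
        return duty                      # CCM
    return 2.0 / (1.0 + math.sqrt(1.0 + 4.0 * K / duty ** 2))

# Light load (large R) pushes the converter into DCM and raises M above D.
print(buck_dcm_ratio(duty=0.3, L=47e-6, R_load=100.0, f_sw=20e3))
```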

Optimization of Two-Stage Pretreatment Combined with Microwave Radiation Using Response Surface Methodology

Pretreatment is an essential step in the conversion of lignocellulosic biomass to the fermentable sugars used for biobutanol production. Among pretreatment processes, microwave treatment is considered to improve pretreatment efficiency owing to its high heating efficiency, easy operation, and easy combination with chemical reactions. The main objectives of this work are to investigate the feasibility of microwave pretreatment for enhancing the enzymatic hydrolysis of corncobs and to determine the optimal conditions using response surface methodology. Corncobs were pretreated via a two-stage pretreatment in dilute sodium hydroxide (2%) followed by dilute sulfuric acid (1%). Pretreated corncobs were subjected to enzymatic hydrolysis to produce reducing sugar. A statistical experimental design was used to optimize the pretreatment parameters, including temperature, residence time, and solid-to-liquid ratio, to achieve the highest amount of glucose. The results revealed that the solid-to-liquid ratio and temperature had a significant effect on the amount of glucose.
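
The RSM fit itself amounts to regressing the response on a full quadratic model in the three factors; a sketch of that step (the design points and glucose values below are placeholders standing in for the actual measurements):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Factors: temperature (deg C), residence time (min), solid-to-liquid ratio.
# Placeholder design: factorial corners plus center and axial points; in
# practice these rows come from the measured experimental design.
X = np.vstack([
    [[t, m, r] for t in (120, 140) for m in (10, 20) for r in (0.05, 0.15)],
    [[130, 15, 0.10], [110, 15, 0.10], [150, 15, 0.10]],
])
y = np.array([18.9, 21.2, 20.5, 23.4, 21.8, 24.6,
              23.9, 26.1, 24.7, 20.2, 25.3])  # placeholder glucose yields

quadratic = PolynomialFeatures(degree=2, include_bias=True)
model = LinearRegression(fit_intercept=False).fit(quadratic.fit_transform(X), y)
# The fitted second-order surface is then optimized (e.g. over a grid) to
# find the temperature / time / ratio combination maximizing glucose.
```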

Constructing a Suitable Model of Distance Training for Community Leaders in the Upper Northeastern Region

This research aims to construct a suitable model of distance training for community leaders in the upper northeastern region of Thailand. The research process is divided into four steps: the first is to analyze relevant documents; the second involves in-depth interviews with experts; the third is constructing the model; and the fourth is model validation by expert assessment. The findings reveal two important components for constructing an appropriate model of distance training for community leaders in the upper northeastern region. The first component is the context of technology management, e.g., principles, policy and goals. The second component can be viewed in two ways: as the elements of input, process, output and feedback, and as the sub-components comprising the steps and process of training. The expert assessments indicate that the constructed model is consistent, suitable and, overall, highly appropriate.

Fuzzy Clustering of Categorical Attributes and Its Use in Analyzing Cultural Data

We develop a three-step fuzzy logic-based algorithm for clustering categorical attributes, and we apply it to analyze cultural data. In the first step the algorithm employs an entropy-based clustering scheme, which initializes the cluster centers. In the second step we apply the fuzzy c-modes algorithm to obtain a fuzzy partition of the data set, and in the third step we introduce a novel cluster validity index, which decides the final number of clusters.
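
The core of the second step, fuzzy c-modes, alternates membership updates and mode updates under the simple matching dissimilarity; a compact sketch (the fuzzifier value and the fixed iteration count are simplified assumptions):

```python
import numpy as np

def fuzzy_c_modes(X, init_modes, m=1.5, iters=20):
    """Fuzzy c-modes for integer-coded categorical data X (n x d).
    init_modes (k x d) would come from the entropy-based first step.
    Returns the fuzzy membership matrix U (n x k) and the final modes."""
    X, modes = np.asarray(X), np.asarray(init_modes).copy()
    n, d_attrs = X.shape
    k = len(modes)
    for _ in range(iters):
        # Simple matching dissimilarity: count of mismatched attributes.
        D = np.array([[np.sum(x != c) for c in modes] for x in X]) + 1e-9
        U = 1.0 / np.sum((D[:, :, None] / D[:, None, :]) ** (1.0 / (m - 1)),
                         axis=2)
        # Mode update: per cluster and attribute, the membership-weighted
        # most frequent category.
        for j in range(k):
            w = U[:, j] ** m
            modes[j] = [np.bincount(X[:, a], weights=w).argmax()
                        for a in range(d_attrs)]
    return U, modes
```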

One-Class Support Vector Machines for Protein-Protein Interaction Prediction

Predicting protein-protein interactions represents a key step in understanding protein function, since proteins usually work in the context of other proteins and rarely function alone. Machine learning techniques have been applied to predict protein-protein interactions, but most address the problem as binary classification. Although it is easy to obtain a dataset of interacting proteins as positive examples, there are no experimentally confirmed non-interacting proteins to serve as negative examples. Therefore, in this paper we treat the task as a one-class classification problem using one-class support vector machines (SVM). Using only positive examples (interacting protein pairs) in the training phase, the one-class SVM achieves an accuracy of about 80%. These results imply that protein-protein interactions can be predicted using a one-class classifier with accuracy comparable to binary classifiers that use artificially constructed negative examples.
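
A minimal version of that training setup with scikit-learn's OneClassSVM; the pair-feature construction (and the random arrays standing in for it) is an assumption for illustration, e.g. concatenated descriptors of the two proteins:

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Rows = feature vectors for interacting protein pairs (positives only);
# random placeholders here stand in for real pair features.
rng = np.random.default_rng(0)
positive_pairs = rng.normal(size=(500, 40))

# nu bounds the fraction of training positives treated as outliers;
# 0.2 is a placeholder roughly consistent with ~80% accuracy on positives.
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.2).fit(positive_pairs)
predictions = model.predict(rng.normal(size=(10, 40)))
# +1 = predicted interacting, -1 = predicted non-interacting.
```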