Mathematical Modeling to Predict Surface Roughness in CNC Milling

Surface roughness (Ra) is one of the most important requirements in machining processes. To obtain better surface roughness, the proper setting of cutting parameters is crucial before the process takes place. This research presents the development of a mathematical model for surface roughness prediction before the milling process, in order to evaluate the fitness of the machining parameters: spindle speed, feed rate, and depth of cut. 84 samples were run in this study using a FANUC CNC Milling α-T14iE machine. Those samples were randomly divided into two data sets: a training set (m=60) and a testing set (m=24). ANOVA showed that at least one of the population regression coefficients was not zero. The Multiple Regression Method was used to determine the correlation between a criterion variable and a combination of predictor variables. It was established that the surface roughness is most influenced by the feed rate. Using the Multiple Regression equation, the average percentage deviation was 9.8% for the testing set and 9.7% for the training set. This shows that the statistical model can predict the surface roughness with about 90.2% accuracy on the testing data set and 90.3% accuracy on the training data set.
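As a concrete illustration of the kind of model described above, the sketch below fits a multiple linear regression of Ra on spindle speed, feed rate, and depth of cut and reports the average percentage deviation; the arrays and column interpretations are hypothetical placeholders, not the study's actual data.

    # Minimal sketch of the multiple-regression workflow described above.
    # The arrays below are hypothetical placeholders, not the study's data.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Predictors: spindle speed (rpm), feed rate (mm/min), depth of cut (mm)
    X_train = np.array([[2000, 200, 0.5], [2500, 300, 1.0], [3000, 400, 1.5],
                        [3500, 250, 0.8], [4000, 350, 1.2]])
    y_train = np.array([1.8, 2.6, 3.4, 2.1, 2.9])   # measured Ra (um)

    model = LinearRegression().fit(X_train, y_train)

    def avg_pct_deviation(model, X, y):
        """Average percentage deviation between predicted and measured Ra."""
        y_hat = model.predict(X)
        return 100.0 * np.mean(np.abs((y - y_hat) / y))

    print("training deviation: %.1f%%" % avg_pct_deviation(model, X_train, y_train))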

Improvement of Blood Detection Accuracy using Image Processing Techniques suitable for Capsule Endoscopy

Bleeding in the digestive tract is an important diagnostic parameter for patients. Blood in an endoscopic image can be identified by investigating the color tone of blood, which varies with the degree of oxygenation, under- or over-illumination, food debris, secretions, etc. However, we found that the way raw images obtained from the capsule detectors are pre-processed is very important. We applied various image processing methods suitable for capsule endoscopic images in order to remove noise and correct unbalanced sensitivities across the image pixels. The results showed that these additional pre-processing techniques considerably improved the algorithm for determining bleeding areas.
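The abstract does not specify the exact pre-processing pipeline; the sketch below shows one plausible combination, assuming median filtering for noise removal, per-channel gain normalization for unbalanced pixel sensitivities, and a red-to-green color-tone ratio as the bleeding cue. The filter size and threshold are illustrative assumptions, not the paper's values.

    # Hedged sketch of a possible pre-processing + color-tone pipeline;
    # filter sizes and thresholds are illustrative, not the paper's values.
    import cv2
    import numpy as np

    def detect_bleeding(bgr_image):
        # 1) Noise removal: median filter suppresses impulse noise from the sensor.
        denoised = cv2.medianBlur(bgr_image, 5)
        # 2) Balance per-channel sensitivity by normalizing each channel's mean.
        channels = cv2.split(denoised.astype(np.float32))
        target = np.mean([c.mean() for c in channels])
        b, g, r = [c * (target / max(c.mean(), 1e-6)) for c in channels]
        # 3) Color-tone cue: bleeding regions tend to have a high red/green ratio.
        ratio = r / np.maximum(g, 1.0)
        return (ratio > 2.0).astype(np.uint8)  # binary bleeding mask

    mask = detect_bleeding(cv2.imread("capsule_frame.png"))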

Computational Intelligence Hybrid Learning Approach to Time Series Forecasting

Time series forecasting is an important and popular topic in the research of system modeling. This paper applies a hybrid PSO-RLSE neuro-fuzzy learning approach to the problem of time series forecasting. The PSO algorithm is used to update the premise parameters of the proposed prediction system, and the RLSE is used to update the consequence parameters. Thanks to this hybrid learning (HL) approach for the neuro-fuzzy system, the prediction performance is excellent and learning convergence is much faster than in the compared approaches. In the experiments, we use the well-known Mackey-Glass chaotic time series. According to the experimental results, the prediction performance and accuracy of the proposed approach in time series forecasting are much better than those of the compared approaches. Excellent prediction performance by the proposed approach has been observed.
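For readers unfamiliar with the RLSE half of the hybrid scheme, the following is a minimal sketch of a standard recursive least squares estimator update, the kind used to fit consequence (linear) parameters one sample at a time; the variable names and forgetting-free form are assumptions, not the paper's exact formulation.

    # Minimal recursive least squares (RLSE) sketch for consequence parameters;
    # this is the textbook update, assumed here, not the paper's exact form.
    import numpy as np

    class RLSE:
        def __init__(self, dim, alpha=1e6):
            self.theta = np.zeros(dim)          # estimated linear parameters
            self.P = alpha * np.eye(dim)        # inverse-correlation matrix

        def update(self, x, y):
            # x: regressor vector (e.g., fuzzy-rule-weighted inputs), y: target
            Px = self.P @ x
            k = Px / (1.0 + x @ Px)             # gain vector
            self.theta += k * (y - x @ self.theta)
            self.P -= np.outer(k, Px)
            return self.theta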

Machine Learning Techniques for Short-Term Rain Forecasting System in the Northeastern Part of Thailand

This paper presents a methodology based on machine learning approaches for a short-term rain forecasting system. Decision Tree, Artificial Neural Network (ANN), and Support Vector Machine (SVM) techniques were applied to develop classification and prediction models for rainfall forecasts. The goals of this presentation are to demonstrate (1) how feature selection can be used to identify the relationships between rainfall occurrences and other weather conditions and (2) what models can be developed and deployed for predicting accurate rainfall estimates to support decisions to launch cloud seeding operations in the northeastern part of Thailand. Datasets were collected during 2004-2006 from the Chalermprakiat Royal Rain Making Research Center at Hua Hin, Prachuap Khiri Khan; the Chalermprakiat Royal Rain Making Research Center at Pimai, Nakhon Ratchasima; and the Thai Meteorological Department (TMD). A total of 179 records with 57 features were merged and matched by unique date. There are three main parts in this work. Firstly, a decision tree induction algorithm (C4.5) was used to classify the rain status into either rain or no-rain; the overall accuracy of the classification tree reaches 94.41% with five-fold cross validation. The C4.5 algorithm was also used to classify the rain amount into three classes, no-rain (0-0.1 mm), few-rain (0.1-10 mm), and moderate-rain (>10 mm), with an overall accuracy of 62.57%. Secondly, an ANN was applied to predict the rainfall amount, and the root mean square error (RMSE) was used to measure the training and testing errors of the ANN. It is found that the ANN yields a lower RMSE of 0.171 for daily rainfall estimates, compared to next-day and next-2-day estimation. Thirdly, the ANN and SVM techniques were also used to classify the rain amount into the three classes above, achieving 68.15% and 69.10% overall accuracy of same-day prediction for the ANN and SVM models, respectively. The obtained results illustrate the comparative predictive power of different methods for rainfall estimation.
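As an illustration of the first step, the sketch below trains a decision tree and scores it with five-fold cross validation; scikit-learn's tree is CART rather than C4.5, and the feature matrix shown is a hypothetical stand-in for the study's 57 meteorological features.

    # Sketch of rain/no-rain classification with 5-fold cross validation.
    # scikit-learn implements CART, used here as a stand-in for C4.5;
    # the feature matrix below is hypothetical, not the study's data.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(179, 57))           # 179 records, 57 weather features
    y = rng.integers(0, 2, size=179)         # 1 = rain, 0 = no-rain (placeholder)

    clf = DecisionTreeClassifier(criterion="entropy")  # entropy split, as in C4.5
    scores = cross_val_score(clf, X, y, cv=5)
    print("mean 5-fold accuracy: %.2f%%" % (100 * scores.mean()))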

An Analytical Solution for Vibration of Elevator Cables with Small Bending Stiffness

The response of a dynamical system is strongly affected by its natural frequencies, which have a major impact on the design and operation of high-rise and high-speed elevators. In the present paper, the variational iteration method (VIM) is employed to gain a better understanding of the dynamics of an elevator cable modeled as a single-degree-of-freedom (SDOF) swing system. Comparisons made among the results of the proposed closed-form analytical solution, the traditional numerical iterative time integration solution, and the linearized governing equations confirm the accuracy and efficiency of the proposed approach. Furthermore, based on the results of the proposed closed-form solution, the linearization errors in calculating the natural frequencies in different cases are discussed.
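For context, the VIM builds successive approximations through a correction functional. For a generic SDOF swing equation (an assumed form; the paper's exact governing equation is not reproduced here),

    \ddot{\theta} + \omega^2 \sin\theta = 0,

the VIM iteration reads

    \theta_{n+1}(t) = \theta_n(t) + \int_0^t \lambda(\tau)\left[\ddot{\theta}_n(\tau) + \omega^2 \sin\theta_n(\tau)\right] d\tau,

with the Lagrange multiplier identified from the linearized operator as \lambda(\tau) = \tfrac{1}{\omega}\sin\omega(\tau - t). Linearization amounts to replacing \sin\theta by \theta, which is the source of the frequency errors discussed above.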

Verification of a Locked CFD Approach to Cool Down Modeling

Increasing demands on the performance of Subsea Production Systems (SPS) suggest a need for more detailed investigation of the fluid behavior taking place in subsea equipment. Complete CFD cool down analyses of subsea equipment are very time demanding. The objective of this paper is to investigate a Locked CFD approach, which enables a significant reduction of the computational time while maintaining sufficient accuracy during thermal cool down simulations. A comparison of the results of a dead leg simulation using the Full CFD and the three LCFD methods confirms the validity of the locked flow field assumption for the selected case. For the tested case, an LCFD speed-up by a factor of 200 results in an absolute thermal error of 0.5 °C (3% relative error), while a speed-up by a factor of 10 keeps the LCFD results within 0.1 °C (0.5% relative error) of the Full CFD.

Assessment of the Influence of External Earth Terrain in the Construction of Physico-mathematical Models for Finding the Dynamics of Pollutants' Distribution in Urban Atmosphere

The transport environment in the cities of the world presents a complex situation. For the analysis and prevention of environmental problems, an accurate calculation of hazardous substance concentrations at each point of the investigated area is required. In the turbulent atmosphere of a city, the well-known methods of mathematical statistics cannot be applied to these tasks with a satisfactory level of accuracy. Therefore, the apparatus of mathematical physics is more appropriate for solving this class of problems. In such models, because of the difficulty involved, the influence of the uneven land surface on the streams of air masses in the turbulent atmosphere of the city is, as a rule, not taken into account. In this paper the influence of the surface roughness, which can be quite large, is shown mathematically. The analysis of this problem under certain conditions identified the possibility of areas appearing in the atmosphere with pressure tending to infinity, i.e. a so-called "wall effect".
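The abstract does not reproduce its governing equations; a typical mathematical-physics starting point for this class of problem, assumed here purely for illustration, is the advection-diffusion equation

    \frac{\partial c}{\partial t} + \mathbf{u}\cdot\nabla c = \nabla\cdot\left(K\,\nabla c\right) + S,

where c is the pollutant concentration, \mathbf{u} the wind velocity field (the component distorted by terrain and surface roughness), K the turbulent diffusivity tensor, and S the source term.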

An Efficient Framework to Build Up Malware Dataset

This research paper presents a framework on how to build up a malware dataset. Many researchers take a long time to clean a dataset of any noise or to transform it into a format that can be used straight away for testing. Therefore, this research proposes a framework to help researchers speed up the malware dataset cleaning processes, so that the dataset can later be used for testing. It is believed that an efficient malware dataset cleaning process can improve the quality of the data, thus helping to improve the accuracy and the efficiency of the subsequent analysis. Apart from that, an in-depth understanding of the malware taxonomy is also important prior to and during the dataset cleaning processes. A new Trojan classification has been proposed to complement this framework. The experiment was conducted in a controlled lab environment using the VxHeavens dataset. The framework is built on the integration of static and dynamic analyses, the incident response method, and knowledge discovery in databases (KDD) processes. It can be used as a baseline guideline for malware researchers in building malware datasets.

Palmprint Recognition by Wavelet Transform with Competitive Index and PCA

This manuscript presents palmprint recognition that combines different texture extraction approaches with high accuracy. The Region of Interest (ROI) is decomposed into different frequency-time sub-bands by wavelet transform up to two levels, and only the approximation image at the second level is selected, which is known as the Approximate Image ROI (AIROI). This AIROI carries the information of the principal lines of the palm. The Competitive Index is used as the palmprint feature: six Gabor filters of different orientations are convolved with the palmprint image to extract orientation information from the image. A winner-take-all strategy selects the dominant orientation for each pixel, which is known as the Competitive Index. Further, PCA is applied to select highly uncorrelated Competitive Index features, to reduce the dimensions of the feature vector, and to project the features onto the eigenspace. The similarity of two palmprints is measured by the Euclidean distance metric. The algorithm is tested on the Hong Kong PolyU palmprint database. AIROIs from different wavelet filter families are also tested with the Competitive Index and PCA. The AIROI of the db7 wavelet filter achieves an Equal Error Rate (EER) of 0.0152% and a Genuine Acceptance Rate (GAR) of 99.67% on the Hong Kong PolyU palm database.
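A minimal sketch of the Competitive Index step follows, assuming six Gabor orientations and a winner-take-all decision per pixel; the filter parameters are illustrative, not the paper's tuned values.

    # Sketch of the Competitive Index: convolve with six oriented Gabor filters
    # and take the winner-take-all orientation per pixel. Filter parameters
    # (ksize, sigma, lambd, gamma) are illustrative assumptions.
    import cv2
    import numpy as np

    def competitive_index(roi):
        responses = []
        for k in range(6):                       # six orientations: 0..150 deg
            theta = k * np.pi / 6
            kernel = cv2.getGaborKernel(ksize=(17, 17), sigma=4.0, theta=theta,
                                        lambd=10.0, gamma=0.5)
            responses.append(cv2.filter2D(roi.astype(np.float32), -1, kernel))
        # Winner-take-all: argmin picks the orientation with the strongest
        # response to dark principal lines (argmax of magnitude is an
        # alternative convention).
        return np.argmin(np.stack(responses), axis=0)

    ci = competitive_index(cv2.imread("airoi.png", cv2.IMREAD_GRAYSCALE))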

Numerical Simulation of Progressive Collapse for a Reinforced Concrete Building

Though nonlinear dynamic analysis using a specialized hydro-code such as AUTODYN is an accurate and useful tool for progressive collapse assessment of a multi-story building subjected to blast load, it takes too much time to be applied to a practical simulation of progressive collapse of a tall building. In this paper, blast analysis of an RC frame structure using a simplified model with the Reinforcement Contact technique provided in Ansys Workbench is introduced and its accuracy investigated. Even though the simplified model has only a fraction of the elements of the detailed model, with this modeling technique it shows structural behavior under blast load similar to that of the detailed model. The proposed modeling method can be effectively applied to blast-induced progressive collapse analysis of RC frame structures.

A Decision Boundary based Discretization Technique using Resampling

Many supervised induction algorithms require discrete data, even though real data often comes in mixed discrete and continuous formats. Quality discretization of continuous attributes is an important problem that affects the speed, accuracy, and understandability of induction models. Usually, discretization and other statistical processes are applied to subsets of the population, as the entire population is practically inaccessible. For this reason we argue that a discretization performed on a sample of the population is only an estimate for the entire population. Most existing discretization methods partition the attribute range into two or several intervals using a single cut point or a set of cut points. In this paper, we introduce a technique that uses resampling (such as the bootstrap) to generate a set of candidate discretization points, thereby improving discretization quality by providing a better estimate of the entire population. The goal of this paper is thus to observe whether the resampling technique can lead to better discretization points, which opens up a new paradigm for the construction of soft decision trees.
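A minimal sketch of the resampling idea follows, assuming entropy-based binary cut points computed on bootstrap replicates and collected into a candidate set; the choice of entropy as the split criterion is an illustrative assumption, not necessarily the paper's.

    # Sketch: bootstrap the sample many times, find an entropy-minimizing
    # binary cut point on each replicate, and keep the candidate cut points.
    import numpy as np

    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def best_cut(x, y):
        """Entropy-minimizing binary cut point for attribute x with labels y."""
        order = np.argsort(x)
        x, y = x[order], y[order]
        n = len(x)
        best, best_score = None, np.inf
        for i in range(1, n):
            if x[i] == x[i - 1]:
                continue
            score = (i * entropy(y[:i]) + (n - i) * entropy(y[i:])) / n
            if score < best_score:
                best, best_score = (x[i] + x[i - 1]) / 2, score
        return best

    def bootstrap_cuts(x, y, n_boot=100, seed=0):
        rng = np.random.default_rng(seed)
        cuts = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(x), len(x))   # sample with replacement
            c = best_cut(x[idx], y[idx])
            if c is not None:
                cuts.append(c)
        return np.array(cuts)           # candidate discretization points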

Target Concept Selection by Property Overlap in Ontology Population

An ontology is widely used in many kinds of applications as a knowledge representation tool for domain knowledge. However, even when an ontology schema is well prepared by domain experts, it is tedious and cost-intensive to add instances to the ontology. The most reliable and trustworthy way to add instances to the ontology is to gather them from tables in related Web pages. In automatic population of instances, the primary task is to find the most appropriate concept, among all possible concepts within the ontology, for a given table. This paper proposes a novel method for this problem by defining the similarity between the table and a concept using the overlap of their properties. In a series of experiments, the proposed method achieves 76.98% accuracy. This implies that the proposed method is a plausible way to populate an ontology automatically from Web tables.
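A minimal sketch of property-overlap scoring follows, assuming table column headers and concept property names can be compared directly after normalization; the Jaccard-style overlap measure is an illustrative assumption, not necessarily the paper's exact similarity.

    # Sketch: pick the ontology concept whose property set best overlaps the
    # table's column headers. The Jaccard-style measure is an assumption.
    def normalize(name):
        return name.strip().lower().replace("_", " ")

    def overlap_similarity(table_columns, concept_properties):
        cols = {normalize(c) for c in table_columns}
        props = {normalize(p) for p in concept_properties}
        return len(cols & props) / len(cols | props) if cols | props else 0.0

    def select_target_concept(table_columns, ontology):
        # ontology: mapping of concept name -> list of property names
        return max(ontology, key=lambda c: overlap_similarity(table_columns,
                                                              ontology[c]))

    ontology = {"Person": ["name", "birth date", "nationality"],
                "Movie": ["title", "director", "release date"]}
    print(select_target_concept(["Title", "Director", "Release_Date"], ontology))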

Schmitt Trigger Based SRAM Using FinFET Technology: Shorted Gate Mode

The most widely used semiconductor memory types are Dynamic Random Access Memory (DRAM) and Static Random Access Memory (SRAM). Competition among memory manufacturers drives the need to decrease power consumption and reduce the probability of read failure. FinFET technology is relatively new and has not been fully explored. In this paper, a single-cell Schmitt Trigger Based Static RAM using FinFET technology is proposed and analyzed. The accuracy of the result is validated by means of HSPICE simulations with 32nm FinFET technology, and the results are then compared with a 6T SRAM using the same technology.

A Comparison of Some Splines-Based Methods for the One-dimensional Heat Equation

In this paper, collocation-based cubic B-spline and extended cubic uniform B-spline methods are considered for solving the one-dimensional heat equation with a nonlocal initial condition. Finite difference and θ-weighted schemes are used for time and space discretization, respectively. The stability of the method is analyzed by the Von Neumann method. The accuracy of the methods is illustrated with an example, and the numerical results are compared with the analytical solutions.
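For reference, a θ-weighted treatment of the one-dimensional heat equation u_t = \alpha u_{xx} (an assumed generic form; the paper's exact equation and nonlocal condition are not reproduced here) blends the spatial term between two time levels:

    \frac{u^{n+1} - u^{n}}{\Delta t} = \alpha\left[\theta\,(u_{xx})^{n+1} + (1-\theta)\,(u_{xx})^{n}\right],

where \theta = 0, 1/2, 1 recover the explicit, Crank-Nicolson, and fully implicit schemes, and the spatial derivative is evaluated through the B-spline collocation representation.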

Adaptive Kernel Principal Component Analysis for Online Feature Extraction

Their batch nature limits standard kernel principal component analysis (KPCA) methods in numerous applications, especially for dynamic or large-scale data. In this paper, an efficient adaptive approach is presented for online extraction of the kernel principal components (KPC). The contribution of this paper may be divided into two parts. First, the kernel covariance matrix is correctly updated to adapt to the changing characteristics of the data. Second, the KPC are recursively formulated to overcome the batch nature of standard KPCA. This formulation is derived from the recursive eigen-decomposition of the kernel covariance matrix and indicates the KPC variation caused by the new data. The proposed method not only alleviates the sub-optimality of the KPCA method for non-stationary data, but also maintains constant update speed and memory usage as the data size increases. Experiments on simulated data and real applications demonstrate that our approach yields improvements in terms of both computational speed and approximation accuracy.

WLAN Positioning Based on Joint TOA and RSS Characteristics

WLAN positioning has been approached in the literature using the characteristics of Received Signal Strength (RSS), Time of Arrival (TOA) or Time Difference of Arrival (TDOA), Angle of Arrival (AOA), and cell ID. Among these, the RSS approach is the simplest to implement because there is no need to modify either access points or client devices, but its accuracy suffers badly from the physical environment. For the TOA or TDOA approach, the accuracy is quite acceptable, but most research has had to modify either software or hardware on the existing WLAN infrastructure; the scale of the modification ranges from the access card alone up to changes in the WLAN protocol. Hence, using TOA or TDOA for a positioning system is unattractive. In this paper, a new concept that merges both the RSS and TOA positioning techniques is proposed. In addition, a method to obtain the TOA characteristic for positioning a WLAN user without any extra modification to the existing system is presented. The measurement results confirm that the proposed technique using both RSS and TOA characteristics provides better accuracy than using either the RSS or the TOA approach alone.
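One plausible way to fuse the two characteristics, sketched below under assumptions (a log-distance RSS path-loss model, TOA ranges already converted to meters, and a weighted nonlinear least-squares position fit), is to treat both as range estimates to known access points; the model parameters, weights, and measurements are illustrative, not the paper's setup.

    # Sketch: fuse RSS- and TOA-derived ranges to known APs in one weighted
    # least-squares position fit. Path-loss model, weights, and data are
    # illustrative assumptions.
    import numpy as np
    from scipy.optimize import least_squares

    APS = np.array([[0.0, 0.0], [20.0, 0.0], [10.0, 15.0]])  # AP positions (m)

    def rss_to_range(rss_dbm, p0=-40.0, n=3.0):
        """Log-distance path-loss model: RSS = p0 - 10*n*log10(d)."""
        return 10 ** ((p0 - rss_dbm) / (10 * n))

    rss_ranges = rss_to_range(np.array([-58.0, -63.0, -60.0]))  # from RSS
    toa_ranges = np.array([7.6, 13.1, 9.8])     # from round-trip time (m)

    anchors = np.vstack([APS, APS])             # each AP contributes two ranges
    ranges = np.concatenate([rss_ranges, toa_ranges])
    weights = np.concatenate([np.full(3, 0.5), np.full(3, 1.0)])  # trust TOA more

    def residuals(pos):
        return weights * (np.linalg.norm(anchors - pos, axis=1) - ranges)

    pos = least_squares(residuals, x0=np.array([10.0, 5.0])).x
    print("estimated position:", pos)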

Investigation of VMAT Algorithms and Dosimetry

Purpose: Planning and dosimetry of different VMAT algorithms (SmartArc, Ergo++, Autobeam) are compared with IMRT for Head and Neck Cancer patients. Modelling was performed to rule out the causes of discrepancies between planned and delivered dose. Methods: Five HNC patients previously treated with IMRT were re-planned with SmartArc (SA), Ergo++ and Autobeam. Plans were compared with each other and against IMRT and evaluated using DVHs for PTVs and OARs, delivery time, monitor units (MU) and dosimetric accuracy. Modelling of control point (CP) spacing, leaf-end separation and MLC/aperture shape was performed to rule out causes of discrepancies between planned and delivered doses. Additionally, estimated arc delivery times, overall plan generation times and the effect of CP spacing and number of arcs on plan generation times were recorded. Results: Single-arc SmartArc plans (SA4d) were generally better than IMRT and double-arc plans (SA2Arcs) in terms of homogeneity and target coverage. Double-arc plans appeared to play a positive role in achieving an improved Conformity Index (CI) and better sparing of some Organs at Risk (OARs) compared to Step and Shoot IMRT (ss-IMRT) and SA4d. Overall, Ergo++ plans achieved the best CI for both PTVs. Dosimetric validation of all VMAT plans without modelling was found to be lower than for ss-IMRT. Total MUs required for delivery were on average 19%, 30%, 10.6% and 6.5% lower than ss-IMRT for the SA4d, SA2d (single arc with 2° gantry spacing), SA2Arcs and Autobeam plans, respectively. Autobeam was most efficient in terms of actual treatment delivery times, whereas Ergo++ plans took longest to deliver. Conclusion: Overall, SA single-arc plans on average achieved the best target coverage and homogeneity for both PTVs. SA2Arc plans showed improved CI and sparing of some OARs. Very good dosimetric results were achieved with modelling. Ergo++ plans achieved the best CI. Autobeam resulted in the fastest treatment delivery times.

Computational Initial Value Method for Vibration Analysis of Symmetrically Laminated Composite Plate

In the present paper, an improved initial value numerical technique is presented to analyze the free vibration of symmetrically laminated rectangular plates. A combination of the initial value method (IV) and the finite difference (FD) method is utilized to develop the present (IVFD) technique. The technique is applied to the equation of motion of a vibrating laminated rectangular plate under various types of boundary conditions. Three common types of plates, symmetrically laminated cross-ply, orthotropic, and isotropic, are analyzed here. The convergence and accuracy of the presented Initial Value-Finite Differences (IVFD) technique have been examined. The merits and validity of the improved technique are also confirmed by comparing the obtained results with those available in the literature, indicating good agreement.
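For orientation, the classical laminated plate theory equation of motion for free vibration of a symmetric cross-ply (specially orthotropic) plate, assumed here as the generic form since the paper's exact equation is not reproduced, is

    D_{11}\frac{\partial^4 w}{\partial x^4}
    + 2\left(D_{12} + 2D_{66}\right)\frac{\partial^4 w}{\partial x^2\,\partial y^2}
    + D_{22}\frac{\partial^4 w}{\partial y^4}
    + \rho h\,\frac{\partial^2 w}{\partial t^2} = 0,

where w is the transverse deflection, D_{ij} the bending stiffnesses, \rho the density, and h the plate thickness; symmetric lamination eliminates the bending-extension coupling (B_{ij} = 0).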

Research on the Layout of Ground Control Points in Plain Area 1:10000 DLG Production Using POS Technique

The POS (also called DGPS/IMU) technique can obtain the exterior orientation elements of aerial photos, so triangulation and DLG production using POS can save a large number of ground control points (GCP), improving the production efficiency of DLG and reducing the cost of collecting GCP. This paper mainly researches the GCP distribution for POS-supported production of 1:10 000 scale DLG. We designed 23 kinds of ground control point distribution schemes and used the integrated sensor orientation method to perform the triangulation experiments; based on the results of the triangulation, we produced a map at the scale of 1:10 000 and tested its accuracy. From the experiments and research above, this paper puts forward appropriate GCP distribution schemes and lays the groundwork for the application of the POS technique to photogrammetric 4D data production.

Design of Angular Estimator of Inertial Sensor Using the Least Square Method

Since MEMS gyro sensors measure not the angle of rotation but the angular rate, an estimator is designed to estimate the angle in many applications. A gyro and an accelerometer are used together to improve the accuracy of the angle estimate. This paper presents a method of finding, by the least square method, the filter coefficients of the well-known estimator that obtains rotation angles from gyro and accelerometer data. In order to verify the performance of our method, the estimated angle is compared with the encoder output in a rotary pendulum system.
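The "well-known estimator" fusing gyro rate and accelerometer angle is commonly realized as a complementary filter; the sketch below shows that structure with an illustrative coefficient, while the paper's contribution, fitting the coefficients by least squares against a reference, is indicated only schematically.

    # Sketch of a complementary-filter angle estimator fusing gyro rate and
    # accelerometer tilt. The coefficient alpha is illustrative; the paper
    # fits such filter coefficients by least squares against a reference.
    import numpy as np

    def estimate_angles(gyro_rate, acc_angle, dt, alpha=0.98):
        """gyro_rate: angular rate samples (rad/s); acc_angle: accelerometer
        tilt angles (rad); returns fused angle estimates."""
        theta = np.zeros(len(gyro_rate))
        theta[0] = acc_angle[0]
        for k in range(1, len(gyro_rate)):
            # High-pass the integrated gyro, low-pass the accelerometer angle.
            theta[k] = alpha * (theta[k - 1] + gyro_rate[k] * dt) \
                       + (1 - alpha) * acc_angle[k]
        return theta

    # A least-squares fit of alpha against a reference (e.g., encoder) angle
    # could minimize sum((estimate_angles(...) - encoder_angle)**2).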