Optimization of Control Parameters for EWR in Injection Flushing Type of EDM on Stainless Steel 304 Workpiece

The operating control parameters of the injection flushing type of electrical discharge machining (EDM) process on a stainless steel 304 workpiece using copper tools are optimized with respect to a single machining characteristic, the electrode wear ratio (EWR). A higher EWR degrades the dimensional precision of the EDM-machined workpiece because of high electrode wear; hence, the quality characteristic for EWR is set to lower-the-better to achieve optimum dimensional precision for the machined workpiece. The Taguchi method was used for the construction, layout and analysis of the EWR experiment, saving considerable time and cost in preparing and machining the experimental samples. An L18 orthogonal array, a fundamental component of the statistical design of experiments, was used to plan the experiments, and analysis of variance (ANOVA) was used to determine the optimum machining parameters for this machining characteristic. The control parameters selected for the optimization experiments are polarity, pulse-on duration, discharge current, discharge voltage, machining depth, machining diameter and dielectric liquid pressure. The results show that a negative-polarity parameter setting decreases EWR.
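
As context for the lower-the-better criterion, the standard Taguchi signal-to-noise ratio for a smaller-the-better characteristic (a textbook formula, not quoted from this abstract) is

$$ S/N = -10 \log_{10}\!\left( \frac{1}{n} \sum_{i=1}^{n} y_i^2 \right), $$

where $y_i$ is the EWR observed in the $i$-th replicate of a trial and $n$ is the number of replicates; the parameter level that maximizes $S/N$ minimizes EWR.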

Scaling up Detection Rates and Reducing False Positives in Intrusion Detection using NBTree

In this paper, we present a new learning algorithm for anomaly-based network intrusion detection using an improved self-adaptive naïve Bayesian tree (NBTree), which induces a hybrid of a decision tree and naïve Bayesian classifiers. The proposed approach balances detection rates across different attack types and keeps false positives at an acceptable level in intrusion detection. On large, complex and dynamic intrusion detection datasets, the detection accuracy of the naïve Bayesian classifier does not scale up as well as that of a decision tree, while NBTree has been shown in other problem domains to improve classification rates on large datasets. In an NBTree, internal nodes split as in a regular decision tree, but the leaves contain naïve Bayesian classifiers. Experimental results on the KDD99 benchmark network intrusion detection dataset demonstrate that the new approach scales up the detection rates for different attack types and reduces false positives in network intrusion detection.
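
A minimal sketch of the plain NBTree idea (not the paper's improved self-adaptive variant), assuming scikit-learn: a shallow decision tree routes samples, and each leaf holds its own naïve Bayesian classifier. The class name and depth are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

class SimpleNBTree:
    """Shallow decision tree whose leaves contain naive Bayesian classifiers."""
    def __init__(self, max_depth=3):
        self.tree = DecisionTreeClassifier(max_depth=max_depth)
        self.leaf_models = {}

    def fit(self, X, y):
        self.tree.fit(X, y)
        leaves = self.tree.apply(X)            # leaf id each sample lands in
        for leaf in np.unique(leaves):
            mask = leaves == leaf
            if len(np.unique(y[mask])) > 1:    # pure leaves keep the tree label
                self.leaf_models[leaf] = GaussianNB().fit(X[mask], y[mask])
        return self

    def predict(self, X):
        leaves = self.tree.apply(X)
        out = self.tree.predict(X)             # default: the tree's own label
        for leaf, model in self.leaf_models.items():
            mask = leaves == leaf
            if mask.any():
                out[mask] = model.predict(X[mask])  # refine with the leaf NB
        return out
```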

Prediction of Soil Hydraulic Conductivity from Particle-Size Distribution

Hydraulic conductivity is an important parameter for predicting the movement of water, and of contaminants dissolved in the water, through the soil. Hydraulic conductivity is measured on soil samples in the laboratory and sometimes in tests carried out in the field, and it has been related to soil particle diameter by a number of investigators. In this study, 25 sets of soil samples with sand texture were used. The results show reasonable success in predicting hydraulic conductivity from particle diameter data. A relationship was obtained from multiple linear regression on the data (R² = 0.52), where d10, d50 and d60 are the soil particle diameters (mm) at which 10%, 50% and 60% of all soil particles are finer (smaller) by weight, and Ks, the saturated hydraulic conductivity, is expressed in m/day. The regression analysis showed that d10 plays the most significant role with respect to Ks and has been named the effective parameter in the Ks calculation.
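
The fitted equation itself is missing from the abstract; only its general multiple-linear-regression form can be inferred. The coefficients $\beta_0,\dots,\beta_3$ below are placeholders, not values from the paper:

$$ K_s = \beta_0 + \beta_1 d_{10} + \beta_2 d_{50} + \beta_3 d_{60}. $$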

Exact Solutions of Steady Plane Flows of an Incompressible Fluid of Variable Viscosity Using (ξ, ψ)- or (η, ψ)-Coordinates

The exact solutions of the equations describing the steady plane motion of an incompressible fluid of variable viscosity, for an arbitrary state equation, are determined in the (ξ, ψ)- or (η, ψ)-coordinates, where ψ(x, y) is the stream function and ξ and η are the real and imaginary parts of the analytic function ϖ = ξ(x, y) + iη(x, y). Most of the solutions involve arbitrary functions, indicating that the flow equations possess an infinite set of solutions.

Influences of Si- and C-Doping on the Al-27 and N-14 Quadrupole Coupling Constants in AlN Nanotubes: A DFT Study

A computational study at the density functional theory (DFT) level was carried out to investigate the influence of Si- and C-doping on the 14N and 27Al quadrupole coupling constants in the (10, 0) zigzag single-walled aluminum nitride nanotube (AlNNT). To this aim, a 1.16 nm length of AlNNT consisting of 40 Al atoms and 40 N atoms was selected, with the end atoms capped by hydrogen atoms. Three Si atoms and three C atoms were doped in place of three Al atoms and three N atoms, respectively, forming a central ring on the surface of the Si- and C-doped AlNNT. Both systems were first optimized with the BLYP method and the 6-31G(d) basis set, after which the NQR parameters were calculated for the two optimized forms with the BLYP method and the 6-311+G** basis set. The calculated CQ values for the two optimized AlNNT systems, pristine and Si- and C-doped, reveal different electronic environments in these systems. It was also demonstrated that the end nuclei have the largest CQ values in both AlNNT systems. All calculations were carried out using the Gaussian 98 program package.
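
For reference, the quadrupole coupling constant reported by such NQR calculations is conventionally defined (a standard relation, not quoted from this abstract) as

$$ C_Q = \frac{e^2 Q\, q_{zz}}{h}, \qquad \eta_Q = \frac{q_{xx} - q_{yy}}{q_{zz}}, $$

where $Q$ is the nuclear electric quadrupole moment, $q_{zz}$ is the largest principal component of the electric field gradient tensor at the nucleus, and $\eta_Q$ is the asymmetry parameter.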

Development of a Curriculum Promoting Successful Intelligence for Nursing Students

Successful intelligence (SI) is the integrated set of abilities needed to attain success in life within an individual's sociocultural context. Successfully intelligent people recognize their strengths and weaknesses, find ways to strengthen their weaknesses, and maintain or even improve their strengths. They can shape, select, and adapt to their environments by using a balance of higher-order thinking abilities: critical, creative, and applicative. Aims: The purposes of this study were to 1) develop a curriculum that promotes SI for nursing students, and 2) study the effectiveness of the developed curriculum. Method: A research and development method was used. The design was divided into two phases: 1) curriculum development, composed of three steps (needs assessment, curriculum development and curriculum field trial), and 2) curriculum implementation, in which a pre-experimental research design (one-group pretest-posttest design) was conducted. The sample comprised 49 sophomore nursing students of Boromarajonani College of Nursing, Surin, Thailand, who enrolled in the Nursing Care of Health Problems I course in the 2011 academic year. Data were collected using four instruments: 1) a modified essay question (MEQ) test, 2) a nursing care plan evaluation form, 3) a group processing observation form (α = 0.74) and 4) a learning satisfaction evaluation form (α = 0.82). Data were analyzed using descriptive statistics and content analysis. Results: The sample had a post-test average SI score higher than the pre-test average score (mean difference 5.03, S.D. = 2.84). Fifty-seven percent of the sample passed the MEQ post-test at the 60-percent criterion. Students demonstrated strategies for developing nursing care plans. Overall, students' satisfaction with the teaching performance was at a high level (mean = 4.35, S.D. = 0.46). Conclusion: This curriculum can promote the characteristics of a successfully intelligent person and is strongly recommended for continued use.

Human Action Recognition Based on Ridgelet Transform and SVM

In this paper, a novel algorithm based on the Ridgelet transform and a support vector machine is proposed for human action recognition. The Ridgelet transform is a directional multi-resolution transform, well suited to describing human actions because its directional information can be used to form spatial feature vectors. The dynamic transition between the spatial features is modeled using both principal component analysis (PCA) and the k-means clustering algorithm. First, PCA is used to reduce the dimensionality of the obtained vectors. Then, the k-means algorithm groups the reduced vectors into a spatio-temporal pattern, called a set-of-labels, according to the given periodicity of the human action. Finally, a support vector machine classifier is used to discriminate between the different human actions. Tests are conducted on popular datasets such as Weizmann and KTH. The obtained results show that the proposed method provides a high accuracy rate and is robust in very challenging situations such as lighting changes, scaling and dynamic environments.
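
A hedged sketch of the downstream pipeline, assuming the Ridgelet feature vectors are already computed. Representing each sequence as a histogram of its k-means labels is an illustrative choice, since the abstract does not detail how the set-of-labels is fed to the SVM:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def fit_action_recognizer(frame_feats, seq_ids, labels,
                          n_components=20, n_clusters=16):
    """frame_feats: (n_frames, d) ridgelet descriptors (assumed precomputed);
    seq_ids: sequence index of each frame; labels: action label per sequence."""
    pca = PCA(n_components=n_components).fit(frame_feats)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(pca.transform(frame_feats))
    frame_words = km.labels_                     # the "set-of-labels" per frame
    # Represent each sequence as a normalized histogram of its frame labels.
    hists = np.zeros((len(labels), n_clusters))
    for w, s in zip(frame_words, seq_ids):
        hists[s, w] += 1
    hists /= hists.sum(axis=1, keepdims=True)
    svm = SVC(kernel="rbf").fit(hists, labels)   # discriminate between actions
    return pca, km, svm
```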

Experimental Investigation of Adjacent Hall Structures Parameters

Adjacent Hall microsensors, comprising a silicon substrate and four contacts that simultaneously provide two supply inputs and two differential outputs, are characterized. The voltage-related sensitivity is on the order of 0.11 T^-1, and a cancellation method for offset compensation is used, achieving a residual offset at the micro scale, which is also compared to that of a single Hall plate.

An Intelligent Combined Method Based on Power Spectral Density, Decision Trees and Fuzzy Logic for Hydraulic Pumps Fault Diagnosis

Recently, the issue of machine condition monitoring and fault diagnosis as part of maintenance systems has attracted global attention due to the potential advantages to be gained from reduced maintenance costs, improved productivity and increased machine availability. The aim of this work is to investigate the effectiveness of a new fault diagnosis method based on the power spectral density (PSD) of vibration signals in combination with decision trees and a fuzzy inference system (FIS). To this end, a series of studies was conducted on an external gear hydraulic pump. After a test under normal conditions, a number of different machine defect conditions were introduced for three working levels of pump speed (1000, 1500, and 2000 rpm), corresponding to (i) journal bearing with inner face wear (BIFW), (ii) gear with tooth face wear (GTFW), and (iii) journal bearing with inner face wear plus gear with tooth face wear (B&GW). Features were extracted from the PSD values of the vibration signals using descriptive statistical parameters. The J48 algorithm was used as a feature selection procedure to select pertinent features from the data set. The output of the J48 algorithm was employed to produce the crisp if-then rules and membership function sets, and the structure of the FIS classifier was then defined based on the crisp sets. To evaluate the proposed PSD-J48-FIS model, the data sets obtained from the pump's vibration signals were used. Results showed that the total classification accuracies for the 1000, 1500, and 2000 rpm conditions were 96.42%, 100%, and 96.42%, respectively. The results indicate that the combined PSD-J48-FIS model has potential for the fault diagnosis of hydraulic pumps.
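
A minimal sketch of the feature extraction and tree stages, assuming SciPy/scikit-learn. The sampling rate, Welch settings and statistical features are illustrative, scikit-learn's entropy-based tree merely approximates J48, and the FIS stage is omitted:

```python
import numpy as np
from scipy.signal import welch
from sklearn.tree import DecisionTreeClassifier

def psd_features(signal, fs):
    """Descriptive statistics of the PSD of one vibration record."""
    f, pxx = welch(signal, fs=fs, nperseg=1024)
    return np.array([pxx.mean(), pxx.std(), pxx.max(),
                     f[np.argmax(pxx)],              # dominant frequency
                     np.sum(f * pxx) / np.sum(pxx)]) # spectral centroid

# records: list of raw vibration signals; labels: defect class per record
# X = np.vstack([psd_features(s, fs=10_000) for s in records])  # fs assumed
# tree = DecisionTreeClassifier(criterion="entropy").fit(X, labels)  # J48-like
```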

A Study of Dose Distribution and Image Quality under an Automatic Tube Current Modulation (ATCM) System for a Toshiba Aquilion 64 CT Scanner Using a New Design of Phantom

Automatic tube current modulation (ATCM) systems are available from all CT manufacturers and are used for the majority of patients. Understanding how the systems work and their influence on patient dose and image quality is important for CT users, in order to make the most effective use of the systems. In the present study, a new phantom was used for evaluating dose distribution and image quality under ATCM operation for the Toshiba Aquilion 64 CT scanner, using different ATCM options and a fixed mAs technique. A routine chest, abdomen and pelvis (CAP) protocol was selected for study, and Gafchromic film was used to measure the entrance surface dose (ESD), peripheral dose and central axis dose in the phantom. The results show the dose reductions achievable with the various ATCM options in relation to the target noise. The dose and image noise distributions were more uniform when the ATCM system was used than with the fixed mAs technique. The lower limit set for the tube current affects the modulation, especially for the lower-dose options: this limit prevented the tube current from being reduced further, so the lower-dose ATCM setting resembled a fixed mAs technique. Selecting a lower tube current limit is likely to reduce doses for smaller patients in scans of the chest and neck regions.

An Experimental Consideration of the Hybrid Architecture Based on the Situated Action Generator

Approaches to making an agent generate intelligent actions in the AI field can be roughly divided into two categories: classical planning and situated action systems. It is well known that each approach has its own strengths, weaknesses, and application fields; in particular, most situated action systems do not directly deal with logical problems. This paper first briefly describes a novel action generator that situatedly extracts a set of actions likely to help achieve the goal in the current situation, working in a relaxed logical space. After performing the action set, the agent reassesses the situation to decide the next likely action set. However, since each extracted action is only an approximation of an action that helps achieve the goal, the agent can be caught in a deadlock. This paper proposes a newly developed hybrid architecture to solve this problem, which combines the novel situated action generator with a conventional planner. Empirical results in several planning domains show that the quality of the resulting path to the goal is mostly acceptable while retaining a fast response time, and suggest a correlation between the structure of the problem and the organization of the system that generates the actions.
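
As an illustration of the control flow only, here is a minimal, hedged Python sketch on a toy grid world: a greedy move generator stands in for the situated action generator, and breadth-first search stands in for the classical planner invoked on deadlock. All names and the grid setup are illustrative, not from the paper.

```python
from collections import deque

def neighbors(p):
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def situated_step(state, goal, blocked):
    """Greedy situated action: move to the free neighbor closest to the goal
    (a toy stand-in for the relaxed-logical-space action extraction)."""
    free = [n for n in neighbors(state) if n not in blocked]
    if not free:
        return None
    dist = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    best = min(free, key=dist)
    return best if dist(best) < dist(state) else None  # no progress -> deadlock

def plan(start, goal, blocked, size=6):
    """Classical planner fallback: BFS over the toy grid."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        s, path = frontier.popleft()
        if s == goal:
            return path
        for n in neighbors(s):
            if all(0 <= c < size for c in n) and n not in blocked and n not in seen:
                seen.add(n)
                frontier.append((n, path + [n]))
    return []

def hybrid_act(state, goal, blocked):
    """Use cheap situated actions first; invoke the planner only on deadlock."""
    trace = [state]
    while state != goal:
        nxt = situated_step(state, goal, blocked)
        if nxt is None:                       # greedy is stuck behind the wall
            trace += plan(state, goal, blocked)
            return trace
        state = nxt
        trace.append(state)
    return trace

# A wall forces the greedy generator into a dead end; the planner routes around it.
wall = {(2, 0), (2, 1), (2, 2), (2, 3)}
print(hybrid_act((0, 0), (4, 0), wall))
```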

Sentence Modality Recognition in French Based on Prosody

This paper deals with automatic sentence modality recognition in French, considering only prosodic features. Sentences are recognized according to three modalities: declarative, interrogative and exclamatory. This information will be used to animate a talking head for deaf and hearing-impaired children. We first statistically study a real radio corpus in order to assess the feasibility of automatically modeling sentence types. Then, we test two sets of prosodic features as well as two different classifiers and their combination. We further focus our attention on question recognition, as this modality is certainly the most important one for the target application.
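
As a hedged illustration of prosody-based modality classification (not the paper's feature sets or classifiers), the sketch below assumes per-sentence F0 and energy contours have already been extracted, and uses the terminal F0 slope, the classic rising-intonation cue for French questions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def prosodic_features(f0, energy):
    """Simple sentence-level prosodic descriptors from F0/energy contours
    (both assumed to be 1-D numpy arrays)."""
    tail = max(2, len(f0) // 5)                 # last ~20% of the contour
    slope = np.polyfit(np.arange(tail), f0[-tail:], 1)[0]  # terminal F0 slope
    return [slope, f0.mean(), f0.std(), energy.mean(), energy.max()]

# contours: list of (f0, energy) pairs; modality_labels: one label per sentence
# X = [prosodic_features(f0, en) for f0, en in contours]
# clf = LogisticRegression(max_iter=1000).fit(X, modality_labels)
```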

New EEM/BEM Hybrid Method for Electric Field Calculation in Cable Joints

Power cables are widely used for power supply in distribution networks and transmission lines. Due to limitations in producing, delivering and installing power cables, they are produced and delivered in several separate lengths; a cable line therefore consists of two cable terminations and an arbitrary number of cable joints, depending on the route length. Electrical stress control is needed to prevent dielectric breakdown at the end of the insulation shield, in both the air and the cable insulation. The reliability of a cable joint depends on its materials, design, installation and operating environment. The paper describes the design and performance results for newly modeled cable joints; the design concepts, based on numerical calculations, must be correct. An Equivalent Electrodes Method/Boundary Element Method hybrid approach that allows electromagnetic field calculations in multilayer dielectric media, including inhomogeneous regions, is presented.

Artificial Neural Network Model for a Low Cost Failure Sensor: Performance Assessment in Pipeline Distribution

This paper describes an automated event detection and location system for water distribution pipelines, based upon low-cost sensor technology and signature analysis by an artificial neural network (ANN). A low-cost failure sensor that measures the opacity, or cloudiness, of the local water flow has been designed, developed and validated, and an ANN-based system is then described that uses the time series data produced by the sensors to construct an empirical model for time series prediction and classification of events. These two components have been installed, tested and verified at an experimental site in a UK water distribution system. Verification of the system has been achieved through a series of simulated burst trials, which provided real data sets. It is concluded that the system has potential in water distribution network management.
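
A minimal sketch of the time-series classification stage, assuming scikit-learn and windowed opacity readings; the window sizes, network shape and helper names are illustrative, not the paper's design:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def windows(series, width=32, step=8):
    """Slice an opacity time series into fixed-width windows for the ANN."""
    return np.array([series[i:i + width]
                     for i in range(0, len(series) - width + 1, step)])

# X: stacked windows from the sensor records; y: event label per window
# (e.g. burst vs. normal), assumed available from the simulated burst trials.
# clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
```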

Design of QFT-Based Self-Tuning Deadbeat Controller

This paper presents a design method for a self-tuning Quantitative Feedback Theory (QFT) controller using an improved deadbeat control algorithm. QFT is a technique for achieving robust control with pre-defined specifications, whereas deadbeat is an algorithm that brings the output to steady state in a minimum number of steps. However, there are usually large peaks in the deadbeat response; by integrating QFT specifications into the deadbeat algorithm, these large peaks can be tolerated. In addition, combining QFT with an adaptive element produces a robust controller with wider coverage of uncertainty. Combining the QFT-based deadbeat algorithm with an adaptive element yields a superior controller, called the self-tuning QFT-based deadbeat controller, whose output response is expected to be fast, robust and adaptive. Using a grain dryer plant model as a pilot case study, the performance of the proposed method has been evaluated and analyzed. The grain drying process is very complex, with highly nonlinear behaviour, long delays, and sensitivity to environmental changes and disturbances. Performance comparisons have been made between the proposed self-tuning QFT-based deadbeat, standard QFT and standard deadbeat controllers. The test results prove the efficiency of the self-tuning QFT-based deadbeat controller: its parameters are updated online, and it gives a lower percentage of overshoot and a shorter settling time, especially when there are variations in the plant.
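
The sketch below illustrates only the deadbeat and self-tuning ingredients on an assumed first-order discrete plant; the QFT specification shaping and the grain dryer model are omitted, and the LMS-style update is a stand-in for whatever estimator the paper uses:

```python
# Toy first-order discrete plant y[k+1] = a*y[k] + b*u[k] (assumed model; the
# paper's grain dryer is far more complex, with nonlinearity and delay).
a, b = 0.8, 0.5
r = 1.0                            # setpoint

def deadbeat(y, a_hat, b_hat):
    """One-step deadbeat law: choose u so the model output hits r next sample."""
    return (r - a_hat * y) / b_hat

y, a_hat, b_hat = 0.0, 0.6, 0.4    # deliberately wrong initial model estimates
for k in range(10):
    u = deadbeat(y, a_hat, b_hat)
    y_next = a * y + b * u
    # Crude self-tuning step: LMS nudge of the model toward the observed
    # response (stand-in for a proper recursive estimator).
    err = y_next - (a_hat * y + b_hat * u)
    a_hat += 0.1 * err * y
    b_hat += 0.1 * err * u
    y = y_next
    print(f"k={k}  u={u:.3f}  y={y:.3f}")
```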

Effect of Using Stone Cutting Waste on the Compression Strength and Slump Characteristics of Concrete

The aim of this work is to study the possible use of stone cutting sludge waste in concrete production, which would reduce both the environmental impact and the production cost. Slurry sludge, obtained from the Samara factory in Jordan, was used as a source of water in concrete production. Physico-chemical and mineralogical characterization of the sludge was carried out to identify the major components and to compare it with the typical sand used to produce concrete. Sample analysis showed that 96% of the slurry sludge volume is water, so it should be considered an important source of water. Results indicated that the use of slurry sludge as a water source in concrete production has an insignificant effect on compression strength, while it has a sharp effect on the slump values. Using slurry sludge at 25% of the total water content yielded successful concrete samples with respect to both slump and compression tests. To clarify the slurry sludge, a settling process can be used to remove the suspended solids; a settling period of 30 min achieved 99% removal efficiency. The clarified water is suitable for use in concrete mixes, which reduces water consumption, conserves water resources, increases profit, reduces operating cost and protects the environment. Additionally, the dry sludge could be used in the mix design in place of the fine materials with sizes < 160 µm. This application could conserve natural materials and solve the environmental and economic problems caused by sludge accumulation.

Investigation of Oil Inside Wells in the REY Area of the Tehran Oil Refining Company in Iran

The REY area is located in Tehran Province, and several archaeological ruins in this area indicate that settlement began several thousand years ago. In this paper, the main investigation items are the analysis of oil components and of groundwater quality inside the wells. By determining the oil content of a well, it is possible to identify the pollution source by comparing the well's oil contents with the oil products used inside and outside the oil farm. The investigated items comprise analysis of BTEX (benzene, toluene, ethylbenzene, xylene), gas chromatographic distillation characteristics, water content, density, sulfur content, lead content, atmospheric distillation, and MTBE (methyl tertiary butyl ether). Analysis of the polluting oil components showed that, except for MW (monitoring well) 10 and MW 15, in which oil with slightly heavier components was detected, the polluting oil is with high probability a light oil.

Micro-Penetrator for Canadian Planetary Exploration

Space exploration is a highly visible endeavour of humankind to seek profound answers to questions about the origins of our solar system, whether life exists beyond Earth, and how we could live on other worlds. Different platforms have been utilized in planetary exploration missions, such as orbiters, landers, rovers, and penetrators. With their low mass, good mechanical contact with the surface, ability to acquire high-quality scientific subsurface data, and ability to be deployed in areas that may not be suitable for landers or rovers, penetrators provide an alternative and complementary solution that makes possible the scientific exploration of hard-to-access sites (icy areas, gully sites, highlands, etc.). The Canadian Space Agency (CSA) has made space exploration one of the pillars of its space program and established the ExCo program to prepare Canada for future international planetary exploration. ExCo sets surface mobility as its focus and priority and invests mainly in the development of rovers because of Canada's niche space robotics technology. Meanwhile, CSA is also investigating how micro-penetrators can help Canada fulfill its scientific objectives for planetary exploration. This paper presents a review of micro-penetrator technologies, past missions, and lessons learned. It gives a detailed analysis of the technical challenges of micro-penetrators, such as high-impact survivability, high-precision guidance, navigation and control, thermal protection, and communications. A Canadian perspective on a possible micro-penetrator mission is then given, including Canadian scientific objectives and priorities, potential instruments, and flight opportunities.

Genetic Algorithms and Kernel Matrix-based Criteria Combined Approach to Perform Feature and Model Selection for Support Vector Machines

Feature and model selection are at the center of attention of much research because of their impact on classifiers' performance. The two selections are usually performed separately, but recent developments suggest using a combined GA-SVM approach to perform them simultaneously. This approach improves the performance of the classifier by identifying the best subset of variables and the optimal parameter values. Although GA-SVM is an effective method, it is computationally expensive, so a cheaper approximate criterion can be considered. The paper investigates a joint approach of a genetic algorithm and kernel matrix criteria to perform feature and model selection simultaneously for the SVM classification problem. The purpose of this research is to improve the classification performance of SVM through an efficient approach, the Kernel Matrix Genetic Algorithm (KMGA) method.
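
A hedged sketch of the general idea, assuming kernel-target alignment as the kernel matrix criterion (the abstract does not say which criterion is used) and an RBF kernel, with a feature mask and the kernel width γ encoded in each chromosome:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def alignment(X, y, mask, gamma):
    """Kernel-target alignment: a cheap proxy for SVM accuracy, letting the GA
    score candidates without training an SVM each time (y in {-1, +1})."""
    if mask.sum() == 0:
        return -1.0
    K = rbf_kernel(X[:, mask.astype(bool)], gamma=gamma)
    Y = np.outer(y, y)                      # the ideal "target" kernel
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

def kmga(X, y, pop=20, gens=30, rng=np.random.default_rng(0)):
    """Minimal GA over (feature mask, gamma) chromosomes."""
    d = X.shape[1]
    masks = rng.integers(0, 2, (pop, d))
    gammas = 10 ** rng.uniform(-3, 1, pop)
    for _ in range(gens):
        fit = np.array([alignment(X, y, m, g) for m, g in zip(masks, gammas)])
        order = np.argsort(fit)[::-1]
        masks, gammas = masks[order], gammas[order]
        for i in range(pop // 2, pop):      # replace the worst half
            p, q = rng.integers(0, pop // 2, 2)
            cut = rng.integers(1, d)
            masks[i] = np.concatenate([masks[p][:cut], masks[q][cut:]])  # crossover
            masks[i][rng.integers(0, d)] ^= 1                            # mutation
            gammas[i] = gammas[p] * 10 ** rng.normal(0, 0.2)
    return masks[0], gammas[0]              # best mask and gamma found
```

The returned mask and γ would then be used to train a final SVM; the alignment criterion avoids the repeated SVM training that makes plain GA-SVM expensive.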

Development of EPID-Based Real-Time Dose Verification for Dynamic IMRT

An electronic portal imaging device (EPID) has become a tool for patient-specific IMRT dose verification in radiotherapy. Research studies have focused on pre- and post-treatment verification; however, there are currently no interventional procedures using EPID dosimetry that measure the dose in real time, as a mechanism to ensure that overdoses do not occur and that underdoses are detected as soon as is practically possible. As a result, an EPID-based real-time dose verification system for dynamic IMRT was developed and implemented in MATLAB/Simulink. The EPID image acquisition was set to continuous acquisition mode at 1.4 images per second. The system defined the time constraint gap, or execution gap, at the image acquisition time, so that every calculation must be completed before the next image capture is completed. In addition, the