New Product-Type Estimators for the Population Mean Using Quartiles of the Auxiliary Variable

In this paper, we suggest new product-type estimators for the population mean of the variable of interest that exploit the first or third quartile of the auxiliary variable. We derive the bias and mean square error equations for the estimators, and we study their properties under both simple random sampling (SRS) and ranked set sampling (RSS). Both SRS and RSS are found to produce approximately unbiased estimators of the population mean. However, for all values of the correlation coefficient, the RSS estimators are more efficient than their SRS counterparts based on the same number of measured units.
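The quartile-based product estimators themselves are not given in the abstract; as a hedged illustration of the sampling designs being compared, the sketch below simulates the plain sample mean under SRS and under RSS (with perfect ranking) for the same number of measured units. The population distribution, set size, and cycle count are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 50.0, 10.0

def srs_mean(n):
    """Simple random sample of n measured units."""
    return rng.normal(mu, sigma, n).mean()

def rss_mean(m, cycles):
    """Ranked set sample: per cycle, draw m sets of m units, rank each
    set (here ranked exactly, by sorting), and measure only the i-th
    order statistic of the i-th set."""
    vals = [np.sort(rng.normal(mu, sigma, m))[i]
            for _ in range(cycles) for i in range(m)]
    return np.mean(vals)

m, cycles, reps = 4, 5, 2000          # 20 measured units under both designs
srs = [srs_mean(m * cycles) for _ in range(reps)]
rss = [rss_mean(m, cycles) for _ in range(reps)]
print("mean (SRS, RSS):", np.mean(srs).round(2), np.mean(rss).round(2))
print("variance (SRS, RSS):", np.var(srs).round(3), np.var(rss).round(3))
```

Both estimators come out approximately unbiased, while the RSS variance is visibly smaller, which is the efficiency gain the abstract reports.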

A New Derivative-Free Quasi-Secant Algorithm For Solving Non-Linear Equations

Most nonlinear equation solvers either do not always converge or require derivatives of the function to approximate a root. Here we give a derivative-free algorithm that guarantees convergence. The proposed two-step method, which resembles the secant method, is accompanied by numerical examples. These illustrative instances show that the rate of convergence of the proposed algorithm exceeds that of quadratically convergent iterative schemes.
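The abstract does not spell out the two-step quasi-secant iteration, so the sketch below shows only the classical derivative-free secant method it is said to resemble; the test function and tolerances are illustrative.

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Classical secant iteration: replace f'(x) in Newton's method
    with the slope through the last two iterates."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                     # flat chord: cannot proceed
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Example: root of x^3 - 2x - 5, near 2.0945
print(secant(lambda x: x**3 - 2*x - 5, 2.0, 3.0))
```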

Improving Injection Moulding Processes Using Experimental Design

Moulded parts account for more than 70% of the components in many products. However, plastic injection moulding commonly suffers from defects such as warpage, shrinkage, sink marks, and weld lines. In this paper, Taguchi experimental design methods are applied to reduce the warpage defect of a thin Acrylonitrile Butadiene Styrene (ABS) plate, proceeding in two stages: Taguchi orthogonal arrays and the Analysis of Variance (ANOVA). Eight trials were run, from which the parameter settings that minimize warpage in the factorial experiment were obtained. Comparing the ANOVA results with those derived from MINITAB identifies the most significant factors causing warpage in the injection moulding process. Moreover, ANOVA is more accurate than alternatives such as the S/N ratio, and accounting for factor interactions makes it possible to achieve better outcomes.
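As a hedged illustration of the two stages mentioned above, the following sketch evaluates a standard Taguchi L8 orthogonal array with a smaller-is-better S/N ratio; the warpage measurements are invented and do not come from the paper.

```python
import numpy as np

# Taguchi L8 orthogonal array for up to 7 two-level factors (0/1 coding).
L8 = np.array([
    [0,0,0,0,0,0,0],
    [0,0,0,1,1,1,1],
    [0,1,1,0,0,1,1],
    [0,1,1,1,1,0,0],
    [1,0,1,0,1,0,1],
    [1,0,1,1,0,1,0],
    [1,1,0,0,1,1,0],
    [1,1,0,1,0,0,1],
])

# Hypothetical warpage measurements (mm) for the eight trials.
warpage = np.array([1.9, 2.3, 1.4, 1.8, 2.6, 2.1, 1.2, 1.5])

# Smaller-is-better signal-to-noise ratio per trial.
sn = -10 * np.log10(warpage**2)

# Main effect of each factor: mean S/N at level 1 minus level 0.
for j in range(L8.shape[1]):
    effect = sn[L8[:, j] == 1].mean() - sn[L8[:, j] == 0].mean()
    print(f"factor {j+1}: effect on S/N = {effect:+.2f} dB")
```

Factors with the largest (most positive) effects would be set to level 1 to reduce warpage; ANOVA then apportions the variance to decide which of those effects are statistically significant.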

Weighted k-Nearest-Neighbor Techniques for High Throughput Screening Data

The k-nearest-neighbors (kNN) algorithm is a simple but effective classification method. In this paper we present an extended version of this technique for chemical compounds used in High Throughput Screening, in which the distances to the nearest neighbors are taken into account. Our algorithm uses kernel weight functions to guide the process of assigning activity labels to screening data. The proposed kernel weight function combines properties of the graphical structure and the molecular descriptors of the screening compounds. We apply the modified kNN method to several experimental data sets from biological screens. The experimental results confirm the effectiveness of the proposed method.
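The paper's kernel weight function, which combines structural and descriptor information, is not reproduced here; the sketch below shows the general distance-weighted kNN scheme with a simple Gaussian kernel on invented descriptor vectors.

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, x, k=5, bandwidth=1.0):
    """Distance-weighted kNN: neighbors vote with a Gaussian kernel
    weight, so closer compounds influence the predicted label more."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = np.exp(-(d[idx] / bandwidth) ** 2)          # kernel weights
    classes = np.unique(y_train)
    scores = [w[y_train[idx] == c].sum() for c in classes]
    return classes[np.argmax(scores)]

# Toy descriptor vectors: two active (1) and two inactive (0) compounds.
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y = np.array([1, 1, 0, 0])
print(weighted_knn_predict(X, y, np.array([0.15, 0.15]), k=3))
```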

Grid-HPA: Predicting Resource Requirements of a Job in the Grid Computing Environment

To fully support Quality of Service, it is preferable that the Grid computing environment itself predict the resource requirements of a job using dedicated methods. Accurate prediction enables exact matching of required resources with available resources. After each job executes, its resource usage is saved in an active database named "History". First, attributes are extracted from the submitted job; then, according to a defined similarity algorithm, the most similar previously executed jobs are retrieved from "History", and resource requirements are predicted using statistical techniques such as linear regression or averaging. The novelty of this work lies in its use of an active database and centralized history maintenance. Implementation and testing of the proposed architecture achieve prediction accuracies of 96.68% for CPU usage, 91.29% for memory usage, and 89.80% for bandwidth usage.
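The similarity algorithm and job attributes are not specified in the abstract; the following sketch illustrates the general idea under invented attributes, retrieving the most similar "History" records and averaging their observed usage.

```python
import numpy as np

# Hypothetical "History" records: (attribute vector, observed CPU usage %).
# Attributes here are invented: requested cores, memory (MB), I/O flag.
history_attrs = np.array([[2, 512, 1], [4, 1024, 0], [2, 256, 1], [8, 2048, 0]])
history_cpu   = np.array([35.0, 70.0, 28.0, 160.0])

def predict_cpu(job_attrs, k=2):
    """Find the k most similar executed jobs in History (Euclidean
    distance over scaled attributes) and average their usage."""
    scale = history_attrs.max(axis=0)
    d = np.linalg.norm(history_attrs / scale - job_attrs / scale, axis=1)
    nearest = np.argsort(d)[:k]
    return history_cpu[nearest].mean()

print(predict_cpu(np.array([3, 768, 1])))
```

A linear regression fitted over the retrieved neighbors, as the abstract also mentions, would replace the final averaging step.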

The Effect of Ageing on Impact Toughness and Microstructure of 2024 Al-Cu-Mg Alloy

The present study determines the effect of ageing on the impact toughness and microstructure of 2024 Al-Cu-Mg alloy. Following a 2 h solutionizing treatment at 450°C and a water quench, the specimens were aged at 200°C for various periods (1 to 18 h). The precipitation stages during ageing were monitored by hardness measurements, and Charpy impact and hardness tests were carried out for each specimen group. During ageing, the impact toughness of the alloy first increased and then, after passing through a maximum, decreased owing to the precipitation of intermediate phases, finally reaching its minimum at peak hardness. Correlations between hardness and impact toughness were investigated.

Aircraft Gas Turbine Engines Technical Condition Identification System

This paper shows that applying probability-statistical methods is unfounded at the early stage of diagnosing the technical condition of an aviation gas turbine engine (GTE), when the flight information is fuzzy, limited, and uncertain. We therefore consider the efficiency of applying the new Soft Computing technology, using Fuzzy Logic and Neural Network methods, at these diagnosing stages. Fuzzy multiple linear and nonlinear models (fuzzy regression equations) built from statistical fuzzy data are trained to high accuracy. To make the GTE technical-condition model more adequate, the dynamics of the skewness and kurtosis coefficients are analyzed; investigation of how these coefficients change shows that the distributions of GTE operating parameters have a fuzzy character, so considering fuzzy skewness and kurtosis coefficients is expedient. Studying the dynamics of the basic characteristics of GTE operating parameters leads to the conclusion that Fuzzy Statistical Analysis is necessary for preliminary identification of the engine's technical condition. The correlation coefficients likewise change in a fuzzy manner, so the results of Fuzzy Correlation Analysis are used for model selection, and the Fuzzy Multiple Correlation Coefficient of the Fuzzy Multiple Regression is considered for checking model adequacy. When sufficient information is available, a recurrent algorithm for identifying the GTE technical condition (using Hard Computing technology) is proposed, based on measurements of the input and output parameters of the multiple linear and nonlinear generalized models in the presence of measurement noise (a new recursive Least Squares Method (LSM)). The developed GTE condition monitoring system provides stage-by-stage estimation of the engine's technical condition. As an application of the technique, the temperature condition of a new in-service aviation engine was estimated.
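As a small illustration of the shape coefficients the monitoring system tracks, the sketch below computes skewness and kurtosis for a window of invented engine-parameter samples; the paper's fuzzy versions of these coefficients are not reproduced.

```python
import numpy as np
from scipy.stats import skew, kurtosis

# Hypothetical exhaust-gas temperature samples from successive flights.
egt = np.array([612., 615., 611., 630., 624., 618., 641., 622., 619., 627.])

# The abstract tracks how these shape coefficients drift over time;
# recomputing them over sliding windows of measurements is one way
# to observe that dynamics.
print("skewness:", skew(egt))
print("kurtosis:", kurtosis(egt))   # excess kurtosis (0 for a normal law)
```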

A Survey of Business Component Identification Methods and Related Techniques

With the deepening of software reuse, component-related technologies have been widely applied in the development of large-scale complex applications. Component identification (CI) is one of the primary research problems in software reuse: analyzing domain business models to obtain a set of business components with high reuse value and good reuse performance that support effective reuse. Based on the concept and classification of CI, its technical stack is briefly discussed from four views: the form of the input business models, identification goals, identification strategies, and the identification process. The CI methods presented in the literature are then classified into four types, i.e., domain-analysis-based methods, cohesion-coupling-based clustering methods, CRUD-matrix-based methods, and other methods, and these types are compared with respect to their advantages and disadvantages. Additionally, some shortcomings of current CI research are discussed and their causes explained. Finally, promising research directions for this problem are outlined.

Analysis of the Communication Methods of an iCIM 3000 System within the Frame of Research Purpose

Current trends in manufacturing are characterized by broader production, shorter innovation cycles, and products with new shapes, materials, and functions. Time-focused production strategies require a change from the traditional functional production structure to flexible manufacturing cells and lines. Production by automated manufacturing systems (AMS) has been one of the most important manufacturing philosophies of recent years. The main goal of the project we are involved in is to build a laboratory housing a flexible manufacturing system consisting of at least two NC-controlled production machines (milling machines, a lathe). These machines will be linked to a transport system and served by industrial robots. The flexible manufacturing system will also include a quality-control station consisting of a camera system and a rack warehouse. The design, analysis, and improvement of this manufacturing system, with a special focus on the communication among devices, constitute the main aims of this paper. The key factors determining the manufacturing system design are the product, the production volume, the machines used, the available manpower, the available infrastructure, and the legislative frame for the specific cases.

On the Solution of Fully Fuzzy Linear Systems

A linear system is called a fully fuzzy linear system (FFLS) if all quantities in the system are fuzzy numbers. We investigate the solution of the FFLS and develop a new approximate method for solving it. The numerical results show that our method approximates the solution of the FFLS more accurately than the iterative Jacobi and Gauss-Seidel methods.
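The paper's approximate method and its fuzzy arithmetic are not given in the abstract; for orientation, the sketch below shows the crisp Gauss-Seidel iteration used as the comparison baseline, on an invented diagonally dominant system. Fuzzy variants iterate analogously on a parametric representation of the fuzzy numbers.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel iteration for a crisp linear system Ax = b."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

A = np.array([[4., 1.], [2., 5.]])   # diagonally dominant, so convergent
b = np.array([9., 12.])
print(gauss_seidel(A, b))            # exact solution: [1.8333..., 1.6666...]
```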

Comparison of the Parameter Using ECG with the Bispectrum Parameter Using EEG during General Anesthesia

Measuring anesthetic depth is essential in anesthesiology. Among RR-interval analysis methods, NN10 is one of the simplest: it counts the normal-to-normal RR-interval pairs whose difference exceeds 10 ms. Bispectrum analysis is defined as a two-dimensional FFT; the EEG signal reflects nonlinear phenomena that change with brain function. After analyzing the two-dimensional bispectrum, the most significant power spectral density peaks appear concentrated in specific regions in both the awake and the anesthetized states. Because many peaks appear in these specific regions of the frequency plane, these points are used to create a new index. The index ranges from 0 to 100: it lies between 20 and 50 under anesthesia and between 60 and 90 when awake. In this paper, the relation between the NN10 parameter derived from ECG and the bispectrum index derived from EEG is examined to estimate the depth of anesthesia during anesthesia, and the utility of these parameters for anesthetic assessment is evaluated.
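Reading NN10 as the count of adjacent normal-to-normal RR-interval pairs differing by more than 10 ms (an assumption based on the description above, by analogy with the standard NN50 parameter), a minimal sketch:

```python
import numpy as np

def nn10(rr_ms):
    """NN10: number of successive normal-to-normal RR-interval pairs
    whose absolute difference exceeds 10 ms (illustrative reading of
    the parameter described in the abstract)."""
    rr = np.asarray(rr_ms, dtype=float)
    return int(np.sum(np.abs(np.diff(rr)) > 10.0))

# Hypothetical RR intervals in milliseconds.
rr = [812, 820, 845, 840, 805, 808, 830]
print(nn10(rr))   # successive diffs: 8, 25, 5, 35, 3, 22 -> NN10 = 3
```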

On Speeding Up Support Vector Machines: Proximity Graphs Versus Random Sampling for Pre-Selection Condensation

Support vector machines (SVMs) are considered to be among the best machine learning algorithms for minimizing the predictive probability of misclassification. Their drawback, however, is that for large data sets, computing the optimal decision boundary is a time-consuming function of the size of the training set. Hence, several methods have been proposed to speed up the SVM algorithm. Here, three methods for speeding up the computation of SVM classifiers are compared experimentally on a musical genre classification problem. The simplest method pre-selects a random sample of the data before applying the SVM algorithm. Two further methods use proximity graphs to pre-select data near the decision boundary: one uses k-Nearest-Neighbor graphs and the other Relative Neighborhood Graphs.
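A hedged sketch of the simplest of the three methods, random pre-selection before SVM training, using scikit-learn on synthetic data; the sample sizes and kernel are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)

# Pre-select a random 10% of the training data before fitting the SVM,
# trading a little accuracy for a much cheaper optimization problem.
idx = rng.choice(len(X), size=len(X) // 10, replace=False)
clf = SVC(kernel="rbf").fit(X[idx], y[idx])
print("accuracy on full set:", clf.score(X, y))
```

The proximity-graph methods replace the random subset with points lying near the decision boundary, which is where the support vectors come from.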

Spine Evaluation Device with Visual Feedback

The posteroanterior manipulation technique is usually included in lumbar spine assessment to evaluate intervertebral motion from mechanical resistance. A mechanical device with visual feedback is proposed that allows the mobility of the lumbar segments to be analyzed "in vivo", helping the therapist track treatment evolution. The measuring system uses a load cell and a displacement sensor to estimate spine stiffness. In this work, the device was tested by two female therapists applying posteroanterior force techniques to five female volunteers at frequencies of approximately 1.2-1.8 Hz, following a test-retest procedure at two periods of the day. With visual feedback, the forces and cycle times varied little over six rhythmic application cycles, and the stiffness values showed good agreement between test and retest when the same order of maximum force was applied.
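A minimal sketch of the stiffness estimate suggested by the load-cell/displacement-sensor pairing: fit a line to force versus displacement and read off the slope. The sample values are invented.

```python
import numpy as np

# Hypothetical synchronized load-cell (N) and displacement (mm) samples
# from one posteroanterior force cycle.
force = np.array([10., 25., 40., 55., 70., 85.])
disp  = np.array([1.2, 2.0, 2.9, 3.6, 4.5, 5.3])

# Stiffness (N/mm) as the slope of the force-displacement line.
stiffness, intercept = np.polyfit(disp, force, 1)
print(f"lumbar segment stiffness: {stiffness:.1f} N/mm")
```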

Energy Efficient In-Network Data Processing in Sensor Networks

A sensor network consists of densely deployed sensor nodes, and energy optimization is one of the most important aspects of sensor application design. Data acquisition and aggregation techniques for in-network data processing should therefore be energy efficient. Owing to the cross-layer design and the resource-limited, noisy nature of Wireless Sensor Networks (WSNs), studying the performance of these systems in a realistic setting is challenging. In this paper, we propose optimizing queries through data aggregation and the exploitation of data redundancy, reducing energy consumption without requiring all sensed data, and we use the directed diffusion communication paradigm to achieve power savings, robust communication, and in-network data processing. Per-node power consumption is estimated with the PowerTOSSIM mica2 energy model, which provides scalable and accurate results. The performance analysis shows that the proposed methods outperform existing methods in terms of energy consumption in wireless sensor networks.
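As a hedged illustration of in-network aggregation (not the paper's directed-diffusion implementation), the sketch below propagates (sum, count) pairs up an invented routing tree, so each node transmits one aggregate instead of forwarding every raw reading to the sink.

```python
# Minimal sketch of in-network averaging over a routing tree.
# Node names, readings, and topology are invented.

def aggregate(node, readings, children):
    """Post-order aggregation: combine a node's own reading with the
    partial aggregates received from its children."""
    total, count = readings[node], 1
    for child in children.get(node, []):
        s, c = aggregate(child, readings, children)
        total, count = total + s, count + c
    return total, count   # this pair is the only upstream transmission

readings = {"sink": 21.0, "a": 22.5, "b": 23.0, "c": 21.5}
children = {"sink": ["a", "b"], "a": ["c"]}
s, c = aggregate("sink", readings, children)
print("network average temperature:", s / c)
```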

Comparison of Parameterization Methods in Recognizing Spoken Arabic Digits

This paper evaluates sound parameterization methods for recognizing spoken Arabic words, namely the digits from zero to nine. Each isolated spoken word is represented by a single template based on a specific recognition feature, and recognition is based on the Euclidean distance from those templates. The performance analysis covers four parameterization features: Burg spectrum analysis, Walsh spectrum analysis, Thomson multitaper spectrum analysis, and Mel Frequency Cepstral Coefficients (MFCC). The main aim of this paper is to compare, analyze, and discuss the outcomes of spoken Arabic digit recognition systems based on the selected recognition features. The results acquired confirm that MFCC features are a very promising choice for recognizing spoken Arabic digits.
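A minimal sketch of the template-matching rule described above: classify an utterance by the Euclidean-nearest stored feature template. The 13-dimensional vectors stand in for MFCC-style features and are randomly generated; the transliterated digit names are illustrative.

```python
import numpy as np

def recognize(feature_vec, templates):
    """Nearest-template classification: return the digit whose stored
    feature template is closest in Euclidean distance."""
    dists = {d: np.linalg.norm(feature_vec - t) for d, t in templates.items()}
    return min(dists, key=dists.get)

# Hypothetical 13-dimensional feature templates for three digits.
rng = np.random.default_rng(1)
templates = {d: rng.normal(size=13) for d in ("sifr", "wahid", "ithnan")}
test = templates["wahid"] + rng.normal(scale=0.1, size=13)  # noisy utterance
print(recognize(test, templates))   # -> "wahid"
```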

Towards Automatic Recognition and Grading of Ganoderma Infection Pattern Using Fuzzy Systems

This paper deals with extracting information from experts to automatically identify and recognize Ganoderma infection in oil palm stems using tomography images. The experts' knowledge is encoded as rules in a Fuzzy Inference System that classifies each individual pattern observed in the tomography image. The classification is done by defining membership functions that assign one of three possible hypotheses to every abnormal pattern found in the image: Ganoderma infection (G), non-Ganoderma infection (N), or intact stem tissue (I). A complete comparison between Mamdani- and Sugeno-style systems, triangular, trapezoidal, and mixed triangular-trapezoidal membership functions, and different aggregation and defuzzification methods is also presented and analyzed in order to select suitable Fuzzy Inference System methods for this task. The results show that seven of the 30 possible combinations of the Fuzzy Inference methods available in the MATLAB Fuzzy Logic Toolbox gave results close to the experts' estimates.
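A small sketch of the triangular and trapezoidal membership functions compared in the paper, mapping an invented normalized pattern feature to the three hypotheses G, N, and I; the breakpoints are illustrative, not the paper's.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership: rises from a to a peak at b, falls to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def trapezoidal(x, a, b, c, d):
    """Trapezoidal membership: flat top (degree 1) between b and c."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

# Hypothetical normalized pattern feature mapped to the three hypotheses.
x = 0.62
print("G (infection):      ", triangular(x, 0.50, 0.75, 1.00))
print("N (non-Ganoderma):  ", triangular(x, 0.25, 0.50, 0.75))
print("I (intact tissue):  ", trapezoidal(x, 0.00, 0.05, 0.25, 0.50))
```

In a full Mamdani or Sugeno system these membership degrees feed the rule base, after which aggregation and defuzzification produce the final class, which is where the 30 method combinations compared in the paper arise.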

Interactive Methods of Design Education as the Principles of Social Implications of Modern Communities

The term interactive education denotes the multidisciplinary aspects of distance education that follow contemporary means around a common basis with differing functional requirements. The aim of this paper is to reflect on new techniques in education together with the new methods and inventions, which are better supported by interactivity. Integrating interactive facilities into education with distance learning is not a new concept, but applying these methods to design education is only now being adopted. This paper presents the general approach of the method and, after analyzing different samples, identifies the advantages and disadvantages of these approaches. The method of this paper is to evaluate the related samples and then analyze the main hypothesis, with the main focus on the formation processes of this kind of education. Technological developments in education should be filtered through the necessities of design education, and the structure of the system can then be formed or renewed. The conclusion indicates that interactive methods in design education capture not only technical and computational-intelligence aspects but also aesthetic and artistic approaches, brought together around the same purpose.

Holographic Interferometry Used for Measurement of the Temperature Field in a Fluid

This paper shows the possibility of using holographic interferometry to measure the temperature field in moving fluids. Several methods exist for identifying velocity fields in fluids, such as LDA, PIV, and hot-wire anemometry, but measuring the temperature field in moving fluids is very difficult. One frequently used method is Constant Current Anemometry (CCA), a point temperature measurement method in which data can be acquired at frequencies up to 1000 Hz. This frequency can become a limiting factor for using CCA in fluids where the temperature changes rapidly. This shortcoming of CCA measurements can be overcome by optical methods such as holographic interferometry. To attain parameters sufficient for the studied case, a special holographic setup with double sensitivity must be employed instead of the commonly used Mach-Zehnder type of holographic interferometer; this setup is less light-efficient than the Mach-Zehnder type but has double the sensitivity. A special technique for acquiring and phase-averaging the holographic interferometry results is also presented. The results of the holographic interferometry experiments are compared with the temperature field obtained by the CCA method.

Artificial Neural Networks Application to Improve Shunt Active Power Filter

Active Power Filters (APFs) are today the most widely used systems for eliminating harmonics, compensating power factor, and correcting unbalance problems in industrial power plants. We propose to improve the performance of conventional APFs by using artificial neural networks (ANNs) for harmonics estimation. The new method combines the strategy for extracting the three-phase reference currents for the active power filter with a DC-link voltage control method. The ANNs' learning capabilities are used to adaptively choose the power system parameters, both to compute the reference currents and to recharge the capacitor to the value requested by the DC-link voltage VDC, ensuring a suitable transfer of power to supply the inverter. To investigate the performance of this identification method, a simulation study was carried out with the MATLAB Simulink Power System Toolbox. The simulation results show that the new shunt APF (SAPF) identification technique, compared with similar methods, is quite satisfactory, assuring good filtering characteristics and high system stability.
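The abstract does not detail the network, so the sketch below uses an ADALINE-style linear neuron with an LMS update, a common ANN formulation for harmonic estimation in the APF literature; the frequencies, amplitudes, and learning rate are invented.

```python
import numpy as np

f0, fs = 50.0, 5000.0                 # fundamental and sampling frequency
t = np.arange(0, 0.2, 1 / fs)
# Distorted load current: fundamental plus 5th and 7th harmonics.
i_load = (10 * np.sin(2*np.pi*f0*t)
          + 2 * np.sin(2*np.pi*5*f0*t)
          + 1 * np.sin(2*np.pi*7*f0*t))

harmonics = [1, 5, 7]
w = np.zeros(2 * len(harmonics))      # one sine and one cosine weight per harmonic
mu = 0.01                             # LMS learning rate
for k, ik in enumerate(i_load):
    # Basis of sin/cos at each tracked harmonic for this sample instant.
    x = np.array([f(2*np.pi*h*f0*t[k]) for h in harmonics
                  for f in (np.sin, np.cos)])
    e = ik - w @ x                    # estimation error
    w += 2 * mu * e * x               # LMS weight update

print("estimated amplitudes:",
      [float(np.hypot(w[2*i], w[2*i+1]).round(2)) for i in range(len(harmonics))])
```

Subtracting the reconstructed fundamental from the measured load current would then yield the harmonic reference current that the shunt filter injects in opposition.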

An Efficient 3D Animation Data Reduction Using Frame Removal

Existing methods that store and play back the animation data of every frame, as in vertex animation, cannot be used in mobile device environments because they consume large amounts of memory. 3D animation data reduction methods aimed at solving this problem have therefore been studied extensively, and we propose a new one. First, we find and remove the frames in which motion changes are small, storing only the animation data of the remaining frames (those with large motion changes). When the animation is played, the removed frame ranges are reconstructed by interpolating the remaining frames. Our key contribution is to use the joint locations of the 3D model to calculate the accelerations of the joints in each frame and the standard deviations of those accelerations, in order to find and delete the frames in which motion changes are small. Our method can reduce data sizes by approximately 50% or more while providing quality not much lower than that of the original animations. It is therefore expected to be useful in mobile devices and other environments in which memory is limited.
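A hedged sketch of the selection rule described above: score each interior frame by the standard deviation of its joint accelerations (second finite differences of joint positions), drop low-scoring frames, and reconstruct them by interpolation at playback. The toy clip and threshold are invented.

```python
import numpy as np

def keep_frames(joints, threshold):
    """joints: (frames, joints, 3) joint positions. Score each interior
    frame by the spread of its joint accelerations and keep only the
    high-scoring frames."""
    acc = np.diff(joints, n=2, axis=0)               # (frames-2, joints, 3)
    score = np.linalg.norm(acc, axis=2).std(axis=1)  # std across joints
    keep = np.ones(len(joints), dtype=bool)
    keep[1:-1] = score >= threshold                  # end frames always kept
    return keep

def reconstruct(joints, keep):
    """Rebuild removed frames by linear interpolation of the kept ones."""
    t = np.arange(len(joints))
    out = np.empty_like(joints)
    for j in range(joints.shape[1]):
        for d in range(3):
            out[:, j, d] = np.interp(t, t[keep], joints[keep, j, d])
    return out

# Toy clip: joint 0 stays still; joint 1 rests for 10 frames, then moves.
x = np.concatenate([np.zeros(10), np.linspace(0.0, 9.0, 10)])
joints = np.zeros((20, 2, 3))
joints[:, 1, 0] = x
keep = keep_frames(joints, threshold=0.1)
print(f"kept {keep.sum()} of {len(keep)} frames")
print("max reconstruction error:",
      np.abs(reconstruct(joints, keep) - joints).max())
```

On this piecewise-linear toy motion only the transition frame and the two end frames survive, and the interpolation reconstructs the clip exactly; real captures would trade the compression ratio against reconstruction error through the threshold.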