Experimental and Statistical Study of Nonlinear Effect of Carbon Nanotube on Mechanical Properties of Polypropylene Composites

In this study, the concept of experimental design is successfully applied to determine the optimum conditions for producing a PP/SWCNT (polypropylene/single-wall carbon nanotube) nanocomposite. Central composite design, one of the experimental design techniques, is employed for the optimization and for the statistical identification of the significant factors influencing the tensile modulus and yield stress of this nanocomposite. The significant factors are the SWCNT weight fraction and the acid treatment time used to functionalize the nanoparticles. The optimum conditions are an SWCNT weight fraction of 0.7% and an acid treatment time of 210 min for a maximum tensile modulus of 1112.75 ± 28 MPa, and an acid treatment time of 216 min and an SWCNT weight fraction of 0.65% for a maximum yield stress of 40.26 ± 0.3 MPa. Additional experiments carried out at these optimum conditions showed excellent agreement with the predicted values.
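
As a brief illustration of the central composite design and second-order model-fitting workflow the abstract refers to, the following Python sketch builds a two-factor rotatable design in coded units and fits a quadratic response surface; the response values are synthetic placeholders, not the paper's tensile-modulus measurements.

    # Hedged sketch: two-factor central composite design (coded units) plus a
    # quadratic response-surface fit. Response values are synthetic placeholders.
    import numpy as np

    a = np.sqrt(2)                                    # rotatable axial distance for 2 factors
    factorial = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
    axial = [(-a, 0), (a, 0), (0, -a), (0, a)]
    center = [(0, 0)] * 5
    design = np.array(factorial + axial + center)     # coded SWCNT fraction, treatment time

    rng = np.random.default_rng(1)
    x1, x2 = design[:, 0], design[:, 1]
    y = 1000 - 40 * x1**2 - 30 * x2**2 + 10 * x1 * x2 + rng.normal(0, 5, len(design))

    # Second-order model: y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("fitted coefficients:", np.round(coef, 2))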

Knowledge Based Model for Power Transformer Life Cycle Management Using Knowledge Engineering

Under the limitation of an investment budget, a utility company is required to maximize the utilization of its existing assets during their life cycle while satisfying both engineering and financial requirements. However, utilities often lack knowledge about the status of each asset in the portfolio, in terms of both technical and financial values. This paper presents a knowledge based model that helps utility companies make optimal decisions on power transformer utilization. The CommonKADS methodology, a structured approach to representing knowledge and expertise, is used to design and develop the knowledge based model. A case study of a one MVA power transformer of the Nepal Electricity Authority is presented. The results show that reusable knowledge can be categorized, modeled and utilized within the utility company using the proposed methodology. Moreover, the results show that the utility company can achieve both engineering and financial benefits from its utilization.

An Adversarial Construction of Instability Bounds in LIS Networks

In this work, we study the impact of dynamically changing link slowdowns on the stability properties of packet-switched networks under the Adversarial Queueing Theory framework. In particular, we consider the Adversarial, Quasi-Static Slowdown Queueing Theory model, where each link slowdown may take values in the two-valued set of integers {1, D} with D > 1, which remain fixed for a long time, under a (w, ρ)-adversary. In this framework, we present an innovative systematic construction for estimating lower bounds on the adversarial injection rate which, if exceeded, cause instability in networks that use the LIS (Longest-in-System) protocol for contention resolution. In addition, we show that a network that uses the LIS protocol for contention resolution may see its instability bound drop at injection rates ρ > 0 when the network size and the high slowdown D take large values. This is the best instability lower bound known for LIS networks.

Exploring the Combinatorics of Motif Alignments for Accurately Computing E-values from P-values

In biological and biomedical research, motif finding tools are important for locating regulatory elements in DNA sequences. Many such motif finding tools are available, and they often yield position weight matrices together with significance indicators. These indicators, p-values and E-values, describe the likelihood that a motif alignment is generated by the background process and the expected number of occurrences of the motif in the data set, respectively. The various tools often estimate these indicators differently, making them not directly comparable. One approach for comparing motifs from different tools is to compute the E-value as the product of the p-value and the number of possible alignments in the data set. In this paper we explore the combinatorics of the motif alignment models OOPS, ZOOPS, and ANR, and propose a generic algorithm for accurately computing the number of possible combinations. We also show that using the wrong alignment model can give E-values that diverge significantly from their true values.
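
The counting idea behind this E-value computation can be sketched as follows for the OOPS and ZOOPS models; this is a simplified illustration, and the paper's generic algorithm (including the more involved ANR case) is not reproduced here.

    # Hedged sketch of E-value = p-value * (number of possible alignments) for the
    # OOPS and ZOOPS models only; sequence lengths, motif width and p-value are
    # illustrative examples, not values from the paper.
    def oops_alignments(seq_lengths, w):
        """One occurrence per sequence: each sequence contributes (L - w + 1) placements."""
        n = 1
        for L in seq_lengths:
            n *= max(L - w + 1, 0)
        return n

    def zoops_alignments(seq_lengths, w):
        """Zero or one occurrence per sequence: one extra 'no site' option per sequence."""
        n = 1
        for L in seq_lengths:
            n *= max(L - w + 1, 0) + 1
        return n

    seq_lengths = [200, 350, 275]      # example sequence lengths
    w = 12                             # motif width
    p_value = 1e-8                     # p-value reported by a motif finder
    print("OOPS  E-value:", p_value * oops_alignments(seq_lengths, w))
    print("ZOOPS E-value:", p_value * zoops_alignments(seq_lengths, w))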

Design Optimization of Cutting Parameters when Turning Inconel 718 with Cermet Inserts

Inconel 718, a nickel-based superalloy, is extensively used, accounting for about 50% by weight of the materials in an aerospace engine, mainly in the gas turbine compartment. This is owing to its outstanding strength and oxidation resistance at elevated temperatures in excess of 550 °C. Machining is a requisite operation in the aircraft industries for the manufacture of components, especially for gas turbines. This paper is concerned with optimization of the surface roughness when turning Inconel 718 with cermet inserts. Optimization of the turning operation is very useful for reducing the cost and time of machining. The approach is based on the Response Surface Method (RSM). In this work, second-order quadratic models are developed for surface roughness, considering the cutting speed, feed rate and depth of cut as the cutting parameters, using a central composite design. The developed models are used to determine the optimum machining parameters. These optimized machining parameters are validated experimentally, and it is observed that the response values are in reasonable agreement with the predicted values.
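
For reference, a second-order (quadratic) response-surface model in the three cutting parameters has the standard form below, with x1, x2 and x3 denoting cutting speed, feed rate and depth of cut (typically in coded units); the paper's fitted coefficients are not reproduced here.

    Ra = \beta_0 + \sum_{i=1}^{3} \beta_i x_i + \sum_{i=1}^{3} \beta_{ii} x_i^2 + \sum_{i<j} \beta_{ij} x_i x_j + \varepsilon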

A Blind Digital Watermark in Hadamard Domain

A new blind gray-level watermarking scheme is described. In the proposed method, the host image is first divided into 4*4 non-overlapping blocks. For each block, the first two AC coefficients of its Hadamard transform are estimated using the DC coefficients of its neighboring blocks. A gray-level watermark is then added into the estimated values. Since embedding the watermark does not change the DC coefficients, watermark extraction can be performed by re-estimating the AC coefficients and comparing them with their actual values. Several experiments were carried out, and the results suggest the robustness of the proposed algorithm.
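
A minimal sketch of the 4*4 block Hadamard transform underlying the scheme is given below; the block values are illustrative, and the neighbor-based estimation and embedding steps of the actual watermarking method are not reproduced.

    # Minimal sketch of a 4x4 Hadamard block transform; block values are illustrative.
    import numpy as np
    from scipy.linalg import hadamard

    H = hadamard(4)                                   # 4x4 Hadamard matrix (+1/-1 entries)

    block = np.array([[120, 122, 125, 124],
                      [119, 121, 126, 127],
                      [118, 120, 124, 125],
                      [117, 119, 123, 124]], dtype=float)

    coeffs = H @ block @ H / 16.0                     # 2D Hadamard transform (normalized)
    dc = coeffs[0, 0]                                 # DC coefficient (block mean)
    ac01, ac10 = coeffs[0, 1], coeffs[1, 0]           # the first two AC coefficients
    print(f"DC = {dc:.2f}, AC(0,1) = {ac01:.2f}, AC(1,0) = {ac10:.2f}")

    # Inverse transform (H is symmetric and H @ H = 4*I for the 4x4 case):
    reconstructed = H @ coeffs @ H
    print("max reconstruction error:", np.abs(reconstructed - block).max())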

A Note on Penalized Power-Divergence Test Statistics

In this paper, penalized power-divergence test statistics have been defined, and their exact size properties for testing a nested sequence of log-linear models have been compared with those of ordinary power-divergence test statistics for various penalization parameters, λ values, and main effect values. Since the ordinary and penalized power-divergence test statistics have the same asymptotic distribution, comparisons have been made only for small and moderate samples. Three-way contingency tables distributed according to a multinomial distribution have been considered. Simulation results reveal that penalized power-divergence test statistics perform much better than their ordinary counterparts.
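
For reference, the ordinary power-divergence family of test statistics indexed by λ (with O_i the observed and E_i the expected cell counts) has the standard Cressie-Read form below; the penalized statistics studied in the paper modify this family, and their exact definition is not reproduced here.

    T_\lambda = \frac{2}{\lambda(\lambda+1)} \sum_i O_i \left[ \left( \frac{O_i}{E_i} \right)^{\lambda} - 1 \right], \qquad \lambda \neq 0, -1,

with the limits λ → 0 and λ → −1 giving the likelihood-ratio and modified likelihood-ratio statistics, respectively.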

Design of Digital Differentiator to Optimize Relative Error

It is observed that the weighted least-squares (WLS) technique, including its modifications, results in an equiripple error curve. The resulting error, expressed as a percentage of the ideal value, is highly non-uniformly distributed over the range of frequencies for which the differentiator is designed. The present paper proposes a modification to the technique so that the optimization procedure yields a lower maximum relative error with respect to the ideal values. Simulation results for first-order as well as higher-order differentiators are given to illustrate the excellent performance of the proposed method.
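
The relative-error measure discussed here can be sketched as follows for a simple length-3 differentiator; this only illustrates the error metric, assuming a linear-phase design, and does not reproduce the paper's WLS optimization.

    # Hedged sketch of the relative error of a digital differentiator, evaluated
    # for a simple central-difference design (illustrative only).
    import numpy as np
    from scipy.signal import freqz

    h = [0.5, 0.0, -0.5]                       # simple first-order differentiator taps
    M = 1                                      # group delay of this linear-phase filter

    w, H = freqz(h, worN=512)
    w, H = w[1:], H[1:]                        # drop w = 0 to avoid dividing by zero

    ideal = 1j * w * np.exp(-1j * w * M)       # ideal differentiator with matching delay
    rel_err = np.abs(H - ideal) / w            # error relative to the ideal magnitude |w|
    print(f"max relative error up to 0.8*pi: {rel_err[w <= 0.8 * np.pi].max():.3f}")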

Software Maintenance Severity Prediction for Object Oriented Systems

Since the majority of faults are found in a few modules, there is a need to identify the modules that are affected more severely than others so that proper maintenance can be carried out in time, especially for critical applications. Neural networks have already been applied in software engineering to build reliability growth models and to predict gross change or reusability metrics. Neural networks are sophisticated non-linear modeling techniques that are able to model complex functions, and they are used when the exact nature of the relationship between inputs and outputs is not known; a key feature is that they learn this relationship through training. In the present work, various neural network based techniques are explored, and a comparative analysis is performed for predicting the level of maintenance needed by predicting the severity level of faults present in NASA's public domain defect dataset. The different algorithms are compared on the basis of Mean Absolute Error, Root Mean Square Error and accuracy values. It is concluded that the Generalized Regression Neural Network is the best algorithm for classifying software components into different levels of severity of fault impact. The algorithm can be used to develop a model for identifying modules that are heavily affected by faults.
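
The evaluation metrics used in the comparison can be sketched as follows; the severity labels are made-up examples, not values from the NASA defect dataset.

    # Sketch of MAE, RMSE and accuracy on illustrative severity labels.
    import numpy as np

    actual    = np.array([1, 3, 2, 4, 2, 1, 3])     # true severity levels (illustrative)
    predicted = np.array([1, 2, 2, 4, 3, 1, 3])     # model output (illustrative)

    mae = np.mean(np.abs(actual - predicted))
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    accuracy = np.mean(actual == predicted)
    print(f"MAE = {mae:.3f}, RMSE = {rmse:.3f}, accuracy = {accuracy:.2%}")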

Optimization of Gentamicin Production: Comparison of ANN and RSM Techniques

In this work, statistical experimental design was applied to the optimization of medium constituents for gentamicin production by Micromonospora echinospora subsp. pallida (MTCC 708) in a batch reactor, and the results are compared with the ANN predicted values. Using a central composite design, 50 experiments were carried out for five test variables: starch, soya bean meal, K2HPO4, CaCO3 and FeSO4. The optimum condition was found to be: starch (8.9 g/L), soya bean meal (3.3 g/L), K2HPO4 (0.8 g/L), CaCO3 (4 g/L) and FeSO4 (0.03 g/L). At these optimized conditions, the yield of gentamicin was found to be 1020 mg/L. The R2 values were found to be 1 for the ANN training set, 0.9953 for the ANN test set, and 0.9286 for RSM.
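
The R2 comparison between observed yields and model predictions can be sketched as follows; the numbers are illustrative, not the paper's experimental data.

    # Sketch of the coefficient of determination (R^2) used to compare ANN and RSM
    # predictions against observed yields; all numbers below are illustrative.
    import numpy as np

    def r_squared(observed, predicted):
        ss_res = np.sum((observed - predicted) ** 2)
        ss_tot = np.sum((observed - observed.mean()) ** 2)
        return 1.0 - ss_res / ss_tot

    observed = np.array([980., 1005., 1020., 995., 1010.])   # gentamicin yield, mg/L
    ann_pred = np.array([981., 1004., 1019., 996., 1010.])
    rsm_pred = np.array([970., 1012., 1008., 1001., 1018.])
    print("R2 (ANN):", round(r_squared(observed, ann_pred), 4))
    print("R2 (RSM):", round(r_squared(observed, rsm_pred), 4))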

Performance Evaluation of an Aboveground LNG Storage Tank Cover using Nondestructive and Destructive Tests

In this study, a new procedure for inspecting damage to LNG storage tanks was proposed using structural diagnostic techniques: nondestructive inspection techniques such as macrography, the hammer sounding test, the Schmidt hammer test, and the ultrasonic pulse velocity test, and destructive inspection techniques such as the compressive strength test, the chloride penetration test, and the carbonation test. From the analysis of all the test results, it was concluded that the LNG storage tank cover was in good condition. The results were also compared with the Korean concrete standard specifications and design values. In addition, the remaining life of the LNG storage tank was estimated using existing models. Based on the results, an LNG storage tank cover performance evaluation procedure was suggested.

A Calibration Approach towards Reducing ASM2d Parameter Subsets in Phosphorus Removal Processes

A novel calibration approach that aims to reduce ASM2d parameter subsets and decrease model complexity is presented. This approach does not impose a high computational demand and reduces the number of modeling parameters required to achieve ASM calibration by employing a sensitivity and iteration methodology. Parameter sensitivity is a crucial factor, and the iteration methodology enables refinement of the simulated parameter values. During the iteration process, parameter values are determined in descending order of their sensitivities. The number of iterations required is equal to the number of model parameters in the parameter significance ranking. This approach was applied to the ASM2d model to evaluate EBPR phosphorus removal, and it was successful. The simulation results provide the calibration parameters, which included YPAO, YPO4, YPHA, qPHA, qPP, μPAO, bPAO, bPP, bPHA, KPS, YA, μAUT, bAUT, KO2,AUT, and KNH4,AUT. These parameters corresponded to the available experimental data.

Islam in the Context of Political Processes in Modern Kazakhstan

The revival of religion, including Islam, in Kazakhstan represents a reaction, first of all, to internal social and political changes and to the events that followed the disintegration of the USSR. The process of the revival of Kazakhstani Islam has been accompanied by both positive and negative tendencies. Old mosques were restored and new ones were built, Islamic schools and high schools were created, religious dogma was widely studied, the corresponding literature was published, contacts with fellow Muslims abroad expanded, and centers of Arab-Muslim culture spread. At the same time, there are religious-political parties and movements in Kazakhstan pursuing radical goals, up to and including changing the spiritual and cultural identity of the Muslims of Kazakhstan through the forcible introduction of non-traditional religious, political, ethnic and cultural values.

Reduced Inventories, High Reliability and Short Throughput Times by Using CONWIP Production Planning System

CONWIP (constant work-in-process), a pull production system, has been widely studied by researchers to date. The CONWIP pull production system is an alternative to pure push and pure pull production systems. It lowers and controls inventory levels, which improves throughput, reduces production lead time, and improves delivery reliability and work utilization. In this article, a CONWIP pull production system was simulated, along with a push planning system. To compare these systems via a production planning system (PPS) game, the parameters of each production planning system were adjusted. The main target was to reduce the total WIP to minimum values while achieving the required throughput and delivery reliability. Data were recorded and evaluated. A future state was developed for the real production of plastic components, together with the setup of the two indicators under the CONWIP pull production system, which can greatly help the company to be more competitive in the market.

Development of Mathematical Model for Overall Oxygen Transfer Coefficient of an Aerator and Comparison with CFD Modeling

The overall oxygen transfer coefficient (KLa), which is the best measure of oxygen transfer into water through aeration, is obtained by a simple approach that eliminates the discrepancies due to an inaccurate assumption of the saturation dissolved oxygen concentration. The rate of oxygen transfer depends on a number of factors, such as the intensity of turbulence, which in turn depends on the speed of rotation, the size and number of blades, the diameter and immersion depth of the rotor, and the size and shape of the aeration tank, as well as on the physical, chemical, and biological characteristics of the water. An attempt is made in this paper to correlate the overall oxygen transfer coefficient (KLa) with the other influencing parameters mentioned above. It is estimated that the simulation equation developed predicts the values of KLa and power with average standard errors of estimation of 0.0164 and 7.66, respectively, and with R2 values of 0.979 and 0.989, respectively, when compared with experimentally determined values. This model was also compared with a model generated using computational fluid dynamics (CFD), and the two models were found to be in good agreement with each other.
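
A sketch of how KLa is typically estimated from unsteady-state reaeration data, assuming dC/dt = KLa(Cs − C), is given below; the data points are illustrative, and the paper's correlation with the geometric and dynamic parameters is not reproduced.

    # Hedged sketch: KLa from a reaeration test, assuming dC/dt = KLa * (Cs - C).
    # Integrating gives ln((Cs - C0)/(Cs - C)) = KLa * t, so KLa is the slope of
    # the log-deficit against time. All data below are illustrative.
    import numpy as np

    Cs = 9.1                                           # saturation DO, mg/L
    t  = np.array([0., 2., 4., 6., 8., 10.])           # minutes
    C  = np.array([2.0, 3.4, 4.5, 5.4, 6.1, 6.7])      # measured DO, mg/L

    y = np.log((Cs - C[0]) / (Cs - C))
    KLa, intercept = np.polyfit(t, y, 1)
    print(f"KLa = {KLa:.3f} 1/min")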

Optimal Control Strategies for Speed Control of Permanent-Magnet Synchronous Motor Drives

The permanent magnet synchronous motor (PMSM) is used in many applications, and vector control is a popular method for controlling it. In this paper, an optimal vector control for the PMSM is first designed, and the results are compared with those of conventional vector control. Then, the measurements are assumed to be noisy, and the linear quadratic Gaussian (LQG) methodology is used to filter the noise. The results of the noisy optimal vector control and the filtered optimal vector control are compared with each other. The nonlinearity of the PMSM and the presence of the inverter in its control circuit make the system nonlinear and time-variant. By deriving an average model, the system becomes nonlinear but time-invariant, and the nonlinear system is then converted to a linear system by linearizing the model around the average values. This model is used to optimize the vector control, and the two optimal vector controls are then compared with each other. Simulation results show that the performance and the robustness to noise of the control system are highly improved.
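
A minimal sketch of the LQR gain computation underlying such an optimal control is given below, using a placeholder two-state linear model rather than the paper's averaged PMSM model; the LQG case would additionally include a Kalman filter for the noisy measurements.

    # Hedged LQR sketch on a placeholder linearized model; matrices are illustrative,
    # not the paper's averaged PMSM model.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Placeholder 2-state linear model x_dot = A x + B u (e.g., current and speed deviations)
    A = np.array([[-50.0, -10.0],
                  [ 20.0,  -2.0]])
    B = np.array([[100.0],
                  [  0.0]])

    Q = np.diag([1.0, 10.0])                 # state weighting (penalize speed error more)
    R = np.array([[0.1]])                    # control effort weighting

    P = solve_continuous_are(A, B, Q, R)     # solve the algebraic Riccati equation
    K = np.linalg.solve(R, B.T @ P)          # optimal state-feedback gain, u = -K x
    print("LQR gain K =", np.round(K, 3))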

A Model for Estimation of Efforts in Development of Software Systems

Software effort estimation is the process of predicting the most realistic amount of effort required to develop or maintain software based on incomplete, uncertain and/or noisy input. Effort estimates may be used as input to project plans, iteration plans and budgets. Various models, such as the Halstead, Walston-Felix, Bailey-Basili, Doty and GA based models, have already been used to estimate the software effort of projects. In this study, statistical models, a Fuzzy-GA model and a Neuro-Fuzzy (NF) inference system are evaluated for estimating the software effort of projects. The performance of the developed models was tested on NASA software project datasets, and the results are compared with the Halstead, Walston-Felix, Bailey-Basili, Doty and Genetic Algorithm based models reported in the literature. The results show that the NF model has the lowest MMRE and RMSE values and gives the best results compared with the Fuzzy-GA based hybrid inference system and the other existing models used for effort prediction.
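
For context, the classic effort models named above are commonly cited in the effort-estimation literature in the forms sketched below (effort in person-months, size in KLOC), together with the MMRE measure used for comparison; the coefficients are the commonly quoted ones, not values re-derived in this study.

    # Hedged sketch: classic effort-model forms as commonly quoted in the literature,
    # plus MMRE. Project sizes and actual efforts are illustrative placeholders.
    import numpy as np

    def halstead(kloc):       return 0.7   * kloc ** 1.50
    def walston_felix(kloc):  return 5.2   * kloc ** 0.91
    def bailey_basili(kloc):  return 5.5 + 0.73 * kloc ** 1.16
    def doty(kloc):           return 5.288 * kloc ** 1.047     # form quoted for KLOC > 9

    def mmre(actual, predicted):
        actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
        return np.mean(np.abs(actual - predicted) / actual)

    kloc   = np.array([10.0, 46.2, 90.2])        # illustrative project sizes
    actual = np.array([24.0, 96.0, 240.0])       # illustrative actual efforts
    for name, model in [("Halstead", halstead), ("Walston-Felix", walston_felix),
                        ("Bailey-Basili", bailey_basili), ("Doty", doty)]:
        print(f"{name:14s} MMRE = {mmre(actual, model(kloc)):.2f}")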

Development of a Comprehensive Electricity Generation Simulation Model Using a Mixed Integer Programming Approach

This paper presents the development of an electricity generation simulation model that takes electrical network constraints into account, applied to the Belgian power system. The core of the model is the optimization of an extensive Unit Commitment (UC) problem through Mixed Integer Linear Programming (MILP). Electrical constraints are incorporated through the implementation of a DC load flow. The model covers the Belgian power system at the 220-380 kV high voltage level (i.e., 93 power plants and 106 nodes). The model features the use of pumped storage facilities as well as the inclusion of spinning reserves in a single optimization process. Solution times of the model remain within reasonable limits.
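
A minimal unit-commitment MILP sketch in the spirit of the model described above is given below, using the PuLP package with hypothetical two-unit, three-period data; the DC load flow, pumped storage and reserve constraints of the full model are omitted.

    # Hedged sketch of a tiny unit-commitment MILP (hypothetical data, network omitted).
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum

    units = {"U1": {"pmin": 50, "pmax": 200, "cost": 30, "start": 500},
             "U2": {"pmin": 20, "pmax": 100, "cost": 45, "start": 200}}
    demand = [120, 250, 180]          # MW per period (made-up values)
    T = range(len(demand))

    prob = LpProblem("unit_commitment", LpMinimize)
    u = {(g, t): LpVariable(f"on_{g}_{t}", cat="Binary") for g in units for t in T}
    p = {(g, t): LpVariable(f"p_{g}_{t}", lowBound=0) for g in units for t in T}
    s = {(g, t): LpVariable(f"start_{g}_{t}", cat="Binary") for g in units for t in T}

    # Objective: linear fuel cost plus start-up cost.
    prob += lpSum(units[g]["cost"] * p[g, t] + units[g]["start"] * s[g, t]
                  for g in units for t in T)

    for t in T:
        prob += lpSum(p[g, t] for g in units) == demand[t]        # load balance
        for g in units:
            prob += p[g, t] >= units[g]["pmin"] * u[g, t]          # min output if on
            prob += p[g, t] <= units[g]["pmax"] * u[g, t]          # max output if on
            prev = u[g, t - 1] if t > 0 else 0                     # assume all off at t = 0
            prob += s[g, t] >= u[g, t] - prev                      # start-up logic

    prob.solve()
    for g in units:
        print(g, [round(p[g, t].value(), 1) for t in T])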

Region Segmentation Based on Gaussian Dirichlet Process Mixture Model and Its Application to 3D Geometric Structure Detection

Image-based 3D scenes can now be found in many popular vision systems, computer games and virtual reality tours, so it is important to segment the ROI (region of interest) from input scenes as a preprocessing step for geometric structure detection in a 3D scene. In this paper, we propose a method for segmenting the ROI based on tensor voting and a Dirichlet process mixture model. In particular, to estimate geometric structure information for a 3D scene from a single outdoor image, we apply tensor voting and the Dirichlet process mixture model to image segmentation. Tensor voting is used based on the fact that homogeneous regions in an image usually lie close together on a smooth region, and therefore the tokens corresponding to the centers of these regions have high saliency values. The proposed approach is a novel nonparametric Bayesian segmentation method that uses a Gaussian Dirichlet process mixture model to automatically segment various natural scenes. Finally, our method labels regions of the input image with the coarse categories "ground", "sky", and "vertical" for 3D applications. The experimental results show that our method successfully segments coarse regions in many complex natural scene images for 3D.
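
A minimal sketch of nonparametric clustering with a (truncated) Dirichlet process Gaussian mixture, as implemented in scikit-learn, is given below; the image, features and parameters are placeholders, and the tensor voting stage and the ground/sky/vertical labeling of the actual method are not reproduced.

    # Hedged sketch: truncated Dirichlet process Gaussian mixture on per-pixel features.
    # The image, feature choice and parameter values are placeholders only.
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(0)
    h, w = 60, 80
    image = rng.random((h, w, 3))                     # stand-in for an outdoor RGB image

    # Per-pixel features: color plus normalized position (a common simple choice).
    ys, xs = np.mgrid[0:h, 0:w]
    features = np.column_stack([image.reshape(-1, 3),
                                (xs / w).ravel(), (ys / h).ravel()])

    # Truncated DP mixture: unneeded components are switched off automatically.
    dpgmm = BayesianGaussianMixture(
        n_components=10,
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
        max_iter=200,
        random_state=0,
    )
    labels = dpgmm.fit_predict(features).reshape(h, w)
    print("components actually used:", len(np.unique(labels)))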

A New Approach for Prioritization of Failure Modes in Design FMEA using ANOVA

The traditional Failure Mode and Effects Analysis (FMEA) uses the Risk Priority Number (RPN) to evaluate the risk level of a component or process. The RPN index is determined by calculating the product of the severity, occurrence and detection indexes. The most critically debated disadvantage of this approach is that different sets of these three indexes may produce an identical RPN value. This paper seeks to address the drawbacks of traditional FMEA and proposes a new approach to overcome these shortcomings. The Risk Priority Code (RPC) is used to prioritize failure modes when two or more failure modes have the same RPN. A new method is proposed to prioritize failure modes when there is a disagreement in the ranking scale for severity, occurrence and detection. An Analysis of Variance (ANOVA) is used to compare the means of the RPN values. The SPSS (Statistical Package for the Social Sciences) statistical analysis package is used to analyze the data. The results presented are based on two case studies. It is found that the proposed new methodology resolves the limitations of the traditional FMEA approach.
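
The traditional RPN calculation, and the kind of tie that motivates the proposed approach, can be sketched as follows; the failure-mode data are made-up examples, not from the paper's case studies.

    # Illustrative sketch of RPN = severity x occurrence x detection and of how
    # different index sets can collide on the same RPN (all data are made up).
    failure_modes = {
        "seal leakage":       {"S": 8, "O": 3, "D": 2},
        "shaft misalignment": {"S": 4, "O": 6, "D": 2},
        "bearing wear":       {"S": 6, "O": 2, "D": 4},
    }

    for name, idx in failure_modes.items():
        rpn = idx["S"] * idx["O"] * idx["D"]
        print(f"{name:20s} RPN = {rpn}")
    # All three modes above give RPN = 48 even though their risk profiles differ,
    # which is exactly the ambiguity the proposed RPC/ANOVA approach targets.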