Mode III Interlaminar Fracture in Woven Glass/Epoxy Composite Laminates

In the present study, the fracture behavior of woven fabric-reinforced glass/epoxy composite laminates under mode III crack growth was experimentally investigated and numerically modeled. Two methods were used to calculate the strain energy release rate: the experimental compliance calibration (CC) method and the Virtual Crack Closure Technique (VCCT). To this end, the Edge Crack Torsion (ECT) test was used to evaluate fracture toughness under mode III loading (out-of-plane shear) at different crack lengths. Load–displacement curves and the associated energy release rates were obtained for the various cases of interest. To calculate the fracture toughness JIII, two criteria were considered, the non-linearity point and the maximum point of the load–displacement curve, and JIII was observed to increase with crack length. Both the experimental compliance method and the virtual crack closure technique proved applicable for interpreting mode III fracture mechanics data of woven glass/epoxy laminates.
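
As a rough illustration of the compliance calibration idea (not the authors' implementation), the sketch below fits a low-order polynomial compliance–crack-length relation C(a) to hypothetical ECT data and evaluates G_III = (P^2 / 2B) dC/da at an assumed critical load; the data values, specimen width B, and fit form are all assumptions.

    import numpy as np

    # Hypothetical compliance-calibration data for an ECT specimen (assumed values):
    # crack lengths a (m) and measured compliance C = delta/P (m/N).
    a = np.array([0.010, 0.015, 0.020, 0.025, 0.030])
    C = np.array([2.10e-5, 2.16e-5, 2.23e-5, 2.31e-5, 2.40e-5])

    # Fit C(a) with a quadratic and differentiate the fit analytically.
    coeffs = np.polyfit(a, C, 2)
    dCda = np.polyder(np.poly1d(coeffs))

    # Mode III energy release rate from the compliance method:
    #   G_III = P_c**2 / (2*B) * dC/da, evaluated at the crack length of interest.
    P_c = 900.0      # critical load in N (assumed, e.g. the non-linearity point)
    B = 0.038        # specimen width in m (assumed)
    a_i = 0.020
    G_III = P_c**2 / (2.0 * B) * dCda(a_i)
    print(f"G_III at a = {a_i*1e3:.0f} mm: {G_III:.0f} J/m^2")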

IVE: Virtual Humans’ AI Prototyping Toolkit

The IVE toolkit has been created to facilitate research, education, and development in the field of virtual storytelling and computer games. Primarily, the toolkit is intended for modelling action selection mechanisms of virtual humans, investigating level-of-detail AI techniques for large virtual environments, and exploring the joint behaviour and role-passing techniques (Sec. V). Additionally, the toolkit can be used as AI middleware without any changes. The main strength of IVE is that it serves for prototyping both the AI and the virtual worlds themselves. The purpose of this paper is to describe IVE's features in general and to present our current work on this platform, including an educational game.

Modeling of Material Removal on Machining of Ti-6Al-4V through EDM using Copper Tungsten Electrode and Positive Polarity

This paper presents an optimized model to investigate the effects of peak current, pulse-on time, and pulse-off time on the material removal rate (MRR) in EDM of a titanium alloy, using a copper tungsten electrode with positive polarity. The experiments are carried out on Ti-6Al-4V by varying the peak current, pulse-on time, and pulse-off time. A mathematical model is developed to correlate these variables with the material removal rate of the workpiece. Design of experiments (DOE) and response surface methodology (RSM) techniques are employed, and the fit and adequacy of the proposed models are verified through analysis of variance (ANOVA). The results show that the material removal rate increases as peak current and pulse-on time increase, while the effect of pulse-off time on MRR changes with peak current. The optimum machining conditions for material removal rate are estimated and verified against the proposed optimized results. The developed model agrees with the experimental results within an acceptable error of about 4%. This leads to a desirable material removal rate and economical industrial machining through optimization of the input parameters.
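
As a hedged illustration of the RSM step (not the authors' fitted model), the sketch below fits a second-order response surface for MRR as a function of peak current, pulse-on time, and pulse-off time; the sample runs, variable ranges, and model form are assumptions.

    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression

    # Hypothetical EDM runs: columns = peak current (A), pulse-on time (us), pulse-off time (us).
    X = np.array([[10, 50, 20], [10, 100, 40], [20, 50, 40],
                  [20, 100, 20], [30, 50, 20], [30, 100, 40],
                  [10, 150, 60], [20, 150, 20], [30, 150, 60]], dtype=float)
    mrr = np.array([4.1, 5.0, 7.8, 9.6, 11.2, 14.5, 5.6, 10.3, 15.1])  # assumed MRR, mm^3/min

    # Second-order (quadratic) response surface model, as used in RSM.
    poly = PolynomialFeatures(degree=2, include_bias=False)
    model = LinearRegression().fit(poly.fit_transform(X), mrr)
    print("R^2 on the fitted runs:", model.score(poly.fit_transform(X), mrr))

    # Predict MRR at a new parameter setting.
    print("Predicted MRR:", model.predict(poly.transform([[25, 120, 30]]))[0])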

Wave Vortex Parameters as an Indicator of Breaking Intensity

The study of the geometric shape of the vortices enclosed by plunging breaking waves as a possible indicator of the breaking intensity of ocean waves has been ongoing for almost 50 years with limited success. This paper investigates the validity of using the vortex ratio and vortex angle as predictors of breaking intensity. Previously published work on vortex parameters, based on regular wave flume results or solitary wave theory, presents contradictory results and conclusions. Through the first complete analysis of vortex parameters from field-collected irregular breaking waves, it is shown that the vortex ratio and vortex angle cannot be accurately predicted from standard breaking wave characteristics and hence are not recommended as indicators of breaking intensity.

Software Reliability Prediction Model Analysis

Software reliability prediction provides a great opportunity to measure the software failure rate at any point throughout system test. A software reliability prediction model provides a technique for improving reliability. Software reliability is a very important factor in estimating overall system reliability, which depends on the individual component reliabilities; it differs from hardware reliability in that it reflects design perfection. The main reason for software reliability problems is the high complexity of software, and various approaches can be used to improve its reliability. In this article we focus on a software reliability model, assuming that there is time redundancy whose value (the number of repeated transmissions of basic blocks) can serve as an optimization parameter. We consider the given mathematical model under the assumption that the system may experience not only irreversible failures but also failures that can be treated as self-repairing, which significantly affect the reliability and accuracy of information transfer. The main task of this paper is to find the time distribution function (DF) of the transmission of an instruction sequence consisting of a random number of basic blocks. The system software is considered unreliable, and the time between adjacent failures follows an exponential distribution.
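
As a rough sketch of the kind of model described (not the paper's derivation), the simulation below estimates the distribution of total transmission time for a sequence of basic blocks when inter-failure times are exponential and a failed block is simply retransmitted; the block duration, failure rate, and number of blocks are assumed values.

    import random

    def block_time(t_block, failure_rate):
        """Time to transmit one basic block, retransmitting it whenever a failure
        occurs before the block completes (inter-failure times ~ exponential)."""
        total = 0.0
        while True:
            t_fail = random.expovariate(failure_rate)
            if t_fail >= t_block:          # block finished before the next failure
                return total + t_block
            total += t_fail                # failure: count the lost time and retry

    def sequence_time(n_blocks, t_block, failure_rate):
        return sum(block_time(t_block, failure_rate) for _ in range(n_blocks))

    # Empirical distribution of the sequence transmission time (assumed parameters).
    samples = sorted(sequence_time(n_blocks=20, t_block=1.0, failure_rate=0.05)
                     for _ in range(10_000))
    median = samples[len(samples) // 2]
    print(f"median transmission time: {median:.2f}")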

Reconstitute Information about Discontinued Water Quality Variables in the Nile Delta Monitoring Network Using Two Record Extension Techniques

The world economic crisis and budget constraints have caused authorities, especially those in developing countries, to rationalize water quality monitoring activities. Rationalization consists of reducing the number of monitoring sites, the number of samples, and/or the number of water quality variables measured. The reduction in water quality variables is usually based on correlation: if two variables exhibit high correlation, some of the information produced may be redundant. Consequently, one variable can be discontinued while the other continues to be measured. Later, the ordinary least squares (OLS) regression technique is employed to reconstitute information about the discontinued variable by using the continuously measured one as an explanatory variable. In this paper, two record extension techniques are employed to reconstitute information about discontinued water quality variables: OLS and the Line of Organic Correlation (LOC). An empirical experiment is conducted using water quality records from the Nile Delta water quality monitoring network in Egypt, and the record extension techniques are compared for their ability to predict different statistical parameters of the discontinued variables. Results show that OLS is better at estimating individual water quality records, but it underestimates the variance of the extended records. The LOC technique is superior in preserving the characteristics of the entire distribution and avoids underestimating the variance. It is concluded from this study that OLS can be used for substituting missing values, while LOC is preferable for inferring statements about the probability distribution.
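
A minimal sketch of the two record extension techniques, under the usual textbook definitions (OLS slope r*s_y/s_x versus LOC slope sign(r)*s_y/s_x); the sample data are hypothetical, not Nile Delta records.

    import numpy as np

    def extend_record(x, y, x_new, method="OLS"):
        """Estimate the discontinued variable y from the continued variable x.
        OLS slope = r * s_y / s_x ; LOC slope = sign(r) * s_y / s_x."""
        r = np.corrcoef(x, y)[0, 1]
        sx, sy = np.std(x, ddof=1), np.std(y, ddof=1)
        slope = r * sy / sx if method == "OLS" else np.sign(r) * sy / sx
        intercept = np.mean(y) - slope * np.mean(x)
        return intercept + slope * np.asarray(x_new)

    # Hypothetical concurrent records of two correlated water quality variables.
    rng = np.random.default_rng(0)
    x = rng.normal(30.0, 5.0, 60)                 # continued variable
    y = 0.8 * x + rng.normal(0.0, 3.0, 60)        # discontinued variable
    x_later = rng.normal(30.0, 5.0, 200)          # period with only x measured

    for m in ("OLS", "LOC"):
        est = extend_record(x, y, x_later, method=m)
        print(m, "variance of extended record:", round(np.var(est, ddof=1), 2))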

Improved Power Spectrum Estimation for RR-Interval Time Series

The RR interval series is non-stationary and unevenly spaced in time. Estimating its power spectral density (PSD) with traditional techniques such as the FFT requires resampling at uniform intervals, and researchers have used different interpolation techniques as resampling methods. All of these resampling methods introduce a low-pass filtering effect into the power spectrum. The Lomb transform is a means of obtaining PSD estimates directly from an irregularly sampled RR interval series, thus avoiding resampling. In this work, the superiority of the Lomb transform over the FFT-based approach, with linear and cubic-spline interpolation used as resampling methods, is established in terms of reproducing the exact frequency locations as well as the relative magnitudes of each spectral component.
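
As a small illustration (not the paper's data), the sketch below compares the Lomb periodogram of an unevenly sampled series against resampling plus FFT, using a synthetic 0.25 Hz modulation; the series length, sampling jitter, and resampling rate are assumptions.

    import numpy as np
    from scipy.signal import lombscargle
    from scipy.interpolate import interp1d

    # Synthetic, unevenly sampled "RR-like" series with a 0.25 Hz component.
    rng = np.random.default_rng(1)
    t = np.cumsum(0.8 + 0.1 * rng.standard_normal(300))       # uneven sample times (s)
    x = 0.05 * np.sin(2 * np.pi * 0.25 * t)                    # modulation around the mean
    x -= x.mean()

    # Lomb periodogram: works directly on the uneven samples.
    freqs = np.linspace(0.01, 0.5, 500)                        # Hz
    pgram = lombscargle(t, x, 2 * np.pi * freqs)
    print("Lomb peak at %.3f Hz" % freqs[np.argmax(pgram)])

    # FFT approach: resample at 2 Hz with cubic-spline interpolation first.
    fs = 2.0
    tu = np.arange(t[0], t[-1], 1 / fs)
    xu = interp1d(t, x, kind="cubic")(tu)
    spec = np.abs(np.fft.rfft(xu)) ** 2
    f_fft = np.fft.rfftfreq(len(xu), 1 / fs)
    print("FFT peak at %.3f Hz" % f_fft[np.argmax(spec[1:]) + 1])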

An Exhaustive Review of Die Sinking Electrical Discharge Machining Process and Scope for Future Research

Electrical discharge machining (EDM) is especially suited to manufacturing parts with complex 3-D geometries from hard materials that are extremely difficult to machine by conventional processes. In this paper the authors review the research work carried out on the development of die-sinking EDM over the past decades for improving machining characteristics such as material removal rate, surface roughness, and tool wear ratio. The various techniques reported by EDM researchers for improving these machining characteristics are categorized as process parameter optimization, multi-spark techniques, powder-mixed EDM, servo control systems, and pulse discrimination. Finally, a flexible machine controller is suggested for die-sinking EDM to enhance the machining characteristics and achieve a high level of automation, so that die-sinking EDM can be integrated into a Computer Integrated Manufacturing environment as required by agile manufacturing systems.

Techniques for Reliability Evaluation in Distribution System Planning

This paper presents reliability evaluation techniques applied in distribution system planning studies and operation. The reliability of distribution systems is an important issue in power engineering for both utilities and customers, and it is a key consideration in the design and operation of electric power distribution systems and their loads. Reliability evaluation of distribution systems has been the subject of many recent papers, and the modeling and evaluation techniques have improved considerably.
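
As a hedged illustration of the kind of evaluation such studies perform (the abstract does not name its indices), the sketch below computes two standard distribution reliability indices, SAIFI and SAIDI, from hypothetical outage records.

    # Hypothetical outage records: (customers interrupted, outage duration in hours).
    outages = [(120, 1.5), (45, 0.5), (300, 2.0), (80, 4.0)]
    customers_served = 5000  # total customers on the feeder (assumed)

    # Standard distribution reliability indices.
    saifi = sum(n for n, _ in outages) / customers_served          # interruptions / customer / yr
    saidi = sum(n * d for n, d in outages) / customers_served      # hours / customer / yr
    caidi = saidi / saifi                                           # average outage duration (h)
    print(f"SAIFI = {saifi:.3f}, SAIDI = {saidi:.3f} h, CAIDI = {caidi:.2f} h")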

Chip Formation during Turning Multiphase Microalloyed Steel

Machining by turning was carried out on a lathe to study the chip formation of multiphase ferrite (F-B-M) microalloyed steel. A Taguchi orthogonal array was employed to plan the machining runs. Continuous and discontinuous chips formed for different cutting parameters such as speed, feed, and depth of cut. Optical and scanning electron microscopy were employed to identify the chip morphology.
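
As a hedged illustration of the experimental design step (the abstract does not state which array was used), the sketch below builds a standard Taguchi L9 orthogonal array for three cutting parameters at three levels each; the levels are assumed values.

    # Standard L9(3^3) Taguchi orthogonal array (rows = runs, columns = factors).
    L9 = [(0, 0, 0), (0, 1, 1), (0, 2, 2),
          (1, 0, 1), (1, 1, 2), (1, 2, 0),
          (2, 0, 2), (2, 1, 0), (2, 2, 1)]

    # Assumed factor levels for the turning trials.
    speed = [300, 450, 600]      # rpm
    feed = [0.10, 0.15, 0.20]    # mm/rev
    doc = [0.5, 1.0, 1.5]        # depth of cut, mm

    for run, (i, j, k) in enumerate(L9, start=1):
        print(f"run {run}: speed={speed[i]} rpm, feed={feed[j]} mm/rev, depth={doc[k]} mm")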

Vehicle Gearbox Fault Diagnosis Based On Cepstrum Analysis

Research on damage in gears and gear pairs using vibration signals remains very attractive, because vibration signals from a gear pair are complex in nature and not easy to interpret. Predicting gear pair defects by analyzing changes in the vibration signal of gear pairs in operation is a very reliable method; therefore, a suitable vibration signal processing technique is necessary to extract defect information that is generally obscured by noise from the dynamic factors of other gear pairs. This article presents the value of cepstrum analysis in vehicle gearbox fault diagnosis. The cepstrum represents the overall power content of a whole family of harmonics and sidebands, even when more than one family of sidebands is present at the same time. The concepts of measurement and analysis involved in using the technique are briefly outlined. Cepstrum analysis is used to detect an artificial pitting defect in a vehicle gearbox loaded at different speeds and torques. The test stand is equipped with three dynamometers: the input dynamometer serves as the internal combustion engine, while the output dynamometers introduce the load on the flanges of the output joint shafts. The pitting defect is manufactured on the tooth flank of a fifth-speed gear on the secondary shaft. A method for gear fault diagnosis based on the order cepstrum is also presented, and the procedure is illustrated with experimental vibration data from the vehicle gearbox. The results show the effectiveness of cepstrum analysis in detecting and diagnosing the gear condition.
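
A minimal sketch of the power cepstrum computation used in this kind of analysis (the signal here is synthetic, not the authors' test-stand data): an inverse FFT of the log power spectrum, whose quefrency peaks correspond to sideband spacings.

    import numpy as np

    fs = 10_000.0                       # sampling rate, Hz (assumed)
    t = np.arange(0, 2.0, 1 / fs)

    # Synthetic gear-mesh signal: a 500 Hz mesh tone amplitude-modulated at a
    # 25 Hz shaft frequency, which produces sidebands spaced 25 Hz apart.
    x = (1 + 0.5 * np.sin(2 * np.pi * 25 * t)) * np.sin(2 * np.pi * 500 * t)
    x += 0.1 * np.random.default_rng(0).standard_normal(len(t))

    # Power cepstrum: inverse FFT of the log power spectrum.
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    cepstrum = np.fft.irfft(np.log(spectrum + 1e-12))
    quefrency = np.arange(len(cepstrum)) / fs                   # seconds

    # The dominant rahmonic should appear near 1/25 Hz = 0.04 s.
    i = np.argmax(cepstrum[int(0.005 * fs):int(0.1 * fs)]) + int(0.005 * fs)
    print(f"cepstrum peak at {quefrency[i]*1000:.1f} ms (~{1/quefrency[i]:.1f} Hz spacing)")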

Motion Prediction and Motion Vector Cost Reduction during Fast Block Motion Estimation in MCTF

In the 3D wavelet video coding framework, temporal filtering is done along the trajectory of motion using Motion Compensated Temporal Filtering (MCTF); hence a computationally efficient motion estimation technique is needed for MCTF. In this paper a predictive technique is proposed to reduce the computational complexity of the MCTF framework by exploiting the high correlation among the frames in a Group of Pictures (GOP). The proposed technique applies the coarse and fine searches of any fast block-based motion estimation algorithm only to the first pair of frames in a GOP. The generated motion vectors are supplied to the subsequent frame pairs, and even to subsequent temporal levels, where only a fine search is carried out around the predicted motion vectors; the coarse search is thus skipped for all motion estimation in a GOP except the first pair of frames. The technique has been tested with different fast block-based motion estimation algorithms on standard test sequences using MC-EZBC, a state-of-the-art scalable video coder. The simulation results reveal a substantial reduction (20.75% to 38.24%) in the number of search points during motion estimation, without compromising the quality of the reconstructed video compared to non-predictive techniques. Since the motion vectors of all frame pairs in a GOP except the first differ by only ±1 from the motion vectors of the previous pair, the number of bits required for motion vectors is also reduced by 50%.
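
A minimal sketch of the fine refinement step described above, under assumptions about block size and matching cost (SAD): it searches only the ±1 neighbourhood of a motion vector predicted from the previous frame pair instead of running a full coarse search.

    import numpy as np

    def sad(cur, ref, bx, by, dx, dy, bs=16):
        """Sum of absolute differences between a block of the current frame and a
        displaced block of the reference frame (out-of-frame candidates rejected)."""
        h, w = ref.shape
        x, y = bx + dx, by + dy
        if x < 0 or y < 0 or x + bs > w or y + bs > h:
            return np.inf
        return np.abs(cur[by:by+bs, bx:bx+bs].astype(int)
                      - ref[y:y+bs, x:x+bs].astype(int)).sum()

    def fine_search(cur, ref, bx, by, mv_pred, bs=16):
        """Refine a predicted motion vector by searching only its +/-1 neighbourhood."""
        best = (np.inf, mv_pred)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                cand = (mv_pred[0] + dx, mv_pred[1] + dy)
                cost = sad(cur, ref, bx, by, cand[0], cand[1], bs)
                best = min(best, (cost, cand))
        return best[1]

    # Toy usage with random frames and a slightly-off predicted vector.
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    cur = np.roll(ref, shift=(-2, -3), axis=(0, 1))     # true motion (dx=3, dy=2)
    print(fine_search(cur, ref, bx=16, by=16, mv_pred=(2, 2)))   # -> (3, 2)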

On Combining Support Vector Machines and Fuzzy K-Means in Vision-based Precision Agriculture

One important objective in precision agriculture is to minimize the volume of herbicides applied to the fields through the use of site-specific weed management systems. To reach this goal, two major factors need to be considered: 1) the similar spectral signature, shape, and texture of weeds and crops; and 2) the irregular distribution of the weeds within the crop field. This paper outlines an automatic computer vision system for the detection and differential spraying of Avena sterilis, a noxious weed growing in cereal crops. The proposed system involves two processes: image segmentation and decision making. Image segmentation combines basic image processing techniques to extract cells from the image as low-level units. Each cell is described by two area-based attributes measuring the relations between the crops and the weeds. From these attributes, a hybrid decision-making approach determines whether or not a cell must be sprayed. The hybrid approach combines Support Vector Machines and the fuzzy k-means method through fuzzy aggregation theory, and this constitutes the main contribution of the paper. The method's performance is compared against other available strategies.
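
As a rough sketch of the kind of hybrid decision rule described (the attributes, aggregation operator, and data below are assumptions, not the authors' design): an SVM posterior and a fuzzy k-means membership for the "spray" class are combined with a simple fuzzy aggregation (here the mean) and thresholded.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Hypothetical cells: two area-based attributes; label 1 = "spray", 0 = "do not spray".
    X = np.vstack([rng.normal([0.2, 0.3], 0.08, (60, 2)),
                   rng.normal([0.6, 0.7], 0.08, (60, 2))])
    y = np.r_[np.zeros(60, int), np.ones(60, int)]

    # SVM component: class-1 probability for each cell.
    svm = SVC(probability=True, gamma="scale").fit(X, y)
    p_svm = svm.predict_proba(X)[:, 1]

    # Fuzzy k-means component: membership of each cell in the "spray" cluster,
    # computed from distances to the two class centroids (fuzzifier m = 2).
    centers = np.array([X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)])
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    u = 1.0 / (d ** 2 * np.sum(1.0 / d ** 2, axis=1, keepdims=True))
    p_fuzzy = u[:, 1]

    # Fuzzy aggregation (mean operator assumed) and final spray decision.
    spray = (0.5 * (p_svm + p_fuzzy)) > 0.5
    print("cells flagged for spraying:", int(spray.sum()), "of", len(X))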

High Temperature Deformation Behavior of Cr-containing Superplastic Iron Aluminide

Superplastic deformation and high temperature load relaxation behavior of coarse-grained iron aluminides with the composition Fe-28 at.% Al have been investigated. A series of load relaxation and tensile tests were conducted at temperatures ranging from 600 to 850 °C. The flow curves obtained from the load relaxation tests were found to have a sigmoidal shape and to provide stress vs. strain rate data over a very wide strain rate range, from 10^-7/s to 10^-2/s. Tensile tests were conducted at initial strain rates ranging from 3×10^-5/s to 1×10^-2/s. A maximum elongation of ~500% was obtained at an initial strain rate of 3×10^-5/s, and the maximum strain rate sensitivity was found to be 0.68 at 850 °C in the binary Fe-28Al alloy. Microstructure observation of the deformed specimens by optical microscopy (OM) and the electron back-scattered diffraction (EBSD) technique revealed evidence of grain boundary migration and grain refinement occurring during superplastic deformation, suggesting a dynamic recrystallization mechanism. The addition of 5 at.% Cr appeared to deteriorate the superplasticity of the binary iron aluminide. By applying the internal variable theory of structural superplasticity, the Cr addition was shown to lower the contribution of the frictional resistance to dislocation glide during high temperature deformation of the Fe3Al alloy.

Application of Whole Genome Amplification Technique for Genotype Analysis of Bovine Embryos

In recent years there has been increasing interest in using genotyped bovine embryos in commercial embryo transfer programs. Biopsy of a few cells at the morula stage is essential for preimplantation genetic diagnosis (PGD), but the low amount of DNA has limited the number of molecular analyses that can be performed within PGD. Whole genome amplification (WGA) promises to eliminate this problem. We evaluated the feasibility and performance of an improved primer extension preamplification (I-PEP) method with starting bovine genomic DNA from 1-8 cells in the WGA reaction, and optimized a short and simple I-PEP (ssI-PEP) procedure (~3 h). The optimized WGA method was assessed by six locus-specific polymerase chain reactions (PCRs), including restriction fragment length polymorphism (RFLP). The optimized WGA procedure is sufficiently sensitive for molecular genetic analyses from a few input cells, opening a new era for generating characterized bovine embryos at the preimplantation stage.

Comparison of S-transform and Wavelet Transform in Power Quality Analysis

In power quality analysis, the non-stationary nature of voltage distortions requires precise and powerful analytical techniques. Time-frequency representations (TFRs) provide a powerful method for identifying non-stationary signals. This paper presents a comparative study of two techniques for the analysis and visualization of voltage distortions with time-varying amplitudes: the Discrete Wavelet Transform (DWT) and the S-transform. Several power quality problems are analyzed using both transforms, clearly showing the advantage of the S-transform in detecting, localizing, and classifying power quality problems.
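
A minimal sketch of the discrete S-transform computed via the FFT (the standard frequency-domain formulation, not necessarily the authors' implementation); the voltage sag test signal and sampling rate are assumptions, not the paper's cases.

    import numpy as np

    def s_transform(x):
        """Discrete S-transform via the FFT: for each frequency, window the shifted
        spectrum with a Gaussian and inverse-transform. Rows = frequency bins 0..N/2."""
        x = np.asarray(x, dtype=float)
        N = len(x)
        X = np.fft.fft(x)
        S = np.zeros((N // 2 + 1, N), dtype=complex)
        S[0, :] = x.mean()                                   # zero-frequency row
        m = np.arange(N)
        for n in range(1, N // 2 + 1):
            gauss = np.exp(-2 * np.pi**2 * m**2 / n**2) \
                  + np.exp(-2 * np.pi**2 * (m - N)**2 / n**2)
            S[n, :] = np.fft.ifft(np.roll(X, -n) * gauss)
        return S

    # Assumed test case: a 50 Hz voltage with a sag (amplitude drop) in the middle.
    fs, f0 = 1600, 50
    t = np.arange(0, 0.4, 1 / fs)
    amp = np.where((t > 0.15) & (t < 0.25), 0.6, 1.0)        # 40% sag for 0.1 s
    v = amp * np.sin(2 * np.pi * f0 * t)

    S = s_transform(v)
    row = int(round(f0 * len(t) / fs))                       # frequency bin nearest 50 Hz
    print("S-transform amplitude before/during sag:",
          round(abs(S[row, int(0.05 * fs)]), 2), round(abs(S[row, int(0.2 * fs)]), 2))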

A New Vision of Fractal Geometry with Triangulation Algorithm

The L-system is a tool commonly used for modeling and simulating the growth of fractal plants. The aim of this paper is to join some problems of computational geometry with fractal geometry by using the L-system technique to generate fractal plants in 3D. An L-system constructs the fractal structure by applying rewriting rules sequentially; this technique relies on a recursive process with a large number of iterations to obtain different shapes of 3D fractal plants. Here, instead, the rewriting is applied for a fixed number of iterations, at most three. The vertices generated from the last stage of the L-system rewriting process are used as input to the triangulation algorithm, which constructs the triangulated shape of these vertices. The resulting shapes can be used as covers for architectural objects and in various computer graphics fields. The paper presents a gallery of triangulated forms whose application in architecture offers an alternative to domes and other traditional types of roofs.
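
A minimal sketch of the L-system rewriting step limited to three iterations, as described above; the axiom and production rules below are a generic bracketed plant example, not the paper's rules.

    # Generic bracketed L-system (axiom and rules are illustrative, not the paper's).
    axiom = "X"
    rules = {"X": "F[+X][-X]FX", "F": "FF"}

    def rewrite(axiom, rules, iterations=3):
        """Apply the production rules sequentially for a fixed number of iterations."""
        s = axiom
        for _ in range(iterations):
            s = "".join(rules.get(ch, ch) for ch in s)
        return s

    s = rewrite(axiom, rules, iterations=3)
    print(len(s), s[:60] + "...")
    # The resulting string is then interpreted turtle-style (F = draw, +/- = turn,
    # [ ] = push/pop state) to obtain the 3D vertices fed to the triangulation step.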

Identification of Aircraft Gas Turbine Engine's Temperature Condition

The groundlessness of applying probability-statistical methods is especially evident at an early stage of diagnosing the technical condition of aviation gas turbine engines (GTEs), when the available information is fuzzy, limited, and uncertain; at these diagnosing stages, the efficiency of the new soft computing technology based on fuzzy logic and neural network methods is demonstrated. Multiple linear and nonlinear models (regression equations) obtained on the basis of statistical fuzzy data are trained with high accuracy. When sufficient information is available, a recurrent algorithm based on a new recursive least squares method (LSM) is proposed for identifying the aviation GTE technical condition from noisy measurements of the input and output parameters of the generalized multiple linear and nonlinear models. As an application of the given technique, the technical condition of an in-service D30KU-154 aviation engine was estimated at an altitude of H = 10600 m.
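
A minimal sketch of a standard recursive least squares (RLS) update of the kind such an identification algorithm relies on (the regressors, noise level, and true parameters below are assumptions, not engine data).

    import numpy as np

    def rls_identify(Phi, y, lam=1.0, delta=1000.0):
        """Standard recursive least squares: update parameter estimates theta from
        regressor rows Phi[k] and noisy outputs y[k] (forgetting factor lam)."""
        n = Phi.shape[1]
        theta = np.zeros(n)
        P = delta * np.eye(n)
        for phi, yk in zip(Phi, y):
            k = P @ phi / (lam + phi @ P @ phi)     # gain vector
            theta = theta + k * (yk - phi @ theta)  # correct estimate with prediction error
            P = (P - np.outer(k, phi) @ P) / lam    # update inverse-correlation matrix
        return theta

    # Hypothetical linear model: output = 0.8*x1 - 1.5*x2 + 2.0 + measurement noise.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (500, 2))
    Phi = np.c_[X, np.ones(500)]
    y = Phi @ np.array([0.8, -1.5, 2.0]) + 0.05 * rng.standard_normal(500)
    print(np.round(rls_identify(Phi, y), 3))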

Molecular Dynamics Simulation of Thermal Properties of Au3Ni Nanowire

The aim of this research was to calculate the thermal properties of an Au3Ni nanowire. The molecular dynamics (MD) simulation technique was used to obtain the effect of radius size on the energy, the melting temperature, and the latent heat of fusion in the isobaric-isothermal (NPT) ensemble. The quantum Sutton-Chen (Q-SC) many-body interatomic potentials were used for the gold (Au) and nickel (Ni) elements, and a mixing rule was devised to obtain the potential parameters for the alloy nanowire. Our MD simulation results show that the melting temperature and latent heat of fusion increase with increasing nanowire diameter, while the cohesive energy decreases with increasing diameter.
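
As a hedged sketch of a typical Sutton-Chen parameter mixing step (the abstract does not state its exact rule, so the geometric/arithmetic means below are an assumption, and the parameter values are illustrative placeholders, not the published Q-SC constants):

    from math import sqrt

    # Illustrative placeholder Q-SC parameter sets (not the published constants):
    # n, m = exponents; epsilon = energy scale (eV); c = density coefficient; a = lattice constant (Angstrom).
    au = {"n": 11, "m": 8, "epsilon": 7.8e-3, "c": 53.6, "a": 4.07}
    ni = {"n": 10, "m": 5, "epsilon": 7.4e-3, "c": 84.7, "a": 3.52}

    def mix(p1, p2):
        """One common mixing rule for the cross (Au-Ni) pair interaction: geometric
        mean for the energy scale, arithmetic means for exponents and lattice constant.
        (c is a per-atom parameter in Sutton-Chen alloy models and is not mixed.)"""
        return {"n": (p1["n"] + p2["n"]) / 2,
                "m": (p1["m"] + p2["m"]) / 2,
                "epsilon": sqrt(p1["epsilon"] * p2["epsilon"]),
                "a": (p1["a"] + p2["a"]) / 2}

    print(mix(au, ni))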

The Wavelet-Based DFT: A New Interpretation, Extensions and Applications

In 1990 [1] the subband DFT (SB-DFT) technique was proposed. This technique uses Hadamard filters in the decomposition step to split the input sequence into low-pass and high-pass sequences. In the next step, either two DFTs are computed on both bands to obtain the full-band DFT, or one DFT is computed on one of the two bands to obtain an approximate DFT, and a combination network with correction factors is applied after the DFTs. Another approach was proposed in 1997 [2] that uses a special discrete wavelet transform (DWT) to compute the discrete Fourier transform (DFT). In the first step of that algorithm, the input sequence is decomposed, in a manner similar to the SB-DFT, into two sequences using a wavelet decomposition with Haar filters. The second step is to perform DFTs on both bands to obtain the full-band DFT, or to obtain a fast approximate DFT by pruning at both the input and output sides. In this paper, the wavelet-based DFT (W-DFT) with Haar filters is interpreted as the SB-DFT with Hadamard filters; the only difference is a constant factor in the combination network. This result is important because it completes the analysis of the W-DFT: all the results concerning accuracy and approximation errors in the SB-DFT become applicable. An application example in spectral analysis is given for both the SB-DFT and the W-DFT (with different filters). The adaptive capability of the SB-DFT is incorporated into the W-DFT algorithm to select the band of highest energy as the band to be computed. Finally, the W-DFT is extended to the two-dimensional case, and an application to image transformation is given using two different types of wavelet filters.
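
A minimal sketch of the exact (full-band) subband/Haar DFT recombination described above, written from the standard radix-2 identity rather than from either cited paper; the correction factors are the usual twiddle terms.

    import numpy as np

    def subband_dft(x):
        """Subband/Haar-based DFT: split x into sum and difference half-bands,
        take two half-length DFTs, and recombine with correction factors.
        Assumes len(x) is even."""
        x = np.asarray(x, dtype=complex)
        N = len(x)
        a = x[0::2] + x[1::2]          # low-pass (Haar sum) subband
        b = x[0::2] - x[1::2]          # high-pass (Haar difference) subband
        A = np.fft.fft(a)              # half-length DFTs
        B = np.fft.fft(b)
        k = np.arange(N)
        w = np.exp(-2j * np.pi * k / N)          # correction (twiddle) factors
        return 0.5 * ((1 + w) * A[k % (N // 2)] + (1 - w) * B[k % (N // 2)])

    # Check the combination network against the direct FFT.
    x = np.random.randn(16)
    assert np.allclose(subband_dft(x), np.fft.fft(x))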