Mechanical Structure Design Optimization by Blind Number Theory: Time-dependent Reliability

In a product development process, understanding the functional behavior of the system, the role of components in achieving those functions, and the failure modes that arise when a component or subsystem fails to perform its required function helps in developing an appropriate design validation and verification program for reliability assessment. Integrating these three issues helps design and reliability engineers identify weak spots in a design and plan future actions and testing programs. This case study demonstrates the advantage of unascertained theory in describing subjective cognitive uncertainty, applies blind number (BN) theory to describe the uncertainty of the mechanical system failure process, and uses the same theory to derive a further mechanical system reliability model. Practical calculations show that the BN model is simple and computationally inexpensive yet offers better forecasting capability, which gives it some value for macroscopic analysis.

Using Genetic Algorithm for Distributed Generation Allocation to Reduce Losses and Improve Voltage Profile

This paper presents a method for the optimal allocation of distributed generation (DG) in distribution systems, with the aim of improving the voltage profile and reducing losses in the distribution network. A Genetic Algorithm (GA) is used as the solution tool: the problem is defined with respect to these two objectives and an objective function is introduced. Because the fitness values in the genetic algorithm are sensitive to the network state, a load flow must be run for each candidate solution to support the decision-making. The load flow algorithm is therefore combined with the GA until acceptable results are obtained. The MATPOWER package is used for the load flow calculation and coupled with our Genetic Algorithm. The proposed method is programmed in MATLAB, and ETAP software is used to verify the correctness of the results. The method was applied to part of the Tehran electricity distribution grid. The results on this test system demonstrate improvement in both the voltage profile and the loss reduction indexes.
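The general scheme can be sketched as follows (a minimal illustration, not the authors' code): each GA individual encodes a candidate DG bus and size, and the fitness combines the loss and voltage-deviation figures returned by a load flow. The run_load_flow function below is a hypothetical stand-in for the MATPOWER call, and the bus count, DG sizes, and objective weights are illustrative assumptions.

```python
# Minimal GA sketch (not the authors' code): each individual encodes the bus and
# size of a DG unit; the fitness combines the active-power loss and voltage
# deviation returned by a load flow. run_load_flow is a hypothetical stand-in
# for the MATPOWER call used in the paper.
import random

N_BUSES = 33                          # assumed feeder size
DG_SIZES = [0.5, 1.0, 1.5, 2.0]       # candidate DG ratings in MW (illustrative)

def run_load_flow(bus, size_mw):
    """Hypothetical placeholder returning (total_loss_mw, voltage_deviation)."""
    return 0.2 - 0.01 * size_mw + 0.001 * bus, 0.05 - 0.001 * size_mw

def fitness(individual):
    bus, size = individual
    loss, v_dev = run_load_flow(bus, size)
    return loss + 10.0 * v_dev        # weighted single objective (assumption)

def ga(pop_size=20, generations=50, p_mut=0.1):
    pop = [(random.randrange(1, N_BUSES), random.choice(DG_SIZES))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                      # minimization
        parents = pop[:pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a[0], b[1])                   # simple crossover of genes
            if random.random() < p_mut:            # mutation: re-draw the genes
                child = (random.randrange(1, N_BUSES), random.choice(DG_SIZES))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

print("best (bus, size_MW):", ga())
```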

Teager-Huang Analysis Applied to Sonar Target Recognition

In this paper, a new approach for target recognition based on the Empirical Mode Decomposition (EMD) algorithm of Huang et al. [11] and the energy tracking operator of Teager [13]-[14] is introduced. The conjunction of these two methods is called Teager-Huang analysis. This approach is well suited for the analysis of nonstationary signals. The impulse response (IR) of the target is first band-pass filtered into subsignals (components) called Intrinsic Mode Functions (IMFs) with well-defined instantaneous frequency (IF) and instantaneous amplitude (IA). Each IMF is a zero-mean AM-FM component. In the second step, the energy of each IMF is tracked using the Teager energy operator (TEO). The IF and IA, which describe the time-varying characteristics of the signal, are estimated using the Energy Separation Algorithm (ESA) of Maragos et al. [16]-[17]. In the third step, a set of features such as skewness and kurtosis is extracted from the IF, IA, and IMF energy functions. The Teager-Huang analysis is tested on a set of synthetic IRs of Sonar targets with different physical characteristics (density, velocity, shape, etc.). PCA is first applied to the features to discriminate between manufactured and natural targets. The manufactured patterns are then classified into spheres and cylinders. One hundred percent correct recognition is achieved with twenty-three echoes, where sixteen IRs, used for training, are noise-free and seven IRs, used for the testing phase, are corrupted with white Gaussian noise.
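The core signal-processing step can be illustrated as follows (a sketch under stated assumptions, not the authors' implementation): the discrete Teager energy operator and the DESA-2 energy separation algorithm of Maragos et al. applied to a synthetic AM-FM component standing in for one IMF; in the full method, EMD would first decompose the echo into its IMFs.

```python
# Sketch of the Teager energy operator (TEO) and DESA-2 energy separation applied
# to one AM-FM component (IMF). The synthetic component below stands in for an
# IMF of a target echo obtained by EMD.
import numpy as np
from scipy.stats import skew, kurtosis

def teo(x):
    """Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def desa2(x):
    """DESA-2 estimates of instantaneous frequency (rad/sample) and amplitude."""
    y = np.zeros_like(x)
    y[1:-1] = x[2:] - x[:-2]
    px, py = teo(x), teo(y)
    eps = 1e-12
    omega = 0.5 * np.arccos(np.clip(1.0 - py / (2.0 * px + eps), -1.0, 1.0))
    amp = 2.0 * px / (np.sqrt(py) + eps)
    return omega, amp

# Synthetic AM-FM component standing in for one IMF
n = np.arange(2048)
imf = (1 + 0.3 * np.cos(2 * np.pi * 0.001 * n)) * np.cos(
    2 * np.pi * (0.05 * n + 0.005 * np.sin(2 * np.pi * 0.002 * n)))

omega, amp = desa2(imf)
features = [skew(omega[2:-2]), kurtosis(omega[2:-2]),
            skew(amp[2:-2]), kurtosis(amp[2:-2]), np.sum(teo(imf))]
print(features)   # candidate inputs to PCA and the classifier
```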

Non-Destructive Evaluation of Launch Tube Welds with Radiography

The non-destructive testing of launch tube welds with radiography was investigated and evaluated against the AWS D1.1 standard. The work began with preparation of the launch tube for radiographic inspection, after which the X-ray inspection was carried out and the results obtained. The inspection results were judged by a certified inspector, and finally the evaluation against the AWS D1.1 standard was conducted. The results show that weld position P1 did not conform to AWS D1.1, which allows incomplete penetration only up to 4 mm in size. The other welds conformed to the standard. Corrective actions for the incomplete penetration are also provided for future work.

Vibration Attenuation Using Functionally Graded Material

The aim of this work was to attenuate the vibration amplitude of a Cessna 172 airplane wing by using functionally graded material (FGM) instead of a uniform or composite material. Wing strength was assessed by means of a stress analysis study, while wing vibration amplitudes and mode shapes were obtained by means of modal and harmonic analysis. The methodology was first verified on a simple cantilever plate model; the results were promising, so the same methodology was applied to the airplane wing model. Results for aluminum models, titanium models, and functionally graded aluminum/titanium models were compared, showing substantial vibration attenuation when the FGM is used. Optimizing the FGM gradation satisfied the objective of reducing and attenuating the vibration amplitudes and demonstrated the effect of the FGM on the vibration behavior; comparing aluminum-rich models with titanium-rich models constituted the optimization in this paper. The results show a significant attenuation in vibration magnitudes when FGM is used instead of a titanium plate, and when an aluminium wing with FGM spars is used instead of an all-aluminium wing. It is also recommended that, in future work, the model scale be changed to 1:10 or even 1:1 when computer capabilities allow.
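For context, a common way to define such a graded Al/Ti material is a power-law variation of properties through the thickness; the sketch below is an illustrative assumption, since the paper does not state its gradation function, and the thickness, exponent, and property values are placeholders.

```python
# Illustrative power-law gradation for an Al/Ti functionally graded plate
# (an assumption for demonstration; the paper does not state its gradation law).
import numpy as np

E_AL, RHO_AL = 70e9, 2700.0      # aluminium: Young's modulus (Pa), density (kg/m^3)
E_TI, RHO_TI = 110e9, 4500.0     # titanium

def fgm_property(z, h, p_bottom, p_top, n=1.0):
    """Property at through-thickness coordinate z in [-h/2, h/2], exponent n."""
    vf_top = ((z / h) + 0.5) ** n          # volume fraction of the top material
    return p_bottom + (p_top - p_bottom) * vf_top

h = 0.01                                   # plate thickness, m (illustrative)
z = np.linspace(-h / 2, h / 2, 11)
E = fgm_property(z, h, E_AL, E_TI, n=2.0)  # n > 1 gives an aluminium-rich plate
rho = fgm_property(z, h, RHO_AL, RHO_TI, n=2.0)
print(np.c_[z, E, rho])
```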

Extrapolation of Clinical Data from an Oral Glucose Tolerance Test Using a Support Vector Machine

To extract the important physiological factors related to diabetes from an oral glucose tolerance test (OGTT) by mathematical modeling, highly informative but convenient protocols are required. Current models require a large number of samples and an extended period of testing, which is not practical for daily use. The purpose of this study is to make model assessments possible even with a reduced number of samples taken over a relatively short period. For this purpose, test values were extrapolated using a support vector machine. A good correlation was found between reference and extrapolated values in the 741 OGTTs evaluated. This result indicates that a reduction in the number of clinical tests is possible through a computational approach.
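A minimal sketch of the idea, assuming synthetic OGTT curves in place of the clinical data: a support vector regressor learns to extrapolate a late sample from the first few time points, and the correlation between extrapolated and reference values is then examined.

```python
# Hedged sketch: train a support vector regressor to extrapolate a later OGTT
# sample from the first few time points. Data here are synthetic placeholders;
# the study used 741 recorded OGTTs.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_subjects = 200
# Synthetic glucose curves at 0, 30, 60, 90, 120 min (mg/dL), purely illustrative
baseline = rng.normal(95, 10, n_subjects)
peak = baseline + rng.normal(60, 20, n_subjects)
curves = np.stack([baseline,
                   baseline + 0.8 * (peak - baseline),
                   peak,
                   baseline + 0.6 * (peak - baseline),
                   baseline + 0.3 * (peak - baseline)], axis=1)
curves += rng.normal(0, 5, curves.shape)

X = curves[:, :3]          # reduced protocol: samples at 0, 30, 60 min
y = curves[:, 4]           # value to extrapolate: the 120-min sample
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = SVR(kernel="rbf", C=10.0, epsilon=1.0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("correlation:", np.corrcoef(pred, y_te)[0, 1])
```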

Fatigue Failure of Structural Steel – Analysis Using Fracture Mechanics

Fatigue is a major threat in the service of steel structures subjected to fluctuating loads. With the additional effects of corrosion and the presence of weld joints, fatigue failure can become even more critical in structural steel. An apt example of such a structure is a sailing ship, which experiences a constant stress due to floating and a pulsating bending load due to the waves. This paper describes an attempt to verify the theory of fatigue within a fracture mechanics approach through experiments that determine the constants of the crack growth curve. For this, a specimen with a known defect was prepared from shipbuilding steel and subjected to a pulsating bending load. The fatigue crack and its nature were observed in this experiment. The fracture mechanics approach to fatigue is thus applied in a simple practical experiment, and the constants of the crack growth equation are determined.
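The constants of the crack growth equation referred to above are, in the standard fracture-mechanics treatment, those of the Paris-Erdogan law, stated here for reference (the exact form used in the paper is assumed):

```latex
% Paris-Erdogan crack growth law: a = crack length, N = number of load cycles,
% \Delta K = stress intensity factor range, C and m = the material constants
% to be determined from the experiment; Y is a geometry factor.
\frac{\mathrm{d}a}{\mathrm{d}N} = C \, (\Delta K)^{m},
\qquad \Delta K = Y \, \Delta\sigma \, \sqrt{\pi a}
```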

An Agent-Based Scheduling Framework for Flexible Manufacturing Systems

The concept of flexible manufacturing is highly appealing for gaining a competitive edge in the market by quickly adapting to changing customer needs. Scheduling jobs on flexible manufacturing systems (FMSs) is the challenging task of managing the available flexibility on the shop floor so as to react to the dynamics of the environment in real time. In this paper, an agent-oriented scheduling framework that can be integrated with a real or a simulated FMS is proposed. The framework works in stochastic environments with a dynamic model of job arrival. It supports hierarchical cooperative scheduling that builds on the available flexibility of the shop floor. Testing the framework on a model of a real FMS showed the capability of the proposed approach to overcome the drawbacks of conventional approaches and to maintain a near-optimal solution despite the dynamics of the operational environment.
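As a rough illustration of the kind of reactive, agent-style dispatching such a framework supports (an assumption for demonstration only, not the authors' hierarchy), machine agents can bid on each arriving job with their expected completion time:

```python
# Minimal illustration (not the authors' framework) of agent-style dispatching:
# machine agents bid for each arriving job with their expected completion time,
# and the job is awarded to the best bidder, keeping scheduling reactive to
# dynamic job arrivals.
import random

class MachineAgent:
    def __init__(self, name, speed):
        self.name, self.speed, self.busy_until = name, speed, 0.0

    def bid(self, arrival_time, work):
        start = max(arrival_time, self.busy_until)
        return start + work / self.speed        # proposed completion time

    def award(self, arrival_time, work):
        self.busy_until = self.bid(arrival_time, work)
        return self.busy_until

machines = [MachineAgent("M1", 1.0), MachineAgent("M2", 1.5), MachineAgent("M3", 0.8)]
random.seed(1)
jobs = sorted((random.uniform(0, 20), random.uniform(1, 5)) for _ in range(10))

for arrival, work in jobs:                      # dynamic job arrivals
    winner = min(machines, key=lambda m: m.bid(arrival, work))
    done = winner.award(arrival, work)
    print(f"job arriving at {arrival:5.2f} -> {winner.name}, done at {done:5.2f}")
```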

Experimental Testing of Composite Tubes with Different Corrugation Profile Subjected to Lateral Compression Load

This paper presents the effect of corrugation profile geometry on the crushing behavior, energy absorption, failure mechanism, and failure mode of woven roving glass fibre/epoxy laminated composite tubes. Experimental investigations were carried out on composite tubes with three different profile shapes: sinusoidal, triangular, and trapezoidal. The tubes were subjected to lateral compressive loading. In addition to the radially corrugated composite tubes, a cylindrical composite tube was fabricated and tested under the same conditions in order to isolate the effect of the corrugation geometry. Typical deformation histories are presented, and the behavior of the tubes with respect to peak crushing load, energy absorbed, and mode of crushing is discussed. The results show that the behavior of a tube under lateral compression load is influenced by the geometry of the tube itself.
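The energy-absorption quantities discussed above are typically obtained from the recorded load-displacement curve; the sketch below is an illustrative post-processing step with placeholder data, not the authors' measurements.

```python
# Illustrative post-processing (placeholder data, not the authors' measurements):
# the energy absorbed in lateral crushing is the area under the load-displacement
# curve, and the specific energy absorption (SEA) normalizes it by specimen mass.
import numpy as np

displacement_m = np.linspace(0.0, 0.040, 200)                 # 0-40 mm travel
load_N = 5e3 * np.exp(-((displacement_m - 0.012) / 0.008) ** 2) \
    + 1.25e3 * displacement_m / 0.040                         # synthetic record

peak_load_N = load_N.max()
# Trapezoidal integration of load over displacement gives the absorbed energy in J
energy_J = float(np.sum(np.diff(displacement_m) * (load_N[1:] + load_N[:-1]) / 2))
mass_kg = 0.35                                                # assumed specimen mass
sea_J_per_kg = energy_J / mass_kg
print(peak_load_N, energy_J, sea_J_per_kg)
```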

MIT Automatic ECG Beat Tachycardia Detection Using Artificial Neural Network

The application of neural networks to disease diagnosis has made great progress and is widely used by physicians. An electrocardiogram (ECG) carries vital information about heart activity, and physicians use this signal for cardiac disease diagnosis, which was the main motivation for our study. In our work, the tachycardia features obtained are used for the training and testing of a neural network. In this study we use a fuzzy probabilistic neural network as an automatic technique for ECG signal analysis. Because every real signal recorded by the equipment can contain different artifacts, several preprocessing steps are needed before feeding it to our system. The wavelet transform is used to extract the morphological parameters of the ECG signal. The outcome of the approach for a variety of arrhythmias shows that the presented approach is superior to previously presented algorithms, with an average accuracy of about 95% for more than 7 tachyarrhythmias.
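A hedged sketch of the processing chain, with synthetic beats and an MLP standing in for the real recordings and the fuzzy probabilistic neural network: wavelet decomposition supplies morphological features that train and test the classifier.

```python
# Sketch of the pipeline: wavelet decomposition of ECG beats to obtain
# morphological features, then a neural-network classifier. Synthetic beats and
# an MLP stand in for the real recordings and the fuzzy probabilistic neural
# network used in the paper.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synthetic_beat(fast):
    t = np.linspace(0, 1, 180)
    rate = 8.0 if fast else 4.0                      # crude tachycardia surrogate
    return np.sin(2 * np.pi * rate * t) * np.exp(-5 * (t - 0.5) ** 2) \
        + 0.05 * rng.standard_normal(t.size)

def wavelet_features(beat, wavelet="db4", level=4):
    coeffs = pywt.wavedec(beat, wavelet, level=level)
    return np.array([f(c) for c in coeffs for f in (np.mean, np.std)])

X = np.array([wavelet_features(synthetic_beat(fast=i % 2 == 0)) for i in range(400)])
y = np.array([i % 2 for i in range(400)])            # 1 = normal, 0 = tachycardia

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```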

Detection of Action Potentials in the Presence of Noise Using Phase-Space Techniques

Emerging bio-engineering fields such as brain-computer interfaces, neuroprosthetic devices, and the modeling and simulation of neural networks have led to increased research activity in algorithms for the detection, isolation, and classification of action potentials (APs) from noisy data trains. Current techniques in the field of 'unsupervised, no-prior-knowledge' biosignal processing include energy operators, wavelet detection, and adaptive thresholding. These tend to be biased towards larger AP waveforms, APs may be missed due to deviations in spike shape and frequency, and correlated noise spectra can cause false detections. Such algorithms also tend to suffer from large computational expense. A new signal detection technique based upon the ideas of phase-space diagrams and trajectories is proposed, built upon the use of a delayed copy of the AP to highlight discontinuities relative to the background noise. This idea has been used to create algorithms that are computationally inexpensive and address the above problems. Distinct APs have been picked out and manually classified from real physiological data recorded from a cockroach. To facilitate testing of the new technique, an Auto-Regressive Moving Average (ARMA) noise model has been constructed based upon the background noise of the recordings. Together with the classified APs, this model enables the generation of realistic neuronal data sets at arbitrary signal-to-noise ratios (SNR).
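A minimal sketch of the delayed-copy idea, under assumptions that differ from the authors' exact algorithm: the trace is embedded as (x[n], x[n-d]), APs are flagged where the phase-space radius exceeds a threshold, and the background noise is generated with a simple ARMA filter.

```python
# Sketch (an assumption, not the authors' exact algorithm) of delayed-copy
# phase-space detection: embed the trace as (x[n], x[n-d]) and flag samples whose
# distance from the noise cloud exceeds a threshold. Background noise is
# generated with a simple ARMA model via an IIR filter.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
n = 20_000

# ARMA(2,1)-style background noise (illustrative coefficients)
noise = lfilter([1.0, 0.4], [1.0, -1.2, 0.5], rng.standard_normal(n))
noise /= noise.std()

# Insert a few stereotyped action potentials at known positions
ap = np.exp(-np.linspace(-2, 4, 30) ** 2) - 0.4 * np.exp(-np.linspace(-4, 2, 30) ** 2)
x = noise.copy()
spike_times = [3000, 7500, 12000, 16500]
for t in spike_times:
    x[t:t + ap.size] += 4.0 * ap                     # SNR is set by this gain

d = 5                                                # embedding delay (samples)
radius = np.hypot(x[d:], x[:-d])                     # distance in (x[n], x[n-d]) plane
threshold = radius.mean() + 4 * radius.std()         # assumed detection rule
detections = np.flatnonzero(radius > threshold) + d
print("first detected samples:", detections[:10], "true onsets:", spike_times)
```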

Mining News Sites to Create Special Domain News Collections

We present a method to create special domain collections from news sites. The method only requires a single sample article as a seed. No prior corpus statistics are needed and the method is applicable to multiple languages. We examine various similarity measures and the creation of document collections for English and Japanese. The main contributions are as follows. First, the algorithm can build special domain collections from as little as one sample document. Second, unlike other algorithms it does not require a second "general" corpus to compute statistics. Third, in our testing the algorithm outperformed others in creating collections made up of highly relevant articles.
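The similarity-based selection at the heart of the method can be sketched as follows (an illustration, not the paper's algorithm verbatim): candidate articles are ranked by cosine similarity to the single seed article using only term counts, so no second corpus is consulted; the threshold and example texts are assumptions.

```python
# Illustrative sketch (not the paper's algorithm verbatim): rank candidate news
# articles by cosine similarity to a single seed article using only term counts,
# so no second "general" corpus statistics are needed.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

seed = "Central bank raises interest rates to curb inflation in the euro area."
candidates = [
    "Interest rates rise again as the central bank battles stubborn inflation.",
    "Local team wins the championship after a dramatic penalty shootout.",
    "Inflation data push bond yields higher ahead of the bank's rate decision.",
    "New smartphone model unveiled with a faster camera and larger display.",
]

vectorizer = CountVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([seed] + candidates)
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

threshold = 0.2                      # assumed inclusion threshold
collection = [doc for doc, s in zip(candidates, scores) if s >= threshold]
for doc, s in zip(candidates, scores):
    print(f"{s:.2f}  {doc}")
print("collected:", len(collection), "articles")
```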

Investigations on the Influence of Process Parameters on the Sliding Wear Behavior of Components Produced by Direct Metal Laser Sintering (DMLS)

This work presents the results of a study carried out to determine the effect of process parameters on the sliding wear behavior of components manufactured by direct metal laser sintering (DMLS). A standard procedure and specimen were used in the present study to determine the wear behavior. Using Taguchi's experimental technique, a modified L8 orthogonal array was developed. Sliding wear testing was carried out using a pin-on-disk machine, and the analysis of variance (ANOVA) technique was used to investigate the effect of the process parameters and to identify the main process parameter influencing the wear behavior of the DMLS components. It was found that part orientation, one of the selected process parameters, had more influence on wear than the other selected process parameters.
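The analysis step can be illustrated with a two-level L8-style design and an ANOVA table ranking the factors (the factor names and wear values below are placeholders, not the study's measurements):

```python
# Hedged illustration of the analysis step: an L8-style two-level design with a
# few DMLS parameters and an ANOVA ranking their influence on wear. Factor names
# and wear responses are placeholders.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

runs = pd.DataFrame({
    "orientation":  [-1, -1, -1, -1, 1, 1, 1, 1],
    "laser_power":  [-1, -1, 1, 1, -1, -1, 1, 1],
    "layer_thick":  [-1, 1, -1, 1, -1, 1, -1, 1],
    # placeholder wear responses (mm^3), dominated here by orientation
    "wear": [0.82, 0.85, 0.79, 0.83, 1.21, 1.25, 1.18, 1.22],
})

model = ols("wear ~ C(orientation) + C(laser_power) + C(layer_thick)", runs).fit()
print(sm.stats.anova_lm(model, typ=2))   # sum-of-squares table ranks the factors
```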

The Future Regulatory Challenges of Liquidity Risk Management

Liquidity risk management ranks among the key concepts applied in finance. Liquidity is defined as the capacity to obtain funding when needed, while liquidity risk is the threat to this capacity to generate cash at fair cost. In this paper we present the challenges of liquidity risk management resulting from the 2007-2009 global financial upheaval. We see five main regulatory liquidity risk management issues requiring revision in the coming years: liquidity measurement; intra-day and intra-group liquidity management; contingency planning and liquidity buffers; liquidity systems, controls and governance; and, finally, models testing the viability of business liquidity models.

Text-independent Speaker Identification Based on MAP Channel Compensation and Pitch-dependent Features

One major source of performance decline in speaker recognition systems is channel mismatch between training and testing. This paper focuses on improving the channel robustness of a speaker recognition system in two respects: channel compensation techniques and channel-robust features. The system is a text-independent speaker identification system based on two-stage recognition. For channel compensation, this paper applies the MAP (Maximum A Posteriori) channel compensation technique, previously used in speech recognition, to the speaker recognition system. For channel-robust features, this paper introduces pitch-dependent features and a pitch-dependent speaker model for the second recognition stage. Based on the first-stage recognition of the test speech using a GMM (Gaussian Mixture Model), the system uses the GMM scores to decide whether a second recognition stage is needed. If it is, the system selects a few speakers from all the speakers who participated in the first stage for the second stage. For each selected speaker, the system obtains three pitch-dependent results from his pitch-dependent speaker model and then uses an ANN (Artificial Neural Network) to combine the three pitch-dependent results and one GMM score into a fused result. The second-stage recognition is based on these fused results. The experiments show that the correct identification rate of the two-stage recognition system based on MAP channel compensation and pitch-dependent features is 41.7% better than that of the baseline system in the closed-set test.
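The first stage and the shortlist decision can be sketched as follows (an assumption, not the paper's implementation): one GMM per speaker scores the test features, and a small score margin between the two best speakers triggers the pitch-dependent second stage; the features, threshold, and shortlist size are placeholders.

```python
# Sketch of the first stage and the shortlist decision (an assumption, not the
# paper's implementation): one GMM per speaker scores the test features, and a
# small score margin between the two best speakers triggers the second stage.
# The features are random placeholders for real cepstral vectors.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_speakers, dim = 5, 12

# One GMM per speaker, trained on placeholder feature vectors
models = []
for s in range(n_speakers):
    feats = rng.normal(loc=0.5 * s, scale=1.0, size=(500, dim))
    models.append(GaussianMixture(n_components=4, random_state=0).fit(feats))

test = rng.normal(loc=1.0, scale=1.0, size=(200, dim))     # utterance of speaker 2
scores = np.array([m.score(test) for m in models])         # mean log-likelihoods

order = np.argsort(scores)[::-1]
margin = scores[order[0]] - scores[order[1]]
if margin < 2.0:                       # assumed threshold for an ambiguous result
    shortlist = order[:3]              # candidates for the pitch-dependent stage,
    print("second stage for speakers", shortlist)   # where an ANN fuses the three
else:                                  # pitch-dependent results with the GMM score
    print("accept first-stage decision: speaker", order[0])
```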

Italians' Social and Emotional Loneliness: The Results of Five Studies

Subjective loneliness describes people who feel a disagreeable or unacceptable lack of meaningful social relationships, both at the quantitative and the qualitative level. The studies presented here tested an Italian 18-item self-report loneliness measure that included items adapted from previously developed scales, namely a short version of the UCLA scale (Russell, Peplau and Cutrona, 1980) and the 11-item Loneliness Scale of De Jong-Gierveld and Kamphuis (JGLS; 1985). The studies aimed at testing the developed scale and at verifying whether loneliness is better conceptualized as a unidimensional construct (so-called 'general loneliness') or a bidimensional one, comprising the distinct facets of social and emotional loneliness. The loneliness questionnaire included two single-item criterion measures of sad mood and social contact, and asked participants to supply information on a number of socio-demographic variables. Factor analyses of the responses obtained in two preliminary studies, with 59 and 143 Italian participants respectively, showed good factor loadings and subscale reliability and confirmed that perceived loneliness clearly has two components, a social and an emotional one, the latter measured by two subscales: a 7-item 'general' loneliness subscale derived from the UCLA and a 6-item 'emotional' scale included in the JGLS. Results further showed that the type and amount of loneliness are related negatively to the frequency of social contacts and positively to sad mood. In a third study, data were obtained from a nationwide sample of 9,097 Italian subjects, aged 12 to about 70, who filled in the test online on the Italian website of the large-audience magazine Focus. The results again confirmed the reliability of the component subscales, namely social, emotional, and 'general' loneliness, and showed that they were highly correlated with each other, especially the latter two. Loneliness scores were significantly predicted by sex, age, education level, sad mood, and social contact, and, to a lesser extent, by other variables, e.g., geographical area and profession. The scale's validity was confirmed by the results of a fourth study, with elderly men and women (N = 105) living at home or in residential care units. The three subscales were significantly related, among other things, to depression and to various measures of the extent of, and satisfaction with, social contacts with relatives and friends. Finally, a fifth study with 315 career starters showed that social and emotional loneliness correlate with life satisfaction and with measures of emotional intelligence. Altogether the results showed good validity and reliability, in the tested samples, of the entire scale and of its components.

Investigation of the Effectiveness of Siloxane Hydrophobic Injection for Renovation of Damp Brick Masonry

An experimental investigation of the effect of a siloxane-based hydrophobic injection on the properties of an old-fashioned type of ceramic brick is presented in the paper. In the experimental testing, the matrix density, total open porosity, pore size distribution, sorptivity, water absorption coefficient, and sorption and desorption isotherms are measured for the original brick as well as for the brick treated by hydrophobic injection. On the basis of the measured data, the effectiveness of the hydrophobic injection in preventing moisture ingress into the studied ceramic brick is assessed.

An Approach for Reducing the Computational Complexity of LAMSTAR Intrusion Detection System using Principal Component Analysis

The security of computer networks plays a strategic role in modern computer systems. Intrusion Detection Systems (IDS) act as the 'second line of defense', placed inside a protected network and looking for known or potential threats in network traffic and/or audit data recorded by hosts. We developed an Intrusion Detection System using a LAMSTAR neural network to learn patterns of normal and intrusive activities and to classify observed system activities, and we compared the performance of the LAMSTAR IDS with other classification techniques using the 5 classes of the KDDCup99 data. The LAMSTAR IDS gives better performance, but at the cost of high computational complexity, training time, and testing time, compared to other classification techniques (binary tree classifier, RBF classifier, Gaussian mixture classifier). We further reduced the computational complexity of the LAMSTAR IDS by reducing the dimension of the data using principal component analysis, which in turn reduces the training and testing time while maintaining almost the same performance.
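The dimensionality-reduction step can be sketched as follows, with random 41-dimensional vectors and an RBF-kernel SVM standing in for the KDDCup99 data and the LAMSTAR network; the reduced dimension of 12 is an assumption.

```python
# Sketch of the dimensionality-reduction step: project KDDCup99-style feature
# vectors onto their leading principal components before classification. Random
# data and an RBF-kernel SVM stand in for the real dataset and the LAMSTAR network.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 41))                 # 41 features as in KDDCup99
y = rng.integers(0, 5, size=2000)               # 5 traffic classes
X[y == 1, :5] += 3.0                            # give one class some structure

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pca = PCA(n_components=12).fit(X_tr)            # assumed reduced dimension
clf = SVC().fit(pca.transform(X_tr), y_tr)
print("explained variance kept:", pca.explained_variance_ratio_.sum())
print("test accuracy:", clf.score(pca.transform(X_te), y_te))
```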

Multilevel Classifiers in Recognition of Handwritten Kannada Numerals

The recognition of handwritten numerals is an important area of research because of its applications in post offices, banks, and other organizations. This paper presents automatic recognition of handwritten Kannada numerals based on structural features. Five different types of features, namely a profile-based 10-segment string, water reservoir, vertical and horizontal strokes, end points, and average boundary length from the minimal bounding box, are used in the recognition of the numerals. The effect of each feature and of their combinations on numeral classification is analyzed using nearest neighbor classifiers. It is common to combine multiple categories of features into a single feature vector for classification. Instead, separate classifiers can be used to classify based on each visual feature individually, and the final classification can be obtained by combining the separate base classification results. One popular approach is to combine the classifier results into a feature vector and leave the decision to a next-level classifier. This method is extended here to extract better information, a possibility distribution, from the base classifiers in order to resolve conflicts among the classification results. We use the fuzzy k-Nearest Neighbor (fuzzy k-NN) as the base classifier for the individual feature sets, and their results together form the feature vector for the final k-Nearest Neighbor (k-NN) classifier. Testing is done, using the different features individually and in combination, on a database containing 1600 samples of different numerals, and the results are compared with those of different existing methods.
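The two-level scheme can be sketched as follows (an illustration, not the authors' code): a fuzzy k-NN on each individual feature set outputs class-membership (possibility) vectors, and the concatenated memberships feed a final crisp k-NN; the synthetic features and split sizes are assumptions.

```python
# Two-level sketch: a fuzzy k-NN per feature set outputs class-membership
# vectors, and the concatenated memberships feed a final crisp k-NN. Synthetic
# features stand in for the structural features of the Kannada numerals.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors

def fuzzy_knn_memberships(X_train, y_train, X_query, n_classes, k=5, m=2.0):
    """Keller-style fuzzy k-NN: class memberships from inverse-distance weights."""
    dist, idx = NearestNeighbors(n_neighbors=k).fit(X_train).kneighbors(X_query)
    w = 1.0 / (dist ** (2.0 / (m - 1.0)) + 1e-9)
    memberships = np.zeros((X_query.shape[0], n_classes))
    for c in range(n_classes):
        memberships[:, c] = np.sum(w * (y_train[idx] == c), axis=1)
    return memberships / memberships.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n, n_classes = 600, 10
y = rng.integers(0, n_classes, n)
feature_sets = [rng.normal(loc=y[:, None], scale=s, size=(n, 8))   # two noisy
                for s in (2.0, 3.0)]                               # feature sets

train1, train2, test = np.arange(0, 300), np.arange(300, 450), np.arange(450, n)

def stacked_memberships(rows):
    """Concatenate the possibility distributions produced by each base classifier."""
    return np.hstack([fuzzy_knn_memberships(F[train1], y[train1], F[rows], n_classes)
                      for F in feature_sets])

final = KNeighborsClassifier(n_neighbors=3).fit(stacked_memberships(train2), y[train2])
print("final k-NN accuracy:", final.score(stacked_memberships(test), y[test]))
```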

Modeling and Simulations of Complex Low-Dimensional Systems: Testing the Efficiency of Parallelization

The deterministic quantum transfer-matrix (QTM) technique and its mathematical background are presented. This important tool in computational physics can be applied to a class of real physical low-dimensional magnetic systems described by the Heisenberg Hamiltonian, which includes macroscopic molecular-based spin chains, small magnetic clusters embedded in some supramolecules, and other interesting compounds. Using QTM, the spin degrees of freedom are accurately taken into account, yielding the thermodynamic functions at finite temperatures. In order to test the susceptibility calculations in a parallel environment, the speed-up and efficiency of the parallelization are analyzed on our platform, an SGI Origin 3800 with p = 128 processor units. Using the Message Passing Interface (MPI) system libraries, we find a code efficiency of 94% for p = 128, which makes our application highly scalable.
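For reference, the speed-up and parallel efficiency quoted above follow the standard definitions, where T(p) denotes the wall-clock time on p processors:

```latex
% Parallel speed-up and efficiency; T(p) is the wall-clock time on p processors.
S(p) = \frac{T(1)}{T(p)}, \qquad E(p) = \frac{S(p)}{p}
% The reported E(128) \approx 0.94 corresponds to S(128) \approx 120.
```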