A Combined Practical Approach to Condition Monitoring of Reciprocating Compressors using IAS and Dynamic Pressure

A comparison and evaluation of different condition monitoring (CM) techniques, namely dynamic cylinder pressure and crankshaft Instantaneous Angular Speed (IAS), was carried out experimentally on a reciprocating compressor (RC) for the detection and diagnosis of valve faults, with the aim of establishing a CM programme that can successfully detect and diagnose faults in the machine. Leakage in the valve plate was introduced experimentally into a two-stage reciprocating compressor. The effect of the faults on compressor performance was monitored, and the differences from normal, healthy performance were noted as fault signatures to be used for the detection and diagnosis of faults. The paper concludes with what is considered to be a unique approach to condition monitoring. First, each of the two most useful techniques is used to produce a Truth Table which details the circumstances in which each method can detect and diagnose a fault. The two Truth Tables are then combined into a single Decision Table to provide a unique and reliable method for detecting and diagnosing each of the individual faults introduced into the compressor. This gives an accurate diagnosis of compressor faults.
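As a hedged illustration of the combination step only (the fault labels and detection outcomes below are invented placeholders, not the paper's measured results), per-technique Truth Tables can be merged into a single Decision Table along these lines:

```python
# Illustrative sketch: fault names and outcomes are hypothetical, not the paper's data.
# Truth table for each technique: fault -> (detected, diagnosed)
ias_table = {
    "suction_valve_leak":   (True,  True),
    "discharge_valve_leak": (True,  False),
}
pressure_table = {
    "suction_valve_leak":   (True,  False),
    "discharge_valve_leak": (True,  True),
}

def combine(tables):
    """Merge per-technique truth tables into one decision table:
    a fault is detected/diagnosed if at least one technique can do so."""
    decision = {}
    for table in tables:
        for fault, (det, diag) in table.items():
            d0, g0 = decision.get(fault, (False, False))
            decision[fault] = (d0 or det, g0 or diag)
    return decision

print(combine([ias_table, pressure_table]))
```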

Atomic Force Microscopy (AFM) Topographical Surface Characterization of Multilayer-Coated and Uncoated Carbide Inserts

In recent years, scanning probe microscopy (SPM) based atomic force microscopy (AFM) has gained acceptance over a wide spectrum of research and science applications. Most work has focused on physical, chemical and biological fields, while less attention has been devoted to manufacturing and machining aspects. The purpose of the current study is to assess the possible implementation of SPM AFM features and its NanoScope software in general machining applications, with special attention to the tribological aspects of cutting tools. The surface morphology of coated and uncoated as-received carbide inserts is examined, analyzed, and characterized through the determination of the appropriate scanning settings, the suitable data-type imaging techniques and the most representative data analysis parameters using the MultiMode SPM AFM in contact mode. The NanoScope operating software is used to capture real-time images of three data types: “Height", “Deflection" and “Friction". Three scan sizes are independently performed: 2, 6, and 12 μm with a 2.5 μm vertical range (Z). Offline analysis includes the determination of three functional topographical parameters: surface “Roughness", power spectral density “PSD" and “Section". The 12 μm scan size in association with “Height" imaging is found to be efficient for capturing the fine features and tribological aspects of the examined surface. Also, “Friction" analysis is found to produce a comprehensive explanation of the lateral characteristics of the scanned surface. Many surface defects and drawbacks have been precisely detected and analyzed.
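As a rough sketch of how the quoted topographical parameters relate to raw height data (this is not the NanoScope implementation; the synthetic height map and the 12 μm scan length are assumptions), roughness and a line-averaged PSD can be computed as follows:

```python
# Minimal sketch of roughness and PSD parameters from a synthetic AFM height map.
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(scale=5e-9, size=(256, 256))    # heights in metres (placeholder data)

dev = z - z.mean()
Ra = np.mean(np.abs(dev))                      # arithmetic average roughness
Rq = np.sqrt(np.mean(dev ** 2))                # RMS roughness

L = 12e-6                                      # assumed 12 um scan length
dx = L / z.shape[1]
# 1-D power spectral density estimate, averaged over all scan lines
psd = np.mean(np.abs(np.fft.rfft(dev, axis=1)) ** 2, axis=0) * dx / z.shape[1]
freqs = np.fft.rfftfreq(z.shape[1], d=dx)      # spatial frequencies (1/m)

print(f"Ra = {Ra:.2e} m, Rq = {Rq:.2e} m")
print(f"PSD evaluated at {len(freqs)} spatial frequencies up to {freqs[-1]:.2e} 1/m")
```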

Adaptive Car Safety System

Car accidents are one of the major causes of death in many countries. Many researchers have attempted to design and develop techniques to increase car safety in recent years. In spite of all these efforts, it is still challenging to design a system that adapts to the driver rather than to the automotive characteristics. In this paper, an adaptive car safety system is described that attempts to find such a balance.

Wood Species Recognition System

The proposed system identifies the species of a wood using the textural features present in its bark. Each wood species has its own unique patterns in its bark, which enables the proposed system to identify it accurately. Automatic wood recognition systems have not yet been well established, mainly due to the lack of research in this area and the difficulty of obtaining a wood database. In our work, a wood recognition system has been designed based on pre-processing techniques, feature extraction and correlation of the features of wood species for their classification. Texture classification is a problem that has been studied and tested using different methods due to its valuable use in various pattern recognition problems, such as wood recognition and rock classification. The most popular technique used for textural classification is Gray-Level Co-occurrence Matrices (GLCM). The features extracted from the enhanced images using the GLCM are correlated, which determines the classification between the various wood species. The results obtained show a high rate of recognition accuracy, proving that the techniques used are suitable for implementation for commercial purposes.
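A minimal sketch of the GLCM feature extraction and correlation-based matching described above, assuming scikit-image (graycomatrix/graycoprops) and a hypothetical bark image file; it is not the authors' exact pipeline:

```python
# GLCM texture features and a simple correlation-based classifier (illustrative only).
import numpy as np
from skimage import io, color, img_as_ubyte
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_u8):
    """Contrast, homogeneity, energy and correlation from a 4-direction GLCM."""
    glcm = graycomatrix(gray_u8, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p).mean() for p in props])

def classify(query_feats, species_templates):
    """Assign the species whose stored feature vector correlates best with the query."""
    scores = {name: np.corrcoef(query_feats, feats)[0, 1]
              for name, feats in species_templates.items()}
    return max(scores, key=scores.get)

# "bark.png" is a hypothetical input image for illustration.
gray = img_as_ubyte(color.rgb2gray(io.imread("bark.png")))
print(glcm_features(gray))
```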

Simulation of Series Compensated Transmission Lines Protected with MOV

In this paper, the behavior of fixed series compensated extra high voltage transmission lines during faults is simulated. Many over-voltage protection schemes for series capacitors are limited in terms of size and performance, and are easily affected by environmental conditions, while more compact and environmentally robust equipment is required. The use of series capacitors for compensating part of the inductive reactance of long transmission lines increases the power transmission capacity. Emphasis is placed on the impact of modern capacitor protection techniques (MOV protection). The simulation study is performed using MATLAB/SIMULINK® and results are given for a three-phase and a single-phase-to-ground fault.
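For illustration (not the SIMULINK model itself), the metal-oxide varistor that protects the series capacitor is usually described by a highly non-linear v-i law that clamps the capacitor voltage during faults; the reference values and exponent below are assumed, typical figures:

```python
# Sketch of the non-linear MOV characteristic that clamps series-capacitor voltage.
import numpy as np

Vref, Iref, alpha = 150e3, 1e3, 31      # assumed protective level, reference current, exponent

def mov_current(v):
    """MOV v-i law: negligible current below Vref, sharply rising above it."""
    return np.sign(v) * Iref * (np.abs(v) / Vref) ** alpha

v = np.linspace(0, 1.2 * Vref, 7)
for vk, ik in zip(v, mov_current(v)):
    print(f"v = {vk/1e3:7.1f} kV   i_MOV = {ik:10.3e} A")
```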

Theoretical Analysis of Capacities in Dynamic Spatial Multiplexing MIMO Systems

In this paper, we investigate techniques for scheduling users for resource allocation in multiple input, multiple output (MIMO) packet transmission systems. In these systems, transmit antennas are assigned to one user or dynamically to different users using spatial multiplexing. The allocation of all transmit antennas to one user cannot take full advantage of multi-user diversity. Therefore, we consider the case when resources are allocated dynamically. At each time slot, users have to feed back their channel information on an uplink feedback channel. The channel information assumed available to the schedulers is the zero-forcing (ZF) post-detection signal-to-interference-plus-noise ratio. Our analysis concerns the round-robin and the opportunistic schemes. In this paper, we present an overview and a complete capacity analysis of these schemes. The main result of our study is an analytical form of the system capacity using the ZF receiver at the user terminal. Simulations have been carried out to validate all proposed analytical solutions and to compare the performance of these schemes.
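As a hedged numerical illustration of the quantity being fed back and of the two schedulers (assuming i.i.d. Rayleigh fading and a 2x2 antenna configuration; this is a sketch, not the paper's analytical derivation):

```python
# ZF post-detection SNR feedback and round-robin vs. opportunistic antenna assignment.
import numpy as np

rng = np.random.default_rng(1)
Nt, Nr, K, snr = 2, 2, 8, 10.0          # tx/rx antennas, users, linear per-stream SNR

def zf_post_snr(H, snr):
    """Per-stream ZF post-detection SNR: snr / [(H^H H)^-1]_kk."""
    G = np.linalg.inv(H.conj().T @ H)
    return snr / np.real(np.diag(G))

H = (rng.standard_normal((K, Nr, Nt)) + 1j * rng.standard_normal((K, Nr, Nt))) / np.sqrt(2)
post_snr = np.array([zf_post_snr(H[k], snr) for k in range(K)])   # shape (K, Nt)

# Round robin: in this slot, all transmit antennas go to one user chosen in turn.
rr_capacity = np.sum(np.log2(1 + post_snr[0]))

# Opportunistic: each transmit antenna is assigned to the user with the best post-SNR.
best_users = np.argmax(post_snr, axis=0)
opp_capacity = np.sum(np.log2(1 + post_snr[best_users, np.arange(Nt)]))

print(f"round robin: {rr_capacity:.2f} bit/s/Hz, opportunistic: {opp_capacity:.2f} bit/s/Hz")
```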

pth Moment Exponential Stability of Stochastic Recurrent Neural Networks with Distributed Delays

In this paper, the issue of pth moment exponential stability of stochastic recurrent neural networks with distributed time delays is investigated. By using the method of variation of parameters, inequality techniques, and stochastic analysis, some sufficient conditions ensuring pth moment exponential stability are obtained. The method used in this paper does not resort to any Lyapunov function, and the results derived in this paper generalize some earlier criteria reported in the literature. One numerical example is given to illustrate the main results.
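For reference, the stability notion investigated is the standard one (p = 2 recovers mean-square exponential stability):

```latex
% Definition (standard): the trivial solution x(t) \equiv 0 is pth moment
% exponentially stable if there exist constants C \ge 1 and \lambda > 0
% such that, for every initial condition x_0,
\mathbb{E}\,\lVert x(t; t_0, x_0) \rVert^{p}
   \le C \,\lVert x_0 \rVert^{p}\, e^{-\lambda (t - t_0)},
   \qquad t \ge t_0 .
```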

A Unified Framework for a Robust Conflict-Free Robot Navigation

Many environment-specific methods and systems for robot navigation exist. However, vast strides in the evolution of navigation technologies and system techniques create the need for a general unified framework that is scalable, modular and dynamic. In this paper, a unified framework for a robust conflict-free robot navigation system that can be used in structured or unstructured and indoor or outdoor environments is proposed. The fundamental design aspects and implementation issues encountered during the development of the module are discussed. The results of the deployment of three major peripheral modules of the framework, namely the GSM-based communication module, the GIS module and the GPS module, are reported in this paper.

Integrating Agents and Computational Intelligence Techniques in E-learning Environments

In this contribution, a newly developed e-learning environment is presented, which incorporates Intelligent Agents and Computational Intelligence Techniques. The new e-learning environment is constituted by three parts: the E-learning platform Front-End, the Student Questioner Reasoning and the Student Model Agent. These parts are geographically distributed across dispersed computer servers, with the main focus on the design and development of these subsystems through the use of new and emerging technologies. The parts are interconnected in an interoperable way, using web services for the integration of the subsystems, in order to enhance the user modelling procedure and achieve the goals of the learning process.

Unsupervised Clustering Methods for Identifying Rare Events in Anomaly Detection

Increasing detection rates and reducing false positive rates are important problems in Intrusion Detection Systems (IDS). Although preventative techniques such as access control and authentication attempt to keep intruders out, these can fail, and intrusion detection has been introduced as a second line of defence. Rare events are events that occur very infrequently, and the detection of rare events is a common problem in many domains. In this paper, we propose an intrusion detection method that combines rough sets and fuzzy clustering. Rough sets are used to decrease the amount of data and remove redundancy. Fuzzy c-means clustering allows objects to belong to several clusters simultaneously, with different degrees of membership. Our approach allows us to recognize not only known attacks but also to detect suspicious activity that may be the result of a new, unknown attack. The experimental results on the Knowledge Discovery and Data Mining (KDD Cup 1999) dataset show that the method is efficient and practical for intrusion detection systems.
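A minimal numpy sketch of the fuzzy c-means step (the rough-set reduction is not reproduced here, and the data below are random placeholders rather than KDD Cup records):

```python
# Fuzzy c-means clustering; records with weak membership in every cluster
# are flagged as candidate rare events.
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=100, eps=1e-5, seed=0):
    """Return cluster centres and the membership matrix U (n_samples x c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2 / (m - 1)) *
                       np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
        if np.abs(U_new - U).max() < eps:
            U = U_new
            break
        U = U_new
    return centres, U

X = np.random.default_rng(1).random((500, 6))      # stand-in for reduced feature vectors
centres, U = fuzzy_cmeans(X, c=3)
suspicious = np.where(U.max(axis=1) < 0.5)[0]      # fits no cluster well: possible rare event
print("candidate rare events:", len(suspicious))
```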

An Efficient Architecture for Interleaved Modular Multiplication

Modular multiplication is the basic operation in most public key cryptosystems, such as RSA, DSA, ECC, and DH key exchange. Unfortunately, very large operands (on the order of 1024 or 2048 bits) must be used to provide sufficient security strength. The use of such big numbers dramatically slows down the whole cipher system, especially when running on embedded processors. So far, customized hardware accelerators - developed on FPGAs or ASICs - have been the best choice for accelerating modular multiplication in embedded environments. On the other hand, many algorithms have been developed to speed up such operations. Examples are the Montgomery modular multiplication and the interleaved modular multiplication algorithms. Combining customized hardware with an efficient algorithm is expected to provide a much faster cipher system. This paper introduces an enhanced architecture for computing the modular multiplication of two large numbers X and Y modulo a given modulus M. The proposed design is compared with three previous architectures based on carry-save adders and look-up tables. Look-up tables must be loaded with a set of pre-computed values. Our proposed architecture uses the same carry-save addition, but replaces both the look-up tables and the pre-computations with an enhanced version of sign detection techniques. The proposed architecture supports higher frequencies than the other architectures. It also has a better overall absolute time for a single operation.
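A minimal software sketch of the interleaved (shift-add) modular multiplication underlying such architectures; in hardware the additions are carry-save, and the conditional subtractions below are where sign detection can replace the look-up tables of earlier designs:

```python
# Interleaved modular multiplication: scan the bits of X from MSB to LSB,
# shifting and conditionally adding Y, and reduce modulo M after every step.
def interleaved_mod_mul(X, Y, M, k):
    """Compute (X * Y) mod M over the k bits of X."""
    P = 0
    for i in reversed(range(k)):
        P <<= 1                      # shift: double the partial product
        if (X >> i) & 1:
            P += Y                   # conditionally add the multiplicand
        # after shift+add, P < 3M, so at most two subtractions restore P < M
        if P >= M:
            P -= M
        if P >= M:
            P -= M
    return P

X, Y, M = 0x1234567, 0x89ABCDE, (1 << 31) - 1    # illustrative small operands
assert interleaved_mod_mul(X, Y, M, M.bit_length()) == (X * Y) % M
```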

Locating Center Points for Radial Basis Function Networks Using Instance Reduction Techniques

The behavior of Radial Basis Function (RBF) networks greatly depends on how the center points of the basis functions are selected. In this work, we investigate the use of instance reduction techniques, originally developed to reduce the storage requirements of instance-based learners, for this purpose. Five instance-based reduction techniques were used to determine the set of center points, and RBF networks were trained using these sets of centers. The performance of the RBF networks is studied in terms of classification accuracy and training time. The results obtained were compared with two reference networks: RBF networks that use all instances of the training set as center points (RBF-ALL) and Probabilistic Neural Networks (PNN). The former achieves high classification accuracy and the latter requires a short training time. Results showed that RBF networks trained using sets of centers located by noise-filtering techniques (ALLKNN and ENN), rather than pure reduction techniques, produce the best results in terms of classification accuracy. These networks require a shorter training time than RBF-ALL and achieve higher classification accuracy than PNN. Thus, using ALLKNN and ENN to select center points gives a better combination of classification accuracy and training time. Our experiments also show that using the reduced sets to train the networks is beneficial, especially in the presence of noise in the original training sets.
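A minimal sketch of how an RBF network is trained once a reduced set of centres has been chosen (random data and a random centre subset stand in for the ENN/ALLKNN selections, and the Gaussian width is an assumption):

```python
# RBF network training with a fixed set of centre points.
import numpy as np

def rbf_design(X, centres, sigma):
    """Gaussian activations: phi[i, j] = exp(-||x_i - c_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_rbf(X, y_onehot, centres, sigma=1.0):
    """Least-squares output weights on the hidden-layer activations."""
    Phi = rbf_design(X, centres, sigma)
    W, *_ = np.linalg.lstsq(Phi, y_onehot, rcond=None)
    return W

def predict(X, centres, W, sigma=1.0):
    return np.argmax(rbf_design(X, centres, sigma) @ W, axis=1)

rng = np.random.default_rng(0)
X = rng.random((200, 4)); y = (X[:, 0] + X[:, 1] > 1.0).astype(int)
centres = X[rng.choice(len(X), 20, replace=False)]   # stand-in for a reduced centre set
W = train_rbf(X, np.eye(2)[y], centres)
print("training accuracy:", (predict(X, centres, W) == y).mean())
```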

Heuristic Search Algorithms for Tuning PUMA 560 Fuzzy PID Controller

This paper compares heuristic global search techniques (genetic algorithm, particle swarm optimization, simulated annealing, generalized pattern search, genetic algorithm hybridized with Nelder-Mead, and genetic algorithm hybridized with generalized pattern search) for tuning a fuzzy PID controller for the PUMA 560. Since the actual control is in joint space, inverse kinematics is used to generate the joint angles corresponding to the desired Cartesian-space trajectory. Efficient dynamics and kinematics are modeled in MATLAB, which requires very little simulation time. The performance of all the tuning methods, with and without disturbance, is compared in terms of ITSE in joint space and ISE in Cartesian space for spiral trajectory tracking. The genetic algorithm hybridized with generalized pattern search shows the best performance.
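A simplified sketch of the tuning loop: a candidate gain vector is simulated on a toy second-order joint model and scored by ITSE; SciPy's differential evolution stands in here for the GA/PSO/pattern-search optimisers of the paper, the plant and gain bounds are illustrative, and the fuzzy part of the controller is omitted:

```python
# Heuristic tuning of PID gains by minimising ITSE on a simplified plant.
import numpy as np
from scipy.optimize import differential_evolution

def itse(gains, T=3.0, dt=2e-3):
    """Time-weighted squared-error cost for a unit step on a 1/(s^2 + s) plant."""
    kp, ki, kd = gains
    x = v = integ = 0.0
    e_prev = 1.0                       # error just after the step is applied
    cost = 0.0
    for k in range(int(T / dt)):
        t = k * dt
        e = 1.0 - x
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - e_prev) / dt
        e_prev = e
        v += (u - v) * dt              # plant: x'' = u - x'
        x += v * dt
        cost += t * e * e * dt
    return cost

result = differential_evolution(itse, bounds=[(0, 50), (0, 20), (0, 10)],
                                maxiter=20, seed=0)
print("tuned gains (Kp, Ki, Kd):", result.x, "  ITSE:", result.fun)
```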

Optimising Business Rules in the Services Sector

Business rules are widely used within the services sector. They provide consistency and allow relatively unskilled staff to process complex transactions correctly. But there are many examples where the rules themselves have an impact on the costs and profits of an organisation. Financial services, transport and human services are areas where the rules themselves can impact the bottom line in a predictable way. If this is the case, how can we find the set of rules that maximises profit, performance, customer service, or any other key performance indicator? The manufacturing, energy and process industries have embraced mathematical optimisation techniques to improve efficiency, increase production and so on. This paper explores several real-world (but simplified) problems in the services sector and shows how business rules can be optimised. It also examines the similarities and differences between the services and other sectors, and how optimisation techniques could be used to deliver similar benefits.

Sweethearting: The Complicity Relatives Theft (CRT) in Saudi Arabia

The study investigates the extent of sweethearting in supermarkets in Riyadh, Saudi Arabia. Sweethearting occurs when frontline workers give unauthorized free or uncounted goods and services to customers' conspirators. Store managers and/or security managers were asked about the sweethearting that occurs in their supermarkets, and the characteristics of sweethearting in Riyadh stores were investigated. Two independent variables were related to the reporting of sweethearting: the effect of the store environment on sweethearting, and the security techniques and loss-prevention electronics used. This study is expected to shed light on the level of sweethearting in Saudi Arabia and the factors behind it. It serves as an exploratory study of this phenomenon in Saudi Arabia, as a descriptive study of the characteristics of sweethearting, and as an explanatory study linking the environmental and security-system factors to sweethearting.

Identification of Most Frequently Occurring Lexis in Winnings-announcing Unsolicited Bulk e-mails

E-mail has become an important means of electronic communication, but the viability of its usage is marred by Unsolicited Bulk e-mail (UBE) messages. UBE comes in many forms, such as pornographic, virus-infected and 'cry-for-help' messages, as well as fake and fraudulent offers for jobs, winnings and medicines. UBE poses technical and socio-economic challenges to the usage of e-mail. To meet this challenge and combat this menace, we need to understand UBE. Towards this end, the current paper presents a content-based textual analysis of nearly 3000 winnings-announcing UBE messages. Technically, this is an application of text parsing and tokenization of unstructured textual documents, which we approach using Bag of Words (BOW) and Vector Space Document Model techniques. We have attempted to identify the most frequently occurring lexis in the winnings-announcing UBE documents, and an analysis of the top 100 lexis is presented. We exhibit the relationship between the occurrence of a word from the identified lexis set in a given UBE and the probability that the given UBE announces fake winnings. To the best of our knowledge and our survey of the related literature, this is the first formal attempt to identify the most frequently occurring lexis in winnings-announcing UBE by textual analysis. Finally, this is a sincere attempt to raise alertness against, and mitigate the threat of, such luring but fake UBE.
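A minimal bag-of-words sketch of the tokenization and lexis-frequency step (the two sample messages and the stop-word list are invented placeholders, not items from the analysed corpus):

```python
# Tokenize UBE text and count the most frequently occurring lexis.
import re
from collections import Counter

corpus = [
    "Congratulations! You have won the sum of 1,000,000 USD in our lottery draw.",
    "Your email address was selected as a winner. Contact our claims agent today.",
]

stop_words = {"the", "of", "in", "our", "a", "was", "as", "you", "have", "your"}

def tokenize(text):
    """Lower-case word tokens with punctuation and digits stripped."""
    return [w for w in re.findall(r"[a-z']+", text.lower()) if w not in stop_words]

counts = Counter(tok for message in corpus for tok in tokenize(message))
top_lexis = counts.most_common(100)          # the most frequently occurring lexis
print(top_lexis[:10])

# P(fake winnings | word) can then be estimated per word from a labelled UBE corpus.
```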

Accurate Time Domain Method for Simulation of Microstructured Electromagnetic and Photonic Structures

A time-domain numerical model within the framework of transmission line modeling (TLM) is developed to simulate electromagnetic pulse propagation inside multiple microcavities forming photonic crystal (PhC) structures. The model developed is quite general and is capable of simulating complex electromagnetic problems accurately. The field quantities can be mapped onto a passive electrical circuit equivalent, which ensures that TLM is provably stable and conservative at a local level. Furthermore, the circuit representation allows a high level of hybridization of TLM with other techniques and with lumped circuit models of components and devices. A photonic crystal structure formed by rods (or blocks) of high-permittivity dielectric material embedded in a low-dielectric background medium is simulated as an example. The model developed gives vital spatio-temporal information about the signal, and also gives spectral information over a wide frequency range in a single run. The model has wide applications in microwave communication systems, optical waveguides and electromagnetic materials simulations.
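A one-dimensional TLM sketch of a pulse propagating through a low-permittivity background containing a high-permittivity block, as a crude 1-D analogue of the simulated structure; the geometry, impedances and Gaussian excitation are illustrative assumptions, not the paper's model:

```python
# 1-D TLM link-line model: counter-propagating voltage pulses scatter at the
# junctions between segments of different characteristic impedance.
import numpy as np

N = 200
Z = np.full(N, 377.0)                  # background segment impedances (free space)
Z[90:110] = 377.0 / np.sqrt(9.0)       # high-permittivity block, eps_r = 9

right = np.zeros(N)                    # pulses travelling right in each segment
left = np.zeros(N)                     # pulses travelling left in each segment
probe = []                             # time-domain voltage sampled past the block

for step in range(600):
    right[0] = np.exp(-((step - 40) / 12.0) ** 2)       # Gaussian excitation at the left end
    a, b = right[:-1], left[1:]                         # pulses incident on each junction
    V = 2.0 * (a / Z[:-1] + b / Z[1:]) / (1.0 / Z[:-1] + 1.0 / Z[1:])
    scattered_back, scattered_fwd = V - a, V - b        # link-line scattering
    left[:-1], right[1:] = scattered_back, scattered_fwd
    left[-1] = 0.0                                      # matched (absorbing) right boundary
    probe.append(right[150] + left[150])                # total voltage at a node past the block

# The recorded probe signal gives time-domain data and, via an FFT, wideband spectra.
print("peak probe voltage:", max(probe))
```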

Application of Data Mining Tools to Predict Completion Time of a Project

Estimating the time and cost of work completion in a project, and following them up during execution, contributes to the success or failure of a project and is very important for the project management team. Delivering on time and within the budgeted cost requires good management and control of the project. To deal with the complex task of controlling and modifying the baseline project schedule during execution, earned value management systems have been set up and are widely used to measure and communicate the real physical progress of a project. However, earned value often fails to predict the total duration of the project. In this paper, data mining techniques are used to predict the total project duration in terms of the time estimate at completion, EAC(t). For this purpose, we have used a project with 90 activities, updated day by day. Regular indexes from the literature are used and the Earned Duration Method is applied to calculate the time estimate at completion; these are set as input data for prediction, and the major parameters among them are identified using Clem software. Using data mining, the effective parameters on EAC(t) and the relationships between them can be extracted, which is very useful for managing a project with minimum delay risk. As we show, this could be a simple, safe and applicable method for predicting the completion time of a project during execution.
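A minimal sketch of one common earned-duration estimate of the time at completion, EAC(t) = AD + (PD - ED)/PF with ED = AD * SPI and SPI = EV/PV; the exact indexes used in the paper may differ, and the status figures below are invented for illustration:

```python
# Earned-duration style time estimate at completion (one common formulation).
def eac_t(planned_duration, actual_duration, ev, pv, pf=None):
    """Time estimate at completion from earned-duration quantities."""
    spi = ev / pv                           # schedule performance index
    ed = actual_duration * spi              # earned duration
    pf = spi if pf is None else pf          # performance factor for remaining work
    return actual_duration + (planned_duration - ed) / pf

# Example status at day 30 of a 90-day plan: EV = 250, PV = 300 (work units)
print(f"EAC(t) = {eac_t(90, 30, 250, 300):.1f} days")
```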

Use of Radial Basis Function Neural Network for Bearing Pressure Prediction of Strip Footing on Reinforced Granular Bed Overlying Weak Soil

Earth reinforcing techniques have become useful and economical for solving problems related to difficult ground conditions and for providing satisfactory foundation performance. In this context, this paper uses a radial basis function neural network (RBFNN) for predicting the bearing pressure of a strip footing on a reinforced granular bed overlying weak soil. The inputs for the neural network models included plate width, thickness of the granular bed, number of layers of reinforcement, settlement ratio, water content, dry density, cohesion and angle of friction. The results indicated that the RBFNN model exhibited more than 84% prediction accuracy, thereby demonstrating its applicability to a geotechnical problem.

Selective Harmonic Elimination of PWM AC/AC Voltage Controller Using Hybrid RGA-PS Approach

Selective harmonic elimination pulse-width modulation (SHE-PWM) techniques offer tight control of the harmonic spectrum of a given voltage waveform generated by a power electronic converter, along with a low number of switching transitions. Traditional optimization methods suffer from various drawbacks, such as prolonged and tedious computational steps and convergence to local optima; thus, the more harmonics to be eliminated, the larger the computational complexity and time. This paper presents a novel method for output voltage harmonic elimination and voltage control of PWM AC/AC voltage converters based on the hybrid Real-Coded Genetic Algorithm-Pattern Search (RGA-PS) method. RGA is the primary optimizer, exploiting its global search capabilities; PS is then employed to fine-tune the best solution provided by RGA in each evolution. The proposed method enables linear control of the fundamental component of the output voltage and complete elimination of its harmonic content up to a specified order. Theoretical studies have been carried out to show the effectiveness and robustness of the proposed method of selective harmonic elimination. Theoretical results are validated through simulation studies using the PSIM software package.
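A simplified numerical sketch of the selective-harmonic-elimination problem itself: a gated sinusoid models the AC chopper output, its spectrum is evaluated by FFT, and a global optimiser followed by a local refinement stands in for the RGA + PS hybrid (SciPy's differential evolution and Nelder-Mead here); the waveform model, angle count and targets are generic assumptions, not the paper's converter equations:

```python
# Selective harmonic elimination posed as an optimisation over switching angles.
import numpy as np
from scipy.optimize import differential_evolution, minimize

N_SAMPLES, TARGET_FUND, ELIMINATE = 4096, 0.6, (3, 5, 7)
theta = np.linspace(0.0, 2.0 * np.pi, N_SAMPLES, endpoint=False)

def spectrum(alpha):
    """Harmonic magnitudes of sin(theta) gated ON between successive angle pairs."""
    gate = np.zeros_like(theta)
    a = np.sort(np.asarray(alpha))
    for on, off in zip(a[0::2], a[1::2]):        # identical gating in both half cycles
        gate[(theta % np.pi >= on) & (theta % np.pi < off)] = 1.0
    coeffs = np.fft.rfft(np.sin(theta) * gate) / (N_SAMPLES / 2)
    return np.abs(coeffs)

def cost(alpha):
    mags = spectrum(alpha)
    return (mags[1] - TARGET_FUND) ** 2 + sum(mags[h] ** 2 for h in ELIMINATE)

bounds = [(0.0, np.pi)] * 8                      # four ON/OFF angle pairs per half cycle
coarse = differential_evolution(cost, bounds, maxiter=100, seed=0)   # global stage (RGA analogue)
fine = minimize(cost, coarse.x, method="Nelder-Mead")                # local stage (PS analogue)
print("switching angles (rad):", np.sort(fine.x))
print("residual cost:", fine.fun)
```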