Road Extraction Using Stationary Wavelet Transform

In this paper, a novel road extraction method using the Stationary Wavelet Transform is proposed. To detect road features from color aerial satellite imagery, Mexican hat wavelet filters are applied through the Stationary Wavelet Transform in a multiresolution, multi-scale sense, and the products of wavelet coefficients at different scales are formed to locate and identify road features across scales. In addition, the shifting of road feature locations across scales is taken into account so that extraction remains robust when road feature profiles are asymmetric. The experimental results show that the proposed method provides a useful technique that can form the basis of road feature extraction. The method is also general and can be applied to other features in imagery.
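
As a rough illustration of the coefficient-product idea, the following sketch approximates the Mexican hat response with a Laplacian-of-Gaussian filter at a few assumed scales and multiplies the responses; it is not the authors' exact stationary-wavelet pipeline, and the scale and threshold values are placeholders.

# Illustrative sketch (not the authors' exact pipeline): multi-scale
# Mexican-hat-like filtering of a grayscale aerial image and a product of
# the responses across scales to emphasize road-like ridge features.
import numpy as np
from scipy import ndimage

def multiscale_road_response(gray, scales=(1.0, 2.0, 4.0)):
    """gray: 2-D float array in [0, 1]; scales: filter widths (assumed values)."""
    product = np.ones_like(gray)
    for s in scales:
        # A negated Laplacian of Gaussian approximates a Mexican hat response.
        resp = -ndimage.gaussian_laplace(gray, sigma=s)
        resp = np.clip(resp, 0.0, None)   # keep responses to bright ridges only
        product *= resp                   # coefficient product across scales
    return product

# Usage: mask = multiscale_road_response(img) > threshold   (threshold is data-dependent)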

A Computer Model of Quantum Field Theory

This paper describes a computer model of Quantum Field Theory (QFT), referred to in this paper as QTModel. After the initial configuration of a QFT process (e.g. scattering) has been specified, the model generates the applicable processes in terms of Feynman diagrams and the equations for the scattering matrix, and evaluates probability amplitudes for the scattering matrix and cross sections. The computation of probability amplitudes is performed numerically. The equations generated by QTModel are provided for demonstration purposes only; they are not directly used as the basis for the computation of probability amplitudes. The computer model supports two modes for the computation of the probability amplitudes: (1) computation according to standard QFT, and (2) computation according to a proposed functional interpretation of quantum theory.

Design of Low-Area HEVC Core Transform Architecture

This paper proposes and implements a core transform architecture for one of the major processes in the HEVC video compression standard. The proposed core transform architecture is implemented with only adders and shifters instead of area-consuming multipliers. The shifters in the proposed architecture are implemented with wires and multiplexers, which significantly reduces chip area. The architecture can also process 4×4 to 16×16 blocks with common hardware by reusing processing elements. Designed in 0.13 µm technology, the core transform architecture can process a 16×16 block with the 2-D transform in 130 cycles, and its gate count is 101,015 gates.
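
The multiplier-free principle can be illustrated in software: the constants of the HEVC 4-point core transform (64, 83, 36) are realized below with shifts and additions only. This is a behavioural sketch of the idea, not the paper's hardware architecture.

# Constant multiplications of the HEVC 4-point transform built from shifts and adds.
def mul64(x):  return x << 6
def mul83(x):  return (x << 6) + (x << 4) + (x << 1) + x   # 64 + 16 + 2 + 1 = 83
def mul36(x):  return (x << 5) + (x << 2)                  # 32 + 4 = 36

def butterfly4(s):
    """4-point HEVC-style forward transform of samples s[0..3] (multiply stage only)."""
    e0, e1 = s[0] + s[3], s[1] + s[2]      # even part
    o0, o1 = s[0] - s[3], s[1] - s[2]      # odd part
    return [mul64(e0) + mul64(e1),
            mul83(o0) + mul36(o1),
            mul64(e0) - mul64(e1),
            mul36(o0) - mul83(o1)]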

Syntactic Recognition of Distorted Patterns

In syntactic pattern recognition a pattern can be represented by a graph. Given an unknown pattern represented by a graph g, the recognition problem is to determine whether g belongs to a language L(G) generated by a graph grammar G. The so-called IE graphs have been defined in [1] for the description of patterns. IE graphs are generated by so-called ETPL(k) graph grammars, also defined in [1]. An efficient parsing algorithm for ETPL(k) graph grammars for the syntactic recognition of patterns represented by IE graphs has been presented in [1]. In practice, structural descriptions may contain pattern distortions, so the assignment of a graph g representing an unknown pattern to a graph language L(G) generated by an ETPL(k) graph grammar G may be rejected by ETPL(k) parsing. Therefore, effective parsing algorithms for the recognition of distorted patterns are needed. The purpose of this paper is to present a new approach to the syntactic recognition of distorted patterns. To take into account all variations of a distorted pattern under study, a probabilistic description of the pattern is needed. A random IE graph approach is proposed here for such a description ([2]).

A Parametric Study of an Inverse Elastostatics Problem (IESP) Using Simulated Annealing, Hooke & Jeeves and Sequential Quadratic Programming in Conjunction with Finite Element and Boundary Element Methods

The aim of the current work is to present a comparison of three popular optimization methods for the inverse elastostatics problem (IESP) of flaw detection within a solid. In more detail, the performance of a simulated annealing, a Hooke & Jeeves, and a sequential quadratic programming algorithm was studied for the test case of one circular flaw in a plate, solved with both the boundary element method (BEM) and the finite element method (FEM). The optimization methods use a cost function based on the displacements of the static response. The methods were ranked according to the number of iterations required to converge and their ability to locate the global optimum, giving a clear picture of the performance of these algorithms in flaw identification problems. Furthermore, the coupling of BEM or FEM with these optimization methods was investigated in order to track differences in their performance.
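
A minimal sketch of the optimization setup is given below; a toy analytic surrogate stands in for the FEM/BEM forward solver so that the script runs, and SciPy's dual_annealing and SLSQP are used as stand-ins for the simulated annealing and sequential quadratic programming algorithms (the Hooke & Jeeves pattern search is not shown). All numerical values are illustrative.

# Hedged sketch of the flaw-identification setup. A real study would use an FEM or
# BEM forward solver; here a toy analytic surrogate stands in so the script runs.
import numpy as np
from scipy.optimize import dual_annealing, minimize

sensors = np.linspace(0.0, 1.0, 20)                    # sensor positions (illustrative)

def forward(params):
    """Toy surrogate for the FEM/BEM response: boundary displacements vs. flaw (xc, yc, r)."""
    xc, yc, r = params
    return r ** 2 / ((sensors - xc) ** 2 + yc ** 2 + 1e-3)

true_flaw = np.array([0.6, 0.3, 0.08])
measured = forward(true_flaw)                          # synthetic "measurements"

def cost(params):
    return float(np.sum((forward(params) - measured) ** 2))   # displacement mismatch

bounds = [(0.0, 1.0), (0.05, 1.0), (0.01, 0.2)]

res_sa  = dual_annealing(cost, bounds, seed=0)                                  # global, stochastic
res_sqp = minimize(cost, x0=[0.5, 0.5, 0.05], method="SLSQP", bounds=bounds)   # local, gradient-based
print(res_sa.x, res_sqp.x)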

An Efficient Run Time Interface for Heterogeneous Architecture of Large Scale Supercomputing System

In this paper we propose a novel Run Time Interface (RTI) technique that provides an efficient environment for MPI jobs on the heterogeneous architecture of PARAM Padma. It suggests an innovative, unified framework for the job management interface system in parallel and distributed computing and employs a proxy scheme. The implementation shows that the proposed RTI is highly scalable and stable. Moreover, the RTI provides storage access for MPI jobs on various operating system platforms and improves data access performance through the high-performance C-DAC Parallel File System (C-PFS). The performance of the RTI is evaluated using standard HPC benchmark suites, and the results show that the proposed RTI performs well on a large-scale supercomputing system.

Control Improvement of a C Sugar Cane Crystallization Using an Auto-Tuning PID Controller Based on Linearization of a Neural Network

The industrial process of sugar cane crystallization produces a residual that still contains a considerable amount of soluble sucrose, and the objective of the factory is to improve its extraction; the losses are therefore substantial enough to justify optimizing the process. The crystallization process studied on the industrial site is based on the "three massecuites process". The third step of this process constitutes the final stage of exhaustion of the sucrose dissolved in the mother liquor. Within this third crystallization step (C-crystallization), the phase that is studied and whose control is to be improved is the growing phase (crystal growth phase). Studying this process on the industrial site is a problem in its own right. A control scheme is proposed to improve the standard PID control law used in the factory: an auto-tuning PID controller based on instantaneous linearization of a neural network.
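
The following is a minimal sketch of the auto-tuning idea under stated assumptions: a process model y(t+1) = f(y(t), u(t)) (e.g., a trained neural network) is linearized at each sample by finite differences, and the PID gains are re-tuned from the resulting local first-order model using a simple IMC-like rule chosen here only for illustration; the paper's actual tuning rule may differ.

import numpy as np

def linearize(f, y, u, eps=1e-4):
    """Local gains a = df/dy and b = df/du of the process model f(y, u)
    (f would be the trained neural network; here it is any callable)."""
    a = (f(y + eps, u) - f(y - eps, u)) / (2 * eps)
    b = (f(y, u + eps) - f(y, u - eps)) / (2 * eps)
    return a, b

def retune_pid(a, b, dt, lam=1.0):
    """Map the local model y(t+1) = a*y(t) + b*u(t) to PI(D) gains with a simple
    IMC-like rule (lam = desired closed-loop time constant); illustration only."""
    a_c = max(min(a, 0.999), 1e-6)      # clamp so the log below is defined
    tau = -dt / np.log(a_c)             # local time constant
    K = b / max(1.0 - a_c, 1e-6)        # local static gain
    Kp = tau / (K * lam)
    Ki = Kp / tau
    Kd = 0.0
    return Kp, Ki, Kd

# Example with a toy nonlinear process standing in for the neural model:
f = lambda y, u: 0.8 * y + 0.3 * np.tanh(u)
a, b = linearize(f, y=0.5, u=0.2)
print(retune_pid(a, b, dt=1.0))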

Design and Analysis of an Automobile Bumper with the Capacity of Energy Release Using GMT Materials

Bumpers play an important role in preventing impact energy from being transferred to the automobile and its passengers. Storing the impact energy in the bumper and then releasing it to the environment reduces the damage to the automobile and passengers. The goal of this paper is to design a bumper with minimum weight using Glass Mat Thermoplastic (GMT) materials. This bumper either absorbs the impact energy through its deformation or transfers it perpendicular to the impact direction. To reach this aim, a mechanism is designed that converts about 80% of the kinetic impact energy into spring potential energy and releases it to the environment at the low impact velocity specified by the American standard. In addition, since the residual kinetic energy is damped by the infinitesimal elastic deformation of the bumper elements, the passengers will not sense the impact. Modeling, solving and results analysis are done in CATIA, LS-DYNA and ANSYS V8.0 software, respectively.
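
A back-of-the-envelope calculation of the energy split described above is shown below; the vehicle mass, impact speed and spring stroke are assumed values, not figures from the paper.

# Back-of-the-envelope sketch of the 80% energy conversion; all inputs are assumptions.
m = 1200.0          # vehicle mass [kg] (assumed)
v = 4.0 / 3.6       # low-speed impact, 4 km/h expressed in m/s (assumed)
x = 0.05            # allowed spring compression stroke [m] (assumed)

E_kin    = 0.5 * m * v ** 2          # impact kinetic energy [J]
E_spring = 0.8 * E_kin               # ~80% converted to spring potential energy
k = 2.0 * E_spring / x ** 2          # stiffness so that 0.5*k*x^2 = E_spring [N/m]

print(f"E_kin = {E_kin:.1f} J, stored = {E_spring:.1f} J, k = {k/1000:.1f} kN/m")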

Big Bang – Big Crunch Learning Method for Fuzzy Cognitive Maps

Modeling complex dynamic systems for which mathematical models are very difficult to establish requires new and modern methodologies that exploit existing expert knowledge, human experience and historical data. Fuzzy cognitive maps are very suitable, simple and powerful tools for the simulation and analysis of such dynamic systems. However, human experts are subjective and can handle only relatively simple fuzzy cognitive maps; there is therefore a need for new approaches to the automated generation of fuzzy cognitive maps from historical data. In this study, a new learning algorithm, called Big Bang-Big Crunch, is proposed for the first time in the literature for the automated generation of fuzzy cognitive maps from data. Two real-world examples, namely a process control system and a radiation therapy process, and one synthetic model are used to demonstrate the effectiveness and usefulness of the proposed methodology.
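
An illustrative sketch of BB-BC learning of the FCM weight matrix from historical concept activations is given below; the FCM update rule and the one-step prediction cost follow common conventions, the data are synthetic, and the algorithmic details (population size, shrinking schedule) are assumptions rather than the paper's settings.

# Sketch of Big Bang-Big Crunch (BB-BC) learning of FCM weights from data.
import numpy as np

rng = np.random.default_rng(0)

def fcm_step(A, W):
    """One FCM inference step: sigmoid of the weighted concept activations."""
    return 1.0 / (1.0 + np.exp(-(A @ W)))

def cost(w_flat, data, n):
    W = w_flat.reshape(n, n)
    pred = np.array([fcm_step(a, W) for a in data[:-1]])
    return np.mean((pred - data[1:]) ** 2)     # one-step-ahead prediction error

def bbbc(data, n, pop=50, iters=200):
    dim = n * n
    centre = np.zeros(dim)
    for k in range(1, iters + 1):
        # Big Bang: scatter candidates around the current centre of mass,
        # with a search radius shrinking as 1/k.
        cands = np.clip(centre + rng.standard_normal((pop, dim)) / k, -1.0, 1.0)
        costs = np.array([cost(c, data, n) for c in cands])
        # Big Crunch: fitness-weighted centre of mass (lower cost -> larger weight).
        wts = 1.0 / (costs + 1e-12)
        centre = (wts[:, None] * cands).sum(axis=0) / wts.sum()
    return centre.reshape(n, n)

# Synthetic demo: 3 concepts, 30 historical activation vectors in [0, 1].
hist = rng.random((30, 3))
W_learned = bbbc(hist, n=3)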

Robust Features for Impulsive Noisy Speech Recognition Using Relative Spectral Analysis

The goal of speech parameterization is to extract from the audio signal the information relevant to what is being spoken. In speech recognition systems, Mel-Frequency Cepstral Coefficients (MFCC) and Relative Spectral Mel-Frequency Cepstral Coefficients (RASTA-MFCC) are the two main techniques used. This paper presents some modifications to the original MFCC method. In our work, the effectiveness of the proposed changes to MFCC, called Modified Function Cepstral Coefficients (MODFCC), was tested and compared against the original MFCC and RASTA-MFCC features. Prosodic features such as jitter and shimmer are added to the baseline spectral features. The above-mentioned techniques were tested with impulsive signals under various noisy conditions using the AURORA databases.
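
For reference, the baseline spectral features can be obtained as in the sketch below (standard MFCCs and their deltas via librosa); the paper's MODFCC modification and the jitter/shimmer prosodic features are not reproduced here, and the file name is a placeholder.

# Baseline feature extraction only: standard MFCCs and their deltas with librosa.
import librosa
import numpy as np

def mfcc_features(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)                    # keep the file's sample rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    delta = librosa.feature.delta(mfcc)                    # first-order dynamics
    return np.vstack([mfcc, delta])                        # shape: (2*n_mfcc, frames)

# feats = mfcc_features("utterance.wav")   # "utterance.wav" is an assumed example file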

Intelligent Agents for Distributed Intrusion Detection System

This paper presents a distributed intrusion detection system (IDS) based on the concept of a community of specialized distributed agents, i.e., agents that share the same purpose of detecting distributed attacks. The semantics of intrusion events occurring in a given network is defined. Correlation rules describe the process by which the proposed IDS combines captured events that are distributed both spatially and temporally; the IDS then tries to extract significant and broad patterns for a set of well-known attacks. The primary goal of our work is to provide intrusion detection and real-time prevention capability against insider attacks in distributed and fully automated environments.

Massive Lesions Classification using Features based on Morphological Lesion Differences

The purpose of this work is the development of an automatic classification system that could be useful to radiologists in the investigation of breast cancer. The software has been designed in the framework of the MAGIC-5 collaboration. In the automatic classification system, the suspicious regions with a high probability of containing a lesion are extracted from the image as regions of interest (ROIs). Each ROI is characterized by features based on morphological lesion differences. Classifiers such as a feed-forward neural network, a K-nearest-neighbours classifier and a support vector machine are used to distinguish the pathological records from the healthy ones. The results in terms of sensitivity (percentage of pathological ROIs correctly classified) and specificity (percentage of non-pathological ROIs correctly classified) are presented through the Receiver Operating Characteristic (ROC) curve. In particular, the best performance is an area under the ROC curve of 88% ± 1, obtained with the feed-forward neural network.
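
A minimal sketch of the classifier comparison is shown below using scikit-learn; the morphological feature matrix and labels are random placeholders standing in for the ROI features, so the printed AUC values only demonstrate the evaluation flow.

# Sketch of the classifier comparison on ROI feature vectors (placeholder data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

X = np.random.rand(500, 8)            # placeholder morphological features (8 per ROI)
y = np.random.randint(0, 2, 500)      # placeholder labels: 1 = pathological

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "FFNN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "KNN":  KNeighborsClassifier(n_neighbors=5),
    "SVM":  SVC(kernel="rbf", probability=True, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    score = model.predict_proba(X_te)[:, 1]
    print(name, "AUC =", roc_auc_score(y_te, score))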

Improved Modulo 2^n+1 Adder Design

Efficient modulo 2^n+1 adders are important for several applications including residue number systems, digital signal processors and cryptographic algorithms. In this paper we present a novel modulo 2^n+1 addition algorithm for a recently presented number system. The proposed approach is introduced to reduce the dissipated power. In a conventional modulo 2^n+1 adder, all operands have (n+1)-bit length. To avoid using (n+1)-bit circuits, the diminished-1 and carry-save diminished-1 number systems can be used effectively in applications. In the paper, we also derive two new architectures for designing modulo 2^n+1 adders based on an n-bit ripple-carry adder. The first architecture is faster, whereas the second one uses less hardware. In the proposed method, the special treatment required for zero operands in the diminished-1 number system is removed. The fastest modulo 2^n+1 adders in the normal binary system require 3-operand adders; this problem is also resolved in this paper. The proposed architectures are compared with efficient adders based on the ripple-carry adder and a high-speed adder. It is shown that the hardware overhead and power consumption are reduced, and in some cases the power-delay product is also reduced.
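
The diminished-1 addition rule can be checked with a small software reference model, sketched below for n = 4 (modulo 17); this illustrates only the inverted end-around-carry idea of the conventional diminished-1 adder, not the paper's architecture. The zero result, which needs the special treatment discussed above, is skipped in the check.

# Reference-model sketch of diminished-1 modulo 2^n+1 addition. A value A in [1, 2^n]
# is stored as d = A - 1 on n bits; zero operands/results need separate handling.
def dim1_add(dA, dB, n):
    """Return d((A+B) mod 2^n+1) given diminished-1 operands dA, dB (nonzero A, B)."""
    s = dA + dB
    carry = s >> n                              # carry out of the n-bit adder
    return (s + (1 - carry)) & ((1 << n) - 1)   # inverted end-around carry

# Exhaustive check against direct arithmetic for n = 4 (modulo 17):
n, M = 4, 17
for A in range(1, 17):
    for B in range(1, 17):
        if (A + B) % M == 0:
            continue                            # the zero result is the special case
        assert dim1_add(A - 1, B - 1, n) == (A + B) % M - 1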

Effect of Clustering on Energy Efficiency and Network Lifetime in Wireless Sensor Networks

A wireless sensor network is a multi-hop, self-configuring wireless network consisting of sensor nodes. The deployment of wireless sensor networks in many application areas, e.g., aggregation services, requires self-organization of the network nodes into clusters. An efficient way to enhance the lifetime of the system is to partition the network into distinct clusters with a high-energy node as cluster head. Different node clustering techniques have appeared in the literature and roughly fall into two families: those based on the construction of a dominating set and those based solely on energy considerations. Energy-optimized cluster formation for a set of randomly scattered wireless sensors is presented; sensors within a cluster are expected to communicate with the cluster head only. The energy constraints and limited computing resources of the sensor nodes present the major challenges in gathering the data. In this paper we propose a framework to study how partially correlated data affect the performance of clustering algorithms. The total energy consumption and network lifetime are analyzed by combining random geometry techniques and rate distortion theory. We also present the relation between compression distortion and data correlation.
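
For context, clustering analyses of this kind often rest on the standard first-order radio energy model; a sketch with typical (assumed) constants is given below. It is not the model or the parameter values used in the paper.

# First-order radio energy model commonly used in WSN clustering studies.
E_ELEC = 50e-9        # electronics energy per bit [J/bit] (typical assumed value)
EPS_AMP = 100e-12     # amplifier energy per bit per m^2 [J/bit/m^2] (typical assumed value)

def tx_energy(bits, d):
    """Energy to transmit `bits` over distance d (free-space, d^2 path loss)."""
    return E_ELEC * bits + EPS_AMP * bits * d ** 2

def rx_energy(bits):
    return E_ELEC * bits

# A member node sending a 4000-bit packet 30 m to its cluster head:
print(tx_energy(4000, 30.0), "J to transmit;", rx_energy(4000), "J to receive")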

The Risk and Value Engineering Structures and their Integration with Industrial Projects Management (A Case Study on I.K. Corporation)

Value engineering is an effective decision-making tool for managers. Value studies give managers a suitable instrument for reducing life-cycle costs, improving quality, improving structures, shortening the construction schedule, prolonging service life, or a combination of these. The pressures placed on planners, their accountability in their respective fields, and the risks and uncertainties inherent in the alternative options put some decision-makers in a dilemma. Given the complexities of implementing projects, risk management and value engineering can be used in project management as a tool to identify and eliminate every item that causes unnecessary cost and wasted time without harming the essential functions of the project. It should be noted that applying risk management and value engineering to improve efficiency and function may lengthen the project implementation schedule; improving the schedule does not always mean shortening it. This article first deals with the concepts of risk and value engineering. The effects of their implementation in Iran Khodro Corporation are then considered, together with their common features and their integration, and the resulting blueprint is proposed for use in engineering and industrial projects, including those of Iran Khodro Corporation.

Effects of Superheating on Thermodynamic Performance of Organic Rankine Cycles

Recently the ORC (Organic Rankine Cycle) has attracted much attention due to its potential for reducing the consumption of fossil fuels and its favorable characteristics for exploiting low-grade heat sources. In this work the thermodynamic performance of an ORC with superheating of the vapor is comparatively assessed for various working fluids. Special attention is paid to the effects of system parameters such as the evaporating temperature and the turbine inlet temperature on the characteristics of the system, such as the maximum possible work extraction from the given source, the volumetric flow rate per 1 kW of net work and the quality of the working fluid at the turbine exit, as well as the thermal and exergy efficiencies. Results show that for a given source the thermal efficiency increases as superheating decreases, while the exergy efficiency may have a maximum with respect to the superheating of the working fluid. Results also show that the selection of a working fluid should consider various performance criteria in addition to thermal efficiency.

Thermal Post-buckling of Shape Memory Alloy Composite Plates under Non-uniform Temperature Distribution

Aerospace vehicles are subjected to non-uniform thermal loading that may cause thermal buckling. A study was conducted on the thermal post-buckling of shape memory alloy composite plates subjected to a non-uniform, tent-like temperature field. The shape memory alloy wires were embedded within the laminated composite plates to add recovery stress to the plates. A non-linear finite element model that considers the recovery stress of the shape memory alloy and the temperature-dependent properties of the shape memory alloy and composite matrix was developed, along with its source code. It was found that the post-buckling paths of the shape memory alloy composite plates subjected to various tent-like temperature fields were stable within the studied temperature range. The addition of shape memory alloy wires to the composite plates was found to significantly improve the post-buckling behavior of laminated composite plates under non-uniform temperature distribution.

A Low Complexity Frequency Offset Estimation for MB-OFDM based UWB Systems

A low-complexity, high-accuracy frequency offset estimator for multi-band orthogonal frequency division multiplexing (MB-OFDM) based ultra-wideband systems is presented, taking into account different carrier frequency offsets, channel frequency responses and preamble patterns in the different bands. Using a half-cycle Constant Amplitude Zero Auto-Correlation (CAZAC) sequence as the preamble sequence, an estimator with a semi-cross contrast scheme between two successive OFDM symbols is proposed. The CRLB and the complexity of the proposed algorithm are derived. Compared to the reference estimators, the proposed method requires significantly less complexity (about 50%) for all preamble patterns of MB-OFDM systems, and the derived CRLBs indicate good estimation performance.
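
The general principle of correlation-based frequency offset estimation over a repeated preamble can be sketched as follows; this illustrates only the basic phase-rotation estimate, not the paper's semi-cross contrast scheme, and the preamble, offset and symbol length are synthetic.

# Generic correlation-based CFO estimate between two successive identical preamble symbols.
import numpy as np

def estimate_cfo(rx, N):
    """rx: received samples containing two repeated symbols of N samples each.
    Returns the frequency offset normalized to the subcarrier spacing (|offset| < 0.5)."""
    first, second = rx[:N], rx[N:2 * N]
    phase = np.angle(np.sum(second * np.conj(first)))   # phase rotation over N samples
    return phase / (2.0 * np.pi)                        # offset in subcarrier units

# Synthetic check: a pure offset of 0.1 subcarrier spacings on a random preamble.
rng = np.random.default_rng(1)
N = 128
sym = rng.standard_normal(N) + 1j * rng.standard_normal(N)
eps = 0.1
n = np.arange(2 * N)
rx = np.tile(sym, 2) * np.exp(2j * np.pi * eps * n / N)
print(estimate_cfo(rx, N))   # ~0.1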

Information Filtering using Index Word Selection based on the Topics

We propose an information filtering system that selects index words from a document set based on the topics included in that set. In information filtering, a document is often represented by a vector whose elements correspond to the weights of index words, and the dimension of this vector grows as the number of documents increases, so words that are useless as index words may be included. To address this problem, the dimension needs to be reduced. Our proposal reduces the dimension by selecting index words based on the topics included in the document set, which are obtained by applying Sparse Non-negative Matrix Factorization to the document set. Filtering is carried out based on the centroid of the learning document set, which is regarded as the user's interest and is represented by a document vector whose elements consist of the weights of the selected index words. Using the English test collection MEDLINE, we confirm the effectiveness of our proposal: when an appropriate number of index words is selected, the recommendation accuracy improves over previous methods. We also discuss the selected index words and find that our proposal is able to select index words covering some minor topics included in the document set.
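
A hedged sketch of topic-based index-word selection and centroid filtering is given below; scikit-learn's NMF with an L1 penalty stands in for the sparse NMF used in the paper, the documents are tiny placeholders, and the number of topics and of words kept per topic are arbitrary choices.

# Sketch: topic extraction, index-word selection, and centroid-based filtering.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.metrics.pairwise import cosine_similarity

docs = [                                   # placeholder training documents
    "heart disease treatment study",
    "gene expression in cancer cells",
    "clinical trial of heart treatment",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)

nmf = NMF(n_components=2, l1_ratio=1.0, alpha_W=0.1, random_state=0)  # sparse-ish NMF
W = nmf.fit_transform(X)                   # document-topic weights
H = nmf.components_                        # topic-word weights

# Keep the top-k words of every topic as the selected index words.
k = 5
idx = np.unique(np.argsort(H, axis=1)[:, -k:].ravel())
X_sel = X[:, idx].toarray()

centroid = X_sel.mean(axis=0, keepdims=True)            # the user's interest
scores = cosine_similarity(X_sel, centroid).ravel()     # filtering scores (demo on training docs)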

Rule-Based Message Passing for Collaborative Application in Distributed Environments

In this paper, we describe a rule-based message passing method that supports the development of collaborative applications in which multiple users share resources in distributed environments. Message communication in collaborative applications tends to be very complex because of the need to manage context information such as sharing events, access control of users, and network places. We propose a message communication method based on unification, from artificial intelligence and logic programming, for defining rules over such context information in a procedural object-oriented programming language. We also present an implementation of the method as Java classes.