A New Method for Detection of Artificial Objects and Materials from Long Distance Environmental Images

The article presents a new method for the detection of artificial objects and materials in images of environmental (non-urban) terrain. Our approach uses the hue and saturation (or Cb and Cr) components of the image as the input to a segmentation module based on the mean shift method. The clusters obtained at the output of this stage are then processed by a decision-making module in order to find the regions of the image with a significant probability of representing a human. Although the method detects various non-natural objects, it is primarily intended and optimized for the detection of humans, i.e. for search and rescue purposes in non-urban terrain where, in normal circumstances, non-natural objects should not be present. Real-world images are used for the evaluation of the method.
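
A minimal sketch of the colour-based segmentation stage, assuming OpenCV and scikit-learn are available and that the hue/saturation channels are clustered with mean shift; the file name, subsampling factor, and bandwidth settings are illustrative, and the decision-making module is not shown:

```python
# Sketch of mean shift clustering on the hue/saturation channels (assumed
# libraries: OpenCV, scikit-learn); the decision-making stage is omitted.
import cv2
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

img = cv2.imread("terrain.jpg")                       # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hs = hsv[:, :, :2].reshape(-1, 2).astype(np.float64)  # hue and saturation only

# Subsample pixels for bandwidth estimation and fitting to keep the sketch fast.
bandwidth = estimate_bandwidth(hs, quantile=0.1, n_samples=1000)
ms = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit(hs[::16])
labels = ms.predict(hs)

segments = labels.reshape(hsv.shape[:2])              # cluster map handed to the
                                                      # decision-making module
```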

Context-Aware Navigation System for Using Public Transport on Smartphones

Recently, many web services that provide public transport information have been developed and released, optimized for mobile devices such as smartphones. We have also been developing a path planning system for route buses and trains called “Bus-Net” [1]. However, these systems only provide routes and related information before the user starts moving. We therefore propose context-aware navigation to change the way public transport users are supported. Travelling somewhere by several kinds of public transport requires knowing how to use each of them. In addition, public transport is a dynamic system whose characteristics differ by type, so real-time information is needed. We therefore propose a system that provides support according to the user's state, offering a variety of forms of assistance for each state, such as turn-by-turn navigation. Context-aware navigation can reduce the anxiety of using public transport.
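
A minimal sketch of the state-based support idea; the user states and the assistance offered in each are hypothetical placeholders, since the abstract does not enumerate the system's actual states or services:

```python
# Hypothetical user states and the assistance offered in each; a real system
# would infer the state from device sensors and real-time transit feeds.
from enum import Enum, auto

class State(Enum):
    WALKING_TO_STOP = auto()
    WAITING_AT_STOP = auto()
    ON_VEHICLE = auto()
    TRANSFERRING = auto()

def assistance(state: State) -> str:
    return {
        State.WALKING_TO_STOP: "turn-by-turn navigation to the bus stop",
        State.WAITING_AT_STOP: "real-time arrival information for the next bus",
        State.ON_VEHICLE: "alert before the alighting stop",
        State.TRANSFERRING: "guidance to the connecting platform",
    }[state]

print(assistance(State.WAITING_AT_STOP))
```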

Sweet Corn Water Productivity under Several Deficit Irrigation Regimes Applied during the Vegetative Growth Stage Using Treated Wastewater as Irrigation Water Source

Yield and crop water productivity are crucial issues in sustainable agriculture, especially for crops with high resource demand such as sweet corn. This study was conducted to investigate agronomic responses such as plant growth, yield, and soil parameters (EC and nitrate accumulation) to several deficit irrigation treatments (100, 75, 50, 25, and 0% of ETm) applied during the vegetative growth stage; a rainfed treatment was also tested. The findings indicate that applying 75% of ETm during the vegetative growth stage increased fresh ear yield by 19.4%, dry grain yield by 9.4%, the number of ears per plant by 10.5%, the 1000-grain weight by 11.5%, and crop water productivity by 19% compared with the fully irrigated treatment. However, these parameters, as well as root, shoot, and plant height, were affected when the degree of water stress during the vegetative growth stage exceeded 50% of ETm.
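
For reference, crop water productivity is commonly computed as yield per unit of water applied; a small illustrative calculation with hypothetical numbers, not the study's data:

```python
# Crop water productivity (CWP) = yield / irrigation water applied.
# The numbers below are hypothetical and only illustrate the calculation.
yield_kg_per_ha = 12000.0          # fresh ear yield
water_m3_per_ha = 4500.0           # irrigation water applied (e.g. 75% of ETm)
cwp = yield_kg_per_ha / water_m3_per_ha
print(f"CWP = {cwp:.2f} kg/m3")
```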

Applications of Artificial Neural Network to Building Statistical Models for Qualifying and Indexing Radiation Treatment Plans

The main goal of this paper is to quantify the quality of different techniques for radiation treatment plans. A back-propagation artificial neural network (ANN) combined with biomedicine theory was used to model thirteen dosimetric parameters and to calculate two dosimetric indices. The correlations between the dosimetric indices and quality of life were extracted as features and used in the ANN model to support clinical decisions. The simulation results show that a trained multilayer back-propagation neural network model can help a physician accept or reject a plan efficiently. In addition, the models are flexible: whenever a new treatment technique enters the market, the feature variables simply need to be imported and the model retrained for it to be ready for use.
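
A minimal sketch of the accept/reject classifier, using scikit-learn's multilayer perceptron as a stand-in for the back-propagation ANN and random placeholders in place of the thirteen dosimetric parameters and clinical labels:

```python
# Stand-in sketch: a back-propagation MLP trained on 13 dosimetric features
# to output an accept/reject decision; data here are random placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))          # 13 dosimetric parameters per plan
y = rng.integers(0, 2, size=200)        # 1 = accept, 0 = reject (placeholder labels)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict(X[:5]))             # decisions for five plans
```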

Finite Element Analysis of Sheet Metal Air Bending Using Hyperform LS-DYNA

Air bending is one of the important metal forming processes because of its simplicity and wide field of application. The accuracy of the analytical and empirical models reported for the analysis of bending processes is limited by simplifying assumptions, and these models do not consider the effect of dynamic parameters. A number of studies report finite element analysis (FEA) of V-bending, U-bending, and air V-bending processes. FEA of bending is found to be very sensitive to many physical and numerical parameters, and FE models must be computationally efficient for practical use. The reported work presents 3D FEA of the air bending process using Hyperform LS-DYNA and compares it with published 3D FEA results of air bending in Ansys LS-DYNA and with experimental results. Observing the planar symmetry and assuming plane strain conditions, the air bending problem was modeled in 2D with a symmetric boundary condition in the width direction. Stress-strain results of the 2D FEA were compared with the 3D FEA results and experiments. Simplifying the air bending problem from 3D to 2D resulted in a tremendous reduction in solution time with only a marginal effect on the stress-strain results. FE model simplification by studying the problem symmetry is a more efficient and practical approach for the solution of more complex, large-dimension, slow forming processes.
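
The plane strain assumption underlying the 2D simplification can be illustrated by the corresponding elastic constitutive matrix; the material constants below are generic steel-like values, not the material card used in the study:

```python
# Plane strain constitutive relation for an isotropic elastic material:
# [sigma_xx, sigma_yy, tau_xy]^T = D @ [eps_xx, eps_yy, gamma_xy]^T,
# with eps_zz = 0 enforced by the plane strain assumption.
import numpy as np

E, nu = 210e3, 0.3                        # hypothetical properties (MPa, -)
c = E / ((1 + nu) * (1 - 2 * nu))
D = c * np.array([[1 - nu, nu,      0.0],
                  [nu,     1 - nu,  0.0],
                  [0.0,    0.0, (1 - 2 * nu) / 2]])
print(D)
```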

New Gate Stack Double Diffusion MOSFET Design to Improve the Electrical Performances for Power Applications

In this paper, we develop an explicit analytical drain current model, comprising the surface channel potential and the threshold voltage, in order to explain the advantages of the proposed Gate Stack Double Diffusion (GSDD) MOSFET design over a conventional MOSFET with the same geometric specifications. The incorporation of a high-k layer between the oxide layer and the gate metal improves the immunity of the proposed design against self-heating effects. To show the efficiency of the proposed structure, we simulate a power chopper circuit. Using the proposed structure to design the power chopper circuit shows that the GSDD MOSFET can improve the operation of the circuit in terms of power dissipation and self-heating immunity. The results obtained are in close agreement with the 2D simulation results, confirming the validity of the proposed model.
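
One way to see the effect of the gate stack is through the effective (series) gate dielectric capacitance; the layer thicknesses and permittivities below are hypothetical and do not correspond to the paper's device parameters:

```python
# Effective gate capacitance of an SiO2 / high-k stack treated as two
# capacitors in series: C_eff = (C_ox * C_hk) / (C_ox + C_hk).
# Thicknesses and permittivities below are hypothetical.
eps0 = 8.854e-12            # F/m
k_sio2, k_hfo2 = 3.9, 22.0  # relative permittivities
t_sio2, t_hfo2 = 1e-9, 3e-9 # layer thicknesses (m)

c_ox = eps0 * k_sio2 / t_sio2
c_hk = eps0 * k_hfo2 / t_hfo2
c_eff = c_ox * c_hk / (c_ox + c_hk)
print(f"C_eff = {c_eff:.3e} F/m^2")
```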

A Hybrid Neural Network and Gravitational Search Algorithm (HNNGSA) Method to Solve the Well-Known Wessinger's Equation

This study presents a hybrid neural network and gravitational search algorithm (HNNGSA) method to solve the well-known Wessinger's equation. For this purpose, the gravitational search algorithm (GSA) is applied to train a multi-layer perceptron neural network, which is used as an approximate solution of Wessinger's equation. A trial solution of the differential equation is written as the sum of two parts. The first part satisfies the initial/boundary conditions and contains no adjustable parameters, while the second part is constructed so as not to affect the initial/boundary conditions and involves the adjustable parameters (the weights and biases) of a multi-layer perceptron neural network. To demonstrate the presented method, its results are compared with some well-known numerical methods. The results show that the presented method yields a solution closer to the analytic solution than the other numerical methods, and the method can easily be extended to solve a wide range of problems.
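
A minimal sketch of the trial-solution construction: a simple placeholder ODE (y' + y = 0, y(0) = 1) is used instead of Wessinger's equation, a tiny one-hidden-layer perceptron plays the role of the network, and scipy.optimize.minimize stands in for the gravitational search algorithm; all settings are illustrative assumptions:

```python
# Trial-solution idea: y_t(x) = A + x * N(x, p), where A matches the initial
# condition and N is a small MLP whose parameters p are tuned to minimize the
# ODE residual. Placeholder ODE: y' + y = 0, y(0) = 1.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 1.0, 20)
A = 1.0                                   # initial condition y(0) = 1
H = 5                                     # hidden units

def net(p, x):
    w, b, v = p[:H], p[H:2*H], p[2*H:]
    return np.tanh(np.outer(x, w) + b) @ v

def trial(p, x):
    return A + x * net(p, x)

def residual(p):
    h = 1e-4
    dy = (trial(p, x + h) - trial(p, x - h)) / (2 * h)    # numerical y'
    return np.sum((dy + trial(p, x)) ** 2)                # residual of y' + y = 0

p0 = np.random.default_rng(0).normal(size=3 * H)
res = minimize(residual, p0, method="BFGS")               # GSA stand-in
print("final residual:", res.fun)
```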

Optimal Planning of Ground Grid Based on Particle Swarm Algorithm

This paper presents an application of particle swarm optimization (PSO) to grounding grid planning and compares it with the application of a genetic algorithm (GA). First, based on IEEE Std. 80, the cost function of the grounding grid and the constraints on ground potential rise, step voltage, and touch voltage are constructed to formulate the grounding grid planning optimization problem. Second, GA and PSO algorithms for obtaining the optimal grounding grid solution are developed. Finally, a grounding grid planning case demonstrates the superiority and availability of the PSO algorithm and the proposed planning results in terms of cost and computational time.
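
A minimal PSO sketch with a penalty-based cost function; the two design variables, the cost expression, and the constraint are hypothetical placeholders and do not reproduce the IEEE Std. 80 formulas:

```python
# Minimal PSO sketch for a constrained planning problem; the cost and the
# constraint below are hypothetical placeholders, not the IEEE Std.80 formulas.
import numpy as np

rng = np.random.default_rng(1)

def cost(x):                      # x = [conductor length, number of rods] (hypothetical)
    material = 3.0 * x[0] + 8.0 * x[1]
    violation = max(0.0, 5000.0 - 40.0 * x[0] - 120.0 * x[1])   # placeholder limit
    return material + 100.0 * violation          # penalty for violating the limit

n, dim, iters = 30, 2, 200
lo_b, hi_b = np.array([10.0, 1.0]), np.array([500.0, 50.0])
pos = rng.uniform(lo_b, hi_b, size=(n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo_b, hi_b)
    f = np.array([cost(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print("best design:", gbest, "cost:", pbest_f.min())
```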

Teachers Learning about Sustainability while Co-Constructing Digital Games

Teaching and learning about sustainability is a pedagogical endeavour with various innate difficulties and increased demands. Higher education has a dual role to play in addressing this challenge: to identify and explore innovative approaches and tools for addressing the complex and value-laden nature of sustainability in more meaningful ways, and to help teachers integrate these approaches into their practice through appropriate professional development programs. The study reported here was designed and carried out within the context of a Master's course in Environmental Education. Eight teachers were collaboratively engaged in reconstructing a digital game microworld which was deliberately designed by the researchers to be questioned and to evoke critical discussion on the idea of the ‘sustainable city’. The study was based on the design-based research method. The findings indicate that the teachers’ involvement in co-constructing the microworld initiated discussion and reflection upon the concepts of sustainability and sustainable lifestyles.

Representing Collective Unconsciousness Using Neural Networks

Rather than representing individual cognition only, population cognition is represented using artificial neural networks while maintaining individuality. This population network trains continuously, simulating adaptation. An implementation of two coexisting populations is compared with the Lotka-Volterra model of predator-prey interaction. Applications include multi-agent systems such as artificial life and computer games.
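
For reference, the Lotka-Volterra predator-prey model used as the comparison baseline can be integrated as follows; the parameter values and initial populations are illustrative, not those used in the paper:

```python
# Lotka-Volterra predator-prey model: dx/dt = a*x - b*x*y, dy/dt = d*x*y - g*y.
import numpy as np
from scipy.integrate import solve_ivp

a, b, d, g = 1.0, 0.1, 0.075, 1.5       # illustrative rate constants

def lotka_volterra(t, z):
    x, y = z                             # prey, predator populations
    return [a * x - b * x * y, d * x * y - g * y]

sol = solve_ivp(lotka_volterra, (0, 30), [10.0, 5.0], dense_output=True)
print(sol.y[:, -1])                      # final prey/predator populations
```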

Introducing Sequence-Order Constraint into Prediction of Protein Binding Sites with Automatically Extracted Templates

Searching for a tertiary substructure that geometrically matches the 3D pattern of the binding site of a well-studied protein is one way to predict protein function. In our previous work, a web server was built to predict protein-ligand binding sites based on automatically extracted templates. However, a drawback of such templates is that the web server was prone to producing many false positive matches. In this study, we present a sequence-order constraint to reduce the false positive matches that arise when automatically extracted templates are used to predict protein-ligand binding sites. The binding site predictor comprises i) an automatically constructed template library and ii) a local structure alignment algorithm for querying the library. The sequence-order constraint is employed to identify inconsistencies between local regions of the query protein and the templates. Experimental results reveal that the sequence-order constraint substantially reduces the false positive matches and is effective for template-based binding site prediction.
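
A minimal sketch of one plausible form of the sequence-order check: a structural match is kept only if the matched residues of the query appear in the same sequential order as the corresponding template residues; the residue indices used in the example are hypothetical:

```python
# A match is a list of (template_residue_index, query_residue_index) pairs
# produced by the local structure alignment. The sequence-order check rejects
# matches whose query residues are not in the template's sequential order.
def satisfies_sequence_order(match):
    ordered = sorted(match, key=lambda pair: pair[0])        # order by template index
    query_indices = [q for _, q in ordered]
    return all(q1 < q2 for q1, q2 in zip(query_indices, query_indices[1:]))

# Hypothetical matches: the first preserves sequence order, the second does not.
print(satisfies_sequence_order([(3, 17), (8, 25), (12, 40)]))   # True
print(satisfies_sequence_order([(3, 25), (8, 17), (12, 40)]))   # False
```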

Performance Evaluation of Wavelet Based Coders on Brain MRI Volumetric Medical Datasets for Storage and Wireless Transmission

In this paper, we evaluate the performance of several wavelet-based coding algorithms, namely 3D QT-L, 3D SPIHT, and JPEG2K. In the first step, we perform an objective comparison of the three coders. For this purpose, eight MRI head scan test sets of 256 × 256 × 124 voxels were used. The results show the superior performance of the 3D SPIHT algorithm, while 3D QT-L outperforms JPEG2K. The second step consists of evaluating the robustness of the 3D SPIHT and JPEG2K coding algorithms over wireless transmission. The compressed datasets are transmitted over an AWGN or a Rayleigh wireless channel. The results show the superiority of JPEG2K for both channel models; indeed, JPEG2K proves more robust to coding errors. We therefore conclude that corrector codes are necessary to protect the transmitted medical information.
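
Objective comparisons and transmission experiments of this kind typically report PSNR between the original and the decoded volume; a small sketch of that metric, assuming 8-bit data (the study's datasets may use a different bit depth) and using a random placeholder volume:

```python
# PSNR between an original and a decoded volume; 8-bit data are assumed,
# so the peak value is 255. The volume below is a random placeholder.
import numpy as np

def psnr(original, decoded, peak=255.0):
    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
vol = rng.integers(0, 256, size=(124, 256, 256), dtype=np.uint8)
noisy = np.clip(vol + rng.normal(0, 5, vol.shape), 0, 255).astype(np.uint8)
print(f"PSNR = {psnr(vol, noisy):.2f} dB")
```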

Dynamics and Control of a Bouncing Ball

This paper investigates the control of a bouncing ball using Model Predictive Control. The bouncing ball is a benchmark problem for various rhythmic tasks such as juggling, walking, hopping, and running. Humans develop intentions, which may be perceived as a reference trajectory, and try to track it. The human brain optimizes the control effort needed to track this reference, and this forms the central theme of the control of the bouncing ball in our investigations.
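
A minimal receding-horizon sketch of the idea: at each bounce a paddle adds velocity to the ball, and the controller plans a short sequence of paddle actions to track a reference apex height, applying only the first action. The bounce map, horizon, and weights are illustrative assumptions, not the paper's formulation:

```python
# Receding-horizon (MPC) sketch for a 1D bouncing ball with bounce map
# v_next = e*v + u, tracking a reference apex height of 1 m.
import numpy as np
from scipy.optimize import minimize

g, e = 9.81, 0.8                       # gravity, coefficient of restitution
v_ref = np.sqrt(2 * g * 1.0)           # post-impact speed for a 1 m apex
N, rho = 5, 0.01                       # horizon length, control effort weight

def mpc_cost(u_seq, v0):
    v, cost = v0, 0.0
    for u in u_seq:
        v = e * v + u                  # bounce map
        cost += (v - v_ref) ** 2 + rho * u ** 2
    return cost

v = 1.0                                # initial post-impact speed
for bounce in range(10):
    res = minimize(mpc_cost, np.zeros(N), args=(v,))
    v = e * v + res.x[0]               # apply only the first planned action
    print(f"bounce {bounce}: apex = {v**2 / (2*g):.3f} m")
```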

Mixtures of Monotone Networks for Prediction

In many data mining applications, it is known a priori that the target function should satisfy certain constraints imposed by, for example, economic theory or a human decision maker. In this paper we consider partially monotone prediction problems, where the target variable depends monotonically on some of the input variables but not on all. We propose a novel method to construct prediction models in which monotone dependences with respect to some of the input variables are preserved by construction. Our method belongs to the class of mixture models. The basic idea is to convolute monotone neural networks with weight (kernel) functions to make predictions. Using simulation and real case studies, we demonstrate the application of our method. To obtain a sound assessment of the performance of our approach, we use standard neural networks with weight decay and partially monotone linear models as benchmark methods for comparison. The results show that our approach outperforms partially monotone linear models in terms of accuracy. Furthermore, the incorporation of partial monotonicity constraints not only leads to models that are in accordance with the decision maker's expertise, but also considerably reduces the model variance in comparison with standard neural networks with weight decay.
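
A structural sketch of the mixture idea, assuming one monotone and one non-monotone input: the prediction is a kernel-weighted sum of sub-networks whose non-negative weights make each component monotone in the constrained input. The shapes, kernels, and (untrained) parameters are illustrative assumptions:

```python
# Mixture of monotone networks: prediction = kernel-weighted sum of monotone
# sub-networks; non-negative weights enforce monotonicity in x_mono, and the
# Gaussian kernels act only on the non-monotone input x_free.
import numpy as np

rng = np.random.default_rng(0)
K, H = 3, 4                                # number of components, hidden units

W1 = np.abs(rng.normal(size=(K, H)))       # input -> hidden (non-negative)
W2 = np.abs(rng.normal(size=(K, H)))       # hidden -> output (non-negative)
centers = rng.normal(size=(K,))            # kernel centers on the non-monotone input

def monotone_net(k, x_mono):
    return np.tanh(np.outer(x_mono, W1[k])) @ W2[k]

def predict(x_mono, x_free, bandwidth=1.0):
    w = np.exp(-((x_free[:, None] - centers[None, :]) ** 2) / bandwidth**2)
    w /= w.sum(axis=1, keepdims=True)                      # normalized kernel weights
    components = np.stack([monotone_net(k, x_mono) for k in range(K)], axis=1)
    return (w * components).sum(axis=1)

x_mono = np.linspace(-2, 2, 5)             # monotone input
x_free = rng.normal(size=5)                # non-monotone input
print(predict(x_mono, x_free))
```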

MIT Automatic ECG Beat Tachycardia Detection Using Artificial Neural Network

The application of neural networks to disease diagnosis has made great progress and is widely used by physicians. An electrocardiogram (ECG) carries vital information about heart activity, and physicians use this signal for cardiac disease diagnosis, which was the main motivation for our study. In our work, the extracted tachycardia features are used for training and testing a neural network. In this study we use a fuzzy probabilistic neural network as an automatic technique for ECG signal analysis. Since every real signal recorded by the equipment can contain different artifacts, several preprocessing steps are needed before feeding it to our system. The wavelet transform is used to extract the morphological parameters of the ECG signal. The outcome of the approach for a variety of arrhythmias shows that the presented approach is superior to previously presented algorithms, with an average accuracy of about 95% for more than 7 tachyarrhythmias.
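
A minimal sketch of the feature-extraction step, assuming PyWavelets for the wavelet decomposition and a generic scikit-learn classifier as a stand-in for the fuzzy probabilistic neural network; the beats, labels, and wavelet/level choices are illustrative assumptions:

```python
# Wavelet-based feature extraction per ECG beat, with a generic classifier
# standing in for the fuzzy probabilistic neural network; data are synthetic.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier

def beat_features(beat, wavelet="db4", level=4):
    coeffs = pywt.wavedec(beat, wavelet, level=level)
    # Simple morphological summary: energy and standard deviation per sub-band.
    return np.array([v for c in coeffs for v in (np.sum(c**2), np.std(c))])

rng = np.random.default_rng(0)
beats = rng.normal(size=(100, 256))            # placeholder ECG beats
labels = rng.integers(0, 2, size=100)          # placeholder tachycardia labels

X = np.vstack([beat_features(b) for b in beats])
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print(clf.predict(X[:3]))
```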

Study of EEGs from Somatosensory Cortex and Alzheimer's Disease Sources

This study investigates the differences between electroencephalograms (EEGs) generated from a normal source and from Alzheimer's disease (AD) sources. We also investigate the effects of AD-related brain tissue distortions on the EEG. We develop a realistic head model from T1-weighted magnetic resonance imaging (MRI) using the finite element method (FEM) for a normal source (somatosensory cortex (SC) in the parietal lobe) and AD sources (right amygdala (RA) and left amygdala (LA) in the medial temporal lobe). We then compare the AD-sourced EEGs with the SC-sourced EEG to study the nature of the potential changes caused by the sources and by 5% to 20% brain tissue distortions. We find an average magnification error of 0.15 produced by the AD-sourced EEGs, while the different brain tissue distortion models generate a maximum magnification of 0.07. EEGs obtained from AD sources and different brain tissue distortion levels shift the scalp potentials relative to the normal source, and the electrodes over the parietal and temporal lobes are more sensitive to the AD-sourced EEGs than the other electrodes.
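
A minimal, hypothetical sketch of how scalp potentials from two simulated sources might be compared electrode by electrode; the paper's exact magnification-error definition is not reproduced here, and the potentials below are random placeholders:

```python
# Hypothetical electrode-wise comparison of scalp potentials from two sources;
# this is not the paper's magnification-error formula, only an illustration.
import numpy as np

rng = np.random.default_rng(0)
v_normal = rng.normal(size=64)            # placeholder potentials at 64 electrodes
v_ad = v_normal * 1.15 + rng.normal(scale=0.01, size=64)

rel_change = np.abs(v_ad - v_normal) / np.maximum(np.abs(v_normal), 1e-12)
print(f"mean relative change = {rel_change.mean():.2f}")
```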

Combining Variable Ordering Heuristics for Improving Search Algorithms Performance

Variable ordering heuristics are used in constraint satisfaction algorithms. The characteristics of the various variable ordering heuristics are complementary; therefore, we have tried to combine the advantages of all of them to improve the performance of search algorithms for solving constraint satisfaction problems. This paper considers combinations based on products and quotients, and then a newer form of combination based on weighted sums of ratings from a set of base heuristics, some of which result in definite improvements in performance.
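
A minimal sketch of the weighted-sum combination: each base heuristic rates a variable, the ratings are combined with weights, and the highest-rated variable is selected next. The toy variable representation, base heuristics, and weights are illustrative placeholders:

```python
# Combining variable ordering heuristics by a weighted sum of their ratings.
def combined_rating(variable, heuristics, weights):
    # Each heuristic maps a variable to a numeric rating (higher = choose earlier).
    return sum(w * h(variable) for h, w in zip(heuristics, weights))

# Toy variable representation: var = (domain_size, degree).
def dom(var):
    return 1.0 / var[0]        # prefer small domains (min-domain heuristic)

def deg(var):
    return float(var[1])       # prefer high degree

variables = [(4, 3), (2, 1), (3, 5)]
weights = [0.6, 0.4]
best = max(variables, key=lambda v: combined_rating(v, [dom, deg], weights))
print("next variable to assign:", best)
```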

Detection of Action Potentials in the Presence of Noise Using Phase-Space Techniques

Emerging bioengineering fields such as brain-computer interfaces, neuroprosthetic devices, and the modeling and simulation of neural networks have led to increased research activity in algorithms for the detection, isolation, and classification of action potentials (APs) from noisy data trains. Current techniques in the field of 'unsupervised, no-prior-knowledge' biosignal processing include energy operators, wavelet detection, and adaptive thresholding. These tend to be biased towards larger AP waveforms, APs may be missed due to deviations in spike shape and frequency, and correlated noise spectra can cause false detections. Such algorithms also tend to suffer from large computational expense. A new signal detection technique based upon the ideas of phase-space diagrams and trajectories is proposed, using a delayed copy of the AP to highlight discontinuities relative to background noise. This idea has been used to create algorithms that are computationally inexpensive and address the above problems. Distinct APs have been picked out and manually classified from real physiological data recorded from a cockroach. To facilitate testing of the new technique, an autoregressive moving average (ARMA) noise model has been constructed based upon the background noise of the recordings. Together with the AP classification, this model enables the generation of realistic neuronal data sets at arbitrary signal-to-noise ratios (SNRs).
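
A minimal sketch of the delay-embedding idea behind the phase-space approach: plot the signal against a delayed copy of itself and flag samples whose phase-space distance from the origin exceeds a noise-derived threshold. The synthetic signal, delay, and threshold rule are illustrative assumptions, not the paper's algorithm:

```python
# Delay embedding (x[n] vs x[n-d]) with a simple radius threshold used to
# flag excursions attributable to action potentials; data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(scale=0.2, size=2000)
signal = noise.copy()
signal[500:520] += np.hanning(20) * 3.0        # synthetic action potential

d = 5                                          # embedding delay (samples)
x, x_delayed = signal[d:], signal[:-d]
radius = np.sqrt(x**2 + x_delayed**2)          # distance from origin in phase space

threshold = 4.0 * np.std(noise)                # noise-based threshold (assumption)
detections = np.nonzero(radius > threshold)[0] + d
print("samples flagged as AP activity:", detections)
```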

Grouping-Based Job Scheduling Model in Grid Computing

Grid computing is a high-performance computing environment for solving large-scale computational applications. It involves resource management, job scheduling, security, information management, and so on. Job scheduling is a fundamental and important issue for achieving high performance in grid computing systems; however, designing and implementing an efficient scheduler is a big challenge. In grid computing, job scheduling algorithms need further improvement to schedule light-weight or small jobs into coarse-grained groups of jobs, which reduces communication time and processing time and enhances resource utilization. The grouping strategy considers the processing power, memory size, and bandwidth requirements of each job to reflect a real grid system. The experimental results demonstrate that the proposed scheduling algorithm efficiently reduces the processing time of jobs in comparison with other approaches.
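
A minimal sketch of the grouping idea: small jobs are packed into a group until the group reaches the resource granularity (processing power, memory, bandwidth) of a target resource. The job tuple fields and limits are illustrative placeholders, not the paper's scheduling model:

```python
# Pack small jobs into coarse-grained groups subject to resource limits.
def group_jobs(jobs, max_mi, max_mem, max_bw):
    groups, current = [], []
    mi = mem = bw = 0.0
    for job in jobs:                 # job = (million_instructions, memory, bandwidth)
        if current and (mi + job[0] > max_mi or mem + job[1] > max_mem
                        or bw + job[2] > max_bw):
            groups.append(current)
            current, mi, mem, bw = [], 0.0, 0.0, 0.0
        current.append(job)
        mi, mem, bw = mi + job[0], mem + job[1], bw + job[2]
    if current:
        groups.append(current)
    return groups

jobs = [(120, 64, 5), (80, 32, 2), (300, 128, 8), (50, 16, 1), (200, 64, 4)]
print(group_jobs(jobs, max_mi=500, max_mem=256, max_bw=12))
```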

Modeling the Vapor Pressure of Biodiesel Fuels

The composition, vapour pressure, and heat capacity of nine biodiesel fuels from different sources were measured. The vapour pressure of the biodiesel fuels is modeled assuming an ideal liquid phase of the fatty acid methyl esters constituting the fuel. New methodologies to calculate the vapour pressure and the ideal-gas and liquid heat capacities of the biodiesel fuel constituents are proposed. Two alternative optimization scenarios are evaluated: 1) vapour pressure only; 2) vapour pressure constrained with liquid heat capacity. Without physical constraints, significant errors in liquid heat capacity predictions were found, whereas the constrained correlation accurately fit both vapour pressure and liquid heat capacity.
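
A minimal sketch of the ideal-solution (Raoult's law) mixture vapour pressure implied by the ideal liquid phase assumption; the Antoine-type constants and mole fractions below are placeholders, not the fitted values or compositions from the paper:

```python
# Ideal-solution (Raoult's law) vapour pressure of a FAME mixture:
# P_mix(T) = sum_i x_i * P_i_sat(T). Constants and compositions are placeholders.
import math

def p_sat(T, A, B, C):
    """Antoine-type vapour pressure in kPa, T in K (placeholder constants)."""
    return math.exp(A - B / (T + C))

components = {                      # mole fraction, (A, B, C) placeholders
    "methyl palmitate": (0.4, (14.0, 4200.0, -90.0)),
    "methyl oleate":    (0.6, (14.2, 4400.0, -95.0)),
}

T = 450.0                           # K
p_mix = sum(x * p_sat(T, *abc) for x, abc in components.values())
print(f"P_mix({T} K) = {p_mix:.3f} kPa")
```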