Uniform Distribution of Ductility Demand in Irregular Bridges using Shape Memory Alloy

Excessive ductility demand on shorter piers is a common problem for irregular bridges subjected to strong ground motion. Various techniques have been developed to reduce the likelihood of bridge collapse due to failure of the shorter piers. This paper presents a new approach to improving the seismic behavior of such bridges using Nitinol shape memory alloys (SMAs). Superelastic SMAs can remain elastic under very large deformations owing to martensitic transformation. This unique property leads to enhanced performance of the controlled bridge compared with that of the reference bridge. To evaluate the effectiveness of the devices, nonlinear time history analysis is performed on an RC single-column-bent highway bridge using a suite of representative ground motions. The results show that this method is very effective in limiting the ductility demand on the shorter piers.
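
For reference, the displacement ductility demand discussed here is the standard textbook ratio of peak to yield displacement; the definition below is a general one and is not quoted from the paper:

```latex
% Displacement ductility demand (standard definition, assumed for context):
\mu_{\Delta} = \frac{\Delta_{\max}}{\Delta_{y}}
% Shorter piers are stiffer, attract more seismic force, and yield at a
% smaller \Delta_{y}, so their \mu_{\Delta} grows disproportionately.
```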

Spread Spectrum Code Estimation by Genetic Algorithm

In the context of spectrum surveillance, a method to recover the spreading code of a spread spectrum signal is presented, where the receiver has no knowledge of the transmitter's spreading sequence. The approach is based on a genetic algorithm (GA), which is forced to model the received signal. Genetic algorithms (GAs) are well known for their robustness in solving complex optimization problems. Experimental results show that the method provides a good estimate even when the signal power is below the noise power.
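
A minimal sketch of the idea, assuming a chip-synchronized receiver, a ±1 code of known length, and simple GA settings (tournament selection, one-point crossover, bit-flip mutation); none of these parameters come from the paper. Because each code period carries an unknown ±1 data bit, the fitness uses the absolute correlation per period, and the code is recoverable only up to a global sign flip:

```python
# Hedged sketch: GA-based blind estimation of a +/-1 spreading code.
# Code length, population size, and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, PERIODS, POP, GENS = 31, 40, 60, 200

true_code = rng.choice([-1.0, 1.0], N)
data_bits = rng.choice([-1.0, 1.0], PERIODS)
# Received signal: one unknown data bit per code period; noise power (4) is
# above the unit signal power, as in the abstract's hardest case.
rx = data_bits[:, None] * true_code[None, :]
rx = rx + rng.normal(0.0, 2.0, rx.shape)

def fitness(code):
    # Unknown data bits are absorbed by taking |correlation| per period.
    return np.abs(rx @ code).sum()

pop = rng.choice([-1.0, 1.0], (POP, N))
for _ in range(GENS):
    scores = np.array([fitness(c) for c in pop])
    # Tournament selection between random pairs
    idx = rng.integers(0, POP, (POP, 2))
    winners = np.where(scores[idx[:, 0]] > scores[idx[:, 1]], idx[:, 0], idx[:, 1])
    parents = pop[winners]
    # Single-point crossover on consecutive parent pairs
    children = parents.copy()
    for i, c in enumerate(rng.integers(1, N, POP // 2)):
        children[2 * i, c:], children[2 * i + 1, c:] = (
            parents[2 * i + 1, c:].copy(), parents[2 * i, c:].copy())
    # Bit-flip mutation
    flips = rng.random((POP, N)) < 0.02
    children[flips] *= -1.0
    pop = children

best = pop[np.argmax([fitness(c) for c in pop])]
# Sign ambiguity: the code is only recoverable up to a global +/- flip.
match = max(np.mean(best == true_code), np.mean(best == -true_code))
print(f"chips recovered: {match:.0%}")
```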

A Cross-Layer Approach for Cooperative MIMO Multi-hop Wireless Sensor Networks

In this work, we study the problem of determining the minimum scheduling length that can satisfy end-to-end (ETE) traffic demand in scheduling-based multihop WSNs with a cooperative multiple-input multiple-output (MIMO) transmission scheme. Specifically, we present a cross-layer formulation of the joint routing, scheduling, and stream control problem that incorporates various power and rate adaptation schemes and takes into account an antenna beam pattern model and the signal-to-interference-plus-noise ratio (SINR) constraint at the receiver. In this context, we also propose column generation (CG) solutions that avoid the complexity of enumerating all possible sets of concurrently schedulable links.
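
As a rough illustration of the column generation idea, the sketch below solves a minimum-schedule-length problem under a simplified binary conflict-graph interference model rather than the paper's SINR and beam-pattern constraints. The restricted master LP assigns time shares to link sets, and the pricing step searches for a set of mutually compatible links whose dual weight exceeds one:

```python
# Hedged sketch of column generation for minimum schedule length, assuming a
# simplified conflict-graph model (not the paper's SINR/beam-pattern model).
# Variables: time shares x_S of link sets S that may transmit together;
# minimize total time subject to each link receiving its airtime demand.
import itertools
import numpy as np
from scipy.optimize import linprog

n_links = 6
rng = np.random.default_rng(1)
conflict = np.triu(rng.random((n_links, n_links)) < 0.4, 1)
conflict = conflict | conflict.T                      # symmetric conflicts
demand = rng.uniform(0.5, 2.0, n_links)               # per-link airtime demand

def independent(links):
    return all(not conflict[i, j] for i, j in itertools.combinations(links, 2))

cols = [[i] for i in range(n_links)]                  # start: one link per slot
for _ in range(50):
    A = np.zeros((n_links, len(cols)))
    for j, S in enumerate(cols):
        A[S, j] = 1.0
    res = linprog(np.ones(len(cols)), A_ub=-A, b_ub=-demand,
                  bounds=[(0, None)] * len(cols), method="highs")
    y = -res.ineqlin.marginals                        # dual price per link
    # Pricing: brute-force max-weight independent set (fine for tiny n).
    best_S, best_w = None, 1.0 + 1e-9
    for r in range(1, n_links + 1):
        for S in itertools.combinations(range(n_links), r):
            if independent(S) and y[list(S)].sum() > best_w:
                best_S, best_w = list(S), y[list(S)].sum()
    if best_S is None:                                # no improving column left
        break
    cols.append(best_S)

print("minimum schedule length:", res.fun)
```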

Modelling of Energy Consumption in Wheat Production Using Neural Networks: A Case Study in Canterbury Province, New Zealand

An artificial neural network (ANN) approach was used to model the energy consumption of wheat production. The study covered 35,300 hectares of irrigated and dryland wheat fields in Canterbury in the 2007-2008 harvest year. Several direct and indirect factors were used to build an artificial neural network model to predict energy use in wheat production. The final model predicts energy consumption from farm conditions (size of wheat area and number of paddocks), farmers' social attributes (education), and energy inputs (N and P use, fungicide consumption, seed consumption, and irrigation frequency), and it can predict energy use on Canterbury wheat farms with an error margin of ±7% (±1600 MJ/ha).
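
A hedged sketch of this kind of model using scikit-learn's MLPRegressor on synthetic farm records; the feature list mirrors the abstract, but the data, target rule, and network size are invented for illustration:

```python
# Hedged sketch: an MLP regressor mapping farm inputs to energy use (MJ/ha).
# Synthetic data and network shape are assumptions; the paper's actual inputs,
# architecture, and survey data are not reproduced here.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
X = np.column_stack([
    rng.uniform(10, 300, n),    # wheat area (ha)
    rng.integers(1, 12, n),     # number of paddocks
    rng.integers(0, 4, n),      # education level (ordinal)
    rng.uniform(50, 250, n),    # N use (kg/ha)
    rng.uniform(10, 60, n),     # P use (kg/ha)
    rng.uniform(0, 3, n),       # fungicide applications
    rng.uniform(80, 160, n),    # seed rate (kg/ha)
    rng.integers(0, 8, n),      # irrigation frequency
])
# Toy ground truth: energy roughly driven by N, irrigation, and seed inputs.
y = 60 * X[:, 3] + 900 * X[:, 7] + 30 * X[:, 6] + rng.normal(0, 800, n)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(Xtr)
model = MLPRegressor(hidden_layer_sizes=(12,), max_iter=5000,
                     random_state=0).fit(scaler.transform(Xtr), ytr)
err = np.mean(np.abs(model.predict(scaler.transform(Xte)) - yte))
print(f"mean absolute error: {err:.0f} MJ/ha")
```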

Image Contrast Enhancement Based on Sub-histogram Equalization without Over-equalization Noise

In order to enhance the contrast in regions where the pixels have similar intensities, this paper presents a new histogram equalization scheme. Conventional global equalization schemes over-equalize these regions, producing pixels that are too bright or too dark, while local equalization schemes produce unexpected discontinuities at block boundaries. The proposed algorithm segments the original histogram into sub-histograms with reference to brightness level and equalizes each sub-histogram within limited extents of equalization determined by its mean and variance. The final image is computed as the weighted sum of the equalized images obtained from the sub-histogram equalizations. By limiting the maximum and minimum ranges of the equalization operations on individual sub-histograms, the over-equalization effect is eliminated. The resulting image also retains feature information in low-density histogram regions, since these regions are equalized separately. The paper also describes how to determine the segmentation points in the histogram. The proposed algorithm has been tested on more than 100 images with various contrasts, and the results are compared with conventional approaches to show its superiority.
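
A simplified sketch of range-limited sub-histogram equalization: the histogram is split at the image mean (a stand-in for the paper's mean/variance-based segmentation), and each sub-histogram is equalized only within its own intensity range, so that no region can be stretched across the full dynamic range:

```python
# Hedged sketch: range-limited sub-histogram equalization on a grayscale image.
# Splitting at the mean and clamping each sub-equalization to its own range is
# a simplified stand-in for the paper's mean/variance-based limits.
import numpy as np

def sub_histogram_equalize(img):
    img = np.asarray(img, dtype=np.uint8)
    out = np.empty_like(img)
    t = int(img.mean())                       # segmentation point (simplified)
    for lo, hi, mask in [(0, t, img <= t), (t + 1, 255, img > t)]:
        vals = img[mask]
        if vals.size == 0:
            continue
        hist = np.bincount(vals, minlength=256)[lo:hi + 1]
        cdf = np.cumsum(hist) / vals.size
        # Map each sub-histogram back into its own range only, so one region
        # can never be stretched over the whole dynamic range (over-equalized).
        out[mask] = (lo + cdf[vals - lo] * (hi - lo)).astype(np.uint8)
    return out

demo = np.clip(np.random.default_rng(0).normal(90, 15, (64, 64)), 0, 255)
print("contrast before/after:", demo.std(), sub_histogram_equalize(demo).std())
```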

Perspectives of Financial Reporting Harmonization

In the current context of globalization, accountability has become a key subject of real interest for both national and international business, owing to the need for comparability and transparency of the economic situation, so we can speak of the harmonization and convergence of international accounting. The paper presents qualitative research through content analysis of several reports concerning the roadmap for convergence. First, we develop a conceptual framework for the evolution of standards convergence, and then we discuss the degree of harmonization and convergence between US GAAP and IAS/IFRS as of October 2012. We find that most topics did not follow the expected progress. Furthermore, some differences remain: long-term projects that are still in the process of being completed, and others that were reassessed as lower-priority projects.

Modeling of Reusability of Object Oriented Software System

Automatic reusability appraisal helps in evaluating the quality of developed or developing reusable software components and in identifying reusable components in existing legacy systems, which can save the cost of developing software from scratch. However, the issue of how to identify reusable components in existing systems has remained relatively unexplored. In this work, structural attributes of software components are captured using software metrics, and the quality of the software is inferred by different neural-network-based approaches that take the metric values as input. The calculated reusability value makes it possible to identify good-quality code automatically. The reusability value so determined is found to be close to that of the manual analysis traditionally performed by programmers or repository managers. The developed system can therefore be used to enhance the productivity and quality of software development.
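
A toy sketch of the inference step: structural metric values go in, a small neural network predicts a reusability label. The metric names, the labelling rule, and the network are illustrative assumptions, not the paper's trained system:

```python
# Hedged sketch: classifying component reusability from structural metrics.
# The metric suite, thresholds, and toy labelling rule are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.integers(1, 40, n),      # cyclomatic complexity
    rng.integers(0, 15, n),      # coupling between objects
    rng.uniform(0, 1, n),        # lack of cohesion (normalized)
    rng.integers(20, 2000, n),   # lines of code
])
# Toy rule: low complexity/coupling/cohesion-lack -> "reusable" (label 1).
y = ((X[:, 0] < 15) & (X[:, 1] < 6) & (X[:, 2] < 0.5)).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000,
                    random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```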

Throughput Enhancement of Unplanned Wireless Mesh Network Deployments Using Partitioning Hierarchical Clustering (PHC)

Wireless mesh networks based on IEEE 802.11 technology are a scalable and efficient solution for next-generation wireless networking, providing wide-area broadband internet access to a significant number of users. These networks may be deployed by different authorities without any coordinated planning, so they potentially overlap partially or completely in the same service area, and this unplanned deployment degrades their performance. This paper proposes a model to enhance the throughput of unplanned wireless mesh network deployments using a partitioning hierarchical cluster (PHC) based architecture. We formulate the unplanned WMN deployment problem as a throughput optimization over the PHC architecture and use bridge nodes, which allow interworking traffic between the overlapping WMNs, as a remedy for the performance degradation.

Calculus-based Runtime Verification

In this paper, a uniform calculus-based approach is provided for synthesizing monitors that check, at runtime, correctness properties specified in a large variety of logics, including future and past time logics, interval logics, state machines, and parameterized temporal logics. We present a calculus mechanism to synthesize monitors from the logical specification for the incremental analysis of execution traces during testing and real runs. The monitor detects both good and bad prefixes of a particular kind, namely those that are informative for the property under investigation. We elaborate the procedure of turning the calculus into monitors.
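
To make the good/bad-prefix idea concrete, here is a sketch of one classical calculus-style monitoring technique, formula progression (rewriting), for a small future-time temporal fragment; it is offered as a generic illustration, not as the paper's specific calculus:

```python
# Hedged sketch: runtime monitoring by formula progression, a common
# calculus-style way to build monitors (not the paper's exact calculus).
# Formulas are nested tuples over atomic propositions:
# ("ap", p) | ("not", f) | ("and", f, g) | ("or", f, g) | ("next", f)
# | ("until", f, g) | True | False

def progress(f, state):
    """Rewrite formula f over one observed state (a set of true propositions)."""
    if f is True or f is False:
        return f
    op = f[0]
    if op == "ap":
        return f[1] in state
    if op == "not":
        r = progress(f[1], state)
        return (not r) if isinstance(r, bool) else ("not", r)
    if op in ("and", "or"):
        l, r = progress(f[1], state), progress(f[2], state)
        absorb = op == "or"                 # True absorbs "or", False "and"
        if l is absorb or r is absorb:
            return absorb
        if isinstance(l, bool):
            return r
        if isinstance(r, bool):
            return l
        return (op, l, r)
    if op == "next":
        return f[1]
    if op == "until":                       # f U g == g or (f and X(f U g))
        return progress(("or", f[2], ("and", f[1], ("next", f))), state)
    raise ValueError(op)

def monitor(f, trace):
    for i, state in enumerate(trace):
        f = progress(f, state)
        if f is True:
            return f"good prefix at step {i}"   # informative: satisfied
        if f is False:
            return f"bad prefix at step {i}"    # informative: violated
    return "inconclusive"

# "no grant until request" style check: (not grant) U request
spec = ("until", ("not", ("ap", "grant")), ("ap", "request"))
print(monitor(spec, [set(), {"grant"}, {"request"}]))  # bad prefix at step 1
```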

NGN and WiMAX: Putting the Pieces Together

With the exponential rise in the number of multimedia applications available, the best-effort service provided by the Internet today is insufficient. Researchers have been working on new architectures like the Next Generation Network (NGN) which, by definition, will ensure Quality of Service (QoS) in an all-IP based network [1]. For this approach to become a reality, reservation of bandwidth is required per application per user. WiMAX (Worldwide Interoperability for Microwave Access) is a wireless communication technology which has predefined levels of QoS that can be provided to the user [4]. IPv6 has been created as the successor to IPv4 and resolves issues like the availability of IP addresses and QoS. This paper provides a design to use the power of WiMAX as an NSP (Network Service Provider) for NGN using IPv6. The use of the Traffic Class (TC) field and the Flow Label (FL) field of IPv6 is explained for making QoS requests and grants [6], [7]. Using these fields, processing time is reduced and routing is simplified. We also define the functioning of the ASN gateway and the NGN gateway (NGNG), which are edge node interfaces in the NGN-WiMAX design. These gateways ensure QoS management through built-in functions and through certain physical resources and networking capabilities.
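
For orientation, the sketch below packs and parses the first 32-bit word of an IPv6 header, where the Traffic Class and Flow Label fields live per RFC 8200; this layout is standard IPv6, while the example values (an EF-style marking and an arbitrary flow label) are merely illustrative:

```python
# Hedged sketch: the Traffic Class (8 bits) and Flow Label (20 bits) sit in
# the first 32 bits of the IPv6 header (RFC 8200), which is what lets a router
# read QoS marking without parsing deeper. Example values are arbitrary.
import struct

def ipv6_first_word(traffic_class: int, flow_label: int) -> bytes:
    assert 0 <= traffic_class < 2**8 and 0 <= flow_label < 2**20
    word = (6 << 28) | (traffic_class << 20) | flow_label   # version = 6
    return struct.pack("!I", word)

def parse_first_word(data: bytes):
    (word,) = struct.unpack("!I", data)
    return (word >> 28) & 0xF, (word >> 20) & 0xFF, word & 0xFFFFF

# Traffic class 184 corresponds to DSCP "Expedited Forwarding" (46 << 2).
hdr = ipv6_first_word(traffic_class=184, flow_label=0x12345)
print(parse_first_word(hdr))   # (6, 184, 74565)
```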

Fusion of Colour and Depth Information to Enhance Wound Tissue Classification

Patients with diabetes are susceptible to chronic foot wounds which may be difficult to manage and slow to heal. Diagnosis and treatment currently rely on the subjective judgement of experienced professionals, so an objective method of tissue assessment is required. In this paper, a data fusion approach was taken to wound tissue classification. The supervised Maximum Likelihood and unsupervised Multi-Modal Expectation Maximisation algorithms were used to classify tissues within simulated wound models by weighting the contributions of both colour and 3D depth information. It was found that, at low weightings, depth information could significantly improve classification accuracy compared with classification by colour alone, particularly when using the maximum likelihood method. However, larger weightings were found to have an entirely negative effect on accuracy.
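
A minimal sketch of the weighted-fusion idea for the supervised case: per-class Gaussians are fitted to colour-plus-depth feature vectors, and the depth channel's contribution to the log-likelihood is scaled by a weight. The tissue classes, feature values, and weighting form are invented stand-ins for the paper's simulated wound models:

```python
# Hedged sketch: weighted fusion of colour and depth in a Gaussian maximum
# likelihood classifier. Classes, features, and the weighting scheme are
# illustrative assumptions, not the paper's data or exact formulation.
import numpy as np

rng = np.random.default_rng(0)

def make_class(mean_rgb, mean_depth, n=200):
    rgb = rng.normal(mean_rgb, 12.0, (n, 3))
    depth = rng.normal(mean_depth, 1.5, (n, 1))
    return np.hstack([rgb, depth])

classes = {          # e.g. granulation / slough / necrotic tissue
    "granulation": make_class([170, 60, 60], 2.0),
    "slough":      make_class([200, 190, 120], 4.0),
    "necrotic":    make_class([60, 50, 40], 6.0),
}

def log_likelihood(x, data, w_depth):
    mu, var = data.mean(0), data.var(0) + 1e-6
    ll = -0.5 * ((x - mu) ** 2 / var + np.log(2 * np.pi * var))
    weights = np.array([1.0, 1.0, 1.0, w_depth])   # scale depth's contribution
    return float(ll @ weights)

def classify(x, w_depth=0.3):
    return max(classes, key=lambda c: log_likelihood(x, classes[c], w_depth))

sample = np.array([165.0, 65.0, 58.0, 2.3])        # reddish, shallow
print(classify(sample, w_depth=0.3))               # -> granulation
```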

Modeling Language for Constructing Solvers in Machine Learning: Reductionist Perspectives

Designing an efficient algorithm for a given specific problem has long been a matter of study. However, an alternative approach, orthogonal to this one, has emerged: reduction. In general, for a given specific problem, the reduction approach studies how to convert the original problem into subproblems. This paper proposes a formal modeling language that supports this reduction approach in order to construct solvers quickly. We show three examples from the wide area of learning problems. The benefit is fast prototyping of algorithms for a given new problem. Note that our formal modeling language is not intended to provide an efficient notation for data mining applications, but to assist a designer who develops solvers in machine learning.

Wireless Distributed Load-Shedding Management System for Non-Emergency Cases

In this paper, we present a cost-effective wireless distributed load shedding system for non-emergency scenarios. In power transformer locations where a SCADA system cannot be used, the proposed solution provides a reasonable alternative that combines microcontrollers with the existing GSM infrastructure to send early-warning SMS messages advising users to proactively reduce their power consumption before system capacity is reached and a systematic power shutdown takes place. A novel communication protocol and message set have been devised to handle the messaging between the transformer sites, where the microcontrollers are located and the measurements take place, and the central processing site, where the database server is hosted. Moreover, the system sends warning messages to the end-users' mobile devices, which are used as communication terminals. The system has been implemented, and it has been validated through various experiments.
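
Purely to illustrate the shape of such a message set, here is a hypothetical encoding of a transformer load reading and the server-side warning decision; the JSON layout, capacity figures, and warning text are all invented and are not the paper's actual protocol:

```python
# Hedged sketch of a possible message format for the transformer-to-server
# link; field layout, thresholds, and SMS text are invented for illustration.
import json
import time

CAPACITY_KVA = 500
WARN_AT = 0.85          # warn before systematic shutdown is reached

def encode_reading(site_id: str, load_kva: float) -> bytes:
    msg = {"type": "LOAD", "site": site_id, "kva": round(load_kva, 1),
           "ts": int(time.time())}
    return json.dumps(msg).encode()

def handle_reading(payload: bytes):
    msg = json.loads(payload)
    if msg["kva"] >= WARN_AT * CAPACITY_KVA:
        # In the real system this would go out as an SMS via the GSM modem.
        return (f"WARNING {msg['site']}: load {msg['kva']} kVA nearing "
                f"capacity; please reduce consumption.")
    return None

print(handle_reading(encode_reading("TX-07", 447.5)))
```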

An Empirical Analysis of Arabic Web Page Classification Using Fuzzy Operators

In this study, a fuzzy similarity approach for Arabic web page classification is presented. The approach uses a fuzzy term-category relation, manipulating the membership degrees from the training data and the degree values from a test web page. Six measures are used and compared: the Einstein, Algebraic, Hamacher, MinMax, Special case fuzzy, and Bounded Difference approaches. These measures are applied and compared on 50 different Arabic web pages. The Einstein measure gave the best performance among the measures considered. An analysis of these measures and concluding remarks are also presented.
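
Most of the named measures correspond to standard fuzzy conjunction (t-norm) operators, sketched below on invented membership degrees; the "special case fuzzy" measure is paper-specific, so only the classical operators are shown:

```python
# Hedged sketch: standard fuzzy conjunction (t-norm) operators of the kinds
# the abstract names, combining a term's membership degree in a trained
# category with its degree in a test page. The toy degrees are invented.
def algebraic(a, b):    return a * b
def einstein(a, b):     return (a * b) / (2 - (a + b - a * b))
def hamacher(a, b):     return 0.0 if a == b == 0 else (a * b) / (a + b - a * b)
def min_max(a, b):      return min(a, b)
def bounded_diff(a, b): return max(0.0, a + b - 1)

ops = [algebraic, einstein, hamacher, min_max, bounded_diff]

# Membership of shared terms: (degree in trained category, degree in test page)
pairs = [(0.8, 0.6), (0.4, 0.9), (0.7, 0.7)]
for op in ops:
    score = sum(op(a, b) for a, b in pairs) / len(pairs)  # mean similarity
    print(f"{op.__name__:>12}: {score:.3f}")
```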

Biodegradation of Lignocellulosic Residues of Water Hyacinth (Eichhornia crassipes) and Response Surface Methodological Approach to Optimize Bioethanol Production Using Fermenting Yeast Pachysolen tannophilus NRRL Y-2460

The objective of this research was to investigate the biodegradation of water hyacinth (Eichhornia crassipes) to produce bioethanol, using a dilute-acid pretreatment (1% sulfuric acid), which results in high hemicellulose decomposition, and the yeast Pachysolen tannophilus as the bioethanol-producing strain. A maximum ethanol yield of 1.14 g/L (yield coefficient 0.24 g g-1; productivity 0.015 g l-1 h-1) was compared with the predicted value of 32.05 g/L obtained by Central Composite Design (CCD). The maximum ethanol yield coefficient was comparable to those obtained through enzymatic saccharification and fermentation of the acid hydrolysate in a fully equipped fermentor. Although the maximum ethanol concentration was low at laboratory scale, improvement of the lignocellulosic ethanol yield is necessary for large-scale production.

Visual Object Tracking in 3D with Color Based Particle Filter

This paper addresses the problem of determining the current 3D location of a moving object and robustly tracking it from a sequence of camera images. The approach presented here uses a particle filter and does not perform any explicit triangulation. Only the color of the object to be tracked is required, not any precise motion model. The observation model we have developed avoids color filtering of the entire image. That, together with the Monte Carlo techniques inside the particle filter, provides real-time performance. Experiments with two real cameras are presented and lessons learned are discussed. The approach scales easily to more than two cameras and to new sensor cues.
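
A compact sketch of the triangulation-free idea: particles live directly in 3D, each is projected into both camera images, and its weight is the product of color-based likelihoods from the two views. The pinhole cameras, the likelihood (here image-plane distance to where the reference color is observed, standing in for a color-histogram comparison), and all constants are simplifying assumptions:

```python
# Hedged sketch: color-weighted particle filter for 3D tracking with two
# cameras and no explicit triangulation. All constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 500
target = np.array([0.5, 0.2, 3.0])                 # true 3D position (m)

# Two pinhole cameras: focal length f, second offset along x (baseline).
f, baseline = 500.0, 0.4
def project(points, cam_x):
    p = points - np.array([cam_x, 0.0, 0.0])
    return f * p[:, :2] / p[:, 2:3]                 # (u, v) image coordinates

def color_likelihood(uv, uv_target):
    # Stand-in for comparing a particle's local color histogram with the
    # reference color: likelihood decays with image-plane distance.
    d2 = np.sum((uv - uv_target) ** 2, axis=1)
    return np.exp(-d2 / (2 * 20.0**2))

particles = target + rng.normal(0, 0.5, (N, 3))     # initial spread
for _ in range(20):
    particles += rng.normal(0, 0.05, (N, 3))        # diffusion, no motion model
    w = np.ones(N)
    for cam_x in (0.0, baseline):                   # fuse both cameras' cues
        uv_t = project(target[None, :], cam_x)      # where the color is seen
        w *= color_likelihood(project(particles, cam_x), uv_t)
    w /= w.sum()
    particles = particles[rng.choice(N, N, p=w)]    # multinomial resampling

print("estimate:", particles.mean(0), "truth:", target)
```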

Coordinated Design of TCSC Controller and PSS Employing Particle Swarm Optimization Technique

This paper investigates the application of the Particle Swarm Optimization (PSO) technique to the coordinated design of a Power System Stabilizer (PSS) and a Thyristor Controlled Series Compensator (TCSC)-based controller to enhance power system stability. The design problem of the PSS and TCSC-based controllers is formulated as a time-domain optimization problem, and the PSO algorithm is employed to search for the optimal controller parameters. By minimizing a time-domain objective function involving the deviation in the oscillatory rotor speed of the generator, the stability performance of the system is improved. To compare the capabilities of the PSS and the TCSC-based controller, both are first designed independently and then in a coordinated manner. The proposed controllers are tested on a weakly connected power system. Eigenvalue analysis and non-linear simulation results are presented to show the effectiveness of the coordinated design over individual design. The simulation results show that the proposed controllers are effective in damping low-frequency oscillations resulting from various small disturbances, such as changes in mechanical power input and reference voltage setting.
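
A generic sketch of the optimization loop: a standard PSO minimizes a time-domain damping objective (here the ITAE of a toy second-order speed-deviation response parameterized by two gains). The toy plant and all PSO constants are assumptions; the paper's actual PSS/TCSC structures, limits, and test system are not modelled:

```python
# Hedged sketch: generic PSO minimizing a time-domain damping objective.
# The two "gains" drive a toy damped-oscillation response, not a real PSS/TCSC.
import numpy as np

rng = np.random.default_rng(0)

def objective(gains):
    k1, k2 = gains
    t = np.linspace(0, 10, 1000)
    zeta = np.clip(0.02 + 0.05 * k1 + 0.03 * k2, 0.02, 0.95)  # toy damping
    dw = np.exp(-zeta * 2 * np.pi * 0.8 * t) * np.cos(2 * np.pi * 0.8 * t)
    return float(np.sum(t * np.abs(dw)) * (t[1] - t[0]))  # ITAE of speed dev.

n, dims, iters = 20, 2, 60
lo, hi = 0.0, 10.0
x = rng.uniform(lo, hi, (n, dims))
v = np.zeros((n, dims))
pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n, dims))
    # Inertia + cognitive + social terms (standard PSO velocity update)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    fx = np.array([objective(p) for p in x])
    better = fx < pbest_f
    pbest[better], pbest_f[better] = x[better], fx[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("optimal gains:", gbest, "ITAE:", pbest_f.min())
```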

Transmitter Macrodiversity in Multihopping: An SFN-Based Algorithm for Improved Node Reachability and Robust Routing

A novel idea presented in this paper is to combine multihop routing with single-frequency networks (SFNs) for a broadcasting scenario. An SFN is a set of multiple nodes that transmit the same data simultaneously, resulting in transmitter macrodiversity. Two of the most important performance factors of multihop networks, node reachability and routing robustness, are analyzed. Simulation results show that our proposed SFN-D routing algorithm improves node reachability by 37 percentage points compared to non-SFN multihop routing. It shows a diversity gain of 3.7 dB, meaning that 3.7 dB lower transmission powers are required for the same reachability. Even better results are possible for larger networks. If an important node becomes inactive, the algorithm can find new routes that a non-SFN scheme would not be able to find. Thus, two of the major problems in multihopping are addressed: achieving robust routing and improving node reachability (or, equivalently, reducing transmission power).
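
The sketch below illustrates why transmitter macrodiversity helps: when several relays send the same data simultaneously, a receiver can accumulate their powers, so nodes unreachable from any single sender may still be covered. The topology, path-loss model, and threshold are illustrative assumptions, not the paper's simulation setup:

```python
# Hedged sketch: reachability with and without SFN power accumulation.
# Node layout, path-loss exponent, and receive threshold are assumptions.
import numpy as np

rng = np.random.default_rng(2)
nodes = rng.uniform(0, 100, (40, 2))                # node positions (m)
P_TX, ALPHA, THRESH = 1.0, 3.5, 1e-6                # tx power, path loss, rx min

def rx_power(tx_idx, rx_idx):
    d = np.linalg.norm(nodes[tx_idx] - nodes[rx_idx])
    return P_TX * max(d, 1.0) ** -ALPHA

senders = [0, 1, 2]               # relays already holding the same data
receivers = range(3, len(nodes))
# Non-SFN: a node is reached only if some single sender's power suffices.
reach_single = sum(
    any(rx_power(s, i) >= THRESH for s in senders) for i in receivers)
# SFN: simultaneous identical transmissions let the receiver sum powers.
reach_sfn = sum(
    sum(rx_power(s, i) for s in senders) >= THRESH for i in receivers)
print(f"reachable: single-tx {reach_single}, SFN {reach_sfn} "
      f"of {len(nodes) - 3}")
```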

Clustering Multivariate Empiric Characteristic Functions for Multi-Class SVM Classification

A dissimilarity measure between the empiric characteristic functions of the subsamples associated with the different classes in a multivariate data set is proposed. This measure can be computed efficiently, and it depends on all the cases of each class. It may be used to find groups of similar classes, which could be joined for further analysis, or it could be employed to perform an agglomerative hierarchical cluster analysis of the set of classes. The final tree can serve to build a family of binary classification models, offering an alternative approach to the multi-class SVM problem. We have tested this dendrogram-based SVM approach against the one-against-one SVM approach on four publicly available data sets, three of them being microarray data. The performances were found to be equivalent, but the first solution requires a smaller number of binary SVM models.
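
A sketch of the first half of the pipeline, assuming one common choice of details: each class's empirical characteristic function is evaluated on a random frequency grid, classes are compared by the mean squared modulus of the ECF difference, and the resulting distance matrix feeds an agglomerative linkage. The paper's exact measure and grid may differ:

```python
# Hedged sketch: class dissimilarity via empirical characteristic functions
# (ECFs), then agglomerative clustering of the classes. Grid and distance
# form are assumptions, not necessarily the paper's exact measure.
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
classes = [rng.normal(m, 1.0, (80, 3)) for m in (0.0, 0.3, 2.0, 2.2)]

T = rng.normal(0, 1, (64, 3))                        # frequency grid t in R^3

def ecf(X):
    # phi(t) = (1/n) * sum_j exp(i t . x_j), evaluated on the grid T
    return np.exp(1j * X @ T.T).mean(axis=0)

phis = [ecf(X) for X in classes]
k = len(classes)
D = np.zeros((k, k))
for a in range(k):
    for b in range(a + 1, k):
        D[a, b] = D[b, a] = np.mean(np.abs(phis[a] - phis[b]) ** 2)

Z = linkage(squareform(D), method="average")         # tree over the classes
print(np.round(D, 4))
print(Z)   # classes 0/1 and 2/3 should merge first
```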

Comparative Survey of Object Serialization Techniques and the Programming Supports

This paper compares six approaches to object serialization from qualitative and quantitative perspectives: object serialization in Java, IDL, XStream, Protocol Buffers, Apache Avro, and MessagePack. Using each approach, a common example is serialized to a file and the size of the file is measured. The qualitative comparison investigates whether a schema definition is required, whether a schema compiler is required, whether the serialization format is text-based or binary, and which programming languages are supported. It is clear that there is no best solution; each solution performs well in the context for which it was developed.
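
To illustrate the measurement method only, the sketch below serializes one common example object and compares output sizes; Python's pickle, json, and msgpack stand in for the Java-side approaches surveyed in the paper:

```python
# Hedged sketch of the size comparison: serialize one common example record
# and measure each format's output. These Python libraries are analogues of,
# not the same tools as, the six approaches surveyed.
import json
import pickle

record = {"id": 42, "name": "alice", "scores": [3.5, 4.0, 4.8], "active": True}

sizes = {
    "pickle (binary)": len(pickle.dumps(record)),
    "json (text)": len(json.dumps(record).encode()),
}
try:
    import msgpack                 # MessagePack, as in the paper's list
    sizes["msgpack (binary)"] = len(msgpack.packb(record))
except ImportError:
    pass                           # third-party package may be absent

for name, size in sorted(sizes.items(), key=lambda kv: kv[1]):
    print(f"{name:>18}: {size} bytes")
```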