Modeling and Investigation of Volume Strain at Large Deformation under Uniaxial Cyclic Loading in a Semi-Crystalline Polymer

This study deals with the experimental investigation and theoretical modeling of semi-crystalline polymeric materials with a rubbery amorphous phase (HDPE) subjected to uniaxial cyclic tests with various maximum strain levels, including large deformations. In each of the N cycles, the specimen is loaded in tension up to a prescribed maximum strain and then unloaded to zero stress. This work focuses on measuring the volume strain induced by damage during such tests. On the basis of the thermodynamics of relaxation processes, a constitutive model for large-strain deformation has been developed that accounts for the damage effect and predicts the complex elasto-viscoelastic-viscoplastic behavior of the material. A direct comparison between the model predictions and the experimental data shows that the model accurately captures the material response. The model is also capable of predicting the influence of damage on volume variation.
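
As a hedged illustration of the measured quantity (standard finite-strain kinematics for a transversely isotropic uniaxial test; the stretch values below are placeholders, not the paper's data), a minimal Python sketch of the volume strain computation:

```python
import numpy as np

def volume_strain_log(eps_axial, eps_transverse):
    """True (logarithmic) volume strain, assuming transverse isotropy:
    eps_v = ln(V/V0) = eps_axial + 2 * eps_transverse."""
    return eps_axial + 2.0 * eps_transverse

def volume_strain_stretch(lam_axial, lam_transverse):
    """Exact relative volume change from principal stretches:
    dV/V0 = lam_axial * lam_transverse**2 - 1 in a uniaxial test."""
    return lam_axial * lam_transverse**2 - 1.0

# Example: 50% axial stretch with a measured lateral stretch of 0.83
lam_a, lam_t = 1.5, 0.83
print(volume_strain_stretch(lam_a, lam_t))               # dV/V0
print(volume_strain_log(np.log(lam_a), np.log(lam_t)))   # ln(V/V0)
```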

Removal of Basic Blue 3 from Aqueous Solution by Adsorption onto Durio zibethinus

Durian husk (DH), a fruit waste, was studied for its ability to remove Basic Blue 3 (BB3) from aqueous solutions. Batch kinetic studies were carried out to study the sorption characteristics under various experimental conditions. Optimum dye removal occurred over the pH range 3-10. Sorption was found to be concentration- and agitation-dependent. The kinetics of dye sorption fitted a pseudo-second-order rate expression. Both the Langmuir and Freundlich models provided reasonable fits to the sorption data of BB3 on durian husk. The maximum sorption capacity calculated from the Langmuir model is 49.50 mg g⁻¹.
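
For reference, the two fits named above can be reproduced with a standard nonlinear least-squares routine; a minimal sketch follows, where all data arrays are illustrative placeholders, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)
def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Pseudo-second-order kinetics (integrated): qt = k2*qe^2*t / (1 + k2*qe*t)
def pso_kinetics(t, qe, k2):
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

# Placeholder data -- substitute the measured values
Ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0])   # equilibrium conc. (mg/L)
qe = np.array([12.0, 20.0, 33.0, 41.0, 46.0])   # equilibrium uptake (mg/g)
t  = np.array([5.0, 10.0, 20.0, 40.0, 80.0])    # contact time (min)
qt = np.array([15.0, 25.0, 33.0, 38.0, 41.0])   # uptake over time (mg/g)

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=(50.0, 0.1))
(qe_fit, k2), _ = curve_fit(pso_kinetics, t, qt, p0=(qt[-1], 0.01))
print(f"Langmuir qmax = {qmax:.1f} mg/g; pseudo-2nd-order qe = {qe_fit:.1f} mg/g")
```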

Implementation of Security Algorithms for u-Health Monitoring System

Data security in a u-Health system is an important issue because wireless networks are vulnerable to hacking. However, it is not easy to implement a proper security algorithm in an embedded u-Health monitoring system because of hardware constraints such as low performance, power consumption, and limited memory size. To secure data that contain personal and biosignal information, we implemented several security algorithms, namely Blowfish, the Data Encryption Standard (DES), the Advanced Encryption Standard (AES), and Rivest Cipher 4 (RC4), for our u-Health monitoring system, and the results were successful. We compared these algorithms under the same experimental conditions: RC4 had the fastest execution time, and memory usage was the most efficient for DES. However, considering both performance and security capability, we concluded that AES was the most appropriate algorithm for a personal u-Health monitoring system.
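
A minimal benchmarking sketch of the kind of comparison described, written here with the PyCryptodome library on a desktop host (key sizes, payload, and mode choices are illustrative; the paper's embedded implementation and measurements would differ):

```python
import time, os
from Crypto.Cipher import AES, DES, Blowfish, ARC4

payload = os.urandom(64 * 1024)  # 64 KiB of sample "biosignal" data

def bench(name, make_cipher, block=16):
    # Pad to a whole number of cipher blocks, then time one encryption pass
    data = payload.ljust(-(-len(payload) // block) * block, b"\0")
    t0 = time.perf_counter()
    make_cipher().encrypt(data)
    print(f"{name}: {(time.perf_counter() - t0) * 1e3:.2f} ms")

bench("AES-128 ", lambda: AES.new(os.urandom(16), AES.MODE_CBC, os.urandom(16)), 16)
bench("DES     ", lambda: DES.new(os.urandom(8), DES.MODE_CBC, os.urandom(8)), 8)
bench("Blowfish", lambda: Blowfish.new(os.urandom(16), Blowfish.MODE_CBC, os.urandom(8)), 8)
bench("RC4     ", lambda: ARC4.new(os.urandom(16)), 1)
```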

Experimental Study of Adsorption Properties of Acid- and Thermally Treated Bentonite from Tehran (Iran)

The Iranian bentonite was first characterized by Scanning Electron Microscopy (SEM), Inductively Coupled Plasma Mass Spectrometry (ICP-MS), X-ray Fluorescence (XRF), X-ray Diffraction (XRD) and BET analysis. The bentonite was then treated thermally between 150°C and 250°C for 15, 45 and 90 min, and was also activated chemically with different concentrations of sulphuric acid (3N, 5N and 10N). While the thermally activated bentonite did not show any considerable change in specific surface area or Cation Exchange Capacity (CEC), the results for the chemically treated bentonite demonstrated that these properties were improved by the acid activation process.

Spatio-Temporal Patterns and Dynamics in Motion of Pathogenic Spirochetes: Implications toward Virulence and Treatment of Leptospirosis

We apply a particle tracking technique to follow the motion of individual pathogenic Leptospira. We observe and capture images of motile Leptospira by means of a CCD camera and a dark-field microscope. Image processing, statistical theory and simulations are used for the data analysis. Based on the trajectory patterns, mean square displacement and power spectral density characteristics, we found that directed motion is the most likely mode (70%), with the remainder being either normal diffusion or an unidentified mode. Our findings may help explain why leptospires are so efficient at targeting internal tissues, which in turn increases their virulence.
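
A sketch of how the mean square displacement exponent separates directed motion from normal diffusion, in the spirit of the classification above (the trajectory is synthetic and the threshold values are illustrative, not the paper's):

```python
import numpy as np

def msd(traj, max_lag):
    """Time-averaged mean square displacement of a (T, 2) trajectory."""
    return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag])**2, axis=1))
                     for lag in range(1, max_lag + 1)])

def classify(traj, max_lag=50):
    """Fit MSD ~ t^alpha; alpha ~ 1 -> normal diffusion, alpha ~ 2 -> directed."""
    lags = np.arange(1, max_lag + 1)
    alpha = np.polyfit(np.log(lags), np.log(msd(traj, max_lag)), 1)[0]
    if alpha > 1.5:
        return "directed", alpha
    if 0.8 <= alpha <= 1.2:
        return "normal diffusion", alpha
    return "unidentified", alpha

# Synthetic example: constant drift plus noise mimics a directed swimmer
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(0, 0.1, (500, 2)) + [0.05, 0.0], axis=0)
print(classify(traj))
```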

Numerical Simulations of Cross-Flow around Four Square Cylinders in an In-Line Rectangular Configuration

A two-dimensional numerical simulation of cross-flow around four cylinders in an in-line rectangular configuration is studied using the lattice Boltzmann method (LBM). Special attention is paid to the effect of the spacing between the cylinders. The Reynolds number is fixed at Re = 100 and the spacing ratio L/D is set at 0.5, 1.5, 2.5, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0 and 10.0. Results show that four distinct flow features, depending on the spacing, are observed for four cylinders in an in-line rectangular configuration: single-square-cylinder flow, stable shielding flow, wiggling shielding flow and vortex shedding flow. The effects of the spacing ratio on physical quantities such as the mean drag coefficient, the Strouhal number and the root-mean-square values of the drag and lift coefficients are also presented. There is more than one shedding frequency at small spacing ratios. The mean drag coefficients for the downstream cylinders are less than that of a single cylinder for all spacing ratios. The present LBM results are compared with existing experimental data and numerical studies. The comparison shows that the LBM can capture the characteristics of bluff body flow reasonably well and is a good tool for bluff body flow studies.
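
A small post-processing sketch of how the Strouhal number and RMS coefficients quoted above are typically extracted from a force time series (the lift signal here is synthetic; in the study such series come from the LBM solver):

```python
import numpy as np

def strouhal(cl_series, dt, D=1.0, U=1.0):
    """Dominant shedding frequency of a lift-coefficient series via FFT,
    returned as St = f * D / U (D and U in lattice or physical units)."""
    cl = cl_series - np.mean(cl_series)
    freqs = np.fft.rfftfreq(len(cl), dt)
    spectrum = np.abs(np.fft.rfft(cl))
    return freqs[np.argmax(spectrum[1:]) + 1] * D / U   # skip the DC bin

def rms(series):
    s = series - np.mean(series)
    return np.sqrt(np.mean(s**2))

# Synthetic lift signal at St = 0.16 plus noise, sampled every dt = 0.5
dt, st_true = 0.5, 0.16
t = np.arange(0, 2000, dt)
cl = 0.3 * np.sin(2 * np.pi * st_true * t) + 0.02 * np.random.randn(len(t))
print(strouhal(cl, dt), rms(cl))
```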

Modeling Strategy and Numerical Validation of the Turbulent Flow over a Two-Dimensional Flat Roof

The construction of a civil structure inside an urban area inevitably modifies the outdoor microclimate at the building site. Wind speed, wind direction, air pollution, driving rain, radiation and daylight are some of the main physical aspects that are subject to major changes. The magnitude of these modifications depends on the shape, size and orientation of the building and on its interaction with the surrounding environment. The flow field over a flat-roof model building has been numerically investigated in order to determine two-dimensional CFD guidelines for the calculation of the turbulent flow over a structure immersed in an atmospheric boundary layer. To this purpose, a complete validation campaign has been performed through a systematic comparison of numerical simulations with wind tunnel experimental data. Several turbulence models and spatial node distributions have been tested at five different vertical positions, from the upstream leading edge to the downstream bottom edge of the analyzed model. The flow field characteristics in the neighborhood of the building model have been numerically investigated, allowing a quantification of the capability of the CFD code to predict the flow separation and the extent of the recirculation regions. The proposed calculations have allowed the development of a preliminary procedure to be used as guidance in selecting the appropriate grid configuration and corresponding turbulence model for the prediction of the flow field over a two-dimensional roof architecture dominated by flow separation.

Application of 0-1 Fuzzy Programming in Optimum Project Selection

In this article, a mathematical programming model for choosing an optimum portfolio of investments is developed. The investments are considered as investment projects. The uncertainties of the real world are incorporated through fuzzy concepts for the coefficients of the proposed model (i.e., initial investment costs, profits, resource requirements, and total available budget). The model has been coded using the LINGO 11.0 solver. The results of a full analysis of the optimistic and pessimistic derivative models are promising for selecting an optimum portfolio of projects in the presence of uncertainty.
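
A minimal sketch of the underlying 0-1 selection model, solved by brute force for clarity (the paper solves the fuzzy variants in LINGO 11.0; the profit triples and costs below are illustrative fuzzy estimates, not the paper's data):

```python
from itertools import product

# Illustrative projects: (pessimistic, most likely, optimistic) profit, and cost
projects = [((8, 10, 13), 40), ((14, 18, 21), 70), ((5, 7, 8), 25), ((9, 12, 16), 50)]
budget = 120

def best_portfolio(scenario):
    """scenario: 0 = pessimistic, 1 = most likely, 2 = optimistic."""
    best = (0, ())
    for x in product((0, 1), repeat=len(projects)):   # all 0-1 selections
        cost = sum(xi * c for xi, (_, c) in zip(x, projects))
        profit = sum(xi * p[scenario] for xi, (p, _) in zip(x, projects))
        if cost <= budget and profit > best[0]:
            best = (profit, x)
    return best

print("pessimistic:", best_portfolio(0))
print("optimistic :", best_portfolio(2))
```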

Data Transformation Services (DTS): Creating Data Mart by Consolidating Multi-Source Enterprise Operational Data

Trends in business intelligence, e-commerce and remote access make it necessary and practical to store data in different ways on multiple systems with different operating systems. As businesses evolve and grow, they require efficient computerized solutions to perform data updates and to access data from diverse enterprise business applications. The objective of this paper is to demonstrate the capability of DTS [1] as a database solution for automatic data transfer and update in solving business problems. The DTS package described here was developed for a business selling a variety of plants that eventually expanded into commercial supply and landscaping. Dimensional data modeling is used in the DTS package to extract, transform and load data from heterogeneous database systems such as MySQL, Microsoft Access and Oracle, consolidating them into a Data Mart residing in SQL Server. The data transfer from the various databases is scheduled to run automatically every quarter to support efficient sales analysis. DTS is therefore an attractive solution for the automatic data transfer and update that meets today's business needs.
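
A hedged sketch of the consolidation step such a package automates, expressed here with pandas/SQLAlchemy rather than DTS itself (all connection strings, table names, and column names are placeholders):

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection strings for the heterogeneous sources and the mart
sources = {
    "mysql":  create_engine("mysql+pymysql://user:pw@host/sales"),
    "oracle": create_engine("oracle+oracledb://user:pw@host/?service_name=ops"),
}
mart = create_engine("mssql+pyodbc://user:pw@host/DataMart?driver=ODBC+Driver+17")

# Extract and transform: align each source to the mart's fact-table schema
frames = []
for name, eng in sources.items():
    df = pd.read_sql("SELECT order_id, plant_id, qty, amount, order_date FROM sales", eng)
    df["source"] = name
    frames.append(df)

# Load: append the consolidated quarter into the fact table
pd.concat(frames).to_sql("fact_sales", mart, if_exists="append", index=False)
```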

Optimizing the Fuzzy C-Means Clustering Algorithm Using GA

The Fuzzy C-Means clustering algorithm (FCM) is a method frequently used in pattern recognition. It has the advantage of giving good modeling results in many cases, although it is not capable of specifying the number of clusters by itself. In the FCM algorithm, most researchers fix the weighting exponent (m) to a conventional value of 2, which might not be appropriate for all applications. The main objective of this paper is therefore to use the subtractive clustering algorithm to provide the optimal number of clusters needed by the FCM algorithm, by optimizing the parameters of the subtractive clustering algorithm with an iterative search approach, and then to find an optimal weighting exponent (m) for the FCM algorithm. To obtain an optimal number of clusters, the iterative search approach is used to find the optimal single-output Sugeno-type Fuzzy Inference System (FIS) model by optimizing the parameters of the subtractive clustering algorithm that give the minimum least-squares error between the actual data and the Sugeno fuzzy model. Once the number of clusters is optimized, two approaches are proposed to optimize the weighting exponent (m) in the FCM algorithm, namely the iterative search approach and genetic algorithms. The proposed approach is tested on data generated from an original function, and optimal fuzzy models are obtained with minimum error between the real data and the resulting fuzzy models.
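
For reference, a compact sketch of the standard FCM updates in which the weighting exponent m appears (the data are synthetic; in the paper, m and the cluster count c are the quantities tuned by the search and GA procedures):

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, eps=1e-6):
    """Fuzzy C-Means. m is the weighting exponent whose choice the paper
    optimizes; c is the cluster count provided by subtractive clustering."""
    U = np.random.default_rng(0).dirichlet(np.ones(c), size=len(X))  # (n, c)
    for _ in range(n_iter):
        Um = U**m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]               # (c, d)
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)                 # memberships
        if np.abs(U_new - U).max() < eps:
            return centers, U_new
        U = U_new
    return centers, U

# Two synthetic blobs; m = 2 is the conventional value questioned above
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
centers, U = fcm(X, c=2)
print(centers)
```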

Concepts for Designing Low-Power Wireless Sensor Networks

Wireless sensor networks are used in a wide range of applications and have become an attractive area for researchers in recent years. Because of the limited energy storage capacity of sensor nodes, energy consumption is one of the most challenging aspects of these networks, and different strategies and protocols deal with it. This paper presents general methods for designing low-power wireless sensor networks. Different sources of energy consumption in these networks are discussed, and techniques for reducing energy consumption are presented.

Ec-A: A Task Allocation Algorithm for Energy Minimization in Multiprocessor Systems

With the need for increased processing capacity at lower energy consumption, power-aware multiprocessor systems have gained increasing attention in recent years. One of the additional challenges to be solved in a multiprocessor system, compared to a uniprocessor system, is task allocation. This paper presents a novel task-dependent allocation algorithm, Energy-centric Allocation (Ec-A), combined with Rate Monotonic (RM) scheduling to minimize energy consumption in a multiprocessor system. A simulation analysis is carried out to verify the performance gains, with a reduction in both energy consumption and the required number of processors in the system.
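
A hedged sketch of the building blocks involved: the classical Liu-Layland RM schedulability test plus a simple first-fit allocation that packs tasks onto few processors. This is in the spirit of, but not identical to, Ec-A, which adds an explicit energy model on top; the task utilizations are illustrative:

```python
def rm_schedulable(utils):
    """Liu-Layland sufficient test for Rate Monotonic scheduling:
    sum of utilizations <= n * (2^(1/n) - 1)."""
    n = len(utils)
    return n == 0 or sum(utils) <= n * (2 ** (1.0 / n) - 1)

def allocate(tasks, n_proc):
    """First-fit decreasing allocation of task utilizations onto processors,
    keeping the number of active (energy-consuming) processors low."""
    procs = [[] for _ in range(n_proc)]
    for u in sorted(tasks, reverse=True):
        for p in procs:
            if rm_schedulable(p + [u]):
                p.append(u)
                break
        else:
            raise ValueError("task set not allocatable on given processors")
    return [p for p in procs if p]   # only the processors actually used

tasks = [0.30, 0.25, 0.20, 0.18, 0.15, 0.10]   # illustrative utilizations
print(allocate(tasks, n_proc=4))
```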

An Amalgam Approach for DICOM Image Classification and Recognition

This paper describes the process of recognizing and classifying brain images as normal or abnormal based on PSO-SVM. Image classification is becoming increasingly important for the medical diagnosis process. In the medical field, classifying a patient's abnormality plays a great role in helping doctors diagnose the patient according to the severity of the disease. In the case of DICOM images, optimal recognition and early detection of disease are very difficult. Our work focuses on the recognition and classification of DICOM images based on a collective digital image processing approach. For optimal recognition and classification, Particle Swarm Optimization (PSO), a Genetic Algorithm (GA) and a Support Vector Machine (SVM) are used. The collective PSO-SVM approach gives high approximation capability and much faster convergence.
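
A minimal sketch of one common PSO-SVM coupling, with a bare-bones PSO searching the SVM hyperparameters C and gamma (scikit-learn SVM; the dataset is a synthetic stand-in, not DICOM brain images, and the swarm constants are conventional choices):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

def fitness(pos):
    """Cross-validated accuracy of an RBF SVM at (log10 C, log10 gamma) = pos."""
    clf = SVC(C=10.0**pos[0], gamma=10.0**pos[1])
    return cross_val_score(clf, X, y, cv=3).mean()

rng = np.random.default_rng(0)
pos = rng.uniform(-2, 2, (12, 2))      # 12 particles in log-parameter space
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
for _ in range(15):
    g = pbest[pbest_f.argmax()]        # global best position
    vel = 0.7*vel + 1.5*rng.random(pos.shape)*(pbest - pos) \
                  + 1.5*rng.random(pos.shape)*(g - pos)
    pos = np.clip(pos + vel, -2, 2)
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]

print("best accuracy:", pbest_f.max(), "at", pbest[pbest_f.argmax()])
```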

MJPEG Real-Time Transmission in Industrial Environments Using a CBR Channel

Currently, there are many local area industrial networks that can guarantee bandwidth to synchronous traffic, in particular by providing CBR (Constant Bit Rate) channels, which allow improved bandwidth management. Some of these networks operate over Ethernet, delivering channels with enough capacity, especially when compression is used, to integrate multimedia traffic from many sources into industrial monitoring and image processing applications. In these industrial environments, where low latency is an essential requirement, JPEG is an adequate compression technique, but it generates VBR (Variable Bit Rate) traffic. Transmitting VBR traffic over CBR channels is inefficient, and current solutions to this problem significantly increase the latency or further degrade the quality. In this paper, an R(q) model is used which allows on-line calculation of the JPEG quantization factor. We obtained increased quality and a lower requirement on the CBR channel, with a reduced number of discarded frames and better use of the channel bandwidth.
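
A hedged sketch of the idea: given a fitted rate model R(q), the quantization factor for the next frame is chosen so that the frame fits the CBR slot. The hyperbolic R(q) = a/q + b form and all numeric values below are assumptions for illustration, not necessarily the paper's exact model:

```python
def next_q(target_bits, a, b, q_min=1, q_max=100):
    """Invert an assumed rate model R(q) = a/q + b to pick the JPEG
    quantization factor that makes the next frame fit the CBR budget."""
    if target_bits <= b:
        return q_max                      # even the coarsest setting overflows
    q = a / (target_bits - b)
    return max(q_min, min(q_max, q))

# Illustrative: 2 Mbit/s channel at 25 fps -> 80 kbit budget per frame
channel_bps, fps = 2_000_000, 25
budget = channel_bps // fps
print(next_q(budget, a=1_500_000, b=20_000))
```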

Modeling and Optimization of Abrasive Waterjet Parameters using Regression Analysis

Abrasive waterjet is a novel machining process capable of processing a wide range of hard-to-machine materials. This research addresses modeling and optimization of the process parameters for this machining technique. To model the process, a set of experimental data has been used to evaluate the effects of various parameter settings in cutting 6063-T6 aluminum alloy. The process variables considered here include nozzle diameter, jet traverse rate, jet pressure and abrasive flow rate. Depth of cut, as one of the most important output characteristics, has been evaluated for different parameter settings. The Taguchi method and regression modeling are used to establish the relationships between input and output parameters. The adequacy of the model is evaluated using the analysis of variance (ANOVA) technique. The pairwise effects of process parameter settings on the process response outputs are also shown graphically. The proposed model is then embedded into a simulated annealing algorithm to optimize the process parameters. The optimization is carried out for any desired value of depth of cut; the objective is to determine the proper levels of the process parameters in order to obtain a given depth of cut. Computational results demonstrate that the proposed solution procedure is quite effective in solving such multi-variable problems.
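
A compact sketch of the optimization stage: a regression model for depth of cut embedded in a generic simulated annealing loop that searches parameter levels for a target depth. The regression coefficients and parameter ranges below are placeholders, not the fitted values from the paper's Taguchi experiments:

```python
import math, random

def depth_of_cut(x):
    """Placeholder linear regression h(nozzle dia., traverse, pressure, abrasive);
    the paper fits its coefficients from the experimental data."""
    d, v, p, m = x
    return 0.5 + 4.0*d - 0.02*v + 0.008*p + 0.15*m

bounds = [(0.8, 1.2), (100, 500), (150, 350), (2, 10)]  # illustrative ranges
target = 6.0                                             # desired depth (mm)

def cost(x):
    return abs(depth_of_cut(x) - target)

random.seed(0)
x = [random.uniform(lo, hi) for lo, hi in bounds]
T = 1.0
for step in range(5000):
    cand = [min(hi, max(lo, xi + random.gauss(0, 0.05*(hi - lo))))
            for xi, (lo, hi) in zip(x, bounds)]
    # Accept improvements always; accept worse moves with Boltzmann probability
    if cost(cand) < cost(x) or random.random() < math.exp((cost(x) - cost(cand)) / T):
        x = cand
    T *= 0.999                           # geometric cooling schedule

print("parameters:", [round(v, 3) for v in x], "depth:", round(depth_of_cut(x), 3))
```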

Integrating Communication Modeling into Design Modeling at Early Stages of the Design Flow. Case Study: Unmanned Aerial Vehicle (UAV)

This paper shows how communication modeling can be integrated into design modeling at early stages of the design flow. We consider the effect of incorporating noise, such as impulsive noise, on system stability. We show how the system model changes accordingly and investigate the system performance under different communication effects. As a demonstration, we modeled an unmanned aerial vehicle (UAV) using the SystemC methodology. Moreover, the system is modeled by joining the capabilities of UML and SystemC to operate at the system level.
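
A small sketch of the kind of disturbance studied, using a Bernoulli-Gaussian impulsive noise model, which is a common choice and an assumption here; the paper itself models the channel inside SystemC, and the signal below is only a stand-in:

```python
import numpy as np

def impulsive_channel(signal, p_impulse=0.01, sigma_bg=0.01, sigma_imp=1.0, seed=0):
    """Bernoulli-Gaussian impulsive noise: low-level background Gaussian noise
    plus rare, large-amplitude impulses hitting a fraction p_impulse of samples."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0, sigma_bg, signal.shape)
    hits = rng.random(signal.shape) < p_impulse
    noise[hits] += rng.normal(0, sigma_imp, hits.sum())
    return signal + noise

# Corrupt an illustrative UAV control signal and inspect the disturbance
t = np.linspace(0, 1, 1000)
cmd = np.sin(2 * np.pi * 2 * t)          # stand-in actuator command
rx = impulsive_channel(cmd)
print("max deviation:", np.abs(rx - cmd).max())
```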

LAYMOD: A Layered and Modular Platform for CAx Collaboration Management and Product Data Integration Based on the STEP Standard

Nowadays companies strive to survive in a competitive global environment. To speed up product development and modification, it is suggested to adopt a collaborative product development approach. However, despite the advantages of new IT improvements, many CAx systems still work separately and locally. Collaborative design and manufacture require a product information model that supports the related CAx product data models. Many solutions have been proposed to this problem, the most successful of which is adopting the STEP standard as the product data model for developing a collaborative CAx platform. However, several factors usually slow down the implementation of the STEP standard in collaborative data exchange, management and integration: the evolution of STEP's Application Protocols (APs) over time, the huge number of STEP APs and conformance classes, the high costs of implementation, the costly process of converting older CAx software files to the STEP neutral file format, and a general lack of STEP knowledge. In this paper, the requirements for a successful collaborative CAx system are discussed. The capability of the STEP standard for product data integration and its shortcomings, as well as the dominant platforms for supporting CAx collaboration management and product data integration, are reviewed. Finally, a platform named LAYMOD is proposed to fulfil the requirements of a CAx collaborative environment and to integrate the product data. It is a layered platform that enables global collaboration among different CAx software packages and developers. It also adopts the STEP modular architecture and XML data structures to enable collaboration between CAx software packages while overcoming the STEP standard's limitations. The architecture and procedures of the LAYMOD platform for managing collaboration and avoiding conflicts in product data integration are introduced.

Role and Effect of Temperature on the LPG Sweetening Process

In the gas refineries of Iran's South Pars Gas Complex, the Sulfrex demercaptanization process is used to remove volatile and corrosive mercaptans from liquefied petroleum gases with a caustic solution. This process consists of two steps: removal of low-molecular-weight mercaptans and regeneration of the exhausted caustic. Effective parameters include the LPG feed temperature, caustic concentration and feed mercaptan content in the extraction step, and the sodium mercaptide content in the caustic, catalyst concentration, caustic temperature and air injection rate in the regeneration step. This paper focuses on temperature, which plays a key role in both mercaptan extraction and caustic regeneration. The experimental results demonstrate that, by optimizing the temperature, the sodium mercaptide content in the caustic is minimized owing to good oxidation, and the sulfur impurities in the product are reduced.
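
For context, the extraction and regeneration steps rely on standard caustic-sweetening chemistry; the reactions below are written from the general Merox/Sulfrex literature, not taken from the paper:

```latex
% Extraction step: mercaptan transfer into the caustic phase
\mathrm{RSH} + \mathrm{NaOH} \rightleftharpoons \mathrm{NaSR} + \mathrm{H_2O}

% Regeneration step: catalytic air oxidation of mercaptide to disulfide
4\,\mathrm{NaSR} + \mathrm{O_2} + 2\,\mathrm{H_2O}
  \xrightarrow{\text{catalyst}} 4\,\mathrm{NaOH} + 2\,\mathrm{RSSR}
```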

Face Recognition using a Kernelization of Graph Embedding

Linearization of graph embedding has emerged as an effective dimensionality reduction technique in pattern recognition. However, due to its linear nature, it may not be optimal for nonlinearly distributed real-world data such as faces. Therefore, a kernelization of graph embedding is proposed as a dimensionality reduction technique for face recognition. To further boost the recognition capability of the proposed technique, the Fisher criterion is adopted in the objective function for better data discrimination. The proposed technique is able to characterize the underlying intra-class structure as well as the inter-class separability. Experimental results on the FRGC database validate the effectiveness of the proposed technique as a feature descriptor.
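
A minimal sketch of a kernelized Fisher-type embedding in its dual form (a standard kernel discriminant formulation used here as an assumption about the general construction, not the paper's exact method; the toy data replace face images):

```python
import numpy as np
from scipy.linalg import eigh

def rbf_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :])**2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_fisher_embedding(X, y, n_dim=1, gamma=0.5, reg=1e-3):
    """Maximize between-class over within-class scatter in feature space;
    projections are expansions over kernel evaluations at training points."""
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    m_all = K.mean(axis=1)
    M = np.zeros((n, n)); N = np.zeros((n, n))
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        Kc = K[:, idx]                          # (n, n_c)
        mc = Kc.mean(axis=1)
        M += len(idx) * np.outer(mc - m_all, mc - m_all)   # between-class
        H = np.eye(len(idx)) - 1.0 / len(idx)              # centering matrix
        N += Kc @ H @ Kc.T                                 # within-class
    w, A = eigh(M, N + reg * np.eye(n))         # generalized eigenproblem
    return K, A[:, ::-1][:, :n_dim]             # top eigenvectors

# Toy two-class data; training points are projected via K @ A
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])
y = np.array([0]*30 + [1]*30)
K, A = kernel_fisher_embedding(X, y)
print((K @ A)[:3])
```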

An Agent Oriented Architecture to Supply Dynamic Document Generation in ERP Systems

One of the most important capabilities expected from an ERP system is to manage user/administrator manual documents dynamically. Since an ERP package is frequently changed during its implementation at customer sites, new documents often need to be added and/or required changes applied to existing documents in order to cover new or changed capabilities. Worse, since these changes occur continuously, the corresponding documents should be updated dynamically; otherwise, implementing the ERP package in the organization incurs serious risks. In this paper, we propose a new architecture, based on the agent-oriented vision, that supplies the dynamic document generation expected from ERP systems using several independent but cooperative agents. Besides dynamic document generation, which is the main issue of this paper, the presented architecture also addresses some aspects of intelligence and learning capabilities in ERP.
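
A minimal sketch of cooperating agents for dynamic document generation; the agent roles and message flow below are illustrative assumptions, not the paper's actual design:

```python
from dataclasses import dataclass, field

@dataclass
class ChangeEvent:
    module: str
    description: str

@dataclass
class GeneratorAgent:
    """Regenerates the affected manual sections on each reported change."""
    docs: dict = field(default_factory=dict)
    def handle(self, event: ChangeEvent):
        section = self.docs.setdefault(event.module, [])
        section.append(f"Updated: {event.description}")

@dataclass
class MonitorAgent:
    """Watches ERP modules and notifies the generator about changes."""
    generator: GeneratorAgent
    def observe(self, event: ChangeEvent):
        self.generator.handle(event)

gen = GeneratorAgent()
MonitorAgent(gen).observe(ChangeEvent("invoicing", "new tax-rate field"))
print(gen.docs)
```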