FEM Analysis of the Interaction between a Piezoresistive Tactile Sensor and Biological Tissues

This paper presents a finite element model and analysis of the interaction between a piezoresistive tactile sensor and biological tissues. The tactile sensor is proposed for use in minimally invasive surgery to convey tactile information about biological tissues to surgeons. The proposed sensor measures the relative hardness of soft contact objects as well as the contact force. Silicone rubbers were used as phantoms of biological tissues. Finite element analysis of the silicone rubbers and of the mechanical structure of the sensor was performed in the COMSOL Multiphysics (v3.4) environment. The simulation results verify that the sensor can be used to differentiate between different kinds of silicone rubber materials.
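
As a conceptual sketch only (not the paper's FEM model), relative hardness can be illustrated as the ratio of force-deflection slopes of an unknown sample and a reference sample; all data values below are made up.

```python
# Conceptual sketch (not the paper's FEM model): estimating the relative
# hardness of a contacted object from the slope of its force-deflection
# response relative to a reference silicone sample.
import numpy as np

def stiffness(force_n, deflection_mm):
    """Slope of a linear fit to force vs. deflection."""
    return np.polyfit(deflection_mm, force_n, 1)[0]

# Illustrative force-deflection data for a reference and an unknown sample
defl      = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
f_ref     = np.array([0.0, 0.10, 0.21, 0.29, 0.41])
f_unknown = np.array([0.0, 0.16, 0.33, 0.48, 0.66])

rel_hardness = stiffness(f_unknown, defl) / stiffness(f_ref, defl)
print(f"relative hardness ~ {rel_hardness:.2f}x the reference")
```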

Creep Constitutive Equations for Two Materials of a 304L Stainless Steel Weldment

In this paper, creep constitutive equations for the base (parent) and weld materials of a weldment of cold-drawn 304L stainless steel have been obtained experimentally. For this purpose, test samples have been prepared from cold-drawn bars and from weld material according to the relevant ASTM standard. The creep behavior and properties of these materials have been examined by conducting uniaxial creep tests. Constant-temperature, constant-load uniaxial creep tests have been carried out at two elevated temperatures, 680 and 720 °C, under constant loads producing initial stresses ranging from 240 to 360 MPa. The experimental data have been used to obtain the creep constitutive parameters by means of numerical optimization techniques.
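
The abstract does not state which constitutive form was fitted; as a hedged illustration of parameter identification from uniaxial creep data, the sketch below fits a Norton-Bailey law, eps = A * sigma^n * t^m, by linear least squares in log space (all data values are made up).

```python
# Hypothetical sketch (not the paper's actual procedure): identifying Norton-Bailey
# creep parameters  eps = A * sigma**n * t**m  by linear least squares in log space.
import numpy as np

# Illustrative (not measured) data: stress [MPa], time [h], creep strain [-]
sigma  = np.array([240., 240., 300., 300., 360., 360.])
time   = np.array([ 10., 100.,  10., 100.,  10., 100.])
strain = np.array([0.002, 0.006, 0.004, 0.012, 0.008, 0.025])

# log(eps) = log(A) + n*log(sigma) + m*log(t)  ->  solve for [log A, n, m]
X = np.column_stack([np.ones_like(sigma), np.log(sigma), np.log(time)])
coef, *_ = np.linalg.lstsq(X, np.log(strain), rcond=None)
A, n, m = np.exp(coef[0]), coef[1], coef[2]
print(f"A = {A:.3e}, n = {n:.2f}, m = {m:.2f}")
```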

Transient Analysis of a Single-Server Queue with Batch Arrivals Using Modeling and Functions Akin to the Modified Bessel Functions

The paper considers a single-server queue with fixed-size batch Poisson arrivals and exponential service times, a model that is useful for a buffer that accepts messages arriving as fixed-size batches of packets and releases them one packet at a time. Transient performance measures for queues have long been recognized as complementary to steady-state analysis. The focus of the paper is on the functions that arise in the analysis of the transient behaviour of this queueing system. The paper exploits practical modelling to obtain a solution to the integral equation encountered in the analysis. The results obtained indicate that under heavy load conditions there is a significant disparity between the transient and steady-state statistics.
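
As a hedged illustration of why transient and steady-state values differ, the following Monte Carlo sketch (assumed parameters, not the paper's analysis) estimates the mean number in system at several time points for a queue with fixed-size batch Poisson arrivals and exponential service, starting from an empty buffer.

```python
# Minimal Monte Carlo sketch (assumed, not from the paper): transient mean queue
# length of a single-server queue with fixed-size batch Poisson arrivals and
# exponential service, starting empty.
import random

def simulate(lam=0.2, batch=4, mu=1.0, horizon=50.0, runs=2000):
    """Average number in system at time `horizon`, starting empty."""
    total = 0
    for _ in range(runs):
        t, n = 0.0, 0
        while True:
            rate = lam + (mu if n > 0 else 0.0)
            t += random.expovariate(rate)
            if t > horizon:
                break
            if random.random() < lam / rate:
                n += batch          # a whole batch of packets arrives
            else:
                n -= 1              # one packet finishes service
        total += n
    return total / runs

for T in (1, 5, 20, 50):
    print(f"t = {T:4}: mean number in system ~ {simulate(horizon=T):.2f}")
```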

Evaluating Complexity – Ethical Challenges in Computational Design Processes

Complexity, as a theoretical background, has made it easier to understand and explain the features and dynamic behavior of various complex systems. As this common theoretical background has confirmed, borrowing terminology from the natural sciences for design has helped us to control and understand urban complexity. Phenomena like self-organization, evolution and adaptation are appropriate for describing the formerly inaccessible characteristics of the complex environment in unpredictable bottom-up systems. Increased computing capacity has been a key element in capturing the chaotic nature of these systems. A paradigm shift in urban planning and architectural design has forced us to give up the illusion of total control over the urban environment, and consequently to seek novel methods for steering its development. New methods using dynamic modeling have offered a real option for a more thorough understanding of complexity and urban processes. At best, new approaches may renew design processes so that we get a better grip on the complex world through more flexible processes, support urban environmental diversity, and respond to our needs beyond basic welfare by liberating ourselves from standardized minimalism. A complex system and its features are as such beyond human ethics. Self-organization or evolution is neither good nor bad; their mechanisms are by nature devoid of reason. They are common in urban dynamics, in natural processes and in human activity alike. They are features of a complex system, and they cannot be prevented; yet their dynamics can be studied and supported. The paradigm of complexity and the new design approaches have been criticized for a lack of humanity and morality, but the ethical implications of scientific or computational design processes have not been much discussed. It is important to distinguish the (unexciting) ethics of the theory and tools from the ethics of computer-aided processes based on ethical decisions. Urban planning and architecture cannot be based on the survival of the fittest; however, the natural dynamics of the system cannot be impeded on the grounds of being “non-human”. In this paper, the ethical challenges of using dynamic models are contemplated in light of a few examples of new architecture, dynamic urban models, and the literature. It is suggested that the ethical challenges in computational design processes could be reframed under the concepts of responsibility and transparency.

Design of a Production Line Based on RFID through 3D Modeling

Radio-frequency identification (RFID) has emerged as a beneficial technology that conforms to GS1 standards and provides effective solutions in the manufacturing area. It competes with other automatic identification technologies, e.g. barcodes and smart cards, with regard to high-speed scanning, reliability and accuracy. The purpose of this study is to improve the performance of a production line by implementing an RFID system in the manufacturing area, using 3D modeling in Cinema 4D R13, which provides clear graphical scenes for users to portray their applications. Finally, with regard to improving system performance, the study shows how RFID proves to be a well-suited technology, in comparison with barcode scanners, for handling different kinds of raw materials in the production line based on a logical process.

Effect of Open-Ended Laboratory on Learners' Performance in an Environmental Engineering Course: Case Study of Civil Engineering at Universiti Malaysia Sabah

Laboratory activities have produced benefits in student learning. With the current drive toward new technology resources and evolving education methods, the renewal of learning and teaching in laboratory methods is in progress, for both learners and educators. To enhance learning outcomes in laboratory work, particularly in engineering practice and testing, learning via hands-on instruction alone may not be sufficient. This paper describes and compares the techniques and implementation of the traditional (expository) and the open-ended (problem-based) laboratory for two consecutive cohorts studying an environmental laboratory course in a civil engineering program. The transition from the traditional to the problem-based approach and its effects were investigated in terms of the course assessment student feedback survey, course outcome measurement and student performance grades. Students demonstrated better performance in their grades and a 12% increase in the course outcome (CO) with the problem-based open-ended laboratory style compared with the traditional method, although in their feedback students responded less favorably.

A Multi-Criteria Evaluation Incorporating Linguistic Computing for Service Innovation Performance

The growing influence of service industries has prompted greater attention to service operations management. However, service managers often have difficulty articulating the true effects of their service innovation. In particular, the performance evaluation process for service innovation problems generally involves uncertain and imprecise data. This paper presents a 2-tuple fuzzy linguistic computing approach to dealing with heterogeneous information and information-loss problems during the integration of subjective evaluations. The proposed method, based on a group decision-making scenario, assists business managers in measuring the performance of service innovation, handles the integration of heterogeneous information, and effectively avoids information loss.
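
As a hedged illustration of the underlying representation (the standard Herrera-Martínez 2-tuple linguistic model, assumed here rather than taken from the paper), the sketch below converts numeric assessments into 2-tuples and aggregates them without rounding away information.

```python
# Minimal sketch (standard 2-tuple linguistic model, assumed notation):
# converting ratings to 2-tuples and aggregating them without information loss.
TERMS = ["very poor", "poor", "fair", "good", "very good"]  # s0 .. s4

def to_two_tuple(beta):
    """Delta: map a value in [0, g] to (term index, symbolic translation)."""
    i = int(round(beta))
    return i, beta - i          # alpha in [-0.5, 0.5)

def from_two_tuple(i, alpha):
    """Inverse Delta: recover the numeric value."""
    return i + alpha

def aggregate(two_tuples, weights):
    """Weighted-average aggregation of 2-tuples."""
    beta = sum(w * from_two_tuple(i, a) for (i, a), w in zip(two_tuples, weights))
    return to_two_tuple(beta)

# Three evaluators rate one service-innovation criterion on S = {s0..s4}
ratings = [to_two_tuple(3.0), to_two_tuple(2.0), to_two_tuple(4.0)]
i, alpha = aggregate(ratings, weights=[0.5, 0.3, 0.2])
print(f"aggregated rating: ({TERMS[i]}, {alpha:+.2f})")
```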

Agent-based Simulation for Blood Glucose Control in Diabetic Patients

This paper employs a new approach to regulating the blood glucose level of a type I diabetic patient under intensive insulin treatment. The closed-loop control scheme incorporates expert knowledge about the treatment by using reinforcement learning theory to maintain the normoglycemic average of 80 mg/dl and a normal free plasma insulin concentration from a severe initial state. The insulin delivery rate is obtained off-line using the Q-learning algorithm, without requiring an explicit model of the environment dynamics. Implementing the insulin delivery rate therefore requires only simple function evaluation and minimal online computation. Controller performance is assessed in terms of its ability to reject the effect of meal disturbances and to cope with the patient-to-patient variability of the glucose-insulin dynamics. Computer simulations are used to evaluate the effectiveness of the proposed technique and to show its superiority in controlling hyperglycemia over other existing algorithms.
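
The following toy sketch (a hypothetical discretization and reward, not the paper's glucose-insulin model) shows the structure of such an off-line tabular Q-learning loop for choosing an insulin dose per glucose band.

```python
# Hypothetical tabular Q-learning sketch (not the paper's glucose-insulin model):
# learn an insulin-dose action for discretized blood-glucose states so that
# glucose is driven toward a normoglycemic band around 80 mg/dl.
import random

STATES  = range(5)          # 0: very low ... 4: very high glucose band
ACTIONS = [0.0, 0.5, 1.0]   # insulin delivery rates (illustrative units)
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(state, action):
    """Toy environment: a higher dose pushes the glucose band down; the reward
    peaks at the normoglycemic band (state 2)."""
    drift = random.choice([0, 1])                 # meal disturbance
    nxt = min(max(state + drift - int(action * 2), 0), 4)
    return nxt, -abs(nxt - 2)

for episode in range(2000):
    s = random.choice(list(STATES))
    for _ in range(24):                           # one day, hourly decisions
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])
        s = s2

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)                                     # learned dose per glucose band
```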

A Family of Entropies on Interval-valued Intuitionistic Fuzzy Sets and Their Applications in Multiple Attribute Decision Making

The entropy of an interval-valued intuitionistic fuzzy set (IvIFS) is used to indicate its degree of fuzziness. In this paper, we deal with the entropies of IvIFSs. Firstly, we propose a family of entropies on IvIFSs with a parameter λ ∈ [0, 1], which generalizes two entropy measures for IvIFSs defined independently by Zhang and Wei, and we prove that the new entropy is an increasing function with respect to the parameter λ. Furthermore, a new multiple attribute decision making (MADM) method using entropy-based attribute weights is proposed to deal with decision-making situations in which the alternatives are assessed on the attributes by IvIFSs and the attribute weight information is unknown. Finally, a numerical example is given to illustrate the application of the proposed method.
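
The abstract does not give the weighting formula; a common (assumed) choice for entropy-based attribute weights, sketched below, assigns each attribute a weight proportional to one minus its entropy, so that less fuzzy attributes weigh more in the aggregation.

```python
# Minimal sketch (standard approach, assumed here rather than taken from the
# paper): deriving attribute weights from entropy values for the MADM step.
def entropy_weights(entropies):
    """w_j = (1 - E_j) / sum_k (1 - E_k), with each E_j in [0, 1]."""
    total = sum(1.0 - e for e in entropies)
    return [(1.0 - e) / total for e in entropies]

# Illustrative entropies of four attributes computed from an IvIFS decision matrix
E = [0.42, 0.65, 0.30, 0.55]
w = entropy_weights(E)
print([round(x, 3) for x in w])   # weights sum to 1
```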

Stress Analysis of Two Fitted Thin-Walled Cylinders at High Angular Velocity

In this paper, the stresses and strains of two rotating thin-walled cylinders fitted together with an initial interference and overlap are computed. The stress is also calculated for varying values of the initial interference. The problem is first considered without rotation; the angular velocity is then increased from 0 to 50,000 rev/min, and the stress is calculated at each stage. The important point is that when the contact stress becomes very small in magnitude, the angular velocity has reached its critical value and the two cylinders separate. The critical speed, i.e. the speed of separation, is calculated at each step.
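
As a rough, hedged cross-check (a simplified thin-ring estimate, not the paper's formulation), the separation speed can be approximated by requiring the difference in free centrifugal growth of the two rings to equal the initial radial interference; all material and geometric values below are illustrative.

```python
# Simplified thin-ring estimate (assumed here, not the paper's exact model) of
# the separation speed of an interference fit: contact is lost when the free
# centrifugal growth of the outer ring exceeds that of the inner ring by the
# initial radial interference delta.
import math

E     = 200e9      # Young's modulus [Pa] (steel, illustrative)
rho   = 7850.0     # density [kg/m^3]
r_in  = 0.100      # mean radius of inner ring [m]
r_out = 0.104      # mean radius of outer ring [m]
delta = 20e-6      # initial radial interference [m]

# free radial growth of a thin rotating ring: u = rho * w**2 * r**3 / E
w_sep = math.sqrt(E * delta / (rho * (r_out**3 - r_in**3)))
rpm   = w_sep * 60.0 / (2.0 * math.pi)
print(f"estimated separation speed: {rpm:,.0f} rev/min")
```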

Speaker Identification using Neural Networks

The speech signal conveys information about the identity of the speaker. The area of speaker identification is concerned with extracting the identity of the person speaking an utterance. As speech interaction with computers becomes more pervasive in activities such as telephone services, financial transactions and information retrieval from speech databases, so does the utility of automatically identifying a speaker based solely on vocal characteristics. This paper focuses on text-dependent speaker identification, which deals with detecting a particular speaker from a known population. The system prompts the user to provide a speech utterance and identifies the user by comparing the codebook of that utterance with those stored in the database, listing the speakers most likely to have produced it. The speech signal is recorded for N speakers and features are then extracted. Feature extraction is performed by means of LPC coefficients, the AMDF and the DFT. A neural network is trained with these features as input parameters, and the features are stored in templates for further comparison. The features of the speaker to be identified are extracted and compared with the stored templates using the back-propagation algorithm: the extracted features form the network input, the network adjusts its weights, and the best match is found to identify the speaker. The number of epochs required to reach the target error determines the network performance.
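
A hedged, self-contained sketch of such a pipeline is shown below; synthetic signals and an off-the-shelf back-propagation classifier stand in for the recorded utterances and the paper's network, and only the LPC part of the feature set is illustrated.

```python
# Illustrative sketch (assumed pipeline, not the paper's exact system): LPC
# feature extraction by the autocorrelation method, followed by a small
# back-propagation network mapping feature vectors to speaker labels.
import numpy as np
from scipy.linalg import toeplitz, solve
from sklearn.neural_network import MLPClassifier

def lpc(frame, order=10):
    """LPC coefficients of one speech frame via the autocorrelation method."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    return solve(toeplitz(r[:order]), r[1:order + 1])

# Synthetic stand-in for recorded utterances of two speakers
rng = np.random.default_rng(0)
def fake_utterance(f0, n=2400, sr=8000):
    t = np.arange(n) / sr
    return np.sin(2 * np.pi * f0 * t) + 0.05 * rng.standard_normal(n)

X = [lpc(fake_utterance(f)) for f in (110, 115, 120, 200, 210, 220)]
y = [0, 0, 0, 1, 1, 1]                      # speaker labels

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)
print(net.predict([lpc(fake_utterance(118)), lpc(fake_utterance(205))]))
```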

Energy and Distance Based Clustering: An Energy Efficient Clustering Method for Wireless Sensor Networks

In this paper, we propose an energy-efficient cluster-based communication protocol for wireless sensor networks. Our protocol considers both the residual energy of the sensor nodes and the distance of each node from the base station (BS) when selecting cluster-heads. This protocol can successfully prolong the network's lifetime by 1) reducing the total energy dissipation in the network and 2) evenly distributing energy consumption over all sensor nodes. In this protocol, nodes with more residual energy and a shorter distance to the BS are more likely to be selected as cluster-heads. Simulation results in MATLAB show that the proposed protocol can increase the network lifetime by more than 94% in terms of first node dies (FND) and by more than 6% in terms of half of the nodes alive (HNA) compared with conventional protocols.
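
The abstract does not give the election rule itself; as a hedged illustration, the sketch below scales a LEACH-style election probability by residual energy and inversely by distance to the BS (the weighting and all constants are assumptions, not the paper's formula).

```python
# Hypothetical sketch (the exact weighting is not given in the abstract): a
# LEACH-style cluster-head election in which each node's probability is scaled
# by its residual energy and reduced with its distance to the base station.
import random, math

def elect_cluster_heads(nodes, p=0.1):
    """nodes: list of dicts with 'energy' [J] and 'pos' (x, y); BS at (50, 175)."""
    bs = (50.0, 175.0)
    e_max = max(n["energy"] for n in nodes)
    d_max = max(math.dist(n["pos"], bs) for n in nodes)
    heads = []
    for n in nodes:
        d = math.dist(n["pos"], bs)
        threshold = p * (n["energy"] / e_max) * (1.0 - d / (d_max + 1e-9))
        if random.random() < threshold:
            heads.append(n)
    return heads

nodes = [{"energy": random.uniform(0.2, 2.0),
          "pos": (random.uniform(0, 100), random.uniform(0, 100))} for _ in range(100)]
print(f"{len(elect_cluster_heads(nodes))} cluster-heads elected this round")
```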

A Novel, Cost-Effective Design to Harness Ocean Energy in Developing Countries

The world's population continues to grow at a quarter of a million people per day, increasing the consumption of energy and confronting the world with an energy crisis. In response, the principles of renewable energy have gained popularity. Considerable advances have been made in developing wind and solar energy farms across the world, but these farms are not enough to meet the world's energy requirements, which has attracted investors to seek new sources of energy. Among these sources, extraction of energy from ocean waves is considered one of the best options: the world's oceans contain enough energy to meet global demand, and significant advances in design and technology are being made to turn waves into a continuous source of energy. One major hurdle in launching wave energy devices in a developing country like Pakistan is the initial cost; a simple, reliable and cost-effective wave energy converter (WEC) is required to meet the nation's energy needs. This paper presents a novel design, proposed by team SAS, for harnessing wave energy, and it has three major sections. The first section gives a brief and concise view of ocean wave creation and propagation and of the energy carried by waves. The second section explains the design of SAS-2, in which a gear-chain mechanism transfers energy from the buoy to a rotary generator. The third section explains the manufacturing of a scaled-down model of SAS-2; many modifications were made during the troubleshooting stage. The design of SAS-2 is simple and requires very little maintenance. SAS-2 is producing electricity at Clifton, and its initial cost is very low, which demonstrates that SAS-2 is a cost-effective and reliable means of harnessing wave energy for developing countries.
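
For the energy carried by waves, a standard deep-water estimate of the power per metre of wave crest is sketched below with illustrative sea-state values (not the paper's data).

```python
# Back-of-the-envelope sketch (standard deep-water formula, not taken from the
# paper): wave power per metre of wave crest, P = rho * g**2 * H**2 * T / (64*pi).
import math

rho, g = 1025.0, 9.81        # sea-water density [kg/m^3], gravity [m/s^2]
H, T   = 1.5, 7.0            # significant wave height [m], energy period [s]

P = rho * g**2 * H**2 * T / (64.0 * math.pi)   # [W per metre of crest length]
print(f"wave power ~ {P / 1000:.1f} kW per metre of crest")
```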

Production of WGHs and AFPHs using Protease Combinations at High and Ambient Pressure

Wheat gluten hydrolyzates (WGHs) and anchovy fine powder hydrolyzates (AFPHs) were produced at 300 MPa using combinations of Flavourzyme 500MG (F), Alcalase 2.4L (A), Marugoto E (M) and Protamex (P), and were then compared with those produced at ambient pressure with respect to the contents of soluble solids (SS) and soluble nitrogen and the electrophoretic profiles. The SS contents of the WGHs and AFPHs increased up to 87.2% as the number of enzymes increased, at both high and ambient pressure. Based on the SS content, the optimum enzyme combinations for one-, two-, three- and four-enzyme hydrolysis were determined to be F, FA, FAM and FAMP, respectively. Similar trends were found for the contents of total soluble nitrogen (TSN) and TCA-soluble nitrogen (TCASN). The contents of SS, TSN and TCASN in the hydrolyzates, together with the electrophoretic mobility maps, indicate that the high-pressure treatment used in this study accelerated protein hydrolysis compared with the ambient-pressure treatment.

Defect Prevention and Detection in DSP Software

Users now expect a higher level of DSP (digital signal processing) software quality than ever before. Prevention and detection of defects are critical elements of software quality assurance. In this paper, principles and rules for the prevention and detection of defects are suggested; they are not universal guidelines, but they are useful for both novice and experienced DSP software developers.

Multi-Agent Systems Applied in the Modeling and Simulation of Biological Problems: A Case Study in Protein Folding

The multi-agent system approach has proven to be an effective and appropriate abstraction level for constructing whole models of a diversity of biological problems, integrating aspects found in both "micro" and "macro" approaches to modeling this type of phenomenon. Taking these considerations into account, this paper presents the important computational characteristics to be gathered into a novel bioinformatics framework built upon a multi-agent architecture. The version of the tool presented herein allows studying and exploring complex problems belonging principally to structural biology, such as protein folding. The bioinformatics framework is used as a virtual laboratory to explore a minimalist model of protein folding as a test case. In order to demonstrate the laboratory concept of the platform as well as its flexibility and adaptability, we studied the folding of two particular sequences, one a 45-mer and the other a 64-mer, both described by an HP model (only hydrophobic and polar residues) on a coarse-grained 2D square lattice. As discussed later in this work, these two sequences were chosen to stress the platform, in order to determine which tools had to be created or improved to meet the computational and analysis needs of a given difficult sequence. The underlying philosophy is that the continuous study of sequences itself provides important features to be added to the platform, improving its efficiency over time, as demonstrated herein.
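
A minimal sketch of the test-case model (the standard HP-lattice scoring convention, assumed here rather than quoted from the paper) scores a 2D square-lattice conformation by counting hydrophobic contacts between residues that are adjacent on the lattice but not along the chain.

```python
# Minimal sketch (standard HP-lattice convention, assumed rather than taken from
# the paper): scoring a 2D square-lattice conformation by counting topological
# hydrophobic (H-H) contacts between residues that are not chain neighbours.
def hp_energy(sequence, coords):
    """sequence: string of 'H'/'P'; coords: list of (x, y) lattice positions."""
    occupied = {pos: i for i, pos in enumerate(coords)}
    contacts = 0
    for i, (x, y) in enumerate(coords):
        if sequence[i] != "H":
            continue
        for nbr in ((x + 1, y), (x, y + 1)):          # count each pair once
            j = occupied.get(nbr)
            if j is not None and sequence[j] == "H" and abs(i - j) > 1:
                contacts += 1
    return -contacts                                   # lower energy = more contacts

# A toy 6-mer folded into an S-shape on the lattice
seq    = "HPHPHH"
coords = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 2), (1, 2)]
print("energy:", hp_energy(seq, coords))
```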

Financing-Scheduling Optimization for Construction Projects Using Genetic Algorithms

Investment in a constructed facility represents a cost in the short term that returns benefits only over the long-term use of the facility. Thus, the costs occur earlier than the benefits, and the owners of facilities must obtain the capital resources to finance the costs of construction. A project cannot proceed without adequate financing, and the cost of providing adequate financing can be quite large. For these reasons, attention to project finance is an important aspect of project management. Finance is also a concern to the other organizations involved in a project, such as the general contractor and material suppliers; unless an owner immediately and completely covers the costs incurred by each participant, these organizations face financing problems of their own. At a more general level, project finance is only one aspect of the general problem of corporate finance: if numerous projects are considered and financed together, then the net cash flow requirements constitute the corporate financing problem for capital investment. Whether project finance is handled at the project or at the corporate level does not alter the basic financing problem. In this paper, we first consider facility financing from the owner's perspective, with due consideration for its interaction with the other organizations involved in a project. Later, we discuss the problems of construction financing, which are crucial to the profitability and solvency of construction contractors. The objective of this paper is to present the steps used to determine the best combination for minimum project financing. The proposed model considers financing, schedule and maximum net area, and it is called Project Financing and Schedule Integration using Genetic Algorithms (PFSIGA). The model is intended to determine additional steps (maximum net area) for any project with subprojects. An illustrative example demonstrates the features of this technique, and model verification and testing are also addressed.
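
As a hedged illustration of how a genetic algorithm can couple scheduling decisions with financing requirements (a generic formulation with made-up activity data, not PFSIGA itself), the sketch below shifts activity start periods within their allowed windows to minimize the peak per-period cash outflow.

```python
# Illustrative GA sketch (a generic formulation, not the paper's PFSIGA model):
# choose activity start periods so that the peak per-period cash outflow, and
# hence the financing required in any period, is minimized.
import random

ACTIVITIES = [  # (cost per period, duration, latest allowed start) - hypothetical data
    (10, 3, 4), (15, 2, 5), (8, 4, 3), (12, 2, 6),
]
HORIZON = 10

def peak_outflow(starts):
    """Largest per-period cash outflow implied by the chosen start periods."""
    outflow = [0.0] * HORIZON
    for (cost, dur, _), s in zip(ACTIVITIES, starts):
        for t in range(s, min(s + dur, HORIZON)):
            outflow[t] += cost
    return max(outflow)

def random_solution():
    return [random.randint(0, late) for (_, _, late) in ACTIVITIES]

pop = [random_solution() for _ in range(30)]
for _ in range(100):                                        # generations
    pop.sort(key=peak_outflow)
    parents = pop[:10]
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        child = [random.choice(genes) for genes in zip(a, b)]   # uniform crossover
        i = random.randrange(len(child))                        # point mutation
        child[i] = random.randint(0, ACTIVITIES[i][2])
        children.append(child)
    pop = parents + children

best = min(pop, key=peak_outflow)
print("best start periods:", best, "-> peak per-period outflow:", peak_outflow(best))
```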

A New Extended Group Mutual Exclusion Algorithm with Low Message Complexity in Distributed Systems

The group mutual exclusion (GME) problem is an interesting generalization of the mutual exclusion problem. In group mutual exclusion, multiple processes can enter the critical section simultaneously if they belong to the same group. In the extended group mutual exclusion problem, each process is a member of multiple groups at the same time. As a result, after a process selects a group and enters the critical section, other processes can select that same group from among the groups they belong to and enter the critical section at the same time, which avoids unnecessary blocking. This paper presents a quorum-based distributed algorithm for the extended group mutual exclusion problem. The message complexity of our algorithm is O(4Q) in the best case and O(5Q) in the worst case, where Q is the quorum size.
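
The sketch below illustrates, in simplified and assumed form (not the paper's actual protocol or message scheme), how a single quorum member can grant its permission to concurrent requests for the currently open group while deferring requests for conflicting groups.

```python
# Simplified sketch (assumed behaviour, not the paper's protocol) of one quorum
# member in group mutual exclusion: it grants permission to requests for the
# currently open group ("session") and defers requests for other groups.
class QuorumMember:
    def __init__(self):
        self.open_group = None     # group currently granted
        self.grants = 0            # outstanding grants for that group
        self.deferred = []         # (process, group) requests waiting

    def request(self, process, group):
        if self.open_group in (None, group):
            self.open_group = group
            self.grants += 1
            return True                      # permission granted
        self.deferred.append((process, group))
        return False                         # blocked: a conflicting group is open

    def release(self, process):
        self.grants -= 1
        if self.grants == 0:                 # session over: serve deferred requests
            self.open_group = None
            waiting, self.deferred = self.deferred, []
            for p, g in waiting:
                self.request(p, g)

m = QuorumMember()
print(m.request("p1", "A"), m.request("p2", "A"), m.request("p3", "B"))
m.release("p1"); m.release("p2")
print("after releases, open group:", m.open_group)
```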

Survey of the Impact of the Production and Adoption of Nanocrops on Food Security

The outlook for food security in the 21st century points to a food shortage, and food production therefore faces a vital problem. A food security strategy is a long-term method applied to assess the food that is required. Meanwhile, the nanotechnology revolution is changing the face of the world. Nanotechnology is an adequate method whose characteristics can be utilized to reduce environmental problems and possibly to improve small farmers' access to food. This article examines the impact of the production and adoption of nanocrops on food security. The study population consists of researchers at the agricultural research center of Esfahan province. The results show that there is a relationship between food security and the use, conversion, distribution and production of nanocrops, operative human resources, operative circumstances, and constraints on the use of nanocrops. Multivariate regression analysis using the enter method shows that operative circumstances and the use, production and constraints on the use of nanocrops have a positive impact on food security and, in four steps, explain 20 percent of it.

Detection of the Maximum Optical Gain of an Erbium-Doped Fiber Amplifier

The technical realization of data transmission using glass fiber began after the development of the diode laser in 1962. Erbium-doped fiber amplifiers (EDFAs) in high-speed networks allow information to be transmitted over longer distances without the use of signal amplification repeaters. These fibers are doped with erbium atoms, whose atomic energy levels allow light at 1550 nm to be amplified. When a signal at 1550 nm enters the erbium-doped fiber, it stimulates emission from erbium atoms that have been excited by an additional pump laser beam at 980 nm. The wavelength and intensity of the semiconductor lasers depend on the temperature of the active zone and on the injection current. The present paper shows the effect of the diode laser temperature and injection current on the optical amplification. From the measured input and output powers, the maximum optical gain of the erbium-doped fiber amplifier can be calculated.
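
With the input and output powers measured, the gain follows from the standard relation G(dB) = 10 log10(P_out / P_in); the small sketch below uses illustrative values, not the paper's measurements.

```python
# Small sketch (illustrative values, not the paper's measurements): computing
# EDFA gain in dB from measured input and output signal powers.
import math

def gain_db(p_in_mw, p_out_mw):
    """Optical gain G = 10 * log10(P_out / P_in)."""
    return 10.0 * math.log10(p_out_mw / p_in_mw)

p_in  = 0.010     # input signal power at 1550 nm [mW]
p_out = 2.5       # amplified output power [mW]
print(f"gain = {gain_db(p_in, p_out):.1f} dB")   # ~24 dB for these values
```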