Contributions to Design of Systems Actuated by Shape Memory Active Elements

Even though it has been recognized that Shape Memory Alloys (SMA) have significant potential for deployment in actuators, the number of applications of SMA-based actuators remains quite small to the present day. This is due to the deep understanding of the thermo-mechanical behavior of SMA that is required, which in turn creates a pressing need for a mathematical model able to describe all thermo-mechanical properties of SMA with a relatively simple final set of constitutive equations. SMAs offer attractive capabilities such as reversible strains of several percent, generation of high recovery stresses, and high power-to-weight ratios. The paper provides an overview of the shape memory functions and presents the designed and developed temperature control system used for a gripper actuated by two pairs of differential SMA active springs. An experimental setup was established, using electrical energy to heat the actuator's springs. To hold the temperature of the SMA springs at a given level for a long time, a control system was developed in order to avoid overheating the active elements.
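
The abstract does not describe the controller itself; the sketch below shows one common way to hold a heated element near a target temperature without overheating, an on/off (hysteresis) controller. The first-order thermal model, setpoint, and heating rate are placeholders, not values from the paper.

```python
# Minimal sketch of an on/off (hysteresis) temperature controller for one SMA spring.
# The thermal model below is a stand-in for the real spring and heater.

def simulate_hysteresis_control(setpoint=70.0, band=2.0, steps=300, dt=0.5):
    """Keep a simulated spring near `setpoint` (deg C) by switching the heating
    current on below the band and off above it, so the element never overheats."""
    temp, ambient, heating = 20.0, 20.0, False
    history = []
    for _ in range(steps):
        if temp < setpoint - band:
            heating = True                 # switch heating current on
        elif temp > setpoint + band:
            heating = False                # switch heating current off
        power = 8.0 if heating else 0.0    # heating rate in deg C/s (placeholder)
        temp += dt * (power - 0.05 * (temp - ambient))  # simple first-order cooling
        history.append(temp)
    return history

temps = simulate_hysteresis_control()
print(f"final temperature: {temps[-1]:.1f} deg C")
```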

3D Segmentation, Compression and Wireless Transmission of Volumetric Brain MR Images

The main objective of this paper is to provide an efficient tool for delineating brain tumors in three-dimensional magnetic resonance images and to set up a compression and transmission scheme for distributing the result to a remote doctor. To achieve this goal, we use a level-set approach to delineate brain tumors in three dimensions. We then introduce a new compression and transmission plan for 3D brain structures based on mesh simplification, adapted to the specific needs of telemedicine and to the limited capacities of wireless network communication. We present the main stages of our system and preliminary results, which are very encouraging for clinical practice.
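
The exact level-set formulation is not reproduced in the abstract; as an illustration only, a related level-set variant available in scikit-image (morphological Chan-Vese) can delineate a region in a 3D MR volume roughly as sketched below. The file name, normalization, iteration count, and smoothing weight are assumptions.

```python
import nibabel as nib  # assumed input format: NIfTI volume
from skimage.segmentation import morphological_chan_vese

# Load a 3D MR volume (file name is a placeholder) and normalize it to [0, 1].
volume = nib.load("brain_t1.nii.gz").get_fdata()
volume = (volume - volume.min()) / (volume.max() - volume.min())

# Evolve a morphological level set for a fixed number of iterations; the
# checkerboard initialization and smoothing weight are illustrative defaults.
mask = morphological_chan_vese(volume, 100, init_level_set="checkerboard", smoothing=2)

print("segmented voxels:", int(mask.sum()))
```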

Consideration Factors of Moving to a New Destination for Coastland Residents Under Global Warming

Because of global warming and the rising sea level, residents living in the southwestern coastland of Taiwan are faced with submerged land and may move to higher-elevation areas. It is desirable to discuss the key consideration factors for selecting a migration location under five dimensions, “security”, “health”, “convenience”, “comfort” and “socio-economic”, identified from the literature review. This paper uses Structural Equation Modeling (SEM) and a questionnaire survey. The analysis results show that convenience is the most important factor for residents in Taiwan.

The Current Awareness of Just-In-Time Techniques within the Libyan Textile Private Industry: A Case Study

Almost all Libyan industries (both private and public) have struggled with many difficulties during the past three decades. These problems have had a strongly negative impact on the productivity and utilization of many companies within Libya. This paper studies the current awareness and implementation levels of Just-In-Time (JIT) within the Libyan textile private industry. A survey was conducted in this study using an intensive, detailed questionnaire. Based on the analysis of the survey responses, the results show that the management body within the surveyed companies has only a modest strategy towards most of the areas that are considered crucial in any successful implementation of JIT. The results also show variation in the implementation levels of the JIT elements, which range between low and acceptable levels. The paper also identifies limitations within the investigated areas of this industry, and points to areas where senior managers within the Libyan textile industry should take immediate action in order to achieve effective implementation of JIT within their companies.

Assessment of the Effect of Feed Plate Location on Interactions for a Binary Distillation Column

The paper considers the effect of feed plate location on the interactions in a seven-plate binary distillation column. The mathematical model of the distillation column is derived from the mass and energy balance equations for each stage, detailed models for both the reboiler and the condenser, and the heat transfer equations. The Dynamic Relative Magnitude Criterion (DRMC) is used to assess the interactions for different feed plate locations in a seven-plate benzene-toluene binary distillation column (the feed plate is originally at stage 4). The results show that the further the feed plate is moved from its optimum position, the higher the level of interaction becomes.
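
For context, the per-stage balances such a model is built from take a standard form. For a generic internal stage $n$ with liquid holdup $M_n$, liquid and vapour flows $L$, $V$, and compositions $x$, $y$, the component mass balance (shown here for illustration, not quoted from the paper) is

\[ M_n \frac{dx_n}{dt} = L_{n+1}\,x_{n+1} + V_{n-1}\,y_{n-1} - L_n x_n - V_n y_n , \]

with an additional feed term $F z_F$ on the feed stage, which is the term whose location changes when the feed plate is moved.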

Sterility Examination and Comparative Analyses of Inhibitory Effect of Honey on Some Gram Negative and Gram Positive Food Borne Pathogens in South West Nigeria

Food-borne illnesses have been reported to be a global health challenge. Annual incidences of food-related diseases involve 76 million cases, of which only 14 million can be traced to known pathogens. Poor hygienic practices have contributed greatly to this. It has been reported that in the year 2000 about 2.1 million people died from diarrheal diseases; hence, there is a need to ensure food safety at all levels. This study focused on the sterility examination and inhibitory effect of honey samples on selected gram-negative and gram-positive food-borne pathogens from South West Nigeria. The laboratory examinations revealed the presence of some bacterial and fungal contamination of the honey samples, and showed that the inhibitory activity of the honey samples was more pronounced on the gram-negative bacteria than on the gram-positive bacterial isolates. An antibiotic sensitivity test conducted on the different bacterial isolates also showed that honey inhibited the proliferation of the tested bacteria more effectively than the antibiotics employed.

The Optimization of an Intelligent Traffic Congestion Level Classification from Motorists' Judgments on Vehicle's Moving Patterns

We propose a technique to identify road traffic congestion levels from the velocity of mobile sensors with high accuracy and consistency with motorists' judgments. The data collection utilized a GPS device, a webcam, and an opinion survey. Human perceptions were used to rate the traffic congestion into three levels: light, heavy, and jam. The ratings and velocities were then fed into a decision tree learning model (J48). We successfully extracted vehicle movement patterns to feed into the learning model using a sliding-window technique. The parameters capturing the vehicle moving patterns and the window size were heuristically optimized. The model achieved an accuracy as high as 99.68%. By implementing the model on existing traffic report systems, the reports will cover more comprehensive areas. The proposed method can be applied to any part of the world.
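
The abstract does not list the exact window features; a minimal sketch of the pipeline (sliding windows over a velocity trace, simple per-window statistics, and a scikit-learn decision tree standing in for Weka's J48, a C4.5 implementation) could look like the following. The features, window size, and synthetic data are chosen purely for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def window_features(velocity, window=30, step=10):
    """Slide a fixed-size window over a velocity trace (km/h) and compute
    simple statistics describing the moving pattern inside each window."""
    feats, starts = [], []
    for start in range(0, len(velocity) - window + 1, step):
        w = velocity[start:start + window]
        feats.append([w.mean(), w.std(), w.min(), w.max(),
                      np.mean(np.abs(np.diff(w)))])   # mean speed change
        starts.append(start)
    return np.array(feats), np.array(starts)

# Synthetic trace: three traffic regimes (0=light, 1=heavy, 2=jam).
rng = np.random.default_rng(0)
velocity = np.concatenate([rng.normal(60, 5, 300),    # light
                           rng.normal(25, 5, 300),    # heavy
                           rng.normal(5, 2, 300)])    # jam
X, starts = window_features(velocity)
y = np.digitize(starts, [300, 600])   # label each window by its segment

clf = DecisionTreeClassifier(max_depth=5, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```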

Diagnosing the Cause and Timing of Changes in a Multivariate Process Mean Vector from Quality Control Charts Using an Artificial Neural Network

Quality control charts are very effective in detecting out-of-control signals, but when a control chart signals an out-of-control condition of the process mean, searching for a special cause in the vicinity of the signal time does not always lead to prompt identification of the source(s) of the out-of-control condition, as the change point in the process parameter(s) is usually different from the signal time. It is very important for the manufacturer to determine at what point, and through which parameters, the signal was caused. Early warning of a process change would expedite the search for the special causes and enhance quality at lower cost. In this paper the quality variables under investigation are assumed to follow a multivariate normal distribution with known means and variance-covariance matrix; the process means after a one-step change are assumed to remain at the new level until the special cause is identified and removed; and it is also supposed that only one variable can change at a time. This research applies an artificial neural network (ANN) to identify the time the change occurred and the parameter which caused the change or shift. The performance of the approach was assessed through a computer simulation experiment. The results show that the neural network performs effectively and equally well over the whole range of shift magnitudes considered.
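
As an illustration of the simulation setup (not the authors' network), one can generate windows of multivariate normal observations in which a single variable's mean shifts at a random change point and stays at the new level, then train a small neural network to identify the shifted variable; the change time could be handled analogously, e.g. by a second network or regression output. Window length, shift size, and network size below are arbitrary.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
p, n = 3, 30          # number of quality variables, observations per window

def make_window(shift_var, shift_size=1.5):
    """One window with a one-step mean shift: a single variable shifts at a
    random change point and remains at the new level afterwards."""
    x = rng.standard_normal((n, p))
    tau = rng.integers(5, n - 5)          # unknown change point
    x[tau:, shift_var] += shift_size
    return x.ravel(), tau

X, y = [], []
for _ in range(3000):
    var = rng.integers(p)
    window, _ = make_window(var)          # the change time tau is not used here
    X.append(window)
    y.append(var)
X, y = np.array(X), np.array(y)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0)
net.fit(Xtr, ytr)
print("shifted-variable identification accuracy:", net.score(Xte, yte))
```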

Ventilation Efficiency in the Subway Environment for the Indoor Air Quality

Clean air in subway stations is important to passengers. Platform Screen Doors (PSDs) can improve indoor air quality in the subway station; however, the air quality in the subway tunnel is degraded. The subway tunnel has a high CO2 concentration and a high indoor particulate matter (PM) value. The Indoor Air Quality (IAQ) level in the subway environment degrades as the frequency of train operation and the number of trains increase. The ventilation systems of the subway tunnel need improvements to achieve better air quality. Numerical analyses can be effective tools for analyzing the performance of subway twin-track tunnel ventilation systems. An existing twin-track tunnel in the metropolitan Seoul subway system is chosen for the numerical simulations. The ANSYS CFX software is used for unsteady computations of the airflow inside the twin-track tunnel while the train moves. The airflow inside the tunnel is simulated when one train runs and when two trains run at the same time in the tunnel. The piston effect inside the tunnel is analyzed when all shafts function as natural ventilation shafts. The air supplied through the shafts is mixed with the polluted air in the tunnel, and the polluted air is exhausted through the mechanical ventilation shafts. The supplied and discharged air volumes are balanced when only one train runs in the twin-track tunnel. The pollutant level in the tunnel is high when two trains run simultaneously in opposite directions and all shafts function as natural shafts, that is, when no electrical power is supplied to the shafts. The remaining polluted air inside the tunnel enters the station platform when the doors are opened.

Environmental Capacity and Sustainability of European Regional Airports: A Case Study

Airport capacity has traditionally been perceived as the number of aircraft operations during a specified time corresponding to a tolerable level of average delay, and it mostly depends on the airside characteristics, on the fleet mix variability and on the ATM. The adoption of Directive 2002/30/EC in the EU countries, however, drives stakeholders to conceive airport capacity in a different way. Airport capacity in this sense is fundamentally driven by environmental criteria, and since acoustic externalities represent the most important factors, these are the ones that could pose a serious threat to the growth of airports and to the aviation market itself in the short to medium term. The importance of regional airports in the deregulated market grew quickly during the last decade, since they represent spokes for network carriers and a preferred destination for low-fare carriers. Not only have regional airports witnessed a fast and unexpected growth in traffic, but also a fast growth in complaints about the nuisance from people living near those airports. In this paper the results of a study conducted in cooperation with the airport of Bologna G. Marconi are presented, in order to investigate airport acoustic capacity as a de facto constraint on airport growth.

Detection of Ultrasonic Images in the Presence of a Random Number of Scatterers: A Statistical Learning Approach

The Support Vector Machine (SVM) is a statistical learning tool initially developed by Vapnik in 1979 and later extended into the more general framework of structural risk minimization (SRM). SVMs play an increasing role in detection problems across various engineering fields, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, an SVM was applied to the detection of medical ultrasound images in the presence of partially developed speckle noise. The simulation was carried out for single-look and multi-look speckle models to give a complete overview of, and insight into, the proposed SVM-based detector. The structure of the SVM was derived and applied to clinical ultrasound images, and its performance in terms of the mean square error (MSE) metric was calculated. We show that the SVM-detected ultrasound images have a very low MSE and are of good quality. The quality of the processed speckled images improved for the multi-look model. Furthermore, the contrast of the SVM-detected images was higher than that of the original noise-free images, indicating that the SVM approach increases the distance between the pixel reflectivity levels (detection hypotheses) of the original images.
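
The detector structure is not given in the abstract; the sketch below only illustrates the general idea, an SVM trained to separate two reflectivity levels from speckle-corrupted pixel neighbourhoods, with MSE against the clean image as the quality metric. The image, the single-look multiplicative speckle model, the neighbourhood size, and the SVM settings are all assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Clean two-level reflectivity image (e.g., lesion on background) and a
# single-look multiplicative speckle version (exponential intensity).
clean = np.zeros((64, 64))
clean[20:44, 20:44] = 1.0
speckled = (clean + 0.2) * rng.exponential(1.0, clean.shape)

def neighbourhoods(img, k=3):
    """Stack k x k pixel neighbourhoods as feature vectors (zero-padded borders)."""
    pad = k // 2
    p = np.pad(img, pad)
    return np.array([p[i:i + k, j:j + k].ravel()
                     for i in range(img.shape[0]) for j in range(img.shape[1])])

X = neighbourhoods(speckled)
y = clean.ravel() > 0.5                      # detection hypotheses per pixel

svm = SVC(kernel="rbf", C=10.0).fit(X, y)    # in practice use a train/test split
detected = svm.predict(X).reshape(clean.shape).astype(float)
print("MSE:", float(np.mean((detected - clean) ** 2)))
```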

Dynamic Adaptability Using Reflexivity for Mobile Agent Protection

The mobile agent paradigm provides a promising technology for the development of distributed and open applications. However, one of the main obstacles to widespread adoption of mobile agents seems to be security. This paper treats the security of the mobile agent against malicious host attacks and describes a generic mobile agent protection architecture. The proposed approach is based on dynamic adaptability and adopts reflexivity as a model of design and implementation. In order to protect the agent against behaviour-analysis attempts, the suggested approach gives the mobile agent the flexibility to present an unexpected behaviour. Furthermore, some classical protection mechanisms are used to reinforce the level of security.
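
The paper's architecture is platform-specific and not detailed in the abstract; as a language-neutral toy illustration of the core idea, using reflection to re-bind an agent's behaviour at run time so its execution trace is harder to analyze, consider the following sketch (all names are illustrative, not from the paper).

```python
import random

class MobileAgent:
    """Toy agent whose observable behaviour can be re-bound at run time."""

    def task_plain(self, data):
        return sum(data)

    def task_decoy(self, data):
        # Functionally equivalent but structurally different variant,
        # inserted to confuse behaviour-analysis attempts.
        total = 0
        for value in reversed(data):
            total += value
        return total

    def execute(self, data):
        # Reflection: look up and invoke a behaviour by name at run time.
        name = random.choice([n for n in dir(self) if n.startswith("task_")])
        return getattr(self, name)(data)

agent = MobileAgent()
print(agent.execute([1, 2, 3]))   # same result, unpredictable execution path
```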

Optimization of the Characteristic Straight Line Method by a “Best Estimate” of Observed, Normal Orthometric Elevation Differences

In this paper, in order to optimize the “Characteristic Straight Line Method”, which is used in soil displacement analysis, a “best estimate” of the geodetic leveling observations has been achieved by taking into account the concept of height systems. This concept is discussed in detail, and consequently so is the concept of “height”. In landslide dynamic analysis, the soil is considered as a mosaic of rigid blocks. The soil displacement has been monitored and analyzed by using the “Characteristic Straight Line Method”, whose characteristic components have been defined and constructed from a “best estimate” of the topometric observations. In measuring the elevation differences, we have used the most modern leveling equipment available, and observational procedures have been designed to provide the most effective way of acquiring data. In addition, systematic errors which cannot be sufficiently controlled by instrumentation or observational techniques are minimized by applying appropriate corrections to the observed data: the level collimation correction minimizes the error caused by non-horizontality of the leveling instrument's line of sight for unequal sight lengths; the refraction correction is modeled to minimize the refraction error caused by temperature (density) variation of the air strata; the rod temperature correction accounts for variation in the length of the leveling rod's Invar/LO-VAR® strip resulting from temperature changes; the rod scale correction ensures a uniform scale which conforms to the international length standard; and the concept of height systems is introduced, in which all types of height (orthometric, dynamic, normal, gravity correction, and equipotential surface) have been investigated. The “Characteristic Straight Line Method” is slightly more convenient than the “Characteristic Circle Method”: it permits the evaluation of a displacement of very small magnitude, even when the displacement is an infinitesimal quantity. The inclination of the landslide is given by the inverse of the distance from the reference point O to the “Characteristic Straight Line”, and its direction is given by the bearing of the normal directed from point O to the Characteristic Straight Line (Fig. 6). A “best estimate” of the topometric observations was used to measure the elevation of carefully selected points, before and after the deformation. Gross errors have been eliminated by statistical analyses and by comparing the heights within local neighborhoods. The results of a test using an area where very interesting land surface deformation occurs are reported. Monitoring with different options and a qualitative comparison of results based on a sufficient number of check points are presented.
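
The correction formulas themselves are not reproduced in the abstract. As an illustration only, two of the listed corrections, the rod (Invar strip) temperature correction and a collimation correction for unequal sight lengths, are commonly written in forms like the following; the coefficient values are placeholders and sign conventions vary between agencies.

```python
def rod_temperature_correction(dh, t_mean, t_standard=20.0, alpha=1.0e-6):
    """Correct an observed height difference dh (m) for thermal expansion of the
    rod's Invar strip; alpha is the strip expansion coefficient (placeholder)."""
    return dh * (1.0 + alpha * (t_mean - t_standard))

def collimation_correction(dh, sum_backsight, sum_foresight, c):
    """Correct dh (m) for a non-horizontal line of sight when the accumulated
    backsight and foresight distances (m) are unequal; c is the collimation
    error per unit distance (placeholder convention)."""
    return dh - c * (sum_backsight - sum_foresight)

# Example usage with illustrative numbers only.
print(rod_temperature_correction(1.23456, t_mean=27.0))
print(collimation_correction(1.23456, sum_backsight=250.0, sum_foresight=240.0, c=2e-5))
```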

Estimation of Time-Varying Linear Regression with Unknown Time-Volatility via Continuous Generalization of the Akaike Information Criterion

The problem of estimating time-varying regression is inevitably concerned with the necessity to choose the appropriate level of model volatility, ranging from the full stationarity of instant regression models to their absolute independence of each other. In the stationary case the number of regression coefficients to be estimated equals that of the regressors, whereas the absence of any smoothness assumptions augments the dimension of the unknown vector by a factor of the time-series length. The Akaike Information Criterion is a commonly adopted means of adjusting a model to a given data set within a succession of nested parametric model classes, but its crucial restriction is that the classes are rigidly defined by the growing integer-valued dimension of the unknown vector. To make the Kullback information maximization principle underlying the classical AIC applicable to the problem of time-varying regression estimation, we extend it to a wider class of data models in which the dimension of the parameter is fixed, but the freedom of its values is softly constrained by a family of continuously nested a priori probability distributions.
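
For reference, the classical criterion being generalized is, for a model with $k$ free parameters and maximized likelihood $\hat{L}$,

\[ \mathrm{AIC} = 2k - 2\ln \hat{L} , \]

so that the integer dimension $k$ is exactly the quantity the proposed continuous generalization replaces with a softly constrained, continuously nested family of prior distributions over a fixed-dimensional parameter.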

A Sub-Pixel Image Registration Technique with Applications to Defect Detection

This paper presents a useful sub-pixel image registration method using line segments and a sub-pixel edge detector. In this approach, straight line segments are first extracted from gray images at the pixel level before applying the sub-pixel edge detector. Next, all sub-pixel line edges are mapped onto the orientation-distance parameter space to solve for line correspondence between images. Finally, the registration parameters with sub-pixel accuracy are analytically solved via two linear least-squares problems. The present approach can be applied to various fields where fast registration with sub-pixel accuracy is required. To illustrate, the present approach is applied to the inspection of printed circuits on a flat panel. A numerical example shows that the present approach is effective and accurate when the target images contain a sufficient number of line segments, which is true in many industrial problems.
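
The abstract only states that the parameters follow from two linear least-squares problems. Assuming the usual orientation-distance (theta, rho) line parameterization, a rigidly transformed line satisfies theta' = theta + phi and rho' = rho + tx*cos(theta') + ty*sin(theta'), which suggests the sketch below; matched line pairs are assumed given, angle wrapping and noise handling are omitted, and this is an illustration rather than the authors' derivation.

```python
import numpy as np

def register_from_lines(theta, rho, theta_p, rho_p):
    """Estimate rotation phi and translation (tx, ty) from matched line pairs
    in orientation-distance form: theta' = theta + phi,
    rho' = rho + tx*cos(theta') + ty*sin(theta')."""
    # First linear least-squares problem: rotation from the angle differences.
    phi = np.mean(theta_p - theta)

    # Second linear least-squares problem: translation from distance differences.
    A = np.column_stack([np.cos(theta_p), np.sin(theta_p)])
    b = rho_p - rho
    (tx, ty), *_ = np.linalg.lstsq(A, b, rcond=None)
    return phi, tx, ty

# Synthetic check with three matched lines.
theta = np.array([0.1, 0.8, 1.9])
rho = np.array([10.0, 25.0, 7.0])
phi_true, t_true = 0.05, np.array([2.0, -1.5])
theta_p = theta + phi_true
rho_p = rho + t_true[0] * np.cos(theta_p) + t_true[1] * np.sin(theta_p)
print(register_from_lines(theta, rho, theta_p, rho_p))
```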

Genetic Algorithm Parameters Optimization for Bi-Criteria Multiprocessor Task Scheduling Using Design of Experiments

Multiprocessor task scheduling is an NP-hard problem, and the Genetic Algorithm (GA) has proved to be an excellent technique for finding near-optimal solutions. In the past, several GA-based methods have been proposed for this problem, but all of them consider a single criterion. In the present work, minimization of a bi-criteria objective for the multiprocessor task scheduling problem is considered, namely the weighted sum of makespan and total completion time. The efficiency and effectiveness of a genetic algorithm can be improved by optimizing its parameters, such as the crossover and mutation operators, crossover probability, selection function, etc. The effects of the GA parameters on minimization of the bi-criteria fitness function, and the subsequent setting of those parameters, have been accomplished by the central composite design (CCD) approach of response surface methodology (RSM) from Design of Experiments. The experiments have been performed with different levels of the GA parameters, and analysis of variance has been carried out to identify the parameters significant for minimizing makespan and total completion time simultaneously.
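
The abstract does not spell out the fitness function; a minimal sketch of a weighted-sum bi-criteria fitness for a multiprocessor schedule (ignoring precedence constraints and any encoding details of the authors' GA) could look like this, with the weight and example data chosen arbitrarily.

```python
def bicriteria_fitness(schedule, proc_time, weight=0.5):
    """Weighted-sum fitness to be minimized by the GA.

    schedule: list of task lists, one per processor, in execution order
    proc_time: dict mapping task -> processing time
    Returns weight*makespan + (1 - weight)*total_completion_time.
    """
    makespan, total_completion = 0.0, 0.0
    for tasks in schedule:
        t = 0.0
        for task in tasks:
            t += proc_time[task]          # completion time of this task
            total_completion += t
        makespan = max(makespan, t)
    return weight * makespan + (1 - weight) * total_completion

# Example: 5 tasks on 2 processors.
times = {1: 3.0, 2: 2.0, 3: 4.0, 4: 1.0, 5: 2.0}
print(bicriteria_fitness([[1, 4], [3, 2, 5]], times))
```

In a CCD/RSM study, this fitness (averaged over GA runs) would serve as the response, with crossover probability, mutation probability, and so on as the design factors.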

Drafting the Design and Development of a Micro-Controller Based Portable Soil Moisture Sensor for Advancement in Agro Engineering

Moisture is an important consideration in many areas, ranging from irrigation, soil chemistry, golf courses, corrosion and erosion, road conditions, weather prediction and livestock feed moisture levels to water seepage. Vegetation and crops always depend more on the moisture available at the root level than on the occurrence of precipitation. In this paper, the design of an instrument is discussed which reports the variation in the moisture content of soil. This is done by measuring the amount of water in the soil through the variation in soil capacitance, with the help of a capacitive sensor. The greatest advantage of a soil moisture sensor is reduced water consumption. The sensor can also be used to set lower and upper thresholds to maintain optimum soil moisture saturation and minimize water wilting; this contributes to deeper plant root growth, reduced soil run-off and leaching, and less favorable conditions for insects and fungal diseases. The capacitance method is preferred because it provides the absolute amount of water content and can measure the water content at any depth.
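
The paper's calibration is not given in the abstract; as a rough illustration of the capacitance method and the threshold logic, one widely used empirical relation (Topp's equation) converts apparent dielectric constant to volumetric water content. The capacitance-to-permittivity ratio below is an idealization of the probe geometry, and the threshold values are placeholders.

```python
def volumetric_water_content(c_soil, c_air):
    """Estimate volumetric water content from a capacitive probe reading.

    The apparent dielectric constant is approximated as the ratio of the
    capacitance in soil to the capacitance in air (idealized probe geometry),
    then converted with Topp's empirical equation."""
    eps = c_soil / c_air
    return -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps**2 + 4.3e-6 * eps**3

def irrigation_state(theta, lower=0.15, upper=0.30):
    """Threshold logic (placeholder bounds): irrigate below the lower bound,
    stop above the upper bound, otherwise hold the current state."""
    if theta < lower:
        return "irrigate"
    if theta > upper:
        return "stop"
    return "hold"

print(irrigation_state(volumetric_water_content(c_soil=85e-12, c_air=22e-12)))
```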

Gap Analysis of Cassava Sector in Cameroon

Recently, cassava has been a driving force of economic progress in many developing countries. To attain this level, prerequisites were put in place enabling the cassava sector to become an industrial and highly competitive crop. Cameroon can achieve the same results; moreover, it can upgrade the living conditions of both rural and urban dwellers and stimulate the development of the whole economy. Achieving this outcome calls for agricultural policy reforms, and the adoption and implementation of adequate policies go hand in hand with efficient strategies. To choose effective strategies, an in-depth investigation of the sector's problems is highly recommended. This paper uses the gap analysis method to evaluate the cassava sector in Cameroon. It studies the present situation (where it is now), interrogates the future (where it should be) and finally proposes solutions to fill the gap.

Body Composition Index Predicts Children’s Motor Skills Proficiency

Failure to master motor skills during childhood has been seen as a detrimental factor for children becoming physically active. Lack of motor skill proficiency tends to reduce children’s competence and confidence to participate in physical activity, and as a consequence of lower participation in physical activity, children tend to become overweight or obese. It has been suggested that children who master motor skill proficiency will be more involved in physical activity, thus preventing them from becoming overweight. Obesity has become a serious childhood health issue worldwide. Previous studies have found that children who were overweight or obese were generally less active; however, these studies focused on one gender. This study aims to compare the motor skill proficiency of underweight, normal-weight, overweight and obese young boys, as well as to determine the relationship between motor skill proficiency and body composition. 112 boys aged between 8 and 10 years old participated in this study. Participants were assigned to four groups (underweight, normal-weight, overweight and obese) using the BMI-for-age percentile chart for children. The Bruininks-Oseretsky Test, Second Edition, Short Form was administered to assess their motor skill proficiency, while body composition was determined by skinfold thickness measurement. Results indicated that underweight and normal-weight children were superior in motor skill proficiency compared to overweight and obese children (p < 0.05). A significant, strong inverse correlation between motor skill proficiency and body composition (r = -0.849) was noted. The findings of this study could be explained by the non-contributory mass carried by overweight and obese children, which leads to biomechanical movement inefficiency and is detrimental to motor skill proficiency. It can be concluded that motor skill proficiency is inversely correlated with body composition.

Dynamic Load Balancing Strategy for Grid Computing

Workload and resource management are two essential functions provided at the service level of the grid software infrastructure. To improve the global throughput of these software environments, workloads have to be evenly scheduled among the available resources, and to achieve this goal several load balancing strategies and algorithms have been proposed. Most strategies, however, were developed with a homogeneous set of sites linked by homogeneous, fast networks in mind. For computational grids we must address new issues, namely heterogeneity, scalability and adaptability. In this paper, we propose a layered algorithm which achieves dynamic load balancing in grid computing. Based on a tree model, our algorithm presents the following main features: (i) it is layered; (ii) it supports heterogeneity and scalability; and (iii) it is totally independent of any physical architecture of a grid.
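
The algorithm itself is described in the body of the paper; as a generic illustration of the layered, tree-based idea only (not the authors' exact algorithm), the sketch below first balances the nodes inside each site and lets the upper level intervene only across aggregated site loads. Site names, node names, and loads are invented for the example.

```python
def balance_level(loads):
    """Return transfer suggestions (src, dst, amount) that move work from
    above-average to below-average members of one tree level."""
    avg = sum(loads.values()) / len(loads)
    over = {k: v - avg for k, v in loads.items() if v > avg}
    under = {k: avg - v for k, v in loads.items() if v < avg}
    transfers = []
    for src, surplus in over.items():
        for dst in list(under):
            if surplus <= 0:
                break
            amount = min(surplus, under[dst])
            transfers.append((src, dst, amount))
            surplus -= amount
            under[dst] -= amount
            if under[dst] == 0:
                del under[dst]
    return transfers

# Lower layer: balance inside each site; upper layer: balance aggregated site loads.
sites = {"siteA": {"n1": 10, "n2": 2}, "siteB": {"n3": 1, "n4": 3}}
for name, nodes in sites.items():
    print(name, balance_level(nodes))
site_totals = {name: sum(nodes.values()) for name, nodes in sites.items()}
print("root", balance_level(site_totals))
```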