Improving Worm Detection with Artificial Neural Networks through Feature Selection and Temporal Analysis Techniques

Computer worm detection is commonly performed by antivirus software tools that rely on prior explicit knowledge of the worm's code (detection based on code signatures). We present an approach for detecting the presence of computer worms based on Artificial Neural Networks (ANN) using the computer's behavioral measures. The significant features that describe the activity of a worm within a host are commonly identified by security experts. We suggest acquiring these features by applying feature selection methods. We compare three different feature selection techniques for dimensionality reduction and identification of the most prominent features, in order to efficiently capture the computer's behavior in the context of worm activity. Additionally, we explore three different temporal representation techniques for the most prominent features. In order to evaluate the different techniques, several computers were infected with five different worms and 323 different features of the infected computers were measured. We evaluated each technique by preprocessing the dataset accordingly and training the ANN model with the preprocessed data. We then evaluated the ability of the model to detect the presence of a new computer worm, in particular during heavy user activity on the infected computers.
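
As a rough illustration of the kind of pipeline described above (not the authors' implementation), the sketch below combines a filter-type feature-selection step with a small feed-forward ANN using scikit-learn; the feature matrix, the labels, and the choice of mutual information as the ranking criterion are assumptions for illustration only.

```python
# Minimal sketch (not the authors' code): rank behavioral features, then train
# an ANN detector on the reduced set. X is a hypothetical (samples x 323)
# matrix of host behavioral measures; y marks worm presence (1) vs. clean (0).
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 323))      # placeholder behavioral measurements
y = rng.integers(0, 2, size=1000)     # placeholder worm / no-worm labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# One possible feature-selection step: keep the 20 most informative features.
selector = SelectKBest(mutual_info_classif, k=20).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

# A small feed-forward ANN trained on the reduced feature set.
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
ann.fit(X_tr_sel, y_tr)
print("hold-out accuracy:", ann.score(X_te_sel, y_te))
```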

Towards Design of Context-Aware Sensor Grid Framework for Agriculture

This paper presents a context-aware sensor grid framework for agriculture and its design challenges. The use of sensor networks in the domain of agriculture is not new. However, due to the unavailability of any common framework, solutions developed in this domain are location, environment and problem dependent. To address the need for a common framework for agriculture, a Context-Aware Sensor Grid Framework is proposed. Owing to its capability to adjust to location and environment, it will be helpful in developing solutions for the majority of problems related to irrigation, pesticide spraying, use of fertilizers, and regular monitoring of plots and yield. The proposed framework is composed of a three-layer architecture comprising a context-aware application layer, a grid middleware layer and a sensor network layer.

Effect of Electromagnetic Fields on Structure and Pollen Grains Development in Chenopodium album L

The role of the pollen grain in the reproductive process of higher plants is to deliver the sperm cells to the embryo sac for egg fertilization. The aim of this project was to study the effect of electromagnetic fields on the structure and development of pollen grains in Chenopodium album. Anthers of Chenopodium album L. were collected at different stages of development from control plants (without electromagnetic field exposure) and from plants grown at 10 m from the field sources. The structure and development of the pollen grains were studied and compared. Study of pollen structure by light and scanning electron microscopy showed that electromagnetic fields caused a reduction in the number of pollen grains and male sterility; furthermore, in some anthers, pollen grains were attached together and deformed compared to control ones. The data presented suggest that prolonged exposure of plants to magnetic fields may cause different biological effects at the cellular, tissue and organ levels.

A Similarity Metric for Assessment of Image Fusion Algorithms

In this paper, we present a novel objective non-reference performance assessment algorithm for image fusion. It takes into account local measurements to estimate how well the important information in the source images is represented by the fused image. The metric is based on the Universal Image Quality Index and uses the similarity between blocks of pixels in the input images and the fused image as the weighting factor for the metric. Experimental results confirm that the values of the proposed metric correlate well with the subjective quality of the fused images, giving a significant improvement over standard measures based on mean squared error and mutual information.
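
For readers unfamiliar with the Universal Image Quality Index (UIQI), the sketch below shows one simplified way a block-wise, similarity-weighted fusion quality measure of this kind can be computed; the block size, the use of local variance as a saliency weight, and the combination rule are illustrative assumptions, not the paper's exact metric.

```python
# Simplified sketch of a block-wise, UIQI-weighted fusion quality measure
# (in the spirit of the metric described above, not its exact definition).
import numpy as np

def uiqi(x, y, eps=1e-12):
    """Universal Image Quality Index of Wang & Bovik over one block."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return (4 * cxy * mx * my) / ((vx + vy) * (mx ** 2 + my ** 2) + eps)

def fusion_quality(a, b, f, block=8):
    """Average per-block UIQI between sources a, b and fused image f,
    weighted by the local variance of each source (a crude saliency)."""
    h, w = a.shape
    scores = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            wa = a[i:i + block, j:j + block]
            wb = b[i:i + block, j:j + block]
            wf = f[i:i + block, j:j + block]
            sa, sb = wa.var(), wb.var()
            lam = sa / (sa + sb + 1e-12)   # weight toward the more salient source
            scores.append(lam * uiqi(wa, wf) + (1 - lam) * uiqi(wb, wf))
    return float(np.mean(scores))
```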

Stress Analysis of Non-persistent Rock Joints under Biaxial Loading

A two-dimensional finite element model was created in this work to investigate the stress distribution within rock-like samples with offset open non-persistent joints under biaxial loading. The results of this study explain the fracture mechanisms observed in tests on rock-like material with open non-persistent offset joints [1]. The finite element code SAP2000 was used to study the stress distribution within the specimens. A four-noded isoparametric plane strain element with two degrees of freedom per node and a three-noded constant strain triangular element with two degrees of freedom per node were used in the present study. The results of the present study explain the formation of wing cracks at the tips of the joints for low confining stress, as well as the formation of wing cracks at the middle of the joint for higher confining stress. The high shear stresses found in the numerical study at the tips of the joints explain the formation of secondary cracks at the joint tips in the experimental study. The study results coincide with the experimental observations, which showed that for a bridge inclination of 0° coalescence occurred due to shear failure, for a bridge inclination of 90° coalescence occurred due to tensile failure, and for the other bridge inclinations coalescence occurred due to mixed tensile and shear failure.
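
As background for the element types mentioned above (and not a reproduction of the SAP2000 model), the sketch below assembles the plane-strain stiffness matrix of a three-noded constant strain triangle with two translational degrees of freedom per node; the material constants and node coordinates are placeholder values.

```python
# Minimal sketch: plane-strain stiffness matrix of a constant-strain triangle
# (CST), two translational d.o.f. per node. Placeholder material properties.
import numpy as np

def cst_stiffness(xy, E=30e9, nu=0.25, t=1.0):
    """6x6 plane-strain stiffness matrix for a CST with node coordinates xy (3x2)."""
    (x1, y1), (x2, y2), (x3, y3) = xy
    A = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))  # element area
    b = np.array([y2 - y3, y3 - y1, y1 - y2])
    c = np.array([x3 - x2, x1 - x3, x2 - x1])
    B = np.zeros((3, 6))          # strain-displacement matrix
    B[0, 0::2] = b
    B[1, 1::2] = c
    B[2, 0::2] = c
    B[2, 1::2] = b
    B /= 2 * A
    f = E / ((1 + nu) * (1 - 2 * nu))   # plane-strain constitutive matrix
    D = f * np.array([[1 - nu, nu, 0],
                      [nu, 1 - nu, 0],
                      [0, 0, (1 - 2 * nu) / 2]])
    return t * A * B.T @ D @ B

K = cst_stiffness(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]))
print(K.shape)   # (6, 6)
```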

Processing the Medical Sensors Signals Using Fuzzy Inference System

Sensors measure various physical quantities. Whether they are devices that convert a sensed signal into an electrical signal, chemical sensors, or biosensors, all of these sensors can be considered an interface between the physical world and electrical equipment. The problem is the analysis of the multitude of recorded parameters as input variables, since they do not all have the same level of influence on the outputs. Identifying the most sensitive parameters can guide users in gathering information in the field, in model calibration, and in sensitivity analysis of the effect of each change made; however, the mathematical models used for such processing become very complex. In this paper, a fuzzy rule-based system is proposed as a solution to this problem. The system collects the available signal information from the sensors. Moreover, the system allows the study of the influence of the various factors that take part in the decision system. Since its inception, fuzzy set theory has been regarded as a formalism suitable for dealing with the imprecision intrinsic to many problems. At the same time, fuzzy sets allow the use of symbolic models. In this study, an example was applied to a variety of physiological parameters that define the human health state, and the application was built as a medical diagnosis aid. The inputs are the signals expressing the cardiovascular system parameters, blood pressure, respiratory system parameters, and so on; once the system is built, it is able to predict the state of the patient for any input values.
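
To make the fuzzy rule-based idea concrete, the toy sketch below encodes two hypothetical vital signs with triangular membership functions and a two-rule, Mamdani-style inference step; the variable ranges, membership functions and rules are illustrative assumptions, not the clinical rule base used in the study.

```python
# Minimal sketch (illustrative only): a tiny fuzzy rule base over two
# hypothetical vital signs. All ranges and rules are assumptions.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def assess(heart_rate, systolic_bp):
    # Fuzzify the two inputs.
    hr_normal = tri(heart_rate, 50, 70, 100)
    hr_high   = tri(heart_rate, 90, 130, 200)
    bp_normal = tri(systolic_bp, 90, 115, 140)
    bp_high   = tri(systolic_bp, 130, 170, 220)

    # Rule 1: normal HR and normal BP -> low risk (min = AND).
    low_risk = min(hr_normal, bp_normal)
    # Rule 2: high HR or high BP -> high risk (max = OR).
    high_risk = max(hr_high, bp_high)

    # Defuzzify as a weighted average of the risk levels 0 (low) and 1 (high).
    return high_risk / (low_risk + high_risk + 1e-12)

print(assess(heart_rate=120, systolic_bp=160))   # closer to 1 -> more critical
```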

On the Design of Shape Memory Alloy Locking Mechanism: A Novel Solution for Laparoscopic Ligation Process

Blood vessels must be occluded to avoid loss of blood during laparoscopic surgeries. This paper presents a locking mechanism to be used in a ligation laparoscopic procedure (LigLAP I) as an alternative to a stapling procedure. Currently, stapling devices are used to occlude vessels. Using these devices may result in some problems, including injury to the bile duct, taking up a great deal of space behind the vessel, and bile leaks. In this new procedure, a two-layer suture occludes a vessel, and a locking mechanism is required to hold the suture. Since there is limited space at the device tip, a Shape Memory Alloy (SMA) actuator is used in this mechanism. Suitability for cleanroom applications, small size, and silent operation are among the advantages of SMA actuators in biomedical applications. An experimental study is conducted to examine the function of the locking mechanism. To set up the experiment, a prototype of the locking mechanism is built using nitinol, a nickel-titanium shape memory alloy. The locking mechanism successfully locks a polymer suture in all runs of the experiment. In addition, the effects of various surface materials on the applied pulling forces are studied: various materials are mounted at the mechanism tip to compare the maximum pulling force applied to the suture for each material. The results show that the different surface materials on the device tip produce large differences in the applied pulling forces.

Areas of Lean Manufacturing for Productivity Improvement in a Manufacturing Unit

Many organisations are nowadays interested in adopting a lean manufacturing strategy that would enable them to compete in today's competitive globalised market. In this respect, it is necessary to assess the implementation of lean manufacturing in different organisations so that the important best practices can be identified. This paper describes the development of key areas that will be used to assess the adoption and implementation of lean manufacturing practices. These key areas are developed to evaluate and select the most suitable projects so as to enhance production efficiency and increase the economic benefits of the manufacturing unit. Lean manufacturing becomes a lean enterprise by treating customers and suppliers as partners, which gives an extra edge in today's cost- and time-competitive markets. The organisation becomes strong on all the conventional points of competition: price, quality and delivery. Lean enterprise owners can deliver high-quality products quickly and at low price.

Aggressive Interactions in Hospital Emergency Units

The international literature emphasizes the concern regarding the phenomenon of aggression in hospitals. This paper focuses on the reality of the aggressive interactions occurring within an emergency triage unit involving three groups of protagonists: the professionals, the patients and their carers. Data were collected using an observation grid into which the various variables reported in the literature were integrated. The observations took place around the clock for three weeks, at a rate of one week per month. In this research, 331 aggressive interactions were recorded and analyzed using the SPSS software. This research is one of the very few continuous observation surveys in the literature. It shows the various human factors at play in the emergence of aggressive interactions. The data may be used both for taking steps in primary prevention, thanks to the analysis of interaction modes, and in secondary prevention, by integrating the useful results into situational prevention.

Evaluation of Linear and Geometrically Nonlinear Static and Dynamic Analysis of Thin Shells by Flat Shell Finite Elements

The choice of which finite element to use in order to predict the nonlinear static or dynamic response of complex structures is an important factor. The main goal of this research work is therefore to study the effect of the in-plane rotational degrees of freedom in linear and geometrically nonlinear static and dynamic analysis of thin shell structures by flat shell finite elements. To this end, first, simple triangular and quadrilateral flat shell finite elements are implemented in an incremental formulation based on the updated Lagrangian co-rotational description for geometrically nonlinear analysis. The triangular element is a combination of the DKT and CST elements, while the quadrilateral is a combination of the DKQ and the bilinear quadrilateral membrane element. In both elements, the sixth degree of freedom is handled by introducing a fictitious stiffness. Secondly, in the same code, the sixth degree of freedom in these elements is handled differently, with the in-plane rotational d.o.f. considered as an effective d.o.f. in the in-plane field interpolation; our goal is to compare the resulting shell elements. Third, the analysis is extended to linear dynamic analysis by direct integration using Newmark's implicit method. Finally, the linear dynamic analysis is extended to geometrically nonlinear dynamic analysis, where Newmark's method is used to integrate the equations of motion and the Newton-Raphson method is employed to iterate within each time step increment until equilibrium is achieved. The obtained results demonstrate the effectiveness and robustness of the interpolation of the in-plane rotational d.o.f. and reveal the deficiencies of using a fictitious stiffness in linear and nonlinear dynamic analysis.
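
As a reminder of how Newmark's implicit method advances the semi-discrete equations of motion, a minimal sketch is given below for the linear case only, with placeholder system matrices; the co-rotational nonlinear update and the Newton-Raphson iterations described above are omitted.

```python
# Minimal sketch (linear case, not the paper's shell code): Newmark's implicit
# average-acceleration scheme for M*a + C*v + K*u = f(t).
import numpy as np

def newmark_linear(M, C, K, f, u0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    u, v = u0.copy(), v0.copy()
    a = np.linalg.solve(M, f(0.0) - C @ v - K @ u)          # initial acceleration
    K_eff = K + gamma / (beta * dt) * C + 1.0 / (beta * dt**2) * M
    history = [u.copy()]
    for n in range(1, n_steps + 1):
        t = n * dt
        rhs = (f(t)
               + M @ (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1) * a)
               + C @ (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        u_new = np.linalg.solve(K_eff, rhs)
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1) * a
        v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        history.append(u.copy())
    return np.array(history)

# Example: a 2-d.o.f. spring-mass system under a constant load (placeholder data).
M = np.diag([1.0, 1.0])
C = 0.02 * np.diag([1.0, 1.0])
K = np.array([[200.0, -100.0], [-100.0, 100.0]])
hist = newmark_linear(M, C, K, lambda t: np.array([0.0, 1.0]),
                      np.zeros(2), np.zeros(2), dt=0.01, n_steps=500)
```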

A Development of Home Service Robot using Omni-Wheeled Mobility and Task-Based Manipulation

In this paper, a smart home service robot, McBot II, which performs mess-cleanup and related household functions, is designed to be more optimal than other service robots. It is a newly developed and much more practical system than McBot I, which we developed two years ago. One characteristic attribute of mobile platforms equipped with a set of dependent wheels is their omni-directionality and their ability to realize complex translational and rotational trajectories for agile indoor navigation. Accurate coordination of the steering angle and spinning rate of each wheel is necessary for consistent motion. This paper develops a trajectory controller for the three-wheeled omni-directional mobile robot using a fuzzy azimuth estimator. A specialized anthropomorphic robot manipulator, which can be attached to the housemaid robot McBot II, is also developed in this paper. This built-in manipulator consists of two arms with 3 DOF (degrees of freedom) each and two hands with 3 DOF each. The robotic arm is optimally designed to satisfy both the minimum mechanical size and the maximum workspace. Minimum mass and length are required for the built-in cooperative-arms system, but this makes the workspace small. This paper proposes an optimal design method to overcome this problem by using a neck joint to move the arms horizontally forward/backward and a waist joint to move them vertically up/down. The robotic hand, which has two fingers and a thumb, is also optimally designed using a task-based concept. Finally, the good performance of the developed McBot II is confirmed through live tests of the mess-cleanup task.
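
To illustrate the wheel-coordination problem mentioned above (independently of McBot II's actual controller), the sketch below computes the rim speed each omni wheel must track for a desired platform velocity and rotation rate; the wheel mounting angles and platform radius are assumed values.

```python
# Minimal sketch (illustrative geometry, not McBot II's controller): inverse
# kinematics of a three-wheeled omni-directional base with wheels assumed to be
# spaced 120 degrees apart at radius R from the platform centre.
import numpy as np

def wheel_speeds(vx, vy, omega, R=0.2, wheel_angles_deg=(90.0, 210.0, 330.0)):
    """Rim speed of each omni wheel for a desired body twist (vx, vy, omega)."""
    speeds = []
    for ang in np.radians(wheel_angles_deg):
        # Project the platform velocity onto the wheel's driving direction and
        # add the contribution of the platform's rotation.
        speeds.append(-np.sin(ang) * vx + np.cos(ang) * vy + R * omega)
    return np.array(speeds)

print(wheel_speeds(vx=0.3, vy=0.0, omega=0.1))   # m/s at each wheel rim
```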

Assessing the Global Water Productivity of Some Irrigation Command Areas in Iran

The great challenge of the agricultural sector is to produce more crop from less water, which can be achieved by increasing crop water productivity. The modernization of irrigation systems offers a number of possibilities to expand the economic productivity of water and improve the virtual water status. The objective of the present study is to assess the global water productivity (GWP) within the major irrigation command areas of I.R. Iran. For this purpose, fourteen irrigation command areas located in different regions of Iran were selected. In order to calculate the global water productivity of the irrigation command areas, all data on the water delivered to the cropping pattern, the cultivated area, the crop water requirements, and the yield during 2002-2006 were gathered. In each of the command areas, it appears that some cultivated crops have a high virtual water content and could thus be replaced by crops with less virtual water. This is suggested merely on the basis of crop water consumption; when replacing crops, economic value as well as cultural and political factors must be considered. The results indicated that the lowest GWP belongs to the Mahyar and Borkhar irrigation areas, 0.24 kg m-3, and the highest is that of the Dez irrigation area, 0.81 kg m-3. The findings demonstrate that water management in these two irrigation areas is only just efficient. The difference in the GWP of the irrigation areas is due to variations in the cropping pattern and the amount of crop production, in addition to the factors affecting water use efficiency in the irrigation areas.
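
For clarity, the GWP values quoted above (in kg m-3) follow the usual definition of crop production per unit of delivered water; a schematic formulation, in our own notation rather than the paper's, is:

```latex
\[
  \mathrm{GWP} \;=\; \frac{\sum_{i} Y_i}{\sum_{i} W_i}
  \qquad \left[\mathrm{kg\,m^{-3}}\right],
\]
```

where $Y_i$ denotes the production of crop $i$ in the command area and $W_i$ the volume of water delivered to it.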

DCGA Based-Transmission Network Expansion Planning Considering Network Adequacy

Transmission network expansion planning (TNEP) is an important component of power system planning whose task is to minimize the network construction and operational costs while satisfying the increasing demand and the imposed technical and economic constraints. Up to now, various methods have been presented to solve the static transmission network expansion planning (STNEP) problem, but in all of these methods, the adequacy rate of the lines has not been studied beyond the planning horizon, i.e., when the expanded network loses its adequacy and needs to be expanded again. In this paper, in order to take the condition of the transmission lines after expansion into account from the line-loading viewpoint, the adequacy of the transmission network is considered in the solution of the STNEP problem. To obtain the optimal network arrangement, a decimal codification genetic algorithm (DCGA) is used to minimize the network construction and operational costs. The effectiveness of the proposed idea is tested on Garver's six-bus network. The evaluation of the results reveals that the annual worth of network adequacy has a considerable effect on the network arrangement. In addition, the network obtained with the DCGA has a lower investment cost and a higher adequacy rate. Thus, the network satisfies the requirements of delivering electric power more safely and reliably to load centers.
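
To show the flavour of a decimal-coded GA on this kind of expansion problem, the toy sketch below searches over the number of parallel lines to add in each candidate corridor; the corridor costs, the adequacy penalty, and the GA operators are placeholders and not the paper's DCGA or its Garver test-case data.

```python
# Toy sketch (not the paper's DCGA): decimal-coded GA choosing how many parallel
# lines (0-3) to build in each candidate corridor of a hypothetical network.
import random

N_CORRIDORS, POP, GENS = 6, 40, 100
LINE_COST = [40, 38, 60, 20, 68, 30]            # hypothetical per-line costs

def fitness(plan):
    build_cost = sum(n * c for n, c in zip(plan, LINE_COST))
    shortfall = max(0, 5 - sum(plan))           # placeholder adequacy requirement
    return build_cost + 1000 * shortfall        # penalise inadequate plans

def crossover(a, b):
    cut = random.randint(1, N_CORRIDORS - 1)
    return a[:cut] + b[cut:]

def mutate(plan, rate=0.1):
    return [random.randint(0, 3) if random.random() < rate else g for g in plan]

pop = [[random.randint(0, 3) for _ in range(N_CORRIDORS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness)
    parents = pop[:POP // 2]                    # simple truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = min(pop, key=fitness)
print("best plan:", best, "cost:", fitness(best))
```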

Modeling of Normal and Atherosclerotic Blood Vessels using Finite Element Methods and Artificial Neural Networks

Analysis of blood vessel mechanics in normal and diseased conditions is essential for disease research, medical device design and treatment planning. In this work, 3D finite element models of a normal vessel and an atherosclerotic vessel with 50% plaque deposition were developed. The models were meshed using a finite number of tetrahedral elements and simulated using actual blood pressure signals. Based on the transient analysis performed on the developed models, parameters such as total displacement, strain energy density and entropy per unit volume were obtained. These parameters were then used to develop artificial neural network models for analyzing normal and atherosclerotic blood vessels. In this paper, the objectives of the study, the methodology and the significant observations are presented.

Investigating Different Options for Reheating the First Converter Inlet Stream of Sulfur Recovery Units (SRUs)

The modified Claus process is the major technology for the recovery of elemental sulfur from hydrogen sulfide. The chemical reactions that can occur in the reaction furnace are numerous, and many byproducts such as carbon disulfide and carbonyl sulfide are produced. These compounds can often contribute from 20 to 50% of the pollutants and should therefore be hydrolyzed in the catalytic converter. The inlet temperature of the first catalytic reactor should be maintained above 250 °C to hydrolyze COS and CS2. In this paper, the various configurations for reheating the first converter inlet stream of a sulfur recovery unit are investigated, and the performance of each method is presented for a typical Claus unit. The results show that the hot gas method appears to perform better than the other methods.

Using Data Mining Techniques for Finding Cardiac Outlier Patients

In this paper, we used data mining techniques to identify outlier patients who use large amounts of drugs over a long period of time. Any healthcare or health insurance system has to deal with the quantities of drugs utilized by chronic disease patients. In the Kingdom of Bahrain, about 20% of the health budget is spent on medications. The managers of healthcare systems do not have enough information about how drugs are utilized by chronic disease patients, whether there is any misuse, and whether there are outlier patients. In this work, carried out in cooperation with the information department of the Bahrain Defence Force Hospital, we selected the data for cardiac patients in the period from 1 January 2008 to 31 December 2008 as the data for the model in this paper. We used three techniques for analyzing the drug utilization of cardiac patients: we first applied a clustering technique, followed by a measurement of clustering validity, and finally applied a decision tree as the classification algorithm. The clustering results divide the patients into three clusters according to drug utilization: the 1603 patients, who received 15,806 prescriptions during this period, are partitioned into three groups, in which 23 patients (2.59%) who received 1,316 prescriptions (8.32%) are classified as outliers. The classification algorithm shows that the average drug utilization, the age, and the gender of the patient can be considered the main predictive factors in the induced model.
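
A rough sketch of the three-step pipeline (clustering, validity measurement, classification) is given below using scikit-learn; the patient feature matrix is synthetic, and the attributes, cluster count and tree depth are assumptions for illustration rather than the hospital dataset.

```python
# Minimal sketch (not the hospital's actual pipeline): cluster patients by drug
# utilisation, check cluster validity with the silhouette score, then fit a
# decision tree on the cluster labels.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
# Hypothetical per-patient features: [age, gender (0/1), avg. drugs per prescription].
X = np.column_stack([rng.integers(30, 90, 500),
                     rng.integers(0, 2, 500),
                     rng.gamma(2.0, 2.0, 500)])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X)
print("silhouette:", silhouette_score(X, kmeans.labels_))

# The smallest, highest-utilisation cluster can be flagged as the outlier group;
# a decision tree then exposes which attributes drive that assignment.
tree = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, kmeans.labels_)
print("feature importances:", tree.feature_importances_)
```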

Scatterer Density in Nonlinear Diffusion for Speckle Reduction in Ultrasound Imaging: The Isotropic Case

This paper proposes a method for speckle reduction in medical ultrasound imaging that preserves edges, with the added advantages of adaptive noise filtering and speed. A nonlinear image diffusion method is proposed that incorporates a local image parameter, namely scatterer density, in addition to gradient, to weight the nonlinear diffusion process. The method was tested for the isotropic case on a contrast-detail phantom and a variety of clinical ultrasound images, and then compared to linear and other diffusion enhancement methods. Different diffusion parameters were tested and tuned to best reduce speckle noise and preserve edges. The method showed superior performance, measured both quantitatively and qualitatively, when scatterer density was incorporated into the diffusivity function. The proposed filter can be used as a preprocessing step for ultrasound image enhancement before applying automatic segmentation, automatic volumetric calculations, or 3D ultrasound volume rendering.
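
The sketch below shows one simplified, isotropic way to fold a local density-like quantity into a Perona-Malik-type diffusivity; the local-mean density proxy, the damping rule, and all parameter values are illustrative assumptions, not the paper's scatterer-density estimator.

```python
# Simplified sketch (illustrative only): isotropic nonlinear diffusion whose
# diffusivity is damped where a crude local "density" proxy is high.
import numpy as np

def density_weighted_diffusion(img, n_iter=50, dt=0.15, kappa=0.05, win=5):
    u = img.astype(float).copy()
    pad = win // 2
    for _ in range(n_iter):
        # Local mean used here as a stand-in for scatterer density (assumption).
        padded = np.pad(u, pad, mode="reflect")
        density = np.zeros_like(u)
        for dy in range(win):
            for dx in range(win):
                density += padded[dy:dy + u.shape[0], dx:dx + u.shape[1]]
        density /= win * win * (u.max() + 1e-12)

        # Finite differences toward the four nearest neighbours.
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u

        def g(d):
            # Perona-Malik diffusivity, further reduced in dense regions.
            return np.exp(-(d / kappa) ** 2) * (1.0 - density)

        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```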

Biomass and Pigment Production by Monascus during Miniaturized Submerged Culture on Adlay

Three reactor types were explored and successfully used for pigment production by Monascus: shake flasks, and shaken and stirred miniaturized reactors. The use of dielectric spectroscopy for on-line measurement of biomass levels was also explored. Shake flasks gave good pigment yields, but scale-up is difficult and they cannot be automated. Shaken bioreactors were less successful for pigment production than stirred reactors; experiments with different impeller speeds and different liquid volumes in the reactor confirmed that this is most likely due to oxygen availability. The availability of oxygen appeared to affect biomass levels less than pigment production; red pigment production in particular needed very high oxygen levels. Dielectric spectroscopy was effectively used to continuously measure biomass levels during the submerged fungal fermentation in the shaken and stirred miniaturized bioreactors, despite the presence of the solid substrate particles. The capacitance signal also gave useful information about the viability of the cells in the culture.

Improving Quality of Business Networks for Information Systems

Computer networks are an essential part of computer-based information systems, and their performance has a great influence on the whole information system. Measuring the usability criteria and customer satisfaction of a small computer network is therefore very important. In this article, an effective approach for measuring the usability of a business network in an information system is introduced. The usability process for networking provides a flexible and cost-effective way to assess the usability of a network and its products. In addition, the proposed approach can be used to certify network product usability late in the development cycle. Furthermore, it can be used to help develop usable interfaces very early in the cycle and provides a way to measure, track, and improve usability. Moreover, a new approach for fast information processing over computer networks is presented. The entire data are collected together in a long vector and then tested as one input pattern. The proposed fast time delay neural networks (FTDNNs) use cross correlation in the frequency domain between the tested data and the input weights of the neural networks. It is proved mathematically and practically that the number of computation steps required by the presented time delay neural networks is less than that needed by conventional time delay neural networks (CTDNNs). Simulation results using MATLAB confirm the theoretical computations.
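
The computational saving claimed for FTDNNs rests on the cross-correlation theorem. The sketch below (illustrative only) compares a direct, time-domain circular cross-correlation with its FFT-based equivalent, with x standing for a tested data vector and w for one neuron's input weights.

```python
# Minimal sketch: circular cross-correlation computed directly in the time
# domain versus via the FFT (the speed-up idea behind the FTDNNs above).
import numpy as np

def xcorr_direct(x, w):
    n = len(x)
    # r[k] = sum_n x[n] * w[(n + k) mod N]
    return np.array([np.dot(x, np.roll(w, -k)) for k in range(n)])

def xcorr_fft(x, w):
    # Correlation theorem: r = IFFT( conj(FFT(x)) * FFT(w) ).
    return np.real(np.fft.ifft(np.fft.fft(x).conj() * np.fft.fft(w)))

x = np.random.default_rng(2).normal(size=256)   # tested data vector
w = np.random.default_rng(3).normal(size=256)   # input weights of one neuron

print(np.allclose(xcorr_direct(x, w), xcorr_fft(x, w)))   # True
```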

The Dividend Payments for General Claim Size Distributions under Interest Rate

This paper evaluates the dividend payments for general claim size distributions in the presence of a dividend barrier. The surplus of a company is modeled using the classical risk process perturbed by diffusion and, in addition, is assumed to accrue interest at a constant rate. After presenting the integro-differential equation, with initial conditions, satisfied by the dividend payments, the paper derives a useful expression for the dividend payments by employing the theory of Volterra equations. Furthermore, the optimal value of the dividend barrier is found. Finally, numerical examples illustrate the optimality of the optimal dividend barrier and the effects of the parameters on the dividend payments.
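
For orientation, one common way to write the model class described above (our notation, not necessarily the paper's exact formulation) is the interest-bearing surplus process perturbed by diffusion:

```latex
\[
  dU_t \;=\; \bigl(c + r\,U_t\bigr)\,dt \;+\; \sigma\,dW_t \;-\; dS_t \;-\; dD_t,
  \qquad
  S_t \;=\; \sum_{i=1}^{N_t} X_i,
\]
```

where $c$ is the premium rate, $r$ the constant interest rate, $W_t$ a standard Brownian motion, $N_t$ a Poisson claim-counting process, the $X_i$ i.i.d. claim sizes with a general distribution, and $D_t$ the cumulative dividends paid under the barrier strategy that keeps $U_t \le b$.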