Influence of Different Adhesive Mixing Ratios on Wood Bondline Quality

The research study was based on an evaluation of the ability of glued test samples to meet the criterion of sufficient bondline adhesion under the exposure conditions defined in EN 302-1. Additionally, an infrared spectroscopic analysis of the evaluated adhesives (phenol-resorcinol-formaldehyde, PRF, and melamine-urea-formaldehyde, MUF) at different mixing ratios was carried out to assess the possible effects of a faulty technological process.

Numerical Simulation of Convective Heat Transfer and Fluid Flow through Porous Media with Different Moving and Heated Walls

The present study investigates two-dimensional free convective flow and heat transfer, within the framework of the Boussinesq approximation, in an anisotropic fluid-filled porous rectangular enclosure subjected to an end-to-end temperature difference, using the Lattice Boltzmann method with a non-Darcy flow model. The effects of the moving-lid direction (top, bottom, left, or right wall moving in the negative or positive x and y directions), the number of moving walls (one or two opposite walls), the sliding wall velocity, and four different configurations of constant-temperature opposite walls (two surfaces insulated and the two other surfaces held at constant hot and cold temperatures) are examined. The results obtained are discussed in terms of the Nusselt number, velocity vectors, contours, and isotherms.
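
For readers unfamiliar with the method, the sketch below shows the computational core of a lattice Boltzmann solver: a single D2Q9 BGK collision-and-streaming step on a periodic grid. It is a minimal illustration only; the porous-medium drag terms, thermal coupling, and wall boundary conditions of the actual non-Darcy model are omitted.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """Maxwellian equilibrium distributions for D2Q9."""
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau):
    """One BGK collision + streaming step; returns updated distributions."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau        # BGK collision
    for i, (cx, cy) in enumerate(c):                 # streaming (periodic)
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f

# toy run: quiescent fluid on a 64 x 64 periodic grid
f = equilibrium(np.ones((64, 64)), np.zeros((64, 64)), np.zeros((64, 64)))
for _ in range(100):
    f = lbm_step(f, tau=0.6)
```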

Incorporation of Long-Term Redundancy in ECG Time Domain Compression Methods through Curve Simplification and Block-Sorting

We suggest a novel method to incorporate long-term redundancy (LTR) in signal time domain compression methods. The method is based on block-sorting and curve simplification, and is illustrated on the ECG signal as a post-processor for the FAN method. Tests of the resulting FAN+ method on the MIT-BIH database show a substantial improvement of the compression ratio-distortion behavior, yielding a higher quality reconstructed signal.
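
As background for the pre-stage named above, here is a minimal sketch of the classical first-order FAN compressor; the block-sorting and curve-simplification post-processing proposed in the paper is not shown, and `eps` denotes the reconstruction tolerance.

```python
def fan_compress(x, eps):
    """First-order FAN compressor: keep only the samples needed so that
    linear interpolation reconstructs the signal within +/- eps."""
    kept = [0]                                   # indices of retained samples
    origin = 0
    U, L = float('inf'), float('-inf')
    for n in range(1, len(x)):
        U = min(U, (x[n] + eps - x[origin]) / (n - origin))  # upper slope
        L = max(L, (x[n] - eps - x[origin]) / (n - origin))  # lower slope
        if L > U:                                # fan has closed: keep sample
            kept.append(n - 1)
            origin = n - 1
            U = (x[n] + eps - x[origin]) / (n - origin)
            L = (x[n] - eps - x[origin]) / (n - origin)
    if kept[-1] != len(x) - 1:
        kept.append(len(x) - 1)                  # always keep the last sample
    return kept
```

The retained samples are transmitted as (index, value) pairs, and the decoder rebuilds the signal by linear interpolation between them.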

Robust Face Recognition Using Eigenfaces and the Karhunen-Loeve Algorithm

This paper presents an implementation of eigenfaces and the Karhunen-Loeve algorithm for face recognition. The program assigns a unique identification number to each face under trial. These faces are stored in a database against which any particular face can be matched among the available test faces. The Karhunen-Loeve algorithm is used to find the matching face (the one with the same features) for a given input test image and its unique identification number. The procedure uses eigenfaces for the recognition of faces.
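
A minimal sketch of the eigenface (Karhunen-Loeve) procedure is given below, assuming flattened grayscale images of equal size; the function and variable names are ours, not the paper's.

```python
import numpy as np

def train_eigenfaces(faces, k):
    """faces: (n_images, n_pixels) matrix of flattened face images.
    Returns the mean face and the top-k eigenfaces (Karhunen-Loeve basis)."""
    mean = faces.mean(axis=0)
    A = faces - mean
    # eigen-decomposition of the small n x n Gram matrix (Turk-Pentland trick)
    eigvals, V = np.linalg.eigh(A @ A.T)
    order = np.argsort(eigvals)[::-1][:k]
    U = A.T @ V[:, order]                   # map back to pixel space
    U /= np.linalg.norm(U, axis=0)          # unit-length eigenfaces
    return mean, U

def identify(mean, U, gallery, probe):
    """Project a probe image and return the ID (row index) of the nearest
    gallery face in eigenface-weight space."""
    g = (gallery - mean) @ U                # gallery weights, one row per ID
    p = (probe - mean) @ U
    return int(np.argmin(np.linalg.norm(g - p, axis=1)))
```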

Matching Pursuit based Removal of Cardiac Pulse-Related Artifacts in EEG/fMRI

Cardiac pulse-related artifacts in the EEG recorded simultaneously with fMRI are complex and highly variable. Their effective removal is an unsolved problem. Our aim is to develop an adaptive removal algorithm based on the matching pursuit (MP) technique and to compare it to established methods using a visual evoked potential (VEP). We recorded the VEP inside the static magnetic field of an MR scanner (with artifacts) as well as in an electrically shielded room (artifact free). The MP-based artifact removal outperformed average artifact subtraction (AAS) and optimal basis set removal (OBS) in terms of restoring the EEG field map topography of the VEP. Subsequently, a dipole model was fitted to the VEP under each condition using a realistic boundary element head model. The source location of the VEP recorded inside the MR scanner was closest to that of the artifact free VEP after cleaning with the MP-based algorithm as well as with AAS. While none of the tested algorithms offered complete removal, MP showed promising results due to its ability to adapt to variations of latency, frequency and amplitude of individual artifact occurrences while still utilizing a common template.
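
For illustration, the greedy core of matching pursuit over a generic dictionary is sketched below; the paper's artifact-specific dictionary (template atoms adapted in latency, frequency and amplitude) is replaced here by an arbitrary matrix of unit-norm columns.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy MP: repeatedly subtract the dictionary atom (unit-norm column)
    that best correlates with the current residual."""
    residual = signal.astype(float).copy()
    approx = np.zeros_like(residual)
    for _ in range(n_atoms):
        scores = dictionary.T @ residual        # correlation with every atom
        best = np.argmax(np.abs(scores))
        approx += scores[best] * dictionary[:, best]
        residual -= scores[best] * dictionary[:, best]
    return approx, residual
```

In an artifact-removal setting, `approx` would model the pulse artifact from template-derived atoms and `residual` would be kept as the cleaned EEG.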

Security Risk Analysis Based on the Policy Formalization and the Modeling of Big Systems

Security risk models have been successful in estimating the likelihood of attack for simple security threats. However, modeling complex systems and their security risks remains a challenge. Many methods have been proposed to address this problem, but, being often difficult to use and insufficiently comprehensive, they are not as popular with administrators and decision makers as they should be. In this paper we propose a new tool designed specifically for modeling big systems. The software takes into account attack threats and security strength.

Effect of Plant Growth Promoting Rhizobacteria (PGPR) and Planting Pattern on Yield and Yield Components of Rice (Oryza sativa L.) in Ilam Province, Iran

Many parts of the world, including Iran, face excessive consumption of fertilizers, which are used to achieve high yields but increase production costs and degrade soil and water resources. This experiment was carried out to study the effect of PGPR and planting pattern on yield and yield components of rice (Oryza sativa L.) using a split-plot arrangement based on a randomized complete block design with three replications in Ilam province, Iran. Bio-fertilizer (Azotobacter, Nitroxin, and an untreated control) was assigned to the main plots, and planting pattern (15 × 10, 15 × 15, and 15 × 20 cm) together with the number of plants per hill (3, 4, and 5) were assigned to the sub-plots. The results showed that bio-fertilizer, planting pattern, and the number of plants per hill significantly affected yield and yield components. The interaction between bio-fertilizer and planting pattern significantly affected the number of spikelets per panicle and the harvest index, and the interaction between bio-fertilizer and the number of plants per hill significantly affected the number of spikelets per panicle. The maximum grain yields were obtained with Nitroxin inoculation, the 15 × 15 planting pattern, and 4 plants per hill, with means of 1110.6, 959.9, and 928.4 g·m-2, respectively.

Segmentation and Recognition of Handwritten Numeric Chains

In this paper we present an off-line system for the recognition of handwritten numeric chains. Our work is divided into two main parts. The first part is the realization of a recognition system for isolated handwritten digits. Here the study is based mainly on evaluating the performance of a neural network trained with the gradient back-propagation algorithm. The parameters used to form the input vector of the neural network are extracted from the binary images of the digits by several methods: the distribution sequence, Barr features, and the centred moments of the different projections and profiles. The second part is the extension of our system to the reading of handwritten numeric chains composed of a variable number of digits. The vertical projection is used to segment the numeric chain into isolated digits, and each digit (or segment) is presented separately to the input of the system developed in the first part (the isolated handwritten digit recognition system). The recognition result for the numeric chain is displayed at the output of the global system.
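
A minimal sketch of the vertical-projection segmentation step is shown below, assuming a binary image with ink pixels equal to 1 and clean inter-digit gaps; touching digits would need additional handling.

```python
import numpy as np

def segment_digits(binary_img):
    """Split a binary image (1 = ink) of a numeric chain into digit images
    using the vertical projection: ink-free columns separate the digits."""
    projection = binary_img.sum(axis=0)          # ink count per column
    in_digit, start, digits = False, 0, []
    for col, count in enumerate(projection):
        if count > 0 and not in_digit:           # a digit starts
            in_digit, start = True, col
        elif count == 0 and in_digit:            # the digit ends
            in_digit = False
            digits.append(binary_img[:, start:col])
    if in_digit:                                 # digit touching right edge
        digits.append(binary_img[:, start:])
    return digits
```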

Underwater Interaction of 1064 nm Laser Radiation with Metal Target

The dynamics of the interaction between 1064 nm laser radiation and a metal target in water was studied by applying the Mach-Zehnder interference technique. A mechanism for generating the well-developed regime of evaporation of a metal surface and a spherical shock wave in water is proposed. The critical NIR intensities for well-developed evaporation of silver and gold targets were determined. The dynamics of the shock waves was investigated at early (tens of nanoseconds) and later (hundreds of nanoseconds) time delays. A transparent expanding plasma-vapor-compressed-water object was visualized and measured. The thickness of the compressed water layer and the pressures behind the shock wave front at later time delays were obtained from the optical treatment of the interferograms.

Tuning of Thermal FEA Using Krylov Parametric MOR for Subsea Application

A dead leg is a typical subsea production system component. CFD is required to model heat transfer within the dead leg, but its solution is time-demanding and thus not suitable for fast prediction or repeated simulations. Therefore there is a need to create a thermal FEA model mimicking the heat flows and temperatures seen in CFD cool-down simulations. This paper describes the conventional way of tuning such a model and a new automated way using parametric model order reduction (PMOR) together with an optimization algorithm. The tuned FE analyses replicate the steady-state CFD parameters within a maximum heat-flow error of 6% and 3% using the manual and PMOR methods, respectively. During cool-down, the relative temperature error of the tuned FEA models is below 5% compared to the CFD. In addition, the PMOR method obtained the correct FEA setup five times faster than the manually tuned FEA.
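
As a rough illustration of the reduction step (not the authors' implementation), the sketch below applies one-sided Arnoldi moment matching to a linear thermal state-space model dx/dt = A x + b u; the parametric sampling and optimization loop of the actual PMOR tuning is only indicated in the closing comment.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def arnoldi_mor(A, b, q):
    """One-sided Arnoldi reduction of dx/dt = A x + b u: builds an orthonormal
    basis V of the Krylov subspace K_q(A^-1, A^-1 b), so the reduced model
    (V'AV, V'b) matches the first q moments of the transfer function at s=0."""
    lu = lu_factor(A)
    v = lu_solve(lu, b)
    V = np.zeros((len(b), q))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(1, q):
        w = lu_solve(lu, V[:, j - 1])            # next Krylov direction
        for i in range(j):                       # modified Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)
    return V.T @ A @ V, V.T @ b, V               # reduced A, reduced b, basis

# a tuning loop would re-assemble A(p) for each candidate parameter set p,
# reduce it, and compare the reduced cool-down response with the CFD reference
```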

Secondary Effects on Water Vapor Transport Properties Measured by Cup Method

The cup method is applied worldwide for the measurement of water vapor transport properties of porous materials. However, in practical applications the experimental results are often used without taking into account some secondary effects which can play an important role under specific conditions. In this paper, the effect of temperature on the water vapor transport properties of cellular concrete is studied, together with the influence of sample thickness. First, the bulk density, matrix density, total open porosity, and sorption and desorption isotherms are measured for material characterization purposes. Then, the steady-state cup method is used for the determination of water vapor transport properties, with measurements performed at several temperatures and for three different sample thicknesses.
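
For orientation, a common way of evaluating steady-state cup data is sketched below, assuming the usual definition of water vapor permeability and the Schirmer equation for the diffusivity of vapor in air; this is a generic evaluation, not necessarily the exact procedure used in the paper.

```python
def vapor_permeability(dm_dt, d, A, dp):
    """Water vapor permeability from steady-state cup data.
    dm_dt: mass change rate through the cup [kg/s], d: sample thickness [m],
    A: exposed sample area [m^2], dp: vapor partial pressure difference [Pa].
    Returns delta in [kg/(m.s.Pa)]."""
    return dm_dt * d / (A * dp)

def resistance_factor(delta, T=293.15, p=101325.0):
    """Water vapor diffusion resistance factor mu = delta_air / delta,
    with delta_air from the Schirmer equation for vapor diffusivity in air."""
    Rv = 461.5                                        # gas constant of water vapor [J/(kg.K)]
    D = 2.306e-5 * (101325.0 / p) * (T / 273.15)**1.81  # diffusivity in air [m^2/s]
    return (D / (Rv * T)) / delta
```

Repeating the evaluation at several temperatures and thicknesses, as the paper does, exposes the secondary effects directly as systematic shifts in delta and mu.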

Modeling Peer-to-Peer Networks with Interest-Based Clusters

In the world of Peer-to-Peer (P2P) networking, different protocols have been developed to make resource sharing and information retrieval more efficient. The SemPeer protocol is a new layer on Gnutella that transforms the connections of the nodes based on semantic information to make information retrieval more efficient. However, this transformation causes high clustering in the network, which decreases the number of nodes reached and therefore the probability of finding a document. In this paper we describe a mathematical model for the Gnutella and SemPeer protocols that captures clustering-related issues, followed by a proposal to modify the SemPeer protocol to achieve moderate clustering. This modification is a form of link management for the individual nodes that allows the SemPeer protocol to be more efficient, because the probability of a successful query in the P2P network is considerably increased. For the validation of the models, we performed a series of simulations, which supported our results.
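
To make the reach argument concrete, the toy simulation below counts the peers covered by Gnutella-style TTL-limited flooding on a random overlay; the topology, sizes, and parameters are ours for illustration. On a highly clustered overlay the same flood revisits neighbors of neighbors and reaches fewer distinct peers.

```python
import random
from collections import deque

def flood_reach(adj, source, ttl):
    """Number of distinct nodes reached by Gnutella-style flooding with a TTL."""
    seen, frontier = {source}, deque([(source, ttl)])
    while frontier:
        node, t = frontier.popleft()
        if t == 0:
            continue
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, t - 1))
    return len(seen)

# a roughly 8-regular random overlay of 5000 peers as a stand-in topology
n, k = 5000, 8
adj = {i: set() for i in range(n)}
for i in range(n):
    while len(adj[i]) < k:
        j = random.randrange(n)
        if j != i:
            adj[i].add(j)
            adj[j].add(i)
print(flood_reach(adj, source=0, ttl=4))
```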

Specification of a Model of Honeypot Attacks Based on Collected Data

Network security remains a priority for almost all companies. Existing security systems have shown their limits; thus a new type of security system was born: honeypots. Honeypots are programs or decoy servers intended to attract attackers so that their behaviour can be studied. It is in this context that the leurre.com project, gathering about twenty platforms, was born. This article aims to specify a model of honeypot attacks. Our model describes, for a given platform, the evolution of attacks according to the hour at which they occur. Afterwards, we identify the most attacked services by studying the attacks on the various ports. It should be noted that this article was elaborated within the framework of the research projects on honeypots at LABTIC (Laboratory of Information Technologies and Communication).
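
The kind of aggregation such a model rests on is sketched below with a hypothetical simplified log format of (timestamp, targeted port) records; the real platform data are considerably richer.

```python
from collections import Counter
from datetime import datetime

# hypothetical simplified honeypot log records: (timestamp, targeted port)
logs = [("2009-03-01 02:13:44", 445), ("2009-03-01 02:59:02", 22),
        ("2009-03-01 14:21:10", 445), ("2009-03-02 03:05:31", 135)]

# attack volume per hour of day, and most attacked services per port
by_hour = Counter(datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").hour
                  for ts, _ in logs)
by_port = Counter(port for _, port in logs)

print(by_hour.most_common(3), by_port.most_common(3))
```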

Hardware Prototyping of an Efficient Encryption Engine

An approach to developing an FPGA implementation of a flexible-key RSA encryption engine that can be used as a standard device in secured communication systems is presented. The VHDL model of this RSA encryption engine has the unique characteristic of supporting multiple key sizes and thus can easily fit into systems that require different levels of security. Simple nested-loop addition and subtraction have been used to implement the RSA operation, which makes processing faster and uses a comparatively small amount of space in the FPGA. The hardware design targets the Altera Stratix II family, and the flexible-key RSA encryption engine was determined to fit best in the EP2S30F484C3 device. The RSA encryption implementation uses 13,779 logic elements and achieves a clock frequency of 17.77 MHz. It has been verified that this RSA encryption engine can perform 32-bit, 256-bit and 1024-bit encryption operations in less than 41.585 µs, 531.515 µs and 790.61 µs, respectively.
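
To illustrate the arithmetic idea in software terms, here is a minimal sketch of modular exponentiation in which each multiplication is carried out with additions, subtractions, and shifts only, mirroring the nested-loop scheme described above; the tiny key in the test line is for demonstration, not one of the paper's key sizes.

```python
def mod_mul_add(a, b, m):
    """Modular multiplication via shift-and-add: only additions, subtractions,
    and shifts, as in the nested-loop hardware approach."""
    result = 0
    a %= m
    while b:
        if b & 1:
            result += a
            if result >= m:
                result -= m
        a += a                    # a = 2a via addition
        if a >= m:
            a -= m
        b >>= 1
    return result

def rsa_encrypt(msg, e, n):
    """Square-and-multiply exponentiation built on the additive multiplier."""
    result, base = 1, msg % n
    while e:
        if e & 1:
            result = mod_mul_add(result, base, n)
        base = mod_mul_add(base, base, n)
        e >>= 1
    return result

assert rsa_encrypt(42, 17, 3233) == pow(42, 17, 3233)   # 3233 = 61 * 53
```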

Face Image Coding Using Face Prototyping

In this paper we present a novel approach to face image coding. The proposed method makes use of video-encoder features such as motion prediction. First, the encoder selects an appropriate prototype from the database and warps it according to the features of the face being encoded. The warped prototype is placed as the first frame (an I frame), and the face being encoded is placed as the second frame (a P frame). Information about feature positions, color change, the selected prototype, and the data flow of the P frame is sent to the decoder. The condition is that both encoder and decoder own the same database of prototypes. We ran an experiment with the H.264 video encoder and compared the obtained results to those achieved by JPEG and JPEG2000. The results show that our approach achieves a three times lower bitrate and a two times higher PSNR in comparison with JPEG. Compared with JPEG2000, the bitrate was very similar, but the subjective quality achieved by the proposed method is better.
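
For reference, the two objective figures quoted in the comparison can be computed as follows; these are the generic definitions of PSNR and bits per pixel, not code from the study.

```python
import numpy as np

def psnr(original, decoded, peak=255.0):
    """Peak signal-to-noise ratio between an original and a decoded 8-bit image."""
    mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def bitrate_bpp(n_bytes, width, height):
    """Coding cost in bits per pixel for a width x height image."""
    return 8 * n_bytes / (width * height)
```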

Sensitivity of Small Disturbance Angle Stability to the System Parameters of Future Power Networks

The incorporation of renewable energy sources for sustainable electricity production is taking on a more prominent role in electric power systems. It is therefore inevitable that the characteristics of future power networks, such as their stability, will be influenced by the features of these sources. One distinctive attribute of sustainable energy sources is their stochastic behavior. This paper investigates the impact of this stochastic behavior on small disturbance rotor angle stability in upcoming electric power networks. Considering the various types of renewable energy sources and the vast variety of system configurations, sensitivity analysis can be an efficient route towards generalizing the effects of new energy sources on stability. In this paper, the definition of small disturbance angle stability for future power systems and an iterative-stochastic way of analyzing it are presented. The effects of system parameters on this type of stability are also described by performing a sensitivity analysis for an electric power test system.
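
A toy version of such an iterative-stochastic sensitivity analysis is sketched below for the classical one-machine linearized swing equation; the model, the parameter values, and the choice of the synchronizing torque as the varied parameter are ours for illustration, not the test system of the paper.

```python
import numpy as np

def damping_ratio(M, D, Ks):
    """Damping ratio of the least-damped mode of the linearized swing
    equation  M*delta'' + D*delta' + Ks*delta = 0."""
    A = np.array([[0.0, 1.0], [-Ks / M, -D / M]])   # small-signal state matrix
    lam = np.linalg.eigvals(A)
    mode = lam[np.argmax(lam.real)]                 # least-damped eigenvalue
    return -mode.real / abs(mode)

# finite-difference sensitivity of damping to the synchronizing torque Ks,
# averaged over stochastic in-feed scenarios (random perturbations of Ks)
rng = np.random.default_rng(0)
M, D, Ks, h = 6.0, 1.2, 1.8, 1e-4
samples = Ks * (1 + 0.1 * rng.standard_normal(500))
sens = np.mean([(damping_ratio(M, D, k + h) - damping_ratio(M, D, k - h)) / (2 * h)
                for k in samples])
print(sens)
```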

Automated Stereophotogrammetry Data Cleansing

The stereophotogrammetry modality is gaining more widespread use in the clinical setting. Registration and visualization of these data, in conjunction with conventional 3D volumetric image modalities, provide virtual human data with textured soft tissue as well as internal anatomical and structural information. In this investigation, computed tomography (CT) and stereophotogrammetry data are acquired from four anatomical phantoms and registered using the trimmed iterative closest point (TrICP) algorithm. This paper addresses the issue of imaging artifacts around the stereophotogrammetry surface edge, using the registered CT data as a reference. Several iterative algorithms are implemented to automatically identify and remove stereophotogrammetry surface edge outliers, improving the overall visualization of the combined stereophotogrammetry and CT data. This paper shows that outliers at the surface edge of stereophotogrammetry data can be successfully removed automatically.
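
One simple iterative scheme of the kind described, assuming both surfaces are available as registered point clouds, might look like the following; the threshold rule and parameters are illustrative, not the paper's.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_edge_outliers(stereo_pts, ct_pts, factor=2.5, max_iter=10):
    """Iteratively drop stereophotogrammetry points whose distance to the
    registered CT surface exceeds factor * the median distance."""
    tree = cKDTree(ct_pts)
    pts = np.asarray(stereo_pts, dtype=float)
    for _ in range(max_iter):
        d, _ = tree.query(pts)                  # nearest CT point per vertex
        keep = d <= factor * np.median(d)
        if keep.all():                          # converged: no outliers left
            break
        pts = pts[keep]
    return pts
```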

Endogenous Fantasy-Based Serious Games: Intrinsic Motivation and Learning

Current technological advances pale in comparison to the changes in social behaviors and 'sense of place' that have been enabled since the Internet arrived on the scene. Today's students view the Internet as both a source of entertainment and an educational tool. The development of virtual environments is a conceptual framework that needs to be addressed by educators, and it is important that they become familiar with who these virtual learners are and how they are motivated to learn. Massively multiplayer online role playing games (MMORPGs), if well designed, could become the vehicle of choice to deliver learning content. We suggest that, in order to accomplish these goals, these games must begin with well-established instructional design principles that are co-aligned with established principles of video game design, and that they have the opportunity to provide an instructional model of significant prescriptive power. The authors believe that game designers need to take advantage of the natural motivation player-learners have for playing games by developing them in such a way as to promote intrinsic motivation, content learning, transfer of knowledge, and naturalization.

Bayes Net Classifiers for Prediction of Renal Graft Status and Survival Period

This paper presents the development of a Bayesian belief network classifier for the prediction of graft status and survival period in renal transplantation, using the patient profile information available prior to transplantation. The objective was to explore the feasibility of developing a decision-making tool for identifying the most suitable recipient among the candidate pool members. The dataset was compiled from University of Toledo Medical Center Hospital patients as reported to the United Network for Organ Sharing (UNOS), and comprised 1228 patient records for the period 1987 through 2009. The Bayes net classifiers were developed using the Weka machine learning software workbench. Two separate classifiers were induced from the data set: one to predict the status of the graft as either failed or living, and a second to predict the graft survival period. The classifier for graft status prediction performed very well, with a prediction accuracy of 97.8% and true positive rates of 0.967 and 0.988 for the living and failed classes, respectively. The second classifier, predicting the graft survival period, yielded a prediction accuracy of 68.2% and a true positive rate of 0.85 for the class representing kidneys failing during the first year following transplantation. The results indicated that it is feasible to develop a successful Bayesian belief network classifier for the prediction of graft status, but not of the graft survival period, using the information in the UNOS database.
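
As a self-contained illustration of probabilistic classification over discrete patient attributes, the sketch below hand-rolls a naive Bayes classifier with Laplace smoothing. It is a simplified stand-in: the study used Weka's Bayes net learners, and the attributes and labels here are invented.

```python
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Naive Bayes over discrete attributes with Laplace smoothing; returns
    a predict function mapping an attribute tuple to the most likely class."""
    prior = Counter(labels)
    counts = defaultdict(Counter)           # (attribute index, class) -> value counts
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            counts[(i, y)][v] += 1

    def predict(row):
        def score(y):
            s = prior[y] / len(labels)
            for i, v in enumerate(row):
                c = counts[(i, y)]
                s *= (c[v] + 1) / (sum(c.values()) + len(c) + 1)  # smoothed
            return s
        return max(prior, key=score)
    return predict

# invented toy data: (blood group, age band) -> graft status
predict = train_nb([("A", "old"), ("B", "young"), ("A", "young")],
                   ["failed", "living", "living"])
print(predict(("A", "young")))
```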

Advanced Geolocation of IP Addresses

Tracing and locating the geographical location of users (Geolocation) is used extensively in today's Internet. Whenever we request a page from Google, for example, we are automatically forwarded (unless a specific configuration was made) to the page in the relevant language and, among other things, shown commercials specific to our identified location. Geolocation has a particularly significant impact within the area of network security. Because of the way the Internet works, attacks can be executed from almost anywhere; therefore, for attribution, knowledge of the origin of an attack - and thus Geolocation - is mandatory in order to be able to trace back an attacker. In addition, Geolocation can also be used very successfully to increase the security of a network during operation (i.e. before an intrusion has actually taken place). Similar to greylisting for email, Geolocation allows one to (i) correlate detected attacks with new connections and (ii) consequently classify traffic a priori as more suspicious (in particular, allowing this traffic to be inspected in more detail). Although numerous Geolocation techniques exist, each strategy is subject to certain restrictions. Following the ideas of Endo et al., this publication tries to overcome these shortcomings with a combined solution of different methods to allow improved and optimized Geolocation. We thus present our architecture for improved Geolocation, designing a new algorithm that combines several Geolocation techniques to increase accuracy.
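
One simple way to combine the outputs of several techniques (the precise combination algorithm of the paper is not reproduced here) is a confidence-weighted spherical average of the individual position estimates; all names and numbers below are illustrative.

```python
import math

def combine_estimates(estimates):
    """Fuse (lat, lon, confidence) estimates from several Geolocation
    techniques into one confidence-weighted position."""
    x = y = z = total = 0.0
    for lat, lon, conf in estimates:
        la, lo = math.radians(lat), math.radians(lon)
        x += conf * math.cos(la) * math.cos(lo)   # average on the unit sphere
        y += conf * math.cos(la) * math.sin(lo)   # to handle the 180th meridian
        z += conf * math.sin(la)
        total += conf
    x, y, z = x / total, y / total, z / total
    lat = math.degrees(math.atan2(z, math.hypot(x, y)))
    lon = math.degrees(math.atan2(y, x))
    return lat, lon

# e.g. a registry lookup, latency triangulation, and a GeoIP database disagree:
print(combine_estimates([(48.2, 16.4, 0.5), (48.9, 16.1, 0.3), (47.8, 17.0, 0.2)]))
```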