Multifunctional Electrical Outlet based on Mobile Ad Hoc Network

Nowadays, new home and office appliances are being developed that communicate with users through the Internet for remote monitoring and remote control. However, the development and sale of these new appliances has only just begun, so many of the products already in our houses and offices lack these useful functions. In recent years we have added these functions to the outlet itself, in the form of a multifunctional electrical power socket plug adapter. The outlet measures the power consumption of connected appliances and can also switch their power supply, so the power supply of old appliances can be controlled and monitored. We have also developed a web browser interface that allows users to operate the outlet [1]. However, that system requires LAN cables to be installed between the outlets, and running cables around rooms is inconvenient. In this paper, we develop a system that uses a wireless mobile ad hoc network instead of a wired LAN to communicate with the outlets.

A Graphical Environment for the Petri Net Tool INA Based on Meta-Modelling and Graph Grammars

The Petri net tool INA is well known to the Petri net community. However, it lacks a graphical environment for creating and analysing INA models. Building a modelling tool for design and analysis from scratch (for the INA tool, for example) is generally a prohibitive task. The meta-modelling approach is useful for dealing with such problems, since it allows the formalisms themselves to be modelled. In this paper, we propose an approach based on the combined use of meta-modelling and graph grammars to automatically generate a visual modelling tool for INA for analysis purposes. In our approach, the UML class diagram formalism is used to define a meta-model of INA models. The meta-modelling tool ATOM3 is used to generate a visual modelling tool according to the proposed INA meta-model. We also propose a graph grammar to automatically generate the INA description of graphically specified Petri net models. This allows the user to avoid the errors that arise when the description is written manually. The INA tool is then used to perform the simulation and analysis of the resulting INA description. Our environment is illustrated through an example.

Economic Returns of Using Brewery's Spent Grain in Animal Feed

UK breweries generate extensive by-products in the form of spent grain, slurry and yeast. Much of the spent grain is produced by large breweries and processed in bulk for animal feed. Spent brewery grains contain up to 20% protein by dry weight and up to 60% fiber, and are useful additions to animal feed. Bulk processing is economic and allows spent grain to be sold, providing an income to the brewery. A proportion of spent grain, however, is produced by small local breweries and is distributed more variably to farms or other users using intermittent collection methods. Such use is much less economic and may incur losses if transport costs are not carefully assessed. This study reports the economic returns of using wet brewery spent grain (WBSG) in animal feed using the Co-product Optimizer Decision Evaluator model (Cattle CODE), developed by the University of Nebraska to predict performance and economic returns when by-products are fed to finishing cattle. The results indicated that the distance from brewery to farm had a significantly greater effect on the economics of using small-brewery spent grain, and that alternative uses other than cattle feed may be important to develop.

Learning through Shared Procedures: A Case of Using Technology to Bridge the Gap between Theory and Practice in Officer Education

In this article we explore how computer-assisted exercises may allow for bridging the traditional gap between theory and practice in professional education. To educate officers able to master the complexity of the battlefield, the Norwegian Military Academy needs to develop a learning environment that allows for creating viable connections between the educational environment and the field of practice. In response to this challenge, we explore the conditions necessary to make computer-assisted training systems (CATS) a useful tool for creating structural similarities between an educational context and the field of military practice. Although CATS may facilitate work procedures close to real-life situations, this case demonstrates that professional competence must also build on viable learning theories and environments. This paper explores the conditions that allow simulators to be used to facilitate professional competence from within an educational setting. We develop a generic didactic model that ascribes learning to participation in iterative cycles of action and reflection. The development of this model is motivated by the need to develop an interdisciplinary professional education rooted in the pattern of military practice.

Lateral Crushing of Square and Rectangular Metallic Tubes under Different Quasi-Static Conditions

Impact is one of the most important subjects in mechanical science and has long been studied. The nature of impact makes it hard to control; therefore, when necessary, the transfer of impact to other, vulnerable parts of a structure must be prevented. One of the best methods of absorbing impact energy is the use of thin-walled tubes: these tubes collapse under impact and, by absorbing energy, prevent damage to other parts. The purpose of the present study is to investigate the deformation and energy absorption of tubes with different cross sections (rectangular or square) but identical volume, height, mean cross-sectional thickness and material, under loading at different speeds. The lateral loading of the tubes is of the quasi-static type, and experiments were performed alongside the numerical analysis to evaluate the accuracy of the results. The results indicate that, under the same conditions mentioned above, samples with a square cross section absorb more energy than those with a rectangular cross section, and that energy absorption increases with loading speed.
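
For reference, the energy absorption compared across the specimens is conventionally defined by the standard crushing measures below (general definitions, not results specific to this paper):

```latex
E_{\mathrm{abs}} = \int_{0}^{\delta_{\max}} F(\delta)\,\mathrm{d}\delta , \qquad
\mathrm{SEA} = \frac{E_{\mathrm{abs}}}{m} , \qquad
F_{\mathrm{mean}} = \frac{E_{\mathrm{abs}}}{\delta_{\max}}
```

where F is the crushing force, δ the lateral displacement, and m the specimen mass; keeping volume, height, mean thickness and material identical, as above, makes these measures directly comparable between the square and rectangular tubes.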

Soil-Vegetation Relationships in Arid Rangelands (Case Study: Nodushan Rangelands of Yazd, Iran)

The objective of this research was to identify the vegetation-soil relationships in the Nodushan arid rangelands of Yazd. Five sites were selected for measuring the cover of plant species and soil attributes. Soil samples were taken from the 0-10 and 10-80 cm layers. The species studied were Salsola tomentosa, Salsola arbuscula, Peganum harmala, Zygophylum eurypterum and Eurotia ceratoides. Canonical correspondence analysis (CCA) was used to analyze the data. Based on the CCA results, 74.9% of the vegetation-soil variation was explained by axes 1-3. Axes 1, 2 and 3 accounted for 27.2%, 24.9% and 22.8% of the variance, respectively. The correlations between axes 1, 2 and 3 and the species-edaphic variables were 0.995, 0.989 and 0.981, respectively. Soil texture, lime, salinity and organic matter significantly influenced the distribution of these plant species. Determining soil-vegetation relationships will be useful for managing and improving rangelands in arid and semi-arid environments.

Wavelet Based Qualitative Assessment of Femur Bone Strength Using Radiographic Imaging

In this work, the primary compressive strength components of human femur trabecular bone are qualitatively assessed using image processing and wavelet analysis. The Primary Compressive (PC) component in planar radiographic femur trabecular images (N=50) is delineated by a semi-automatic image processing procedure. An auto-threshold binarization algorithm is employed to recognize the presence of mineralization in the digitized images. Qualitative parameters such as the apparent mineralization and the total area associated with the PC region are derived for normal and abnormal images. The two-dimensional discrete wavelet transform is used to obtain appropriate features that quantify texture changes in the medical images. The normal and abnormal samples of the human femur are comprehensively analyzed using the Haar wavelet. Six statistical parameters, namely the mean, median, mode, standard deviation, mean absolute deviation and median absolute deviation, are derived at level-4 decomposition for both the approximation and horizontal wavelet coefficients. The correlation coefficients of the various wavelet-derived parameters with the normal and abnormal groups are estimated for both the approximation and horizontal coefficients. It is seen that in almost all cases the abnormal samples show a higher degree of correlation than the normal ones. Further, the parameters derived from the approximation coefficients show higher correlation than those derived from the horizontal coefficients. The mean and median computed at the output of the level-4 Haar wavelet channel were found to be useful predictors for delineating the normal and abnormal groups.
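
A minimal sketch of the wavelet step described above, using PyWavelets: a level-4 2-D Haar decomposition followed by simple statistics of the approximation and horizontal detail coefficients. The synthetic image and the reduced statistic set are illustrative assumptions, not the study's data or exact feature list.

```python
import numpy as np
import pywt

image = np.random.rand(256, 256)        # stand-in for a digitized radiographic region

coeffs = pywt.wavedec2(image, wavelet="haar", level=4)
approx = coeffs[0]                      # level-4 approximation coefficients
horiz  = coeffs[1][0]                   # level-4 horizontal detail coefficients

def stats(c):
    """A few of the descriptive statistics mentioned above."""
    c = c.ravel()
    return {
        "mean": np.mean(c),
        "median": np.median(c),
        "std": np.std(c),
        "mean_abs_dev": np.mean(np.abs(c - np.mean(c))),
        "median_abs_dev": np.median(np.abs(c - np.median(c))),
    }

print(stats(approx))
print(stats(horiz))
```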

Detection and Correction of Ectopic Beats for HRV Analysis Applying Discrete Wavelet Transforms

The clinical usefulness of heart rate variability is limited by the range of Holter monitoring software available. These software algorithms require a normal sinus rhythm to accurately acquire heart rate variability (HRV) measures in the frequency domain. Premature ventricular contractions (PVCs), more commonly referred to as ectopic beats and frequent in heart failure, hinder this analysis and introduce ambiguity. This investigation demonstrates an algorithm that automatically detects ectopic beats by analyzing discrete wavelet transform coefficients. Two techniques for filtering and replacing the ectopic beats in the RR signal are compared. One technique applies wavelet hard thresholding and the other applies linear interpolation to replace ectopic cycles. The results demonstrate, through simulation and signals acquired from a 24-hour ambulatory recorder, that these techniques can accurately detect PVCs and remove the noise and leakage effects produced by ectopic cycles, while retaining smooth spectra with minimal error.
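
A minimal sketch of this kind of processing is given below, under explicit assumptions (a single-level db4 transform, a robust threshold, a synthetic RR series, and an approximate index mapping); it is not the paper's tuned detector.

```python
import numpy as np
import pywt

rr = 0.8 + 0.05 * np.random.randn(512)          # synthetic normal RR series (seconds)
rr[100], rr[101] = 0.45, 1.15                   # injected PVC: short beat + compensatory pause

cA, cD = pywt.dwt(rr, "db4")                    # single-level discrete wavelet transform
thresh = 4.0 * np.median(np.abs(cD)) / 0.6745   # robust threshold on detail coefficients
suspect = np.where(np.abs(cD) > thresh)[0]

# map detail-coefficient indices back to (approximate) beat positions
ectopic = np.unique(np.clip(2 * suspect, 0, rr.size - 1))
ectopic = np.union1d(ectopic, np.clip(ectopic + 1, 0, rr.size - 1))

# replace flagged beats by linear interpolation over the remaining beats
good = np.setdiff1d(np.arange(rr.size), ectopic)
rr_clean = rr.copy()
rr_clean[ectopic] = np.interp(ectopic, good, rr[good])
print(sorted(ectopic))
```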

Intensity of Singular Stress Field at the Corner of Adhesive Layer in Bonded Plate

In this paper, the strength of an adhesive joint under tension and bending is discussed on the basis of the intensity of singular stress, obtained by application of the FEM. A useful method is presented that focuses on the stress at the edge of the interface between the adhesive and the adherend, as obtained by FEM. After analyzing the adhesive joint strength for all material combinations, it is found that thin adhesive layers are desirable for improving the interface strength, because the intensity of singular stress decreases with decreasing adhesive thickness.
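
For reference, the intensity of singular stress referred to above is normally defined through the standard asymptotic form of the stress field near an interface corner (stated here as background, not as the paper's derivation):

```latex
\sigma_{ij}(r,\theta) \simeq \frac{K}{r^{\,1-\lambda}}\, f_{ij}(\theta), \qquad 0 < \lambda < 1
```

where r is the distance from the corner, λ is the order of the singularity fixed by the material combination, f_ij(θ) is an angular function, and K is the intensity of the singular stress whose decrease with adhesive thickness is reported above.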

Spreading Dynamics of a Viral Infection in a Complex Network

We report a computational study of the spreading dynamics of a viral infection in a complex (scale-free) network. The final epidemic size distribution (FESD) was found to be unimodal or bimodal depending on the value of the basic reproductive number R0. The FESDs occurred on time scales long enough for intermediate-time epidemic size distributions (IESDs) to be important for control measures. The usefulness of R0 for deciding on the timeliness and intensity of control measures was found to be limited by the multimodal nature of the IESDs and by its inability to inform on the speed at which the infection spreads through the population. A reduction of the transmission probability at the hubs of the scale-free network decreased the occurrence of the larger-sized epidemic events of the multimodal distributions. For effective epidemic control, an early reduction in transmission at the index cell and its neighbors was essential.
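
A minimal sketch of this kind of simulation is given below: discrete-time SIR spreading on a Barabási-Albert (scale-free) graph, with an option to lower the transmission probability at the highest-degree hubs. The network size, probabilities, and hub fraction are illustrative assumptions, not the study's parameters.

```python
import random
import networkx as nx

def sir_epidemic(n=2000, m=3, beta=0.05, gamma=0.2, hub_beta=None, hub_frac=0.01, seed=1):
    """Run one SIR outbreak on a scale-free graph and return the final epidemic size."""
    rng = random.Random(seed)
    G = nx.barabasi_albert_graph(n, m, seed=seed)
    hubs = set(sorted(G, key=G.degree, reverse=True)[: int(hub_frac * n)])
    state = {v: "S" for v in G}
    state[rng.choice(list(G))] = "I"                # single index node
    while any(s == "I" for s in state.values()):
        new_state = dict(state)
        for v in G:
            if state[v] != "I":
                continue
            for u in G[v]:                          # try to infect susceptible neighbors
                if state[u] == "S":
                    p = hub_beta if (hub_beta is not None and v in hubs) else beta
                    if rng.random() < p:
                        new_state[u] = "I"
            if rng.random() < gamma:                # recovery
                new_state[v] = "R"
        state = new_state
    return sum(s == "R" for s in state.values())

sizes = [sir_epidemic(seed=s) for s in range(20)]
print(min(sizes), max(sizes))   # spread of final epidemic sizes across runs
```

Passing a reduced `hub_beta` mimics the hub-targeted reduction of transmission probability discussed above.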

Integrating Fast Karnaugh Map and Modular Neural Networks for Simplification and Realization of Complex Boolean Functions

In this paper, a new fast simplification method is presented. The method realizes Karnaugh maps with a large number of variables. In order to accelerate the operation of the proposed method, a new approach for fast detection of groups of ones is presented. This approach is implemented in the frequency domain. The search operation relies on performing cross correlation in the frequency domain rather than in the time domain. It is proved mathematically and practically that the number of computation steps required by the presented method is less than that needed by conventional cross correlation. Simulation results using MATLAB confirm the theoretical computations. Furthermore, a powerful solution for the realization of complex functions is given. The simplified functions are implemented using a new design for neural networks. Neural networks are used because they are fault tolerant and, as a result, can recognize signals even with noise or distortion. This is very useful for logic functions used in data and computer communications. Moreover, the implemented functions are realized with a minimum number of components. This is done by using modular neural networks (MNNs) that divide the input space into several homogeneous regions. The approach is applied to implement the XOR function, 16 logic functions on the one-bit level, and a 2-bit digital multiplier. Compared to previous non-modular designs, a clear reduction in the order of computations and hardware requirements is achieved.
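
The frequency-domain search at the core of this approach can be sketched as follows: a binary Karnaugh-map array is cross-correlated with a group pattern via the FFT, and positions whose score equals the pattern size contain a complete group of ones. The toy map and the function name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fft_cross_correlation(kmap, pattern):
    """Cross-correlate a binary Karnaugh-map array with a group pattern
    via the frequency domain (circular correlation through the FFT)."""
    F = np.fft.fft2(kmap)
    H = np.fft.fft2(pattern, s=kmap.shape)
    # correlation in the spatial domain = conjugate product in the frequency domain
    return np.real(np.fft.ifft2(F * np.conj(H)))

# toy 4-variable map (4x4) and a 1x2 group of ones to search for
kmap = np.array([[1, 1, 0, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [1, 1, 0, 0]])
pattern = np.ones((1, 2))
scores = fft_cross_correlation(kmap, pattern)
# positions where the score equals the pattern size start a full group of ones
print(np.argwhere(np.isclose(scores, pattern.sum())))
```

The circular wrap inherent in the FFT conveniently matches the wrap-around adjacency of a Karnaugh map.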

Computational Networks for Knowledge Representation

In the artificial intelligence field, knowledge representation and reasoning are important areas for intelligent systems, especially knowledge-base systems and expert systems. Knowledge representation methods play an important role in designing such systems. There have been many models of knowledge, such as semantic networks, conceptual graphs, and neural networks. These models are useful tools for designing intelligent systems. However, they are not suitable for representing knowledge in the domains of real-world applications. In this paper, new models for knowledge representation, called computational networks, are presented. They have been used in designing several knowledge-base systems in education for solving problems, such as a system that supports studying knowledge and solving analytic geometry problems, a program for studying and solving problems in plane geometry, and a program for solving problems about alternating current in physics.
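
As an illustration of what a computational network can look like in code, the sketch below models it as a set of attributes plus computational relations solved by forward chaining; the class names, the rules, and the triangle example are hypothetical and are not taken from the systems described above.

```python
# A hypothetical sketch of a computational network: attributes plus
# computational relations, solved by forward chaining until a goal is known.
import math
from dataclasses import dataclass
from typing import Callable, Dict, List, Set

@dataclass
class Relation:
    inputs: Set[str]                      # attributes the rule needs
    output: str                           # attribute the rule computes
    func: Callable[..., float]

def solve(known: Dict[str, float], relations: List[Relation], goal: str) -> float:
    """Repeatedly apply any relation whose inputs are all known."""
    known = dict(known)
    changed = True
    while goal not in known and changed:
        changed = False
        for r in relations:
            if r.output not in known and r.inputs <= known.keys():
                known[r.output] = r.func(**{k: known[k] for k in r.inputs})
                changed = True
    return known[goal]

# toy knowledge base: a right triangle with legs a and b
relations = [
    Relation({"a", "b"}, "c", lambda a, b: math.hypot(a, b)),          # Pythagoras
    Relation({"a", "b"}, "area", lambda a, b: a * b / 2),
    Relation({"a", "b", "c"}, "perimeter", lambda a, b, c: a + b + c),
]
print(solve({"a": 3.0, "b": 4.0}, relations, "perimeter"))             # 12.0
```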

Community Detection-based Analysis of the Human Interactome Network

The study of proteomics has reached unexpected levels of interest, as a direct consequence of its discovered influence over some complex biological phenomena, such as problematic diseases like cancer. This paper presents a new technique that allows for an accurate analysis of the human interactome network. It is basically a two-step analysis process that involves, first, the detection of each protein's absolute importance through betweenness centrality computation. The second step then determines the functionally related communities of proteins. For this purpose, we use a community detection technique based on the edge betweenness calculation. The new technique was thoroughly tested on real biological data, and the results prove some interesting properties of the proteins involved in the carcinogenesis process. Apart from its experimental usefulness, the novel technique is also computationally effective in terms of execution time. Based on the analysis results, some topological features of cancer-mutated proteins are presented and a possible optimization solution for cancer drug design is suggested.
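
A minimal sketch of this two-step analysis is given below using NetworkX; the toy edge list of gene names stands in for the real human interactome data and is purely illustrative.

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman

# illustrative protein-protein interaction edges (not real interactome data)
edges = [("TP53", "MDM2"), ("TP53", "BRCA1"), ("BRCA1", "RAD51"),
         ("MDM2", "CDKN1A"), ("RAD51", "PALB2"), ("CDKN1A", "CCND1")]
G = nx.Graph(edges)

# step 1: rank proteins by betweenness centrality (absolute importance)
ranking = sorted(nx.betweenness_centrality(G).items(), key=lambda kv: -kv[1])
print(ranking[:3])

# step 2: functionally related communities via iterative edge-betweenness removal
first_split = next(girvan_newman(G))
print([sorted(c) for c in first_split])
```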

Exploring Dimensionality, Systematic Mutations and Number of Contacts in Simple HP ab-initio Protein Folding Using a Blackboard-based Agent Platform

A computational platform is presented in this contribution. It has been designed as a virtual laboratory for exploring optimization algorithms in biological problems. The platform is built on a blackboard-based agent architecture. As a test case, the version of the platform presented here is devoted to the study of protein folding, initially with a bead-like description of the chain and the widely used model of hydrophobic and polar residues (HP model). Some details of the platform design are presented along with its capabilities, and some explorations of the protein folding problem in different types of discrete space are reviewed. The capability of the platform to incorporate specific tools for the structural analysis of the runs, in order to understand and improve the optimization process, is also shown. Accordingly, the results obtained demonstrate that assembling these computational tools into a single platform is worthwhile in itself, since experiments developed on it can be designed to fulfill different levels of information in a self-consistent fashion. We are currently exploring how an experiment design can be used to create a computational agent to be included within the platform. The inclusion of such designed agents, or software pieces, helps the platform better accomplish the tasks it has to carry out. Clearly, as the number of agents increases, the new version of the virtual laboratory gains in robustness and functionality.
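
The HP-model energy that such a platform optimizes can be sketched in a few lines; the sequence, the lattice conformation, and the function below are illustrative assumptions, not the platform's code.

```python
# 2-D lattice HP model: each non-bonded pair of H residues sitting on
# adjacent lattice sites contributes -1 to the energy.
SEQ = "HPHPPHHPHH"                                  # hypothetical H/P sequence
COORDS = [(0, 0), (1, 0), (1, 1), (2, 1), (2, 0), (3, 0),
          (3, 1), (4, 1), (4, 0), (5, 0)]           # a self-avoiding walk on Z^2

def hp_energy(seq, coords):
    occupied = {c: i for i, c in enumerate(coords)}
    energy = 0
    for i, (x, y) in enumerate(coords):
        if seq[i] != "H":
            continue
        for nb in ((x + 1, y), (x, y + 1)):         # count each contact once
            j = occupied.get(nb)
            if j is not None and seq[j] == "H" and abs(i - j) > 1:
                energy -= 1
    return energy

print(hp_energy(SEQ, COORDS))
```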

The Creation of Sustainable Architecture by use of Transformable Intelligent Building Skins

Built environments have a large impact on environmental sustainability and, if not considered properly, can negatively affect our planet. The application of transformable intelligent building systems that automatically respond to environmental conditions is one of the best ways to intelligently assist us in creating a sustainable environment. The significance of this issue is evident, as the energy crisis and environmental change have made sustainability a main concern in many societies. The aim of this research is to review and evaluate the importance and influence of transformable intelligent structures on the creation of sustainable architecture. Intelligent systems in current buildings provide convenience by automatically responding to changes in environmental conditions, reducing energy dissipation and increasing the lifecycle of buildings. By analyzing significant intelligent building systems, this paper evaluates the potential of transformable intelligent systems in the creation of sustainable architecture and environments.

New Approach for Manipulation of Stratified Programs

Negation is useful in the majority of real-world applications. However, its introduction leads to semantic and canonical problems. In this paper we propose an approach based on stratification to deal with negation problems. The approach is based on an extension of predicate nets and is characterized by two main contributions. The first concerns the management of the whole class of stratified programs. The second is related to the optimization of the usual operations on stratified programs (maximal stratification, incremental updates, ...).
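
For background, the standard stratification test can be sketched as follows: a program is stratified iff no cycle of its predicate dependency graph passes through a negated body literal. The rule encoding and predicate names below are illustrative assumptions, not the predicate-net extension proposed in the paper.

```python
import networkx as nx

# rules as (head, [(body_predicate, negated)])
rules = [
    ("reachable", [("edge", False)]),
    ("reachable", [("reachable", False), ("edge", False)]),
    ("unreachable", [("node", False), ("reachable", True)]),   # negated literal
]

def is_stratified(rules):
    g = nx.DiGraph()
    neg_edges = []
    for head, body in rules:
        for pred, negated in body:
            g.add_edge(pred, head)                  # head depends on pred
            if negated:
                neg_edges.append((pred, head))
    scc_of = {n: i for i, scc in enumerate(nx.strongly_connected_components(g))
              for n in scc}
    # a negative edge inside a strongly connected component means a cycle
    # through negation, hence no stratification exists
    return all(scc_of[u] != scc_of[v] for u, v in neg_edges)

print(is_stratified(rules))   # True: 'unreachable' sits in a higher stratum
```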

A Study on Cloud Simulation with a Network Topology Generator

CloudSim is a useful tool for simulating the cloud environment. It shows the service availability, power consumption, and network traffic of services in the cloud environment. Moreover, it supports the calculation of network communication delays from network topology data. CloudSim allows a topology data file to be used as input, but it does not provide any process for generating one; it therefore needs a topology data file generated by other tools. BRITE is a typical network topology generator and supports various types of topology-generating algorithms. If CloudSim could include BRITE, network simulation for clouds would be easier than with the existing version. This paper shows the potential of a connection between BRITE and CloudSim and proposes a direction for linking them.
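
As a rough illustration of the delay computation such a link would enable, the sketch below reads a weighted edge list standing in for a generated topology and derives an end-to-end communication delay; the edge list, node numbers, and delays are assumptions, and this is not CloudSim's or BRITE's API.

```python
import networkx as nx

# hypothetical topology: (node_u, node_v, link_delay_ms), as could be
# exported by a topology generator
links = [(0, 1, 2.0), (1, 2, 5.0), (0, 3, 1.5), (3, 2, 4.0), (2, 4, 3.0)]

G = nx.Graph()
G.add_weighted_edges_from(links, weight="delay")

# end-to-end communication delay between two simulated hosts = sum of link
# delays along the minimum-delay path
delay = nx.shortest_path_length(G, source=0, target=4, weight="delay")
print(delay)   # 8.5 ms via 0 -> 3 -> 2 -> 4
```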

An Intelligent Human-Computer Interaction System for Decision Support

This paper proposes a novel architecture for developing decision support systems. Unlike conventional decision support systems, the proposed architecture endeavors to reveal the decision-making process, such that humans' subjectivity can be incorporated into a computerized system while preserving the capability of the computerized system to process information objectively. A number of techniques used in developing the decision support system are elaborated to make the decision-making process transparent. These include procedures for high-dimensional data visualization, pattern classification, prediction, and evolutionary computational search. An artificial data set is first employed to compare the proposed approach with other methods. A simulated handwritten data set and a real data set on liver disease diagnosis are then employed to evaluate the efficacy of the proposed approach. The results are analyzed and discussed. The potential of the proposed architecture as a useful decision support system is demonstrated.

Analysis of Different Combining Schemes of Two Amplify-Forward Relay Branches with Individual Links Experiencing Nakagami Fading

Relay-based communication has gained considerable importance in recent years. In this paper we find the end-to-end statistics of a two-hop non-regenerative relay branch, each hop being Nakagami-m faded. Closed-form expressions for the probability density functions of the signal envelope at the output of a selection combiner and a maximal ratio combiner at the destination node are also derived, and the analytical formulations are verified through computer simulation. These density functions are useful in evaluating the system performance in terms of bit error rate and outage probability.
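
A minimal Monte Carlo sketch of the setting is given below, under explicit assumptions: each branch's end-to-end SNR is taken as g1*g2/(g1+g2+1), a standard form for variable-gain amplify-and-forward relaying, and the Nakagami parameters, threshold and mean SNR are illustrative. It is a sanity check of the scenario, not the paper's closed-form analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, omega, snr0 = 100_000, 2.0, 1.0, 10.0   # trials, fading figure, power, mean SNR

def hop_snr(size):
    # squared Nakagami-m envelope is Gamma(m, omega/m); scale by the mean SNR
    return snr0 * rng.gamma(m, omega / m, size)

g1, g2 = hop_snr((N, 2)), hop_snr((N, 2))     # two hops for each of the two branches
branch = g1 * g2 / (g1 + g2 + 1.0)            # per-branch end-to-end SNR (assumed form)

sc  = branch.max(axis=1)                      # selection combining at the destination
mrc = branch.sum(axis=1)                      # maximal ratio combining at the destination

thr = 5.0                                     # outage threshold (linear SNR)
print("SC  outage:", np.mean(sc  < thr))
print("MRC outage:", np.mean(mrc < thr))
```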

Improved Text-Independent Speaker Identification using Fused MFCC and IMFCC Feature Sets based on Gaussian Filter

A state-of-the-art Speaker Identification (SI) system requires a robust feature extraction unit followed by a speaker modeling scheme for generalized representation of these features. Over the years, Mel-Frequency Cepstral Coefficients (MFCC), modeled on the human auditory system, have been used as a standard acoustic feature set for speech-related applications. In a recent contribution by the authors, it has been shown that the Inverted Mel-Frequency Cepstral Coefficients (IMFCC) are a useful feature set for SI, containing complementary information present in the high-frequency region. This paper introduces a Gaussian-shaped filter (GF) in place of the typical triangular-shaped bins when calculating MFCC and IMFCC. The objective is to introduce a higher amount of correlation between subband outputs. With GF, the performances of both MFCC and IMFCC improve over the conventional triangular filter (TF) based implementation, individually as well as in combination. With GMM as the speaker modeling paradigm, the performances of the proposed GF-based MFCC and IMFCC, in individual and fused mode, have been verified on two standard databases, YOHO (microphone speech) and POLYCOST (telephone speech), each of which has more than 130 speakers.
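
A minimal sketch of replacing triangular bins with Gaussian-shaped filters in a mel filterbank is given below; the sampling rate, number of filters, and width rule are assumptions rather than the paper's exact configuration.

```python
import numpy as np

def hz_to_mel(f):  return 2595.0 * np.log10(1.0 + f / 700.0)
def mel_to_hz(m):  return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def gaussian_mel_filterbank(n_filters=20, n_fft=512, sr=8000):
    # filter centres spaced uniformly on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    hz_pts = mel_to_hz(mel_pts)
    freqs = np.linspace(0.0, sr / 2.0, n_fft // 2 + 1)
    fb = np.zeros((n_filters, freqs.size))
    for i in range(1, n_filters + 1):
        centre = hz_pts[i]
        sigma = (hz_pts[i + 1] - hz_pts[i - 1]) / 4.0   # width tied to band spacing (assumed)
        fb[i - 1] = np.exp(-0.5 * ((freqs - centre) / sigma) ** 2)
    return fb

# usage: mel energies = filterbank @ power spectrum; log and DCT then yield MFCC
power_spectrum = np.abs(np.fft.rfft(np.random.randn(512))) ** 2
mel_energies = gaussian_mel_filterbank() @ power_spectrum
print(np.log(mel_energies)[:5])
```

For IMFCC, the same construction is typically applied on the inverted (flipped) mel scale so that the filters emphasize the high-frequency region.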