A Novel Genetic Algorithm Designed for Hardware Implementation

A new genetic algorithm, termed the 'optimum individual monogenetic genetic algorithm' (OIMGA), is presented whose properties have been deliberately designed to be well suited to hardware implementation. Specific design criteria were to ensure fast access to the individuals in the population, to keep the required silicon area for hardware implementation to a minimum and to incorporate flexibility in the structure for the targeting of a range of applications. The first two criteria are met by retaining only the current optimum individual, thereby guaranteeing a small memory requirement that can easily be stored in fast on-chip memory. Also, OIMGA can be easily reconfigured to allow the investigation of problems that normally warrant either large GA populations or individuals many genes in length. Local convergence is achieved in OIMGA by retaining elite individuals, while population diversity is ensured by continually searching for the best individuals in fresh regions of the search space. The results given in this paper demonstrate that both the performance of OIMGA and its convergence time are superior to those of a range of existing hardware GA implementations.
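
A minimal software sketch of the monogenetic search described above, assuming only that a single elite individual is kept, refined by mutation until local convergence, and that the search periodically restarts in a fresh region of the space; the function, parameter names and bit-string encoding are illustrative and do not reflect the authors' hardware architecture.

```python
import random

def oimga_sketch(fitness, n_genes=16, local_iters=200, restarts=50, seed=0):
    """Illustrative monogenetic GA: only the best individual found so far is stored."""
    rng = random.Random(seed)
    global_best, global_fit = None, float("-inf")
    for _ in range(restarts):                          # fresh region of the search space
        elite = [rng.randint(0, 1) for _ in range(n_genes)]
        elite_fit = fitness(elite)
        for _ in range(local_iters):                   # local convergence via the elite
            child = [g ^ (rng.random() < 1.0 / n_genes) for g in elite]  # bit-flip mutation
            child_fit = fitness(child)
            if child_fit >= elite_fit:
                elite, elite_fit = child, child_fit
        if elite_fit > global_fit:                     # retain only the optimum individual
            global_best, global_fit = elite, elite_fit
    return global_best, global_fit

# Example: maximise the number of ones in a 16-gene individual.
best, score = oimga_sketch(sum)
```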

Specification of Agent Explicit Knowledge in Cryptographic Protocols

Cryptographic protocols are widely used in various applications to provide secure communications. They are usually represented as communicating agents that send and receive messages. These agents use their knowledge to exchange information and communicate with other agents involved in the protocol. An agent's knowledge can be partitioned into explicit knowledge and procedural knowledge. The explicit knowledge refers to the set of information which is either proper to the agent or directly obtained from other agents through communication. The procedural knowledge relates to the set of mechanisms used to derive new information from what is already available to the agent. In this paper, we propose a mathematical framework which specifies the explicit knowledge of an agent involved in a cryptographic protocol. Modelling this knowledge is crucial for the specification, analysis, and implementation of cryptographic protocols. We also report on a prototype tool that allows the representation and the manipulation of the explicit knowledge.
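
As a rough illustration of the distinction drawn above, the sketch below models an agent's explicit knowledge as the set of items it either starts with or receives directly, leaving procedural knowledge (derivation mechanisms such as decryption) out of scope; the class and message notation are hypothetical and not the paper's formal framework.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    explicit: set = field(default_factory=set)   # information proper to the agent

    def receive(self, message: str) -> None:
        """Information obtained directly from other agents enters the explicit knowledge."""
        self.explicit.add(message)

    def knows(self, item: str) -> bool:
        """Membership test over explicit knowledge only; no derivation rules are applied."""
        return item in self.explicit

# Example: Alice holds her private key, then learns a ciphertext from Bob.
alice = Agent("Alice", {"sk_A", "pk_B"})
alice.receive("enc(pk_A, N_B)")
assert alice.knows("enc(pk_A, N_B)") and not alice.knows("N_B")
```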

Experimental Studies on Treated Sub-base Soil with Fly Ash and Cement for Sustainable Design Recommendations

Pavement constructions on soft and expansive soils are not durable and are unable to sustain heavy traffic loading. As a result, pavement failures and settlement problems occur very often, even under light traffic loading, due to cyclic and rolling effects. Geotechnical engineers have delved deeply into this matter and adopt various methods to improve the engineering characteristics of soft fine-grained soils and expansive soils. The problematic soils are either replaced with better-quality material or treated by chemical stabilization with various binding materials. Increasing strength and durability is also part of the sustainability drive to reduce the environmental footprint of the built environment through the efficient use of resources and recycled waste materials. This paper presents a series of laboratory tests and evaluates the effect of cement and fly ash on the strength and drainage characteristics of soil in Miri. The tests were performed at different percentages of cement and fly ash by dry weight of soil. Additional tests were also performed on soils treated with combinations of fly ash with cement and lime. The results of this study indicate an increase in unconfined compression strength and a decrease in hydraulic conductivity of the treated soil.

Dynamic Models versus Frailty Models for Recurrent Event Data

Recurrent event data is a special type of multivariate survival data. Dynamic models and frailty models are two of the approaches that deal with this kind of data. A comparison between these two models is studied using the empirical standard deviation of the standardized martingale residual processes as a way of assessing the fit of the two models, based on the Aalen additive regression model. We found that both approaches take heterogeneity into account and produce residual standard deviations close to each other, both in the simulation study and in the real data set.
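
For concreteness, the quantities involved can be sketched in standard counting-process notation (the exact standardization used in the paper may differ). Under the Aalen additive regression model, the intensity of the counting process N_i(t) is linear in the covariates, and fit is assessed through the empirical standard deviation, across individuals, of the standardized martingale residual processes:

```latex
\lambda_i(t) = Y_i(t)\bigl[\beta_0(t) + \beta_1(t)\,x_{i1}(t) + \dots + \beta_p(t)\,x_{ip}(t)\bigr],
\qquad
\hat{M}_i(t) = N_i(t) - \int_0^t \hat{\lambda}_i(s)\,ds,
```
```latex
\hat{M}^{*}_i(t) = \frac{\hat{M}_i(t)}{\sqrt{\widehat{\operatorname{Var}}\,\hat{M}_i(t)}},
\qquad
\widehat{\operatorname{sd}}(t) = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\Bigl(\hat{M}^{*}_i(t) - \bar{M}^{*}(t)\Bigr)^{2}} .
```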

Effect of Viscosity and Droplet Diameter on Water-in-Oil (w/o) Emulsions: An Experimental Study

The influence of viscosity on droplet diameter for water-in-crude-oil (w/o) emulsions with two different ratios, 20-80% and 50-50% w/o, was examined using a Brookfield rotational digital rheometer. The emulsions were prepared with sorbitan sesquioleate (Span 83) acting as the emulsifier, at varied temperatures and stirring speeds in revolutions per minute (rpm). Results showed that the viscosity of the w/o emulsion was strongly increased by increasing the volume of water and decreasing the temperature. The change in viscosity also altered the droplet size distribution. The change in droplet diameter depended on the viscosity and on whether the emulsion behaved as a Newtonian or non-Newtonian fluid.

Semantic Mobility Channel (SMC): Ubiquitous and Mobile Computing Meets the Semantic Web

With the advent of emerging personal computing paradigms such as ubiquitous and mobile computing, Web contents are becoming accessible from a wide range of mobile devices. Since these devices do not have the same rendering capabilities, Web contents need to be adapted for transparent access from a variety of client agents. Such content adaptation is exploited for either an individual element or a set of consecutive elements in a Web document and results in better rendering and faster delivery to the client device. Nevertheless, Web content adaptation sets new challenges for semantic markup. This paper presents an advanced components platform, called SMC, enabling the development of mobility applications and services according to a channel model based on the principles of Services Oriented Architecture (SOA). It then goes on to describe the potential for integration with the Semantic Web through a novel framework of external semantic annotation that prescribes a scheme for representing semantic markup files and a way of associating Web documents with these external annotations. The role of semantic annotation in this framework is to describe the contents of individual documents themselves, assuring the preservation of the semantics during the process of adapting content rendering. Semantic Web content adaptation is a way of adding value to Web contents and facilitates repurposing of Web contents (enhanced browsing, Web Services location and access, etc).

Visual Attention Analysis on Mutated Brand Name using Eye-Tracking: A Case Study

Brand names play a vital role in the in-shop buying behavior of consumers, and a mutated brand name may affect the sales of leading branded products. In the Indian market, there are many products with mutated brand names which are either orthographically or phonologically similar to the originals. Due to the presence of such products, Indian consumers very often become confused when buying regularly used items. The authors of the present paper attempt to demonstrate the relationship between reduced attention and false recognition of mutated brand names during a product selection process. To achieve this goal, a visual attention study was conducted on 15 male college students using an eye-tracker against a mutated brand name, and errors in recognition were noted using a questionnaire. Statistical analysis of the acquired data revealed that there was more false recognition of the mutated brand name when less attention was paid during selection of the favorite product. Moreover, the results suggest that eye tracking is an effective tool for analyzing false recognition of brand name mutation.

A Few Descriptive and Optimization Issues on the Material Flow at a Research-Academic Institution: The Role of Simulation

Lately, significant work in the area of Intelligent Manufacturing has been published and applied mainly for industrial purposes. Special efforts have been made in the implementation of new technologies and of management and control systems, among many others, which have all advanced the field. Aware of this, and given the scope of new projects and the need to turn existing flexible ideas into more autonomous and intelligent ones, i.e. Intelligent Manufacturing, the present paper aims to contribute to the design and analysis of the material flow in systems, cells or workstations under this new "intelligent" denomination. To this end, besides offering a conceptual basis for some of the key points to be taken into account and some general principles to consider in the design and analysis of the material flow, some tips on how to define other possible alternative material flow scenarios and a classification of the states a system, cell or workstation can be in are offered as well. All of this is related to the use of simulation tools, which are briefly addressed with a special focus on the Witness simulation package. For better comprehension, the previous elements are supported by a detailed layout, other figures and a few expressions which can help in obtaining the necessary data. Such data and others will be used in the future, when simulating the scenarios in search of the best material flow configurations.

The Role of Ga to Improve AlN-Nucleation Layer for Al0.1Ga0.9N/Si(111)

Group-III nitride materials, particularly AlxGa1-xN, are among the promising optoelectronic materials required for short-wavelength devices. To achieve high-quality AlxGa1-xN films for high device performance, the AlN nucleation layer is an important factor. To improve the AlN nucleation layers through a variation of Ga addition, XRD measurements were conducted to analyze the crystalline quality of the subsequent Al0.1Ga0.9N, giving minimum ω-FWHMs of the (0002) and (10-10) reflections of 425 arcsec and 750 arcsec, respectively. SEM and AFM measurements were performed to observe the surface morphology, and TEM measurements to identify the microstructures and orientations. Results showed that an optimized amount of Ga atoms in the Al(Ga)N nucleation layers improved the surface diffusion, forming crystallites that are more uniform in structure and size, with better alignment of each crystallite and better homogeneity of the island distribution. This hence improves the orientation of the epilayers on the Si surface, and finally improves the crystalline quality and reduces the residual strain of the subsequent Al0.1Ga0.9N layers.

A Practical Approach for Electricity Load Forecasting

This paper is a continuation of our daily peak energy load forecasting approach using our modified network, which is part of the recurrent network family and is called the feed-forward and feed-back multi-context artificial neural network (FFFB-MCANN). The inputs to the network were exogenous variables, such as the previous and current change in the weather components and the previous and current status of the day, and endogenous variables, such as the past change in the loads. Endogenous variables such as the current change in the loads were used as the network output. Experiments show that using both endogenous and exogenous variables as inputs to the FFFB-MCANN, rather than either exogenous or endogenous variables alone, produces better results. Experiments also show that using the change in variables such as the weather components and the change in the past load as inputs to the FFFB-MCANN, rather than their absolute values, has a dramatic impact and produces better accuracy.
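
The reported benefit of feeding changes rather than absolute values can be sketched as a small preprocessing step; the variable names, window and data below are purely illustrative, and the FFFB-MCANN architecture itself is not reproduced.

```python
import numpy as np

def to_change(series: np.ndarray) -> np.ndarray:
    """First differences value[t] - value[t-1]: the 'change' inputs described above."""
    return np.diff(series)

# Illustrative daily peak loads (MW) and temperatures (deg C).
load = np.array([410.0, 432.0, 428.0, 455.0, 470.0])
temp = np.array([21.0, 24.0, 23.5, 27.0, 29.0])

d_load, d_temp = to_change(load), to_change(temp)
X = np.column_stack([d_load[:-1], d_temp[:-1]])  # endogenous + exogenous changes at day t
y = d_load[1:]                                   # target: current change in the load
```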

Accurate Visualization of Graphs of Functions of Two Real Variables

The study of a real function of two real variables can be supported by visualization using a Computer Algebra System (CAS). One type of constraint of such a system is due to the algorithms implemented, which yield continuous approximations of the given function by interpolation. This often masks discontinuities of the function and can produce strange plots that are not compatible with the mathematics. In recent years, point-based geometry has gained increasing attention as an alternative surface representation, both for efficient rendering and for flexible geometry processing of complex surfaces. In this paper we present different artifacts created by mesh surfaces near discontinuities and propose a point-based method that controls and reduces these artifacts. A least squares penalty method for the automatic generation of a mesh that controls the behavior of the chosen function is presented. The special feature of this method is its ability to improve the accuracy of the surface visualization near a set of interior points where the function may be discontinuous. The method is formulated as a minimax problem and the non-uniform mesh is generated using an iterative algorithm. Results show that for large, poorly conditioned matrices, the new algorithm gives more accurate results than the classical preconditioned conjugate gradient algorithm.
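
A minimal illustration of the artifact discussed above and of the point-based alternative, assuming a simple jump discontinuity; the function, grid and rendering choices are arbitrary, and this is not the least squares penalty / minimax method of the paper.

```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3D projection)

# A function with a jump discontinuity along the line y = x.
f = lambda x, y: np.where(y > x, 1.0, 0.0)

x = np.linspace(-1.0, 1.0, 60)
X, Y = np.meshgrid(x, x)
Z = f(X, Y)

fig = plt.figure(figsize=(10, 4))

# Mesh surface: faces straddling y = x interpolate across the jump and draw a spurious wall.
ax1 = fig.add_subplot(1, 2, 1, projection="3d")
ax1.plot_surface(X, Y, Z, cmap="viridis")
ax1.set_title("Mesh surface (artifact at the jump)")

# Point-based rendering: samples only, no faces bridging the discontinuity.
ax2 = fig.add_subplot(1, 2, 2, projection="3d")
ax2.scatter(X.ravel(), Y.ravel(), Z.ravel(), s=2, c=Z.ravel(), cmap="viridis")
ax2.set_title("Point-based rendering")

plt.tight_layout()
plt.show()
```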

Supporting QoS-aware Multicasting in Differentiated Service Networks

Scalable QoS-aware multicast deployment in DiffServ networks has become an important research dimension in recent years. Although multicasting and differentiated services are two complementary technologies, the integration of the two is a non-trivial task due to architectural conflicts between them. A popular solution is to extend the functionality of the DiffServ components to support multicasting. In this paper, we propose an algorithm to construct an efficient QoS-driven multicast tree, taking into account the available bandwidth per service class. We also present an efficient way to provision the limited available bandwidth for supporting heterogeneous users. The proposed mechanism is evaluated using simulated tests. The simulation results reveal that our algorithm can effectively minimize the bandwidth use and transmission cost.
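
The per-class bandwidth constraint can be sketched as a least-cost tree construction that simply prunes links lacking the requested class bandwidth; the graph encoding and cost model below are illustrative assumptions, not the algorithm proposed in the paper.

```python
import heapq

def qos_multicast_tree(graph, source, receivers, demand):
    """graph: {u: {v: (cost, avail_bw)}}. Links with avail_bw < demand are ignored;
    a least-cost tree is grown from the source toward the receivers (Dijkstra)."""
    dist, parent = {source: 0.0}, {source: None}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, (cost, bw) in graph.get(u, {}).items():
            if bw < demand:                       # class bandwidth not available on this link
                continue
            if d + cost < dist.get(v, float("inf")):
                dist[v], parent[v] = d + cost, u
                heapq.heappush(heap, (d + cost, v))
    edges = set()                                 # tree edges from each reachable receiver
    for r in receivers:
        node = r if r in parent else None
        while node is not None and parent[node] is not None:
            edges.add((parent[node], node))
            node = parent[node]
    return edges

# Tiny example with a 1 Mb/s demand; the direct link s->r2 lacks bandwidth and is avoided.
g = {"s": {"a": (1, 10), "r2": (1, 0.5)}, "a": {"r1": (1, 10), "r2": (1, 10)}}
print(qos_multicast_tree(g, "s", ["r1", "r2"], demand=1))
```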

Comparison between Solar Simulation and Infrared Technique for Thermal Balance Test

The precision of heat flux simulation influences the temperature field and test aberration in a thermal balance (TB) test and also reflects the test level for spacecraft development. This paper describes TB tests for a small satellite using a solar simulator, electric heaters and calrod heaters to evaluate the differences between the three methods. Under the same boundary conditions, the calrod heater cases were about 6 °C higher than the solar simulator and electric heater cases for non-external-heat-flux cases (extreme low-temperature cases), while the calrod heater and electric heater cases were 5-7 °C and 2-3 °C lower than the solar simulator cases, respectively, for high-temperature cases. The results show that the solar simulator is better than calrod heaters owing to its better collimation, lower non-homogeneity and better stability.

Linear-Operator Formalism in the Analysis of Omega Planar Layered Waveguides

A complete spectral representation for the electromagnetic field of planar multilayered waveguides inhomogeneously filled with omega media is presented. The problem of guided electromagnetic propagation is reduced to an eigenvalue equation related to a 2 × 2 matrix differential operator. Using the concept of adjoint waveguide, general bi-orthogonality relations for the hybrid modes (either from the discrete or from the continuous spectrum) are derived. For the special case of homogeneous layers the linear operator formalism is reduced to a simple 2 × 2 coupling matrix eigenvalue problem. Finally, as an example of application, the surface and the radiation modes of a grounded omega slab waveguide are analyzed.

Robust Camera Calibration using Discrete Optimization

Camera calibration is an indispensable step for augmented reality or image-guided applications where quantitative information must be derived from the images. Usually, a camera calibration is obtained by taking images of a special calibration object and extracting the image coordinates of the projected calibration marks, enabling the calculation of the projection from 3D world coordinates to 2D image coordinates. Such a procedure involves typical steps, including feature point localization in the acquired images, camera model fitting, correction of the distortion introduced by the optics and, finally, an optimization of the model's parameters. In this paper we propose to extend this list by a further step concerning the identification of the optimal subset of images yielding the smallest overall calibration error. For this, we present a Monte Carlo based algorithm, along with a deterministic extension, that automatically determines the images yielding an optimal calibration. Finally, we present results showing that the calibration can be significantly improved by automated image selection.
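
A minimal sketch of the Monte Carlo selection step, assuming the calibration marks have already been localized in every image; random subsets are calibrated with OpenCV's cv2.calibrateCamera and the subset with the smallest RMS reprojection error is kept. The function and parameter names are illustrative, and the deterministic extension is not shown.

```python
import random
import cv2

def monte_carlo_calibration(obj_points, img_points, image_size,
                            subset_size=10, trials=200, seed=0):
    """obj_points / img_points: per-image float32 arrays of 3D calibration-mark
    coordinates and their 2D image projections."""
    rng = random.Random(seed)
    best_rms, best_subset, best_params = float("inf"), None, None
    n = len(obj_points)
    for _ in range(trials):
        idx = rng.sample(range(n), min(subset_size, n))   # candidate image subset
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            [obj_points[i] for i in idx],
            [img_points[i] for i in idx],
            image_size, None, None)
        if rms < best_rms:                                # smallest overall calibration error
            best_rms, best_subset, best_params = rms, idx, (K, dist)
    return best_subset, best_rms, best_params
```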

Variations of Body Mass Index with Age in Masters Athletes (World Masters Games)

Whilst there is growing evidence that activity across the lifespan is beneficial for improved health, there are also many changes involved with the aging process and consequently the potential for reduced indices of health. The nexus between health, physical activity and aging is complex and has raised much interest in recent times, due to the realization that a multifaceted approach is necessary in order to counteract a growing obesity epidemic. By investigating age-based trends within a population adhering to competitive sport at older ages, further insight might be gleaned to assist in understanding one of the many factors influencing this relationship. BMI was derived using data gathered on a total of 6,071 masters athletes (51.9% male, 48.1% female) aged 25 to 91 years (mean = 51.5, s = 9.7) competing at the Sydney World Masters Games (2009). Using linear and loess regression, it was demonstrated that the usual tendency for the prevalence of higher BMI to increase with age was reversed in this sample. This reversal was repeated for both the male-only and female-only subsets of the sample, indicating the possibility of an improved BMI profile with increasing age both for the sample as a whole and for these subgroups. This evidence of improved classification in one index of health (reduced BMI) for masters athletes, when compared to the general population, implies either that there are improved levels of this index of health with aging due to adherence to sport, or that a reduced BMI is advantageous and contributes to this cohort adhering (or being attracted) to masters sport at older ages. The demonstration that this proportionately under-investigated World Masters Games population has an improved relationship between BMI and increasing age relative to the general population is of particular interest in the context of the measures being taken globally to curb the obesity epidemic.

Fatigue Analysis of Crack Growth Rate and Stress Intensity Factor for Stress Corrosion Cracking in a Pipeline System

Environment-assisted cracking (EAC) is one of the most serious causes of structural failure over a broad range of industrial applications, including offshore structures. Under EAC conditions there is no definite relation such as the Paris equation of Linear Elastic Fracture Mechanics (LEFM). According to the literature, when a material is in contact with hydrogen or any other corrosive environment, electrochemical reactions between the material and its environment take place. Many different works have considered fatigue crack growth in this setting, but they are experimental. Thus, in this paper, the authors aim to evaluate the previous LEFM works mathematically. Obviously, the more sour and corrosive the environment, the greater the change in the stress intensity factor and the more difficult its calculation. A mathematical relation for the stress intensity factor during the diffusion of a sour environment, especially hydrogen, into a marine pipeline is presented. By using this relation together with some experimental relations, an analytical formulation is presented which enables the fatigue crack growth and the critical crack length under cyclic loading to be predicted. In addition, KSCC and the stress intensity factor in the pipeline caused by EAC can be calculated.
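
For reference, the LEFM baseline mentioned above in its standard form (the environment-dependent modification developed in the paper is not reproduced here): the Paris crack growth law, the stress intensity factor for a crack of length a under nominal stress σ with geometry factor Y, and the threshold KSCC below which stress corrosion cracking is not expected to propagate.

```latex
\frac{da}{dN} = C\,(\Delta K)^{m},
\qquad
K = Y\,\sigma\,\sqrt{\pi a},
\qquad
\text{SCC growth only for } K \ge K_{\mathrm{SCC}} .
```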

Analysis and Remediation of Fecal Coliform Bacteria Pollution in Selected Surface Water Bodies of Enugu State of Nigeria

The assessment of surface waters in the Enugu metropolis for fecal coliform bacteria was undertaken. Enugu urban was divided into three areas (A1, A2 and A3), and fecal coliform bacteria were analysed in the surface waters found in these areas over four years (2005-2008). The plate count method was used for the analyses. The data generated were subjected to statistical tests involving normality tests, homogeneity of variance tests, correlation tests and tolerance limit tests. The influence of seasonality and pollution trends were investigated using time series plots. Results from the tolerance limit test at 95% coverage with 95% confidence, and with respect to the EU maximum permissible concentration, show that the three areas suffer from fecal coliform pollution. To this end, a remediation procedure involving the use of sawdust extracts from three woods, namely Chlorophora excelsa (C. excelsa), Khaya senegalensis (K. senegalensis) and Erythrophleum ivorensis (E. ivorensis), in controlling the coliforms was studied. Results show that the mixture of acetone extracts of the woods gave the most effective antibacterial inhibitory activity (26.00 mm zone of inhibition) against E. coli. The methanol extract mixture of the three woods gave the best inhibitory activity (26.00 mm zone of inhibition) against S. aureus, and a 25.00 mm zone of inhibition against E. aerogenes. The aqueous extract mixture gave acceptable zones of inhibition against the three bacterial organisms.

A Multiclass BCMP Queueing Modeling and Simulation-Based Road Traffic Flow Analysis

Urban road network traffic has become one of the most studied research topics in recent decades. This is mainly due to the enlargement of cities and the growing number of motor vehicles traveling on the road network. One of the most sensitive problems is verifying whether the network is congestion-free. Another related problem is the automatic reconfiguration of the network, without building new roads, to alleviate congestion. These problems require an accurate model of the traffic to determine the steady state of the system. An alternative is to simulate the traffic to see if there are congestions and when and where they occur. One key issue is to find an adequate model for road intersections. Once the model is established, either a large-scale model is built, or the intersection is represented by its performance measures and simulated for analysis. In both cases, it is important to find the right queueing model to represent the road intersection. In this paper, we propose to model the road intersection as a BCMP queueing network, and we compare this analytical model against a simulation model for validation.
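
As a toy illustration of representing one intersection approach by its performance measures, the sketch below treats the approach as a multiclass M/M/1 station (one of the node types admitted in BCMP networks under the usual exponential assumptions) and reports utilization and mean sojourn time; the rates are arbitrary, and this is far simpler than the full network model of the paper.

```python
def mm1_multiclass(arrival_rates, service_rate):
    """Single FCFS station with exponential service.
    arrival_rates: {vehicle_class: lambda_c} in veh/s; service_rate: mu in veh/s."""
    lam = sum(arrival_rates.values())
    rho = lam / service_rate
    if rho >= 1.0:
        raise ValueError("Unstable approach: utilisation >= 1 (congestion builds up)")
    mean_sojourn = 1.0 / (service_rate - lam)            # mean time spent at the intersection
    mean_number = {c: (l / service_rate) / (1.0 - rho)   # per-class mean number present
                   for c, l in arrival_rates.items()}
    return rho, mean_sojourn, mean_number

# Example: cars and trucks arriving at one approach served at 0.5 vehicles/s.
print(mm1_multiclass({"car": 0.30, "truck": 0.05}, service_rate=0.5))
```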

A High-Frequency Low-Power Low-Pass-Filter-Based All-Current-Mirror Sinusoidal Quadrature Oscillator

A high-frequency low-power sinusoidal quadrature oscillator is presented through the use of two 2nd-order low-pass current-mirror (CM)-based filters, a 1st-order CM low-pass filter and a CM bilinear transfer function. The technique is relatively simple, based on (i) the inherent time constants of current mirrors, i.e. the internal capacitances and the transconductance of a diode-connected NMOS, and (ii) a simple negative resistance RN formed by the resistor load RL of a current mirror. Neither external capacitances nor inductances are required. As a particular example, a 1.9-GHz, 0.45-mW, 2-V CMOS low-pass-filter-based all-current-mirror sinusoidal quadrature oscillator is demonstrated. The oscillation frequency (f0) is 1.9 GHz and is current-tunable over a range of 370 MHz, or 21.6%. The power consumption is approximately 0.45 mW. The amplitude matching and the quadrature phase matching are better than 0.05 dB and 0.15°, respectively. The total harmonic distortion (THD) is less than 0.3%. At 2 MHz offset from the 1.9 GHz carrier, the carrier-to-noise ratio (CNR) is 90.01 dBc/Hz, whilst the figure of merit called the normalized carrier-to-noise ratio (CNRnorm) is 153.03 dBc/Hz. The ratio of the oscillation frequency (f0) to the unity-gain frequency (fT) of a transistor is 0.25. Comparisons to other approaches are also included.
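
A rough, assumption-laden reading of point (i) above: if the dominant internal time constant of each current-mirror low-pass section is set by the gate capacitance and transconductance of the diode-connected NMOS, then

```latex
\tau \approx \frac{C_{gs}}{g_m},
\qquad
f_0 \propto \frac{1}{2\pi\tau} = \frac{g_m}{2\pi C_{gs}} \approx f_T ,
```

so the oscillation frequency is tied to a fixed fraction of the device fT, consistent with the reported ratio f0/fT = 0.25.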