Mathematical Model for Dengue Disease with Maternal Antibodies

Mathematical models can be used to describe the dynamics of the spread of infectious disease between susceptible and infectious populations. Dengue fever is a re-emerging disease in the tropical and subtropical regions of the world. Its incidence has increased fourfold since 1970, and outbreaks are now reported quite frequently from many parts of the world. In dengue-endemic regions, this rising incidence means that more cases of dengue infection are being found in pregnancy and infancy, and it has been reported that dengue infection can be vertically transmitted to infants. Primary dengue infection is associated with mild to high fever, headache, muscle pain and skin rash. The immune response includes IgM antibodies, which are produced by the fifth day of symptoms and persist for 30-60 days, and IgG antibodies, which appear by the fourteenth day and persist for life. Secondary infections often result in high fever and, in many cases, hemorrhagic events and circulatory failure. In the present paper, a mathematical model is proposed to simulate the transmission dynamics of dengue disease in pregnancy and infancy. Stability analysis of the equilibrium points is carried out, and simulations are given for different sets of parameters. Moreover, the bifurcation diagrams of the model are discussed. Control of the disease in infant cases is expressed in terms of a threshold condition.
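The abstract does not reproduce the governing equations. As a purely illustrative sketch, a compartmental host model of the kind such an analysis typically rests on, extended with a class $M$ of infants protected by maternal antibodies (all symbols and rates here are assumptions, not the paper's notation), could read

\begin{aligned}
\dot{M} &= b N - (\delta + \mu) M, \\
\dot{S} &= \delta M - \beta S I_v / N - \mu S, \\
\dot{I} &= \beta S I_v / N - (\gamma + \mu) I, \\
\dot{R} &= \gamma I - \mu R,
\end{aligned}

where $\delta$ is the waning rate of maternal antibodies, $I_v$ the infectious mosquito population, and the disease dies out in infants when the corresponding basic reproduction number satisfies the threshold condition $R_0 < 1$.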

Overriding Moral Intuitions – Does It Make Us Immoral? Dual-Process Theory of Higher Cognition Account for Moral Reasoning

Moral decisions are considered an intuitive process, while conscious reasoning is mostly used only to justify those intuitions. This problem is described in several dual-process theories of mind, developed e.g. by Frederick and Kahneman, Stanovich, and Evans. Those theories have recently evolved into tri-process theories, with a proposed process that makes the ultimate decision or allows paraformal processing with focal bias. The presented experiment compares observed decision patterns to the implications of those models. In the study, participants (n=179) considered different aspects of the trolley dilemma or its footbridge version and then made a decision. Results show that in the control group 70% of people decided to use the lever to change tracks for the running trolley, and 20% chose to push the fat man down the tracks. In contrast, after the experimental manipulation almost no one decided to act. The decision-time difference between the dilemmas also disappeared after the manipulation. The results support the idea of three co-working processes: intuitive (TASS), paraformal (reflective mind) and algorithmic.

A Study on the Secure ebXML Transaction Models

ebXML (Electronic Business using eXtensible Markup Language) is an e-business standard, sponsored by UN/CEFACT and OASIS, which enables enterprises to exchange business messages, conduct trading relationships, communicate data in common terms, and define and register business processes. While there is tremendous e-business value in ebXML, security remains an unsolved problem and one of the largest barriers to adoption. Recently emerging XML security technologies offer the extensibility and flexibility needed for implementing security features such as encryption, digital signatures, access control and authentication. In this paper, we propose ebXML business transaction models that allow trading partners to securely exchange XML-based business transactions by employing XML security technologies. We show how each XML security technology meets the ebXML standard by constructing test software and validating messages between the trading partners.
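The transaction models rest on standard XML Signature and XML Encryption; those XML-level details are beyond a short sketch, but the underlying sign-and-verify step they build on can be illustrated in Python with the cryptography package (the message payload and key size here are arbitrary placeholders, not the paper's test software):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# A trading partner signs the serialized business message...
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"<PurchaseOrder>...</PurchaseOrder>"  # placeholder ebXML payload
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# ...and the receiving partner verifies it with the sender's public key;
# verify() raises InvalidSignature if the message was tampered with.
private_key.public_key().verify(signature, message,
                                padding.PKCS1v15(), hashes.SHA256())
```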

Detecting the Capacity Reserve in an Overhead Line

There are various solutions for improving existing overhead line systems with the general purpose of increasing their limited capacity. The capacity reserve of existing overhead lines is an important problem that must be considered from different aspects. The paper contains a comparative analysis of the mechanical and thermal limitations of an existing overhead line, based on calculation conditions characterizing the examined variants. The proposed methodology for estimating the permissible conductor temperature and maximum load current is described in detail. The transmission line model is built from specific data on an existing overhead line of the Latvian power network. The main purpose of the simulation tasks is to find an additional capacity reserve by using accurate mathematical models. The obtained results are presented.
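The paper's exact estimation procedure is not given in the abstract; a minimal sketch of the steady-state heat-balance idea that thermal-rating calculations of this kind generally rely on (all inputs here are assumed, per-metre quantities) might look like:

```python
import math

def max_load_current(q_convective, q_radiative, q_solar, r_ac_at_t_max):
    """Steady-state conductor heat balance: I^2 * R(T_max) + q_solar =
    q_convective + q_radiative. Heat terms in W/m, resistance in ohm/m;
    returns the maximum permissible current in amperes."""
    net_cooling = q_convective + q_radiative - q_solar
    if net_cooling <= 0:
        return 0.0  # no thermal headroom at this permissible temperature
    return math.sqrt(net_cooling / r_ac_at_t_max)
```

Raising the permissible conductor temperature increases both the cooling terms and R(T_max), so the capacity reserve emerges from re-evaluating this balance under the examined variants.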

Mass Transfer Modeling in a Packed Bed of Palm Kernels under Supercritical Conditions

Gas-solid mass transfer using supercritical CO2 (SC-CO2) in a packed bed of palm kernels was investigated at temperatures of 50 °C and 70 °C and pressures of 27.6 MPa, 34.5 MPa, 41.4 MPa and 48.3 MPa. The development of mass transfer models requires knowledge of three properties: the diffusion coefficient of the solute, and the viscosity and density of the supercritical fluid (SCF). A mathematical model was developed in terms of the dimensionless Sherwood (Sh), Schmidt (Sc) and Reynolds (Re) numbers. The developed model was found to be in good agreement with the experimental data within the system studied.
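The abstract does not give the fitted correlation; mass-transfer models of this form are commonly written as Sh = a·Re^b·Sc^c and fitted by linearizing in log space. A minimal sketch of such a fit (the correlation form and data arrays are assumptions, not the paper's values):

```python
import numpy as np

def fit_sherwood(Re, Sc, Sh):
    """Fit ln(Sh) = ln(a) + b*ln(Re) + c*ln(Sc) by linear least squares
    and return the correlation parameters (a, b, c)."""
    X = np.column_stack([np.ones_like(Re), np.log(Re), np.log(Sc)])
    coef, *_ = np.linalg.lstsq(X, np.log(Sh), rcond=None)
    return np.exp(coef[0]), coef[1], coef[2]
```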

Applications of Rough Set Decompositions in Information Retrieval

This paper proposes rough set models with knowledge granules at three different levels in an incomplete information system, under a tolerance relation defined by the similarity between objects according to their attribute values. By introducing a dominance relation on the universe of discourse to decompose similarity classes into three subclasses (a little-better subclass, a little-worse subclass and a vague subclass), it dismantles the lower and upper approximations into three components. Using these components, information can be retrieved effectively to find naturally hierarchical expansions to queries and to construct answers to elaborative queries. The approach is illustrated by applying the rough set models to the design of an information retrieval system that accesses documents expanded at different granularities. The proposed method enhances the flexibility of rough set models for query expansion and elaborative queries in information retrieval.
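The three-way decomposition via the dominance relation is the paper's contribution; the classical tolerance-based approximations it decomposes can be sketched in a few lines of Python (the object universe and similarity predicate are placeholders):

```python
def tolerance_classes(universe, similar):
    """similar(x, y) -> True when x and y are indiscernible under
    the tolerance relation; returns each object's tolerance class."""
    return {x: {y for y in universe if similar(x, y)} for x in universe}

def approximations(universe, similar, concept):
    """Classical rough-set lower/upper approximations of a concept (a set)."""
    cls = tolerance_classes(universe, similar)
    lower = {x for x in universe if cls[x] <= concept}   # class wholly inside
    upper = {x for x in universe if cls[x] & concept}    # class overlaps
    return lower, upper
```

In retrieval terms, documents in the lower approximation answer a query exactly, while the boundary (upper minus lower) supplies candidates for hierarchical query expansion.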

Numerical Studies on Flow Field Characteristics of Cavity Based Scramjet Combustors

The flow field within the combustor of a scramjet engine is very complex and poses a considerable challenge in the design and development of a supersonic combustor with an optimized geometry. In this paper, comprehensive numerical studies on the flow field characteristics of different cavity-based scramjet combustors with transverse injection of hydrogen have been carried out for both non-reacting and reacting flows. The numerical studies used a validated 2D unsteady, density-based, first-order implicit solver with a k-omega turbulence model and multi-component finite-rate reacting species. The results show a wide variety of flow features resulting from the interactions between the injector flows, shock waves, boundary layers, and cavity flows. We conjecture that an optimized cavity is a good choice to stabilize the flame in the hypersonic flow, as it generates a recirculation zone in the scramjet combustor. We find that the cavity-based scramjet combustors have a bearing on the source of disturbance for the transverse jet oscillation, fuel/air mixing enhancement, and flameholding improvement. We conclude that a cavity shape with a backward-facing step and a 45° forward ramp is a good choice for obtaining higher temperatures at the exit compared to the other four scramjet combustor models considered in this study.

Advancing the Theory of Planned Behavior within Dietary and Physical Domains among Type 2 Diabetics: A Mixed Methods Approach

Many studies have applied the Theory of Planned Behavior (TPB) to predicting health behaviors among unique populations. However, a new paradigm is emerging in which focus is directed to modification and expansion of the TPB model rather than utilization of the traditional theory. This review proposes new models modified from the Theory of Planned Behavior and suggests an appropriate study design that can be used to test the models within the physical activity and dietary practice domains among Type 2 diabetics in Kenya. The review was conducted by means of a literature search in the fields of nutrition behavior, health psychology and mixed methods, using predetermined key words. The results identify pre-intention and post-intention gaps within the TPB model that need to be filled. Additional psychosocial factors are proposed for inclusion in the TPB model to generate new models, whose efficacy can then be tested using a mixed methods design.

Human Pose Estimation using Active Shape Models

Human pose estimation can be performed using Active Shape Models. Existing techniques that apply Active Shape Models to human-body research, such as human detection, primarily represent the human body by its silhouette. Such techniques cannot accurately estimate poses involving the two arms and legs, because the silhouette renders the body shape only as a rough outline. To solve this problem, we model the human body as a stick figure, or "skeleton". The skeleton model of the human body can accommodate various shapes of human pose. To obtain effective estimation results, we applied background subtraction and a deformed matching algorithm based on the primary Active Shape Models in the fitting process. The model was built from 600 human-body images, and it has 17 landmark points indicating body junctions and key features of human pose. The maximum number of iterations for the fitting process was 30, and the execution time was less than 0.03 s.
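The deformed matching algorithm itself is not detailed in the abstract; the shape-constraint step at the core of any Active Shape Model fit, which it presumably adapts, can be sketched as follows (P, eigvals and the ±3σ limit follow the standard ASM formulation, not this paper's specifics):

```python
import numpy as np

def constrain_shape(x, mean_shape, P, eigvals, limit=3.0):
    """One ASM update: project a candidate landmark vector x onto the
    learned shape subspace, clamp each mode to +/- limit*sqrt(eigenvalue)
    so the skeleton stays plausible, then reconstruct the shape."""
    b = P.T @ (x - mean_shape)          # shape parameters of the candidate
    bound = limit * np.sqrt(eigvals)
    b = np.clip(b, -bound, bound)       # keep the pose within training variation
    return mean_shape + P @ b
```

Here x would stack the 2D coordinates of the 17 landmark points, and P the principal modes learned from the 600 training images.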

The Effect of Cyclone Shape and Dust Collector on Gas-Solid Flow and Performance

A numerical analysis of the flow characteristics and separation efficiency in a high-efficiency cyclone has been performed. Several models based on experimental observation have been proposed for design purposes. However, each such model estimates the cyclone's performance only under limited conditions; it is difficult to obtain a general model for all types of cyclones. The purpose of this study is to determine the flow characteristics and separation efficiency numerically. The Reynolds stress model (RSM) was employed instead of the standard k-ε or k-ω models, which are suited to isotropic turbulence, and it predicted the pressure drop and the Rankine vortex very well. For small particles, three components (the entrance of the vortex finder, the cone, and the dust collector) were significant for particle separation. In the present work, the phenomenon of particles re-entraining from the dust collector into the cyclone body was observed after considerable time. This re-entrainment degraded the separation efficiency and was one of the significant factors affecting the separation efficiency of the cyclone.

Generating State-Based Testing Models for Object-Oriented Framework Interface Classes

An application framework provides a reusable design and implementation for a family of software systems. Application developers extend the framework to build their particular applications using hooks. Hooks are the places identified to show how to use and customize the framework. Hooks define the Framework Interface Classes (FICs) and the specifications of their methods. As part of the development life cycle, the implementations of the FICs must be tested. Building a testing model that expresses the behavior of a class is an essential step in generating class-based test cases. The testing model has to be consistent with the specifications provided for the hooks. State-based models consisting of states and transitions are testing models well suited to object-oriented software. Typically, hand-construction of a state-based model of a class's behavior is expensive and error-prone, and may result in a model inconsistent with the specifications of the class methods, which misleads verification results. In this paper, a technique is introduced to automatically synthesize a state-based testing model for FICs using the specifications provided for the hooks. A tool that supports the proposed technique is also introduced.
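The synthesis algorithm is the subject of the paper itself; as a hedged illustration of what a synthesized state-based testing model can then be used for, the sketch below derives method sequences that cover every transition of a hypothetical FIC model (the states, methods and breadth-first strategy are all illustrative assumptions):

```python
from collections import deque

# Hypothetical FIC testing model: (state, method) -> next state.
TRANSITIONS = {
    ("Unregistered", "register"):   "Registered",
    ("Registered",   "activate"):   "Active",
    ("Active",       "deactivate"): "Registered",
}

def transition_covering_sequences(start):
    """Breadth-first walk collecting one method sequence per uncovered
    transition, until every transition has been exercised once."""
    covered, sequences = set(), []
    queue = deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        for (src, method), dst in TRANSITIONS.items():
            if src == state and (src, method) not in covered:
                covered.add((src, method))
                sequences.append(path + [method])
                queue.append((dst, path + [method]))
    return sequences

print(transition_covering_sequences("Unregistered"))
# [['register'], ['register', 'activate'], ['register', 'activate', 'deactivate']]
```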

Using Combination of Optimized Recurrent Neural Network with Design of Experiments and Regression for Control Chart Forecasting

A recurrent neural network (RNN) is an efficient tool for modeling production control processes as well as services. In this paper, an RNN was combined with a regression model, and the combination was used to check whether the data obtained from the model, compared with actual data, are valid for a variables process control chart. A maintenance process in the workshop of Esfahan Oil Refining Co. (EORC) was taken to illustrate the models. First, a regression was fitted to predict the response time of the process from the determined factors; then the error between the actual and predicted response times was used as the output of the RNN, with the same factors as its inputs. Finally, the data predicted by the combined model were scrutinized against test values in statistical process control to judge whether the forecasting efficiency is acceptable. In the training of the RNN, a design of experiments was applied to optimize the network.
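The paper's network architecture and factor set are not given in the abstract; a minimal sketch of the regression-plus-residual-RNN combination, on synthetic stand-in data, could look like this (the three factors, network size and training settings are all assumptions):

```python
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LinearRegression

# Synthetic stand-in data: three process factors and a response time.
rng = np.random.default_rng(0)
X = rng.random((60, 3))
y = 2.0 + X @ np.array([1.5, -0.7, 0.3]) + 0.1 * rng.standard_normal(60)

# Step 1: regression predicts response time from the factors.
reg = LinearRegression().fit(X, y)
residual = y - reg.predict(X)            # the error the RNN must model

# Step 2: a small recurrent net maps the factors (treated here as a
# length-1 sequence, a simplifying assumption) to the regression residual.
rnn = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(8, input_shape=(1, X.shape[1])),
    tf.keras.layers.Dense(1),
])
rnn.compile(optimizer="adam", loss="mse")
rnn.fit(X[:, None, :], residual, epochs=200, verbose=0)

# Combined forecast = regression prediction + RNN-modelled residual,
# which would then be charted against the control limits.
y_hat = reg.predict(X) + rnn.predict(X[:, None, :]).ravel()
```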

A Comparative Study of Turbulence Models Performance for Turbulent Flow in a Planar Asymmetric Diffuser

This paper presents a computational study of the separated flow in a planar asymmetric diffuser. The steady RANS equations for turbulent incompressible fluid flow, together with six turbulence closures, are used in the present study. The commercial software code FLUENT 6.3.26 was used to solve the set of governing equations with the various turbulence models. Five of the turbulence models are available directly in the code, while the v2-f turbulence model was implemented via User Defined Scalars (UDS) and User Defined Functions (UDF). A series of computational analyses was performed to assess the performance of the turbulence models at different grid densities. The results show that the standard k-ω, SST k-ω and v2-f models clearly performed better than the other models when an adverse pressure gradient was present. The RSM shows acceptable agreement with the velocity and turbulent kinetic energy profiles, but it failed to predict the locations of the separation and reattachment points. The standard k-ε and the low-Re k-ε models delivered very poor results.

Phosphine Mortality Estimation for Simulation of Controlling Pest of Stored Grain: Lesser Grain Borer (Rhyzopertha dominica)

There is a worldwide need to develop sustainable management strategies that control pest infestation and the development of phosphine (PH3) resistance in the lesser grain borer (Rhyzopertha dominica). Computer simulation models can provide a relatively fast, safe and inexpensive way to weigh the merits of various management options. However, the usefulness of simulation models relies on the accurate estimation of important model parameters, such as mortality. Concentration and time of exposure are both important in determining mortality in response to a toxic agent. Recent research indicated the existence of two resistance phenotypes in R. dominica in Australia, weak and strong, and revealed that the presence of resistance alleles at two loci confers strong resistance, thus motivating the construction of a two-locus model of resistance. Experimental data sets on purified pest strains, each corresponding to a single genotype of our two-locus model, were also available. Hence it became possible to include the mortalities of the different genotypes explicitly in the model. In this paper we describe how we used two generalized linear models (GLMs), probit and logistic, to fit the available experimental data sets. We used a direct algebraic approach, the generalized inverse matrix technique, rather than traditional maximum likelihood estimation, to estimate the model parameters. The results show that both the probit and logistic models fit the data sets well, but the former is much better in terms of smaller least-squares (numerical) errors. Meanwhile, the generalized inverse matrix technique achieved accuracy similar to that of maximum likelihood estimation, but is less time consuming and computationally demanding.
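A minimal sketch of the probit fit via the generalized (Moore-Penrose) inverse described above, with illustrative (entirely hypothetical) concentration, exposure-time and mortality values:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical observations: mortality fraction p at concentration C (mg/L)
# and exposure time t (h) for one genotype.
C = np.array([0.01, 0.02, 0.05, 0.10])
t = np.array([24.0, 24.0, 48.0, 48.0])
p = np.array([0.12, 0.35, 0.71, 0.95])

# Probit link: norm.ppf(p) = b0 + b1*log10(C) + b2*log10(t).
# (A logistic fit would use log(p/(1-p)) on the left instead.)
X = np.column_stack([np.ones_like(C), np.log10(C), np.log10(t)])
beta = np.linalg.pinv(X) @ norm.ppf(p)   # generalized inverse, no iteration

fitted = norm.cdf(X @ beta)              # back-transform to mortality
```

Because the generalized inverse gives the least-squares solution in one algebraic step, it avoids the iterative optimization of maximum likelihood, matching the speed advantage the abstract reports.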

Automatic Musical Genre Classification Using Divergence and Average Information Measures

Recently, much research has been conducted to find pertinent parameters and adequate models for automatic music genre classification. In this paper, two measures based on information theory concepts are investigated for mapping the feature space to the decision space. A Gaussian Mixture Model (GMM) is used as the baseline and reference system. Various strategies are proposed for the training and testing sessions with matched or mismatched conditions: long training and long testing, and long training and short testing. In all experiments, the file sections used for testing were never used during training. With matched conditions, all examined measures yield the best and similar scores (almost 100%). With mismatched conditions, the proposed measures yield better scores than the GMM baseline system, especially in the short-testing case. It is also observed that the average discrimination information measure is the most appropriate for music category classification, while the divergence measure is more suitable for music subcategory classification.
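The abstract does not define the two measures precisely; under Gaussian modelling of the feature space, the classical symmetric divergence between two classes has a closed form, sketched below (the single-Gaussian assumption and the feature arrays are illustrative, not the paper's exact formulation):

```python
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """Closed-form KL divergence KL(N0 || N1) between multivariate Gaussians."""
    k = mu0.size
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def divergence(feats_a, feats_b):
    """Symmetric divergence between two feature sets, each modelled
    as a single Gaussian (rows = frames, columns = features)."""
    ma, ca = feats_a.mean(axis=0), np.cov(feats_a.T)
    mb, cb = feats_b.mean(axis=0), np.cov(feats_b.T)
    return kl_gaussian(ma, ca, mb, cb) + kl_gaussian(mb, cb, ma, ca)
```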

Adopting Procedural Animation Technology to Generate Locomotion of Quadruped Characters in Dynamic Environments

We propose a procedural-animation-based approach that rapidly synthesizes adaptive locomotion for quadruped characters, so that they can walk or run in any direction on uneven terrain within a dynamic environment. We devise practical motion models for quadruped animals that adapt to varied terrain in real time. While synthesizing locomotion, we choose the appropriate motion models by predicting the footsteps from the current state of the dynamic environment, adjust the key-frames of the motion models according to the terrain's attributes, calculate collision-free leg trajectories, and interpolate the key-frames along those trajectories. Finally, we apply dynamic time warping to each part of the motion to seamlessly concatenate all desired transition motions into the complete locomotion. The approach reduces the time cost of producing locomotion and enables virtual characters to fit into dynamic environments whenever those environments are changed by users.
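Dynamic time warping aligns two motion segments before blending; the classic cost recursion it relies on can be sketched in a few lines (the scalar distance and plain O(nm) table are textbook choices, not necessarily this paper's variant):

```python
import numpy as np

def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Dynamic-time-warping cost between two motion-parameter sequences:
    D[i][j] = local distance + min of the three predecessor cells."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Backtracking through the table (not shown) yields the frame correspondence used to time-align two motions before concatenation.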

InAlGaN Quaternary Multi-Quantum Wells UV Laser Diode Performance and Characterization

The InAlGaN alloy has only recently begun receiving serious attention regarding its growth and applications. High-quality InGaN films have led to the development of light-emitting diodes (LEDs) and blue laser diodes (LDs). The quaternary InAlGaN, however, represents a more versatile material, since its bandgap and lattice constant can be varied independently. We report a study of an ultraviolet (UV) quaternary InAlGaN multi-quantum-well (MQW) LD using the Integrated System Engineering (ISE TCAD) simulation program. Advanced physical models of semiconductor properties were used in order to obtain an optimized structure. The device performance, which is affected by piezoelectric and thermal effects, was studied via a drift-diffusion model for carrier transport, optical gain and loss. The optical performance of the UV LD with different numbers of quantum wells was numerically investigated. The main peak of the emission wavelength for double quantum wells (DQWs) shifted from 358 to 355.8 nm as the forward current was increased. Preliminary simulation results indicated that better output performance and a lower threshold current could be obtained with four quantum wells, giving an output power of 130 mW and a threshold current of 140 mA.
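The independent tuning of bandgap and lattice constant mentioned above comes from having two free composition fractions; as a rough illustration, a linear Vegard-like interpolation that ignores bowing parameters and uses commonly cited binary bandgaps (none of this is taken from the paper itself):

```python
# Linear interpolation of the In(x)Al(y)Ga(1-x-y)N bandgap from the
# binary endpoints; real designs add composition-dependent bowing terms.
EG = {"InN": 0.7, "AlN": 6.2, "GaN": 3.4}   # bandgaps in eV, commonly cited

def bandgap_eV(x, y):
    assert 0.0 <= x and 0.0 <= y and x + y <= 1.0
    return x * EG["InN"] + y * EG["AlN"] + (1 - x - y) * EG["GaN"]
```

Because the lattice constant interpolates analogously but with different endpoint values, a designer can trade x against y to hit a target bandgap while remaining lattice-matched to the substrate.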

Evidence of Climate Change (Global Warming) and Temperature Increases in Arctic Areas

This paper contributes to the debate on the proximate causes of climate change. It also discusses the impact of the global temperature increases since the beginning of the twentieth century, and the effectiveness of climate change models in isolating the primary cause (anthropogenic influences or natural variability in temperature) of the observed temperature increases that occurred within this period. The paper argues that if climate scientists and policymakers ignore the anthropogenic influence (greenhouse gases) on global warming, on the pretense that the various climate models disagree and cannot account for all the necessary factors of global warming at all levels, then current efforts at controlling greenhouse emissions could be undermined and global warming as a whole exacerbated.

Requirements Driven Multiple View Paradigm for Developing Security Architecture

This paper describes a paradigmatic approach to developing the architecture of secure systems by describing the requirements from four different points of view: those of the owner, the administrator, the user, and the network. Deriving requirements and developing architecture implies jointly eliciting and describing both the problem and the structure of the solution. The viewpoints proposed in this paper are those of the parties we consider major contributors to the design, implementation, usage and maintenance of secure systems. The dramatic growth of Internet technology and of the applications deployed on the World Wide Web has led to a situation where security has become a very important concern in systems development. Many security approaches are currently in use in organizations, yet in spite of the widespread use of many different security solutions, security remains a problem. It is argued that the approach described in this paper for developing secure architecture is practical by all means. The models representing these multiple points of view are termed the requirements model (views of owner and administrator) and the operations model (views of user and network). This multiple-view paradigm is explained by first describing the specific requirements and characteristics of secure systems (particularly in the domain of networks) and then the secure architecture / system development methodology.

Why Traditional Technology Acceptance Models Won't Work for Future Information Technologies?

This paper illustrates why existing technology acceptance models are of only limited use for predicting and explaining the adoption of future information and communication technologies. It starts with a general overview of technology adoption processes and presents several theories for the acceptance and adoption of traditional information technologies. This is followed by an overview of recent developments in the area of information and communication technologies. Based on the arguments elaborated in these sections, it is shown why the factors used to predict adoption of existing systems will not be sufficient for explaining the adoption of future information and communication technologies.