The Importance of 3D Mesh Generation for Large Eddy Simulation of Gas-Solid Turbulent Flows in a Fluidized Bed

The objective of this work is to present a procedure for mesh generation in a fluidized bed using large eddy simulation (LES) of a filtered two-fluid model. The experimental data were obtained by [1] in a laboratory fluidized bed. The results show that it is possible to use a mesh with fewer cells than required by a RANS turbulence model with the kinetic theory of granular flow (KTGF). Moreover, the numerical results reproduce the experimental data near the wall of the bed, which cannot be predicted by the RANS model.

The Sizes of Large Hierarchical Long-Range Percolation Clusters

We study a long-range percolation model on the hierarchical lattice Ω_N of order N, where the probability of connection between two nodes separated by hierarchical distance k is of the form min{αβ^(-k), 1}, with α ≥ 0 and β > 0. The parameter α is the percolation parameter, while β describes the long-range nature of the model. Ω_N is an example of a so-called ultrametric space, which differs qualitatively from Euclidean-type lattices. In this paper, we characterize the sizes of large clusters for this model along the lines of prior work. The proof involves a stationary embedding of Ω_N into Z. The phase diagram of this long-range percolation model is well understood.
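As an illustrative sketch (not taken from the paper), the snippet below samples an edge of a hierarchical lattice under one common definition of the hierarchical distance, using the connection probability min{αβ^(-k), 1} quoted in the abstract; the function names and the truncation of labels to finitely many coordinates are our own choices.

```python
import random

def hierarchical_distance(x, y):
    """Distance in the hierarchical lattice: the largest coordinate index
    (1-based) at which the two label sequences differ; 0 if they coincide."""
    assert len(x) == len(y)
    diff = [i + 1 for i, (a, b) in enumerate(zip(x, y)) if a != b]
    return max(diff) if diff else 0

def connection_probability(x, y, alpha, beta):
    """Connection probability min{alpha * beta**(-k), 1} from the abstract."""
    k = hierarchical_distance(x, y)
    if k == 0:
        return 0.0  # no self-loops
    return min(alpha * beta ** (-k), 1.0)

def sample_edge(x, y, alpha, beta, rng=random):
    """Open the edge {x, y} independently with the above probability."""
    return rng.random() < connection_probability(x, y, alpha, beta)

# Example: two nodes of Omega_N with N = 3, truncated to 4 coordinates.
x, y = (0, 1, 2, 0), (0, 2, 2, 0)   # differ only in coordinate 2 -> k = 2
print(connection_probability(x, y, alpha=1.5, beta=4.0))  # 1.5 * 4**-2 = 0.09375
```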

Multi-model Approach for Describing and Verifying Constraint-Based Interactive Systems

Requirements analysis, modeling, and simulation have consistently been among the main challenges in the development of complex systems. Scenarios and state machines are two successful models for describing the behavior of an interactive system. Scenarios represent examples of system execution in the form of sequences of messages exchanged between objects and provide a partial view of the system. In contrast, state machines can represent the overall system behavior. Automating the transformation of scenarios into state machines provides answers to various problems such as system behavior validation and scenario consistency checking. In this paper, we propose a method for translating scenarios into state machines represented by the Discrete Event System Specification (DEVS), together with a procedure for detecting implied scenarios. Each resulting DEVS model represents the behavior of one object of the system. The global system behavior is described by coupling the atomic DEVS models and is validated through simulation. We improve the validation process by integrating formal methods to eliminate logical inconsistencies in the global model. To that end, we use the Z notation.
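As a hedged sketch of what an atomic DEVS model looks like (a generic skeleton of the formalism, not the models generated in the paper), the class below shows the four DEVS components; coupled models then connect outputs of one atomic model to inputs of another.

```python
class AtomicDEVS:
    """Minimal atomic DEVS skeleton: state, time advance, transitions, output."""

    def __init__(self):
        self.state = "idle"

    def time_advance(self):                            # ta(s): lifetime of the current state
        return 1.0 if self.state == "busy" else float("inf")

    def external_transition(self, elapsed, event):     # delta_ext(s, e, x)
        if self.state == "idle" and event == "request":
            self.state = "busy"

    def internal_transition(self):                     # delta_int(s)
        if self.state == "busy":
            self.state = "idle"

    def output(self):                                  # lambda(s), emitted just before delta_int
        return "done" if self.state == "busy" else None
```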

Expert Witness Testimony in the Battered Woman Syndrome

Expert witness testimony (EWT) is information given by an expert specialized in a field (here, battered woman syndrome, BWS) to the jury in order to help the court better understand the case. EWT does not always work in favor of battered women. Two main decision-making models are discussed in the paper: the mathematical model and the explanation model. In the first model, the jurors calculate "the importance and strength of each piece of evidence," whereas in the second model they try to integrate the EWT with the evidence and create a coherent story that would describe the crime. The jury often misunderstands and misjudges battered women for their action (or, in this case, inaction). Jurors may assume that these women are masochists who accept being mistreated, reasoning that if a man abuses a woman constantly, she could and should divorce him or simply leave at any time. Research in this domain has found that expert witness testimony indeed has a powerful influence on jurors' decisions; its quality therefore needs to be explored further. One important factor that needs further study is a bias called the dispositionist worldview (the belief that what happens to people is of their own doing). This attributional bias is a tendency to think that a person's behavior is due to his or her disposition, even when the behavior is clearly attributable to the situation.

Hypothesis: The hypothesis of this paper is that if a juror has a dispositionist worldview, then he or she will blame the rape victim for triggering the assault. The juror would therefore commit the fundamental attribution error and believe that the victim's disposition, and not the situation she was in, caused the rape.

Methods: The subjects in the study were 500 randomly sampled undergraduate students from McGill, Concordia, Université de Montréal, and UQAM. Dispositionist worldview was scored on the Dispositionist Worldview Questionnaire. After reading the rape scenarios, each student was asked to play the role of a juror and answer a questionnaire consisting of 7 questions about the responsibility, causality, and fault of the victim.

Results: The results confirm the hypothesis that if a juror has a dispositionist worldview, then he or she will blame the rape victim for triggering the assault. In doing so, the juror commits the fundamental attribution error, believing that the victim's disposition, and not the constraints or opportunities of the situation, caused the rape.

A Hydro-Mechanical Model for Unsaturated Soils

A hydro-mechanical model for unsaturated soils is presented based on the effective stress principle, taking into account the effects of the drying-wetting process. The elasto-plastic constitutive equations for the stress-strain relations of the soil skeleton are established. The plasticity model is adapted from the modified Cam-Clay model. The hardening rule is established by considering isotropic consolidation paths. The effect of the drying-wetting process is introduced through the χ parameter. All model coefficients are identified in terms of measurable parameters. The simulations from the proposed model are compared with experimental results. A model calibration was performed to extract the model parameters from the experimental results. Good agreement between the results predicted by the proposed model and the experimental results was obtained.
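For context, a commonly used form of the effective stress principle for unsaturated soils that employs the χ parameter is Bishop's expression, reproduced below as an illustration; the exact form adopted in the paper may differ.

```latex
% Bishop-type effective stress for unsaturated soils (illustrative form):
\sigma' = (\sigma - u_a) + \chi\,(u_a - u_w)
% \sigma : total stress,  u_a : pore-air pressure,  u_w : pore-water pressure,
% \chi   : effective stress parameter (1 when saturated, 0 when dry).
```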

Dengue Transmission Model between Infant and Pregnant Woman with Antibody

Dengue is a disease found in most tropical and subtropical areas of the world and has become the most common arboviral disease of humans. The disease is caused by any of four serotypes of dengue virus (DEN1-DEN4). In many endemic countries, the average age of dengue infection is shifting upwards, so dengue in pregnancy and infancy is likely to be encountered more frequently. The dynamics of the disease are studied by a compartmental model involving ordinary differential equations for the pregnant women, infants, and vector populations. The stability of each equilibrium point is given, and the epidemic dynamics are discussed. Moreover, numerical results are shown for different values of dengue antibody.
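As a hedged, generic illustration of the kind of compartmental host-vector model described (not the paper's exact equations), the sketch below integrates a minimal SIR-SI system with scipy; all parameter values and compartment names are invented for the example.

```python
import numpy as np
from scipy.integrate import odeint

def host_vector_model(y, t, beta_h, beta_v, gamma, mu_v):
    """Minimal SIR (host) / SI (vector) dengue-type model in population fractions."""
    S_h, I_h, R_h, S_v, I_v = y
    dS_h = -beta_h * S_h * I_v                       # hosts infected by infectious vectors
    dI_h = beta_h * S_h * I_v - gamma * I_h          # host recovery at rate gamma
    dR_h = gamma * I_h
    dS_v = mu_v - beta_v * S_v * I_h - mu_v * S_v    # vector birth/death at rate mu_v
    dI_v = beta_v * S_v * I_h - mu_v * I_v
    return [dS_h, dI_h, dR_h, dS_v, dI_v]

t = np.linspace(0, 365, 1000)                        # days
y0 = [0.99, 0.01, 0.0, 0.95, 0.05]                   # initial fractions
sol = odeint(host_vector_model, y0, t, args=(0.3, 0.25, 1 / 7, 1 / 14))
print(sol[-1])                                       # state after one year
```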

Geographic Profiling Based on Multi-point Centrography with K-means Clustering

Geographic profiling has successfully assisted investigations of serial crimes. Considering the multi-cluster nature of serial crime spots, we propose a Multi-point Centrography model as a natural extension of Single-point Centrography for geographic profiling. K-means clustering is first performed on the data samples, and Single-point Centrography is then adopted to derive a probability distribution for each cluster. Finally, a weighted combination of the cluster distributions is formed to predict the next crime spot. An experimental study on real cases demonstrates the effectiveness of the proposed model.
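A hedged sketch of the described pipeline (K-means clustering followed by a per-cluster centrographic distribution, combined by cluster weights) is given below; the Gaussian form of each cluster distribution and all names are our own assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import multivariate_normal

def fit_multipoint_centrography(spots, k=3, seed=0):
    """Cluster crime spots, then fit a centre-of-mass Gaussian per cluster."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(spots)
    components = []
    for c in range(k):
        pts = spots[km.labels_ == c]
        weight = len(pts) / len(spots)                 # cluster weight
        mean = pts.mean(axis=0)                        # mean centre of the cluster
        cov = np.cov(pts.T) + 1e-6 * np.eye(2)         # dispersion (regularised)
        components.append((weight, multivariate_normal(mean, cov)))
    return components

def next_spot_density(components, xy):
    """Weighted combination of the cluster distributions at location xy."""
    return sum(w * dist.pdf(xy) for w, dist in components)

# Toy usage with synthetic 2-D crime coordinates.
rng = np.random.default_rng(0)
spots = np.vstack([rng.normal([0, 0], 0.5, (20, 2)),
                   rng.normal([5, 5], 0.8, (25, 2))])
model = fit_multipoint_centrography(spots, k=2)
print(next_spot_density(model, [4.5, 5.2]))
```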

Crash Severity Modeling in Urban Highways Using Backward Regression Method

Identifying and classifying intersections according to severity is very important for the implementation of safety-related countermeasures, and effective models are needed to compare and assess severity. Highway safety organizations have placed intersection safety among their priorities. In spite of significant advances in highway safety, large numbers of severe crashes still occur on highways. Investigating the factors that influence crashes enables engineers to carry out calculations aimed at reducing crash severity. Previous studies lacked a model capable of simultaneously capturing the influence of human factors, road, vehicle, and weather conditions, and traffic features including traffic volume and flow speed on crash severity. Thus, this paper aims to develop models that illustrate the simultaneous influence of these variables on crash severity in urban highways. The models presented in this study were developed using binary logit models, calibrated with SPSS. The backward regression method in SPSS was used to identify the significant variables in the model. According to the obtained results, the main factors increasing crash severity in urban highways are driver age, movement in reverse gear, technical defects of the vehicle, collisions with motorcycles and bicycles, collisions with bridges, frontal collisions, frontal-lateral collisions, and multi-vehicle crashes.
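As a hedged sketch of the approach described (a binary logit model with backward elimination of non-significant variables, here with statsmodels standing in for SPSS), where all variable names and the 0.05 threshold are illustrative assumptions:

```python
import pandas as pd
import statsmodels.api as sm

def backward_logit(df, target, alpha=0.05):
    """Fit a binary logit and iteratively drop the least significant predictor."""
    predictors = [c for c in df.columns if c != target]
    while predictors:
        X = sm.add_constant(df[predictors])
        result = sm.Logit(df[target], X).fit(disp=False)
        pvals = result.pvalues.drop('const')      # never drop the intercept
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:
            return result                         # all remaining terms significant
        predictors.remove(worst)                  # backward elimination step
    raise ValueError("no significant predictors remain")

# Hypothetical usage: df holds a 0/1 'severe' column and candidate predictors
# such as 'driver_age', 'reverse_gear', 'frontal_impact', 'multi_vehicle'.
# model = backward_logit(df, target='severe')
# print(model.summary())
```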

Modeling of Surface Roughness for Flow over a Complex Vegetated Surface

Turbulence modeling of large-scale flow over a vegetated surface is complex. Such problems involve large computational domains, while the characteristics of the flow near the surface must also be resolved. In modeling large-scale flow, surface roughness including vegetation is generally taken into account by means of roughness parameters in a modified law of the wall. However, the turbulence structure within the canopy region cannot be captured with this method; an alternative is to apply source/sink terms that model plant drag. Such models have been developed and tested intensively, but only for simple surface geometries. This paper compares the use of roughness parameters and of additional source/sink terms in modeling the effect of plant drag on wind flow over a complex vegetated surface. The RNG k-ε turbulence model with the non-equilibrium wall function was tested for both cases. In addition, the k-ω turbulence model, which is claimed to be computationally stable, was also investigated with the source/sink terms. All numerical results were compared to experimental results obtained at the study site, Mason Bay, Stewart Island, New Zealand. In the near-surface region, the results obtained using the source/sink terms are more accurate than those using roughness parameters. The k-ω turbulence model with source/sink terms is more appropriate, as it is more accurate and more computationally stable than the RNG k-ε turbulence model. Higher above the surface, there is no significant difference among the results of all simulations.
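For reference, the plant-drag momentum sink commonly added as a source/sink term in canopy-flow models has the form below; the exact formulation, and any corresponding k and ε (or ω) source terms used in the paper, may differ.

```latex
% Canopy momentum sink added to the i-th momentum equation (illustrative form):
S_{u_i} = -\rho\, C_d\, a\, |U|\, u_i
% \rho : air density,  C_d : canopy drag coefficient,
% a    : frontal leaf area density [m^2/m^3],  |U| : local wind speed,  u_i : velocity component.
```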

Instability Analysis of Laminated Composite Beams Subjected to Parametric Axial Load

The integral form of the equations of motion of composite beams subjected to time-varying loads is discretized using a developed finite element model. The model consists of a straight five-node beam element with twenty-two degrees of freedom. The stability of the beams is studied by solving the characteristic equations of the system in matrix form. The principle of virtual work and the first-order shear deformation theory are employed to analyze beams with large deformations and small strains. The regions of dynamic instability of the beam are determined by solving the resulting Mathieu-type differential equations. The effects of nonconservative loads, shear stiffness, and damping parameters on the stability and response of the beams are examined. Several numerical calculations are presented to compare the results with data reported by other researchers.
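For orientation, a finite-element discretization of a beam under a parametric axial load typically reduces to a Mathieu-Hill type matrix equation of the form below; the specific matrices and load decomposition used in the paper may differ.

```latex
% Mathieu-Hill form of the discretized equations of motion (illustrative):
\mathbf{M}\ddot{\mathbf{q}} + \mathbf{C}\dot{\mathbf{q}}
  + \left[\mathbf{K}_e - \left(P_0 + P_t\cos\Omega t\right)\mathbf{K}_g\right]\mathbf{q} = \mathbf{0}
% M, C, K_e, K_g : mass, damping, elastic and geometric stiffness matrices;
% P_0, P_t, \Omega : static and dynamic components and frequency of the axial load.
```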

Comparison of Different Channel Modeling Techniques Used in BPLC Systems

The paper compares different channel models used for modeling broadband power-line communication (BPLC) systems. The models compared are those of Zimmermann and Dostert, Philipps, Anatory et al., and the generalized transmission line (TL) model of Anatory et al. The validity of each model was assessed in the time domain against the ATP-EMTP software, which uses a transmission line approach. It is found that, for a power-line network with a minimal number of branches, all the models give pulse time responses similar to ATP-EMTP; however, the Zimmermann and Dostert model shows the same amplitude but a different time delay. When the number of branches is increased, only the results of the generalized TL theory approach remain comparable with the ATP-EMTP results. A Multi-Carrier Spread Spectrum (MC-SS) system was also applied to check the implications of this behavior for modulation schemes. It is observed that the Philipps model applied to an underground cable can predict performance up to 25 dB better than the other channel models, which can misrepresent the actual performance of the system. Likewise, the modified Zimmermann and Dostert model under multipath conditions predicts performance about 5 dB better than that predicted by the generalized TL theory. It is therefore suggested that, for realistic BPLC system design and analysis, the model based on generalized TL theory be used.
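For context, the commonly cited multipath form of the Zimmermann and Dostert channel model is reproduced below; the parameter notation follows the usual literature and may differ from the paper.

```latex
% Zimmermann-Dostert multipath transfer function (commonly cited form):
H(f) = \sum_{i=1}^{N_p} g_i \; e^{-(a_0 + a_1 f^{k})\, d_i}\; e^{-j 2\pi f\, d_i / v_p}
% g_i : weighting factor of path i,  d_i : path length,  v_p : propagation velocity,
% a_0, a_1, k : attenuation parameters,  N_p : number of propagation paths.
```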

Effects of Annealing Treatment on Optical Properties of Anatase TiO2 Thin Films

In this investigation, anatase TiO2 thin films were grown by radio frequency magnetron sputtering on glass substrates at a high sputtering pressure and room temperature. The anatase films were then annealed at 300-600 °C in air for 1 hour. X-ray diffraction (XRD) and atomic force microscopy (AFM) were used to examine the structure and morphology of the films, respectively. From the X-ray diffraction patterns of the TiO2 films, it was found that the as-deposited film showed some differences compared with the annealed films, and the intensities of the crystalline-phase peaks increased with increasing annealing temperature. From the AFM images, distinct variations in the morphology of the thin films were also observed. The optical constants were characterized using the transmission spectra of the films obtained with a UV-VIS-IR spectrophotometer. In addition, the optical thickness of the film deposited at room temperature was calculated and cross-checked against a cross-sectional SEM image. The optical band gaps were evaluated using the Tauc model. TiO2 films produced at room temperature exhibited high visible transmittance, and the transmittance decreased slightly with increasing annealing temperature. The films were found to be crystalline with the anatase phase. The refractive index of the films was found to be 2.31-2.35 in the visible range. The extinction coefficient was nearly zero in the visible range and was found to increase with annealing temperature. The allowed indirect optical band gap of the films was estimated to be in the range 3.39-3.42 eV, showing only a small variation. The allowed direct band gap was found to increase from 3.67 to 3.72 eV. The porosity was also found to decrease at higher annealing temperatures, making the films compact and dense.
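For reference, the Tauc relation used to extract such optical band gaps has the standard form below; the exponents quoted for allowed indirect and direct transitions follow common practice and are not taken from the paper.

```latex
% Tauc relation for the optical band gap (standard form):
(\alpha h\nu)^{1/n} = A\,(h\nu - E_g)
% \alpha : absorption coefficient,  h\nu : photon energy,  E_g : optical band gap,  A : constant;
% n = 2 for allowed indirect transitions,  n = 1/2 for allowed direct transitions.
```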

Complex-Valued Neural Network in Signal Processing: A Study on the Effectiveness of Complex Valued Generalized Mean Neuron Model

A complex-valued neural network is a neural network whose inputs, weights, thresholds, and/or activation functions are complex-valued. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics but also in social systems. One of the most important applications of complex-valued neural networks is signal processing. The generalized mean neuron (GMN) model is often discussed and studied in the neural network literature; it includes an aggregation function based on the concept of the generalized mean of all inputs to the neuron. This paper aims to present exhaustive results on using the generalized mean neuron model in a complex-valued neural network that uses the back-propagation algorithm (called Complex-BP) for learning. Our experimental results demonstrate the effectiveness of the generalized mean neuron model in the complex plane for signal processing compared with a real-valued neural network. We also report observations on the effect of learning rates, the ranges of the randomly selected initial weights, the error functions used, and the number of iterations required for error convergence of the generalized mean neural network model. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.
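As a hedged sketch of a generalized mean aggregation in a complex-valued neuron (our own simplified formulation, not necessarily the exact GMN or Complex-BP used in the paper):

```python
import numpy as np

def generalized_mean_neuron(x, w, r=2.0):
    """Aggregate complex inputs by a weighted generalized mean of order r,
    then apply a split sigmoid activation to the real and imaginary parts."""
    z = (np.sum(w * x ** r) / np.sum(w)) ** (1.0 / r)   # generalized mean aggregation
    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
    return sigmoid(z.real) + 1j * sigmoid(z.imag)       # split-complex activation

# Toy usage with complex inputs and weights.
x = np.array([0.4 + 0.3j, -0.2 + 0.5j, 0.1 - 0.1j])
w = np.array([0.5 + 0.0j, 0.3 + 0.1j, 0.2 - 0.2j])
print(generalized_mean_neuron(x, w, r=2.0))
```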

Detection of Ultrasonic Images in the Presence of a Random Number of Scatterers: A Statistical Learning Approach

The Support Vector Machine (SVM) is a statistical learning tool initially developed by Vapnik in 1979 and later extended into the broader framework of structural risk minimization (SRM). SVMs are playing an increasing role in detection problems across engineering, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, an SVM was applied to the detection of medical ultrasound images in the presence of partially developed speckle noise. The simulation was carried out for single-look and multi-look speckle models to give a complete overview of, and insight into, the proposed SVM-based detector. The structure of the SVM was derived and applied to clinical ultrasound images, and its performance was evaluated in terms of the mean square error (MSE) metric. We show that the SVM-detected ultrasound images have a very low MSE and are of good quality. The quality of the processed speckled images improved for the multi-look model. Furthermore, the contrast of the SVM-detected images was higher than that of the original non-noisy images, indicating that the SVM approach increased the distance between the pixel reflectivity levels (detection hypotheses) in the original images.

Application of Artificial Neural Network for Predicting Maintainability Using Object-Oriented Metrics

The growing importance of software quality is leading to the development of new, sophisticated techniques that can be used to construct models for predicting quality attributes. One such technique is the Artificial Neural Network (ANN). This paper examines the application of ANNs to software quality prediction using Object-Oriented (OO) metrics. Quality estimation here includes estimating the maintainability of software. The dependent variable in our study was maintenance effort; the independent variables were the principal components of eight OO metrics. The results showed that the Mean Absolute Relative Error (MARE) of the ANN model was 0.265. Thus, we found the ANN method useful for constructing software quality models.
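For clarity, the MARE figure reported above is normally computed as follows; the exact definition used in the paper may vary slightly.

```latex
% Mean Absolute Relative Error (common definition):
\mathrm{MARE} = \frac{1}{n} \sum_{i=1}^{n} \frac{\lvert y_i - \hat{y}_i \rvert}{y_i}
% y_i : observed maintenance effort,  \hat{y}_i : effort predicted by the ANN,  n : number of observations.
```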

A New Time Dependent, High Temperature Analytical Model for the Single-electron Box in Digital Applications

Several models have been introduced so far for the single-electron box (SEB), all of which are restricted to the DC response and/or the low-temperature limit. In this paper, we introduce, for the first time, a time-dependent, high-temperature analytical model for the SEB. The DC behavior of the proposed model is verified against the SIMON simulator, and its time-domain behavior is verified against a recently published paper on the step response of the SEB.

FIR Filter Design via Linear Complementarity Problem, Messy Genetic Algorithm, and Ising Messy Genetic Algorithm

In this paper, the design of maximally flat linear-phase finite impulse response (FIR) filters is considered. The problem is handled with two entirely different approaches. The first is a completely deterministic numerical approach in which the problem is formulated as a Linear Complementarity Problem (LCP). The other is based on combining a Markov Random Field (MRF) approach with a messy genetic algorithm (MGA). MRFs are a class of probabilistic models that have been applied for many years to the analysis of visual patterns and textures. Our objective is to establish MRFs as an interesting approach to modeling messy genetic algorithms. We establish a theoretical result that every genetic algorithm problem can be characterized in terms of an MRF model. This allows us to construct an explicit probabilistic model of the MGA fitness function and to introduce the Ising MGA. Experiments with the Ising MGA are less costly than those with the standard MGA since far fewer computations are involved, and the LCP approach requires the least computation of all. Results of the LCP, random search, random seeded search, MGA, and Ising MGA are discussed.
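For reference, the standard statement of a Linear Complementarity Problem into which the filter design is cast reads as follows; the particular M and q arising from the maximally flat FIR conditions are defined in the paper.

```latex
% Linear Complementarity Problem LCP(q, M) (standard form):
\text{find } \mathbf{z} \in \mathbb{R}^n \text{ such that } \quad
\mathbf{z} \ge \mathbf{0}, \qquad
\mathbf{w} = \mathbf{M}\mathbf{z} + \mathbf{q} \ge \mathbf{0}, \qquad
\mathbf{z}^{\mathsf T}\mathbf{w} = 0 .
```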

Analytical Model Prediction: Micro-Cutting Tool Forces with the Effect of Friction on Machining Titanium Alloy (Ti-6Al-4V)

In this paper, a methodology for predicting the tool forces in oblique machining is introduced by adopting the orthogonal cutting technique. The analytical calculation is largely based on the DeVries model, and parts of the methodology are adopted from the Armarego-Brown model. Model validation is performed by comparing experimental data with the predicted results for machining of titanium alloy (Ti-6Al-4V) from a micro-cutting tool perspective. Good agreement with the experiments is observed. A detailed friction formulation that affects the tool forces has also been examined, with reasonable results obtained.

Development of a RAM Simulation Model for Acid Gas Removal System

A reliability, availability, and maintainability (RAM) model has been built for an acid gas removal plant to support system analysis, which will play an important role in any process modifications required to achieve optimum performance. Due to the complexity of the plant, the model was based on a Reliability Block Diagram (RBD) with a Monte Carlo simulation engine. The model has been validated against actual plant data as well as local expert opinion, resulting in an acceptable simulation model. The results from the model showed that operation and maintenance can be further improved, resulting in a reduction of the annual production loss.
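As a hedged, generic illustration of RBD-style Monte Carlo availability estimation (not the plant-specific model), the sketch below samples component states from their steady-state availabilities and evaluates a simple series/parallel structure function; all block names and figures are invented.

```python
import random

# Steady-state availability of each block: A = MTTF / (MTTF + MTTR).
blocks = {
    "absorber":    {"mttf": 8000.0, "mttr": 24.0},
    "regenerator": {"mttf": 6000.0, "mttr": 36.0},
    "pump_A":      {"mttf": 4000.0, "mttr": 12.0},   # pump_A and pump_B form
    "pump_B":      {"mttf": 4000.0, "mttr": 12.0},   # a redundant (parallel) pair
}

def availability(b):
    return b["mttf"] / (b["mttf"] + b["mttr"])

def system_up(state):
    """Series structure, except the two pumps which work in parallel."""
    pumps_ok = state["pump_A"] or state["pump_B"]
    return state["absorber"] and state["regenerator"] and pumps_ok

def simulate(n_trials=100_000, seed=1):
    rng = random.Random(seed)
    up = 0
    for _ in range(n_trials):
        state = {name: rng.random() < availability(b) for name, b in blocks.items()}
        up += system_up(state)
    return up / n_trials

print(f"Estimated system availability: {simulate():.4f}")
```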

Coupling Compensation of 6-DOF Parallel Robot Based on Screw Theory

In order to improve control performance and eliminate steady-state error, a coupling compensation scheme for a 6-DOF parallel robot is presented. Taking a dynamic-load tank simulator as the research object, this paper analyzes the coupling of the 6-DOF parallel robot with respect to its degrees of freedom. The coupling angle and coupling velocity are derived from the inverse kinematics model. The study uses a combined mechanism-model method that takes as its input a practical motion trajectory accounting for the performance of the motion controller and motor. Experimental results show that the coupling compensation improves motion stability as well as accuracy; it also decreases the dither amplitude of the dynamic-load tank simulator.