Navigation Patterns Mining Approach based on Expectation Maximization Algorithm

Web usage mining algorithms have been widely utilized for modeling user web navigation behavior. In this study we advance a model for mining users' navigation patterns. The model builds a user model based on the expectation-maximization (EM) algorithm, which is used in statistics for finding maximum likelihood estimates of parameters in probabilistic models that depend on unobserved latent variables. The experimental results show that the log-likelihood converges toward lower values as the number of clusters decreases, and that the probability of the largest cluster decreases as the number of clusters increases in each treatment.
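
As a rough illustration of the clustering step, the sketch below fits Gaussian mixture models (whose standard fitting procedure is EM) to synthetic session-feature vectors and reports the two quantities discussed above; the data, feature layout, and mixture family are assumptions for illustration, not the paper's actual user model.

```python
# Minimal EM-clustering sketch (illustrative, not the authors' exact model):
# cluster synthetic user-session feature vectors with a Gaussian mixture
# fitted by EM, and report the quantities discussed in the abstract.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
sessions = rng.random((500, 8))          # 500 sessions x 8 page-visit features

for k in (2, 4, 8):
    gm = GaussianMixture(n_components=k, random_state=0).fit(sessions)
    avg_loglik = gm.score(sessions)      # mean per-sample log-likelihood
    largest = gm.weights_.max()          # probability mass of largest cluster
    print(f"k={k}: avg log-likelihood={avg_loglik:.2f}, "
          f"largest-cluster probability={largest:.2f}")
```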

A New Decision Making Approach based on Possibilistic Influence Diagrams

This paper proposes a new decision making approach based on quantitative possibilistic influence diagrams, which are an extension of standard influence diagrams in the possibilistic framework. We treat in particular the case where several expert opinions relative to the value nodes are available. An initial expert assigns confidence degrees to the other experts and fixes a similarity threshold that the provided possibility distributions should respect. To illustrate our approach, an evaluation algorithm for these multi-source possibilistic influence diagrams is also proposed.
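
A minimal sketch of the expert-filtering step, assuming a simple similarity measure (one minus the maximum absolute difference between possibility distributions); the paper's actual measure and aggregation rule may differ.

```python
# Illustrative check that an expert's possibility distribution is close
# enough to the initial expert's; the similarity measure here (1 - max
# absolute difference) is an assumption, not necessarily the paper's.
import numpy as np

def similarity(pi1: np.ndarray, pi2: np.ndarray) -> float:
    return 1.0 - np.abs(pi1 - pi2).max()

def accept_expert(pi_ref, pi_expert, confidence, threshold):
    """Keep an expert's opinion only if it is similar enough to the
    reference distribution; weight it by the assigned confidence degree."""
    if similarity(pi_ref, pi_expert) < threshold:
        return None                      # distribution rejected
    return confidence * pi_expert        # confidence-weighted opinion

pi_ref = np.array([1.0, 0.8, 0.3])       # reference possibility distribution
pi_e   = np.array([1.0, 0.7, 0.4])
print(accept_expert(pi_ref, pi_e, confidence=0.9, threshold=0.8))
```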

Automated Process Quality Monitoring with Prediction of Fault Condition Using Measurement Data

Detection of incipient abnormal events is important for improving the safety and reliability of machine operations and reducing losses caused by failures. Improper set-up or alignment of parts often leads to severe problems in many machines. The construction of prediction models for faulty conditions is therefore essential in deciding when to perform machine maintenance. This paper presents a multivariate calibration monitoring approach based on the statistical analysis of machine measurement data. The calibration model is used to predict two faulty conditions from historical reference data. The approach utilizes genetic algorithm (GA) based variable selection, and we evaluate the predictive performance of several prediction methods using real data. The results show that the calibration model based on supervised probabilistic principal component analysis (SPPCA) yielded the best performance in this work. By adopting a proper variable selection scheme in calibration models, prediction performance can be improved by excluding non-informative variables from the model-building step.
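
The sketch below illustrates GA-based variable selection in the general form described, with ordinary least squares scored by cross-validated R² standing in for the SPPCA calibration model; the population size, rates, and data are illustrative assumptions.

```python
# Sketch of GA-based variable selection for a calibration model
# (illustrative stand-in: ordinary least squares scored by cross-validated
# R^2 instead of the SPPCA model used in the paper).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))           # 20 candidate measurement variables
y = X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.1, size=200)

def fitness(mask):
    if not mask.any():
        return -np.inf
    return cross_val_score(LinearRegression(), X[:, mask], y, cv=5).mean()

pop = rng.random((30, 20)) < 0.5         # population of inclusion masks
for _ in range(40):                      # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]          # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 20)
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        child ^= rng.random(20) < 0.05               # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected variables:", np.flatnonzero(best))
```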

Modeling of PZ in Haunch Connection Systems

Modeling the seismic behavior of the panel zone (PZ), because of its role in the overall ductility and lateral stiffness of steel moment frames, has been considered a challenge for years. There are some studies regarding the effects of different doubler plate thicknesses and geometric properties of the PZ on its seismic behavior. However, there is not much investigation of the effect of the number of provided continuity plates in the presence of one triangular haunch, two triangular haunches, or a rectangular haunch (T-shaped haunch) for exterior columns. In this research, detailed finite element models of 12 tested connections of the SAC joint venture were first created and analyzed; the cyclic-behavior backbone curves obtained from these models, together with other FE models of similar tests, were then used for neural network training. The seismic behavior of these data was categorized according to the continuity plate arrangement and the type of haunch. PZs with one-sided haunches show little plastic rotation. As the number of continuity plates increases due to the presence of two triangular haunches (four continuity plates), there is no plastic rotation; in other words, the PZ behaves in its elastic range. In the case of a rectangular haunch, the PZ shows more plastic rotation than with a one-sided triangular haunch and especially with double-sided triangular haunches. Moreover, the models presented in this study for one-sided and double-sided triangular haunches and rectangular haunches appear to give a proper estimation of PZ seismic behavior.
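
As a heavily simplified stand-in for the training step, the sketch below fits a small neural network that maps assumed connection parameters (number of continuity plates, haunch type) to invented backbone-curve quantities; none of the features or values are the SAC test data.

```python
# Illustrative stand-in for the training step described above: fit a small
# neural network mapping connection parameters to backbone-curve points;
# features and data are invented, not the SAC test results.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)
# features: [n_continuity_plates, haunch_type (0=1-sided, 1=2-sided, 2=rect)]
X = np.column_stack([rng.integers(0, 4, 60), rng.integers(0, 3, 60)]).astype(float)
# targets: moment values at two fixed rotation points of the backbone curve
y0 = 100 + 30 * X[:, 0] + 15 * X[:, 1] + rng.normal(0, 5, 60)
y = np.column_stack([y0, 1.2 * y0])

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=0).fit(X, y)
print(net.predict([[2, 1]]))             # two continuity plates, 2-sided haunch
```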

Electric Load Forecasting Using Genetic-Based Algorithm, Optimal Filter Estimator and Least Error Squares Technique: A Comparative Study

This paper presents a performance comparison of three estimation techniques used for peak load forecasting in power systems. The three optimal estimation techniques are genetic algorithms (GA), least error squares (LS), and least absolute value filtering (LAVF). The problem is formulated as an estimation problem, and different forecasting models are considered. Actual recorded data are used to perform the study, and the performance of the three optimal estimation techniques is examined. The advantages of each algorithm are reported and discussed.
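
A minimal sketch of the least error squares technique on an assumed exponential-growth peak-load model with synthetic yearly peaks; the paper's forecasting models and data differ.

```python
# Least-error-squares sketch (one of the three techniques compared): fit an
# assumed exponential-growth peak-load model to synthetic yearly peaks with
# numpy's least-squares solver.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(2000, 2012)
peaks = 1200 * 1.04 ** (years - 2000) + rng.normal(0, 20, years.size)

# assumed model: ln(P_t) = a + b*t, solved as a linear LS problem
t = years - years[0]
A = np.column_stack([np.ones_like(t), t])
(a, b), *_ = np.linalg.lstsq(A, np.log(peaks), rcond=None)
print(f"estimated annual growth rate: {np.expm1(b):.2%}")
print(f"forecast peak for {years[-1] + 1}: {np.exp(a + b * (t[-1] + 1)):.0f} MW")
```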

Optimized Calculation of Hourly Price Forward Curve (HPFC)

This paper examines several mathematical methods for modeling the hourly price forward curve (HPFC); the model is constructed with numerous regression methods, such as polynomial regression, radial basis function neural networks, and a Fourier series. The goodness of fit of the models is examined by means of statistical and graphical tools. The criterion for choosing the model is minimization of the root mean squared error (RMSE); using the correlation-analysis approach for the regression analysis, the optimal model, which is robust against model misspecification, is determined. A supervised learning technique is employed to determine the optimal parameters corresponding to each measure of overall loss. Using all the numerical methods mentioned previously, explicit expressions for the optimal model are derived and the optimal designs are implemented.
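
The sketch below illustrates the RMSE-driven selection described above for one of the mentioned model families (polynomial regression) on synthetic hourly prices; the split, candidate degrees, and data are assumptions.

```python
# Sketch of RMSE-based model selection: fit polynomial regressions of
# several degrees to synthetic hourly prices and keep the degree with the
# lowest held-out RMSE.
import numpy as np

rng = np.random.default_rng(3)
hours = np.arange(168)                     # one week of hourly points
prices = 40 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, 168)

h = hours / 168.0                          # rescaled to avoid ill-conditioning
train = hours % 2 == 0                     # interleaved split avoids extrapolation
best_deg, best_rmse = None, np.inf
for deg in (2, 4, 6, 8):
    coefs = np.polyfit(h[train], prices[train], deg)
    pred = np.polyval(coefs, h[~train])
    rmse = np.sqrt(np.mean((prices[~train] - pred) ** 2))
    print(f"degree {deg}: RMSE = {rmse:.2f}")
    if rmse < best_rmse:
        best_deg, best_rmse = deg, rmse
print("chosen degree:", best_deg)
```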

Study of the Effect of the Number of Datasets on the Precision of Estimated Saturated Hydraulic Conductivity

The saturated hydraulic conductivity of soil is an important property in processes involving water and solute flow in soils. It is difficult to measure and can be highly variable, requiring a large number of replicate samples. In this study, 60 sets of soil samples were collected in the Saqhez region of Kurdistan province, Iran. Statistics such as the correlation coefficient (R), root mean square error (RMSE), mean bias error (MBE), and mean absolute error (MAE) were used to evaluate multiple linear regression models as the number of datasets varied. The models were evaluated when only the percentages of sand, silt, and clay content (SSC) were used as inputs, and when SSC plus bulk density, Bd, (SSC+Bd) were used as inputs. For the 50-dataset case, the R, RMSE, MBE, and MAE values of the relationships obtained from multiple linear regression were 0.925, 15.29, -1.03, and 12.51 for the SSC method, and 0.927, 15.28, -1.11, and 12.92 for the SSC+Bd method, respectively. For the 10-dataset case, the corresponding values were 0.725, 19.62, -9.87, and 18.91 for the SSC method, and 0.618, 24.69, -17.37, and 22.16 for the SSC+Bd method, which shows that as the number of datasets increases, the precision of the estimated saturated hydraulic conductivity increases.
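
A minimal sketch of the evaluation pipeline: a multiple linear regression of saturated hydraulic conductivity on SSC inputs, scored with the same four statistics; the soil data here are synthetic placeholders, not the Saqhez measurements.

```python
# Multiple linear regression of saturated hydraulic conductivity on
# sand/silt/clay (SSC), scored with R, RMSE, MBE, and MAE; synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
ssc = rng.random((50, 3)) * 100           # % sand, silt, clay (illustrative)
ks = 0.3 * ssc[:, 0] - 0.1 * ssc[:, 2] + rng.normal(0, 5, 50)

pred = LinearRegression().fit(ssc, ks).predict(ssc)
err = pred - ks
print("R   :", np.corrcoef(pred, ks)[0, 1])
print("RMSE:", np.sqrt(np.mean(err ** 2)))
print("MBE :", err.mean())
print("MAE :", np.abs(err).mean())
```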

Temporal Analysis of Magnetic Nerve Stimulation–Towards Enhanced Systems via Virtualisation

The triumph of inductive neuro-stimulation since its rediscovery in the 1980s has been quite spectacular. In many branches, ranging from clinical applications to basic research, this technique is absolutely indispensable. Nevertheless, basic knowledge about the processes underlying the stimulation effect is still very rough and rarely refined in a quantitative way. This is not only an inexcusable blank spot in biophysics and for stimulation prediction, but also a fundamental hindrance to technological progress. The already very sophisticated devices have reached a stage where further optimization requires better strategies than those provided by simple linear membrane models of the integrate-and-fire style. Addressing this problem for the first time, we suggest in the following a way of performing virtual quantitative analysis of a stimulation system. Concomitantly, this ansatz seems to provide a route towards a better understanding by using nonlinear signal processing and treating the nerve as a filter that is adapted for neuronal magnetic stimulation. The model is compact and easy to adjust. The whole setup behaved very robustly during all performed tests. As an example, a recent innovative stimulator design known as cTMS is analyzed and dimensioned with this approach. The results show hitherto unforeseen potential.
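
As a toy illustration of the "nerve as a nonlinear filter" idea, the sketch below drives a standard exponential integrate-and-fire membrane (a textbook nonlinear model, not the authors') with an assumed damped-cosine pulse loosely resembling a cTMS waveform; all parameters are invented.

```python
# Illustrative nonlinear membrane filter: an exponential integrate-and-fire
# model (a standard textbook model, not the authors') driven by an assumed
# damped-cosine stimulus loosely resembling a cTMS-style pulse.
import numpy as np

dt, T = 1e-6, 2e-3                        # 1 us steps, 2 ms window
t = np.arange(0, T, dt)
stim = 80e-3 * np.exp(-t / 400e-6) * np.cos(2 * np.pi * 2500 * t)

tau, v_rest, v_T, dT, v_spike = 150e-6, 0.0, 15e-3, 2e-3, 40e-3
v = np.zeros_like(t)
for i in range(1, len(t)):
    # nonlinear term dT*exp((v - v_T)/dT) models spike initiation
    dv = (-(v[i-1] - v_rest) + dT * np.exp((v[i-1] - v_T) / dT) + stim[i-1]) / tau
    v[i] = v[i-1] + dt * dv
    if v[i] > v_spike:                    # threshold crossing = activation
        print(f"activation at {t[i]*1e6:.0f} us")
        break
```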

A Proposed Framework for Visualization to Teach Computer Science

Computer programming is considered a very difficult course by many computer science students. The reasons for the difficulty include the cognitive load involved in programming, the different learning styles of students, the instructional methodology, and the choice of programming language. To reduce the difficulties, pair programming, program visualization, accommodation of different learning styles, and other techniques have been tried; however, these efforts have produced limited success. This paper reviews the problem and proposes a framework to help students overcome the difficulties involved.

Modeling Reaction Time in Car-Following Behaviour Based on Human Factors

This paper develops driver reaction-time models for car-following analysis based on human factors. Reaction time is classified as brake-reaction time (BRT) and acceleration/deceleration reaction time (ADRT). The BRT occurs when the lead vehicle is braking and its brake light is on, while the ADRT occurs when the driver reacts to adjust his or her speed using the gas pedal only. The study evaluates the effect of driver characteristics and traffic kinematic conditions on driver reaction time in a car-following environment. The kinematic conditions introduced urgency and expectancy based on the braking behaviour of the lead vehicle at different speeds and spacings; these conditions were used for evaluating the BRT and are classified as normal, surprised, and stationary. Data were collected on a driving simulator integrated into a real car and included the BRT and ADRT (as dependent variables) and the driver's age, gender, driving experience, driving intensity (driving hours per week), vehicle speed, and spacing (as independent variables). The results showed a significant difference in the BRT between the normal, surprised, and stationary scenarios, supporting the hypothesis that both urgency and expectancy have significant effects on BRT. The driver's age and gender, speed, and spacing were found to be significant variables for the BRT in all scenarios. The results also showed that the driver's age and gender were significant variables for the ADRT. The research presented in this paper is part of a larger project to develop a driver-sensitive in-vehicle rear-end collision warning system.
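
A minimal sketch of the kind of regression analysis described, fitting BRT against the significant variables by ordinary least squares on synthetic data; the variable names follow the abstract, but the data and coefficients are invented.

```python
# Illustrative OLS regression of brake-reaction time on the abstract's
# significant variables (age, gender, speed, spacing); synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 300
df = pd.DataFrame({
    "age": rng.integers(18, 70, n),
    "gender": rng.choice(["M", "F"], n),
    "speed": rng.uniform(40, 100, n),          # km/h
    "spacing": rng.uniform(10, 60, n),         # m
})
df["brt"] = (0.6 + 0.004 * df.age + 0.002 * df.spacing
             - 0.001 * df.speed + rng.normal(0, 0.08, n))

model = smf.ols("brt ~ age + C(gender) + speed + spacing", data=df).fit()
print(model.summary().tables[1])               # coefficient table
```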

Urban Environment Quality Improvement Planning Case Study: Moft Abad Neighborhood, Tehran, Iran

Rapid enlargement and physical development of cities have facilitated the emergence of a number of urban-life crises and a decrease in environment quality. Consequently, the need to attend to the concept of quality and its improvement in urban environments, besides quantitative issues, is clearly recognized. Within urban thought, the importance of taking these issues into consideration is obvious, not only with respect to sustainable development concepts and the improvement of public environment quality, but also for the enhancement of social and behavioral models. The major concern of the present article is to study the nature of urban environment quality in urban development plans, which matters not only in the concept and aims of projects but also in their execution procedure. As a result, this paper utilizes the planning capacities offered by environmental virtues in the planning procedure for the Moft Abad neighborhood. Thus, as a first step, quantitative environmental issues were assessed by applying the Analytic Hierarchy Process (AHP). The present conditions of Moft Abad indicate that the neighborhood generally suffers from a lack of qualitative parameters, and that previously formed planning procedures could not take the sustainable, developmental paths aimed at environment-quality virtues. The diminution of economic and environmental virtues has resulted in the diminution of residential and social virtues. Therefore, in order to enhance environment quality in Moft Abad, the present paper supplies plans aimed at making the neighborhood safe, healthy, and lively.
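
A minimal AHP sketch for deriving criterion weights from a pairwise comparison matrix; the criteria and judgment values below are invented for illustration and are not the study's.

```python
# Minimal AHP sketch: priority vector (principal eigenvector) and
# consistency index from an invented pairwise comparison matrix.
import numpy as np

criteria = ["environmental", "economic", "social", "physical"]
A = np.array([[1,   3,   2,   4],         # pairwise importance judgments
              [1/3, 1,   1/2, 2],
              [1/2, 2,   1,   3],
              [1/4, 1/2, 1/3, 1]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                               # priority vector
ci = (eigvals[k].real - len(A)) / (len(A) - 1)   # consistency index
print(dict(zip(criteria, w.round(3))), f"CI={ci:.3f}")
```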

Zero Inflated Models for Overdispersed Count Data

Zero inflated models are usually used in modeling count data with excess zeros, where the excess zeros may be structural zeros or zeros that occur by chance. These types of data are commonly found in various disciplines such as finance, insurance, biomedicine, econometrics, ecology, and the health sciences, including sexual health and dental epidemiology. The most popular zero inflated models used by many researchers are the zero inflated Poisson and zero inflated negative binomial models. In addition, zero inflated generalized Poisson and zero inflated double Poisson models are also discussed and found in the literature. Recently, the zero inflated inverse trinomial model and the zero inflated strict arcsine model have been advocated and shown to serve as alternative models for overdispersed count data caused by excessive zeros and unobserved heterogeneity. The purpose of this paper is to review the related literature and provide a variety of examples from different disciplines of the application of zero inflated models. Different model selection methods used in model comparison are also discussed.
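
A minimal sketch of fitting a zero inflated Poisson model with statsmodels on synthetic counts containing structural zeros; the covariates and rates are illustrative.

```python
# Fit a zero-inflated Poisson model to synthetic counts with excess zeros.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(6)
n = 1000
x = rng.normal(size=n)
structural_zero = rng.random(n) < 0.3      # 30% structural zeros
counts = np.where(structural_zero, 0, rng.poisson(np.exp(0.5 + 0.8 * x)))

X = sm.add_constant(x)
zip_fit = ZeroInflatedPoisson(counts, X, exog_infl=np.ones((n, 1)),
                              inflation="logit").fit(disp=False)
print(zip_fit.summary())
```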

Mathematical Rescheduling Models for Railway Services

This paper presents a review of past studies concerning mathematical models for rescheduling passenger railway services, as part of delay management in the occurrence of railway disruptions. Many of the past mathematical models highlighted were aimed at minimizing the service delays experienced by passengers during service disruptions. Integer programming (IP) and mixed-integer programming (MIP) models are critically discussed, focusing on the model approach, decision variables, sets, and parameters. Some of them have been tested on real-life data of railway companies worldwide, while a few have been validated on fictive data. Based on selected literature on train rescheduling, this paper assists researchers in model formulation by providing comprehensive analyses of model building. These analyses can help in the development of new rescheduling strategies, or in enhancing existing rescheduling models to make them more powerful or more applicable with shorter computing times.
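
As a toy example in the spirit of the reviewed delay-management MIPs, the sketch below chooses which passenger connections to keep so that total weighted delay is minimized; the decision structure, penalty, and numbers are invented and far simpler than the reviewed models.

```python
# Toy delay-management MIP: decide which connections to keep so that the
# total weighted passenger delay is minimized; all numbers are invented.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, PULP_CBC_CMD

# connection: (passengers missing transfer if dropped,
#              delay in minutes if kept, passengers delayed if kept)
connections = {"A": (120, 5, 400), "B": (60, 8, 350), "C": (200, 3, 500)}
MISSED_PENALTY = 30                        # minutes charged per missed transfer

prob = LpProblem("delay_management", LpMinimize)
keep = {c: LpVariable(f"keep_{c}", cat="Binary") for c in connections}
prob += lpSum(keep[c] * d * p + (1 - keep[c]) * m * MISSED_PENALTY
              for c, (m, d, p) in connections.items())
prob.solve(PULP_CBC_CMD(msg=False))
print({c: int(v.value()) for c, v in keep.items()})
```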

A Multinomial Logit Model of Intercity Travel Mode Choice Behavior for All Trips in Libya

From the planning point of view, mode-choice modeling is essential, owing to the massive costs incurred in transportation systems. Intercity travellers in Libya have distinct features compared with travellers from other countries, including cultural and socioeconomic factors. Consequently, the goal of this study is to characterize intercity travel behavior using disaggregate models, for projecting the demand for nation-level intercity travel in Libya. A multinomial logit model for all intercity trips has been formulated to examine national-level intercity transportation in Libya. The multinomial logit model was calibrated using a nationwide revealed preference (RP) and stated preference (SP) survey. The model was developed for different purposes of intercity trips (work, social, and recreational), and its parameters were estimated by the maximum likelihood method. The data needed for model development were obtained from all major intercity corridors in Libya, with a final sample size of 1300 interviews. About two-thirds of these data were used for model calibration, and the remainder were used for model validation. This study, the first of its kind in Libya, investigates intercity travelers' mode-choice behavior. The intercity travel mode-choice model was successfully calibrated and validated. The outcomes indicate that the overall model is effective and yields high precision of estimation. The proposed model is beneficial because it is responsive to many variables and can be employed to determine the impact of changes in numerous characteristics on the demand for various travel modes. Estimates from the model may also be valuable to planners, who can estimate shares for the various modes and determine the impact of specific policy changes on the demand for intercity travel.
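
A minimal sketch of calibrating a multinomial logit model by maximum likelihood with statsmodels; the modes, explanatory variables, and data below are illustrative stand-ins for the Libyan survey data.

```python
# Calibrate a multinomial logit mode-choice model on synthetic survey data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 866                                    # ~2/3 of 1300 interviews
df = pd.DataFrame({
    "cost": rng.uniform(5, 50, n),         # ticket cost (illustrative units)
    "time": rng.uniform(1, 8, n),          # travel time, hours
    "income": rng.uniform(0.2, 2.0, n),    # household income, thousands
})
util = np.column_stack([
    -0.05 * df.cost - 0.4 * df.time,                     # bus (base mode)
    -0.08 * df.cost - 0.2 * df.time + 0.5 * df.income,   # car
    -0.12 * df.cost - 0.1 * df.time + 1.0 * df.income,   # air
])
p = np.exp(util) / np.exp(util).sum(1, keepdims=True)
mode = np.array([rng.choice(3, p=pi) for pi in p])       # simulated choices

fit = sm.MNLogit(mode, sm.add_constant(df)).fit(disp=False)  # max. likelihood
print(fit.summary())
```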

Prediction of Post Underwater Shock Properties of Polymer - Clay/Silica Hybrid Nanocomposites through Regression Models

Exploding concentrated underwater charges to damage underwater structures such as ship hulls is a part of naval warfare strategies. Adding small amounts of nanosized foreign particles (such as clay or silica) significantly improves the engineering properties of polymers. In the present work, clay at 1, 2, and 3 percent by weight was surface-treated with a suitable silane agent, and the hybrid nanocomposite was prepared by the hand lay-up technique. Mathematical regression models have been employed for theoretical prediction of the post-shock properties, resulting in considerable savings in project time, effort, and cost.
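
A toy regression sketch in the spirit of the prediction step: fitting a quadratic model of a post-shock property versus clay loading; the data points are invented.

```python
# Toy quadratic regression of a post-shock property against clay loading;
# the data points are invented, not the paper's measurements.
import numpy as np

clay_wt = np.array([0.0, 1.0, 2.0, 3.0])          # % clay by weight
strength = np.array([310.0, 342.0, 358.0, 349.0]) # residual strength (invented)

coefs = np.polyfit(clay_wt, strength, 2)
print("predicted at 2.5 wt%:", np.polyval(coefs, 2.5).round(1))
```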

Implementation of Generalized Plasticity in Load-Deformation Behavior of Foundation with Emphasis on Localization Problem

A nonlinear finite element method with eight-noded isoparametric quadrilateral elements is used for prediction of the load-deformation behavior, including the bearing capacity, of foundations. A modified generalized plasticity model with a non-associated flow rule is applied for the analysis of the soil-footing system, and the von Mises and Tresca criteria are also used for simulating soil behavior. The modified generalized plasticity model is able to simulate load-deformation behavior including softening. Localization phenomena were considered by using different meshes, but were not observed in the examples. Predictions by the modified generalized plasticity model show good agreement with laboratory data and theoretical predictions, in comparison with the other models.
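
For orientation, the sketch below shows a textbook stress-update building block of elastoplastic FE analysis: a radial-return algorithm for the von Mises criterion with linear isotropic hardening. It is not the paper's modified generalized plasticity model, and the material data are illustrative.

```python
# Textbook radial-return stress update for the von Mises criterion with
# linear isotropic hardening: the kind of routine evaluated at each
# integration point of an elastoplastic FE analysis.
import numpy as np

E, nu, sigma_y0, H = 200e9, 0.3, 250e6, 10e9   # illustrative material data
G = E / (2 * (1 + nu))
K = E / (3 * (1 - 2 * nu))

def radial_return(eps, eps_p, alpha):
    """Return stress and updated plastic state for a total strain tensor."""
    eps_e = eps - eps_p                          # elastic trial strain
    vol = np.trace(eps_e) / 3 * np.eye(3)
    s_trial = 2 * G * (eps_e - vol)              # deviatoric trial stress
    norm = np.linalg.norm(s_trial)
    f = norm - np.sqrt(2 / 3) * (sigma_y0 + H * alpha)
    if f <= 0:                                   # elastic step
        return s_trial + 3 * K * vol, eps_p, alpha
    dgamma = f / (2 * G + 2 * H / 3)             # plastic multiplier
    n = s_trial / norm                           # return direction
    return (s_trial - 2 * G * dgamma * n + 3 * K * vol,
            eps_p + dgamma * n,
            alpha + np.sqrt(2 / 3) * dgamma)

eps = np.diag([2e-3, -0.6e-3, -0.6e-3])          # illustrative strain state
sigma, eps_p, alpha = radial_return(eps, np.zeros((3, 3)), 0.0)
dev = sigma - np.trace(sigma) / 3 * np.eye(3)
print("von Mises stress (MPa):", np.sqrt(1.5) * np.linalg.norm(dev) / 1e6)
```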

Predicting Radiative Heat Transfer in Arbitrary Two and Three-Dimensional Participating Media

The radiative exchange method is introduced as a numerical method for the simulation of radiative heat transfer in absorbing, emitting, and isotropically scattering media. In this method, the integro-differential radiative balance equation is solved by using a newly introduced concept for the exchange factor. Even though the radiative source term is calculated on a mesh that is coarser than the one used in computational fluid dynamics, calculating the exchange factor between different coarse elements by using differential integration elements brings the results of the method close to those of the integro-differential radiative equation. A set of equations for calculating exchange factors in two- and three-dimensional Cartesian coordinate systems is presented, and the method is applied to the simulation of radiative heat transfer in a two-dimensional rectangular case and a simple three-dimensional cube. The results of using this method in simulating different cases are verified by comparing them with those of other numerical radiative models.

Statistical Analysis-Driven Risk Assessment of Criteria Air Pollutants: A Sulfur Dioxide Case Study

A 7-step method (with 25 sub-steps) for assessing the risk of air pollutants is introduced. The steps are: pre-considerations, sampling, statistical analysis, exposure matrix and likelihood, dose-response matrix and likelihood, total risk evaluation, and discussion of findings. All of these terms are well understood; however, almost all of the steps have been modified, improved, and coupled in such a way that a comprehensive method results. Accordingly, SADRA (Statistical Analysis-Driven Risk Assessment) emphasizes extensive and ongoing application of analytical statistics within traditional risk assessment models. A sulfur dioxide case study validates the claim and provides a good illustration of the method.
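
A toy illustration of combining steps 4 through 6: the exposure likelihoods and a dose-response column are contracted into a total risk figure; all probabilities are invented.

```python
# Toy combination of the exposure and dose-response matrices (steps 4-6):
# total risk as the exposure-weighted probability of response.
import numpy as np

# P(concentration bin) for SO2, e.g. from statistics of monitoring data
exposure_likelihood = np.array([0.70, 0.25, 0.05])     # low / medium / high
# P(adverse response | concentration bin), e.g. from a dose-response curve
dose_response = np.array([0.001, 0.02, 0.15])

total_risk = exposure_likelihood @ dose_response
print(f"total population risk: {total_risk:.4f}")
```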

A Quantitative Approach to Strategic Design of Component-Based Business Process Models

A new paradigm for software design and development models software by its business process, translates the model into a process execution language, and has it run by a supporting execution engine. This process-oriented paradigm promotes modeling of software by less technical users or business analysts, as well as rapid development. Since business process models may be shared by different organizations, and sometimes even by different business domains, it is interesting to apply a technique used in traditional software component technology to the design of reusable business processes. This paper discusses an approach that applies a technique for software component fabrication to the design of process-oriented software units, called process components. These process components result from decomposing a business process of a particular application domain into subprocesses, with the aim that the process components be reusable in different process-based software models. The approach is quantitative because the quality of a process component design is measured from the technical features of the process components. It is also strategic because the measured quality is evaluated against business-oriented component-management goals. A software tool has been developed to measure how good a process component design is according to the required managerial goals, in comparison with other designs. We also discuss the benefits of reusable process components.

A Study on Removal of Toluidine Blue Dye from Aqueous Solution by Adsorption onto Neem Leaf Powder

Adsorption of Toluidine blue dye from aqueous solutions onto Neem Leaf Powder (NLP) has been investigated. The surface of this natural material was characterized by particle size analysis, scanning electron microscopy (SEM), Fourier transform infrared (FTIR) spectroscopy, and X-ray diffraction (XRD). The effects of process parameters such as initial concentration, pH, temperature, and contact duration on the adsorption capacity were evaluated, among which pH was found to be the most influential. The data were analyzed using the Langmuir and Freundlich isotherms to explain the equilibrium characteristics of adsorption, and kinetic models such as the pseudo-first-order and pseudo-second-order models and the Elovich equation were utilized to describe the kinetic data. The experimental data were well fitted by the Langmuir adsorption isotherm model and the pseudo-second-order kinetic model. The thermodynamic parameters, such as the free energy of adsorption (ΔG°), enthalpy change (ΔH°), and entropy change (ΔS°), were also determined and evaluated.
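
A minimal sketch of fitting the Langmuir isotherm, q = q_max·K·C/(1 + K·C), to equilibrium data with scipy; the data points below are illustrative, not the paper's measurements.

```python
# Fit the Langmuir isotherm to equilibrium adsorption data (invented points).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, qmax, K):
    return qmax * K * C / (1 + K * C)

C_eq = np.array([5.0, 10, 20, 40, 80])      # mg/L equilibrium concentration
q_eq = np.array([12.0, 20, 29, 36, 41])     # mg/g dye adsorbed (invented)

(qmax, K), _ = curve_fit(langmuir, C_eq, q_eq, p0=[50, 0.05])
print(f"qmax = {qmax:.1f} mg/g, K = {K:.3f} L/mg")
```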