Abstract: Opinion formation in complex social networks may exhibit complex system dynamics even when based on some of the simplest evolution models. An interesting and important issue is the effect of the initial state on the final steady-state opinion distribution. By carrying out extensive simulations and providing the necessary discussion, we show that, while different initial opinion distributions certainly make a difference to opinion evolution in social systems without noise, in systems with noise, given enough time, different initial states make essentially no significant difference to the final steady state. Instead, it is the basal distribution of the preferred opinions that decides the final state of the systems. We briefly explain the reasons leading to these observations. Such an observation contradicts a long-held belief about the role of the initial state in opinion formation, demonstrating the dominant role that opinion mutation can play given enough time. The observation may help to better understand certain observations of opinion evolution dynamics in real-life social networks.
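The dominance of mutation over initial conditions described above can be illustrated with a minimal sketch; the imitate-or-mutate rule, the ring neighborhood, and all parameter values are illustrative assumptions, not the paper's exact model:

```python
import random

def evolve(opinions, preferred, p_mutate, steps, rng):
    """Asynchronous updates: a random agent either mutates toward its
    preferred opinion (noise) or imitates a random ring neighbor."""
    n = len(opinions)
    for _ in range(steps):
        i = rng.randrange(n)
        if rng.random() < p_mutate:
            opinions[i] = preferred[i]                        # mutation / noise
        else:
            opinions[i] = opinions[(i + rng.choice((-1, 1))) % n]  # imitation
    return opinions

n = 100
preferred = [1] * n                       # basal preferred-opinion distribution
all_zero = evolve([0] * n, preferred, 0.1, 20000, random.Random(0))
all_one = evolve([1] * n, preferred, 0.1, 20000, random.Random(1))
share_zero_init = sum(all_zero) / n
share_one_init = sum(all_one) / n
```

With noise present, both the all-zero and the all-one initial states drift toward the state dictated by the preferred opinions, mirroring the abstract's conclusion.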
Abstract: The present work proposes the development of an adaptive control system which enables the suppression of Pilot-Induced Oscillations (PIO) in Digital Fly-By-Wire (DFBW) aircraft. The proposed system consists of a Modified Model Reference Adaptive Control (M-MRAC) scheme integrated with the gain-scheduling technique. PIO events are detected using a Real Time Oscillation Verifier (ROVER) algorithm, which then enables the system to switch between two reference models: one for the PIO condition, with low proneness to the phenomenon, and another for the normal condition, with high (or medium) proneness. The reference models are defined in closed loop using the Linear Quadratic Regulator (LQR) control methodology for Multiple-Input Multiple-Output (MIMO) systems. The implemented algorithms are simulated in software with state-space models and commercial flight simulators as the controlled elements, together with pilot dynamics models. A sequence of pitch angles, termed the Synthetic Task (Syntask), is used as the reference signal that the pilot models must track. The initial outcomes show that the proposed system can detect and suppress (or mitigate) PIO in real time before the oscillations reach high amplitudes.
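The detect-then-switch logic described above can be sketched as follows; the amplitude threshold, frequency band, and reference-model parameters are illustrative assumptions, not ROVER's or the paper's actual values:

```python
def detect_pio(peaks, dt_between_peaks, amp_limit=2.0, freq_band=(0.2, 1.0)):
    """ROVER-like check: flag PIO when the dominant oscillation's
    amplitude and frequency both fall in a critical region.
    peaks: successive |pitch| peak amplitudes (deg);
    dt_between_peaks: seconds between consecutive peaks (half period)."""
    freq = 1.0 / (2 * dt_between_peaks)
    amplitude = max(peaks)
    return amplitude > amp_limit and freq_band[0] <= freq <= freq_band[1]

def select_reference_model(pio_flag):
    # Two hypothetical LQR-shaped reference models: (name, wn, zeta).
    return ("low_proneness", 2.0, 0.9) if pio_flag else ("nominal", 3.0, 0.7)

flag = detect_pio([2.5, 3.1, 2.8], dt_between_peaks=1.0)
model = select_reference_model(flag)
```

In the actual system, the adaptive law would then drive the closed loop toward the selected reference model rather than merely returning its parameters.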
Abstract: Ambient air pollution with particulate matter (PM10) is a persistent, systemic problem in many countries around the world. The accumulation of a large number of measurements of both PM10 concentrations and the accompanying atmospheric factors allows for statistical modeling to detect dependencies and forecast future pollution. This study applies the classification and regression trees (CART) method for building and analyzing PM10 models. In the empirical study, average daily air-quality data for the city of Pleven, Bulgaria, over a period of 5 years are used. Predictors in the models are seven meteorological variables, time variables, as well as lagged PM10 variables and some lagged meteorological variables, delayed by 1 or 2 days with respect to the initial time series. The degree of influence of the predictors in the models is determined. The selected best CART models are used to forecast PM10 concentrations two days beyond the last date in the modeling procedure and show very accurate results.
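One CART step, the exhaustive search for the split that minimizes within-node variance, can be sketched in a few lines; the single temperature predictor and the toy values are illustrative assumptions (the study uses seven meteorological variables plus time and lagged terms):

```python
def sse(values):
    """Sum of squared errors around the node mean."""
    if not values:
        return 0.0
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values)

def best_split(x, y):
    """Search the threshold that minimizes the pooled SSE of the
    two child nodes -- the core of one CART regression split."""
    best = (None, float("inf"))
    for t in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        cost = sse(left) + sse(right)
        if cost < best[1]:
            best = (t, cost)
    return best

# Toy example: PM10 rises sharply below a temperature threshold.
temp = [-5, -3, -1, 2, 8, 12, 15, 20]
pm10 = [95, 90, 88, 60, 35, 30, 28, 25]
threshold, cost = best_split(temp, pm10)
```

A full CART model recurses this split on each child node and prunes the resulting tree; libraries such as scikit-learn automate that.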
Abstract: Stress has deleterious effects at the physical, psychological, and organizational levels, which highlights the need for effective coping strategies to deal with it. Several coping models exist, but they neither integrate the different strategies in a coherent way nor take into account recent research on emotional coping and acceptance of the stressful situation. To fill these gaps, an integrative model incorporating the main coping strategies was developed. This model arises from a review of the scientific literature on coping, from a qualitative study carried out among workers with low or high levels of stress, and from an analysis of clinical cases. The model makes it possible to understand under what circumstances the strategies are effective or ineffective and how one might use them more wisely. It includes Specific Strategies for controllable situations (Modification of the Situation and Resignation-Disempowerment), Specific Strategies for non-controllable situations (Acceptance and Stubborn Relentlessness), as well as so-called General Strategies (Wellbeing and Avoidance). This study presents the development and validation of an instrument to measure coping strategies based on this model. An initial pool of items was generated from the conceptual definitions, and three expert judges validated the content. Of these, 18 items were selected for a short-form questionnaire. A sample of 300 students and employees from a Quebec university was used to validate the questionnaire. Regarding the reliability of the instrument, the indices for inter-rater agreement (Krippendorff's alpha) and internal consistency (Cronbach's alpha) are satisfactory. To evaluate construct validity, a confirmatory factor analysis using Mplus supports the existence of a model with six factors.
The results of this analysis also suggest that this configuration is superior to alternative models. The correlations show that the factors are only loosely related to each other. Overall, the analyses suggest that the instrument has good psychometric qualities and demonstrate the relevance of further work to establish predictive validity and reconfirm its structure. This instrument will help researchers and clinicians better understand and assess strategies for coping with stress and thus prevent mental health issues.
Abstract: A combustion analysis of a suspended sodium droplet is performed by numerically solving the Navier-Stokes equations and the energy conservation equations. The combustion model consists of pre-ignition and post-ignition submodels. The reaction rate for the pre-ignition model is based on chemical kinetics, while that for the post-ignition model is based on the mass transfer rate of oxygen. The calculated droplet temperature is shown to be in good agreement with existing experimental data. The temperature field in and around the droplet is obtained, as well as the variation of the droplet shape, and the present numerical model is confirmed to be effective for combustion analysis.
Abstract: Most people see human faces in car front and rear ends because of pareidolia. Ninety-six people, aged 18 to 72 years, were surveyed to determine how many of them saw a face in vehicle styling. Of the participants, 94% saw faces in the front-end design of production models. All participants who recognized faces indicated that most styles showed some degree of an angry expression. Women were found to be more likely to see faces in inanimate objects; however, whether women were also more likely to perceive anger in vehicle designs requires further clarification. Survey responses were correlated with the design features of the vehicles to determine which cues respondents were likely looking at when responding. Whether the features looked anthropomorphic was key to anger perception: features such as the headlights, which can represent eyes, and the air intake, which can represent a mouth, correlated highly with trends in the scores. Results are compared among models and makers, by groupings of body-style classifications for the top 12 brands sold in the US, and by year for the top 20 models sold in the US in 2016. All of the top-selling models increased in perceived anger over the last 20 years or since the model was introduced, but the relative change varied by body-style grouping.
Abstract: Cardiologists perform cardiac auscultation to detect abnormalities in heart sounds. Since accurate auscultation is a crucial first step in screening patients with heart disease, there is a need to develop computer-aided detection/diagnosis (CAD) systems that assist cardiologists in interpreting heart sounds and provide second opinions. In this paper, different algorithms are implemented for automated heart sound classification using unsegmented phonocardiogram (PCG) signals. A support vector machine (SVM), an artificial neural network (ANN), and a Cartesian genetic programming evolved artificial neural network (CGPANN) are explored in this study without applying any segmentation algorithm. The signals are first pre-processed to remove unwanted frequencies. Both time- and frequency-domain features are then extracted for training the different models. The algorithms are tested in multiple scenarios and their strengths and weaknesses are discussed. Results indicate that the SVM outperforms the rest with an accuracy of 73.64%.
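The kinds of time- and frequency-domain features mentioned can be sketched on a synthetic signal; the specific features and the test tone are illustrative assumptions, not the paper's exact feature set:

```python
import math

def zero_crossing_rate(signal):
    """Time-domain feature: fraction of sample pairs that change sign."""
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(signal) - 1)

def signal_energy(signal):
    """Time-domain feature: total signal energy."""
    return sum(s * s for s in signal)

def dominant_frequency(signal, fs):
    """Frequency-domain feature: naive DFT magnitude peak (Hz)."""
    n = len(signal)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

fs = 200  # Hz, assumed sampling rate
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]  # 10 Hz tone
features = [zero_crossing_rate(sig), signal_energy(sig), dominant_frequency(sig, fs)]
```

In practice such feature vectors, computed per PCG recording, are what the SVM, ANN, and CGPANN classifiers are trained on.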
Abstract: The use of microscopic traffic simulation in evaluating the operational and safety conditions at toll plazas is demonstrated. Two toll plazas in New Jersey are selected as case studies, developed, and validated in the Paramics traffic simulation software. To simulate drivers' lane-selection behavior in Paramics, a utility-based lane-selection approach is implemented through the Paramics Application Programming Interface (API). For each vehicle approaching the toll plaza, a utility value is assigned to each toll lane by taking into account the factors likely to impact drivers' lane selection, such as approach lane, exit lane, and queue lengths. The results demonstrate that similar operational conditions, such as lane-by-lane toll plaza traffic volumes, can be attained using this approach. In addition, safety at toll plazas is assessed via a surrogate safety measure. In particular, the crash index (CI), an improved surrogate measure of time-to-collision (TTC) that reflects the severity of a crash, is used in the simulation analyses. The results indicate that the spatial and temporal frequency of observed crashes can be simulated using the proposed methodology. Further analyses can be conducted to evaluate and compare various operational decisions and safety measures using microscopic simulation models.
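The utility-based lane-selection idea can be sketched as a weighted sum of the factors the abstract lists; the linear form and the weights are illustrative assumptions, not the paper's calibrated utility function:

```python
def lane_utility(queue_len, lane_offset, exit_offset,
                 w_queue=1.0, w_lane=0.5, w_exit=0.5):
    """Shorter queues and smaller lateral shifts from the approach
    lane and the desired exit lane give a higher (less negative) utility."""
    return -(w_queue * queue_len + w_lane * lane_offset + w_exit * exit_offset)

def choose_lane(approach_lane, exit_lane, queues):
    """Pick the toll lane with the maximum utility for one vehicle."""
    utilities = {
        lane: lane_utility(q, abs(lane - approach_lane), abs(lane - exit_lane))
        for lane, q in queues.items()
    }
    return max(utilities, key=utilities.get)

# Vehicle approaching in lane 2, heading to exit lane 3, per-lane queues:
chosen = choose_lane(2, 3, {1: 4, 2: 5, 3: 1, 4: 2})
```

In the Paramics API implementation, a routine like this would run for each vehicle as it enters the plaza's decision zone.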
Abstract: This paper presents an in-depth investigation of the effects of several grid supply harmonic voltages on the stator currents of an example wound rotor induction machine. The observed effects of higher-order grid supply harmonics are identified using a finite-element time-stepping transient model, as well as a time-stepping electromagnetic model. In addition, a number of analytical equations for calculating the spectral content of the stator currents are presented. These equations are validated through comparison with the spectra predicted by the finite-element and electromagnetic models. The presented study provides a better understanding of the origin of the supply harmonic effects identified in the stator currents of the example wound rotor induction machine, and it helps in understanding the effects of higher-order supply harmonics on the machine's harmonic emissions.
Abstract: Integrated resilience engineering (IRE) is capable of returning banking systems to the normal state under adverse economic circumstances. In this study, the information system of a large bank (with several branches) is assessed and optimized under severe economic conditions. Data envelopment analysis (DEA) models are employed to achieve the objective of this study. Nine IRE factors are considered as the outputs, and a dummy variable is defined as the input of the DEA models. A standard questionnaire is designed and distributed among executive managers, who are considered the decision-making units (DMUs). The reliability and validity of the questionnaire are examined using Cronbach's alpha and a t-test. The most appropriate DEA model is determined based on average efficiency and a normality test. It is shown that the proposed integrated design provides higher efficiency than the conventional RE design. Results of the sensitivity and perturbation analysis indicate that self-organization, fault tolerance, and reporting culture together account for about 50 percent of the total weight.
Abstract: The classification and prediction of efficiencies in Data Envelopment Analysis (DEA) is an important issue, especially in large-scale problems or when new units frequently enter the set under assessment. In this paper, we contribute to the subject by proposing a grid structure based on interval segmentations of the ranges of values of the inputs and outputs. These intervals, combined, define hyper-rectangles that partition the space of the problem. This structure, exploited by interval DEA models and a dominance relation, acts as a DEA pre-processor, enabling the classification and prediction of efficiency scores without applying any DEA model.
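The grid-and-dominance pre-processing idea can be sketched as follows; the interval edges and the one-input/one-output example are illustrative assumptions (the paper's grids come from interval segmentations of the observed data ranges):

```python
def cell_of(values, grids):
    """Map a DMU's values to its hyper-rectangle cell: for each
    dimension, count how many interval edges the value has passed."""
    return tuple(sum(1 for edge in grid if v >= edge)
                 for v, grid in zip(values, grids))

def dominates(cell_a, cell_b, n_inputs):
    """cell_a dominates cell_b when its input cells are no larger
    and its output cells are no smaller."""
    inputs_ok = all(a <= b for a, b in zip(cell_a[:n_inputs], cell_b[:n_inputs]))
    outputs_ok = all(a >= b for a, b in zip(cell_a[n_inputs:], cell_b[n_inputs:]))
    return inputs_ok and outputs_ok

# One input (e.g. cost) and one output (e.g. revenue); edges split each range.
grids = [[10, 20, 30], [100, 200, 300]]
dmu_a = cell_of([12, 250], grids)  # low input, high output
dmu_b = cell_of([25, 150], grids)  # higher input, lower output
```

A dominated unit can be classified as inefficient from its cell alone, which is what allows screening new units without re-running a DEA model.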
Abstract: Electricity prices have sophisticated features such as high volatility, nonlinearity, and high frequency that make forecasting quite difficult. At the same time, electricity prices have a volatile yet non-random character, so it is possible to identify patterns based on historical data. Intelligent decision-making requires accurate price forecasting for market traders, retailers, and generation companies. So far, many shallow artificial neural network (shallow-ANN) models have been published in the literature and have shown adequate forecasting results. In recent years, neural networks with many hidden layers, referred to as deep neural networks (DNNs), have been adopted in the machine learning community. The goal of this study is to investigate the electricity price forecasting performance of shallow-ANN and DNN models for the Turkish day-ahead electricity market. The forecasting accuracy of the models is evaluated with publicly available data from the Turkish day-ahead electricity market. Historical load, price, and weather temperature data are used as the input variables for the models. The data set includes power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution. Forecasting studies are carried out comparatively with shallow-ANN and DNN models for the Turkish electricity market in this period. The main contribution of this study is the investigation of different shallow-ANN and DNN models in the field of electricity price forecasting. All models are compared in terms of their MAE (Mean Absolute Error) and MSE (Mean Squared Error) results. The DNN models give better forecasting performance than the shallow-ANN models. The best five MAE results for the DNN models are 0.346, 0.372, 0.392, 0.402, and 0.409.
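The MAE and MSE scores used to rank the models are straightforward to compute; the price values below are illustrative assumptions, not data from the Turkish market:

```python
def mae(actual, predicted):
    """Mean Absolute Error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mse(actual, predicted):
    """Mean Squared Error."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

actual = [42.0, 45.5, 50.0, 47.2]     # hypothetical day-ahead prices
predicted = [41.5, 46.0, 49.0, 47.0]  # hypothetical model output
scores = (mae(actual, predicted), mse(actual, predicted))
```

MAE penalizes all errors linearly, while MSE weights large misses more heavily, which is why reporting both gives a fuller picture of forecasting performance.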
Abstract: Load forecasting has become crucial in recent years and is a popular topic in the forecasting area. Many different power forecasting models have been tried out for this purpose. Electricity load forecasting is necessary for energy policies and for healthy, reliable grid systems. Effective forecasting of renewable energy loads leads decision makers to minimize the costs of electric utilities and power plants, and forecasting tools are required to predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. We present models for predicting renewable energy loads based on deep neural networks, especially the Long Short-Term Memory (LSTM) algorithm. Deep learning allows multiple layers of models to learn representations of data, and LSTM cells are able to store information for long periods of time. Deep learning models have recently been used to forecast renewable energy sources, for example to predict wind and solar power. Historical load and weather information are the most important input variables in power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution; the models use publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies are carried out with these data via a deep neural network approach including the LSTM technique for the Turkish electricity market. A total of 432 different models are created by varying the number of layers, cell counts, and dropout rates. The adaptive moment estimation (ADAM) algorithm is used for training as a gradient-based optimizer instead of stochastic gradient descent (SGD); ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared in terms of MAE (Mean Absolute Error) and MSE (Mean Squared Error). The best MAE results among the 432 tested models are 0.66, 0.74, 0.85, and 1.09. The forecasting performance of the proposed LSTM models compares favorably with results reported in the literature.
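Before any LSTM can be trained, the hourly series must be turned into supervised (window, next-value) pairs; this standard preprocessing step can be sketched as follows, where the window length and the load values are illustrative assumptions:

```python
def make_windows(series, window):
    """Slide a fixed-length window over the series; each window is an
    input sample and the value immediately after it is the target."""
    xs, ys = [], []
    for i in range(len(series) - window):
        xs.append(series[i:i + window])
        ys.append(series[i + window])
    return xs, ys

hourly_load = [10, 12, 15, 14, 13, 16, 18]  # hypothetical MW values
x, y = make_windows(hourly_load, window=3)
```

Each window in `x` would be fed to the LSTM as a length-3 sequence, with the corresponding entry of `y` as the one-step-ahead target; deep learning frameworks then stack LSTM layers, dropout, and an ADAM optimizer on top of exactly this data shape.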
Abstract: The main hypothesis in the dynamics of solid phase microextraction (SPME) is that steady-state mass transfer holds throughout the SPME extraction process: steady-state diffusion is established in the two phases, and exchange of the analyte at the solid-phase film/water interface is fast. An improved model is proposed in this paper to handle the situation where the analyte (atrazine) is in contact with colloid suspensions (carboxylate latex in aqueous solution). A mathematical solution is obtained by substituting the diffusion coefficient with the mean of the diffusion coefficients of the analyte and the carboxylate latex, and likewise the layer thickness with the mean thickness in aqueous solution. This solution provides an equation relating the extracted amount of analyte to the extraction time that is somewhat more complicated than previous models, and it gives a better description of experimental observations. Moreover, the rate constant of the analyte obtained is in satisfactory agreement with that obtained from the initial curve fitting.
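The substitution described above can be written compactly; the notation here is an assumption, since the abstract does not give the paper's symbols ($D_A$, $D_L$ for the diffusion coefficients of the analyte and the carboxylate latex, $\delta_A$, $\delta_L$ for the corresponding boundary-layer thicknesses):

```latex
\bar{D} = \frac{D_A + D_L}{2},
\qquad
\bar{\delta} = \frac{\delta_A + \delta_L}{2}
```

The steady-state flux expressions of the earlier models are then re-used with $\bar{D}$ and $\bar{\delta}$ in place of the single-phase values.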
Abstract: In Dynamic Data Envelopment Analysis (DDEA), a subfield of Data Envelopment Analysis (DEA), the productivity of Decision Making Units (DMUs) is considered in relation to time. In this setting, as accepted by most researchers, there are outputs produced by a DMU to be used as inputs at a future time; those outputs are known as intermediates. The common DDEA models do not take into account the shape of the distribution of the input, output, or intermediate data, assuming that their virtual values do not deviate from linearity. This weakness limits the accuracy and analytical power of the traditional DDEA models. In this paper, using the concept of piecewise linear inputs and outputs, the authors propose an extended DDEA model. The proposed model increases the flexibility of the traditional DDEA models and improves the measurement of the dynamic performance of DMUs.
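The building block of such an extension, a piecewise-linear virtual value replacing a single linear weight, can be sketched as follows; the breakpoints and slopes are illustrative assumptions:

```python
def piecewise_value(x, breakpoints, slopes):
    """Accumulate the virtual value of quantity x segment by segment:
    each segment up to its breakpoint contributes at its own slope."""
    total, prev = 0.0, 0.0
    for bp, slope in zip(breakpoints, slopes):
        seg = min(x, bp) - prev
        if seg <= 0:
            break
        total += slope * seg
        prev = bp
    return total

# An output valued at slope 3 for the first 10 units, slope 1 afterwards:
v_15 = piecewise_value(15, breakpoints=[10, 100], slopes=[3, 1])
v_5 = piecewise_value(5, breakpoints=[10, 100], slopes=[3, 1])
```

In the extended DDEA model the slopes become decision variables of the linear program, so the virtual value of an input, output, or intermediate no longer has to be proportional to its quantity.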
Abstract: Requirements Engineering (RE) is the part of the software development lifecycle in which the structure of a program is defined. Software product line development is a comparatively new topic within the domain of software engineering. It also plays an important role in decision making and is ultimately helpful, in a growing business environment, for productive software development. Decisions are central to engineering processes and hold them together; it is argued that better decisions lead to better engineering, and achieving better decisions requires that they be understood in detail. To address these issues, companies are moving towards Software Product Line Engineering (SPLE), which helps provide large varieties of products with minimum development effort and cost. This paper proposes a new framework for software product lines and compares it with other models. The results can help in understanding the needs of SPL testing by identifying points that still require additional investigation. In future work, we will apply this model in a controlled environment with industrial SPL projects, which will open a new horizon for SPL process management and testing strategies.
Abstract: The brain's functional connectivity, while temporally non-stationary, does express consistency at a macro spatial level. The study of stable resting-state connectivity patterns hence provides opportunities for identifying diseases if such stability is severely perturbed. A mathematical model replicating the brain's spatial connections is useful for understanding the brain's representative geometry and complements the empirical model where it falls short. Empirical computations tend to involve large matrices and become infeasible with fine parcellation; the proposed analytical model has no such computational problems. To improve replicability, data from 92 subjects are obtained from two open sources. The proposed methodology, inspired by financial theory, uses multivariate regression to find the relationship of every cortical region of interest (ROI) with some pre-identified hubs, which act as representatives of the entire cortical surface. A variance-covariance framework of all ROIs is then built from these relationships to link up all the ROIs. The result is a high level of agreement between model and empirical correlations, in the range of 0.59 to 0.66 after adjusting for sample size, an increase of almost forty percent. More significantly, the model framework provides an intuitive way to delineate between systemic drivers and idiosyncratic noise while reducing the dimensionality more than 30-fold, hence providing a way to conduct attribution analysis. Due to its analytical nature and simple structure, the model is useful as a standalone toolkit for network dependency analysis or as a module in other mathematical models.
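The hub-regression idea, regress each ROI on hub time series and rebuild ROI-ROI covariance from the fitted loadings, can be sketched in its simplest single-hub form; the toy time series and the single hub are illustrative assumptions (the study uses several hubs and multivariate regression):

```python
def ols_beta(x, y):
    """Ordinary least-squares slope of y on x (single regressor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    var = sum((a - mx) ** 2 for a in x) / n
    return cov / var

hub = [1.0, 2.0, 3.0, 4.0, 5.0]      # hypothetical hub time series
roi_a = [2.1, 3.9, 6.2, 7.8, 10.1]   # approximately 2.0 * hub
roi_b = [0.4, 1.1, 1.4, 2.2, 2.4]    # approximately 0.5 * hub

beta_a, beta_b = ols_beta(hub, roi_a), ols_beta(hub, roi_b)
hub_var = sum((h - 3.0) ** 2 for h in hub) / len(hub)
# Model-implied (systemic) covariance between the two ROIs via the hub:
model_cov = beta_a * beta_b * hub_var
```

The residuals of each regression carry the idiosyncratic part, which is how the framework separates systemic drivers from noise while storing only one loading vector per ROI instead of a full ROI-by-ROI matrix.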
Abstract: Transportation network development in developing countries is proceeding at a rapid pace. The majority of such networks consist of railways and expressways, which pass through diverse topography, landforms, and geological conditions despite the avoidance principle applied during route selection. Construction of such networks demands many low-to-high embankments, which require improvement of the foundation soil. This paper focuses on the various advanced ground improvement techniques used to improve soft soils, the modelling approaches, and their predictive ability for embankment construction. The ground improvement techniques can be broadly classified into three groups, i.e., the densification group, the drainage and consolidation group, and the reinforcement group, which are discussed with some case studies. Various methods have been used in modelling the embankments, from simple 1-dimensional to complex 3-dimensional models, using a variety of constitutive models. However, the reliability of the predictions is not found to improve systematically with the level of sophistication; sometimes the predictions deviate by more than 60% from the monitored values despite using the same level of sophistication. This deviation is mainly due to the selection of the constitutive model, assumptions made at different stages, deviations in the selection of model parameters, and simplifications made during physical modelling of the ground conditions. It can be reduced by using optimization processes, optimization tools, and sensitivity analysis of the model parameters, which guide the selection of appropriate model parameters.
Abstract: This paper introduces the applicability of underwater photogrammetric survey within challenging conditions as the main tool to enhance and enrich the process of documenting archaeological excavation through the creation of 4D models. Photogrammetry has been attempted on underwater archaeological sites since at least the 1970s, and today the production of traditional 3D models is becoming common practice within the discipline. Underwater photogrammetry is more often implemented to record exposed underwater archaeological remains and less so as a dynamic interpretative tool; it therefore tends to be applied in bright environments when underwater visibility exceeds 1 m, limiting its implementation on most submerged archaeological sites, which lie in more turbid conditions. Recent years have seen the significant development of better digital photographic sensors and improvements in optical technology ideal for darker environments. Such developments, in tandem with powerful computing systems for processing, have allowed this research to use underwater photogrammetry as a standard recording and interpretative tool. Using multi-source photogrammetry (five GoPro Hero5 Black cameras), this paper presents the accumulation of daily (4D) underwater surveys carried out at the Early Bronze Age (3300 BC) to Late Ottoman (17th century AD) archaeological site of Ropotamo in the Bulgarian Black Sea under challenging conditions (< 0.5 m visibility). It shows that underwater photogrammetry can and should be used as one of the main recording methods, even in low light and poor underwater conditions, as a way to better understand the complexity of the underwater archaeological record.
Abstract: Minimizing the weight of flexible structures reduces material use and costs as well. However, such structures can become prone to vibrations, and attenuating these vibrations has become a pivotal engineering problem that has shifted the focus of many research endeavors. One technique for doing so is to design and implement an active control system. Such a system is mainly composed of a vibrating structure, a sensor to perceive the vibrations, an actuator to counteract the influence of disturbances, and finally a controller to generate the appropriate control signals. In this work, two different techniques are explored to create two mathematical models of an active control system. The first is a finite element model with a reduced number of nodes, called a super-element. The second is a state-space representation, i.e., a set of first-order ordinary differential equations. Damping coefficients are calculated and incorporated into both models. The effectiveness of these models is demonstrated by exciting the system at its first natural frequency and developing and implementing an active control strategy to attenuate the resulting vibrations. Results from both modeling techniques are presented and compared.
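The state-space form mentioned above can be sketched for a single vibration mode; the mass-spring-damper parameters, the forward-Euler integrator, and the initial condition are illustrative assumptions, not the paper's model:

```python
def simulate(wn=10.0, zeta=0.02, dt=1e-4, steps=20000):
    """Integrate x'' + 2*zeta*wn*x' + wn^2*x = 0 written as two
    first-order ODEs (the state-space form), using forward Euler.
    State: x = displacement, v = velocity."""
    x, v = 1.0, 0.0                            # initial displacement, at rest
    for _ in range(steps):
        a = -2 * zeta * wn * v - wn ** 2 * x   # acceleration from the state
        x, v = x + dt * v, v + dt * a          # Euler step for both states
    return x

final = simulate()  # displacement after 2 s of lightly damped ringing
```

An active controller would add a feedback force term to the acceleration line (e.g., a gain on `v` to inject extra damping), which is exactly where the control signal enters a state-space model.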