Abstract: This research investigates the effects of heteroscedasticity and periodicity in a Panel Data Regression Model (PDRM) by extending previous work on balanced panel data estimation in the context of fitting a PDRM to banks' audit fees. The estimation of the model was achieved through the derivation of a joint Lagrange Multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, and a conditional LM test for homoscedasticity given first-order positive serial correlation, via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each variation was iterated 1000 times, and the three estimators considered were assessed on the Variance, Absolute Bias (ABIAS), Mean Square Error (MSE), and Root Mean Square Error (RMSE) of the parameter estimates. Eighteen different models were fitted under different specified conditions; the best-fitting model was that of the within estimator when heteroscedasticity is severe, at either zero or positive serial correlation. The LM test results showed that the tests have good size and power, as all three tests are significant at the 5% level for the specified linear form of the heteroscedasticity function, which establishes that banks' operations are severely heteroscedastic in nature with little or no periodicity effect.
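The four assessment criteria used in the simulation study (Variance, ABIAS, MSE, RMSE) can be sketched for a generic Monte Carlo replication loop; the sample-mean estimator and data-generating process below are illustrative stand-ins, not the PDRM estimators themselves.

```python
import numpy as np

def assess_estimator(estimates, true_value):
    """Summarize Monte Carlo replications of a parameter estimate
    by the four criteria used in the simulation study."""
    estimates = np.asarray(estimates, dtype=float)
    abias = abs(estimates.mean() - true_value)    # absolute bias (ABIAS)
    mse = ((estimates - true_value) ** 2).mean()  # mean square error
    return {
        "Variance": estimates.var(ddof=1),        # spread of the estimates
        "ABIAS": abias,
        "MSE": mse,
        "RMSE": np.sqrt(mse),                     # root mean square error
    }

# illustrative replication loop: 1000 replications of a sample-mean estimator
rng = np.random.default_rng(0)
beta_true = 2.0
draws = [rng.normal(beta_true, 1.0, size=50).mean() for _ in range(1000)]
summary = assess_estimator(draws, beta_true)
```

Comparing estimators then amounts to tabulating these four quantities per estimator and per simulation variation.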
Abstract: Assessing several individuals intensively over time
yields intensive longitudinal data (ILD). Even though ILD provide
rich information, they also bring other data analytic challenges. One
of these is the increased occurrence of missingness with increased
study length, possibly under non-ignorable missingness scenarios.
Multiple imputation (MI) handles missing data by creating several
imputed data sets, and pooling the estimation results across imputed
data sets to yield final estimates for inferential purposes. In this
article, we introduce dynr.mi(), a function in the R package,
Dynamic Modeling in R (dynr). The package dynr provides a suite
of fast and accessible functions for estimating and visualizing the
results from fitting linear and nonlinear dynamic systems models in
discrete as well as continuous time. By integrating the estimation
functions in dynr and the MI procedures available from the R
package, Multivariate Imputation by Chained Equations (MICE), the
dynr.mi() routine is designed to handle possibly non-ignorable
missingness in the dependent variables and/or covariates in a
user-specified dynamic systems model via MI, with convergence
diagnostic checks. We utilized dynr.mi() to examine, in the context
of a vector autoregressive model, the relationships among individuals’
ambulatory physiological measures, and self-report affect valence
and arousal. The results from MI were compared to those from
listwise deletion of entries with missingness in the covariates.
When we determined the number of iterations based on the
convergence diagnostics available from dynr.mi(), differences in
the statistical significance of the covariate parameters were observed
between the listwise deletion and MI approaches. These results
underscore the importance of considering diagnostic information in
the implementation of MI procedures.
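The pooling step that MI relies on, combining per-imputation estimates and variances via Rubin's rules, can be sketched as follows; this is a generic illustration of the combination rule, not the internals of dynr.mi().

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool point estimates and squared standard errors from m imputed
    data sets using Rubin's rules (the combination step MI relies on)."""
    q = np.asarray(estimates, dtype=float)   # per-imputation point estimates
    u = np.asarray(variances, dtype=float)   # per-imputation variances
    m = len(q)
    q_bar = q.mean()                         # pooled point estimate
    b = q.var(ddof=1)                        # between-imputation variance
    t = u.mean() + (1.0 + 1.0 / m) * b       # total variance
    return q_bar, t

# five imputed data sets' estimates of one coefficient (illustrative values)
est, total_var = pool_rubin([0.52, 0.48, 0.55, 0.50, 0.45],
                            [0.010, 0.012, 0.011, 0.009, 0.010])
```

The total variance inflates the average within-imputation variance by the between-imputation spread, which is how MI propagates missing-data uncertainty into the final standard errors.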
Abstract: Tourism industry development is one of the key priorities in Georgia, as it has a positive influence on economic activity. Its contribution is very important for the different regions, as well as for the national economy. Benefits of the tourism industry include new jobs, service development, and increased tax revenues. The main aim of this research is to review and analyze the potential of the Georgian tourism industry, with its long-term strategy and current challenges. Planning activities for long-term development requires evaluating several factors at the regional and national levels, including activities, transportation, services, lodging facilities, infrastructure, and institutions. The major research contributions are practical estimates of regional tourism development, which plays an important role in the integration process with global markets.
Abstract: The statistical modelling of precipitation data for a
given portion of territory is fundamental for the monitoring of
climatic conditions and for Hydrogeological Management Plans
(HMP). This modelling is rendered particularly complex by the
changes taking place in the frequency and intensity of precipitation,
presumably to be attributed to the global climate change. This paper
applies the Wakeby distribution (with 5 parameters) as a theoretical
reference model. The number and the quality of the parameters
indicate that this distribution may be the appropriate choice for
the interpolations of the hydrological variables and, moreover, the
Wakeby is particularly suitable for describing phenomena producing
heavy tails. The proposed estimation methods for determining the
value of the Wakeby parameters are the same as those used for
density functions with heavy tails. The commonly used procedure
is the classic method of moments weighted with probabilities
(probability weighted moments, PWM), although this has often shown
difficulty of convergence, or rather, convergence to an inappropriate
parameter configuration. In this paper, we analyze the problem of
likelihood estimation for a random variable expressed through its
quantile function. The method of maximum likelihood is, in this case,
more demanding than in more usual estimation settings. The motivation
lies in the sampling and asymptotic properties of the maximum
likelihood estimators, which improve on the estimates obtained by
providing indications of their variability and, therefore, of their
accuracy and reliability. These features are highly appreciated in
contexts where poor decisions, attributable to an inefficient or
incomplete information base, can cause serious damage.
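Since the Wakeby distribution is defined only through its quantile function, both simulation and likelihood work start from that form. A minimal sketch, assuming one common five-parameter parameterization x(F) = ξ + (α/β)[1 − (1−F)^β] − (γ/δ)[1 − (1−F)^(−δ)]:

```python
import numpy as np

def wakeby_quantile(F, xi, alpha, beta, gamma, delta):
    """Wakeby quantile function x(F); the distribution has no closed-form
    density and is handled entirely through this inverse-CDF form."""
    z = 1.0 - np.asarray(F, dtype=float)
    return (xi + (alpha / beta) * (1.0 - z ** beta)
               - (gamma / delta) * (1.0 - z ** (-delta)))

# inverse-transform sampling: feed uniforms through the quantile function
rng = np.random.default_rng(1)
u = rng.uniform(size=10_000)
sample = wakeby_quantile(u, xi=0.0, alpha=1.0, beta=2.0, gamma=0.1, delta=0.2)
```

The (1−F)^(−δ) term generates the heavy upper tail the abstract refers to; likelihood evaluation works on the same expression through its derivative with respect to F.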
Abstract: There are many situations in which human activities have significant effects on the environment, and damage to the ozone layer is one of them. The objective of this work is to use the Least Squares Method, considering linear, exponential, logarithmic, power, and second-degree polynomial models, to analyze through the coefficient of determination (R²) which model best fits the behavior of chlorodifluoromethane (HCFC-142b) concentrations, in parts per trillion, between 1992 and 2018, as well as to estimate future concentrations 5 and 10 periods ahead, i.e., the concentration of this pollutant in the years 2023 and 2028 under each of the fitted models. A total of 809 observations of the concentration of HCFC-142b at one of the monitoring stations for gases implicated in the deterioration of the ozone layer were selected for the period studied and, using these data, the statistical software Excel was used to make scatter plots for each of the fitted models. The study showed that the logarithmic fit was the model that best fit the data set, since, besides having a significant R², its fitted curve was compatible with the natural trend of the phenomenon.
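The logarithmic fit and its R² can be sketched with ordinary least squares; the series below is synthetic, standing in for the 809 HCFC-142b observations, and the coefficients are illustrative.

```python
import numpy as np

# Logarithmic model y = a + b*ln(t), fitted by least squares on a
# synthetic stand-in series (not the actual HCFC-142b data).
rng = np.random.default_rng(2)
t = np.arange(1.0, 28.0)                       # e.g. years since 1992
y = 5.0 + 4.0 * np.log(t) + rng.normal(0.0, 0.2, t.size)

X = np.column_stack([np.ones_like(t), np.log(t)])
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = a + b * np.log(t)

ss_res = ((y - y_hat) ** 2).sum()              # residual sum of squares
ss_tot = ((y - y.mean()) ** 2).sum()           # total sum of squares
r2 = 1.0 - ss_res / ss_tot                     # coefficient of determination

forecast_5 = a + b * np.log(t[-1] + 5.0)       # extrapolation 5 periods ahead
```

The other candidate models (exponential, power, polynomial) fit the same way after the appropriate transformation of t or y, and their R² values can then be compared directly.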
Abstract: As a great physiographic divide, the Himalayas affect a large system of water and air circulation that helps determine climatic conditions in the Indian subcontinent to the south and the mid-Asian highlands to the north. The range acts as a barrier, blocking cold continental air from the north from entering India in winter, and forces the rain-bearing southwesterly monsoon to give up maximum precipitation in the area during the monsoon season. Nowadays, extreme weather events such as heavy precipitation, cloudbursts, flash floods, landslides, and extreme avalanches are regular occurrences in the North Western Himalayan (NWH) region. The present study was planned to identify suitable model(s) for characterizing the rainfall pattern over that region. For this investigation, selected models from the Coordinated Regional Climate Downscaling Experiment (CORDEX) and the Coupled Model Intercomparison Project Phase 5 (CMIP5) were utilized in a consistent framework for the period 1976 to 2000 (historical). The ability of these driving models from the CORDEX domain and CMIP5 was examined according to their capability to reproduce the spatial distribution and time series of rainfall over the NWH in the rainy season, and compared with the ground-based Indian Meteorological Department (IMD) gridded rainfall data set. The analysis shows that models such as MIROC5 and MPI-ESM-LR, from both CORDEX and CMIP5, provide the best spatial distribution of rainfall over the NWH region. However, the driving models from CORDEX underestimate the daily rainfall amount compared to the CMIP5 driving models, as they are unable to capture daily rainfall properly when plotted as individual time series (TS) for the states of Uttarakhand (UK) and Himachal Pradesh (HP). It can therefore be concluded that the CMIP5 driving models are better than the CORDEX domain models for investigating the rainfall pattern over the NWH region.
Abstract: Tourism is the most viable and sustainable economic development option for Georgia and one of the main sources of foreign exchange earnings. Events are considered one of the most effective ways to attract foreign visitors to the country, and, recently, the government of Georgia has begun investing in this sector very actively. This article stresses the necessity of research-based economic policy in the tourism sector. In this regard, it is of paramount importance to measure the economic effects of events that are subsidized by taxpayers' money. The economic effect of events can be analyzed from two perspectives: the financial perspective of the government and the economic-effects perspective of the tourism administration. The article emphasizes the more realistic and all-inclusive focus of the tourism administration's economic effect analysis, as it concentrates on the income of residents and local businesses, part of which generates tax revenues for the government. The public would like to know what the economic returns to investment are. The methodology used in this article to describe the economic effects of the UEFA Super Cup held in Tbilisi will help to answer this question. The methodology is based on three main principles and covers three stages. Using the suggested methodology, the article estimates the direct economic effect of the UEFA Super Cup on the Georgian economy. Although the attempt to analyze the economic effect of the event in Georgia was successful, some obstacles and insufficiencies were identified during the survey. The article offers several recommendations that will help to refine the methodology and improve the accuracy of the data. Furthermore, it is very important to establish a correct standard for measuring events in Georgia. In this case, non-ethical measurement practices, which are widely utilized by different research companies, will not trigger others to report overestimated effects. It is worth mentioning that, to the author's best knowledge, this is the first attempt to measure the economic effect of an event held in Georgia.
Abstract: Associations between life events and various forms of cancer have been identified. The purpose of a recent random-effects meta-analysis was to identify studies that examined the association between breast cancer risk and adverse events involving changes to financial status, including decreased income. The same association was examined in four separate studies that differed in characteristics such as study design, location, and time frame. It was of interest to pool information from the various studies to help identify characteristics that differentiated study results. Two random-effects Bayesian meta-analysis models are proposed to combine the reported estimates of the described studies. The proposed models allow major sources of variation to be taken into account, including study-level characteristics and between-study and within-study variance, and illustrate the ease with which uncertainty can be incorporated using a hierarchical Bayesian modelling approach.
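A minimal sketch of the normal-normal hierarchical random-effects model underlying such a meta-analysis, fitted here with a simple Gibbs sampler; the priors and the four study estimates below are illustrative assumptions of this sketch, not the paper's models or data.

```python
import numpy as np

def gibbs_meta(y, s2, n_draws=4000, seed=6):
    """Gibbs sampler for the normal-normal hierarchical (random-effects)
    meta-analysis model: y_i ~ N(theta_i, s2_i), theta_i ~ N(mu, tau2),
    with a flat prior on mu and an Inv-Gamma(1, 1) prior on tau2
    (both priors are assumptions of this sketch)."""
    rng = np.random.default_rng(seed)
    y, s2 = np.asarray(y, float), np.asarray(s2, float)
    k = len(y)
    mu, tau2 = y.mean(), y.var() + 0.1
    mus = np.empty(n_draws)
    for d in range(n_draws):
        prec = 1.0 / s2 + 1.0 / tau2                       # theta_i | rest
        theta = rng.normal((y / s2 + mu / tau2) / prec, np.sqrt(1.0 / prec))
        mu = rng.normal(theta.mean(), np.sqrt(tau2 / k))   # mu | theta, tau2
        ss = ((theta - mu) ** 2).sum()
        tau2 = 1.0 / rng.gamma(1.0 + k / 2.0, 1.0 / (1.0 + 0.5 * ss))
        mus[d] = mu
    return mus[n_draws // 2 :]                             # drop burn-in

# four studies' effect estimates (e.g. log relative risks) and variances
draws = gibbs_meta([0.18, 0.10, 0.35, 0.05], [0.02, 0.03, 0.05, 0.01])
pooled = draws.mean()
```

Between-study variance (tau2) and within-study variance (s2) enter separately, which is exactly the decomposition of uncertainty the hierarchical approach makes explicit.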
Abstract: The Wiener and Lévy driving processes are well-known
stand-alone Gaussian-Markov processes for fitting the nonlinear
dynamical Vasicek model. In this paper, a coincident Gaussian
density stationarity condition and autocorrelation function of the
two driving processes were established. This led to the conflation
of the Wiener and Lévy processes in order to investigate the efficiency
of the estimates incorporated into the one-dimensional Vasicek model,
which was estimated via the Maximum Likelihood (ML) technique.
The conditional laws of the drift, diffusion, and stationarity processes
were ascertained for the individual Wiener and Lévy processes, as
well as for the combination of the two processes, for a fixed-effect
and autoregressive-like Vasicek model applied to a financial
series: the Naira-CFA Franc exchange rate. In addition, the model
performance error of the merged driving process was small
compared to that of the stand-alone Wiener and Lévy driving processes.
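For the Wiener-driven case, ML estimation of the Vasicek model reduces to OLS on its exact AR(1) discretization; a sketch on simulated data with known parameters (the Naira-CFA Franc series itself is not reproduced here):

```python
import numpy as np

def fit_vasicek(x, dt):
    """ML fit of the Vasicek model dX = kappa*(theta - X)dt + sigma dW
    via its exact AR(1) discretization (conditional Gaussian likelihood)."""
    x0, x1 = x[:-1], x[1:]
    # OLS on x1 = a + b*x0 maximizes the conditional Gaussian likelihood
    b, a = np.polyfit(x0, x1, 1)
    kappa = -np.log(b) / dt
    theta = a / (1.0 - b)
    resid = x1 - (a + b * x0)
    sigma2 = resid.var() * 2.0 * kappa / (1.0 - b ** 2)
    return kappa, theta, np.sqrt(sigma2)

# simulate a path with known parameters and recover them
rng = np.random.default_rng(3)
kappa_true, theta_true, sigma_true, dt = 2.0, 1.0, 0.3, 1.0 / 252
x = np.empty(20_000)
x[0] = theta_true
e = np.exp(-kappa_true * dt)
sd = sigma_true * np.sqrt((1.0 - e ** 2) / (2.0 * kappa_true))
for i in range(1, x.size):
    x[i] = theta_true + (x[i - 1] - theta_true) * e + sd * rng.normal()
kappa_hat, theta_hat, sigma_hat = fit_vasicek(x, dt)
```

The same regression form applies to any Gaussian driving noise; non-Gaussian (Lévy) increments would change the likelihood, not the discretized mean structure.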
Abstract: Computer-based optimization techniques can be employed to improve the efficiency of energy conversion processes, including reducing the aerodynamic loss in a thermal power plant turbomachine. In this paper, towards mitigating secondary flow losses, a design optimization workflow is implemented for the casing geometry of a 1.5-stage axial flow turbine that improves the turbine's isentropic efficiency. The improved turbine is used in an open thermodynamic gas cycle with regeneration and cogeneration. Performance estimates are obtained with the commercial software Cycle-Tempo. Design and off-design conditions are considered, as well as variations in inlet air temperature. Reductions in both the natural gas specific fuel consumption and CO2 emissions are predicted for the gas turbine cycle fitted with the new casing design. These gains are attractive for enhancing the competitiveness and reducing the environmental impact of thermal power plants.
Abstract: Modeling sediment transport processes by means of numerical approaches often poses severe challenges. Accordingly, a number of techniques have been suggested to solve the flow and sediment equations in decoupled, semi-coupled, or fully coupled forms. Furthermore, in order to capture flow discontinuities, techniques such as artificial viscosity and shock fitting have been proposed for solving these equations, most of which require careful calibration. In this research, a numerical scheme for solving the shallow water and Exner equations in fully coupled form is presented. The First-Order Centered scheme is applied to produce the required numerical fluxes, and the reconstruction process is carried out using the Monotonic Upstream Scheme for Conservation Laws (MUSCL) to achieve a high-order scheme. In order to satisfy the C-property of the scheme in the presence of bed topography, the Surface Gradient Method is proposed. Combining the presented scheme with a fourth-order Runge-Kutta algorithm for time integration yields a competent numerical scheme. In addition, to handle non-prismatic channel problems, the Cartesian Cut Cell Method is employed. A trained Multi-Layer Perceptron Artificial Neural Network of Feed Forward Back Propagation (FFBP) type estimates the sediment flow discharge in the model, rather than the usual empirical formulas. The hydrodynamic part of the model is tested to show its capability in simulating flow discontinuities, transcritical flows, wetting/drying conditions, and non-prismatic channel flows. To this end, dam-break flow onto a locally non-prismatic converging-diverging channel with initially dry bed conditions is modeled. The morphodynamic part of the model is verified by simulating dam break on a dry movable bed and bed level variations at an alluvial junction. The results show that the model is capable of capturing flow discontinuities, solving wetting/drying problems even in non-prismatic channels, and producing proper results for movable bed situations. It can also be deduced that applying the Artificial Neural Network, instead of common empirical formulas, for estimating sediment flow discharge leads to more accurate results.
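The First-Order Centered (FORCE) flux named above averages the Lax-Friedrichs and Lax-Wendroff fluxes; a scalar 1D sketch on Burgers' equation (the paper applies it to the coupled shallow water-Exner system, not this toy problem):

```python
import numpy as np

def force_flux(uL, uR, f, dx, dt):
    """First-Order Centered (FORCE) flux for a 1D scalar conservation law:
    the average of the Lax-Friedrichs and Lax-Wendroff fluxes."""
    f_lf = 0.5 * (f(uL) + f(uR)) - 0.5 * (dx / dt) * (uR - uL)
    u_lw = 0.5 * (uL + uR) - 0.5 * (dt / dx) * (f(uR) - f(uL))
    return 0.5 * (f_lf + f(u_lw))

# one conservative update of Burgers' equation on a periodic grid
f = lambda u: 0.5 * u ** 2
dx, dt = 0.01, 0.002                       # CFL-safe step for this state
x = np.arange(0.0, 1.0, dx)
u = 1.5 + np.sin(2.0 * np.pi * x)
flux = force_flux(u, np.roll(u, -1), f, dx, dt)   # flux at interface i+1/2
u_new = u - (dt / dx) * (flux - np.roll(flux, 1))
```

MUSCL reconstruction would replace the plain cell averages uL, uR at each interface with limited slopes, raising the scheme to second order while keeping the same flux function.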
Abstract: The article presents the development trends of farms, estimates of the optimal scale of farming, and the experience of local and foreign countries in this area. In addition, the advantages of small and large farms are discussed, and farm scales are compared with the local reality. The study analyzes the results of farm operations and the possibilities for diversification of farms. Indicators of the effective use of land resources and of land fragmentation are measured, and a comparative analysis with other countries is presented, in particular of the extent of agricultural land used for farming, as well as indicators of provision for the population. The conducted research shows that most of the farms in Georgia are small and their development is at an initial stage, which indicates that the country has a high resource potential to increase the scale of the farming industry and fully integrate it into market relations. On the basis of the obtained results of the research on the scale of farming in Georgia and the identification of factors hampering farming development, conclusions are presented and relevant recommendations are suggested.
Abstract: Before designing an electrical system, an estimate of the load is necessary for unit sizing and demand-generation balancing. The system could be a stand-alone system for a village, a grid-connected system, or renewable energy integrated into a grid connection, especially as there are non-electrified villages in developing countries. In the classical model, the energy demand is found by enumerating the household appliances and multiplying their ratings by the duration of their operation; in this paper, however, information that exists for electrified villages is used to predict the demand, as villages have largely the same lifestyle. This paper describes a method that uses an Artificial Neural Network (ANN) to predict the average energy consumed over each two-month period by every consumer living in a village. The input data were collected through a regional survey of samples of consumers representing typical living conditions, household appliances, and energy consumption; the output data were collected from the administration office of Piramagrun for each corresponding consumer. The results of this study show that the average demand of consumers from four villages across the months of the year is approximately 12 kWh/day, and the model estimates the average daily demand for every consumer with a mean absolute percent error of 11.8%. The MathWorks software package MATLAB version 7.6.0, which includes the Neural Network Toolbox, was used.
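The reported accuracy measure, mean absolute percent error, can be sketched directly; the demand values below are illustrative placeholders, not the survey data.

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percent error, the accuracy measure reported
    for the ANN demand model."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# e.g. observed vs. modelled kWh/day demand for a few consumers
observed = np.array([12.0, 10.5, 14.2, 11.8])
modelled = np.array([11.0, 11.2, 13.0, 12.5])
err = mape(observed, modelled)
```

Note that MAPE is undefined for zero actual demand, so consumers with no recorded consumption would need to be excluded or handled separately.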
Abstract: The Higgs boson was discovered by the ATLAS
and CMS experimental groups in 2012 at the Large Hadron
Collider (LHC). Production and decay properties of the Higgs
boson, Standard Model (SM) couplings, and limits on effective
scale of the Higgs boson’s couplings with other bosons are
investigated at particle colliders. Deviations from SM estimates are
parametrized by effective Lagrangian terms to investigate Higgs
couplings. This is a model-independent method for describing
new physics. In this study, the sensitivity to neutral gauge boson
anomalous couplings with the Higgs boson is investigated using
the parameters of the Large Hadron electron Collider (LHeC)
and the Future Circular electron-hadron Collider (FCC-eh) with
a model-independent approach. By using MadGraph5_aMC@NLO
multi-purpose event generator with the parameters of LHeC and
FCC-eh, the bounds on the anomalous Hγγ, HγZ and HZZ couplings
in the e−p → e−qH process are obtained. Detector simulations are
also taken into account in the calculations.
Abstract: A mathematical model and a numerical method for computing the temperature field of the profile part of convectionally cooled blades are developed. The theoretical substantiation of the method is proved by corresponding theorems. To this end, convergent quadrature processes were developed and error estimates were obtained in terms of the Zygmund continuity moduli. The boundary conditions for heat exchange are determined from the solution of the corresponding integral equations and empirical relations. The reliability of the developed methods is confirmed by calculation and experimental studies of the thermohydraulic characteristics of the nozzle apparatus of the first stage of the gas turbine.
Abstract: Dependencies between the diverse factors involved in probabilistic seismic loss evaluation are recognized to be an important issue in acquiring accurate loss estimates. Dependencies among component damage costs can be taken into account by considering two distinct limiting states, independent or perfectly dependent, for the component damage states; however, to the best of our knowledge, there is no available procedure for taking loss dependencies into account at the story level. This paper presents a method called the "modal cost superposition method" for decoupling story damage costs under earthquake ground motions, using closed-form differential equations between damage cost and engineering demand parameters, which are solved as a coupled system over all stories' cost equations by means of the introduced "substituted matrices of mass and stiffness". Costs are treated as probabilistic variables with definite statistical parameters, median and standard deviation, and a presumed probability distribution. To supplement the proposed procedure, and to display the straightforwardness of its application, a benchmark study has been conducted. Acceptable compatibility is demonstrated between the damage costs estimated by the newly proposed modal approach and by frequently used stochastic approaches for the entire building; at the story level, however, the insufficiency of employing a modification factor to incorporate occurrence probability dependencies between stories is revealed, owing to discrepant amounts of dependency between the damage costs of different stories. A larger dependency contribution to the occurrence probability of loss can also be concluded from the greater compatibility of loss results in higher stories than in lower ones, whereas reducing the number of cost modes incorporated still provides an acceptable level of accuracy and avoids the time-consuming calculations that a large number of cost modes would entail.
Abstract: Reinforced concrete bridge deck condition assessments primarily use visual inspection methods, in which an inspector looks for and records the locations of cracks, potholes, efflorescence, and other signs of probable deterioration. Sounding is another technique used to diagnose the condition of a bridge deck; in this method, the inspector listens for damage within the subsurface as the surface is struck with a hammer or chain. Even though extensive procedures are in place for these inspection techniques, neither one provides the inspector with a comprehensive understanding of the internal condition of a bridge deck, the location where damage originates. In order to make accurate estimates of repair locations and quantities, in addition to allocating the necessary funding, a complete understanding of the deck's deteriorated state is key. The research presented in this paper collected infrared thermography and ground penetrating radar data from reinforced concrete bridge decks without an asphalt overlay. These decks were of various ages, and their condition varied from brand new to in need of replacement. The goals of this work were first to verify that these nondestructive evaluation methods could identify similar areas of healthy and damaged concrete, and then to see whether combining the results of both methods would provide higher confidence than a condition assessment completed using only one method. The results from each method were presented as plan-view color contour plots. The results from one of the decks assessed as part of this research, including these plan-view plots, are presented in this paper. Furthermore, in response to the interest of transportation agencies throughout the United States, this research developed a step-by-step guide which demonstrates how to collect and assess bridge deck data using these nondestructive evaluation methods. This guide addresses setup procedures on the deck on the day of data collection, system setups and settings for different bridge decks, data post-processing for each method, and data visualization and quantification.
Abstract: In this paper, we propose a method to model the
relationship between failure time and degradation for a simple
step-stress test where the underlying degradation path is linear and different
causes of failure are possible. It is assumed that the intensity function
depends only on the degradation value. No assumptions are made
about the distribution of the failure times. A simple step-stress test
is used to shorten the failure time of products, and a tampered failure
rate (TFR) model is proposed to describe the effect of the changing
stress on the intensities. We assume that some of the products that
fail during the test have a cause of failure that is only known to
belong to a certain subset of all possible failures. This case is known
as masking. In the presence of masking, the maximum likelihood
estimates (MLEs) of the model parameters are obtained through an
expectation-maximization (EM) algorithm by treating the causes of
failure as missing values. The effect of incomplete information on the
estimation of the parameters is studied through a Monte Carlo simulation.
Finally, a real example is analyzed to illustrate the application of the
proposed methods.
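The EM treatment of masked causes can be illustrated in a deliberately simplified setting, two exponential competing risks with constant intensities rather than the paper's TFR/degradation model: the E-step distributes each masked failure across causes in proportion to the current intensities, and the M-step re-estimates the rates from the expected counts.

```python
import numpy as np

def em_masked_exponential(times, causes, n_iter=200):
    """EM for two exponential competing risks where some causes are
    masked (cause is None): masked causes are treated as missing data.
    E-step: split each masked failure across causes in proportion to the
    current rates; M-step: rate_j = expected count_j / total exposure."""
    total_time = float(np.sum(times))
    lam = np.array([1.0, 1.0])                  # initial rate guesses
    for _ in range(n_iter):
        counts = np.zeros(2)
        for c in causes:
            if c is None:
                counts += lam / lam.sum()       # posterior cause probabilities
            else:
                counts[c] += 1.0
        lam = counts / total_time
    return lam

# simulate failures with true rates (0.5, 1.5) and mask ~30% of the causes
rng = np.random.default_rng(4)
lam_true = np.array([0.5, 1.5])
n = 2000
t = rng.exponential(1.0 / lam_true.sum(), size=n)
c = rng.choice(2, size=n, p=lam_true / lam_true.sum())
causes = [None if rng.uniform() < 0.3 else int(ci) for ci in c]
lam_hat = em_masked_exponential(t, causes)
```

In the paper's setting, the E-step posterior would additionally condition on the degradation value and the stress level at failure; the missing-data structure of the algorithm is the same.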
Abstract: The increase of capital mobility across emerging economies has become an interesting topic for many economic policy makers. The current study tests the validity of the Feldstein-Horioka puzzle for the five BRICS countries. The sample period of the study runs from 2001 to 2014. The study uses two well-known estimators: Fully Modified OLS (FMOLS) and Dynamic OLS (DOLS). The results of the study show that investment and savings are cointegrated in the long run. The parameters estimated using FMOLS and DOLS are 0.85 and 0.74, respectively. These results imply that policy makers within the BRICS countries have to consider flexible monetary and fiscal policy instruments to influence the mobility of capital within the bloc.
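The DOLS estimator augments the cointegrating regression with leads and lags of the differenced regressor; a generic sketch on synthetic data (not the BRICS panel, whose estimation also involves panel pooling):

```python
import numpy as np

def dols(y, x, p=2):
    """Dynamic OLS (Stock-Watson): regress y_t on a constant, x_t, and
    p leads and p lags of delta-x_t; returns the long-run slope."""
    dx = np.diff(x)                     # dx[i] = x[i+1] - x[i]
    rows, ys = [], []
    for t in range(p + 1, len(y) - p):
        window = dx[t - p - 1 : t + p]  # differences from t-p through t+p
        rows.append(np.concatenate(([1.0, x[t]], window)))
        ys.append(y[t])
    beta, *_ = np.linalg.lstsq(np.array(rows), np.array(ys), rcond=None)
    return beta[1]                      # coefficient on x_t

# synthetic cointegrated pair: x a random walk, y = 0.8*x + stationary noise
rng = np.random.default_rng(5)
x = np.cumsum(rng.normal(size=500))
y = 0.8 * x + rng.normal(size=500)
saving_retention = dols(y, x, p=2)
```

In the Feldstein-Horioka setting, y and x would be the investment and saving ratios, and the estimated slope is the saving-retention coefficient whose closeness to one indicates low capital mobility.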