The Competitive Newsvendor Game with Overestimated Demand

The traditional competitive newsvendor game assumes that decision makers are rational. However, people exhibit behavioral biases when making decisions, such as loss aversion, mental accounting, and overconfidence. Overestimation of one's own performance is one type of overconfidence. The objective of this research is to analyze the impact of overestimated demand in the two-player competitive newsvendor game. This study builds a competitive newsvendor game model in which each newsvendor has private, overestimated information about its own demand. At the same time, third-party forecasts of each newsvendor's demand are available. This research shows that overestimation leads to a demand-stealing effect, which reduces the competitor's order quantity. However, the overall supply of the product increases due to overestimation. This study establishes the boundary condition under which an overestimating newsvendor's equilibrium order drops due to the demand-stealing effect from the other newsvendor. A newsvendor with a higher critical fractile will see its equilibrium order decrease as the other newsvendor's estimation level drops.
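
As a point of reference for how overestimation shifts orders, here is a minimal sketch of the single-firm critical-fractile rule that underlies the game, with normally distributed demand; the prices, costs, and the inflated demand mean are illustrative assumptions, not values from the paper.

```python
from scipy.stats import norm

# Newsvendor critical fractile: q* = F^{-1}(cu / (cu + co)).
def newsvendor_order(mu, sigma, price, cost, salvage=0.0):
    cu = price - cost          # underage cost: margin lost per unmet unit
    co = cost - salvage        # overage cost: loss per unsold unit
    return norm.ppf(cu / (cu + co), loc=mu, scale=sigma)

q_rational = newsvendor_order(mu=100, sigma=20, price=10, cost=6)
q_overest  = newsvendor_order(mu=120, sigma=20, price=10, cost=6)  # inflated mean
print(q_rational, q_overest)   # the overestimating player orders more
```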

Foil Bearing Stiffness Estimation with Pseudospectral Scheme

Compliant foil gas-lubricated bearings are used to support light loads, on the order of a few kilograms, at high speeds, on the order of 50,000 RPM. The stiffness of a foil bearing depends both on the stiffness of the compliant foil and on the lubricating gas film. Bearing stiffness plays a crucial role in the stable operation of the supported rotor over a range of speeds. This paper describes a numerical approach to estimating the stiffness of these bearings using a pseudospectral scheme. A methodology for obtaining the stiffness of the foil bearing as a function of shaft weight is given, and the results are presented.
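
The abstract does not spell out the discretization, but a standard building block for a pseudospectral solution of the lubrication (Reynolds) equation is the Chebyshev differentiation matrix; the sketch below follows the classical construction (Trefethen) and is offered as an illustration, not the authors' implementation.

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and nodes x on [-1, 1]."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)            # Chebyshev points
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))     # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                         # diagonal via row sums
    return D, x

# Derivatives of a grid function u are then spectrally accurate: du = D @ u.
```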

Effect of Soil Corrosion in Failures of Buried Gas Pipelines

In this paper, a brief review of the corrosion mechanisms in buried pipes and the modes of failure is provided, together with the available corrosion models. Moreover, a sensitivity analysis is performed to understand the influence of corrosion model parameters on the remaining-life estimate. Further, a probabilistic analysis is performed to propagate corrosion-model uncertainty into the estimate of the remaining life of the pipe. Finally, a comparison among the corrosion models on the basis of remaining-life estimation is provided to improve the renewal plan.
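
To make the uncertainty-propagation step concrete, here is a hedged Monte Carlo sketch using a hypothetical power-law pit-growth model d(t) = k·t^n, a common form in the soil-corrosion literature; the parameter distributions, wall thickness, and 80% wall-loss failure criterion are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

wall = 8.0                                                    # wall thickness, mm
k = rng.lognormal(mean=np.log(0.3), sigma=0.2, size=100_000)  # mm / yr^n
n = rng.normal(loc=0.6, scale=0.05, size=100_000)

# Remaining life: time until pit depth d(t) = k * t**n reaches 80% of wall.
t_fail = (0.8 * wall / k) ** (1.0 / n)
print(np.percentile(t_fail, [5, 50, 95]))                     # life band, years
```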

Image Features Comparison-Based Position Estimation Method Using a Camera Sensor

In this paper, we propose a method that estimates a user's position from a database built with a single camera. Previous positioning methods calculate distance from the arrival time of signals, as in GPS (Global Positioning System) or RF (Radio Frequency) systems. However, these methods have a weakness: large error ranges caused by signal interference. Our solution estimates position with a camera sensor. However, a single camera has difficulty obtaining relative position data, and a stereo camera has difficulty providing real-time position data because of the large amount of image data involved. First, we build an image database, using a single camera, of the space in which the positioning service is to be provided. Next, we judge similarity by matching the image transmitted by the user against the database images. Finally, we determine the user's position from the position of the most similar database image. To verify the proposed method, we experimented in real indoor and outdoor environments. The proposed method covers a wide positioning range and can determine not only the user's position but also the user's direction.
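
The abstract does not name the matching algorithm, so the sketch below uses ORB features from OpenCV as one plausible realization of the "match against the database, return the tagged position" step; the database layout and the match-count similarity rule are assumptions for illustration.

```python
import cv2

orb = cv2.ORB_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def similarity(img_a, img_b):
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0
    return len(bf.match(des_a, des_b))      # more matches => more similar

def estimate_position(user_img, database):
    # database: list of (image, position_tag) pairs built offline
    best = max(database, key=lambda entry: similarity(user_img, entry[0]))
    return best[1]                          # position of most similar image
```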

An Approach to Noise Variance Estimation in Very Low Signal-to-Noise Ratio Stochastic Signals

This paper describes a method for AWGN (Additive White Gaussian Noise) variance estimation in noisy stochastic signals, referred to as Multiplicative-Noising Variance Estimation (MNVE). The aim was to develop an estimation algorithm with a minimal number of assumptions about the original signal structure. MATLAB simulations and analysis of the method applied to speech signals showed higher accuracy than a standard autoregressive (AR) modeling noise estimation technique. In addition, strong performance was observed at very low signal-to-noise ratios, which in general represent the worst-case scenario for signal denoising methods. High execution time appears to be the only disadvantage of MNVE. After close examination of the observed features of the proposed algorithm, we conclude that it is worth exploring and that, with further adjustments and improvements, it can become considerably more powerful.
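
MNVE itself is paper-specific, but the AR baseline it is compared against can be sketched: fit a least-squares AR(p) model to the noisy signal and read the residual variance as an estimate of the noise floor, on the assumption that the AR part captures the correlated speech component. The order p and the data are illustrative choices.

```python
import numpy as np

def ar_residual_variance(x, p=16):
    x = np.asarray(x, dtype=float)
    # Column i holds x_{t-i-1}, so y_t is regressed on its p past values.
    X = np.column_stack([x[p - i - 1 : len(x) - i - 1] for i in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ a)                 # residual variance ~ noise floor
```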

Long Term Examination of the Profitability Estimation Focused on Benefits

Strategic investment decisions are characterized by high innovation potential and long-term effects on the competitiveness of enterprises. Due to the uncertainty and risk involved in this complex decision-making process, the need arises for well-structured support activities. One method that considers both cost and long-term added value is cost-benefit effectiveness estimation; an instance of this is the "Profitability Estimation Focused on Benefits" (PEFB) method developed at the Institute of Management Cybernetics at RWTH Aachen University. The method copes with the challenges associated with strategic investment decisions by integrating long-term non-monetary aspects whilst also mapping the chronological sequence of an investment within the organization's target system. Thus, this method is characterized as a holistic approach to evaluating the costs and benefits of an investment. This participation-oriented method has been applied in many workshops in business environments. The results of the workshops are a library of more than 96 cost aspects, as well as 122 benefit aspects. These aspects are preprocessed and comparatively analyzed with regard to their alignment to a series of risk levels. For the first time, an accumulation and a distribution of cost and benefit aspects regarding their impact and probability of occurrence are given. The results give evidence that the PEFB method combines precise measures of financial accounting with the incorporation of benefits. Finally, the results constitute the basis for using information technology and data science for decision support when applying the PEFB method.

Link Availability Estimation for Modified AOMDV Protocol

Routing in ad hoc networks is a challenge, as nodes are mobile and links are constantly created and broken. Present on-demand ad hoc routing algorithms initiate route discovery only after a path breaks, incurring significant cost to detect the disconnection and establish a new route. In the proposed approach, when a path is about to break, the source is warned of the likelihood of a disconnection. The source then initiates path discovery early, avoiding disconnection entirely. A path is considered about to break when its link availability decreases. This study modifies Ad hoc On-demand Multipath Distance Vector routing (AOMDV) so that route handoff occurs through link availability estimation.
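
One common way to estimate link availability in MANET studies, shown below as a hedged sketch rather than the paper's exact estimator, is the geometric link-lifetime prediction: given both nodes' positions and velocities and a shared radio range R, solve |p_rel + v_rel·t| = R for the time until the link breaks.

```python
import numpy as np

def link_lifetime(p1, v1, p2, v2, radio_range):
    p = np.asarray(p2, float) - np.asarray(p1, float)   # relative position
    v = np.asarray(v2, float) - np.asarray(v1, float)   # relative velocity
    a = v @ v
    if a == 0.0:                                        # no relative motion
        return np.inf if p @ p <= radio_range**2 else 0.0
    b = 2.0 * (p @ v)
    c = p @ p - radio_range**2
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return 0.0                                      # never within range
    return max((-b + np.sqrt(disc)) / (2.0 * a), 0.0)   # time until |.| = R
```

A predicted lifetime below a handoff threshold would trigger the early warning to the source.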

A Comparative Study of Image Segmentation Algorithms

In applications such as image recognition and compression, segmentation refers to the process of partitioning a digital image into multiple segments. It is typically used to locate objects and boundaries (lines, curves, etc.) in images, classifying or clustering an image into several parts (regions) according to image features such as pixel values or frequency response. More precisely, image segmentation assigns a label to every pixel in an image such that pixels with the same label share certain visual characteristics. The result is a set of segments that collectively cover the entire image, or a set of contours extracted from the image. Many image segmentation algorithms have been proposed for segmenting an image before recognition or compression, and they are extensively applied in science and daily life. According to their segmentation method, they can be roughly categorized into region-based segmentation, data clustering, and edge-based segmentation. In this paper, we present a study of several popular image segmentation algorithms.
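
As a minimal example of the data-clustering category, the sketch below segments an image by k-means on pixel colors with OpenCV; the file name and the choice of k = 4 clusters are arbitrary illustrations.

```python
import numpy as np
import cv2

img = cv2.imread("input.png")                       # placeholder file name
pixels = img.reshape(-1, 3).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pixels, 4, None, criteria, 5,
                                cv2.KMEANS_RANDOM_CENTERS)
segmented = centers[labels.flatten()].reshape(img.shape).astype(np.uint8)
cv2.imwrite("segments.png", segmented)              # each region takes its center color
```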

Two New Relative Efficiencies of Linear Weighted Regression

In statistical parameter estimation theory, there are usually two kinds of estimators: the least-squares estimator (LSE) and the best linear unbiased estimator (BLUE). By the determining theorem for the minimum variance unbiased estimator (MVUE), the BLUE is the ideal parameter estimator in the linear model. But because the calculations are complicated, or the covariance matrix is not given, the solution is often hard to obtain. Therefore, practitioners prefer LSE to BLUE, and this substitution incurs some loss. To quantify this loss, many scholars have proposed various relative efficiencies from different viewpoints. For the linear weighted regression model, this paper discusses the relative efficiency of the LSE of β with respect to the BLUE of β. It also defines two new relative efficiencies and gives their lower bounds.
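
For reference, the two estimators being compared are, for the model y = Xβ + ε with Cov(ε) = σ²Σ (the paper's two new efficiencies are not reproduced here; the ratio below is one classical definition):

```latex
\hat{\beta}_{\mathrm{LSE}}  = (X^{\top}X)^{-1}X^{\top}y, \qquad
\hat{\beta}_{\mathrm{BLUE}} = (X^{\top}\Sigma^{-1}X)^{-1}X^{\top}\Sigma^{-1}y,
\qquad
e(\hat{\beta}_{\mathrm{LSE}}) =
  \frac{\bigl|\operatorname{Cov}(\hat{\beta}_{\mathrm{BLUE}})\bigr|}
       {\bigl|\operatorname{Cov}(\hat{\beta}_{\mathrm{LSE}})\bigr|}.
```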

Fuzzy Based Visual Texture Feature for Psoriasis Image Analysis

This paper proposes a rotation-invariant texture feature, based on the roughness property of the image, for psoriasis image analysis. In this work, we apply the feature to image classification and segmentation. A fuzzy concept is employed to overcome the imprecision of roughness. Since a psoriasis lesion can be modeled as a rough surface, the feature is extended to calculate the Psoriasis Area and Severity Index (PASI) value. For classification and segmentation, the nearest-neighbor algorithm is applied. We obtain promising results in identifying affected lesions using the roughness index and in estimating severity levels.

Human Motion Capture: New Innovations in the Field of Computer Vision

Human motion capture has become one of the major areas of interest in the field of computer vision. Rapidly evolving application areas include advanced human interfaces, virtual reality, and security/surveillance systems. This study provides a brief overview of the techniques and applications of markerless human motion capture, which analyzes human motion through mathematical formulations. The major contribution of this survey is that it classifies computer-vision-based human motion capture techniques within a taxonomy, breaking them down into four systematically different categories: tracking, initialization, pose estimation, and recognition. Detailed descriptions, and the relationships among the techniques, are given for tracking and pose estimation, and the subcategories of each process are further described. The various hypotheses used by researchers in this domain are surveyed, and the evolution of these techniques is explained. The survey concludes that most researchers have focused on using mathematical body models for markerless motion capture.

Wear Measuring and Wear Modelling Based On Archard, ASTM, and Neural Network Models

Wear measurement and wear modelling are fundamental issues in industry, mainly in relation to economy and safety; hence there is a need to study wear measurement and wear estimation. The pin-on-disc test is the most common test used to study wear behaviour. In this paper, a pin-on-disc rig (AEROTECH UNIDEX 11) is used to investigate the effects of normal load and material hardness on wear under dry sliding conditions. In the rig, two specimens were used: a steel pin with a tip, positioned perpendicular to an aluminium disc. The pin wear and disc wear were measured using three instruments: a Talysurf profilometer, a digital microscope, and an Alicona instrument. The Talysurf profilometer was used to measure the pin/disc wear-scar depth, the digital microscope was used to measure the diameter and width of the wear scar, and the Alicona was used to measure the pin wear and disc wear. The Archard model, the American Society for Testing and Materials (ASTM) model, and a neural network model were then used for pin/disc wear modelling, with simulations implemented in MATLAB. This paper focuses on how the Alicona can be used for wear measurement and how a neural network can be used for wear estimation.
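
The Archard model referred to above is the classical linear wear law V = K·W·L/H; the sketch below simply evaluates it with illustrative values (the wear coefficient, load, sliding distance, and hardness are not the paper's measured data).

```python
def archard_wear_volume(K, load_N, sliding_m, hardness_Pa):
    # V = K * W * L / H : worn volume grows with load and sliding distance,
    # and shrinks with the hardness of the softer (here aluminium) surface.
    return K * load_N * sliding_m / hardness_Pa        # volume in m^3

V = archard_wear_volume(K=1e-4, load_N=20.0, sliding_m=500.0,
                        hardness_Pa=0.30e9)            # illustrative values
print(f"{V * 1e9:.2f} mm^3")                           # ~3.33 mm^3
```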

Spike Sorting Method Using Exponential Autoregressive Modeling of Action Potentials

Neurons in the nervous system communicate with each other by producing electrical signals called spikes. To investigate the physiological function of the nervous system, it is essential to study the activity of neurons by detecting and sorting the spikes in recorded signals. In this paper, a method is proposed for the spike sorting problem based on nonlinear modeling of spikes using an exponential autoregressive (ExpAR) model. A genetic algorithm is utilized for model parameter estimation, and selected model coefficients are used as features for sorting. For optimal selection of the model coefficients, a self-organizing feature map is used. The results show that modeling spikes with the nonlinear autoregressive model outperforms its linear counterpart. The features extracted from the ExpAR coefficients also outperform wavelet-based features, yielding more compact and better-separated clusters. For spikes that differ only in small-scale structure, where principal component analysis fails to produce separated clouds in the feature space, the proposed method obtains well-separated clusters, removing the need for complex classifiers.
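
The ExpAR model has the form x_t = Σᵢ (φᵢ + πᵢ·exp(−γ·x²_{t−1}))·x_{t−i} + e_t; the sketch below simulates it with arbitrary coefficients as an illustration (in the paper the coefficients are fitted by a genetic algorithm, and a subset chosen via the self-organizing map serves as sorting features).

```python
import numpy as np

def expar_simulate(phi, pi_, gamma, n, noise_std=0.01, seed=0):
    rng = np.random.default_rng(seed)
    p = len(phi)
    x = np.zeros(n)
    x[:p] = rng.normal(0.0, noise_std, p)
    for t in range(p, n):
        lags = x[t - p : t][::-1]                      # x_{t-1}, ..., x_{t-p}
        coeffs = phi + pi_ * np.exp(-gamma * x[t - 1] ** 2)
        x[t] = coeffs @ lags + rng.normal(0.0, noise_std)
    return x

waveform = expar_simulate(np.array([1.2, -0.5]), np.array([-0.6, 0.3]),
                          gamma=10.0, n=64)            # toy spike-like trace
```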

Towards an Intelligent Ontology Construction Cost Estimation System: Using BIM and New Rules of Measurement Techniques

Construction cost estimation is one of the most important aspects of construction project design. For generations, the process of cost estimating has been manual, time-consuming, and error-prone. This has partly led to cost estimates that are unclear and riddled with inaccuracies, at times resulting in over- or underestimation of construction cost. The development of standard sets of measurement rules, understandable by all those involved in a construction project, has not fully solved these challenges. Emerging Building Information Modelling (BIM) technologies can exploit standard measurement methods to automate the cost estimation process and improve accuracy. This requires standard measurement methods to be structured in an ontological, machine-readable format so that BIM software packages can easily read them. Most standard measurement methods are still text-based, published in textbooks, and require manual transfer into tables or spreadsheets during cost estimation. The aim of this study is to explore the development of an ontology based on the New Rules of Measurement (NRM) commonly used in the UK for cost estimation. The methodology adopted is Methontology, one of the most widely used ontology engineering methodologies. The challenges encountered in this exploratory study are also reported, and recommendations for future studies are proposed.

The Evaluation of the Performance of Different Filtering Approaches in Tracking Problems and the Effect of Noise Variance

The performance of different filtering approaches depends on the modeling of the dynamical system and on the algorithm structure. For modeling and smoothing the data, the evaluation of the posterior distribution in each filtering approach should be chosen carefully. In this paper, different filtering approaches, namely the Kalman filter, EKF, UKF, EKS, and the RTS smoother, are simulated on several trajectory-tracking paths, and the accuracy and limitations of these approaches are explained. The probability of the model under the different filters is then compared, and finally the effect of noise variance on estimation is described through simulation results.
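
For orientation, the baseline of the family compared here is the linear Kalman filter; below is a minimal predict-update step for a constant-velocity tracking model, with illustrative matrices and noise levels.

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])        # state transition (pos, vel)
H = np.array([[1.0, 0.0]])                   # only position is observed
Q = 1e-3 * np.eye(2)                         # process noise covariance
R = np.array([[0.25]])                       # measurement noise variance

def kalman_step(x, P, z):
    x = F @ x                                # predict state
    P = F @ P @ F.T + Q                      # predict covariance
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)                  # correct with measurement z
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

EKF/UKF replace F and H with linearized or sigma-point nonlinear models, and RTS/EKS add a backward smoothing pass over the stored filter outputs.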

Parameters Estimation of Multidimensional Possibility Distributions

We present a solution to the Maxmin u/E parameter estimation problem for possibility distributions in the m-dimensional case. Our method is based on a geometrical approach in which a minimal-area enclosing ellipsoid is constructed around the sample. We also demonstrate that Maxmin u/E parameter estimation can improve the results of well-known algorithms in the fuzzy model identification task.
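
The enclosing-ellipsoid step can be sketched with Khachiyan's classical algorithm for a minimum enclosing ellipsoid of sample points; this is a standard construction offered here as an illustration, not necessarily the authors' exact procedure.

```python
import numpy as np

def mvee(P, tol=1e-4):
    """Minimum enclosing ellipsoid (x-c)^T A (x-c) <= 1 of points P (d x n)."""
    d, n = P.shape
    Q = np.vstack([P, np.ones(n)])           # lift points to homogeneous form
    u = np.ones(n) / n
    err = tol + 1.0
    while err > tol:
        X = Q @ np.diag(u) @ Q.T
        M = np.einsum("ij,ji->i", Q.T @ np.linalg.inv(X), Q)
        j = np.argmax(M)                     # most "outlying" point
        step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step
        err = np.linalg.norm(new_u - u)
        u = new_u
    c = P @ u                                # ellipsoid center
    A = np.linalg.inv(P @ np.diag(u) @ P.T - np.outer(c, c)) / d
    return A, c
```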

Localization of Near Field Radio Controlled Unintended Emitting Sources

Locating radio-controlled (RC) devices using their unintended emissions is of great interest in view of security concerns. The weak nature of these emissions requires a near-field localization approach, since such signals are hard to detect in the far-field region of the array. Besides angle estimation, near-field localization also requires range estimation of the source, which makes it more complicated than far-field models. This paper analyzes the challenges of locating such devices in the near-field region and in a real-time environment. An ESPRIT-like near-field localization scheme is utilized for both angle and range estimation, using a 1-D search with symmetric subarrays. Two 7-element uniform linear antenna arrays (ULAs) are employed to locate the RC source. Location estimation results are given for one unintentionally emitting walkie-talkie at different positions.

A Study of Adaptive Fault Detection Method for GNSS Applications

This study develops an efficient fault detection method for Global Navigation Satellite System (GNSS) applications based on adaptive noise covariance estimation. Owing to their dependence on radio-frequency signals, GNSS measurements are dominated by systematic errors in the receiver's operating environment. In the proposed method, the pseudorange and carrier-phase measurement noise covariances are obtained at the time-propagation and measurement-update steps, respectively, of Carrier-Smoothed Code (CSC) filtering. The test statistics for fault detection are generated from the estimated measurement noise covariances. To evaluate the fault detection capability, intentional faults were added to field-collected measurements. The experimental results show that the proposed method is efficient in detecting unhealthy measurements and improves GNSS positioning accuracy in the presence of faults.
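
The CSC filtering stage referred to above can be sketched as the standard Hatch-filter recursion below; the window length is an illustrative choice, and the code-minus-carrier residuals it exposes are the kind of quantity from which noise covariances and test statistics can be formed.

```python
def hatch_filter(pseudoranges, carrier_phases, window=100):
    smoothed = [pseudoranges[0]]
    for k in range(1, len(pseudoranges)):
        n = min(k + 1, window)
        # Propagate the previous smoothed range with the carrier-phase delta,
        # then blend in the (noisier) raw pseudorange.
        predicted = smoothed[-1] + (carrier_phases[k] - carrier_phases[k - 1])
        smoothed.append(pseudoranges[k] / n + predicted * (n - 1) / n)
    return smoothed
```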

Studying the Effects of Economic and Financial Development as well as Institutional Quality on Environmental Destruction in the Upper-Middle Income Countries

The current study explores the effect of economic development, financial development, and institutional quality on environmental destruction in upper-middle-income countries over the period 1999-2011. The dependent variable is the logarithm of carbon dioxide emissions, which can be considered an index of environmental destruction or quality given its effects on the environment. Financial development and institutional quality variables, as well as several control variables, were included. To test for cross-sectional correlation among the countries under study, the Pesaran and Frees tests were used. Since the results of both tests show cross-sectional correlation among the countries, the seemingly unrelated regression (SUR) method was used for model estimation. The results show that the environmental Kuznets curve hypothesis is confirmed in upper-middle-income countries, and that financial development and institutional quality have a significant effect on environmental quality. These results can inform policy makers in countries across income groups seeking growth accompanied by improved environmental quality.
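
The environmental Kuznets curve test typically takes the quadratic form below (a hedged sketch; the paper's exact regressor set is not listed in the abstract), with the inverted-U confirmed when β₁ > 0 and β₂ < 0:

```latex
\ln \mathrm{CO2}_{it} = \beta_0 + \beta_1 \ln y_{it} + \beta_2 (\ln y_{it})^2
  + \beta_3\,\mathrm{FD}_{it} + \beta_4\,\mathrm{IQ}_{it}
  + \gamma^{\top} Z_{it} + \varepsilon_{it}
```

where y is income per capita, FD and IQ are the financial-development and institutional-quality measures, and Z collects the controls.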

Comparative Analysis of Two Approaches to Joint Signal Detection, ToA and AoA Estimation in Multi-Element Antenna Arrays

In this paper, two approaches to joint signal detection and time-of-arrival (ToA) and angle-of-arrival (AoA) estimation in a multi-element antenna array are investigated. Two scenarios are considered: first, when the waveform of the useful signal is known a priori, and second, when the waveform of the desired signal is unknown. For the first scenario, the antenna array signal processing is based on multi-element matched filtering (MF) followed by a non-coherent detection scheme and maximum likelihood (ML) parameter estimation blocks. For the second scenario, the signal processing is based on estimating the covariance matrix of the antenna array elements, followed by eigenvector analysis and ML parameter estimation blocks. The performance characteristics of both signal processing schemes are thoroughly investigated and compared for different useful-signal and noise parameters.
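
For the known-waveform scenario, the MF/ML chain reduces, for a single array element, to correlating against the template and picking the peak; the sketch below is a toy illustration with synthetic values (per-element delays across the array would then feed the AoA estimate).

```python
import numpy as np

def matched_filter_toa(received, template, fs):
    corr = np.correlate(received, template, mode="valid")
    return np.argmax(np.abs(corr)) / fs          # ToA in seconds

fs = 1e6
template = np.sin(2 * np.pi * 50e3 * np.arange(200) / fs)
rx = np.concatenate([np.zeros(300), template])   # signal arrives at sample 300
rx += 0.1 * np.random.default_rng(1).standard_normal(rx.size)
print(matched_filter_toa(rx, template, fs))      # ~ 300 / fs = 0.3 ms
```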