The Effect of Ownership Structure on Stock Prices after Crisis: A Study on the ISE 100 Index

Using Turkish data, this study investigates whether a firm's ownership structure affects its stock price after the crisis. A linear regression model is estimated on data for the non-financial firms trading in the Istanbul Stock Exchange 100 Index (ISE 100). The findings show that the explanatory variables (inside ownership, largest ownership, concentrated ownership, foreign shareholders, family control and dispersed ownership) have little power in explaining stock prices after the crisis. Family-controlled firms and concentrated ownership are positively related to stock price, while dispersed ownership, largest ownership, foreign shareholders and inside ownership show a negative relation; however, since the p-values exceed 0.05, none of these relations is statistically significant. In addition, the analysis shows that the shares of firms with inside, largest and dispersed ownership structures outperform those of the other firms. Furthermore, firms with concentrated ownership outperform family-controlled firms.
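
As an illustration of the kind of cross-sectional analysis described above, the following sketch estimates an ordinary least squares regression of post-crisis stock price on ownership-structure variables. The file name and column names are hypothetical placeholders, not the study's actual data.

```python
# Minimal sketch (not the authors' exact specification): OLS regression of
# post-crisis stock price on ownership-structure variables for ISE 100
# non-financial firms. File and column names below are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("ise100_ownership.csv")          # hypothetical data file
X = df[["inside_own", "largest_own", "concentrated_own",
        "foreign_share", "family_controlled", "dispersed_own"]]
X = sm.add_constant(X)                            # intercept term
y = df["post_crisis_price"]

model = sm.OLS(y, X).fit()
print(model.summary())    # coefficient signs and p-values (p > 0.05 => not significant)
```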

Development of User Interface for Path Planning System for Bus Network and On-demand Bus Reservation System

The route bus system is a fundamental means of transportation for elderly people and students, and it plays an important role in every province. However, the number of passengers decreases year by year, so the authors developed a web application called "Bus-Net" to help sustain public transport. Bus-Net has two problems: its user interface does not take the variety of client devices into account, and its path planning system does not support the on-demand bus. Therefore, Bus-Net was improved to work on a variety of devices, and a new path planning function supporting the on-demand bus was developed.
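
For orientation, the sketch below shows a generic shortest-path query of the kind a bus trip planner performs; Bus-Net's actual planner (timetables, transfers, on-demand stops) is more elaborate, and the stops and travel times here are purely illustrative.

```python
# Generic Dijkstra shortest-path sketch over a bus-network graph; not Bus-Net's
# actual algorithm. Stop names and travel times are illustrative only.
import heapq

def shortest_path(graph, start, goal):
    """graph: dict mapping stop -> list of (neighbor_stop, minutes)."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, stop, path = heapq.heappop(queue)
        if stop == goal:
            return cost, path
        if stop in visited:
            continue
        visited.add(stop)
        for nxt, minutes in graph.get(stop, []):
            if nxt not in visited:
                heapq.heappush(queue, (cost + minutes, nxt, path + [nxt]))
    return float("inf"), []

bus_graph = {"Station": [("Campus", 20), ("Hospital", 8)],
             "Hospital": [("Campus", 6)],
             "Campus": []}
print(shortest_path(bus_graph, "Station", "Campus"))   # (14, ['Station', 'Hospital', 'Campus'])
```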

Automatic Distance Compensation for Robust Voice-based Human-Computer Interaction

A distant-talking voice-based HCI system suffers from performance degradation due to the mismatch between the acoustic speech observed at runtime and the acoustic model obtained in training. The mismatch is caused by the change in the power of the speech signal as observed at the microphones. This change is greatly influenced by the change in distance, which affects the speech dynamics inside the room before the signal reaches the microphones. Moreover, as the speech signal is reflected, its acoustical characteristics are also altered by the room properties. In general, power mismatch due to distance is a complex problem. This paper presents a novel approach to dealing with distance-induced mismatch by intelligently sensing the instantaneous variation of voice power and compensating the model parameters. First, the distant-talking speech signal is processed through microphone array processing, and the corresponding distance information is extracted. Distance-sensitive Gaussian Mixture Models (GMMs), pre-trained to capture both speech power and room properties, are used to predict the most likely distance of the speech source. Pre-computed statistical priors corresponding to that distance are then selected to correct the statistics of the generic model, which is kept frozen during training. Thus, the model parameters are post-conditioned to match the power of the instantaneous speech acoustics at runtime, which improves the likelihood of predicting the correct speech command at farther distances. We experiment using real data recorded inside two rooms. The experimental evaluation shows that voice recognition with our method is more robust to changes in distance than the conventional approach. Under the most acoustically challenging condition in our experiment (Room 2 at 2.5 meters), our method achieved a 24.2% improvement in recognition performance over the best-performing conventional method.
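
A minimal sketch of the distance-selection step, under the assumption that one GMM per candidate distance has been trained on power-related features: the runtime features are scored under each GMM and the best-scoring distance would then index the pre-computed priors. Feature extraction and the compensation itself are omitted, and the data are synthetic.

```python
# Sketch of the distance-selection idea: per-distance GMMs trained on
# power-related speech features; at runtime the distance whose GMM gives the
# highest likelihood is chosen. Synthetic features stand in for real ones.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical training features (e.g., frame log-power statistics) per distance (m).
train = {0.5: rng.normal(-2.0, 0.5, (500, 1)),
         1.5: rng.normal(-4.0, 0.6, (500, 1)),
         2.5: rng.normal(-6.0, 0.7, (500, 1))}

gmms = {d: GaussianMixture(n_components=2, random_state=0).fit(x)
        for d, x in train.items()}

def predict_distance(features):
    """Pick the distance whose GMM best explains the runtime features."""
    return max(gmms, key=lambda d: gmms[d].score(features))

runtime_features = rng.normal(-5.8, 0.7, (50, 1))     # unseen utterance at ~2.5 m
print(predict_distance(runtime_features))             # expected: 2.5
```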

Implementing a Visual Servoing System for Robot Control

Nowadays, with the emergence of new applications such as robot control based on image processing, artificial vision for visual servoing is a rapidly growing discipline, and human-machine interaction plays a significant role in controlling robots. This paper presents a new algorithm for visual servoing, based on spatio-temporal volumes, that aims to control robots. In this algorithm, after the necessary pre-processing of the video frames, a spatio-temporal volume is constructed for each gesture and a feature vector is extracted. These volumes are then analyzed for matching in two consecutive stages. For hand gesture recognition and classification we tested different classifiers, including k-nearest neighbor, learning vector quantization and back-propagation neural networks. We tested the proposed algorithm on the collected data set, and the results showed a correct gesture recognition rate of 99.58 percent. We also tested the algorithm on noisy images, where it achieved a correct recognition rate of 97.92 percent.
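
The following sketch illustrates only the classification stage with a k-nearest-neighbor classifier over feature vectors; the spatio-temporal volume construction is assumed to have produced the features, which are replaced here by synthetic data.

```python
# Minimal sketch of the classification stage: k-NN over feature vectors that
# would come from spatio-temporal gesture volumes (synthetic data used here).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_gestures, samples_per_gesture, feat_dim = 5, 40, 32

# Synthetic stand-in for spatio-temporal feature vectors (one cluster per gesture).
X = np.vstack([rng.normal(g, 0.5, (samples_per_gesture, feat_dim))
               for g in range(n_gestures)])
y = np.repeat(np.arange(n_gestures), samples_per_gesture)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
clf = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
print("recognition rate:", clf.score(X_te, y_te))
```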

Experimental Studies on the Combustion and Emission Characteristics of a Diesel Engine Fuelled with Used Cooking Oil Methyl Ester and its Diesel Blends

Transesterified vegetable oils (biodiesel) are promising alternative fuels for diesel engines. Used vegetable oils are disposed of by restaurants in large quantities, but their high viscosity restricts their direct use in diesel engines. In this study, used cooking oil was dehydrated and then transesterified using an alkaline catalyst. The combustion, performance and emission characteristics of Used Cooking Oil Methyl Ester (UCME) and its blends with diesel oil are analysed in a direct injection C.I. engine. The fuel properties and combustion characteristics of UCME are found to be similar to those of diesel. A minor decrease in thermal efficiency together with a significant reduction in particulates, carbon monoxide and unburnt hydrocarbons is observed compared to diesel. The use of transesterified used cooking oil and its blends as fuel for diesel engines will reduce dependence on fossil fuels and considerably decrease environmental pollution.

Iran’s Gas Flare Recovery Options Using MCDM

In this paper, five options for the recovery of Iran's flared gas are compared using an MCDM method. To develop the model, the weighting factor of each indicator is determined with an AHP method using the Expert Choice software. Several cases were considered in the analysis; they are defined by always keeping one criterion in first position, while the priorities of the remaining criteria are defined by ordinal information describing the mutual relations of the criteria and their respective indicators. The results show that, among these cases, CHP usage obtains the highest priority when the availability indicator is highly weighted, pipeline usage obtains the highest priority when the environmental indicator is highly weighted, and injection obtains the highest priority when the economic indicator is highly weighted; injection is also the preferred option when all criteria receive the same weight.
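
A small sketch of the AHP weighting step (normally carried out in Expert Choice): criterion weights are derived as the principal eigenvector of a pairwise comparison matrix. The 3x3 judgments below are illustrative only, not the study's actual matrix.

```python
# Illustrative AHP weighting step: priority vector from a pairwise comparison
# matrix via its principal eigenvector, plus the consistency index.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],      # e.g., economic vs environmental vs availability
              [1/3, 1.0, 2.0],      # (judgment values are illustrative)
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = w / w.sum()                                 # normalized priority vector
lam_max = np.real(eigvals).max()
CI = (lam_max - len(A)) / (len(A) - 1)                # consistency index
print("weights:", np.round(weights, 3), "CI:", round(CI, 3))
```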

Monitoring Patents Using Statistical Process Control

Statistical process control (SPC) is one of the most powerful tools developed to assist in the effective control of quality; it involves collecting, organizing and interpreting data during production. This article aims to show how industries can use SPC to control and continuously improve product quality by monitoring production, detecting deviations in the parameters that represent the process, and thereby reducing the amount of off-specification product and the costs of production. The study also conducted a technological forecast in order to characterize the research being done on SPC. The survey was carried out in the Spacenet and WIPO databases and in the database of the National Institute of Industrial Property (INPI). The largest numbers of deposits come from United States applicants and from filings via the PCT, and the classification section appearing most frequently is section F.
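
For readers unfamiliar with SPC itself, the sketch below computes Shewhart X-bar control limits from subgroup data and flags out-of-control points; it illustrates the monitoring idea only, is unrelated to the patent survey, and uses synthetic measurements.

```python
# Basic SPC illustration: Shewhart X-bar chart limits from subgroup means,
# used to flag drift toward off-specification product. Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
subgroups = rng.normal(10.0, 0.2, size=(25, 5))       # 25 subgroups of size 5

xbar = subgroups.mean(axis=1)
rbar = (subgroups.max(axis=1) - subgroups.min(axis=1)).mean()
A2 = 0.577                                            # control-chart constant for n = 5

center = xbar.mean()
ucl, lcl = center + A2 * rbar, center - A2 * rbar
signals = np.where((xbar > ucl) | (xbar < lcl))[0]
print(f"CL={center:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}  out-of-control subgroups={signals}")
```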

Holistic Face Recognition using Multivariate Approximation, Genetic Algorithms and AdaBoost Classifier: Preliminary Results

Several works on facial recognition have dealt with methods that identify isolated characteristics of the face or with templates that encompass several regions of it. In this paper a new technique is introduced that approaches the problem holistically, dispensing with the need to identify geometrical characteristics or regions of the face. A face is characterized by randomly sampling selected attributes of the pixels of its image. From this information we construct a data set corresponding to low-frequency content, gradient, entropy and several other pixel characteristics of the image, yielding a set of "p" variables. The multivariate data set is approximated with polynomials that minimize the fitting error in the minimax sense (L∞ norm). The use of a Genetic Algorithm (GA) makes it possible to circumvent the problem of dimensionality inherent in higher-degree polynomial approximations. The GA yields the degree and the coefficients of the polynomials approximating the image of a face. The system is trained by finding, through a resampling process, a family of characteristic polynomials in several variables (pixel characteristics) for each face Fi in the database. A face F is recognized by finding its characteristic polynomials and applying an AdaBoost classifier that compares F's polynomials with those of each Fi. The winner is the polynomial family closest to F's, which corresponds to the target face in the database.
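
The sketch below illustrates a minimax (L∞) polynomial fit driven by an evolutionary optimizer; scipy's differential evolution stands in for the paper's Genetic Algorithm, and a one-variable synthetic signal replaces the multivariate pixel-attribute data.

```python
# Illustrative minimax (L-infinity) polynomial fit using an evolutionary
# optimizer; differential evolution is a stand-in for the paper's GA, and the
# one-variable data below are synthetic.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 200)
y = np.cos(2 * x) + 0.02 * rng.standard_normal(x.size)   # synthetic "attribute" signal

degree = 4

def linf_error(coeffs):
    return np.max(np.abs(np.polyval(coeffs, x) - y))      # worst-case fit error

bounds = [(-3, 3)] * (degree + 1)
result = differential_evolution(linf_error, bounds, seed=3, tol=1e-8)
print("L-inf error:", round(result.fun, 4))
print("coefficients:", np.round(result.x, 3))
```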

Constitutive Equations for Human Saphenous Vein Coronary Artery Bypass Graft

Coronary artery bypass grafts (CABG) are widely studied with respect to the hemodynamic conditions that play an important role in the development of restenosis. However, papers concerned with the constitutive modeling of CABG are lacking in the literature. The purpose of this study is to find a constitutive model for CABG tissue. A sample of a CABG obtained during an autopsy underwent an inflation-extension test. Displacements were recorded by CCD cameras and subsequently evaluated by digital image correlation. Pressure-radius and axial force-elongation data were used to fit the material model. The tissue was modeled as a one-layered composite reinforced by two families of helical fibers. The material is assumed to be locally orthotropic, nonlinear, incompressible and hyperelastic. Material parameters are estimated for two strain energy functions (SEF). The first is the classical exponential; the second is a logarithmic SEF that can be interpreted by means of limiting (finite) strain extensibility. The presented material parameters are estimated by optimization based on the radial and axial equilibrium equations of a thick-walled tube. Both material models fit the experimental data successfully, but the exponential model fits the relationship between axial force and axial strain significantly better than the logarithmic one.
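
For orientation, representative forms of the two SEF families are shown below: an anisotropic exponential SEF of Holzapfel type and a logarithmic SEF with limiting fiber extensibility. These are assumed, generic expressions, not necessarily the exact models fitted in the paper.

```latex
% Representative forms only (assumed, not the paper's exact expressions).
\[
  \Psi_{\mathrm{exp}} = \frac{\mu}{2}\,(I_1 - 3)
  + \frac{k_1}{2 k_2}\sum_{i=4,6}\Bigl\{\exp\!\bigl[k_2 (I_i - 1)^2\bigr] - 1\Bigr\},
\qquad
  \Psi_{\mathrm{log}} = \frac{\mu}{2}\,(I_1 - 3)
  - \sum_{i=4,6}\frac{k_1}{2 k_2}\,\ln\!\Bigl[1 - k_2 (I_i - 1)^2\Bigr],
\]
% I_1 is the first invariant of the right Cauchy-Green tensor, I_4 and I_6 the
% squared stretches along the two fiber families; the logarithmic term
% enforces the limiting extensibility k_2 (I_i - 1)^2 < 1.
```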

EGCL: An Extended G-Code Language with Flow Control, Functions and Mnemonic Variables

In the context of computer numerical control (CNC) and computer aided manufacturing (CAM), the capabilities of programming languages such as symbolic and intuitive programming, program portability and geometrical portfolio are especially important. They save time, help avoid errors during part programming and permit code reuse. Our updated literature review indicates that the current state of the art presents gaps in parametric programming, program portability and programming flexibility. In response to this situation, this article presents a compiler implementation for EGCL (Extended G-code Language), a new, enriched CNC programming language that allows the use of descriptive variable names, geometrical functions and flow-control statements (if-then-else, while). Our compiler produces low-level, generic, elementary ISO-compliant G-code, thus allowing flexibility in the choice of the executing CNC machine and in portability. Our results show that readable variable names and flow-control statements allow simplified and intuitive part programming and permit reuse of the programs. Future work includes allowing the programmer to define their own functions in EGCL, in contrast to the current status of having them as built-in library functions.
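
To make the idea concrete, the sketch below uses Python to mimic what a small routine with mnemonic variables and a while loop compiles down to, namely a flat sequence of elementary ISO G-code moves; the peck-drilling example and the emitted codes are illustrative, not EGCL's actual syntax or compiler output.

```python
# Illustrative only: a tiny generator showing how named variables and a while
# loop can be flattened into elementary G-code moves, in the spirit of EGCL.
# The peck-drilling example and parameters are hypothetical.
def peck_drill(hole_depth, peck_step, safe_z=2.0, feed=100):
    """Emit elementary G-code for a simple peck-drilling cycle."""
    lines = [f"G00 Z{safe_z:.3f}"]                     # rapid to safe height
    current_depth = 0.0
    while current_depth < hole_depth:                  # flow control resolved up front
        current_depth = min(current_depth + peck_step, hole_depth)
        lines.append(f"G01 Z{-current_depth:.3f} F{feed}")   # feed down one peck
        lines.append(f"G00 Z{safe_z:.3f}")                   # retract
    return lines

print("\n".join(peck_drill(hole_depth=9.0, peck_step=4.0)))
```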

High Energy Dual-Wavelength Mid-Infrared Extracavity KTA Optical Parametric Oscillator

A high energy dual-wavelength extracavity KTA optical parametric oscillator (OPO) with excellent stability and beam quality, pumped by a Q-switched single-longitudinal-mode Nd:YAG laser, has been demonstrated based on a type II noncritical phase matching (NCPM) KTA crystal. A maximum pulse energy of 10.2 mJ at 3.467 μm, with output stability better than 4.1% rms, is obtained at a repetition rate of 10 Hz and a pulse width of 2 ns, while 11.9 mJ of 1.535 μm radiation is obtained simultaneously. This extracavity NCPM KTA OPO is very useful when high energy, high beam quality and a smooth time domain are needed.

Investigation of Short Time Scale Variation of Solar Radiation Spectrum in UV, PAR, and NIR Bands due to Atmospheric Aerosol and Water Vapor

Long-term variation of solar insolation has been widely studied, but parallel observations on short time scales are rather lacking. This paper investigates the short time scale evolution of the solar radiation spectrum (UV, PAR, and NIR bands) due to atmospheric aerosols and water vapor. A total of 25 days of global and diffuse solar spectra, ranging from air mass 2 to 6, were collected using a ground-based spectrometer with the shadowband technique. The results show that the variation of solar radiation is smallest in the UV fraction, followed by PAR, and largest in the NIR. The broader variations in PAR and NIR are associated with short time scale fluctuations of aerosols and water vapor. The corresponding daily evolution of the UV, PAR, and NIR fractions implies that aerosol and water vapor variations could also be responsible for the deviation pattern in the Langley-plot analysis.
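
The Langley-plot analysis referred to above amounts to a linear regression of the logarithm of the measured signal against air mass; the sketch below shows the procedure on synthetic data with an assumed optical depth and extraterrestrial signal.

```python
# Langley-plot sketch: regress ln(signal) against air mass and extrapolate to
# air mass zero. The irradiance values below are synthetic.
import numpy as np

air_mass = np.linspace(2, 6, 25)
tau = 0.35                                           # assumed total optical depth
V0_true = 1.9                                        # assumed extraterrestrial signal
rng = np.random.default_rng(4)
V = V0_true * np.exp(-tau * air_mass) * np.exp(rng.normal(0, 0.01, air_mass.size))

slope, intercept = np.polyfit(air_mass, np.log(V), 1)
print("retrieved optical depth:", round(-slope, 3))
print("retrieved V0:", round(np.exp(intercept), 3))
```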

Iterative Joint Power Control and Partial Crosstalk Cancellation in Upstream VDSL

Crosstalk is the major limiting issue in very high bit-rate digital subscriber line (VDSL) systems in terms of bit-rate and service coverage. At the central office side, joint signal processing accompanied by appropriate power allocation enables complex multiuser processors to provide near-capacity rates. Unfortunately, complexity grows with the square of the number of lines within a binder; however, since only a few dominant crosstalkers contribute the main part of the crosstalk power, the canceller structure can be simplified, resulting in much lower run-time complexity. In this paper, a multiuser power control scheme, namely iterative waterfilling, is combined with previously proposed partial crosstalk cancellation approaches, and simulation results verify that the combination achieves the best performance reported so far.
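
The following sketch outlines iterative waterfilling for two users: each user repeatedly waterfills its power budget across tones, treating the other user's crosstalk as noise. Channel and crosstalk gains are synthetic, and the partial cancellation stage is not modeled.

```python
# Conceptual iterative waterfilling sketch for two DSL users; gains, budgets
# and noise are synthetic, and partial crosstalk cancellation is omitted.
import numpy as np

rng = np.random.default_rng(5)
tones, users, p_budget, noise = 64, 2, 1.0, 1e-6
H = rng.uniform(0.2, 1.0, (users, tones)) ** 2        # direct channel power gains
X = rng.uniform(0.0, 0.05, (users, users, tones))      # crosstalk power gains (diagonal unused)
p = np.full((users, tones), p_budget / tones)          # start from a flat allocation

def waterfill(inv_snr, budget):
    """Waterfill `budget` over tones with per-tone noise-to-gain ratio `inv_snr`."""
    lo, hi = 0.0, budget + inv_snr.max()
    for _ in range(60):                                 # bisection on the water level
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv_snr, 0.0).sum() < budget:
            lo = mu
        else:
            hi = mu
    return np.maximum(0.5 * (lo + hi) - inv_snr, 0.0)

for _ in range(30):                                     # iterate users until (near) convergence
    for k in range(users):
        interference = noise + sum(X[k, j] * p[j] for j in range(users) if j != k)
        p[k] = waterfill(interference / H[k], p_budget)

for k in range(users):
    interference = noise + sum(X[k, j] * p[j] for j in range(users) if j != k)
    rate = np.sum(np.log2(1 + H[k] * p[k] / interference))
    print(f"user {k}: {rate:.1f} bits per DMT symbol")
```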

Comparing Transformational Leadership in Successful and Unsuccessful Companies

In this article the problem and its importance are first described, and transformational leadership is then studied in the light of leadership theories. Issues such as the definition of transformational leadership and its dimensions are compared on the basis of the views of various experts, and transformational leadership is then examined in successful and unsuccessful companies. Regarding the methodology, the research method, hypotheses, population and statistical sample are described, and the research findings are analyzed with descriptive and inferential statistical methods presented in analytical tables. Finally, conclusions are drawn from the results of the statistical tests. The final result shows that transformational leadership is significantly higher in successful companies than in unsuccessful ones P

Higher-Dimensional Quantum Cryptography

We report on a high-speed quantum cryptography system that utilizes simultaneous entanglement in polarization and in "time-bins". With multiple degrees of freedom contributing to the secret key, we can achieve over ten bits of random entropy per detected coincidence. In addition, we collect from multiple spots on the downconversion cone to further amplify the data rate, allowing us to achieve over 10 Mbits of secure key per second.
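
As a generic illustration (with symbolic dimensions, not the experiment's actual values), the entropy per detected coincidence adds across independent degrees of freedom:

```latex
% Generic illustration only; the dimensions d are symbols, not measured values.
\[
  H \;=\; \log_2\!\bigl(d_{\mathrm{pol}}\, d_{\mathrm{time}}\, d_{\mathrm{spatial}}\bigr)
  \;=\; \log_2 d_{\mathrm{pol}} + \log_2 d_{\mathrm{time}} + \log_2 d_{\mathrm{spatial}},
\]
% so exceeding ten bits per coincidence requires the product of the effective
% dimensions to exceed 2^10 = 1024.
```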

Surrogate-based Evolutionary Algorithm for Design Optimization

Optimization is often a critical issue in system design. Evolutionary Algorithms are population-based, stochastic search techniques widely used as efficient global optimizers. However, finding optimal solutions to complex, high-dimensional, multimodal problems often requires very expensive function evaluations and can therefore be practically prohibitive. The Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model presented in our earlier work [14] reduces computation time by the controlled use of meta-models that partially replace the actual function evaluations with approximate ones. However, the underlying assumption in DAFHEA is that the training samples for the meta-model are generated from a single uniform model. Situations such as model formation involving variable input dimensions and noisy data certainly cannot be covered by this assumption. In this paper we present an enhanced version of DAFHEA that incorporates a multiple-model based learning approach for the SVM approximator. DAFHEA-II, the enhanced version of the framework, also avoids the high computational expense of the additional clustering required by the original DAFHEA. The proposed framework has been tested on several benchmark functions, and the empirical results illustrate the advantages of the proposed technique.
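
The sketch below conveys the general surrogate-assisted idea in the spirit of the text, not DAFHEA-II itself: an SVM regressor pre-screens mutated candidates, and only the most promising ones receive expensive true evaluations; the objective function and parameters are illustrative.

```python
# Conceptual surrogate-assisted evolution sketch (not the DAFHEA-II algorithm):
# an SVM regressor pre-screens offspring; only the best-predicted candidates
# get expensive true evaluations. Objective and parameters are illustrative.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(6)

def expensive_f(x):                                   # stand-in for a costly simulation
    return np.sum(x**2, axis=-1)

dim, pop_size, generations, true_evals_per_gen = 10, 40, 30, 8
pop = rng.uniform(-5, 5, (pop_size, dim))
fit = expensive_f(pop)
archive_X, archive_y = pop.copy(), fit.copy()

for _ in range(generations):
    surrogate = SVR(C=10.0).fit(archive_X, archive_y)        # meta-model on all true evaluations
    children = pop + rng.normal(0, 0.5, pop.shape)           # simple mutation
    predicted = surrogate.predict(children)
    best_idx = np.argsort(predicted)[:true_evals_per_gen]    # pre-screen with the model
    true_vals = expensive_f(children[best_idx])
    archive_X = np.vstack([archive_X, children[best_idx]])
    archive_y = np.concatenate([archive_y, true_vals])
    worst = np.argsort(fit)[-true_evals_per_gen:]            # replace worst parents
    pop[worst], fit[worst] = children[best_idx], true_vals

print("best objective found:", round(fit.min(), 4))
```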

Shadow Detection for Increased Accuracy of Privacy Enhancing Methods in Video Surveillance Edge Devices

Shadow detection is still considered one of the open challenges for intelligent automated video surveillance systems. A prerequisite for reliable and accurate detection and tracking is correct shadow detection and classification. In such a landscape of conditions, privacy issues add further complexity and likewise require reliable shadow detection. In this work the intertwining of security, accuracy, reliability and privacy is analyzed and, accordingly, a novel architecture for Privacy Enhancing Video Surveillance (PEVS) is introduced. Shadow detection and masking are handled by combining two different approaches simultaneously, which results in a unique privacy enhancement without affecting security. The methodology was subsequently employed successfully in a large-scale wireless video surveillance system; privacy-relevant information was stored and encrypted on the unit, without transferring it over an untrusted network.
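
As background only, the sketch below shows a common chromaticity-based shadow test (a pixel darker than the background but with similar hue and saturation is labeled shadow); it is not the pair of approaches combined in the paper, and the thresholds are illustrative.

```python
# Generic HSV chromaticity-based shadow test, given as background only; it does
# not reproduce the paper's combined approach, and thresholds are illustrative.
import numpy as np

def shadow_mask(frame_hsv, background_hsv,
                beta_low=0.4, beta_high=0.9, tau_s=0.15, tau_h=20):
    """frame_hsv, background_hsv: float arrays (H in degrees, S and V in [0, 1])."""
    h_f, s_f, v_f = np.moveaxis(frame_hsv, -1, 0)
    h_b, s_b, v_b = np.moveaxis(background_hsv, -1, 0)
    ratio = v_f / np.maximum(v_b, 1e-6)
    hue_diff = np.minimum(np.abs(h_f - h_b), 360 - np.abs(h_f - h_b))
    return ((beta_low <= ratio) & (ratio <= beta_high) &   # darker, but not too dark
            (np.abs(s_f - s_b) <= tau_s) &                 # saturation barely changes
            (hue_diff <= tau_h))                           # hue preserved

# Tiny synthetic check: a pixel at 60% of background brightness, same hue/saturation.
bg = np.array([[[30.0, 0.30, 0.80]]])
fr = np.array([[[32.0, 0.32, 0.48]]])
print(shadow_mask(fr, bg))   # [[ True]]
```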

Design of an Electronic Nose with ZnO Nanowire Arrays

Vertical ZnO nanowire array films were synthesized by an aqueous method for sensing applications. The ZnO nanowires were structurally investigated using X-ray diffraction (XRD) and scanning electron microscopy (SEM), and the gas-sensing properties of the ZnO nanowire array films were studied. The ZnO nanowire array film sensor exhibits excellent sensing properties towards O2 and CO2 at 100 °C, with a response time shorter than 5 s. The high surface-to-volume ratio of the vertical ZnO nanowires and their high mobility account for the fast response and recovery. The sensor response was measured in the range from 100 to 500 ppm of O2 and CO2 in this study.

Fuzzy Hierarchical Clustering Applied for Quality Estimation in Manufacturing System

This paper develops a quality estimation method based on fuzzy hierarchical clustering. Quality estimation is essential to quality control and quality improvement, as a precise estimate supports correct decision-making and thus better quality control. Normally, the quality of finished products in a manufacturing system can be differentiated by quality standards. In real-life situations, however, the collected data may be vague and difficult to classify, and they are usually represented as fuzzy numbers; estimating the quality of a product represented by fuzzy numbers is not easy. In this research, trapezoidal fuzzy numbers are collected from the manufacturing process and the collected data are classified into different clusters to obtain the estimate. Since ordinary hierarchical clustering methods can only be applied to real numbers, fuzzy hierarchical clustering is selected to handle this problem on the basis of the quality standards.
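
A minimal sketch of the core idea, assuming a simple vertex-wise distance between trapezoidal fuzzy numbers (one of several reasonable choices) fed into standard hierarchical clustering; the fuzzy measurements are synthetic.

```python
# Sketch: a vertex-wise distance between trapezoidal fuzzy numbers (an assumed,
# simple metric) fed into standard hierarchical clustering. Data are synthetic.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Each row is a trapezoidal fuzzy number (a, b, c, d) from the process.
fuzzy_data = np.array([[1.0, 1.2, 1.4, 1.6],
                       [1.1, 1.3, 1.5, 1.7],
                       [2.8, 3.0, 3.2, 3.4],
                       [2.9, 3.1, 3.3, 3.6],
                       [5.0, 5.2, 5.4, 5.8]])

# Averaged vertex-wise distance between two trapezoidal fuzzy numbers.
dist = pdist(fuzzy_data, metric=lambda u, v: np.mean(np.abs(u - v)))

Z = linkage(dist, method="average")
labels = fcluster(Z, t=3, criterion="maxclust")   # e.g., three quality grades
print(labels)                                      # samples grouped into quality clusters
```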

Ruin Probabilities with Dependent Rates of Interest and Autoregressive Moving Average Structures

This paper studies ruin probabilities in two discrete-time risk models in which premiums, claims and rates of interest are modelled by three autoregressive moving average processes. Generalized Lundberg inequalities for the ruin probabilities are derived using a recursive technique. A numerical example is given to illustrate the application of these probability inequalities.
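
For orientation, a standard discrete-time surplus recursion with stochastic interest and the associated ruin probability are shown below; the paper's exact ARMA specifications for premiums, claims and interest rates may differ.

```latex
% A standard discrete-time surplus recursion with stochastic interest, shown
% for orientation only; the paper's exact model may differ.
\[
  U_n \;=\; U_{n-1}\,(1 + I_n) \;+\; P_n \;-\; C_n , \qquad n = 1, 2, \dots,
\]
\[
  \psi(u) \;=\; \Pr\!\Bigl(\,\bigcup_{n \ge 1}\{U_n < 0\}\;\Big|\;U_0 = u\Bigr),
\]
% U_n is the surplus after period n, P_n the premium, C_n the claim, I_n the
% rate of interest, and psi(u) the ruin probability bounded above by
% Lundberg-type inequalities.
```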