South Korean Tourists' Expectation, Satisfaction and Loyalty Relationship

The aim of this study is to investigate the relationship between the expectations, satisfaction and loyalty of South Korean tourists visiting Turkey. A questionnaire was used as the data collection tool. The questionnaires were completed by South Korean tourists travelling to Turkey on package tours and as individual travellers. The survey was conducted in 2014 in Nevsehir (the Cappadocia region) and Istanbul, with tour guides and agency staff helping to administer it. The questionnaire consists of four parts: demographic characteristics of tourists, travel behavior characteristics, perception of expectations regarding destination attributes, and perception of destination loyalty. A 5-point Likert-type scale covering 28 destination attributes was used to measure the expectations of South Korean tourists coming to Turkey. Destination loyalty was measured with three items: talking about Turkey to others, recommending Turkey to others, and intention to revisit Turkey. The basic hypothesis of the research is that there is a statistically significant relationship among the expectations, satisfaction and destination loyalty of South Korean tourists coming to Turkey. The results indicated that expectation had a significant effect on overall satisfaction. In addition, a significant relationship was found between tourists' overall satisfaction and destination loyalty. Based on these findings, some suggestions for tour operators and travel agencies were made.
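The hypothesized chain, expectation driving overall satisfaction and satisfaction driving destination loyalty, can be checked with simple bivariate regressions. The sketch below illustrates such an analysis on invented Likert-style data; the sample size, effect sizes and noise levels are assumptions, not the study's data.

```python
# Hypothetical illustration of the abstract's basic hypothesis: expectation -> overall
# satisfaction -> destination loyalty, tested with simple regressions.
# All numbers below are made up for demonstration; they are not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200                                          # hypothetical number of respondents
expectation = rng.normal(3.8, 0.6, n)            # mean of 28 five-point Likert items
satisfaction = 0.6 * expectation + rng.normal(0, 0.4, n)   # assumed positive effect
loyalty = 0.7 * satisfaction + rng.normal(0, 0.4, n)       # word of mouth / revisit intent

# H1: expectation -> overall satisfaction
slope, intercept, r, p, se = stats.linregress(expectation, satisfaction)
print(f"expectation -> satisfaction: b={slope:.2f}, r={r:.2f}, p={p:.3g}")

# H2: overall satisfaction -> destination loyalty
slope, intercept, r, p, se = stats.linregress(satisfaction, loyalty)
print(f"satisfaction -> loyalty:     b={slope:.2f}, r={r:.2f}, p={p:.3g}")
```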

The Effect of Stone Column (Nailing and Geogrid) on Stability of Expansive Clay

As the demand for building sites grows and suitable ground becomes scarce, engineers have sought new methods to overcome the weakness of soils, and for economic reasons various techniques have been used to improve weak ground. Because of the rapid development of infrastructure, expanding construction activity is unavoidable, and on many sites with very poor soil conditions engineers face obvious problems. One of the most important methods for improving weak soils is the stone column. The method was first introduced in France in 1830 to improve a native soil. Stone columns are used on difficult foundation sites all over the world to increase bearing capacity, reduce total and differential settlements, accelerate consolidation, stabilize embankment slopes and increase liquefaction resistance. A recent technique is proposed in which vertical nails are installed around the stone columns to improve their performance; the benefit of the vertical circumferential nails increases as the nail diameter, number and embedment depth increase. Based on the results of this study, the load carrying capacity improves as the length and strength of the reinforcement in vertically encased stone columns (CESC) increase. The main purpose of this study is to compare two methods of reinforcing stone columns (installing nails around the columns and using geogrid on the clay) for increasing the bearing capacity and reducing total and differential settlements.

Robust Batch Process Scheduling in Pharmaceutical Industries: A Case Study

Batch production plants give rise to a wide range of scheduling problems. In the pharmaceutical industry a batch process is usually described by a recipe, consisting of an ordering of tasks to produce the desired product. In this research work we focused on pharmaceutical production processes requiring the culture of a microorganism population (e.g. bacteria or yeasts used to produce antibiotics). Several sources of uncertainty may influence the yield of the culture processes, including (i) low performance and quality of the cultured microorganism population or (ii) microbial contamination. For these reasons, robustness is a valuable property in the considered application context. In particular, a robust schedule will not collapse immediately when a culture of microorganisms has to be discarded due to microbial contamination. Rather, a robust schedule should change locally and in small proportions, and the overall performance measure (e.g. makespan or lateness) should change little if at all. In this research work we formulated a constraint programming optimization (COP) model for the robust planning of antibiotics production. We developed a discrete-time model with a multi-criteria objective, ordering the different criteria and performing a lexicographic optimization. A feasible solution of the proposed COP model is a schedule of a given set of tasks onto the available resources. The schedule has to satisfy task precedence constraints, resource capacity constraints and time constraints; in particular, the time constraints model task due dates and resource availability time windows. To improve schedule robustness, we modeled the concept of (a, b) super-solutions, where a and b are input parameters of the COP model. An (a, b) super-solution is one in which, if a variables (e.g. the completion times of a culture tasks) lose their values (i.e. the cultures are contaminated), the solution can be repaired by assigning new values to these variables (e.g. the completion times of backup culture tasks) and changing at most b other variables (e.g. delaying the completion of at most b other tasks). The efficiency and applicability of the proposed model are demonstrated by solving instances taken from a real-life pharmaceutical company. Computational results showed that the determined super-solutions are near-optimal.
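A minimal constraint-model sketch of the scheduling core described above (tasks with durations, precedence constraints, due dates and a shared unit-capacity resource, minimizing makespan), written with Google OR-Tools CP-SAT. The (a, b) super-solution constraints and the lexicographic multi-criteria objective are not reproduced here, and the recipe data are invented.

```python
# Sketch of a batch-scheduling core: durations, precedences, due dates, one shared
# bioreactor, makespan minimization. Task names and numbers are hypothetical.
from ortools.sat.python import cp_model  # assumes Google OR-Tools is installed

durations = {"seed": 4, "culture": 10, "harvest": 3}       # hypothetical recipe tasks (h)
precedences = [("seed", "culture"), ("culture", "harvest")]
due_dates = {"harvest": 20}
horizon = sum(durations.values()) + 10

model = cp_model.CpModel()
start, end, interval = {}, {}, {}
for t, d in durations.items():
    start[t] = model.NewIntVar(0, horizon, f"start_{t}")
    end[t] = model.NewIntVar(0, horizon, f"end_{t}")
    interval[t] = model.NewIntervalVar(start[t], d, end[t], f"iv_{t}")

model.AddNoOverlap(list(interval.values()))      # one shared bioreactor (capacity 1)
for a, b in precedences:
    model.Add(start[b] >= end[a])                # recipe ordering
for t, due in due_dates.items():
    model.Add(end[t] <= due)                     # due-date constraint

makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, [end[t] for t in durations])
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for t in durations:
        print(t, solver.Value(start[t]), solver.Value(end[t]))
```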

Bed Evolution under One-Episode Flushing in a Trunk Sewer in Paris, France

Sewer deposits have been identified as a major cause of dysfunction in combined sewer systems with respect to sewer management, inducing negative consequences such as poor hydraulic conveyance, environmental damage and risks to workers' health. In order to overcome the problems of sedimentation, flushing has been considered the most practical and cost-effective way to minimize the impact of sediments and prevent such problems. Flushing, by generating a turbulent wave, can modify the bed form depending on the hydraulic properties and geometrical characteristics of the conduit. So far, the dynamics of bed load during high-flow events in combined sewer systems, a complex environment, is not well understood, mostly due to the lack of measuring devices capable of working correctly in the hostile conditions of combined sewers. In this regard, a one-episode flush, issued from an opening gate valve with a weir function, was carried out in a trunk sewer in Paris to assess its cleansing efficiency on the sediments (thickness: 0-30 cm). During more than 1 h of flushing, a maximum flow rate of 4.1 m3/s and a maximum water level of 2.1 m were recorded 5 m downstream of the gate. This paper aims to evaluate the efficiency of this type of gate over about 1.1 km (from 50 m upstream to 1050 m downstream of the gate) by (i) determining the bed grain-size distribution and the evolution of the sediments along the sewer channel, as well as their organic matter content, and (ii) identifying the sections that exhibit the greatest changes in texture after the flush. For the first objective, two series of samples, one before and one after the flush, were taken at the same points along the sewer channel and analyzed in the laboratory. A non-intrusive sampling instrument was used to extract the sediments smaller than fine gravels. The comparison between the sediment texture after the flush and the initial state revealed the zones most modified by the flush, in relation to the sewer invert slope and the hydraulic parameters, in the zone up to 400 m from the gate. At this distance, reflecting the increase in the range of sediment grain sizes, the D50 (median grain size) varies between 0.6 mm and 1.1 mm before flushing, against 0.8 mm and 10 mm after flushing. Overall, with regard to the invert slope of the sewer channel, the results indicate that grains smaller than sand (< 2 mm) are preferentially transported downstream along about 400 m from the gate: on average 69% before against 38% after the flush, with a greater dispersion of the grain-size distributions. Furthermore, a strong effect of the channel bed irregularities on the evolution of the bed material was observed after the flush.
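As a small illustration of how a D50 value like those quoted above is obtained, the sketch below interpolates the median diameter from a cumulative grain-size (sieve) curve; the sieve sizes and passing percentages are invented placeholders, not the Paris measurements.

```python
# Sketch: reading D50 off a sieve analysis by interpolating the cumulative passing
# curve in log-size space. Values below are hypothetical.
import numpy as np

sieve_mm = np.array([0.063, 0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0])   # sieve openings
percent_passing = np.array([5, 12, 25, 42, 58, 74, 88, 100.0])        # cumulative %

def d_percentile(p, sizes_mm, passing):
    """Interpolate the log-diameter at which `p` percent of the mass passes."""
    return float(np.interp(p, passing, np.log10(sizes_mm)))

d50 = 10 ** d_percentile(50, sieve_mm, percent_passing)
print(f"D50 = {d50:.2f} mm")
```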

Bioremediation of Sewage Sludge Contaminated with Fluorene Using a Lipopeptide Biosurfactant

The disposal and treatment of sewage sludge is an expensive and environmentally complex problem. In this work, a lipopeptide biosurfactant extracted from corn steep liquor was used as an eco-friendly and cost-competitive alternative for the mobilization and bioremediation of fluorene in sewage sludge. The results demonstrated that this biosurfactant is capable of mobilizing fluorene into the aqueous phase, reducing the amount of fluorene in the sewage sludge from 484.4 mg/kg to 413.7 mg/kg after 1 day and to 196.0 mg/kg after 27 days. Furthermore, once the fluorene had been extracted, the lipopeptide biosurfactant contained in the aqueous phase allowed its biodegradation, by up to 40.5% of the initial concentration of this polycyclic aromatic hydrocarbon.
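For clarity, the removal percentages implied by the sludge concentrations quoted above follow from simple arithmetic:

```latex
% Removal of fluorene from the sludge implied by the reported concentrations:
\[
  \frac{484.4 - 413.7}{484.4} \approx 14.6\% \ \text{(after 1 day)}, \qquad
  \frac{484.4 - 196.0}{484.4} \approx 59.5\% \ \text{(after 27 days)}.
\]
```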

Simulation Studies on Concentrating Type Solar Cookers

A solar dish collector has been designed, fabricated and tested for its performance on 10-03-2015 in Salem, Tamil Nadu, India. Experiments were carried out on coated and uncoated cooking vessels of 5 litre capacity used for cooking rice, and the results are shown in graphs. The solar cooker was able to cook food within the expected length of time, depending on the solar radiation levels. With minimum cooking power, the coated 5 litre pressure cooker cooked the food faster, owing to the conductivity of the coating material provided in the cooker.
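One common way to quantify cooking power for solar cookers is the sensible heat gained by the water or food load over a fixed interval (an ASAE S580-style definition); the abstract does not state which definition was used, so the sketch below, with an illustrative load and temperature rise, is only an assumption-labeled example.

```python
# Hedged sketch: average cooking power as sensible heat gained by a water load over a
# 10-minute interval (a common standardized definition, assumed here, not confirmed by
# the abstract). Load mass and temperature rise are illustrative only.
def cooking_power(mass_kg, dT_celsius, interval_s=600.0, cp=4186.0):
    """Average power (W) absorbed by a water load heated by dT_celsius in interval_s."""
    return mass_kg * cp * dT_celsius / interval_s

print(cooking_power(mass_kg=5.0, dT_celsius=4.0))   # 5 L load, +4 degC in 10 min, ~140 W
```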

Research on the Aeration Systems’ Efficiency of a Lab-Scale Wastewater Treatment Plant

In order to obtain efficient pollutant removal in small-scale wastewater treatment plants, uniform water flow has to be achieved. The experimental setup, designed for treating high-load wastewater (leachate), consists of two aerobic biological reactors and a lamellar settler. Both biological tanks were aerated using three different types of aeration systems: perforated pipes, membrane air diffusers and ceramic tube diffusers. The ability of each air diffusion system to homogenize the water mass was evaluated comparatively. The oxygen concentration was determined by optical sensors with data logging. The experimental data were analyzed comparatively for the three air dispersion systems in order to identify the variation of oxygen concentration under different operating conditions. The oxygenation capacity was calculated for each of the three systems and used as a performance and selection parameter. The global mass transfer coefficients were also evaluated, as they are important tools in designing the aeration system. Even though the tubular porous diffusers lead to higher oxygen concentrations than the perforated pipe system (which produces medium-sized bubbles in the aqueous solution), they do not reach the threshold of 80% oxygen saturation within 30 minutes. The study has shown that the optimal solution for the studied configuration was the radial air diffusers, which ensure an oxygen saturation of 80% in 20 minutes. The values increased when the air flow rate was increased.
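For context, the global mass transfer coefficient mentioned above is commonly estimated with the unsteady-state re-aeration method, fitting kLa from logged dissolved-oxygen data; the sketch below shows that standard fit on synthetic readings (the saturation value, time steps and DO values are assumptions, not the experimental data).

```python
# Standard unsteady-state estimate of the global oxygen mass transfer coefficient kLa:
# dC/dt = kLa*(Cs - C)  =>  ln((Cs - C0)/(Cs - C)) = kLa * t.
# DO readings below are synthetic, not the study's measurements.
import numpy as np

Cs = 9.0                                        # assumed saturation DO, mg/L
t = np.array([0, 5, 10, 15, 20, 25]) * 60.0     # time, s
C = Cs - (Cs - 1.0) * np.exp(-0.003 * t)        # synthetic logged DO, mg/L

y = np.log((Cs - C[0]) / (Cs - C))              # log oxygen deficit
kLa = np.polyfit(t, y, 1)[0]                    # slope of the deficit line, 1/s
print(f"kLa = {kLa * 3600:.1f} 1/h")            # about 10.8 1/h for the synthetic data
```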

Thermodynamic Approach of Lanthanide-Iron Double Oxides Formation

The standard Gibbs energy of formation, ΔGfor(298.15), of lanthanide-iron double oxides with the garnet-type crystal structure R3Fe5O12 (RIG, where R is a rare earth ion) from the constituent oxides is evaluated. The calculation is based on the standard entropies S298.15 and standard enthalpies of formation ΔH298.15 of the compounds involved in the garnet synthesis. The Gibbs energy of formation is presented as a function of temperature, ΔGfor(T), for the range 300-1600 K. The necessary starting thermodynamic data were obtained from calorimetric studies of the heat capacity as a function of temperature and by using a semi-empirical method for the calculation of ΔH298.15 of formation. The thermodynamic functions at standard temperature (enthalpy, entropy and Gibbs energy) are recommended as reference data for technological evaluations. Across the structural series of rare earth iron garnets, the correlation between the thermodynamic properties and the characteristics of the lanthanide ions is elucidated.
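As a first-order illustration of the kind of temperature function reported, assuming the reaction enthalpy and entropy are only weakly temperature dependent over 300-1600 K (the study's treatment, based on measured heat capacities, is more complete):

```latex
% Garnet formation from the constituent oxides and the first-order ΔG(T) approximation:
\[
  \tfrac{3}{2}\,\mathrm{R_2O_3} + \tfrac{5}{2}\,\mathrm{Fe_2O_3} \rightarrow \mathrm{R_3Fe_5O_{12}},
  \qquad
  \Delta G_{\mathrm{for}}(T) \approx \Delta H_{298.15} - T\,\Delta S_{298.15},
\]
\[
  \Delta S_{298.15} = S_{298.15}\!\left(\mathrm{R_3Fe_5O_{12}}\right)
    - \tfrac{3}{2}\,S_{298.15}\!\left(\mathrm{R_2O_3}\right)
    - \tfrac{5}{2}\,S_{298.15}\!\left(\mathrm{Fe_2O_3}\right).
\]
```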

Young’s Modulus Variability: Influence on Masonry Vault Behavior

This paper presents a methodology for the probabilistic assessment of bearing capacity and the prediction of the failure mechanism of masonry vaults at the ultimate state, taking into account the natural variability of the Young's modulus of the stones. First, the computation model is explained; the failure mode corresponds to the four-hinge mechanism. On this basis, the study of a vault composed of 16 segments is presented. The Young's modulus of the segments is treated as a random variable defined by a mean value and a coefficient of variation. A relationship linking the vault bearing capacity to the variation of the voussoirs' modulus is proposed. The most probable failure mechanisms, in addition to the one observed in the deterministic case, are identified for each variability level, together with their probability of occurrence. The results show that the mechanism observed in the deterministic case has a decreasing probability of occurrence as variability increases, while the number of other mechanisms and their probability of occurrence increase with the coefficient of variation of the Young's modulus. This means that if a significant variability of the Young's modulus of the segments is established, taking it into account in computations becomes mandatory, both for determining the vault bearing capacity and for predicting its failure mechanism.
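A minimal sketch of the probabilistic ingredient described above, assuming a lognormal distribution for the segment moduli (the distribution type, mean value and sample size are assumptions; the vault limit-analysis model itself is only indicated as a placeholder):

```python
# Sampling an independent Young's modulus for each of the 16 voussoirs with a given
# mean and coefficient of variation; each realization would then be fed to the
# deterministic vault model (not reproduced here) to collect capacities and mechanisms.
import numpy as np

def sample_moduli(n_voussoirs=16, mean_E=20e9, cov=0.2, n_samples=10_000, seed=0):
    """Lognormal samples with target mean and coefficient of variation (assumed law)."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(np.log(1.0 + cov**2))
    mu = np.log(mean_E) - 0.5 * sigma**2
    return rng.lognormal(mu, sigma, size=(n_samples, n_voussoirs))

E_samples = sample_moduli(cov=0.2)
# capacities, mechanisms = zip(*(vault_limit_analysis(E) for E in E_samples))
# where vault_limit_analysis() stands for the study's deterministic model (not shown).
print(E_samples.mean(), E_samples.std() / E_samples.mean())   # check mean and CoV
```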

Fuzzy Gauge Capability (Cg and Cgk) through Buckley Approach

Various aspects of Statistical Process Control (SPC) have been sketched in the fuzzy environment. However, Measurement System Analysis (MSA), a main branch of SPC, has rarely been investigated in the fuzzy setting. This procedure assesses the suitability of the data to be used in later stages or decisions of SPC. Therefore, this research focuses on some important measures of MSA and introduces them in the fuzzy environment through a new method. In this method, which is based on the Buckley approach, the imprecise and vague nature of real-world measurement is considered simultaneously. To this end, fuzzy versions of the gauge capability indices (Cg and Cgk) are introduced. The method is also clearly illustrated with an example.
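For reference, the crisp gauge capability indices that the fuzzy version builds upon are commonly defined from the gauge repeatability on a single reference part; the sketch below uses those common textbook definitions (20% of the tolerance over a 6-sigma gauge spread), which are assumptions here, and does not reproduce the Buckley alpha-cut construction.

```python
# Crisp Cg and Cgk for a gauge study on a single reference part, using commonly used
# definitions (assumed, not taken from the paper). Measurements are invented.
import numpy as np

def cg_cgk(measurements, reference, tolerance, k=0.2, spread=6.0):
    """Return (Cg, Cgk) for repeated readings of one reference part."""
    x = np.asarray(measurements, dtype=float)
    s_g = x.std(ddof=1)                                   # gauge repeatability
    cg = (k * tolerance) / (spread * s_g)
    cgk = (k / 2 * tolerance - abs(x.mean() - reference)) / (spread / 2 * s_g)
    return cg, cgk

meas = [10.02, 10.01, 9.99, 10.03, 10.00, 10.02, 9.98, 10.01]   # hypothetical readings
print(cg_cgk(meas, reference=10.00, tolerance=0.5))
```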

The Transient Reactive Power Regulation Capability of SVC for Large Scale WECS Connected to Distribution Networks

Recent interest in alternative and renewable energy systems has increased the share of such systems in the installed capacity for total energy production worldwide. In particular, Wind Energy Conversion Systems (WECS) have recently drawn significant attention among the possible alternative energy options. Despite the benefits of the worldwide penetration of WECS in terms of environmental protection, national energy independence, etc., there are significant problems to be solved for the grid connection of large-scale WECS. Reactive power regulation and the suppression of voltage variations can be cited as major issues to be considered in this regard. Thus, this paper evaluates the application of a Static VAr Compensator (SVC) unit for reactive power regulation and the operational continuity of WECS during a fault condition. The system is modeled on the IEEE 13 node test system, making it possible to evaluate the system performance with an overall grid simulation model close to real grid systems. The overall simulation model is developed in the MATLAB/Simulink/SimPowerSystems® environment, and the obtained results effectively meet the objectives of the study.

Banking Union: A New Step towards Completing the Economic and Monetary Union

This study analyzes the critical gaps in the architecture of European stability and the expected role of the banking union as the new important step towards completing the Economic and Monetary Union, which should enable the creation of a safe and sound financial sector for the euro area market. The single rulebook, together with the Single Supervisory Mechanism and the Single Resolution Mechanism as the two main pillars of the banking union, should provide a consistent application of common rules and administrative standards for the supervision, recovery and resolution of banks, with the final aim of replacing the former bail-out practice with a bail-in system through which possible future bank failures would be resolved with banks' own funds, i.e. with minimal costs for taxpayers and the real economy. In this way, the vicious circle between banks and sovereigns would be broken. It would also reduce the financial fragmentation recorded in the years of crisis as a result of divergent behavior in risk premiums, lending activity and interest rates between the core and the periphery. In addition, it should strengthen the effectiveness of the monetary transmission channels, in particular the credit channels and the flow of liquidity in the money market, which, due to the fragmentation of the common financial market, were significantly impaired during the crisis. However, contrary to all the positive expectations related to the future functioning of the banking union, the major findings of this study indicate that the characteristics of the economic system in which the banking union will operate should not be ignored. The euro area is an integration of strong and weak entities with large differences in economic development, wealth, banking system assets, growth rates and fiscal policy accountability. The analysis indicates that low and unbalanced economic growth remains a challenge for the maintenance of financial stability, and this problem cannot be resolved by single supervision alone. In many countries bank assets exceed GDP several times over, and large banks remain a matter of concern because of their systemic importance for individual countries and for the euro area as a whole. The creation of the Single Supervisory Mechanism and the Single Resolution Mechanism is a response to the European crisis, which particularly affected peripheral countries and created the associated loop between the banking crisis and the sovereign debt crisis, but which also affected banks' balance sheets in the core countries as a result of cross-border capital flows. The creation of the SSM and the SRM should prevent similar episodes from happening again and should also provide a new opportunity for strengthening the economic and financial systems of the peripheral countries. On the other hand, there is a potential threat that the future focus of the ECB, the resolution mechanism and other relevant institutions will be predominantly oriented towards large and significant banks (half of which operate in the core and most important euro area countries), so it remains questionable to what extent the common resolution funds will be used to rescue less important institutions. Recent geopolitical developments will be the best indicator of whether the established mechanisms are sufficient to maintain adequate financial stability in the euro area market.

Analysis of DNA from Fired Cartridge Casings

DNA analysis has been widely accepted as providing valuable evidence concerning the identity of the source of biological traces. Our work has shown that DNA samples can survive on cartridges even after firing. The study also raised the possibility of determining other information, such as the age of the donor. Such information may be invaluable in certain cases where spent cartridges from automatic weapons are left behind at the scene of a crime. In spite of the nature of touch evidence and the exposure to high chamber temperatures during shooting, we were still able to retrieve enough DNA for profile typing. In order to estimate the age of the contributor, DNA methylation levels of the retrieved DNA were analyzed using the EpiTect system. However, the results were not conclusive, due to the low amount of input DNA.

An Enhanced Fault-Tolerant Conference Key Agreement Protocol

Establishing secure communication among the participants of Internet conferences is very important. Before starting the conference, all the participants establish a common conference key to encrypt and decrypt the communicated messages, enabling them to exchange messages securely. Nevertheless, malicious participants may try to disrupt the key generation process, causing the legitimate participants to obtain different conference keys. In this article, we propose an improved conference key agreement protocol with fault-tolerant capability. The proposed scheme can filter out malicious participants at the beginning of the conference to ensure that all participants obtain the same conference key. Compared with other schemes, our scheme is more secure and efficient.

Kinetic Studies on Microbial Production of Tannase Using Redgram Husk

Tannase (tannin acyl hydrolase, E.C. 3.1.1.20) is an important hydrolytic enzyme with innumerable applications and industrial potential. In the present study, a kinetic model has been developed for the batch fermentation used for the production of tannase by A. flavus MTCC 3783. A maximum tannase activity of 143.30 U/ml was obtained at 96 hours under the optimum operating conditions of 35 °C, an initial pH of 5.5 and an inducer (tannic acid) concentration of 3% (w/v), for a fermentation period of 120 hours. The biomass concentration reached a maximum of 6.62 g/l at 96 hours, after which there was no further increase until the end of the fermentation. Various unstructured kinetic models were analyzed to simulate the experimental values of microbial growth, tannase activity and substrate concentration. The logistic model for microbial growth, the Luedeking-Piret model for tannase production and a substrate utilization kinetic model were capable of predicting the fermentation profile with high coefficients of determination (R2) of 0.980, 0.942 and 0.983 respectively. The results indicated that the unstructured models were able to describe the fermentation kinetics effectively.
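A sketch of the unstructured models named above, in their usual integrated forms (logistic growth and Luedeking-Piret product formation), fitted by nonlinear least squares; the time, biomass and activity values are invented placeholders rather than the reported measurements.

```python
# Logistic biomass growth X(t) and integrated Luedeking-Piret product formation P(t),
# fitted with scipy. Data points are hypothetical, not the study's measurements.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, X0, Xm, mu):
    """Integrated logistic growth."""
    return X0 * Xm * np.exp(mu * t) / (Xm - X0 + X0 * np.exp(mu * t))

def luedeking_piret(t, X0, Xm, mu, alpha, beta, P0=0.0):
    """Integrated Luedeking-Piret: dP/dt = alpha*dX/dt + beta*X with logistic X(t)."""
    X = logistic(t, X0, Xm, mu)
    growth_term = alpha * (X - X0)
    maintenance = beta * (Xm / mu) * np.log((Xm - X0 + X0 * np.exp(mu * t)) / Xm)
    return P0 + growth_term + maintenance

t = np.array([0, 24, 48, 72, 96, 120.0])        # h (hypothetical sampling times)
X = np.array([0.4, 1.5, 3.8, 5.7, 6.6, 6.6])     # g/L biomass (hypothetical)
P = np.array([0, 20, 65, 110, 143, 145.0])       # U/mL tannase (hypothetical)

(X0, Xm, mu), _ = curve_fit(logistic, t, X, p0=[0.4, 6.6, 0.05],
                            bounds=(0, [2.0, 20.0, 1.0]))
(alpha, beta), _ = curve_fit(lambda tt, a, b: luedeking_piret(tt, X0, Xm, mu, a, b),
                             t, P, p0=[20.0, 0.1])
print(f"mu={mu:.3f} 1/h, Xm={Xm:.2f} g/L, alpha={alpha:.1f}, beta={beta:.2f}")
```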

Back Bone Node Based Black Hole Detection Mechanism in Mobile Ad Hoc Networks

A Mobile Ad hoc Network (MANET) is a set of self-governing nodes which communicate through wireless links. The dynamic topology of MANETs makes routing a challenging task. Various routing protocols exist, but due to fundamental characteristics such as the open medium, changing topology, distributed collaboration and constrained capability, these protocols are prone to various types of security attacks. The black hole attack is one of them. In this attack, a malicious node presents itself as having the shortest path to the destination, even though that path does not exist. In this paper, we aim to develop a routing protocol for the detection and prevention of black hole attacks by modifying the AODV routing protocol. The proposed protocol is able to detect and prevent the black hole attack. Simulation is done using NS-2, which shows the improvement in network performance.

Associations between Game Users and Life Satisfaction: Role of Self-Esteem, Self-Efficacy and Social Capital

This study provides an integrated investigation of how life satisfaction is associated with Korean game users' psychological variables (self-esteem, game and life self-efficacy), social variables (bonding and bridging social capital), and demographic variables (age, gender). The data used for the empirical analysis came from a representative sample survey conducted in South Korea. Results show that self-esteem and game efficacy were important antecedents of the degree of users' life satisfaction. Both bonding and bridging social capital enhance the level of the users' life satisfaction. The importance of these perspectives, as well as their implications for game users and further associated research, is explored.

Starting Characteristic Analysis of LSPM for Pumping System Considering Demagnetization

This paper presents the design process of a high-performance 3-phase, 3.7 kW, 2-pole line-start permanent magnet synchronous motor for a pumping system. A method was proposed to study the starting torque characteristics, considering line starting with a high-inertia load. A d-q model including the cage was built to study the synchronization capability. Time-stepping finite element analysis was utilized to accurately predict the dynamic and transient performance, efficiency, starting current, speed curve, etc. Considering the load torque of the pump during the starting stage, the rotor bars were designed to minimize the demagnetization of the permanent magnets caused by the large starting current.

Preparation of Protective Coating Film on Metal Alloy

A novel chromium-free protective coating film based on zeolite was grown onto FeCrAlloy metal using an in-situ hydrothermal method. The zeolite film was obtained by an in-situ crystallization process, which is capable of coating large surfaces with complex shapes and in confined spaces. The zeolite coating offers the advantage of high mechanical and thermal stability. The physicochemical properties were investigated using X-ray diffraction (XRD), Scanning Electron Microscopy (SEM), Energy Dispersive X-ray Analysis (EDX) and Thermogravimetric Analysis (TGA). The transition from oxide-on-alloy wires to hydrothermally synthesized, uniformly zeolite-coated surfaces was followed using SEM and XRD. In addition, the robustness of the prepared coating was confirmed by subjecting it to thermal cycling (ambient to 550 °C).

On the Optimality Assessment of Nanoparticle Size Spectrometry and Its Association to the Entropy Concept

Particle size distribution, the most important characteristic of aerosols, is obtained through electrical characterization techniques. The dynamics of charged nanoparticles under the influence of an electric field in an Electrical Mobility Spectrometer (EMS) reveals the size distribution of these particles. The accuracy of this measurement is influenced by the flow conditions, geometry, electric field and particle charging process, and therefore by the transfer function (transfer matrix) of the instrument. In this work, a wire-cylinder corona charger was designed and the combined field-diffusion charging process of injected poly-disperse aerosol particles was numerically simulated as a prerequisite for the study of a multichannel EMS. The result, a cloud of particles with a non-uniform charge distribution, was introduced to the EMS. The flow pattern and electric field in the EMS were simulated using Computational Fluid Dynamics (CFD) to obtain particle trajectories in the device and thereby to calculate the signal reported by each electrometer. Based on the output signals (resulting from the bombardment of particles and the transfer of their charges as currents), we proposed a modification to the size of the detecting rings (which are connected to the electrometers) in order to evaluate particle size distributions more accurately. Based on the capability of the system to convey information about the size distribution of the injected particles, we proposed a benchmark for assessing the optimality of the design. This method applies the concept of Von Neumann entropy and borrows the definition of entropy from information theory (Shannon entropy) to measure optimality. Entropy, in the Shannon sense, is the 'average amount of information contained in an event, sample or character extracted from a data stream'. Evaluating the responses (signals) obtained for various configurations of detecting rings, the modified configuration gave the best predictions of the size distributions of the injected particles; it was also the one with the maximum amount of entropy. A reasonable consistency was also observed between the accuracy of the predictions and the entropy content of each configuration. In this method, entropy is extracted from the transfer matrix of the instrument for each configuration. Finally, various clouds of particles were introduced to the simulations and the predicted size distributions were compared to the exact size distributions.
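As a hedged illustration only, the sketch below shows one plausible way to attach entropy scores to an instrument transfer matrix: a Shannon entropy per detecting ring and a Von Neumann-style entropy from the matrix's normalized squared singular values. The study's exact construction may differ, and the matrix here is random placeholder data.

```python
# Plausible entropy scores for a transfer matrix (rings x size bins); the construction
# is an assumption for illustration, not the paper's exact method.
import numpy as np

def shannon_entropy(p, eps=1e-12):
    """Shannon entropy (bits) of a nonnegative vector after normalization."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return float(-(p * np.log2(p + eps)).sum())

def von_neumann_style_entropy(transfer_matrix):
    """Entropy of the normalized squared singular values of the transfer matrix."""
    s = np.linalg.svd(transfer_matrix, compute_uv=False) ** 2
    return shannon_entropy(s)

rng = np.random.default_rng(1)
T = np.abs(rng.normal(size=(8, 30)))      # 8 electrometer rings x 30 size bins (dummy)
print("per-ring Shannon entropies:", [round(shannon_entropy(row), 2) for row in T])
print("Von Neumann-style entropy: ", round(von_neumann_style_entropy(T), 2))
```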