Exploring the Destination Image of Mainland Chinese Tourists to Taiwan through Word-of-Mouth on the Web

After direct flights from Mainland China to Taiwan were permitted, the number of Chinese tourists grew from 0.19 million in 2008 to 2 million in 2011, according to Tourism Bureau statistics. Mainland China has become the main source market for Taiwan's developing tourism industry. The Taiwanese government should learn more about the comments Chinese tourists make about Taiwan in order to market Taiwan tourism properly and enhance the overall quality of tourism. To understand Chinese visitors' comments, this study adopts content analysis of electronic word-of-mouth on the Web. The study collects 375 blog articles written by Chinese tourists on Ctrip.com between 2009 and 2011 as its database. Through qualitative data analysis, the travel destination image is divided into seven dimensions: scenic spots, shopping, food and beverages, accommodation, transportation, festivals, and recreation activities. Finally, this study identifies both positive and negative images along these seven dimensions and proposes practical managerial implications, providing marketing strategies and suggestions for the travel agency industry.

Integrating Low and High Level Object Recognition Steps

In pattern recognition applications, low-level segmentation and high-level object recognition are generally treated as two separate steps. This paper presents a method that bridges the gap between low-level and high-level object recognition. It is based on a Bayesian network representation and a network propagation algorithm. At the low level it uses a hierarchical structure of quadratic spline wavelet image bases. The method is demonstrated on a simple circuit-diagram component identification problem.

A Novel Approach for Coin Identification using Eigenvalues of Covariance Matrix, Hough Transform and Raster Scan Algorithms

In this paper we present a new method for coin identification. The proposed method adopts a hybrid scheme using the eigenvalues of the covariance matrix, the Circular Hough Transform (CHT), and Bresenham's circle algorithm. The statistical and geometrical properties of the small and large eigenvalues of the covariance matrix of a set of edge pixels over a connected region of support are exploited for circular object detection. A sparse matrix technique is used to perform the CHT; since sparse matrices discard zero elements and store only a small number of non-zero elements, they save storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The exact positions of the circumference pixels are identified using a raster scan algorithm that exploits geometrical symmetry. After finding circular objects, the proposed method uses the texture on the surface of the coins, characterized by textons, the fundamental micro-structures of generic natural images, which are unique to each coin. The method has been tested on several real-world images containing both coin and non-coin objects, and its performance is also evaluated in terms of noise-withstanding capability.
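
The abstract does not include code; as a rough sketch of how the eigenvalues of the covariance matrix of edge pixels can signal a circular region (the function name, tolerance, and synthetic test are ours, not the paper's):

```python
import numpy as np

def is_circular(edge_points, ratio_tol=0.1):
    """Heuristic circularity test on a connected set of edge pixels.

    For points spread uniformly on a circle of radius r, the covariance
    matrix of their coordinates has two nearly equal eigenvalues of about
    r**2 / 2, so the small/large eigenvalue ratio is close to 1.
    """
    pts = np.asarray(edge_points, dtype=float)   # shape (N, 2)
    cov = np.cov(pts, rowvar=False)
    small, large = np.linalg.eigvalsh(cov)       # ascending order
    if large <= 0:
        return False, 0.0
    radius_est = np.sqrt(2.0 * large)            # from var = r**2 / 2
    return small / large > 1.0 - ratio_tol, radius_est

# Example: points on a circle of radius 20 centred at (50, 50)
theta = np.linspace(0, 2 * np.pi, 200)
circle = np.stack([50 + 20 * np.cos(theta), 50 + 20 * np.sin(theta)], axis=1)
print(is_circular(circle))                       # (True, ~20.0)
```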

Generalized Method for Estimating Best-Fit Vertical Alignments for Profile Data

When the profile information of an existing road is missing or not up to date and the parameters of the vertical alignment are needed for engineering analysis, the engineer has to recreate the geometric design features of the road alignment from collected profile data. The profile data may be collected using traditional surveying methods, global positioning systems, or digital imagery. This paper develops a method that estimates the parameters of the geometric features that best characterize existing vertical alignments in terms of tangents and curves, which may be symmetrical, asymmetrical, reverse, or complex vertical curves. The method is implemented using an Excel-based optimization that minimizes the differences between the observed profile and the profiles estimated from the vertical curve equations. It uses a 'wireframe' representation of the profile that makes the proposed method applicable to all types of vertical curves. A secondary contribution of this paper is to introduce the properties of the equal-arc asymmetrical curve that has recently been developed in the highway geometric design field.
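
As a hedged illustration of the optimization idea, the sketch below fits a symmetric parabolic vertical curve to synthetic profile data by minimizing the sum of squared residuals, substituting scipy's Nelder-Mead for the paper's Excel-based solver; the model form and all parameter values are ours, and asymmetrical, reverse, and complex curves would need richer models:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x_obs = np.linspace(0, 300, 31)                  # stations (m), curve at x = 0
y_obs = 100 + 0.02 * x_obs - 0.00008 * x_obs**2 + rng.normal(0, 0.02, x_obs.size)

def profile_model(params, x):
    """Entry elevation y0, entry grade g1, exit grade g2, curve length L;
    a symmetric parabolic curve joins the two tangent grades."""
    y0, g1, g2, L = params
    on_curve = y0 + g1 * x + (g2 - g1) / (2 * L) * x**2
    exit_tangent = y0 + (g1 + g2) * L / 2 + g2 * (x - L)
    return np.where(x <= L, on_curve, exit_tangent)

def sse(params):
    return np.sum((y_obs - profile_model(params, x_obs))**2)

res = minimize(sse, x0=[100.0, 0.02, -0.02, 250.0], method="Nelder-Mead")
print(res.x)                                     # fitted y0, g1, g2, L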

Color Image Segmentation and Multi-Level Thresholding by Maximization of Conditional Entropy

In this work, a novel approach to color image segmentation is discussed that uses higher-order entropy as a textural feature to determine thresholds over a two-dimensional image histogram. A similar approach is applied to achieve multi-level thresholding in both grayscale and color images. The paper discusses two methods of color image segmentation, with RGB as the standard processing space. The segmentation threshold is decided by maximizing the conditional entropy in the two-dimensional histogram of the color image, separated into the three grayscale images of R, G, and B. The features are first developed independently for the three (R, G, B) channels and then combined to obtain the color component segmentation. Considering local maxima instead of the global maximum of conditional entropy yields multiple thresholds for the same image, which forms the basis for multilevel thresholding.
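
A minimal sketch of the thresholding idea, assuming a 2-D histogram of pixel value versus 3x3 local mean and a simplified entropy-sum criterion in place of the paper's exact conditional-entropy formulation (to be run once per R, G, B channel):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def quadrant_entropy(block):
    """Shannon entropy of one quadrant of the normalized 2-D histogram."""
    w = block.sum()
    if w <= 0:
        return 0.0
    q = block[block > 0] / w
    return -np.sum(q * np.log(q))

def entropy_threshold_2d(gray):
    """Choose (s, t) on the 2-D histogram of (pixel value, 3x3 local mean)
    by maximizing the summed entropies of the two diagonal quadrants."""
    local = uniform_filter(gray.astype(float), size=3)
    hist, _, _ = np.histogram2d(gray.ravel(), local.ravel(),
                                bins=64, range=[[0, 256], [0, 256]])
    p = hist / hist.sum()
    best, best_st = -np.inf, (0, 0)
    for s in range(1, 64):
        for t in range(1, 64):
            score = quadrant_entropy(p[:s, :t]) + quadrant_entropy(p[s:, t:])
            if score > best:
                best, best_st = score, (s, t)
    return tuple(4 * v for v in best_st)         # map 64 bins back to 0..255

gray = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
print(entropy_threshold_2d(gray))
```

Keeping all local maxima of the score, rather than only the best (s, t), would give the multiple thresholds used for multilevel segmentation.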

A Quantum Algorithm of Constructing Image Histogram

The histogram plays an important statistical role in digital image processing. However, existing quantum image models are ill-suited to this kind of statistical processing because different gray scales are not distinguishable. In this paper, a novel quantum image representation model is first proposed in which pixels with different gray scales can be distinguished and operated on simultaneously. Based on the new model, a fast quantum algorithm for constructing the histogram of a quantum image is designed. Performance comparison reveals that the new quantum algorithm achieves an approximately quadratic speedup over its classical counterpart. The proposed quantum model and algorithm are significant for future research in quantum image processing.

View-Point Insensitive Human Pose Recognition using a Neural Network

This paper proposes a view-point insensitive human pose recognition system using a neural network. The recognition system consists of a silhouette image capturing module, a data-driven database, and a neural network. Our system has three advantages. First, it can capture multiple-view-point silhouette images of a 3D human model automatically; this automatic capture module reduces the time-consuming task of database construction. Second, we build a large feature database to provide view-point insensitivity in pose recognition. Third, we use a neural network to recognize human poses from multiple views, because every pose from each model has similar feature patterns even though the models differ in appearance and view-point. To construct the database, we create 3D human models using 3D modeling tools. The contour shape is used to convert each silhouette image into a 12-dimensional feature vector. This extraction is processed semi-automatically, with the benefit that capturing images in a real environment and converting them to silhouettes becomes unnecessary. We demonstrate the effectiveness of our approach with experiments in a virtual environment.
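
The paper's network architecture is not specified in the abstract; the following sketch simply trains a small multilayer perceptron on hypothetical 12-dimensional contour features, with synthetic stand-in data:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Stand-in data: 12-dimensional contour-shape feature vectors extracted
# from silhouettes rendered at many view-points, labelled by pose.
rng = np.random.default_rng(0)
n_poses, per_pose = 5, 200
X = np.vstack([rng.normal(loc=k, scale=0.5, size=(per_pose, 12))
               for k in range(n_poses)])
y = np.repeat(np.arange(n_poses), per_pose)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))   # training accuracy on the synthetic features
```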

A Way of Converting Color Images to Gray Scale Ones for Color-Blind Users - Reducing the Colors of the Tokyo Subway Map -

We propose a way of removing noise and reducing the number of colors contained in a JPEG image. The main purpose of this project is to convert color images to monochrome images for color-blind users. We treat crisp color images such as the Tokyo subway map, in which each color carries important information. For color-blind viewers, however, similar colors cannot be distinguished. If we convert those colors to distinct gray values, they become distinguishable.
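
A minimal sketch of the color-reduction idea, assuming simple palette quantization followed by an even spread over gray levels; the file name is hypothetical and the paper's noise-removal step is omitted:

```python
import numpy as np
from PIL import Image

def distinct_grays(img, n_colors=8):
    """Quantize the image to its n_colors dominant colors, then spread those
    colors evenly across the gray scale, so hues a color-blind viewer may
    confuse map to clearly different gray values."""
    pal = img.convert("RGB").quantize(colors=n_colors)
    idx = np.asarray(pal)                          # palette index per pixel
    grays = np.linspace(0, 255, n_colors).astype(np.uint8)
    return Image.fromarray(grays[idx], mode="L")

# gray = distinct_grays(Image.open("tokyo_subway_map.jpg"))  # hypothetical file
```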

Satellite Sensing for Evaluation of an Irrigation System in Cotton - Wheat Zone

Efficient utilization of existing water is a pressing need for Pakistan, owing to its rising population, the reduction in present storage capacity, and poor canal delivery efficiency of 30 to 40%. A study to evaluate an irrigation system in the cotton-wheat zone of Pakistan after watercourse lining was conducted. The system is evaluated on the basis of cropping pattern and salinity. The study employed an index-based approach using a geographic information system together with field data. Satellite images from different years were used to examine the effective area. Several combinations of the ratios of signals received in different spectral bands were tried in developing this index; the near-infrared and thermal-IR spectral bands proved most effective, as this combination allowed easy detection of the salt-affected area and the cropping pattern of the study area. Results showed that 9.97% of the area was under salinity in 1992, 9.17% in 2000, and only 2.29% in 2005. Similarly, 45% of the area was under vegetation in 1992, improving to 56% in 2000 and 65% in 2005. On the basis of these results, the evaluation shows a 30% increase in performance after the watercourse improvement.
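
The exact band combination behind the index is not given in the abstract; the sketch below assumes a normalized-difference style ratio of the near-infrared and thermal-IR bands, with an illustrative threshold and synthetic rasters:

```python
import numpy as np

def salinity_index(nir, tir, eps=1e-6):
    """Normalized-difference style combination of near-infrared and thermal
    IR bands (the paper's exact formula is an assumption here)."""
    nir, tir = nir.astype(float), tir.astype(float)
    return (nir - tir) / (nir + tir + eps)

rng = np.random.default_rng(1)
nir = rng.uniform(0, 1, (100, 100))               # synthetic band rasters
tir = rng.uniform(0, 1, (100, 100))
idx = salinity_index(nir, tir)
salt_fraction = np.mean(idx < -0.2) * 100         # threshold is illustrative
print(f"{salt_fraction:.2f}% of pixels flagged as salt-affected")
```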

A Hybrid Distributed Vision System for Robot Localization

Localization is one of the critical issues in robot navigation. With an accurate estimate of the robot pose, robots can navigate the environment autonomously and efficiently. In this paper, a hybrid Distributed Vision System (DVS) for robot localization is presented. The approach integrates odometry data from the robot with images captured by overhead cameras installed in the environment, helping to reduce the chance of failed localization due to illumination effects, accumulated encoder errors, and low-quality range data. An odometry-based motion model is applied to predict robot poses, and robot images captured by the overhead cameras are then used to update the pose estimates with an HSV histogram-based measurement model. Experimental results show that the presented approach can localize robots in a global world coordinate system with localization errors within 100 mm.
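
A hedged sketch of the two models, assuming a standard Gaussian-noise odometry update and a histogram-intersection similarity for the HSV measurement; the noise levels, bin count, and the OpenCV-style 0-180 hue range are our assumptions:

```python
import numpy as np

def motion_update(pose, d, dtheta, sigma=(0.01, 0.005)):
    """Odometry-based prediction: advance distance d and turn dtheta,
    with additive Gaussian noise (noise levels are illustrative)."""
    x, y, th = pose
    d_n = d + np.random.normal(0, sigma[0])
    th_n = th + dtheta + np.random.normal(0, sigma[1])
    return np.array([x + d_n * np.cos(th_n), y + d_n * np.sin(th_n), th_n])

def hue_hist(patch_hue, bins=16):
    h, _ = np.histogram(patch_hue, bins=bins, range=(0, 180))
    return h / max(h.sum(), 1)

def measurement_weight(patch_hue, ref_hist):
    """Weight a pose hypothesis by the histogram-intersection similarity
    between the overhead-camera patch at that pose and the robot's
    reference hue histogram."""
    return np.minimum(hue_hist(patch_hue), ref_hist).sum()

pose = motion_update(np.array([0.0, 0.0, 0.0]), d=0.1, dtheta=0.05)
```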

Detection of Breast Cancer in the JPEG2000 Domain

Breast cancer detection techniques have been reported to aid radiologists in analyzing mammograms. We note that most techniques operate on uncompressed digital mammograms, yet mammogram images are large, necessitating compression to reduce storage and transmission requirements. In this paper, we present an algorithm for the detection of microcalcifications in the JPEG2000 domain. The algorithm is based on the statistical properties of the wavelet transform that the JPEG2000 coder employs. Simulations were carried out at different compression ratios. The sensitivity of the algorithm ranges from 92% with a false positive rate of 4.7 under lossless compression, down to 66% with a false positive rate of 2.1 under lossy compression at a ratio of 100:1.
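
The paper's statistical model is not spelled out in the abstract; as a rough stand-in, the sketch below flags outlier detail coefficients of a 9/7-style wavelet decomposition (pywt's 'bior4.4'), using a robust sigma estimate and a threshold rule of our choosing:

```python
import numpy as np
import pywt

def detect_microcalc(img, level=2, k=3.0):
    """Microcalcifications appear as small, bright, high-frequency spots,
    so flag wavelet detail coefficients that are statistical outliers
    (|c| > k * sigma). 'bior4.4' is the common pywt counterpart of the
    biorthogonal 9/7 wavelet used by JPEG2000."""
    coeffs = pywt.wavedec2(img.astype(float), "bior4.4", level=level)
    masks = []
    for details in coeffs[1:]:
        for band in details:                    # horizontal/vertical/diagonal
            sigma = np.median(np.abs(band)) / 0.6745   # robust noise estimate
            masks.append(np.abs(band) > k * sigma)
    return masks                                # per-subband outlier maps

img = np.random.rand(256, 256)                  # stand-in for a mammogram
print(sum(m.sum() for m in detect_microcalc(img)))
```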

The Grey Relational Analysis of the Influence Factors of Profit in Cartoon Characters' Merchandising Rights

Based on literature research, case studies, and a survey, this paper constructs a four-factor theoretical model for Chinese small and medium enterprises: cartoon characters' reputation, enterprise marketing and management capabilities, protection of the cartoon image, and the institutional environment. The real-time grey relational analysis in the empirical study shows that the greatest influence on current merchandising rights income is the friendliness of the institutional environment, followed by marketing and management capabilities, the investment in character image protection, and the cartoon characters' reputation. The time-delay grey relational analysis shows that the greatest influence on later merchandising rights profit is the cartoon characters' reputation, followed by the friendliness of the institutional environment, then marketing and management ability and the investment in character image protection.
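
The grey relational coefficient and grade follow a standard formulation; a minimal sketch with fabricated, pre-normalized series (the study's actual data are not reproduced here):

```python
import numpy as np

def grey_relational_grade(reference, series, zeta=0.5):
    """Grey relational coefficients and grade of one factor series against
    the reference series (both normalized beforehand); zeta is the usual
    distinguishing coefficient, 0.5."""
    delta = np.abs(reference - series)
    dmin, dmax = delta.min(), delta.max()
    coeff = (dmin + zeta * dmax) / (delta + zeta * dmax)
    return coeff.mean()

# Illustrative (fabricated) normalized series: profit vs. one factor
profit = np.array([0.2, 0.5, 0.7, 1.0])
factor = np.array([0.1, 0.4, 0.8, 0.9])
print(grey_relational_grade(profit, factor))
```

Ranking the grades of all four factor series against the profit series gives the ordering of influences reported above.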

The Control Vector Scheme for Design of Planar Primitive PH Curves

A PH curve can be constructed from given parameters, but it is not easy to imagine the shape of the curve from the parameter values. By contrast, a Bézier curve is constructed from its control polygon, and from the control polygon we can visualize the shape of the curve. In this paper, we use the hodograph of a Bézier curve to construct a PH curve by selecting some of the control vectors and generating the remaining control vectors so that the PH property holds.
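
A small sketch of the underlying property, using Farouki's complex representation in which a planar PH hodograph is the square of a complex polynomial; the paper's specific control-vector selection scheme is not reproduced:

```python
import numpy as np

def ph_cubic(w0, w1, n=200):
    """Planar PH cubic via the complex 'square' construction: choose a
    linear complex polynomial w(t) = w0*(1-t) + w1*t and set the hodograph
    r'(t) = w(t)**2, so |r'(t)| = |w(t)|**2 is a polynomial in t, i.e. the
    Pythagorean-hodograph property holds by construction."""
    t = np.linspace(0, 1, n)
    w = w0 * (1 - t) + w1 * t
    dr = w**2                                     # complex-valued hodograph
    dt = t[1] - t[0]
    # integrate the hodograph (trapezoidal rule) to recover curve points
    r = np.concatenate(([0], np.cumsum((dr[1:] + dr[:-1]) / 2) * dt))
    return r.real, r.imag

x, y = ph_cubic(1 + 0.5j, 0.5 + 1j)
```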

Correction of Infrared Data for Electrical Components on a Board

In this paper, a data correction algorithm is proposed for the case when the environmental air temperature varies. To correct the infrared data, the initial temperature, or the initial infrared image data, is used, so that a separate target source system is unnecessary. The temperature data obtained from an infrared detector depend nonlinearly on the surface temperature. To handle this nonlinearity, a Taylor series approach is adopted. It is shown that the proposed algorithm can reduce the influence of environmental temperature on the components on the board. The main advantage of this algorithm is that it uses only the initial temperatures of the components on the board, rather than a reference device such as a black-body source, to obtain reference temperatures.
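
A heavily simplified sketch of the correction idea, assuming a low-order Taylor expansion in the ambient-temperature shift with placeholder calibration coefficients (the paper's actual model and coefficients are not given in the abstract):

```python
import numpy as np

def correct_ir(measured, t_env, t_env0, a1=1.0, a2=0.0):
    """First-order (optionally second-order) Taylor correction around the
    initial condition: subtract the estimated effect of the ambient shift
    (t_env - t_env0). Coefficients a1, a2 would come from calibration
    against the initial image; values here are placeholders."""
    d_env = t_env - t_env0
    return measured - (a1 * d_env + a2 * d_env**2)

board0 = np.full((4, 4), 40.0)           # initial component temperatures (C)
board = board0 + 2.5                      # later reading, ambient rose 2.5 C
print(correct_ir(board, t_env=27.5, t_env0=25.0))
```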

Unsupervised Texture Classification and Segmentation

An unsupervised classification algorithm is derived by modeling observed data as a mixture of several mutually exclusive classes that are each described by linear combinations of independent non-Gaussian densities. The algorithm estimates the data density in each class by using parametric nonlinear functions that fit the non-Gaussian structure of the data. This improves classification accuracy compared with standard Gaussian mixture models. When applied to textures, the algorithm can learn basis functions for images that capture the statistically significant structure intrinsic to the images. We apply this technique to the problem of unsupervised texture classification and segmentation.
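
The mixture of non-Gaussian densities is not reproduced here; as a much-simplified stand-in, this sketch learns an ICA basis on image patches and clusters the (non-Gaussian) coefficients to obtain unsupervised texture classes:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
img = rng.uniform(0, 1, (128, 128))        # replace with a real texture image

# Learn an ICA basis on mean-removed patches, then cluster the coefficient
# magnitudes into texture classes.
patches = extract_patches_2d(img, (8, 8), max_patches=2000, random_state=0)
X = patches.reshape(len(patches), -1)
X = X - X.mean(axis=1, keepdims=True)

codes = FastICA(n_components=16, random_state=0).fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(np.abs(codes))
print(np.bincount(labels))
```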

Error Effects on SAR Image Resolution using Range Doppler Imaging Algorithm

Synthetic Aperture Radar (SAR) is a form of imaging radar that takes full advantage of the relative movement between the antenna and the target. By simultaneously processing the radar reflections collected over the antenna's movement via the Range Doppler Algorithm (RDA), the superior resolution of a theoretically wider antenna, termed the synthetic aperture, is obtained. SAR can therefore achieve high-resolution two-dimensional imagery of the ground surface. In addition, two filtering steps, in the range and azimuth directions, provide sufficiently accurate results. This paper develops a simulation in which realistic SAR images can be generated, and investigates the effect of velocity errors on the resulting image. Simulation results on image resolution under various velocity errors are presented. In practice, the algorithm often needs to be adjusted for particular datasets or applications.
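
A minimal sketch of the RDA's range-compression step, matched-filtering one echo line against the transmitted chirp in the frequency domain; all radar parameters are illustrative, not from the paper:

```python
import numpy as np

fs, T, K = 60e6, 5e-6, 4e12             # sample rate, pulse length, chirp rate
t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * K * (t - T / 2)**2)   # linear-FM replica

n = 2048
echo = np.zeros(n, dtype=complex)
echo[500:500 + t.size] = chirp          # one point target at range bin 500

# Matched filter = multiply by the conjugate chirp spectrum
spec = np.fft.fft(echo) * np.conj(np.fft.fft(chirp, n))
compressed = np.fft.ifft(spec)
print(np.argmax(np.abs(compressed)))    # peak near bin 500
```

Azimuth compression proceeds analogously along the other axis, and a velocity error mismatches the azimuth replica, smearing the focused peak.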

Shift Invariant Support Vector Machines Face Recognition System

In this paper, we present a new method for incorporating global shift invariance into support vector machines. Unlike other approaches, which incorporate a feature extraction stage, we first scale the image and then classify it using a modified support vector machine classifier. Shift invariance is achieved by replacing the dot products between patterns used by the SVM classifier with the maximum cross-correlation value between them. Unlike the usual approach, in which patterns are treated as vectors, in our approach patterns are treated as matrices (or images). Cross-correlation is computed using computationally efficient techniques such as the fast Fourier transform. The method has been tested on the ORL face database, and the tests indicate that it can improve the recognition rate of an SVM classifier.
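
A hedged sketch of the modified classifier: pairwise maximum cross-correlations (via FFT-based convolution) fill a precomputed kernel matrix for an SVM. Note that this similarity need not be positive semi-definite, and the toy data are ours:

```python
import numpy as np
from scipy.signal import fftconvolve
from sklearn.svm import SVC

def max_xcorr(a, b):
    """Maximum 2-D cross-correlation between two image patterns, computed
    as FFT-based convolution with a flipped kernel."""
    return fftconvolve(a, b[::-1, ::-1], mode="full").max()

def gram(A, B):
    """Pairwise max cross-correlations, used as a precomputed SVM kernel."""
    return np.array([[max_xcorr(a, b) for b in B] for a in A])

# Toy data: two classes of 16x16 patterns at random shifts
rng = np.random.default_rng(0)
def make(cls, n):
    out = []
    for _ in range(n):
        img = np.zeros((16, 16))
        r, c = rng.integers(2, 10, 2)
        if cls == 0:
            img[r:r + 4, c:c + 4] = 1            # shifted square
        else:
            img[r:r + 6, c:c + 2] = 1            # shifted bar
        out.append(img)
    return out

train = make(0, 20) + make(1, 20)
y = [0] * 20 + [1] * 20
clf = SVC(kernel="precomputed").fit(gram(train, train), y)
print(clf.score(gram(train, train), y))
```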

Discontinuous Galerkin Method for Total Variation Minimization on Inpainting Problem

This paper is concerned with the numerical minimization of energy functionals in BV(Ω) (the space of functions of bounded variation) involving total variation, for the one-dimensional gray-scale inpainting problem. Applications are shown using the finite element method and the discontinuous Galerkin method for total variation minimization. Numerical examples are included that compare the images recovered by the two methods.
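
A finite-difference gradient-descent sketch of 1-D TV inpainting with a smoothed absolute value, standing in for the FEM/DG discretizations the paper compares; the step size, smoothing parameter, and data weight are our choices:

```python
import numpy as np

def tv_inpaint_1d(f, mask, lam=10.0, eps=0.05, step=5e-3, iters=2000):
    """Minimize  lam/2 * sum_known (u - f)**2 + sum |u'|_eps  by gradient
    descent, where |x|_eps = sqrt(x**2 + eps**2) smooths the TV term."""
    u = np.where(mask, f, f[mask].mean())
    for _ in range(iters):
        du = np.diff(u)
        w = du / np.sqrt(du**2 + eps**2)          # derivative of smoothed |.|
        div = np.concatenate(([w[0]], np.diff(w), [-w[-1]]))
        grad = lam * mask * (u - f) - div
        u = u - step * grad
    return u

x = np.linspace(0, 1, 100)
f = (x > 0.5).astype(float)                       # step signal (sharp edge)
mask = np.ones(100, dtype=bool)
mask[40:60] = False                               # missing region to inpaint
u = tv_inpaint_1d(f, mask)
```

TV regularization is used here precisely because, unlike quadratic smoothing, it can recover the sharp edge inside the missing region.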

ROI Based Embedded Watermarking of Medical Images for Secured Communication in Telemedicine

Medical images require special safety and confidentiality, because critical judgments are made on the information they provide. Transmission of medical images via the internet or mobile phones demands strong security and copyright protection in telemedicine applications. Here, a highly secure and robust watermarking technique is proposed for the transmission of image data via the internet and mobile phones. The Region of Interest (ROI) and the Non-Region of Interest (RONI) of the medical image are separated, and only the RONI is used for watermark embedding. The technique achieves exact recovery of the watermark on standard 512x512 medical database images, giving a correlation factor equal to 1. The correlation factor under attacks such as noise addition, filtering, rotation, and compression ranges from 0.90 to 0.95. The PSNR with a weighting factor of 0.02 is up to 48.53 dB. The presented scheme is non-blind and embeds a 64x64 hospital logo.
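
A toy illustration of the ROI/RONI separation only, using plain LSB embedding, which is far less robust than the paper's scheme; the image and logo are random stand-ins:

```python
import numpy as np

def embed_roni_lsb(img, roni_mask, logo_bits):
    """Write watermark bits into the least significant bits of pixels
    outside the diagnostically important region, leaving the ROI
    untouched so clinical content is preserved exactly."""
    out = img.copy()
    flat_idx = np.flatnonzero(roni_mask.ravel())[:logo_bits.size]
    flat = out.ravel()
    flat[flat_idx] = (flat[flat_idx] & 0xFE) | logo_bits
    return out

img = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
roni = np.ones((512, 512), dtype=bool)
roni[128:384, 128:384] = False                   # central ROI left untouched
logo = np.random.randint(0, 2, 64 * 64).astype(np.uint8)   # 64x64 logo bits
wm = embed_roni_lsb(img, roni, logo)
```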

A Technique for Improving the Performance of Median Smoothers at the Corners Characterized by Low Order Polynomials

Median filters with larger windows offer greater smoothing and are more robust than median filters with smaller windows. However, larger median smoothers (median filters with larger windows) fail to track low-order polynomial trends in the signal. As a result, constant regions are produced at signal corners, leading to the loss of fine detail. In this paper, an algorithm is introduced that combines the ability of the 3-point median smoother to preserve low-order polynomial trends with the superior noise-filtering characteristics of the larger median smoother. The proposed algorithm (called the combiner algorithm in this paper) is evaluated on a test image corrupted with different types of noise, and the results obtained are included.
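
The combiner's exact blending rule is not given in the abstract; this sketch keeps the large-window output where it agrees with the 3-point median and falls back to the 3-point result near corners, using a disagreement mask of our design:

```python
import numpy as np
from scipy.signal import medfilt

def combiner(signal, big_window=9):
    """Combine the trend-preserving 3-point median with the stronger noise
    suppression of a large-window median: where the two outputs disagree
    markedly (signal corners), prefer the 3-point result."""
    small = medfilt(signal, 3)
    big = medfilt(signal, big_window)
    corner = np.abs(small - big) > np.std(signal - big)   # disagreement mask
    return np.where(corner, small, big)

x = np.linspace(0, 1, 200)
clean = np.abs(x - 0.5)                      # triangular signal with a corner
noisy = clean + np.random.normal(0, 0.05, x.size)
out = combiner(noisy)
```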