Abstract: Facebook, Twitter, Weibo, and other social media, as well as major e-commerce sites, generate a massive amount of online text, which can be used to analyse people's opinions or sentiments for better decision-making. Sentiment analysis, especially fine-grained sentiment analysis, is therefore a very active research topic. In this paper, we survey various methods for fine-grained sentiment analysis, including traditional sentiment lexicon-based methods, machine learning-based methods, and deep learning-based methods for aspect/target/attribute-based sentiment analysis tasks. We also discuss their advantages and the problems worthy of careful study in the future.
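A lexicon-based scorer of the kind this survey covers can be sketched minimally in Python; the toy lexicon and the one-token negation scope are illustrative assumptions, not any specific method from the survey:

```python
def lexicon_sentiment(tokens, lexicon, negators=("not", "no", "never")):
    """Score a tokenized sentence with a sentiment lexicon, flipping the
    polarity of the word right after a negator (minimal illustrative sketch;
    real lexicon methods use larger lexicons and wider negation scopes)."""
    score, negate = 0.0, False
    for tok in tokens:
        if tok in negators:
            negate = True
            continue
        if tok in lexicon:
            # add the word's polarity, inverted if a negator preceded it
            score += -lexicon[tok] if negate else lexicon[tok]
        negate = False
    return score
```

For example, with the toy lexicon `{"good": 1.0, "terrible": -2.0}`, the phrase "not good" scores -1.0 because the negator flips the polarity of "good".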
Abstract: Nowadays, dialogue systems are increasingly becoming the way for humans to access many computer systems, allowing humans to interact with computers in natural language. A dialogue system consists of three parts: understanding what humans say in natural language, managing the dialogue, and generating responses in natural language. In this paper, we survey deep learning-based methods for dialogue management, response generation and dialogue evaluation. Specifically, these methods are based on neural networks, long short-term memory networks, deep reinforcement learning, pre-training and generative adversarial networks. We compare these methods and point out promising directions for further research.
Abstract: Sentiment analysis is a very active research topic. Every day, Facebook, Twitter, Weibo, and other social media, as well as major e-commerce websites, generate a massive amount of comments, which can be used to analyse people's opinions or emotions. Existing methods for sentiment analysis are based mainly on sentiment dictionaries, machine learning, and deep learning. The first two kinds of methods rely heavily on sentiment dictionaries or large amounts of labelled data, respectively; the third overcomes both problems, so in this paper we focus on it. Specifically, we survey various sentiment analysis methods based on convolutional neural networks, recurrent neural networks, long short-term memory, deep neural networks, deep belief networks, and memory networks. We compare their features, advantages, and disadvantages, and point out the main problems of these methods, which may be worthy of careful study in the future. Finally, we also examine the application of deep learning in multimodal sentiment analysis and aspect-level sentiment analysis.
Abstract: An essential task in the field of artificial intelligence is to allow computers to interact with people through natural language. Therefore, research on virtual assistants and dialogue systems has received widespread attention from industry and academia. Response generation plays a crucial role in dialogue systems, so to push forward the research on this topic, this paper surveys various methods for response generation. We sort these methods into three categories. The first includes finite state machine methods, framework methods, and instance methods. The second contains full-text indexing methods, ontology methods, large knowledge base methods, and some other methods. The third covers retrieval methods and generative methods. We also discuss some hybrid methods based on knowledge and deep learning. We compare their advantages and disadvantages and point out in which ways these studies can be improved further. Our discussion covers studies published in leading conferences such as IJCAI and AAAI in recent years.
Abstract: Natural language often conveys the emotions of speakers. Therefore, sentiment analysis of what people say is prevalent in the field of natural language processing and has great application value in many practical problems. To help people understand this application value, in this paper we survey various applications of sentiment analysis, including those in online and offline business as well as other types of applications. In particular, we give some application examples from intelligent customer service systems in China. Besides, we compare the applications of sentiment analysis on Twitter, Weibo, Taobao and Facebook. Finally, we point out the challenges faced in applying sentiment analysis and the work that is worth studying in the future.
Abstract: This paper compares the multipath mitigation performance of code correlation reference waveform receivers with variable and fixed window width, for binary offset carrier (BOC) and multiplexed binary offset carrier (MBOC) signals typically used in global navigation satellite systems. In the variable window width method, the width is iteratively reduced until the distortion on the discriminator caused by multipath is eliminated. This distortion is measured as the Euclidean distance between the actual discriminator (obtained with the incoming signal) and the local discriminator (generated with a local copy of the signal). The variable window width has shown better performance than the fixed window width. In particular, the former yields zero error for all delays for the BOC and MBOC signals considered, while the latter gives rather large nonzero errors for small delays in all cases. Due to its computational simplicity, the variable window width method is well suited for implementation in low-cost receivers.
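The iterative window reduction described above can be sketched as a simple loop; `discriminators(width)` is a hypothetical callback returning the actual and local discriminator curves, and the shrink factor, tolerance and minimum width are assumptions, since the abstract gives no parameter values:

```python
import numpy as np

def distortion(actual, local):
    # Euclidean distance between the actual and locally generated discriminators
    return float(np.linalg.norm(np.asarray(actual) - np.asarray(local)))

def adapt_window_width(width, discriminators, tol=1e-3, shrink=0.9, min_width=0.05):
    """Iteratively shrink the correlator window until the multipath-induced
    discriminator distortion vanishes (sketch of the iteration only; the
    discriminator computation itself is receiver-specific)."""
    while width > min_width:
        actual, local = discriminators(width)
        if distortion(actual, local) <= tol:
            break                     # distortion eliminated at this width
        width *= shrink               # reduce the window and try again
    return width
```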
Abstract: Frequency diverse array (FDA) beamforming is a technology developed in recent years, and its antenna pattern has a unique angle-range-dependent characteristic. However, to form point-to-point interference in a concentrated area, the beam is required to have strong concentration, high resolution and a low sidelobe level. In order to eliminate the angle-range coupling of the traditional FDA and to make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset, which improves the array structure and frequency offset of the traditional FDA. Simulation results show that the beam pattern of this array can form a dot-shaped beam with more concentrated energy, with improved resolution and sidelobe level performance. However, in traditional adaptive beamforming algorithms, the covariance matrix of the signal is estimated from finite-snapshot data. When the number of snapshots is limited, the covariance matrix is poorly estimated; the estimation error causes beam distortion, so that the output pattern cannot form a dot-shaped beam, and it also leads to main lobe deviation and a high sidelobe level. To address these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then, the eigenvalue decomposition of the covariance matrix is performed to obtain the interference subspace, the noise subspace and the diagonal matrix of the corresponding eigenvalues. Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reducing their divergence and improving the performance of beamforming. Theoretical analysis and simulation results show that the proposed algorithm enables the multi-carrier FDA to form a dot-shaped beam under limited snapshots, reduces the sidelobe level, and improves the robustness of beamforming.
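The eigenvalue-correction steps above can be sketched as follows; the exact correction rule, the correction index `p`, and the assumed one-dimensional interference subspace are our assumptions, since the abstract does not give the formulas:

```python
import numpy as np

def corrected_lcmv_weights(snapshots, steering, p=0.5, n_interf=1):
    """Minimum-variance distortionless weights computed from a sample
    covariance whose small noise-subspace eigenvalues are exponentially
    compressed (sketch; `p` and the correction rule are hypothetical)."""
    n, k = snapshots.shape
    R = snapshots @ snapshots.conj().T / k            # sample covariance
    eigval, V = np.linalg.eigh(R)                     # eigenvalues, ascending
    noise = eigval[:-n_interf]                        # small (noise) eigenvalues
    # exponential correction: compress the spread of the noise eigenvalues
    noise = noise.mean() * (noise / noise.mean()) ** p
    eigval = np.concatenate([noise, eigval[-n_interf:]])
    R_corr = (V * eigval) @ V.conj().T                # rebuild corrected covariance
    Ri_a = np.linalg.solve(R_corr, steering)
    return Ri_a / (steering.conj().T @ Ri_a)          # w = R^-1 a / (a^H R^-1 a)
```

By construction the weights keep the distortionless response in the steering direction (w^H a = 1); the correction only changes how strongly the limited-snapshot noise eigenvalues disturb the pattern.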
Abstract: The application of magnetocardiography (MCG) signals to detect cardiac electrical function is a new technology developed in recent years. The MCG signal is detected with Superconducting Quantum Interference Devices (SQUIDs) and has considerable advantages over electrocardiography (ECG). However, the MCG signal is buried in noise and difficult to extract, which is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is used to denoise the MCG signal. This approach transforms the denoising problem into a minimization problem, and the majorization-minimization algorithm is applied to solve it iteratively. However, the traditional TV regularization method tends to cause a staircase effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising MCG signals is proposed to improve the denoising precision. The improvement consists of three parts. First, higher-order TV is applied to reduce the staircase effect, with the second-derivative matrix substituted for the first-order one. Then, the positions of the non-zero elements in the second-derivative matrix are determined from the peak positions detected by a detection window. Finally, adaptive constraint parameters are defined to eliminate noise while preserving signal peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.
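The baseline that the paper improves on can be sketched with a standard majorization-minimization iteration for first-order TV denoising (a generic sketch of the textbook method, not the paper's higher-order adaptive variant; the regularization weight `lam` is illustrative):

```python
import numpy as np

def tv_denoise_mm(y, lam=1.0, iters=50, eps=1e-10):
    """First-order TV denoising, min 0.5||y-x||^2 + lam*||Dx||_1, solved by
    majorization-minimization: each step solves a reweighted linear system
    x = y - D^T ((1/lam)*diag(|Dx|) + D D^T)^{-1} D y."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)           # first-difference operator, (n-1) x n
    DDt = D @ D.T
    x = y.copy()
    for _ in range(iters):
        Lam = np.diag(np.abs(D @ x) + eps)   # majorizer weights from current |Dx|
        x = y - D.T @ np.linalg.solve(Lam / lam + DDt, D @ y)
    return x
```

Dense matrices keep the sketch short; a practical implementation would exploit the banded structure of D D^T.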
Abstract: Software-defined networking (SDN) provides a solution for a scalable network framework with decoupled control and data planes. However, this architecture also invites a particular distributed denial-of-service (DDoS) attack that can affect or even overwhelm the SDN network. The DDoS attack detection problem has to date been researched mostly as an entropy comparison problem. However, this formulation does not fully exploit SDN's capabilities, and its results are not accurate. In this paper, we propose a DDoS attack detection method that interprets DDoS detection as a signature matching problem, formulated as an Earth Mover's Distance (EMD) model. Considering feasibility and accuracy, we further propose to define the cost function of EMD as a generalized Kullback-Leibler divergence. Simulation results show that our proposed method can detect DDoS attacks by comparing EMD values with those computed in the attack-free case. Moreover, our method can significantly increase the true positive rate of detection.
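The detection statistic can be sketched as a transportation linear program with a generalized KL ground cost; applying the scalar generalized KL divergence to pairs of bin masses is our illustrative reading of the cost definition, not necessarily the paper's exact formulation:

```python
import numpy as np
from scipy.optimize import linprog

def gkl(a, b, eps=1e-12):
    # generalized Kullback-Leibler divergence between two nonnegative scalars
    return a * np.log((a + eps) / (b + eps)) - a + b

def emd_gkl(p, q):
    """EMD between two traffic histograms with a generalized KL ground cost,
    solved as a transportation LP: minimize sum f_ij * c_ij subject to the
    flow f matching the source masses p and sink masses q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    n, m = len(p), len(q)
    cost = np.array([[gkl(p[i], q[j]) for j in range(m)] for i in range(n)])
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1   # total flow out of source bin i
    for j in range(m):
        A_eq[n + j, j::m] = 1            # total flow into sink bin j
    b_eq = np.concatenate([p, q])
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, method="highs")
    return res.fun
```

Comparing `emd_gkl` between the current histogram and an attack-free baseline gives the kind of threshold test the abstract describes: identical histograms yield (near) zero distance, while a shifted traffic signature yields a clearly positive one.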
Abstract: With the rapid development of information technology, project management has recently gained more and more attention. Based on CDIO, this paper proposes some teaching reform ideas for a software project management curriculum. We first change from a teacher-centered classroom to a student-centered one, and adopt project-driven teaching, scenario animation shows, teaching rhythms, case studies and team work practice to improve students' enthusiasm for learning. Results show that these attempts have been well received and very effective; moreover, students prefer learning with this curriculum more than before the reform.
Abstract: Mass customization increases the difficulty of production line layout planning. The material distribution process for a large variety of parts is very complex, which greatly increases the cost of material handling and logistics. In response to this problem, this paper presents an approach to production line layout planning based on complexity measurement. Firstly, by analyzing the influencing factors of equipment layout, a complexity model of the production line is established using information entropy theory. Then, the cost of part logistics is derived considering the different varieties of parts. Furthermore, an optimization function with two objectives, the lowest cost and the least configuration complexity, is built. Finally, the validity of the function is verified in a case study. The results show that the proposed approach can find the layout scheme with the lowest logistics cost and the least complexity. Optimized production line layout planning can effectively improve production efficiency and equipment utilization with the lowest cost and complexity.
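The entropy-based complexity measure and the two-objective function can be sketched as follows; the use of plain Shannon entropy over layout-state probabilities and the equal objective weights are simplifying assumptions, not the paper's exact model:

```python
import math

def entropy_complexity(state_probs):
    """Information-entropy complexity of a production line configuration,
    given the probabilities of its layout states (Shannon entropy, in bits)."""
    return -sum(p * math.log2(p) for p in state_probs if p > 0)

def layout_objective(logistics_cost, state_probs, w_cost=0.5, w_cx=0.5):
    """Scalarized two-objective function: weighted sum of logistics cost and
    configuration complexity (the weights are hypothetical)."""
    return w_cost * logistics_cost + w_cx * entropy_complexity(state_probs)
```

A uniform distribution over states maximizes the entropy term, matching the intuition that a layout whose material flows are equally likely in many configurations is the hardest to manage.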
Abstract: The performance of tightening equipment declines over the course of operation in a manufacturing system, manifested mainly as increasing randomness and discretization of the tightening performance. To evaluate the degradation tendency of the tightening performance accurately, a complexity measurement approach based on Kolmogorov entropy is presented. First, the states of the performance index are divided to calibrate the degree of discreteness. Then, a complexity measurement model based on Kolmogorov entropy is built, which describes the performance degradation tendency of the tightening equipment quantitatively. Finally, a case study is used to verify the efficiency and validity of the approach. The results show that the presented complexity measurement can effectively evaluate the degradation tendency of the tightening equipment, and can provide a theoretical basis for preventive maintenance and life prediction of equipment.
Abstract: Text similarity measurement is a fundamental issue in many textual applications such as document clustering, classification, summarization and question answering. However, prevailing approaches based on the Vector Space Model (VSM) suffer more or less from the limitation of Bag of Words (BOW), which ignores the semantic relationships among words. Enriching document representation with background knowledge from Wikipedia has proven to be an effective way to solve this problem, but most existing methods still cannot avoid similar flaws of BOW in a new vector space. In this paper, we propose a novel text similarity measurement which goes beyond VSM and can find semantic affinity between documents. Specifically, it is a unified graph model that exploits Wikipedia as background knowledge and synthesizes both document representation and similarity computation. Experimental results on two different datasets show that our approach significantly improves on VSM-based methods in both text clustering and classification.
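The BOW limitation the paper addresses is easy to demonstrate with a minimal sketch: under a plain vector space model with cosine similarity, near-paraphrases built from synonyms share almost no dimensions.

```python
import math
from collections import Counter

def cosine_bow(doc_a, doc_b):
    """Cosine similarity under a plain Bag-of-Words model (whitespace
    tokenization, raw term counts) - the baseline whose flaw motivates
    knowledge-enriched representations."""
    va, vb = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

For example, "car" and "automobile" score 0.0 despite being synonyms, while "the car is fast" and "the automobile is quick" score only on the shared stopwords; linking both terms to the same Wikipedia concept is exactly what repairs this.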
Abstract: Socio-economic variables strongly influence transportation demand. Analyses with discrete choice models consider socio-economic variables to study travelers' mode choice and demand. However, calibrating a discrete choice model requires extensive questionnaire surveys. Therefore, an aggregate model is proposed instead. Historical data on passenger volumes for high-speed rail and domestic civil aviation are employed to calibrate and validate the model. In this study, models with different socio-economic variables, namely oil price, GDP per capita, CPI and economic growth rate, are compared. The results show that the model with oil price performs better than the models with the other socio-economic variables.
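A share model of this kind can be sketched as a binary logit in the oil price; the functional form and the coefficients `beta0`, `beta1` are hypothetical illustrations, not the paper's calibrated aggregate model:

```python
import math

def rail_share(oil_price, beta0=1.0, beta1=0.05):
    """Binary logit share of high-speed rail versus domestic aviation as a
    function of oil price (illustrative sketch; coefficients are assumed)."""
    v_rail = beta0 + beta1 * oil_price   # rail utility rises with oil price
    v_air = 0.0                          # aviation as the reference alternative
    return math.exp(v_rail) / (math.exp(v_rail) + math.exp(v_air))
```

Calibration would then amount to fitting `beta0` and `beta1` so that predicted shares match the historical rail/aviation passenger volumes.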
Abstract: Company mergers and acquisitions reached a peak in the twenty-first century and have become one of the main competitive strategies for external growth. In general, it is believed that mergers and acquisitions can create synergies; however, they require complete integration of information technology systems and services, especially in the banking industry. Much research has focused on performance evaluation, shareholder equity allocation, or the increase in company market value after a merger and acquisition, whereas few scholars have focused on information system integration after the merger and acquisition. This study examines the role of information systems after a merger and acquisition, explaining the benefits of information system integration using a merger and acquisition case in the banking industry as an example. In addition, we discuss factors that affect the performance of information system integration, and utilize system dynamics to interpret the relationships among the factors that affect information system integration performance in the banking industry after a merger and acquisition.
Abstract: A non-stationary trend in R-R interval series is considered a main factor that can strongly influence the evaluation of spectral analysis, so it is suggested that trends be removed in order to obtain reliable results. In this study, three detrending methods, the smoothness priors approach, the wavelet method and empirical mode decomposition, were compared on artificial R-R interval series with four types of simulated trends. The Lomb-Scargle periodogram was used for spectral analysis of the R-R interval series. Results indicated that the wavelet method showed a better overall performance than the other two methods, and was more time-saving as well. Therefore, it was selected for spectral analysis of real R-R interval series from thirty-seven healthy subjects. Significant decreases (19.94±5.87% in the low frequency band and 18.97±5.78% in the ratio (p
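The smoothness priors approach compared above can be sketched with its standard second-difference formulation: the trend is the solution of (I + lam^2 * D2'D2) t = z, where D2 is the second-difference operator and `lam` controls the effective cutoff (the value used here is illustrative):

```python
import numpy as np

def smoothness_priors_detrend(z, lam=10.0):
    """Smoothness priors detrending of an R-R interval series: solve a
    regularized least-squares problem for the low-frequency trend, then
    subtract it (dense sketch; practical code would use sparse matrices)."""
    n = len(z)
    I = np.eye(n)
    D2 = np.diff(I, n=2, axis=0)                      # (n-2) x n second differences
    trend = np.linalg.solve(I + lam**2 * D2.T @ D2, z)
    return z - trend, trend
```

A pure linear trend has zero second differences, so it is absorbed entirely into the trend estimate and the detrended output is (numerically) zero.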
Abstract: In illumination-variant face recognition, existing methods that extract the face albedo as a light-normalized image may lose extensive facial details because the lighting template is discarded. To improve on this, a novel approach to realistic facial texture reconstruction that combines the original image and the albedo image is proposed. First, light subspaces for different identities are established from given reference face images; then, by projecting the original and albedo images into each light subspace respectively, texture reference images with the corresponding lighting are reconstructed and two texture subspaces are formed. From the projections in the texture subspaces, a facial texture under normal lighting can be synthesized. Owing to the incorporation of the original image, facial details lost in the face albedo can be preserved. In addition, image partitioning is applied to improve the synthesis performance. Experiments on the Yale B and CMU PIE databases demonstrate that this algorithm outperforms the others both in image representation and in face recognition.
Abstract: Photoplethysmography is a simple measurement of the variation in blood volume in tissue. It detects the pulse signal of the heart beat as well as the low-frequency signal of vasoconstriction and vasodilation. Transmission-type measurement is limited to a few specific positions, such as the index finger, that have a short path length for light, whereas reflectance-type measurement can be conveniently applied on most parts of the body surface. This study analyzed the factors that determine the quality of the reflectance photoplethysmograph signal, including the emitter-detector distance, wavelength, light intensity, and the optical properties of skin tissue. Light-emitting diodes (LEDs) with four different visible wavelengths were used as the light emitters, and a phototransistor was used as the light detector. A micro translation stage adjusted the emitter-detector distance from 2 mm to 15 mm. The reflective photoplethysmograph signals were measured at different sites. The optimal emitter-detector distance was chosen to give a large dynamic range for low-frequency drifting without signal saturation and a high perfusion index. Among the four wavelengths, yellowish-green (571 nm) light with a proper emitter-detector distance of 2 mm was the most suitable for obtaining a steady and reliable reflectance photoplethysmograph signal.
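The perfusion index used above to choose the optimal distance can be sketched with its common AC/DC definition; using the peak-to-peak amplitude for the AC component and the mean for the DC component is a common simplification, not necessarily this study's exact computation:

```python
import numpy as np

def perfusion_index(ppg):
    """Perfusion index of a PPG segment: ratio of the pulsatile (AC)
    component to the steady (DC) component, expressed in percent."""
    ac = np.max(ppg) - np.min(ppg)   # peak-to-peak pulsatile amplitude
    dc = np.mean(ppg)                # baseline (non-pulsatile) level
    return 100.0 * ac / dc
```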
Abstract: Re-entrant scheduling is an important search problem with many constraints in the flow shop. In the literature, a number of approaches have been investigated, from exact methods to meta-heuristics. This paper presents a genetic algorithm that encodes the problem as multi-level chromosomes to reflect the dependent relationship between re-entrant possibility and resource consumption. The novel encoding preserves the intact information of the data and speeds up convergence to near-optimal solutions. To test the effectiveness of the method, it has been applied to the resource-constrained re-entrant flow shop scheduling problem. Computational results show that the proposed GA outperforms a simulated annealing algorithm in terms of makespan.
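A plain permutation GA for flow shop makespan can be sketched as follows; this is a generic baseline with order crossover and swap mutation, not the paper's multi-level re-entrant encoding, and all GA parameters are illustrative:

```python
import random

def makespan(perm, proc):
    """Makespan of a permutation flow shop; proc[m][j] is the processing
    time of job j on machine m."""
    m = len(proc)
    comp = [0.0] * m
    for j in perm:
        comp[0] += proc[0][j]
        for k in range(1, m):
            comp[k] = max(comp[k], comp[k - 1]) + proc[k][j]
    return comp[-1]

def order_crossover(p1, p2, rng):
    # OX: copy a random slice from p1, fill remaining slots in p2's order
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = p1[a:b + 1]
    fill = [j for j in p2 if j not in child[a:b + 1]]
    idx = 0
    for i in range(n):
        if child[i] is None:
            child[i] = fill[idx]
            idx += 1
    return child

def ga_flowshop(proc, pop_size=30, gens=60, seed=0):
    rng = random.Random(seed)
    n = len(proc[0])
    # seed with the identity order plus random permutations
    pop = [list(range(n))] + [rng.sample(range(n), n) for _ in range(pop_size - 1)]
    for _ in range(gens):
        pop.sort(key=lambda p: makespan(p, proc))
        elite = pop[:pop_size // 2]          # elitism keeps the best found so far
        children = []
        while len(elite) + len(children) < pop_size:
            c = order_crossover(rng.choice(elite), rng.choice(elite), rng)
            if rng.random() < 0.2:           # swap mutation
                i, j = rng.sample(range(n), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = elite + children
    return min(pop, key=lambda p: makespan(p, proc))
```

Because the elite half is carried over each generation, the best makespan is monotonically non-increasing over generations.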
Abstract: In this paper, we propose a novel method to acquire the region of interest (ROI) of an unsupervised, touch-less palmprint captured from a web camera in real time. We use the Viola-Jones approach and a skin model to locate the target area in real time. Then, an innovative coarse-to-fine approach to detect the key points on the hand is described. A new algorithm is used to find candidate key points coarsely and quickly; in the fine stage, the hand key points are verified with the shape context descriptor. To make the method more comfortable for the user, it can process hand images with different poses, even when the hand is closed. Experiments show promising results using the proposed method under various conditions.
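A skin model of the kind mentioned can be sketched with a classic explicit RGB skin-color rule; this particular rule is a common stand-in from the literature, since the abstract does not specify which skin model is used:

```python
def is_skin_rgb(r, g, b):
    """Classify a pixel as skin with a classic explicit RGB rule for uniform
    daylight illumination (a widely used heuristic, assumed here for
    illustration; values are 0-255 channel intensities)."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)
```

Applying such a per-pixel test to the Viola-Jones candidate region gives a fast binary skin mask from which the hand contour and key points can then be extracted.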