Abstract: With the increasing number of people reviewing products online in recent years, opinion-sharing websites have become the most important source of customers' opinions. Unfortunately, spammers generate and post fake reviews in order to promote or demote brands and mislead potential customers. Such reviews are notably destructive not only for potential customers but also for business owners and manufacturers. However, research in this area is not yet adequate, and many critical problems related to spam detection remain unsolved. To give newcomers to the domain a solid starting point, in this paper we have created a high-quality framework that provides a clear view of review spam-detection methods. In addition, this report contains a comprehensive collection of the detection metrics used in proposed spam-detection approaches. These metrics are highly applicable to the development of novel detection methods.
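To make the notion of a detection metric concrete, here is a minimal Python sketch of one widely used behavioral metric, a reviewer's average rating deviation; the function name, data layout, and exact form are illustrative assumptions rather than anything prescribed by the paper.

```python
from statistics import mean

def rating_deviation(reviewer_ratings, product_avg):
    """Average absolute deviation of one reviewer's ratings from each
    product's mean rating; large values suggest promoting or demoting
    behavior. (Illustrative metric; the survey catalogs many others.)

    reviewer_ratings: list of (product_id, stars) given by one reviewer
    product_avg:      dict mapping product_id -> mean stars overall
    """
    return mean(abs(stars - product_avg[p]) for p, stars in reviewer_ratings)

avg = {"phone": 3.2, "case": 4.1}                          # stand-in data
print(rating_deviation([("phone", 5), ("case", 5)], avg))  # ~1.35
```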
Abstract: Ambient Computing or Ambient Intelligence (AmI) is an emerging area in computer science that aims to create intelligently connected environments and the Internet of Things. In this paper, we propose a communication middleware architecture for AmI. This architecture addresses the problems of communication, networking, and application abstraction; other aspects (e.g., HCI and security) also exist within the general AmI framework. Within this middleware architecture, application developers can address HCI and security issues through the platform's extensibility features.
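Since the abstract describes the middleware only at a high level, the following Python sketch shows one plausible shape for the communication abstraction it could expose to applications, a topic-based publish/subscribe bus; the class name, API, and extensibility-by-handlers design are assumptions, not the paper's actual interface.

```python
from collections import defaultdict
from typing import Any, Callable

class AmIBus:
    """Hypothetical topic-based message bus for AmI applications.
    Apps publish and subscribe by topic and never touch the network,
    which is the kind of abstraction the abstract refers to."""

    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        # extensibility point: HCI or security concerns could be added
        # as extra handlers or wrappers around existing ones
        self._subs[topic].append(handler)

    def publish(self, topic: str, payload: Any) -> None:
        for handler in self._subs[topic]:
            handler(payload)

bus = AmIBus()
bus.subscribe("room/temperature", lambda t: print(f"AC controller sees {t} C"))
bus.publish("room/temperature", 27.5)
```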
Abstract: In order to retrieve images efficiently from a large database, a method integrating color and texture features using genetic programming is proposed. An opponent color histogram, which is invariant to shadow, shade, and light intensity, is employed in the proposed framework for extracting color features. For texture feature extraction, the fast discrete curvelet transform, which captures rich orientation information at different scales, is incorporated to represent curve-like edges. A central issue in current image retrieval is reducing the semantic gap between users' preferences and low-level features. To address this concern, a genetic algorithm combined with relevance feedback is embedded to reduce the semantic gap and retrieve images matching the user's preference. Extensive comparative experiments have been conducted to evaluate the proposed content-based image retrieval framework on two databases, COIL-100 and Corel-1000. Experimental results clearly show that the proposed system surpasses existing systems in terms of precision and recall, achieving an average precision of 88.2% on COIL-100 and 76.3% on Corel-1000, and an average recall of 69.9% on COIL-100 and 76.3% on Corel-1000. The experimental results thus confirm that the proposed content-based image retrieval architecture attains a better solution for image retrieval.
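As a concrete illustration of the color feature, here is a minimal Python sketch of an opponent color histogram; the opponent transform below is the standard one, while the bin count, value ranges, and normalization are assumptions the abstract does not specify.

```python
import numpy as np

def opponent_histogram(rgb, bins=16):
    """Histogram over the two intensity-invariant opponent channels
    of an 8-bit RGB image (the invariance property the abstract cites)."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    o1 = (r - g) / np.sqrt(2)              # red-green opponent channel
    o2 = (r + g - 2.0 * b) / np.sqrt(6)    # yellow-blue opponent channel
    h1, _ = np.histogram(o1, bins=bins, range=(-181, 181))
    h2, _ = np.histogram(o2, bins=bins, range=(-209, 209))
    feat = np.concatenate([h1, h2]).astype(float)
    return feat / feat.sum()               # normalize to unit mass
```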
Abstract: The introduction of a multitude of new and interactive
e-commerce information technology (IT) artifacts has impacted
adoption research. Rather than solely functioning as productivity
tools, new IT artifacts assume the roles of interaction mediators and
social actors. This paper describes the varying roles assumed by IT artifacts, and proposes and distinguishes four distinct foci for how the artifacts are evaluated. It further proposes a theoretical model that maps the different views of IT artifacts to the four distinct types of evaluation.
Abstract: Scripts are one of the basic text resources for understanding broadcast content. Topic modeling is a method for summarizing broadcast content from its scripts. Generally, scripts represent content descriptively through directions and speech, and provide scene segments that can be treated as semantic units. A script can therefore be topic modeled by treating each scene segment as a document. However, because scene segments consist mainly of speech, relatively few word co-occurrences are observed within them, which inevitably degrades the quality of topics learned by statistical methods. To tackle this problem, we propose a method to improve topic quality with additional word co-occurrence information obtained from scene similarities. The main idea is that knowing that two or more texts are topically related is useful for learning high-quality topics; in turn, more accurate topical representations yield more accurate information about whether two texts are related. In this paper, we regard two scene segments as related if their topical similarity is sufficiently high, and we treat words as co-occurring if they appear together in topically related scene segments. By iteratively inferring topics and determining semantically neighboring scene segments, we derive a topic space that represents the broadcast content well. In experiments, the proposed method generated higher-quality topics from Korean drama scripts than the baselines.
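The abstract gives no formulas, so the following Python sketch shows one plausible iteration of the proposed scheme, assuming topic proportions come from a topic model such as LDA; the cosine-similarity measure, the 0.8 threshold, and the additive count augmentation are all assumed choices.

```python
import numpy as np

def augment_cooccurrence(doc_topic, doc_term, threshold=0.8):
    """Treat scene segments with similar topic distributions as related
    and add their word counts to each other, creating the extra
    co-occurrence information described in the abstract.

    doc_topic: (D, K) topic proportions per scene segment
    doc_term:  (D, V) bag-of-words counts per scene segment
    """
    norm = doc_topic / np.linalg.norm(doc_topic, axis=1, keepdims=True)
    sim = norm @ norm.T                       # pairwise cosine similarity
    np.fill_diagonal(sim, 0.0)                # a segment is not its own neighbor
    related = (sim >= threshold).astype(float)
    return doc_term + related @ doc_term      # augmented co-occurrence counts
```

Topics would then be re-inferred on the augmented counts and the loop repeated until the topic space stabilizes, per the iterative procedure the abstract describes.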
Abstract: This paper proposes APPLE, a scheme that aims to provide absolute and proportional throughput guarantees while simultaneously maximizing system throughput for wireless LANs with homogeneous and heterogeneous traffic. We formulate our objectives as an optimization problem, present its exact and approximate solutions, and prove the existence and uniqueness of the approximate solution. Simulations validate that the APPLE scheme is accurate and that the approximate solution achieves the desired objectives well.
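The abstract does not state the optimization problem itself; a plausible LaTeX sketch, with assumed symbols $S_i$ (throughput of traffic class $i$), $A_i$ (its absolute guarantee), and $w_i$ (its proportional weight), is:

```latex
\begin{aligned}
\max_{\{S_i\}}\; & \sum_{i=1}^{N} S_i
  && \text{(maximize system throughput)} \\
\text{s.t.}\; & S_i \ge A_i, \quad i = 1,\dots,N
  && \text{(absolute guarantees)} \\
  & \frac{S_i}{w_i} = \frac{S_j}{w_j}, \quad i \ne j
  && \text{(proportional guarantees)}
\end{aligned}
```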
Abstract: In this research, we propose to conduct diagnostic and predictive analysis of the key factors and consequences of urban population relocation. To achieve this goal, urban simulation models extract urban development trends as land-use change patterns from a variety of data sources. The results are treated as part of urban big data, together with other information such as population change and economic conditions. Multiple data mining methods are deployed on these data to analyze nonlinear relationships among parameters. The results identify the driving forces of population relocation with respect to urban sprawl and urban sustainability and their related parameters. This work sets the stage for developing a comprehensive urban simulation model that caters to specific questions from targeted users, and it contributes towards achieving sustainability as a whole.
Abstract: The current web has become a modern encyclopedia, where people share their thoughts and ideas on various topics around them. This kind of encyclopedia is very useful for other people who are looking for answers to their questions. However, with the growing popularity of social networking and blogging and ever-expanding network services, there has also been a growing diversity of technologies along with differing structures across individual websites. It is therefore difficult for a common Internet user to find a relevant answer directly. This paper presents a web application for the real-time end-to-end analysis of selected Internet trends, where a trend can be anything people post online. The application integrates fully configurable tools for data collection and analysis using selected webometric algorithms, and for chronological visualization of the results to the user. The application can thus help users evaluate the quality of various products that are mentioned online.
Abstract: In this paper, an approach for liver tumor detection in computed tomography (CT) images is presented. The detection process is based on classifying features of the target liver tissue as either tumor or non-tumor. Fractional differentiation (FD) is applied to enhance the liver CT images, with the aim of sharpening texture and edge features. A fusion method then merges the various enhanced images to produce improved features, which increases classification accuracy. Each image is divided into N×N non-overlapping blocks from which the desired features are extracted. A support vector machine (SVM) classifier is trained on a supplied dataset different from the one used for testing. Finally, each block is labeled as tumor or non-tumor. Our approach is validated on a group of patients' CT liver tumor datasets, and the experimental results demonstrate the detection efficiency of the proposed technique.
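To make the block-classification step concrete, here is a runnable Python sketch using scikit-learn; the block statistics stand in for the paper's FD-enhanced texture and edge features, and the 16×16 block size, RBF kernel, and random stand-in data are all assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def block_features(img, n=16):
    """Split a grayscale slice into n x n non-overlapping blocks and
    compute simple per-block statistics (placeholders for the paper's
    fused FD features, which the abstract does not detail)."""
    feats, coords = [], []
    for y in range(0, img.shape[0] - n + 1, n):
        for x in range(0, img.shape[1] - n + 1, n):
            b = img[y:y + n, x:x + n].astype(float)
            feats.append([b.mean(), b.std(), np.abs(np.diff(b)).mean()])
            coords.append((y, x))
    return np.array(feats), coords

# Train on one labeled scan, test on a different one, as in the abstract.
rng = np.random.default_rng(0)
train_img, test_img = rng.integers(0, 256, (2, 256, 256))   # stand-in scans
X_train, _ = block_features(train_img)
y_train = rng.integers(0, 2, len(X_train))                  # stand-in labels
clf = SVC(kernel="rbf").fit(X_train, y_train)
X_test, coords = block_features(test_img)
tumor_blocks = [c for c, p in zip(coords, clf.predict(X_test)) if p == 1]
```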
Abstract: In the corporate world, Web service technology has grown rapidly, and its significance for the development of web-based applications rises steadily over time. The success of Business-to-Business integration relies on finding new partners and their services in a global business environment. However, selecting the most suitable Web service from a list of services with identical functionality is vital. Customer satisfaction and the provider's reputation depend primarily on the extent to which the service meets the customer's requirements. In many cases, customers feel they are paying for a service that is not delivered, because the real functionality of the web service is never fully realized; this leads them to change services frequently. In this paper, a framework is proposed to evaluate Quality of Service (QoS) and its cost, and to establish the optimal correlation between the two. In addition, this work proposes management decisions to handle deviation from the functionality guaranteed at selection time.
Abstract: We investigate large-scale networks in the context of network survivability under attack. We use appropriate techniques to evaluate both attacker-based and defender-based network survivability. The attacker is unaware of which links the defender operates. Each attacked link has some pre-specified probability of being disconnected. The defender chooses links so as to maximize the chance of successfully delivering the flow to the destination node. The attacker, in turn, selects the cut-set with the highest chance of being disabled in order to partition the network. Moreover, we extend the problem to the case where the defender selects the best p paths to operate and the attacker targets the best k cut-sets, for arbitrary integers p, k > 1. We investigate several variations of the problem and suggest polynomial-time solutions.
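Under the abstract's model, and assuming links fail independently, a cut-set partitions the network only if every one of its links is disconnected, so the attacker's choice reduces to maximizing a product of link probabilities. The Python sketch below illustrates this; the toy network and probabilities are made up.

```python
import math

def best_cut_set(cut_sets, p_disc):
    """Return the cut-set with the highest probability of being fully
    disabled (a cut succeeds only if all of its links fail).

    cut_sets: iterable of tuples of links, each tuple a cut of the network
    p_disc:   dict mapping link -> pre-specified disconnection probability
    """
    return max(cut_sets, key=lambda cut: math.prod(p_disc[e] for e in cut))

# toy s-t network with two cuts (assumed data)
p = {("s", "a"): 0.6, ("s", "b"): 0.5, ("a", "t"): 0.7, ("b", "t"): 0.4}
cuts = [(("s", "a"), ("s", "b")), (("a", "t"), ("b", "t"))]
print(best_cut_set(cuts, p))   # source-side cut wins: 0.30 vs 0.28
```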
Abstract: Advances in the spatial and spectral resolution of satellite images have led to tremendous growth in large image databases. The data we acquire through satellites, radars, and sensors contain important geographical information that can be used in remote sensing applications such as regional planning and disaster management. Spatial data classification and object recognition are important tasks for many applications, yet classifying and identifying objects manually from images is difficult. Object recognition is often treated as a classification problem that can be addressed with machine-learning techniques. Among the many machine-learning algorithms available, classification is performed here with supervised classifiers such as Support Vector Machines (SVMs), since the area of interest is known. We propose a classification method that considers neighboring pixels in a region for feature extraction and evaluates classifications precisely according to neighboring classes for semantic interpretation of the region of interest (ROI). A dataset has been created for training and testing purposes; we generated the attributes from pixel intensity values and mean reflectance values. We demonstrate the benefits of knowledge discovery and data-mining techniques, which can be applied to image data for accurate information extraction and classification from high-spatial-resolution remote sensing imagery.
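The following Python sketch illustrates neighborhood-aware feature extraction of the kind the abstract describes, pairing each pixel's band values with its local mean reflectance before training an SVM; the 3×3 window, RBF kernel, and random stand-in data are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.svm import SVC

def neighborhood_features(bands, size=3):
    """Per-pixel features: each band's own value plus the mean of its
    size x size neighborhood (mirroring the abstract's intensity and
    mean-reflectance attributes).

    bands: (H, W, B) array of reflectance values
    """
    means = uniform_filter(bands.astype(float), size=(size, size, 1))
    stacked = np.concatenate([bands, means], axis=-1)
    return stacked.reshape(-1, 2 * bands.shape[-1])

rng = np.random.default_rng(1)
img = rng.random((64, 64, 4))            # stand-in 4-band image
X = neighborhood_features(img)
y = rng.integers(0, 3, len(X))           # stand-in ground-truth classes
clf = SVC(kernel="rbf").fit(X, y)        # supervised classifier, per the abstract
```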
Abstract: Segmentation of left ventricle (LV) from cardiac
ultrasound images provides a quantitative functional analysis of the
heart to diagnose disease. Active Shape Model (ASM) is widely used
for LV segmentation, but it suffers from the drawback that
initialization of the shape model is not sufficiently close to the target,
especially when dealing with abnormal shapes in disease. In this work, a two-step framework is presented to achieve fast and efficient LV
segmentation. First, a robust and efficient detection based on Hough
forest localizes cardiac feature points. Such feature points are used to
predict the initial fitting of the LV shape model. Second, ASM is
applied to further fit the LV shape model to the cardiac ultrasound
image. With the robust initialization, ASM is able to achieve more
accurate segmentation. The performance of the proposed method is
evaluated on a dataset of 810 cardiac ultrasound images, most of which contain abnormal shapes. The proposed method is compared with several combinations of ASM and existing initialization methods. Our experimental results demonstrate that the accuracy of the proposed feature-point detection for initialization is 40% higher than that of existing methods. Moreover, the proposed method significantly reduces the number of ASM fitting loops needed and thus speeds up the whole segmentation process. The proposed method therefore achieves more accurate and efficient segmentation results and is applicable to unusual heart shapes caused by cardiac diseases, such as left atrial enlargement.
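The initialization step can be pictured as aligning the ASM mean shape to the detected feature points; the Python sketch below does this with an orthogonal Procrustes (similarity) fit, which is an assumed choice, since the abstract does not say how the predicted points seed the shape model.

```python
import numpy as np

def init_shape_from_landmarks(mean_shape, detected):
    """Similarity-align the ASM mean shape to landmarks found by a
    feature-point detector (a Hough forest in the paper; any detector
    works here). Both inputs are (N, 2) corresponding 2-D points.
    Reflection handling is omitted for brevity."""
    mu_m, mu_d = mean_shape.mean(axis=0), detected.mean(axis=0)
    A, B = mean_shape - mu_m, detected - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)    # cross-covariance of the point sets
    R = U @ Vt                           # optimal rotation
    s = S.sum() / (A ** 2).sum()         # optimal isotropic scale
    return s * A @ R.T + mu_d            # initial fit for the ASM loop
```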
Abstract: Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means of transmitting or storing visual data in the most economical way. This paper explains how images can be encoded for transmission over a multiplexed time-frequency-domain channel. Multiplexing involves packing together signals whose representations are compact in the working domain. In order to optimize transmission resources, each 4 × 4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients. Using fewer than 4 × 4 coefficients per block saves a significant amount of transmitted information, but some information is lost. Different approximations for the image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares with gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials, and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR), and peak signal-to-noise ratio (PSNR), in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. The polynomial coefficients are then encoded and used to generate chirps, at a target rate of about two chirps per 4 × 4 pixel block, and submitted to a multiplexing operation in the time-frequency domain.
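Of the transforms listed, the 2-D Chebyshev variant is easy to sketch with NumPy's polynomial utilities; the degree-(1,1) setting below keeps 4 of the 16 coefficients per block, an assumed choice, since the abstract does not state the degree used.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb2d_block(block, deg=(1, 1)):
    """Least-squares 2-D Chebyshev approximation of one pixel block.
    Returns the coefficients, the reconstructed block, and its PSNR."""
    n = block.shape[0]
    x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    V = C.chebvander2d(x.ravel(), y.ravel(), deg)          # design matrix
    coef, *_ = np.linalg.lstsq(V, block.ravel().astype(float), rcond=None)
    recon = V @ coef
    mse = np.mean((block.ravel() - recon) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse) if mse else np.inf
    return coef, recon.reshape(n, n), psnr

coef, recon, psnr = cheb2d_block(np.arange(16, dtype=float).reshape(4, 4))
print(len(coef))   # 4 coefficients instead of 16
```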
Abstract: In a wireless sensor network, sensor nodes periodically transmit sensed data to the sink node over multi-hop communication. This high traffic induces congestion at the nodes located one hop from the sink, whose packet transmission and reception rates are much higher than those of the other sensor nodes in the network. The energy consumption of these nodes is therefore very high, an effect known as the “funneling effect”. The proposed tree-based data aggregation technique (TBDA) reduces this energy consumption and yields a considerable decrease in the number of packet transmissions to the sink node. TBDA thus avoids the funneling effect and extends the lifetime of the wireless sensor network. The average-case time complexity of inserting nodes into the tree is O(n log n), and the worst case is O(n²).
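The stated complexities match plain binary-search-tree insertion, so the Python sketch below, which keys nodes by an assumed attribute (hop distance to the sink), illustrates where the O(n log n) average and O(n²) worst case come from; the abstract does not define the actual key.

```python
class TreeNode:
    """Aggregation-tree node keyed by hop distance to the sink
    (the key choice is an assumption for illustration)."""
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    # Unbalanced BST insert: O(log n) expected per node, O(n) when keys
    # arrive sorted -- hence O(n log n) average / O(n^2) worst case to
    # build the whole tree, as the abstract states.
    if root is None:
        return TreeNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

root = None
for hop in [3, 1, 4, 1, 5, 9, 2, 6]:      # stand-in hop distances
    root = insert(root, hop)
```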
Abstract: In this paper, an attempt has been made to design a robotic library using an intelligent system. The robot is built on an ARM microprocessor and a motor driver circuit with 5 degrees of freedom, and uses Wi-Fi- and GPS-based communication protocols. The authenticity of the library books is controlled by RFID. The proposed robotic library system is realized as an embedded system on the ARM platform. In this issuance system, previous readers' authentic review reports are taken into consideration when recommending suitable books to deserving new users, and the issuance of books or periodicals is based on the user's decision. We conjecture that the Wi-Fi-based robotic library management system allows fast book-issuance transactions and also fosters quality readers.
Abstract: Test automation allows difficult and time-consuming manual software testing tasks to be performed efficiently, quickly, and repeatedly. However, the development and maintenance of automated tests is expensive, so proper prioritization of what to automate first is needed. This paper describes a simple yet efficient approach to such prioritization of test cases, based on the effort needed for both manual execution and test automation. The suggested approach is very flexible because it accommodates a variety of assessment methods and allows candidates to be added or removed at any time. The theoretical ideas presented in this article have been successfully applied in real-world situations in several software companies by the authors and their colleagues, including testing of real estate websites,
cryptographic and authentication solutions, OSGi-based middleware
framework that has been applied in various systems for smart homes,
connected cars, production plants, sensors, home appliances, car head
units and engine control units (ECU), vending machines, medical
devices, industry equipment and other devices that either contain or
are connected to an embedded service gateway.
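One plausible reading of the effort-based prioritization is to rank candidates by manual effort saved per unit of automation effort; the Python sketch below implements that ratio, with the field names and numbers being illustrative assumptions rather than the paper's actual formula.

```python
def prioritize(candidates):
    """Rank test cases for automation by (manual effort saved) /
    (automation effort). Candidates can be added or removed from the
    list at any time, as the abstract emphasizes."""
    def score(c):
        return c["manual_effort"] * c["runs"] / c["automation_effort"]
    return sorted(candidates, key=score, reverse=True)

tests = [  # stand-in effort estimates, in hours
    {"name": "login", "manual_effort": 0.5, "runs": 200, "automation_effort": 8},
    {"name": "report export", "manual_effort": 2.0, "runs": 20, "automation_effort": 16},
]
print([t["name"] for t in prioritize(tests)])  # 'login' first: 12.5 vs 2.5
```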
Abstract: Currently, few user-friendly Weigh-in-Motion (WIM) data analysis software packages are available that can produce traffic input data for the recently developed AASHTOWare pavement Mechanistic-Empirical (ME) design software. Moreover, these packages have only rudimentary Quality Control (QC) processes, so they cannot properly handle erroneous WIM data. As pavement performance is highly sensitive to the quality of WIM data, a more refined QC process on raw WIM data is strongly recommended. This study develops user-friendly software that can produce traffic input for the ME design software. The software takes the raw data (class and weight data) collected from a WIM station and processes it with a sophisticated QC procedure. Traffic data such as traffic volume, traffic distribution, and axle load spectra can be obtained from this software and used directly in the ME design software.
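To illustrate what a refined QC procedure can catch, here is a Python sketch of a few sanity checks on a raw WIM record; these particular rules, field names, and thresholds are illustrative assumptions, not the software's documented procedure.

```python
def qc_flags(record, tol=0.05):
    """Return a list of QC violations for one raw WIM record.

    record: dict with 'gvw' (gross vehicle weight, kips),
    'axles' (axle weights, kips), and 'speed' (mph).
    """
    flags = []
    if abs(sum(record["axles"]) - record["gvw"]) > tol * record["gvw"]:
        flags.append("axle weights inconsistent with GVW")
    if not 6 <= record["axles"][0] <= 20:      # plausible steering-axle load
        flags.append("steering axle out of range")
    if not 10 <= record["speed"] <= 100:       # sensor speed sanity check
        flags.append("implausible speed")
    return flags

print(qc_flags({"gvw": 80.0, "axles": [12, 34, 34], "speed": 62}))  # []
```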
Abstract: The purpose of this study is to analyze the relationship between trust and social capital using Social Network Analysis. Two aspects of social capital are examined: bonding, homophilous social capital (BoSC) and bridging, heterophilous social capital (BrSC). These two aspects diverge from each other according to social theories. The other concept of the study is trust (Tr), namely interpersonal trust: the willingness to ascribe good intentions to, and to have confidence in, the words and actions of other people. The sample group of 61 people was selected from a private firm in the defense industry. The relation between BoSC/BrSC and Tr is examined using Social Network Analysis (SNA) and statistical analysis of a Likert-type questionnaire. The results show a Cronbach's alpha of 0.756 and indicate that the social capital values (BoSC/BrSC) are not correlated with the participants' Tr values.
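Since the abstract reports a Cronbach's alpha of 0.756, the standard computation is worth showing; the Python sketch below implements the usual formula on a (respondents × items) Likert matrix, with the toy scores being stand-in data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances /
    variance of the total score), for a (respondents x items) matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# toy questionnaire: 5 respondents x 4 Likert items (stand-in data)
scores = [[4, 5, 4, 4], [3, 3, 3, 4], [5, 5, 4, 5], [2, 2, 3, 2], [4, 4, 4, 5]]
print(round(cronbach_alpha(scores), 3))
```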
Abstract: Information technology has been gaining ever more ground in industry, commerce, and even personal use, but its misuse brings harm to the environment and human health. Contributing to the planet's sustainability means compensating the environment, in whole or in part, for what is withdrawn from it. Green computing likewise proposes practices for using IT in an environmentally sound way, in support of strategic management and communication. This work focuses on showing how a mobile application can help businesses reduce costs and the environmental impact caused by their processes, through a case study of a public company in Brazil.