Abstract: Fingerprint anti-spoofing approaches have been actively developed and applied in real-world applications. One of the main problems in fingerprint anti-spoofing is the lack of robustness to unseen samples, especially in real-world scenarios. A possible solution is to generate artificial but realistic fingerprint samples and use them for training in order to achieve good generalization. This paper presents experimental and comparative results with currently popular GAN-based methods, using realistic synthesis of fingerprints in training in order to increase performance. Among the various GAN models, the widely used StyleGAN is chosen for the experiments. The CNN models were first trained on a dataset that did not contain generated fake images, and the accuracy along with the mean average error rate was recorded. Then the generated fake images (fake images of live fingerprints and fake images of spoof fingerprints) were each combined with the original images (real images of live fingerprints and real images of spoof fingerprints), and various CNN models were trained. The best performance of each CNN model trained on the dataset with generated fake images was recorded, again measuring accuracy and mean average error rate. We observe that current GAN-based approaches need significant improvement in anti-spoofing performance, although the overall quality of the synthesized fingerprints appears reasonable. We include an analysis of this performance degradation, especially with a small number of samples. In addition, we suggest several approaches towards improved generalization with a small number of samples, focusing on what GAN-based approaches should and should not learn.
Abstract: In a time period populated by legacy newspaper readers who throw around the term “fake news” as though it has long been a part of the lexicon, journalism schools must convince would-be students that their degree is still viable and that they are not teaching a curriculum of deception. As such, journalism schools’ academic administrators tasked with creating and maintaining conversant curricula must stay ahead of legacy newspaper industry trends – both in the print and online products – and ensure that what is being taught in the classroom is both fresh and appropriate to the demands of the evolving legacy newspaper industry. This study examines the information obtained from the result of interviews of journalism academic administrators in order to identify institutional pedagogy for recent journalism school graduates interested in pursuing careers at legacy newspapers. This research also explores the existing relationship between journalism school academic administrators and legacy newspaper editors. The results indicate the value administrators put on various academy teachings, and they also highlight a perceived disconnect between journalism academic administrators and legacy newspaper hiring editors.
Abstract: Fake news detection research is still at an early stage, as this is a relatively new phenomenon of interest to society. Machine learning helps to solve complex problems and to build AI systems, especially in cases involving tacit or otherwise unavailable knowledge. For the identification of fake news, we applied three machine learning classifiers: Passive Aggressive, Naïve Bayes, and Support Vector Machine. Simple classification is not fully adequate for fake news detection because generic classification methods are not specialized for fake news. By integrating machine learning with text-based processing, we can detect fake news and build classifiers that can classify news data. Text classification mainly focuses on extracting various features of text and then incorporating those features into classification. The big challenge in this area is the lack of an efficient way to differentiate between fake and non-fake news due to the unavailability of corpora. We applied three different machine learning classifiers on two publicly available datasets. Experimental analysis based on the existing datasets indicates very encouraging and improved performance.
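As a concrete illustration of one of the three classifiers named above, the following is a minimal from-scratch sketch of a multinomial Naïve Bayes text classifier. The toy headlines, tokenizer, and smoothing choice are illustrative assumptions, not the paper's actual datasets or features.

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """Train a multinomial Naive Bayes text classifier with Laplace smoothing."""
    word_counts = {c: Counter() for c in set(labels)}
    class_counts = Counter(labels)
    for doc, label in zip(docs, labels):
        word_counts[label].update(doc.lower().split())
    vocab = {w for counter in word_counts.values() for w in counter}
    return word_counts, class_counts, vocab

def predict_nb(model, doc):
    """Return the most probable class for `doc` under the trained model."""
    word_counts, class_counts, vocab = model
    total_docs = sum(class_counts.values())
    best_class, best_logprob = None, float("-inf")
    for c in class_counts:
        logprob = math.log(class_counts[c] / total_docs)          # class prior
        denom = sum(word_counts[c].values()) + len(vocab)         # smoothed denominator
        for w in doc.lower().split():
            logprob += math.log((word_counts[c][w] + 1) / denom)  # word likelihood
        if logprob > best_logprob:
            best_class, best_logprob = c, logprob
    return best_class
```

In practice one would use TF-IDF features and a library implementation, but the log-probability accumulation above is the core of the Naïve Bayes approach.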
Abstract: This article observes that the constant advance of issues related to misinformation affects the integrity of the public policy cycle: the dissemination of false information has a direct influence on each stage of this cycle. Therefore, in order to maintain scientific and theoretical credibility in the qualitative analysis process, it was necessary to relate the concepts of firehosing of falsehood, fake news, and the public policy cycle, using epistemological and pragmatic mechanisms such as the scientific method at the intersection of these academic concepts. Through the analysis of official documents and public statements, it was found how multiple theoretical perspectives show that the provision and elaboration of public policies are compromised, verifying the way in which fake news impacts each part of the process.
Abstract: Ancient books are significant carriers of culture, and their background textures convey potential historical information. However, multi-style texture recovery of ancient books has received little attention. Restricted by insufficient ancient textures and a complex handling process, the generation of ancient textures faces new challenges. For instance, training without sufficient data usually brings about overfitting or mode collapse, so some of the outputs are prone to be fake. Recently, image generation and style transfer based on deep learning have been widely applied in computer vision. Breakthroughs in the field make it possible to conduct research on multi-style texture recovery of ancient books. Under these circumstances, we propose a network for layout analysis and an image fusion system. Firstly, we trained models using Deep Convolutional Generative Adversarial Networks (DCGAN) to synthesize multi-style ancient textures; then, we analyzed layouts based on the Position Rearrangement (PR) algorithm that we propose to adjust the layout structure of foreground content; finally, we fused the rearranged foreground texts with the generated background. In experiments, diversified samples such as ancient Yi, Jurchen, and Seal scripts were selected as our training sets. The performance of different fine-tuned models was gradually improved by adjusting the parameters and structure of the DCGAN model. In order to evaluate the results scientifically, the cross-entropy loss function and the Fréchet Inception Distance (FID) were selected as assessment criteria. Eventually, we obtained model M8 with the lowest FID score. Compared with the DCGAN model proposed by Radford et al., the FID score of M8 improved by 19.26%, profoundly enhancing the quality of the synthetic images.
Abstract: Fake news and false information are big challenges for all types of media, especially social media. There is a lot of false information, and many fake likes, views, and duplicated accounts, as big social networks such as Facebook and Twitter have admitted. Much of the information appearing on social media is doubtful and in some cases misleading. It needs to be detected as soon as possible to avoid a negative impact on society. The dimensions of fake news datasets are growing rapidly, so to obtain better detection of false information with less computation time and complexity, the dimensions need to be reduced. One of the best techniques for reducing data size is feature selection. The aim of this technique is to choose a feature subset from the original set to improve classification performance. In this paper, a feature selection method is proposed that integrates K-means clustering and Support Vector Machine (SVM) approaches and works in four steps. First, the similarities between all features are calculated. Then, features are divided into several clusters. Next, the final feature set is selected from all clusters, and finally, fake news is classified based on the final feature subset using the SVM method. The proposed method was evaluated by comparing its performance with other state-of-the-art methods on several benchmark datasets, and the outcome showed that our approach classifies false information better. Detection performance was improved in two aspects: on the one hand, detection runtime decreased, and on the other hand, classification accuracy increased because of the elimination of redundant features and the reduction of dataset dimensions.
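The first three of the four steps above (feature similarity, clustering, and subset selection) can be sketched in pure Python by treating each feature column as a point, clustering the columns with k-means, and keeping the feature nearest each centroid; the selected subset would then feed the SVM in the final step. The toy matrix, distance measure, and cluster count below are illustrative assumptions, not the paper's data or parameters.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(vectors):
    """Component-wise mean of a non-empty list of vectors."""
    n = len(vectors)
    return tuple(sum(col) / n for col in zip(*vectors))

def select_features(X, k, seed=0):
    """Cluster the feature columns of X with k-means and return the index of
    the representative feature (closest to its centroid) for each cluster."""
    feats = [tuple(row[j] for row in X) for j in range(len(X[0]))]  # columns as points
    rnd = random.Random(seed)
    centroids = [feats[i] for i in rnd.sample(range(len(feats)), k)]
    for _ in range(50):
        groups = [[] for _ in range(k)]
        for j, f in enumerate(feats):
            groups[min(range(k), key=lambda c: dist2(f, centroids[c]))].append(j)
        centroids = [mean([feats[j] for j in g]) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return sorted(min(g, key=lambda j: dist2(feats[j], centroids[i]))
                  for i, g in enumerate(groups) if g)
```

Redundant (near-duplicate) features land in the same cluster, so only one survivor per cluster reaches the classifier, which is the source of the runtime reduction described above.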
Abstract: In the digital age, the spread of the mobile world and the nature of cyberspace offer many new opportunities for the exercise of the fundamental right to free expression, and therefore for free speech and freedom of the press; however, these new information communication technologies carry many new challenges. Defamation, censorship, fake news, misleading information, hate speech, breach of copyright, etc., are only some of the violations that can derive from the harmful exercise of freedom of expression, all of which become more salient on the internet. This raises the question: how can we eliminate these problems and practice our fundamental freedom rightfully? To answer this question, we should understand the elements and characteristics of the nature of freedom of expression, and the role of the actors whose duties and responsibilities are crucial to the exercise of this fundamental freedom. To achieve this goal, this paper explores European practice, examining the guidance found in the case law of the European Court of Human Rights for the rightful exercise of freedom of expression.
Abstract: The fashion industry represents a significant portion of
the global gross domestic product; however, it is plagued by cheap
imitators that infringe on trademarks, destroying the fashion
industry's hard work and investment. While the copycats are eventually
found and stopped, by then the damage has already been done: sales
are missed, and direct and indirect jobs are lost. Infringers thrive
on two main facts: the time it takes to discover them and the lack of
tracking technologies that can help the consumer distinguish them.
Blockchain is a new, emerging technology that provides a
distributed, encrypted, immutable, and fault-resistant ledger. Blockchain
presents a ripe technology to resolve the infringement epidemic
facing the fashion industry. The significance of the study is that a
new approach leveraging state-of-the-art blockchain technology
coupled with artificial intelligence is used to create a framework
addressing the fashion infringement problem. It transforms the current
focus on legal enforcement, which is difficult at best, to consumer
awareness that is far more effective. The framework, Crypto CopyCat,
creates an immutable digital asset representing the actual product
to empower the customer with a near-real-time query system. This
combination emphasizes the consumer's awareness and appreciation
of the product's authenticity, while providing real-time feedback to
the producer regarding fake replicas. The main findings of this
study are that implementing this approach can delay the penetration
of fake products into the original product's market, giving the
original product time to capitalize on that market. The shift in
fake adoption reduces the copycats' returns, impeding the copycat
market and moving the emphasis back to original product innovation.
Abstract: Today’s internet world is highly prone to various online attacks, of which the most harmful is phishing. Attackers host fake websites that closely resemble the legitimate ones. We propose image-based authentication using steganography and visual cryptography to prevent phishing. This paper presents a secure steganographic technique for true-color (RGB) images and uses the Discrete Cosine Transform to compress the images. The proposed method hides the secret data inside the cover image. Visual cryptography is used to preserve the privacy of an image by decomposing the original image into two shares. The original image can be identified only when both qualified shares are simultaneously available; an individual share does not reveal the identity of the original image. Thus, the existence of the secret message is hard to detect by RS steganalysis.
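The two-share decomposition can be sketched with a simplified XOR-based secret-sharing variant. Classical visual cryptography uses subpixel expansion and physical OR-stacking of transparencies; the XOR form below is only a minimal illustration of the same security property, namely that one share alone is uniformly random and reveals nothing.

```python
import random

def make_shares(bits, seed=42):
    """Split a binary image (flat list of 0/1 pixels) into two shares.
    Share 1 is uniformly random; share 2 is the secret XORed with share 1,
    so each share alone carries no information about the secret."""
    rnd = random.Random(seed)
    share1 = [rnd.randint(0, 1) for _ in bits]
    share2 = [b ^ s for b, s in zip(bits, share1)]
    return share1, share2

def overlay(share1, share2):
    """Combine the two qualified shares (XOR) to recover the secret image."""
    return [a ^ b for a, b in zip(share1, share2)]
```

The `seed` parameter is only for reproducibility of this sketch; a real system would use a cryptographically secure random source.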
Abstract: With the increasing number of people reviewing
products online in recent years, opinion-sharing websites have become
the most important source of customers’ opinions. Unfortunately,
spammers generate and post fake reviews in order to promote or
demote brands and mislead potential customers. These are notably
destructive not only for potential customers but also for business
owners and manufacturers. However, research in this area is not
adequate, and many critical problems related to spam detection have
not been solved to date. To give researchers new to the domain a
solid starting point, in this paper we attempt to create a high-quality
framework that provides a clear view of review spam-detection
methods. In addition, this report contains a comprehensive collection
of detection metrics used in proposed spam-detection approaches.
These metrics are highly applicable to the development of novel
detection methods.
Abstract: Social networking sites such as Twitter and Facebook
attract over 500 million users across the world. For those users, their
social life, and even their practical life, has become intertwined with
these platforms; their interaction with social networking has affected
their lives permanently.
Accordingly, social networking sites have become among the main
channels that are responsible for vast dissemination of different kinds
of information during real-time events. This popularity of social
networking has led to various problems, including the possibility of
exposing users to incorrect information through fake accounts,
which results in the spread of malicious content during live events.
This situation can cause huge damage in the real world to society
in general, including citizens, business entities, and others. In this paper, we present a classification method for detecting the
fake accounts on Twitter. The study determines the minimized set of
the main factors that influence the detection of the fake accounts on
Twitter, and then the determined factors are applied using different
classification techniques. A comparison of the results of these
techniques has been performed and the most accurate algorithm is
selected according to the accuracy of the results. The study has been
compared with recent research in the same area, and this comparison
confirms the accuracy of the proposed approach. We claim that this
study can be applied continuously on the Twitter social network to
detect fake accounts automatically; moreover, it can be applied to
other social networking sites such as Facebook with minor changes,
according to the nature of the social network, as discussed in this
paper.
Abstract: Control of honey frauds is needed in Ecuador to
protect bee keepers and consumers because simple syrups and new
syrups with eucalyptus are sold as genuine honeys. Authenticity of
Ecuadorian commercial honeys was tested with a vortex emulsion
consisting of one volume of honey:water (1:1) dilution and two
volumes of diethyl ether. This method separates the phases in one
minute, discriminating genuine honeys, which form three phases,
from fake honeys, which form two phases; 34 of the 42 honeys analyzed
from five provinces of Ecuador were genuine. This was confirmed
with 1H NMR spectra of honey dilutions in deuterated water with an
enhanced amino acid region with signals for proline, phenylalanine
and tyrosine. Classic quality indicators were also tested with this
method (sugars, HMF), indicators of fermentation (ethanol, acetic
acid), and residues of citric acid used in the syrup manufacture. One
of the honeys gave a false positive for genuine, being an admixture of
genuine honey and added syrup, evident from its high sucrose content.
Sensory analysis was the final confirmation to recognize the honey
groups studied here, namely honey produced in combs by Apis
mellifera, fake honey, and honey produced in cerumen pots by
Geotrigona, Melipona, and Scaptotrigona. Chloroform extractions of
honey were also done to search lipophilic additives in NMR spectra.
This is a valuable contribution to protect honey consumers, and to
develop the beekeeping industry in Ecuador.
Abstract: In this paper we present some spamming techniques,
their behaviour, and possible solutions. We have analyzed how
spammers enter online social networking sites (OSNSs) to target
them, and the diverse techniques they use for this purpose.
Spamming is a very common issue in the present era of the Internet,
especially on online social networking sites (such as Facebook,
Twitter, and Google+). Spam messages keep wasting Internet
bandwidth and the storage space of servers. On social networking
sites, spammers often disguise themselves by creating fake accounts
and hijacking users' accounts for personal gain. They behave like
normal users and continually change their spamming strategy. The
following spamming techniques are discussed in this paper:
clickjacking, social-engineering attacks, cross-site scripting, URL
shortening, and drive-by download. We have used the Elgg framework
to demonstrate some of these spamming threats and the respective
implementation of solutions.
Abstract: One of the most challenging times in the operation of a big industrial plant or utility is when the alert lamp of the Bently Nevada connection in the main board substation turns on and shows an alert condition of a machine. The maintenance groups usually have long discussions with operations about whether this alert signal is real or fake. This becomes more challenging when condition monitoring vibration data show a 1X component (X = current rotor frequency) in the fast Fourier transform (FFT), and vibration phase trends show a 90-degree shift between the two non-contact probe directions together with high overall radial amplitudes. In such situations, CM (condition monitoring) groups usually suspect unbalance in the rotor. In this paper, four critical case histories related to SIEMENS V94.2 gas turbines in the Iranian power industry are discussed in detail. Furthermore, probe looseness and fake (unreal) trips in gas turbine power plants are discussed, along with critical operational decisions under alert conditions in power plants.
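The 1X diagnosis described above can be illustrated numerically. The sketch below extracts the amplitude and phase of a vibration signal at the rotor frequency via a single-bin DFT; a high 1X amplitude on both probes with a 90-degree phase shift between them is the unbalance signature mentioned in the abstract. The 50 Hz rotor, sampling rate, and synthetic probe signals are invented for illustration, not plant data.

```python
import math

def tone_at(signal, fs, freq):
    """Single-bin DFT of `signal` (sampled at fs Hz) at `freq` Hz.
    Returns (amplitude, phase in radians) of that frequency component."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    im = -sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return 2 * math.hypot(re, im) / n, math.atan2(im, re)

# Synthetic example: 50 Hz rotor (1X), two non-contact probes 90 degrees apart.
fs, f0, n = 1000, 50.0, 1000
probe_x = [3 * math.cos(2 * math.pi * f0 * i / fs) for i in range(n)]
probe_y = [3 * math.sin(2 * math.pi * f0 * i / fs) for i in range(n)]

amp_x, ph_x = tone_at(probe_x, fs, f0)   # high 1X amplitude on probe X
amp_y, ph_y = tone_at(probe_y, fs, f0)   # same amplitude, 90-degree shift
```

In practice one would use a full FFT over a windowed record, but the per-frequency amplitude and phase computed here are exactly the quantities trended by condition monitoring systems.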
Abstract: Diversifying the processing of crops is a very important way of reducing food insecurity and the perishability of highly perishable crops, while generating variety. Sweet potato has been diversified in various ways by researchers through processing into different forms for consumption. This study considered diversifying the crop into different drinks by combining it with different highly nutritious, acceptable cereals. There was a significant relationship between the educational background of the respondents and the level of acceptability of the sweet potato drinks (χ2 = 1.033, P = 0.05). Interestingly, a significant relationship also existed between the respondents' most preferred sweet potato drink and the level of acceptability of the sweet potato drinks (r = 0.394, P = 0.031). The high level of acceptability of the drinks should lead to enhanced production of the crops required for them, which would assist in income generation and in alleviating food and nutrition insecurity.
Abstract: EPC Class-1 Generation-2 UHF tags, one type of Radio
Frequency Identification (RFID) tag, are expected to be used by most
companies in the supply chain in the short term and in consumer
packaging in the long term, due to their low cost. Because of this very
low cost, however, their resources are extremely scarce and it is hard
to implement any substantial security algorithm on them. This causes
security vulnerabilities, in particular cloning of the tags for
counterfeits. In this paper, we propose a product authentication
solution for anti-counterfeiting at application level in the supply chain
and mobile RFID environment. It aims to detect the distribution of
spurious products with fake RFID tags and to provide a product
authentication service to general consumers with mobile RFID
devices, such as a mobile phone or PDA that has a mobile RFID
reader. We discuss the anti-counterfeiting mechanisms required by
our proposed solution and address the requirements that these
mechanisms should satisfy.
Abstract: This paper proposes an easy-to-use instruction hiding
method to protect software from malicious reverse engineering
attacks. Given a source program (original) to be protected, the
proposed method (1) takes a modified version of it (the fake) as
input, (2) analyzes the differences in assembly code instructions
between the original and the fake, and (3) introduces self-modification
routines so that fake instructions become correct (i.e., original
instructions) just before they are executed and revert to fake ones
after they are executed. The proposed method can add a certain amount
of security to a program since the fake instructions in the resultant
program confuse attackers and it requires significant effort to discover
and remove all the fake instructions and self-modification routines.
Also, this method is easy to use (with little effort) because all a user
(who uses the proposed method) has to do is to prepare a fake source
code by modifying the original source code.
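Steps (1)–(3) can be simulated on a toy instruction list. Real implementations patch machine code in memory; the Python model below (with invented mnemonics) only illustrates the cycle of diffing original against fake, restoring each patched instruction just before it runs, and re-faking it afterwards.

```python
def diff_instructions(original, fake):
    """Step (2): record (index, original, fake) for every differing instruction."""
    return [(i, o, f) for i, (o, f) in enumerate(zip(original, fake)) if o != f]

class SelfModifyingProgram:
    """Step (3): ships the fake code plus a patch table. Each patched
    instruction is restored to the original just before it is 'executed'
    and reverts to the fake form immediately afterwards."""

    def __init__(self, fake, patches):
        self.code = list(fake)
        self.patches = {i: (orig, fk) for i, orig, fk in patches}

    def run(self):
        executed = []
        for i in range(len(self.code)):
            if i in self.patches:
                self.code[i] = self.patches[i][0]  # self-modify: un-fake
            executed.append(self.code[i])          # simulate execution
            if i in self.patches:
                self.code[i] = self.patches[i][1]  # self-modify: re-fake
        return executed
```

The point of the scheme is visible in the simulation: the program at rest (and under static disassembly) shows only fake instructions, while execution sees the original ones.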
Abstract: E-mail has become an important means of electronic
communication, but the viability of its usage is marred by Unsolicited
Bulk E-mail (UBE) messages. UBE comes in many types, such as
pornographic, virus-infected, and 'cry-for-help' messages, as well
as fake and fraudulent offers for jobs, winnings, and medicines. UBE
poses technical and socio-economic challenges to usage of e-mails.
To meet this challenge and combat this menace, we need to
understand UBE. Towards this end, the current paper presents a
content-based textual analysis of more than 2700 body enhancement
medicinal UBE. Technically, this is an application of Text Parsing
and Tokenization for an un-structured textual document and we
approach it using Bag Of Words (BOW) and Vector Space Document
Model techniques. We have attempted to identify the most
frequently occurring lexis in the UBE documents that advertise
various products for body enhancement; an analysis of the top 100
lexis is also presented. We exhibit the relationship between the
occurrence of a word from the identified lexis set in a given UBE
and the probability that the given UBE advertises a fake medicinal
product. To the best of our knowledge and our survey of related
literature, this is the first formal attempt to identify the most
frequently occurring lexis in such UBE through textual analysis.
Finally, this is a sincere attempt to raise awareness of and
mitigate the threat of such luring but fake UBE.
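The Bag-of-Words lexis-frequency step can be sketched as follows. The tokenizer, stopword list, and sample messages are illustrative assumptions, not the paper's 2700-message corpus or its actual preprocessing.

```python
import re
from collections import Counter

# A tiny illustrative stopword list; a real pipeline would use a fuller one.
STOPWORDS = {"the", "a", "an", "to", "of", "and", "in", "for", "is", "your"}

def top_lexis(documents, n=100):
    """Tokenize each UBE body, drop stopwords, and rank lexis by frequency."""
    counts = Counter()
    for doc in documents:
        tokens = re.findall(r"[a-z']+", doc.lower())
        counts.update(t for t in tokens if t not in STOPWORDS)
    return counts.most_common(n)
```

Run over a medicinal-UBE corpus with n=100, this yields exactly the kind of top-100 lexis table the abstract describes; relative frequencies of those lexis can then be tied to the probability that a message advertises a fake medicinal product.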
Abstract: This article presents the development of efficient
algorithms for comparing tablet copies. Image recognition has
specialized uses in digital systems such as medical imaging,
computer vision, defense, and communication. Comparison between
two images that look indistinguishable is a formidable task: two
images taken from different sources might look identical, but due to
different digitizing properties they are not, and small variations
in image information such as cropping, rotation, and slight
photometric alteration make direct matching techniques unsuitable.
In this paper we introduce different matching
algorithms designed to facilitate, for art centers, identifying real
painting images from fake ones. Different vision algorithms for
local image features are implemented using MATLAB. In this
framework, a Table Comparison Computer Tool "TCCT" is designed
to facilitate our research. The TCCT is a Graphical User Interface
(GUI) tool used to identify images by their shapes and objects. The
parameters of the vision system are fully accessible to the user
through this GUI. For matching, the tool applies different
description techniques that can identify the exact figures of
objects.
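As a baseline for the matching stage, global normalized cross-correlation shows why photometric changes alone need not defeat image comparison: the score is invariant to brightness and contrast shifts. This is only an illustrative global baseline, not the local-feature algorithms described above (which also handle cropping and rotation); the 2x2 toy images are invented.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equal-size grayscale images
    (lists of pixel rows). 1.0 means identical up to brightness/contrast;
    values near 0 or below indicate a poor match."""
    xs = [p for row in a for p in row]
    ys = [p for row in b for p in row]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den if den else 0.0
```

A forgery-screening pipeline would compute such similarity scores between a candidate image and reference images of the authentic painting, flagging low scores for closer local-feature inspection.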
Abstract: Since communications between tag and reader in an RFID
system are by radio, anyone can access the tag and obtain any of its
information. Moreover, a tag always replies with the same ID, so it
is hard to distinguish between a real and a fake tag. Thus, there are
many security problems in today's RFID systems. Firstly, an
unauthorized reader can easily read the ID information of any tag.
Secondly, an adversary can easily cheat a legitimate reader using
collected tag ID information, impersonating any legitimate tag. These security
problems can typically be solved by encrypting the messages
transmitted between tag and reader and by authenticating the tag.
In this paper, to solve these security problems in RFID systems, we
propose a tag authentication scheme based on the self-shrinking
generator (SSG). The SSG algorithm used in our scheme was proposed by
W. Meier and O. Staffelbach at EUROCRYPT '94. The algorithm consists
of only one LFSR and selection logic to generate a random stream. It
is therefore well suited to hardware implementation on devices with
extremely limited resources, and the output generated by the SSG at
each step serves as a random stream, allowing us to design a
lightweight authentication scheme that is secure against some network
attacks. We therefore propose a novel tag authentication scheme that
uses the SSG to encrypt the tag ID transmitted from tag to reader,
achieving authentication of the tag.
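The SSG construction described above can be sketched in a few lines: a single LFSR produces a bit stream, and the self-shrinking rule selects which bits survive as keystream. The LFSR width, taps, and seed below are toy values for illustration, not parameters of the proposed scheme.

```python
def lfsr_bits(seed, taps, n):
    """Fibonacci LFSR over GF(2): bits at positions `taps` are XORed to form
    the feedback, which is shifted in at the top; the low bit is output each
    clock. `seed` must be non-zero."""
    width = max(taps) + 1
    state, out = seed, []
    for _ in range(n):
        out.append(state & 1)
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = (state >> 1) | (feedback << (width - 1))
    return out

def self_shrinking(bits):
    """Self-shrinking rule (Meier-Staffelbach): read the LFSR output in
    pairs (a, b); output b when a == 1, discard the pair when a == 0."""
    return [bits[i + 1] for i in range(0, len(bits) - 1, 2) if bits[i] == 1]

def ssg_keystream(seed, taps, nbits):
    """Clock the LFSR until the shrunken output reaches nbits keystream bits;
    the keystream would then be XORed with the tag ID before transmission."""
    raw = 2 * nbits
    while True:
        ks = self_shrinking(lfsr_bits(seed, taps, raw))
        if len(ks) >= nbits:
            return ks[:nbits]
        raw *= 2
```

Because roughly half the LFSR pairs are discarded, an eavesdropper cannot directly reconstruct the LFSR state from observed keystream, which is the property the lightweight authentication relies on.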