Abstract: The goal of this study is to analyze whether search queries carried out in search engines such as Google can offer emotional information about the user who performs them. Knowing the emotional state of the Internet user can be key to achieving maximum personalization of content and to detecting worrying behaviors. To this end, two studies were carried out using tools with advanced natural language processing techniques. The first study determines whether a query can be classified as positive, negative or neutral, while the second extracts emotional content from words and applies the categorical and dimensional models for the representation of emotions. In addition, we use search queries in Spanish and English to establish similarities and differences between the two languages. The results revealed that text search queries performed by users on the Internet can be classified emotionally. This allows us to better understand the emotional state of the user at the time of the search, which could make it possible to adapt the technology and personalize responses to different emotional states.
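The positive/negative/neutral classification step can be sketched as a minimal lexicon-based polarity rule. The word lists and scoring rule below are illustrative assumptions, not the study's actual NLP toolchain:

```python
# Minimal lexicon-based polarity sketch (illustrative word lists, not the
# study's advanced NLP techniques).
POSITIVE = {"happy", "love", "great", "best", "fun"}
NEGATIVE = {"sad", "hate", "worst", "pain", "lonely"}

def classify_query(query: str) -> str:
    """Label a search query as positive, negative or neutral."""
    tokens = query.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_query("best fun things to do"))    # positive
print(classify_query("why do i feel so lonely"))  # negative
```

In practice each language (Spanish, English) would need its own lexicon, which is one reason the study compares the two languages separately.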
Abstract: Diagnostic error is frequent and one of the most important patient-safety problems today. One of the main objectives of our work is to propose an ontological representation that takes the diagnostic criteria into account in order to improve diagnosis. We chose pneumonia since it is one of the diseases most frequently affected by diagnostic errors, with harmful effects on patients. To achieve our aim, we use a semi-automated method to integrate diverse knowledge sources, including publicly available pneumonia guidelines from international repositories, biomedical ontologies, and electronic health records. We follow the principles of the Open Biomedical Ontologies (OBO) Foundry. The resulting ontology covers symptoms and signs, all types of pneumonia, antecedents, pathogens, and diagnostic testing. The first evaluation results show that most of the terms are covered by the ontology. This work is still in progress and represents a first and major step toward the development of a diagnosis decision support system for pneumonia.
Abstract: Risk assessment, and the knowledge provided through this process, is a crucial part of any decision-making process in the management of risks and uncertainties. Failure in the assessment of risks can cause inadequacy in the entire process of risk management, which in turn can lead to failure in achieving organisational objectives as well as having significant damaging consequences for the populations affected by the potential risks being assessed. The choice of tools and techniques in risk assessment can influence the degree and scope of decision-making and subsequently the risk response strategy. Various qualitative and quantitative tools and techniques are deployed within the broad process of risk assessment. The sheer diversity of tools and techniques available to practitioners makes it difficult for organisations to consistently employ the most appropriate methods. This adoption of tools and techniques is rendered more difficult in public risk regulation organisations due to the sensitive and complex nature of their activities. This is particularly the case in areas relating to the environment, food, and human health and safety, where organisational goals are tied up with societal, political and individual goals at national and international levels. Hence, this study set out to recognise, analyse and evaluate the different decision support tools and techniques employed in assessing risks in public risk management organisations. This research is part of a mixed-method study which aimed to examine the perception of risk assessment and the extent to which organisations practise risk assessment tools and techniques. The study adopted a semi-structured questionnaire with qualitative and quantitative data analysis, covering a range of public risk regulation organisations from the UK, Germany, France, Belgium and the Netherlands.
The results indicated that public risk management organisations mainly use diverse tools and techniques in the risk assessment process. Preliminary hazard analysis, brainstorming, and hazard analysis and critical control points were described as the most practised risk identification techniques. Within qualitative and quantitative risk analysis, the participants named expert judgement, risk probability and impact assessment, sensitivity analysis, and data gathering and representation as the most practised techniques.
Abstract: The Internet has grown into a powerful medium for information dispersion and social interaction, leading to the rapid growth of social media, which allows users to easily post their emotions and perspectives on certain topics online. Our research aims to use natural language processing and text mining techniques to explore the public emotions expressed on Twitter by analyzing the sentiment behind tweets. In this paper, we propose a composite kernel method that integrates a tree kernel with a linear kernel to simultaneously exploit both the tree representation and the distributed emotion-keyword representation, analyzing the syntactic and content information in tweets. The experimental results demonstrate that our method effectively detects the public emotion of tweets while outperforming the other compared methods.
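The composite idea can be sketched as a convex combination of two Gram matrices. The "tree kernel" below is a toy placeholder (counting shared syntactic rules) and the weighting scheme is an assumption, not the paper's exact formulation:

```python
import numpy as np

# Composite kernel sketch: combine a structural (tree-like) kernel with a
# linear kernel over emotion-keyword embeddings. Both kernels here are toy
# stand-ins for the paper's actual tree kernel and distributed representation.

def linear_kernel(X):
    return X @ X.T

def tree_kernel(rule_sets):
    # Toy structural similarity: number of shared syntactic "rules".
    n = len(rule_sets)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = len(rule_sets[i] & rule_sets[j])
    return K

def composite_kernel(X, rule_sets, alpha=0.5):
    return alpha * tree_kernel(rule_sets) + (1 - alpha) * linear_kernel(X)

# Two tweets: embedding vectors plus sets of syntactic rules.
X = np.array([[1.0, 0.0], [0.9, 0.1]])
rules = [{"S->NP VP", "VP->V NP"}, {"S->NP VP"}]
K = composite_kernel(X, rules)
print(K)
```

A matrix built this way stays symmetric and can be fed to any kernel classifier that accepts a precomputed Gram matrix.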
Abstract: Motion recognition from videos is a complex task due to the high variability of motions. This paper describes the challenges of human motion recognition, especially the motion representation step with relevant features. Our descriptor vector is inspired by the Laban Movement Analysis method. We select discriminative features using the Random Forest algorithm in order to remove redundant features and make learning algorithms operate faster and more effectively. We validate our method on the MSRC-12 and UTKinect datasets.
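The feature-selection step can be sketched with scikit-learn's impurity-based importances. The synthetic data and the mean-importance threshold are assumptions; the actual LMA descriptor is not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Sketch of Random-Forest-based feature selection (synthetic data, not the
# actual LMA descriptor): keep only features whose importance exceeds the mean.
rng = np.random.RandomState(0)
X = rng.rand(200, 10)
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)   # only features 0 and 3 matter

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
importances = forest.feature_importances_
selected = np.where(importances > importances.mean())[0]
X_reduced = X[:, selected]
print(selected)
```

Dropping the low-importance columns shrinks the descriptor that downstream learners must process, which is the speed-up the abstract refers to.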
Abstract: Focus on reducing energy consumption in existing buildings at large scale, e.g. in cities or countries, has been increasing in recent years. In order to reduce energy consumption in existing buildings, political incentive schemes are put in place and large-scale investments are made by utility companies. Prioritising these investments requires a comprehensive overview of the energy consumption in the existing building stock, as well as of the potential energy savings. However, a building stock comprises thousands of buildings with different characteristics, making it difficult to model energy consumption accurately. Moreover, the complexity of the building stock makes it difficult to convey model results to policymakers and other stakeholders.

In order to manage the complexity of the building stock, building archetypes are often employed in building stock energy models (BSEMs). Building archetypes are formed by segmenting the building stock according to specific characteristics. Segmenting the building stock according to building type and building age is common, among other reasons because this information is often readily available. This segmentation also makes it easy to convey results to non-experts.

However, using a single archetypical building to represent all buildings in a segment of the building stock entails a loss of detail. Thermal characteristics are aggregated, while other characteristics that could affect the energy efficiency of a building are disregarded. Thus, using a simplified representation of the building stock could come at the expense of model accuracy.

The present study evaluates the accuracy of a conventional archetype-based BSEM that segments the building stock according to building type and age. The accuracy is evaluated in terms of the archetypes' ability to emulate the average energy demands of the buildings they are meant to represent. This is done for the buildings' energy demands as a whole as well as for relevant sub-demands, both evaluated in relation to the type and age of the building. This should provide researchers who use archetypes in BSEMs with an indication of the expected accuracy of the conventional archetype model, as well as of the accuracy lost in specific parts of the calculation due to use of the archetype method.
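The archetype approach and its accuracy measure can be illustrated with a toy calculation. All building data below are invented; each (type, age) segment is represented by one archetype whose demand is the segment mean, and accuracy is the archetype's deviation from the buildings it stands in for:

```python
import numpy as np

# Toy archetype-based building stock sketch: (type, age band, demand kWh/m2).
buildings = [
    ("house", "1960s", 180.0), ("house", "1960s", 210.0),
    ("house", "2000s", 90.0),  ("flat",  "1960s", 120.0),
    ("flat",  "1960s", 140.0), ("flat",  "2000s", 70.0),
]

# Segment the stock by building type and age.
segments = {}
for btype, age, demand in buildings:
    segments.setdefault((btype, age), []).append(demand)

# One archetype per segment: the segment's mean demand.
archetypes = {seg: float(np.mean(d)) for seg, d in segments.items()}

# Accuracy: mean absolute error of representing each building by its archetype.
errors = [abs(d - archetypes[(t, a)]) for t, a, d in buildings]
print(archetypes[("house", "1960s")], round(float(np.mean(errors)), 2))
```

The same comparison can be repeated per sub-demand (heating, hot water, etc.) to locate where the archetype simplification loses the most accuracy.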
Abstract: Human motion recognition has attracted increasing attention in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, content-based video compression and retrieval, etc. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem which requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis (LMA) technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use a Discrete Hidden Markov Model (DHMM) for training and classifying motions. We improve the classification algorithm by proposing two DHMMs for each motion class to process the motion sequence in two different directions, forward and backward. This modification helps avoid the misclassifications that can happen when recognizing similar motions. Two experiments are conducted. In the first, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action/gesture recognition methods. In the second experiment, we build a dataset composed of 10 gestures (introduce yourself, wave, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. Experimental results demonstrate that our method outperforms most existing methods that use the MSRC-12 dataset and achieves a near-perfect classification rate on our dataset.
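The two-direction scoring idea can be sketched with the standard forward algorithm for a discrete HMM. The model parameters below are invented; in the paper each class has one DHMM trained on forward sequences and a second trained on reversed sequences:

```python
import numpy as np

# Sketch of the two-direction DHMM idea: score a sequence with a "forward"
# model and score the reversed sequence with a "backward" model, then combine
# the log-likelihoods. Toy parameters, not trained models.

def log_likelihood(obs, start, trans, emit):
    """Forward algorithm: log P(obs | model) for a discrete HMM."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return float(np.log(alpha.sum()))

start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.4, 0.6]])
emit  = np.array([[0.5, 0.5], [0.1, 0.9]])  # 2 states x 2 symbols

obs = [0, 0, 1, 1]
# The backward model reuses the same toy parameters here; in practice it is
# trained separately on time-reversed sequences.
score = log_likelihood(obs, start, trans, emit) \
      + log_likelihood(obs[::-1], start, trans, emit)
print(score)
```

A test sequence is then assigned to the class whose combined forward + backward score is highest, which is what makes similar motions that differ mainly at their ends easier to separate.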
Abstract: Business processes are crucial for organizations and help businesses evaluate and optimize their performance and processes against current and future-state business goals. Outsourcing business processes to the cloud has become popular due to a wide variety of benefits and cost savings. However, cloud outsourcing raises enterprise data security concerns, which must be incorporated into the Business Process Model and Notation (BPMN). This paper presents SeCloudBPMN, a lightweight extension that extends BPMN to explicitly support the security threats in the cloud as an outsourcing environment. SeCloudBPMN helps businesses' security experts outsource business processes to the cloud while considering different threats from inside and outside the cloud. In this way, appropriate security countermeasures can be considered to preserve data security when outsourcing business processes to the cloud.
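BPMN's standard extension hook gives a natural place for such annotations. The snippet below is a hypothetical sketch of how a lightweight extension like SeCloudBPMN could attach a cloud-threat annotation to a task via `bpmn:extensionElements`; the `sec` namespace and all attribute names are assumptions, not the authors' actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical SeCloudBPMN-style annotation on a BPMN task. Namespace URI and
# threat attributes are placeholders, not the paper's schema.
BPMN = "http://www.omg.org/spec/BPMN/20100524/MODEL"
SEC = "http://example.org/secloudbpmn"  # placeholder namespace
ET.register_namespace("bpmn", BPMN)
ET.register_namespace("sec", SEC)

task = ET.Element(f"{{{BPMN}}}task", {"id": "Task_1", "name": "Process payroll"})
ext = ET.SubElement(task, f"{{{BPMN}}}extensionElements")
ET.SubElement(ext, f"{{{SEC}}}threat", {
    "source": "outsideCloud",
    "type": "dataLeakage",
    "countermeasure": "encryption",
})

xml = ET.tostring(task, encoding="unicode")
print(xml)
```

Because `extensionElements` is part of the BPMN 2.0 specification, annotations added this way remain readable by standard BPMN tooling that ignores unknown namespaces.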
Abstract: This paper explores Zohra Drif’s memoir, Inside the Battle of Algiers, which narrates her desire as a student to become a revolutionary activist. In her narrative she exemplifies the different roles she and her fellow fighters performed as combatants in the Casbah during the Algerian Revolution (1954-1962). This book review aims to evaluate the concept of women’s agency through education and language learning, and its impact on empowering women’s desires. A close-reading method and thematic analysis are used to explore the text. The analysis identified themes that refine the meaning of agency: social and cultural support, education, and language proficiency. These themes contribute to the representation in Inside the Battle of Algiers of a woman guerrilla who engaged herself in national acts of resistance.
Abstract: This work analyzes the locative structure used by the locative games of the company Niantic. To fulfill this objective, a literature review on the representation and simulation of cities was developed, complemented by interviews with Ingress players and by playing Ingress. Relating these data made it possible to examine in depth the relationship between the virtual and the real in creating the simulation of cities and their cultural objects in locative games. The representation of cities associates geolocation provided by the Global Positioning System (GPS) with augmented reality and the digital image, and provides a new paradigm for the interaction between the city and its parts and between real- and virtual-world elements, homeomorphic to the real world. A bibliographic review of papers related to the study of representation and simulation and their application in locative games was carried out and is presented in this paper. The concepts of city representation and simulation in locative games, and how this setting enables flow and immersion in urban space, are analyzed. Some examples of games are discussed for the development of this new setting, which is a mix of the real and virtual worlds. Finally, a locative structure for electronic games is proposed, using the concepts of heterotopic and isotropic representations conjoined with immediacy and hypermediacy.
Abstract: Requirements modeling and analysis are important for successful information systems maintenance. Unified Modeling Language (UML) class diagrams are a useful standard for modeling information systems. To the best of our knowledge, there is no systems development methodology described by the organism metaphor, whose core concept is adaptation. Using knowledge representation and reasoning approaches and ontologies to accommodate new requirements has emerged in recent years. This paper proposes an organic methodology based on constructivism theory. The methodology is a knowledge representation and reasoning approach to analyzing new requirements during class diagram maintenance. The process and rules in the proposed methodology automatically detect inconsistencies in the class diagram. In the big data era, developing an automatic tool based on the proposed methodology to analyze large amounts of class diagram data is an important topic for future research.
Abstract: Multi-mode film boiling simulations are carried out on adaptive octree grids. The liquid-vapor interface is captured using the volume-of-fluid framework, adjusted to account for exchanges of mass, momentum, and energy across the interface. Surface tension effects are included using a volumetric source term in the momentum equations. The phase change calculations are conducted based on the exact location and orientation of the interface; however, the source terms are calculated using the mixture variables to be consistent with the one-field formulation used to represent the entire fluid domain. The numerical model on the octree representation of the computational grid is first verified using test cases including advection tests in severely deforming velocity fields, gravity-based instabilities, and bubble growth in uniformly superheated liquid under zero gravity. The model is then used to simulate both single-mode and multi-mode film boiling. The octree grid is dynamically adapted in order to maintain the highest grid resolution on the instability fronts, using markers of interface location, volume fraction, and thermal gradients. The method thus provides an efficient platform to simulate fluid instabilities with or without phase change in the presence of body forces such as gravity or shear layer instabilities.
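A basic property any volume-of-fluid advection scheme must satisfy is mass conservation. The sketch below is a deliberately minimal 1-D analogue (first-order upwind on a periodic, uniform grid), far simpler than the paper's octree VOF scheme, checking that the total volume fraction is conserved:

```python
import numpy as np

# Minimal 1-D volume-fraction advection sketch (not the paper's octree scheme):
# transport a slab of "vapor" with a first-order upwind update on a periodic
# grid and verify that total mass is conserved.
n = 100
u = 1.0                              # constant advection velocity
dx = 1.0 / n
dt = 0.5 * dx / u                    # CFL number = 0.5

f = np.zeros(n)
f[20:40] = 1.0                       # initial volume-fraction field
mass0 = f.sum() * dx

for _ in range(50):
    flux = u * f                     # upwind flux for u > 0
    f = f - (dt / dx) * (flux - np.roll(flux, 1))  # periodic wrap-around

print(abs(f.sum() * dx - mass0) < 1e-12)
```

On a periodic grid the face fluxes telescope exactly, so conservation holds to round-off; the interface does smear, which is why production VOF codes use geometric reconstruction instead of plain upwinding.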
Abstract: Load forecasting has become crucial in recent years and a popular topic in the forecasting field. Many different power forecasting models have been tried for this purpose. Electricity load forecasting is necessary for energy policies and for healthy, reliable grid systems. Effective forecasting of renewable energy load helps decision makers minimize the costs of electric utilities and power plants. Forecasting tools are required to predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. We present models for predicting renewable energy loads based on deep neural networks, especially Long Short-Term Memory (LSTM) algorithms. Deep learning allows multiple layers of models to learn representations of data, and LSTM networks are able to store information over long periods of time. Deep learning models have recently been used to forecast renewable energy sources, for example predicting wind and solar power. Historical load and weather information are the most important input variables in power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 at one-hour resolution, using publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies were carried out on these data via a deep neural network approach, including the LSTM technique, for the Turkish electricity markets. 432 different models were created by varying the number of layers, the cell count per layer, and the dropout rate. The adaptive moment estimation (ADAM) algorithm was used for training as a gradient-based optimizer instead of stochastic gradient descent (SGD); ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance was compared according to MAE (mean absolute error) and MSE (mean squared error). The best MAE results among the 432 tested models are 0.66, 0.74, 0.85 and 1.09. The forecasting performance of the proposed LSTM models compares favourably with results reported in the literature.
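The data preparation common to such LSTM models can be sketched as a sliding-window transform: the hourly series is cut into fixed-length lookback windows, each paired with the next hour's value as the target. The series below is synthetic; the paper's data come from the Turkish Renewable Energy Resources Support Mechanism:

```python
import numpy as np

# Sliding-window supervision for sequence models: each 24-hour window of the
# load series predicts the following hour. Series values are invented.
def make_windows(series, lookback=24):
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return np.array(X), np.array(y)

load = np.sin(np.linspace(0, 20, 200))   # stand-in for an hourly load series
X, y = make_windows(load, lookback=24)
print(X.shape, y.shape)                  # (176, 24) (176,)
```

The resulting (samples, timesteps) array is the shape LSTM layers expect (with a trailing feature dimension added), and the same windows can be extended with weather covariates as additional features.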
Abstract: In the digital era, in which we face an avalanche of information, it is not news that the Internet has brought new modes of communication and knowledge access. Characterized by a multiplicity of discourses, opinions, beliefs and cultures, the web is a space of political-ideological dimensions where people (who often do not know each other) interact and create representations, deconstruct stereotypes, and redefine identities. Today, translators need to be able to deal with digital spaces ranging from specific software to social media, which inevitably affect their professional lives. One of the most impactful ways of being seen in cyberspace is participation in social networking groups. In addition to disseminating information among participants, social networking groups allow significant personal and social exposure. Such exposure is due to the visibility each participant achieves not only on their personal profile page but also in each comment or post they make in the groups. The objective of this paper is to study the representations of translators and the translation process on the Internet, more specifically in publications in two highly influential Brazilian groups on Facebook: "Translators/Interpreters" and "Translators, Interpreters and Curious". These groups represent the changes the network has brought to the profession, including the way translators are seen and see themselves. The analyzed posts allowed a reading of what common sense seems to think about the translator, as opposed to what translators seem to think about themselves as a professional class.
The results of the analysis lead to the conclusion that these two positions are antagonistic and sometimes represent conflicts of interest: on the one hand, society in general considers the translator's work easy and therefore not deserving of good remuneration; on the other hand, translators know how complex the translation process is and how much it takes to be a good professional. The results also reveal that social networking sites such as Facebook provide more visibility, but that a more active role is required from translators to achieve greater appreciation of the profession and more recognition of the translator's role, especially in the face of the increasing development of automatic translation programs.
Abstract: Minimizing the weight of flexible structures reduces material use and costs as well. However, such structures can become prone to vibrations. Attenuating these vibrations has become a pivotal engineering problem and the focus of many research endeavors. One technique for doing so is to design and implement an active control system, mainly composed of a vibrating structure, a sensor to perceive the vibrations, an actuator to counteract the influence of disturbances, and a controller to generate the appropriate control signals. In this work, two different techniques are explored to create two mathematical models of an active control system. The first is a finite element model with a reduced number of nodes, called a super-element. The second is a state-space representation, i.e. a set of first-order ordinary differential equations. The damping coefficients are calculated and incorporated into both models. The effectiveness of these models is demonstrated by exciting the system at its first natural frequency and by developing and implementing an active control strategy to attenuate the resulting vibrations. Results from both modeling techniques are presented and compared.
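The state-space form of the second model can be illustrated with a single-mode mass-spring-damper, x' = Ax + Bu, integrated with forward Euler. All parameter values are illustrative, not the paper's structure:

```python
import numpy as np

# Single-mode vibrating structure in state-space form: state x = [position,
# velocity], x' = A x + B u. Illustrative parameters only.
m, c, k = 1.0, 0.2, 100.0            # mass, damping coeff., stiffness
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([0.0, 1.0 / m])

dt, steps = 1e-3, 20000              # 20 s of simulated free response
x = np.array([0.01, 0.0])            # initial displacement, zero velocity
for _ in range(steps):
    u = 0.0                          # no control force: free decay
    x = x + dt * (A @ x + B * u)     # forward Euler step

print(abs(x[0]))                     # damping has shrunk the amplitude
```

A controller enters through u, e.g. velocity feedback u = -g * x[1], which effectively increases the damping term; the super-element model would supply the reduced A and B matrices for a full structure.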
Abstract: This paper presents a road vehicle detection approach for intelligent transportation systems. The approach uses a low-cost magnetic sensor and an associated data collection system to collect magnetic signals. The system measures changes in the magnetic field and can detect and count vehicles. We extend Mel Frequency Cepstral Coefficients (MFCC) to analyze vehicle magnetic signals. Vehicle type features are extracted using representations of the cepstrum, frame energy, and gap cepstrum of the magnetic signals. We design a two-dimensional map algorithm using Vector Quantization to classify vehicle magnetic features into four vehicle types typical of Australian suburbs: sedan, van, truck, and bus. Experimental results show that our approach achieves a high level of accuracy for vehicle detection and classification.
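The Vector Quantization step can be sketched as learning a small k-means codebook over feature vectors and labeling a new signal by its nearest code word. The feature values below are synthetic stand-ins for the magnetic-signal cepstra, and the two-class setup simplifies the paper's four vehicle types:

```python
import numpy as np

# Vector Quantization sketch: k-means codebook over (synthetic) cepstral
# features, classification by nearest code word.
def kmeans(X, k, iters=20, seed=0):
    rng = np.random.RandomState(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

rng = np.random.RandomState(1)
sedan = rng.normal(0.0, 0.1, (50, 3))   # toy feature cluster per class
truck = rng.normal(1.0, 0.1, (50, 3))
codebook = kmeans(np.vstack([sedan, truck]), k=2)

query = np.array([0.95, 1.05, 1.0])     # unseen truck-like feature vector
nearest = int(np.argmin(((codebook - query) ** 2).sum(-1)))
print(codebook[nearest])
```

With one codebook entry (or sub-codebook) per vehicle class, nearest-code-word lookup becomes the classifier, which is what keeps VQ attractive for low-cost roadside hardware.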
Abstract: Application interoperability is limited by the lack of a common vocabulary and common structure; ontology-based semantic technology built on application domain knowledge provides solutions that resolve these interoperability issues. Ontologies are broadly used in diverse applications such as artificial intelligence, bioinformatics, biomedicine, and information integration, and can be used to interpret the knowledge of various domains. To reuse and enrich the available ontologies and to reduce the duplication of ontologies within the same domain, there is a strong need to integrate the ontologies of a particular domain. The integrated ontology gives complete knowledge about the domain, which can be shared among groups as a comprehensive domain ontology. According to the literature, there is no well-defined methodology for representing the knowledge of a whole domain. The current research presents a systematic methodology for knowledge representation using multiple sub-ontologies at different levels, which addresses application interoperability and enables semantic information retrieval. The method represents complete knowledge of a domain by importing concepts from multiple sub-ontologies of the same and related domains, reducing ontology duplication, rework, and implementation cost through ontology reusability.
Abstract: From a multi-science point of view, we analyze threats to security resulting from the globalization of the international information space and from Russia's information and communication aggression. A definition of Ruschism is formulated as an ideology supporting the aggressive actions of modern Russia against the Euro-Atlantic community. The stages of the hybrid war Russia is waging against Ukraine are described, including elements of subversive activity by the special services, activation of the military phase, and the gradual shift of the confrontation toward information and communication technologies. We reveal an emerging threat to democratic states resulting from the destabilizing impact of a target state's mass media and social networks being exploited by the Russian secret services under the disguise of freedom of speech. We thus underline the vulnerability of the cyber- and information security of the network society with regard to hybrid war, which we propose to define as a synergetic war. Our analysis is supported by long-term qualitative monitoring of the representation of top state officials on popular TV channels and Facebook. From the point of view of memetics, we have detected a destructive psycho-information technology used by the Kremlin, a kind of information catastrophe, whose essence is explained in detail. In conclusion, a comprehensive plan for the information protection of the public consciousness and mentality of Euro-Atlantic citizens from the aggression of the enemy is proposed.
Abstract: Abrasive jet machining is a promising non-traditional machining process that uses mechanical energy (pressure and velocity) to machine various materials. The process parameters that influence the metal removal rate are the kerf, surface finish, depth of cut, air pressure, distance between nozzle and workpiece, nozzle diameter, abrasive type, abrasive shape, and mass flow rate of the abrasive particles. The abrasive particles ejected at high pressure not only strike the work surface but also erode the nozzle as they pass through it. This paper focuses mainly on the effect of different parameters on nozzle erosion in abrasive jet machining. Three types of nozzles, made of sapphire, tungsten carbide, and high carbon high chromium steel (HCHCS), are used for machining glass, and the erosion of these nozzles is calculated. The results are presented in tabular and graphical form.
Abstract: Empirical deterministic models have been developed to predict the roughness progression of heavy-duty spray-sealed pavements for a dataset representing rural arterial roads. The dataset provides a good representation of the relevant network and covers a wide range of operating and environmental conditions. A large sample of historical time-series data for many pavement sections was collected and prepared for use in multilevel regression analysis. The modelling parameters include road roughness as the performance parameter, and traffic loading, time, initial pavement strength, reactivity level of the subgrade soil, climate condition, and condition of the drainage system as predictor parameters. The purpose of this paper is to report the approaches adopted for model development and validation. The study presents multilevel models that account for the correlation among time-series data from the same section and capture the effect of unobserved variables. The results show that the models fit the data very well. The contribution and significance of the relevant influencing factors in predicting roughness progression are presented and explained. The paper concludes that the analysis approach used for developing the models confirmed their accuracy and reliability through a good fit to the validation data.