Emotion Detection in Twitter Messages Using Combination of Long Short-Term Memory and Convolutional Deep Neural Networks

One of the most significant issues to receive attention in recent years is the recognition of sentiments and emotions in social media texts. Sentiment and emotion analysis aims to recognize conceptual information such as the opinions, feelings, attitudes, and emotions of people towards products, services, organizations, people, topics, events, and features in written text. This breadth indicates the size of the problem space. In the real world, businesses and organizations are always looking for tools to gather the opinions, emotions, and inclinations of people about their products, services, or related events. This article uses the Twitter social network, one of the most popular social networks with about 420 million active users, to extract data. Through this social network, users can share information and opinions about personal issues, policies, products, events, and more. The availability of its data makes it well suited to the classification of emotional states. In this study, supervised learning and deep neural network algorithms are used to classify the emotional states of Twitter users. Given the large amount of available data, using deep learning methods to increase the learning capacity of the model is an advantage. Tweets collected on various topics are classified into four classes using a combination of two Bidirectional Long Short-Term Memory (BiLSTM) networks and a Convolutional network. The results, with an average accuracy of 93%, demonstrate the effectiveness of the proposed framework and an improvement in accuracy over previous work.
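As a point of reference, a minimal PyTorch sketch of the kind of architecture named above (two stacked BiLSTM layers feeding a convolutional layer and a four-class head) might look as follows; the layer sizes, vocabulary size, and sequence length are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch: embedding -> two stacked BiLSTM layers -> 1-D convolution
# -> max pooling -> 4-class classifier. All sizes are illustrative.
import torch
import torch.nn as nn

class BiLSTMCNN(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden=64, n_classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Two stacked bidirectional LSTM layers.
        self.bilstm = nn.LSTM(embed_dim, hidden, num_layers=2,
                              bidirectional=True, batch_first=True)
        # Convolution over the BiLSTM output sequence.
        self.conv = nn.Conv1d(2 * hidden, 100, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.fc = nn.Linear(100, n_classes)

    def forward(self, token_ids):                 # (batch, seq_len)
        x = self.embed(token_ids)                 # (batch, seq_len, embed_dim)
        x, _ = self.bilstm(x)                     # (batch, seq_len, 2*hidden)
        x = self.conv(x.transpose(1, 2)).relu()   # (batch, 100, seq_len)
        x = self.pool(x).squeeze(-1)              # (batch, 100)
        return self.fc(x)                         # 4-class logits

logits = BiLSTMCNN()(torch.randint(0, 20000, (8, 40)))  # batch of 8 tweets
```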

Cirrhosis Mortality Prediction as Classification Using Frequent Subgraph Mining

In this work, we use machine learning and data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models are applied to predict one-year mortality. A comprehensive feature space including demographic information, comorbidities, clinical procedures, and laboratory tests is analyzed, and a temporal pattern mining technique called Frequent Subgraph Mining (FSM) is used. The Model for End-Stage Liver Disease (MELD) prediction of mortality is used as a comparator. All of our models statistically significantly outperform the MELD-score model, showing an average 10% improvement in the area under the curve (AUC). The FSM technique by itself does not yield significant improvements, owing to the sparsity of the temporal information it requires; however, FSM combined with ensemble learning further improves model performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk of higher mortality. Our work applies modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients and builds a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.
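A hedged sketch of the evaluation pattern described above, comparing an ensemble of classifiers against a single-score baseline by AUC; the synthetic placeholder data, feature count, and model choices are assumptions rather than the study's actual pipeline:

```python
# Sketch: soft-voting ensemble vs. a single-score baseline, compared by AUC.
# Placeholder data stands in for the real EHR feature space and MELD scores.
import numpy as np
from sklearn.ensemble import (RandomForestClassifier,
                              GradientBoostingClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2322, 40))        # placeholder feature matrix
y = rng.integers(0, 2, size=2322)      # placeholder one-year mortality labels
meld = rng.uniform(6, 40, size=2322)   # placeholder MELD scores

X_tr, X_te, y_tr, y_te, _, meld_te = train_test_split(
    X, y, meld, test_size=0.2, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=300)),
                ("gb", GradientBoostingClassifier())],
    voting="soft")                     # average predicted probabilities
ensemble.fit(X_tr, y_tr)

auc_model = roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1])
auc_meld = roc_auc_score(y_te, meld_te)  # MELD used directly as a risk ranking
print(f"ensemble AUC {auc_model:.3f} vs MELD AUC {auc_meld:.3f}")
```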

Efficient Pre-Processing of Single-Cell Assay for Transposase Accessible Chromatin with High-Throughput Sequencing Data

The primary tool currently used to pre-process 10X Chromium single-cell ATAC-seq data is Cell Ranger, which can take a very long time to run on standard datasets. To facilitate rapid pre-processing that enables reproducible workflows, we present a suite of tools called scATAK for pre-processing single-cell ATAC-seq data that is 15 to 18 times faster than Cell Ranger on mouse and human samples. Our tool can also calculate chromatin interaction potential matrices and generate open chromatin signal and interaction traces for cell groups. We use the scATAK tool to explore the chromatin regulatory landscape of a healthy adult human brain and unveil cell-type-specific features, and we show that it provides a convenient and computationally efficient approach for pre-processing single-cell ATAC-seq data.
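For illustration only (this is not scATAK's implementation), the core product of such pre-processing is a sparse cell-by-peak count matrix built from barcoded fragments that overlap called peaks; a minimal sketch with placeholder data:

```python
# Illustrative sketch of the central pre-processing output: a sparse
# cell-by-peak accessibility count matrix. The (cell, peak) index pairs
# below are placeholders standing in for real fragment-overlap results.
import numpy as np
from scipy.sparse import coo_matrix

cells = np.array([0, 0, 1, 2, 2, 2])   # cell barcode index per fragment
peaks = np.array([5, 7, 5, 1, 1, 9])   # overlapping peak index per fragment

n_cells, n_peaks = 3, 10
counts = coo_matrix((np.ones_like(cells), (cells, peaks)),
                    shape=(n_cells, n_peaks)).tocsr()  # duplicates are summed
print(counts.toarray())   # rows = cells, columns = peaks
```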

The Use of Knowledge Management Systems and ICT Service Desk Management to Minimize the Digital Divide Experienced in the Museum Sector

Since the introduction of ServiceNow, the UK Science Museum Group's (SMG) ICT service desk portal, there has not been an analysis of the tools available to SMG staff for just-in-time knowledge acquisition (Knowledge Management Systems) and for reporting ICT incidents, with a focus on one aspect of professional identity, namely gender. It is therefore important for SMG to investigate the apparent disparities so that solutions can be derived to minimize this digital divide, if one exists. This study is conducted in the milieu of the UK museum, gallery, arts, academic, charitable, and cultural heritage sectors. It is acknowledged at SMG that there are challenges in keeping up with an ever-changing digital landscape. Consequently, this entails the rapid upskilling of staff and the development of an infrastructure that supports just-in-time technological knowledge acquisition and the reporting of technology-related issues. This problem was addressed by analysing ServiceNow ICT incident reports and knowledge article reports from a six-month period, February to July. This study found a statistically significant relationship between gender and reporting an ICT incident. There is also a significant relationship between gender and the priority level of ICT incidents. Interestingly, there is no statistically significant relationship between gender and reading knowledge articles. Additionally, there is no statistically significant relationship between gender and reporting an ICT incident related to the knowledge article that was read by staff. The knowledge acquired from this study is useful to service desk management practice, as it will help inform the creation of future knowledge articles and ICT incident reporting processes.
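A minimal sketch of the kind of association test implied above, a chi-square test of independence between gender and ICT incident reporting; the contingency counts are hypothetical placeholders, not the study's data:

```python
# Chi-square test of independence on a 2x2 contingency table of
# gender vs. whether an ICT incident was reported. Counts are hypothetical.
from scipy.stats import chi2_contingency

#            reported  not reported
table = [[120,  80],   # e.g. women
         [ 90, 150]]   # e.g. men
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")
```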

Comparison of Deep Convolutional Neural Networks Models for Plant Disease Identification

Identification of plant diseases has been performed using machine learning and deep learning models on datasets containing images of healthy and diseased plant leaves. The current study evaluates several deep learning models based on convolutional neural network (CNN) architectures for the identification of plant diseases. For this purpose, the publicly available New Plant Diseases Dataset, an augmented version of the PlantVillage dataset available on the Kaggle platform and containing 87,900 images, has been used. The dataset contains images of 26 diseases of 14 different plants and images of 12 healthy plants. The CNN models selected for this study are AlexNet, ZFNet, VGGNet (four models), GoogLeNet, and ResNet (three models). The selected models are trained using PyTorch, an open-source machine learning library, on Google Colaboratory. A comparative study has been carried out to analyze the accuracy achieved by these models. The highest test accuracy and F1-score, 99.59% and 0.996 respectively, were achieved using GoogLeNet with a mini-batch gradient descent learning algorithm with momentum.
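A minimal sketch of the named training setup, torchvision's GoogLeNet trained with mini-batch SGD plus momentum; the hyperparameters, the 38-class assumption (26 disease plus 12 healthy classes), and the data loader are illustrative:

```python
# GoogLeNet with mini-batch SGD + momentum. Learning rate, momentum, and
# class count are assumptions; `loader` is expected to yield image batches.
import torch
import torch.nn as nn
from torchvision import models

model = models.googlenet(num_classes=38, aux_logits=False, init_weights=True)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

def train_epoch(loader):
    model.train()
    for images, labels in loader:          # mini-batches from the dataset
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()                   # momentum-based SGD update
```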

A Mixed Approach to Assess Information System Risk, Operational Risk, and Congolese Microfinance Institutions Performance

Well-organized digitalization and information systems have been identified as relevant measures to mitigate operational risks within organizations. Unfortunately, information systems come with new threats that can cause severe damage and rapid organizational lockout. This study aims to measure perceived information system risks and their effects on operational risks within microfinance institutions in D.R. Congo. In addition, the factors influencing operational risk are to be identified, and the link between operational risk, other risks, and performance is to be assessed. The study proposes a research model drawn from a combination of the Resource-Based View, dynamic capabilities, agency theory, the Information System Security Model, and social theories of risk. We therefore suggest adopting mixed methods research with the aim of extending the existing literature on perceived operational risk assessment and its link with other risks and performance, with a focus on information system risks.

Psychodidactic Strategies to Facilitate the Flow of Logical Thinking in the Preparation of Academic Documents

The preparation of academic documents, such as theses, articles, and research projects, is one of the requirements of higher education. These documents demand logical argumentative thinking, which students experience and execute with difficulty. To mitigate these difficulties we designed a thesis seminar, with which we now have seven years of experience. It is taught in a graduate program in Psychology at the National Autonomous University of Mexico. In this seminar we use the Toulmin model as a mental heuristic and as a basis for applying a set of psychodidactic strategies that facilitate the development and completion of the thesis. The rate of degree completion in the groups exposed to the seminar has risen to 94%, compared with the 10% observed in the cohorts that were not exposed to it. In this article we emphasize the psychodidactic strategies used. The Toulmin model alone does not guarantee the success achieved; a set of teacher actions of a psychological (almost psychotherapeutic) and didactic nature also seems to contribute. These actions derive from an understanding of the psychological, epistemological, and ontogenetic obstacles, and of the most frequent errors into which thought tends to fall when a logical course is demanded of it. We have grouped the strategies into three groups: 1) strategies to facilitate logical thinking, 2) strategies to strengthen the scientific self, and 3) strategies to facilitate the act of writing the text. In this work we delve into each of them.

The Role of Synthetic Data in Aerial Object Detection

The purpose of this study is to explore the characteristics of developing a machine learning application using synthetic data. The study is structured around developing the application to the point of deploying the computer vision model. The findings discuss the realities of attempting to develop a computer vision model for a practical purpose and detail the processes, tools, and techniques that were used to meet accuracy requirements. The research reveals that synthetic data represent another variable that can be adjusted to improve the performance of a computer vision model. Further, a suite of tools and tuning recommendations are provided.

A Commercial Building Plug Load Management System That Uses Internet of Things Technology to Automatically Identify Plugged-In Devices and Their Locations

Plug and process loads (PPLs) account for a large portion of U.S. commercial building energy use. There is significant potential to reduce whole-building consumption by targeting PPLs for energy savings measures or implementing some form of plug load management (PLM). Despite this potential, there has yet to be a widely adopted commercial PLM technology. This paper describes the Automatic Type and Location Identification System (ATLIS), a PLM system framework with automatic and dynamic load detection (ADLD). ADLD gives PLM systems the ability to automatically identify devices as they are plugged into the outlets of a building. The ATLIS framework takes advantage of smart, connected devices to identify device locations in a building, meter and control their power, and communicate this information to a central database. ATLIS includes five primary capabilities: location identification, communication, control, energy metering, and data storage. A laboratory proof of concept (PoC) demonstrated all but the energy metering capability, and these capabilities were validated using a series of system tests. The PoC was able to identify when a device was plugged into an outlet and the location of the device in the building. When a device was moved, the PoC's dashboard and database were automatically updated with the new location. The PoC applied controls to devices from the system dashboard so that devices maintained correct schedules regardless of where they were plugged in within the building. ATLIS's primary technology application is improved PLM, but other applications include asset management, energy audits, and interoperability for grid-interactive efficient buildings. An ATLIS-based system could also be used to direct power to critical devices, such as ventilators, during a brownout or blackout. Such a framework is an opportunity to make PLM more widespread and to reduce the amount of energy consumed by PPLs in current and future commercial buildings.
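As an illustration of the ADLD bookkeeping described above (the table and field names are assumptions, not ATLIS's actual schema), a plug-in event might upsert the device's location in the central database like this:

```python
# Sketch: when an outlet reports that a device was plugged in (or moved),
# the central database row for that device is updated with its new location.
# Schema and identifiers are illustrative assumptions.
import sqlite3

db = sqlite3.connect(":memory:")  # a real deployment would use a persistent store
db.execute("""CREATE TABLE IF NOT EXISTS device_location (
                  device_id TEXT PRIMARY KEY,
                  outlet_id TEXT,
                  room      TEXT,
                  last_seen TEXT DEFAULT CURRENT_TIMESTAMP)""")

def on_plug_event(device_id, outlet_id, room):
    """Upsert the device's location when a plug-in event arrives."""
    db.execute("""INSERT INTO device_location (device_id, outlet_id, room)
                  VALUES (?, ?, ?)
                  ON CONFLICT(device_id) DO UPDATE
                  SET outlet_id = excluded.outlet_id,
                      room = excluded.room,
                      last_seen = CURRENT_TIMESTAMP""",
               (device_id, outlet_id, room))
    db.commit()

on_plug_event("laptop-4f2a", "outlet-3", "room-204")  # device moved rooms
```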

An Overview of Technology Availability to Support Remote Decentralized Clinical Trials

Developing new medicines and health solutions and improving patient health currently rely on the successful execution of clinical trials, which generate the relevant safety and efficacy data. For a trial's success, recruitment and retention of participants are among the most challenging aspects of protocol adherence. The main barriers include: i) lack of awareness of clinical trials; ii) long distance from the clinical site; iii) the burden on participants, including the duration and number of clinical visits; and iv) high dropout rates. Most of these aspects could be addressed by a new paradigm, the Remote Decentralized Clinical Trial (RDCT). Furthermore, the COVID-19 pandemic has highlighted additional advantages and challenges for RDCTs in practice, such as allowing participants to join trials from home without depending on site visits. Nevertheless, RDCTs should follow the processes and quality assurance of conventional clinical trials, which involve several component processes. For each part of the trial (the Building Blocks), existing software and technologies were assessed through a systematic search. The technology needed to perform RDCTs is widely available and validated but remains segmented and developed in silos, as different software solutions address different parts of the trial at various levels. The current paper analyzes the availability of technology to perform RDCTs, identifies gaps, and provides an overview of the basic Building Blocks and functionalities that need to be covered to support the described processes.

Gas Injection Transport Mechanism for Shale Oil Recovery

The United States is now energy self-sufficient due to the production of its shale oil reserves. With shale oil accounting for more than half of the oil tapped daily in the United States, these unconventional reserves are massive and provide immense potential for meeting future energy demands. Drilling horizontal wells and fracking are the primary methods for developing these reserves. Regrettably, recovery efficiency is rarely greater than 10%. Gas injection enhanced oil recovery offers a significant benefit in optimizing the recovery of shale oil, whether through huff-and-puff, gas flooding, or cyclic gas injection. Methane, nitrogen, and carbon(IV) oxide, among other high-pressure gases, can be injected. Operators use Darcy's law to assess a reservoir's productive capacity, often without recognizing that the law may not apply to shale oil reserves. This is because the flow of gas injected into the wellbore is governed not by pressure differences alone: diffusion, concentration gradients, and gas selection all play a role. The reservoir drainage and oil sweep efficiency rates are determined by the transport mechanism. This research evaluates the parameters that influence the gas injection transport mechanism. Understanding this process could accelerate recovery by a factor of two to three.
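For reference, Darcy's law gives the volumetric flux as proportional to the pressure gradient alone; a minimal advective-diffusive flux expression (written here in standard notation assumed for illustration, not taken from the paper) shows where concentration-driven transport enters:

```latex
% Darcy's law: volumetric flux q driven by the pressure gradient, with
% permeability k and fluid viscosity \mu.
\mathbf{q} = -\frac{k}{\mu}\,\nabla p

% Advective-diffusive molar flux of the injected gas (illustrative):
% c is the gas concentration and D its diffusion coefficient, so the
% second term captures transport that Darcy's law alone does not.
\mathbf{J} = c\,\mathbf{q} - D\,\nabla c
```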

Titanium Dioxide Modified with Glutathione as Potential Drug Carrier with Reduced Toxic Properties

The paper presents a process for obtaining glutathione-modified titanium dioxide nanoparticles. The processes were carried out in a microwave radiation field. The influence of the molar ratio of glutathione to titanium dioxide, and the effect of the excess of NaOH relative to the stoichiometric amount, on the size of the formed TiO2 nanoparticles was determined. The physicochemical properties of the obtained products were evaluated using dynamic light scattering (DLS), transmission electron microscopy with energy-dispersive X-ray spectroscopy (TEM-EDS), the low-temperature nitrogen adsorption method (BET), X-ray diffraction (XRD), and Fourier-transform infrared (FTIR) spectroscopy. The sizes of the TiO2 nanoparticles ranged from 30 nm to 336 nm. The release of titanium ions from the prepared products was evaluated. These studies were carried out using different media in which the powders were incubated for a specified time: water, simulated body fluid (SBF), and Ringer's solution. The release of titanium ions from the modified products is weaker than from unmodified titanium dioxide nanoparticles. This reduced release of titanium ions may allow such modified materials to be used in drug delivery systems.

Performance Evaluation of Minimum Quantity Lubrication on EN3 Mild Steel Turning

Lubrication, cooling, and chip removal are the desired functions of any cutting fluid. Conventional (flood) lubrication requires a high volume flow rate, and the associated cost is high. In addition, flood lubrication poses health risks to the machine operator. To avoid these consequences, dry machining and minimum quantity lubrication (MQL) are two alternatives. Dry machining is not a suitable alternative, as it generates more heat and a poorer surface finish. Here, turning work is carried out on a lathe using EN3 mild steel. Cutting speed and depth of cut were varied, and the corresponding temperatures and surface roughness values were recorded. The experimental results are analyzed with Minitab software: conclusions are drawn from regression analysis, main effect plots, and interaction plots using ANOVA. There is a 95.83% reduction in the use of cutting fluid. MQL gives a 9.88% reduction in tool temperature, which will improve tool life, and produces a 17.64% improvement in surface finish. MQL thus appears to be an economical and environmentally compatible lubrication technique for sustainable manufacturing.
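A sketch of the same analysis pattern using statsmodels in place of Minitab: a regression of tool temperature on cutting speed and depth of cut with an interaction term, followed by an ANOVA table; the data values and column names are hypothetical placeholders:

```python
# Regression + ANOVA of tool temperature vs. cutting speed and depth of cut.
# The data frame below is a hypothetical stand-in for the recorded readings.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "speed":  [45, 45, 45, 71, 71, 71, 112, 112, 112],        # m/min (assumed)
    "depth":  [0.2, 0.4, 0.6, 0.2, 0.4, 0.6, 0.2, 0.4, 0.6],  # mm (assumed)
    "temp_c": [38, 41, 45, 42, 46, 51, 47, 52, 58],
})

model = smf.ols("temp_c ~ speed * depth", data=df).fit()  # with interaction
print(sm.stats.anova_lm(model, typ=2))   # main effects and interaction
```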

Separation of Composites for Recycling: Measurement of Electrostatic Charge of Carbon and Glass Fiber Particles

Composite waste from manufacturing can consist of different fiber materials, including blends of different fibers. Commercially, the recycling of composite waste is currently limited to carbon fiber waste; recycling glass fiber waste is not economically viable due to the low cost of virgin glass fiber and the reduced mechanical properties of the recovered fibers. For this reason, hybrid fiber materials, in which carbon fiber is blended with glass fibers, cannot be processed economically. Therefore, a separation method is required to remove the glass fiber materials during the recycling process. An electrostatic separation method is chosen for this work because of the significant difference between the electrical properties of carbon and glass fibers. In this study, an experimental rig has been developed to measure the electrostatic charge achievable as the materials are passed through a tube. A range of particle lengths (80-100 µm, 6 mm, and 12 mm), surface state conditions (0%, 2%, and 6% sizing agent, SA), and several tube wall materials have been studied. A polytetrafluoroethylene (PTFE) tube and recycled fiber without sizing agent were identified as the most suitable parameters for the electrostatic separation method. It was also found that shorter fiber lengths helped to encourage particle flow and attain higher charge values. These findings can be used to develop a separation process that enables the cost-effective recycling of hybrid fiber composite waste.

Energy Management System with Temperature Rise Prevention on Hybrid Ships

Marine shipping has become one of the major worldwide contributors to pollution and greenhouse gas emissions. Hybrid ship technology based on multiple energy sources has attracted a great deal of research aimed at eliminating ship emissions and cutting fuel expenses. A mismatch between the power generated and the demand load needed to withstand transient behavior during severe climate conditions will lead to a blackout. Thus, an efficient energy management system (EMS) is mandatory for achieving higher system efficiency, and enhancing the lifetime of the onboard storage systems is another salient EMS objective. Regarding energy storage system conditions, both the battery state of charge (SOC) and its temperature are important parameters for preventing any malfunction of the storage system that would eventually degrade the whole system. In this paper, a fuzzy logic control model that allocates the current ratio between two battery packs is proposed. The overall aim is to control the charging/discharging current while including both the battery SOC and temperature in the energy management system. The full designs of the proposed controllers are described and simulated using MATLAB. The results demonstrate the success of the proposed controller in stabilizing the system voltage during both loading and unloading while keeping the energy storage system in a healthy condition.
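A minimal hand-rolled sketch of the kind of fuzzy rule described above, mapping one pack's SOC and temperature to its share of the charging/discharging current; the membership shapes, rule base, and numbers are illustrative assumptions, not the paper's controller:

```python
# Toy Mamdani-style fuzzy rule: SOC and temperature of one battery pack
# determine its fraction of the total current. All shapes/values assumed.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def pack_share(soc, temp_c):
    """Return the fraction of total current assigned to this pack (0..1)."""
    soc_low, soc_high = tri(soc, 0, 0, 60), tri(soc, 40, 100, 100)
    temp_ok, temp_hot = tri(temp_c, -10, 20, 45), tri(temp_c, 35, 60, 60)

    # Rule base (max-min inference):
    #   high SOC and moderate temperature -> large share (0.8)
    #   low SOC or hot pack               -> small share (0.2)
    w_large = min(soc_high, temp_ok)
    w_small = max(soc_low, temp_hot)

    # Weighted-average defuzzification of the two output singletons.
    if w_large + w_small == 0:
        return 0.5
    return (0.8 * w_large + 0.2 * w_small) / (w_large + w_small)

# Example: a warm, half-charged pack gets a moderate share of the load.
print(round(pack_share(soc=55, temp_c=38), 2))
```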

Software Product Quality Evaluation Model with Multiple Criteria Decision Making Analysis

This paper presents a software product quality evaluation model based on the ISO/IEC 25010 quality model. The evaluation characteristics and subcharacteristics were identified from the ISO/IEC 25010 quality model. The multidimensional structure of the quality model is based on characteristics such as functional suitability, performance efficiency, compatibility, usability, reliability, security, maintainability, and portability, together with their associated subcharacteristics. Random numbers are generated to establish the decision maker's importance weights for each subcharacteristic. Random numbers are likewise generated to establish the decision matrix of the decision maker's final scores for each software product against each subcharacteristic. Thus, objective criteria importance weights and index scores for the datasets were obtained from the random numbers. In the proposed model, five different software product quality evaluation datasets under three different weight vectors were analyzed with the multiple criteria decision analysis (MCDMA) method preference analysis for reference ideal solution (PARIS), followed by a comparison and a sensitivity analysis procedure. This study contributes to a better understanding of the application of MCDMA methods and the ISO/IEC 25010 quality model guidelines in the software product quality evaluation process.
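An illustrative sketch of the random-weight setup described above: normalized importance weights and a random decision matrix drive a simple weighted-sum quality index. This is a simplified aggregation shown for illustration, not the PARIS algorithm itself, and the counts are assumptions:

```python
# Random normalized weights + random decision matrix -> weighted-sum index.
import numpy as np

rng = np.random.default_rng(seed=42)
n_products, n_subchars = 5, 31          # counts are assumptions

weights = rng.random(n_subchars)
weights /= weights.sum()                # importance weights sum to 1

scores = rng.uniform(1, 10, size=(n_products, n_subchars))  # decision matrix

overall = scores @ weights              # weighted-sum quality index
ranking = np.argsort(-overall)          # best product first
print(overall.round(3), ranking)
```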

Sustainable Balanced Scorecard for Kaizen Evaluation: Comparative Study between Egypt and Japan

Continuous improvement activities are becoming a key organizational success factor; these activities include, but are not limited to, kaizen, six sigma, lean production, and continuous improvement projects. Kaizen is a Japanese philosophy of continuous improvement through small incremental changes that improve an organization's performance, reduce costs, reduce delay time, reduce waste in production, and so on. This research aims to propose a measuring system for kaizen activities from a sustainable balanced scorecard perspective. A survey was developed and disseminated among kaizen experts in both Egypt and Japan with the purpose of allocating key performance indicators for both the kaizen process (critical success factors) and its results (kaizen benefits) across the five sustainable balanced scorecard perspectives. This research contributes to the extant literature by presenting a kaizen measurement of both process and results that illuminates the benefits of using kaizen. The presented measurement can also help sustain kaizen implementation across various sectors and industries; grasping the full benefits of kaizen implementation will contribute to the spread of kaizen understanding and practice. This research also provides insights into the social and cultural differences that influence kaizen success. Determining the proper combination of kaizen measures could benefit any industry, whether service or manufacturing, through better measurement of kaizen activities. The comparison between the Japanese implementation of kaizen, as the pioneers of continuous improvement, and the Egyptian implementation will help recommend better kaizen practices in Egypt and contribute to the 2030 sustainable development goals. The study results reveal no significant difference in the allocation of kaizen benefits between Egypt and Japan. However, with regard to the critical success factors, some differences appeared, reflecting the social differences and understanding between the two countries. A single integrated measurement was reached between the Egyptian and Japanese allocations, with the Japanese experts' opinion taken as the ultimate criterion for selection.

Technological Applications in Automobile Manufacturing Sector: A Case Study Analysis

This research focuses on the applicable technologies in the automobile industry and their effects on the productivity and annual revenue of the industry. A study has been conducted on six major automobile manufacturers, represented in this research as M1, M2, M3, M4, M5, and M6. The results indicate that M1, a pioneer in technological applications, remains the market leader, with M5 and M2 taking the second and third positions, respectively. M3, M6, and M4 are the followers, placed in the next positions. It has also been observed that M1 and M2 have entered into an agreement to share basic structural technologies, and that they maintain long-term, trusted relationships with their suppliers through the keiretsu system. With technological giants such as Apple, Microsoft, Uber, and Google entering the automobile industry in recent years, an upward trend is expected in the future market, with self-driving cars set to dominate the automobile sector. To keep up with this trend, it is essential for automobile manufacturers to understand the importance of developing the technological capabilities and skills needed to be competitive in the marketplace.

Adaptive Control Strategy of Robot Polishing Force Based on Position Impedance

Manual polishing suffers from problems such as high labor intensity, low production efficiency, and difficulty in guaranteeing consistent polishing quality. Using robot polishing instead of manual polishing can effectively avoid these problems. Polishing force directly affects polishing quality, so accurate tracking and control of the polishing force is one of the most important conditions for improving the accuracy of robot polishing. Traditional force control strategies find it difficult to adapt to the strong coupling between force control and position control during robot polishing. Therefore, based on an analysis of force-based impedance control and position-based impedance control, this paper proposes an adaptive controller. Built on the force-feedback form of active compliance control, the controller can adaptively estimate the stiffness and position of the external environment and eliminate the steady-state force error produced by traditional impedance control. Simulation results show that the adaptive controller adapts well to changing environmental positions and environmental stiffness, and can accurately track and control the polishing force.
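A toy simulation of the idea: position-based impedance control in which a gradient-style update of the environment stiffness estimate removes the steady-state force error. The gains, the spring contact model, and the update law are illustrative assumptions, not the paper's controller, and only the stiffness is adapted here (the environment position is taken as known):

```python
# Position-based impedance control with adaptive environment-stiffness
# estimation. All parameters and the contact model are illustrative.
m, b, k = 1.0, 80.0, 400.0        # target impedance parameters
k_env, x_env = 5000.0, 0.0        # true (unknown) environment stiffness/position
f_d = 10.0                        # desired polishing force [N]
dt, steps = 1e-3, 3000

k_hat = 1000.0                    # initial stiffness estimate
gamma = 0.5                       # adaptation gain
x_c, dx_c = 0.005, 0.0            # commanded tool position and its velocity

for _ in range(steps):
    x = x_c                                   # assume ideal position tracking
    f_ext = max(k_env * (x - x_env), 0.0)     # measured contact force (spring)
    x_ref = x_env + f_d / k_hat               # deflection that should give f_d
    # Impedance filter: m*e'' + b*e' + k*e = f_d - f_ext, with e = x_c - x_ref
    e = x_c - x_ref
    ddx_c = (f_d - f_ext - b * dx_c - k * e) / m
    dx_c += ddx_c * dt
    x_c += dx_c * dt
    # Gradient-style stiffness adaptation from the force prediction error:
    # as k_hat -> k_env, the steady-state force error vanishes.
    f_pred = k_hat * max(x - x_env, 0.0)
    k_hat += gamma * (f_ext - f_pred)

print(f"k_hat = {k_hat:.0f} (true {k_env:.0f}); "
      f"force = {max(k_env * (x_c - x_env), 0.0):.2f} N (target {f_d} N)")
```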

Additive Friction Stir Manufacturing Process: Interest in Understanding Thermal Phenomena and Numerical Modeling of the Temperature Rise Phase

Additive Friction Stir Manufacturing, or AFSM, is a new industrial process that follows the emergence of friction-based processes. AFSM is a solid-state additive process that uses the energy produced by friction at the interface between a rotating non-consumable tool and a substrate. Friction depends on various parameters such as the axial force, the rotation speed, and the friction coefficient. The feed material is a metallic rod that flows through a hole in the tool. There is still a lack of understanding of the physical phenomena taking place during the process. This research aims at a better understanding and implementation of the AFSM process through numerical simulation and experimental validation performed on a prototype effector. Such an approach is a promising way to study the influence of the process parameters and, ultimately, to identify a relevant process window. The deposition of material through the AFSM process takes place in several phases; in chronological order these are the docking phase, the dwell time phase, the deposition phase, and the removal phase. The present work focuses on the dwell time phase, during which the temperature of the system rises due to pure friction. An analytical model of friction-based heat generation takes the rotational speed and the contact pressure as its main parameters. Another influential parameter is the friction coefficient, assumed to be variable due to the self-lubrication of the system as temperature rises and to the smoothing of the contacting materials' roughness over time. Through numerical modeling followed by experimental validation, this study examines the influence of the various input parameters on the dwell time phase. Rotation speed, temperature, spindle torque, and axial force are the main parameters monitored during the experiments and serve as reference data for the calibration of the numerical model. This research shows that the geometry of the tool, as well as fluctuations of input parameters such as axial force and rotational speed, strongly influence the temperature reached and/or the time required to reach the targeted temperature. The main outcome is the prediction of a process window, which is a key result for a more efficient process implementation.
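As a point of reference, the standard analytic form of frictional heat generation under a flat rotating tool (symbols assumed here for illustration, not taken from the paper) links the monitored quantities, rotational speed and axial force, to the heat input during the dwell phase:

```latex
% Local frictional heat flux at radius r under a rotating tool of radius R,
% with friction coefficient \mu, contact pressure p, and angular speed \omega:
q(r) = \mu \, p \, \omega \, r

% Integrating over the circular contact area gives the total heat input,
% rewritten in terms of the axial force F_z = p \pi R^2:
Q = \int_0^{R} \mu \, p \, \omega \, r \cdot 2 \pi r \, \mathrm{d}r
  = \frac{2}{3} \pi \mu \, p \, \omega \, R^{3}
  = \frac{2}{3} \mu \, \omega \, R \, F_z
```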