The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Digital investigators often struggle to identify evidence within digital information, and it has become difficult to determine which source of proof relates to a specific investigation. A growing concern is that the processes, technology, and procedures used in digital investigations are not keeping pace with criminal developments, and criminals are exploiting these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime. The goal of digital forensics and digital investigation is to provide objective data and conduct an assessment that will assist in developing a plausible theory that can be presented as evidence in court. This research paper aims to develop a multiagent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The proposed framework is implemented using the Java Agent Development Framework, Eclipse, a Postgres repository, and a rule engine for agent reasoning. The framework was tested using the Lone Wolf image files and datasets, with experiments conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. Loading the agents cost 5% of the time; the File Path Agent prescribed deleting 1,510 files, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
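
As an illustration of the kind of task an ISA such as the Hash Set Agent performs, here is a minimal Python sketch (the framework itself is Java/JADE-based; the hash set and evidence directory below are illustrative assumptions) that hashes evidence files and matches them against a known hash set:

import hashlib
from pathlib import Path

# Hypothetical known-file hash set; a real investigation would load
# thousands of entries from a reference database.
KNOWN_HASHES = {
    "5d41402abc4b2a76b9719d911017c592",
}

def md5_of(path, chunk_size=1 << 20):
    """Stream the file in chunks so large evidence files fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def scan(evidence_dir):
    # Walk every file under the evidence directory and flag hash-set hits.
    for path in Path(evidence_dir).rglob("*"):
        if path.is_file():
            digest = md5_of(path)
            if digest in KNOWN_HASHES:
                print(f"HIT  {digest}  {path}")

scan("./evidence")  # e.g., files carved from a forensic image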

Impact of Network Workload between Virtualization Solutions on a Testbed Environment for Cybersecurity Learning

The adoption of modern lightweight virtualization often brings new threats and network vulnerabilities. This paper assesses these risks with a different approach: studying the behavior of a testbed built with tools such as Kernel-based Virtual Machine (KVM), LinuX Containers (LXC), and Docker by performing stress tests on a platform where students experiment simultaneously with cyber-attacks, in order to observe the impact on the campus network and identify the best solution for cybersecurity learning. Interesting comparisons of these technologies can be found in the literature; it is, however, difficult to find results on the effects such experiments have on the wider network around them. Our work shows that other physical hosts and the faculty network were affected while these trials were performed. The problems found are discussed, along with security solutions and the adoption of new network policies.
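
For a flavor of what such a stress test involves, here is a minimal, bounded UDP load-generator sketch; the target address, packet size, and duration are illustrative assumptions, and anything like this should only ever be run against hosts you control on a network segregated from production:

import socket
import time

TARGET = ("10.0.0.10", 9999)   # hypothetical VM/container inside the testbed
PAYLOAD = b"x" * 1400          # near-MTU datagram
DURATION_S = 10                # bounded run, unlike an open-ended flood

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent, start = 0, time.monotonic()
while time.monotonic() - start < DURATION_S:
    sock.sendto(PAYLOAD, TARGET)
    sent += 1
sock.close()

elapsed = time.monotonic() - start
# Report the approximate offered load so runs can be compared across KVM/LXC/Docker.
print(f"sent {sent} datagrams (~{sent * len(PAYLOAD) * 8 / elapsed / 1e6:.1f} Mbit/s)")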

Possibilities for Testing User Experience and User Interface Design on Mobile Devices

In an era when everything is increasingly digital, consumers are always looking for new options in solutions to their everyday needs. In this context, mobile apps are developing at an exponential pace. One of the fastest growing segments of mobile technologies is, obviously, e-commerce. It can be predicted that mobile commerce will record nearly three times the global growth of e-commerce across all platforms, which indicates its importance in the given segment. The current coronavirus pandemic is also changing many of the existing paradigms both socially, economically, and technologically, which has a major impact on changing consumer behavior and the emphasis on simplification and clarity of mobile solutions. This is the area that User Experience (UX) and User Interface (UI) designers deal with. Their task is to design a sufficiently attractive and interesting solution that will be available on all mobile devices and at the same time will be easy enough for the customer/visitor to get to the destination or to get the necessary information in a few clicks. The basis for changes in UX design can now be obtained not only through online analytical tools, but also through neuromarketing, especially in the case of mobile devices. The paper highlights possibilities for testing UX design applications on mobile devices using a special platform that combines a stationary eye camera (eye tracking) and facial analysis (facial coding).

WormHex: A Volatile Memory Analysis Tool for Retrieval of Social Media Evidence

Social media applications are increasingly used in our everyday communications. These applications employ end-to-end encryption, which makes them attractive tools for criminals to exchange messages. Such messages are preserved in volatile memory until the device is restarted, so volatile memory forensics has become an important branch of digital forensics. In this study, the WormHex tool was developed to inspect memory dump files from Windows- and Mac-based workstations. The tool supports digital investigators by enabling them to extract valuable data written in Arabic and English through the web-based WhatsApp and Twitter applications. The results confirm that social media applications write their data into memory regardless of the operating system running the application, with no major differences observed between Windows and Mac.
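
A minimal sketch of the general approach such a tool takes, scanning a raw memory dump for message-like byte patterns; the file name and regular expressions are illustrative assumptions, not WormHex's actual signatures, and real WhatsApp/Twitter artifacts have richer structure:

import mmap
import re

DUMP = "memdump.raw"  # hypothetical memory dump file

patterns = {
    # Arabic script encoded in UTF-8 uses lead bytes 0xD8-0xDB.
    "utf8_arabic_run": re.compile(rb"(?:[\xd8-\xdb][\x80-\xbf]){4,}"),
    "twitter_handle":  re.compile(rb"@[A-Za-z0-9_]{3,15}"),
    "phone_number":    re.compile(rb"\+\d{10,14}"),
}

# Memory-map the dump so multi-gigabyte files can be searched without
# loading them fully into RAM; re works directly on the mapped bytes.
with open(DUMP, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mem:
    for name, pat in patterns.items():
        for m in pat.finditer(mem):
            print(f"{name} @ offset {m.start():#x}: {m.group()[:60]!r}")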

Design and Analysis of a Low-Power, High-Speed and Area-Efficient 2-Bit Digital Magnitude Comparator in 90 nm CMOS Technology Using Gate Diffusion Input

Digital magnitude comparators based on the Gate Diffusion Input (GDI) implementation technique are fast and area-efficient, and they consume less power than other implementation techniques. However, they are less efficient for some logic gates and do not provide full voltage swing. In this paper, we compare the GDI implementation technique with other implementation methods, such as static CMOS, Pass Transistor Logic (PTL), and Transmission Gate (TG), in 90 nm, 120 nm, and 180 nm CMOS technologies using the BSIM4 MOS model. We propose a hybrid implementation methodology for digital magnitude comparators that significantly improves the power, speed, area, and voltage swing characteristics. Simulation results reveal that the hybrid implementation improves on the usual GDI implementation by 10.84% in power dissipation, 41.6% in propagation delay, and 47.95% in power-delay product (PDP). We used Microwind & Dsch Version 3.5 as well as the Tanner EDA 16.0 tools for simulation.
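
For reference, the power-delay product is the product of average power dissipation and propagation delay, so the three reported improvements are mutually consistent:

\mathrm{PDP} = P_{\mathrm{avg}} \cdot t_{pd}, \qquad (1 - 0.1084)\,(1 - 0.416) \approx 0.521 \;\Rightarrow\; \text{a } 47.9\% \text{ reduction in PDP,}

which agrees with the reported 47.95% up to rounding.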

Cyber Security Enhancement via Software-Defined Pseudo-Random Private IP Address Hopping

Obfuscation is one of the most useful tools for preventing network compromise. Previous research focused on obfuscating the network communications between external-facing edge devices. This work proposes the use of two edge devices, one external-facing and one internal-facing, which communicate via private IPv4 addresses using software-defined pseudo-random IP hopping. The methodology requires no additional IP addresses or resources to implement. Statistical analyses demonstrate that the hopping surface must contain at least 1e3 IP addresses, with a broad standard deviation, to minimize the probability that a monitored IP coincides with one in active use. Breaking the hopping algorithm requires collecting at least 1e6 samples, which for large hopping surfaces would take years. The probability of dropped packets is controlled via memory buffers and the frequency of hops, and can be reduced to levels acceptable for video streaming. This methodology provides an impenetrable layer of security ideal for information systems and supervisory control and data acquisition (SCADA) systems.
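
A minimal sketch of the core idea, assuming a shared secret key and time-slotted hops (the key, address pool, and hop interval are illustrative, not the paper's parameters); both endpoints derive the same address for each slot:

import hashlib
import ipaddress
import time

SECRET = b"shared-key-provisioned-out-of-band"
POOL = ipaddress.ip_network("10.20.0.0/22")  # 1,024 private addresses, >= the 1e3 size requirement
HOP_INTERVAL_S = 5

def address_for_slot(slot: int) -> ipaddress.IPv4Address:
    """Both endpoints derive the same address from the key and the time slot."""
    digest = hashlib.sha256(SECRET + slot.to_bytes(8, "big")).digest()
    index = int.from_bytes(digest[:8], "big") % POOL.num_addresses
    return POOL[index]

# The current slot is a function of synchronized wall-clock time, so no
# coordination traffic is needed between the two edge devices.
slot = int(time.time()) // HOP_INTERVAL_S
for s in range(slot, slot + 5):  # the next five hops, identical on both ends
    print(f"slot {s}: {address_for_slot(s)}")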

Clustering for Detection of Population Groups at Risk from Anticholinergic Medication

Anticholinergic medication has been associated with adverse events such as falls, delirium, and cognitive impairment in older patients. To quantify this risk, anticholinergic burden scores have been developed. In this work, a clustering-based risk model was deployed in a healthcare management system to group patients into risk groups according to the anticholinergic burden scores of the multiple medicines prescribed to them, in order to facilitate clinical decision-making. Anticholinergic burden scores of drugs were extracted from the literature, which categorizes the risk on a scale of 1 to 3. Given the patients' prescription data in the healthcare database, a weighted anticholinergic risk score was derived per patient based on the prescription of multiple anticholinergic drugs. The study was conducted on 300,000 records of patients currently registered with a major regional UK-based healthcare provider. The weighted risk scores were used as inputs to an unsupervised learning algorithm (mean-shift clustering) that groups patients into clusters representing different levels of anticholinergic risk. This work evaluates the association between the average risk score and measures of socioeconomic status (index of multiple deprivation) and health (index of health and disability). The clustering identifies a group of 15 patients at the highest risk from multiple anticholinergic medications. Our findings show that this group is located within more deprived areas of London compared with the other risk groups. Furthermore, the prescription of anticholinergic medicines is skewed toward female rather than male patients, suggesting that females are more at risk from this kind of multiple medication. The risk may be monitored and controlled in a healthcare management system equipped with tools implementing appropriate artificial intelligence techniques.
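
For illustration, a minimal sketch of the clustering step using scikit-learn's mean-shift implementation on synthetic one-dimensional risk scores (the score distribution is invented; the study's real scores come from prescription records):

import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

# Synthetic weighted risk scores: a large low-risk group, a moderate group,
# and a small high-risk tail, loosely mirroring the structure described.
rng = np.random.default_rng(42)
scores = np.concatenate([
    rng.normal(1.0, 0.3, 500),   # low-risk patients
    rng.normal(4.0, 0.6, 150),   # moderate risk
    rng.normal(9.0, 1.0, 15),    # small highest-risk group
]).reshape(-1, 1)

# Mean-shift needs no preset cluster count; the bandwidth controls granularity.
bandwidth = estimate_bandwidth(scores, quantile=0.2, random_state=42)
labels = MeanShift(bandwidth=bandwidth).fit_predict(scores)

for label in np.unique(labels):
    members = scores[labels == label]
    print(f"cluster {label}: n={len(members)}, mean score={members.mean():.2f}")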

The CommonSense Platform for Conducting Multiple-Participant Field Experiments Using Mobile Phones

This paper presents CommonSense, a platform that provides researchers with the infrastructure and tools for the efficient and smooth creation, execution, and processing of multiple-participant experiments taking place outside the laboratory environment. The platform accompanies the researcher throughout the life cycle of an experiment: from its definition, through the setup and configuration of the platform and the management of the experiment itself, to its post-processing and termination. Some of the components that support these processes are constructed and configured automatically from the experiment definition.

Elegant: An Intuitive Software Tool for Interactive Learning of Power System Analysis

A common complaint from power system analysis students concerns the overly complex tools they must learn and use just to simulate very basic systems or to check the answers to power system calculations. The most basic power system studies are power-flow solutions and short-circuit calculations. This paper presents a simple tool with an intuitive interface to perform both of these studies and assesses its performance against existing commercial solutions. Elegant is a pure Python software tool for learning power system analysis, developed for undergraduate and graduate students. It solves the power-flow problem by iterative numerical methods and calculates bolted short-circuit fault currents by modeling the network in the domain of symmetrical components. Elegant can be used through a user-friendly Graphical User Interface (GUI) and automatically generates human-readable reports of the simulation results. The tool is exemplified using a typical Brazilian regional system with 18 buses. A comparative experiment was performed with one undergraduate and four graduate students, who attempted the same problem using both Elegant and a commercial tool. It was found that Elegant significantly reduces the time and labor involved in basic power system simulations while still providing insights into real power system designs.
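
As a flavor of the iterative numerical methods such a tool can use, the sketch below runs a Gauss-Seidel power-flow iteration on a small 3-bus example; the network data are illustrative assumptions, not Elegant's internals or the 18-bus Brazilian system:

import numpy as np

# 3-bus example: bus 0 is the slack bus with fixed voltage; buses 1 and 2
# are PQ buses with scheduled injections in per unit (loads are negative).
Y = np.array([[10-30j, -5+15j, -5+15j],
              [-5+15j, 10-30j, -5+15j],
              [-5+15j, -5+15j, 10-30j]])   # bus admittance matrix (p.u.)
S = np.array([0+0j, -0.8-0.4j, -0.6-0.3j]) # scheduled injections (p.u.)
V = np.array([1.05+0j, 1.0+0j, 1.0+0j])    # initial guesses; slack stays fixed

for it in range(100):
    V_old = V.copy()
    for i in (1, 2):  # update PQ buses only
        # Gauss-Seidel update: V_i = (S_i*/V_i* - sum_{k != i} Y_ik V_k) / Y_ii
        sigma = sum(Y[i, k] * V[k] for k in range(3) if k != i)
        V[i] = (np.conj(S[i]) / np.conj(V[i]) - sigma) / Y[i, i]
    if np.max(np.abs(V - V_old)) < 1e-8:   # convergence check
        break

print(f"converged in {it + 1} iterations")
for i, v in enumerate(V):
    print(f"bus {i}: |V| = {abs(v):.4f} p.u., angle = {np.degrees(np.angle(v)):.3f} deg")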

Wildfires Assessed by Remote Sensing Images and Burned Land Monitoring

The tools described in this paper enable the location of burned areas where natural habitats were destroyed and establish a baseline for major changes in forest ecosystems during recovery. The results also allow surface fuel loading to be followed up over time, supporting the evaluation and guidance of restoration measures in remote areas through phased time planning. This case study evaluates burned areas that suffered successive wildfires in mainland Portugal during the summer of 2017, which killed more than 60 people. The goal is to show that this evaluation can be done with free remote sensing data on a simple laptop with open-source software, describing the not-so-simple methodology step by step to make it accessible to local workers in the affected areas, where the availability of information is essential for the immediate planning of mitigation measures, such as restoring road access, allocating funds for the recovery of human dwellings, and assessing further needs for restoration of the ecological system. Wildfires devastate forest ecosystems, with a direct impact on vegetation cover, killing or driving away animal populations, and destroying crops in rural areas that are essential as local resources. Economic interests are also affected: burned pinewood becomes useless for the highest-value applications, so its value decreases, and resin extraction ends for several years.
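
One common open-source route to such an evaluation is the differenced Normalized Burn Ratio (dNBR) computed from pre- and post-fire imagery. The sketch below shows the arithmetic on synthetic arrays standing in for satellite bands; the band choice and severity threshold follow common practice and are assumptions, not necessarily this paper's exact workflow:

import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from near-infrared and short-wave infrared bands."""
    return (nir - swir) / (nir + swir + 1e-9)  # epsilon avoids division by zero

# Hypothetical pre- and post-fire reflectances (e.g., Sentinel-2 B8A and B12,
# as would be read with rasterio); random data stands in for real rasters.
rng = np.random.default_rng(0)
pre_nir = rng.uniform(0.2, 0.5, (100, 100))
pre_swir = rng.uniform(0.05, 0.2, (100, 100))
post_nir = rng.uniform(0.05, 0.3, (100, 100))
post_swir = rng.uniform(0.1, 0.4, (100, 100))

dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)

# Approximate USGS-style severity threshold: dNBR above ~0.27 indicates
# moderate burn severity or worse.
burned = dnbr > 0.27
print(f"burned fraction of scene: {burned.mean():.1%}")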

Timed and Colored Petri Nets for Modeling and Verifying Cloud System Elasticity

Elasticity is an essential property of cloud computing: the ability of a cloud system to adjust resource provisioning in response to fluctuating workloads. There are two types of elasticity operations, vertical and horizontal. In this work, we are interested in horizontal scaling, which is ensured by two mechanisms: scaling in and scaling out. Depending on the sizing of the system, scaling in is adopted in the event of over-provisioning and scaling out in the event of under-provisioning. In this paper, we propose a formal model, based on timed and colored Petri nets (TdCPNs), for the duplication and removal of virtual machines on a server. The model builds on the formal Petri net (PN) modeling language. The proposed models are edited, verified, and simulated with two examples implemented in CPN Tools, a modeling tool for colored and timed PNs.
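
To make the two mechanisms concrete, here is a minimal Python sketch of a threshold-based horizontal elasticity controller; this is not part of the paper (whose model is a Petri net in CPN Tools), and the thresholds and load samples are illustrative assumptions:

def elasticity_controller(load_per_vm, vms, scale_out_at=0.8, scale_in_at=0.3):
    """Return the adjusted VM count for an observed average load per VM."""
    if load_per_vm > scale_out_at:               # under-provisioned: duplicate a VM
        vms += 1
    elif load_per_vm < scale_in_at and vms > 1:  # over-provisioned: remove a VM
        vms -= 1
    return vms

vms = 2
for load in (0.5, 0.9, 0.95, 0.4, 0.2, 0.1):  # simulated per-VM load samples
    vms = elasticity_controller(load, vms)
    print(f"load={load:.2f} -> {vms} VM(s)")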

Digital Learning and Entrepreneurship Education: Changing Paradigms

Entrepreneurship is an essential source of economic growth and a prominent factor influencing socio-economic development. Entrepreneurship education fosters and enhances entrepreneurial activity. This study aims to understand current trends in entrepreneurship education and to evaluate the effectiveness of diverse entrepreneurship education programs. An increasing number of universities offer entrepreneurship education courses intended to help students create and successfully sustain entrepreneurial ventures. Despite the prevalence of entrepreneurship education, research remains inconsistent about its effectiveness in promoting and developing entrepreneurship. Strategies to develop entrepreneurial attitudes and intentions among individuals are hindered by a lack of understanding of the purposes, components, methodology, and resources that entrepreneurship education requires. A lack of adequate entrepreneurship education has been linked with low self-efficacy and a lack of entrepreneurial intent. Moreover, in the age of digitisation and during the COVID-19 pandemic, digital learning platforms (e.g., online entrepreneurship education courses and programs) and other digital tools (e.g., digital game-based entrepreneurship education) have become more relevant to entrepreneurship education. This paper contributes to the academic literature on entrepreneurship education by evaluating and assessing current trends in entrepreneurship education programs, leading to a better understanding of how to reduce the gaps between entrepreneurial development requirements and what higher education institutions offer.

Telehealth Ecosystem: Challenge and Opportunity

Technological innovation plays a crucial role in virtual healthcare services. A growing number of telehealth platforms are concentrating on using digital tools to improve the quality and availability of care. As a result, telehealth represents an opportunity to redesign the way health services are delivered. The research objective is to identify a new business model for digital health services and ways for related industries to participate in telehealth solutions. The business opportunity is valuable for healthcare investors and startup companies wishing to investigate further or to implement a telehealth platform. The paper presents a digital healthcare business model and business opportunities for related industries, including digital healthcare services that extend a traditional business model and use cases of business opportunities for related industries. Although the business opportunities are enormous, telehealth remains challenging due to patient adoption and the digital transformation process within a healthcare organization.

A Real-Time Monitoring System of the Supply Chain Conditions, Products and Means of Transport

Real-time monitoring of supply chain conditions and procedures is a critical element for the optimal coordination and safety of deliveries, as well as for the minimization of delivery time and cost. Real-time monitoring requires IoT data streams related to the conditions of the products and the means of transport (e.g., location, temperature/humidity conditions, kinematic state, ambient light conditions, etc.). These streams are generated by battery-powered IoT tracking devices equipped with appropriate sensors and are transmitted to a cloud-based back-end system. Proper handling and processing of the IoT data streams, using predictive and artificial intelligence algorithms, can provide significant and useful results that supply chain stakeholders can exploit to enhance their financial benefits, as well as the efficiency, security, transparency, coordination, and sustainability of supply chain procedures. This paper presents the technology, features, and characteristics of a complete proprietary system, including the hardware, firmware, and software tools, developed in the context of a co-funded R&D program.
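
For illustration, a minimal sketch of the kind of telemetry record such a tracking device could stream to the cloud back-end; the field names, values, and device ID are assumptions, not the proprietary system's actual schema:

import json
from datetime import datetime, timezone

def telemetry_sample(device_id):
    # One reading covering the condition dimensions listed above: location,
    # temperature/humidity, ambient light, kinematic state, plus battery level.
    return {
        "device_id": device_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "location": {"lat": 40.6401, "lon": 22.9444},  # GPS fix
        "temperature_c": 4.2,       # cold-chain condition
        "humidity_pct": 61.0,
        "ambient_light_lux": 3.5,   # low light suggests a closed container
        "acceleration_g": 0.02,     # kinematic state (shock/vibration)
        "battery_pct": 87,
    }

payload = json.dumps(telemetry_sample("tracker-0042"))
print(payload)  # in practice, published to the back-end, e.g. over MQTT or HTTPS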

A Program Based on Artistic and Musical Activities to Acquire Educational Concepts for Children with Learning Difficulties

The study aims to identify the effectiveness of an artistic formation program using several types of paste to reduce hyperactivity in kindergarten children with learning difficulties. The research sample included 120 children aged 5 to 6 years, drawn from the learning-disability sections of five special-needs schools in the Cairo Governorate. The study used a quasi-experimental method based on a one-group design with pre- and post-application measurements to test both the hypothesis and the effectiveness of the program. The variables of the study were specified as follows: an artistic formation program using papier-mâché as the independent variable, and its effect on the skills of kindergarten children with learning disabilities as the dependent variable. The researchers applied an artistic formation program consisting of artistic and musical skills for kindergarten children with learning disabilities. The tools of the study, designed by the researchers, included an observation card for recording the paper-pulp molding skills of kindergarten children with learning difficulties while practicing the artistic formation activity, together with a program of artistic and musical activities through which kindergarten children with learning disabilities acquire educational concepts. The program comprised 20 lessons of fine art activities and 20 lessons of musical activities, with the musical lesson and the art lesson given together in one session so as to impart educational concepts to the children.

Sustainable Engineering: Synergy of BIM and Environmental Assessment Tools in the Hong Kong Construction Industry

The construction industry is a major contributor to environmental impact and carbon emissions, as it consumes a huge amount of natural resources and energy. Sustainable engineering spans planning, design, procurement, construction, and delivery, so that the whole building and construction process can be managed effectively and sustainably with efficient use of natural resources. It involves implementing sustainable technology development and innovation, adopting advanced construction processes, and enabling facilities management to control energy and waste more accurately and effectively. The relationship between BIM and environmental assessment tools still lacks a clear discussion in the literature. In this paper, we focus on the synergy of BIM technology and sustainable engineering in the AEC industry, outline the key factors that enhance the use of advanced innovation, technology, and methods, and define the role of stakeholders in achieving zero carbon emissions in line with the Paris Agreement's goal of limiting global warming to well below 2°C above pre-industrial levels. A case study of the adoption of Building Information Modeling (BIM) and environmental assessment tools in Hong Kong is discussed.

Emotion Detection in Twitter Messages Using Combination of Long Short-Term Memory and Convolutional Deep Neural Networks

Recognizing sentiments and emotions in social media texts has attracted considerable attention in recent years. Sentiment and emotion analysis aims to recognize conceptual information such as the opinions, feelings, attitudes, and emotions of people towards products, services, organizations, people, topics, events, and features expressed in written text, which indicates the breadth of the problem space. In the real world, businesses and organizations are always looking for tools to gather people's opinions, emotions, and attitudes about their products, services, or related events. This article uses the Twitter social network, one of the most popular social networks with about 420 million active users, to extract data. On this social network, users share information and opinions about personal issues, policies, products, events, etc., and the availability of its data makes it well suited to classifying emotional states. In this study, supervised learning and deep neural network algorithms are used to classify the emotional states of Twitter users. Given the large amount of available data, deep learning methods are advantageous for increasing the learning capacity of the model. Tweets collected on various topics are classified into four classes using a combination of two bidirectional Long Short-Term Memory (BiLSTM) networks and a convolutional network. With an average accuracy of 93%, the results show that the proposed framework performs well and improves on the accuracy of previous work.
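
A minimal Keras sketch of the kind of architecture described, combining stacked BiLSTM layers with a convolutional layer; the vocabulary size, sequence length, and layer widths are assumptions, and the authors' exact topology may differ:

import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB, MAXLEN, CLASSES = 20000, 60, 4  # four emotion classes, as in the study

model = models.Sequential([
    layers.Input(shape=(MAXLEN,)),                        # token-ID sequences
    layers.Embedding(VOCAB, 128),                         # token embeddings
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Conv1D(64, kernel_size=3, activation="relu"),  # local n-gram features
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(CLASSES, activation="softmax"),          # emotion probabilities
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()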

Efficient Pre-Processing of Single-Cell Assay for Transposase Accessible Chromatin with High-Throughput Sequencing Data

The primary tool currently used to pre-process 10X Chromium single-cell ATAC-seq data is Cell Ranger, which can take a very long time to run on standard datasets. To facilitate rapid pre-processing that enables reproducible workflows, we present a suite of tools called scATAK for pre-processing single-cell ATAC-seq data that is 15 to 18 times faster than Cell Ranger on mouse and human samples. Our tool can also calculate chromatin interaction potential matrices and generate open chromatin signal and interaction traces for cell groups. We use the scATAK tool to explore the chromatin regulatory landscape of a healthy adult human brain and unveil cell-type-specific features, showing that it provides a convenient and computationally efficient approach for pre-processing single-cell ATAC-seq data.

The Use of Knowledge Management Systems and ICT Service Desk Management to Minimize the Digital Divide Experienced in the Museum Sector

Since the introduction of ServiceNow, the UK Science Museum Group's (SMG) ICT service desk portal, there has been no analysis of the tools available to SMG staff for just-in-time knowledge acquisition (knowledge management systems) and for reporting ICT incidents with a focus on an aspect of professional identity, namely gender. It is therefore important for SMG to investigate the apparent disparities so that solutions can be derived to minimize this digital divide, if one exists. The study is conducted in the milieu of the UK museum, gallery, arts, academic, charitable, and cultural heritage sector. It is acknowledged at SMG that keeping up with an ever-changing digital landscape is challenging, which entails the rapid upskilling of staff and the development of an infrastructure that supports just-in-time technological knowledge acquisition and the reporting of technology-related issues. This problem was addressed by analysing ServiceNow ICT incident reports and knowledge article reports over a six-month period from February to July. The study found a statistically significant relationship between gender and reporting an ICT incident, and also between gender and the priority level of the ICT incident. Interestingly, there is no statistically significant relationship between gender and reading knowledge articles, nor between gender and reporting an ICT incident related to a knowledge article that staff had read. The knowledge acquired from this study is useful to service desk management practice, as it will help inform the creation of future knowledge articles and ICT incident reporting processes.
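
A sketch of the kind of contingency-table test that typically underlies such significance findings; the abstract does not name the specific test used, and the counts below are invented for illustration:

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: gender vs. whether an ICT incident was reported.
#                     reported incident   did not report
observed = np.array([[220, 480],          # female staff
                     [340, 410]])         # male staff

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
if p < 0.05:
    print("reject independence: gender and incident reporting are associated")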

The Role of Synthetic Data in Aerial Object Detection

The purpose of this study is to explore the characteristics of developing a machine learning application using synthetic data. The study is structured around developing the application through to deployment of the computer vision model. The findings discuss the realities of attempting to develop a computer vision model for a practical purpose and detail the processes, tools, and techniques that were used to meet accuracy requirements. The research reveals that synthetic data represent another variable that can be adjusted to improve the performance of a computer vision model. Further, a suite of tools and a set of tuning recommendations are provided.