The Study of Digital Transformation Skills and Competencies Framework at Umm Alqura University

The lack of digital transformation professionals could prevent Saudi Arabia’s universities from providing digital services. Understanding which digital skills an organization needs, measuring the existing skills, and developing or attracting talent is a complex task. This paper provides a comprehensive analysis of the digital transformation skills needed in organizations that pursue digital transformation and presents DigSC, a skills and competencies framework built on the Skills Framework for the Information Age (SFIA), which is adopted by the Ministry of Communications and Information Technology (MCIT) in Saudi Arabia. The adopted framework identifies the main digital transformation skill clusters, categories, and levels of responsibility for each job description, in order to close the gap between these requirements and the digital skills supplied by Umm Alqura University (UQU).
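
As a minimal sketch only, the snippet below illustrates one way an SFIA-style framework could map job roles to skill categories and responsibility levels and compute a skills gap. The cluster, category, level, and role names are invented for illustration and are not taken from DigSC, SFIA, or UQU data.

```python
# Hypothetical sketch of an SFIA-style skills-gap calculation; all names and
# numbers are invented for illustration, not drawn from the DigSC framework.
from dataclasses import dataclass

@dataclass
class SkillRequirement:
    cluster: str      # e.g. "Strategy and architecture" (hypothetical)
    category: str     # e.g. "Enterprise IT governance" (hypothetical)
    level: int        # SFIA-style responsibility level, 1 (follow) to 7 (set strategy)

# Hypothetical job description and the levels currently assessed in-house.
required = {
    "Digital Transformation Lead": [
        SkillRequirement("Strategy and architecture", "Enterprise IT governance", 6),
        SkillRequirement("Change and transformation", "Business process improvement", 5),
    ],
}
available = {  # assessed levels for existing staff (invented numbers)
    "Enterprise IT governance": 4,
    "Business process improvement": 5,
}

def skills_gap(role: str) -> dict:
    """Return category -> missing levels (0 means the requirement is met)."""
    return {
        req.category: max(0, req.level - available.get(req.category, 0))
        for req in required[role]
    }

print(skills_gap("Digital Transformation Lead"))
# {'Enterprise IT governance': 2, 'Business process improvement': 0}
```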

Investigation into the Role of Leadership in the Management of Digital Transformation for Small and Medium Enterprises

Digital technology is transforming the landscape of the industrial sector at an unprecedented pace by connecting people, processes, and machines in real time. It represents a new pathway to achieving innovative, dynamic competitive advantages, delivering unique customer value, and sustaining critical relationships. Thus, success in a constantly changing environment is governed by an organization’s ability to revolutionize its business model, deliver innovative solutions, and capture value from big data analytics and insights. Businesses need to re-strategize operations and develop extra capabilities to cope with the need for additional flexibility and agility. The traditional “command and control” leadership style is structurally and operationally incompatible with the digital era. In this paper, the authors discuss how transformational leaders can act as glue in the social and organizational context, which is crucial for enabling the workforce and developing a psychological attachment to the digital vision.

The Pedagogical Integration of Digital Technologies in Initial Teacher Training

The use of digital technologies in teaching and learning processes is now a reality, namely in initial teacher training. This study aims to understand the digital reality of students in initial teacher training in order to improve training in the educational use of ICT and to promote strategies for integrating digital technology in an educational context. It is part of the IFITIC Project "Innovate with ICT in Initial Teacher Training to Promote Methodological Renewal in Pre-school Education and in the 1st and 2nd Basic Education Cycle", which involves the School of Education, Polytechnic of Porto and the Institute of Education, University of Minho. The Project aims to rethink educational practice with ICT in the initial training of future teachers in order to promote methodological innovation in Pre-school Education and in the 1st and 2nd Cycles of Basic Education. A qualitative methodology was used, in which a questionnaire survey was applied to teachers in initial training. For data analysis, content analysis techniques were used with the support of NVivo software. The results point to the following aspects: a) future teachers recognize that they have more technical than pedagogical knowledge about ICT; this result is understandable given the objectives of Basic Education, and the gaps can be filled in the Master's course by students who wish to pursue teaching; b) the respondents are aware that the integration of digital resources contributes positively to students' learning and to the lives of children and young people, which also promotes preparation for life; c) being a teacher in the digital age requires the development of digital literacy, lifelong learning, and the adoption of new ways of teaching how to learn. Thus, this study aims to contribute to a reflection on the teaching profession in the digital age.

A Game-Based Product Modelling Environment for Non-Engineers

In the last 20 years, Knowledge Based Engineering (KBE) has shown its advantages in product development across engineering areas such as automation, mechanical, civil, and aerospace engineering, in terms of digital design automation and cost reduction achieved by automating repetitive design tasks through capturing, integrating, utilising, and reusing the existing knowledge required in various aspects of product design. However, in the early design stages, the descriptive information of a product is discrete and unorganized, while knowledge exists in various forms rather than as pure data. Thus, it is crucial to have an integrated product model that can represent the entire product information and its associated knowledge at the beginning of product design. One shortcoming of existing product models is the lack of required knowledge representation across the various aspects of product design and of its mapping to an interoperable schema. To overcome the limitations of existing product models and methodologies, two key factors are considered. First, the product model must have well-defined classes that can represent the entire product information and its associated knowledge. Second, the product model needs to be represented in an interoperable schema to ensure steady data exchange between different product modelling platforms and CAD software. This paper introduces a method to provide a general product model as a generative representation of a product, consisting of geometric and non-geometric information, through a product modelling framework. The proposed method of capturing knowledge from designers through a knowledge file provides a simple and efficient way of collecting and transferring knowledge. Further, the knowledge schema provides a clear view of, and format for, the data that need to be gathered in order to achieve a unified knowledge exchange between different platforms. This study used a game-based platform to make the product modelling environment accessible to non-engineers. The paper then tests a use case based on the proposed game-based product modelling environment to validate its effectiveness among non-engineers.
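
A minimal sketch, not the paper’s actual schema: one way a general product model could pair geometric and non-geometric information with captured design knowledge and serialize the whole model to an interoperable (here, JSON) representation for exchange between modelling platforms. All class and field names are assumptions made for illustration.

```python
# Hypothetical product-model structure with a JSON export; field names are
# illustrative only and not taken from the paper's knowledge schema.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class GeometryInfo:
    shape: str              # e.g. primitive type used in the game environment
    dimensions_mm: dict     # named dimensions, e.g. {"length": 120.0}

@dataclass
class KnowledgeItem:
    rule: str               # design rule captured from the designer
    rationale: str          # why the rule exists

@dataclass
class ProductModel:
    name: str
    geometry: GeometryInfo
    non_geometry: dict      # material, cost, supplier, etc.
    knowledge: list = field(default_factory=list)

    def to_schema(self) -> str:
        """Export the whole model as JSON so other tools can re-import it."""
        return json.dumps(asdict(self), indent=2)

bracket = ProductModel(
    name="mounting_bracket",
    geometry=GeometryInfo("box", {"length": 120.0, "width": 40.0, "height": 5.0}),
    non_geometry={"material": "Al 6061", "unit_cost_usd": 2.4},
    knowledge=[KnowledgeItem("thickness >= 4 mm", "stiffness requirement")],
)
print(bracket.to_schema())
```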

Development of an Intelligent Decision Support System for Smart Viticulture

The Internet of Things (IoT) represents the best option for smart vineyard applications, even though the required technologies still need to be integrated. This article is based on the research and results obtained in the DISAVIT project. For smart agriculture, the project aims to provide a trustworthy, intelligent, integrated vineyard management solution based on the IoT. Achieving interoperability through multiprotocol technology (the future of connected wireless IoT) requires an agnostic approach that provides a reliable environment addressing cyber security and IoT-based threats, ensures traceability through a blockchain-based design, and supports long-term implementations (modular and scalable). These are the main innovative technical aspects of the project. The DISAVIT project studies and promotes the incorporation of better management tools based on objective, data-driven decisions, which are necessary for an agriculture that is adapted and more resistant to climate change. It also exploits the opportunities generated by the digital services market for smart agriculture management stakeholders. The project's final result aims to improve decision-making, performance, and viticultural infrastructure, and to increase real-time data accuracy and interoperability. Innovative aspects such as end-to-end solutions, adaptability, scalability, security, and traceability place our product in a favorable position relative to competitors, as no solution currently on the market meets all of these requirements in a single, innovative product.
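
To make the traceability idea concrete, the following is an illustrative sketch only (not the DISAVIT implementation): a minimal hash-chained log showing how vineyard sensor readings could be made tamper-evident, which is the basic mechanism behind blockchain-based traceability. The field names and values are invented for the example.

```python
# Hypothetical hash-chained sensor log for tamper-evident traceability.
import hashlib, json, time

def make_block(reading: dict, prev_hash: str) -> dict:
    """Wrap a sensor reading in a block whose hash covers the previous block."""
    body = {"reading": reading, "timestamp": time.time(), "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def verify(chain: list) -> bool:
    """Recompute every hash; any edited reading breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
for reading in [{"plot": "A3", "soil_moisture_pct": 31.5},
                {"plot": "A3", "soil_moisture_pct": 30.9}]:
    chain.append(make_block(reading, chain[-1]["hash"] if chain else "genesis"))
print(verify(chain))   # True; mutating any stored reading afterwards makes this False
```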

Entrepreneur Universal Education System: Future Evolution

The success of education depends on evolution and adaptation. While the traditional system has worked in the past, one form of education that has evolved with the digital age is virtual education, which has improved the efficiency of today’s learning environments. Virtual learning has indeed proved its ability to overcome the drawbacks of the physical environment, such as time, facilities, and location, but despite these accomplishments the educational system as a whole is not yet adequately productive. Earning a degree is no longer enough to obtain a career job; on its own it lacks the skills and creativity employers require. There are two sides to the coin: a college degree and a specialized certificate each have their own merits, but having both can put you on a successful IT career path. For job-seeking individuals across the world to have a clear, meaningful goal for work and education and to contribute positively to the community, productive cooperation among employers and universities, alongside individual technical skills, is a must for generations to come. The proposed research, the “Entrepreneur Universal Education System”, is an evolution designed to meet the needs of both employers and students, while making it easier than ever to gain vital, real-world experience in the chosen field. The new vision is to empower education to serve organizations’ needs, with improving the world as its primary goal: adopting the universal skills of effective thinking, effective action, and effective relationships, preparing students through real-world accomplishment, and encouraging them to serve their organizations and communities better, faster, and more efficiently.

A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators

With the rapid development of deep learning, neural networks and deep learning algorithms play a significant role in various practical applications. Due to their high accuracy and good performance, Convolutional Neural Networks (CNNs) in particular have become a research hotspot in the past few years. However, network sizes have grown increasingly large to meet the demands of practical applications, which poses a significant challenge for constructing high-performance implementations of deep learning neural networks. Meanwhile, many of these application scenarios also place strict requirements on the performance and power consumption of hardware devices. Therefore, it is particularly critical to choose a suitable computing platform for hardware acceleration of CNNs. This article surveys recent advances in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various FPGA-based accelerator designs and implementations across different devices and network models are reviewed and compared with their Graphics Processing Unit (GPU), Application-Specific Integrated Circuit (ASIC), and Digital Signal Processor (DSP) counterparts, together with our own critical analysis and comments. Finally, we discuss these acceleration and optimization methods on FPGA platforms from different perspectives to explore the opportunities and challenges for future research, and we offer an outlook on the future development of FPGA-based accelerators.
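
Not from the survey itself, but as a back-of-the-envelope sketch of why CNN scale pushes designers toward hardware accelerators: the snippet counts multiply-accumulate operations (MACs) and weight storage for plain convolution layers. The example layer sizes are arbitrary and chosen only for illustration.

```python
# Rough compute/memory estimate for standard convolution layers (illustrative only).
def conv_layer_cost(c_in, c_out, k, h_out, w_out):
    """Return (MACs, number of weights) for a standard KxK convolution."""
    macs = c_in * c_out * k * k * h_out * w_out
    weights = c_in * c_out * k * k
    return macs, weights

# A few VGG-like layers on a 224x224 input (sizes chosen for illustration).
layers = [(3, 64, 3, 224, 224), (64, 128, 3, 112, 112), (128, 256, 3, 56, 56)]
total_macs = sum(conv_layer_cost(*l)[0] for l in layers)
total_weights = sum(conv_layer_cost(*l)[1] for l in layers)
print(f"{total_macs / 1e9:.2f} GMACs, {total_weights * 4 / 1e6:.1f} MB of FP32 weights")
```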

Privacy Protection Principles of Omnichannel Approach

The advent of the Internet, mobile devices, and social media is revolutionizing the retail customer experience by linking multiple sources through various channels. Omnichannel retailing combines multiple channels to allow customers to seamlessly leverage all distribution information, online and offline, while shopping. Data are therefore a more critical asset than ever for all organizations. Nonetheless, because of the heterogeneity of data across platforms, developers currently face difficulties in dealing with personal data. Considering the possibilities of omnichannel communication, this paper presents a channel categorization that could enhance the customer experience of an omnichannel center, called a hyper center. The purpose of this paper is fundamentally to describe the connection between the omnichannel hyper center and the customer, with particular attention to privacy protection. The first phase was finding the most appropriate communication channels for the hyper center. Consequently, a selection of widely used communication channels was identified and analyzed with regard to the requirements for optimizing user experience. The evaluation criteria are divided into three groups: general, user profile, and channel options. For each criterion, a weight of importance for omnichannel communication was defined. The central consideration was how the hyper center can identify users while respecting privacy protection requirements. The study also shows what the customer experience across digital networks would look like under an omnichannel approach that adheres to privacy protection principles.
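
A minimal sketch, not the paper’s actual criteria or weights: how a weighted scoring over the three criterion groups (general, user profile, channel options) could rank candidate communication channels for the hyper center. All channel scores and weights below are invented for illustration.

```python
# Hypothetical weighted ranking of communication channels; values are invented.
criteria_weights = {           # importance of each criterion group
    "general": 0.3,
    "user_profile": 0.4,       # includes how well the user can be identified
    "channel_options": 0.3,
}

channel_scores = {             # 0..1 scores per criterion group (hypothetical)
    "email":        {"general": 0.8, "user_profile": 0.9, "channel_options": 0.6},
    "mobile_app":   {"general": 0.9, "user_profile": 0.8, "channel_options": 0.9},
    "social_media": {"general": 0.7, "user_profile": 0.4, "channel_options": 0.8},
}

def weighted_score(scores: dict) -> float:
    """Combine the per-group scores using the criterion weights."""
    return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

ranking = sorted(channel_scores, key=lambda ch: weighted_score(channel_scores[ch]),
                 reverse=True)
for ch in ranking:
    print(f"{ch}: {weighted_score(channel_scores[ch]):.2f}")
```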

Digital Transformation of Payment Systems Using Field Service Management

Like many other industries, the payment industry has been affected by digital transformation, and digital transformation is crucial for it: the payment industry is considered a leader in digital and emerging technologies, and the digitalization of other industries such as retail, health, and telecommunications also depends on the growth of digitalized payment systems. One of the technological innovations in service management is Field Service Management (FSM). Despite the widespread use of FSM in various industries such as petrochemicals, health, and maintenance, this technology can also be adopted in the payment industry, making it more agile and efficient. Accordingly, the present study pays close attention to the application of FSM in the payment industry. Given the importance of merchants' bargaining power in the payment industry, this study aims to use FSM in a digital transformation initiative with a targeted focus on providing real-time services to merchants. The research method consists of three parts. First, through a review of past research, applications of FSM in the payment industry are considered. Next, merchants' benefits from using FSM, such as emotional, functional, economic, and social benefits, are identified using in-depth interviews and content analysis. Finally, the related business model for helping the payment industry transform into a more agile and efficient industry is considered. The results reveal the ten main pillars required to realize the digital transformation of payment systems using FSM.

Embedded Semantic Segmentation Network Optimized for Matrix Multiplication Accelerator

Autonomous driving systems require high reliability to provide people with a safe and comfortable driving experience. However, despite the development of numerous vehicle sensors, it is difficult to consistently provide high perception performance in driving environments that vary with time of day and season. Deep-learning-based image segmentation, which has recently evolved rapidly, stably provides high recognition performance in various road environments. However, since the system controls a vehicle in real time, a highly complex deep learning network cannot be used due to time and memory constraints. Moreover, efficient networks are typically optimized for GPU environments, which degrades their performance on embedded processors equipped with simple hardware accelerators. In this paper, a semantic segmentation network, the matrix multiplication accelerator network (MMANet), optimized for the matrix multiplication accelerator (MMA) on Texas Instruments digital signal processors (TI DSPs), is proposed to improve the recognition performance of autonomous driving systems. The proposed method is designed to maximize the number of layers that can be executed in a limited time, providing reliable driving environment information in real time. First, the number of channels in the activation map is fixed to fit the structure of the MMA, and the loss of information caused by fixing the number of channels is compensated by increasing the number of parallel branches. Second, an efficient convolution type is selected depending on the size of the activation: since the MMA has a fixed structure, normal convolution may be more efficient than depthwise separable convolution depending on the memory access overhead, so the convolution type is chosen according to the output stride to increase network depth. In addition, memory access time is minimized by processing operations only in the L3 cache. Lastly, reliable contexts are extracted using an extended atrous spatial pyramid pooling (ASPP). The proposed method obtains stable features from an extended path by increasing the kernel size and accessing consecutive data, and it uses two ASPPs to obtain high-quality contexts from the restored shape without global average pooling paths, since that layer would use the MMA only as a simple adder. To verify the proposed method, an experiment is conducted using perfsim, a timing simulator, and the Cityscapes validation set. The proposed network can process a 640 x 480 image in 6.67 ms, so six cameras can be used to identify the vehicle's surroundings at 20 frames per second (FPS). In addition, it achieves 73.1% mean intersection over union (mIoU), the highest recognition rate among embedded networks, on the Cityscapes validation set.
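
The following is an illustrative sketch only, not the authors’ MMANet code: it shows how the choice between a standard convolution and a depthwise separable convolution could be made per stage depending on the output stride, in the spirit the abstract describes. The stride threshold, the direction of the rule, and the channel counts are assumptions introduced for the example.

```python
# Hypothetical per-stage convolution selection based on output stride.
import torch
import torch.nn as nn

def conv_block(c_in, c_out, output_stride, threshold=8):
    """Pick the convolution type for this stage based on output stride (illustrative rule)."""
    if output_stride <= threshold:
        # Example rule: early stages with large activations use a standard 3x3
        # convolution, which keeps a matrix-multiplication unit busy with
        # contiguous accesses (whether this wins depends on memory overhead).
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1, bias=False),
            nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))
    # Deeper stages: a depthwise separable convolution saves parameters and
    # compute at the cost of extra memory traffic.
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in, bias=False),  # depthwise
        nn.Conv2d(c_in, c_out, 1, bias=False),                          # pointwise
        nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

x = torch.randn(1, 64, 120, 160)                         # dummy feature map
print(conv_block(64, 64, output_stride=4)(x).shape)      # standard conv path
print(conv_block(64, 128, output_stride=16)(x).shape)    # separable conv path
```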

Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus

The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to sensor data about the performance of buildings. This digital transformation has opened up many opportunities to improve building management by using the collected data to monitor consumption patterns and energy leakages. One example is the integration of predictive models for anomaly detection. In this paper, we use the Generalised Additive Model (GAM) for anomaly detection in the power consumption pattern of Air Handling Units (AHUs). There is ample research on the use of GAMs for predicting power consumption at the office-building and nation-wide levels; however, there is limited illustration of their anomaly detection capabilities, prescriptive analytics case studies, and integration with the latest developments in digital twin technology. In this paper, we applied the general GAM modelling framework to historical data on the AHU power consumption and cooling load of the building from Jan 2018 to Aug 2019, collected from an education campus in Singapore, to train prediction models that in turn yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly detection system based on the forward predicted ranges. The magnitude of deviation from the upper and lower bounds of the uncertainty intervals is used to inform and identify anomalous data points, all based on historical data, without explicit intervention from domain experts. Notwithstanding, the domain expert fits in through an optional feedback loop through which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real time to help determine the next course of action for the facilities manager. The performance of the GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for the detection of anomalous power consumption patterns and illustrate it with real-world use cases.
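
A minimal sketch of the general idea (fit a GAM, predict a range, flag points outside it), using the pyGAM library as one possible implementation; it is not the authors’ deployed model. The feature choices, the 95% interval width, and the synthetic data are assumptions — in the real system the data would come from the digital twin.

```python
# Illustrative GAM-based anomaly detection on synthetic AHU power data.
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(0)
hour = rng.uniform(0, 24, 2000)                        # time of day
cooling_load = rng.uniform(50, 400, 2000)              # building cooling load
power = 20 + 0.5 * cooling_load + 10 * np.sin(hour / 24 * 2 * np.pi) \
        + rng.normal(0, 5, 2000)                       # synthetic AHU power (kW)

X = np.column_stack([hour, cooling_load])
gam = LinearGAM(s(0) + s(1)).fit(X, power)             # one smooth term per feature

X_new = np.array([[14.0, 300.0], [3.0, 320.0]])        # incoming readings
lower, upper = gam.prediction_intervals(X_new, width=0.95).T
observed = np.array([172.0, 300.0])                    # second value is anomalously high

is_anomaly = (observed < lower) | (observed > upper)   # outside the predicted range?
deviation = np.maximum(observed - upper, lower - observed).clip(min=0)
print(list(zip(is_anomaly, deviation.round(1))))       # magnitude informs the rule-based step
```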

Process Optimization and Automation of Information Technology Services in a Heterogenic Digital Environment

With customers’ ever-increasing expectations of fast service provisioning for all their business needs, information technology (IT) organizations, as business partners, have to cope with this demanding environment and deliver their services in the most effective and efficient way. The purpose of this paper is to identify optimization and automation opportunities for the top requested IT services in a heterogenic digital environment with a widely spread customer base. In collaboration with systems, processes, and subject matter experts (SMEs), the processes in scope were approached by analyzing four years of related historical data, identifying and surveying stakeholders, modeling the as-is processes, and studying systems integration/automation capabilities. This effort resulted in identifying several pain areas, including standardization, unnecessary customer and IT involvement, manual steps, systems integration, and performance measurement. These pain areas were addressed by standardizing the top five requested IT services, eliminating or automating 43 steps, and utilizing a single platform for end-to-end process execution. In conclusion, optimizing IT service request processes in a heterogenic digital environment with a widely spread customer base is challenging, yet achievable without compromising service quality or customers’ added value. Further studies can focus on measuring the value of the eliminated/automated process steps to quantify the enhancement impact. Moreover, a similar approach can be utilized to optimize other IT service requests, with a focus on business criticality.

Spatial Data Science for Data Driven Urban Planning: The Youth Economic Discomfort Index for Rome

Today, a considerable segment of the world’s population lives in urban areas, and this proportion will vastly increase in the coming decades. Therefore, understanding the key trends in urbanization likely to unfold over the coming years is crucial to the implementation of sustainable urban strategies. In parallel, the daily amount of digital data produced will expand at an exponential rate over the following years. The analysis of various types of data sets and their derived applications has incredible potential across crucial sectors such as healthcare, housing, transportation, energy, and education. Nevertheless, in city development, architects and urban planners appear to rely mostly on traditional and analogue techniques of data collection. This paper investigates the potential of the data science field, which appears to be a formidable resource for assisting city managers in identifying strategies to enhance the social, economic, and environmental sustainability of our urban areas. The collection of new layers of information would greatly enhance planners' ability to comprehend in greater depth urban phenomena such as gentrification, land use definition, mobility, or critical infrastructural issues. Specifically, the research correlates economic, commercial, demographic, and housing data with the purpose of defining a youth economic discomfort index. This statistical composite index provides insights into the economic disadvantage of citizens aged between 18 and 29 years, and the results clearly show that central urban zones are more disadvantaged than peripheral ones. The experimental setup selected the city of Rome as the testing ground for the whole investigation. The methodology applies statistical and spatial analysis to construct a composite index supporting informed, data-driven decisions for urban planning.
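
As an illustrative sketch only, the snippet below shows one standard way to build a composite index from heterogeneous indicators (min-max normalisation followed by a weighted average); it is not the exact formulation used for the Rome index. Zone names, indicator values, and weights are invented, whereas the real study draws on economic, commercial, demographic, and housing data.

```python
# Hypothetical composite "discomfort" index via min-max normalisation + weights.
import pandas as pd

data = pd.DataFrame({
    "zone": ["Centro", "Prenestino", "EUR"],
    "youth_unemployment_rate": [14.0, 22.0, 11.0],   # %
    "median_rent_to_income":   [0.55, 0.38, 0.42],   # rent burden
    "neet_rate":               [9.0, 16.0, 8.0],     # % aged 18-29 not in work or education
}).set_index("zone")

weights = {"youth_unemployment_rate": 0.4,
           "median_rent_to_income":   0.3,
           "neet_rate":               0.3}

# Rescale every indicator to 0..1 so they are comparable, then combine.
normalised = (data - data.min()) / (data.max() - data.min())
data["discomfort_index"] = sum(normalised[c] * w for c, w in weights.items())
print(data["discomfort_index"].sort_values(ascending=False))
```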

Design and Simulation of Heartbeat Measurement System Using Arduino Microcontroller in Proteus

If a person monitors his or her heart rate regularly, heart disease can be detected early, allowing a longer life span; heart disease should therefore be taken seriously, and many health care devices and monitoring systems are being designed to keep track of it. This work reports the design and simulation of an Arduino-microcontroller-based heart rate measurement and monitoring system in the Proteus environment. Clipping sensors were utilized to sense the heart rate of an individual from the fingertips. The sensor is a digital device that mainly uses an infrared (IR) transmitter (an IR LED) and a receiver (an IR photo-transistor or IR photo-detector). As the heart pumps blood and circulates it through the blood vessels of the body, the changing blood volume modulates the IR light emitted by the transmitter, which is reflected back to the receiver accordingly. The reflected signals are then processed inside the microcontroller by software written in assembly language, which determines the heart rate (HR) in beats per minute (bpm) from the signal detected over a duration of 10 seconds and displays it on the LCD screen in digital format. The designed system was simulated for persons of varying ages, for example infants, adults, and active athletes. Simulation results were found to be very satisfactory.
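
The actual firmware runs in assembly on the Arduino, but the arithmetic it performs can be sketched: count the pulses detected by the IR receiver over a 10-second window and scale to beats per minute. The snippet below is a hedged Python illustration with a synthetic pulse signal, not the firmware itself; the sampling rate and signal shape are assumptions.

```python
# Illustrative BPM calculation from a 10-second pulse count (synthetic signal).
import numpy as np

fs = 100                                   # assumed sampling rate of the sensor signal (Hz)
t = np.arange(0, 10, 1 / fs)               # 10-second measurement window
heart_rate_hz = 72 / 60                    # simulate a 72 bpm pulse
signal = (np.sin(2 * np.pi * heart_rate_hz * t) > 0.9).astype(int)  # pulse train

# Count rising edges (0 -> 1 transitions) within the window, then scale:
beats_in_window = np.count_nonzero(np.diff(signal) == 1)
bpm = beats_in_window * (60 // 10)         # 10-second count x 6 = beats per minute
print(f"Measured heart rate: {bpm} bpm")
```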

The OLOS® Way to Cultural Heritage: User Interface with Anthropomorphic Characteristics

Augmented Reality and Augmented Intelligence are radically changing information technology. The path that starts from the keyboard and then, passing through milestones such as Siri, Alexa and other vocal avatars, reaches a more fluid and natural communication with computers, thus converting the dichotomy between man and machine into a harmonious interaction, now heads unequivocally towards a new IT paradigm, where holographic computing will play a key role. The OLOS® platform contributes substantially to this trend in that it infuses computers with human features, by transferring the gestures and expressions of persons of flesh and bones to anthropomorphic holographic interfaces which in turn will use them to interact with real-life humans. In fact, we could say, boldly but with a solid technological background to back the statement, that OLOS® gives reality to an altogether new entity, placed at the exact boundary between nature and technology, namely the holographic human being. Holographic humans qualify as the perfect carriers for the virtual reincarnation of characters handed down from history and tradition. Thus, they provide for an innovative and highly immersive way of experiencing our cultural heritage as something alive and pulsating in the present.

An Exploratory Study of the Student’s Learning Experience by Applying Different Tools for e-Learning and e-Teaching

E-learning is becoming more and more common every day. For online, hybrid, or traditional face-to-face programs, there are e-teaching platforms such as Google Classroom, Blackboard, Moodle, and Canvas, and there are platforms for full e-learning such as Coursera, edX, and Udemy. These tools are changing the way students acquire knowledge at school; however, in today’s changing world that is not enough. As students’ needs and skills change and become more complex, new tools will need to be added to keep them engaged and to maximize their learning. This is especially important in the current global situation that is changing everything: the Covid-19 pandemic. Due to Covid-19, education had to make an unexpected switch from face-to-face to digital courses. In this study, the students’ learning experience is analyzed by applying different e-tools and following the Tec21 Model and a flexible and digital model, both developed by the Tecnologico de Monterrey University. The students’ learning experience was evaluated using the quantitative PrEmo method for measuring emotions. Findings suggest that the number of e-tools used during a course does not affect the students’ learning experience as much as how the teacher links every available tool and makes them work as one in order to keep the student engaged and motivated.

A Case Study of the Digital Translation of the Lucy Lloyd and Wilhelm Bleek |Xam and !Kun Notebooks into The Digital Bleek and Lloyd

This paper will examine the digitization process of the |Xam and !Kun notebooks, authored by Lucy Lloyd, Dorothea Bleek and Wilhelm Bleek, and their collaborators |a!kunta, ||kabbo, ≠kasin, Dia!kwain, !kweiten ta ||ken, |han≠kass'o, !nanni, Tamme, |uma, and Da during the 19th century. Detail will be provided about the status of the archive, the creation of the digital archive and selected research projects linked to the archive. The Digital Bleek and Lloyd project is an example of institutional collaboration by the University of Cape Town, University of South Africa, Iziko South African Museum, the National Library of South Africa and the Western Cape Provincial Archives and Records Service. The contemporary value of the archive will be discussed in relation to its current manifestation as a collection of archival and digital objects, each with its own set of properties and archival risk factors. The tension between these two ways of accessing the archive will be interrogated to shed light on the slippages between the digital object and the archival object. The primary argument is that the process of digitization generates an ontological shift in the status of the archival object. The secondary argument engages with practices for curating encounters with these ontologically shifted objects and with how a contemporary viewer might relate to each. In conclusion, this paper will argue for regarding these archival objects through the interpretive framework used to engage with secular relics.

A Socio-Technical Approach to Cyber-Risk Assessment

Evaluating the level of cyber-security risk within an enterprise is of utmost importance in protecting its information system, services, and digital assets against security incidents (e.g. accidents, malicious acts, massive cyber-attacks). The existing risk assessment methodologies (e.g. eBIOS, OCTAVE, CRAMM, NIST-800) adopt a technical approach, considering only the capability, intention, and target of the attacker as attack factors, and paying no attention to the attacker’s psychological profile and personality traits. In this paper, a socio-technical approach to cyber-risk assessment is proposed in order to achieve more realistic risk estimates by considering the personality traits of attackers. In particular, based upon principles from investigative psychology and behavioural science, a multi-dimensional, extended, quantifiable model of an attacker’s profile is developed, which becomes an additional factor in the cyber-risk level calculation.
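
A minimal sketch of the general idea, not the authors’ quantitative model: a classical risk score (likelihood times impact) adjusted by a factor derived from an attacker-profile vector. The trait names, scales, and weighting are assumptions introduced purely for illustration.

```python
# Hypothetical attacker-profile adjustment to a classical risk score.
from dataclasses import dataclass

@dataclass
class AttackerProfile:
    # hypothetical traits scored 0..1 from investigative-psychology inputs
    risk_appetite: float
    persistence: float
    technical_sophistication: float

    def amplification(self) -> float:
        """Map the profile to a multiplier in [1.0, 2.0] (illustrative mapping)."""
        mean_trait = (self.risk_appetite + self.persistence
                      + self.technical_sophistication) / 3
        return 1.0 + mean_trait

def cyber_risk(likelihood: float, impact: float, profile: AttackerProfile) -> float:
    """Technical risk (likelihood x impact, each 0..1) scaled by the attacker profile."""
    return likelihood * impact * profile.amplification()

opportunist = AttackerProfile(0.3, 0.2, 0.4)
determined  = AttackerProfile(0.9, 0.9, 0.8)
print(round(cyber_risk(0.4, 0.7, opportunist), 3))  # lower adjusted risk
print(round(cyber_risk(0.4, 0.7, determined), 3))   # same asset, higher adjusted risk
```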

Colada Sweet Like Mercy: Gender Stereotyping in Twitter Conversations by Big Brother Naija 2019 Viewers

This study explores how a reality TV show that aired in Nigeria in 2019 (Big Brother Naija - BBN) played a role in fostering gender-biased conversations among its viewers and social media followers. Thematic analysis is employed to study Twitter conversations among BBN 2019 followers that ensued after the show had stopped airing. The study reveals that the show influenced the way viewers and fans engaged with each other, as well as with the show’s participants, on Twitter, and argues that, despite having aired for a short period of time, BBN 2019 was able to draw people together and provide a community where viewers could engage with each other online. Though the show aired on TV, viewers found a digital space where they could air their views, react to what was happening on the show, and simply catch up on action they had probably missed. Within these digital communities, viewers expressed their attractions, disgust, and identities, most of which referenced sexuality and gender identities and roles, as also portrayed by the show’s producers both on TV and on social media.

Optimization of the Dental Direct Digital Imaging by Applying the Self-Recognition Technology

This paper introduces a technology intended to solve some of the deficiencies of direct digital radiology. Digital radiology is the latest progression in dental imaging and has become an essential part of dentistry. Direct digital radiology comprises two main parts: an intraoral X-ray machine and a sensor (digital image receptor). Dentists and dental nurses experience difficulties during the image-taking process with the direct digital X-ray machine. For instance, they sometimes need to readjust the sensor in the patient's mouth and take the X-ray image again because of its low quality. Another problem is that the sensor may shift in the patient's mouth, producing an unsuitable image for the dentist. This makes the process time-consuming for dentists and dental nurses. On the other hand, taking several X-ray images creates problems for the patient, such as harm to their health and pain in the mouth due to the pressure of the sensor against the jaw. The author proposes a technology to solve the above-mentioned issues, called “Self-Recognition Direct Digital Radiology” (SDDR). This technology is based on the principle that the intraoral X-ray machine is capable of detecting the location of the sensor in the patient's mouth automatically. In addition to solving the aforementioned problems, SDDR technology has a lower environmental impact than the previous approach.