Abstract: Vocational education, enhanced by technology and Recognition of Prior Learning (RPL), is set to become a main ingredient of the future of education. This stems from several issues with the current educational system: cost, time, type of course, type of curriculum, and unemployment, to name the major ones. Most millennials prefer to perform and learn rather than to learn how to perform. This is the essence of vocational education, in any field from cooking, painting, and plumbing to modern computer-based technologies. Even a more theoretical subject such as entrepreneurship can be taught by having students act as entrepreneurs and learn its nuances. The best way to learn accountancy is to actually keep accounts for a small business or grocer and learn the ropes of accountancy and finance. The purpose of this study is to investigate the relationship of vocational skills, RPL, and new technologies with future employability. The study implies that an individual's knowledge and skills are essential aspects to emphasize in future education, and that credit should be given for prior experience towards future employability. Virtual reality can be used to simulate workplace situations for vocational learning in fields such as hospitality, medical emergencies, healthcare, draughtsmanship, building inspection, quantity surveying, and estimation, to name a few. All disruptions in future education, especially vocational education, are going to be technology-driven, with the advent of AI, ML, IoT, VR, VI, etc. Vocational education not only helps institutes cut costs drastically, but also allows all students to have hands-on experience rather than being mere observers. The proposed contribution of this paper is the earlier experiential learning theory and the recent theory of knowledge- and skills-based learning, modified and applied to vocational education and the development of skills.
Apart from a secondary research study of major scholarly articles and books, primary research using interviews and questionnaire surveys has been used to validate and test the reliability of the suggested model using Partial Least Squares Structural Equation Modeling (PLS-SEM), with the factors assimilated from an existing literature review. The major finding is that a strong relationship exists between vocational skills, RPL, and new technology and future employability, mediated by future employability skills.
Abstract: This paper presents experience from an eLearning training project being implemented for electrical planning engineers at the Mexican national utility Comision Federal de Electricidad (CFE) Distribution. This modality is being implemented and will be used in the utility for training purposes, to help personnel in their daily technical activities. One important advantage of this training project is that, once implemented and applied, financial resources will be saved by the CFE Distribution Company, because online training will be used across the whole country; the infrastructure for the eLearning training will be hosted on computational servers installed in the National CFE Distribution Training Department in Ciudad de Mexico, and can be used in the workplaces of the 16 Distribution Divisions and 150 Zones of CFE Distribution. In this way, workers will not need to travel to the National Training Department, saving enormous effort and financial and human resources.
Abstract: Current server systems are responsible for critical applications that run on different infrastructures, such as the cloud, physical machines, and virtual machines. A common challenge these systems face is the variety of hardware faults that may occur, due to high load among other reasons, which translate into errors resulting in malfunctions or even server downtime. The hardware components responsible for most of these errors are the CPU, RAM, and the hard disk drive (HDD). In this work, we investigate selected CPU, RAM, and HDD errors observed or simulated in kernel ring buffer log files from GNU/Linux servers. Moreover, a severity characterization is given for each error type. Understanding these errors is crucial for the efficient analysis of the kernel logs that are usually used for monitoring servers and diagnosing faults. In addition, to support this analysis, we present possible ways of simulating hardware errors in RAM and HDD, aiming to facilitate the testing of methods for detecting and tackling such issues on a server running GNU/Linux.
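As an illustration of the kind of kernel-log analysis described above, the following sketch classifies dmesg-style lines by hardware component. The keyword patterns and severity labels are our assumptions for this sketch, not the paper's error taxonomy.

```python
import re

# Illustrative signatures for CPU (machine check), RAM (EDAC/ECC),
# and HDD (ATA/I-O) errors in kernel ring buffer output. These
# patterns and severity labels are assumptions, not an exhaustive set.
ERROR_PATTERNS = {
    "cpu": (re.compile(r"mce|machine check", re.I), "critical"),
    "ram": (re.compile(r"edac|ecc error", re.I), "warning"),
    "hdd": (re.compile(r"ata\d+|i/o error|unrecovered read error", re.I),
            "critical"),
}

def classify(line):
    """Return (component, severity) for a kernel log line, or None."""
    for component, (pattern, severity) in ERROR_PATTERNS.items():
        if pattern.search(line):
            return component, severity
    return None

def scan(log_lines):
    """Group matching log lines by hardware component."""
    hits = {}
    for line in log_lines:
        result = classify(line)
        if result:
            hits.setdefault(result[0], []).append((result[1], line))
    return hits
```

In practice the input would come from `dmesg` or `/dev/kmsg`; here the scanner is fed plain strings so the matching logic can be tested in isolation.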
Abstract: Nowadays, the data center industry faces strong pressure to increase speed and data-processing capacity while keeping its devices at a suitable working temperature without penalizing that capacity. Consequently, the cooling systems of such facilities use a large amount of energy to dissipate the heat generated inside the servers, and developing new cooling techniques, or perfecting existing ones, would be a great advance for this industry. Installing a matrix of temperature sensors distributed throughout the structure of each server would provide the information required to obtain an instantaneous temperature profile inside it. However, the number of temperature probes required to obtain the temperature profiles with sufficient accuracy is very high, and they are expensive. Therefore, other, less intrusive techniques are employed, in which each point characterizing the server temperature profile is obtained by solving differential equations through simulation, simplifying data collection but increasing the time needed to obtain results. To reduce these calculation times, complicated and slow computational fluid dynamics simulations are replaced by simpler and faster finite element method simulations, which solve the Burgers' equations by backward, forward, and central discretization techniques after simplifying the energy and enthalpy conservation differential equations. The discretization methods employed for solving the first- and second-order derivatives of the Burgers' equation obtained after these simplifications are the key to obtaining results with greater or lesser accuracy, each method carrying its characteristic truncation error.
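The backward, forward, and central discretizations mentioned above can be illustrated with a minimal finite-difference sketch of the 1D viscous Burgers equation. The grid, periodic boundaries, and parameter values are our assumptions for illustration, not the paper's server model.

```python
import numpy as np

def burgers_step(u, dx, dt, nu):
    """One explicit time step of the 1D viscous Burgers equation
    u_t + u * u_x = nu * u_xx, using a backward (upwind) difference
    for the advective first derivative and a central difference for
    the diffusive second derivative. Periodic boundary conditions."""
    u_x_back = (u - np.roll(u, 1)) / dx                          # backward
    u_xx_cent = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2  # central
    return u - dt * u * u_x_back + dt * nu * u_xx_cent
```

A forward difference for the first derivative would simply use `(np.roll(u, -1) - u) / dx` instead; the choice among the three stencils determines the leading truncation error term of the scheme.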
Abstract: 2016 became the year of the Artificial Intelligence explosion. AI technologies have matured to the point that most well-known tech giants are making large investments to increase their AI capabilities. Machine learning is the science of getting computers to act without being explicitly programmed, and deep learning is a subset of machine learning that uses deep neural networks to train a machine to learn features directly from data. Deep learning enables many machine learning applications and expands the field of AI. At present, deep learning frameworks are widely deployed on servers for deep learning applications in both academia and industry. In training deep neural networks there are many standard processes and algorithms, but the performance of different frameworks may differ. In this paper, we evaluate the running performance of two state-of-the-art distributed deep learning frameworks that run training calculations in parallel over multiple GPUs and multiple nodes in our cloud environment. We evaluate the training performance of the frameworks with the ResNet-50 convolutional neural network, and we analyze the factors that account for the performance of both distributed frameworks. Through the experimental analysis, we identify overheads that could be further optimized. The main contribution is that the evaluation results provide optimization directions for both performance tuning and algorithmic design.
Abstract: SQL injection is one of the most common types of attacks and has a very critical impact on web servers. In the worst case, an attacker can perform post-exploitation after a successful SQL injection attack. In web server forensics, server analysis is closely related to log file analysis, but large file sizes and differing log types can make it difficult for investigators to look for traces of attackers on the server. The purpose of this paper is to help investigators take appropriate steps when a web server is attacked. We use attack scenarios based on SQL injection, including PHP backdoor injection as post-exploitation. We perform post-mortem analysis of web server logs based on the Hypertext Transfer Protocol (HTTP) POST and HTTP GET method patterns that are characteristic of SQL injection attacks. In addition, we propose a structured analysis method spanning the web server application log file, the database application, and the other logs that exist on the web server. This method gives investigators a structured way to analyze log files and produce evidence of an attack within acceptable time, and other attack techniques may also be detectable with it. It can also help web administrators prepare their systems for forensic readiness.
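The GET/POST pattern matching described above can be sketched as a simple scanner over combined-format access logs. The signature list is a small illustrative assumption, not the paper's detection ruleset.

```python
import re

# A few common SQL injection signatures as they appear URL-encoded in
# request lines (illustrative, deliberately non-exhaustive).
SQLI_SIGNATURES = re.compile(
    r"(union(\s|%20)+select|or(\s|%20)+1=1|information_schema|sleep\(|%27|--)",
    re.IGNORECASE,
)

def suspicious_requests(access_log_lines):
    """Return (method, path) pairs for Apache/Nginx combined-format log
    entries whose request line matches a known SQLi signature."""
    hits = []
    for line in access_log_lines:
        m = re.search(r'"(GET|POST)\s+(\S+)', line)
        if m and SQLI_SIGNATURES.search(m.group(2)):
            hits.append((m.group(1), m.group(2)))
    return hits
```

Note that POST payload bodies are not stored in standard access logs, which is one reason the paper correlates the web server log with database and other application logs.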
Abstract: Security of context-aware ubiquitous systems is
paramount, and authentication plays an important role in cloud
computing and ubiquitous computing. Trust management has been
identified as a vital component for establishing and maintaining
successful relational exchanges between trading partners in cloud
and ubiquitous systems. Establishing trust is the way to build a
good relationship between client and provider: positive activities
increase the trust level, while negative ones destroy trust
immediately. We propose a new context-aware authentication system
using a trust management system between client and server, and
between servers, a trust which induces partnership and thus close
cooperation between these servers. We define the rules
(algorithms), as well as the formulas, to manage and calculate the
trust degrees depending on context, in order to uniquely
authenticate a user (thus a single sign-on) and to provide him
with better services.
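The asymmetry described above, where positive activities raise trust gradually while negative ones destroy it immediately, can be sketched with a simple update rule. The formula and the parameter values are our illustrative assumptions, not the paper's trust-degree formulas.

```python
def update_trust(trust, outcome, alpha=0.1, penalty=0.5):
    """Illustrative trust-degree update (an assumption, not the
    paper's formula): a successful interaction moves trust a fraction
    alpha of the way toward 1.0; a failed interaction cuts it sharply,
    reflecting that negative activities destroy trust immediately."""
    if outcome:                        # successful interaction
        return min(1.0, trust + alpha * (1.0 - trust))
    return max(0.0, trust * penalty)   # failed interaction
```

In a context-aware system, `alpha` and `penalty` could themselves depend on context (location, device, time of access), which is where the paper's context-dependent formulas would plug in.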
Abstract: The Internet of Things (IoT) is a powerful industry system in which end-devices are interconnected and automated, allowing the devices to analyze data and execute actions based on the analysis. IoT leverages the technologies of Radio-Frequency Identification (RFID) and Wireless Sensor Networks (WSN), including mobile devices and sensors, and these technologies contribute to the evolution of IoT. However, because more devices are connected to each other over the Internet and data from various sources are exchanged between things, confidentiality of the data becomes a major concern. This paper focuses on one of the major challenges in IoT, authentication, so that data integrity and confidentiality are preserved. A few solutions from papers of the last few years are reviewed. One proposed solution secures the communication between IoT devices and cloud servers with an Elliptic Curve Cryptography (ECC) based mutual authentication protocol, using Hyper Text Transfer Protocol (HTTP) cookies as a security parameter. The next proposed solution uses a keyed-hash scheme protocol to enable IoT devices to authenticate each other without the presence of a central control server. Another proposed solution uses a Physical Unclonable Function (PUF) based mutual authentication protocol; it emphasizes tamper-resistant and resource-efficient technology, and amounts to a 3-way handshake security protocol.
Abstract: The Extended Enterprise Resource Planning (ERPII)
system usually requires massive amounts of storage space, powerful
servers, and large upfront and ongoing investments to purchase and
manage the software and related hardware, which many organizations
cannot afford. In recent decades, organizations have preferred to
adapt their business structures to new technologies in order to
remain competitive in the world economy. Cloud computing (one of
the tools of information technology (IT)) is a modern system that
represents the next-generation application architecture. Cloud
computing also has advantages that reduce costs in many ways, such
as lower upfront costs for all computing infrastructure and lower
costs of maintenance and support. On the other hand, traditional
ERPII cannot cope with huge amounts of data and the relations
between organizations. In this study, based on a literature
review, ERPII is investigated in the context of cloud computing,
where organizations can operate more efficiently, and it is
examined how ERPII under these conditions responds to
organizations' needs regarding large amounts of data and
inter-organizational relations.
Abstract: The psychological present has an actual extension.
When a sequence of instantaneous stimuli falls in this short interval
of time, observers perceive a compresence of events in succession
and the temporal order depends on the qualitative relationships
between the perceptual properties of the events. Two experiments
were carried out to study the influence of perceptual grouping, with
and without temporal displacement, on the duration of auditory
sequences. The psychophysical method of adjustment was adopted.
The first experiment investigated the effect of temporal displacement
of a white noise on sequence duration. The second experiment
investigated the effect of temporal displacement, along the pitch
dimension, on the temporal shortening of the sequence. The results suggest
that the temporal order of sounds, in the case of temporal
displacement, is organized along the pitch dimension.
Abstract: This paper describes the use of the Internet as a feature to enhance the security of software that is to be distributed or sold to users potentially all over the world. By placing some of the features of the protected software on a secure server, we increase its security. Communication between the protected software and the secure server is done by a double-lock algorithm. The paper also includes an analysis of intruders and describes possible responses for detecting threats.
Abstract: Cloud Computing is a convenient model for on-demand network access to shared pools of configurable virtual computing resources, such as servers, networks, storage devices, and applications. The cloud serves as an environment in which companies and organizations can use infrastructure resources without making any purchases, accessing such resources wherever and whenever they need. Cloud computing helps overcome a number of problems in various Information Technology (IT) domains such as Geographical Information Systems (GIS), scientific research, e-Governance systems, Decision Support Systems, ERP, web application development, and mobile technology. Companies can use Cloud Computing services to store large amounts of data that can be accessed from anywhere and at any time. Such services are rented by the client companies, where the actual rent depends on the amount of data stored on the cloud and the amount of processing power used in a given time period. The resources offered by cloud service companies are flexible in the sense that user companies can increase or decrease their storage or processing power requirements at any time, thus minimizing the overall rental cost of the service they receive. In addition, Cloud Computing service providers offer fast processors and application software that can be shared by their clients. This is especially important for small companies with limited budgets which cannot afford to purchase their own expensive hardware and software. This paper is an overview of Cloud Computing, giving its types, principles, advantages, and disadvantages. In addition, the paper gives some example engineering applications of Cloud Computing and makes suggestions for possible future applications in the field of engineering.
Abstract: In this paper, the flow of different classes of patients
into a hospital is modelled and analyzed using the queueing
network analyzer (QNA) algorithm and discrete event simulation.
The inputs to QNA are the rate and variability parameters of the
arrival and service times, in addition to the number of servers in
each facility. The modelled patient flows closely match the real
flows of a hospital in Egypt. Based on the analysis of waiting
times, two approaches are suggested for improving performance:
separating patients into service groups, and adopting different
service policies for sequencing patients through hospital units.
Separating a specific group of patients with a higher performance
target, to be served apart from the rest of the patients with a
lower performance target, requires the same capacity while
improving performance for the selected group. It is also shown
that adopting the shortest processing time and shortest remaining
processing time service policies, among other tested policies,
would result in 11.47% and 13.75% reductions in average waiting
time, respectively, relative to the first come, first served
policy.
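The benefit of shortest-processing-time sequencing over first-come-first-served can be seen even in a toy single-server setting. The sketch below is our illustration of the policy comparison, not the paper's hospital queueing model.

```python
def avg_wait(service_times, policy="fcfs"):
    """Average waiting time for a batch of jobs all present at time 0
    at one server, under FCFS order or shortest-processing-time (SPT)
    order. SPT minimizes mean waiting time for such a batch."""
    order = sorted(service_times) if policy == "spt" else list(service_times)
    waits, clock = [], 0.0
    for s in order:
        waits.append(clock)   # each job waits until the server frees up
        clock += s
    return sum(waits) / len(waits)
```

Serving the short jobs first means fewer jobs wait behind each long one, which is the same intuition behind the waiting-time reductions reported in the abstract.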
Abstract: In this paper, we focus on the reliability and performance analysis of the Computer Centre (CC) at Yobe State University, Damaturu, Nigeria. The CC consists of three servers: one database mail server, one redundant server, and one shared with the client computers in the CC (called a local server). Considering the different possible modes of operation of the CC, an analysis has been carried out to evaluate various popular measures of reliability, such as availability, reliability, mean time to failure (MTTF), and the profit from operating the system. The system can fail completely due to failure of the router, failure of the redundant server before the mail server is repaired, or switch failure. The system fails partially when the local server fails. Failed devices are restored according to the Least Recently Used (LRU) technique. The system can also fail entirely due to server cooling failure, electricity failure, or a natural calamity such as an earthquake, fire, or tsunami. All failure rates are assumed to be constant, following an exponential distribution, while repairs follow two types of distributions: general and the Gumbel-Hougaard family copula distribution.
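The constant-failure-rate assumption above has a simple consequence worth making explicit: for components in series (where any single failure brings the system down), exponential failure rates add. The sketch below shows only this basic building block; it does not reproduce the paper's partial-failure states or copula-based repair model.

```python
import math

def series_mttf(failure_rates):
    """MTTF of a series system with constant (exponential) failure
    rates: the system fails when any component fails, so the rates
    add and MTTF = 1 / sum(lambda_i)."""
    return 1.0 / sum(failure_rates)

def series_reliability(failure_rates, t):
    """Survival probability R(t) = exp(-t * sum(lambda_i)) for the
    same series system."""
    return math.exp(-t * sum(failure_rates))
```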
Abstract: Cloud computing, a technology made possible through virtualization within networks, represents a shift from the traditional ownership of infrastructure and other resources by distinct organizations to a more scalable pattern in which computing resources are rented online to organizations, either on a pay-as-you-use basis or by subscription. In other words, cloud computing entails the renting out of computing resources (such as storage space, memory, servers, applications, and networks) by a third party to its clients on a pay-as-you-go basis. It is an innovative technology embraced globally because of its renowned benefits, foremost of which is its cost effectiveness for the organizations that engage its services. In Nigeria, the services are provided either directly to companies, mostly by key IT players such as Microsoft, IBM, and Google, or in partnership with other players such as Infoware, Descasio, and Sunnet. This enables organizations to rent IT resources on a pay-as-you-go basis, saving them the waste that accrues from acquiring and maintaining IT resources such as a separate data centre. This paper appraises the challenges of cloud computing adoption in Nigeria, bearing in mind the country's peculiarities in terms of infrastructural development. The methodology includes research questionnaires, formulated hypotheses, and the testing of those hypotheses. The major findings are that there are addressable challenges to the adoption of cloud computing in Nigeria, and that the country will gain significantly if the challenges, especially in the area of infrastructural development, are well addressed, since the research established that there are significant gains derivable from the adoption of cloud computing by organizations in Nigeria.
However, these challenges can be overcome by concerted efforts on the part of government and other stakeholders.
Abstract: We study four models of a three-server queueing system with Bernoulli-schedule optional server vacations. Customers arriving at the system one by one in a Poisson process are provided identical exponential service by three parallel servers according to a first-come, first-served queue discipline. In Model A, all three servers may be allowed a vacation at one time; in Model B, at most two of the three servers may be on vacation at one time; in Model C, at most one server is allowed a vacation; and in Model D, no server is allowed a vacation. We study the steady-state behavior of the four models and obtain steady-state probability generating functions for the queue size at a random point in time for all states of the system. For Model D, a known result for a three-server queueing system without server vacations is recovered.
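For Model D (no vacations), the known three-server result is that of the classical M/M/c queue. As a point of reference, the standard steady-state quantities can be computed directly; the parameter values below are illustrative only.

```python
from math import factorial

def mmc_p0(lam, mu, c):
    """Steady-state probability of an empty M/M/c queue with arrival
    rate lam, per-server service rate mu, and c servers (lam < c*mu)."""
    rho = lam / mu
    s = sum(rho**n / factorial(n) for n in range(c))
    s += rho**c / (factorial(c) * (1.0 - rho / c))
    return 1.0 / s

def erlang_c(lam, mu, c):
    """Erlang C: probability that an arriving customer must wait."""
    rho = lam / mu
    return (rho**c / (factorial(c) * (1.0 - rho / c))) * mmc_p0(lam, mu, c)
```

With c = 1 these reduce to the familiar M/M/1 results (p0 = 1 - rho, waiting probability rho), which is a convenient sanity check.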
Abstract: In a conventional network, most network devices, such as routers, are dedicated devices without much variation in capacity. In recent years, a new concept, network functions virtualisation (NFV), has come into use. The intention is to implement a variety of network functions in software on general-purpose servers, which allows the network operator to select their capacities and locations without constraints. This paper focuses on the allocation of NFV-based routing functions, which are among the critical network functions, and presents a virtual routing function allocation algorithm that minimizes the total power consumption. In addition, this study presents a useful allocation policy for virtual routing functions, based on an evaluation with a ladder-shaped network model. This policy takes into consideration the ratio of the power consumption of a routing function to that of a circuit, and the traffic distribution between areas. Furthermore, the paper shows that there are cases in which the use of NFV-based routing functions makes it possible to reduce the total power consumption dramatically compared with a conventional network, in which it is not economically viable to distribute small-capacity routing functions.
Abstract: We present a prioritized, limited multi-server processor sharing (PS) system in which each server has a different capacity and N (≥2) priority classes are allowed in each PS server. In each prioritized, limited server, a different service ratio is assigned to each class of request, and the number of requests being processed is limited to below a certain number. Routing strategies for such prioritized, limited multi-server PS systems that take into account the capacity of each server are also presented, and a performance evaluation procedure for these strategies is discussed. Practical performance measures of these strategies, such as loss probability, mean waiting time, and mean sojourn time, are evaluated via simulation. In a PS server, at the arrival (or departure) of a request, the extension (or shortening) of the remaining sojourn time of each request in service can be calculated using the number of requests of each class and the priority ratios. Using a simulation program that executes these events and calculations, the performance of the proposed prioritized, limited multi-server PS rule can be analyzed. From the evaluation results, the most suitable routing strategy for a loss or waiting system is identified.
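The per-class service rates that drive the sojourn-time recalculation above can be sketched with the standard discriminatory processor sharing rule, where each request's share of the server capacity is proportional to its class weight. The proportional formula is our illustrative assumption, not necessarily the paper's exact priority-ratio definition.

```python
def service_rates(capacity, counts, weights):
    """Per-request service rate for each class in a discriminatory
    processor-sharing server: a class-i request receives capacity *
    w_i / sum_j(n_j * w_j), where n_j is the number of class-j
    requests in service and w_j is the class weight."""
    total = sum(n * w for n, w in zip(counts, weights))
    if total == 0:
        return [0.0] * len(weights)
    return [capacity * w / total for w in weights]
```

At each arrival or departure, `counts` changes, so these rates are recomputed and the remaining sojourn time of every request in service is stretched or shortened accordingly, which is exactly the event-driven calculation the simulation performs.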
Abstract: Nowadays, big companies such as Google and Microsoft,
which have adequate proxy servers, have implemented their web
crawlers for a given website perfectly in parallel. However, for
lack of expensive proxy servers, it remains a puzzle for
researchers to crawl large amounts of information from a single
website in parallel. In this case, a good choice for researchers
is to use free public proxy servers crawled from the Internet. To
improve the efficiency of a web crawler, two issues should be
considered primarily: (1) tasks may fail owing to the instability
of free proxy servers; (2) a proxy server will be blocked if it
visits a single website too frequently. In this paper, we propose
Proxisch, an optimization approach for scheduling large numbers of
unstable proxy servers, which allows anyone to run a web crawler
efficiently at extremely low cost. Proxisch is designed to work
efficiently by making maximum use of reliable proxy servers. To
solve the second problem, it establishes a frequency control
mechanism that keeps the visiting frequency of any chosen proxy
server below the website's limit. The results show that our
approach performs better than other scheduling algorithms.
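A frequency control mechanism of the kind described above can be sketched as per-proxy sliding-window rate limiting. The class below is our illustration of the idea, not the Proxisch implementation; the limit and window values are arbitrary.

```python
import time
from collections import deque

class FrequencyController:
    """Keeps each proxy's visit rate to a target website below a
    limit, using a sliding window of recent visit timestamps."""

    def __init__(self, max_visits, window_seconds):
        self.max_visits = max_visits
        self.window = window_seconds
        self.history = {}  # proxy -> deque of recent visit timestamps

    def allow(self, proxy, now=None):
        """Record and permit a visit through `proxy` only if it has
        made fewer than max_visits visits in the last window."""
        now = time.monotonic() if now is None else now
        visits = self.history.setdefault(proxy, deque())
        while visits and now - visits[0] >= self.window:
            visits.popleft()          # drop timestamps outside window
        if len(visits) < self.max_visits:
            visits.append(now)
            return True
        return False
```

A scheduler would call `allow` before dispatching a crawl task and fall back to another proxy when it returns False, so no single proxy ever exceeds the website's blocking threshold.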
Abstract: One of the leading problems in cyber security today
is the emergence of targeted attacks conducted by adversaries with
access to sophisticated tools. These attacks usually steal the
system privileges of senior-level employees in order to gain
unauthorized access to confidential knowledge and valuable
intellectual property. The malware used for the initial compromise
of the systems is sophisticated and may target zero-day
vulnerabilities. In this work we exploit a common behaviour of
malware called "beaconing", in which infected hosts communicate
with Command and Control (C2) servers at regular intervals with
relatively small time variations. By analysing such beacon
activity through passive network monitoring, it is possible to
detect potential malware infections. We therefore focus on time
gaps as indicators of possible C2 activity in targeted enterprise
networks. We represent DNS log files as a graph whose vertices are
destination domains and whose edges carry timestamps. Then, using
four periodicity detection algorithms for each pair of
internal-external communications, we check the timestamp sequences
to identify beacon activity. Finally, based on the graph
structure, we infer the existence of other infected hosts and
malicious domains enrolled in the attack activities.
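One simple periodicity test of the kind described above flags a timestamp sequence as beacon-like when its inter-event gaps are nearly constant. This coefficient-of-variation check is our illustration of a single detector; the paper combines four periodicity detection algorithms, and the thresholds below are arbitrary.

```python
import statistics

def is_beacon(timestamps, max_cv=0.1, min_events=5):
    """Return True when the gaps between consecutive timestamps are
    nearly constant, i.e. the coefficient of variation (stdev / mean
    of the gaps) is at most max_cv. Regular gaps with small variation
    are the signature of C2 beaconing."""
    if len(timestamps) < min_events:
        return False                      # too few events to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean_gap = statistics.mean(gaps)
    if mean_gap <= 0:
        return False
    return statistics.pstdev(gaps) / mean_gap <= max_cv
```

Run per (internal host, destination domain) pair over DNS log timestamps, this marks the edges of the graph whose communication pattern warrants closer inspection.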