Software Product Quality Evaluation Model with Multiple Criteria Decision Making Analysis

This paper presents a software product quality evaluation model based on the ISO/IEC 25010 quality model. The evaluation characteristics and subcharacteristics were identified from the ISO/IEC 25010 quality model. The multidimensional structure of the quality model is based on characteristics such as functional suitability, performance efficiency, compatibility, usability, reliability, security, maintainability, and portability, and their associated subcharacteristics. Random numbers are generated to establish the decision maker's importance weights for each subcharacteristic, as well as the decision matrix of the decision maker's final scores for each software product against each subcharacteristic. Objective criteria importance weights and index scores for the datasets were thus obtained from the random numbers. In the proposed model, five software product quality evaluation datasets under three different weight vectors were analyzed with the multiple criteria decision analysis method preference analysis for reference ideal solution (PARIS), including a comparison and a sensitivity analysis procedure. This study contributes to a better understanding of how MCDMA methods and the ISO/IEC 25010 quality model guidelines can be applied in the software product quality evaluation process.
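
For illustration, the random data-generation step described above can be sketched as follows. The weighted-sum aggregation at the end is a generic placeholder, not the PARIS procedure applied in the paper, and the counts of products and subcharacteristics are assumptions.

```python
# Illustrative sketch of random weight and decision-matrix generation; the
# weighted-sum ranking is a generic aggregation, NOT the PARIS method.
import numpy as np

rng = np.random.default_rng(seed=42)

n_products = 5      # software product alternatives (assumed count)
n_subchars = 31     # ISO/IEC 25010 subcharacteristics (assumed count)

# Random importance weights, normalized to sum to one.
weights = rng.random(n_subchars)
weights /= weights.sum()

# Random decision matrix: score of each product against each subcharacteristic.
decision_matrix = rng.uniform(low=1.0, high=9.0, size=(n_products, n_subchars))

# Vector normalization of each criterion column, then a simple weighted-sum index.
normalized = decision_matrix / np.linalg.norm(decision_matrix, axis=0)
scores = normalized @ weights
print("Product ranking (best first):", np.argsort(-scores) + 1)
```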

Sustainable Balanced Scorecard for Kaizen Evaluation: Comparative Study between Egypt and Japan

Continuous improvement activities are becoming a key organizational success factor; such activities include, but are not limited to, kaizen, six sigma, lean production, and continuous improvement projects. Kaizen is a Japanese philosophy of continuous improvement through small incremental changes that improve an organization's performance, reduce costs, reduce delay time, reduce waste in production, and so on. This research aims to propose a measuring system for kaizen activities from a sustainable balanced scorecard perspective. A survey was developed and disseminated among kaizen experts in both Egypt and Japan in order to allocate key performance indicators for both the kaizen process (critical success factors) and its results (kaizen benefits) to the five sustainable balanced scorecard perspectives. This research contributes to the extant literature by presenting a measurement of both the kaizen process and its results that illuminates the benefits of using kaizen. The presented measurement can also support the sustainability of kaizen implementation across various sectors and industries; grasping the full benefits of kaizen implementation will contribute to the spread of kaizen understanding and practice. In addition, this research provides insights into the social and cultural differences that influence kaizen success. Determining the proper combination of kaizen measures could help any industry, whether service or manufacturing, to better measure kaizen activities. The comparison between the Japanese implementation of kaizen, as the pioneers of continuous improvement, and the Egyptian implementation will help recommend better kaizen practices in Egypt and contribute to the 2030 sustainable development goals. The study results reveal no significant difference in the allocation of kaizen benefits between Egypt and Japan. With regard to the critical success factors, however, some differences appeared, reflecting the social differences and understanding between the two countries; a single integrated measurement was reached between the Egyptian and Japanese allocations, with the Japanese experts' opinion taken as the ultimate criterion for selection.

Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis

The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicality in medicine. This research proposes a method of incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of the deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four kinds of Chinese EMR datasets; CNN, LSV-CNN, and SDG-CNN are designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four datasets, with the best configuration yielding an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and that the LSV-SDG-CNN model improves disease classification accuracy by a clear margin.
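
A minimal sketch of the feature-concatenation idea, appending a lexical-semantic vector (LSV) to each word2vec embedding before the CNN, is given below. The dimensions, the toy lookup tables, and the padding scheme are illustrative assumptions, not the architecture used in the paper.

```python
import numpy as np

EMB_DIM, LSV_DIM, MAX_LEN = 100, 20, 50   # assumed dimensions

def embed_token(token, w2v, lsv_lookup):
    """Concatenate the word2vec vector with the lexical-semantic vector (zeros if absent)."""
    w2v_vec = w2v.get(token, np.zeros(EMB_DIM))
    lsv_vec = lsv_lookup.get(token, np.zeros(LSV_DIM))
    return np.concatenate([w2v_vec, lsv_vec])

def embed_record(tokens, w2v, lsv_lookup):
    """Turn a tokenized EMR record into a (MAX_LEN, EMB_DIM + LSV_DIM) matrix for the CNN."""
    rows = [embed_token(t, w2v, lsv_lookup) for t in tokens[:MAX_LEN]]
    while len(rows) < MAX_LEN:                  # pad short records
        rows.append(np.zeros(EMB_DIM + LSV_DIM))
    return np.stack(rows)

# Toy lookups; "fever" stands in for a Chinese medical term.
w2v = {"fever": np.full(EMB_DIM, 0.1)}
lsv = {"fever": np.full(LSV_DIM, 0.5)}
print(embed_record(["fever", "cough"], w2v, lsv).shape)   # (50, 120)
```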

Machine Learning Methods for Flood Hazard Mapping

This paper proposes a neural network approach to flood hazard mapping. The core of the model is a machine learning component fed by frequency ratios, namely statistical correlations between flood event occurrences and a selected number of topographic properties. Its classification capability was compared with the flood hazard maps of the River Basin Plans (Piani Assetto Idrogeologico, abbreviated as PAI) designed by the Italian Institute for Environmental Protection and Research, ISPRA (Istituto Superiore per la Protezione e la Ricerca Ambientale), which encode four increasing flood hazard levels. The Italian region of Piemonte was considered as study area, without loss of generality. The frequency ratios may be used as a standalone block to model flood hazard. Nevertheless, combining them with a neural network improves the classification power by several percentage points, and the approach may be proposed as a basic tool for modeling flood hazard maps in a wider scope.
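
The frequency-ratio statistic itself has a simple form: the share of flood occurrences falling in a class of a topographic factor, divided by the share of the study area covered by that class. The sketch below uses illustrative class counts, not the paper's binning.

```python
import numpy as np

def frequency_ratio(flood_counts, area_counts):
    """flood_counts[i], area_counts[i]: flood cells and total cells in class i of a factor."""
    flood_counts = np.asarray(flood_counts, dtype=float)
    area_counts = np.asarray(area_counts, dtype=float)
    flood_share = flood_counts / flood_counts.sum()
    area_share = area_counts / area_counts.sum()
    return flood_share / area_share   # FR > 1: class is more flood-prone than average

# Example: three slope classes (flat, moderate, steep).
print(frequency_ratio([120, 60, 20], [1000, 2000, 3000]))
```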

Multiple Criteria Decision Making Analysis for Selecting and Evaluating Fighter Aircraft

In this paper, a multiple criteria decision making analysis (MCDMA) technique is presented for ranking and selecting a set of determined alternatives - fighter aircraft - that are associated with a set of decision factors. In fighter aircraft design, conflicting decision criteria, disciplines, and technologies are always involved in the design process, and MCDMA techniques can help to deal with such situations effectively and make wise design decisions. MCDMA theory is a systematic mathematical approach to problems that contain uncertainties in decision making. The feasibility and contributions of applying MCDMA to fighter aircraft selection analysis are explored. In this study, an integrated framework incorporating an MCDMA technique into fighter aircraft analysis is established using the entropy objective weighting method. An improved integrated MCDMA method is utilized to aggregate the multiple decision criteria into one composite figure of merit, which serves as an objective function in the decision process; it is thus demonstrated that a suitable MCDMA method with its decision solution provides an effective objective function for the decision making analysis. Considering that the inherent uncertainties and the weighting factors have a crucial impact on fighter aircraft evaluation, seven fighter aircraft models are constructed for the multiple design criteria under varying weighting factors. The proposed MCDMA model is based on an integrated entropy index procedure and additive MCDMA theory, and its applicability to the fighter aircraft selection problem is considered. The constructed model can provide an efficient decision analysis approach for uncertainty assessment of the decision problem. Consequently, the fighter aircraft alternatives are ranked based on their final evaluation scores, and a sensitivity analysis is conducted.
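
The entropy objective weighting step mentioned above follows the standard Shannon-entropy procedure; a minimal sketch is given below with an illustrative decision matrix rather than the paper's fighter aircraft data.

```python
import numpy as np

def entropy_weights(decision_matrix):
    """Rows: alternatives (aircraft); columns: criteria. Returns objective criterion weights."""
    X = np.asarray(decision_matrix, dtype=float)
    m, _ = X.shape
    P = X / X.sum(axis=0)              # column-wise proportions
    k = 1.0 / np.log(m)
    with np.errstate(divide="ignore", invalid="ignore"):
        E = -k * np.sum(P * np.where(P > 0, np.log(P), 0.0), axis=0)   # criterion entropy
    d = 1.0 - E                        # degree of divergence
    return d / d.sum()

X = np.array([[8.0, 6.5, 7.0],
              [7.5, 8.0, 6.0],
              [9.0, 5.5, 8.5]])
print(entropy_weights(X))
```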

Hydrochemical Contamination Profiling and Spatial-Temporal Mapping with the Support of Multivariate and Cluster Statistical Analysis

The aim of this work was to test a methodology able to generate spatial-temporal maps that synthesize, simultaneously, the trends of distinct hydrochemical indicators in an old radium-uranium tailings dam deposit. Dimensionality reduction by principal component analysis and subsequent data aggregation by cluster analysis make it possible to identify distinct hydrochemical behavioral profiles and to generate synthetic evolutionary hydrochemical maps.
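
A minimal sketch of the two analysis stages, principal component analysis followed by clustering of the reduced scores into behavioral profiles, is given below; the synthetic data, the number of components, and the choice of k-means are illustrative assumptions.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Rows: sampling points x campaigns; columns: hydrochemical indicators (synthetic).
samples = rng.normal(size=(60, 8))

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(samples))
profiles = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
print(profiles[:10])   # behavioral-profile label per sample
```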

Threshold Concepts in TESOL: A Thematic Analysis of Disciplinary Guiding Principles

The notion of Threshold Concepts has offered a fertile new perspective on the transformative effects of mastery of particular concepts on student understanding of subject matter and their developing identities as inductees into disciplinary discourse communities. Only by successfully traversing essential knowledge thresholds can neophytes achieve the more sophisticated understandings of subject matter possessed by mature members of a discipline. This paper uses thematic analysis of disciplinary guiding principles to identify nine candidate Threshold Concepts that appear to underpin effective TESOL practice. The relationship between these candidate TESOL Threshold Concepts, TESOL principles, and TESOL instructional techniques appears to be amenable to a schematic representation based on superordinate categories of TESOL practitioner concern and, as such, offers an alternative to the view of Threshold Concepts as a privileged subset of disciplinary core concepts. The paper concludes by exploring the potential of a Threshold Concepts framework to productively inform TESOL initial teacher education (ITE) and in-service education and training (INSET).

Participatory Financial Inclusion Hypothesis: A Preliminary Empirical Validation Using Survey Design

In Nigeria, enormous efforts and resources have, over the years, been expended on promoting financial inclusion (FI); it is therefore discouraging that many of the country's self-declared FI targets remain unachieved, especially amongst the Rural Dwellers and Actors in the Informal Sectors (RDAIS). Expectedly, many reasons have been advanced for these failures: low literacy levels, huge informal/rural sectors, etc. This study posits that, in spite of these truly debilitating factors, these FI policy failures could have been avoided or mitigated if the principles of active and better-managed citizens' participation had been strictly followed in the (re)design and implementation of FI policies. In other words, in a bid to mitigate the prevalent financial exclusion (FE) in Nigeria, this study hypothesizes a significant positive impact of involving the RDAIS in policy-wide decision making in the FI domain, backed by a preliminary empirical validation. The study also introduces the RDAIS-focused Participatory Financial Inclusion Policy (PFIP) as a major FI policy regeneration and improvement tool. The three categories of respondents that served as research subjects are FI experts in Nigeria (n = 72), RDAIS from the very rural/remote village of Unguwar Dogo in Northern Nigeria (n = 43), and RDAIS from another rural village, Sekere (n = 56), in the Southern region of Nigeria. Using a survey design (5-point Likert scale questionnaires), random/stratified sampling, and descriptive/inferential statistics, the study largely recorded independent consensus amongst these three categories of respondents that the RDAIS's active participation in iterative FI policy initiation, (re)design, implementation, and (re)evaluation could indeed yield improved FI outcomes. However, a few questionnaire items recorded divergent opinions and various statistically (in)significant differences in the mean scores of the three categories. The PFIP (or any customized version of it) should therefore be carefully integrated into the NFIS of Nigeria (and possibly into the NFIS of other developing countries) to truly and fully provide FI policy integration for these excluded RDAIS and arrest the prevalence of FE.

HaskellFL: A Tool for Detecting Logical Errors in Haskell

Understanding and using the functional paradigm is a challenge for many programmers, and looking for logical errors in code can take a great deal of a developer's time as a program grows in size. To facilitate both processes, this paper presents HaskellFL, a tool that uses fault localization techniques to locate logical errors in Haskell code. The Haskell subset used in this work is sufficiently expressive for those studying Functional Programming to get immediate help debugging their code and to answer questions about key concepts of the functional paradigm. HaskellFL was tested against Functional Programming assignments submitted by students enrolled in the Functional Programming course at the Federal University of Minas Gerais and against exercises from the Exercism Haskell track that are publicly available on GitHub. This work also evaluated the effectiveness of two fault localization techniques, Tarantula and Ochiai, in the Haskell context. The EXAM score was chosen to evaluate the tool's effectiveness, and the results showed that HaskellFL reduced the effort needed to locate an error in all tested scenarios, and that the Ochiai method was more effective than Tarantula.
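
For reference, the two suspiciousness formulas evaluated by the tool, in their standard spectrum-based fault localization form (not HaskellFL's own code), are sketched below.

```python
import math

def tarantula(failed, passed, total_failed, total_passed):
    """failed/passed: tests covering the element that failed/passed; totals over the suite."""
    fail_ratio = failed / total_failed if total_failed else 0.0
    pass_ratio = passed / total_passed if total_passed else 0.0
    denom = fail_ratio + pass_ratio
    return fail_ratio / denom if denom else 0.0

def ochiai(failed, passed, total_failed, total_passed):
    denom = math.sqrt(total_failed * (failed + passed))
    return failed / denom if denom else 0.0

# An expression covered by 3 of 4 failing tests and 1 of 10 passing tests:
print(tarantula(3, 1, 4, 10), ochiai(3, 1, 4, 10))
```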

Dimensionality Reduction in Modal Analysis for Structural Health Monitoring

Autonomous structural health monitoring (SHM) of structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (an intelligent multiplexer) that tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., entropy, variance, kurtosis), and feature extraction (an auto-associative neural network, ANN) that combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classification (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with both normal and anomalous ones. In particular, principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network are presented and their performances are compared. It is also shown that, by evaluating the correct features, the anomaly can be detected with an accuracy and an F1 score greater than 95%.
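
A minimal sketch of the final anomaly-detection stage, a one-class classifier trained only on standard-condition feature vectors, is given below; the synthetic frequencies and the one-class SVM settings are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
# Four tracked fundamental frequencies per observation (standard condition, synthetic).
train = rng.normal(loc=[3.9, 5.0, 9.8, 10.3], scale=0.05, size=(500, 4))
# Test set: standard points plus a shifted (damaged-like) cluster.
test = np.vstack([rng.normal(loc=[3.9, 5.0, 9.8, 10.3], scale=0.05, size=(50, 4)),
                  rng.normal(loc=[3.7, 4.8, 9.5, 10.0], scale=0.05, size=(50, 4))])

scaler = StandardScaler().fit(train)
occ = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(scaler.transform(train))
pred = occ.predict(scaler.transform(test))   # +1 = normal, -1 = anomaly
print("flagged anomalies:", int((pred == -1).sum()), "of", len(test))
```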

A Medical Vulnerability Scoring System Incorporating Health and Data Sensitivity Metrics

With the advent of complex software and increased connectivity, the security of life-critical medical devices is becoming an increasing concern, particularly given their direct impact on human safety. Security is essential, but it is impossible to develop completely secure and impenetrable systems at design time. It is therefore important to assess the potential impact on security and safety of exploiting a vulnerability in such critical medical systems. The common vulnerability scoring system (CVSS) calculates the severity of exploitable vulnerabilities; however, for medical devices, it does not consider the unique challenges of impacts on human health and privacy. Thus, a medical device on which a human life depends (e.g., pacemakers, insulin pumps) can score very low, while a system on which a human life does not depend (e.g., hospital archiving systems) might score very high. In this paper, we present a Medical Vulnerability Scoring System (MVSS) that extends CVSS to address the health and privacy concerns of medical devices. We propose incorporating two new parameters, namely health impact and sensitivity impact. Sensitivity refers to the type of information that can be stolen from the device, and health represents the impact on the safety of the patient if the vulnerability is exploited (e.g., potential harm, life-threatening consequences). We evaluate 15 different known vulnerabilities in medical devices and compare MVSS against two state-of-the-art medical device-oriented vulnerability scoring systems and the foundational CVSS.
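
Purely to illustrate the idea of folding additional impact metrics into a CVSS-style score, the sketch below extends the CVSS v3.1 base impact sub-score, ISS = 1 - (1 - C)(1 - I)(1 - A), with hypothetical health and sensitivity terms; the extra terms and their levels are NOT the authors' MVSS equations.

```python
# CVSS v3.1 numeric values for impact metrics; the health and sensitivity
# parameters below are hypothetical illustrations, not the MVSS definition.
IMPACT_LEVELS = {"none": 0.0, "low": 0.22, "high": 0.56}

def extended_impact_subscore(conf, integ, avail, health, sensitivity):
    """All arguments are 'none' | 'low' | 'high'."""
    product = 1.0
    for level in (conf, integ, avail, health, sensitivity):
        product *= 1.0 - IMPACT_LEVELS[level]
    return 1.0 - product

# Same C/I/A profile, but a severe health/sensitivity impact raises the sub-score:
print(extended_impact_subscore("low", "low", "none", "none", "none"))
print(extended_impact_subscore("low", "low", "none", "high", "high"))
```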

The Impact of ISO 9001 Certification on Brazilian Firms’ Performance: Insights from Multiple Case Studies

The evolution of quality management in companies was strongly enabled by, among other factors, ISO 9001 certification, which is considered a crucial requirement by several customers. Likewise, performance measurement provides useful insights for companies to identify how their decision-making process is reflected in their improvement. One of the most used performance measurement models is the balanced scorecard (BSC), which uses four perspectives to address a firm's performance: financial, internal process, customer satisfaction, and learning and growth. Since ISO 9001 certified firms are likely to measure their performance through the BSC approach, it is important to verify whether the certificate influences firm performance or not. Therefore, this paper aims to verify the impact of ISO 9001:2015 on Brazilian firms' performance from the BSC perspective. Nine certified companies located in the Southeast region of Brazil were studied through a multiple case study approach. Within this study, it was possible to identify the positive impact of ISO 9001 on firms' overall performance, and four Critical Success Factors (CSFs) were identified as relevant to the linkage between ISO 9001 and firms' performance: employee involvement, top management, process management, and customer focus. Due to the COVID-19 pandemic, the interviews were limited to the quality manager specialist, and the sample was limited because several companies were closed during the period of the study. This study presents an in-depth analysis of the relationship between ISO 9001 certification and firms' performance in a developing country.

Speedup Breadth-First Search by Graph Ordering

Breadth-First Search (BFS) is a core graph algorithm that is widely used for graph analysis, and since it appears in many graph applications, improving BFS performance is essential. In this paper, we present a graph ordering method that reorders the graph nodes to achieve better data locality and thus improve BFS performance. Our method is based on the observation that sibling relationships dominate the cache access pattern during BFS traversal. Therefore, we propose a frequency-based model to construct the graph order. First, we optimize the graph order according to the nodes' visit frequency: nodes with a high visit frequency are processed with priority. Second, we try to maximize the overlap of child nodes layer by layer. As this problem is proven to be NP-hard, we propose a heuristic method that greatly reduces the preprocessing overhead. We conduct extensive experiments on 16 real-world datasets. The results show that our method achieves performance comparable to the state-of-the-art methods while incurring only about 1/15 of their graph ordering overhead.
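
The reordering idea can be sketched as a relabeling of nodes so that frequently visited nodes receive contiguous IDs before BFS runs; below, in-degree is used as a simple stand-in for the visit frequency, and the paper's frequency model and overlap-maximization heuristic are not reproduced.

```python
from collections import deque

def reorder_by_frequency(adj):
    """adj: dict node -> neighbor list. Returns (relabeled adjacency, old-to-new mapping)."""
    indeg = {u: 0 for u in adj}
    for u in adj:
        for v in adj[u]:
            indeg[v] = indeg.get(v, 0) + 1
    order = sorted(adj, key=lambda u: -indeg.get(u, 0))       # hot nodes get small IDs
    old_to_new = {u: i for i, u in enumerate(order)}
    new_adj = {old_to_new[u]: sorted(old_to_new[v] for v in adj[u]) for u in adj}
    return new_adj, old_to_new

def bfs(adj, source):
    visited, queue, order = {source}, deque([source]), []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                queue.append(v)
    return order

g = {0: [1, 2], 1: [2, 3], 2: [3], 3: [0]}
ng, mapping = reorder_by_frequency(g)
print(bfs(ng, mapping[0]))
```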

Enhancing the Effectiveness of Air Defense Systems through Simulation Analysis

Air Defense Systems contain high-value assets that are expected to fulfill their mission for several years - in many cases, even decades - while operating in a fast-changing, technology-driven environment. Thus, it is paramount that decision-makers can assess how effective an Air Defense System is in the face of newly developing threats, as well as identify the bottlenecks that could jeopardize the security of a country's airspace. Given the broad extent of activities and the great variety of assets necessary to achieve the strategic objectives, a systems approach was taken to delineate the core requirements and the physical architecture of an Air Defense System. Then, value-focused thinking helped define the measures of effectiveness, and analytical methods were applied to create a formal structure that preliminarily assesses these measures. To validate the proposed methodology, simulation was also used to determine the measures of effectiveness in more complex environments that incorporate both uncertainty and multiple interactions between the entities. The results suggest that the approach can support decisions aimed at enhancing the capabilities of Air Defense Systems. In conclusion, this paper sheds some light on how consolidated approaches from Systems Engineering and Operations Research can serve as valid techniques for solving problems regarding a complex and yet vital matter.

A Comparison of YOLO Family for Apple Detection and Counting in Orchards

In agricultural production and breeding, implementing automatic picking robots in orchard farming to reduce human labour and error is challenging, and their core function is automatic identification based on machine vision. This paper focuses on apple detection and counting in orchards and implements several deep learning methods. Extensive datasets are used, and a semi-automatic annotation method is proposed. The deep learning models considered belong to the state-of-the-art YOLO family. In view of the differences among models with various backbones, a detailed multi-dimensional comparison is made in terms of counting accuracy, mAP, and model memory, laying the foundation for realising automatic precision agriculture.

Lexicon-Based Sentiment Analysis for Stock Movement Prediction

Sentiment analysis is a broad and expanding field that aims to extract and classify opinions from textual data. Lexicon-based approaches are based on the use of a sentiment lexicon, i.e., a list of words each mapped to a sentiment score, to rate the sentiment of a text chunk. Our work focuses on predicting stock price change using a sentiment lexicon built from financial conference call logs. We present a method to generate a sentiment lexicon based upon an existing probabilistic approach. By using a domain-specific lexicon, we outperform traditional techniques and demonstrate that domain-specific sentiment lexicons provide higher accuracy than generic sentiment lexicons when predicting stock price change.
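
A minimal sketch of the lexicon-based scoring step, looking up each token in a word-to-score lexicon and averaging, is given below; the toy lexicon entries are illustrative, not the conference-call lexicon built in the paper.

```python
def lexicon_sentiment(tokens, lexicon):
    """Return the mean sentiment score of tokens found in the lexicon (0.0 if none match)."""
    scores = [lexicon[t] for t in tokens if t in lexicon]
    return sum(scores) / len(scores) if scores else 0.0

# Toy domain-specific lexicon (illustrative values).
finance_lexicon = {"growth": 0.8, "strong": 0.6, "decline": -0.7, "headwinds": -0.5}
text = "strong revenue growth despite macro headwinds".split()
print(lexicon_sentiment(text, finance_lexicon))   # positive overall
```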

Deep Learning Based 6D Pose Estimation for Bin-Picking Using 3D Point Clouds

Estimating the 6D pose of objects is a core step in robot bin-picking tasks. The difficulty is that, in real applications, various objects are usually randomly stacked with heavy occlusion. In this work, we propose a method that regresses 6D poses by predicting three points for each object in the 3D point cloud through deep learning. To resolve the ambiguity of symmetric poses, we propose a labeling method that helps the network converge better. Based on the predicted pose, an iterative method is employed for pose optimization. In real-world experiments, our method outperforms the classical approach in both precision and recall.
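
The abstract does not spell out how a pose is recovered from the three predicted points; a common choice for such a step is SVD-based rigid alignment (the Kabsch algorithm) between the predicted points and their counterparts in the object's model frame, sketched below as an assumption.

```python
import numpy as np

def rigid_transform(model_pts, pred_pts):
    """model_pts, pred_pts: (3, 3) arrays of corresponding points. Returns rotation R, translation t."""
    mu_m, mu_p = model_pts.mean(axis=0), pred_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (pred_pts - mu_p)   # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_p - R @ mu_m
    return R, t

# Synthetic check: rotate three model keypoints by 90 degrees about z and translate.
model = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0]])
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
pred = model @ R_true.T + np.array([0.5, 0.2, 0.3])
R, t = rigid_transform(model, pred)
print(np.round(R, 3), np.round(t, 3))
```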

Dependency Theory on Examining the Relationship between the United States and the Middle East: In the Case of Iran, Saudi Arabia, and Turkey

Dependency theory has been developed since the 1950s, originally with economic concerns. It divided the world into two parts: the peripheral states (third world countries) and the core states (the developed capitalist countries). A further perspective was added to the theory with the introduction of the idea of semi-peripheral states in the new world order. Using these divisions (core, peripheral, semi-peripheral), this study aims to develop a concept, from the perspective of dependency theory, for understanding the nature of the relationship of the U.S. with the Middle East region through its relations with Iran, Saudi Arabia, and Turkey. The countries examined (Saudi Arabia, Iran, and Turkey) are seeking a foothold and an influential role in the region. The paper argues that the U.S. directs its policies toward the region in a way that guarantees no country of the region reaches the semi-peripheral level (which could create competition with, or danger to, U.S. interests). Accordingly, U.S. policies in the region have ranged from declaring war to diplomatic channels and, at times, disregard. The paper is based on dependency theory, alongside other international relations theories used to study the Middle East in the international context.

Evaluating the Impact of Replacement Policies on the Cache Performance and Energy Consumption in Different Multicore Embedded Systems

The cache plays an important role in reducing the access delay between a processor and memory in high-performance embedded systems. In these systems, energy consumption is one of the most important concerns, and it will become even more important with smaller processor feature sizes and higher frequencies. Meanwhile, the cache dissipates a significant portion of a processor's energy compared to its other components. Several factors affect the energy consumption of the cache, such as the replacement policy and the degree of associativity, so selecting an appropriate cache configuration is a crucial part of designing a system. In this paper, we investigate the effect of different cache replacement policies on both cache performance and energy consumption. Furthermore, the impact of different Instruction Set Architectures (ISAs) on cache performance and energy consumption is investigated.
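
As an example of the policies typically compared in such studies, a least-recently-used (LRU) eviction scheme for a single fully associative cache set is sketched below; the 4-way capacity and access trace are illustrative, and the simulators, ISAs, and energy models used in the paper are not reproduced.

```python
from collections import OrderedDict

class LRUSet:
    def __init__(self, ways):
        self.ways = ways
        self.lines = OrderedDict()          # tag -> None, ordered oldest -> newest

    def access(self, tag):
        """Return True on a hit, False on a miss; evict the least recently used line if full."""
        if tag in self.lines:
            self.lines.move_to_end(tag)     # refresh recency on a hit
            return True
        if len(self.lines) >= self.ways:
            self.lines.popitem(last=False)  # evict the least recently used line
        self.lines[tag] = None
        return False

cache_set = LRUSet(ways=4)
trace = [1, 2, 3, 4, 1, 5, 2]
hits = sum(cache_set.access(t) for t in trace)
print(f"hit rate: {hits / len(trace):.2f}")
```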

Improved Rare Species Identification Using Focal Loss Based Deep Learning Models

The use of deep learning for species identification in camera trap images has revolutionised our ability to study, conserve, and monitor species in a highly efficient and unobtrusive manner, with state-of-the-art models achieving accuracies that surpass those of manual human classification. The high class imbalance of camera trap datasets, however, results in poor accuracies for minority (rare or endangered) species because of their relatively small contribution to the overall model accuracy. This paper investigates the use of Focal Loss, in comparison to the traditional Cross Entropy loss function, to improve the identification of minority species in the "255 Bird Species" dataset from Kaggle. The results show that, although Focal Loss slightly decreased the accuracy on the majority species, it increased the F1-score by 0.06 and improved the identification of the bottom two, five, and ten (minority) species by 37.5%, 15.7%, and 10.8%, respectively, while also improving the overall accuracy by 2.96%.
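
For reference, the focal loss takes the form FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t), reducing to (weighted) cross entropy when gamma = 0; a minimal sketch follows, with common default values of alpha and gamma rather than necessarily those used in the paper.

```python
import numpy as np

def focal_loss(probs, target, gamma=2.0, alpha=0.25, eps=1e-9):
    """probs: predicted class probabilities for one sample; target: true class index."""
    p_t = np.clip(probs[target], eps, 1.0)
    return -alpha * (1.0 - p_t) ** gamma * np.log(p_t)

probs = np.array([0.7, 0.2, 0.1])
# A well-classified sample (p_t = 0.7) is down-weighted far more than a hard one (p_t = 0.1):
print(focal_loss(probs, target=0), focal_loss(probs, target=2))
```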