A Bayesian Classification System for Facilitating an Institutional Risk Profile Definition

This paper presents an approach for the easy creation and classification of institutional risk profiles supporting endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the identification of the most important risk factors. Risk profiles then employ the risk factor classifier and its associated configurations to support digital preservation experts with a semi-automatic estimation of the endangerment group for file format risk profiles. Our goal is to make use of an expert knowledge base, acquired through a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation of risk factors along any required dimension of analysis. Using the naive Bayes method, the decision support system recommends to an expert the matching risk profile group for the previously selected institutional risk profile. The proposed methods improve the visibility of risk factor values and the quality of the digital preservation process. The presented approach is designed to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and the values of file format risk profiles. To facilitate decision making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert and to define its profile group. A sample risk profile calculation and the visualisation of selected risk factor dimensions are presented in the evaluation section.
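
As a minimal sketch of how such a recommendation step can look, the following Python fragment trains a naive Bayes classifier on institutional risk-factor vectors and predicts an endangerment group for a new profile. The feature dimensions, group labels and values are illustrative assumptions, not the system's actual configuration.

```python
# Minimal sketch (not the authors' system): classifying institutional
# risk-profile vectors into endangerment groups with naive Bayes.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Each row: one institution's risk-factor vector (hypothetical dimensions,
# e.g. format obsolescence, software support, community adoption).
X_train = np.array([
    [0.9, 0.8, 0.2],   # high-risk profile
    [0.8, 0.7, 0.3],
    [0.2, 0.1, 0.9],   # low-risk profile
    [0.3, 0.2, 0.8],
])
y_train = ["endangered", "endangered", "safe", "safe"]

clf = GaussianNB().fit(X_train, y_train)

# Recommend an endangerment group for a new institutional profile.
new_profile = np.array([[0.7, 0.6, 0.4]])
print(clf.predict(new_profile))          # e.g. ['endangered']
print(clf.predict_proba(new_profile))    # class posterior probabilities
```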

Adversarial Disentanglement Using Latent Classifier for Pose-Independent Representation

Large pose discrepancy is one of the critical challenges in face recognition for video surveillance. Because pose attributes are entangled with identity information, conventional approaches to pose-independent representation perform poorly when recognizing largely posed faces. In this paper, we propose a practical approach that disentangles the pose attribute from the identity information and then synthesizes a face using a classifier network in latent space. The proposed approach employs a modified generative adversarial network framework consisting of an encoder-decoder structure embedded with a classifier in manifold space, which factorizes the latent encoding. It can be further generalized to other face and non-face attributes for real-life video frames containing faces with significant attribute variations. Experimental results and comparison with the state of the art show that the learned representation of the proposed approach synthesizes more compelling perceptual images through a combination of adversarial and classification losses.
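
The PyTorch sketch below illustrates one plausible form of the loss combination described above: a reconstruction loss plus an adversarial term that pushes the latent classifier toward an uninformative pose posterior. The module shapes, the weighting factor and the training details (such as the alternating step that trains the classifier itself) are assumptions for illustration, not the authors' exact setup.

```python
import torch
import torch.nn as nn

latent_dim, n_poses = 64, 9
encoder = nn.Sequential(nn.Linear(128, latent_dim))       # placeholder nets
decoder = nn.Sequential(nn.Linear(latent_dim, 128))
pose_clf = nn.Sequential(nn.Linear(latent_dim, n_poses))  # classifier in latent space

x = torch.randn(8, 128)                  # a batch of face features
pose = torch.randint(0, n_poses, (8,))   # ground-truth pose labels

z = encoder(x)
recon_loss = nn.functional.mse_loss(decoder(z), x)

# Adversarial step on the encoder: drive the latent classifier toward a
# uniform posterior over poses so that z carries no pose information.
log_probs = nn.functional.log_softmax(pose_clf(z), dim=1)
uniform = torch.full_like(log_probs.exp(), 1.0 / n_poses)
disentangle_loss = nn.functional.kl_div(log_probs, uniform, reduction="batchmean")

total_loss = recon_loss + 0.1 * disentangle_loss   # weight is an assumption
total_loss.backward()

# Alternating step (optimizers omitted): the classifier itself is trained
# to predict pose from the detached latent code.
clf_loss = nn.functional.cross_entropy(pose_clf(z.detach()), pose)
```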

Comparative Effect of Self-Myofascial Release as a Warm-Up Exercise on Functional Fitness of Young Adults

Warm-up is an essential component for optimizing performance in various sports before a physical fitness training session. This study investigated the immediate comparative effect of self-myofascial release through vibration rolling (VR), non-vibration rolling (NVR), and static stretching as part of a warm-up treatment on the functional fitness of young adults. Functional fitness is a classification of training that prepares the body for real-life movements and activities. For the present study, 20 male physical education students were selected as subjects, with ages ranging from 20 to 25 years. The functional fitness variables under study were flexibility, muscle strength, agility, and static and dynamic balance of the lower extremity. Each of the three warm-up protocols was administered on consecutive days, with a 24 hr gap between sessions, and all tests were administered in the morning. The mean and SD were used as descriptive statistics. The significance of statistical differences among the groups was measured by applying the ‘F’-test, and to find the exact location of the differences, a post hoc test (Least Significant Difference) was applied. The study found that only flexibility showed a significant difference among the three types of warm-up exercise. The observed result indicated that VR had a greater impact on myofascial release in flexibility than NVR and stretching as part of a warm-up exercise, as the ‘p’ value was less than 0.05. Among the three warm-up exercises, the vibration roller showed a better mean difference than NVR and static stretching on the functional fitness of young physical education practitioners, although the results were non-significant in the case of muscle strength, agility, and static and dynamic balance of the lower extremity. These findings suggest that sports professionals and coaches may take VR into account when designing more efficient and effective pre-performance routines in the long term to improve exercise performance. VR has high potential to translate into a practical on-field application.
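
As a hedged sketch of the statistical pipeline described above, the following Python fragment runs a one-way ANOVA (the ‘F’-test) across the three warm-up conditions and a pairwise follow-up comparison. The flexibility scores are illustrative placeholders, not the study's data.

```python
from scipy import stats

# Illustrative flexibility scores (cm) for the three warm-up conditions.
vr  = [34.1, 35.2, 33.8, 36.0, 34.7]   # vibration rolling
nvr = [31.9, 32.4, 31.2, 33.0, 32.1]   # non-vibration rolling
ss  = [30.8, 31.5, 30.2, 31.9, 30.9]   # static stretching

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(vr, nvr, ss)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# If p < 0.05, pairwise comparisons (an LSD-style t-test) locate the
# difference; here, VR versus static stretching.
t_stat, p_pair = stats.ttest_ind(vr, ss)
print(f"VR vs stretching: t = {t_stat:.2f}, p = {p_pair:.4f}")
```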

Design of an Ensemble Learning Behavior Anomaly Detection Framework

Data asset protection is a crucial issue in the cybersecurity field. Companies use logical access control tools to vault their information assets and protect them against external threats, but they lack solutions to counter insider threats. Nowadays, insider threats are the most significant concern of security analysts: they are mainly individuals with legitimate access to companies' information systems who use their rights with malicious intent. In several fields, behavior anomaly detection is the method used by cyber specialists to counter the threat of malicious user activity effectively. In this paper, we present a step toward the construction of a user and entity behavior analysis framework by proposing a behavior anomaly detection model. This model combines machine learning classification techniques and graph-based methods, relying on linear algebra and parallel computing techniques. We show the utility of an ensemble learning approach in this context, and we present test results for several detection methods on a representative access control dataset. Some of the explored classifiers achieve up to 99% accuracy.
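
The sketch below illustrates the ensemble-learning idea on a stand-in for an access-control dataset; the features, labels and chosen base learners are assumptions, not the framework's actual configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for an access-control dataset: rows are user sessions,
# label 1 marks anomalous (potentially malicious) behavior.
X, y = make_classification(n_samples=2000, n_features=12,
                           weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Soft voting averages class probabilities across the base learners.
ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))
```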

Amelioration of Cardiac Arrhythmias Classification Performance Using Artificial Neural Network, Adaptive Neuro-Fuzzy and Fuzzy Inference Systems Classifiers

This paper aims at bringing a scientific contribution to cardiac arrhythmia biomedical diagnosis systems, more precisely to the study of improving cardiac arrhythmia classification performance using artificial neural network, adaptive neuro-fuzzy and fuzzy inference system classifiers. The purpose of this improvement is to enable cardiologists to make reliable diagnoses through automatic cardiac arrhythmia analyses and classifications based on high-confidence classifiers. In this study, six classes of the most commonly encountered arrhythmias are considered: the Right Bundle Branch Block, the Left Bundle Branch Block, the Ventricular Extrasystole, the Auricular Extrasystole, Atrial Fibrillation and the Normal Cardiac beat. From the parameters extracted from the electrocardiogram (ECG), we constructed a matrix (360x360) serving as an input data sample for the classifiers based on neural networks, and a matrix (1x6) for the classifier based on fuzzy logic. By varying three parameters (the quality of the neural network learning, the data size and the quality of the input parameters), the automatic classification yielded the following correct classification rates: 83.6% using the fuzzy logic based classifier, 99.7% using the neural network based classifier and 99.8% using the adaptive neuro-fuzzy based classifier. These results are based on signals containing at least 360 cardiac cycles. Based on the comparative analysis of the aforementioned three arrhythmia classifiers, the classifiers based on neural networks exhibit the better performance.
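
As a hedged sketch of the neural-network classification path, the fragment below trains a small multilayer perceptron on ECG-derived feature vectors for the six arrhythmia classes. The random matrix stands in for the paper's 360x360 input sample, and the labels are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
classes = ["RBBB", "LBBB", "VE", "AE", "AF", "Normal"]

# Placeholder for the 360x360 feature matrix: one row per cardiac-cycle sample.
X = rng.normal(size=(360, 360))
y = rng.choice(classes, size=360)     # placeholder arrhythmia labels

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:3]))             # predicted arrhythmia classes
```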

Predictive Analysis for Big Data: Extension of Classification and Regression Trees Algorithm

Since its inception, predictive analysis has revolutionized the IT industry through its robustness and decision-making facilities. It involves the application of a set of data processing techniques and algorithms in order to create predictive models. Its principle is based on finding relationships between explanatory variables and predicted variables: past occurrences are exploited to predict an unknown outcome. With the advent of big data, many studies have suggested the use of predictive analytics to process and analyze big data. Nevertheless, they have been curbed by the limits of classical predictive analysis methods when facing large amounts of data. In fact, because of its volume, its nature (semi-structured or unstructured) and its variety, big data cannot be analyzed efficiently via classical methods of predictive analysis. The authors attribute this weakness to the fact that predictive analysis algorithms do not allow the parallelization and distribution of computation. In this paper, we propose to extend the predictive analysis algorithm Classification And Regression Trees (CART) in order to adapt it for big data analysis. The major changes to this algorithm are presented, and a version of the extended algorithm is then defined in order to make it applicable to huge quantities of data.
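
The following fragment is a conceptual sketch of the parallelization idea, not the paper's extended algorithm: candidate CART splits (scored by Gini impurity) are evaluated in parallel and the best split is reduced globally. A real extension would run on a cluster; Python multiprocessing is only a local stand-in.

```python
import numpy as np
from multiprocessing import Pool

def gini(labels):
    # Gini impurity of a label vector.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split_on_feature(args):
    # Map step: scan one feature's thresholds, return its best split.
    X, y, feature = args
    best = (np.inf, None)
    for t in np.unique(X[:, feature]):
        left, right = y[X[:, feature] <= t], y[X[:, feature] > t]
        if len(left) == 0 or len(right) == 0:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        best = min(best, (score, t))
    return feature, best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(1000, 4)), rng.integers(0, 2, size=1000)
    with Pool() as pool:   # map: one worker per feature; reduce: global best
        results = pool.map(best_split_on_feature, [(X, y, f) for f in range(4)])
    print(min(results, key=lambda r: r[1][0]))
```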

Evaluation of Bone and Body Mineral Profile in Association with Protein Content, Fat, Fat-Free and Skeletal Muscle Tissues According to Obesity Classification among Adult Men

Obesity is associated with increased fat mass as well as increased fat percentage. Minerals are elements of vital importance. In this study, the relationships between the body and bone mineral profile and the percentage and mass values of fat, fat-free portion, protein and skeletal muscle were evaluated in adult men with normal body mass index (N-BMI) and in those classified at different stages of obesity. A total of 103 adult men, aged 19 to 79 years, were classified into five groups: N-BMI (Group 1), overweight (OW) (Group 2), first level of obesity (FLO) (Group 3), second level of obesity (SLO) (Group 4) and third level of obesity (TLO) (Group 5). Anthropometric measurements were performed and BMI values were calculated. Obesity degree, total body fat mass, fat percentage, basal metabolic rate (BMR), visceral adiposity, body mineral mass, body mineral percentage, bone mineral mass, bone mineral percentage, fat-free mass, fat-free percentage, protein mass, protein percentage, skeletal muscle mass and skeletal muscle percentage were determined with a TANITA body composition monitor using bioelectrical impedance analysis technology. The statistical package SPSS for Windows, Version 16.0, was used for statistical evaluations; p values below 0.05 were accepted as statistically significant. All the groups were matched for age (p > 0.05). BMI values were calculated as 22.6 ± 1.7 kg/m2, 27.1 ± 1.4 kg/m2, 32.0 ± 1.2 kg/m2, 37.2 ± 1.8 kg/m2, and 47.1 ± 6.1 kg/m2 for groups 1, 2, 3, 4, and 5, respectively. Visceral adiposity and BMR values followed the same increasing trend. Percentage values of mineral, protein, fat-free portion and skeletal muscle masses decreased from N-BMI to TLO. Upon evaluation of the percentages of protein, fat-free portion and skeletal muscle, statistically significant differences were noted between N-BMI and OW as well as between OW and FLO (p < 0.05); however, such differences were not observed for body and bone mineral percentages. The correlation between visceral adiposity and BMI was stronger than that between visceral adiposity and obesity degree. The correlation between visceral adiposity and BMR was significant at the 0.05 level. Visceral adiposity was not correlated with body mineral mass but was correlated with bone mineral mass, whereas significant negative correlations were observed with the percentages of these parameters (p < 0.001). BMR was not correlated with body mineral percentage, whereas a negative correlation was found between BMR and bone mineral percentage (p < 0.01). It is interesting to note that the mineral percentages of both body and bone are highly affected by visceral adiposity. Bone mineral percentage was also associated with BMR. From these findings, it is plausible to state that minerals are prominent parameters highly associated with the critical stages of obesity.
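
As an illustrative re-creation of one reported analysis, the fragment below computes a Pearson correlation between visceral adiposity and BMI. The values are synthetic placeholders used only to show the computation, not the study's measurements.

```python
from scipy import stats

# Synthetic placeholder values for eight subjects.
bmi      = [22.6, 27.1, 32.0, 37.2, 47.1, 24.3, 29.8, 35.1]   # kg/m2
visceral = [5, 9, 13, 17, 24, 6, 11, 15]                      # adiposity rating

r, p = stats.pearsonr(bmi, visceral)
print(f"r = {r:.2f}, p = {p:.4f}")   # significant if p < 0.05
```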

Optimizing the Probabilistic Neural Network Training Algorithm for Multi-Class Identification

In this work, a training algorithm for probabilistic neural networks (PNN) is presented. The algorithm addresses one of the major drawbacks of the PNN, namely the size of its hidden layer. By using a cross-validation training algorithm, the hidden layer is shrunk to a smaller number of neurons consisting of the most representative samples of the training set, without affecting the overall architecture of the network. The performance of the network is compared against that of the standard PNN on different databases from the UCI repository. Results show an important gain in network size and performance.
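
The sketch below shows a minimal PNN: the hidden layer stores training samples and the output is a kernel-density vote per class. The selection step shown (keeping samples closest to the class means) is a simplified stand-in for the paper's cross-validation pruning, and all data are synthetic.

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=0.5):
    # Gaussian kernel activation of each hidden (pattern) neuron.
    act = np.exp(-np.sum((X_train - x) ** 2, axis=1) / (2 * sigma ** 2))
    classes = np.unique(y_train)
    # Summation layer: average activation per class; decision: argmax.
    return classes[np.argmax([act[y_train == c].mean() for c in classes])]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Keep only the most representative samples (here: closest to class means),
# shrinking the hidden layer without changing the architecture.
keep = [i for c in (0, 1)
        for i in np.argsort(np.linalg.norm(X - X[y == c].mean(axis=0), axis=1))[:10]
        if y[i] == c]
print(pnn_predict(X[keep], y[keep], np.array([2.8, 3.1])))   # -> 1
```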

Classification Based on Deep Neural Cellular Automata Model

Deep learning is a branch of machine learning that has achieved great success in research and applications. Cellular neural networks are arrays of nonlinear analog processors, called cells, connected in a way that allows parallel computation. This paper discusses how to use a deep learning structure to represent a neural cellular automata model. The proposed learning technique for the cellular automata model is examined from the perspective of deep learning structure: a deep neural cellular automata system modifies each neuron based on the behavior of the individual cell and its decisions resulting from multi-level deep structure learning. The paper presents the architecture of the model, and the results of simulating the approach are given. The implementation results enrich the deep neural cellular automata system and shed light on the concept formulation of the model and the learning within it.
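
The following is a generic sketch of a neural cellular automaton step, not the authors' specific model: every cell updates its state from its 3x3 neighborhood through a small shared neural rule, and stacking such updates yields a deep structure.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=9)            # shared weights over the 3x3 neighborhood
grid = rng.random((16, 16))       # cell states in [0, 1]

def step(grid):
    # One parallel update: each cell applies the same sigmoid neuron
    # to its (wrapped) 3x3 neighborhood.
    new = np.empty_like(grid)
    padded = np.pad(grid, 1, mode="wrap")
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            patch = padded[i:i + 3, j:j + 3].ravel()
            new[i, j] = 1.0 / (1.0 + np.exp(-W @ patch))
    return new

for _ in range(3):                # three stacked updates ~ a deep structure
    grid = step(grid)
print(grid.mean())
```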

Foot Recognition Using Deep Learning for Knee Rehabilitation

Foot recognition can be applied in many medical fields, such as gait pattern analysis and the knee exercises of patients in rehabilitation. Typically, a camera-based foot recognition system captures patient images in a controlled room and background to recognize the foot from a limited set of views. However, such a system is inconvenient for monitoring knee exercises at home. To overcome these problems, this paper proposes a deep learning method using Convolutional Neural Networks (CNNs) for foot recognition. The results are compared with traditional classification methods using LBP and HOG features with kNN and SVM classifiers. According to the results, the deep learning method achieves better accuracy than the traditional classification methods in recognizing foot images from online databases, albeit at higher computational complexity.
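
Below is a small CNN sketch in the spirit of the method described above; the layer sizes, the 64x64 input resolution and the two-class (foot / non-foot) setup are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Two conv/pool stages followed by a linear classifier head.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),   # assumes 64x64 RGB inputs, 2 classes
)

images = torch.randn(4, 3, 64, 64)   # a batch of placeholder foot images
logits = model(images)
print(logits.argmax(dim=1))          # predicted class per image
```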

Masquerade and “What Comes Behind Six Is More Than Seven”: Thoughts on Art History and Visual Culture Research Methods

In the 21st century, the disciplinary boundaries of past centuries, which we often create through mainstream art historical classification, techniques and sources, may have been eroded by visual culture, which seems to provide a more inclusive umbrella for the new ways artists go about the creative process and its resultant commodities. Over the past four decades, artists in Africa have resorted to new materials, techniques and themes, which have affected the ways we research these artists and their art. Frontline artists such as El Anatsui, Yinka Shonibare and Erasmus Onyishi are demonstrating that any material is suitable for artistic expression. Most of the time, these materials come with their own techniques/effects and visual syntax: a combination of materials compounds techniques, formal aesthetic indexes, halo effects, and iconography. This tends to challenge the categories we lean on to view, think and talk about them, rendering our mainstream art historical research methods inadequate and suggesting new discursive concepts, terms and theories. This paper proposes an Africanist eclectic method derived from the dual framework of Masquerade Theory and What Comes Behind Six is More Than Seven. It shares thoughts and research on art historical methods, terminological re-alignments in classification and source data, presentational format and interpretation arising from the emergent trends in our subject. The outcome provides useful tools to mediate new thoughts and experiences in recent African art and visual culture.

An Exploratory Study Regarding the Effects of Auditor Switch, Auditee’s Industry, and Auditee’s Location on Audit Fees in Australia

This study examines the effects of auditor switch, auditee’s industry, and auditee’s location on audit fees in Australia. It uses fee data of Australian Securities Exchange 500 companies, considering all industry classifications throughout the country from 2006 until 2016. Main findings show that auditor switch does not affect audit fees. However, auditee’s industry affects audit fees. This effect occurs in information technology, financials, energy, and materials sectors among the top 500 companies. Financials, energy, and materials sectors face a fee rise, whereas information technology has a fee cut. The extent of fee changes is different among various industries, wherein the financial sector has the highest increase. Further, auditee’s location affects audit fees. Top 500 companies in Hobart, Perth, and Brisbane face a fee reduction, wherein the highest cut is in Hobart. Further analysis suggests that the Australian audit market is being increasingly concentrated in the hands of the Big Four audit firms.

Classification of Health Risk Factors to Predict the Risk of Falling in Older Adults

Cognitive decline and frailty are apparent in older adults, leading to an increased risk of falling. Currently, health care professionals have to make professional judgements about such risks, and hence difficult decisions regarding the future welfare of the ageing population. This study uses health data from The Irish Longitudinal Study on Ageing (TILDA), focusing on adults over the age of 50 years, to analyse health risk factors and predict the likelihood of falls. The prediction is based on machine learning algorithms that take health risk factors as inputs. Initial results show that health risk factors such as long-term health issues contribute to the number of falls. The identification of such health risk factors has the potential to inform health and social care professionals, older people and their family members, and thereby mitigate daily living risks.
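
As a hedged sketch of the prediction task, the fragment below fits a classifier that maps health risk factors to fall likelihood. The feature names mimic TILDA-style variables, but the data and the choice of logistic regression are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Columns: age, number of long-term health issues, grip strength (kg).
X = np.column_stack([rng.integers(50, 85, 500),
                     rng.integers(0, 6, 500),
                     rng.normal(30, 8, 500)])
# Synthetic outcome: more long-term issues -> more likely to fall (label 1).
y = (X[:, 1] + rng.normal(0, 1, 500) > 3).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba([[72, 4, 22.0]])[0, 1])   # estimated fall probability
```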

A Communication Signal Recognition Algorithm Based on Holder Coefficient Characteristics

Communication signal modulation recognition is one of the key technologies in the field of modern information warfare. At present, automatic modulation recognition methods for communication signals fall into two major categories: maximum likelihood hypothesis testing methods based on decision theory, and statistical pattern recognition methods based on feature extraction. The statistical pattern recognition approach, which comprises feature extraction and classifier design, is now the most commonly used. As the electromagnetic environment of communications grows increasingly complex, effectively extracting the features of various signals at low signal-to-noise ratio (SNR) has become a hot topic for scholars in various countries. To solve this problem, this paper proposes a feature extraction algorithm for communication signals based on an improved Holder cloud feature, and, to meet the real-time demands of modern warfare, uses an extreme learning machine (ELM) to classify the extracted features. The algorithm extracts the digital features of the improved cloud model without deterministic information in a low-SNR environment and uses the improved cloud model to obtain more stable Holder cloud features, thereby improving performance. It addresses the difficulty that simple feature extraction based on the Holder coefficient has in recognizing signals at low SNR, and it achieves better recognition accuracy. Simulation results show that the approach retains a good classification result at low SNR: even at an SNR of -15 dB, the recognition accuracy still reaches 76%.
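
The compact sketch below matches the classifier stage described above: in an ELM, random hidden weights are fixed and only the output weights are solved by least squares. The input features are random placeholders standing in for the Holder cloud features, and the layer sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))      # placeholder for extracted signal features
y = rng.integers(0, 3, size=200)   # 3 modulation classes (placeholder labels)
T = np.eye(3)[y]                   # one-hot targets

W, b = rng.normal(size=(4, 50)), rng.normal(size=50)
H = np.tanh(X @ W + b)             # random, untrained hidden layer
beta = np.linalg.pinv(H) @ T       # output weights: one least-squares solve

pred = np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
print("training accuracy:", (pred == y).mean())
```

The single pseudoinverse solve in place of iterative training is what gives the ELM its real-time appeal.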

Measuring Text-Based Semantics Relatedness Using WordNet

Measuring semantic similarity between texts means calculating their semantic relatedness using various techniques. Our web application, Measuring Relatedness of Concepts (MRC), allows the user to input two text corpora and obtain a semantic similarity percentage between them using WordNet. The application goes through five stages to compute semantic relatedness: Preprocessing (extracts keywords from the content), Feature Extraction (classifies words into parts of speech), Synonym Extraction (retrieves synonyms for each keyword), Similarity Measurement (measures similarity using keywords and synonyms) and Visualization (graphical representation of the similarity measure). The user can thus also measure similarity on the basis of features. The end result is a percentage score and the word(s) that form the basis of the similarity between both texts, with different tools available on the same platform. In future work we plan a Web-as-a-live-corpus application providing a simpler, more user-friendly tool to compare documents and extract useful information.
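
A minimal sketch of the WordNet-based similarity step is shown below, assuming NLTK with the wordnet corpus downloaded (nltk.download('wordnet')). The max-over-senses strategy is one common choice, not necessarily the application's exact scoring.

```python
from nltk.corpus import wordnet as wn

def max_path_similarity(word1, word2):
    # Compare every sense pair of the two words and keep the best score.
    best = 0.0
    for s1 in wn.synsets(word1):
        for s2 in wn.synsets(word2):
            sim = s1.path_similarity(s2)
            if sim is not None and sim > best:
                best = sim
    return best

print(max_path_similarity("car", "automobile"))   # synonyms -> 1.0
print(max_path_similarity("car", "banana"))       # unrelated -> low score
```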

Use of Hierarchical Temporal Memory Algorithm in Heart Attack Detection

In order to reduce the number of deaths due to heart problems, we propose the use of the Hierarchical Temporal Memory (HTM) algorithm, a real-time anomaly detection algorithm. HTM is a cortical learning algorithm modeled on the neocortex; in other words, it is based on a conceptual theory of how the human brain may work. It is powerful at predicting unusual patterns, anomaly detection and classification. In this paper, HTM has been implemented and tested on ECG datasets in order to detect cardiac anomalies. Experiments showed good performance in terms of specificity, sensitivity and execution time.
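
HTM itself is Numenta's cortical learning algorithm and is not reproduced here; as a plainly simplified stand-in, the sketch below shows the same real-time idea on an ECG-like stream: score each sample by its deviation from a rolling prediction and flag anomalies. The signal, window size and threshold are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
ecg = np.sin(np.linspace(0, 40 * np.pi, 2000))   # synthetic regular beats
ecg[1500:1520] += 1.5                            # injected cardiac anomaly

window = 100                                     # one beat period, roughly
for t in range(window, len(ecg)):
    predicted = ecg[t - window:t].mean()         # naive rolling prediction
    score = abs(ecg[t] - predicted)              # streaming anomaly score
    if score > 1.2:                              # anomaly threshold
        print(f"anomaly at sample {t}, score {score:.2f}")
        break
```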

Products in Early Development Phases: Ecological Classification and Evaluation Using an Interval Arithmetic Based Calculation Approach

As a pillar of sustainable development, ecology has become an important milestone for the research community, especially in light of global challenges such as climate change. The ecological performance of products can be assessed scientifically with life cycle assessments. In the construction sector, significant amounts of CO2 emissions are attributed to the energy used for building heating. Sustainable construction materials for insulation are therefore essential, and aerogels have been explored intensively in recent years due to their low thermal conductivity. The WALL-ACE project accordingly aims to develop an aerogel-based thermal insulating plaster with very low thermal conductivity. But since much information is still missing or not yet accessible in the early development phases, the ecological assessment of innovative products is increasingly based on uncertain data, which can lead to significant deviations in the results. To predict realistically how meaningful the results are, and how viable the developed products may be in their respective markets, these deviations have to be considered. Therefore, a classification method is presented in this study that allows the ecological performance of novel products to be compared with already established, competitive materials. To achieve this, an alternative calculation method was used that computes with lower and upper bounds, covering all possible values even without precise data: the life cycle analysis of the considered products was conducted with an interval arithmetic based calculation method. The results lead to the conclusion that the interval solutions describing the possible environmental impacts are so wide that their usability is limited. Nevertheless, further optimization in reducing the environmental impacts of aerogels seems to be needed for them to become more competitive in the future.
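
The fragment below sketches the interval-arithmetic idea: every uncertain inventory value is carried as a [lower, upper] bound, and impacts are combined so the true result is guaranteed to lie inside the resulting interval. The quantities and emission factors are illustrative, not the project's data.

```python
def i_add(a, b):
    # Interval addition: add lower bounds and upper bounds.
    return (a[0] + b[0], a[1] + b[1])

def i_mul(a, b):
    # Interval multiplication: the result spans the extreme cross-products.
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

# Hypothetical aerogel plaster, per m2 of wall: uncertain material amount
# (kg) times an uncertain CO2 emission factor (kg CO2-eq/kg), plus transport.
material  = i_mul((8.0, 12.0), (1.5, 2.3))
transport = (0.4, 0.9)
total = i_add(material, transport)
print(f"GWP in [{total[0]:.1f}, {total[1]:.1f}] kg CO2-eq per m2")
```

The width of the printed interval is exactly the effect discussed above: with imprecise early-stage data, the guaranteed bounds can become too wide to be practically useful.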

An Overview of the Porosity Classification in Carbonate Reservoirs and Their Challenges: An Example of Macro-Microporosity Classification from Offshore Miocene Carbonate in Central Luconia, Malaysia

Biological and chemical activity in carbonates is responsible for the complexity of their pore systems. Primary porosity is generally of natural (depositional) origin, while secondary porosity results from chemical reactivity through diagenetic processes. Understanding the carbonate pore system is an integral part of hydrocarbon exploration. However, current porosity classification schemes are of limited use in adequately predicting the petrophysical properties of reservoirs of various origins and depositional environments. Rock classification provides a descriptive method for explaining lithofacies but makes no significant contribution to the application of porosity-permeability (poro-perm) correlation. The Central Luconia carbonate system (Malaysia) represents a good example of pore complexity, in terms of both nature and origin, mainly related to the diagenetic processes that have altered the original reservoir. For quantitative analysis, 32 high-resolution images of each thin section were taken using transmitted light microscopy. The quantification of grains, matrix, cement and macroporosity (pore types) was achieved through petrographic analysis of thin sections and FESEM images. The point counting technique was used to estimate the amount of macroporosity from thin sections, which was then subtracted from the total porosity to derive the microporosity. The quantitative observation of thin sections revealed that mouldic porosity (macroporosity) is the dominant pore type, while microporosity accounts for 40 to 50% of the total porosity. These Miocene carbonates thus contain a significant amount of microporosity, which considerably complicates the estimation and production of hydrocarbons; neglecting its impact can increase uncertainty in estimating hydrocarbon reserves. Due to the diversity of geological parameters, applying existing porosity classifications does not allow a better understanding of the poro-perm relationship. The classification can, however, be improved by including pore types and pore structures, divided into macro- and microporosity. Such studies of microporosity identification and classification now represent a major concern in limestone reservoirs around the world.
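
The microporosity derivation described above reduces to a short calculation, shown below with illustrative values (not the study's measurements): point counting gives the macroporosity fraction, which is subtracted from the total porosity.

```python
total_porosity = 24.0   # % total porosity, e.g. from core-plug measurement
points_pore    = 66     # pore hits in a 500-point count of a thin section
points_total   = 500

macroporosity = 100.0 * points_pore / points_total   # visible (e.g. mouldic) pores
microporosity = total_porosity - macroporosity       # below optical resolution
print(f"macro = {macroporosity:.1f}%, micro = {microporosity:.1f}%")
print(f"micro share of total = {100 * microporosity / total_porosity:.0f}%")
```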

Implementation-Oriented Discussion for Historical and Cultural Villages’ Conservation Planning

Since the State Council of China issued the Regulations on the Conservation of Historical Cultural Towns and Villages in 2008, conservation planning has been formulated for national, provincial and municipal historical and cultural villages, providing a legal basis for the inheritance of historical culture and the protection of historical resources. Although the quantity and content of conservation plans are continually increasing, their implementation and application remain ambiguous. To solve this problem, this paper explores methods to enhance the implementation of conservation planning from the perspective of planning formulation. Specifically, a technical framework of "overall objectives planning - sub-objectives planning - zoning guidelines - implementation by stages" is proposed to implement the planning objectives across different classifications and stages. Drawing on the Qiqiao historical and cultural village conservation planning project in Ningbo, five sub-objectives are set and implemented through village zoning guidelines. At the same time, the key points and specific projects of the near-term, medium-term and long-term work are clarified, and the spatial planning is transformed into an action plan with a time scale. The proposed framework and method provide a reference for the implementation and management of conservation planning for historical and cultural villages in the future.