Abstract: As biometric systems become widely deployed, identification systems are increasingly exposed to attacks using various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), focusing on the choice of loss function and optimizer. The CNNs used in this paper are AlexNet, VGGNet, and ResNet. By combining various loss functions (Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss) with various optimizers (Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam), we obtained significant performance changes. Choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach by comparing generalization power. Importantly, the same subset is used across all training and testing for each model, so that generalization performance on unseen data can be compared across all models. The best CNN (AlexNet) with the appropriate loss function and optimizer achieves a performance gain of more than 3% over the other CNN models with their default loss function and optimizer. In addition to the highest generalization performance, this paper also reports each model's accuracy alongside its parameter count and mean average error rate, in order to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning, have been applied to the final model.
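Since the core experiment is a sweep over loss functions and optimizers on fixed CNN backbones, a minimal Keras-style sketch of such a sweep is given below. The tiny stand-in network, the random data, and the training budget are placeholders rather than the paper's actual LivDet setup, and Center Loss is omitted because it requires a custom layer:

```python
# Hypothetical loss/optimizer sweep; data and backbone are stand-ins.
import numpy as np
from tensorflow import keras

losses = ["categorical_crossentropy", "cosine_similarity", "categorical_hinge"]
optimizers = ["adam", "sgd", "rmsprop", "adadelta", "adagrad", "nadam"]

# Random placeholder data: 64x64 grayscale patches, live vs. spoof.
x = np.random.rand(64, 64, 64, 1).astype("float32")
y = keras.utils.to_categorical(np.random.randint(0, 2, 64), 2)

def build_cnn():
    # Tiny stand-in; the paper evaluates AlexNet, VGGNet, and ResNet.
    return keras.Sequential([
        keras.Input(shape=(64, 64, 1)),
        keras.layers.Conv2D(8, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(2, activation="softmax"),  # live vs. spoof
    ])

results = {}
for loss in losses:
    for opt in optimizers:
        model = build_cnn()
        model.compile(optimizer=opt, loss=loss, metrics=["accuracy"])
        hist = model.fit(x, y, epochs=2, validation_split=0.25, verbose=0)
        results[(loss, opt)] = max(hist.history["val_accuracy"])

best = max(results, key=results.get)
print("best (loss, optimizer):", best, results[best])
```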
Abstract: The aim of this paper is to compare and discuss classifier algorithm options for credit risk assessment by applying different Machine Learning techniques. Using records from a Brazilian financial institution, this study uses a database of 5,432 companies that are clients of the bank, where 2,600 clients are classified as non-defaulters, 1,551 as defaulters, and 1,281 as temporarily defaulters, meaning that these clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes were considered for a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR), and Support Vector Machines (SVM). For each method, different parameters were analyzed in order to compare the best result of each technique. Initially, the data were coded in thermometer code (numerical attributes) or dummy coding (nominal attributes). The methods were then evaluated for each parameter setting, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives, and true negatives. This comparison showed that the best method, in terms of accuracy, was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters, and 75.37% for temporarily defaulters). However, the best accuracy does not always indicate the best technique. For instance, on the classification of temporarily defaulters, ANN-RBF was surpassed in terms of false positives by SVM, which had the lowest false positive rate (0.07%). All these details are discussed in light of the results found, and an overview of the findings is presented in the conclusion of this study.
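A minimal sketch of the one-against-all setup described above, using scikit-learn with logistic regression as the base classifier; the random features stand in for the 15 coded attributes and the labels for the three client classes:

```python
# Illustrative one-against-all classification; data are placeholders,
# not the bank's records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.random((5432, 15))        # 15 coded attributes per client
y = rng.integers(0, 3, 5432)      # 0=non-default, 1=default, 2=temporary

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
pred = clf.predict(X)
print(confusion_matrix(y, pred))  # yields TP/FP/TN/FN counts per class
```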
Abstract: The Italian Central Guarantee Fund (CGF) has the purpose of facilitating Small and Medium-sized Enterprises' (SMEs') access to credit. The aim of this paper is to study the evaluation method adopted by the CGF with regard to SMEs requiring its intervention, which is all the more important in light of the recent CGF reform. We analyse an initial sample of more than 500,000 guarantees from 2012 to 2018. We distinguish between a counter-guarantee delivered to a mutual guarantee institution and a guarantee delivered directly to a bank. We investigate the impact of variables related to the operations and to the SMEs on the Altman Z''-score and on the score consistent with the CGF methodology. We verify that the type of intervention affects the scores and that the initial condition changes under the new assessment criteria.
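For reference, the Altman Z''-score mentioned above has a standard four-ratio form; the sketch below implements it with illustrative inputs (not figures from the sample):

```python
# Altman Z''-score in its published four-ratio form; inputs are invented.
def altman_z2(working_capital, retained_earnings, ebit,
              book_equity, total_assets, total_liabilities):
    x1 = working_capital / total_assets       # liquidity
    x2 = retained_earnings / total_assets     # cumulative profitability
    x3 = ebit / total_assets                  # operating profitability
    x4 = book_equity / total_liabilities      # leverage
    return 6.56 * x1 + 3.26 * x2 + 6.72 * x3 + 1.05 * x4

# Example: a hypothetical SME balance sheet.
print(altman_z2(120, 300, 90, 400, 1000, 600))  # ~3.07, above the 2.6 safe zone
```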
Abstract: Tuberculosis (TB) is a major disease that affects daily activities and impairs health-related quality of life (HRQoL). The impact of TB on HRQoL can affect treatment outcome and may lead to treatment default. Therefore, this study aims to evaluate the HRQoL of TB patients lost to follow-up, during and after treatment, in Yemen. To this end, this prospective study enrolled a total of 399 TB patients lost to follow-up between January 2011 and December 2015. Applying the HRQoL criteria, only 136 filled out the survey during treatment; moreover, 96 were traced and filled out the HRQoL survey afterwards. All eight HRQoL domains were categorized into the physical component score (PCS) and mental component score (MCS), which were calculated using QM scoring software. Results show that all TB patients lost to follow-up reported a score of less than 47 for all eight domains, except general health (67.3), during their treatment period. Low scores of 27.9 and 29.8 were reported for role limitation due to emotional problems (RE) and mental health (MH), respectively. Moreover, the MCS was found to be only 28.9. The traced lost-to-follow-up patients showed a significant improvement in all eight domains and an MCS of 43.1. The low scores of 27.9 and 29.8 for RE and MH, together with the MCS of 28.9, indicate a severe emotional condition and reflect higher depression during the treatment period, which can result in loss to follow-up. Low MH, RE, and MCS scores can therefore be used as a clue for predicting future loss to follow-up in TB treatment.
Abstract: Poor air quality is one of the main environmental causes of premature deaths worldwide, mainly in cities, where the majority of the population lives. It is a consequence of successive land cover (LC) and land use changes resulting from the intensification of human activities. Knowing these landscape modifications in a comprehensive spatiotemporal dimension is, therefore, essential for understanding variations in air pollutant concentrations. In this sense, air quality models are very useful to simulate the physical and chemical processes that affect the dispersion and reaction of chemical species in the atmosphere. However, the modelling performance should always be evaluated, since the resolution of the input datasets largely dictates the reliability of the air quality outcomes. Among these data, updated LC is an important parameter to be considered in atmospheric models, since it takes into account the Earth's surface changes due to natural and anthropic actions and regulates the exchange of fluxes (emissions, heat, moisture, etc.) between the soil and the air. This work aims to evaluate the performance of the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) when different LC classifications are used as input. The influence of two LC classifications was tested: i) the 24-class USGS (United States Geological Survey) LC database included by default in the model, and ii) the CLC (Corine Land Cover) and specific high-resolution LC data for Portugal, reclassified according to the new USGS nomenclature (33 classes). Two distinct WRF-Chem simulations were carried out to assess the influence of LC on air quality over Europe and Portugal, as a case study, for the year 2015, using the nesting technique over three simulation domains (25 km, 5 km and 1 km horizontal resolution). Within the 33-class LC approach, particular emphasis was given to Portugal, given the detail and higher LC spatial resolution (100 m x 100 m) compared with the CLC data (5000 m x 5000 m). As regards air quality, only the LC impacts on tropospheric ozone concentrations were evaluated, because ozone pollution episodes typically occur in Portugal, in particular during spring/summer, and there are few research works relating this pollutant to LC changes. The WRF-Chem results were validated by season and station typology using background measurements from the Portuguese air quality monitoring network. As expected, a better model performance was achieved at rural stations: moderate correlation (0.4–0.7), BIAS (10–21 µg.m-3) and RMSE (20–30 µg.m-3), where higher average ozone concentrations were also estimated. Comparing both simulations, small differences grounded in the Leaf Area Index and air temperature values were found, although the high-resolution LC approach shows a slight enhancement in the model evaluation. This highlights the role of LC in the exchange of atmospheric fluxes and stresses the need to consider a high-resolution LC characterization, combined with other detailed model inputs such as the emission inventory, to improve air quality assessment.
Abstract: Numerical weather prediction (NWP) models are considered powerful tools for guiding quantitative rainfall prediction. Several NWP models exist and are used at many operational weather prediction centers. This study considers two models, namely the Consortium for Small-scale Modeling (COSMO) model and the Weather Research and Forecasting (WRF) model, and compares their ability to predict rainfall over Uganda for the period 21st April 2013 to 10th May 2013 using the root mean square error (RMSE) and the mean error (ME). In comparing the performance of the models, this study assesses their ability to predict both light rainfall events and extreme rainfall events. All the experiments used the default parameterization configurations and the same horizontal resolution (7 km). The results show that the COSMO model had a tendency to largely predict no rain, which explains its under-prediction. The COSMO model (RMSE: 14.16; ME: -5.91) presented a significantly (p = 0.014) higher magnitude of error compared to the WRF model (RMSE: 11.86; ME: -1.09). However, the COSMO model (RMSE: 3.85; ME: 1.39) performed significantly (p = 0.003) better than the WRF model (RMSE: 8.14; ME: 5.30) in simulating light rainfall events. Both models under-predicted extreme rainfall events, with the COSMO model (RMSE: 43.63; ME: -39.58) presenting significantly higher error magnitudes than the WRF model (RMSE: 35.14; ME: -26.95). This study recommends additional diagnosis of the models' treatment of deep convection over the tropics.
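For reference, the two verification scores used above can be computed as follows (the paired forecast/observation arrays are illustrative):

```python
# RMSE and mean error over paired forecast/observation series.
import numpy as np

def rmse(forecast, observed):
    return float(np.sqrt(np.mean((forecast - observed) ** 2)))

def mean_error(forecast, observed):
    # Negative values indicate under-prediction, as reported for both models.
    return float(np.mean(forecast - observed))

fcst = np.array([2.0, 0.0, 15.0, 1.0])   # made-up daily rainfall forecasts [mm]
obs  = np.array([5.0, 3.0, 40.0, 0.0])   # made-up observations [mm]
print(rmse(fcst, obs), mean_error(fcst, obs))
```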
Abstract: “Humour studies” is a relatively recent interdisciplinary research area. It interests researchers from psychology, sociology, medicine, nursing, workplace studies, and gender studies, among others, and certainly from teaching, language learning, linguistics, and literature. Linguistic theories of humour research are numerous, and some of them are of interest to the present study. Although humour courses are now taught in universities around the world, they are not included in the Egyptian context. The purpose of the present study is two-fold: to review the state of the art and to show how linguistic theories of humour can be used as an art and craft of teaching and learning in EFL literature classes. In the present study, linguistic theories of humour were applied to selected literary texts to interpret humour as an intrinsic artistic communicative competence challenge. In linguistics, humour has been seen as a fifth component of the communicative competence of the second language learner; in literature, it has been studied as satire, irony, wit, or comedy. Linguistic theories of humour now describe its linguistic structure, mechanism, function, and linguistic deviance. The Semantic Script Theory of Verbal Humor (SSTH), the General Theory of Verbal Humor (GTVH), the Audience Based Theory of Humor (ABTH), their extensions and subcategories, as well as the pragmatic perspective, were employed in the analyses. This research analysed the linguistic semantic structure of humour, its mechanism, and how the audience reader (teacher or learner) becomes an interactive interpreter of the humour. This promotes humour competence together with linguistic, social, cultural, and discourse communicative competence. Studying humour as part of literary texts, and perceiving its function in the work, also brings its positive associations into the classroom for educational purposes. Humour is by default a provoking, laughter-generating device. Recognizing incongruity, perceiving it, and resolving it is a cognitive mastery. This cognitive process involves a humour experience that lightens up the classroom and the mind, and it establishes connections necessary for the learning process. In this context, the study examined selected narratives to exemplify the application of the theories. It is therefore recommended that the theories be taught and applied to literary texts for a better understanding of the language; students will then develop their language competence. Teachers in EFL/ESL classes will teach the theories, assist students in applying them to interpret texts, and in the process will also use humour, thus easing students' acquisition of the second language and making the classroom an enjoyable, cheerful, self-assuring, and self-illuminating experience for both themselves and their students. It is further recommended that courses in humour research become an integral part of higher education curricula in Egypt.
Abstract: A waste-to-energy plasma system was designed by Necsa for commercial use to create electricity from unsorted municipal waste. Fly ash particles must be removed from the syngas stream at operating temperatures of 1000 °C and recycled back into the reactor for complete combustion. A 2D2D high-efficiency cyclone separator was chosen for this purpose. During this study, two cyclone design methods were explored: the Classic Empirical Method (smaller cyclone) and the Flow Characteristics Method (larger cyclone). These designs were optimized with regard to efficiency, so as to remove at least 90% of fly ash particles of average size 10 μm by 50 μm. Wood was used as feed source at a concentration of 20 g/m3 of syngas. The two designs were then compared at room temperature, using Perspex test units and three feed gases of different densities, namely nitrogen, helium and air. System conditions were imitated by adapting the gas feed velocity and particle load for each gas respectively. Helium, the least dense of the three gases, simulates higher temperatures, whereas air, the densest gas, simulates a lower temperature. The average cyclone efficiencies ranged between 94.96% and 98.37%, reaching up to 99.89% in individual runs; the lowest efficiency attained was 94.00%. Furthermore, the design of the smaller cyclone proved to be more robust, while the larger cyclone demonstrated a stronger correlation between its separation efficiency and the feed temperature. The larger cyclone can be assumed to achieve slightly higher efficiencies at elevated temperatures. However, both design methods led to good designs. At room temperature, the difference in efficiency between the two cyclones was almost negligible; at higher temperatures, however, these general tendencies are expected to be amplified, so that the difference between the two design methods will become more obvious. Though the design specifications were met by both designs, the smaller cyclone is recommended as the default particle separator for the plasma system due to its robust nature.
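As an illustration of the empirical-design approach, the sketch below uses Lapple's classical cut-point model, a common textbook relation; whether this matches the exact "Classic Empirical Method" of the study is an assumption, and all values are illustrative rather than the Necsa design parameters:

```python
# Lapple cut-point diameter and fractional collection efficiency;
# all inputs are invented placeholders.
import math

def cut_diameter(mu_g, inlet_width, n_turns, v_inlet, rho_p, rho_g):
    # Particle diameter collected with 50% efficiency [m].
    return math.sqrt(9 * mu_g * inlet_width /
                     (2 * math.pi * n_turns * v_inlet * (rho_p - rho_g)))

def collection_efficiency(d_particle, d_cut):
    # Lapple's fractional efficiency curve.
    return 1.0 / (1.0 + (d_cut / d_particle) ** 2)

d50 = cut_diameter(mu_g=1.8e-5, inlet_width=0.05, n_turns=6,
                   v_inlet=15.0, rho_p=2500.0, rho_g=1.2)
print(collection_efficiency(10e-6, d50))   # efficiency for a 10 µm particle
```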
Abstract: Probabilistic risk analysis models are used to provide a better understanding of the reliability and structural failure of engineering works, including when assessing the stability of large structures exposed to a major risk in the event of an accident or breakdown. This work studies the probability of failure of concrete dams through the application of reliability analysis methods used in engineering. In our case, level 2 methods are applied via a limit-state study, and the probability of failure is estimated by analytical methods of the first-order reliability method (FORM) and second-order reliability method (SORM) type. By way of comparison, a level 3 method was also used, which provides a full analysis of the problem and involves integrating the probability density function of the random variables over the safety domain, using the Monte Carlo simulation method. Taking into account the change in stress under the load combinations acting on the dam (normal, exceptional and extreme), the calculations provided acceptable failure probability values that largely corroborate the theory: the probability of failure tends to increase with increasing load intensity, causing a significant decrease in strength; shear forces then induce a displacement that threatens the reliability of the structure through intolerable values of the probability of failure, especially when uplift increases under a hypothetical failure of the drainage system.
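A minimal sketch of the level 3 (Monte Carlo) step described above: sample the random variables, evaluate a limit-state function g = R - S, and count failures (g < 0). The distributions below are illustrative placeholders, not the dam's actual variables:

```python
# Monte Carlo estimate of the probability of failure for a limit state
# g = resistance - load; distributions are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
resistance = rng.normal(loc=12.0, scale=1.5, size=n)   # shear strength term
load = rng.normal(loc=7.0, scale=2.0, size=n)          # combined load effect

g = resistance - load                                  # limit-state function
pf = np.mean(g < 0.0)                                  # P(failure) estimate
print(f"Estimated probability of failure: {pf:.2e}")
```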
Abstract: Multi-point forming (MPF) and asymmetric incremental sheet forming (ISF) are two flexible processes for sheet metal manufacturing. To take advantage of both techniques, a hybrid process has been developed: Multipoint Incremental Forming (MPIF). This process combines the advantages of each of the aforementioned forming techniques, which makes it a very interesting and particularly efficient process for single, small, and medium series production. In this paper, an experimental and numerical investigation of this technique is presented. To highlight the flexibility of this process and its capacity to manufacture standard and complex shapes, several parts were produced using MPIF. The forming experiments were performed on a 3-axis CNC machine. Moreover, a numerical model of the MPIF process was implemented in ABAQUS, and the analysis showed good agreement with experimental results in terms of deformed shape. Furthermore, the use of an elastomeric interpolator avoids classical local defects such as dimples, which are generally caused by the asymmetric contact, and also improves the distribution of residual strain. Future works will apply this approach to other alloys used in aeronautic or automotive applications.
Abstract: Databases comprise the foundation of most software systems, and system developers inevitably write code to query these databases. The de facto language for querying is SQL and this, consequently, is the default language taught by higher education institutions. There is evidence that learners find it hard to master SQL, harder than mastering other programming languages such as Java. Educators do not agree on explanations for this seeming anomaly, and further investigation may well reveal the reasons. In this paper, we report on our investigations into how novices learn SQL, the actual problems they experience when writing SQL, and the differences between expert and novice SQL query writers. We conclude by presenting a model of SQL learning that should better inform the design of instructional material to support the SQL learning process.
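As an aside, the sketch below shows the kind of query construct novices often find hard (grouping combined with a HAVING filter), run against an in-memory SQLite database; the schema and data are invented for the example:

```python
# Example SQL query of the kind novices reportedly struggle with,
# run via Python's sqlite3 module; table and values are invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (customer TEXT, amount REAL);
    INSERT INTO orders VALUES ('a', 10), ('a', 25), ('b', 5);
""")
rows = con.execute("""
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    HAVING SUM(amount) > 20   -- novices often misuse WHERE here
""").fetchall()
print(rows)                   # [('a', 35.0)]
```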
Abstract: Intrusion Detection Systems (IDSs) are an essential tool for network security infrastructure. However, IDSs have a serious problem: they generate a massive number of alerts, most of which are false positives that can hide true alerts and confuse the analyst trying to identify the right alerts and report the true attacks. The purpose of this paper is to present a formal model for a correlation engine that reduces false positive alerts based on vulnerability contextual information. To that end, we propose a formal model based on the non-monotonic JClassicδє description logic, augmented with a default (δ) and an exception (є) operator, which allows dynamic inference according to contextual information.
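The sketch below is a toy rendering of the default/exception idea, not of JClassicδє itself: an alert is treated as a false positive by default (δ) unless contextual vulnerability information provides an exception (є). All names and data are invented:

```python
# Toy default-with-exception alert filtering; not the paper's formalism.
def classify_alert(alert, vulnerable_services):
    # Default rule (delta): IDS alerts are presumed to be false positives.
    verdict = "false positive"
    # Exception (epsilon): the targeted service is actually vulnerable.
    if (alert["target"], alert["service"]) in vulnerable_services:
        verdict = "true alert"
    return verdict

vulns = {("10.0.0.5", "openssh-7.2")}
print(classify_alert({"target": "10.0.0.5", "service": "openssh-7.2"}, vulns))
print(classify_alert({"target": "10.0.0.9", "service": "nginx-1.18"}, vulns))
```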
Abstract: Municipal Solid Waste (MSW) disposed of in landfill sites decomposes under anaerobic conditions and produces gases which mainly contain carbon dioxide (CO2) and methane (CH4). Methane has a global warming potential 25 times that of CO2 and can potentially affect human life and the environment. Thus, this research aims to determine MSW generation and the annual CH4 emissions from the generated waste in Oman over the years 1971-2030. The estimation of total waste generation was performed using existing models, while the CH4 emissions were estimated using the Intergovernmental Panel on Climate Change (IPCC) default method. It is found that total MSW generation in Oman might reach 3,089 Gg in the year 2030, which would produce approximately 85 Gg of CH4 emissions in that year.
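The IPCC default (mass-balance) method named above takes the following form; the parameter values below are illustrative placeholders, not the study's, though they yield an estimate of the same order as the reported 85 Gg:

```python
# IPCC default method: CH4 = (MSW_T * MSW_F * MCF * DOC * DOC_F * F * 16/12
#                             - R) * (1 - OX). Parameter values are
# placeholders; the actual defaults vary by region and study.
def ch4_emissions_gg(msw_total,         # total MSW generated [Gg/yr]
                     msw_fraction=0.6,  # fraction of MSW sent to landfill
                     mcf=0.6,           # methane correction factor
                     doc=0.15,          # degradable organic carbon fraction
                     doc_f=0.77,        # fraction of DOC dissimilated
                     f=0.5,             # fraction of CH4 in landfill gas
                     recovered=0.0,     # CH4 recovered [Gg/yr]
                     ox=0.0):           # oxidation factor
    generated = (msw_total * msw_fraction * mcf * doc * doc_f * f
                 * (16.0 / 12.0))       # carbon-to-CH4 mass conversion
    return (generated - recovered) * (1.0 - ox)

# With these placeholder values, the 2030 waste figure gives ~86 Gg,
# the same order as reported above.
print(ch4_emissions_gg(3089))
```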
Abstract: This study investigates how site-specific traffic data differ from the Mechanistic-Empirical Pavement Design Software default values. Two Weigh-in-Motion (WIM) stations were installed on Interstate-40 (I-40) and Interstate-25 (I-25) to develop site-specific data. A computer program named WIM Data Analysis Software (WIMDAS) was developed using Microsoft C-Sharp (.Net) for quality checking and processing of raw WIM data. A complete year of data, from November 2013 to October 2014, was analyzed using the developed program. From these data, the vehicle class distribution, directional distribution, lane distribution, monthly adjustment factor, hourly distribution, axle load spectra, average number of axles per vehicle, axle spacing, lateral wander distribution, and wheelbase distribution were calculated. A comparative study was then carried out between the measured data and the AASHTOWare default values. It was found that the measured general traffic inputs for I-40 and I-25 differ significantly from the default values.
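As an example of one of these traffic inputs, the monthly adjustment factor (MAF) is computed from the average monthly daily truck traffic (AMDTT); the twelve counts below are invented placeholders:

```python
# Monthly adjustment factor: MAF_i = 12 * AMDTT_i / sum(AMDTT);
# the AASHTOWare default assigns 1.0 to every month.
amdtt = [950, 900, 1000, 1020, 1100, 1150,
         1180, 1160, 1080, 1050, 980, 940]   # placeholder counts, Jan-Dec

total = sum(amdtt)
maf = [12 * m / total for m in amdtt]
for month, factor in enumerate(maf, start=1):
    print(f"Month {month:2d}: MAF = {factor:.3f}")
```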
Abstract: Maintaining the factory-default battery endurance rate over time, while supporting a huge number of running applications on energy-restricted mobile devices, has created a new challenge for mobile application developers. While trying to meet customers' unlimited expectations, developers are barely aware of how efficiently their applications use energy. Thus, developers need a set of valid energy consumption indicators to assist them in developing energy-saving applications. In this paper, we present several software product metrics that can be used as indicators to estimate the energy consumption of Android-based mobile applications at the early design stage. In particular, Trepn Profiler (a power profiling tool for Qualcomm processors) was used to collect mobile application power consumption data, which was then analyzed against 23 software metrics in this preliminary study. The results show that McCabe cyclomatic complexity, number of parameters, nested block depth, number of methods, weighted methods per class, number of classes, total lines of code, and method lines have a direct relationship with the power consumption of a mobile application.
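The kind of metric-to-power association reported above can be checked with a simple Pearson correlation; the numbers below are invented placeholders, not the study's measurements:

```python
# Pearson correlation between one software metric and measured power draw.
import numpy as np

cyclomatic = np.array([3, 7, 12, 18, 25, 31])          # McCabe complexity
power_mw   = np.array([110, 150, 210, 260, 340, 400])  # profiler-style readings

r = np.corrcoef(cyclomatic, power_mw)[0, 1]
print(f"Pearson r = {r:.2f}")   # values near +1 suggest a direct relationship
```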
Abstract: The use of wireless technology in industrial networks has gained considerable attention in recent years. In this paper, we thoroughly analyze the effect of contention window (CW) size on the performance of IEEE 802.11-based industrial wireless networks (IWNs), from the delay and reliability perspectives. Results show that the default values of CWmin, CWmax, and retry limit (RL) are far from optimal due to industrial application characteristics, including short packets and noisy environments. We therefore propose an adaptive, payload-dependent CW algorithm to minimize the average delay. Finally, a simple but effective CW and RL setting is proposed for industrial applications, which outperforms the minimum-average-delay solution from the maximum delay and jitter perspectives, at the cost of a slightly higher average delay. Simulation results show an improvement of up to 20%, 25%, and 30% in average delay, maximum delay, and jitter, respectively.
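A hedged sketch of a payload-dependent contention-window rule in the spirit described above; the scaling rule and bounds are assumptions, not the paper's actual algorithm:

```python
# Toy payload-dependent CWmin selection: short industrial packets contend
# with smaller windows. Mapping, reference payload, and bounds are invented.
def adaptive_cw_min(payload_bytes, cw_floor=8, cw_ceiling=64,
                    ref_payload=256):
    # Scale the window with payload size, round up to a power of two,
    # and clamp to [cw_floor, cw_ceiling].
    scaled = cw_floor * max(1.0, payload_bytes / ref_payload)
    cw = 1
    while cw < scaled:
        cw *= 2
    return min(max(cw, cw_floor), cw_ceiling)

for size in (32, 128, 512, 1024):
    print(size, adaptive_cw_min(size))   # 8, 8, 16, 32
```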
Abstract: Background in music analysis: Traditionally, when we think about a composer's sketches, the chances are that we are thinking in terms of the working out of detail, rather than the evolution of an overall concept. Since music is a "time art," it follows that questions of form cannot be entirely detached from considerations of time. One could say that composers tend to regard time either as a space gradually and partly intuitively filled, or as something to be occupied according to a specific strategy. One thing that sheds light on Stockhausen's compositional thinking is his frequent use of "form schemas," that is, often single-page representations of the entire structure of a piece.
Background in music technology: Sonic Visualiser (SV) is a program used to study musical recordings. It is an open source application for viewing, analyzing, and annotating music audio files. It contains a number of visualisation tools designed with useful default parameters for musical analysis. Additionally, the Vamp plugin format of SV supports analyses such as structural segmentation.
Aims: The aim of this paper is to show how SV may be used to obtain a better understanding of a specific musical work, and how the compositional strategy impacts musical structures and musical surfaces. It is known that "traditional" music-analytic methods do not allow one to indicate interrelationships between the musical surface (which is perceived) and the underlying musical/acoustical structure.
Main Contribution: Stockhausen dealt with the most diverse musical problems by the most varied methods. A characteristic that he never ceased to place at the center of his thought and work was the quest for a new balance founded upon an acute connection between speculation and intuition. In the case of Mikrophonie I (1964) for tam-tam and six players, Stockhausen makes a distinction between the "connection scheme," which indicates the ground rules underlying all versions, and the "form scheme," which is associated with a particular version. The preface to the published score includes both the connection scheme and a single instance of a form scheme, which is what one can hear on the CD recording. In the current study, the insight into the compositional strategy chosen by Stockhausen was compared with the auditory image, that is, with the perceived musical surface. Stockhausen's musical work is analyzed both in terms of melodic/voice and timbre evolution.
Implications: The current study shows how musical structures determine the musical surface. The general assumption is that, while listening to music, we extract basic kinds of musical information from musical surfaces. It is shown that interactive strategies of musical structure analysis can offer a very fruitful way of looking directly into certain structural features of music.
Abstract: In this paper, the computation of the credit valuation adjustment (CVA) of an interest rate swap is presented based on the counterparty's rating. Ratings and default probabilities published by Moody's Investors Service are used to calculate the CVA for a specific swap with different maturities. With this computation, the influence of rating variation on the CVA can be shown. The approach is applied to the analysis of Greek CDS variation during the period of the Greek crisis between 2008 and 2011. The main point is the determination of the correlation between the fluctuation of the Greek cumulative CDS value and the variation of the swap CVA due to rating changes.
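The rating-based computation described above builds on the standard unilateral CVA sum: loss given default times the discounted expected exposure, weighted by the marginal default probability implied by the rating. All numbers below are illustrative placeholders:

```python
# Unilateral CVA = (1 - R) * sum_i DF(t_i) * EE(t_i) * [PD(t_i) - PD(t_{i-1})].
# Discount factors, exposures, and rating-implied PDs are invented.
recovery = 0.4
discount = [0.98, 0.95, 0.92, 0.89]    # discount factors at t1..t4
exposure = [1.2, 1.5, 1.1, 0.6]        # expected positive exposure (M EUR)
cum_pd   = [0.01, 0.025, 0.045, 0.07]  # cumulative PD from the rating

cva = 0.0
prev = 0.0
for df, ee, pd in zip(discount, exposure, cum_pd):
    cva += (1 - recovery) * df * ee * (pd - prev)  # marginal default prob.
    prev = pd
print(f"CVA = {cva:.4f} M EUR")
```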
Abstract: One of the most important tasks in risk management is the correct determination of the probability of default (PD) of particular financial subjects. In this paper, the possibility of determining a financial institution's PD with credit-scoring models is discussed. The paper is divided into two parts. The first part is devoted to the estimation of three different models (based on linear discriminant analysis, logit regression and probit regression) from a sample of almost three hundred US commercial banks; these models are then compared and verified on a control sample in order to choose the best one. The second part of the paper applies the chosen model to the portfolios of three key Czech banks to estimate their present financial stability. However, it is no less important to be able to estimate the evolution of PD in the future. For this reason, the second task in this paper is to estimate the probability distribution of the future PD for the Czech banks. To this end, the values of particular indicators are sampled randomly and the distribution of the PDs is estimated, under the assumption that the indicators follow a multidimensional subordinated Lévy model (the Variance Gamma model and the Normal Inverse Gaussian model, in particular). Although the obtained results show that all the banks are relatively healthy, there is still a considerable chance that "a financial crisis" will occur, at least in terms of probability; this is indicated by estimates of various quantiles of the estimated distributions. Finally, it should be noted that the applicability of the estimated model (with respect to the data used) is limited to the recessionary phase of the financial market.
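A minimal version of the logit credit-scoring step described above: fit a logistic regression on bank indicators and read off PD as the predicted default probability. The data are simulated placeholders, not the US bank sample:

```python
# Logit credit-scoring sketch on simulated indicators.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))     # e.g. capital, liquidity, profitability ratios
# Simulated default labels driven by the first two indicators.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=300) < -1).astype(int)

model = LogisticRegression().fit(X, y)
pd_hat = model.predict_proba(X[:3])[:, 1]   # PD for three sample banks
print(pd_hat)
```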
Abstract: The objective of this work is to use the Fire Dynamics Simulator (FDS) to investigate the behavior of a kerosene small-scale fire. FDS is a Computational Fluid Dynamics (CFD) tool developed specifically for fire applications. Throughout its development, FDS has been used for the resolution of practical problems in fire protection engineering and, at the same time, to study fundamental fire dynamics and combustion. Predictions are based on Large Eddy Simulation (LES) with a Smagorinsky turbulence model: LES directly computes the large-scale eddies, while the sub-grid-scale dissipative processes are modeled. This technique is the default turbulence model in FDS and was used in this study. The numerical predictions are validated through a direct comparison of combustion output variables with experimental measurements. The effect of mesh size on the temperature evolution is investigated and an optimum grid size is suggested. The effect of opening widths is also investigated, and temperature distributions and species flows are presented for different operating conditions. The effect of the composition of the fuel used on atmospheric pollution is also a focus of this work. Good predictions are obtained when the size of the computational cells within the fire compartment is less than 1/10th of the characteristic fire diameter.
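For reference, the mesh-size check mentioned above uses the characteristic fire diameter D*; the sketch below computes D* and the resulting cell-size target (the heat release rate is an illustrative value, not the experiment's):

```python
# Characteristic fire diameter D* = (Q / (rho * cp * T_inf * sqrt(g)))**(2/5),
# with Q in kW, giving D* in m; ambient properties at ~20 degC.
import math

def characteristic_fire_diameter(q_kw, rho=1.204, cp=1.005, t_inf=293.0,
                                 g=9.81):
    return (q_kw / (rho * cp * t_inf * math.sqrt(g))) ** 0.4

q = 50.0                                # placeholder heat release rate [kW]
d_star = characteristic_fire_diameter(q)
print(f"D* = {d_star:.3f} m; target cell size < {d_star / 10:.3f} m")
```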