Abstract: Currently the most prevalent deep learning methods require large amounts of training data, whereas few-shot learning tries to learn a model from limited data without extensive retraining. In this paper, we present a loss function based on triplet loss for solving the few-shot problem using metric-based learning. Instead of empirically setting the margin distance in triplet loss to a constant, we propose an adaptive margin distance strategy that obtains an appropriate margin distance automatically. We implement the strategy in a deep Siamese network for deep metric embedding, using an optimization approach that penalizes the worst case and rewards the best. Our experiments on image recognition and a co-segmentation model demonstrate that the proposed triplet loss with adaptive margin distance significantly improves performance.
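The abstract does not spell out the adaptive rule, so the following PyTorch sketch is only a plausible reading: the margin is derived from the batch's own positive/negative distance statistics instead of being a fixed constant. The `scale` parameter and the batch-mean heuristic are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F

def adaptive_margin_triplet_loss(anchor, positive, negative, scale=0.5):
    """Triplet loss whose margin adapts to the current batch rather than
    being a fixed constant. The adaptive rule (a fraction of the mean
    positive/negative distance gap) is an illustrative guess."""
    d_pos = F.pairwise_distance(anchor, positive)   # anchor-positive distances
    d_neg = F.pairwise_distance(anchor, negative)   # anchor-negative distances
    # Set the margin from how separable the batch already is; detach so the
    # margin itself does not receive gradients.
    margin = scale * (d_neg - d_pos).mean().clamp(min=0.0).detach()
    return F.relu(d_pos - d_neg + margin).mean()    # standard hinge form
```

Usage would be the usual triplet setup: `loss = adaptive_margin_triplet_loss(f(a), f(p), f(n))` for an embedding network `f` and a batch of anchor/positive/negative images.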
Abstract: As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), focusing on the choice of loss functions and optimizers. The CNNs used in this paper are AlexNet, VGGNet, and ResNet. By pairing various loss functions (Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss) with various optimizers (Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam), we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach by comparing generalization power. Importantly, the same subset of LivDet 2017 is used for training and testing across all models, so generalization on unseen data can be compared across all of them. The best CNN (AlexNet) with the appropriate loss function and optimizer yields more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports each model's accuracy together with its parameter count and mean average error rate, to identify the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning, have been applied to the final model.
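A minimal sketch of the kind of loss/optimizer grid the paper describes, using built-in tf.keras string identifiers. `build_model` is a hypothetical factory returning a fresh uncompiled model, and Center Loss is omitted because Keras has no built-in for it.

```python
import itertools
import tensorflow as tf

LOSSES = ["categorical_crossentropy", "cosine_similarity", "categorical_hinge"]
OPTIMIZERS = ["adam", "sgd", "rmsprop", "adadelta", "adagrad", "nadam"]

def grid_evaluate(build_model, train_ds, val_ds, epochs=5):
    """Train one model per (loss, optimizer) pair and record the best
    validation accuracy, so the pairs can be ranked by generalization."""
    results = {}
    for loss, opt in itertools.product(LOSSES, OPTIMIZERS):
        model = build_model()  # fresh weights for every configuration
        model.compile(optimizer=opt, loss=loss, metrics=["accuracy"])
        history = model.fit(train_ds, validation_data=val_ds,
                            epochs=epochs, verbose=0)
        results[(loss, opt)] = max(history.history["val_accuracy"])
    return results
```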
Abstract: The use of deep learning for species identification in camera trap images has revolutionised our ability to study, conserve and monitor species in a highly efficient and unobtrusive manner, with state-of-the-art models achieving accuracies surpassing that of manual human classification. The high imbalance of camera trap datasets, however, results in poor accuracies for minority (rare or endangered) species due to their relative insignificance to the overall model accuracy. This paper investigates the use of Focal Loss, in comparison to the traditional Cross-Entropy Loss function, to improve the identification of minority species in the “255 Bird Species” dataset from Kaggle. The results show that, although Focal Loss slightly decreased the accuracy of the majority species, it increased the F1-score by 0.06 and improved the identification of the bottom two, five and ten (minority) species by 37.5%, 15.7% and 10.8%, respectively, while also improving overall accuracy by 2.96%.
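Focal Loss itself is standard (Lin et al., 2017); a compact PyTorch version for multi-class classification, where `gamma` down-weights easy majority-class examples so rare species dominate the gradient (setting `gamma=0` and `alpha=1` recovers plain cross entropy):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Multi-class focal loss: cross entropy rescaled by (1 - p_t)^gamma,
    so confident (easy) examples contribute little to the loss."""
    log_probs = F.log_softmax(logits, dim=-1)
    ce = F.nll_loss(log_probs, targets, reduction="none")  # -log p_t per example
    p_t = torch.exp(-ce)                                   # model prob. of true class
    return (alpha * (1.0 - p_t) ** gamma * ce).mean()
```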
Abstract: Ancient books are significant carriers of culture, and their background textures convey potentially valuable historical information. However, multi-style texture recovery of ancient books has received little attention. Restricted by the scarcity of ancient texture samples and the complexity of the processing pipeline, the generation of ancient textures confronts new challenges. For instance, training without sufficient data usually brings about overfitting or mode collapse, so some of the outputs are prone to look fake. Recently, deep-learning-based image generation and style transfer have been widely applied in computer vision, and breakthroughs in the field make it possible to conduct research on multi-style texture recovery of ancient books. Under these circumstances, we propose a layout analysis and image fusion system. Firstly, we train models using Deep Convolutional Generative Adversarial Networks (DCGAN) to synthesize multi-style ancient textures; then, we analyze layouts based on our proposed Position Rearrangement (PR) algorithm to adjust the layout structure of the foreground content; finally, we fuse the rearranged foreground texts with the generated background. In experiments, diverse samples such as ancient Yi, Jurchen, and Seal script were selected as training sets. The performance of different fine-tuned models was gradually improved by adjusting the parameters and structure of the DCGAN model. To evaluate the results scientifically, the cross-entropy loss and the Fréchet Inception Distance (FID) were selected as assessment criteria. Eventually, we obtained model M8 with the lowest FID score. Compared with the DCGAN model proposed by Radford et al., the FID score of M8 improved by 19.26%, markedly enhancing the quality of the synthesized images.
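For reference, the FID criterion used above can be computed from Inception activation statistics as follows; extracting the activations from a real Inception-v3 is assumed to happen elsewhere, and this is the standard formula rather than the paper's own code.

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(act_real, act_fake):
    """FID between two activation sets (n_samples x dims):
    ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^{1/2})."""
    mu_r, mu_f = act_real.mean(axis=0), act_fake.mean(axis=0)
    cov_r = np.cov(act_real, rowvar=False)
    cov_f = np.cov(act_fake, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    covmean = covmean.real  # discard tiny imaginary parts from numerics
    diff = mu_r - mu_f
    return diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean)
```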
Abstract: Dr. Genichi Taguchi looked at quality in broader terms and gave an excellent definition of quality in terms of loss to society. However, the scope of this definition is limited to the losses imparted by a poor-quality product to the customer, is considered only over the useful life of the product, and in certain situations this loss can even be zero. In this paper, it is proposed that the scope of product quality be further broadened by considering the losses imparted by a poor-quality product to society at large, due to associated environmental and safety-related factors, over the complete life cycle of the product. Moreover, although these losses can be minimized through techno-safety interventions, the net loss to society can never be made zero. This paper thus proposes an entirely new approach to defining product quality, building on Taguchi’s definition.
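For context, Taguchi's classical quadratic loss-to-society, which the paper takes as its starting point (k is a cost coefficient and m the target value of the quality characteristic y):

```latex
% Taguchi's quadratic loss to society for quality characteristic y
% with target m and cost coefficient k:
L(y) = k\,(y - m)^2
% Averaged over a production process with mean \mu and variance \sigma^2:
\mathbb{E}[L] = k\left(\sigma^2 + (\mu - m)^2\right)
```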
Abstract: Past earthquakes have shown that seismic events may cause large economic losses in buildings. FEMA P-58 provides engineers with a practical tool for the seismic performance assessment of buildings. In this study, FEMA P-58 is applied to two typical Italian pre-1970 reinforced concrete frame buildings, characterized by plain rebars as steel reinforcement and by masonry infills and partitions. Given that suitable tools for these buildings are missing in FEMA P-58, specific fragility curves and loss functions are first developed. Next, building performance is evaluated following a time-based assessment approach. Finally, expected annual losses for the selected buildings are derived and compared with past applications to old RC frame buildings representative of the US building stock.
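A time-based assessment of this kind ultimately integrates losses against the hazard curve; a minimal numerical sketch, assuming precomputed arrays for the intensity grid, the expected loss at each intensity, and the mean annual exceedance rate λ(im):

```python
import numpy as np

def expected_annual_loss(im, mean_loss, exceed_rate):
    """Time-based EAL: integrate E[L | IM = im] against |dλ/dIM|.
    `im` is an intensity-measure grid, `mean_loss` the expected loss at
    each level, `exceed_rate` the hazard curve λ(im)."""
    dlambda = -np.gradient(exceed_rate, im)  # λ decreases with intensity
    return np.trapz(mean_loss * dlambda, im)
```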
Abstract: The final step in completing the “Analytical Systems Engineering Process” is the “Allocated Architecture”, in which all Functional Requirements (FRs) of an engineering system must be allocated to their corresponding Physical Components (PCs). At this step, any design of the allocated architecture in which no clear pattern assigns each PC the exclusive “responsibility” for fulfilling its allocated FR(s) is considered a poor design, since it becomes difficult to determine which specific PC(s) failed to satisfy a given FR. The present study utilizes the principles of the Axiomatic Design method to address this problem mathematically and establishes an “Axiomatic Model” as a solution for reaching good alternatives for developing the allocated architecture. The study proposes a “Loss Function” as a quantitative criterion for monetarily comparing non-ideal designs for the allocated architecture and choosing the one that imposes relatively lower cost on the system’s stakeholders. As a case study, we use the existing design of the U.S. electricity marketing subsystem, based on data provided by the U.S. Energy Information Administration (EIA). The result for 2012 shows the symptoms of a poor design and ineffectiveness due to coupling among the FRs of this subsystem.
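In Axiomatic Design terms, the “clear pattern of responsibility” corresponds to the structure of the design matrix. A small sketch of the standard classification, written here as FRs = A @ PCs to match the abstract's FR-to-PC allocation (Axiomatic Design usually maps FRs to design parameters):

```python
import numpy as np

def classify_design(A, tol=1e-9):
    """Classify a design matrix: diagonal means uncoupled (ideal),
    triangular means decoupled (acceptable if solved in order), anything
    else is coupled -- the 'poor design' symptom the abstract describes,
    where no PC has exclusive responsibility for an FR."""
    A = np.asarray(A, dtype=float)
    off_diagonal = A - np.diag(np.diag(A))
    if np.all(np.abs(off_diagonal) < tol):
        return "uncoupled"
    if np.all(np.abs(np.triu(A, 1)) < tol) or np.all(np.abs(np.tril(A, -1)) < tol):
        return "decoupled"
    return "coupled"
```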
Abstract: In this paper, we consider the application of Extreme Value Theory (EVT) as a risk measurement tool. The Value at Risk (VaR) for a set of indices from six Stock Exchanges of frontier markets is calculated using the Peaks over Threshold method, and the performance of the model is evaluated index-wise using coverage tests and loss functions. Our results show that “fat-tailedness” of the data alone is not enough to justify the use of EVT as a VaR approach; the structure of the returns dynamics is also a determining factor. The approach works well in markets that have experienced extremes in the past, which makes the model capable of coping with upcoming extremes (Colombo, Tunisia and Zagreb Stock Exchanges). On the other hand, we find that indices with lower past than present volatility fail to deal adequately with future extremes (Mauritius and Kazakhstan). We also conclude that using EVT alone produces rather static VaR figures that do not reflect the actual dynamics of the data.
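The Peaks-over-Threshold VaR estimator used in such studies has a standard closed form once a generalized Pareto distribution is fitted to the threshold excesses; a sketch with scipy, assuming losses are given as positive numbers and the fitted shape ξ is nonzero (threshold choice, the hard part in practice, is taken as given):

```python
import numpy as np
from scipy.stats import genpareto

def pot_var(losses, threshold, q=0.99):
    """Value at Risk via Peaks over Threshold: fit a GPD to the excesses
    over `threshold`, then invert the tail estimator
    VaR_q = u + (beta/xi) * (((n/N_u) * (1 - q)) ** (-xi) - 1)."""
    losses = np.asarray(losses)
    excesses = losses[losses > threshold] - threshold
    xi, _, beta = genpareto.fit(excesses, floc=0.0)  # location fixed at 0
    n, n_u = len(losses), len(excesses)
    return threshold + (beta / xi) * (((n / n_u) * (1.0 - q)) ** (-xi) - 1.0)
```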
Abstract: A comprehensive Bayesian analysis has been carried out, in the context of informative and non-informative priors, for the shape parameter of the Burr type X distribution under different symmetric and asymmetric loss functions. Elicitation of the hyperparameters through the prior predictive approach is also discussed. We also derive expressions for the posterior predictive distributions, predictive intervals, and credible intervals. As an illustration, these estimators are compared through a simulation study.
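As a concrete instance of one such estimator: with a Gamma(a, b) prior the Burr type X shape parameter has a conjugate posterior, so the Bayes estimate under squared-error loss is simply the posterior mean. Only this one symmetric-loss case is sketched, and the hyperparameters a, b are placeholders.

```python
import numpy as np

def burr_x_bayes_estimate(x, a=1.0, b=1.0):
    """Bayes estimate of the Burr type X shape parameter under
    squared-error loss with a Gamma(a, b) prior. The likelihood is
    proportional to theta^n * exp(-theta * T) with
    T = -sum(log(1 - exp(-x_i^2))), so the posterior is
    Gamma(a + n, b + T) and its mean is closed form."""
    x = np.asarray(x)
    t = -np.log1p(-np.exp(-x ** 2)).sum()  # log1p(-e^{-x^2}) = log(1 - e^{-x^2})
    n = len(x)
    return (a + n) / (b + t)
```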
Abstract: This article is concerned with the analysis of the failure rate (shape parameter) of the Topp Leone distribution in a Bayesian framework. Different loss functions and a couple of non-informative priors have been assumed for posterior estimation. The posterior predictive distributions have also been derived. A simulation study has been carried out to compare the performance of the different estimators, and a real-life example illustrates the applicability of the results. The findings suggest that the precautionary loss function, based on the Jeffreys prior and singly Type-II censored samples, can effectively be employed to obtain the Bayes estimate of the failure rate under the Topp Leone distribution.
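For the uncensored case, the precautionary-loss estimator the abstract recommends has a simple closed form under the Jeffreys prior; a sketch of the standard derivation (the censored-sample version the paper studies modifies T accordingly):

```latex
% Precautionary loss and its Bayes estimator:
L(\hat\theta, \theta) = \frac{(\hat\theta - \theta)^2}{\hat\theta},
\qquad
\hat\theta_P = \sqrt{\mathbb{E}\left[\theta^2 \mid \mathbf{x}\right]}.
% Under the Jeffreys prior \pi(\theta) \propto 1/\theta, the Topp Leone
% posterior is Gamma(n, T) with T = -\sum_i \log\big(x_i (2 - x_i)\big), hence
\hat\theta_P = \frac{\sqrt{n(n+1)}}{T}.
```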
Abstract: One of the major problems in liberalized power markets is loss allocation. In this paper, a different method for allocating transmission losses to pool market participants is proposed. The proposed method is fundamentally based on decomposition of the loss function and the current projection concept. The method has been implemented and tested on several networks, and one sample case is summarized in the paper. The results show that the method is comprehensive and fair in allocating the energy losses of a power market among its participants.
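The abstract does not detail the decomposition, so the following is only an illustrative reading of the current projection idea: each participant's share of a branch's I²R loss is the real projection of its current phasor onto the total branch current, which makes the shares sum exactly to the branch loss.

```python
import numpy as np

def allocate_branch_loss(branch_currents, resistance):
    """Split one branch's I^2 R loss among participants by projecting each
    participant's current contribution onto the total branch current.
    `branch_currents` maps participant name -> complex current phasor;
    this is an illustrative reading, not the paper's exact decomposition."""
    i_total = sum(branch_currents.values())
    total_loss = resistance * abs(i_total) ** 2
    shares = {}
    for name, i_k in branch_currents.items():
        # Fraction of I_total explained by I_k (real projection).
        proj = (i_k * np.conj(i_total)).real / abs(i_total) ** 2
        shares[name] = proj * total_loss
    return shares  # shares sum to total_loss by construction
```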
Abstract: This paper uses p-tolerance with the lowest posterior loss, the quadratic loss function, the average length criterion, the average coverage criterion, and the worst outcome criterion for computing the sample size needed to estimate the proportion of a binomial probability function with a beta prior distribution. The proposed methodology is examined and its effectiveness is shown.
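A minimal sketch of one of these criteria, the average length criterion: find the smallest n whose posterior credible-interval length, averaged over the beta-binomial prior predictive, falls below a tolerance. Equal-tailed intervals are used for simplicity (the lowest-posterior-loss intervals of the paper would replace the `ppf`-based bounds); `tol` and the prior hyperparameters are placeholders.

```python
from scipy.stats import beta, betabinom

def sample_size_avg_length(a, b, tol, cred=0.95, n_max=5000):
    """Smallest n whose expected posterior credible-interval length for a
    binomial proportion (Beta(a, b) prior) is at most `tol`."""
    lo_q, hi_q = (1 - cred) / 2, 1 - (1 - cred) / 2
    for n in range(1, n_max + 1):
        avg_len = 0.0
        for x in range(n + 1):
            post = beta(a + x, b + n - x)          # posterior given x successes
            length = post.ppf(hi_q) - post.ppf(lo_q)
            avg_len += betabinom.pmf(x, n, a, b) * length  # prior predictive weight
        if avg_len <= tol:
            return n
    raise ValueError("tolerance not reached within n_max")
```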
Abstract: Despite many success stories of manufacturing safety, many organizations are still reluctant, perceiving it as cost-increasing and time-consuming. A clear contributor may be the use of lagging indicators rather than leading indicator measures. The study therefore proposes a combinatorial model for determining the best safety strategy. Combination theory and cost-benefit analysis were employed to develop a monetary saving/loss function in terms of the value of preventions and the cost of the prevention strategy. Documentation, interviews, and a structured questionnaire were employed to collect before-and-after safety programme records from a tobacco company for the periods 1993-2001 (pre-safety) and 2002-2008 (safety period) for the model application. Three combinatorial alternatives, A, B and C, were obtained, resulting in 4, 6 and 4 strategies respectively, with PPE and training being predominant. A total of 728 accidents were recorded over the 9-year pre-safety period, and 163 accidents over the 7-year safety period. Six prevention activities (alternative B) yielded the best results; savings were experienced in all years of operation except 2004. The study provides a leading resource for planning successful safety programmes.
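The saving/loss function described above lends itself to direct enumeration; a sketch under strong illustrative assumptions (additive costs and preventions, a single monetary value per prevented accident), none of which are taken from the study's data:

```python
from itertools import combinations

def best_strategy(activities, value_of_prevention):
    """Enumerate combinations of prevention activities and score each by
    monetary saving = value of accidents prevented - cost of the strategy.
    `activities` maps name -> (cost, accidents_prevented); additivity and
    the per-accident value are illustrative assumptions."""
    best = (float("-inf"), ())
    names = list(activities)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            cost = sum(activities[n][0] for n in combo)
            prevented = sum(activities[n][1] for n in combo)
            saving = prevented * value_of_prevention - cost
            best = max(best, (saving, combo))
    return best  # (net saving, chosen strategy)
```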
Abstract: The problem of ranking (rank regression) has become popular in the machine learning community. This theory relates to problems in which one has to predict (guess) the order between objects on the basis of vectors describing their observed features. In many ranking algorithms a convex loss function is used instead of the 0-1 loss, which makes these procedures computationally efficient. Hence, convex risk minimizers and their statistical properties are investigated in this paper. Fast rates of convergence are obtained under conditions that look similar to the ones from classification theory. The methods used in this paper come from the theory of U-processes as well as empirical processes.
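The convex surrogates in question replace the 0-1 loss on a pair of objects with a convex upper bound of the ranking margin; two standard choices, sketched in PyTorch:

```python
import torch

def pairwise_ranking_loss(scores_i, scores_j, order, kind="logistic"):
    """Convex surrogates for the 0-1 ranking loss on object pairs.
    `order` is +1 when object i should rank above j, -1 otherwise.
    The margin z = order * (s_i - s_j) is positive for correct orderings;
    hinge and logistic both upper-bound the 0-1 loss 1[z <= 0]."""
    z = order * (scores_i - scores_j)
    if kind == "hinge":
        return torch.clamp(1.0 - z, min=0.0).mean()
    return torch.nn.functional.softplus(-z).mean()  # log(1 + e^{-z})
```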
Abstract: In this paper a nonlinear model is presented to demonstrate the relationship between the production and marketing departments. By introducing functions such as pricing-cost and market-share loss functions, we try to capture aspects of market modelling that have not been considered before. The proposed model is a constrained signomial geometric programming model. To solve the model, after variable modifications, an iterative technique based on the concept of the geometric mean is introduced for the resulting non-standard posynomial model; this technique can be applied to a wide variety of models in non-standard posynomial geometric programming form. Finally, a numerical analysis is presented to validate the model.
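The geometric-mean device behind such iterative techniques is typically the weighted AM-GM condensation, which replaces a posynomial by a monomial that is tight at the current iterate; a generic sketch (the paper's exact iteration is not specified in the abstract):

```python
import numpy as np

def condense(terms, x0):
    """Approximate a posynomial g(x) = sum_i u_i(x) by the monomial
    prod_i (u_i(x)/a_i)^{a_i} with a_i = u_i(x0)/g(x0) (weighted AM-GM).
    The monomial underestimates g everywhere and equals g at x0, which is
    the standard building block of geometric-mean iteration schemes.
    `terms` is a list of term functions u_i."""
    u0 = np.array([u(x0) for u in terms])
    alpha = u0 / u0.sum()                     # weights sum to 1
    def monomial(x):
        vals = np.array([u(x) for u in terms])
        return np.prod((vals / alpha) ** alpha)
    return monomial
```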