Translation, Cultural Adaptation and Validation of the Hungarian Version of Self-Determination Scale

There is a scarcity of validated instruments in Hungarian for the assessment of self-determination-related traits and behaviors. To fill this gap, the aim of this study was the translation, cultural adaptation and validation of the Self-Determination Scale (SDS) for the Hungarian population. A total of 4335 adults participated in the study. The mean age of the participants was 27.97 years (SD = 9.60). The sample consisted mostly of females; less than 20% were males. Exploratory and confirmatory factor analysis was performed to check and validate the factorial structure, and Cronbach’s alpha was used to examine the reliability of the factors. Our results revealed that the Hungarian version of the SDS has good psychometric properties and is a reliable tool for psychologists who would like to study or assess self-determination traits in their clients. The adapted and validated Hungarian version of the SDS is presented in this paper.
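The reliability analysis mentioned above can be illustrated with a short computation; the formula for Cronbach's alpha is standard, though the item scores below are purely hypothetical, not study data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Illustrative 5-item Likert responses from 4 respondents (hypothetical)
scores = np.array([
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [3, 3, 2, 3, 3],
])
print(round(cronbach_alpha(scores), 3))  # → 0.952
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency for a factor.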

The Association of Vitamin B₁₂ with Body Weight- and Fat-Based Indices in Childhood Obesity

Vitamin deficiencies are common in obese individuals. In particular, the status of vitamin B12 and its association with vitamin B9 (folate) and vitamin D have recently come under investigation. Vitamin B12 is closely related to many vital processes in the body, and in clinical studies its involvement in fat metabolism draws attention from the obesity point of view. Obesity, in its advanced stages and in combination with metabolic syndrome (MetS) findings, may be a life-threatening health problem. Pediatric obesity is particularly important because it may predict severe chronic diseases in adulthood. Due to its role in fat metabolism, vitamin B12 deficiency may disrupt the metabolic pathways of lipid and energy metabolism in the body. The association of low B12 levels with the degree of obesity is therefore worth investigating, and obesity indices may be helpful at this point. Both weight- and fat-based indices are available: body mass index (BMI) belongs to the first group, while fat mass index (FMI), fat-free mass index (FFMI) and the diagnostic obesity notation model assessment-II (D2I) index belong to the latter. The aim of this study is to clarify possible associations between vitamin B12 status and obesity indices in a pediatric population. The study comprised a total of 122 children: 32 were included in the normal-body mass index (N-BMI) group, while 46 and 44 constituted the groups of morbid obese children without and with MetS, respectively. Informed consent forms and the approval of the institutional ethics committee were obtained. Obesity was classified using tables prepared by the World Health Organization, and MetS criteria were defined. Anthropometric and blood pressure measurements were taken; BMI, FMI, FFMI and D2I were calculated. Routine laboratory tests were performed, and vitamin B9, B12 and D concentrations were determined. Statistical evaluation of the study data was performed.
Vitamin B9 and vitamin D levels were reduced in the MetS group compared to children with N-BMI, though not significantly (p > 0.05). Significantly lower vitamin B12 concentrations were observed in the MetS group (p < 0.01). Blood pressure and triglyceride levels showed significant increases in morbid obese children, while high-density lipoprotein cholesterol concentrations were significantly decreased. All of the obesity indices and the insulin resistance index exhibited an increasing tendency with the severity of obesity. Inverse correlations were found between vitamin D and the insulin resistance index, as well as between vitamin B12 and D2I, in the morbid obese groups. In conclusion, a fat-based index, D2I, was the most prominent body index, showing a strong correlation with vitamin B12 concentrations in the late stage of childhood obesity. The negative correlation between these two parameters is a confirmatory finding for the association between vitamin B12 and the degree of obesity.

Spexin and Fetuin A in Morbid Obese Children

Spexin, expressed in the central nervous system, has attracted much interest for its roles in feeding behavior, obesity, diabetes, energy metabolism and cardiovascular function. Fetuin A is known as a negative acute phase reactant synthesized in the liver. Eosinophils are early indicators of cardiometabolic complications, and patients with elevated platelet counts, associated with a hypercoagulable state in the body, are also more liable to cardiovascular diseases (CVDs). The aim of this study was to examine the profiles of spexin and fetuin A alongside the variations detected in eosinophil and platelet counts in morbid obese children. Thirty-four children with normal body mass index (N-BMI) and 51 morbid obese (MO) children participated in the study. Written informed consent forms were obtained prior to the study, and the institutional ethics committee approved the study protocol. Age- and sex-adjusted BMI percentile tables prepared by the World Health Organization were used to classify healthy and obese children. Mean age ± SEM was 9.3 ± 0.6 years in the N-BMI group and 10.7 ± 0.5 years in the MO group. Anthropometric measurements of the children were taken, and BMI values were calculated from weight and height. Blood samples were obtained after an overnight fast, and routine hematologic and biochemical tests were performed. Within this context, fasting blood glucose (FBG), insulin (INS), triglyceride (TRG) and high-density lipoprotein cholesterol (HDL-C) concentrations were measured, and homeostatic model assessment for insulin resistance (HOMA-IR) values were calculated. Spexin and fetuin A levels were determined by enzyme-linked immunosorbent assay. Data were evaluated statistically. Statistically significant differences were found between the groups in terms of BMI, fat mass index, INS, HOMA-IR and HDL-C; in the MO group, all of these parameters increased while HDL-C decreased.
In the MO group, elevated counts were detected for eosinophils (p < 0.05) and platelets (p > 0.05). Fetuin A levels decreased in the MO group (p > 0.05), whereas the decrease in spexin levels was statistically significant (p < 0.05). In conclusion, these results suggest that the increases in eosinophils and platelets behave as cardiovascular risk factors, and that decreased fetuin A is a risk factor consistent with the increased risk of cardiovascular problems associated with the severity of obesity. Along with increased eosinophils, increased platelets and decreased fetuin A, decreased spexin was the parameter that best reflects a possible participation in the early development of CVD risk in MO children.

Hematologic Inflammatory Markers and Inflammation-Related Hepatokines in Pediatric Obesity

Obesity in children draws particular attention because of the many chronic diseases it may lead to in later life. Most of these diseases, including obesity itself, are related to inflammation, so inflammation-related parameters gain importance. Within this context, complete blood cell counts and the ratios or indices derived from them have recently gained ground as inflammatory markers; so far, mostly adipokines have been investigated within the field of obesity. Metabolic inflammation is closely associated with cellular dysfunction. In this study, hematologic inflammatory markers and two cytokines produced predominantly by the liver, fibroblast growth factor-21 (FGF-21) and fetuin A, were investigated in pediatric obesity. Two groups were constituted from 76 obese children based on World Health Organization criteria: Group 1 was composed of children whose age- and sex-adjusted body mass index (BMI) percentiles were between 95 and 99, and Group 2 of children above the 99th percentile; the groups were defined as obese (OB) and morbid obese (MO), respectively. Anthropometric measurements of the children were performed. Informed consent forms and the approval of the institutional ethics committee were obtained. Blood cell counts and ratios were determined by an automated hematology analyzer, and the related ratios and indices were calculated. Statistical evaluation of the data was performed with the SPSS program. There was no statistically significant difference between the groups in terms of the neutrophil-to-lymphocyte ratio, monocyte-to-high-density lipoprotein cholesterol ratio or platelet-to-lymphocyte ratio. Mean platelet volume and platelet distribution width values were decreased (p < 0.05), while total platelet count, red cell distribution width (RDW) and systemic immune inflammation index values were increased (p < 0.01) in the MO group.
Both hepatokines were increased in the same group; however, the increases were not statistically significant. In this group, a strong correlation was also found between FGF-21 and RDW when controlled for age, hematocrit, iron and ferritin (r = 0.425; p < 0.01). In conclusion, the association found in the MO group between RDW, a hematologic inflammatory marker, and FGF-21, an inflammation-related hepatokine, is an important finding discriminating between OB and MO children. This association is even more powerful when controlled for age and iron-related parameters.
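The count-derived markers compared above follow standard definitions; as a minimal illustration (with hypothetical counts, not study data), the ratios and the systemic immune-inflammation index can be computed as:

```python
def nlr(neutrophils, lymphocytes):
    """Neutrophil-to-lymphocyte ratio."""
    return neutrophils / lymphocytes

def plr(platelets, lymphocytes):
    """Platelet-to-lymphocyte ratio."""
    return platelets / lymphocytes

def sii(platelets, neutrophils, lymphocytes):
    """Systemic immune-inflammation index: platelet count times the
    neutrophil-to-lymphocyte ratio (counts in 10^9 cells/L)."""
    return platelets * neutrophils / lymphocytes

# hypothetical counts, in 10^9 cells/L
print(sii(300, 4.0, 2.0))  # → 600.0
```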

Scholar Index for Research Performance Evaluation Using Multiple Criteria Decision Making Analysis

This paper aims to present an objective quantitative methodology for evaluating an individual’s scholarly research output using multiple criteria decision analysis. A multiple criteria decision making analysis (MCDMA) methodological process is adopted to build a multiple criteria evaluation model. The scholar index, which summarizes a researcher's productivity and the scholarly impact of his or her publications in a single number (s is the number of publications with at least s citations), is a cumulative research citation index included in citation databases; it is used here to cover the multidimensional complexity of scholarly research performance and to undertake objective evaluations. The scholar index, one of the publication activity indexes, is analyzed as arguably the most appropriate scientometric indicator, since it smooths over many drawbacks of assessing scholarly output by the mere number of publications (quantity) and citations (quality). Hence, this study includes a set of indicators based on the scholar index to be used for evaluating scholarly researchers. The Google Scholar open science database was used to assess and discuss the scholarly productivity and impact of researchers. Based on the experiment of computing the scholar index and its derivative indexes for a set of researchers on an open research database platform, quantitative methods of assessing scholarly research output were successfully applied to rank researchers. The proposed methodology covers the ranking, the selection of the data on which a scholarly research performance evaluation is based, the analysis of the data, and the presentation of the multiple criteria analysis results.
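The definition quoted above (s publications with at least s citations) can be computed directly from a per-publication citation list; this sketch shows the single-number index, with a made-up citation record:

```python
def scholar_index(citations):
    """Largest s such that s publications have at least s citations
    each: sort citations in descending order and find the last rank
    whose citation count still meets or exceeds the rank."""
    s = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            s = rank
        else:
            break
    return s

# made-up citation record of one researcher
print(scholar_index([10, 8, 5, 4, 3]))  # → 4
```

The derivative indexes mentioned in the abstract would build on this value; their definitions are not given here.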

Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

t-SNE is an embedding method that the data science community has widely adopted. It serves two main tasks: displaying results, by coloring items according to their class or a feature value; and forensics, by giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, whereby all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the area of a cluster is proportional to its size in number of items, and relationships between clusters are materialized by closeness in the embedding. The algorithm is non-parametric: the transformation from high- to low-dimensional space is described but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and it becomes infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of the data. While this approach is highly scalable, points could be mapped to the exact same position, making them indistinguishable, and such a model is unable to adapt to new outliers or to concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, each time with the newly obtained embedding.
The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. The method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing the birth, evolution and death of clusters to be observed. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets’ dynamics.
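The paper's dual-cost optimization is not reproduced here, but the idea of anchoring a new embedding to a support embedding can be sketched. A minimal approach (an assumption for illustration, not the authors' method) initializes each new point at the low-dimensional position of its nearest support point:

```python
import numpy as np

def coherent_init(X_new, X_support, Y_support):
    """Place each new point at the low-dimensional position of its
    nearest support point in high-dimensional space, so clusters in
    the new embedding start where the support embedding put them."""
    # pairwise squared distances between new and support points
    d2 = ((X_new[:, None, :] - X_support[None, :, :]) ** 2).sum(axis=-1)
    return Y_support[d2.argmin(axis=1)]
```

The returned array could then seed a standard t-SNE run (e.g., scikit-learn's `TSNE(init=Y0)`), which would refine the embedding shape while starting from preserved cluster positions.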

Parametric Study of 3D Micro-Fin Tubes on Heat Transfer and Friction Factor

One area of special importance for the surface-level study of heat exchangers is tubes with internal micro-fins (< 0.5 mm tall). Micro-finned surfaces are a kind of extended solid surface in which energy is exchanged with water acting as the source or sink of energy. Significant performance gains are possible for shell, tube, or double-pipe heat exchangers if the best surfaces are identified. The parametric studies of micro-finned tubes that have appeared in the literature have left some key parameters unexplored: specifically, they ignored three-dimensional (3D) micro-fin configurations, conduction heat transfer in the fins, and conduction in the solid surface below the micro-fins. Thus, this study aimed at a parametric study of 3D micro-finned tubes that considered micro-fin height and discontinuity features. A 3D conductive and convective heat-transfer simulation through coupled solid and periodic fluid domains is carried out in a commercial package, ANSYS Fluent 19.1. The simulation is steady-state, with turbulent water flow cooling the inner wall of a micro-finned tube, and applies a constant, uniform temperature on the tube's outer wall. Performance is mapped for 18 different simulation cases, including a smooth tube, using a realizable k-ε turbulence model at a Reynolds number of 48,928. The performance of the 3D tubes was compared with results for similar two-dimensional (2D) ones. Results showed that micro-fin height has a greater impact on the performance factors than the discontinuity features of 3D micro-fin tubes. A transformed 3D micro-fin tube can enhance heat transfer and pressure drop by up to 21% and 56%, respectively, compared to a 2D one.

Catalytic Pyrolysis of Sewage Sludge for Upgrading Bio-Oil Quality Using Sludge-Based Activated Char as an Alternative to HZSM5

Due to concerns about the depletion of fossil fuel sources and the deteriorating environment, investigating the production of renewable energy will play a crucial role in alleviating the dependency on mineral fuels. One particular area of interest is the generation of bio-oil through sewage sludge (SS) pyrolysis. SS is a strong candidate compared with other types of biomass due to its availability and low cost. However, the presence of high-molecular-weight hydrocarbons and oxygenated compounds in SS bio-oil hinders some of its fuel applications. In this context, catalytic pyrolysis is an attainable route to upgrade bio-oil quality. Among the different catalysts (e.g., zeolites) studied for SS pyrolysis, activated chars (AC) are eco-friendly alternatives. The beneficial features of AC derived from SS comprise a comparatively large surface area, porosity, enriched surface functional groups, and a high amount of metal species that can improve catalytic activity. Hence, in this study a sludge-based AC catalyst was fabricated in a single-step pyrolysis reaction with NaOH as the activation agent and was compared with HZSM5 zeolite. The thermal decomposition and kinetics were investigated via thermogravimetric analysis (TGA) to guide and control pyrolysis and catalytic pyrolysis and to inform the design of the pyrolysis setup. The results indicated that pyrolysis and catalytic pyrolysis comprise four distinct stages, with the main decomposition reaction occurring in the range of 200-600 °C. The Coats-Redfern method was applied to the 2nd and 3rd devolatilization stages to estimate the reaction order and activation energy (E) from the mass-loss data. The average activation energy (Em) values for reaction orders n = 1, 2 and 3 were in the ranges of 6.67-20.37 kJ/mol for SS, 1.51-6.87 kJ/mol for HZSM5, and 2.29-9.17 kJ/mol for AC, respectively.
According to the results, both AC and HZSM5 were able to improve the reaction rate of SS pyrolysis by lowering the Em value. Moreover, to generate bio-oil and examine the effect of the catalysts on its quality, a fixed-bed pyrolysis system was designed and implemented. The composition of the produced bio-oil was analyzed via gas chromatography/mass spectrometry (GC/MS). The selected SS-to-catalyst ratios were 1:1, 2:1 and 4:1. The optimum ratio in terms of cracking long-chain hydrocarbons and removing oxygen-containing compounds was 1:1 for both catalysts. The upgraded bio-oils with HZSM5 and AC were in the total range of C4-C17, with around 72% in the range of C4-C9. The bio-oil from pyrolysis of SS contained 49.27% oxygenated compounds, while the presence of HZSM5 and AC reduced this to 7.3% and 13.02%, respectively. Meanwhile, the generation of value-added chemicals such as light aromatic compounds was significantly improved in the catalytic process. Furthermore, the fabricated AC catalyst was characterized by BET, SEM-EDX, FT-IR and TGA techniques. Overall, this research demonstrated that AC is an efficient catalyst for the pyrolysis of SS and can be used as a cost-competitive alternative to HZSM5.
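The Coats-Redfern extraction of activation energy from mass-loss data can be sketched as follows; the first-order linearization is the textbook form, while the data in the test are synthetic, not the study's TGA curves.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol*K)

def coats_redfern_E(T, alpha, n=1):
    """Activation energy E (kJ/mol) from the Coats-Redfern
    linearization: ln(g(alpha)/T^2) plotted against 1/T is a straight
    line of slope -E/R, where g depends on the reaction order n
    (g = -ln(1-alpha) for n = 1)."""
    T = np.asarray(T, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    if n == 1:
        g = -np.log(1.0 - alpha)
    else:
        g = (1.0 - (1.0 - alpha) ** (1.0 - n)) / (1.0 - n)
    slope, _ = np.polyfit(1.0 / T, np.log(g / T**2), 1)
    return -slope * R / 1000.0
```

A least-squares fit over the chosen devolatilization stage replaces the graphical slope reading.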

Dimensionality Reduction in Modal Analysis for Structural Health Monitoring

Autonomous structural health monitoring (SHM) of structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and anomaly detection on a bridge from vibrational data, and compares different feature extraction schemes to increase accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (an intelligent multiplexer), which tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., entropy, variance, kurtosis), and feature extraction (an auto-associative neural network, ANN), which combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classification (OCC) algorithms perform anomaly detection, trained on standard-condition points and tested on both normal and anomalous ones. In particular, principal component analysis (PCA), kernel principal component analysis (KPCA) and the ANN are presented and their performances compared. It is also shown that, by evaluating the correct features, anomalies can be detected with an accuracy and an F1 score greater than 95%.
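As a minimal sketch of the one-class classification stage (illustrative only; the paper's OCC models, features and thresholds differ), a PCA detector can be trained on standard-condition points and flag points whose reconstruction error exceeds a threshold taken from the training residuals:

```python
import numpy as np

class PCADetector:
    """One-class anomaly detector: fit PCA on standard-condition
    features only, then flag points whose reconstruction error
    exceeds a threshold derived from the training residuals."""

    def __init__(self, n_components=2):
        self.k = n_components

    def fit(self, X):
        self.mean_ = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - self.mean_, full_matrices=False)
        self.Vt_ = Vt[:self.k]        # principal subspace
        r = self._residual(X)
        self.threshold_ = r.mean() + 3.0 * r.std()  # 3-sigma rule
        return self

    def _residual(self, X):
        Xc = X - self.mean_
        # distance from each point to its projection on the subspace
        return np.linalg.norm(Xc - Xc @ self.Vt_.T @ self.Vt_, axis=1)

    def predict(self, X):
        return self._residual(X) > self.threshold_  # True = anomaly
```

KPCA and ANN variants replace the linear projection with a kernel map or a learned bottleneck, but the train-on-normal, threshold-on-residual logic is the same.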

Rolling Element Bearing Diagnosis by Improved Envelope Spectrum: Optimal Frequency Band Selection

Rolling Element Bearing (REB) vibration diagnosis merits special interest because of the variety of REB types and the wide use of these elements in industrial applications. The presence of a localized fault in a REB gives rise to a vibrational response characterized by the modulation of a carrier signal. The frequency content of the carrier signal (spectral frequency, f) is mainly related to the resonance frequencies of the REB. This carrier signal is modulated by another signal, governed by the periodicity of the fault impact (cyclic frequency, α). In this sense, the REB fault vibration response is a second-order cyclostationary signal. Second-order cyclostationary signals can be represented in a bi-spectral map, where the Spectral Coherence (SCoh) is plotted against f and α. The Improved Envelope Spectrum (IES) is a useful approach for REB fault diagnosis; it is obtained by integrating the SCoh over a predefined bandwidth on the f axis. Approaches to selecting the f-bandwidth have recently been proposed based on a metric that evaluates the magnitude of the IES at the fault characteristic frequencies. This metric is represented in a 1/3-binary tree as a function of the frequency bandwidth and centre, and based on this binary tree the optimal frequency band is selected. However, advantages have been observed when the metric is changed, since a different metric tends to dictate a different optimal f-bandwidth and so improves the IES representation. This paper evaluates the behaviour of the IES under a different metric optimization. The metric is based on the sample correlation coefficient, rewarding high peaks at the selected frequencies while penalizing high peaks in their neighbourhoods. Preliminary results indicate an improvement in the signal-to-noise ratio (SNR) in around 86% of the samples analysed, which belong to the IMS database.
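The core of envelope analysis for REB faults, demodulating the carrier to expose the cyclic frequency α, can be sketched with a numpy-only Hilbert transform. This is a plain squared envelope spectrum; the paper's IES additionally integrates the spectral coherence over an optimized band.

```python
import numpy as np

def envelope_spectrum(x, fs):
    """Squared envelope spectrum of a vibration signal: the fault's
    cyclic frequency (alpha) appears as a spectral line even though
    it only modulates a high-frequency carrier (resonance) band."""
    n = len(x)
    # analytic signal via the FFT (numpy-only Hilbert transform)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(X * h)
    env2 = np.abs(analytic) ** 2   # squared envelope
    env2 -= env2.mean()            # drop the DC line
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec = np.abs(np.fft.rfft(env2)) / n
    return freqs, spec
```

For a carrier at 3 kHz amplitude-modulated at 87 Hz, the spectrum peaks at 87 Hz and its harmonic, regardless of the carrier frequency.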

A Medical Vulnerability Scoring System Incorporating Health and Data Sensitivity Metrics

With the advent of complex software and increased connectivity, the security of life-critical medical devices is becoming an increasing concern, particularly given their direct impact on human safety. Security is essential, but it is impossible to develop completely secure and impenetrable systems at design time. It is therefore important to assess the potential impact on security and safety of exploiting a vulnerability in such critical medical systems. The common vulnerability scoring system (CVSS) calculates the severity of exploitable vulnerabilities; however, for medical devices it does not consider the unique challenges of impacts on human health and privacy. Thus, a medical device on which a human life depends (e.g., pacemakers, insulin pumps) can score very low, while a system on which a human life does not depend (e.g., hospital archiving systems) might score very high. In this paper, we present a Medical Vulnerability Scoring System (MVSS) that extends CVSS to address the health and privacy concerns of medical devices. We propose incorporating two new parameters, namely health impact and sensitivity impact. Sensitivity refers to the type of information that can be stolen from the device, and health impact represents the impact on the safety of the patient if the vulnerability is exploited (e.g., potential harm, life-threatening). We evaluate 15 different known vulnerabilities in medical devices and compare MVSS against two state-of-the-art medical device-oriented vulnerability scoring systems and the foundational CVSS.

Fatigue Failure Analysis in AISI 304 Stainless Wind Turbine Shafts

Wind turbines are equipment of great importance for generating clean energy in countries and regions with abundant wind. However, the complex loading fluctuations to which they are subjected can cause premature failure of this equipment through material fatigue. This work evaluates fatigue failures in small AISI 304 stainless steel wind turbine shafts. Fractographic analysis techniques, chemical analyses using energy dispersive spectrometry (EDS) and hardness tests were used to verify the origin of the failures and to characterize the properties of the components and the material. Crack nucleation on the shafts' surface was observed, caused by a combined effect of variable stresses, geometric stress-concentrating details and surface wear, leading to crack propagation until catastrophic failure. Beach marks were identified in the macrographic examination, characterizing probable failure due to fatigue. The sensitization phenomenon was also observed.

Adaptive Few-Shot Deep Metric Learning

Currently, the most prevalent deep learning methods require a large amount of data for training, whereas few-shot learning tries to learn a model from limited data without extensive retraining. In this paper, we present a loss function based on the triplet loss for solving the few-shot problem using metric-based learning. Instead of empirically setting the margin distance in the triplet loss to a constant, we propose an adaptive margin distance strategy to obtain the appropriate margin distance automatically. We implement the strategy in a deep siamese network for deep metric embedding, using an optimization approach that penalizes the worst case and rewards the best. Our experiments on image recognition and a co-segmentation model demonstrate that using our proposed triplet loss with adaptive margin distance can significantly improve performance.
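The paper's exact adaptation rule is not spelled out above, so the sketch below pairs the standard triplet loss with one hypothetical adaptive-margin rule (margin derived from the batch's own distance gap), purely to illustrate the idea of replacing a hand-tuned constant:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin):
    """Standard triplet loss over a batch of embeddings: positives
    should sit closer to the anchor than negatives by `margin`."""
    d_pos = np.linalg.norm(anchor - positive, axis=1)
    d_neg = np.linalg.norm(anchor - negative, axis=1)
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

def adaptive_margin(anchor, positive, negative, scale=0.5):
    """Hypothetical adaptation rule (not the authors'): derive the
    margin from the batch's positive/negative distance gap instead
    of a hand-tuned constant."""
    d_pos = np.linalg.norm(anchor - positive, axis=1)
    d_neg = np.linalg.norm(anchor - negative, axis=1)
    return scale * np.abs(d_neg - d_pos).mean()
```

In a siamese network, the margin would be recomputed per batch before back-propagating the loss.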

Systematic Examination of Methods Supporting the Social Innovation Process

Innovation is the key element of economic development and a key factor in social processes. Technical innovations can be identified as prerequisites and causes of social change, and they cannot be created without the renewal of society. The study of social innovation can be characterised as one of the significant research areas of our day. The study's aim is to describe the process of social innovation, which can be defined by input, transformation and output factors. This approach divides the social innovation process into three parts: situation analysis, implementation and follow-up. The methods associated with each stage of the process are illustrated along the chronological line of social innovation. In this study, we have sought to present methodologies that support long- and short-term decision-making, are easy to apply, have complementary content, and are well visualised for different user groups. When applying the methods, the reference objects differ: a county, a district, a settlement or a specific organisation. The solution proposed by the study supports the development of a methodological combination adapted to different situations. Having reviewed metric and conceptualisation issues, we wanted to develop a methodological combination, with a change-management logic, suitable for structured support of the generation of social innovation in a locality or a specific organisation. In addition to this theoretical summary, in the second part of the study we give a non-exhaustive picture of two counties located in the north-eastern part of Hungary through specific analyses and case descriptions.

Bit Error Rate Monitoring for Automatic Bias Control of Quadrature Amplitude Modulators

The most common quadrature amplitude modulator (QAM) applies two Mach-Zehnder modulators (MZMs) and one phase shifter to generate high-order modulation formats. The bias of an MZM drifts over time due to temperature, vibration and aging. This drift distorts the generated QAM signal, which degrades the bit error rate (BER) performance, so it is critical to lock each MZM's Q point to the required operating point. We propose a technique for automatic bias control (ABC) of a QAM transmitter using BER measurements and a gradient descent optimization algorithm. The proposed technique is attractive because it uses the pertinent metric, the BER, and thus compensates for bias drift independently of other system variations such as the laser source's output power. The performance and operating principles of the proposed scheme are simulated in the OptiSystem software for 4-QAM and 16-QAM transmitters.
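The control loop can be sketched as finite-difference gradient descent on a measured BER. Here `measure_ber` is a stand-in for the transmitter's BER monitor (an assumed callable, not a real API), and the quadratic toy surface only illustrates convergence, not modulator physics:

```python
import numpy as np

def abc_gradient_descent(measure_ber, bias0, lr=0.05, delta=0.01, steps=200):
    """Automatic bias control sketch: dither each bias (I-arm, Q-arm,
    phase) by +/- delta, estimate the BER gradient by central finite
    differences, and step downhill."""
    bias = np.asarray(bias0, dtype=float)
    for _ in range(steps):
        grad = np.zeros_like(bias)
        for i in range(len(bias)):
            up, down = bias.copy(), bias.copy()
            up[i] += delta
            down[i] -= delta
            grad[i] = (measure_ber(up) - measure_ber(down)) / (2 * delta)
        bias -= lr * grad
    return bias

# Toy BER surface with its minimum at an assumed ideal point (1.0, 1.0, 0.5)
ideal = np.array([1.0, 1.0, 0.5])
toy_ber = lambda b: 1e-3 + ((np.asarray(b) - ideal) ** 2).sum()
print(np.round(abc_gradient_descent(toy_ber, [0.6, 1.3, 0.1]), 2))
```

Because the cost is the BER itself, the loop re-centers the Q points without needing to model which physical effect caused the drift.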

Treatment of the Modern Management Mechanism of the Debris Flow Processes Expected in the Mletiskhevi

This work reviews and evaluates the debris flow phenomena of various geneses recently formed in the Mletiskhevi, and accordingly reveals the necessity of modern debris flow protection measures. On this basis, a truncated semi-cone-shaped debris flow barrier is proposed, whose elements are made from used car tires. Owing to the shock-absorbing capacity and geometric shape of its constituent elements (sections), the structure is effective and durable against the impact force of debris flows. The construction is economical: after a debris flow passes, the riverbed does not need to be cleaned, and the building elements are resource-saving. To assess the influence of a cohesive debris flow on the structure and to evaluate the structure's effectiveness, calculations were carried out under specific assumptions using an approved methodology. According to the calculations, after a debris flow passes through the structure (in the three-row case) its impact force is reduced threefold, which reduces the flow's speed and kinetic energy and causes sedimentation on a certain section of the watercourse below the structure. Based on the analysis and the calculations, the proposed structure is an effective, inexpensive and technically relatively easy-to-implement measure, which makes its implementation promising.

Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the types of loss functions and optimizers. The CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including cross-entropy, center loss, cosine proximity and hinge loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach by comparing generalization power; the same subset is used across all training and testing for each model, so that performance on unseen data can be compared across all the models. The best CNN (AlexNet), with the appropriate loss function and optimizer, yields more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports the models' accuracies together with their parameter counts and mean average error rates, to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proved very efficient. A practical anti-spoofing system should use a small amount of memory and run very fast with high anti-spoofing performance; for our deployed version on smartphones, additional processing steps, such as quantization and pruning, were applied to the final model.
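The loss functions compared above penalize a live-vs-spoof decision differently; as a minimal numpy illustration (labels and scores hypothetical), binary cross-entropy and hinge loss are:

```python
import numpy as np

def cross_entropy(p_live, is_live):
    """Binary cross-entropy for a live-vs-spoof probability in (0, 1):
    keeps penalizing even confident-but-imperfect correct answers."""
    return -(is_live * np.log(p_live) + (1 - is_live) * np.log(1 - p_live))

def hinge(raw_score, is_live):
    """Hinge loss on a raw score with the label mapped to +/-1:
    zero loss once the sample clears the unit margin."""
    y = 2 * is_live - 1
    return np.maximum(0.0, 1.0 - y * raw_score)
```

The different shapes (cross-entropy never saturates to zero, hinge does) are one reason the same network can make different errors under different losses.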

A Highly Sensitive Dip Strip for Detection of Phosphate in Water

Phosphorus is an essential nutrient for plant life and is most frequently found as phosphate in water. Once phosphate becomes abundant in surface water, a series of adverse effects on an ecosystem can be initiated. Therefore, a portable and reliable method is needed to monitor phosphate concentrations in the field. In this paper, an inexpensive dip strip device, with the ascorbic acid/antimony reagent dried on blotting paper and combined with wet chemistry, is developed for the detection of low concentrations of phosphate in water. Ammonium molybdate and sulfuric acid are stored separately in liquid form to significantly improve the lifetime of the device and enhance the reproducibility of its performance. The limits of detection and quantification of the optimized device are 0.134 ppm and 0.472 ppm for phosphate in water, respectively. The device's shelf life, storage conditions, and limit of detection are superior to those previously reported for paper-based phosphate detection devices.
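For readers unfamiliar with how limits of detection and quantification are typically derived, the following sketch applies the common ICH-style formulas (LOD = 3.3σ/S, LOQ = 10σ/S, where S is the calibration slope and σ the blank standard deviation) to invented calibration data; these numbers are assumptions for illustration, not the paper's measurements.

```python
import numpy as np

# Illustrative calibration data: phosphate concentration (ppm) vs. colorimetric
# signal. Values are made up for the sketch.
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
signal = np.array([0.02, 0.26, 0.51, 1.03, 2.01])

# Least-squares slope S of the calibration line.
S, intercept = np.polyfit(conc, signal, 1)

# sigma: standard deviation of repeated blank measurements (assumed values).
blanks = np.array([0.020, 0.018, 0.022, 0.021, 0.019])
sigma = blanks.std(ddof=1)

# ICH-style estimates.
lod = 3.3 * sigma / S
loq = 10.0 * sigma / S
print(f"LOD = {lod:.3f} ppm, LOQ = {loq:.3f} ppm")
```

By construction, LOQ/LOD is fixed at 10/3.3 ≈ 3.0, close to the ratio of the reported 0.472 ppm and 0.134 ppm figures.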

The Reproducibility and Repeatability of Modified Likelihood Ratio for Forensic Handwriting Examination

The forensic use of handwriting depends on the analysis, comparison, and evaluation decisions made by forensic document examiners. When biometric technology is used in forensic applications, it is necessary to compute a Likelihood Ratio (LR) to quantify the strength of evidence under two competing hypotheses, namely the prosecution and the defense hypotheses, for which a set of assumptions and methods for a given data set must be made. It is therefore important to know how repeatable and reproducible the estimated LR is. This paper evaluated the accuracy and reproducibility of examiners' decisions. Confidence intervals for the estimated LR were presented so as not to obtain an incorrect estimate that could be used to deliver a wrong judgment in a court of law. The LR is fundamentally a Bayesian concept, and we used two LR estimators, namely Logistic Regression (LoR) and Kernel Density Estimation (KDE), in this paper. The repeatability evaluation was carried out by retesting the initial experiment after an interval of six months to observe whether examiners would repeat their decisions for the estimated LR. The experimental results, based on a handwriting dataset, show that the LR has different confidence intervals in different regions, which implies that the LR cannot be estimated with the same certainty everywhere. Although LoR performed better than KDE when tested on the same dataset, the two LR estimators investigated showed a consistent region in which the LR value can be estimated confidently. These two findings advance our understanding of the LR when it is used to compute the strength of evidence in forensic handwriting examination.
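The KDE estimator mentioned above can be sketched in a few lines: the LR at a given comparison score is the ratio of the score's density under the prosecution (same-writer) hypothesis to its density under the defense (different-writer) hypothesis. The scores below are synthetic stand-ins, not the paper's handwriting data.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Synthetic similarity scores (illustrative assumption): same-writer
# comparisons tend to score higher than different-writer comparisons.
same_writer = rng.normal(loc=0.8, scale=0.10, size=200)  # prosecution hypothesis
diff_writer = rng.normal(loc=0.4, scale=0.15, size=200)  # defense hypothesis

# Fit one kernel density estimate per hypothesis.
f_p = gaussian_kde(same_writer)
f_d = gaussian_kde(diff_writer)

def likelihood_ratio(score):
    """LR = density under prosecution / density under defense."""
    return float(f_p(score)[0] / f_d(score)[0])

# A high score should support the prosecution hypothesis (LR > 1),
# a low score the defense hypothesis (LR < 1).
print(likelihood_ratio(0.85), likelihood_ratio(0.35))
```

Repeating this estimation on resampled data (or after six months, as in the paper's repeatability design) yields slightly different density fits, which is exactly why interval estimates for the LR matter.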

An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators

The requirement to maintain data integrity in laboratory operations is critical for regulatory compliance. Automation of procedures reduces the incidence of human error. Quality control laboratories located in low-income economies may face barriers when attempting to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports be accurate and reliable. Zinc Sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population and as an adjunct therapy in COVID-19 regimens. Unfortunately, the zinc content of these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of the 0.1 M Sodium Edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets. For each step in the process, the relevant formulae were entered into two spreadsheets to automate the calculations. Further checks were built into the automated system to ensure the validity of replicate analyses in titrimetric procedures. Validations were conducted using five data sets of manually computed assay results. The acceptance criteria set for the protocol were met. Significant p-values (p < 0.05, α = 0.05, at the 95% Confidence Interval) were obtained from Student's t-test comparisons of the mean values for manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and the principles of data integrity were enhanced by the use of the validated spreadsheet calculators in titrimetric evaluations of ZnSO4 tablets.
Human error in calculations was minimized when procedures were automated in quality control laboratories. The assay procedure for the formulation was completed in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.
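The kind of calculation the spreadsheets automate can be sketched as follows. The formulas mirror standard complexometric titration arithmetic (standardization molarity from a zinc reference, then percent of label claim from the assay titre); all masses, volumes, and the heptahydrate molar mass below are illustrative assumptions, not the paper's validated values.

```python
# Molar mass of zinc sulfate heptahydrate (g/mol) - assumed form for the sketch.
MW_ZNSO4_7H2O = 287.54
MW_ZN = 65.38  # g/mol, zinc reference standard

def edta_molarity(zn_mass_g, titre_ml):
    """Standardization step: moles of reference zinc / litres of EDTA consumed."""
    return (zn_mass_g / MW_ZN) / (titre_ml / 1000.0)

def percent_label_claim(titre_ml, edta_m, sample_mass_g,
                        avg_tablet_mass_g, label_claim_mg):
    """Assay step: mg of ZnSO4.7H2O found per average tablet vs. label claim."""
    mg_found = (titre_ml / 1000.0) * edta_m * MW_ZNSO4_7H2O * 1000.0
    mg_per_tablet = mg_found * avg_tablet_mass_g / sample_mass_g
    return 100.0 * mg_per_tablet / label_claim_mg

# Hypothetical bench figures: 163.5 mg zinc consuming 25.0 mL EDTA gives ~0.1 M.
m = edta_molarity(zn_mass_g=0.1635, titre_ml=25.0)
pct = percent_label_claim(titre_ml=17.4, edta_m=m, sample_mass_g=0.5,
                          avg_tablet_mass_g=0.5, label_claim_mg=500.0)
print(f"EDTA molarity = {m:.4f} M, assay = {pct:.1f}% of label claim")
```

Encoding these formulas once, with built-in replicate checks, is what removes the per-analysis transcription and arithmetic errors the abstract describes.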