Improving Subjective Bias Detection Using Bidirectional Encoder Representations from Transformers and Bidirectional Long Short-Term Memory

Detecting subjectively biased statements is a vital task. When this kind of bias is present in text or other information-dissemination media such as news, social media, scientific writing, and encyclopedias, it can weaken trust in the information and stir conflict among consumers. Subjective bias detection is also critical for many Natural Language Processing (NLP) tasks such as sentiment analysis, opinion identification, and bias neutralization, and a system that can reliably detect subjectivity in text would significantly boost research in these areas. It can also serve platforms like Wikipedia, where the use of neutral language is important. The goal of this work is to identify subjectively biased language in text at the sentence level, a problem to which machine learning is well suited. A key step in our approach is to train a classifier with BERT (Bidirectional Encoder Representations from Transformers) as the upstream model. BERT can by itself be used as a classifier; in this study, however, we use BERT as a data preprocessor and embedding generator for a Bi-LSTM (Bidirectional Long Short-Term Memory) network augmented with an attention mechanism, which yields a deeper and more accurate classifier. We evaluate the model on the Wiki Neutrality Corpus (WNC), a benchmark dataset compiled from Wikipedia edits that removed various kinds of biased phrasing from sentences, and compare it against existing approaches. Experimental analysis indicates improved performance: our model achieves state-of-the-art accuracy in detecting subjective bias. This study focuses on the English language, but the model can be fine-tuned to accommodate other languages.
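
To make the architecture concrete, the following is a minimal sketch, assuming PyTorch and the Hugging Face transformers library: a frozen BERT encoder supplies token embeddings to a Bi-LSTM with additive attention. The model name, hidden size, and the choice to freeze BERT are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: frozen BERT as embedding generator, Bi-LSTM + attention as classifier.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertBiLSTMAttention(nn.Module):
    def __init__(self, bert_name="bert-base-uncased", hidden=128, classes=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        for p in self.bert.parameters():        # use BERT as a frozen embedder
            p.requires_grad = False
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)    # additive attention scores
        self.out = nn.Linear(2 * hidden, classes)

    def forward(self, input_ids, attention_mask):
        emb = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        h, _ = self.lstm(emb)                               # (B, T, 2H)
        scores = self.attn(h).masked_fill(                  # ignore padding
            attention_mask.unsqueeze(-1) == 0, float("-inf"))
        weights = torch.softmax(scores, dim=1)              # (B, T, 1)
        context = (weights * h).sum(dim=1)                  # sentence vector
        return self.out(context)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["chocolate is obviously the best flavor"],
            return_tensors="pt", padding=True)
logits = BertBiLSTMAttention()(batch["input_ids"], batch["attention_mask"])
```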

Bayesian Geostatistical Modelling of COVID-19 Datasets

The COVID-19 dataset was obtained by extracting weather data, longitude, latitude, ISO 3166 country codes, and the case and death counts of coronavirus patients across the globe. The data were extracted over a period of eight days, at a uniform time within the specified period. Maps of cases and deaths by continent were then produced, and Bayesian geostatistical modelling was carried out on the dataset. The study found that countries in the tropical region suffered fewer deaths and infections than countries in the temperate region, which the study attributes to the higher temperatures of the tropics.
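
As an illustration of the modelling style, here is a deliberately simplified Bayesian sketch in PyMC: a Poisson regression of death counts on absolute latitude, standing in for the full geostatistical model, which would add a spatially correlated (e.g., Gaussian-process) random effect over longitude and latitude. The data below are placeholders, not the extracted dataset.

```python
# Simplified stand-in for the Bayesian geostatistical model.
import numpy as np
import pymc as pm

abs_lat = np.array([5.0, 12.0, 25.0, 40.0, 52.0, 60.0])   # placeholder values
deaths = np.array([10, 14, 60, 240, 400, 310])            # placeholder values

with pm.Model():
    beta0 = pm.Normal("beta0", mu=0.0, sigma=5.0)
    beta1 = pm.Normal("beta1", mu=0.0, sigma=1.0)
    rate = pm.math.exp(beta0 + beta1 * abs_lat)           # log-link
    pm.Poisson("obs", mu=rate, observed=deaths)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)

print(idata.posterior["beta1"].mean().item())  # >0: deaths rise with latitude
```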

The Ballistics Case Study of the Enrica Lexie Incident

On February 15, 2012, off the Indian coast of Kerala, in position 091702N-0760180E, bursts of 5.56×45 mm shots were fired from Italian-made Beretta AR70 assault rifles aboard the oil tanker Enrica Lexie, flying the Italian flag, towards the Indian fishing boat St. Anthony. Six shots hit the St. Anthony, two of which killed the Indian fishermen Ajesh Pink and Valentine Jelestine. From the analysis of the kinematic engagement of the two ships, and from the autopsy and ballistic findings of the Indian judicial authorities, it is possible to reconstruct the trajectories of these six shots. This essay reconstructs those trajectories and shows that they cannot have been direct shots but must have ricocheted off the water. As regards intermediate ballistics, the investigation scientifically demonstrates the ricochet of the rounds off the water, the gyrostatic deviation caused by the ricochet, and the tumbling effect it produced. For the four shots that directly impacted the fishing vessel, the present examination proves, with scientific rigor, that the trajectories could not have been downward but only upward; the same holds for the two shots that fatally struck the fishermen. In particular, this paper demonstrates: the loss of projectile speed due to the ricochet off the water; the tumbling effect in the ballistic medium within the two victims; the permanent cavities relevant to wound ballistics and the related ballistic trauma that prevented homeostasis, causing fatal bleeding in one case; the thermo-hardening deformation of the bullet found in Valentine Jelestine's skull; and the upward, not downward, trajectories. The paper constitutes a tool in forensic ballistics in that it reconstructs, from the final resting spots of the fired projectiles, all phases of ballistics: the internal ballistics of the weapons that fired, the intermediate phase, the terminal phase, and the penetrative structural phase. In general terms, the ballistic reconstruction is based on measurable parameters whose magnitudes are contained with certainty within lower and upper limits; quantities such as angles, speed, impact energy, and the firing position of the shooter can therefore be identified within those limits. Finally, the investigation of the internal bullet track, obtained from any autopsy examination, offers a significant "lesson learned" and, above all, a starting point for containing or mitigating bleeding when rescuing victims of future gunshot wounds.
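
To illustrate the bounded-parameter logic in miniature: every measured quantity enters the reconstruction as an interval, and any derived quantity, here a trajectory elevation angle, inherits guaranteed lower and upper limits. A hypothetical Python sketch with generic numbers, not the case data:

```python
# Interval bounds on a trajectory elevation angle from bounded measurements.
import math

def interval_elevation(dh_m, dist_m):
    """Elevation-angle bounds (deg) from (min, max) height difference
    and (min, max) horizontal range; atan2 is monotonic in both."""
    lo = math.degrees(math.atan2(dh_m[0], dist_m[1]))  # smallest rise, longest range
    hi = math.degrees(math.atan2(dh_m[1], dist_m[0]))  # largest rise, shortest range
    return lo, hi

# e.g., impact point 0.5-0.9 m above the waterline, ship separation 250-300 m
print(interval_elevation((0.5, 0.9), (250.0, 300.0)))  # both bounds > 0: upward
```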

Transformation of Industrial Policy towards Industry 4.0 and Its Impact on Firms' Competition

Europe is on the threshold of a new industrial revolution called Industry 4.0. Many believe that it will increase the flexibility of production, the mass customization of products for consumers, and the speed of service, while improving product quality and dramatically increasing productivity. However, the benefits of Industry 4.0 come with inevitable changes and challenges, one of which is the transformation of current competition and business models. This article examines the possible outcomes of a shift from the classic Bertrand and Cournot models of competition to a qualitatively new competition based on innovation. The ability to deliver a new product quickly, and to produce individual designs through flexible and quickly configurable factories, by reducing equipment failures and increasing process automation and control, becomes highly important. This study shows that the ongoing transformation of the competition model is changing the game. Together with the creation of complex value networks, it requires huge investments, which puts small and medium-sized enterprises at a particular disadvantage. In addition, the ongoing digitalization of data raises new concerns regarding legal obligations, intellectual property, and security.
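
For reference, the classic Cournot benchmark the article starts from can be stated in a few lines. A sketch with illustrative parameters, assuming linear inverse demand P(Q) = a − bQ and constant marginal costs:

```python
# Textbook Cournot duopoly; equilibrium follows from the first-order conditions.
a, b = 100.0, 1.0          # demand intercept and slope (illustrative)
c1, c2 = 10.0, 10.0        # marginal costs of firms 1 and 2 (illustrative)

# Best-response intersection: q_i* = (a - 2*c_i + c_j) / (3*b)
q1 = (a - 2 * c1 + c2) / (3 * b)
q2 = (a - 2 * c2 + c1) / (3 * b)
price = a - b * (q1 + q2)
print(f"q1={q1:.1f}, q2={q2:.1f}, price={price:.1f}")
# -> q1=30.0, q2=30.0, price=40.0
```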

Modified Genome-Scale Metabolic Model of Escherichia coli by Adding Hyaluronic Acid Biosynthesis-Related Enzymes (GLMU2 and HYAD) from Pasteurella multocida

Hyaluronic acid (HA) is a linear heteropolysaccharide composed of repeating units of D-glucuronic acid and N-acetyl-D-glucosamine. HA has various useful properties: it maintains skin elasticity and moisture, reduces inflammation, and lubricates the movement of various body parts, all without causing immunogenic allergy. HA is found in several animal tissues as well as in the capsule of some bacteria, including Pasteurella multocida. This study aimed to modify a genome-scale metabolic model of Escherichia coli, using computational simulation and flux analysis, by adding two enzymes (GLMU2 and HYAD) from P. multocida, in order to predict and improve HA production under specified carbon sources and nitrogen supplements. Results revealed that threonine and aspartate supplementation raised HA production by 12.186%. Our analyses suggest that the genome-scale metabolic model is useful for improving HA production and narrows down the number of conditions to be tested experimentally.
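
A minimal sketch of the modelling workflow, assuming the COBRApy toolbox and its bundled E. coli core model: a lumped, hypothetical HA-synthesis reaction (standing in for the GLMU2/HYAD steps, with crude placeholder stoichiometry rather than the real precursor chemistry) is added to the model and its flux maximized.

```python
# Sketch: add a lumped HA-synthesis reaction and maximize its flux.
from cobra import Metabolite, Reaction
from cobra.io import load_model

model = load_model("textbook")  # E. coli core model bundled with COBRApy

ha = Metabolite("ha_c", name="hyaluronic acid unit", compartment="c")
rxn = Reaction("HA_SYNTH")      # hypothetical lumped reaction, not real stoichiometry
rxn.add_metabolites({
    model.metabolites.get_by_id("g6p_c"): -2,   # crude precursor proxy
    model.metabolites.get_by_id("atp_c"): -1,
    model.metabolites.get_by_id("adp_c"): 1,
    ha: 1,
})
model.add_reactions([rxn])
model.add_boundary(ha, type="demand")           # DM_ha_c sink

model.objective = "HA_SYNTH"
solution = model.optimize()
print("max HA flux:", solution.objective_value)
```

In the paper's setting, the added reactions would carry the actual GLMU2 and HYAD stoichiometries, and carbon/nitrogen conditions would be varied through the exchange-reaction bounds.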

The Profit Trend of Cosmetics Products Using Bootstrap Edgeworth Approximation

Edgeworth approximation is one of the most important statistical methods, contributing considerably to reducing the sum of the standard deviations of the coefficients of the independent variables in a quantile regression model, which estimates the conditional median or other quantiles. In this paper, we apply these approximating statistical methods to an economic problem. We created and estimated a quantile regression model to see how the profit gained is connected with the realized sales of cosmetic products, using real data taken from a local business. The linear regression of the generated profit on the realized sales was not free of autocorrelation and heteroscedasticity, which is why we used quantile regression instead. Our aim is to analyze in more detail the relation between the variables under study, the profit and the finalized sales, and to minimize the standard errors of the independent variable involved, the level of realized sales. The statistical methods we apply are the Edgeworth approximation for the independent and identically distributed (IID) case, the bootstrap version of the model, and the Edgeworth approximation for the bootstrap quantile regression model. The graphics and results presented here identify the best approximating model for our study.
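
A minimal sketch of the core computation, assuming statsmodels: median regression of profit on sales, followed by a pairs bootstrap of the slope as in the bootstrap quantile regression step. The data here are synthetic stand-ins for the business data.

```python
# Median regression plus a pairs bootstrap of the slope's standard error.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
sales = rng.uniform(10, 100, 200)
profit = 5 + 0.3 * sales + rng.standard_t(df=3, size=200)  # heavy-tailed noise

X = sm.add_constant(sales)
fit = sm.QuantReg(profit, X).fit(q=0.5)
print(fit.params)                       # intercept and slope at the median

boot = []
for _ in range(1000):                   # resample (x, y) pairs with replacement
    idx = rng.integers(0, len(sales), len(sales))
    boot.append(sm.QuantReg(profit[idx], X[idx]).fit(q=0.5).params[1])
print("bootstrap SE of slope:", np.std(boot, ddof=1))
```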

The SOCI Strategy as a Method to Meet the Innovation Challenges of COVID-19

COVID-19 has caused a worldwide crisis and has an impact on every dimension of the economy. Organizations with the ability to adapt to new developments, and which innovate solutions for the disrupted world during and after the corona crisis, have the opportunity not only to survive the crisis but to use new trends to implement new business models and gain an advantage. In this context, startups seem better positioned to manage the corona crisis through their innovation-based nature. The main result of this paper is the understanding that, by applying a startup-oriented innovation (SOCI) strategy, established companies can be motivated to meet the challenge of COVID-19 in a way similar to startups. This result is reached by describing the role of innovation and of a SOCI strategy as helpful methods for organizations to meet the coming challenges during and after the COVID-19 pandemic. In addition, this paper presents a practical application of SOCI through the PANDA approach of the Fresenius University of Applied Sciences in Germany and discusses it in the context of COVID-19 as an exemplary, successful real-world implementation of a SOCI strategy.

A Highly Sensitive Dip Strip for Detection of Phosphate in Water

Phosphorus is an essential nutrient for plant life and is most frequently found in water as phosphate. Once phosphate becomes abundant in surface water, a series of adverse effects on an ecosystem can be initiated. A portable and reliable method is therefore needed to monitor phosphate concentrations in the field. In this paper, an inexpensive dip strip device is developed for the detection of low concentrations of phosphate in water, with the ascorbic acid/antimony reagent dried on blotting paper and combined with wet chemistry. Ammonium molybdate and sulfuric acid are stored separately in liquid form, which significantly improves the lifetime of the device and enhances the reproducibility of its performance. The limits of detection and quantification of the optimized device are 0.134 ppm and 0.472 ppm of phosphate in water, respectively. The device's shelf life, storage conditions, and limit of detection are superior to those previously reported for paper-based phosphate detection devices.
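
For context, the conventional calibration-curve route to such limits (LOD = 3.3σ/S and LOQ = 10σ/S, with σ the residual standard deviation of the calibration fit and S its slope) can be sketched as follows; the calibration points below are illustrative, not the paper's measurements.

```python
# LOD/LOQ from a linear calibration curve (ICH-style 3.3-sigma / 10-sigma rule).
import numpy as np

conc = np.array([0.0, 0.2, 0.5, 1.0, 2.0, 4.0])          # ppm phosphate
signal = np.array([0.02, 0.10, 0.22, 0.43, 0.85, 1.66])  # strip color intensity

slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
sigma = residuals.std(ddof=2)            # residual sd (two fitted parameters)

lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
print(f"LOD = {lod:.3f} ppm, LOQ = {loq:.3f} ppm")
```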

Exploring the Perspective of Service Quality in mHealth Services during the COVID-19 Pandemic

COVID-19 has had a significant impact on all sectors of society globally. Health information technology (HIT) has become an effective health strategy in this age of distancing, and Mobile Health (mHealth) in particular plays a critical role in managing patient and provider workflows during the COVID-19 pandemic. Users' perceptions of mHealth service quality therefore play a significant role in shaping their confidence and subsequent behavior, including their intention to use mHealth. The objective of this study was to explore, through qualitative methods, how health practitioners and patients are satisfied or dissatisfied with mHealth services, and to analyze users' intentions in the context of Taiwan during the COVID-19 pandemic. The study uses in-depth, semi-structured interviews that investigate participants' perceptions and experiences and the meanings they attribute to them; open-ended questions, usually in one-to-one interviews, encourage participants to discuss issues related to the research question. The five cases consist of health practitioners, clinic staff, and patients with experience of using mHealth services. The findings show both positive and negative attributes of mHealth service quality; the issues that matter most to patients and health practitioners fall along three dimensions of perceived service quality: system quality, information quality, and interaction quality. A concept map of perceptions regarding users' intention to use mHealth services in emergencies is depicted. The findings reveal that users pay more attention to "Medical care", "ease of use", and "utilitarian benefits", and attach less importance to "Admissions and Convenience" and "Social influence". To improve mHealth services, mHealth providers and health practitioners should better manage users' experiences. This research contributes to the understanding of service quality issues in mHealth services during the COVID-19 pandemic.

Modelling of Multi-Agent Systems for the Scheduling of Multi-EV Charging from Power Limited Sources

This paper presents research on, and the application of, model-predictive scheduled charging of electric vehicles (EVs) subject to a limited available power resource. To focus on the algorithmic and operational characteristics, the EV interface to the source is modelled as a battery state equation during the charging operation. The methods allow for priority scheduling of EV charging in a multi-vehicle regime under limited source power availability, and the priority attribution for each connected EV is described. The validity of the developed methodology is shown through simulation of different charging scenarios for multiple connected EVs, including non-scheduled and scheduled operation with various numbers of vehicles. The performance of the developed algorithms is also reported, with recommendations for the choice of suitable parameters.
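
A minimal sketch of one scheduling step under these assumptions: each EV reduced to a battery state equation, a shared power cap, and a simple priority ordering. The capacities, priorities, charger limits, efficiency, and one-hour time step are invented for illustration, not the paper's parameters.

```python
# Priority-ordered charging allocation under a shared power cap.
from dataclasses import dataclass

@dataclass
class EV:
    name: str
    soc_kwh: float        # current stored energy
    cap_kwh: float        # battery capacity
    max_kw: float         # charger power limit
    priority: float       # higher charges first

def schedule_step(evs, p_avail_kw, dt_h=1.0, eta=0.95):
    """Allocate available power to EVs in priority order for one time step."""
    for ev in sorted(evs, key=lambda e: e.priority, reverse=True):
        need_kw = (ev.cap_kwh - ev.soc_kwh) / (eta * dt_h)  # power to fill up
        p = min(ev.max_kw, need_kw, p_avail_kw)
        ev.soc_kwh += eta * p * dt_h                        # battery state update
        p_avail_kw -= p
    return evs

fleet = [EV("A", 10, 60, 11, 2), EV("B", 40, 60, 11, 1), EV("C", 5, 40, 7, 3)]
for hour in range(4):
    schedule_step(fleet, p_avail_kw=20.0)
print([(ev.name, round(ev.soc_kwh, 1)) for ev in fleet])
```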

The Reproducibility and Repeatability of Modified Likelihood Ratio for Forensic Handwriting Examination

The forensic use of handwriting depends on the analysis, comparison, and evaluation decisions made by forensic document examiners. When biometric technology is used in forensic applications, a likelihood ratio (LR) must be computed to quantify the strength of evidence under two competing hypotheses, the prosecution and defense hypotheses, each entailing a set of assumptions and methods for a given data set. It is therefore important to know how repeatable and reproducible the estimated LR is. This paper evaluates the accuracy and reproducibility of examiners' decisions. Confidence intervals for the estimated LR are presented so as to avoid incorrect estimates that could lead to wrong judgments in a court of law. The LR is fundamentally a Bayesian concept, and we use two LR estimators in this paper: logistic regression (LoR) and kernel density estimation (KDE). The repeatability evaluation was carried out by retesting the initial experiment after an interval of six months to observe whether examiners would repeat their decisions for the estimated LR. The experimental results, based on a handwriting dataset, show that the LR has different confidence intervals in different regions, implying that the LR cannot be estimated with the same certainty everywhere. Although LoR performed better than KDE when tested on the same dataset, the two LR estimators showed a consistent region in which the LR can be estimated confidently. These two findings advance our understanding of the LR when used to compute the strength of evidence in forensic handwriting examination.
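
A sketch of the two estimators on one-dimensional comparison scores, assuming scikit-learn and SciPy; the score distributions are simulated placeholders rather than handwriting data.

```python
# Two LR estimators: KDE density ratio vs. logistic-regression posterior odds.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
same = rng.normal(2.0, 1.0, 500)    # scores under the prosecution hypothesis
diff = rng.normal(0.0, 1.0, 500)    # scores under the defense hypothesis

# KDE estimator: LR(s) = f_same(s) / f_diff(s)
kde_p, kde_d = gaussian_kde(same), gaussian_kde(diff)
def lr_kde(s):
    return kde_p(s) / kde_d(s)

# LoR estimator: posterior odds equal the LR when class priors are balanced
X = np.concatenate([same, diff]).reshape(-1, 1)
y = np.concatenate([np.ones(500), np.zeros(500)])
clf = LogisticRegression().fit(X, y)
def lr_logistic(s):
    p = clf.predict_proba(np.atleast_2d(s).T)[:, 1]
    return p / (1 - p)              # priors balanced by construction above

score = np.array([1.5])
print("KDE LR:", lr_kde(score), "LoR LR:", lr_logistic(score))
```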

An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators

The requirement to maintain data integrity in laboratory operations is critical for regulatory compliance, and automation of procedures reduces the incidence of human error. Quality control laboratories in low-income economies, however, may face barriers in attempting to automate their processes. Since data from quality control tests on pharmaceutical products are used to make regulatory decisions, laboratory reports must be accurate and reliable. Zinc sulphate (ZnSO4) tablets are used to treat diarrhea in the pediatric population and as adjunct therapy in COVID-19 regimens. The zinc content of these formulations is, however, determined titrimetrically, a manual analytical procedure, and the assay for ZnSO4 tablets involves time-consuming steps with mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of the 0.1 M sodium edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets, and for each step in the process, the relevant formulae were entered into two spreadsheets to automate the calculations. Further checks were built into the automated system to ensure the validity of replicate analyses in the titrimetric procedures. Validation was conducted using five data sets of manually computed assay results, and the acceptance criteria set in the protocol were met. Significant p-values (p < 0.05, α = 0.05, at a 95% confidence interval) were obtained from Student's t-test comparisons of the mean values of the manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and the principles of data integrity were enhanced by the use of the validated spreadsheet calculators in titrimetric evaluations of ZnSO4 tablets: human calculation errors were minimized and the assay procedure was completed in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.
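
The two automated calculations reduce to short formulas. A hedged sketch with illustrative figures: the molar masses are standard, the weights and titres are made up, and the real USP procedure includes further corrections (e.g., for average tablet weight) not shown here.

```python
# Sketch of the two spreadsheet calculations: EDTA standardization against a
# zinc standard, and the 1:1 complexometric assay back-calculation.
ZN_MM = 65.38            # g/mol, zinc
ZNSO4_7H2O_MM = 287.56   # g/mol, zinc sulphate heptahydrate

def edta_molarity(zn_mass_g, titre_ml):
    """EDTA complexes Zn 1:1, so mol EDTA = mol Zn at the endpoint."""
    return (zn_mass_g / ZN_MM) / (titre_ml / 1000.0)

def assay_percent(titre_ml, m_edta, expected_g):
    """Percent of expected ZnSO4.7H2O content from a 1:1 titration."""
    found_g = titre_ml / 1000.0 * m_edta * ZNSO4_7H2O_MM
    return 100.0 * found_g / expected_g

m = edta_molarity(zn_mass_g=0.6538, titre_ml=100.0)        # -> 0.1000 M
print(f"EDTA molarity: {m:.4f} M")
print(f"Assay: {assay_percent(9.85, m, expected_g=0.28756):.1f} % of expected")
```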

An Exploratory Study on the Difference between Online and Offline Conformity Behavior among Chinese College Students

Conformity is defined as a member of a social group changing his or her behavior to match that of the others in the group. It has previously been found that people show a higher level of conformity online than offline. However, as anonymity can decrease the level of online conformity, the difference between online and offline conformity among Chinese college students still needs to be tested. In this study, college students (N = 60) were randomly assigned to three groups: a control group, an offline experimental group, and an online experimental group. Comparing the results of the offline and online experimental groups with the Mann-Whitney U test, this study verified the results of Asch's experiment and found that people show a lower level of conformity online than offline, contradicting the previous finding reported in China. These results can explain why some people post vicious remarks and radical ideas on the Internet yet behave normally in real life: the anonymity of the network makes online group pressure weaker than offline pressure, so people are less likely to conform to social norms and public opinion on the Internet. Furthermore, these results support the importance and relevance of online voting, because weaker online group pressure makes it easier for people to express their true ideas, thus gathering more comprehensive and truthful views and opinions.
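
The reported comparison boils down to a two-sample Mann-Whitney U test, sketched below with fabricated placeholder scores (not the study's data), assuming SciPy:

```python
# Mann-Whitney U test comparing offline vs. online conformity counts.
from scipy.stats import mannwhitneyu

offline = [5, 4, 6, 3, 5, 4, 7, 5, 4, 6]   # conforming responses per subject
online  = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]

stat, p = mannwhitneyu(offline, online, alternative="greater")
print(f"U = {stat}, p = {p:.4f}")  # small p: offline conformity is higher
```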

Oakes Test and Proportionality Test: Balance between the Practical Costs of Limiting Rights and the Benefits Arising from the Law

Proportionality analysis, as a test, stands as a basic foundation for the realization of fundamental rights. We used legal dogmatics and empirical analysis to obtain the expected results, based on a reading of the R v Oakes trial before the Supreme Court of Canada. In cases involving freedom of expression, two tests are used to resolve disputes: the first examines whether the case can in fact be characterized as a violation of freedom of expression; the second assesses whether that violation can be justified by the reasonable-limits clause. This test was defined in the R v Oakes trial by the Supreme Court of Canada and became the Oakes Test, used worldwide as a proportionality test. The result is a proportionality between the effects of the limiting measure and its objective: the more serious the harmful effects of a measure, the more important the objective must be.

Geometry-Based Unequal Droplet Splitting Using a Microfluidic Y-Junction

Among different droplet manipulations, controlled droplet splitting is of great significance owing to its ability to increase throughput and operational capability. Furthermore, unequal droplet splitting can provide greater flexibility and a wider range of dilution factors. In this study, we developed two-dimensional, time-dependent complex fluid dynamics simulations to model droplet formation in a flow-focusing device, followed by splitting in a Y-shaped junction with sub-channels of unequal widths. From the results of the numerical study, we correlated the diameters of the droplets in the sub-channels to the Weber number, thereby demarcating the droplet-splitting and non-splitting regimes.
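
For reference, the Weber number used in the correlation is We = ρv²L/σ, the ratio of inertial to interfacial-tension forces. A quick sketch with illustrative property values for a water-in-oil droplet at typical chip scales:

```python
# Weber number of a droplet; all property values are illustrative.
rho = 998.0      # kg/m^3, droplet density (water)
v = 0.05         # m/s, droplet velocity in the channel
d = 100e-6       # m, droplet diameter (characteristic length)
sigma = 5e-3     # N/m, interfacial tension

weber = rho * v**2 * d / sigma
print(f"We = {weber:.4f}")   # We << 1: interfacial tension dominates
```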

Users’ Information Disclosure Determinants in Social Networking Sites: A Systematic Literature Review

The privacy paradox describes the phenomenon whereby stated privacy concerns show no connection to privacy behaviors. We need to understand the underlying reasons for this paradox if we are to help users preserve their privacy more effectively. The Social Networking Site (SNS) domain in particular offers a rich area of investigation due to the risks of unwise information-disclosure decisions. Our study thus aims to untangle the complicated nature and underlying mechanisms of online privacy-related decisions in SNSs. In this paper, we report the findings of a Systematic Literature Review (SLR) that revealed a number of factors likely to influence online privacy decisions. Our deductive analysis approach was informed by Communication Privacy Management (CPM) theory. We uncovered a lack of clarity around privacy attitudes and their link to behaviors, which makes it challenging to design privacy-protecting SNS platforms and to craft legislation ensuring that users' privacy is preserved.

Experimental Observation on Air-Conditioning Using Radiant Chilled Ceiling in Hot Humid Climate

Radiant chilled ceilings (RCC) are perceived to save more energy and provide better thermal comfort than traditional air-conditioning systems. However, their application has been limited for several reasons, e.g., the scarce information about the thermal characteristics of radiant rooms and about the influence of the local climate on system performance. To bridge this gap, an office-like experiment room with an RCC was constructed in the hot and humid climate of Thailand. This paper presents exemplary results from the RCC experiments to give insight into the thermal environment of a radiant room and the cooling load required to maintain the room's comfort condition. It also demonstrates the operation of the RCC system for achieving thermal comfort in offices in a hot and humid climate.

Using Analytical Hierarchy Process and TOPSIS Approaches in Designing a Finite Element Analysis Automation Program

Sophisticated numerical simulations such as finite element analysis (FEA) involve a complicated process, from model setup to post-processing tasks, that requires the replication of time-consuming steps. Utilizing an FEA automation program simplifies the complexity of the steps involved while minimizing human error in analysis setup, calculations, and results processing. One of the main challenges in designing FEA automation programs is identifying user requirements and linking them to possible design alternatives. This paper presents a decision-making framework for designing a Python-based FEA automation program covering modal analysis, frequency response analysis, and random vibration fatigue (RVF) analysis procedures. The analytic hierarchy process (AHP) and the technique for order preference by similarity to ideal solution (TOPSIS) are applied to evaluate design alternatives in light of the feedback received from experts and program users.
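
A compact sketch of the TOPSIS ranking step, assuming NumPy; the decision matrix, weights (e.g., AHP-derived), and criteria senses are placeholders standing in for the expert and user feedback described above.

```python
# Standard TOPSIS: normalize, weight, measure distance to ideal/anti-ideal.
import numpy as np

# rows: design alternatives; columns: criteria scores (placeholders)
X = np.array([[7.0, 9.0, 9.0, 8.0],
              [8.0, 7.0, 8.0, 7.0],
              [9.0, 6.0, 8.0, 9.0],
              [6.0, 7.0, 8.0, 6.0]])
w = np.array([0.3, 0.3, 0.2, 0.2])            # e.g., AHP-derived weights
benefit = np.array([True, True, True, True])  # all benefit criteria here

R = X / np.linalg.norm(X, axis=0)      # vector-normalize each criterion
V = R * w                              # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)
print(np.argsort(closeness)[::-1])     # alternatives ranked best-first
```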

Exploring the Availability and Distribution of Public Green Spaces among Riyadh Residential Neighborhoods

Public green space promotes community health, including daily physical activity, but these resources may be insufficient or inequitably distributed. This paper measures and compares the availability of public green spaces (PGS) among low-, middle-, and high-income neighborhoods in the city of Riyadh. Additionally, it compares the total per-capita availability of PGS with the WHO standard and with Dubai's per-capita PGS availability. All PGS were mapped using geographic information systems, and the total available PGS area was compared with the WHO and Dubai figures. To evaluate differences in PGS availability across low-, medium-, and high-income Riyadh neighborhoods, we used a one-way analysis of variance (ANOVA). Comparing the PGS of Riyadh neighborhoods with the WHO and Dubai figures showed that Riyadh's provision falls below the WHO minimum standard as well as Dubai's level: Riyadh has only 1.13 m2 of PGS per capita. Second, PGS availability differed significantly among Riyadh neighborhoods by socioeconomic status. Future development of PGS should focus on increasing PGS availability, giving priority to low-income and less healthy communities.
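
The reported test is a standard one-way ANOVA across the three income strata. A sketch assuming SciPy, with invented per-neighborhood values (the only figure quoted from the study here is the citywide 1.13 m2 per capita):

```python
# One-way ANOVA on per-capita PGS (m^2) across income strata; data invented.
from scipy.stats import f_oneway

low    = [0.4, 0.6, 0.5, 0.8, 0.3, 0.7]
middle = [0.9, 1.1, 1.0, 1.3, 0.8, 1.2]
high   = [1.5, 1.8, 1.6, 2.0, 1.4, 1.9]

F, p = f_oneway(low, middle, high)
print(f"F = {F:.2f}, p = {p:.4f}")   # small p: availability differs by income
```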

PointNetLK-OBB: A Point Cloud Registration Algorithm with High Accuracy

To improve the registration accuracy between a source point cloud and a template point cloud when the initial relative deflection angle is large, a PointNetLK algorithm combined with an oriented bounding box (PointNetLK-OBB) is proposed. In this algorithm, the OBB of a 3D point cloud is used to represent the macro features of the source and template point clouds. Under the guidance of the iterative closest point algorithm, the OBBs of the source and template point clouds are aligned, producing a mirror-symmetry effect between them. According to the fitting degree of the source and template point clouds, the mirror-symmetry plane is detected, and the optimal rotation and translation of the source point cloud are obtained to complete the 3D point cloud registration task. To verify the effectiveness of the proposed algorithm, a comparative experiment was performed on the publicly available ModelNet40 dataset. The experimental results demonstrate that, compared with PointNetLK, PointNetLK-OBB improves the registration accuracy of the source and template point clouds when the initial relative deflection angle is large, and reduces the sensitivity to the initial relative position of the two clouds. The primary contribution of this paper is the use of PointNetLK to avoid the non-convex problem of traditional point cloud registration, while leveraging the regularity of the OBB to avoid the local-optimum problem in the PointNetLK context.
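
For illustration, the standard PCA recipe for constructing an OBB, which is one common way to obtain the macro feature described above (not necessarily the authors' exact implementation), assuming NumPy:

```python
# PCA-based oriented bounding box of a 3D point cloud.
import numpy as np

def oriented_bounding_box(points):
    """Return center (3,), axes (3x3, rows), and half-extents (3,) of an OBB."""
    center = points.mean(axis=0)
    centered = points - center
    # principal axes from the SVD of the centered cloud (covariance eigenvectors)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt.T                       # coordinates in the box frame
    mins, maxs = proj.min(axis=0), proj.max(axis=0)
    box_center = center + ((mins + maxs) / 2) @ vt
    return box_center, vt, (maxs - mins) / 2

cloud = np.random.default_rng(0).normal(size=(1000, 3)) * [3.0, 1.0, 0.2]
c, axes, half = oriented_bounding_box(cloud)
print("half-extents:", np.round(half, 2))        # ordered by decreasing variance
```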