Abstract: Current server systems are responsible for critical applications running on different infrastructures, such as the cloud, physical machines, and virtual machines. A common challenge these systems face is the variety of hardware faults that may occur, due to high load among other reasons, which translate into errors that cause malfunctions or even server downtime. The hardware components responsible for most of these errors are the CPU, the RAM, and the hard disk drive (HDD). In this work, we investigate selected CPU, RAM, and HDD errors, observed or simulated in kernel ring buffer log files from GNU/Linux servers, and give a severity characterization for each error type. Understanding these errors is crucial for the efficient analysis of kernel logs, which are commonly used for monitoring servers and diagnosing faults. In addition, to support the above analysis, we present possible ways of simulating hardware errors in RAM and HDD, aiming to facilitate the testing of methods for detecting and tackling such issues on a server running GNU/Linux.
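The kind of kernel-log scan this abstract describes can be sketched as follows; the regular expressions and severity labels below are illustrative assumptions, not the paper's actual error taxonomy:

```python
import re

# Hypothetical patterns for common hardware-error markers in dmesg output;
# illustrative only, not an exhaustive or authoritative classification.
ERROR_PATTERNS = [
    (re.compile(r"Machine Check Exception|mce:", re.I), "CPU", "critical"),
    (re.compile(r"EDAC .*\bUE\b", re.I), "RAM", "critical"),  # uncorrectable
    (re.compile(r"EDAC .*\bCE\b", re.I), "RAM", "warning"),   # correctable
    (re.compile(r"ata\d+.*(exception|failed command|I/O error)", re.I),
     "HDD", "critical"),
]

def classify(line):
    """Return (component, severity) for a kernel ring-buffer line, or None."""
    for pattern, component, severity in ERROR_PATTERNS:
        if pattern.search(line):
            return component, severity
    return None

sample = [
    "[ 12.3] mce: [Hardware Error]: CPU 0: Machine Check Exception",
    "[ 99.1] EDAC MC0: 1 CE memory read error on DIMM_A1",
    "[101.7] ata1.00: failed command: READ DMA",
    "[102.0] usb 1-1: new high-speed USB device",
]
for line in sample:
    print(line, "->", classify(line))
```

In practice such a scanner would read `dmesg` or `/var/log/kern.log` output line by line; the patterns would be tuned to the specific kernel drivers in use.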
Abstract: This paper presents a quantitative analysis of the need for automatic calibration methods for digital tachographs. Digital tachographs are mandatory for vehicles used in the transport of people and goods, and they are an important aspect of road safety and inspection. Digital tachographs must be calibrated by workshops so that they display and record speed and odometer values correctly. Calibration of digital tachographs can be performed either manually or automatically. This paper shows that manual calibration of digital tachographs is prone to errors and that there can be differences between manual and automatic calibration parameters. Therefore, automatic calibration methods are imperative for digital tachograph calibration. The presented experimental results and error analysis support these claims by evaluating and statistically comparing manual and automatic calibration methods.
Abstract: Studies on interlanguage have long been engaged in describing the phenomenon of variation in SLA. Pursuing the same goal, and particularly addressing the role of linguistic features, this study describes the use of Persian morphology in the interlanguage of two adult English-speaking learners of Persian as an L2. Taking a combined approach of contrastive analysis, error analysis, and interlanguage analysis, the study focuses on identifying and predicting possible instances of transfer from English L1 to Persian L2 across six elicitation tasks, aiming to investigate whether any contextual features variably influence the learners' order of morpheme accuracy in the areas of copula, possessives, articles, demonstratives, plural forms, personal pronouns, and genitive cases. The results describe the existence of task variation in the interlanguage system of Persian L2 learners.
Abstract: Syntactic parsing is vital for the semantic treatment performed by many applications related to natural language processing (NLP), because form and content coincide in many cases. However, parsing has not yet reached reliable levels of performance. By manually examining and analyzing individual machine translation output errors that involve syntax as well as semantics, this study attempts to discover what is required to improve syntactic and semantic parsing.
Abstract: This research aims to find the causes of wrong lexical selections in machine translation (MT), rather than categorizing lexical errors, which has been the main practice in error analysis. By manually examining and analyzing lexical errors produced by an MT system, it suggests what knowledge would help the system reduce lexical errors.
Abstract: Part-of-speech (POS) tagging has always been a challenging task in natural language processing. This article presents POS tagging for Nepali text using a Hidden Markov Model (HMM) and the Viterbi algorithm. An annotated Nepali corpus is randomly split into training and test data sets, and both methods are applied to them. The Viterbi algorithm is found to be computationally faster and more accurate than the plain HMM approach, achieving an accuracy of 95.43%. An error analysis of the cases where mismatches took place is also discussed in detail.
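A minimal sketch of the Viterbi decoding step described above, on a toy two-tag model; the probabilities here are invented for illustration, whereas a real Nepali tagger would estimate them from the annotated corpus:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state (tag) sequence for obs under an HMM."""
    # V[t][s]: probability of the best path ending in state s at time t.
    V = [{s: start_p[s] * emit_p[s].get(obs[0], 0.0) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s].get(obs[t], 0.0), p)
                for p in states)
            V[t][s] = prob
            back[t][s] = prev
    # Trace the best path backwards from the most probable final state.
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Toy model: two tags, hand-picked probabilities.
states = ["NOUN", "VERB"]
start_p = {"NOUN": 0.7, "VERB": 0.3}
trans_p = {"NOUN": {"NOUN": 0.4, "VERB": 0.6},
           "VERB": {"NOUN": 0.7, "VERB": 0.3}}
emit_p = {"NOUN": {"dog": 0.6, "barks": 0.1},
          "VERB": {"dog": 0.05, "barks": 0.7}}

print(viterbi(["dog", "barks"], states, start_p, trans_p, emit_p))
# -> ['NOUN', 'VERB']
```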
Abstract: Hall sensors are widely used to measure rotation angle. When the Hall voltage is measured for linear displacement, it is converted to angular displacement using the arctangent function, which requires a large lookup table. In this paper, a lookup table reduction technique for angle measurement is presented. When the input to the lookup table is small, within a certain threshold, the outputs change relatively little with respect to changes in the inputs; thus several inputs can share the same output, which significantly reduces the lookup table size. An error analysis was also performed, and the threshold was determined so as to keep the error below 1°. When the Hall voltage has 11-bit resolution, the lookup table size is reduced from 1,024 samples to 279 samples.
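The compression idea, letting runs of neighbouring inputs share one stored angle while the error stays below 1°, can be sketched as a greedy merge. This is a simplified reconstruction under assumed parameters, so the resulting table size differs from the paper's 279 samples:

```python
import bisect
import math

def build_reduced_lut(n_bits=11, max_err_deg=1.0):
    """Greedy arctangent-LUT compression: a new entry is stored only when
    the true angle drifts more than max_err_deg from the last stored angle,
    so runs of neighbouring inputs share one output."""
    n = 2 ** n_bits
    # Full table: atan of normalized inputs in [0, 1), in degrees.
    full = [math.degrees(math.atan(i / n)) for i in range(n)]
    lut = []  # entries: (first_input_index, shared_angle_deg)
    for i, angle in enumerate(full):
        if not lut or abs(angle - lut[-1][1]) > max_err_deg:
            lut.append((i, angle))
    return full, lut

full, lut = build_reduced_lut()
starts = [s for s, _ in lut]

def lookup(i):
    """Angle for input index i, read from the reduced table."""
    return lut[bisect.bisect_right(starts, i) - 1][1]

# Worst-case error of the compressed table over all inputs.
worst = max(abs(full[i] - lookup(i)) for i in range(len(full)))
print(len(full), "->", len(lut), "entries, worst error", round(worst, 3), "deg")
```

By construction the worst-case error never exceeds the 1° bound, since a new entry is opened exactly when the drift from the last stored angle would exceed it.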
Abstract: Introduction: Building a better safety culture, together with methods of error analysis and preventive measures, starts with an understanding of how human factors engineering applies to remote microscopic diagnosis in surgery, and especially in organ transplantation for the remote evaluation of grafts. It has been estimated that even in well-organized transplant systems an average of 8% to 14% of the grafts (G) that arrive at the recipient hospitals may be considered diseased, injured, damaged, or improper for transplantation. Digital microscopy adds information at the microscopic level about the grafts in Organ Transplantation (OT) and may lead to a change in their management; such a method would reduce the possibility that a diseased G arrives at the recipient hospital for implantation. Aim: To study the ergonomics of Digital Microscopy (DM), based on virtual slides, on Telemedicine Systems (TS) for the Tele-Pathological Evaluation (TPE) of grafts in OT. Material and Methods: By experimental simulation, the ergonomics of DM for the microscopic TPE of Renal Graft (RG), Liver Graft (LG), and Pancreatic Graft (PG) tissues is analyzed, applying a Virtual Slide (VS) system for graft tissue image capture and for the remote diagnosis of possible microscopic inflammatory and/or neoplastic lesions. Experimentation included: a. development of an Experimental Telemedicine System (Exp.-TS) similar to an OTE-TS; b. simulation of the integration of TS with the VS-based microscopic TPE of RG, LG, and PG applying DM. The simulation of DM-based TPE was performed by 2 specialists on a total of 238 human RG, 172 LG, and 108 PG digital microscopic tissue images, examined for inflammatory and neoplastic lesions on the electronic spaces (ES) of the four TS used. Results: Statistical analysis of the specialists' answers about the ability to accurately diagnose the diseased RG, LG, and PG tissues on the electronic space of the four TS (A, B, C, D) showed that DM on TS for TPE in OT performs best on the ES of a desktop, followed by the ES of the applied Exp.-TS. Tablet and mobile-phone ES seem significantly risky for the application of DM in OT (p
Abstract: Pulmonary function tests are important non-invasive diagnostic tests to assess respiratory impairments and provide quantifiable measures of lung function. Spirometry is the most frequently used measure of lung function and plays an essential role in the diagnosis and management of pulmonary diseases. However, the test requires considerable patient effort and cooperation, markedly related to the age of the patients, resulting in incomplete data sets. This paper presents a nonlinear model built using Multivariate Adaptive Regression Splines (MARS) and a Random Forest (RF) regression model to predict the missing spirometric features. Random-forest-based feature selection is used to enhance both the generalization capability and the interpretability of the model. In the present study, flow-volume data were recorded for N = 198 subjects. The ranked feature importance index calculated by the random forest model shows that the spirometric features FVC, FEF25, PEF, FEF25-75, FEF50 and the demographic parameter height are the important descriptors. A comparison of the performance of both models shows that the prediction ability of MARS with the top two ranked features, namely FVC and FEF25, is higher, yielding model fits of R2 = 0.96 and R2 = 0.99 for normal and abnormal subjects, respectively. The root mean square error analysis of the RF model and the MARS model also shows that the latter is capable of predicting the missing values of FEV1 with notably lower error values of 0.0191 (normal subjects) and 0.0106 (abnormal subjects) using the aforementioned input features. It is concluded that combining feature selection with a prediction model provides a minimum subset of predominant features to train the model, as well as better prediction performance. This analysis can assist clinicians with an intelligent support system for medical diagnosis and the improvement of clinical care.
Abstract: The paper presents combined automatic speech recognition (ASR) of English and machine translation (MT) for the English-Croatian and Croatian-English language pairs in the domain of business correspondence. The first part presents the results of training a commercial ASR system on English data sets, enriched by error analysis. The second part presents the results of machine translation performed by a free online tool for the English-Croatian and Croatian-English language pairs. Human evaluation in terms of usability is conducted, and internal consistency is calculated by Cronbach's alpha coefficient, enriched by error analysis. Automatic evaluation is performed with the WER (Word Error Rate) and PER (Position-independent word Error Rate) metrics, followed by an investigation of Pearson's correlation with the human evaluation.
Abstract: In this paper, we propose a numerical method for solving fuzzy Fredholm integral equations of the second kind. In this method, a combination of orthonormal Bernstein and Block-Pulse functions is used. In most cases, the proposed method leads to the exact solution. The advantages of the method are illustrated by an example, and an error analysis is carried out.
Abstract: The linguistic competence of Thai university students majoring in Business English was examined in the context of their knowledge of English inflection and of various other linguistic elements. Error analysis was applied to the results of the testing. Error levels in inflection, tense, and other linguistic elements were shown to be significantly high for all noun, verb, and adjective inflections. The findings suggest that students do not gain linguistic competence in their use of English inflection because of interlanguage interference. Implications for curriculum reform and for the treatment of errors in the classroom are discussed.
Abstract: Most college students in Taiwan do not have sufficient English proficiency to express themselves in written English. Teachers spend a lot of time correcting the errors in students' English writing, but the results are not satisfactory. This study aims to use blogs as a teaching and learning tool for written English. Before peer assessment is applied, students should be trained to be good reviewers. The teacher starts the course by posting an error analysis of the students' first English compositions on the blogs as comment models for the students. The students then go through the process of drafting, composing, peer response, and final revision on the blogs. Evaluation questionnaires and interviews will be conducted at the end of the course to assess its impact and the students' perception of it.
Abstract: The paper concerns a special approximation algorithm for the square root of a specific positive integer, built using the property of positive-integer solutions of Pell's equation together with some elementary theorems on matrices. The algorithm is compared with the commonly used Newton's method, and a practical numerical example and error analysis are given. An unexpected special property is found: the number of significant figures of the approximate square root increases by one digit at each step. The method is useful in some settings.
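The underlying idea can be sketched for N = 2 (the paper treats general N together with a matrix formulation): successive solutions of Pell's equation x² − 2y² = 1 yield rational approximations x/y of √2 whose accuracy grows rapidly with each step:

```python
def pell_sqrt_approx(n_steps=5):
    """Approximate sqrt(2) from solutions of Pell's equation x^2 - 2y^2 = 1.
    Starting from the fundamental solution (3, 2), each step applies the
    recurrence (x, y) -> (3x + 4y, 2x + 3y), which maps solutions to
    solutions; the ratio x/y converges to sqrt(2)."""
    x, y = 3, 2
    approximations = []
    for _ in range(n_steps):
        assert x * x - 2 * y * y == 1  # every (x, y) solves Pell's equation
        approximations.append(x / y)
        x, y = 3 * x + 4 * y, 2 * x + 3 * y
    return approximations

for a in pell_sqrt_approx():
    print(a)
```

The successive ratios 3/2, 17/12, 99/70, ... gain accuracy at each step, illustrating the growth in significant figures that the abstract mentions.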
Abstract: This paper investigates the suitability of Latin Hypercube Sampling (LHS) for composite electric power system reliability analysis. Each sample generated in LHS is mapped into an equivalent system state and used for evaluating the annualized system and load point indices. A DC load-flow-based state evaluation model is solved for each sampled contingency state. The indices evaluated are loss of load probability, loss of load expectation, expected demand not served, and expected energy not supplied. The application of LHS is illustrated through case studies carried out on the RBTS and IEEE-RTS test systems. The results obtained are compared with non-sequential Monte Carlo simulation and with state enumeration analytical approaches. An error analysis is also carried out to check the ability of the LHS method to capture the distributions of the reliability indices. It is found that the LHS approach estimates indices nearer to the actual values and gives tighter bounds on the indices than non-sequential Monte Carlo simulation.
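The stratified sampling at the core of LHS can be sketched as follows; this is the generic textbook construction on the unit hypercube, not the paper's power-system-specific implementation:

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Latin Hypercube sample on [0,1)^d: each dimension's range is split
    into n_samples equal strata, and each stratum is sampled exactly once
    (in a randomly shuffled order), guaranteeing full marginal coverage."""
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)  # pair strata with sample rows at random
        for i in range(n_samples):
            # One uniform draw inside the assigned stratum.
            samples[i][d] = (strata[i] + rng.random()) / n_samples
    return samples

for row in latin_hypercube(5, 2):
    print(row)
```

In a reliability study, each coordinate would then be mapped through the inverse CDF of the corresponding component's outage distribution to obtain a system state.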
Abstract: The direct synthesis of dimethyl ether (DME) from syngas in slurry reactors is considered promising because of its advantages in heat transfer. In this paper, the influences of operating conditions (temperature, pressure, and weight hourly space velocity) on the conversion of CO and the selectivity of DME and methanol were studied in a stirred autoclave over a Cu-Zn-Al-Zr slurry catalyst, which is far more suitable for the liquid-phase DME synthesis process than commercial bifunctional catalysts. A Langmuir-Hinshelwood-type global kinetics model for liquid-phase direct DME synthesis, based on methanol synthesis models and a methanol dehydration model, has been investigated by fitting our experimental data. The model parameters were estimated with a MATLAB program based on Genetic Algorithms and the Levenberg-Marquardt method; the model fits the experimental data well, and its reliability was verified by statistical tests and residual error analysis.
Abstract: The performance of high-resolution schemes is investigated for unsteady, inviscid, and compressible multiphase flows. An Eulerian diffuse interface approach has been chosen for the simulation of multicomponent flow problems. The reduced five-equation and seven-equation models are used with the HLL and HLLC approximations. The authors demonstrate the advantages and disadvantages of both the seven-equation and five-equation models by studying their performance with the HLL and HLLC algorithms on a simple test case. The seven-equation model is based on the two-pressure, two-velocity concept of Baer–Nunziato [10], while the five-equation model is based on a mixture velocity and pressure. Numerical evaluations of the two variants of Riemann solvers have been conducted for the classical one-dimensional air-water shock tube and compared with the analytical solution for error analysis.
Abstract: An electrical apparatus for measuring moisture content was developed in our laboratory; it exploits the dependence of electrical properties on the water content of the studied material. An error analysis of the apparatus was carried out by measuring different volumes of water in a simplified specimen, i.e. a hollow plexiglass block, in order to avoid as many side effects as possible. The obtained data were processed using both basic and advanced statistics, and the results were compared with each other. The influence of water content on the accuracy of the measured data was studied, as well as the influence of variations in the arrangement of the apparatus and in the actual methodology of its use. The overall coefficient of variation was 4%. No trend was found in the dependence of the error on water content. Comparison with current surveys led to the conclusion that the studied apparatus can be used for indirect measurement of water content in porous materials, with an expectable error and under known conditions. Actual experiments with porous materials are not included here, but are currently under investigation.
Abstract: The selection of a particular type of mustard plant for plantation depends on its productivity (pod yield) at maturity. The growth of a mustard plant depends on parameters of the plant such as shoot length, number of leaves, number of roots, and root length. As the plant grows, some leaves may fall and new leaves may appear, so the leaf count cannot be used to develop a relationship with the seed weight at the mature stage. It is also not possible to count the roots or measure the root length of a growing mustard plant without harming it, as the roots reach ever deeper into the soil. Only the shoot length, which increases over time, can be measured at different time instants. Weather parameters such as maximum and minimum humidity, rainfall, and maximum and minimum temperature may affect the growth of the plant, and pollution, water, soil, distance, and crop management may also be dominant factors in growth and productivity. Considering all these parameters, the growth of the plant is highly uncertain, so a fuzzy environment can be adopted to predict the shoot length at maturity. Here an effort has been made to fuzzify the original data using Gaussian, triangular, s-, trapezoidal, and L-membership functions. All the fuzzified data are then defuzzified to recover their normal form. Finally, an error analysis (calculation of the forecasting error and the average error) indicates which membership function is appropriate for fuzzification of the data and for predicting the shoot length at maturity. The result is also verified using residual analysis (absolute residual, maximum absolute residual, mean absolute residual, mean of mean absolute residual, median of absolute residual, and standard deviation).
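The fuzzify/defuzzify round trip described above can be sketched for one of the listed membership functions, the triangular one, with centroid defuzzification; the parameter values here are illustrative, not those fitted to the mustard data:

```python
def triangular(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)  # rising edge
    return (c - x) / (c - b)      # falling edge

def defuzzify_centroid(memberships):
    """Centroid defuzzification of (value, membership) pairs back to a
    crisp value; returns 0.0 for an empty fuzzy set."""
    num = sum(v * m for v, m in memberships)
    den = sum(m for _, m in memberships)
    return num / den if den else 0.0

# Example: shoot lengths (cm, hypothetical) fuzzified against a set
# centred on 5 cm, then defuzzified to a single crisp value.
data = [2.5, 4.0, 5.0, 7.5]
fuzzified = [(x, triangular(x, 0, 5, 10)) for x in data]
print(fuzzified)
print(defuzzify_centroid(fuzzified))
```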
Abstract: This paper proposes an efficient finite-precision block floating point (BFP) treatment of the fixed-coefficient finite impulse response (FIR) digital filter. The treatment includes effective implementation of all three forms of the conventional FIR filter, namely direct form, cascaded, and parallel, and a roundoff error analysis of them in the BFP format. An effective block formatting algorithm, together with an adaptive scaling factor, is proposed to make the realizations simpler from a hardware point of view. To this end, a generic relation between the tap weight vector length and the input block length is deduced. The implementation scheme also emphasises a simple block exponent update technique to prevent overflow even during the block-to-block transition phase. The roundoff noise is investigated along analogous lines, taking these implementation issues into consideration. The simulation results show that the BFP roundoff errors depend on the signal level in almost the same way as floating point roundoff noise, resulting in an approximately constant signal-to-noise ratio over a relatively large dynamic range.
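The block-formatting step, one shared exponent per block of samples chosen from the block's largest magnitude, can be sketched as follows; the exponent choice and mantissa word length here are illustrative assumptions, not the paper's exact algorithm or its adaptive scaling factor:

```python
import math

def block_format(block, mantissa_bits=15):
    """Represent a block of samples in BFP form: one shared exponent,
    chosen so every mantissa lies in [-1, 1), plus quantized mantissas."""
    peak = max(abs(x) for x in block)
    # Shared exponent from the largest magnitude in the block
    # (never negative in this sketch, so small blocks are not amplified).
    exp = max(0, math.ceil(math.log2(peak))) if peak > 0 else 0
    scale = 2 ** exp
    q = 2 ** mantissa_bits
    # Quantize each mantissa to mantissa_bits fractional bits.
    mantissas = [round(x / scale * q) / q for x in block]
    return exp, mantissas

def block_restore(exp, mantissas):
    """Reconstruct the sample values from the BFP representation."""
    return [m * (2 ** exp) for m in mantissas]

exp, mant = block_format([0.5, -3.2, 1.7])
print(exp, mant, block_restore(exp, mant))
```

Because the whole block shares one exponent, only the mantissas need per-sample storage, which is the hardware simplification the abstract refers to.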