Abstract: This study compresses true color images with compression algorithms in different color spaces to achieve high compression ratios. High compression ratios are needed to save storage space. A secondary aim is to rank compression algorithms within a suitable color space. The dataset is a sequence of true color images of size 128 x 128. The Haar wavelet, one of the best-known wavelet transforms, has great potential and maintains the image quality of color images. The Haar wavelet transform with the Set Partitioning in Hierarchical Trees (SPIHT) algorithm is applied in different color spaces to compress sequences of images taken at different angles. Embedded Zerotrees of Wavelets (EZW) is a powerful standard method for sequence data. The proposed compression framework, combining the Haar wavelet, the XYZ color space, the morphological gradient, and EZW compression, achieves an improvement over other methods in terms of the Compression Ratio, Mean Square Error, Peak Signal-to-Noise Ratio, and Bits Per Pixel quality measures.
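The four quality measures named in this abstract have standard definitions, which can be sketched as follows (assuming 8-bit images and sizes given in bytes):

```python
import numpy as np

def mse(ref, img):
    """Mean Square Error between two images."""
    return float(np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2))

def psnr(ref, img, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB (infinite for identical images)."""
    e = mse(ref, img)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)

def compression_ratio(original_bytes, compressed_bytes):
    """CR: how many times smaller the compressed stream is."""
    return original_bytes / compressed_bytes

def bits_per_pixel(compressed_bytes, width, height):
    """BPP: compressed size spread over the pixel grid."""
    return compressed_bytes * 8 / (width * height)
```

For example, a 128 x 128 RGB image (49152 bytes) compressed to 6144 bytes gives CR = 8 and BPP = 3.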
Abstract: Quality measurement and reporting systems are used in healthcare internationally. In Australia, the Australian Council on Healthcare Standards records and reports hundreds of clinical indicators (CIs) nationally across the healthcare system. These CIs are measures of performance in the clinical setting, and are used as a screening tool to help assess whether a standard of care is being met. Existing analysis and reporting of these CIs incorporate Bayesian methods to address sampling variation; however, such assessments are retrospective in nature, reporting upon the previous six or twelve months of data. The use of Bayesian methods within statistical process control for monitoring systems is an important pursuit to support more timely decision-making. Our research has developed and assessed a new graphical monitoring tool, similar to a control chart, based on the beta-binomial posterior predictive (BBPP) distribution to facilitate the real-time assessment of health care organizational performance via CIs. The BBPP charts have been compared with the traditional Bernoulli CUSUM (BC) chart by simulation. The more traditional “central” and “highest posterior density” (HPD) interval approaches were each considered to define the limits, and the multiple charts were compared via in-control and out-of-control average run lengths (ARLs), assuming that the parameter representing the underlying CI rate (proportion of cases with an event of interest) required estimation. Preliminary results have identified that the BBPP chart with HPD-based control limits provides better out-of-control run length performance than the central interval-based and BC charts. Further, the BC chart’s performance may be improved by using Bayesian parameter estimation of the underlying CI rate.
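As a rough illustration of the BBPP idea (a sketch of the concept, not the authors' exact chart), the following derives HPD-style limits from the beta-binomial posterior predictive distribution; the prior parameters and counts used are illustrative assumptions.

```python
from math import lgamma, exp

def betabinom_pmf(k, m, a, b):
    """Beta-binomial pmf via log-gamma, avoiding external packages."""
    return exp(
        lgamma(m + 1) - lgamma(k + 1) - lgamma(m - k + 1)
        + lgamma(k + a) + lgamma(m - k + b) - lgamma(m + a + b)
        - lgamma(a) - lgamma(b) + lgamma(a + b)
    )

def bbpp_hpd_limits(a, b, x, n, m, coverage=0.99):
    """Control limits from the beta-binomial posterior predictive (BBPP).

    A Beta(a, b) prior on the CI rate, updated with x events in n past
    cases, gives a Beta(a + x, b + n - x) posterior; the predicted event
    count in the next m cases is then beta-binomial.  The HPD region
    keeps the highest-probability counts until the target coverage is
    reached (a contiguous interval when the pmf is unimodal).
    """
    post_a, post_b = a + x, b + n - x
    pmf = {k: betabinom_pmf(k, m, post_a, post_b) for k in range(m + 1)}
    kept, total = [], 0.0
    for k in sorted(pmf, key=pmf.get, reverse=True):
        kept.append(k)
        total += pmf[k]
        if total >= coverage:
            break
    return min(kept), max(kept)
```

With a uniform Beta(1, 1) prior, 5 events in 100 cases, and a window of 50 future cases, `bbpp_hpd_limits(1, 1, 5, 100, 50)` returns limits bracketing the predictive mode near 3.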
Abstract: Software engineers apply different measures to quantify the quality of software design. These measures consider artifacts developed at the low- or high-level software design phases. The results are used to point to design weaknesses and to indicate design points that have to be restructured. Understanding the relationships among the quality measures, and among the design quality aspects these measures consider, is important for interpreting the impact of a measure of one quality aspect on other, potentially related aspects. In addition, exploring the relationship between quality measures helps to explain the impact of different quality measures on external quality aspects, such as reliability and maintainability. In this paper, we report a replication study that empirically explores the correlation between six well-known and commonly applied design quality measures. These measures consider several quality aspects, including complexity, cohesion, coupling, and inheritance. The results indicate that inheritance measures are weakly correlated with the other measures, whereas complexity, coupling, and cohesion measures are mostly strongly correlated.
Abstract: Spice paprika is a major spice commodity in the European Union (EU), produced locally and imported from non-EU countries, and it has been reported not only for chemical and microbiological contamination but also for fraud. The effective interaction between producers’ quality management practices and government and EU activities is described using the example of spice paprika production and control in Hungary, a leading producer and per capita consumer of spice paprika in Europe. To demonstrate the importance of various contamination factors in the Hungarian production and EU trade of spice paprika, several aspects concerning the food safety of this commodity are presented. Alerts in the Rapid Alert System for Food and Feed (RASFF) of the EU between 2005 and 2013, as well as Hungarian state inspection results on spice paprika in 2004, are discussed, and quality non-compliance claims regarding spice paprika among EU member states are summarized by means of network analysis. Quality assurance measures established along the spice paprika production technology chain at the leading Hungarian spice paprika manufacturer, Kalocsai Fűszerpaprika Zrt., are surveyed, with the main critical control points identified. The structure and operation of the Hungarian state food safety inspection system are described. The concerted performance of the latter two quality management systems illustrates the effective interaction between internal (manufacturer) and external (state) quality control measures.
Abstract: To help the expert validate association rules
extracted from data, several quality measures have been proposed
in the literature. We distinguish two categories: objective and
subjective measures. The first depends on a fixed threshold and on
the quality of the data from which the rules are extracted. The
second consists of providing the expert with tools to explore and
visualize rules during the evaluation step. However, the number of
extracted rules to validate remains high, so manually mining rules
is a very hard task. To solve this problem, we propose, in this
paper, a semi-automatic method to assist the expert during
association rule validation. Our method uses rule-based
classification as follows: (i) we transform association rules into
classification rules (classifiers); (ii) we use the generated
classifiers for data classification; (iii) we visualize association
rules with their classification quality, to give the expert an
overview and to assist them during the validation process.
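Steps (i) and (ii) can be sketched minimally as follows; the rule format, item names, and class labels are hypothetical illustrations, not taken from the paper.

```python
def to_classifier(rule):
    """Step (i): keep only rules whose consequent is a class label."""
    antecedent, consequent = rule
    return (frozenset(antecedent), consequent) if consequent.startswith("class=") else None

def classify(transaction, classifiers, default="class=unknown"):
    """Step (ii): fire the first classifier whose antecedent the transaction contains."""
    items = set(transaction)
    for antecedent, label in classifiers:
        if antecedent <= items:
            return label
    return default

# Hypothetical association rules as (antecedent, consequent) pairs:
rules = [({"bread", "butter"}, "class=buyer"),
         ({"bread"}, "milk"),            # not a class rule; filtered out
         ({"soda"}, "class=casual")]
classifiers = [c for r in rules if (c := to_classifier(r))]
print(classify({"bread", "butter", "jam"}, classifiers))  # class=buyer
```

Step (iii), the visualization that presents each rule with its classification quality to the expert, is a presentation layer on top of these classifiers.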
Abstract: The chemical and physical characteristics of rainwater
harvested from a typical rooftop were studied over the course of
rainfall events. The rainwater samples collected were analyzed for
pH, major ion concentrations, TDS, turbidity, and conductivity. All
the physicochemical constituents fell within the WHO guideline
limits at some points as rainfall progressed, except the pH. All the
components of rainwater quality measured during the study showed
higher concentrations during the early stages of rainfall and
decreased as time progressed. There was a downward trend in pH as
rain progressed, with 18% of the samples recording pH below the WHO
range of 6.5-8.0. It was observed that the iron concentration was
above the WHO threshold value of 0.3 mg/l on occasions of heavy
rain. The results revealed that most physicochemical characteristics
of the rainwater samples were generally below the WHO thresholds; as
such, the rainwater showed satisfactory conditions in terms of
physicochemical constituents.
Abstract: The purpose of this paper is to examine the most
critical and important factors affecting the implementation of
Total Quality Management (TQM) in the construction industry in the
United Arab Emirates. It also examines the project outcome most
affected by implementing TQM. A framework based on the literature
was also proposed. The method used in this paper is a quantitative
study. A survey with a sample of 60 respondents was created and
distributed in a construction company in Abu Dhabi; it includes 15
questions to examine the most critical factor affecting the
implementation of TQM, in addition to the project outcome most
affected by implementing TQM. The survey showed that management
commitment is the most important factor in implementing TQM in a
construction company. It also showed that project cost is the
outcome most affected by the implementation of TQM.
Management commitment is very important for implementing
TQM in any company. If management loses interest in quality,
then everyone in the organization will do so. The success of TQM
depends mostly on the top of the pyramid. Cost is also reduced
and money saved when the project team implements TQM, while if
no quality measures are present within the team, the project will
suffer commercial failure.
Based on the literature, more factors could be examined and added
to the model. In addition, more construction companies could be
surveyed in order to obtain more accurate results. This study
could also be conducted outside the United Arab Emirates for
further enhancement.
Abstract: This paper focuses on the assessment of the air
pollution and morbidity relationship in Tunisia. Air pollution is
measured by ozone air concentration and the morbidity is measured
by the number of respiratory-related restricted activity days during
the 2-week period prior to the interview. Socioeconomic data are also
collected in order to adjust for any confounding covariates. Our
sample is composed of 407 Tunisian respondents; 44.7% are women,
the average age is 35.2, nearly 69% live in a house built after
1980, and 27.8% have reported at least one day of respiratory-related
restricted activity. The model consists of the regression of the
number of respiratory-related restricted activity days on the air
quality measure and the socioeconomic covariates. In order to correct
for zero-inflation and heterogeneity, we estimate several models
(Poisson, negative binomial, zero-inflated Poisson, Poisson hurdle,
negative binomial hurdle, and finite mixture Poisson models).
Bootstrapping and post-stratification techniques are used to
correct for any sample bias. According to the Akaike information
criterion, the negative binomial hurdle model has the best goodness
of fit. The main result indicates that, after adjusting for
socioeconomic data, the ozone concentration increases the probability
of a positive number of restricted activity days.
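The study's model comparison rests on the AIC. As a toy illustration of that criterion (not the study's models, which need a statistics package), the sketch below compares a Poisson fit against a geometric fit, the latter standing in for an overdispersed alternative; both have closed-form maximum-likelihood estimates, and the counts are invented for demonstration.

```python
from math import log, lgamma

def poisson_loglik(counts, lam):
    """Poisson log-likelihood: sum over k of k*ln(lam) - lam - ln(k!)."""
    return sum(k * log(lam) - lam - lgamma(k + 1) for k in counts)

def geometric_loglik(counts, p):
    """Geometric (support 0,1,2,...) log-likelihood: sum of ln(p) + k*ln(1-p)."""
    return sum(log(p) + k * log(1 - p) for k in counts)

def aic(loglik, n_params):
    """Akaike information criterion: 2k - 2*ln(L); lower is better."""
    return 2 * n_params - 2 * loglik

# Illustrative zero-heavy counts of restricted-activity days (invented data):
counts = [0, 0, 0, 0, 0, 1, 0, 2, 0, 0, 3, 0, 0, 1, 7]
lam = sum(counts) / len(counts)    # Poisson MLE is the sample mean
p = 1 / (1 + lam)                  # geometric MLE matching that mean
aic_pois = aic(poisson_loglik(counts, lam), 1)
aic_geom = aic(geometric_loglik(counts, p), 1)
print(aic_pois, aic_geom)          # the overdispersed geometric fit wins here
```

With many zeros and one large count, the Poisson's equal mean and variance penalize it, so the heavier-tailed model attains the lower AIC, the same logic by which the study selects the negative binomial hurdle model.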
Abstract: One of the most essential issues for software products is maintaining their relevance to the dynamics of users’ requirements and expectations. Many studies have been carried out on the quality aspects of software products to overcome these problems. Previous software quality assessment models and metrics have been introduced, each with strengths and limitations. In order to enhance assurance and confidence in software products, certification models have been introduced and developed. From our previous experience in certification exercises and case studies conducted in collaboration with several agencies in Malaysia, the requirement for a user-based software certification approach was identified and found to be in demand. The emergence of social network applications, new development approaches such as agile methods, and other varieties of software on the market have led to the domination of users over software. As software becomes more accessible to the public through internet applications, users are becoming more critical of the quality of the services provided by the software. There are several categories of users in web-based systems, with different interests and perspectives. The classifications and metrics were identified through brainstorming sessions involving researchers, users, and experts in this area. This new paradigm in software quality assessment is the main focus of our research. This paper discusses the classification of users in web-based software system assessment and the associated factors and metrics for quality measurement. The quality model is derived based on the IEEE structure and the FCM model. These developments are beneficial and valuable for overcoming the constraints on, and improving the application of, the software certification model in the future.
Abstract: A number of software quality measurement systems have been implemented over the past few years, but none of them focuses on the telecommunication industry. The software quality measurement system for the telecommunication industry presented here calculates the quality value of measured software and is focused entirely on that industry. Before designing the system, quality factors, quality attributes, and quality metrics were identified based on a literature review and a survey. Then, using the identified quality factors, attributes, and metrics, a quality model for the telecommunication industry was constructed. Each identified quality metric has its own formula. The quality value of the software is measured based on the quality metrics and aggregated by referring to the quality model. The system classifies the quality level of the software based on a Net Satisfaction Index (NSI). It was designed using an object-oriented approach in a web-based environment. Such a software quality measurement system is important to both developers and users in order to produce high-quality software products for the telecommunication industry.
Abstract: Inferring the network structure from time series data
is a hard problem, especially if the time series is short and noisy.
DNA microarrays, a technology for monitoring the mRNA
concentrations of thousands of genes simultaneously, produce
data with exactly these characteristics. In this study we
investigate the influence of the experimental design on the quality
of the result. More precisely, we investigate the influence of two
different types of random single-gene perturbations on the inference
of genetic networks from time series data. To obtain an objective
quality measure for this influence, we simulate gene expression
values with a biologically plausible model of a known network
structure. Within this framework we study the influence of
single-gene knock-outs, as opposed to linearly controlled expression
of single genes, on the quality of the inferred network structure.
Abstract: Mobile agents are a powerful approach to developing distributed systems, since they migrate to hosts on which they have the resources to execute individual tasks. In a dynamic environment like a peer-to-peer network, agents have to be generated frequently and dispatched to the network, so they inevitably consume a certain amount of bandwidth on each link. If too many agents migrate through one or several links at the same time, they introduce excessive transfer overhead on those links; the links become busy and indirectly block network traffic. There is therefore a need for routing algorithms that take traffic load into account. In this paper we seek to create cooperation between a probabilistic assessment of the network traffic situation, according to a quality measure, and the agent's decision about migrating to the next hop, based on decision tree learning algorithms.
Abstract: This paper presents an evaluation of a wavelet-based
digital watermarking technique used to estimate the quality of
video sequences transmitted over an Additive White Gaussian Noise
(AWGN) channel in terms of a classical objective metric, such as
the Peak Signal-to-Noise Ratio (PSNR), without needing the original
video. In this method, a watermark is embedded into the Discrete
Wavelet Transform (DWT) domain of the original video frames
using a quantization method. The degradation of the extracted
watermark can be used to estimate the video quality in terms of
PSNR with good accuracy. We calculated the PSNR of video frames
contaminated with AWGN and compared the values with those
estimated using the watermarking-DWT-based approach. We found
that the calculated and estimated quality measures of the video
frames are highly correlated, suggesting that this method can provide
a good quality measure for video frames transmitted over an AWGN
channel without needing the original video.
Abstract: To model the human visual system (HVS) in the region of interest, we propose a new objective metric evaluation adapted to wavelet foveation-based image compression quality measurement, which exploits a foveation setup filter implementation technique in the DWT domain, based especially on the point and region of fixation of the human eye. This model is then used to predict the visible divergences between an original and compressed image with respect to this region field and yields an adapted and local measure error by removing all peripheral errors. The technique, which we call foveation wavelet visible difference prediction (FWVDP), is demonstrated on a number of noisy images all of which have the same local peak signal to noise ratio (PSNR), but visibly different errors. We show that the FWVDP reliably predicts the fixation areas of interest where error is masked, due to high image contrast, and the areas where the error is visible, due to low image contrast. The paper also suggests ways in which the FWVDP can be used to determine a visually optimal quantization strategy for foveation-based wavelet coefficients and to produce a quantitative local measure of image quality.
Abstract: The paper presents new results of a recent industry
supported research and development study in which an efficient
framework for evaluating practical and meaningful power system
reliability and quality indices was applied. The system-wide
integrated performance indices are capable of addressing and
revealing areas of deficiencies and bottlenecks as well as
redundancies in the composite generation-transmission-demand
structure of large-scale power grids. The technique utilizes a linear
programming formulation, which simulates practical operating
actions and offers a general and comprehensive framework to assess
the harmony and compatibility of generation, transmission and
demand in a power system. Practical applications to a reduced
system model as well as a portion of the Saudi power grid are also
presented in the paper for demonstration purposes.
Abstract: Measuring the quality of image compression is important for image processing applications. In this paper, we propose an objective image quality assessment to measure the quality of gray scale compressed images that correlates well with subjective quality measurement (the mean opinion score, MOS) and takes the least time. The new objective image quality measurement is developed from a few fundamental objective measurements to evaluate the quality of images compressed with JPEG and JPEG2000. The reliability between each fundamental objective measurement and the subjective measurement (MOS) is determined. From the experimental results, we found that the Maximum Difference measurement (MD) and a newly proposed measurement, the Structural Content Laplacian Mean Square Error (SCLMSE), are suitable measurements for evaluating the quality of JPEG2000 and JPEG compressed images, respectively. In addition, the MD and SCLMSE measurements are scaled to make them equivalent to MOS, rating the compressed image quality from 1 to 5 (unacceptable to excellent).
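MD is simply the largest absolute pixel deviation, while the paper's SCLMSE combines structural content with a Laplacian mean square error; the exact combination is defined in the paper, so the sketch below implements only the classical ingredients (assuming the standard textbook definitions of MD, SC, and LMSE).

```python
import numpy as np

def maximum_difference(ref, img):
    """Maximum Difference (MD): the largest absolute pixel deviation."""
    return float(np.max(np.abs(ref.astype(np.float64) - img.astype(np.float64))))

def structural_content(ref, img):
    """Structural Content (SC): ratio of total signal energies."""
    r, i = ref.astype(np.float64), img.astype(np.float64)
    return float(np.sum(r ** 2) / np.sum(i ** 2))

def laplacian_mse(ref, img):
    """Laplacian MSE: error energy weighted toward edges via a 4-neighbour Laplacian."""
    def lap(x):
        x = x.astype(np.float64)
        return (x[:-2, 1:-1] + x[2:, 1:-1] + x[1:-1, :-2] + x[1:-1, 2:]
                - 4 * x[1:-1, 1:-1])
    lr, li = lap(ref), lap(img)
    return float(np.sum((lr - li) ** 2) / np.sum(lr ** 2))
```

Because LMSE is computed on the Laplacian, it emphasizes how well edges survive compression, which plain MSE ignores.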
Abstract: This paper presents findings from an evaluation study carried out to review the UAE national ID card software. The paper consults the relevant literature to explain many of the concepts and frameworks discussed herein. The findings of the evaluation work, which was primarily based on the ISO 9126 standard for system quality measurement, highlight many practical areas that, if taken into account, are argued to be likely to increase the chances of success of similar system implementation projects.
Abstract: In this paper, we present a novel objective non-reference performance assessment algorithm for image fusion. It takes into account local measurements to estimate how well the important information in the source images is represented by the fused image. The metric is based on the Universal Image Quality Index and uses the similarity between blocks of pixels in the input images and the fused image as the weighting factor for the metric. Experimental results confirm that the values of the proposed metric correlate well with the subjective quality of the fused images, giving a significant improvement over standard measures based on mean squared error and mutual information.
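The Universal Image Quality Index underlying this metric has a closed form, Q = 4·cov(x, y)·x̄·ȳ / ((σx² + σy²)(x̄² + ȳ²)); the sketch below computes it over a single block (the fusion metric itself applies it block-wise with similarity-based weights, which is omitted here).

```python
import numpy as np

def uiqi(x, y):
    """Universal Image Quality Index over one block: 1 for identical signals,
    -1 for perfectly anti-correlated ones; combines correlation loss,
    luminance distortion, and contrast distortion in a single number."""
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```

In practice the index is evaluated on small sliding windows (8x8 in the original formulation) and the per-window values are pooled, which is where the fusion metric inserts its block-similarity weights.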
Abstract: We present a general comparison of punctual kriging based image restoration for different neighbourhood sizes. The technique under consideration is based on punctual kriging and fuzzy concepts for image restoration in the spatial domain. Three different neighbourhood windows are considered for estimating the semivariance at different lags, in order to study their effect on reducing the negative weights that arise in punctual kriging and, consequently, on the restoration of degraded images. Our results show that the effect of neighbourhood sizes larger than 5x5 on the reduction of negative weights is insignificant. In addition, image quality measures such as structural similarity indices, peak signal-to-noise ratios, and new variogram-based quality measures show that a 3x3 window gives better performance compared with larger window sizes.
Abstract: The overriding goal of software engineering is to
provide a high quality system, application, or product. To achieve
this goal, software engineers must apply effective methods coupled
with modern tools within the context of a mature software process
[2]. In addition, it is also necessary to ensure that high quality
is realized. Although many quality measures can be collected at the
project level, the most important measures are errors and defects.
Deriving a quality measure for reusable components has proven to
be a challenging task nowadays. The results obtained from the study
are based on empirical evidence of reuse practices, as emerged from
the analysis of industrial projects. Both large and small companies,
working in a variety of business domains and using object-oriented
and procedural development approaches, contributed to this study.
This paper proposes a quality metric that provides benefits at both
the project and process levels, namely defect removal efficiency
(DRE).
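The abstract does not spell out the formula, but DRE is conventionally defined as E / (E + D), where E is the number of errors found before delivery and D the number of defects found after delivery; the sketch below assumes that standard definition.

```python
def defect_removal_efficiency(errors_before, defects_after):
    """DRE = E / (E + D): the fraction of all problems caught before delivery.
    A value of 1.0 means no defects escaped to the customer."""
    return errors_before / (errors_before + defects_after)

# 90 errors caught in review/test, 10 defects reported after release:
print(defect_removal_efficiency(90, 10))  # 0.9
```

DRE works at both levels the abstract mentions: at the project level E and D count across the whole product, while at the process level the same ratio applied per development phase measures each phase's filtering efficiency.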