Abstract: Combating climate change is becoming a hot topic across various sectors. The building construction and infrastructure sectors contribute a significant proportion of the waste and greenhouse gas (GHG) emissions in many countries and cities. However, there is little research on the micro-level of waste management, "building construction material wastage management," and fewer reviews of regulatory control in the building construction sector. This paper focuses on the potential and importance of material wastage management and reviews the deficiencies of current standards in addressing the reduction of material wastage in a systematic and quantitative way.
Abstract: The patenting of inventions is the result of an organized effort to achieve technological improvement and its consequent positive impact on the population's standard of living. Technology exports, whether of high-tech goods or of Information and Communication Technology (ICT) services, represent the level of acceptance that world markets have of the technology acquired or developed by a country, in either public or private settings. A quantitative measure of the above variables is expected to have a positive and relevant impact on the level of economic development of countries, measured on this first occasion through their level of Gross Domestic Product (GDP). In that sense, it explains not only the performance of an economy but also the differences between nations. We present an econometric model that seeks to explain the differences between the GDP levels of 178 countries through their different performance in the outputs of the technological production process. We take the variables of patenting, ICT exports and high-technology exports as results of the innovation process. This model achieves an explanatory power for four annual cross-sections (2000, 2005, 2010 and 2015) equivalent to an adjusted R² of 0.91, 0.87, 0.91 and 0.96, respectively.
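The fit statistic reported above comes from ordinary least squares; the sketch below shows how the adjusted R² follows from the plain R². The data are entirely synthetic stand-ins for the 178-country dataset, and the coefficient values are invented for illustration.

```python
import numpy as np

# Hypothetical illustration: regress a GDP proxy on three innovation
# outputs (patents, ICT exports, high-tech exports). All data are synthetic.
rng = np.random.default_rng(0)
n, k = 178, 3                      # 178 countries, 3 regressors
X = rng.normal(size=(n, k))
y = 2.0 + X @ np.array([0.8, 0.5, 0.3]) + rng.normal(scale=0.3, size=n)

Xd = np.column_stack([np.ones(n), X])           # add intercept column
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)   # OLS fit
resid = y - Xd @ beta
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)   # adjusted R^2
print(round(r2, 3), round(adj_r2, 3))
```

The adjustment penalizes additional regressors, so the adjusted R² is always at most the plain R² for the same model.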
Abstract: Comparative research has been conducted to determine the content of macro- and microelements in the vegetative and reproductive organs of grass pea and the quality of grass pea seeds, as well as to identify the possibility of grass pea growth on soils contaminated by heavy metals. The experiment was conducted on an agricultural field subjected to contamination from the Non-Ferrous-Metal Works (NFMW) near Plovdiv, Bulgaria. The experimental plots were situated at different distances of 0.5 km and 8 km, respectively, from the source of pollution. On reaching commercial ripeness, the grass pea plants were gathered. The composition of the macro- and microelements in the plant materials (roots, stems, leaves, seeds), and the dry matter, sugar, protein, fat and ash content of the grass pea seeds were determined. Translocation factors (TF) and bioaccumulation factors (BCF) were also determined. The quantitative measurements were carried out through inductively coupled plasma (ICP) spectrometry. The grass pea plant can successfully be grown on soils contaminated by heavy metals. Soil pollution with heavy metals does not affect the quality of the grass pea seeds. The seeds of the grass pea contain significant amounts of nutrients (K, P, Cu, Fe, Mn, Zn) and protein (23.18-29.54%). The distribution of heavy metals in the organs of the grass pea has a selective character, decreasing in the following order: leaves > roots > stems > seeds. BCF and TF values were greater than one, suggesting efficient accumulation in the above-ground parts of the grass pea plant. Grass pea is a plant that is tolerant to heavy metals and can be classified as an accumulator plant. The results provide valuable information about the chemical and nutritional composition of the seeds of grass pea grown on contaminated soils in Bulgaria.
The high content of macro and microelements and the low concentrations of toxic elements in the grass pea grown in contaminated soil make it possible to use the seeds of the grass pea as animal feed.
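The two indices used above can be computed directly from measured concentrations. The abstract does not state its exact formulas, so the sketch below assumes the common phytoremediation definitions (TF as the shoot-to-root concentration ratio, BCF as the shoot-to-soil ratio), with made-up numbers.

```python
# Minimal sketch of the translocation and bioaccumulation indices.
# Definitions (TF = shoot/root, BCF = shoot/soil) and all concentration
# values below are illustrative assumptions, not the paper's data.
def translocation_factor(c_shoot, c_root):
    return c_shoot / c_root

def bioaccumulation_factor(c_shoot, c_soil):
    return c_shoot / c_soil

# Hypothetical Pb concentrations (mg/kg dry weight)
tf = translocation_factor(c_shoot=120.0, c_root=80.0)
bcf = bioaccumulation_factor(c_shoot=120.0, c_soil=95.0)
print(tf > 1 and bcf > 1)   # values above 1 indicate efficient accumulation
```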
Abstract: The analysis of geographic inequality relies heavily on location-enabled statistical data and quantitative measures to present the spatial patterns of selected phenomena and analyze their differences. To protect the privacy of individual records and to link them to administrative units, point-based datasets are spatially aggregated into area-based statistical datasets, where only the overall status for the selected levels of spatial units is used for decision making. The partition of the spatial units thus has a dominant influence on the analyzed results, a phenomenon well known as the Modifiable Areal Unit Problem (MAUP). A new spatial reference framework, the Taiwan Geographical Statistical Classification (TGSC), was recently introduced in Taiwan based on spatial partition principles that consider homogeneity in the number of population and households. Compared with the traditional township units, TGSC provides additional levels of spatial units with finer granularity for presenting spatial phenomena and enables domain experts to select the appropriate dissemination level for publishing statistical data. This paper compares the results of using TGSC and township units, respectively, on mortality data and examines the spatial characteristics of the outcomes. For the mortality data of Taitung County between January 1st, 2008 and December 31st, 2010, the all-cause age-standardized death rate (ASDR) at the township level ranges from 571 to 1,757 per 100,000 persons, whereas the 2nd dissemination area (TGSC) shows greater variation, ranging from 0 to 2,222 per 100,000. The finer granularity of the TGSC spatial units clearly provides better outcomes for identifying and evaluating geographic inequality and can be further analyzed with statistical measures from other perspectives (e.g., population, area, environment).
The management and analysis of the statistical data referring to the TGSC in this research are strongly supported by Geographic Information System (GIS) technology. An integrated workflow is developed that consists of processing death certificates, geocoding street addresses, quality assurance of the geocoded results, automatic calculation of statistical measures, standardized encoding of the measures and geo-visualization of the statistical outcomes. This paper also introduces a set of auxiliary measures from a geographic distribution perspective to further examine the hidden spatial characteristics of the mortality data and justify the analyzed results. With a common statistical area framework like TGSC, the preliminary results demonstrate promising potential for developing a web-based statistical service that can effectively access domain statistical data and present the analyzed outcomes in meaningful ways to avoid flawed decision making.
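The ASDR figures above come from direct age standardization: each age band's crude death rate is weighted by a standard population. The sketch below uses three invented age bands and a made-up standard population; real work would use the WHO or national standard population and the county's actual counts.

```python
# Direct age standardization: ASDR = sum(rate_i * std_pop_i) / sum(std_pop_i),
# expressed per 100,000 persons. All numbers here are synthetic.
deaths  = [2, 10, 40]              # deaths per age band
persons = [5000, 3000, 1000]      # person-years at risk per band
std_pop = [60000, 30000, 10000]   # standard population weights

rates = [d / p for d, p in zip(deaths, persons)]            # crude band rates
asdr = sum(r * w for r, w in zip(rates, std_pop)) / sum(std_pop) * 100_000
print(round(asdr, 1))   # 524.0 deaths per 100,000 for this toy data
```

Because the same weights are applied everywhere, ASDRs from areas with different age structures become directly comparable.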
Abstract: Quantitative measurement of myocardial perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of 99mTc-tetrofosmin in the liver may make it difficult to assess perfusion accurately in the inferior myocardium. Our idea is to reduce the high accumulation in the liver by using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semiconductor detector (Discovery NM 530c; GE Healthcare). Our system acquired list-mode raw data over 10 minutes for a typical patient. From these data, ten SPECT images were reconstructed, one for every minute of acquired data. Reconstruction with the semiconductor detector was based on an implementation of a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females). In each subject, 259 MBq of 99mTc-tetrofosmin was injected intravenously. We performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained from the early projections, during which time the liver accumulation dominates (0.5~2.5-minute SPECT image minus 5~10-minute SPECT image). The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (5~10-minute SPECT image minus liver-only image). The time subtraction of the liver was possible in both the phantom and the clinical study. The visualization of the inferior myocardium was improved. In past reports, the inferior myocardium was considered un-diagnosable when overlapped by high liver accumulation. Using our time subtraction method, the image quality of the 99mTc-tetrofosmin myocardial SPECT image is considerably improved.
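The two subtraction steps above reduce to simple image arithmetic. The sketch below uses invented 2x2 "images" with liver-dominated pixels in the first column; the clipping step (counts cannot go negative) is an assumption, not a detail from the paper.

```python
import numpy as np

# Toy sketch of time subtraction: a liver-only image is approximated by
# subtracting the late frame from the early, liver-dominated frame; that
# estimate is then removed from the late frame. All values are invented.
early = np.array([[10., 1.], [9., 0.]])   # 0.5~2.5 min frame (liver dominates)
late  = np.array([[ 6., 8.], [5., 7.]])   # 5~10 min frame (liver + myocardium)

liver_only = np.clip(early - late, 0, None)       # early minus late
corrected  = np.clip(late - liver_only, 0, None)  # late minus liver estimate
print(corrected)   # liver pixels suppressed, myocardial pixels preserved
```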
Abstract: Comparative research has been conducted to determine the accumulation of heavy metals (Pb, Zn and Cd) in the vegetative and reproductive organs of safflower, and to identify the possibility of its growth on soils contaminated by heavy metals and its efficacy for phytoremediation. The experiment was performed on an agricultural field contaminated by the Non-Ferrous-Metal Works (NFMW) near Plovdiv, Bulgaria. The experimental plots were situated at different distances (0.1, 0.5, 2.0, and 15 km) from the source of pollution. The contents of heavy metals in the plant materials (roots, stems, leaves, seeds) were determined. The quality of the safflower oils (heavy metals and fatty acid composition) was also determined. The quantitative measurements were carried out with inductively coupled plasma (ICP) spectrometry. Safflower is a plant that is tolerant to heavy metals and can be classified as a hyperaccumulator of lead and cadmium and an accumulator of zinc. The plant can be successfully used in the phytoremediation of heavy-metal-contaminated soils. The processing of safflower seeds into oil and the use of the obtained oil would greatly reduce the cost of phytoremediation.
Abstract: The purpose of this study is to evaluate the English
version and a Malay translation of the 21-item Learner Awareness
Questionnaire for their application in assessing student learning in higher
education. The Learner Awareness Questionnaire, originally written
in English, is a quantitative measure of how and why students learn.
The questionnaire gives an indication of the process and motives to
learn using four scales: survival, establishing stability, approval and
loving to learn. Data in the present study came from 680 university
students enrolled in various programmes in Malaysia. The Malay
version of the questionnaire showed a four-factor structure and
internal consistency similar to those of the English version. The four factors of
the Malay version also showed moderate to strong correlations with
those of the English version. The results suggest that the Malay
version of the questionnaire is similar to the English version.
However, further refinement to the questions is needed to strengthen
the correlations between the two questionnaires.
Abstract: This work proposes data-driven, multiscale
quantitative measures to reveal the underlying complexity of
the electroencephalogram (EEG), applied to a rodent model of
hypoxic-ischemic brain injury and recovery. Because real
EEG recordings are nonlinear and non-stationary over different
frequencies or scales, a more suitable approach than
conventional single-scale tools is needed for analyzing EEG data.
Here, we present a new framework of complexity measures
considering changing dynamics over multiple oscillatory scales. The
proposed multiscale complexity is obtained by calculating entropies of
the probability distributions of the intrinsic mode functions extracted
by the empirical mode decomposition (EMD) of EEG. To quantify
EEG recordings of a rat model of hypoxic-ischemic brain injury
following cardiac arrest, the multiscale version of Tsallis entropy is
examined. To validate the proposed complexity measure, actual EEG
recordings from rats (n=9) experiencing 7 min cardiac arrest followed
by resuscitation were analyzed. Experimental results demonstrate that
the use of the multiscale Tsallis entropy leads to better discrimination
of the injury levels and improved correlations with the neurological
deficit evaluation 72 hours after cardiac arrest, thus suggesting an
effective metric as a prognostic tool.
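The measure above rests on the Tsallis entropy of the IMF probability distributions. The minimal sketch below omits the EMD step and just evaluates the Tsallis entropy S_q = (1 - sum(p_i^q)) / (q - 1) of a given distribution; the choice q = 2 is illustrative, not the paper's setting.

```python
import numpy as np

def tsallis_entropy(p, q=2.0):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1) of a distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # ignore zero-probability outcomes
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# Uniform distribution over 4 outcomes, q = 2: S_2 = 1 - 4*(1/4)^2 = 0.75
print(tsallis_entropy([0.25, 0.25, 0.25, 0.25], q=2.0))   # 0.75
```

A sharply peaked distribution gives a lower value, which is what lets the entropy separate regular from complex activity across scales.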
Abstract: A cyclostationary Gaussian linearization method is
formulated for investigating the time-averaged response of nonlinear
systems under combined sinusoidal signal and white noise excitation. The
quantitative measures of the cyclostationary mean, variance, spectrum of
mean amplitude, and mean power spectral density of noise are
analyzed. The qualitative response behavior of stochastic jump and
bifurcation is investigated. The validity of the present approach in
predicting the quantitative and qualitative statistical responses is
supported by Monte Carlo simulations. The present analysis
imposes no restrictive analytical conditions and can be derived
directly by solving nonlinear algebraic equations. The analytical
solution gives reliable quantitative and qualitative prediction of mean
and noise response for the Duffing system subjected to both sinusoidal
signal and white noise excitation.
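The Monte Carlo benchmark mentioned above can be illustrated with a minimal Euler-Maruyama ensemble for a Duffing oscillator under combined sinusoidal and white-noise excitation. All parameter values below are invented, and the scheme is a generic sketch rather than the authors' procedure.

```python
import numpy as np

# Euler-Maruyama ensemble for x'' + c x' + a x + b x^3 = F cos(w t) + noise.
# Parameters, step size and ensemble size are illustrative assumptions.
rng = np.random.default_rng(1)
c, a, b = 0.2, 1.0, 1.0           # damping, linear, cubic stiffness
F, w, D = 0.3, 1.2, 0.05          # forcing amplitude/frequency, noise intensity
dt, steps, paths = 0.01, 5000, 200

x = np.zeros(paths)
v = np.zeros(paths)
t = 0.0
for _ in range(steps):
    dW = rng.normal(scale=np.sqrt(dt), size=paths)   # Brownian increments
    x, v = (x + v * dt,
            v + (-c * v - a * x - b * x**3 + F * np.cos(w * t)) * dt
              + np.sqrt(2 * D) * dW)
    t += dt
print(x.mean(), x.var())   # ensemble mean and variance at the final time
```

Repeating this at many time points over one forcing period yields the cyclostationary mean and variance against which the linearization can be checked.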
Abstract: This paper provides a quantitative measure of
time-varying multiunit activity (MUA), i.e., neuronal spiking, using an
entropy-based approach. To characterize the state embedded in the
activity of a population of neurons, the discrete wavelet transform (DWT)
is used to isolate the inherent spiking activity of the MUA. Due to the
de-correlating property of DWT, the spiking activity would be
preserved while reducing the non-spiking component. By evaluating
the entropy of the wavelet coefficients of the de-noised MUA, a
multiresolution Shannon entropy (MRSE) of the MUA signal is
developed. The proposed entropy was tested in the analysis of both
simulated noisy MUA and actual MUA recorded from the cortex in a rodent
model. Simulation and experimental results demonstrate that the
dynamics of a population can be quantified by using the proposed
entropy.
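The core of such a measure can be sketched in a few lines: a one-level Haar wavelet transform splits a signal into approximation and detail coefficients, and the Shannon entropy of the normalized coefficient energies summarizes the activity at that scale. This is a deliberate simplification of the multiresolution scheme above; the toy signal is invented.

```python
import numpy as np

def haar_level(x):
    """One level of the Haar DWT (input length must be even)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def shannon_entropy(coeffs):
    """Shannon entropy (bits) of the normalized coefficient energies."""
    e = np.asarray(coeffs, dtype=float) ** 2
    p = e / e.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

signal = np.array([1., 1., 1., 1., 5., -3., 2., 0.])   # toy "MUA" trace
approx, detail = haar_level(signal)
print(shannon_entropy(detail))
```

The full method repeats this over several decomposition levels and tracks the entropies over time.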
Abstract: Image fusion is the process in which complementary information from multiple images is integrated to produce a composite image that contains more information than any of the original input images. Medical image fusion combines multimodality medical images to provide additional information to the doctor for better diagnosis of disease. This paper presents a wavelet-based medical image fusion algorithm for different multimodality medical images. To fuse the medical images, the images are decomposed using the Redundant Wavelet Transform (RWT). The high-frequency coefficients are convolved with a morphological operator, followed by the maximum-selection (MS) rule. The low-frequency coefficients are processed by the MS rule. The fused image is reconstructed by the inverse RWT. Quantitative measures including mean, standard deviation, average gradient, spatial frequency and edge-based similarity measures are used to evaluate the fused images. The performance of the proposed method is compared with pixel averaging, PCA and DWT fusion methods. Compared with these conventional methods, the proposed framework provides better performance for the analysis of multimodality medical images.
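The maximum-selection rule named above is simple to state on sub-band coefficients. The sketch below applies it to toy 2x2 sub-bands from two hypothetical source images; the RWT decomposition and morphological filtering of the actual method are omitted, and the magnitude-based selection for high-frequency coefficients is a common convention assumed here.

```python
import numpy as np

# Toy low- and high-frequency sub-bands from two source images (invented).
low_a  = np.array([[10., 20.], [30., 40.]])
low_b  = np.array([[15., 18.], [25., 50.]])
high_a = np.array([[ 1., -2.], [ 3.,  0.]])
high_b = np.array([[-4.,  1.], [ 2.,  5.]])

# MS rule: keep the larger approximation value, and the detail coefficient
# with the larger magnitude (sign preserved).
fused_low  = np.maximum(low_a, low_b)
fused_high = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
print(fused_low)
print(fused_high)
```

In the full pipeline these fused sub-bands would be passed to the inverse transform to reconstruct the composite image.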
Abstract: Measures of complexity and entropy have not converged to a single quantitative description of the levels of organization of complex systems, yet such a measure is increasingly necessary in all disciplines studying complex systems. To address this problem, a new measure for the quantity of organization and rate of self-organization in complex systems, based on the principle of least (stationary) action, the most fundamental principle in physics, is applied here to a model system: the central processing unit (CPU) of computers. The quantity of organization for several generations of CPUs shows a double-exponential rate of change of organization with time. The exact functional dependence has a fine, S-shaped structure, revealing some of the mechanisms of self-organization. The principle of least action helps to explain the mechanism of the increase of organization through quantity accumulation and constraint, and curvature minimization with an attractor: the least average sum of actions of all elements and for all motions. This approach can help describe, quantify, measure, manage, design and predict the future behavior of complex systems to achieve the highest rates of self-organization and so improve their quality. It can be applied to other complex systems in physics, chemistry, biology, ecology, economics, cities, network theory and other fields where complex systems are present.
Abstract: Electroencephalogram (EEG) recordings are often
contaminated with ocular and muscle artifacts. In this paper, the
canonical correlation analysis (CCA) is used as blind source
separation (BSS) technique (BSS-CCA) to decompose the artifact
contaminated EEG into component signals. We combine the BSS-CCA
technique with a wavelet filtering approach to minimize both
ocular and muscle artifacts simultaneously, and refer to the proposed
method as wavelet-enhanced BSS-CCA. In this approach, after
careful visual inspection, the muscle artifact components are
discarded and ocular artifact components are subjected to wavelet
filtering to retain high frequency cerebral information, and then clean
EEG is reconstructed. The performance of the proposed
wavelet-enhanced BSS-CCA method is tested on real EEG recordings
contaminated with ocular and muscle artifacts, for which power
spectral density is used as a quantitative measure. Our results suggest
that the proposed hybrid approach minimizes ocular and muscle
artifacts effectively, minimally affecting underlying cerebral activity
in EEG recordings.
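Power spectral density, the quantitative measure used above, can be sketched with a plain periodogram. The signal below is synthetic: a 10 Hz "alpha" component plus a 60 Hz component standing in for a muscle-band artifact; the sampling rate and frequencies are invented, and the paper's exact PSD estimator is not specified here.

```python
import numpy as np

fs = 256.0                                   # assumed sampling rate, Hz
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t)                         # clean 10 Hz rhythm
contaminated = eeg + 0.8 * np.sin(2 * np.pi * 60 * t)    # added 60 Hz artifact

def psd(x, fs):
    """Simple one-sided periodogram estimate of the PSD."""
    spec = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return freqs, spec

freqs, p_clean = psd(eeg, fs)
_, p_dirty = psd(contaminated, fs)
band = freqs >= 30                           # muscle-artifact band
print(p_dirty[band].sum() > p_clean[band].sum())   # True
```

Comparing band power before and after cleaning is exactly how a PSD-based quality measure flags residual muscle activity.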
Abstract: Many systems in the natural world exhibit chaotic or nonlinear behavior, the complexity of which is so great that they appear to be random. Identification of chaos in experimental data is essential for characterizing the system and for analyzing the predictability of the data under analysis. The Lyapunov exponents provide a quantitative measure of the sensitivity to initial conditions and are the most useful dynamical diagnostic for chaotic systems. However, it is difficult to accurately estimate the Lyapunov exponents of chaotic signals that are corrupted by random noise. In this work, a method for estimating Lyapunov exponents from noisy time series using the unscented transformation is proposed. The proposed methodology was validated using time series obtained from known chaotic maps. The objective of the work, the proposed methodology and the validation results are discussed in detail.
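For a known one-dimensional map, the Lyapunov exponent is the trajectory average of ln|f'(x)|, which is the kind of ground truth used to validate estimators like the one above. The sketch below uses the noise-free logistic map at r = 4, whose exponent is known to be ln 2; the unscented-transformation estimator itself is not reproduced here.

```python
import math

# Lyapunov exponent of the logistic map x -> r x (1 - x), estimated as the
# average of ln|f'(x)| = ln|r (1 - 2x)| along a trajectory. For r = 4 the
# exact value is ln 2. The seed x0 = 0.3 is arbitrary.
r, x = 4.0, 0.3
total, n = 0.0, 100_000
for _ in range(n):
    x = r * x * (1 - x)
    total += math.log(abs(r * (1 - 2 * x)))
print(total / n)   # approaches ln 2 ~ 0.693 for long trajectories
```

A positive exponent is the signature of chaos; noise on the observed series is what makes this direct averaging unreliable and motivates filtered estimators.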
Abstract: The present microfluidic study examines the flow behavior within a Y-shaped micro-bifurcation in two similar flow configurations. We report a numerical and experimental investigation of the evolution of the velocity profiles and the secondary flows manifested at different Reynolds numbers (Re) and for two different boundary conditions. The experiments are performed using a specially designed setup based on optical microscopic devices. With this setup, direct visualizations and quantitative measurements of the path-lines are obtained. A micro-PIV measurement system is used to obtain the spatial evolution of the velocity profile distributions in the main flow domains. The experimental data are compared with numerical simulations performed with the commercial computational code FLUENT in a 3D geometry with the same dimensions as the experimental one. The numerical flow patterns are found to be in good agreement with the experimental observations.
Abstract: Accurate assessment of the primary tumor response to
treatment is important in the management of breast cancer. This
paper introduces a new set of treatment evaluation indicators for
breast cancer cases based on the computational process of three
known metrics: the Euclidean, Hamming and Levenshtein distances.
These distance principles are applied to pairs of mammograms and/or
echograms, recorded before and after treatment, providing a
reference point for judging the degree of evolution of the studied
carcinoma. The obtained numerical results are very transparent
and indicate not only the evolution or involution of the tumor
under treatment, but also provide a quantitative measure of the
benefit of using the selected method of treatment.
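The three distances named above are standard and easy to sketch. The inputs below are toy stand-ins (a feature vector, a binary mask string, an annotation string); how such representations are extracted from the mammogram or echogram pairs is the paper's contribution and is omitted here.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length numeric vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def hamming(a, b):
    """Number of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def levenshtein(s, t):
    """Minimum number of edits (insert/delete/substitute) turning s into t."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,          # insertion
                           prev[j - 1] + (cs != ct)))  # substitution
        prev = cur
    return prev[-1]

print(euclidean([3, 4], [0, 0]))        # 5.0
print(hamming("10110", "10011"))        # 2
print(levenshtein("tumor", "timers"))   # 3
```

Unlike the first two, the Levenshtein distance tolerates sequences of different lengths, which is why it complements the point-wise metrics.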
Abstract: Fourier transform infrared (FT-IR) spectroscopic imaging
is an emerging technique that provides both chemically and
spatially resolved information. The rich chemical content of data
may be utilized for computer-aided determinations of structure and
pathologic state (cancer diagnosis) in histological tissue sections for
prostate cancer. FT-IR spectroscopic imaging of prostate tissue has
shown that tissue type (histological) classification can be performed to
a high degree of accuracy [1] and cancer diagnosis can be performed
with an accuracy of about 80% [2] on a microscopic (≈ 6μm)
length scale. In performing these analyses, it has been observed
that there is large variability (more than 60%) between spectra from
different points on tissue that is expected to consist of the same
essential chemical constituents. Spectra at the edges of tissues are
characteristically and consistently different from chemically similar
tissue in the middle of the same sample. Here, we explain these
differences using a rigorous electromagnetic model for light-sample
interaction. Spectra from FT-IR spectroscopic imaging of chemically
heterogeneous samples are different from bulk spectra of individual
chemical constituents of the sample. This is because spectra not
only depend on chemistry, but also on the shape of the sample.
Using coupled wave analysis, we characterize and quantify the nature
of spectral distortions at the edges of tissues. Furthermore, we
present a method of performing histological classification of tissue
samples. Since the mid-infrared spectrum is typically assumed to
be a quantitative measure of chemical composition, classification
results can vary widely due to spectral distortions. However, we
demonstrate that the selection of localized metrics based on chemical
information can make our data robust to the spectral distortions
caused by scattering at the tissue boundary.
Abstract: Quantitative measurements of tumors in general, and tumor volume in particular, become more realistic with the use of magnetic resonance imaging (MRI), especially when the tumor's morphological changes become irregular and difficult to assess by clinical examination. However, tumor volume estimation strongly depends on image segmentation, which is fuzzy by nature. In this paper a fuzzy approach is presented for tumor volume segmentation based on the fuzzy connectedness algorithm. The fuzzy affinity matrix resulting from segmentation is then used to estimate a fuzzy volume based on a certainty parameter, an alpha-cut, defined by the user. The proposed method was shown to strongly influence treatment decisions. A statistical analysis was performed in this study to validate the results against a manual method for volume estimation, and the importance of using the alpha-cut is further explained.
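The alpha-cut step above amounts to thresholding the fuzzy membership map and scaling the surviving voxel count by the voxel volume. The membership values and voxel size in the sketch below are synthetic, and the fuzzy-connectedness segmentation that would produce the map is omitted.

```python
import numpy as np

# Synthetic fuzzy membership map (values in [0, 1]) for a tiny 3x3 slice.
membership = np.array([[0.9, 0.6, 0.1],
                       [0.8, 0.7, 0.2],
                       [0.3, 0.5, 0.0]])
voxel_mm3 = 1.5    # assumed voxel volume in mm^3

def alpha_cut_volume(mu, alpha, voxel_volume):
    """Volume of the alpha-cut: voxels with membership >= alpha."""
    return np.count_nonzero(mu >= alpha) * voxel_volume

print(alpha_cut_volume(membership, 0.5, voxel_mm3))   # 5 voxels -> 7.5 mm^3
```

Raising alpha demands more certainty and shrinks the estimated volume, which is why the user's choice of alpha can shift treatment decisions.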