Abstract: The Taiwan Health Literacy Scale (THLS) was developed to meet the need to measure the health literacy of Chinese-speaking adults in Taiwan. Although the scale was shown to have good reliability and validity, practitioners rarely adopted it because of its length and the time required to complete it. Building on the THLS, this research invited healthcare professionals to review the original scale with a view to shortening it. Following the logic of the THLS, the research applied the analytic hierarchy process to consolidate the healthcare experts' assessments and shorten the original scale. Fifteen of the original 66 items were identified as having higher loadings. After confirmation by the experts and a pilot test with 40 undergraduate students, a short form of the THLS was introduced. The research then used a sample of 839 respondents from the major cities of Hualien County in eastern Taiwan to test the reliability and validity of the new scale. The reliability of the scale is high and acceptable, and the short form correlates highly with the original, providing evidence of its validity.
Abstract: Utilization of diverse germplasm is needed to enhance
the genetic diversity of cultivars. The objective of this study was to
evaluate the genetic relationships of 98 alfalfa germplasm accessions
using morphological traits and SSR markers. From the 98 tested
populations, 81 were local accessions originating in Europe and 17
were introduced from the USA, Australia, New Zealand and Canada.
Three primers
generated 67 polymorphic bands. The average polymorphic
information content (PIC) was very high (> 0.90) across all three
primer combinations used. Cluster analysis using the Unweighted Pair
Group Method with Arithmetic Mean (UPGMA) and Jaccard's coefficient
grouped the accessions into 2 major clusters with 4 sub-clusters, with
no correlation between genetic and morphological diversity. The SSR
analysis clearly indicated that a reliable estimate of genetic
diversity can be obtained even with three polymorphic primers.
Abstract: Limited infrastructure development on peat and
organic soils is a serious geotechnical issue common to many
countries of the world, especially Malaysia, where about 1.5 million
ha of these problematic soils are distributed. These soils have high
water and organic contents, exhibit distinctive mechanical
properties, and may also change chemically and biologically with
time. Constructing structures on peaty ground involves the risk of
ground failure and extreme settlement. Nowadays, because of
increasing land use, much effort is needed to make peatlands usable
for construction. The deep mixing method, employing cement as a
binder, is generally used as a measure against failure of peaty or
organic ground; the technique is widely adopted because it can
improve the ground considerably in a short period of time. An
understanding of geotechnical properties such as the shear strength,
stiffness and compressibility behavior of these soils is required
before construction on them can proceed. Therefore, peat soil
samples taken from 1-1.5 m depth in the state of Johor and an
organic soil from Melaka, Malaysia, were investigated. Cement was
added to the soils in a pre-mixing stage at water-cement ratios of
3.5, 7, 14 and 140 for the peats and 5, 10 and 30 for the organic
soils, essentially to modify the original soil textures and
properties. The mixtures, in slurry form, were poured into polyvinyl
chloride (PVC) tubes and cured at a room temperature of 25 °C for 7,
14 and 28 days. Laboratory experiments, including unconfined
compressive strength and bender element tests, were conducted to
monitor the improved strength and stiffness of the stabilised mixed
soils. In addition, scanning electron microscopy (SEM) observations
were made to investigate changes in the microstructure of the
stabilised soils and to evaluate the hardening effect of cement on
the stabilised peat and organic soils. This preliminary effort
indicates that pre-mixing peat and organic soils with cement
contributes to a gain in soil strength and helps engineers establish
a new method for improving such problematic ground in practical,
long-term applications.
Abstract: Five original strains of entomopathogenic bacteria
with insecticidal activity against mosquito larvae of the genera Aedes,
Culex and Anopheles have been isolated from natural conditions in
Armenia and characterized. According to morphological,
physiological and biochemical parameters, all isolates were identified
as Bacillus thuringiensis subsp. israelensis (Bti). High larvicidal
activity was shown by three of the strains, which can be
recommended for the industrial production of bacterial preparations.
Abstract: Serial Analysis of Gene Expression is a powerful
quantification technique for generating cell or tissue gene expression
data. Gene expression profiles of cells or tissues in several
different states are difficult for biologists to analyze because of
the large number of genes typically involved. However, feature
selection in machine learning can greatly reduce this problem by
discarding irrelevant features (genes) in SAGE data and retaining
only the relevant ones. In this study, we used a genetic algorithm
to implement feature selection and evaluated the classification
accuracy of the selected features with the K-nearest neighbor
method. To validate the proposed method, we used two SAGE data sets
for testing. The results show that the number of features of the
original SAGE data sets can be significantly reduced while higher
classification accuracy is achieved.
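The wrapper approach described above can be sketched in a toy form: a genetic algorithm searches over feature bitmasks, and each mask is scored by the leave-one-out accuracy of a k-nearest-neighbor classifier on the selected features. All data, parameters and the leave-one-out scoring below are invented for illustration and are not the study's actual implementation.

```python
import random

random.seed(0)

# Tiny synthetic "expression" data: only features 0 and 1 are informative.
def make_sample(label):
    informative = [label + random.gauss(0, 0.1) for _ in range(2)]
    noise = [random.gauss(0, 1.0) for _ in range(6)]
    return informative + noise

X = [make_sample(lbl) for lbl in (0, 1) for _ in range(20)]
y = [lbl for lbl in (0, 1) for _ in range(20)]

def knn_loo_accuracy(mask, k=3):
    """Leave-one-out accuracy of k-NN restricted to the features in mask."""
    idx = [i for i, bit in enumerate(mask) if bit]
    if not idx:
        return 0.0
    correct = 0
    for i in range(len(X)):
        dists = sorted(
            (sum((X[i][f] - X[j][f]) ** 2 for f in idx), y[j])
            for j in range(len(X)) if j != i)
        votes = [lbl for _, lbl in dists[:k]]
        correct += max(set(votes), key=votes.count) == y[i]
    return correct / len(X)

def evolve(n_features=8, pop_size=12, generations=15):
    """Genetic algorithm over feature bitmasks, scored by k-NN accuracy."""
    pop = [[random.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=knn_loo_accuracy, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_features)
            child = a[:cut] + b[cut:]           # one-point crossover
            if random.random() < 0.2:           # point mutation
                child[random.randrange(n_features)] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=knn_loo_accuracy)

best = evolve()
```

Because only two of the eight features carry signal, masks that drop the noisy features score higher and tend to dominate the population.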
Abstract: This paper sets forth the possibility and importance of applying data mining to Web log mining and notes some problems with conventional search engines. It then offers an improved algorithm based on the original AprioriAll algorithm, which has been widely used in Web log mining. The new algorithm adds a User ID property at every step of producing the candidate set and every step of scanning the database, using it to decide whether an item in the candidate set should be put into the large set that will be used to produce the next candidate set. At the same time, to reduce the number of database scans, the new algorithm uses the Apriori property to limit the size of the candidate set as soon as it is produced. Test results show that the improved algorithm has lower time and space complexity, suppresses noise better, and fits within memory capacity.
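The idea of carrying a User ID through support counting can be illustrated with a simplified itemset-mining sketch. This is not the paper's sequential AprioriAll variant; the sessions, pages and threshold below are invented, and support is counted per distinct user so that repeat visits by one user do not inflate it.

```python
# Illustrative Apriori-style mining where support = number of distinct
# users (not sessions) containing an itemset; all data is hypothetical.
from itertools import combinations

# Hypothetical web-log sessions: (user_id, set of visited pages).
sessions = [
    ("u1", {"A", "B", "C"}),
    ("u1", {"A", "B"}),        # repeat visit by u1 counts only once
    ("u2", {"A", "B"}),
    ("u2", {"B", "C"}),
    ("u3", {"A", "C"}),
]
MIN_USERS = 2  # minimum number of distinct users supporting an itemset

def user_support(itemset):
    """Number of distinct users whose sessions contain the itemset."""
    return len({uid for uid, pages in sessions if itemset <= pages})

def apriori(min_users=MIN_USERS):
    items = sorted({p for _, pages in sessions for p in pages})
    large = [frozenset([i]) for i in items
             if user_support(frozenset([i])) >= min_users]
    all_large = list(large)
    k = 2
    while large:
        # Join step: merge (k-1)-itemsets, then prune candidates early
        # by the user-level support (the Apriori property).
        candidates = {a | b for a, b in combinations(large, 2)
                      if len(a | b) == k}
        large = [c for c in candidates if user_support(c) >= min_users]
        all_large.extend(large)
        k += 1
    return all_large

frequent = apriori()
```

In this toy run {A, B, C} is pruned because only user u1 visits all three pages, even though two of u1's sessions contain them.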
Abstract: This paper discusses the use of spline interpolation
and the mean square error (MSE) as tools to process data acquired
from a developed simulator that replicates the sea bed logging
environment. Sea bed logging (SBL) is a new technique that uses a
marine controlled-source electromagnetic (CSEM) sounding method and
has proven very successful in detecting and characterizing
hydrocarbon reservoirs in deep-water areas by using resistivity
contrasts. It uses very low frequencies, from 0.1 Hz to 10 Hz, to
obtain greater wavelengths. In this work, the in-house-built
simulator was provided with predefined parameters, and the
transmitted frequency was varied for sediment thicknesses of 1000 m
to 4000 m in environments with and without hydrocarbon. From a
series of simulations, synthetic data were generated. These data
were interpolated using spline interpolation of degree three, and
the mean square error (MSE) between the original and interpolated
data was calculated. Comparisons were made by studying the trends
and the relationship between frequency and sediment thickness based
on the calculated MSE. The MSE showed an increasing trend in the
set-up with hydrocarbon present relative to the one without, and a
decreasing trend as sediment thickness increased and as the
transmitted frequency was raised.
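The interpolate-then-score step can be sketched as follows, using an invented smooth curve in place of the simulator's synthetic CSEM data; the natural cubic (degree-three) spline and the uniform knot spacing are assumptions made for illustration.

```python
# Hedged sketch: fit a natural cubic spline to coarse samples, then
# compute the MSE against the underlying curve on a finer grid.
import math

def natural_cubic_spline(xs, ys):
    """Return an evaluator for the natural cubic spline through (xs, ys).
    Assumes uniformly spaced, increasing xs."""
    n = len(xs) - 1
    h = xs[1] - xs[0]
    # Tridiagonal system for interior second derivatives (Thomas algorithm);
    # rows 0 and n encode the natural boundary condition M0 = Mn = 0.
    a = [0.0] * (n + 1)          # sub-diagonal
    b = [1.0] * (n + 1)          # diagonal
    c = [0.0] * (n + 1)          # super-diagonal
    d = [0.0] * (n + 1)          # right-hand side
    for i in range(1, n):
        a[i], b[i], c[i] = h, 4.0 * h, h
        d[i] = 6.0 * (ys[i + 1] - 2.0 * ys[i] + ys[i - 1]) / h
    for i in range(1, n + 1):    # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    m = [0.0] * (n + 1)          # back substitution -> second derivatives
    m[n] = d[n] / b[n]
    for i in range(n - 1, -1, -1):
        m[i] = (d[i] - c[i] * m[i + 1]) / b[i]

    def s(x):
        i = min(max(int((x - xs[0]) // h), 0), n - 1)
        t0, t1 = xs[i + 1] - x, x - xs[i]
        return (m[i] * t0 ** 3 / (6 * h) + m[i + 1] * t1 ** 3 / (6 * h)
                + (ys[i] / h - m[i] * h / 6) * t0
                + (ys[i + 1] / h - m[i + 1] * h / 6) * t1)
    return s

# Coarse samples of a smooth stand-in "response" curve.
xs = [i * 0.5 for i in range(9)]             # knots 0.0 .. 4.0
ys = [math.sin(x) for x in xs]
spline = natural_cubic_spline(xs, ys)

# MSE between interpolated and true values on a finer grid.
fine = [i * 0.1 for i in range(41)]
mse = sum((spline(x) - math.sin(x)) ** 2 for x in fine) / len(fine)
```

In the paper's workflow the same comparison would be repeated per frequency and sediment thickness, and the resulting MSE values examined for trends.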
Abstract: Architecture education is based on apprenticeship
models, and its nature has changed little over a long period; the
source of change has instead been its evaluation process and system.
It is undeniable that art and architecture education rests on
transmitting knowledge from instructor to students. In contrast to
other disciplines, this transmission occurs through iteration and
practice, and studio masters try to control the design process and
improve skills through supervision and critique. The evaluation
typically ends with marks given for students' achievements. The
importance of the role of evaluation and assessment is therefore
obvious, and it is fair to say that to understand an architecture
education system, we must first study its assessment procedures. The
evolution of these changes in Western countries is well documented.
However, this process appears to have been neglected in Malaysia,
where there is a severe lack of research and documentation in this
area. Malaysia, as a developing and multicultural country
encompassing different races and cultures, is a suitable setting for
scrutinizing evaluation systems and the degree of acceptance of the
models currently implemented, so as to keep evaluation and
assessment procedures abreast of the needs of different generations,
cultures and even genders. This paper attempts to answer the
questions of how evaluation and assessment are performed and how
students perceive this evaluation system in the Malaysian context.
The main contribution of this work is to the international debate on
evaluation models.
Abstract: The automatic discrimination of seismic signals is an important practical goal for earth-science observatories because of the large amount of information that they receive continuously. An essential discrimination task is to allocate an incoming signal to a group associated with the kind of physical phenomenon producing it. In this paper, two classes of seismic signals recorded routinely in the geophysical laboratory of the National Center for Scientific and Technical Research in Morocco are considered; they correspond to signals associated with local earthquakes and with chemical explosions. The approach adopted for the development of an automatic discrimination system is a modular system composed of three blocks: 1) representation, 2) dimensionality reduction and 3) classification. The originality of our work lies in the use of a new wavelet, called the "modified Mexican hat wavelet", in the representation stage. For dimensionality reduction, we propose a new algorithm based on random projection and principal component analysis.
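As a hedged sketch of the random-projection half of such a dimensionality-reduction stage (the PCA half and the wavelet features are omitted, and all sizes and data below are invented), a scaled Gaussian projection maps high-dimensional feature vectors to a low dimension while approximately preserving pairwise distances:

```python
# Johnson-Lindenstrauss-style Gaussian random projection; illustrative
# stand-in vectors replace the wavelet coefficients of real signals.
import math
import random

random.seed(1)

D, K = 200, 40            # original and reduced dimensionality

# Gaussian projection matrix scaled by 1/sqrt(K) so that squared
# distances are preserved in expectation.
R = [[random.gauss(0, 1) / math.sqrt(K) for _ in range(D)] for _ in range(K)]

def project(v):
    """Map a D-dimensional vector to K dimensions."""
    return [sum(r[i] * v[i] for i in range(D)) for r in R]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Two random "feature vectors" standing in for wavelet representations.
u = [random.gauss(0, 1) for _ in range(D)]
v = [random.gauss(0, 1) for _ in range(D)]
ratio = dist(project(u), project(v)) / dist(u, v)   # should be near 1
```

In a full pipeline of the kind described, PCA could then be applied in the projected space, where its covariance computation is far cheaper.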
Abstract: Hair is a non-homogeneous, complex material that
can be likened to a polymer. It is made up of 95% keratin.
Hair has great social significance for human beings. In the High
Middle Ages, for example, long hair was reserved for kings
and nobles.
The most common interest in hair focuses on hair growth, hair types
and hair care, but hair is also an important biomaterial which can
vary depending on ethnic origin or age; hair colour, for example,
can be a sign of ethnic ancestry or age (dark hair for Asians, blond
hair for Caucasians and white hair for elderly people in general).
In this context, different approaches have been taken to
determine the differences in mechanical properties and characterize
the fracture topography at the surface of hair depending on its type
and its age.
A tensile testing machine was specially designed to perform
tensile tests on hair. This device is composed of a
micro-displacement system and a force sensor whose peak load is
limited to 3 N. The curves and the values extracted from each
experiment allow us to compare the evolution of the mechanical
properties from one hair to another.
Observations with a Scanning Electron Microscope (SEM) and
with an interferometer were made on different hairs. It is thus
possible to assess the cuticle state and the fracture topography for
each category.
Abstract: To compress 2-D images, improve their bit error performance and also enhance them, a new scheme called the Iterative Cellular-Turbo System (IC-TS) is introduced. In IC-TS, the original image is partitioned into 2^N quantization levels, where N denotes the number of bit planes. Each of the N bit planes is then coded by a Turbo encoder and transmitted over an Additive White Gaussian Noise (AWGN) channel. At the receiver side, the bit planes are re-assembled taking into consideration the neighborhood relationships of pixels in 2-D images. Each noisy bit-plane value of the image is evaluated iteratively using the IC-TS structure, which is composed of an equalization block, the Iterative Cellular Image Processing Algorithm (ICIPA) and a Turbo decoder, with an iterative feedback link between ICIPA and the Turbo decoder. ICIPA uses the mean and standard deviation of the estimated values of each pixel neighborhood. The scheme yields highly satisfactory results in both Bit Error Rate (BER) and image enhancement performance for Signal-to-Noise Ratio (SNR) values below -1 dB, compared to the traditional turbo coding scheme and 2-D filtering applied separately. Compression can also be achieved with IC-TS: less memory storage is used and the data rate is increased by up to N-1 times by simply choosing a subset of bit slices, sacrificing resolution. Hence, IC-TS promises to be a useful approach to 2-D image transmission, recovery of noisy signals and image compression.
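The bit-plane step at the heart of the scheme can be sketched as follows: an image with 2^N grey levels is split into N binary planes (each of which would then be Turbo-encoded) and later re-assembled. The image and N below are invented for illustration.

```python
# Minimal bit-plane decomposition and lossless re-assembly.
N = 4  # bit planes -> 2^4 = 16 quantization levels

image = [[3, 7], [12, 15]]  # tiny 2x2 image, values in [0, 2^N)

def to_bit_planes(img, n=N):
    """Return a list of n binary images, most significant plane first."""
    return [[[(pix >> b) & 1 for pix in row] for row in img]
            for b in range(n - 1, -1, -1)]

def from_bit_planes(planes):
    """Re-assemble pixel values from binary planes (MSB first)."""
    n = len(planes)
    rows, cols = len(planes[0]), len(planes[0][0])
    return [[sum(planes[p][r][c] << (n - 1 - p) for p in range(n))
             for c in range(cols)] for r in range(rows)]

planes = to_bit_planes(image)
restored = from_bit_planes(planes)
```

Dropping the least significant planes before re-assembly trades resolution for the storage and data-rate savings the abstract mentions.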
Abstract: The paper presents a method for multivariate time
series forecasting that uses Independent Component Analysis (ICA) as a preprocessing tool. The idea of this approach is to do the forecasting in the space of independent components (sources), and then to transform the results back to the original time series
space. The forecasting can be done separately and with a different
method for each component, depending on its time structure. The
paper also reviews the main algorithms for independent component analysis in the case of instantaneous mixture models, using second- and higher-order statistics. The method has been applied in simulation to an artificial multivariate time series
with five components, generated from three sources and a randomly generated mixing matrix.
Abstract: This paper presents the results of calculations of the
dynamic response of a multi-storey reinforced concrete building to a
strong mining shock originating from the main region of mining
activity in Poland (i.e. the Legnica-Glogow Copper District). The
representative time histories of accelerations registered in three
directions were used as ground motion data in calculations of the
dynamic response of the structure. Two variants of a numerical model
were applied: the model including only structural elements of the
building and the model including both structural and non-structural
elements (i.e. partition walls and ventilation ducts made of brick). It
turned out that the non-structural elements of multi-storey RC
buildings have only a small impact, of about 10%, on the natural
frequencies of these structures. It was also shown that the dynamic
response of the building to the mining shock is about 20% smaller
when all non-structural elements are included in the numerical model
than when only structural elements are considered. The principal
stresses obtained in the calculations of the dynamic response of the
multi-storey building to the strong mining shock amount to about 30%
of the values obtained from static analysis (dead load).
Abstract: Calibration estimation is a method of adjusting the
original design weights to improve the survey estimates by using
auxiliary information such as the known population total (or mean)
of the auxiliary variables. A calibration estimator uses calibrated
weights that are determined to minimize a given distance measure to
the original design weights while satisfying a set of constraints
related to the auxiliary information. In this paper, we propose a new
multivariate calibration estimator for the population mean in the
stratified sampling design, which incorporates information available
for more than one auxiliary variable. The problem of determining the
optimum calibrated weights is formulated as a Mathematical
Programming Problem (MPP) that is solved using the Lagrange
multiplier technique.
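For the widely used chi-square distance, the constrained minimization described above has a closed-form Lagrange-multiplier solution. The following is the standard generalized-regression (GREG) form shown for illustration rather than as the paper's exact multivariate stratified formulation; here $d_i$ are the design weights, $\mathbf{x}_i$ the auxiliary vectors with known population total $\mathbf{X}$, and $q_i$ known positive constants:

```latex
\min_{w}\ \sum_{i\in s}\frac{(w_i-d_i)^2}{d_i q_i}
\quad\text{subject to}\quad
\sum_{i\in s} w_i\,\mathbf{x}_i=\mathbf{X},
\qquad\Longrightarrow\qquad
w_i = d_i\bigl(1+q_i\,\mathbf{x}_i^{\top}\boldsymbol{\lambda}\bigr),
\quad
\boldsymbol{\lambda}
  =\Bigl(\sum_{i\in s} d_i q_i\,\mathbf{x}_i\mathbf{x}_i^{\top}\Bigr)^{-1}
   \Bigl(\mathbf{X}-\sum_{i\in s} d_i\,\mathbf{x}_i\Bigr).
```

Substituting the calibrated weights $w_i$ into the weighted sum of the study variable yields the calibration estimator of the population total (or mean, after division by the population size).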
Abstract: In recent years image watermarking has become an
important research area in data security, confidentiality and image
integrity. Many watermarking techniques have been proposed for
medical images. However, medical images, unlike most images, require
extreme care when embedding additional data within them because
the additional information must not affect the image quality and
readability. Moreover, medical records, electronic or not, are bound
by medical secrecy and must therefore be confidential.
To fulfill those requirements, this paper presents a lossless
watermarking scheme for DICOM images. The proposed fragile scheme
combines two reversible techniques based on difference expansion:
one for hiding the patient's data and one for protecting the region
of interest (ROI), with tamper detection and recovery capability.
Patient's data are embedded into ROI, while recovery data are
embedded into region of non-interest (RONI). The experimental
results show that the original image can be exactly extracted from the
watermarked one in case of no tampering. In case of tampered ROI,
tampered area can be localized and recovered with a high quality
version of the original area.
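The reversible building block named above, difference expansion, can be illustrated with the classic Tian pixel-pair scheme; this is a generic sketch rather than necessarily the exact variant combined in the proposed scheme. One payload bit is hidden in a pixel pair, and both the bit and the original pair are recovered exactly:

```python
# Tian-style reversible difference expansion on one pixel pair.
def embed(x, y, bit):
    """Expand the difference of a pixel pair to hide one bit."""
    l = (x + y) // 2          # integer average (invariant under embedding)
    h = x - y                 # difference
    h2 = 2 * h + bit          # expanded difference carrying the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def extract(x2, y2):
    """Recover the hidden bit and the original pixel pair."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 & 1              # the embedded bit is the new LSB
    h = h2 >> 1               # original difference
    return bit, (l + (h + 1) // 2, l - h // 2)

x2, y2 = embed(100, 97, 1)    # -> watermarked pair
bit, (x, y) = extract(x2, y2) # -> bit and original pair, exactly
```

Because the integer average is invariant and the expansion is invertible, the cover image is restored bit-for-bit once the payload is read out, which is what makes the scheme lossless.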
Abstract: A given polynomial, possibly with multiple roots, is
factored into several lower-degree distinct-root polynomials with
natural-order-integer powers. All the roots, including multiplicities,
of the original polynomial may be obtained by solving these
lower-degree distinct-root polynomials, instead of solving the
original high-degree multiple-root polynomial directly.
The approach requires polynomial Greatest Common Divisor
(GCD) computation. A very simple and effective process, "monic
polynomial subtraction", cleverly adapted from the longhand
polynomial division of the Euclidean algorithm, is employed. It
requires only simple elementary arithmetic operations without any
advanced mathematics.
Remarkably, the derived routine gives the expected results even for
test polynomials of very high degree, such as p(x) = (x+1)^1000.
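The GCD step behind such a factorization can be illustrated in a few lines, here using ordinary long division with exact rational arithmetic rather than the paper's monic polynomial subtractions: gcd(p, p') strips one power from every repeated factor of p.

```python
# Exact polynomial Euclidean algorithm; coefficients are listed low
# degree first, e.g. [1, 2, 1] is 1 + 2x + x^2 = (x + 1)^2.
from fractions import Fraction

def polyrem(num, den):
    """Remainder of polynomial long division."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    while len(num) >= len(den):
        factor = num[-1] / den[-1]
        shift = len(num) - len(den)
        for i, c in enumerate(den):
            num[i + shift] -= factor * c
        num.pop()                          # leading term is now exactly zero
        while len(num) > 1 and num[-1] == 0:
            num.pop()
        if not any(num):
            break
    return num or [Fraction(0)]

def polygcd(a, b):
    """Monic GCD of two polynomials via the Euclidean algorithm."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while any(b):
        a, b = b, polyrem(a, b)
    return [c / a[-1] for c in a]          # normalize to a monic polynomial

def derivative(p):
    return [Fraction(i) * c for i, c in enumerate(p)][1:]

def polymul(a, b):
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            out[i + j] += Fraction(ca) * cb
    return out

# p(x) = (x + 1)^3 (x - 2): root -1 with multiplicity 3, root 2 simple.
p = [Fraction(1)]
for root, mult in ((-1, 3), (2, 1)):
    for _ in range(mult):
        p = polymul(p, [Fraction(-root), Fraction(1)])   # factor (x - root)

g = polygcd(p, derivative(p))   # expected (x + 1)^2, i.e. [1, 2, 1]
```

Dividing p by this GCD leaves the distinct-root part (x + 1)(x - 2); repeating the process on the GCD itself peels off the remaining multiplicities, which is the distinct-root decomposition the abstract describes.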
Abstract: In this paper, a mixed method combining an evolutionary and a conventional technique is proposed for the reduction of Single Input Single Output (SISO) continuous systems into a Reduced Order Model (ROM). The conventional technique combines the advantages of the Mihailov stability criterion and the Continued Fraction Expansion (CFE) technique: the reduced denominator polynomial is derived using the Mihailov stability criterion, and the numerator is obtained by matching the quotients of the Cauer second form of the continued fraction expansion. Then, retaining the numerator polynomial, the denominator polynomial is recalculated by an evolutionary technique, namely the recently proposed Differential Evolution (DE) optimization method. The DE method minimizes the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced-order model for a unit step input. The proposed method is illustrated through a numerical example and compared with a ROM whose numerator and denominator polynomials are both obtained by the conventional method, to show its superiority.
Abstract: Nanoemulsions are a class of emulsions with a droplet
size in the range of 50–500 nm that have attracted a great deal of
attention in recent years because of their unique characteristics.
The physicochemical properties of nanoemulsions suggest that they
can be used successfully to recover the residual oil that is trapped
in the fine pores of reservoir rock by capillary forces after
primary and secondary recovery. Oil-in-water nanoemulsions, which
can be formed by high-energy emulsification techniques using
specific surfactants, can reduce the oil-water interfacial tension
(IFT) by 3-4 orders of magnitude. The present work aims to
characterize oil-in-water nanoemulsions in terms of their phase
behavior, morphology and interfacial energy; their ability to reduce
the interfacial tension; and the mechanisms of mobilization and
displacement of entrapped oil blobs by lowering the interfacial
tension at both the macroscopic and microscopic levels. To
investigate the efficiency of oil-in-water nanoemulsions in enhanced
oil recovery (EOR), experiments were performed to characterize the
emulsions in
terms of their physicochemical properties and size distribution of the
dispersed oil droplet in water phase. Synthetic mineral oil and a series
of surfactants were used to prepare oil-in-water emulsions.
Characterization of the emulsions shows that they exhibit
pseudo-plastic behaviour and that the drop size of the dispersed oil
phase follows a lognormal distribution. Flooding experiments were
also carried out in a
sandpack system to evaluate the effectiveness of the nanoemulsion as
displacing fluid for enhanced oil recovery. Substantial additional
recoveries (more than 25% of original oil in place) over conventional
water flooding were obtained in the present investigation.
Abstract: In this paper we present a modification to an existing threshold model for shot cut detection, which is able to adapt itself to the sequence statistics and operate in real time because it uses only previously evaluated frames in its calculation. The efficiency of the proposed modified adaptive threshold scheme was verified through extensive experiments with several similarity metrics, and the results were compared with those of the original model. According to the results, the proposed threshold scheme achieves higher accuracy than the existing original model.
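One common adaptive-threshold rule of this kind can be sketched as follows; the paper's exact model may differ, and the window size, multiplier and frame differences below are invented. A frame difference is flagged as a cut when it exceeds the mean of the preceding differences by a multiple of their standard deviation, using only already-seen frames, so the rule can run in real time:

```python
# Sliding-window adaptive threshold over a similarity-metric sequence.
import math

def detect_cuts(diffs, window=5, alpha=3.0):
    """Return indices of frame differences flagged as shot cuts."""
    cuts = []
    for i in range(window, len(diffs)):
        recent = diffs[i - window:i]          # only past values are used
        mean = sum(recent) / window
        var = sum((d - mean) ** 2 for d in recent) / window
        threshold = mean + alpha * math.sqrt(var)
        if diffs[i] > threshold:
            cuts.append(i)
    return cuts

# Invented frame-difference sequence: a quiet scene with a cut at index 9.
diffs = [0.10, 0.12, 0.11, 0.09, 0.13, 0.10, 0.12, 0.11, 0.10, 0.95,
         0.12, 0.11]
```

Because the mean and deviation are recomputed from the most recent window, the threshold rises in busy scenes and falls in quiet ones, which is the adaptivity the abstract refers to.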
Abstract: In this paper, a novel copyright protection scheme for digital images based on visual cryptography and statistics is proposed. In our scheme, the theory and properties of the sampling distribution of means and of visual cryptography are employed to achieve the requirements of robustness and security. Our method does not need to alter the original image and can identify the ownership without resorting to the original image. Besides, our method allows multiple watermarks to be registered for a single host image without causing any damage to other hidden watermarks. Moreover, our scheme can also cast a larger watermark into a smaller host image. Finally, experimental results show the robustness of our scheme against several common attacks.