Abstract: Currently, the quality of motors is inspected by human
ear. In this paper, I propose two systems that apply speech
recognition methods to automate this inspection. The first system is
based on linear processing using K-means and the nearest neighbor
method, and the second is based on non-linear processing using
neural networks. Applying these systems to recorded motor sounds, I
correctly recognized 86.67% of the sounds with the linear processing
system and 97.78% with the non-linear processing system.
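The abstract does not specify the acoustic features used, so as an illustration of only the nearest-neighbor step, here is a minimal sketch classifying hypothetical two-dimensional sound feature vectors; the feature values and labels are invented for the example.

```python
import math

def euclidean(a, b):
    # Euclidean distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbor(train, sample):
    # train: list of (feature_vector, label); returns the label of the
    # training vector closest to the sample (1-NN decision rule)
    return min(train, key=lambda t: euclidean(t[0], sample))[1]

# Hypothetical spectral features for "good" and "defective" motor sounds
train = [([0.9, 0.1], "good"), ([0.8, 0.2], "good"),
         ([0.2, 0.9], "defective"), ([0.1, 0.8], "defective")]

print(nearest_neighbor(train, [0.85, 0.15]))  # prints "good"
```

In a real system the training vectors would come from K-means cluster centers over labeled recordings rather than raw samples.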
Abstract: A versatile dual-mode class-AB CMOS four-quadrant
analog multiplier circuit is presented. Dual translinear loops and
current mirrors are the basic building blocks of the realization
scheme. The technique provides a wide dynamic range, a wide-bandwidth
response, and low power consumption. The major advantages of this
approach are its single-ended inputs and its dual translinear input
loops operating in class-AB mode, which make the configuration
attractive for low-power applications; current multiplication,
voltage multiplication, or mixed current-voltage multiplication can
be obtained with balanced inputs. Simulation results for the versatile
analog multiplier demonstrate a linearity error of 1.2%, a -3 dB bandwidth
of about 19 MHz, a maximum power consumption of 0.46 mW,
and temperature-compensated operation. Operation of the multiplier
was also confirmed experimentally using a CMOS transistor
array.
Abstract: The purpose of this study was to evaluate and
compare new indices based on the discrete wavelet transform
with other spectral parameters proposed in the literature
(mean average voltage, median frequency, and ratios between
spectral moments) applied to estimate acute exercise-induced
changes in power output, i.e., to assess peripheral muscle
fatigue during a dynamic fatiguing protocol. Fifteen trained
subjects performed five sets of 10 leg presses, with 2
minutes of rest between sets. Surface electromyography was
recorded from the vastus medialis (VM) muscle. Several surface
electromyographic parameters were compared for detecting
peripheral muscle fatigue: mean average voltage
(MAV), median spectral frequency (Fmed), the Dimitrov spectral
index of muscle fatigue (FInsm5), and five further
parameters obtained from the discrete wavelet transform
(DWT) as ratios between different scales. The new wavelet
indices achieved the best Pearson correlation
coefficients with power output changes during acute dynamic
contractions, and their regressions were significantly different
from those of MAV and Fmed. They also showed the
highest robustness in the presence of additive white Gaussian
noise across different signal-to-noise ratios (SNRs). Therefore,
peripheral impairments assessed by sEMG wavelet indices
may be a relevant factor in the loss of power output
after dynamic high-loading fatiguing tasks.
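The abstract does not define its scale-ratio indices precisely; as an illustration of the general idea, here is a minimal sketch, assuming a Haar DWT and a coarse-to-fine detail-energy ratio as a hypothetical stand-in for the paper's indices.

```python
def detail_energies(signal, levels):
    # Multilevel 1-D Haar DWT: returns the detail-coefficient energy per level
    s = 2 ** -0.5
    approx, energies = list(signal), []
    for _ in range(levels):
        pairs = list(zip(approx[0::2], approx[1::2]))
        energies.append(sum(((a - b) * s) ** 2 for a, b in pairs))
        approx = [(a + b) * s for a, b in pairs]
    return energies  # energies[0] = finest scale, energies[-1] = coarsest

def wavelet_index(signal, levels=3):
    # Hypothetical index in the spirit of the paper's scale ratios:
    # coarse-to-fine detail-energy ratio. Spectral compression toward
    # low frequencies (a common fatigue sign) increases it.
    e = detail_energies(signal, levels)
    return e[-1] / e[0]

print(wavelet_index(list(range(16))))
```

A real implementation would use a smoother wavelet and the specific scale pairs chosen in the study.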
Abstract: In this work, we improve a previously developed
segmentation scheme aimed at extracting edge information from
speckled images using a maximum likelihood edge detector. The
scheme was based on finding a threshold for the probability density
function of a new kernel defined as the arithmetic mean-to-geometric
mean ratio field over a circular neighborhood set and, in a general
context, is founded on a likelihood random field model (LRFM). The
segmentation algorithm was applied to speckle areas discriminated
using simple elliptic discriminant functions based on
measures of the signal-to-noise ratio with fractional-order moments.
A rigorous stochastic analysis was used to derive an exact expression
for the cumulative distribution function of the random field.
Based on this, an accurate probability
of error was derived and the performance of the scheme was
analysed. The improved segmentation scheme performed well for
both simulated and real images and showed superior results to those
previously obtained using the original LRFM scheme and standard
edge detection methods. In particular, the false alarm probability was
markedly lower than that of the original LRFM method with
oversegmentation artifacts virtually eliminated. The importance of
this work lies in the development of a stochastic-based segmentation,
allowing an accurate quantification of the probability of false
detection. Non-visual quantification and misclassification analysis
in speckled medical ultrasound images are relatively new and of
interest to clinicians.
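The kernel the scheme thresholds is the arithmetic-mean-to-geometric-mean ratio over a neighborhood; a minimal sketch of that ratio on a list of positive pixel intensities (the circular-neighborhood geometry is omitted):

```python
import math

def am_gm_ratio(window):
    # Arithmetic-mean-to-geometric-mean ratio over a neighborhood of
    # positive intensities; close to 1 in homogeneous speckle regions
    # and larger than 1 where the neighborhood straddles an edge
    am = sum(window) / len(window)
    gm = math.exp(sum(math.log(v) for v in window) / len(window))
    return am / gm

print(am_gm_ratio([1, 1, 9, 9]))  # mixed region: ratio is 5/3
```

The segmentation scheme itself compares such ratios to a threshold derived from the kernel's distribution under the LRFM.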
Abstract: In this article, a method is proposed to classify
normal and defective tiles using the wavelet transform and artificial
neural networks. The proposed algorithm calculates the maximum and
minimum medians as well as the standard deviation and average of the
detail images obtained from wavelet filters, forms feature vectors
from these statistics, and classifies the given tile using a
multilayer perceptron with a single hidden layer. Alongside the
proposal of using the median of optimum points as the basic feature
and its comparison with the other statistical features in the wavelet
domain, the relative advantages of the Haar wavelet are investigated.
The method was tested on a number of different tile designs and, on
average, classified over 90% of the cases correctly. Its further
advantages include high speed and a low computational load.
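As a sketch of the feature-extraction stage only, the following computes a single-level 2-D Haar transform and simple statistics of its detail sub-bands; the statistics chosen (standard deviation and mean) are two of the features the abstract mentions, and the tiny test image is invented.

```python
import statistics

def haar2d_details(img):
    # Single-level 2-D Haar transform (unnormalized averaging/differencing);
    # returns the three detail sub-bands LH, HL, HH
    s = 0.5
    rows_lo = [[(r[j] + r[j + 1]) * s for j in range(0, len(r), 2)] for r in img]
    rows_hi = [[(r[j] - r[j + 1]) * s for j in range(0, len(r), 2)] for r in img]
    def cols(m, sign):
        return [[(m[i][j] + sign * m[i + 1][j]) * s for j in range(len(m[0]))]
                for i in range(0, len(m), 2)]
    return cols(rows_lo, -1), cols(rows_hi, +1), cols(rows_hi, -1)

def tile_features(img):
    # Hypothetical feature vector: std deviation and mean of each detail band
    feats = []
    for band in haar2d_details(img):
        vals = [v for row in band for v in row]
        feats += [statistics.pstdev(vals), statistics.fmean(vals)]
    return feats

print(tile_features([[0, 8, 0, 8]] * 4))  # vertical stripes excite HL
```

The resulting vectors would then be fed to the single-hidden-layer network for classification.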
Abstract: The Taiwan government has promoted the "Plain Landscape Afforestation and Greening Program" since 2002. A key task of the program was a payment for environmental services (PES) scheme entitled the "Plain Landscape Afforestation Policy" (PLAP), which was certificated by the Executive Yuan on August 31, 2001 and enacted on January 1, 2002. According to the policy, the total afforestation area was estimated to reach 25,100 hectares by December 31, 2007. By the end of 2007, after six years in force, the actual afforestation area was 8,919.18 hectares. Of this, Taiwan Sugar Corporation (TSC) accounted for 7,960 hectares (with 2,450.83 hectares as public service area), or 86.22% of the total afforestation area, while private farmland promoted by local governments accounted for 869.18 hectares, or 9.75%. From the above, we observe that most of the afforestation under this policy is executed by TSC, and that TSC's achievement ratio is better than that of the other participants, implying that the success of the PLAP depends heavily on TSC's execution. The objective of this study is to analyze the policy planning relevant to TSC's participation in the PLAP, suggest complementary measures, and draw up effective adjustment mechanisms so as to improve the effectiveness of executing the policy. Our main conclusions and suggestions are summarized as follows: 1. The main reason for TSC's participation in the PLAP is passive cooperation with the central government or company policy; prior to its participation, TSC's lands were mainly used for growing sugarcane. 2. The main factors in TSC's selection of tree species are the suitability of land and species.
The largest proportion of tree species is allocated to economic forests, and the lack of technical instruction was the main problem during afforestation; moreover, how to improve TSC's future development in leisure agriculture and the landscape business becomes a key topic. 3. TSC has developed short- and long-term plans for future participation in the PLAP; however, there is little willingness or incentive to budget for such detailed planning. 4. Most of the TSC interviewees consider the PLAP requirements unreasonable, the requirement on the number of trees being cited most often; furthermore, most interviewees suggested that the government should continue to provide incentives even after 20 years. 5. Since the government shares the same goals as TSC, there should be sufficient cooperation and communication supporting technical instruction and the reduction of afforestation costs, which would also help to improve the effectiveness of the policy.
Abstract: Whilst there is growing evidence that activity
across the lifespan is beneficial for improved health, there are
also many changes involved with the aging process and
subsequently the potential for reduced indices of health. The
nexus between health, physical activity and aging is complex
and has raised much interest in recent times due to the
realization that a multifaceted approach is necessary in
order to counteract a growing obesity epidemic. By
investigating age based trends within a population adhering to
competitive sport at older ages, further insight might be
gleaned to assist in understanding one of many factors
influencing this relationship.
BMI was derived using data gathered on a total of 6,071
masters athletes (51.9% male, 48.1% female) aged 25 to 91
years (mean = 51.5, s = ±9.7), competing at the Sydney World
Masters Games (2009). Using linear and loess regression it
was demonstrated that the usual tendency for prevalence of
higher BMI increasing with age was reversed in the sample.
This reversal was repeated for both the male-only and
female-only sub-sets of the sample, indicating a possible
improvement in BMI with increasing age, both for the sample
as a whole and for these individual sub-groups.
This evidence of improved classification in one index of
health (reduced BMI) for masters athletes (when compared to
the general population) implies there are either improved
levels of this index of health with aging due to adherence to
sport or possibly the reduced BMI is advantageous and
contributes to this cohort adhering (or being attracted) to
masters sport at older ages. Demonstration of this
proportionately under-investigated World Masters Games
population having an improved relationship between BMI and
increasing age over the general population is of particular
interest in the context of the measures being taken globally to
curb an obesity epidemic.
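The age trend described above rests on a linear regression of BMI against age; a minimal sketch of such a fit on invented (not the study's) data, where a negative slope corresponds to the reversed trend reported:

```python
def linear_fit(xs, ys):
    # Ordinary least-squares slope and intercept for y = a*x + b
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Illustrative data only: BMI declining with age in a masters-athlete cohort
ages = [30, 40, 50, 60, 70]
bmis = [26.0, 25.4, 24.9, 24.1, 23.6]
slope, intercept = linear_fit(ages, bmis)
print(slope)  # negative: BMI falls with age in this toy sample
```

The study additionally used loess regression, which fits such lines locally over a sliding window of ages.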
Abstract: School and university orientation concerns a broad and
often poorly informed public. Technically, it is an important
multicriterion decision problem that requires combining extensive
academic, professional, and legal knowledge, which in turn
justifies software resorting to Artificial Intelligence techniques.
CORUS is an expert system for "Conseil et ORientation
Universitaire et Scolaire", based on a knowledge representation
language (KRL) with rules and objects, called Ibn Rochd.
CORUS was developed with DéGSE, a knowledge engineering
workshop that supports this KRL. CORUS works out many
acceptable solutions for the case considered and retains the most
satisfactory among them. Several versions of CORUS have gradually
extended its services.
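Ibn Rochd's rules and objects are not described here, so as a generic illustration of how a rule-based orientation system derives suggestions, here is a minimal forward-chaining sketch; the facts and rules are entirely hypothetical.

```python
# Each rule: (set of required facts, conclusion to add when they all hold)
rules = [
    ({"likes_math", "likes_programming"}, "suggest_computer_science"),
    ({"likes_biology", "likes_chemistry"}, "suggest_medicine"),
]

def forward_chain(facts, rules):
    # Repeatedly fire rules whose conditions hold until nothing new is derived
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for cond, concl in rules:
            if cond <= facts and concl not in facts:
                facts.add(concl)
                changed = True
    return facts

print(forward_chain({"likes_math", "likes_programming"}, rules))
```

A system like CORUS would additionally rank the derived suggestions to retain the most satisfactory one.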
Abstract: Interest in human consciousness has been revived in the late 20th century across different scientific disciplines. Consciousness studies involve both its understanding and its application. In this paper, a computational model of the minimum consciousness functions necessary, in my view, for Artificial Intelligence applications is presented, with the aim of improving the way computations will be made in the future. In Section I, human consciousness is briefly described within the scope of this paper. In Section II, a minimum set of consciousness functions is defined, based on the literature reviewed, to be modelled; a computational model of these functions is then presented in Section III. In Section IV, an analysis of the model is carried out to describe its functioning in detail.
Abstract: In order to make existing SOAP (Simple Object
Access Protocol)-based Web services available to users who are
familiar with REST (REpresentational State Transfer)-style Web
services, this paper proposes a Web service provisioning method
based on Web service transformation. It enables SOAP-based service
providers to define rules for mapping RESTful Web services to
SOAP-based ones. Using these mapping rules, HTTP request messages
for RESTful services are converted automatically into SOAP-based
service invocations. Web service providers need not develop
duplicate RESTful services, and they can avoid programming mediation
modules per service. Furthermore, they need not deploy mediation
middleware such as an ESB (Enterprise Service Bus) solely for the
purpose of transforming between the two Web service styles.
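The paper's mapping-rule format is not given, so as a sketch of the idea only, the following converts a RESTful path into a SOAP request body using one hypothetical rule; the path template, operation name, and namespace are invented.

```python
# Hypothetical mapping rule: RESTful path template -> SOAP operation
rule = {"path": "/orders/{id}", "operation": "GetOrder",
        "namespace": "urn:example"}

def rest_to_soap(rule, path):
    # Extract path parameters per the rule's template, then build the
    # corresponding SOAP envelope with one element per parameter
    tmpl, parts = rule["path"].split("/"), path.split("/")
    params = {t[1:-1]: p for t, p in zip(tmpl, parts) if t.startswith("{")}
    args = "".join(f"<{k}>{v}</{k}>" for k, v in params.items())
    return (f'<soap:Envelope xmlns:soap='
            f'"http://schemas.xmlsoap.org/soap/envelope/">'
            f'<soap:Body><m:{rule["operation"]} xmlns:m="{rule["namespace"]}">'
            f'{args}</m:{rule["operation"]}></soap:Body></soap:Envelope>')

print(rest_to_soap(rule, "/orders/42"))
```

A mediation layer built this way can serve RESTful clients without duplicating the SOAP service implementations.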
Abstract: Cancer classification into corresponding cohorts has been a key area of bioinformatics research, aiming at a better prognosis of the disease. The high dimensionality of gene data makes this a complex task, requiring techniques that identify significant data in order to reduce the dimensionality and extract significant information. In this paper, we propose a novel approach for classifying oral cancer patients as metastasis-positive or metastasis-negative. We used significance analysis of microarrays (SAM) to identify the significant genes constituting a gene signature. Three different gene signatures were identified using SAM from three different combinations of training datasets, and their classification accuracy was calculated on the corresponding testing datasets using k-Nearest Neighbour (kNN), Fuzzy C-Means clustering (FCM), Support Vector Machine (SVM), and Backpropagation Neural Network (BPNN) classifiers. A final gene signature of only 9 genes was obtained from the three individual gene signatures, and its classification capability was compared using the same classifiers on the same testing datasets. The experimental results show that the 9-gene signature classified all samples in the testing dataset accurately, while the individual gene signatures could not classify all of them accurately.
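SAM ranks genes by a moderated t-like statistic; this is not the authors' code, only a sketch of such a score on hypothetical expression values, with the small fudge factor s0 assumed rather than estimated as SAM does.

```python
import statistics

def sam_like_score(group_a, group_b, s0=0.1):
    # Moderated t-like statistic in the spirit of SAM: difference of
    # group means divided by (pooled standard error + fudge factor s0);
    # s0 keeps low-variance genes from dominating the ranking
    diff = statistics.fmean(group_a) - statistics.fmean(group_b)
    se = (statistics.pvariance(group_a) / len(group_a)
          + statistics.pvariance(group_b) / len(group_b)) ** 0.5
    return diff / (se + s0)

# Hypothetical expression values for one gene in two cohorts
score = sam_like_score([5.1, 5.3, 5.2], [2.0, 2.2, 2.1])
print(score)  # large positive: a strongly discriminating gene
```

Genes whose scores exceed a permutation-derived threshold would form the signature fed to the kNN/FCM/SVM/BPNN classifiers.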
Abstract: Application of nanoparticles as additives in membrane
synthesis for improving the resistance of membranes against fouling
has triggered recent interest in new membrane types. However, most
nanoparticle-enhanced membranes suffer from the tradeoff between
permeability and selectivity. In this study, nano-WS2 was explored as
the additive in membrane synthesis by non-solvent induced phase
separation. Blended PES-WS2 flat-sheet membranes with the
incorporation of ultra-low concentrations of nanoparticles (from 0.025
to 0.25%, WS2/PES ratio) were manufactured and investigated in
terms of permeability, fouling resistance and solute rejection.
Remarkably, a significant enhancement in the permeability was
observed as a result of the incorporation of ultra-low fractions of
nano-WS2 to the membrane structure. Optimal permeability values
were obtained for modified membranes with 0.10%
nanoparticle/polymer concentration ratios. Furthermore, fouling
resistance and solute rejection were significantly improved by the
incorporation of nanoparticles into the membrane matrix. Specifically,
the fouling resistance of the modified membranes increased by around 50%.
Abstract: This paper proposes a method of parallel processing of
SURF and optical flow for moving object recognition and tracking.
Object recognition and tracking is one of the most important tasks
in computer vision, but its many operations slow processing to the
point where real-time recognition and tracking becomes impossible.
The proposed method combines SURF, a typical feature extraction
technique, with optical flow for moving objects to mitigate this
disadvantage and achieve real-time moving object recognition and
tracking, using parallel processing techniques to improve speed.
First, an image from a database and one acquired through the camera
are analysed with SURF and compared for recognition of the same
object; a ROI (Region of Interest) is then set for tracking the
movement of feature points using optical flow. Second,
multi-threading is used to improve processing speed and recognition
through parallel processing. Finally, performance is evaluated and
the efficiency of the algorithm is verified through experiments.
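The SURF and optical-flow stages themselves are not reproducible from the abstract, so the sketch below shows only the multi-threading structure, with stand-in functions (invented names and return values) in place of the real matching and tracking steps.

```python
from concurrent.futures import ThreadPoolExecutor

def match_features(frame):
    # Stand-in for SURF matching of the frame against a stored database
    return {"object": "target", "roi": (10, 10, 50, 50)}

def track_flow(frame):
    # Stand-in for optical-flow tracking of feature points in the ROI
    return {"dx": 1.5, "dy": -0.5}

def process_frame(frame):
    # Run recognition and tracking concurrently, mirroring the paper's
    # use of multiple threads to keep the pipeline real-time
    with ThreadPoolExecutor(max_workers=2) as pool:
        rec = pool.submit(match_features, frame)
        trk = pool.submit(track_flow, frame)
        return rec.result(), trk.result()

recognition, tracking = process_frame(frame=None)
print(recognition, tracking)
```

In practice the two stages share state (the ROI), so a real pipeline would hand the recognized ROI to the tracking thread between frames.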
Abstract: Rapid Prototyping (RP) is a technology that produces models and prototype parts from 3D CAD model data, CT/MRI scan data, and model data created by 3D object digitizing systems. There are several RP processes, such as Stereolithography (SLA), Solid Ground Curing (SGC), Selective Laser Sintering (SLS), Fused Deposition Modeling (FDM), and 3D Printing (3DP); among them, the SLS and FDM processes are used to fabricate patterns for custom cranial implants. RP technology is useful in engineering and biomedical applications: in engineering for product design, tooling, and manufacture, and in biomedicine for the design and development of medical devices, instruments, prosthetics, and implants, as well as for planning complex surgical operations. The traditional approach limits the full appreciation of various bony structure movements, so with custom implants produced this way it is difficult to measure the anatomy of the parts and analyze changes in facial appearance accurately. Cranioplasty is the surgical correction of a defect in cranial bone by implanting a metal or plastic replacement to restore the missing part. This paper presents a comparative study of the dimensional error of CAD and SLS RP models for the reconstruction of a cranial defect, comparing the virtual CAD model with the physical RP model of the defect.
Abstract: Since prestressed concrete members rely on the tensile
strength of the prestressing strands to resist loads, the loss of
even a few of them could be catastrophic. It is therefore important
to measure the present residual prestress force. Although some
techniques exist for obtaining the present prestress force, problems
remain. One method is to install a load cell in front of the anchor
head, but this increases cost. A load cell is a transducer that
exploits the elasticity of its material; since the anchor head is
also an elastic body, it could itself be used to monitor the present
prestress force. Features of fiber optic sensors, such as small
size, great sensitivity, and high durability, can assign this sensing
function to the anchor head. This paper presents the concept of a
smart anchor head that acts as a load cell, together with an
experiment on its applicability. Test results showed that the smart
anchor head worked well, with a strong linear relationship between
load and response.
Abstract: Knowledge Discovery in Databases (KDD) has
evolved into an important and active area of research because of
theoretical challenges and practical applications associated with the
problem of discovering (or extracting) interesting and previously
unknown knowledge from very large real-world databases. Rough
Set Theory (RST) is a mathematical formalism for representing
uncertainty that can be considered an extension of the classical set
theory. It has been used in many different research areas, including
those related to inductive machine learning and reduction of
knowledge in knowledge-based systems. One important concept
related to RST is that of a rough relation. In this paper we present
the current status of research on applying rough set theory to KDD,
which is helpful in handling the characteristics of real-world
databases. The main aim is to show how rough sets and rough set
analysis can be effectively used to extract knowledge from large
databases.
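The core rough-set construction is the pair of lower and upper approximations of a target set under an equivalence relation; a minimal sketch on a toy universe (the objects and the equivalence function are invented for illustration):

```python
def approximations(universe, equiv, target):
    # Partition the universe into equivalence classes under `equiv`,
    # then form the rough-set lower and upper approximations of `target`
    classes = {}
    for x in universe:
        classes.setdefault(equiv(x), set()).add(x)
    lower, upper = set(), set()
    for c in classes.values():
        if c <= target:
            lower |= c   # class lies entirely inside the target set
        if c & target:
            upper |= c   # class overlaps the target set
    return lower, upper

# Objects 0-5 grouped into pairs {0,1}, {2,3}, {4,5}; target set {0,1,2}
low, up = approximations(range(6), lambda x: x // 2, {0, 1, 2})
print(low, up)
```

The gap between the two approximations (here the boundary {2, 3}) is exactly the uncertainty RST represents; when it is empty, the target set is crisp.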
Abstract: The availability of high-dimensional biological datasets, such as those from gene expression, proteomic, and metabolic experiments, can be leveraged for the diagnosis and prognosis of diseases. Many classification methods in this area have been studied to predict disease states and separate predefined classes, such as patients with a particular disease versus healthy controls. However, most existing research focuses on one specific dataset, and there is a lack of generic comparisons between classifiers that might guide biologists or bioinformaticians in selecting the proper algorithm for a new dataset. In this study, we compare the performance of popular classifiers, namely Support Vector Machine (SVM), Logistic Regression, k-Nearest Neighbor (k-NN), Naive Bayes, Decision Tree, and Random Forest, on mock datasets. We mimic common biological scenarios by simulating various proportions of truly discriminating biomarkers and different effect sizes thereof. The results show that SVM performs quite stably and reaches a higher AUC than the other methods, which may be explained by SVM's ability to minimize the probability of error. Moreover, Decision Tree, with its good applicability for diagnosis and prognosis, shows good performance in our experimental setup. Logistic Regression and Random Forest, however, depend strongly on the ratio of discriminators and perform better with a higher number of discriminators.
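The comparison metric above is the AUC; as a self-contained reference for what that number means, here is the empirical AUC (the probability that a random positive sample outranks a random negative one), computed directly from classifier scores; the scores below are invented.

```python
def auc(pos_scores, neg_scores):
    # Empirical AUC: fraction of positive/negative pairs ranked correctly,
    # counting ties as half a win (equivalent to the Wilcoxon statistic)
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

print(auc([0.9, 0.8], [0.1, 0.2]))  # perfectly separated scores: 1.0
```

An AUC of 0.5 corresponds to chance-level ranking, which is why stable values well above 0.5 across mock scenarios favor SVM in the study's setup.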
Abstract: Forty-one strains of ESBL-producing P. aeruginosa
which were previously isolated from burn patients in Kerman
University general hospital, Iran were subjected to PCR, RFLP and
sequencing in order to determine the type of extended spectrum β-
lactamases (ESBL), the restriction digestion pattern and possibility of
mutation among detected genes. DNA extraction was carried out by
phenol chloroform method. PCR for detection of bla genes was
performed using specific primer for each gene. Restriction Fragment
Length Polymorphism (RFLP) for ESBL genes was carried out using
EcoRI, NheI, PVUII, EcoRV, DdeI, and PstI restriction enzymes. The
PCR products were subjected to direct sequencing of both the strands
for identification of the ESBL genes. The blaCTX-M, blaVEB-1, blaPER-1,
blaGES-1, blaOXA-1, blaOXA-4 and blaOXA-10 genes were detected in the
(n=1) 2.43%, (n=41)100%, (n=28) 68.3%, (n=10) 24.4%, (n=29)
70.7%, (n=7)17.1% and (n=38) 92.7% of the ESBL producing isolates
respectively. The RFLP analysis showed that each ESBL gene had an
identical pattern of digestion among the isolated strains. Sequencing
of the ESBL genes confirmed the authenticity of the PCR products and
revealed no mutations in the restriction sites of the above genes.
From the results of the present investigation it can be concluded
that blaVEB-1 and blaCTX-M were, respectively, the most and the least
frequently isolated ESBL genes among the P. aeruginosa strains
isolated from burn patients. The RFLP and sequencing analyses
revealed that the same clones of the bla genes indeed existed among
the antibiotic-resistant strains.
Abstract: Dengue is a mosquito-borne infection that has peaked at an alarming rate in recent decades. It is found in tropical and sub-tropical climates. In Malaysia, dengue has been declared one of the national health threats to the public. This study aimed to map the spatial distribution of dengue cases in the district of Hulu Langat, Selangor via a combination of Geographic Information System (GIS) and spatial statistic tools. Data related to dengue were gathered from various government health agencies. The locations of dengue cases were geocoded using a handheld Trimble Juno SB GPS. A total of 197 dengue cases occurring in 2003 were used in this study. Those data were then aggregated to the sub-district level and converted into GIS format. The study also used population and demographic data as well as the boundary of Hulu Langat. To assess the spatial distribution of dengue cases, three spatial statistics methods (Moran's I, average nearest neighborhood (ANN), and kernel density estimation) were applied together with spatial analysis in the GIS environment. These three indices were used to analyze the spatial distribution and average distance of dengue incidence and to locate the hot spots of dengue cases. The results indicated that the dengue cases were clustered (p < 0.01) when analyzed using Moran's I, with a z-score of 5.03. The ANN analysis showed that the average nearest neighbor ratio was less than 1, at 0.518755 (p < 0.0001); from this result, the pattern of dengue cases in Hulu Langat district can be expected to be clustered. The z-score for dengue incidence within the district was -13.0525 (p < 0.0001). It was also found that significant spatial autocorrelation of dengue incidences occurs at an average distance of 380.81 meters (p < 0.0001). Several locations, especially residential areas, were also identified as hot spots of dengue cases in the district.
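Moran's I, the clustering statistic used above, can be computed directly from case counts and a spatial weights matrix; a minimal sketch on four invented locations with chain adjacency (not the study's data):

```python
def morans_i(values, weights):
    # weights[i][j]: spatial weight between locations i and j (0 on diagonal);
    # I > 0 indicates spatial clustering, I < 0 a dispersed pattern
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    w_sum = sum(sum(row) for row in weights)
    return (n / w_sum) * (num / den)

# Four locations on a line (chain adjacency); high values clustered together
vals = [10, 10, 2, 2]
w = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
print(morans_i(vals, w))  # positive: neighboring values are similar
```

In the study this statistic was evaluated over sub-district case counts and converted to a z-score to test the clustering for significance.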
Abstract: In this paper, the application of GRNN to the
modeling of SOFC fuel cells is studied. The parameters of
interest are the voltage and power values, whose behavior as
the current changes is investigated. In addition, the GRNN
application is compared with a conventional method. The error
values indicated superior performance.
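Assuming GRNN here denotes the General Regression Neural Network, its prediction is a Gaussian-kernel-weighted average of the training targets; a minimal sketch with invented current-voltage samples (not the paper's data) and an assumed smoothing parameter sigma:

```python
import math

def grnn_predict(x, train_x, train_y, sigma=0.5):
    # GRNN prediction: kernel-weighted average of training outputs,
    # with Gaussian weights centered on the training inputs
    w = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    return sum(wi * yi for wi, yi in zip(w, train_y)) / sum(w)

# Hypothetical current (A) -> voltage (V) samples for an SOFC cell
currents = [0.0, 0.5, 1.0, 1.5, 2.0]
voltages = [1.10, 0.95, 0.85, 0.75, 0.60]
print(grnn_predict(1.0, currents, voltages, sigma=0.2))
```

Smaller sigma values make the prediction hug the training samples; larger values smooth across them, which is the only parameter a GRNN requires tuning.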