Abstract: The ε4 allele of the ε2, ε3 and ε4 protein isoform polymorphism in the gene encoding apolipoprotein E (Apo E) has previously been associated with increased coronary artery disease (CAD). To investigate the significance of this polymorphism in the pathogenesis of CAD in Iranian patients, we performed a comparative case-control study of the frequency of the Apo E polymorphism in 100 CAD patients who underwent coronary angiography (>50% stenosis) and 100 control subjects.
Abstract: InGaAsN and GaAsN epitaxial layers with similar
nitrogen compositions within each sample were successfully grown on a
GaAs (001) substrate by solid source molecular beam epitaxy. An
electron cyclotron resonance nitrogen plasma source has been used to
generate atomic nitrogen during the growth of the nitride layers. The
indium composition was varied from sample to sample to produce
InGaAsN layers under either compressive or tensile strain. Layer
characteristics have been assessed by high-resolution x-ray
diffraction to determine the relationship between the lattice constant
of the GaAs1-yNy layer and the fraction x of In. The objective was to
determine the In fraction x in an InxGa1-xAs1-yNy epitaxial layer which
exactly cancels the strain present in a GaAs1-yNy epitaxial layer with
the same nitrogen content when grown on a GaAs substrate.
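The strain-cancellation condition described above can be sketched numerically. Assuming an ideal bilinear Vegard interpolation between the binary lattice constants (textbook zinc-blende values in angstroms, not the paper's HRXRD-derived results), the In fraction x that lattice-matches InxGa1-xAs1-yNy to GaAs follows in closed form:

```python
# Strain-compensation estimate via Vegard's law -- a sketch under the
# assumption of ideal bilinear interpolation; the paper's HRXRD analysis
# is the authoritative method. Lattice constants in angstroms.
A_GAAS, A_INAS = 5.6533, 6.0583
A_GAN, A_INN = 4.50, 4.98    # cubic (zinc-blende) nitrides

def matching_indium_fraction(y):
    """In fraction x for which In(x)Ga(1-x)As(1-y)N(y) is lattice-matched
    to GaAs, i.e. the In-induced compressive strain exactly cancels the
    tensile strain of a GaAs(1-y)N(y) layer with the same N content."""
    num = y * (A_GAAS - A_GAN)
    den = (1 - y) * (A_INAS - A_GAAS) + y * (A_INN - A_GAN)
    return num / den

for y in (0.01, 0.02, 0.03):
    print(f"y = {y:.2f}  ->  x = {matching_indium_fraction(y):.3f}")
```

Under these assumed constants the sketch gives x roughly 2.8y, close to the x ≈ 3y rule of thumb often quoted for dilute nitrides.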
Abstract: This paper presents practical methods to reduce human exposure levels in the area around base transceiver stations in an environment with multiple sources, based on ITU-T Recommendation K.70. An example is presented to illustrate the mitigation techniques and their results, and to show how they can be applied, especially in developing countries where research on non-ionizing radiation is scarce.
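The multi-source compliance check at the heart of K.70 can be sketched as a total exposure ratio: the sum over all sources of measured (or estimated) power density divided by the frequency-dependent limit must stay below 1. The far-field estimate, the ICNIRP-1998 general-public limits, and the site numbers below are illustrative assumptions, not values from the paper:

```python
import math

# Multi-source exposure check in the spirit of ITU-T K.70: the total
# exposure ratio (TER) sums S_i / S_lim_i over sources and must be < 1.
# Far-field power density and ICNIRP-1998 general-public reference levels
# are simplifying assumptions; the site data are hypothetical.

def power_density(eirp_w, distance_m):
    """Free-space far-field estimate S = EIRP / (4*pi*d^2), in W/m^2."""
    return eirp_w / (4 * math.pi * distance_m ** 2)

def public_limit(freq_mhz):
    """ICNIRP (1998) general-public power-density reference level."""
    if 400 <= freq_mhz <= 2000:
        return freq_mhz / 200.0        # W/m^2
    if 2000 < freq_mhz <= 300000:
        return 10.0
    raise ValueError("frequency outside the range handled by this sketch")

# Hypothetical co-sited antennas: (EIRP in W, frequency in MHz)
sources = [(1000, 900), (1500, 1800), (2000, 2100)]
d = 30.0  # metres from the mast

ter = sum(power_density(e, d) / public_limit(f) for e, f in sources)
print(f"total exposure ratio at {d} m: {ter:.4f}")  # compliant if < 1
```

Mitigation techniques of the kind the paper discusses (raising antennas, reducing EIRP, restricting access zones) all act by pushing this ratio further below 1.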
Abstract: A dissimilarity measure between the empirical
characteristic functions of the subsamples associated with the different
classes in a multivariate data set is proposed. This measure can be
efficiently computed, and it depends on all the cases of each class. It
may be used to find groups of similar classes, which could be joined
for further analysis, or it could be employed to perform an
agglomerative hierarchical cluster analysis of the set of classes. The
final tree can serve to build a family of binary classification models,
offering an alternative approach to the multi-class SVM problem. We
have tested this dendrogram-based SVM approach against the
one-against-one SVM approach on four publicly available data sets,
three of them being microarray data. Both performances have been
found equivalent, but the first solution requires a smaller number of
binary SVM models.
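The core quantity, a dissimilarity between empirical characteristic functions (ECFs) of two class subsamples, can be sketched as follows. The Monte-Carlo evaluation over random frequencies is an assumption for illustration; the paper defines its own evaluation and weighting scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def ecf(sample, T):
    """Empirical characteristic function of `sample` (n x d)
    evaluated at the frequency vectors in T (m x d)."""
    return np.exp(1j * sample @ T.T).mean(axis=0)   # shape (m,)

def ecf_dissimilarity(a, b, n_freq=256):
    """Mean squared modulus of the ECF difference, averaged over random
    Gaussian frequencies -- a Monte-Carlo sketch of the idea; note that
    it depends on every case of each class, as in the paper."""
    T = rng.standard_normal((n_freq, a.shape[1]))
    return np.mean(np.abs(ecf(a, T) - ecf(b, T)) ** 2)

# Two similar classes vs one shifted class:
c1 = rng.standard_normal((200, 3))
c2 = rng.standard_normal((200, 3))
c3 = rng.standard_normal((200, 3)) + 2.0
print(ecf_dissimilarity(c1, c2), ecf_dissimilarity(c1, c3))
```

A matrix of such pairwise dissimilarities can then be fed to an agglomerative procedure (e.g. `scipy.cluster.hierarchy.linkage`) to obtain the dendrogram of classes on which the binary SVM family is built.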
Abstract: This paper introduces a novel boring bar design with enhanced damping capability. The principle followed in the design phase was to enhance the damping capability while minimizing the loss in static stiffness through the implementation of composite material interfaces. The newly designed tool was compared to a conventional tool. The evaluation criteria were the dynamic characteristics, frequency and damping ratio, of the machining system, as well as the surface roughness of the machined workpieces. The use of composite material in the design of a damped tool is demonstrated to be effective. Furthermore, the autoregressive moving average (ARMA) models presented in this paper take into consideration the interaction between the elastic structure of the machine tool and the cutting process, and can therefore be used to characterize the machining system under operational conditions.
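One standard way autoregressive models yield frequency and damping ratio is through the poles of a fitted AR model. The sketch below, on a synthetic single-mode vibration record (the signal, sample rate and modal values are assumptions, not the paper's data or its full ARMA formulation), recovers both parameters from an AR(2) fit:

```python
import numpy as np

# Sketch: identifying frequency and damping ratio from a vibration record
# via a low-order autoregressive model. The paper's ARMA models of the
# machining system are richer; this synthetic signal is a single mode.
fs = 10_000.0                    # sample rate, Hz (assumed)
fn, zeta = 800.0, 0.03           # "true" natural frequency and damping ratio
t = np.arange(0, 0.2, 1 / fs)
wn = 2 * np.pi * fn
wd = wn * np.sqrt(1 - zeta**2)
x = np.exp(-zeta * wn * t) * np.cos(wd * t)

# Least-squares AR(2) fit: x[n] = a1*x[n-1] + a2*x[n-2]
A = np.column_stack([x[1:-1], x[:-2]])
a1, a2 = np.linalg.lstsq(A, x[2:], rcond=None)[0]

# Poles of the AR model map back to continuous-time modal parameters
poles = np.roots([1, -a1, -a2])
p = poles[np.argmax(poles.imag)]   # pole in the upper half-plane
s = np.log(p) * fs                 # continuous-time pole s = -zeta*wn + i*wd
f_est = abs(s) / (2 * np.pi)
zeta_est = -s.real / abs(s)
print(f"f = {f_est:.1f} Hz, zeta = {zeta_est:.4f}")
```

Comparing the identified damping ratio of the composite-interface bar against the conventional bar is exactly the kind of evaluation the abstract describes.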
Abstract: Microstrip lines, widely used for good reason, are
broadband in frequency and provide circuits that are compact and
light in weight. They are generally economical to produce since they
are readily adaptable to hybrid and monolithic integrated circuit (IC)
fabrication technologies at RF and microwave frequencies. Although
the existing EM simulation models used for the synthesis and
analysis of microstrip lines are reasonably accurate, they are
computationally intensive and time consuming. Neural networks
recently gained attention as fast and flexible vehicles for microwave
modeling, simulation and optimization. After learning and
abstracting from microwave data, through a process called training,
neural network models are used during microwave design to provide
instant answers to the task learned. This paper presents simple and
accurate ANN models for the synthesis and analysis of microstrip
lines, computing the physical dimensions and the characteristic
parameters, respectively, more accurately for the required design
specifications.
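The analysis mapping such an ANN learns, (w/h, εr) → (εeff, Z0), is often bootstrapped from quasi-static closed-form models. The sketch below uses Hammerstad's standard formulas as a stand-in for the EM-simulation data the paper trains on (the substrate values are illustrative):

```python
import math

# Quasi-static microstrip analysis (Hammerstad's closed-form model) --
# the kind of (w/h, eps_r) -> (eps_eff, Z0) mapping the paper's analysis
# ANN learns; used here as a stand-in for EM-simulation training data.
def microstrip_analysis(w_h, eps_r):
    """Return (effective permittivity, characteristic impedance in ohms)."""
    f = (1 + 12 / w_h) ** -0.5
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * f
    if w_h <= 1:
        eps_eff += (eps_r - 1) / 2 * 0.04 * (1 - w_h) ** 2
        z0 = 60 / math.sqrt(eps_eff) * math.log(8 / w_h + 0.25 * w_h)
    else:
        z0 = (120 * math.pi / math.sqrt(eps_eff)
              / (w_h + 1.393 + 0.667 * math.log(w_h + 1.444)))
    return eps_eff, z0

eps_eff, z0 = microstrip_analysis(1.0, 4.4)   # FR4-like substrate (assumed)
print(f"eps_eff = {eps_eff:.3f}, Z0 = {z0:.1f} ohm")
```

Sweeping w/h and εr through such a function generates (input, target) pairs; a small MLP trained on them then provides the "instant answers" the abstract refers to, and the synthesis network simply learns the inverse mapping.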
Abstract: Psoriasis is a chronic inflammatory skin condition
which affects 2-3% of the population worldwide. The Psoriasis Area
and Severity Index (PASI) is a gold standard to assess psoriasis
severity as well as treatment efficacy. Although a gold standard,
PASI is rarely used because it is tedious and complex. In practice,
the PASI score is determined subjectively by dermatologists; therefore,
inter- and intra-rater variations in assessment can occur even
among expert dermatologists. This research develops an algorithm to
assess psoriasis lesion for PASI scoring objectively. Focus of this
research is thickness assessment as one of PASI four parameters
beside area, erythema and scaliness. Psoriasis lesion thickness is
measured by averaging the total elevation from lesion base to lesion
surface. Thickness values of 122 3D images taken from 39 patients
are grouped into 4 PASI thickness scores using K-means clustering.
Validation of the lesion base reconstruction is performed using twelve
body curvature models and shows good results, with a coefficient of
determination (R²) equal to 1.
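The grouping step, clustering one-dimensional thickness values into 4 PASI scores with K-means, can be sketched directly (the thickness values below are synthetic stand-ins for the paper's 122 measurements, and Lloyd's algorithm is the assumed K-means variant):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic lesion-thickness values standing in for the paper's data
thickness = np.concatenate([
    rng.normal(0.1, 0.03, 40),   # slight elevation
    rng.normal(0.4, 0.05, 35),
    rng.normal(0.9, 0.08, 30),
    rng.normal(1.6, 0.10, 17),   # very marked elevation
])

def kmeans_1d(x, k=4, iters=50):
    """Lloyd's algorithm in 1-D; returns PASI-style scores 1..k."""
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))  # spread-out init
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() for j in range(k)])
    order = np.argsort(centers)
    scores = np.argsort(order)[labels] + 1   # relabel so 1 = thinnest
    return scores, centers[order]

scores, centers = kmeans_1d(thickness)
print("cluster centres:", np.round(centers, 2))
print("score counts:", np.bincount(scores)[1:])
```

Each cluster centre then acts as the representative thickness for one PASI thickness score.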
Abstract: Several studies have been carried out, using various techniques including neural networks, to discriminate vigilance states in humans from electroencephalographic (EEG) signals, but the results are still far from satisfactory. The work presented in this paper aims to improve this status in two respects. First, we introduce an original procedure combining two neural networks, a self-organizing map (SOM) and a learning vector quantization (LVQ) network, which automatically detects artifacted states and separates the different levels of vigilance, a major step forward in the field. Second, and more importantly, our study is oriented toward real-world situations, and the resulting model can easily be implemented as a wearable device: it has restricted computational and memory requirements, and data access is very limited in time. Furthermore, ongoing work indicates that this study should shortly result in the design of a non-invasive electronic wearable device.
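The two-network association can be sketched compactly: a small SOM organizes the feature space unsupervised, its prototypes are labelled by majority vote, and an LVQ1 pass fine-tunes them with the class labels. Everything below (2-D synthetic features standing in for EEG-derived ones, grid size, learning rates) is an illustrative assumption, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(2)

def make_class(center, n=100):
    return rng.normal(center, 0.3, size=(n, 2))

X = np.vstack([make_class([0, 0]), make_class([2, 0]), make_class([1, 2])])
y = np.repeat([0, 1, 2], 100)           # three "vigilance states"

# --- SOM stage: 6 prototypes on a 1-D grid, Gaussian neighbourhood ---
protos = rng.normal(1, 1, size=(6, 2))
grid = np.arange(6)
for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)
    sigma = max(2.0 * (1 - epoch / 20), 0.5)
    for xi in X[rng.permutation(len(X))]:
        bmu = np.argmin(((protos - xi) ** 2).sum(axis=1))
        h = np.exp(-((grid - bmu) ** 2) / (2 * sigma**2))
        protos += lr * h[:, None] * (xi - protos)

# Label each prototype by majority vote of the samples it wins
bmus = np.argmin(((X[:, None] - protos[None]) ** 2).sum(-1), axis=1)
plabels = np.array([np.bincount(y[bmus == j], minlength=3).argmax()
                    if (bmus == j).any() else -1 for j in range(6)])

# --- LVQ1 stage: pull prototypes toward own-class samples, push otherwise ---
for _ in range(10):
    for xi, yi in zip(X, y):
        j = np.argmin(((protos - xi) ** 2).sum(axis=1))
        step = 0.05 * (xi - protos[j])
        protos[j] += step if plabels[j] == yi else -step

pred = plabels[np.argmin(((X[:, None] - protos[None]) ** 2).sum(-1), axis=1)]
print("training accuracy:", (pred == y).mean())
```

The appeal for a wearable device is visible in the sketch itself: classification reduces to a nearest-prototype lookup over a handful of vectors, with tiny memory and compute cost.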
Abstract: This work proposes an approach to address automatic
text summarization. This approach is a trainable summarizer, which
takes into account several features, including sentence position,
positive keyword, negative keyword, sentence centrality, sentence
resemblance to the title, sentence inclusion of named entities, sentence
inclusion of numerical data, sentence relative length, Bushy path of
the sentence and aggregated similarity for each sentence to generate
summaries. First we investigate the effect of each sentence feature on
the summarization task. Then we use a score function combining all
features to train genetic algorithm (GA) and mathematical regression
(MR) models and obtain a suitable combination of feature weights. The
performance of the proposed approach is measured at several compression
rates on a data corpus composed of 100 English religious articles.
The results of the proposed approach are promising.
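The scoring scheme, a weighted sum of per-sentence features with the top-ranked sentences kept up to the compression rate, can be sketched as follows. Only three of the ten features are implemented, and the weights are made up for illustration (the paper obtains them by GA/MR training):

```python
# Minimal sketch of weighted-feature sentence scoring:
# score(s) = sum_i w_i * f_i(s); top sentences form the summary.
title = "forest fires and climate"
doc = [
    "Forest fires have increased with climate change.",
    "The committee met on Tuesday.",
    "Rising temperatures make fires more likely in many forests.",
    "Lunch was served at noon.",
    "Climate models predict longer fire seasons.",
]

def features(sent, idx, doc, title):
    words = set(sent.lower().split())
    position = 1 - idx / len(doc)                       # earlier = higher
    title_sim = len(words & set(title.lower().split())) / len(words)
    rel_len = len(sent.split()) / max(len(s.split()) for s in doc)
    return [position, title_sim, rel_len]

weights = [0.3, 0.5, 0.2]                # assumed, not GA/MR-trained
scores = [sum(w * f for w, f in zip(weights, features(s, i, doc, title)))
          for i, s in enumerate(doc)]

compression = 0.4                        # keep 40% of the sentences
k = max(1, round(compression * len(doc)))
top = sorted(sorted(range(len(doc)), key=lambda i: -scores[i])[:k])
summary = [doc[i] for i in top]
print(summary)
```

Varying `compression` reproduces the "several compression rates" evaluation the abstract mentions.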
Abstract: In this paper a functional interpretation of quantum
theory (QT) with emphasis on quantum field theory (QFT) is proposed.
Besides the usual statements on relations between a function's
initial state and final state, a functional interpretation also contains
a description of the dynamic evolution of the function. That is, it
describes how things function. The proposed functional interpretation
of QT/QFT has been developed in the context of the author's work
towards a computer model of QT with the goal of supporting
the largest possible scope of QT concepts. In the course of this
work, the author encountered a number of problems inherent in the
translation of quantum physics into a computer program. He came
to the conclusion that the goal of supporting the major QT concepts
can only be satisfied, if the present model of QT is supplemented
by a "functional interpretation" of QT/QFT. The paper describes a
proposal for such an interpretation.
Abstract: Consider the mass production of HDD arms, where
hundreds of CNC machines are used to manufacture the arms.
Given the overwhelming number of machines and arm models,
constructing a separate control chart to monitor each HDD arm
model on each machine is not feasible. This research proposes a
strategy to optimize SPC management on the shop floor. The
procedure starts by identifying clusters of machines with
similar manufacturing performance using a clustering technique. The
three-way control chart (I-MR-R) is then applied to each
clustered group of machines. The proposed approach benefits
the manufacturer in terms of not only better
performance of the SPC but also the quality management paradigm.
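Once a cluster of machines is formed, the three-way chart limits follow from standard SPC constants. The sketch below uses synthetic subgrouped data (the 2.66 and 3.267 moving-range constants and D4 = 2.114 for subgroups of five are standard; the data and subgroup size are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Limits for a three-way (I-MR-R) chart from subgrouped data, as would be
# applied to one cluster of machines. Data are synthetic.
data = rng.normal(10.0, 0.2, size=(25, 5))     # 25 subgroups of 5 parts

xbar = data.mean(axis=1)                       # subgroup means -> I chart
mr = np.abs(np.diff(xbar))                     # moving range of the means
r = data.max(axis=1) - data.min(axis=1)        # within-subgroup range

i_ucl = xbar.mean() + 2.66 * mr.mean()         # between-subgroup variation
i_lcl = xbar.mean() - 2.66 * mr.mean()
mr_ucl = 3.267 * mr.mean()
r_ucl = 2.114 * r.mean()                       # D4 for n = 5 (D3 = 0)

print(f"I chart:  {i_lcl:.3f} .. {i_ucl:.3f}")
print(f"MR chart: UCL = {mr_ucl:.3f}")
print(f"R chart:  UCL = {r_ucl:.3f}")
```

The I and MR charts track the between-subgroup (machine-to-machine, run-to-run) variation of the cluster, while the R chart tracks within-subgroup variation, which is what makes the three-way form suitable here.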
Abstract: Estimation of runoff water quality parameters is required to determine appropriate water quality management options. Various models are used to estimate runoff water quality parameters; however, most provide event-based estimates for specific sites. The work presented in this paper describes the development of a model that continuously simulates the accumulation and wash-off of water quality pollutants in a catchment. The model allows estimation of pollutant build-up during dry periods and pollutant wash-off during storm events. It was developed by integrating two individual models: a rainfall-runoff model and a catchment water quality model. The rainfall-runoff model is based on the time-area runoff estimation method; it allows users to estimate the time of concentration using a range of established methods, and to estimate the continuing runoff losses using any of the available estimation methods (i.e., constant, linearly varying or exponentially varying). Pollutant build-up in a catchment is represented by one of three pre-defined functions: power, exponential, or saturation. Similarly, pollutant wash-off is represented by one of three functions: power, rating-curve, or exponential. The developed runoff water quality model was set up to simulate the build-up and wash-off of total suspended solids (TSS), total phosphorus (TP) and total nitrogen (TN). The application of the model was demonstrated using available runoff and TSS field data from road and roof surfaces in the Gold Coast, Australia. The model provided an excellent representation of the field data, demonstrating the simplicity yet effectiveness of the proposed model.
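The build-up and wash-off options named in the abstract have standard functional forms, sketched below. The parameter values are illustrative only, not the coefficients calibrated to the Gold Coast data:

```python
import numpy as np

# Standard forms of the three build-up options and an exponential wash-off
# (parameters are illustrative, not calibrated values from the paper).
def buildup_power(t, a, b):          return a * t ** b
def buildup_exponential(t, bmax, k): return bmax * (1 - np.exp(-k * t))
def buildup_saturation(t, bmax, k):  return bmax * t / (k + t)

def washoff_exponential(b0, k, q, t):
    """Mass washed off after duration t of runoff at rate q, from load b0."""
    return b0 * (1 - np.exp(-k * q * t))

days_dry = np.arange(0, 15)
load = buildup_exponential(days_dry, bmax=60.0, k=0.3)   # e.g. kg/ha of TSS
washed = washoff_exponential(load[-1], k=0.18, q=10.0, t=1.0)
print(f"load after 14 dry days: {load[-1]:.1f}; washed off in storm: {washed:.1f}")
```

Chaining these two stages over alternating dry periods and storm events is what turns the event-based picture into the continuous simulation the paper describes.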
Abstract: AAM has been successfully applied to face alignment,
but its performance is very sensitive to initial values. If the initial
values are far from the global optimum, AAM-based face alignment
is quite likely to
converge to a local minimum. In this paper, we propose a progressive
AAM-based face alignment algorithm which first finds the feature
parameter vector fitting the inner facial feature points and then
localizes the feature points of the whole face using this
information. The proposed progressive AAM-based face alignment
algorithm utilizes the fact that the feature points of the inner part of the
face are less variant and less affected by the background surrounding
the face than those of the outer part (like the chin contour). The
proposed algorithm consists of two stages: a modeling and relation
derivation stage, and a fitting stage. The modeling and relation
derivation stage constructs two AAM models, the inner face AAM
model and the whole face AAM model, and then derives a relation matrix
between the inner face AAM parameter vector and the whole face
AAM parameter vector. In the fitting stage, the proposed
algorithm aligns the face progressively through two phases. In the first
phase, the algorithm finds the feature parameter vector fitting the
inner face AAM model to a new input face image; in the second phase,
it localizes the whole facial feature points of the input image based
on the whole face AAM model, using as the initial parameter vector an
estimate computed from the inner feature parameter vector of the first
phase and the relation matrix derived in the first stage. Through
experiments, it is verified that the
proposed progressive AAM-based face alignment algorithm is more
robust with respect to pose, illumination, and face background than the
conventional basic AAM-based face alignment algorithm.
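The relation-derivation step admits a simple least-squares reading: given training pairs of inner-face and whole-face AAM parameter vectors, a linear relation matrix can be estimated and then used to seed the whole-face fit. The dimensions, the linearity assumption, and the synthetic data below are illustrative, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(4)

# Sketch of deriving a relation matrix R mapping inner-face AAM parameters
# to whole-face AAM parameters by least squares (dimensions are assumed).
n_train, d_inner, d_whole = 200, 12, 20
P_inner = rng.standard_normal((n_train, d_inner))      # training inner params
R_true = rng.standard_normal((d_inner, d_whole))       # synthetic ground truth
P_whole = P_inner @ R_true + 0.01 * rng.standard_normal((n_train, d_whole))

R, *_ = np.linalg.lstsq(P_inner, P_whole, rcond=None)

# Fitting stage, phase 2: the inner-fit parameters of a new image give the
# initial whole-face parameter vector for the second AAM search.
p_inner_new = rng.standard_normal(d_inner)
p_whole_init = p_inner_new @ R
print("relation matrix error:", np.abs(R - R_true).max())
```

Because the inner-face parameters are less disturbed by background and pose, the initialization `p_whole_init` starts the whole-face search much closer to the global optimum, which is the source of the robustness gain the abstract reports.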
Abstract: In this work we present an efficient approach for face
recognition in the infrared spectrum. In the proposed approach
physiological features are extracted from thermal images in order to
build a unique thermal faceprint. Then, a distance transform is used
to get an invariant representation for face recognition. The obtained
physiological features are related to the distribution of blood vessels
under the face skin. This blood network is unique to each individual
and can be used in infrared face recognition. The obtained results are
promising and show the effectiveness of the proposed scheme.
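The representation step, turning a segmented vessel network into a registration-tolerant faceprint, can be sketched with a Euclidean distance transform. The tiny hand-made mask below is a toy stand-in for a vessel network extracted from a thermal image:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy binary vessel map (True = vessel pixel), standing in for a network
# segmented from a thermal face image.
vessels = np.zeros((8, 8), dtype=bool)
vessels[2, 1:7] = True          # a horizontal "vessel"
vessels[2:7, 4] = True          # a branching vertical "vessel"

# Distance from every pixel to the nearest vessel pixel: a smooth field
# that is far more tolerant to small misalignments than the raw mask.
faceprint = distance_transform_edt(~vessels)
print(np.round(faceprint, 1))
```

Recognition then reduces to comparing such distance fields (e.g. by correlation or a simple norm) rather than requiring exact overlap of thin vessel curves, which is what makes the representation invariant in practice.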
Abstract: In this paper, we present symbolic recognition models to extract knowledge characterized by document structures. Focussing on the extraction and the meticulous exploitation of the semantic structure of documents, we obtain a meaningful contextual tagging corresponding to different unit types (title, chapter, section, enumeration, etc.).
Abstract: In this study, the fuzzy integrated logical forecasting method (FILF) is extended to multivariate systems by using a vector autoregressive model. The fuzzy time series forecasting (FTSF) method was introduced by Song and Chissom [1]-[2] and later improved by Chen. Unlike the existing literature, the proposed model is compared not only with previous FTS models but also with conventional time series methods such as the classical vector autoregressive model. The cluster optimization is based on the C-means clustering method. An empirical study is performed on the prediction of the chartering rates of a group of dry bulk cargo ships. The root mean squared error (RMSE) metric is used to compare the methods, and the proposed method outperforms both the traditional FTS methods and the classical time series methods.
Abstract: The history of technology and banking is examined as
it relates to risk and technological determinism. It is proposed that
the services that banks offer are determined by technology and that
banks must adopt new technologies to be competitive. The adoption
of technologies paradoxically forces the adoption of other new
technologies to protect the bank from the increased risk of
technology. This cycle will lead bank examiners and regulators to
focus on human behavior, not on the ever-changing technology.
Abstract: To reveal the temperature field distribution of disc
brake in a downward belt conveyor, mathematical models of heat
transfer for the disc brake were established based on heat transfer
theory. Then, the simulation process was described in detail and the
temperature field of disc brake under conditions of dynamic speed and
dynamic braking torque was numerically simulated by using ANSYS
software. Finally the distribution and variation laws of temperature
field in the braking process were analyzed. Results indicate that the
maximum surface temperature occurs some time before braking ends,
and there exist large temperature gradients in both radial and axial
directions, while it is relatively small in the circumferential direction.
Abstract: This paper presents a novel method for inferring the
odor based on neural activities observed from rats' main olfactory
bulbs. Multi-channel extra-cellular single unit recordings were done
by micro-wire electrodes (tungsten, 50μm, 32 channels) implanted in
the mitral/tufted cell layers of the main olfactory bulb of anesthetized
rats to obtain neural responses to various odors. The neural response,
used as the key feature, was measured by subtracting the firing rate
before the stimulus from that after it. For odor inference, we have developed a
decoding method based on the maximum likelihood (ML) estimation.
The results show that the average decoding accuracy is about
100.0%, 96.0%, 84.0%, and 100.0% for the four rats, respectively. This
work has profound implications for a novel brain-machine interface
system for odor inference.
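An ML decoder of this kind can be sketched with a Poisson model of the stimulus-evoked spike counts: odor-specific mean rates are estimated from training trials, and a test response is assigned to the odor maximizing the Poisson log-likelihood. The channel counts and rates below are synthetic assumptions, not the paper's recordings or its exact likelihood model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Poisson maximum-likelihood odor decoding sketch.
n_channels, n_odors, n_train = 32, 4, 20
true_rates = rng.uniform(2, 20, size=(n_odors, n_channels))  # spikes/trial

train = rng.poisson(true_rates[:, None, :],
                    size=(n_odors, n_train, n_channels))
rate_hat = train.mean(axis=1) + 1e-6    # estimated mean response per odor

def decode(counts):
    # Poisson log-likelihood per odor: sum_c [k_c * log(lam_c) - lam_c]
    ll = (counts * np.log(rate_hat) - rate_hat).sum(axis=1)
    return int(np.argmax(ll))

tests = [(o, rng.poisson(true_rates[o]))
         for o in range(n_odors) for _ in range(25)]
acc = np.mean([decode(c) == o for o, c in tests])
print(f"decoding accuracy: {acc:.2f}")
```

With 32 channels carrying independent odor-specific rate changes, near-perfect accuracy is expected on this synthetic setup, which mirrors the high per-rat accuracies reported in the abstract.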
Abstract: Image compression plays a vital role in today's
communication. The limitation in allocated bandwidth leads to
slower communication; to increase the rate of transmission within
the limited bandwidth, image data must be compressed before
transmission. Basically there are two types of compression: 1)
lossy compression and 2) lossless compression. Although lossy
compression achieves higher compression ratios than lossless
compression, it retrieves the image less accurately. The JPEG and
JPEG2000 image compression systems use Huffman coding. The JPEG 2000
coding system uses the wavelet transform, which decomposes the image
into different levels, where the coefficients in each subband are
uncorrelated with those of
other subbands. Embedded zerotree wavelet (EZW) coding exploits
the multi-resolution properties of the wavelet transform to give a
computationally simple algorithm with better performance compared
to existing wavelet coders. For further improvement of
compression, other coding methods have recently been
suggested; an ANN-based approach is one such method. Artificial
neural networks have been applied to many problems in image
processing and have demonstrated their superiority over classical
methods when dealing with noisy or incomplete data in image
compression applications. A performance analysis of different
images is presented, analyzing the EZW coding system combined with
the error backpropagation algorithm. The implementation and analysis
show approximately 30% higher accuracy in the retrieved image
compared to the existing EZW coding system.
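The backpropagation idea can be sketched with a minimal one-hidden-layer autoencoder: each 4x4 image block (16 values) is squeezed to 4 hidden activations, roughly a 4:1 code, and reconstructed. This is a generic illustration of ANN-based block compression, not the paper's EZW-plus-backpropagation system, and the training image is synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)

def blocks_from(img, b=4):
    """Split a (H, W) image into flattened non-overlapping b x b blocks."""
    h, w = img.shape
    return (img.reshape(h // b, b, w // b, b)
               .swapaxes(1, 2).reshape(-1, b * b))

# Synthetic smooth-gradient image in [0, 1]
img = np.clip(rng.normal(0.5, 0.2, (32, 32)).cumsum(axis=1) / 16, 0, 1)
X = blocks_from(img)                       # (64, 16) training blocks

n_in, n_hid = 16, 4                        # 16 -> 4: the compressed code
W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, n_in)); b2 = np.zeros(n_in)
lr = 0.1

for _ in range(2000):                      # plain backprop on squared error
    H = np.tanh(X @ W1 + b1)               # encoder
    Y = H @ W2 + b2                        # linear decoder
    err = Y - X
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H**2)         # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - X) ** 2).mean())
print(f"reconstruction MSE at ~4:1 compression: {mse:.4f}")
```

Only the 4 hidden activations per block would be transmitted; the decoder half reconstructs the block at the receiver, and the reconstruction error against the block variance gives the accuracy figure being compared.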