Abstract: Fourier transform infrared (FT-IR) spectroscopic imaging
is an emerging technique that provides both chemically and
spatially resolved information. The rich chemical content of the data
may be utilized for computer-aided determinations of structure and
pathologic state (cancer diagnosis) in histological tissue sections for
prostate cancer. FT-IR spectroscopic imaging of prostate tissue has
shown that tissue type (histological) classification can be performed to
a high degree of accuracy [1] and cancer diagnosis can be performed
with an accuracy of about 80% [2] on a microscopic (≈ 6μm)
length scale. In performing these analyses, it has been observed
that there is large variability (more than 60%) between spectra from
different points on tissue that is expected to consist of the same
essential chemical constituents. Spectra at the edges of tissues are
characteristically and consistently different from chemically similar
tissue in the middle of the same sample. Here, we explain these
differences using a rigorous electromagnetic model for light-sample
interaction. Spectra from FT-IR spectroscopic imaging of chemically
heterogeneous samples are different from bulk spectra of individual
chemical constituents of the sample. This is because spectra not
only depend on chemistry, but also on the shape of the sample.
Using coupled wave analysis, we characterize and quantify the nature
of spectral distortions at the edges of tissues. Furthermore, we
present a method of performing histological classification of tissue
samples. Since the mid-infrared spectrum is typically assumed to
be a quantitative measure of chemical composition, classification
results can vary widely due to spectral distortions. However, we
demonstrate that the selection of localized metrics based on chemical
information can make our data robust to the spectral distortions
caused by scattering at the tissue boundary.
Abstract: A cross-sectional survey design was used to collect data from 370 diabetic patients. Two instruments were used to obtain the data: an in-depth interview guide and a researcher-developed questionnaire. Fisher's exact test was used to investigate the association between the identified factors and nonadherence. The factors identified were: socio-demographic factors such as gender, age, marital status, educational level, and occupation; psychosocial obstacles such as non-affordability of the prescribed diet, frustration due to the restrictions, limited spousal support, feelings of deprivation, the feeling that temptation is inevitable, difficulty in adhering at social gatherings, and difficulty in revealing to a host that one is diabetic; and health care provider obstacles, namely poor attitude of health workers, irregular diabetes education in clinics, a limited number of nutrition education sessions, inability of the patients to estimate the desired quantity of food, no reminder postcards or phone calls about upcoming patient appointments, and delayed start of appointments/time wasted in clinics.
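The association test used above can be sketched as follows, using only the standard library; the 2x2 counts in the example are invented for illustration and are not data from the 370 patients.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def pmf(x):  # hypergeometric P(X = x) under the null of no association
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = pmf(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # two-sided p-value: sum over all tables at least as extreme as observed
    return sum(pmf(x) for x in range(lo, hi + 1) if pmf(x) <= p_obs + 1e-12)

# Hypothetical counts: rows = gender, columns = (nonadherent, adherent).
p = fisher_exact_2x2(30, 70, 60, 210)
```

A small p-value would indicate an association between the factor (here, gender) and nonadherence, which is the logic applied to each identified factor in the study.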
Abstract: Lately, significant work in the area of Intelligent Manufacturing has become public, mainly applied within the frame of industrial purposes. Special efforts have been made in the implementation of new technologies and of management and control systems, among many others, which have all evolved the field. Aware of all this, and given the scope of new projects and the need to turn existing flexible ideas into more autonomous and intelligent ones, i.e., Intelligent Manufacturing, the present paper aims to contribute to the design and analysis of the material flow in systems, cells, or workstations under this new “intelligent” denomination. To this end, besides offering a conceptual basis for some of the key points to be taken into account and some general principles to consider in the design and analysis of the material flow, the paper also offers tips on how to define other possible alternative material-flow scenarios and a classification of the states a system, cell, or workstation can be in. All this is done with the intention of relating it to the use of simulation tools, which are briefly addressed with a special focus on the Witness simulation package. For better comprehension, the previous elements are supported by a detailed layout, other figures, and a few expressions which could help in obtaining the necessary data. Such data, among others, will be used in the future when simulating the scenarios in search of the best material-flow configurations.
Abstract: The study of a real function of two real variables can be supported by visualization using a Computer Algebra System (CAS). One type of constraint of such systems is due to the algorithms implemented, which yield continuous approximations of the given function by interpolation. This often masks discontinuities of the function and can produce strange plots that are not compatible with the mathematics. In recent years, point-based geometry has gained increasing attention as an alternative surface representation, both for efficient rendering and for flexible geometry processing of complex surfaces. In this paper we present different artifacts created by mesh surfaces near discontinuities and propose a point-based method that controls and reduces these artifacts. A least-squares penalty method for an automatic generation of the mesh that controls the behavior of the chosen function is presented. The special feature of this method is the ability to improve the accuracy of the surface visualization near a set of interior points where the function may be discontinuous. The method is formulated as a minimax problem, and the non-uniform mesh is generated using an iterative algorithm. Results show that, for large poorly conditioned matrices, the new algorithm gives more accurate results than the classical preconditioned conjugate gradient algorithm.
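A one-line illustration of the masking effect described above: linear interpolation between samples straddling a jump produces a value the function never takes, which is exactly how a mesh surface bridges a discontinuity.

```python
def lerp(x0, y0, x1, y1, x):
    """Linear interpolation, as a mesh renderer does between sample points."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# sign(x) jumps from -1 to 1 at x = 0; two samples straddle the jump.
# The interpolated value at 0 lies on neither branch of the function.
y_at_zero = lerp(-0.05, -1.0, 0.05, 1.0, 0.0)
```

The interpolated value is 0, although sign(x) only ever takes the values -1, 0 (at a single point), and 1; a mesh plot draws a spurious connecting wall through such values.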
Abstract: Based on Rayleigh beam theory, the sub-impacts of a free-free beam struck horizontally by a round-nosed rigid mass are simulated by the finite difference method together with impact-separation conditions. In order to obtain the sub-impact force, a uniaxial compression elastic-plastic contact model is employed to analyze the local deformation field in the contact zone. It is found that the horizontal impact is a complicated process consisting of a sequence of elastic-plastic sub-impacts, and that there are two sub-zones of sub-impact. In addition, it is found that, in terms of the elastic energy of the free-free beam, the Poisson collision hypothesis is the more suitable one for explaining the compression and recovery processes.
Abstract: Evaluation and survey of curriculum quality, as one of the most important components of the university system, is necessary at different levels of higher education. The main purpose of this study was to survey the curriculum quality of the Actuarial Science field. Case: Shahid Beheshti University and the Higher Education Institute of Eco Insurance (according to the viewpoints of students, alumni, employers, and faculty members). Descriptive statistics (mean, tables, percentage, and frequency distribution) and inferential statistics (chi-square) were used to analyze the data. Six criteria were considered for the quality of the curriculum: objectives, content, teaching and learning methods, space and facilities, time, and assessment of learning. The content, teaching and learning methods, space and facilities, and assessment of learning criteria were at a relatively desirable level, while the objectives and time criteria were at a desirable level. Overall, the quality of the curriculum of the Actuarial Science field was at a relatively desirable level.
Abstract: An optimal power flow (OPF) based on particle swarm optimization (PSO) was developed with a more realistic generator security constraint, using the capability curve instead of only Pmin/Pmax and Qmin/Qmax. A neural network (NN) was used in designing the digital capability curve and the security check algorithm. The algorithm is very simple and flexible, especially for representing the nonlinear generation operating limits near the steady-state stability limit and in the under-excitation operating area. In an effort to avoid a locally optimal power flow solution, the particle swarm optimization was implemented with a sufficiently widespread initial population. The objective function used in the optimization process is the electricity production cost, which is dominated by fuel cost. The proposed method was applied to the Java-Bali 500 kV power system, which contains 7 generators and 20 buses. The simulation results show that the combination of generator power outputs produced by the proposed method was more economical than the result obtained using the conventional constraints, though it operated at a more marginal operating point.
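As an illustration of the optimization engine, a minimal PSO sketch is given below. The two-generator quadratic fuel-cost function, the 300 MW demand, the penalty weight, and all limits are invented for illustration and are not the Java-Bali system data; the capability-curve security check is approximated here by simple box limits.

```python
import random

def pso_min(cost, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization over a box [lo, hi] per dimension."""
    rng = random.Random(seed)
    dim = len(lo)
    xs = [[rng.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_f = [cost(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                # clamp to the (box-approximated) security limits
                xs[i][d] = min(max(xs[i][d] + vs[i][d], lo[d]), hi[d])
            f = cost(xs[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = xs[i][:], f
    return gbest, gbest_f

# Hypothetical quadratic fuel costs for two generators meeting 300 MW of
# demand, with a quadratic penalty enforcing the power balance.
def fuel_cost(p):
    balance = (p[0] + p[1] - 300.0) ** 2
    return 0.004 * p[0] ** 2 + 2.0 * p[0] + 0.006 * p[1] ** 2 + 1.5 * p[1] + 100.0 * balance
```

With these coefficients, equal incremental costs place the optimum near (155 MW, 145 MW); a widespread initial population, as the abstract notes, reduces the risk of converging to a local optimum.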
Abstract: A systems-approach model for prostate cancer in the prostate duct, as a sub-system of the organism, is developed. This is accomplished in two steps. First, this research work starts with a nonlinear system of coupled Fokker-Planck equations which models continuous processes of the system, such as the motion of cells. The model is then extended to PDEs that include discontinuous processes, such as cell mutations, proliferation, and death, which are modeled using Poisson intensity processes. The model incorporates the features of the prostate duct: the spatial coordinate of the system of PDEs runs along the proximal-distal axis, and its parameters depend on features of the duct. The movement of cells is biased towards the distal region, and mutation of prostate cancer cells is localized in the proximal region. Numerical solutions of the full system of equations are provided and exhibit traveling wave front phenomena. This motivates the use of the standard transformation to derive a canonically related system of ODEs for traveling wave solutions. The results obtained show persistence of prostate cancer, in that the non-negative cone for the traveling wave system is time invariant. It is also proved that the traveling waves have a unique global attractor. Biologically, the global attractor verifies that the evolution of prostate cancer stem cells exhibits avascular tumor growth. The numerical solutions show that altering prostate stem cell movement or mutation of prostate cancer cells leads to an avascular tumor. The paper concludes with comments on the clinical implications of the model.
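The traveling-wave reduction mentioned above follows the standard substitution; as a generic illustration (the symbols D, c, and f are chosen here and are not the paper's), a reaction-diffusion component of the form u_t = D u_xx + f(u) reduces via

```latex
u(x,t) = U(z), \qquad z = x - ct
\quad\Longrightarrow\quad
D\,U''(z) + c\,U'(z) + f(U(z)) = 0 ,
```

so each PDE component becomes a second-order ODE in z, and writing each such equation as a first-order system in (U, U') yields the canonically related ODE system whose bounded non-negative orbits correspond to the traveling waves.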
Abstract: In this paper, the electrical characteristics of various kinds of multiple-gate silicon nanowire transistors (SNWT) with a channel length of 7 nm are compared. A fully ballistic quantum mechanical transport approach based on NEGF was employed to analyze the electrical characteristics of rectangular and cylindrical silicon nanowire transistors as well as a double-gate MOSFET. Double-gate, triple-gate, and gate-all-around nanowires were studied to investigate the impact of increasing the number of gates on the control of short channel effects, which is important in nanoscale devices. In addition, in the case of the triple-gate rectangular SNWT, inserting extra gates on the bottom of the device can improve its performance. The results indicate that, by using gate-all-around structures, short channel effects such as DIBL, subthreshold swing, and delay are reduced.
Abstract: To evaluate the ability to predict xerostomia after
radiotherapy, we constructed and compared neural network and
logistic regression models. In this study, 61 patients who completed a
questionnaire about their quality of life (QoL) before and after a full
course of radiation therapy were included. Based on this questionnaire, statistical data about the condition of the patients’ salivary glands were obtained, and these data were used as inputs to the neural network and logistic regression models in order to predict the probability of xerostomia. Seven variables were then selected from
the statistical data according to Cramer’s V and point-biserial
correlation values and were trained by each model to obtain the
respective outputs which were 0.88 and 0.89 for AUC, 9.20 and 7.65
for SSE, and 13.7% and 19.0% for MAPE, respectively. These
parameters demonstrate that both neural network and logistic
regression methods are effective for predicting conditions of parotid
glands.
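A sketch of the logistic-regression half of such a comparison, with the AUC computed in rank form; the toy one-feature data below stand in for the seven questionnaire-derived variables, and plain gradient descent replaces whatever fitting procedure the study actually used.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Logistic regression fitted by plain gradient descent on the log-loss."""
    X1 = np.hstack([np.ones((len(X), 1)), X])   # add intercept column
    w = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))
        w -= lr * X1.T @ (p - y) / len(y)       # gradient of the log-loss
    return w

def predict_proba(w, X):
    X1 = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-X1 @ w))

def auc(y, scores):
    """AUC as the probability that a positive outranks a negative (rank form,
    assuming no tied scores)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = y.sum()
    n_neg = len(y) - n_pos
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

On real data one would also compute SSE and MAPE on the predicted probabilities to obtain the kind of model comparison reported above.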
Abstract: The paper presents the optimization problem for the
multi-element synthetic transmit aperture method (MSTA) in
ultrasound imaging applications. The optimal choice of the transmit aperture size is made as a trade-off between the lateral
resolution, penetration depth and the frame rate. Results of the
analysis obtained by a developed optimization algorithm are
presented. Maximum penetration depth and the best lateral resolution
at given depths are chosen as the optimization criteria. The
optimization algorithm was tested using synthetic aperture data of point reflectors simulated by the Field II program for Matlab® for the case of a 5 MHz 128-element linear transducer array with 0.48 mm pitch. The visualization of experimentally obtained synthetic aperture data of a tissue-mimicking phantom and in vitro measurements of a beef liver are also shown. The data were acquired using the SonixTOUCH Research system equipped with a linear 4 MHz 128-element transducer with 0.3 mm element pitch, 0.28 mm element width, and 70% fractional bandwidth, excited by a one-cycle sine burst at the transducer's center frequency.
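For context, the trade-off the optimization balances can be seen from the textbook far-field estimate of lateral resolution (an approximation, not necessarily the exact criterion implemented by the authors): with sound speed c, center frequency f_0, imaging depth z, and active aperture D,

```latex
\delta_{\mathrm{lat}} \;\approx\; \frac{\lambda\, z}{D},
\qquad \lambda = \frac{c}{f_0} .
```

For example, at f_0 = 5 MHz and c ≈ 1540 m/s, λ ≈ 0.31 mm: enlarging the transmit aperture sharpens the lateral resolution at a given depth, while the aperture size also affects penetration and the achievable frame rate, which is the trade-off quantified by the optimization algorithm.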
Abstract: Many computational techniques have been applied to the solution of the heat conduction problem, among them the finite difference (FD), finite element (FE), and, more recently, meshless methods. FE is commonly used to solve the heat conduction equation, based on the assembly of the element stiffness matrices and the solution of the final system of equations. Because of this assembly process, the convergence rate is decreased. Hence, in the present paper a Cellular Automata (CA) approach is presented for the solution of the heat conduction problem. Each cell is considered as a fixed point in a regular grid, so that the solution of one large system of equations is replaced by discrete systems of equations of small dimension. Results show that CA can be used for the solution of the heat conduction problem.
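A minimal sketch of the cell-local update idea for the simplest case, 1D steady-state conduction with fixed end temperatures; the neighbour-averaging rule below is one standard choice (the discrete Laplace equation), not necessarily the paper's exact CA rule.

```python
import numpy as np

def ca_heat_1d(t_left, t_right, n_cells=11, iters=5000):
    """Cellular-automaton-style relaxation for 1D steady heat conduction:
    each interior cell repeatedly replaces its temperature by the average
    of its two neighbours; the end cells hold fixed temperatures."""
    T = np.zeros(n_cells)
    T[0], T[-1] = t_left, t_right
    for _ in range(iters):
        T[1:-1] = 0.5 * (T[:-2] + T[2:])  # purely local neighbour rule
    return T

T = ca_heat_1d(100.0, 0.0)
```

No global stiffness matrix is ever assembled: every cell solves its own tiny local equation, and the iteration converges to the familiar linear steady-state temperature profile.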
Abstract: Camera calibration is an indispensable step for augmented
reality or image guided applications where quantitative information
should be derived from the images. Usually, a camera
calibration is obtained by taking images of a special calibration object
and extracting the image coordinates of projected calibration marks
enabling the calculation of the projection from the 3D world coordinates to the 2D image coordinates. Such a procedure thus exhibits typical steps, including feature point localization in the acquired images, camera model fitting, correction of the distortion introduced by the optics, and finally an optimization of the model's parameters. In this paper we propose to extend this list by a further step concerning the identification of the optimal subset of images yielding the smallest overall calibration error. To this end, we present a Monte Carlo based algorithm along with a deterministic extension that automatically determines the images yielding an optimal calibration. Finally, we present results proving that the calibration can be significantly improved by automated image selection.
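The Monte Carlo selection step can be sketched as follows; the per-image error table and the mock error function below stand in for a real calibration routine that computes the overall reprojection error of each candidate subset.

```python
import random

def monte_carlo_select(images, calib_error, subset_size, trials=500, seed=0):
    """Randomly sample image subsets and keep the one with the smallest
    calibration error. `calib_error` is a stand-in for a real calibration
    routine returning the overall reprojection error of a subset."""
    rng = random.Random(seed)
    best_subset, best_err = None, float("inf")
    for _ in range(trials):
        subset = rng.sample(images, subset_size)
        err = calib_error(subset)
        if err < best_err:
            best_subset, best_err = subset, err
    return best_subset, best_err

# Toy stand-in: pretend each image contributes a fixed per-image error,
# so the subset error is just the mean of its images' errors.
per_image_err = {0: 0.2, 1: 1.5, 2: 0.1, 3: 0.9, 4: 0.3, 5: 2.0}
error = lambda subset: sum(per_image_err[i] for i in subset) / len(subset)
subset, err = monte_carlo_select(list(per_image_err), error, 3)
```

A deterministic extension in the spirit of the abstract could, for instance, greedily drop the image whose removal most reduces the error instead of sampling subsets at random.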
Abstract: We present a new method to reconstruct a temporally
coherent 3D animation from single or multi-view RGB-D video data
using unbiased feature point sampling. Given RGB-D video data, in
form of a 3D point cloud sequence, our method first extracts feature
points using both color and depth information. In the subsequent
steps, these feature points are used to match two 3D point clouds in
consecutive frames, independent of their resolution. Our new motion-vector-based dynamic alignment method then reconstructs a spatio-temporally coherent 3D animation. We perform extensive quantitative validation using novel error functions to analyze the results. We show that, despite the limiting factors of the temporal and spatial noise associated with RGB-D data, it is possible to exploit temporal coherence to faithfully reconstruct a temporally coherent 3D animation from RGB-D video data.
Abstract: Quality control charts indicate out of control
conditions if any nonrandom pattern of the points is observed or any
point is plotted beyond the control limits. Nonrandom patterns of
Shewhart control charts are tested with sensitizing rules. When the
processes are defined with fuzzy set theory, traditional sensitizing
rules are insufficient for defining all out of control conditions. This is
due to the fact that fuzzy numbers increase the number of out of
control conditions. The purpose of the study is to develop a set of
fuzzy sensitizing rules, which increase the flexibility and sensitivity
of fuzzy control charts. Fuzzy sensitizing rules simplify the identification of out-of-control situations, resulting in a decrease in the calculation time and the number of evaluations in the fuzzy control chart approach.
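For reference, a crisp sensitizing-rule check of the kind the paper generalizes; the particular rules below (a point beyond the 3-sigma limits, and a run of 9 points on one side of the center line) are classical Western Electric style rules, not the fuzzy rules proposed here.

```python
def out_of_control(points, center, sigma, run_length=9):
    """Crisp Shewhart check: signal if any point lies beyond the 3-sigma
    limits, or if `run_length` consecutive points fall on the same side
    of the center line (a nonrandom pattern)."""
    if any(abs(p - center) > 3 * sigma for p in points):
        return True
    run, last_side = 0, 0
    for p in points:
        side = (p > center) - (p < center)   # +1 above, -1 below, 0 on line
        run = run + 1 if side == last_side and side != 0 else (1 if side else 0)
        last_side = side
        if run >= run_length:
            return True
    return False
```

When the plotted statistics are fuzzy numbers rather than crisp values, a point can be partially above and partially below the center line at once, which is why the number of out-of-control conditions grows and crisp rules like this one become insufficient.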
Abstract: Biometric techniques are gaining importance for
personal authentication and identification as compared to the
traditional authentication methods. Biometric templates are vulnerable to a variety of attacks due to their inherent nature. When a person's biometric is compromised, his identity is lost. In contrast to a password, a biometric is not revocable. Therefore, providing security to the stored biometric template is very crucial. Crypto-biometric systems are authentication systems which blend the ideas of cryptography and biometrics. The fuzzy vault is a proven crypto-biometric construct which is used to secure biometric templates. However, the fuzzy vault suffers from certain limitations, such as non-revocability and cross-matching, and its security is affected by the non-uniform nature of the biometric data. A fuzzy vault hardened with a password overcomes these limitations: the password provides an additional layer of security and enhances user privacy.
Retina has certain advantages over other biometric traits. Retinal
scans are used in high-end security applications like access control to
areas or rooms in military installations, power plants, and other high
risk security areas. This work applies the idea of the fuzzy vault to the retinal biometric template. Multimodal biometric systems perform well compared to single-modal biometric systems. The proposed multimodal biometric fuzzy vault includes combined feature points from the retina and a fingerprint; the combined vault is hardened with a user password to achieve a high level of security. The security of the combined vault is measured using min-entropy. The proposed password-hardened multi-biometric fuzzy vault is robust against stored biometric template attacks.
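For readers unfamiliar with the construct, the following sketch shows the classical Juels-Sudan fuzzy vault locking/unlocking idea over a toy field. The prime P = 97, the feature values, and exact-match unlocking are simplifications: real vaults use large fields, quantized biometric features, and error-correcting decoding, and the password hardening described above is not shown.

```python
import random

P = 97  # small prime field for illustration only

def poly_eval(coeffs, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def lock(secret_coeffs, genuine, n_chaff=20, seed=1):
    """Bind the secret polynomial to genuine feature points, hidden in chaff."""
    vault = [(x, poly_eval(secret_coeffs, x)) for x in genuine]
    rng = random.Random(seed)
    used = set(genuine)
    while len(vault) < len(genuine) + n_chaff:
        x, y = rng.randrange(P), rng.randrange(P)
        if x not in used and y != poly_eval(secret_coeffs, x):
            used.add(x)
            vault.append((x, y))      # chaff point lying off the polynomial
    rng.shuffle(vault)
    return vault

def unlock(vault, query):
    """Interpolate the polynomial through vault points matching the query
    features (exact matching here; real schemes tolerate errors)."""
    pts = [(x, y) for (x, y) in vault if x in set(query)]
    k = len(pts)
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(pts):
        basis, denom = [1], 1         # i-th Lagrange basis, built incrementally
        for j, (xj, _) in enumerate(pts):
            if j != i:
                denom = denom * (xi - xj) % P
                new = [0] * (len(basis) + 1)
                for d, b in enumerate(basis):   # multiply basis by (x - xj)
                    new[d] = (new[d] - xj * b) % P
                    new[d + 1] = (new[d + 1] + b) % P
                basis = new
        scale = yi * pow(denom, P - 2, P) % P   # Fermat inverse mod P
        coeffs = [(c + scale * b) % P for c, b in zip(coeffs, basis)]
    return coeffs
```

Only a query that supplies enough genuine x-values recovers the secret coefficients; an attacker faced with the shuffled vault cannot tell genuine points from chaff, which is the source of the scheme's security.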
Abstract: The C control chart assumes that process nonconformities follow a Poisson distribution. In actuality, however, this Poisson distribution does not always hold. A process control scheme for semiconductor manufacturing based on a Poisson distribution then always underestimates the true average number of nonconformities and the process variance. Quality is described more accurately if a compound Poisson process is used for process control in this case. A cumulative sum (CUSUM) control chart is much better than a C control chart when a small shift is to be detected. This study calculates one-sided CUSUM ARLs using a Markov chain approach to construct a CUSUM control chart with an underlying Poisson-Gamma compound distribution for the failure mechanism. Moreover, an actual data set from a wafer plant is used to demonstrate the operation of the proposed model. The results show that the CUSUM control chart achieves significantly better performance than the EWMA chart.
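The one-sided CUSUM statistic underlying the ARL analysis can be sketched as follows; the reference value k and decision interval h below are generic tuning parameters, not values from the wafer-plant study.

```python
def cusum_upper(xs, target, k, h):
    """One-sided (upper) CUSUM: C_i = max(0, C_{i-1} + x_i - target - k).
    Returns the index of the first sample where C_i exceeds the decision
    interval h, or None if the sequence stays in control."""
    c = 0.0
    for i, x in enumerate(xs):
        c = max(0.0, c + x - target - k)
        if c > h:
            return i
    return None
```

The ARL is the expected value of this signal index under a given shift; the Markov chain approach mentioned above computes it by discretizing the statistic C into states and analyzing the resulting transition matrix.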
Abstract: This paper introduces new algorithms (Fuzzy relative
of the CLARANS algorithm FCLARANS and Fuzzy c Medoids
based on randomized search FCMRANS) for fuzzy clustering of
relational data. Unlike the existing fuzzy c-medoids algorithm (FCMdd), in which the within-cluster dissimilarity of each cluster is minimized in each iteration by recomputing new medoids given the current memberships, FCLARANS minimizes the same objective function minimized by FCMdd by changing the current medoids in such a way that the sum of the within-cluster dissimilarities is minimized. Computing new medoids may be affected by noise, because outliers may enter the computation of the medoids, whereas the choice of medoids in FCLARANS is dictated by the location of a predominant fraction of points inside a cluster and is therefore less sensitive to the presence of outliers. In FCMRANS, the step of computing new medoids in FCMdd is modified to be based on randomized search. Furthermore, a new initialization procedure is developed that adds randomness to the initialization procedure used with FCMdd. Both
FCLARANS and FCMRANS are compared with the robust and
linearized version of fuzzy c-medoids (RFCMdd). Experimental
results with different samples of the Reuters-21578 and 20 Newsgroups (20NG) collections and generated datasets with noise show that FCLARANS is
more robust than both RFCMdd and FCMRANS. Finally, both
FCMRANS and FCLARANS are more efficient and their outputs
are almost the same as that of RFCMdd in terms of classification
rate.
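A crisp (non-fuzzy) sketch of the randomized medoid-swap idea that FCLARANS builds on; the distance function, the trial budget, and the absence of fuzzy memberships are all simplifications relative to the paper's algorithms.

```python
import random

def randomized_medoid_search(dist, n_points, k, trials=200, seed=0):
    """CLARANS-flavoured crisp sketch: repeatedly try swapping a current
    medoid for a random non-medoid, keeping the swap whenever it lowers
    the total within-cluster dissimilarity."""
    rng = random.Random(seed)
    cost = lambda meds: sum(min(dist(i, m) for m in meds) for i in range(n_points))
    medoids = rng.sample(range(n_points), k)
    best = cost(medoids)
    for _ in range(trials):
        pos, cand = rng.randrange(k), rng.randrange(n_points)
        if cand in medoids:
            continue
        trial = medoids[:pos] + [cand] + medoids[pos + 1:]
        if cost(trial) < best:
            medoids, best = trial, cost(trial)
    return sorted(medoids), best

# Toy 1D data with two well-separated groups (indices 0-2 and 3-5).
pts = [0.0, 1.0, 2.0, 10.0, 11.0, 12.0]
dist = lambda i, j: abs(pts[i] - pts[j])
medoids, best = randomized_medoid_search(dist, len(pts), 2)
```

Because the medoids are always existing data points chosen by where most of a cluster's points lie, an outlier cannot drag a medoid away the way it drags a recomputed mean or weighted medoid, which is the robustness argument made above.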
Abstract: Dengue fever is an important human arboviral disease, and outbreaks are now reported quite often from many parts of the world. The number of cases involving pregnant women and infants is increasing every year. The illness is often severe, and complications may occur; deaths often occur because of difficulties in early diagnosis and improper management of the disease. Dengue antibodies from pregnant women are passed on to infants, protecting the infants from dengue infection; the antibodies are transferred from the mother to the fetus while it is still in the womb. In this study, we formulate a mathematical model to describe the transmission of this disease in pregnant women. The model is formulated by dividing the human population into pregnant women and non-pregnant humans (men and non-pregnant women). Each class is subdivided into susceptible (S), infectious (I), and recovered (R) subclasses. We apply standard dynamical analysis to our model: conditions for the local stability of the equilibrium points are given, numerical simulations are shown, and the bifurcation diagrams of the model are discussed. The control of this disease in pregnant women is discussed in terms of the threshold conditions.
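As a minimal sketch of the compartment dynamics, here is a single-group SIR model with forward-Euler stepping and assumed rates; the paper's model additionally splits the hosts into pregnant and non-pregnant classes and couples them to the mosquito vector.

```python
# Generic single-group SIR step (illustrative rates, not the paper's):
# S loses individuals to infection, I gains them and loses recoveries,
# R collects the recovered, so the total population is conserved.
def sir_step(s, i, r, beta, gamma, dt):
    n = s + i + r
    ds = -beta * s * i / n
    di = beta * s * i / n - gamma * i
    dr = gamma * i
    return s + ds * dt, i + di * dt, r + dr * dt

s, i, r = 990.0, 10.0, 0.0
for _ in range(1000):
    s, i, r = sir_step(s, i, r, beta=0.3, gamma=0.1, dt=0.1)
```

The threshold behavior mentioned above appears here through the basic reproduction number beta/gamma: the epidemic grows when it exceeds 1 and dies out otherwise.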
Abstract: In this article, a method is offered to classify normal and defective tiles using the wavelet transform and artificial neural networks. The proposed algorithm calculates the max and min medians as well as the standard deviation and average of the detail images obtained from the wavelet filters, forms feature vectors from them, and attempts to classify the given tile using a perceptron neural network with a single hidden layer. In this study, along with the proposal of using the median of optimum points as the basic feature and its comparison with the rest of the statistical features in the wavelet domain, the relative advantages of the Haar wavelet are investigated. The method has been tested on a number of various tile designs and, on average, it has been valid for over 90% of the cases. Among its other advantages, high speed and low computational load are prominent.
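A sketch of the feature-extraction step with a one-level Haar decomposition; the subband naming and the particular statistics per detail image are assumptions patterned on the description above, not the authors' exact feature set.

```python
import numpy as np

def haar2d_level1(img):
    """One-level 2D Haar decomposition (unnormalised averaging/differencing).
    Returns the approximation and the three detail sub-images."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-pair average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-pair difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def detail_features(img):
    """Per-subband statistics over the detail images, as a stand-in for the
    medians/standard deviation/average features described above."""
    feats = []
    for band in haar2d_level1(img)[1:]:
        feats += [float(np.median(band)), float(band.std()), float(band.mean())]
    return feats
```

A perfectly uniform tile produces all-zero detail statistics, while surface defects show up as energy in the detail subbands, which is what makes such features suitable inputs for the classifier.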