Abstract: Using 1 km grid datasets of monthly mean precipitation, monthly mean temperature, and dry matter production (DMP), we assessed the regional plant production capacity of Southeast and South Asia, and employed pixel-by-pixel correlation analysis to assess the strength of the relationship between climate factors and plant production. While annual DMP in South Asia was generally below 2,000 kg, it exceeded 2,500 - 3,000 kg over most of Southeast Asia, suggesting that plant production in Southeast Asia is superior to that in South Asia. However, Rain-Use Efficiency (RUE), the dry matter production per 1 mm of precipitation, was higher in the interior of the Indochina Peninsula and in India than in the islands of Southeast Asia. The correlation analysis between climate factors and DMP showed that most of the Indochina Peninsula had negative correlation coefficients between DMP and both precipitation and temperature; the Malay Peninsula and the islands showed negative correlations with precipitation and positive correlations with temperature; and most of India, which dominates South Asia, showed positive correlations with precipitation and negative correlations with temperature. In addition, areas where the correlation coefficient exceeded |0.8| were regarded as "susceptible" to climate factors, and areas below |0.2| as "insusceptible". Following this discrimination, a map implying the expected impacts of climate change was produced.
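As a schematic illustration of the two per-pixel quantities described above, the following sketch computes RUE and the DMP-precipitation correlation on synthetic arrays; the array shapes and units are illustrative assumptions, not the paper's actual 1 km data set.

```python
# Per-pixel RUE and DMP-climate correlation on synthetic annual grids.
import numpy as np

years, h, w = 10, 400, 400
dmp    = np.random.rand(years, h, w) * 3000   # annual DMP (synthetic)
precip = np.random.rand(years, h, w) * 2000   # annual precipitation, mm (synthetic)

# RUE: dry matter produced per 1 mm of precipitation, per pixel.
rue = dmp.mean(axis=0) / precip.mean(axis=0)

# Pixel-by-pixel Pearson correlation between DMP and precipitation over years.
d = dmp - dmp.mean(axis=0)
p = precip - precip.mean(axis=0)
r = (d * p).sum(axis=0) / np.sqrt((d ** 2).sum(axis=0) * (p ** 2).sum(axis=0))

# The paper's susceptibility classes: |r| > 0.8 "susceptible", |r| < 0.2 "insusceptible".
susceptible   = np.abs(r) > 0.8
insusceptible = np.abs(r) < 0.2
```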
Abstract: This paper gives an introduction to Web mining, describes Web structure mining in detail, and explores the data structure used by the Web. It also surveys different PageRank-based algorithms and compares them for information retrieval. The basics of Web mining and the Web mining categories are explained. PageRank-based algorithms such as PageRank (PR), Weighted PageRank (WPR), HITS (Hyperlink-Induced Topic Search), DistanceRank, and DirichletRank are discussed and compared. PageRank values are calculated for the PageRank and Weighted PageRank algorithms on a given hyperlink structure. A simulation program is developed for the PageRank algorithm, because PageRank is the only ranking algorithm implemented in the search engine (Google). The outputs are shown in table and chart format.
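Since the abstract centers on PageRank, a minimal sketch of the basic PageRank iteration may help; the damping factor 0.85 and the three-page hyperlink structure are illustrative assumptions, and this is the original Page-Brin recurrence, not the paper's simulation program.

```python
# Iterative PageRank: PR(u) = (1 - d) + d * sum(PR(v) / L(v)) over pages v
# linking to u, where L(v) is v's outlink count.
def pagerank(adj, d=0.85, tol=1e-8, max_iter=100):
    """`adj` maps each page to its outlink list; every page must have outlinks."""
    pages = list(adj)
    pr = {p: 1.0 for p in pages}
    for _ in range(max_iter):
        new = {p: (1 - d) + d * sum(pr[q] / len(adj[q])
                                    for q in pages if p in adj[q])
               for p in pages}
        if max(abs(new[p] - pr[p]) for p in pages) < tol:
            return new
        pr = new
    return pr

# Toy hyperlink structure: A -> B, C;  B -> C;  C -> A.
print(pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]}))
```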
Abstract: In this paper, Principal Component Analysis (PCA) and Kernel Principal Component Analysis (KPCA) are used together with Elman neural network and Support Vector Machine (SVM) classifiers to classify the face images of the ORL database. The Elman network, a recurrent neural network, is proposed for modeling storage systems and is also used to examine the effect of the number of principal components on the classification accuracy and on the time needed to classify the database images. Classification is carried out with various numbers of components, and the results obtained with the Elman neural network are compared with those of the support vector machine. At best, 97.41% recognition accuracy is obtained.
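A minimal sketch of a PCA-plus-SVM pipeline on the ORL faces (distributed as the Olivetti faces in scikit-learn) is shown below; the component count, kernel, and train/test split are assumptions, and the paper's Elman-network branch and KPCA variant are not reproduced.

```python
# PCA feature extraction followed by an SVM classifier on the ORL/Olivetti faces.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

faces = fetch_olivetti_faces()
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

# Vary n_components to study its effect on accuracy, as the paper does.
model = make_pipeline(PCA(n_components=50, whiten=True), SVC(kernel="rbf"))
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```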
Abstract: We report the results of a pilot study in which a data-mining tool was developed for mining audiology records. The records were heterogeneous in that they contained numeric, categorical, and textual data. The tools are designed to detect associations between any field in the records and any other field. The techniques employed were the statistical chi-squared test and self-organizing maps, an unsupervised neural learning approach.
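For the chi-squared side of such an analysis, a toy sketch of testing the association between two record fields follows; the field semantics and counts are invented for illustration.

```python
# Chi-squared test of independence between two categorical record fields.
from scipy.stats import chi2_contingency

# Contingency table: rows = hearing-aid type, cols = reported outcome (invented).
table = [[30, 10],
         [20, 40]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")  # small p suggests association
```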
Abstract: Hazardous material transportation by road carries an inherent risk of accidents causing loss of life, grievous injuries, property losses, and environmental damage. The most common type of hazmat road accident is the release (78%) of hazardous substances, followed by fires (28%), explosions (14%), and vapour/gas clouds (6%).
The paper first discusses the probable 'impact zones' likely to be caused by one flammable chemical (LPG) and one toxic chemical (ethylene oxide) transported through a sizable segment of a State Highway connecting three notified industrial zones in Surat district in Western India, housing 26 MAH industrial units. Three 'hotspots' were identified along the highway segment depending on the traffic of each chemical and the population distribution within 500 meters on either side. Thermal radiation and explosion overpressure have been calculated for LPG and ethylene oxide BLEVE scenarios, along with a toxic release scenario for ethylene oxide. In addition, dispersion calculations for an ethylene oxide toxic release have been made for each 'hotspot' location, and the impact zones have been mapped for the LOC concentrations. Subsequently, the maximum initial isolation and protective zones were calculated based on the ERPG-3 and ERPG-2 values of ethylene oxide respectively, estimated for the worst-case scenario under worst weather conditions. The data analysis will help the local administration in capacity building with respect to rescue/evacuation and medical preparedness, and will provide quantitative inputs to augment the District Offsite Emergency Plan document.
Abstract: The amount of dissolved oxygen in a river has a strong direct effect on aquatic macroinvertebrates, and this in turn indirectly influences the regional ecosystem. This paper attempts to predict dissolved oxygen in rivers using a simple fuzzy-logic model, the Wang-Mendel method. The model uses only previous records to estimate upcoming values. For this purpose, daily and hourly records from eight stations in the Au Sable watershed in Michigan, United States, are employed over periods of 12 years and 50 days respectively. The calculations indicate that for long-period prediction it is better to increase the input intervals, whereas for filling missing data it is advisable to decrease the interval. Increasing the partitioning of the input and output features has little influence on accuracy but makes the model very time-consuming, and increasing the number of inputs acts like increasing the number of partitions. A large amount of training data does not substantially improve accuracy, so an optimal training length should be selected.
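A compact, illustrative sketch of the Wang-Mendel steps the paper relies on follows: partition each variable into overlapping triangular fuzzy sets, generate one rule per training pair, resolve conflicting rules by degree, and predict by weighted-average defuzzification. The data, variables, and partition counts here are invented, not the Au Sable records.

```python
import numpy as np

def memberships(x, centers):
    """Degrees of x in triangular fuzzy sets centered at evenly spaced `centers`."""
    width = centers[1] - centers[0]
    return np.maximum(0.0, 1.0 - np.abs(x - centers) / width)

def wang_mendel(X, y, in_centers, out_centers):
    """One rule per sample (best-matching sets); keep max-degree rule per antecedent."""
    rules = {}
    for xi, yi in zip(X, y):
        degs = [memberships(v, c) for v, c in zip(xi, in_centers)]
        ant = tuple(int(d.argmax()) for d in degs)
        out = memberships(yi, out_centers)
        degree = np.prod([d.max() for d in degs]) * out.max()
        if ant not in rules or rules[ant][1] < degree:
            rules[ant] = (int(out.argmax()), degree)
    return rules

def predict(x, rules, in_centers, out_centers):
    """Fire every rule on x; defuzzify by the weighted average of output centers."""
    num = den = 0.0
    for ant, (cons, _) in rules.items():
        w = np.prod([memberships(v, c)[a] for v, c, a in zip(x, in_centers, ant)])
        num, den = num + w * out_centers[cons], den + w
    return num / den if den else float("nan")

# Toy usage: predict dissolved oxygen from temperature and flow (synthetic data).
in_centers = [np.linspace(0, 30, 7), np.linspace(0, 100, 7)]   # temp, flow
out_centers = np.linspace(0, 14, 7)                            # DO, mg/L
X = np.random.rand(200, 2) * [30, 100]
y = 14 - 0.3 * X[:, 0] + 0.02 * X[:, 1]
rules = wang_mendel(X, y, in_centers, out_centers)
print(predict([15.0, 50.0], rules, in_centers, out_centers))
```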
Abstract: We propose a technique to identify road traffic congestion levels from the velocity of mobile sensors, with high accuracy and consistency with motorists' judgments. The data collection utilized a GPS device, a webcam, and an opinion survey. Human perceptions were used to rate traffic congestion into three levels: light, heavy, and jam. The ratings and velocities were then fed into a decision tree learning model (J48). We successfully extracted vehicle movement patterns to feed into the learning model using a sliding-window technique, and the parameters capturing the movement patterns and the window size were heuristically optimized. The model achieved accuracy as high as 99.68%. By implementing the model on existing traffic report systems, the reports will cover comprehensive areas. The proposed method can be applied to any part of the world.
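An illustrative sketch of the sliding-window feature extraction follows; the window size, step, and the particular movement-pattern statistics are assumptions (the paper optimized such parameters heuristically).

```python
# Sliding-window summary features over a GPS velocity series.
import numpy as np

def window_features(velocities, window=10, step=1):
    """Slide a fixed-size window over the series and emit per-window statistics."""
    feats = []
    for start in range(0, len(velocities) - window + 1, step):
        w = np.asarray(velocities[start:start + window])
        feats.append({
            "mean_v": w.mean(),                    # average speed in the window
            "std_v": w.std(),                      # speed variability
            "min_v": w.min(),                      # slowest point (jam indicator)
            "stop_ratio": float((w < 5).mean()),   # fraction of near-stops (< 5 km/h)
        })
    return feats

# Each feature row, paired with a human rating (light/heavy/jam), would then be
# fed to a decision-tree learner such as J48/C4.5.
print(window_features([42, 38, 5, 2, 0, 3, 18, 30, 33, 40, 45, 50], window=10)[0])
```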
Abstract: The third generation (3G) of cellular systems adopted spread spectrum as the solution for data transmission in the physical layer. In contrast to IS-95 or cdmaOne (spread-spectrum systems of the preceding generation), the new standard, called the Universal Mobile Telecommunications System (UMTS), uses long codes on the downlink. The system is designed for voice communication and data transmission. The downlink is particularly important because of the asymmetric demand for data, i.e., more is downloaded toward the mobiles than uploaded toward the base station. Moreover, on the downlink the UMTS uses orthogonal spreading with a variable spreading factor (OVSF, Orthogonal Variable Spreading Factor). This characteristic makes it possible to increase the data rate of one or more users by reducing their spreading factor without changing the spreading factors of the other users. In the current UMTS standard, two techniques have been proposed to increase downlink performance: transmit antenna diversity and space-time codes. These two techniques combat only fading. The receiver proposed for the mobile station is the RAKE, but one can imagine a more sophisticated receiver, able to reduce multiuser interference and the impact of coloured noise and narrowband interference. In this context, where the users have synchronized long codes with variable spreading factors and the mobile is unaware of the other active codes/users, the use of pseudo-noise code sequences of different lengths emerges as one of the most appropriate solutions.
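Since OVSF codes are central to the discussion, a minimal sketch of how the OVSF code tree is generated may be useful; this is the standard recursive construction, not the paper's receiver design.

```python
# OVSF code tree used on the UMTS downlink: each code c of spreading factor SF
# spawns two children of factor 2*SF, namely (c, c) and (c, -c). Codes on
# different branches stay orthogonal, which is why one user's spreading factor
# can be reduced without disturbing the other users.
import numpy as np

def ovsf_codes(sf):
    """All OVSF codes of spreading factor `sf` (a power of two)."""
    codes = [[1]]
    while len(codes[0]) < sf:
        codes = [child for c in codes
                 for child in (c + c, c + [-x for x in c])]
    return codes

C = np.array(ovsf_codes(8))
print(C @ C.T)  # = 8 * identity: the eight SF-8 codes are mutually orthogonal
```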
Abstract: Data Envelopment Analysis (DEA) is a methodology that computes efficiency values for decision making units (DMUs) in a given period by comparing their outputs with their inputs. In many cases, there is a time lag between the consumption of inputs and the production of outputs, and for a long-term research project the production lead-time phenomenon is hard to avoid. This time lag effect should be considered when evaluating the performance of organizations. This paper proposes a model for calculating efficiency values in performance evaluation problems with time lag. In the experimental part, the proposed methods are compared with the CCR model and an existing time lag model, using the data set of the 21st Century Frontier R&D Program, a long-term national R&D program of Korea.
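A minimal sketch of the classical input-oriented CCR model (the baseline the paper compares against), written as a linear program in its multiplier form, is given below; the two-input, one-output data set is invented, and the paper's time lag model is not reproduced.

```python
# CCR efficiency of DMU k:  max u.y_k  s.t.  v.x_k = 1,  u.y_j - v.x_j <= 0,  u, v >= 0.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0]])  # inputs,  one row per DMU
Y = np.array([[1.0], [1.0], [1.0]])                  # outputs, one row per DMU

def ccr_efficiency(k):
    n_out, n_in = Y.shape[1], X.shape[1]
    c = np.concatenate([-Y[k], np.zeros(n_in)])           # maximize u.y_k
    A_ub = np.hstack([Y, -X])                             # u.y_j - v.x_j <= 0
    b_ub = np.zeros(len(X))
    A_eq = np.concatenate([np.zeros(n_out), X[k]])[None]  # v.x_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun

for k in range(len(X)):
    print(f"DMU {k}: efficiency = {ccr_efficiency(k):.3f}")
```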
Abstract: This paper describes a text mining technique for automatically extracting association rules from collections of textual documents. The technique, called Extracting Association Rules from Text (EART), relies on keyword features to discover association rules among the keywords labeling the documents. The EART system ignores the order in which words occur, focusing instead on the words and their statistical distributions in documents. The main contributions of the technique are that it integrates XML technology with an Information Retrieval scheme (TFIDF) for keyword/feature selection, which automatically selects the most discriminative keywords for use in association rule generation, and that it uses data mining for association rule discovery. It consists of three phases: a text preprocessing phase (transformation, filtration, stemming, and indexing of the documents), an Association Rule Mining (ARM) phase (applying our algorithm for Generating Association Rules based on a Weighting scheme, GARW), and a visualization phase (visualization of the results). Experiments were applied to Web news documents related to the outbreak of the bird flu disease. The extracted association rules contain important features and describe the informative news included in the document collection. The performance of the EART system is compared with that of a system using the Apriori algorithm in terms of execution time and the quality of the extracted association rules.
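As a small illustration of the TF-IDF weighting EART uses for keyword selection, consider the following sketch; the toy documents are invented.

```python
# TF-IDF weight of a term in a document: term frequency times the log of the
# inverse document frequency across the collection.
import math
from collections import Counter

docs = [["bird", "flu", "outbreak", "asia"],
        ["flu", "vaccine", "trial"],
        ["bird", "migration", "asia"]]

def tfidf(term, doc, docs):
    tf = Counter(doc)[term] / len(doc)
    df = sum(term in d for d in docs)          # documents containing the term
    return tf * math.log(len(docs) / df)

for term in ("bird", "flu", "vaccine"):
    print(term, round(tfidf(term, docs[0], docs), 3))
```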
Abstract: Various problems may cause distortion of the rotor and hence vibration, which is the most severe form of damage to turbine rotors. Over many years, different techniques have been developed for straightening bent rotors. The straightening method can be selected according to initial information from preliminary inspections and tests, such as nondestructive tests, chemical analysis, and run-out tests, together with knowledge of the shaft material. This article covers the various causes of excessive bends, reviews some applicable common straightening methods, and finally selects hot spotting for a particular bent rotor. A 325 MW steam turbine rotor is modeled, and finite element analyses are performed to investigate this straightening process. The experimental data show that performing the hot-spot straightening process precisely reduced the bending of the rotor significantly.
Abstract: In this paper an algorithm is used to detect the color defects of ceramic tiles. First, the image of a normal tile is clustered using GCMA, the Genetic C-means Clustering Algorithm, which yields the best cluster centers. C-means is a common clustering algorithm that optimizes an objective function based on a distance measure between the data points and the cluster centers in the data space; here the objective function is the mean square error. After the best centers are found, each pixel of the image is assigned to the cluster with the closest center, and the maximum error of each cluster is computed: for each cluster, the maximum error is the largest distance between its center and the pixels belonging to it. All the pixels of a defective tile image are then clustered using the centers obtained from the normal tile image in the previous stage. Pixels whose distance from their cluster center exceeds the maximum error of that cluster are considered defective.
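The detection stage can be sketched as follows; plain k-means stands in here for the paper's genetic C-means (GCMA), and the pixel data are synthetic.

```python
# Cluster a reference (normal) tile's pixels, record each cluster's maximum
# error, then flag pixels of a test tile whose distance to their nearest
# center exceeds that threshold.
import numpy as np
from sklearn.cluster import KMeans

def fit_reference(pixels, k=4):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    dists = np.linalg.norm(pixels - km.cluster_centers_[km.labels_], axis=1)
    max_err = np.array([dists[km.labels_ == c].max() for c in range(k)])
    return km, max_err

def flag_defects(km, max_err, pixels):
    labels = km.predict(pixels)
    dists = np.linalg.norm(pixels - km.cluster_centers_[labels], axis=1)
    return dists > max_err[labels]   # True where a pixel is considered defective

# `normal` and `test` stand for (num_pixels, 3) arrays of RGB values.
normal, test = np.random.rand(1000, 3), np.random.rand(1000, 3)
km, max_err = fit_reference(normal)
print("defective pixels:", flag_defects(km, max_err, test).sum())
```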
Abstract: Background: The widespread use of chemotherapeutic drugs in cancer treatment has led to higher health hazards among the employees who handle and administer such drugs, so nurses should know how to protect themselves, their patients, and their work environment against the toxic effects of chemotherapy. The aim of this study was to examine the effect of a chemotherapy safety protocol for oncology nurses on their protective-measure practices. Design: A quasi-experimental research design was utilized. Setting: The study was carried out in the oncology department of Menoufia University Hospital and the Tanta oncology treatment center. Sample: A convenience sample of forty-five nurses in the Tanta oncology treatment center and eighteen nurses in the Menoufia oncology department. Tools: I: an interviewing questionnaire covering sociodemographic data and assessing the unit and the nurses' knowledge about chemotherapy. II: an observational checklist to assess nurses' actual practices in handling and administering chemotherapy. Baseline data were assessed before implementing the chemotherapy safety protocol; the protocol was then implemented, and after two months the nurses were assessed again. Results: 88.9% of study group I and 55.6% of study group II improved to good total knowledge scores after education on the safety protocol, and 95.6% of study group I and 88.9% of study group II had good total practice scores. Moreover, less than half of group I (44.4%) reported that a heavy workload was the main barrier, while the majority of group II (94.4%) reported many barriers to adhering to the safety protocol, such as not knowing the protocol, the heavy workload, and inadequate equipment. Conclusions: The safety protocol for oncology nurses appeared to have a positive effect on improving nurses' knowledge and practice. Recommendation: A chemotherapy safety protocol should be instituted for all oncology nurses working in any oncology unit or center to enhance compliance, and this protocol should be reinforced at frequent intervals.
Abstract: Independent component analysis (ICA) in the frequency domain is used for solving the problem of blind source separation (BSS). However, this method has some problems; for example, a general ICA algorithm cannot determine the permutation of the separated signals, which is important in frequency-domain ICA. In this paper, we propose an approach to solving this permutation problem. The idea is to effectively combine two conventional approaches, improving the signal separation performance by exploiting the features of each. We show simulation results using artificial data.
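One conventional cure for the permutation problem aligns the frequency bins by correlating the separated signals' amplitude envelopes; the following sketch illustrates that idea with an exhaustive per-bin search (practical only for small source counts), and is not the authors' combined method.

```python
# Greedy inter-frequency permutation alignment via envelope correlation.
import numpy as np
from itertools import permutations

def align_permutations(envelopes):
    """envelopes: (n_freqs, n_sources, n_frames) array of |separated signals|.
    Permute each bin to best match the previously aligned bin."""
    n_freqs, n_src, _ = envelopes.shape
    aligned = envelopes.copy()
    for f in range(1, n_freqs):
        best, best_score = None, -np.inf
        for perm in permutations(range(n_src)):
            score = sum(np.corrcoef(aligned[f - 1, s], aligned[f, perm[s]])[0, 1]
                        for s in range(n_src))
            if score > best_score:
                best, best_score = perm, score
        aligned[f] = aligned[f, list(best)]
    return aligned

env = np.abs(np.random.randn(5, 2, 100))  # 5 bins, 2 sources, 100 frames (toy)
print(align_permutations(env).shape)
```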
Abstract: We present a new method for the fully automatic 3D reconstruction of coronary artery centerlines, using two X-ray angiogram projection images from a single rotating monoplane acquisition system. During the first stage, the input images are smoothed using curve evolution techniques. Next, a simple yet efficient multiscale method based on the information of the Hessian matrix is introduced for the enhancement of the vascular structure. Hysteresis thresholding using different image quantiles is used to threshold the arteries. This stage is followed by a thinning procedure to extract the centerlines. The resulting skeleton image is then pruned using morphological and pattern recognition techniques to remove non-vessel-like structures. Finally, edge-based stereo correspondence is solved using a parallel evolutionary optimization method based on symbiosis. The detected 2D centerlines combined with disparity map information allow the reconstruction of the 3D vessel centerlines. The proposed method has been evaluated on patient data sets.
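The Hessian-based enhancement step can be illustrated with a single-scale, Frangi-style 2D filter; the parameter values below are assumptions, and the paper's full multiscale scheme is not reproduced.

```python
# Frangi-style vesselness from the eigenvalues of the image Hessian at one scale.
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness_2d(image, sigma=2.0, beta=0.5, c=15.0):
    Hxx = gaussian_filter(image, sigma, order=(0, 2))
    Hyy = gaussian_filter(image, sigma, order=(2, 0))
    Hxy = gaussian_filter(image, sigma, order=(1, 1))
    # Eigenvalues of the 2x2 Hessian, sorted so that |l1| <= |l2|.
    tmp = np.sqrt((Hxx - Hyy) ** 2 + 4 * Hxy ** 2)
    l1 = 0.5 * (Hxx + Hyy - tmp)
    l2 = 0.5 * (Hxx + Hyy + tmp)
    swap = np.abs(l1) > np.abs(l2)
    l1[swap], l2[swap] = l2[swap], l1[swap]
    rb = np.abs(l1) / (np.abs(l2) + 1e-10)   # blob-vs-line measure
    s = np.sqrt(l1 ** 2 + l2 ** 2)           # second-order structureness
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
    # Keeps bright tubular structures; invert the image first for dark vessels,
    # as in contrast-filled X-ray angiograms.
    return np.where(l2 < 0, v, 0.0)
```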
Abstract: IP multicasting is a key technology for many existing and emerging applications on the Internet. Furthermore, with the increasing popularity of wireless devices and mobile equipment, it is necessary to determine the best way to provide this service in a wireless environment. IETF Mobile IP, which provides mobility for hosts in IP networks, proposes two approaches for mobile multicasting: remote subscription (MIP-RS) and bi-directional tunneling (MIP-BT). In MIP-RS, a mobile host re-subscribes to its multicast groups each time it moves to a new foreign network; it suffers from serious packet losses while the mobile host's handoff occurs. In MIP-BT, mobile hosts send and receive multicast packets by way of their home agents (HAs), using Mobile IP tunnels; it therefore suffers from inefficient routing and wastage of system resources. In this paper, we propose a protocol called Mobile Multicast support using the Old Foreign Agent (MMOFA) for mobile hosts. MMOFA is derived from MIP-RS and, with the assistance of the mobile host's old foreign agent, routes the datagrams missed during handoff to the adjacent network via tunneling. We also studied the performance of the proposed protocol by simulation under ns-2.27. The results demonstrate that MMOFA has optimal routing efficiency and a lower delivery cost than the other approaches.
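A toy sketch of the MMOFA forwarding idea at the old foreign agent follows; all class and method names are invented for illustration, not taken from the paper or from ns-2.

```python
# On handoff, the old foreign agent keeps tunneling multicast datagrams to the
# mobile host's new care-of address until re-subscription (MIP-RS) takes effect.
class OldForeignAgent:
    def __init__(self):
        self.forwarding = {}          # mobile host id -> new care-of address

    def on_handoff(self, host_id, new_coa):
        """Mobile host moved away; start tunneling its multicast traffic."""
        self.forwarding[host_id] = new_coa

    def on_multicast_packet(self, host_id, packet):
        if host_id in self.forwarding:
            self.tunnel(packet, self.forwarding[host_id])

    def on_resubscribed(self, host_id):
        """New network joined the group; stop tunneling."""
        self.forwarding.pop(host_id, None)

    def tunnel(self, packet, coa):
        print(f"tunneling {packet} to {coa}")
```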
Abstract: The principle of frequency and amplitude measurement of a vibrating object in water using the ultrasonic speckle technique is presented in this paper. Compared with traditional techniques, the ultrasonic speckle technique can be applied to the non-contact vibration measurement of a non-metal object with a rough surface in water. The relationship between speckle movement and object movement was analyzed, and based on this study an ultrasonic speckle measurement system was set up. With this system, the frequency and amplitude of an underwater vibrating cantilever beam were measured. The results show that the experimental data are in good agreement with the calibration data.
Abstract: In this paper, to optimize the "Characteristic Straight Line Method" used in soil displacement analysis, a "best estimate" of the geodetic leveling observations has been achieved by taking into account the concept of height systems. This concept, and consequently the concept of "height", is discussed in detail. In landslide dynamic analysis, the soil is considered a mosaic of rigid blocks. The soil displacement has been monitored and analyzed using the "Characteristic Straight Line Method", whose characteristic components are defined and constructed from a "best estimate" of the topometric observations. In measuring elevation differences, we used the most modern leveling equipment available, and observational procedures were designed to provide the most effective method of acquiring data. In addition, systematic errors that cannot be sufficiently controlled by instrumentation or observational techniques are minimized by applying appropriate corrections to the observed data: the level collimation correction minimizes the error caused by non-horizontality of the leveling instrument's line of sight for unequal sight lengths; the refraction correction is modeled to minimize the refraction error caused by temperature (density) variation of air strata; the rod temperature correction accounts for variation in the length of the leveling rod's Invar/LO-VAR® strip resulting from temperature changes; the rod scale correction ensures a uniform scale conforming to the international length standard; and the concept of height systems is introduced, in which all types of height (orthometric, dynamic, normal, gravity correction, and equipotential surface) are investigated. The "Characteristic Straight Line Method" is slightly more convenient than the "Characteristic Circle Method": it permits the evaluation of a displacement of very small magnitude, even when the displacement is an infinitesimal quantity. The inclination of the landslide is given by the inverse of the distance from the reference point O to the "Characteristic Straight Line", and its direction is given by the bearing of the normal directed from point O to the Characteristic Straight Line (Fig. 6). A "best estimate" of the topometric observations was used to measure the elevations of carefully selected points before and after the deformation. Gross errors were eliminated by statistical analyses and by comparing the heights within local neighborhoods. The results of a test in an area where very interesting land surface deformation occurs are reported, and monitoring with different options and a qualitative comparison of results based on a sufficient number of check points are presented.
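Two of the rod corrections mentioned above lend themselves to a short arithmetic sketch; the coefficient values are invented placeholders, not the paper's calibration data.

```python
# Standard forms of the rod temperature and rod scale corrections applied to
# an observed height difference. All constants below are assumed for illustration.
ALPHA_INVAR = 1.0e-6       # thermal expansion of the Invar strip, 1/degC (assumed)
T_STANDARD = 20.0          # temperature at which the rod was calibrated, degC
SCALE_FACTOR = 1.0000012   # rod scale factor from calibration (assumed)

def rod_temperature_correction(dh, temp_c):
    """Correct an observed height difference dh (m) for rod thermal expansion."""
    return dh * (1.0 + ALPHA_INVAR * (temp_c - T_STANDARD))

def rod_scale_correction(dh):
    """Scale the reading to conform to the length standard."""
    return dh * SCALE_FACTOR

dh = 1.23456  # observed height difference, metres
print(rod_scale_correction(rod_temperature_correction(dh, 31.0)))
```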
Abstract: The problem of estimating time-varying regression is
inevitably concerned with the necessity to choose the appropriate
level of model volatility - ranging from the full stationarity of instant
regression models to their absolute independence of each other. In the
stationary case the number of regression coefficients to be estimated
equals that of regressors, whereas the absence of any smoothness
assumptions augments the dimension of the unknown vector by the
factor of the time-series length. The Akaike Information Criterion
is a commonly adopted means of adjusting a model to the given
data set within a succession of nested parametric model classes,
but its crucial restriction is that the classes are rigidly defined by
the growing integer-valued dimension of the unknown vector. To
make the Kullback information maximization principle underlying the
classical AIC applicable to the problem of time-varying regression
estimation, we extend it to a wider class of data models in which
the dimension of the parameter is fixed, but the freedom of its values
is softly constrained by a family of continuously nested a priori
probability distributions.
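For reference, the classical criterion being generalized selects, among nested classes of integer dimension k, the model minimizing the quantity below; the extension described above replaces the integer k with a family of continuously nested priors over a fixed-dimension parameter.

```latex
% Classical AIC: k is the number of freely estimated parameters and
% \hat{L}_k the maximized likelihood within the class of dimension k.
\mathrm{AIC}(k) = 2k - 2\ln \hat{L}_k
```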