Abstract: This paper presents a technique for diagnosing abdominal aortic aneurysm in magnetic resonance imaging (MRI) images. First, the technique segments the aorta in MRI images, a required step for determining the aortic volume, which in turn is essential for diagnosis. The proposed technique detects the aortic volume using a new external energy for the snake model, computed from Laws' texture measures. The new external energy increases the capture range of the snake model more effectively than conventional external energies. Second, the technique diagnoses abdominal aortic aneurysm with a Bayesian classifier, a classification model based on statistical theory. The features used for classification, namely area, perimeter, and compactness, were derived from the aortic contour produced by our snake model. We also compare the proposed technique with the traditional snake model. In our experiments, 30 images were used for training and 20 for testing, and the results were compared with expert opinion. The experimental results show that our technique achieves an accuracy of more than 95%.
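The shape features named above are not defined in the abstract; a minimal sketch of how area, perimeter, and compactness might be computed from a segmented closed contour (the perimeter²/(4πA) definition of compactness is a common convention and an assumption here, not the paper's stated formula):

```python
import math

def contour_features(points):
    """Compute area, perimeter, and compactness of a closed contour.

    points: list of (x, y) vertices in order; the contour is closed
    implicitly (last vertex connects back to the first).
    """
    n = len(points)
    area = 0.0
    perimeter = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1            # shoelace term
        perimeter += math.hypot(x2 - x1, y2 - y1)
    area = abs(area) / 2.0
    # Compactness: perimeter^2 / (4*pi*area); equals 1 for a circle.
    compactness = perimeter ** 2 / (4.0 * math.pi * area)
    return area, perimeter, compactness

# Unit square: area 1, perimeter 4, compactness 16/(4*pi) ≈ 1.273
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(contour_features(square))
```

The shoelace formula handles any simple polygonal contour, which is the typical output of a discretized snake.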
Abstract: This paper investigates the problem of sampling from transactional data streams. We introduce CFISDS, a content-based sampling algorithm that works on a landmark window model of data streams and preserves a more informative sample in the sample space. The algorithm, which is based on closed frequent itemset mining, first initializes a concept lattice from the initial data and then updates the lattice structure with an incremental mechanism. The incremental mechanism inserts, updates, and deletes nodes in the concept lattice in batch manner. The algorithm extracts the final sample on the user's demand. Experimental results on synthetic and real datasets show the accuracy of CFISDS, although CFISDS is not faster than existing sampling algorithms such as Z and DSS.
Abstract: This paper presents a study of accident analysis and black spot identification, and develops accident prediction models based on data collected on a rural roadway, Federal Route 50 (F050), Malaysia. Road accident trends and a black spot ranking were established for the F050. The development of the accident prediction model concentrates on the Parit Raja area, from KM 19 to KM 23. A multiple non-linear regression method was used to relate the discrete accident data to road and traffic flow explanatory variables. The dependent variable was modelled as the number of crashes expressed as an accident point weighting; such weightings have rarely been taken into account in road accident prediction models. The results show that the number of major access points without traffic lights, higher speeds, increasing Annual Average Daily Traffic (AADT), growing numbers of motorcycles and motorcars, and shorter time gaps are potential contributors to increased accident rates on multiple rural roadways.
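As an illustration of the kind of non-linear fit described above, a sketch using a hypothetical multiplicative model APW = a · AADT^b on invented data (the model form, the variable names, and every number here are assumptions for illustration, not the paper's fitted model):

```python
import numpy as np

# Hypothetical data: AADT (vehicles/day) and accident point weighting (APW).
aadt = np.array([4000.0, 6000.0, 9000.0, 12000.0, 16000.0])
apw = np.array([3.1, 4.4, 6.0, 7.3, 9.1])

# Fit the multiplicative model APW = a * AADT^b by linearizing with logs:
# ln(APW) = ln(a) + b * ln(AADT), then ordinary least squares.
X = np.column_stack([np.ones_like(aadt), np.log(aadt)])
coef, *_ = np.linalg.lstsq(X, np.log(apw), rcond=None)
a, b = np.exp(coef[0]), coef[1]

predicted = a * aadt ** b   # model predictions at the observed AADT values
print(a, b)
```

In practice, accident counts are often modeled with Poisson or negative binomial regression instead; the log-linearized least-squares fit above is only the simplest non-linear form.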
Abstract: The Generalized Center String (GCS) problem generalizes the Common Approximate Substring and Common Substring problems. GCS is known to be NP-hard; the difficulty lies in the explosion of potential candidates. Finding the longest center string is hard because the sequences that may not contain any motif are not known in advance for a particular biological gene process. GCS can be solved by frequent pattern mining techniques and is known to be fixed-parameter tractable with respect to input sequence length and symbol set size. Efficient methods known as the Bpriori algorithms solve GCS with reasonable time and space complexity; the Bpriori 2 and Bpriori 3-2 algorithms find center strings of any length, together with the positions of all their instances in the input sequences. In this paper, we reduce the time and space complexity of the Bpriori algorithm with a Constraint-Based Frequent Pattern mining (CBFP) technique that integrates the ideas of constraint-based mining and FP-tree mining. The CBFP technique solves the GCS problem not only for center strings of any length, but also for the positions of all their mutated copies in the input sequences. It constructs a trie-like FP-tree to represent the mutated copies of center strings of any length, with constraints restraining the growth of the consensus tree. Complexity analyses of the CBFP technique and the Bpriori algorithm are given for the worst case and the average case, and the correctness of the algorithm is demonstrated against the Bpriori algorithm on artificial data.
Abstract: In a virtual organization, a Knowledge Discovery (KD) service comprises distributed data resources and computing grid nodes. A computational grid is integrated with a data grid to form a Knowledge Grid, which implements the Apriori algorithm for mining association rules on the grid network. This paper describes the development of a parallel and distributed version of the Apriori algorithm on the Globus Toolkit using the Message Passing Interface extended with grid services (MPICH-G2). The Knowledge Grid is created on top of the data and computational grids to support decision making in real-time applications. A case study describes the design and implementation of local and global mining of frequent itemsets. Experiments were conducted on different grid configurations, and the computation time was recorded for each operation. Analysis of the results across the various grid configurations shows that the speedup in computation time is almost superlinear.
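Local and global mining of frequent itemsets can be sketched as a count-distribution scheme, with each partition standing in for one grid node and the merge step standing in for the MPI reduction (a simplified sequential sketch under those assumptions, not the Globus/MPICH-G2 implementation):

```python
from itertools import combinations
from collections import Counter

def local_counts(partition, k):
    """Count k-itemsets in one partition (one grid node's local pass)."""
    counts = Counter()
    for txn in partition:
        for itemset in combinations(sorted(txn), k):
            counts[itemset] += 1
    return counts

def global_frequent(partitions, k, min_support):
    """Merge local counts (as an MPI reduction would) into global frequent sets."""
    total = Counter()
    for part in partitions:
        total.update(local_counts(part, k))   # sum counts across nodes
    return {s: n for s, n in total.items() if n >= min_support}

# Two partitions standing in for two grid nodes.
parts = [[{'a', 'b', 'c'}, {'a', 'b'}], [{'a', 'b'}, {'b', 'c'}]]
print(global_frequent(parts, 2, 3))   # → {('a', 'b'): 3}
```

The key property is that only the compact count tables, not the transactions, need to cross the network between nodes.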
Abstract: To determine whether the murine insulinoma cell line β-TC-6 is a suitable substitute for primary pancreatic β-cells in the study of β-cell functional heterogeneity, we used three distinct functional assays to ascertain the cell line's response to glucose or a glucose analog. These assays were: (i) a 2-NBDG uptake assay; (ii) a calcium influx assay; and (iii) a quinacrine secretion assay. We show that a population of β-TC-6 cells endocytoses the glucose analog 2-NBDG at different rates, has non-uniform intracellular calcium ion concentrations, and releases quinacrine at different rates when challenged with glucose. We also measured the Km for β-TC-6 glucose uptake to be 46.9 mM and the Vm to be 8.36 × 10^-5 mmol/million cells/min. These data suggest that β-TC-6 cells might be used as an alternative to primary pancreatic β-cells for the study of glucose-dependent β-cell functional heterogeneity.
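Assuming the reported constants obey standard Michaelis-Menten kinetics, the uptake rate at any glucose concentration follows from v = Vm · [S] / (Km + [S]):

```python
# Michaelis-Menten uptake rate using the constants reported in the abstract:
Km = 46.9        # mM
Vm = 8.36e-5     # mmol per million cells per minute

def uptake_rate(glucose_mM):
    """v = Vm * [S] / (Km + [S])"""
    return Vm * glucose_mM / (Km + glucose_mM)

# At [S] = Km, the rate is half-maximal by definition.
print(uptake_rate(46.9))   # ≈ 4.18e-05
```

At physiological glucose levels well below this Km, the uptake rate is nearly linear in concentration.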
Abstract: The elastic scattering of protons, deuterons and 3He on 6Li at different incident energies has been analyzed in the framework of the optical model using the ECIS88 and SPI GENOA codes. The potential parameters were extracted by a phenomenological treatment of angular distributions measured by us and of literature data. Good agreement between theoretical and experimental differential cross sections was obtained over the whole angular range. Parameters for the real part of the potential were also calculated microscopically with the single- and double-folding model for the p and the d, 3He scattering, respectively, using the DFPOT code. For the best agreement with experiment, the normalization factor N for the potential depth lies in the range 0.7-0.9.
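The normalization factor N multiplies the depth of the microscopically folded real potential. A generic form of such a semi-microscopic optical potential (the imaginary Woods-Saxon term and the symbols here are standard textbook conventions, assumed for illustration, not the paper's exact parametrization) is:

```latex
U(r) \;=\; -\,N\,V_{\mathrm{fold}}(r)\;-\;\frac{i\,W_0}{1+\exp\!\big[(r-R_W)/a_W\big]},
```

where V_fold(r) is the single- or double-folded real potential, and W0, R_W, a_W are the depth, radius, and diffuseness of the phenomenological imaginary part.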
Abstract: This paper proposes a novel multi-format stream grid architecture for a real-time image monitoring system. The system, based on a three-tier architecture, comprises a stream receiving unit, a stream processor unit, and a presentation unit. It is a distributed-computing, loosely coupled architecture, whose benefit is that the number of required servers can be adjusted according to the load on the image monitoring system. The stream receiving unit supports multiple capture source devices and multi-format stream compression encoders. The stream processor unit includes three modules: a stream clipping module, an image processing module, and an image management module. The presentation unit can display image data on several different platforms. We verified the proposed grid architecture with an actual image monitoring test, using a fast image matching method with parameters adjustable to different monitoring situations. A background subtraction method is also implemented in the system. Experimental results show that the proposed architecture is robust, adaptive, and powerful for image monitoring.
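A minimal form of the background subtraction method mentioned above (the threshold value and the plain frame-differencing model are assumptions for illustration, not the system's actual implementation):

```python
import numpy as np

def background_subtract(frame, background, threshold=25):
    """Flag pixels whose absolute difference from the background
    model exceeds a threshold (simple frame differencing)."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > threshold          # boolean foreground mask

bg = np.zeros((4, 4), dtype=np.uint8)
frame = bg.copy()
frame[1:3, 1:3] = 200                # a bright "object" appears
mask = background_subtract(frame, bg)
print(mask.sum())                    # 4 foreground pixels
```

Production systems typically maintain an adaptive background model (e.g. a running average or mixture of Gaussians) rather than a fixed reference frame.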
Abstract: Crude oil blending is an important unit operation in the petroleum refining industry. A good model of the blending system is beneficial for supervising operation, predicting the quality of the exported petroleum, and realizing model-based optimal control. Since blending does not follow the ideal mixing rule in practice, we propose a static neural network to approximate the blending properties. Using the dead-zone approach, we propose a new robust learning algorithm and give a theoretical analysis. Real crude oil blending data are used to illustrate the neuro-modeling approach.
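The dead-zone idea can be sketched on a single linear neuron: the weight update is skipped whenever the output error falls inside the dead zone, which makes the learning robust to bounded noise (the learning rate, dead-zone width, and LMS-style update here are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def dead_zone_update(w, x, target, eta=0.05, dead_zone=0.1):
    """Gradient step on a linear neuron, skipped when the error
    lies inside the dead zone (robustness to bounded noise)."""
    error = target - w @ x
    if abs(error) <= dead_zone:
        return w                   # inside dead zone: no update
    return w + eta * error * x     # ordinary LMS step otherwise

w = np.zeros(2)
w = dead_zone_update(w, np.array([1.0, 2.0]), 1.0)
print(w)                           # updated, since |error| = 1 > 0.1
w2 = dead_zone_update(w, np.array([0.0, 0.0]), 0.05)
print(np.array_equal(w, w2))       # True: error 0.05 inside the dead zone
```

The dead zone prevents the weights from chasing measurement noise smaller than the assumed noise bound.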
Abstract: Short message integrated distributed monitoring systems (SM-DMS) are growing rapidly in wireless communication applications in various areas, such as electromagnetic field (EMF) management, wastewater monitoring, and air pollution supervision. However, delays in short messages often make the data embedded in an SM-DMS transmit unreliably, and few regulations in SMS transmission protocols deal with this problem. In this study, based on an analysis of the command and data requirements of the SM-DMS, we developed a processing model for the control center to solve the delay problem in data transmission. Three components of the model are described in detail: the data transmission protocol, the receiving buffer pool method, and the timer mechanism. Adjusting the threshold parameter of the timer mechanism for adaptive performance during SM-DMS runtime is discussed. The model optimizes data transmission reliability in SM-DMS and supplements the data transmission reliability protocols at the application level.
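The receiving buffer pool and timer mechanism might be sketched as follows: an out-of-order message waits in the pool until it is next in sequence or until the timer threshold expires (the sequence numbers, the timeout value, and the heap layout are assumptions for illustration, not the protocol's actual fields):

```python
import heapq

class ReceivingBufferPool:
    """Reorder delayed short messages by sequence number, releasing
    a message only when it is next in order or its timer expires."""

    def __init__(self, timeout=3):
        self.heap = []             # (seq, arrival_time, payload)
        self.next_seq = 0
        self.timeout = timeout     # threshold parameter of the timer

    def push(self, seq, payload, now):
        heapq.heappush(self.heap, (seq, now, payload))

    def pop_ready(self, now):
        """Release in-order messages, plus out-of-order ones that have
        waited longer than the timeout (passed through as-is)."""
        out = []
        while self.heap:
            seq, arrived, payload = self.heap[0]
            if seq == self.next_seq or now - arrived >= self.timeout:
                heapq.heappop(self.heap)
                out.append(payload)
                self.next_seq = max(self.next_seq, seq + 1)
            else:
                break
        return out

pool = ReceivingBufferPool(timeout=3)
pool.push(1, "second", now=0)      # arrives out of order
pool.push(0, "first", now=1)
print(pool.pop_ready(now=1))       # ['first', 'second']
```

Raising the timeout trades latency for a better chance of restoring the original order, which is the adaptive threshold adjustment the study discusses.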
Abstract: To investigate the applicability of EDR-2 film for clinical radiation dosimetry, percentage depth doses, profiles and dose distributions in open and dynamically wedged fields were measured with film and compared with data from a treatment planning system. The validity of the EDR-2 film for measuring dose in a plane parallel to the beam was tested by irradiating 10 cm × 10 cm and 4 cm × 4 cm fields from a Siemens Primus linac with a 6 MV beam at a source-to-surface distance of 100 cm. The film was placed horizontally between solid water phantom blocks and marked with pin holes at a depth of 10 cm from the incident beam surface. The film measurements, in absolute dose, were compared with ion chamber measurements made using a Wellhofer scanning water tank system and with the treatment planning system. Our results indicate a maximum underestimate of 8% relative to the dose calculated by the treatment planning system.
Abstract: The paper analyzes visibility records collected from 210 European airports to obtain a realistic estimation of the availability of Free Space Optical (FSO) data links. Commercially available optical links usually operate in the 850 nm waveband, so the influence of the atmosphere on the optical beam is similar to its influence on visible light. Long-term visibility records therefore represent an invaluable source of data for estimating the quality of service of FSO links. The model used characterizes both the statistical properties of fade depths and the statistical properties of individual fade durations. Results are presented for Italy, France, and Germany.
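Visibility records are commonly converted to optical attenuation with the Kim model; a sketch for the 850 nm waveband (the Kim model is a standard approximation in the FSO literature, but the abstract does not state that it is the paper's model, so this is an assumption):

```python
def fog_attenuation_db_per_km(visibility_km, wavelength_nm=850):
    """Kim model: specific attenuation of an FSO link from visibility.
    alpha = (3.91 / V) * (lambda / 550 nm)^-q  [dB/km],
    with the size-distribution exponent q chosen from the visibility."""
    V = visibility_km
    if V > 50:
        q = 1.6
    elif V > 6:
        q = 1.3
    elif V > 1:
        q = 0.16 * V + 0.34
    elif V > 0.5:
        q = V - 0.5
    else:
        q = 0.0
    return (3.91 / V) * (wavelength_nm / 550.0) ** (-q)

# Dense fog (visibility 0.2 km) attenuates an 850 nm beam heavily:
print(fog_attenuation_db_per_km(0.2))   # ≈ 19.55 dB/km
```

Feeding long-term visibility statistics through such a model yields the fade-depth statistics from which link availability is estimated.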
Abstract: The self-organizing map (SOM) is a well-known data reduction technique used in data mining. Data visualization can reveal structure in data sets that is otherwise hard to detect from raw data alone; however, interpretation through visual inspection is prone to errors and can be very tedious. There are several techniques for the automatic detection of clusters among the code vectors found by SOMs, but they generally do not take into account the distribution of the code vectors, which may lead to unsatisfactory clustering and poor definition of cluster boundaries, particularly where the density of data points is low. In this paper, we propose the use of a generic particle swarm optimization (PSO) algorithm for finding cluster boundaries directly from the code vectors obtained from SOMs. The PSO algorithm uses the U-matrix of the SOM to determine cluster boundaries. The application of our method to unlabeled call data from a mobile phone operator demonstrates its feasibility, and the results of this automatic method correspond well both to boundary detection through visual inspection of the code vectors and to the k-means algorithm.
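A generic PSO of the kind referred to can be sketched on a toy objective (the inertia and acceleration coefficients are conventional values, and the sphere function merely stands in for the paper's U-matrix-based boundary criterion):

```python
import random

def pso(objective, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0):
    """Minimal generic PSO minimizing `objective` over a box."""
    random.seed(0)
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=objective)           # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

# Sphere function: minimum at the origin.
best = pso(lambda p: sum(x * x for x in p), dim=2)
print(best)
```

In the paper's setting, the objective would score candidate boundary positions against the U-matrix values rather than a synthetic function.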
Abstract: A data warehouse (DW) is a system whose value lies in supporting decision making through querying. Queries to a DW are critical with regard to their complexity and length: they often access millions of tuples and involve joins between relations and aggregations. Materialized views can provide better performance for DW queries; however, these views incur maintenance cost, so materializing all views is not possible. An important challenge in the DW environment is materialized view selection, since we must realize the trade-off between query performance and view maintenance. In this paper, we introduce a new approach to this challenge based on Two-Phase Optimization (2PO), a combination of Simulated Annealing (SA) and Iterative Improvement (II), with the use of a Multiple View Processing Plan (MVPP). Our experiments show that 2PO outperforms the original algorithms in terms of query processing cost and view maintenance cost.
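The 2PO idea, Iterative Improvement down to a local minimum followed by Simulated Annealing from that solution, can be sketched on a toy view-selection cost (the bit-vector encoding, cooling schedule, and cost function are illustrative assumptions, not the paper's MVPP cost model):

```python
import math
import random

def two_phase_optimize(cost, n, seed=0):
    """2PO sketch: Iterative Improvement finds a local minimum, then
    low-temperature Simulated Annealing refines it. `cost` maps a tuple
    of 0/1 flags (view materialized or not) to a total cost."""
    rng = random.Random(seed)
    state = tuple(rng.randint(0, 1) for _ in range(n))

    def neighbors(s):
        return [s[:i] + (1 - s[i],) + s[i + 1:] for i in range(n)]

    # Phase 1: Iterative Improvement (steepest descent to a local minimum).
    while True:
        cand = min(neighbors(state), key=cost)
        if cost(cand) >= cost(state):
            break
        state = cand

    # Phase 2: Simulated Annealing starting from the II solution.
    best_state = state
    T = 1.0
    while T > 0.01:
        cand = rng.choice(neighbors(state))
        delta = cost(cand) - cost(state)
        if delta < 0 or rng.random() < math.exp(-delta / T):
            state = cand                         # accept (maybe uphill) move
        if cost(state) < cost(best_state):
            best_state = state
        T *= 0.95                                # geometric cooling
    return best_state

# Toy cost: Hamming distance to the (unknown to the optimizer) best selection.
target = (1, 0, 1, 0)
best = two_phase_optimize(lambda s: sum(a != b for a, b in zip(s, target)), 4)
print(best)   # → (1, 0, 1, 0)
```

SA's ability to accept uphill moves lets phase 2 escape local minima that phase 1 cannot; here the toy cost has a single minimum, so II already finds it.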
Abstract: Matching algorithms are of significant importance in speaker recognition. As the last step in speaker recognition, feature vectors of the unknown utterance are compared to the feature vectors of the modeled speakers, and a similarity score is found for every model in the speaker database. Depending on the type of speaker recognition, these scores are used to determine the identity of the unknown speech samples. For speaker verification, the similarity score is tested against a predefined threshold, and either an acceptance or a rejection results. In the case of speaker identification, the result depends on whether the identification is open-set or closed-set. In closed-set identification, the model that yields the best similarity score is accepted. In open-set identification, the best score is tested against a threshold, so there is one more possible output, corresponding to the case where the speaker is not among those registered in the database. This paper focuses on closed-set speaker identification using a modified version of a well-known matching algorithm. The new matching algorithm shows better performance on the YOHO speaker recognition database.
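The closed-set decision rule described above can be sketched as follows (the Euclidean frame-to-model distance and the toy vectors are illustrative assumptions, not the paper's modified matching algorithm):

```python
import math

def identify(unknown_vectors, speaker_models):
    """Closed-set identification: score every speaker model against the
    unknown utterance, then pick the speaker with the best (smallest) score."""
    scores = {}
    for name, model in speaker_models.items():
        # Score = mean distance of each test vector to its nearest model vector.
        scores[name] = sum(min(math.dist(u, m) for m in model)
                           for u in unknown_vectors) / len(unknown_vectors)
    return min(scores, key=scores.get), scores

models = {"alice": [(0.0, 0.0), (0.1, 0.1)], "bob": [(1.0, 1.0)]}
speaker, scores = identify([(0.05, 0.02)], models)
print(speaker)   # 'alice'
```

Open-set identification would add one step: compare `scores[speaker]` against a threshold and reject the decision when the best score is still too poor.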
Abstract: In this paper, the sums of squares in linear regression are related to sums of squares in semi-parametric regression. We show that the different sums of squares in linear regression correspond to various deviance statements in semi-parametric regression. In addition, the coefficient of determination derived for the linear regression model is easily generalized to a coefficient of determination for the semi-parametric regression model. An application is then presented to support the theory for linear and semi-parametric regression, and the study is further supported with a simulated data example.
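In the linear case, the coefficient of determination referred to above is built from exactly these sums of squares; a minimal computation of R² = 1 − SSE/SST on invented data:

```python
def r_squared(y, y_hat):
    """Coefficient of determination: R^2 = 1 - SSE/SST,
    the share of total variation explained by the fit."""
    mean_y = sum(y) / len(y)
    sst = sum((yi - mean_y) ** 2 for yi in y)               # total sum of squares
    sse = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))   # residual sum of squares
    return 1.0 - sse / sst

y = [1.0, 2.0, 3.0, 4.0]
y_hat = [1.1, 1.9, 3.2, 3.8]
print(round(r_squared(y, y_hat), 3))   # → 0.98
```

The semi-parametric generalization replaces SSE with the corresponding deviance while keeping the same "explained share of variation" interpretation.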
Abstract: This paper describes how student satisfaction is measured for work-based learners. These are non-traditional learners who conduct academic learning in the workplace; their curricula typically involve a high degree of negotiation, and their motivations relate directly to their employers' needs as well as to their own career ambitions. We argue that, while increasing work-based learning (WBL) participation and the use of student satisfaction data (SSD) are both accepted as strategically important to the HE agenda, the use of WBL SSD is rarely examined; lessons can be learned from comparing SSD across a range of WBL programmes, and increased visibility of this type of data will provide insight into ways to improve and develop this type of delivery. The key themes that emerged from the analysis of the interview data were: learner profiles and needs, employer drivers, academic staff drivers, organizational approach, tools for collecting data, and visibility of findings. The paper concludes with observations on best practice in the collection, analysis and use of WBL SSD, offering recommendations for both academic managers and practitioners.
Abstract: Young patients suffering from cerebral palsy face difficult choices concerning heavy surgeries. The diagnosis settled by surgeons can be complex, and the patient's decision whether or not to undergo such a surgery requires considerable reflection. The proposed software, which combines prediction of surgeries and of post-surgery kinematic values with a 3D model representing the patient, is an innovative tool helpful to both patients and medical professionals. Beginning with the analysis and classification of the kinematic values, extracted from a gait analysis database, into three separate clusters, it is possible to determine close similarity between patients. The surgery best adapted to improve a patient's gait is then predicted by a suitably preconditioned neural network. Finally, a 3D model of the patient, based on the analysis of the kinematic values, is animated using the post-surgery kinematic vectors of the closest patient selected from the patient clusters.
Abstract: Authentication plays a vital role in many secure systems. Most such systems require the user to log in with a secret password or pass phrase before entry, to ensure that all valuable information is kept confidential while also guaranteeing its integrity and availability. To achieve this goal, however, users are required to memorize high-entropy passwords or pass phrases, and they sometimes find it difficult to remember such meaningless strings of data. This paper presents a new scheme that assigns a weight to each personal question given to the user for revealing the encrypted secret or password. The focus of the scheme is to offer fault tolerance: users may forget the answers to a subset of the questions and still recover the secret and authenticate successfully. A comparison of the security levels of the weight-based and the weightless secret recovery schemes is also discussed. The paper concludes with the areas of this research that require further investigation.
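The weight-based recovery rule can be sketched as a weighted threshold test (the particular weights and threshold below are hypothetical values, not the paper's scheme parameters, and real schemes bind the weights into the cryptographic secret-sharing rather than a plain sum):

```python
def can_recover(answers_correct, weights, threshold):
    """Weighted fault-tolerant recovery: the secret is released when the
    summed weights of correctly answered questions reach a threshold,
    so a user may forget answers to some low-weight questions."""
    score = sum(w for ok, w in zip(answers_correct, weights) if ok)
    return score >= threshold

weights = [3, 2, 2, 1]          # per-question weights (hypothetical)
# User forgets question 4 (weight 1) but answers the rest:
print(can_recover([True, True, True, False], weights, threshold=6))   # True
# Forgetting the weight-3 question alone drops below the threshold:
print(can_recover([False, True, True, True], weights, threshold=6))   # False
```

Assigning higher weights to questions with higher-entropy answers is what distinguishes the weight-based scheme from a weightless k-of-n rule.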
Abstract: Numerical integration of an initial boundary value problem for the advection equation in ℝ³ is considered. The method used is a conditionally stable semi-Lagrangian advection scheme with high-order interpolation on an unstructured mesh. To increase the integration time step, the BFECC method with a TVD limiter correction is used. The method is adapted to a parallel graphics processing unit environment using NVIDIA CUDA and applied in a Navier-Stokes solver. It is shown that the calculation on an NVIDIA GeForce 8800 GPU is 184 times faster than on a single-processor AMD X2 4800+ CPU. The method is extended to an incompressible fluid dynamics solver, and flow over a cylinder in the 3D case is compared with experimental data.
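The core semi-Lagrangian step can be illustrated in one dimension: trace each grid point back along the velocity field and interpolate at the departure point (linear rather than high-order interpolation, and a structured periodic grid rather than an unstructured mesh, are simplifications relative to the paper's scheme):

```python
import numpy as np

def semi_lagrangian_step(phi, u, dt, dx):
    """One semi-Lagrangian advection step on a periodic 1-D grid:
    trace each grid point back along the velocity, then interpolate
    linearly at the departure point."""
    n = len(phi)
    x = np.arange(n) * dx
    x_dep = (x - u * dt) % (n * dx)              # departure points (periodic)
    i0 = np.floor(x_dep / dx).astype(int) % n    # left neighbor index
    i1 = (i0 + 1) % n                            # right neighbor index
    frac = (x_dep / dx) - np.floor(x_dep / dx)   # interpolation weight
    return (1 - frac) * phi[i0] + frac * phi[i1]

phi = np.zeros(8)
phi[2] = 1.0
# CFL = u*dt/dx = 1: the pulse shifts exactly one cell, with no smearing.
print(semi_lagrangian_step(phi, u=1.0, dt=1.0, dx=1.0))
```

Because the update interpolates from existing values rather than differencing fluxes, the step remains usable at large time steps; the BFECC correction mentioned above would wrap such a step forward-backward-forward to cancel the interpolation's leading error.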