Abstract: We demonstrate the synthesis of intermediate views
within a sequence of color-encoded, materials-discriminating X-ray
images that exhibit animated depth in a visual display. During
image acquisition, the requirement for a linear X-ray detector
array is replaced by synthetic imagery. The Scale Invariant Feature
Transform (SIFT), in combination with material-segmented morphing,
is employed to produce the synthetic views. A quantitative analysis of
the feature-matching performance of SIFT is presented along with
a comparative study of the synthetic imagery. We show that the total
number of matches produced by SIFT decreases as the angular
separation between the generating views increases. This effect is
accompanied by an increase in the total number of synthetic pixel
errors. The observed trends are obtained from 15 different luggage
items. This programme of research is in collaboration with the UK
Home Office and the US Dept. of Homeland Security.
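The matching step at the core of this pipeline can be illustrated with Lowe's ratio test, the standard acceptance rule for SIFT descriptor matching. The sketch below is illustrative only: it uses small synthetic descriptor vectors rather than real SIFT output, and the perturbation magnitudes standing in for "angular separation" are invented.

```python
import math
import random

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Lowe's ratio test: accept a match only when the nearest neighbour
    in desc_b is clearly closer than the second-nearest neighbour."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = sorted((math.dist(d, e), j) for j, e in enumerate(desc_b))
        (d1, j1), (d2, _) = dists[0], dists[1]
        if d1 < ratio * d2:
            matches.append((i, j1))
    return matches

random.seed(0)
# Synthetic 8-D "descriptors"; a second view perturbs the first slightly.
view_a = [[random.random() for _ in range(8)] for _ in range(50)]
view_b = [[v + random.gauss(0, 0.01) for v in d] for d in view_a]
# A heavier perturbation stands in for a wider angular separation.
view_c = [[v + random.gauss(0, 0.5) for v in d] for d in view_a]

matches_near = ratio_test_matches(view_a, view_b)
matches_far = ratio_test_matches(view_a, view_c)
print(len(matches_near), len(matches_far))
```

Consistent with the trend reported above, the match count drops as the views (here, the perturbed descriptor sets) drift apart.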
Abstract: In the present study, the incorporation of graphene
into blends of acrylonitrile-butadiene-styrene terpolymer with
polypropylene (ABS/PP) was investigated focusing on the
improvement of their thermomechanical characteristics and the effect
on their rheological behavior. The blends were prepared by melt
mixing in a twin-screw extruder and were characterized by measuring
the MFI as well as by performing DSC, TGA and mechanical tests.
The addition of graphene to ABS/PP blends tends to increase their
melt viscosity, owing to the confinement of polymer chain motion.
Graphene also raises the crystallization temperature (Tc), especially
in blends with higher PP content, because it reduces the surface
energy of PP nucleation, a consequence of the attachment of PP
chains to the graphene surface through intermolecular CH-π
interactions. Moreover, this nanofiller
improves the thermal stability of PP and increases the residue of
thermal degradation at all the investigated compositions of blends,
due to the thermal isolation effect and the mass transport barrier
effect. Regarding the mechanical properties, the addition of graphene
improves the elastic modulus, because of its intrinsic mechanical
characteristics and its rigidity, and this effect is particularly strong in
the case of pure PP.
Abstract: In this report we present a rule-based approach to
detect anomalous telephone calls. The method described here uses
subscriber usage CDR (call detail record) data sampled over two
observation periods: study period and test period. The study period
contains call records of customers' non-anomalous behaviour.
Customers are first grouped according to similar usage behaviour
(e.g., average number of local calls per week). For
customers in each group, we develop a probabilistic model to describe
their usage. Next, we use maximum likelihood estimation (MLE) to
estimate the parameters of the calling behaviour. Then we determine
thresholds by calculating acceptable change within a group. MLE is
used on the data in the test period to estimate the parameters of the
calling behaviour. These parameters are compared against thresholds.
Any deviation beyond the threshold is used to raise an alarm. This
method has the advantage of identifying local anomalies as compared
to techniques which identify global anomalies. The method is tested
for 90 days of study data and 10 days of test data of telecom
customers. For medium to large deviations in the data in test window,
the method is able to identify 90% of anomalous usage with less than
1% false alarm rate.
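The study-period/test-period scheme described above can be sketched as follows. This is a minimal illustration, assuming a Gaussian usage model per group and a fixed k-sigma band as the "acceptable change"; the call counts are invented and the full grouping and CDR-processing stages are omitted.

```python
import statistics

# Study period: weekly local-call counts for one usage group of customers
# (synthetic numbers, for illustration only).
study = [12, 15, 11, 14, 13, 16, 12, 14, 15, 13]

# MLE for a Gaussian usage model: sample mean and standard deviation.
mu = statistics.fmean(study)
sigma = statistics.pstdev(study)  # MLE uses the population form

# Acceptable change within the group: a k-standard-deviation band.
k = 3.0
low, high = mu - k * sigma, mu + k * sigma

def is_anomalous(test_mean):
    """Raise an alarm when the test-period estimate leaves the band."""
    return not (low <= test_mean <= high)

print(is_anomalous(14.0), is_anomalous(40.0))
```

Because the band is derived per group, a customer whose usage is normal globally but unusual for their group is still flagged, which is the "local anomaly" advantage claimed above.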
Abstract: This paper presents a new method for estimating the mean curve of impulse voltage waveforms recorded during impulse tests. In practice, these waveforms are distorted by noise, oscillations and overshoot. The problem is formulated as an estimation problem, and the signal parameters are estimated using a fast and accurate technique based on a discrete dynamic filtering (DDF) algorithm. The main advantage of the proposed technique is its ability to produce the estimates in a very short time and with a very high degree of accuracy. The algorithm uses sets of digital samples of the recorded impulse waveform. The proposed technique has been tested using simulated data of practical waveforms. The effects of the number of samples and the data window size are studied, and the results are reported and discussed.
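The estimation problem itself can be illustrated on the standard double-exponential impulse model. Note this sketch does not implement the DDF algorithm (which the abstract does not detail); it uses a coarse grid search with a linear least-squares amplitude fit, and all parameter values and units are invented.

```python
import math
import random

# Standard double-exponential impulse model: v(t) = A * (exp(-a t) - exp(-b t)).
def impulse(t, A, a, b):
    return A * (math.exp(-a * t) - math.exp(-b * t))

random.seed(1)
A_true, a_true, b_true = 1.0, 0.02, 0.5   # illustrative per-sample rates
ts = [float(i) for i in range(100)]        # 100 digital samples
data = [impulse(t, A_true, a_true, b_true) + random.gauss(0, 0.01) for t in ts]

# Coarse grid search over (a, b); A then follows by linear least squares.
best = None
for a in [0.01 + 0.005 * i for i in range(9)]:      # 0.01 .. 0.05
    for b in [0.1 + 0.1 * j for j in range(10)]:    # 0.1 .. 1.0
        basis = [math.exp(-a * t) - math.exp(-b * t) for t in ts]
        denom = sum(x * x for x in basis)
        A = sum(x * y for x, y in zip(basis, data)) / denom
        sse = sum((A * x - y) ** 2 for x, y in zip(basis, data))
        if best is None or sse < best[0]:
            best = (sse, A, a, b)

sse, A_hat, a_hat, b_hat = best
print(round(A_hat, 2), round(a_hat, 3), round(b_hat, 1))
```

Even with additive noise on every sample, the fitted mean curve recovers the underlying waveform parameters; a dedicated filtering algorithm such as DDF addresses the same problem far more efficiently.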
Abstract: This paper presents work on signal discrimination,
specifically for the electrocardiogram (ECG) waveform. The ECG
signal comprises P, QRS, and T waves in each normal heartbeat,
describing a pattern of heart rhythms specific to an individual.
Further medical diagnosis can then be performed using ECG
information to detect heart-related disease. The emphasis here is on
QRS-complex classification, whose importance is discussed in
detail. The Pan-Tompkins algorithm, a widely known technique, has
been adapted to realize the QRS-complex classification process.
Eight steps are involved, namely sampling, normalization, low-pass
filtering, high-pass filtering (together forming a band-pass filter),
differentiation, squaring, averaging and, lastly, QRS detection. The
simulation results obtained are presented in a Graphical User
Interface (GUI) developed using MATLAB.
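The later stages of the chain above can be sketched compactly. This is a pared-down illustration of the derivative, squaring, moving-window integration and thresholding steps of Pan-Tompkins; the band-pass stage is omitted, the "ECG" is a synthetic spike train, and the sampling rate and thresholds are assumed values, not the algorithm's published adaptive ones.

```python
# Synthetic signal: four "QRS" spikes at known sample positions.
fs = 200                                   # assumed sampling rate, Hz
n = fs * 4                                 # four seconds of signal
signal = [0.0] * n
for beat in (100, 260, 420, 580):
    signal[beat - 1], signal[beat], signal[beat + 1] = 0.3, 1.0, 0.3

# Five-point derivative (Pan-Tompkins form), then squaring.
deriv = [0.0] * n
for i in range(2, n - 2):
    deriv[i] = (-signal[i-2] - 2*signal[i-1] + 2*signal[i+1] + signal[i+2]) / 8.0
squared = [d * d for d in deriv]

# Moving-window integration (~150 ms window).
w = int(0.15 * fs)
integrated = [sum(squared[max(0, i - w + 1): i + 1]) / w for i in range(n)]

# Simple fixed threshold with a ~200 ms refractory period.
threshold = 0.5 * max(integrated)
refractory = int(0.2 * fs)
peaks, last = [], -refractory
for i, v in enumerate(integrated):
    if v > threshold and i - last > refractory:
        peaks.append(i)
        last = i
print(peaks)
```

Each detected index lands at the corresponding spike, showing why the squaring and integration stages make a simple threshold sufficient for clean signals.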
Abstract: This study was designed to investigate the role of serum nitric oxide and sialic acid as disease markers in the development of diabetic nephropathy. A total of 210 diabetic patients (age- and sex-matched) were selected after informed consent and divided into four groups (70 each): I, control; II, diabetic; III, diabetic hypertensive; IV, diabetic nephropathy. Blood samples from all subjects were collected and analyzed for serum nitric oxide, sialic acid, fasting blood glucose, serum urea, creatinine, HbA1c and GFR. The BMI, systolic and diastolic blood pressures, blood glucose, HbA1c and serum sialic acid levels were high (p
Abstract: Domain-specific languages describe specific solutions to problems in the application domain. Traditionally they form a solution by composing black-box abstractions, which usually involves non-deep transformations over the target model. In this paper we argue that it is potentially powerful to operate with grey-box abstractions to build a domain-specific software system. We present parametric code templates as grey-box abstractions, together with conceptual tools to encapsulate and manipulate these templates. Manipulations introduce templates' merging routines and can be defined in a generic way; this involves reasoning mechanisms at the code-template level. We introduce the Neurath Modelling Language (NML), which operates with parametric code templates and specifies a visualisation mapping mechanism for target models. Finally we provide an example of constructing a domain-specific software system with predefined NML elements.
Abstract: This paper introduces a framework that aims to
support the design and development of mobile services. The
traditional innovation process and its supporting instruments in the
form of creativity tools, acceptance research and user-generated content
analysis are screened for potentials for improvement. The result is a
reshaped innovation process in which acceptance research and
user-generated content analysis are fully integrated within a creativity
tool. The advantages of this method are the enhancement of
design-relevant information for developers and designers and the
ability to forecast market success.
Abstract: The aim of this paper is to present current and future
procedures in castings procurement. Differences in procurement are
highlighted. The supplier selection criteria used in practice is
compared to literature findings. Different trends related to supply
chains are presented and it is described how they are reflected in
reality to castings procurement. To fulfil the aim, interviews were
conducted in nine companies using castings. It was found that largest
casting users have the most subcontractor foundries and it is more
typical that they have multiple suppliers for the same parts. Currently
only two companies out of nine purchase castings outside Europe,
but the others are also progressing in the same direction. The main
reason is the need to lower purchasing costs. Another trend is that all
companies want to buy cast components or sub-assemblies instead of
raw castings from foundries. It was found that price is a main
supplier selection criterion. All companies use competitive bidding in
supplier selection.
Abstract: A Picard-Newton iteration method is studied to accelerate the numerical solution of a class of two-dimensional nonlinear coupled parabolic-hyperbolic systems. The Picard-Newton iteration is designed by adding higher-order small-quantity terms to an existing Picard iteration. Discrete functional analysis and inductive-hypothesis reasoning techniques are used to overcome difficulties arising from nonlinearity and coupling, and a theoretical analysis is made of the convergence and approximation properties of the iteration scheme. The Picard-Newton iteration has a quadratic convergence rate, and its solution has second-order spatial and first-order temporal approximation to the exact solution of the original problem. Numerical tests verify the theoretical analysis and show that the Picard-Newton iteration is more efficient than the Picard iteration.
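The convergence-order gap that makes the acceleration worthwhile can be seen on a scalar analogue. This is not the paper's PDE scheme: it simply contrasts a fixed-point (Picard) iteration with a Newton iteration on the toy equation x = cos(x), where the same linear-versus-quadratic behaviour appears.

```python
import math

def picard(x0, tol=1e-12, max_it=200):
    """Fixed-point iteration x <- cos(x): linear convergence."""
    x, n = x0, 0
    while abs(math.cos(x) - x) > tol and n < max_it:
        x, n = math.cos(x), n + 1
    return x, n

def newton(x0, tol=1e-12, max_it=200):
    """Newton on f(x) = x - cos(x): quadratic convergence."""
    x, n = x0, 0
    while abs(math.cos(x) - x) > tol and n < max_it:
        # f'(x) = 1 + sin(x)
        x, n = x - (x - math.cos(x)) / (1.0 + math.sin(x)), n + 1
    return x, n

xp, n_picard = picard(1.0)
xn, n_newton = newton(1.0)
print(n_picard, n_newton)
```

Both iterations reach the same root, but the Newton variant needs an order of magnitude fewer iterations, mirroring the efficiency gain reported for the Picard-Newton scheme.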
Abstract: Fecal sterol has been proposed as a chemical indicator
of human fecal pollution even when fecal coliform populations have
diminished due to water chlorination or toxic effects of industrial
effluents. This paper describes an improved derivatization procedure
for simultaneous determination of four fecal sterols including
coprostanol, epicholestanol, cholesterol and cholestanol using gas
chromatography-mass spectrometry (GC-MS), via optimization study
on silylation procedures using
N,O-bis(trimethylsilyl)trifluoroacetamide (BSTFA) and
N-(tert-butyldimethylsilyl)-N-methyltrifluoroacetamide
(MTBSTFA), which lead to the formation of trimethylsilyl (TMS) and
tert-butyldimethylsilyl (TBS) derivatives, respectively. Two
derivatization processes, injection-port derivatization and water-bath
derivatization (60 °C, 1 h), were examined and compared. Furthermore,
the methylation procedure at 25 °C for 2 h with
trimethylsilyldiazomethane (TMSD) for fecal sterol analysis was also
studied. It was found that most TMS derivatives demonstrated the
highest sensitivities, followed by the methylated derivatives. For BSTFA
or MTBSTFA derivatization processes, the simple injection-port
derivatization process could achieve the same efficiency as that in the
tedious water bath derivatization procedure.
Abstract: We present herein a concise review of discrete
programming models and methods. An analysis of these models and
methods is conducted. On the basis of discrete programming
models, a new class of problems is elaborated and proposed:
block-symmetry models and methods for the statement and solution
of applied tasks.
Abstract: Regression testing is a maintenance activity applied to
modified software to provide confidence that the changed parts are
correct and that the unchanged parts have not been adversely affected
by the modifications. Regression test selection techniques reduce the
cost of regression testing, by selecting a subset of an existing test
suite to use in retesting modified programs. This paper presents the
first general regression-test-selection technique that is code-based,
allows test cases to be selected for programs written in any
programming language, and also handles incomplete programs. We
also describe RTSDiff, a regression-test-selection system that
implements the proposed technique. The results of empirical
studies performed on four programming languages (Java, C#, Cµ
and Visual Basic) show that the technique is efficient and effective
in reducing the size of the test suite.
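The core selection idea is language-independent and can be sketched in a few lines: keep exactly the tests whose coverage intersects the modified code entities. All test and entity names below are hypothetical, and real systems such as the one described above derive `coverage` and `changed` from program analysis rather than hand-written tables.

```python
# Map each test to the code entities it covers (hypothetical names).
coverage = {
    "test_login":    {"auth.check", "auth.hash"},
    "test_logout":   {"auth.session"},
    "test_report":   {"report.render", "report.export"},
    "test_settings": {"config.load"},
}

# Entities modified in the current revision.
changed = {"auth.hash", "report.export"}

# Select only the tests whose covered entities intersect the change set.
selected = sorted(t for t, covered in coverage.items() if covered & changed)
print(selected)
```

Here only two of the four tests are retained, which is the cost reduction that regression test selection aims for.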
Abstract: This paper describes a novel method for automatic
estimation of the contours of weld defects in radiography images.
Generally, contour detection is the first operation applied in a
visual recognition system. Our approach can be described as a
region-based maximum-likelihood formulation of parametric
deformable contours. This formulation provides robustness against
the poor image quality, and allows simultaneous estimation of the
contour parameters together with other parameters of the model.
Implementation is performed by a deterministic iterative algorithm
with minimal user intervention. The results demonstrate the very
good performance of the approach, especially on synthetic weld
defect images.
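The region-based maximum-likelihood principle can be illustrated on a toy synthetic "radiograph". Under a Gaussian equal-variance model, the ML contour minimises the within-region sum of squared deviations; the sketch below searches candidate circle radii deterministically, with a known centre and a simple parametric shape. It illustrates the principle only, not the paper's deformable-contour algorithm.

```python
import random

# Synthetic image: one bright circular "defect" on a darker background.
random.seed(2)
size, cx, cy, r_true = 40, 20, 20, 8
image = [[(1.0 if (x - cx) ** 2 + (y - cy) ** 2 <= r_true ** 2 else 0.2)
          + random.gauss(0, 0.05)
          for x in range(size)] for y in range(size)]

def region_sse(r):
    """Within-region sum of squared deviations for a circle of radius r
    (Gaussian equal-variance ML reduces to minimising this quantity)."""
    inside, outside = [], []
    for y in range(size):
        for x in range(size):
            region = inside if (x - cx) ** 2 + (y - cy) ** 2 <= r * r else outside
            region.append(image[y][x])
    sse = 0.0
    for region in (inside, outside):
        if region:
            m = sum(region) / len(region)
            sse += sum((v - m) ** 2 for v in region)
    return sse

# Deterministic iteration over candidate contour parameters.
r_hat = min(range(2, 16), key=region_sse)
print(r_hat)
```

Because the criterion pools statistics over whole regions rather than relying on local edges, it stays robust when individual pixels are noisy, which is the robustness property claimed above.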
Abstract: A multi-rate discrete-time model, whose response
agrees exactly with that of a continuous-time original at all sampling
instants for any sampling periods, is developed for a linear system,
which is assumed to have multiple real eigenvalues. The sampling
rates can be chosen arbitrarily and individually, so that their ratios
can even be irrational. The state space model is obtained as a
combination of a linear diagonal state equation and a nonlinear output
equation. Unlike the usual lifted model, the order of the proposed
model is the same as the number of sampling rates, which is less than
or equal to the order of the original continuous-time system. The
method is based on a nonlinear variable transformation, which can be
considered a generalization of the linear similarity transformation;
the latter cannot, in general, be applied to systems with multiple
eigenvalues. An example and its simulation result show that the proposed
multi-rate model gives exact responses at all sampling instants.
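The "exact at all sampling instants" property is easy to verify for the diagonal part of such a model. For a diagonalized state x_i' = λ_i x_i, the exact one-step model at period h is x_i[k+1] = exp(λ_i h) x_i[k], and this holds for arbitrarily chosen, even irrationally related, periods. The eigenvalues, initial state and periods below are illustrative values only.

```python
import math

lam = [-1.0, -3.0]                 # distinct real eigenvalues (illustrative)
x0 = [2.0, 1.0]
h = [0.25, 0.25 * math.sqrt(2)]    # two rates with an irrational ratio

def exact_samples(i, steps):
    """Iterate the exact one-step discrete model for state i."""
    x, a = x0[i], math.exp(lam[i] * h[i])
    out = [x]
    for _ in range(steps):
        x *= a
        out.append(x)
    return out

# Compare against the closed-form continuous response at the same instants.
for i in range(2):
    for k, xk in enumerate(exact_samples(i, 10)):
        assert abs(xk - x0[i] * math.exp(lam[i] * k * h[i])) < 1e-9
print("discrete model matches the continuous response at every instant")
```

Note this only demonstrates the diagonal state equation; the paper's contribution for repeated eigenvalues rests on the nonlinear transformation and output equation, which are not reproduced here.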
Abstract: This paper describes a novel approach for deriving
modules from protein-protein interaction networks, which combines
functional information with topological properties of the network.
This approach is based on a weighted clustering coefficient, which
uses weights representing the functional similarities between the
proteins. These weights are calculated according to the semantic
similarity between the proteins, which is based on their Gene
Ontology terms. We recently proposed an algorithm for identification
of functional modules, called SWEMODE (Semantic WEights for
MODule Elucidation), that identifies dense sub-graphs containing
functionally similar proteins. The rationale underlying this approach is
that each module can be reduced to a set of triangles (protein triplets
connected to each other). Here, we propose considering semantic
similarity weights of all triangle-forming edges between proteins. We
also apply varying semantic similarity thresholds between
neighbours of each node that are not neighbours of each other (and
thereby do not form a triangle), to derive new potential triangles to
include in the module-defining procedure. The results show an
improvement over the purely topological approach, in terms of the
number of predicted modules that match known complexes.
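A weighted clustering coefficient of the kind described can be sketched on a toy network. The sketch uses the Barrat et al. formulation, with edge weights standing in for the semantic (Gene Ontology based) similarities; the protein names, weights and graph are invented, and SWEMODE's full module-elucidation procedure is not reproduced.

```python
# Toy weighted interaction network (hypothetical proteins and weights).
weights = {
    ("A", "B"): 0.9, ("B", "C"): 0.8, ("A", "C"): 0.7,  # a strong triangle
    ("C", "D"): 0.3,                                    # a weak pendant edge
}

def w(u, v):
    return weights.get((u, v)) or weights.get((v, u)) or 0.0

nodes = {"A", "B", "C", "D"}
adj = {n: {m for m in nodes if m != n and w(n, m) > 0} for n in nodes}

def weighted_clustering(i):
    """Barrat et al. weighted clustering coefficient of node i:
    triangle-forming edge weights normalised by strength and degree."""
    k, s = len(adj[i]), sum(w(i, j) for j in adj[i])
    if k < 2:
        return 0.0
    total = sum((w(i, j) + w(i, h)) / 2.0
                for j in adj[i] for h in adj[i]
                if j != h and h in adj[j])      # i-j-h is a triangle
    return total / (s * (k - 1))

print({n: round(weighted_clustering(n), 3) for n in sorted(nodes)})
```

Nodes confined to the strongly weighted triangle score 1.0, while the node carrying the weak pendant edge scores lower: exactly the kind of density signal a triangle-based module detector exploits.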
Abstract: The significance of emissions from the road transport
sector (such as air pollution, noise, etc.) has grown considerably in
recent years. In Australia, the transport sector's share of national
greenhouse gas emissions in 2000 was 14.3%, of which 12.9% of net
national emissions related to road transport alone. Considering the
growing attention to greenhouse gas (GHG) emissions, this paper
addresses air pollution modeling aspects of the environmental
consequences of road transport using a Geographic Information
System (GIS). In other words, this study explains GIS and its
applications, describes the models used to estimate air pollution and
GHG emissions from vehicles, and applies GIS in a real case study
that forecasts GHG emissions from people who travel to work by car
in Melbourne in 2031, analysing the results as thematic maps.
Abstract: This study aimed to investigate the influence of selected antecedents, namely tourists' satisfaction with attractions in Bangkok, perceived value of the attractions, feelings of engagement with the attractions, acquaintance with the attractions, push factors, pull factors and motivation to seek novelty, on foreign tourists' loyalty towards tourist attractions in Bangkok. Using a multi-stage sampling technique, 400 international tourists were sampled. A semi-structural equation model was then utilized in the analysis stage using LISREL. The model of the selected antecedents of tourists' loyalty towards attractions fitted the empirical data according to the following statistical descriptions: Chi-square = 3.43, df = 4, P-value = 0.48893; RMSEA = 0.000; CFI = 1.00; CN = 1539.75; RMR = 0.0022; GFI = 1.00 and AGFI = 0.98. The findings indicated that, together, the antecedents predicted the loyalty of foreign tourists visiting Bangkok at 73 percent.
Abstract: Phytases (myo-inositol hexakisphosphate
phosphohydrolases; EC 3.1.3.8) catalyze the hydrolysis of phytic acid
(myoinositol hexakisphosphate) to the mono-, di-, tri-, tetra-, and
pentaphosphates of myo-inositol and inorganic phosphate.
Thermophilic bacteria were isolated from water sampled from a hot
spring. About 120 bacterial isolates were successfully obtained from
the hot spring water sample and tested for extracellular phytase
production. After 5 passages of screening on PSM media, 4 isolates
were found to be stable phytase producers. 16S rDNA sequencing for
molecular identification revealed that all phytase-positive isolates
belong to Geobacillus spp. and Anoxybacillus spp. Anoxybacillus
rupiensis UniSZA-7 was characterized for its carbon source utilization
using Biolog Phenotype MicroArray plates and was found to utilize
several kinds of the carbon sources provided.
Abstract: As a popular rank-reduced vector space approach,
Latent Semantic Indexing (LSI) has been used in information
retrieval and other applications. In this paper, an LSI-based content
vector model for text classification is presented, which constructs
multiple augmented category LSI spaces and classifies text by their
content. The model integrates the class discriminative information
from the training data and is equipped with several pertinent feature
selection and text classification algorithms. The proposed classifier
has been applied to email classification, and experiments on a
benchmark spam testing corpus (PU1) show that the approach is a
competitive alternative to email classifiers based on the well-known
SVM and naïve Bayes algorithms.
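The rank-reduced mechanics behind LSI can be sketched with a truncated SVD. This is a minimal illustration, not the paper's multi-space classifier: the tiny spam/ham corpus and vocabulary are invented, a single LSI space is used, and classification is nearest-document cosine similarity in the reduced space.

```python
import numpy as np

docs = [
    "win money prize claim money",      # spam
    "free prize win claim",             # spam
    "meeting schedule project report",  # ham
    "project meeting agenda report",    # ham
]
labels = ["spam", "spam", "ham", "ham"]
vocab = sorted({t for d in docs for t in d.split()})

def bow(text):
    """Bag-of-words term counts over the fixed vocabulary."""
    toks = text.split()
    return np.array([toks.count(t) for t in vocab], dtype=float)

A = np.column_stack([bow(d) for d in docs])      # term-document matrix
U, S, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                            # rank-reduced LSI space
Uk, Sk = U[:, :k], S[:k]

def project(vec):
    """Fold a new document vector into the k-dimensional LSI space."""
    return (Uk.T @ vec) / Sk

def classify(text):
    q = project(bow(text))
    sims = [float(q @ Vt[:k, j]) / (np.linalg.norm(q) * np.linalg.norm(Vt[:k, j]))
            for j in range(len(docs))]
    return labels[int(np.argmax(sims))]

print(classify("claim your free prize"), classify("agenda for the project meeting"))
```

Out-of-vocabulary tokens (such as "your") simply drop out of the bag-of-words vector, and the reduced space still places each query near the right class of documents.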