Abstract: The use of a Bayesian Hierarchical Model (BHM) to interpret breath measurements obtained during a 13C Octanoic Breath Test (13COBT) is demonstrated. The statistical analysis was implemented in WinBUGS, a freely available software package for Bayesian inference. A hierarchical setting was adopted in which poorly defined parameters associated with delayed Gastric Emptying (GE) were able to "borrow" strength from global distributions. This proved sufficient to correct the model failures and data inconsistencies apparent in conventional analyses employing non-linear least squares (NLS). Direct comparison of two parameters describing gastric emptying (tlag, the lag phase, and t1/2, the half-emptying time) revealed a strong correlation between the two methods. Despite the large dataset (n = 164), Bayesian modeling was fast and provided a successful fit for all subjects. In contrast, NLS failed to return acceptable estimates in cases where GE was delayed.
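The two GE parameters have closed forms under the β-exponential excretion model y(t) = m(1 − e^(−kt))^β conventionally fitted to 13COBT data (an assumption here; the abstract does not state the exact model): t_lag = ln(β)/k and t1/2 = −ln(1 − 2^(−1/β))/k. A minimal sketch:

```python
import math

def ge_parameters(k, beta):
    """Lag phase and half-emptying time for the beta-exponential model
    y(t) = m * (1 - exp(-k*t))**beta of cumulative 13C excretion.
    t_lag is the inflection of the excretion rate; t_half solves
    y(t) = m/2 in closed form."""
    t_lag = math.log(beta) / k
    t_half = -math.log(1.0 - 0.5 ** (1.0 / beta)) / k
    return t_lag, t_half

# Illustrative parameter values (k in 1/min), not fitted to any subject
t_lag, t_half = ge_parameters(k=0.05, beta=2.0)
print(round(t_lag, 2), round(t_half, 2))  # → 13.86 24.56
```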
Abstract: Existing information system (IS) development methods
do not meet the requirements for resolving security-related IS
problems, and they fail to integrate security and systems
engineering across all development process stages. Hence,
security should be considered throughout the whole software
development process and identified together with the
requirements specification. This paper proposes an approach
that integrates security and IS engineering across all software
development stages using the i* language. The proposed
framework is divided into three parts: modelling the business
environment, modelling the information technology system, and
modelling IS security. The results show that considering IS
security goals throughout system development can positively
influence system implementation and better meet business
expectations.
Abstract: In high-powered dense wavelength division
multiplexed (WDM) systems with low chromatic dispersion,
four-wave mixing (FWM) can be a major source of noise.
The multicanonical Monte Carlo method (MCMC) and the split-step
Fourier method (SSFM) are combined to accurately evaluate the
probability density function of the decision variable of a
receiver limited by FWM. The combination of the two methods
leads to more accurate results and offers the possibility of
adding other optical noises, such as amplified spontaneous
emission (ASE) noise.
Abstract: Plasma Wind Tunnels (PWT) are extensively used for screening and qualification of re-entry Thermal Protection System (TPS) materials. Proper design of a supersonic diffuser for a plasma wind tunnel is important for achieving good pressure recovery (thereby reducing vacuum pumping requirements and run-time costs) and for isolating downstream fluctuations from propagating upstream. This paper presents the details of a rapid design methodology successfully employed for designing supersonic diffusers for high-power (several megawatts) plasma wind tunnels, and a numerical performance analysis of a diffuser configuration designed for a one-megawatt plasma wind tunnel (enthalpy ~ 30 MJ/kg) using the FLUENT 6.3® solver for different sub-atmospheric diffuser back-pressures.
Abstract: The paper presents an investigation into the role of virtual reality and web technologies in the field of distance education. Within this frame, special emphasis is given to the building of web-based virtual learning environments so that they successfully fulfill their educational objectives. In particular, basic pedagogical methods are studied, focusing mainly on the efficient preparation, approach, and presentation of learning content, and specific design rules are presented considering the hypermedia, virtual, and educational nature of such applications. The paper also aims to highlight the educational benefits arising from the use of virtual reality technology in medicine and to study the emerging area of web-based medical simulations. Finally, an innovative virtual reality environment for distance education in medicine is demonstrated. The proposed environment reproduces conditions of the real learning process and enhances learning through a real-time interactive simulator.
Abstract: Understanding protein function is a major goal of
the post-genomic era. Proteins usually work in the context of
other proteins and rarely function alone, so studying a
protein's interaction partners is highly relevant to
understanding its function. Machine learning techniques have
been widely applied to predict protein-protein interactions.
Kernel functions play an important role in a successful machine
learning technique: choosing an appropriate kernel function can
lead to better accuracy in a binary classifier such as the
support vector machine. In this paper, we describe a Bayesian
kernel for the support vector machine to predict
protein-protein interactions. The Bayesian kernel can improve
classifier performance by incorporating the probability
characteristics of the available experimental protein-protein
interaction data, which were compiled from different sources.
In addition, the probabilistic output of the Bayesian kernel
can help biologists direct further research toward the most
confidently predicted interactions. The results show that the
classifier's accuracy is improved with the Bayesian kernel
compared to the standard SVM kernels.
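One simple way to fold interaction reliabilities into a kernel (a hypothetical construction for illustration, not necessarily the paper's kernel) is to scale a standard RBF kernel by per-example confidence scores: K'(x, y) = p(x)p(y)K(x, y) remains positive semi-definite, so it is still a valid SVM kernel.

```python
import math

def rbf(x, y, gamma=1.0):
    """Standard RBF kernel on numeric feature vectors."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * d2)

def confidence_weighted_kernel(x, y, p_x, p_y, gamma=1.0):
    """Illustrative probability-aware kernel: scale RBF similarity by
    the reliabilities p_x, p_y of the two interaction examples.
    K'(x, y) = p(x) * p(y) * K(x, y) preserves positive
    semi-definiteness."""
    return p_x * p_y * rbf(x, y, gamma)

# Toy protein-pair feature vectors with hypothetical reliability scores
print(confidence_weighted_kernel([0.2, 0.9], [0.1, 0.8], p_x=0.9, p_y=0.6))
```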
Abstract: D-erythro-cyclohexylserine (D-erythro-CHS) is a
chiral unnatural β-hydroxy amino acid expected to be useful in
the synthesis of a drug for AIDS treatment. To develop a
continuous bioconversion system with a whole-cell biocatalyst
carrying the D-threonine aldolase (D-TA) gene for D-erythro-CHS
production, the D-TA gene was amplified from Ensifer arboris
100383 by direct PCR using two degenerate oligonucleotide
primers designed from the genomic sequence of Sinorhizobium
meliloti. Sequence analysis of the cloned DNA fragment revealed
one open reading frame of 1059 bp and 386 amino acids. This
putative D-TA gene was cloned into the NdeI and EcoRI sites
(pEnsi-DTA[1], with a His-tag sequence) or the BamHI site
(pEnsi-DTA[2]) of the pET21(a) vector. The cloned gene was much
more strongly overexpressed in E. coli BL21(DE3) transformed
with pEnsi-DTA[1] than in E. coli BL21(DE3) transformed with
pEnsi-DTA[2]. When the cells expressing the wild-type enzyme
were assayed for D-TA activity, 12 mM glycine was successfully
detected by HPLC analysis. Moreover, the whole cells harbouring
the recombinant D-TA were able to synthesize D-erythro-CHS at
0.6 mg/ml in a batch reaction.
Abstract: Biofuels such as biobutanol have been recognized as
renewable and sustainable fuels that can be produced from
lignocellulosic biomass. To convert lignocellulosic biomass to
biofuel, pretreatment is an important step for removing
hemicelluloses and lignin and thereby improving enzymatic
hydrolysis. Dilute acid pretreatment has been successfully
developed for corncobs: the optimum conditions were 120 °C for
5 min with a 15:1 liquid-to-solid ratio for dilute sulfuric
acid and 140 °C for 10 min with a 10:1 liquid-to-solid ratio
for dilute phosphoric acid. Both acid pretreatments gave total
sugar contents of approximately 34–35 g/l. Regarding inhibitor
(furfural) content, phosphoric acid pretreatment gave a higher
value than sulfuric acid pretreatment. Characterization of the
corncobs after pretreatment indicates that both acid
pretreatments improve enzymatic accessibility, with sulfuric
acid pretreatment giving better results in terms of surface
area, crystallinity, and composition.
Abstract: In this paper, we propose a face recognition
algorithm using AAM and Gabor features. Gabor feature vectors,
which are well known to be robust to small variations in shape,
scale, rotation, distortion, illumination, and pose, are widely
employed in object detection and recognition algorithms. EBGM,
prominent among face recognition algorithms employing Gabor
feature vectors, requires localization of the facial feature
points at which Gabor feature vectors are extracted. However,
the localization method employed in EBGM is based on Gabor jet
similarity and is sensitive to initial values, and wrong
localization of facial feature points degrades the face
recognition rate. AAM, in turn, is known to be successful at
localizing facial feature points. In this paper, we devise a
facial feature point localization method that first roughly
estimates the facial feature points using AAM and then refines
them using Gabor jet similarity-based localization initialized
with the rough AAM estimates, and we propose a face recognition
algorithm that uses this localization method together with
Gabor feature vectors. Experiments show that this cascaded
localization method, based on both AAM and Gabor jet
similarity, is more robust than localization based on Gabor jet
similarity alone. The proposed face recognition algorithm also
performs better than conventional algorithms, such as EBGM,
that combine Gabor jet similarity-based localization with Gabor
feature vectors.
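The cascaded coarse-to-fine idea can be sketched generically: take the rough AAM estimate as a starting point, then search a small neighbourhood for the position maximizing a similarity score (Gabor jet similarity in the algorithm above; the `refine_point` helper and the stand-in similarity function below are hypothetical illustrations):

```python
def refine_point(start, similarity, radius=3):
    """Refine a rough landmark estimate (e.g. from AAM) by exhaustively
    searching a (2*radius+1)^2 neighbourhood for the integer position
    maximizing a similarity score (Gabor jet similarity in EBGM; any
    callable here)."""
    x0, y0 = start
    best, best_score = start, similarity(x0, y0)
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            score = similarity(x0 + dx, y0 + dy)
            if score > best_score:
                best, best_score = (x0 + dx, y0 + dy), score
    return best

# Stand-in similarity peaked at (5, 5); rough AAM estimate at (4, 3)
sim = lambda x, y: -((x - 5) ** 2 + (y - 5) ** 2)
print(refine_point((4, 3), sim))  # → (5, 5)
```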
Abstract: Most routing protocols (DSR, AODV, etc.) designed
for wireless ad hoc networks incorporate a broadcast operation
in their route discovery scheme. Probabilistic broadcasting
techniques have been developed to optimize this operation,
which is otherwise very expensive in terms of redundancy and
the traffic it generates. In this paper we explore percolation
theory to gain a different perspective on probabilistic
broadcasting schemes, which have been actively researched in
recent years. This theory helps us estimate the broadcast
probability in a wireless ad hoc network as a function of the
size of the network. We also show that operating at these
optimal broadcast probabilities reduces packet regeneration
during successful broadcasting by at least 25–30%.
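Probabilistic broadcasting is easy to simulate; the sketch below floods a random geometric graph in which each node, on first receiving a packet, rebroadcasts it with probability p (all parameters are toy values, not those of the paper's percolation analysis):

```python
import random
from collections import deque

def probabilistic_flood(n=100, radius=0.2, p=0.7, seed=1):
    """Flood a random geometric graph: each node, on first receipt of
    the packet, rebroadcasts with probability p. Returns the number of
    nodes reached and the number of retransmissions."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    def neighbours(i):
        return [j for j in range(n) if j != i and
                (pos[i][0] - pos[j][0]) ** 2 +
                (pos[i][1] - pos[j][1]) ** 2 <= radius ** 2]
    reached, retx = {0}, 0
    queue = deque([0])                    # the source always transmits
    while queue:
        u = queue.popleft()
        retx += 1
        for v in neighbours(u):
            if v not in reached:
                reached.add(v)
                if rng.random() < p:      # probabilistic rebroadcast
                    queue.append(v)
    return len(reached), retx

full = probabilistic_flood(p=1.0)   # blind flooding baseline
prob = probabilistic_flood(p=0.7)   # probabilistic: fewer retransmissions
print(full, prob)
```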
Abstract: Independent component analysis (ICA) is a computational method for finding underlying signals or components in multivariate statistical data. The ICA method has been successfully applied in many fields, e.g. vision research, brain imaging, geological signals, and telecommunications. In this paper, we apply the ICA method to the analysis of mass spectra of oligomeric species emerging from aluminium sulphate. Mass spectra are typically complex because they are linear combinations of the spectra of different types of oligomeric species. The results show that ICA can decompose the spectra into components carrying useful information. This information is essential in developing the coagulation phases of water treatment processes.
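As a small illustration of the first step of ICA — whitening, i.e. decorrelating the mixed channels before the rotation that maximizes non-Gaussianity — the sketch below mixes two toy signals standing in for overlapping spectra (illustrative data, not the paper's):

```python
import math, random

def whiten(x1, x2):
    """Whitening: linearly transform two mixed signals so they are
    uncorrelated with unit variance -- the standard preprocessing step
    of ICA, done here via a 2x2 eigendecomposition of the covariance."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    x1 = [a - m1 for a in x1]
    x2 = [a - m2 for a in x2]
    c11 = sum(a * a for a in x1) / n
    c22 = sum(a * a for a in x2) / n
    c12 = sum(a * b for a, b in zip(x1, x2)) / n
    tr, det = c11 + c22, c11 * c22 - c12 * c12
    root = math.sqrt(tr * tr / 4 - det)
    z = []
    for lam in (tr / 2 + root, tr / 2 - root):
        vx, vy = c12, lam - c11           # eigenvector for eigenvalue lam
        norm = math.hypot(vx, vy)
        vx, vy = vx / norm, vy / norm
        z.append([(vx * a + vy * b) / math.sqrt(lam)
                  for a, b in zip(x1, x2)])
    return z

rng = random.Random(0)
s1 = [rng.uniform(-1, 1) for i in range(2000)]      # noise source
s2 = [math.sin(0.05 * i) for i in range(2000)]      # periodic source
x1 = [0.8 * a + 0.3 * b for a, b in zip(s1, s2)]    # linear mixtures,
x2 = [0.2 * a + 0.9 * b for a, b in zip(s1, s2)]    # like summed spectra
z1, z2 = whiten(x1, x2)
cov = sum(a * b for a, b in zip(z1, z2)) / len(z1)
print(round(cov, 6))  # ≈ 0: whitened channels are decorrelated
```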
Abstract: A diamond-like carbon (DLC) based solid-lubricant
film was designed, and DLC films were successfully prepared
using a microwave plasma enhanced magnetron sputtering
deposition technology. Post-deposition characterizations,
including Raman spectrometry, X-ray diffraction,
nano-indentation, adhesion testing, and friction coefficient
measurement, were performed to study the influence of substrate
bias voltage on the mechanical properties of the W- and S-doped
DLC films. The results indicated that the W- and S-doped films
retained the typical structure of DLC films and that better
mechanical performance was achieved with a substrate bias of
-200 V.
Abstract: An innovative tri-axis micro-power receiver is
proposed. It consists of two sets of 3-D micro-solenoids and
one set of planar micro-coils with an embedded iron core; the
three sets of micro-coils are designed to be orthogonal to each
other. Therefore, no matter which direction the flux arrives
from, the magnetic energy can be harvested and transformed into
electric power. Not only is the dead space of power reception
largely eliminated, but the efficiency of transforming
electromagnetic energy into electric power is also raised.
Preliminary simulations with the commercial software Ansoft
Maxwell verify that the proposed micro-power receiver can
efficiently pick up the energy transmitted by a magnetic power
source.
As for the fabrication process, an isotropic etching technique
is employed to micro-machine the inverse-trapezoid fillister so
that the copper wire can be successfully electroplated, and the
adhesion between the micro-coils and the fillister is much
enhanced.
Abstract: This paper focuses on wormhole attack detection in wireless sensor networks. The wormhole attack is particularly challenging to deal with, since the adversary does not need to compromise any nodes and can use laptops or other wireless devices to send packets over a low-latency channel. This paper introduces an easy and effective method to detect and locate wormholes: since beacon nodes are assumed to know their coordinates, the straight-line distance between each pair of them can be calculated and compared with the corresponding hop distance, which in this paper equals hop count × the node's transmission range R. A dramatic difference may emerge because of an existing wormhole, and our detection mechanism is based on this observation. The approximate location of the wormhole can also be derived in further steps from this information. To the best of our knowledge, our method is much simpler than other wormhole detection schemes that also use beacon nodes, and compared with schemes that place special requirements on each node (e.g., GPS receivers, tightly synchronized clocks, or directional antennas), ours is more economical. Simulation results show that the algorithm succeeds in detecting and locating wormholes when the density of beacon nodes reaches 0.008 per m².
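The core check is easy to state in code: without a wormhole, each hop spans at most R, so the straight-line distance between two beacons can never exceed hop count × R; a pair violating this bound is flagged (toy coordinates and hop counts for illustration):

```python
import math

def flag_wormhole_pairs(beacons, hop_counts, R):
    """Flag beacon pairs whose straight-line distance exceeds the hop
    distance (hop count x transmission range R). Without a wormhole,
    each hop covers at most R, so euclidean <= hops * R must hold; a
    violation indicates a tunnelled (wormhole) path."""
    flagged = []
    for (i, j), hops in hop_counts.items():
        (x1, y1), (x2, y2) = beacons[i], beacons[j]
        if math.hypot(x2 - x1, y2 - y1) > hops * R:
            flagged.append((i, j))
    return flagged

beacons = {0: (0, 0), 1: (90, 0), 2: (20, 0)}
# Pair (0, 1): 2 hops reported over 90 m -- impossible with R = 30 m
hop_counts = {(0, 1): 2, (0, 2): 1}
print(flag_wormhole_pairs(beacons, hop_counts, R=30))  # → [(0, 1)]
```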
Abstract: The first stage of a microwave receiver is a
low-noise amplifier (LNA) circuit, and this stage plays an
important role in the quality factor of the receiver. The
design of an LNA for a radio frequency (RF) circuit requires
trading off many important characteristics, such as gain, noise
figure (NF), stability, power consumption, and complexity. This
situation forces designers to make choices in the design of RF
circuits. In this paper, the aim is to design and simulate a
single-stage LNA circuit with high gain and low noise using a
MESFET for the frequency range of 5 GHz to 6 GHz. The design
and simulation were carried out using Advanced Design System
(ADS). A single-stage LNA was successfully designed with 15.83
dB forward gain and a 1.26 dB noise figure at 5.3 GHz. The
designed LNA also operates stably over the entire 5 GHz to
6 GHz range.
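Why the first stage matters follows from the Friis cascade formula, F_total = F1 + (F2 − 1)/G1 + (F3 − 1)/(G1·G2) + …: the noise contribution of later stages is divided by the LNA's gain. A quick sketch using the reported LNA figures and a hypothetical second stage:

```python
import math

def cascade_noise_figure_db(stages):
    """Friis formula for a cascaded receiver chain. `stages` is a list
    of (gain_dB, nf_dB) tuples, first stage first; returns the cascade
    noise figure in dB."""
    f_total, g_prod = 1.0, 1.0
    for gain_db, nf_db in stages:
        f = 10 ** (nf_db / 10.0)          # linear noise factor
        f_total += (f - 1.0) / g_prod     # divided by preceding gain
        g_prod *= 10 ** (gain_db / 10.0)  # accumulate linear gain
    return 10.0 * math.log10(f_total)

# Reported LNA (15.83 dB gain, 1.26 dB NF) followed by a hypothetical
# 10 dB-NF mixer: the cascade NF stays far below the mixer's own NF.
nf = cascade_noise_figure_db([(15.83, 1.26), (10.0, 10.0)])
print(round(nf, 2))
```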
Abstract: Knowledge Discovery in Databases (KDD) is the
process of extracting previously unknown but useful and
significant information from massive volumes of data. Data
mining is the stage of the KDD process that applies an
algorithm to extract interesting patterns. Usually, such
algorithms generate a huge volume of patterns, which must be
evaluated using interestingness measures that reflect the
user's requirements. Interestingness is defined in two ways:
(i) objective measures and (ii) subjective measures. Objective
measures such as support and confidence extract meaningful
patterns based on the structure of the patterns, while
subjective measures such as unexpectedness and novelty reflect
the user's perspective. In this report, we briefly survey the
more widespread and successful subjective measures and propose
a new subjective measure of interestingness: shocking.
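The objective measures named above are straightforward to compute; a minimal sketch over a toy transaction database (illustrative data, not from the report):

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item of `itemset`."""
    itemset = set(itemset)
    return sum(itemset <= set(t) for t in transactions) / len(transactions)

def confidence(transactions, antecedent, consequent):
    """Confidence of the rule antecedent -> consequent:
    support(antecedent | consequent) / support(antecedent)."""
    return (support(transactions, set(antecedent) | set(consequent))
            / support(transactions, antecedent))

# Toy transaction database
db = [("bread", "milk"), ("bread", "butter"),
      ("bread", "milk", "butter"), ("milk",)]
print(support(db, {"bread", "milk"}))        # 2 of 4 transactions -> 0.5
print(confidence(db, {"bread"}, {"milk"}))   # 0.5 / 0.75 ≈ 0.667
```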
Abstract: This paper discusses the combination of the EM
algorithm and the bootstrap approach applied to improving the
satellite image fusion process. This novel satellite image
fusion method, based on the estimation-theoretic EM algorithm
and reinforced by the bootstrap approach, was successfully
implemented and tested. The sensor images are first split by a
Bayesian segmentation method to determine a joint region map
for the fused image. Then, the EM algorithm is used in
conjunction with the bootstrap approach to develop the
bootstrap EM fusion algorithm, producing the fused target
image. We propose to estimate the statistical parameters from
the iterative equations of the EM algorithm using a reference
set of representative bootstrap samples of images, whose sizes
are determined by a new criterion called the 'hybrid
criterion'. The results show that using bootstrap EM (BEM) in
image fusion improves the estimated parameters, which in turn
improves the quality of the fused image, and reduces the
computing time of the fusion process.
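The core EM iteration combined with bootstrap resampling can be sketched on a deliberately simplified problem — a one-dimensional two-component Gaussian mixture standing in for the paper's image-fusion model (all data and parameters are illustrative):

```python
import math, random

def em_gmm_1d(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture (a simplified
    stand-in for the paper's fusion EM). Returns (weights, means,
    variances)."""
    lo, hi = min(data), max(data)
    w, mu, var = [0.5, 0.5], [lo, hi], [1.0, 1.0]
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        resp = []
        for x in data:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)    # guard against collapse
    return w, mu, var

rng = random.Random(0)
data = ([rng.gauss(0, 1) for _ in range(300)] +
        [rng.gauss(6, 1) for _ in range(300)])
boot = [rng.choice(data) for _ in data]   # one bootstrap resample
w, mu, var = em_gmm_1d(boot)
print([round(m, 1) for m in sorted(mu)])  # near the true means 0 and 6
```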
Abstract: The systematic manipulation of the shapes and sizes
of inorganic compounds greatly benefits various application
fields, including optics, magnetics, electronics, catalysis,
and medicine. However, shape control has been much more
difficult to achieve, so the exploration of novel methods for
preparing differently shaped nanoparticles is a challenging
research area. Nanostructures of the II-VI semiconductor
cadmium sulphide (CdS) with different morphologies (acicular,
mesoporous, and spherical) and crystallite sizes varying from
11 to 16 nm were successfully synthesized by chemical aqueous
precipitation of Cd2+ ions with homogeneously released S2- ions
from the decomposition of cadmium sulphate (CdSO4) and
thioacetamide (CH3CSNH2) under different types of irradiation
(microwave, ultrasonic, and sunlight), and systematic research
was done on the various factors affecting the controlled growth
rate of the CdS nanoparticles. The obtained nanomaterials were
characterized by X-ray diffraction (XRD), Fourier transform
infrared spectroscopy (FTIR), thermogravimetric (DSC-TGA)
analysis, and scanning electron microscopy (SEM). The results
indicate that particle size increases with increasing reaction
time, while grain size decreases with increasing molar ratios.
Abstract: While consistently innovative business models can
give companies a competitive advantage, longitudinal empirical
research that can reflect dynamic business model changes has
yet to prove a definitive connection. This study therefore
employs a dynamic perspective in conjunction with innovation
theory to examine the relationship between types of business
model innovation and firm value. It examines various types of
business model innovation in high-end and low-end technology
industries, using HTC and the 7-Eleven chain stores as cases,
with research periods of 14 years and 32 years, respectively.
The empirical results suggest that adopting radical business
model innovation in addition to expanding into new target
markets can lead to a competitive advantage. Sustained advanced
technological competence and service/product innovation are the
key success factors in high-end and low-end technology industry
business models, respectively. In sum, business model
innovation can yield higher market value and financial value in
high-end technology industries than in low-end ones.
Abstract: Educational institutions often implement policies with
the intention of influencing how learning and teaching occur.
Generally, such policies are not as effective as their makers would
like; changing the behavior of third-level teachers proves difficult.
Nevertheless, a policy instituted in 2006 at the Dublin Institute of
Technology has met with success: each newly hired faculty member
must have a post-graduate qualification in “Learning and Teaching”
or successfully complete one within the first two years of
employment. The intention is to build teachers' knowledge about
student-centered pedagogies and their capacity to implement them.
As a result of this policy (and associated programs that support it),
positive outcomes are readily apparent. Individual teachers who have
completed the programs have implemented significant change at the
course and program levels. This paper introduces the policy,
identifies outcomes in relation to existing theory, describes research
underway, and pinpoints areas where organizational learning has
occurred.