Abstract: It is known that the heart interacts with and adapts to its venous and arterial loading conditions. Various experimental studies and modeling approaches have been developed to investigate the underlying mechanisms. This paper presents a model of the left ventricle based on nonlinear stress-length myocardial characteristics integrated over a truncated ellipsoidal geometry, and a second-order dynamic mechanism for the excitation-contraction coupling system. The results of the model describe the effects of the viscoelastic damping element of the electromechanical coupling system on the hemodynamic response. Different heart rates are considered to study the effects of pacing on the performance of the left ventricle against constant preload and afterload under various damping conditions. The results indicate that the pacing process of the left ventricle has to take into account, among other things, the viscoelastic damping conditions of the myofilament excitation-contraction process. The effects of left ventricular dimensions on the hemodynamic response have also been examined and are found to differ under different viscoelastic and pacing conditions.
Abstract: Thermal power plants are typically located near
surface coal mines and produce huge amounts of fly ash as a waste
byproduct. Disposal of fly ash causes significant economic and
environmental problems, and research is ongoing into its bulk
utilization. To increase its percentage utilization, an
investigation was carried out to evaluate its potential for haul road
construction. This paper presents laboratory California bearing
ratio (CBR) tests and evaluates the effect of lime on the CBR
behavior of fly ash-mine overburden mixes. Tests were performed with
different percentages of lime (2%, 3%, 6%, and 9%). The results show
that lime treatment increased the bearing ratio of the fly
ash-overburden mixes. Scanning electron microscopy (SEM) analyses
were conducted on specimens cured for 28 days. The SEM study showed
that the bearing ratio development is related to the microstructural
development.
Abstract: With the extensive inclusion of documents, especially
text, in business systems, data mining no longer covers the full
scope of Business Intelligence. Data mining cannot extract useful
details from large collections of unstructured and semi-structured
written material based on natural language. The most pressing issue
is to draw the potential business intelligence from text. To gain
competitive advantages for the business, it is necessary to develop
a powerful new tool, text mining, to expand the scope of business
intelligence.
In this paper, we work out the strong points of text mining in
extracting business intelligence from the huge amount of textual
information sources within business systems. We apply text mining
to each stage of Business Intelligence systems to show that text
mining is a powerful tool for expanding the scope of BI. After
reviewing basic definitions and some related technologies, we
discuss their relationship to text mining and the benefits they
bring to it. Some examples and applications of text mining are also
given. The motivation is to develop a new approach to effective and
efficient textual information analysis, and thus to expand the
scope of Business Intelligence using this powerful tool.
Abstract: Sustainability in a rural production system can only be achieved if it suitably satisfies both local requirements and outside demand as times change. With the increased pressure from the food sector in a globalised world, the agrarian economy
needs to re-organise its cultivable land system to be compatible with new management practices as well as the multiple needs of various stakeholders and the changing resource scenario. An attempt has been made to transform this problem into a multi-objective decision-making problem considering various objectives, resource constraints and conditional constraints. An interactive fuzzy multi-objective
programming approach has been used for this purpose, with a
case study in the Indian context to demonstrate the validity of the method.
Abstract: In recent years, the number of VoIP subscriptions has increased tremendously compared to the Public Switched Telephone Network (PSTN). A VoIP subscriber would like to know the exact tariffs of the calls made using VoIP. As usage increases, the rate of fraud also increases, causing users to complain about excess billing; this in turn hampers the growth of VoIP. This paper describes the common frauds and attacks on VoIP-based systems and makes an attempt to solve the billing attack by creating a secured channel between caller and callee.
Abstract: The available algorithms for blind estimation, namely the constant modulus algorithm (CMA) and the decision-directed algorithm (DDA/DFE), suffer from the problem of convergence to local minima. Also, if the channel drifts considerably, any DDA loses track of the channel, so their usage is limited in varying channel conditions. The primary limitation of pilot-based schemes, in turn, is the requirement of overhead bits in the transmit framework, which leads to wasteful use of the bandwidth. Such arrangements also fail to use channel state information (CSI), which is an important aid in improving the quality of reception. In this work, the main objective is to reduce the overhead imposed by the pilot symbols, which otherwise reduces the system throughput. We formulate an arrangement based on certain dynamic Artificial Neural Network (ANN) topologies that not only lowers the overhead but also facilitates the use of the CSI. A 2×2 Multiple Input Multiple Output (MIMO) system is simulated, and the performance variation with different channel estimation schemes is evaluated. A new semi-blind approach based on a dynamic ANN is proposed for channel tracking in varying channel conditions, and its performance is compared with perfectly known CSI and least square (LS) based estimation.
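The least square (LS) baseline against which the abstract compares can be sketched as follows. This is a minimal, noise-free illustration (not the paper's implementation), assuming orthogonal pilot pairs so that the LS pseudo-inverse reduces to a scaled conjugate transpose:

```python
# LS channel estimation for a 2x2 MIMO link: H_ls = Y P^H (P P^H)^{-1}.
# The channel matrix and pilots below are illustrative assumptions.

def mat_mul(A, B):
    """Multiply two small complex matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def conj_transpose(A):
    return [[A[j][i].conjugate() for j in range(len(A))]
            for i in range(len(A[0]))]

# True channel (unknown to the receiver).
H = [[0.8 + 0.3j, 0.2 - 0.1j],
     [-0.4 + 0.5j, 0.9 + 0.0j]]

# Orthogonal pilot pair sent from the 2 transmit antennas: P P^H = 2 I,
# so the matrix inverse in the LS formula reduces to a division by 2.
P = [[1, 1],
     [1, -1]]

Y = mat_mul(H, P)  # received pilots (noise-free for clarity)

H_ls = [[y / 2 for y in row] for row in mat_mul(Y, conj_transpose(P))]
```

With noise added to Y, the same formula yields the usual LS estimate whose error the semi-blind ANN approach aims to improve on.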
Abstract: In this paper we present high-performance
dynamically allocated multi-queue (DAMQ) buffer schemes for
fault-tolerant system-on-chip applications that require an
interconnection network. Two virtual channels share the same buffer
space. Fault-tolerant mechanisms for interconnection networks are
becoming a critical design issue for large massively parallel
computers, and are also important for high-performance SoCs as
system complexity keeps increasing rapidly. On the message
switching layer, we make improvements to boost system performance
when faults are involved in component communication. In the
proposed scheme, when a node or a physical channel is deemed
faulty, the previous-hop node terminates the buffer occupancy of
messages destined for the failed link. The buffer usage decisions
are made at the switching layer without interaction with higher
abstraction layers, so buffer space is quickly released to messages
destined for other healthy nodes. The buffer space is therefore
used efficiently when faults occur at some nodes.
Abstract: Given that the optimization of business processes
is a crucial requirement to navigate, survive and even thrive in
today's volatile business environment, this paper presents a
framework for selecting a best-fit optimization package for solving
complex business problems. The complexity level of the problem
and/or the use of unsuitable optimization software can lead to
biased solutions of the optimization problem. Accordingly, the
proposed framework identifies a number of relevant factors (e.g.
decision variables, objective functions, and modeling approach) to
be considered during the evaluation and selection process.
Application domain, problem specifications, and available
accredited optimization approaches are also to be regarded. The
output of the framework is a recommendation of one or two
optimization packages believed to provide the best results for the
underlying problem. In addition, a set of guidelines and
recommendations on how managers can conduct an effective
optimization exercise is discussed.
Abstract: In molecular biology, microarray technology is widely and successfully utilized to efficiently measure gene activity. When working with less-studied organisms, methods to design custom-made microarray probes are available. One design criterion is to select probes with minimal melting-temperature variance, thus ensuring similar hybridization properties. If the microarray application focuses on the investigation of metabolic pathways, it is not necessary to cover the whole genome; it is more efficient to cover each metabolic pathway with a limited number of genes. Firstly, an approach is presented which minimizes the overall melting-temperature variance of the selected probes for all genes of interest. Secondly, the approach is extended to include the additional constraint of covering all pathways with a limited number of genes while minimizing the overall variance. The new optimization problem is solved by a bottom-up programming approach which reduces the complexity to make it computationally feasible. As an example, the new method is applied to the selection of microarray probes covering all fungal secondary metabolite gene clusters of Aspergillus terreus.
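As a toy illustration of the first objective (one probe per gene with minimal overall melting-temperature variance), the sketch below sweeps candidate target temperatures and picks, per gene, the probe closest to each target. The Tm values and the sweep heuristic are illustrative assumptions, not the paper's bottom-up algorithm:

```python
# Hypothetical sketch: pick one probe per gene so the melting temperatures
# (Tm) of the selected set have minimal variance.

def min_variance_selection(genes):
    """genes: list of lists of candidate probe Tm values (one list per gene).
    Returns (variance, picked_tms) for the best target-temperature sweep."""
    candidates = sorted({tm for probes in genes for tm in probes})
    best = None
    for target in candidates:
        # Greedy step: per gene, take the probe whose Tm is closest to target.
        picked = [min(probes, key=lambda tm: abs(tm - target))
                  for probes in genes]
        mean = sum(picked) / len(picked)
        var = sum((tm - mean) ** 2 for tm in picked) / len(picked)
        if best is None or var < best[0]:
            best = (var, picked)
    return best

# Three genes, each with several candidate probes (made-up Tm values in °C):
var, probes = min_variance_selection([[58.0, 61.5, 64.0],
                                      [59.5, 62.0],
                                      [60.5, 66.0]])
```

The pathway-coverage constraint of the second objective would restrict which genes must appear in `genes`; the paper's bottom-up formulation handles that coupling efficiently.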
Abstract: The scientific achievements coming from molecular
biology depend greatly on the capability of computational
applications to analyze laboratory results. A comprehensive
analysis of an experiment typically requires the simultaneous study
of the obtained dataset with data available in several distinct
public databases. Nevertheless, developing centralized access to
these distributed databases raises a set of challenges, such as:
what is the best integration strategy, how to resolve nomenclature
clashes, how to handle data that overlaps between databases, and
how to deal with huge datasets. In this paper we present GeNS, a
system that uses a simple yet innovative approach to address
several biological data integration issues. Compared with existing
systems, the main advantages of GeNS are its maintenance simplicity
and its coverage and scalability in terms of the number of
supported databases and data types. To support our claims we
present the current use of GeNS in two concrete applications. GeNS
currently contains more than 140 million biological relations and
can be publicly downloaded or remotely accessed through SOAP web
services.
Abstract: This paper discusses a 3D numerical solution of the inverse boundary problem for a continuous casting process of an alloy. The main goal of the analysis was to estimate the heat fluxes along the external surface of the ingot. Verified information on these fluxes is crucial for a good design of the mould, an effective cooling system and, generally, the whole caster. In the study, an enthalpy-porosity technique implemented in the Fluent package was used for modeling the solidification process. In this method, the phase-change interface was determined on the basis of the liquid-fraction approach. In the inverse procedure, sensitivity analysis was applied for retrieving the boundary conditions. A comparison of the measured and retrieved values showed a high accuracy of the computations. Additionally, the influence of measurement accuracy on the estimated heat fluxes was investigated.
Abstract: In recent years, various types of electric vehicles
have again gained increasing attention as an environmentally
benign technology in transport. Especially for urban areas with
high local pollution, this zero-emission technology (at the point
of use) is considered to provide proper solutions. Yet, the poor
economics and the limited driving ranges are still major barriers
to a broader market penetration of battery electric vehicles
(BEV) and of fuel cell vehicles (FCV). The major result of our
analyses is that the most important precondition for a further
dissemination of BEV in urban areas is emission-free zones.
This is an instrument which allows the promotion of BEV
without providing excessive subsidies. In addition, it is
important to note that the full benefits of EV can only be
harvested if the electricity used is produced from renewable
energy sources. That is to say, it has to be ensured that the use of
BEV in urban areas is clearly linked to a green electricity
purchase model. Moreover, the introduction of a CO2-
emission-based tax system would support this requirement.
Abstract: In this paper, a data correction algorithm is suggested
for the case where the environmental air temperature varies. To
correct the infrared data, the initial temperature or the initial
infrared image data is used, so that a target source system is not
necessary. The temperature data obtained from the infrared detector
show a nonlinear dependence on the surface temperature. To handle
this nonlinearity, a Taylor series approach is adopted. It is shown
that the proposed algorithm can reduce the influence of
environmental temperature on the components on the board. The main
advantage of this algorithm is that it uses only the initial
temperature of the components on the board, rather than reference
devices such as blackbody sources, to obtain reference
temperatures.
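The idea of linearizing a nonlinear detector response around the known initial temperature can be sketched as below. The quadratic response model and its coefficients are invented for illustration and do not reproduce the paper's calibration:

```python
# Hedged sketch: first-order Taylor correction of a nonlinear detector
# response, using only the initial temperature T0 as the reference point.

def f(T):
    """Assumed nonlinear detector response (arbitrary quadratic model)."""
    return 0.002 * T * T + 0.5 * T

def f_prime(T):
    """Derivative of the assumed response model."""
    return 0.004 * T + 0.5

def correct(reading, T0):
    """Estimate the surface temperature from a raw reading by linearizing
    f around the initial temperature T0: T ≈ T0 + (r - f(T0)) / f'(T0)."""
    return T0 + (reading - f(T0)) / f_prime(T0)

T_true = 32.0
T0 = 30.0                        # initial temperature, known at start-up
estimate = correct(f(T_true), T0)  # close to T_true despite the nonlinearity
```

Higher-order Taylor terms would shrink the residual error further when the surface temperature drifts far from T0.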
Abstract: This paper presents an algorithm to estimate the parameters of two closely spaced sinusoids, providing a frequency resolution more than 800 times greater than that obtained by using the Discrete Fourier Transform (DFT). The strategy uses a highly optimized grid search to accurately estimate the frequency, amplitude and phase of both sinusoids, while keeping the computational effort at reasonable levels. The proposed method has three main characteristics: 1) a high frequency resolution; 2) frequency, amplitude and phase are all estimated at once by a single procedure; 3) it does not rely on any statistical assumption or constraint. Potential applications of this strategy include the difficult task of resolving coincident partials of instruments in musical signals.
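A toy version of grid-search frequency estimation beyond the DFT bin width can be sketched as follows, for a single complex sinusoid (the paper resolves two real sinusoids and optimizes the search far more aggressively). The grid step here is 1/1000 of a DFT bin, so the resulting frequency error is far below the DFT resolution:

```python
import cmath
import math

def corr_mag(x, f):
    """Magnitude of the correlation of x with a complex exponential at f."""
    return abs(sum(xn * cmath.exp(-2j * math.pi * f * n)
                   for n, xn in enumerate(x)))

N = 64
f_true = 0.20371                 # cycles/sample; falls between DFT bins
x = [cmath.exp(2j * math.pi * f_true * n) for n in range(N)]

bin_width = 1.0 / N              # DFT frequency resolution (1/64)
# Fine grid around the nearest DFT bin (13/64), step = bin width / 1000.
grid = [13 * bin_width + k * bin_width / 1000 for k in range(-500, 501)]
f_est = max(grid, key=lambda f: corr_mag(x, f))
```

A practical implementation would search coarse-to-fine instead of exhaustively, which is where the "highly optimized" part of the paper's strategy comes in.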
Abstract: In this paper, we present a new method for
incorporating global shift invariance in support vector machines.
Unlike other approaches, which incorporate a feature extraction
stage, we first scale the image and then classify it using the
modified support vector machine classifier. Shift invariance is
achieved by replacing the dot products between patterns used by the
SVM classifier with the maximum cross-correlation value between
them. Unlike the normal approach, in which the patterns are treated
as vectors, in our approach the patterns are treated as matrices
(or images). Cross-correlation is computed using computationally
efficient techniques such as the fast Fourier transform. The method
has been tested on the ORL face database. The tests indicate that
this method can improve the recognition rate of an SVM classifier.
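The replacement of the plain dot product by a maximum cross-correlation can be illustrated on 1-D patterns, a simplified stand-in for the paper's 2-D image case, computed directly here rather than with the FFT:

```python
# Shift-invariant similarity: the maximum cross-correlation over all
# circular shifts, used in place of the ordinary SVM dot product.

def max_cross_correlation(a, b):
    """Maximum circular cross-correlation between two equal-length patterns."""
    n = len(a)
    return max(sum(a[i] * b[(i + s) % n] for i in range(n))
               for s in range(n))

a = [0, 1, 2, 1, 0, 0, 0, 0]
b = [0, 0, 0, 0, 1, 2, 1, 0]   # the same bump, shifted by 3 samples

plain_dot = sum(x * y for x, y in zip(a, b))   # shift-sensitive: zero here
shifted_dot = max_cross_correlation(a, b)      # shift-invariant: recovers a·a
```

For images the same maximum is taken over 2-D shifts, and computing all shifts at once via the FFT is what keeps the modified kernel affordable.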
Abstract: Network-Centric Air Defense Missile Systems
(NCADMS) represent a superior development of air defense missile
systems and have been regarded as one of the major research issues
in the military domain at present. Due to the lack of knowledge of
and experience with NCADMS, modeling and simulation becomes an
effective approach to operational analysis, compared with
equation-based ones. However, the complex dynamic interactions
among entities and the flexible architectures of NCADMS put forward
new requirements and challenges for the simulation framework
and models. Agent-Based Simulation (ABS) explicitly addresses
modeling the behaviors of heterogeneous individuals. Agents have
the capability to sense and understand things, make decisions, and
act on the environment. They can also cooperate with others
dynamically to perform the tasks assigned to them. ABS proves to be
an effective approach for exploring the new operational
characteristics emerging in NCADMS. In this paper, based on an
analysis of the network-centric architecture and the new
cooperative engagement strategies for NCADMS, an agent-based
simulation framework was designed by expanding the simulation
framework of the so-called System Effectiveness Analysis
Simulation (SEAS). The simulation framework specifies the
components, the relationships and interactions between them, and
the structure and behavior rules of an agent in NCADMS. Based on
scenario simulations, the information and decision superiority and
operational advantages of NCADMS were analyzed; meanwhile, some
suggestions were provided for its future development.
Abstract: To realize the vision of ubiquitous computing, it is
important to develop a context-aware infrastructure which can help
ubiquitous agents, services, and devices become aware of their
contexts because such computational entities need to adapt themselves
to changing situations. A context-aware infrastructure manages the
context model representing contextual information and provides
appropriate information. In this paper, we introduce Context-Aware
Middleware for URC System (hereafter CAMUS) as a context-aware
infrastructure for a network-based intelligent robot system and discuss
the ontology-based context modeling and reasoning approach which is
used in that infrastructure.
Abstract: The automatic discrimination of seismic signals is an important practical goal for earth-science observatories due to the large amount of information that they receive continuously. An essential discrimination task is to allocate an incoming signal to the group associated with the kind of physical phenomenon producing it. In this paper, we present new techniques for seismic signal classification: local, regional and global discrimination. These techniques were tested on seismic signals from the database of the National Geophysical Institute of the Centre National pour la Recherche Scientifique et Technique (Morocco), using the Moroccan software for seismic signal analysis.
Abstract: In this study, the dispersed model is used to predict
the gas-phase concentration and the liquid-drop concentration. The
venturi scrubber efficiency is calculated from the gas-phase
concentration. The modified model has been validated against the
available experimental data of Johnstone, Field and Tasler for a
range of throat gas velocities, liquid-to-gas ratios and particle
diameters, and is used to study the effect of some design
parameters on the collection efficiency.