Abstract: In most cases, natural disasters make it necessary to
evacuate people. The quality of evacuation management is
dramatically improved by the use of information provided by
decision support systems, which become indispensable in large-scale
evacuation operations. This paper presents a best practice case study.
In November 2007, officers from the Emergency Situations
Inspectorate "Crisana" of Bihor County, Romania, participated in a
cross-border evacuation exercise in which 700 people were evacuated
from the Netherlands to Belgium. One of the main objectives of the
exercise was to test four different decision support systems.
Afterwards, based on that experience, a software system called
TEVAC (Trans Border Evacuation) was developed "in house" by the
experts of this institution. This original software system was
successfully tested in September 2008 during the international
exercise EU-HUROMEX 2008, whose scenario involved the real
evacuation of 200 persons from Hungary to Romania. Based on the
lessons learned and the results, since April 2009 the TEVAC software
has been used by all Emergency Situations Inspectorates across
Romania.
Abstract: Using a neural network, we model an unknown function f for given input-output data pairs. The connection strength of each neuron is updated through learning. Repeated simulations of a crisp neural network produce different values of the weight factors, which are directly affected by changes in different parameters. We propose the idea that, for each neuron in the network, we can obtain quasi-fuzzy weight sets (QFWS) using repeated simulation of the crisp neural network. Such fuzzy weight functions may be applied where we have multivariate crisp input that needs to be adjusted after iterative learning, as in claim amount distribution analysis. As real data is subject to noise and uncertainty, QFWS may help simplify such complex problems. Secondly, these QFWS provide a good initial solution for the training of fuzzy neural networks, with reduced computational complexity.
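The construction described above can be sketched as follows: a crisp neuron is trained repeatedly with different random initializations, and the spread of each weight across runs is summarized as a quasi-fuzzy interval. The data, learning rate and network size below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def train_neuron(X, y, seed, lr=0.1, epochs=200):
    """Train a single linear neuron with plain gradient descent."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    for _ in range(epochs):
        err = X @ w - y
        w -= lr * X.T @ err / len(y)
    return w

# Noisy input-output pairs for an unknown function f (toy example)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.05 * rng.normal(size=100)

# Repeat the crisp training with different seeds and collect the weights
runs = np.array([train_neuron(X, y, seed) for seed in range(20)])

# Quasi-fuzzy weight set per neuron weight: (support min, modal value, support max)
qfws = [(runs[:, j].min(), runs[:, j].mean(), runs[:, j].max())
        for j in range(runs.shape[1])]
```

The interval `(min, mean, max)` per weight is one simple way to read a fuzzy set out of the repeated crisp simulations; it could then seed the training of a fuzzy neural network.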
Abstract: With the advances in wireless networking, IEEE 802.16 WiMAX technology has been widely deployed for several applications such as "last mile" broadband service, cellular backhaul, and high-speed enterprise connectivity. As a result, the military has employed WiMAX for many years as a high-speed wireless data link because of its point-to-multipoint and non-line-of-sight (NLOS) capability. However, the risk of using WiMAX is a critical factor in some sensitive areas of military application, especially in ammunition manufacturing such as solid propellant rocket production. US DoD policy states that the following certification requirements must be met for WiMAX: electromagnetic environmental effects (E3) and Hazards of Electromagnetic Radiation to Ordnance (HERO). This paper discusses the recommended power densities and safe separation distance (SSD) for HERO for WiMAX systems deployed in solid propellant rocket production. This research found that WiMAX is safe to operate in close proximity to rocket production, based on the AF Guidance Memorandum immediately changing AFMAN 91-201.
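For context, a generic far-field estimate of a safe separation distance inverts the free-space power-density relation S = P·G / (4πd²). The transmitter power, antenna gain and density limit below are purely illustrative assumptions and no substitute for HERO-certified limits from the applicable standards.

```python
import math

def ssd_m(p_tx_w, gain_lin, s_limit_w_m2):
    """Distance (m) at which free-space power density drops to the limit:
    solve S = P*G / (4*pi*d^2) for d."""
    return math.sqrt(p_tx_w * gain_lin / (4 * math.pi * s_limit_w_m2))

# e.g. a 1 W radio with a ~16 dBi (~39.8x) antenna and a 2 W/m^2 limit
d = ssd_m(1.0, 39.8, 2.0)
```

Doubling the transmit power by a factor of four doubles the estimated distance, which is the scaling one expects from the inverse-square relation.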
Abstract: The inphase/quadrature (I/Q) amplitude and phase
imbalance effects are studied in coherent optical orthogonal
frequency division multiplexing (CO-OFDM) systems. An analytical
model for the I/Q imbalance is developed and supported by
simulation results. The results indicate that the I/Q imbalance degrades the BER performance considerably.
Abstract: HSDPA is a new feature introduced in the
Release-5 specifications of the 3GPP WCDMA/UTRA standard to
realize higher data rates together with lower round-trip times.
Moreover, the HSDPA concept offers an outstanding improvement in
packet throughput and also significantly reduces the packet call
transfer delay compared to the Release-99 DSCH. Until now the
HSDPA system has used turbo coding, one of the best coding
techniques for approaching the Shannon limit. However, the main
drawbacks of turbo coding are high decoding complexity and high
latency, which make it unsuitable for some applications such as
satellite communications, where the transmission distance itself
introduces latency due to the limited speed of light. Hence, this paper
proposes using LDPC coding in place of turbo coding in the HSDPA
system, which decreases latency and decoding complexity. LDPC
coding, however, increases the encoding complexity. Although the
transmitter complexity increases at the NodeB, the end user benefits
in terms of receiver complexity and bit error rate. In this paper the
LDPC encoder is implemented using a sparse parity-check matrix H
to generate codewords, and the belief propagation algorithm is used
for LDPC decoding. Simulation results show that in LDPC coding the
BER drops sharply as the number of iterations increases, even with a
small increase in Eb/No, which is not possible with turbo coding. The
same BER was also achieved using fewer iterations; hence the latency
and receiver complexity decreased with LDPC coding.
HSDPA increases the downlink data rate within a cell to a theoretical
maximum of 14 Mbps, with 2 Mbps on the uplink. The changes that
HSDPA enables include better quality and more reliable, more robust
data services. In other words, while realistic data rates are only a few
Mbps, the actual quality and number of users achieved will improve
significantly.
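The encode/check machinery can be illustrated on a toy (7,4) systematic code. The paper's sparse H is far larger, and the hard-decision syndrome decoding below is only a simplified stand-in for belief propagation; it corrects a single flipped bit by matching the syndrome to a column of H.

```python
import numpy as np

# Toy (7,4) parity-check matrix in systematic form H = [A | I] -- a small
# stand-in for the much larger sparse H used in an LDPC scheme.
A = np.array([[1, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 1, 1]])
H = np.hstack([A, np.eye(3, dtype=int)])
G = np.hstack([np.eye(4, dtype=int), A.T])   # generator: c = m G (mod 2)

def encode(m):
    """Systematic encoding: message bits followed by parity bits."""
    return (m @ G) % 2

def correct(r):
    """Hard-decision syndrome decoding (simplified stand-in for belief
    propagation): the syndrome of a single-bit error equals the H column
    at the error position."""
    s = (H @ r) % 2
    if s.any():
        pos = int(np.argmax((H.T == s).all(axis=1)))
        r = r.copy()
        r[pos] ^= 1
    return r

m = np.array([1, 0, 1, 1])
c = encode(m)
r = c.copy()
r[2] ^= 1              # channel flips one bit
decoded = correct(r)
```

Every valid codeword satisfies H·cᵀ ≡ 0 (mod 2), which is the parity-check test a real LDPC decoder iterates on.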
Abstract: Generally, administrative systems in an academic
environment are disjoint and support independent queries. The
objective in this work is to semantically connect these independent
systems to provide support to queries run on the integrated platform.
The proposed framework, by enriching educational material in the
legacy systems, provides a value-added semantics layer where
activities such as annotation, query and reasoning can be carried out
to support management requirements. We discuss the development of
this ontology framework with a case study of UAE University
program administration to show how semantic web technologies can
be used by administration to develop student profiles for better
academic program management.
Abstract: This paper proposes a “soft systems" approach to
domain-driven design of computer-based information systems. We
propose a systemic framework combining techniques from Soft
Systems Methodology (SSM), the Unified Modelling Language
(UML), and an implementation pattern known as “Naked Objects".
We have used this framework in action research projects that have
involved the investigation and modelling of business processes using
object-oriented domain models and the implementation of software
systems based on those domain models. Within the proposed
framework, Soft Systems Methodology (SSM) is used as a guiding
methodology to explore the problem situation and to generate a
ubiquitous language (soft language) which can be used as the basis
for developing an object-oriented domain model. The domain model
is further developed using techniques based on the UML and is
implemented in software following the “Naked Objects"
implementation pattern. We argue that there are advantages from
combining and using techniques from different methodologies in this
way.
The proposed systemic framework is overviewed and justified as a
multimethodology using Mingers' multimethodology ideas.
This multimethodology approach is being evaluated through a
series of action research projects based on real-world case studies. A
peer-tutoring case study is presented here as a sample of the
framework evaluation process.
Abstract: This paper unifies power optimization approaches in
various energy converters, such as: thermal, solar, chemical, and
electrochemical engines, in particular fuel cells. Thermodynamics
leads to the converter's efficiency and limiting power. Efficiency
equations serve to solve problems of upgrading and downgrading of
resources. While optimization of steady systems applies the
differential calculus and Lagrange multipliers, dynamic optimization
involves variational calculus and dynamic programming. In reacting
systems chemical affinity constitutes a prevailing component of an
overall efficiency, thus the power is analyzed in terms of an active
part of chemical affinity. The main novelty of the present paper in the
energy yield context consists in showing that the generalized heat
flux Q (involving the traditional heat flux q plus the product of
temperature and the sum of products of partial entropies and fluxes of
species) plays in complex cases (solar, chemical and electrochemical)
the same role as the traditional heat q in pure heat engines.
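In symbols, the generalized flux described verbally above can be written (our notation, assembled from that verbal definition, not necessarily the paper's) as:

```latex
Q = q + T \sum_{k} s_k \, J_k ,
```

where \(q\) is the traditional heat flux, \(s_k\) are the partial entropies, and \(J_k\) the fluxes of the species; in a pure heat engine all \(J_k = 0\) and \(Q\) reduces to \(q\).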
The presented methodology is also applied to power limits in fuel
cells, treated as electrochemical flow engines propelled
by chemical reactions. The performance of fuel cells is determined by
magnitudes and directions of participating streams and mechanism of
electric current generation. The lowering of the voltage below the
reversible voltage is a proper measure of a cell's imperfection. The voltage losses,
called polarization, include the contributions of three main sources:
activation, ohmic and concentration. Examples show power maxima
in fuel cells and prove the relevance of the extension of the thermal
machine theory to chemical and electrochemical systems. The main
novelty of the present paper in the FC context consists in introducing
an effective or reduced Gibbs free energy change between products p
and reactants s, which takes into account the decrease of voltage and
power caused by the incomplete conversion of the overall reaction.
Abstract: The reduction of Single Input Single Output (SISO) discrete systems into lower-order models, using a conventional and an evolutionary technique, is presented in this paper. The conventional technique combines the advantages of the Modified Cauer Form (MCF) and differentiation. In this method the original discrete system is first converted into an equivalent continuous system by applying the bilinear transformation. The denominator of the equivalent continuous system and its reciprocal are differentiated successively, and the reduced denominator of the desired order is obtained by combining the differentiated polynomials. The numerator is obtained by matching the quotients of the MCF. The reduced continuous system is then converted back into a discrete system using the inverse bilinear transformation. In the evolutionary technique, Particle Swarm Optimization (PSO) is employed to reduce the higher-order model. The PSO method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced-order model for a unit step input. Both methods are illustrated through a numerical example.
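The PSO cost function described above, the ISE between step responses, can be sketched for discrete transfer functions as follows. The third-order system and first-order candidate below are illustrative, not the paper's numerical example; a PSO loop would repeatedly evaluate `ise` over candidate reduced-model coefficients.

```python
import numpy as np

def step_response(b, a, n=50):
    """Unit-step response of a discrete transfer function
    H(z) = (b0 + b1 z^-1 + ...) / (a0 + a1 z^-1 + ...) via its
    difference equation."""
    y = np.zeros(n)
    u = np.ones(n)
    for k in range(n):
        acc = sum(b[i] * u[k - i] for i in range(len(b)) if k - i >= 0)
        acc -= sum(a[i] * y[k - i] for i in range(1, len(a)) if k - i >= 0)
        y[k] = acc / a[0]
    return y

def ise(b1, a1, b2, a2, n=50):
    """Sum of squared errors between the two step responses -- the cost
    a PSO run would minimize when fitting the reduced model."""
    return float(np.sum((step_response(b1, a1, n) - step_response(b2, a2, n)) ** 2))

# Toy third-order original and a hypothetical first-order reduced model
b_orig, a_orig = [0.2, 0.1, 0.0], [1.0, -0.9, 0.25, -0.02]
b_red,  a_red  = [0.25], [1.0, -0.7]
cost = ise(b_orig, a_orig, b_red, a_red)
```

A perfect reduced model would drive the cost to zero; PSO searches the coefficient space for the candidate with the smallest `cost`.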
Abstract: Although cellular wireless communication systems are
subject to short- and long-term fading, the effect of the wireless
channel has largely been ignored in most teletraffic assessment
research. In this paper, a mathematical teletraffic model is proposed
to estimate the blocking and forced termination probabilities of
cellular wireless networks resulting from teletraffic behavior as well
as outages of the propagation channel. To evaluate the proposed
teletraffic model, gamma inter-arrival and general service time
distributions have been considered based on the wireless channel
fading effect. The performance is evaluated and compared with the
classical model. The proposed model is investigated under different
operational conditions, which consider not only the arrival-rate
process but also different faded-channel models.
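As a concrete point of reference, the classical baseline in such comparisons is typically the Erlang-B system (Poisson arrivals, no fading) — an assumption on our part, since the abstract only says "classical model". Its blocking probability can be computed with the standard stable recurrence:

```python
def erlang_b(traffic_erlangs, channels):
    """Classical Erlang-B blocking probability, computed with the
    numerically stable recurrence B(0) = 1,
    B(c) = A*B(c-1) / (c + A*B(c-1))."""
    b = 1.0
    for c in range(1, channels + 1):
        b = traffic_erlangs * b / (c + traffic_erlangs * b)
    return b
```

For example, 2 Erlangs offered to 2 channels blocks 40% of calls. The paper's model generalizes away from these Poisson/no-fading assumptions via gamma inter-arrival and general service time distributions.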
Abstract: X-ray computed tomography (CT) is a well-established
visualization technique in medicine and nondestructive testing.
However, since CT scanning requires sampling of radiographic
projections from different viewing angles, common CT systems with
mechanically moving parts are too slow for dynamic imaging, for
instance of multiphase flows or live animals. A large number of X-ray
projections are needed to reconstruct CT images, so the collection
and processing of the projection data consume too much time and are
harmful to the patient. To address this problem, in this study we
propose a method for the tomographic reconstruction of a sample
from a limited number of X-ray projections using linear interpolation.
In simulation, we present reconstruction from an experimental X-ray
CT scan of an aluminum phantom in two steps: the X-ray projections
are first interpolated using the linear interpolation method, and the
result is then used for CT reconstruction based on the Ordered
Subsets Expectation Maximization (OSEM) method.
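The first step, angular interpolation of the projection data, can be sketched as follows; the toy sinogram dimensions are illustrative, and the OSEM reconstruction step is not reproduced here.

```python
import numpy as np

def interpolate_projections(sino, factor):
    """Linearly interpolate synthetic projections between measured viewing
    angles. `sino` has shape (n_angles, n_detectors); the result has
    factor * n_angles angle rows (assumes equally spaced angles)."""
    n_angles, n_det = sino.shape
    measured = np.arange(n_angles)
    dense = np.arange(n_angles * factor) / factor
    out = np.empty((len(dense), n_det))
    for det in range(n_det):
        out[:, det] = np.interp(dense, measured, sino[:, det])
    return out

# Toy sinogram: 8 measured angles, 16 detector bins -> 32 angle rows
sino = np.random.default_rng(1).random((8, 16))
dense = interpolate_projections(sino, 4)
```

The measured projections are preserved exactly at their original angles; only the in-between views are synthesized, which is what lets the scan get away with fewer real exposures.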
Abstract: Since the European renewable energy directives set the
target for 22.1% of electricity generation to be supplied from
renewables by 2010 [1], there has been increased interest in using
green technologies within the urban environment as well. The most
commonly considered installations are solar thermal and solar
photovoltaics. Nevertheless, as observed by Bahaj et al. [2], small
scale turbines can reduce the CO2 emissions related to the built
environment. Thus, in the last few years, an increasing number of
manufacturers have developed small wind turbines specifically
designed for the built environment. The present work focuses on the
integration of such installations into architectural systems and
presents a survey of successful case studies.
Abstract: This paper introduces a modification to the Diffie-
Hellman protocol making it applicable to decimal numbers, that is,
numbers between zero and one. For this purpose we extend the
theory of congruence. The new congruence is over the set of real
numbers and is called the "real congruence" or the "real modulus".
We refer to the existing congruence as the "integer congruence" or
the "integer modulus". This extension defines new terms and
redefines existing ones, and the properties and theorems of the
integer modulus are extended as well. The modified Diffie-Hellman
key exchange protocol produces a shared, secure, decimal secret key
for cryptosystems that depend on decimal numbers.
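For orientation, the unmodified (integer) Diffie-Hellman exchange that the paper extends can be sketched as follows. The toy parameters are far too small for real use, and the paper's real-modulus variant is not reproduced here.

```python
# Classical integer Diffie-Hellman over a toy prime. Both parties derive the
# same secret g^(a*b) mod p without ever transmitting a or b.
p, g = 23, 5                 # public prime modulus and generator (toy sizes)

a, b = 6, 15                 # private keys of the two parties
A = pow(g, a, p)             # Alice's public value g^a mod p
B = pow(g, b, p)             # Bob's public value g^b mod p

shared_alice = pow(B, a, p)  # (g^b)^a mod p
shared_bob = pow(A, b, p)    # (g^a)^b mod p
assert shared_alice == shared_bob
```

The paper's contribution replaces the integer modulus here with a "real congruence" so that the shared secret is a decimal in (0, 1); only the classical scheme is shown above.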
Abstract: A 3D industrial computed tomography (CT) system,
manufactured based on first-generation CT architecture (single-
source, single-detector), was evaluated. The operational accuracy of
the manufactured system was assessed using simulation in
comparison with experimental tests. 137Cs and 60Co were used as
gamma sources. Simulations were carried out using the MCNP4C
code. Experimental tests with 137Cs were in good agreement with
the simulations.
Abstract: This paper presents a modified, efficient inductive
powering link based on an ASK modulator and a proposed efficient
class-E power amplifier. The design presents the external part,
located outside the body, which transfers power and data to
implanted devices such as implanted microsystems for stimulating
and monitoring nerves and muscles. The system operates at the low-
band frequency of 10 MHz, within the industrial-scientific-medical
(ISM) band, to avoid tissue heating. For the external part, the
modulation index is 11.1% and the modulation rate 7.2%, with a data
rate of 1 Mbit/s assuming Tbit = 1 us. The system has been designed
using 0.35-μm CMOS technology. The mathematical model is given,
and the design is simulated using the OrCAD PSpice 16.2 software
tool; for real-time simulation, the electronic workbench MULTISIM
11 has been used.
Abstract: TUSAT is a prospective Turkish
Communication Satellite designed for providing mainly data
communication and broadcasting services through Ku-Band
and C-Band channels. Thermal control is a vital issue in
satellite design process. Therefore, all satellite subsystems and
equipment should be maintained in the desired temperature
range from launch to end of maneuvering life. The main
function of thermal control is to keep the equipment and
the satellite structures in a given temperature range for various
phases and operating modes of spacecraft during its lifetime.
This paper describes the thermal control design which uses
passive and active thermal control concepts. The active
thermal control is based on heaters regulated by software via
thermistors. Passive thermal control, in turn, consists of
heat pipes, multilayer insulation (MLI) blankets, radiators,
paints and surface finishes maintaining the temperature of
the overall carrier components within an acceptable range.
Thermal control design is supported by thermal analysis using
thermal mathematical models (TMM).
Abstract: Decrease in hardware costs and advances in computer
networking technologies have led to increased interest in the use of
large-scale parallel and distributed computing systems. One of the
biggest issues in such systems is the development of effective
techniques/algorithms for the distribution of the processes/load of a
parallel program on multiple hosts to achieve goal(s) such as
minimizing execution time, minimizing communication delays,
maximizing resource utilization and maximizing throughput.
Substantive research using queuing analysis and assuming job
arrivals following a Poisson pattern, have shown that in a multi-host
system the probability of one of the hosts being idle while other host
has multiple jobs queued up can be very high. Such imbalances in
system load suggest that performance can be improved by either
transferring jobs from the currently heavily loaded hosts to the lightly
loaded ones or by distributing the load evenly and fairly among the
hosts. The algorithms known as load balancing algorithms help to
achieve the above goal(s). These algorithms fall into two basic
categories - static and dynamic. Whereas static load balancing (SLB)
algorithms take decisions regarding the assignment of tasks to
processors at compile time, based on average estimated values of
process execution times and communication delays, dynamic load
balancing (DLB) algorithms are adaptive to changing situations and
take decisions at run time.
The objective of this paper is to identify qualitative parameters for
the comparison of the above algorithms. In the future this work can
be extended to develop an experimental environment in which to
study these load balancing algorithms quantitatively, based on the
comparative parameters.
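The dynamic category can be illustrated with a minimal greedy scheme that assigns each arriving job, at run time, to the currently least-loaded host. This is an illustrative toy, not an algorithm from the paper.

```python
import heapq

def dynamic_balance(job_costs, n_hosts):
    """Toy dynamic load balancing: each incoming job goes to the host
    with the smallest accumulated load at that moment (greedy DLB)."""
    heap = [(0.0, h) for h in range(n_hosts)]   # (current load, host id)
    assignment = []
    for cost in job_costs:
        load, host = heapq.heappop(heap)        # least-loaded host now
        assignment.append(host)
        heapq.heappush(heap, (load + cost, host))
    return assignment

# Six jobs with varying execution costs distributed over 3 hosts
assignment = dynamic_balance([5, 3, 8, 1, 2, 7], 3)
```

A static (SLB) scheme would instead fix this mapping before execution from estimated costs; the run-time decision is exactly what lets DLB react to the load imbalances described above.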
Abstract: As wind, solar and other clean and green energy
sources gain popularity worldwide, engineers are seeking ways to
make renewable energy systems more affordable and to integrate
them with existing AC power grids. In the present paper an attempt
has been made to integrate PV arrays into the smart nano grid
using an artificial intelligence (AI) based solar-powered cascade
multilevel inverter. The AI-based controller switching scheme has been
used for improving the power quality by reducing the Total Harmonic
Distortion (THD) of the multi-level inverter output voltage.
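The THD figure of merit used above can be computed directly from a waveform's spectrum. The sketch below uses illustrative waveforms (not the paper's inverter output) to show the effect multilevel switching exploits: adding voltage levels lowers THD.

```python
import numpy as np

def thd(signal):
    """Total harmonic distortion of one uniformly sampled period:
    sqrt(sum of harmonic powers) / fundamental amplitude, from the FFT."""
    spec = np.abs(np.fft.rfft(signal)) / len(signal)
    fundamental = spec[1]          # one cycle per analysis window
    harmonics = spec[2:]           # everything above the fundamental
    return float(np.sqrt(np.sum(harmonics ** 2)) / fundamental)

t = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
square = np.sign(np.sin(t))              # 2-level inverter output
staircase = np.round(2 * np.sin(t)) / 2  # toy 5-level staircase

thd_square = thd(square)        # ~0.48 for an ideal square wave
thd_staircase = thd(staircase)  # noticeably lower
```

An AI-tuned switching scheme goes further than this naive staircase by choosing switching angles that suppress specific low-order harmonics.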
Abstract: Wind farms (WFs) with high penetration levels are
being established in power systems worldwide more rapidly than
other renewable resources. The Independent System Operator (ISO),
as a policy maker, should propose appropriate places for WF
installation in order to maximize the benefits for the investors. There
is also a possibility of congestion relief using the new installation of
WFs which should be taken into account by the ISO when proposing
the locations for WF installation. In this context, an efficient wind farm
(WF) placement method is proposed in order to reduce the burden on
congested lines. Since the wind speed is a random variable and load
forecasts also contain uncertainties, probabilistic approaches are used
for this type of study. AC probabilistic optimal power flow (P-OPF)
is formulated and solved using Monte Carlo Simulations (MCS). In
order to reduce the computation time, point estimate methods (PEM) are
introduced as an efficient alternative to the time-demanding MCS.
Subsequently, WF optimal placement is determined using generation
shift distribution factors (GSDF), considering a new parameter,
the wind availability factor (WAF). In order to obtain more
realistic results, N-1 contingency analysis is employed to find the
optimal size of WF, by means of line outage distribution factors
(LODF). The IEEE 30-bus test system is used to show and compare
the accuracy of the proposed methodology.
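The MCS treatment of wind-speed uncertainty can be sketched for a single turbine as follows: sample Weibull-distributed speeds and push them through a piecewise power curve. The Weibull parameters and power-curve numbers are illustrative assumptions, not data from the paper.

```python
import numpy as np

def turbine_power(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2.0):
    """Piecewise turbine power curve (MW): zero below cut-in and above
    cut-out, cubic ramp between cut-in and rated speed, flat at rated."""
    p = np.where((v >= v_in) & (v < v_rated),
                 p_rated * ((v - v_in) / (v_rated - v_in)) ** 3, 0.0)
    p = np.where((v >= v_rated) & (v <= v_out), p_rated, p)
    return p

rng = np.random.default_rng(42)
v = rng.weibull(2.0, size=100_000) * 8.0   # shape k=2, scale c=8 m/s

# Monte Carlo estimate of the expected WF output (single 2 MW unit)
expected_mw = float(turbine_power(v).mean())
```

PEM replaces the 100 000 samples here with a handful of deterministic evaluation points, which is the computational saving the abstract refers to.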
Abstract: Diagnosis can be achieved by building a model of a
certain organ under surveillance and comparing it with the real time
physiological measurements taken from the patient. This paper deals
with the presentation of the benefits of using Data Mining techniques
in the computer-aided diagnosis (CAD), focusing on the cancer
detection, in order to help doctors to make optimal decisions quickly
and accurately. In the field of noninvasive diagnosis techniques, the
endoscopic ultrasound elastography (EUSE) is a recent elasticity
imaging technique that allows characterizing the difference between
malignant and benign tumors. Digitizing and summarizing the main
features of the EUSE sample movies in vector form involves the use
of exploratory data analysis (EDA). Neural networks are then
trained on the corresponding EUSE sample movie feature vectors in
such a way that these intelligent systems are able to offer a very
precise and objective diagnosis, discriminating between benign and
malignant tumors. A concrete application of these Data Mining
techniques illustrates the suitability and the reliability of this
methodology in CAD.