Abstract: Natural gas is the most popular fossil fuel today and is
expected to remain so in the future. Because natural gas exists in
underground reservoirs, it may contain many non-hydrocarbon
components, for instance hydrogen sulfide, nitrogen and water vapor.
These impurities are undesirable compounds and cause several
technical problems, for example corrosion and environmental
pollution. Therefore, these impurities should be reduced or removed
from the natural gas stream. The Khurmala dome is located southwest
of Erbil in the Kurdistan region. The Kurdistan Regional Government
has paid great attention to this dome as a fuel source for the
region. However, the Khurmala associated natural gas is currently
flared at the field. There is now a plan to recover and trade this
gas, either as feedstock for a power station or for sale on the
global market. Laboratory analysis has shown that the Khurmala sour
gas contains large quantities of H2S (about 5.3%) and CO2 (about
4.4%). The Khurmala gas sweetening process was simulated in a
previous study using Aspen HYSYS. However, the Khurmala sweet gas
still contains a small quantity of water, about 23 ppm in the sweet
gas stream. This amount of water should be removed or reduced, since
water content in natural gas causes several technical problems such
as hydrate formation and corrosion. Therefore, this study aims to
simulate the prospective Khurmala gas dehydration process using the
Aspen HYSYS V7.3 program. The simulation succeeded in reducing the
water content to less than 0.1 ppm. In addition, the simulation work
achieved process optimization using several desiccant types, for
example TEG and DEG, and studied the relationship between absorbent
type and circulation rate and the hydrocarbon losses from the glycol
regenerator tower.
Abstract: The paper provides a basic overview of simulation optimization. The procedure for its practical use is demonstrated on a real example in the Witness simulator. Simulation optimization is presented as a good tool for solving many problems in real practice, especially in production systems. The authors also describe their own experiences and mention the strengths and weaknesses of simulation optimization.
Abstract: A two-dimensional numerical simulation of the contribution
of both inertial and aerodynamic forces on the blade loads of
a Vertical-Axis Wind Turbine (VAWT) is presented. After describing
the computational model and the relative validation procedure, a
complete campaign of simulations - based on full RANS unsteady
calculations - is proposed for a three-bladed rotor architecture characterized
by a NACA 0021 airfoil. For each analyzed angular velocity,
the combined effect of pressure and viscous forces acting on each
rotor blade is compared to the corresponding centrifugal forces,
due to the revolution of the turbine, thus providing a preliminary
estimate of the correlation between overall rotor efficiency and
structural blade loads.
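The comparison of aerodynamic and centrifugal blade loads rests on the basic relation F_c = m·ω²·R. A minimal sketch, with hypothetical blade mass and rotor radius (the paper's rotor data are not reproduced here):

```python
# Illustrative estimate of the centrifugal blade load of a VAWT,
# F_c = m * omega^2 * R, for assumed (hypothetical) blade mass and radius.

def centrifugal_load(mass_kg: float, omega_rad_s: float, radius_m: float) -> float:
    """Centrifugal force on a blade of mass m rotating at omega about radius R."""
    return mass_kg * omega_rad_s**2 * radius_m

# Assumed values for a small three-bladed rotor (not from the paper):
mass, radius = 3.0, 0.515
for omega in (25.0, 50.0, 100.0):
    print(omega, centrifugal_load(mass, omega, radius))
```

Because the load grows with the square of the angular velocity, doubling the rotor speed quadruples the centrifugal contribution, which is why the structural comparison must be repeated at each analyzed angular velocity.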
Abstract: Market-based models are frequently used for resource
allocation on the computational grid. However, as the size of
the grid grows, it becomes difficult for the customer to negotiate
directly with all the providers. Middle agents are introduced to
mediate between the providers and customers and to facilitate the
resource allocation process. The most frequently deployed middle
agents are matchmakers and brokers. The matchmaking agent
finds candidate providers who can satisfy the requirements
of the consumer, after which the customer negotiates directly with
the candidates. Broker agents mediate the negotiation with
the providers in real time.
In this paper we present a new type of middle agent, the marketmaker.
It operates through two parallel processes: through the investment
process the marketmaker acquires resources and resource reservations
in large quantities, while through the resale process it sells them
to the customers. The operation of the marketmaker rests on the fact
that, through its global view of the grid, it can perform a more
efficient resource allocation than is possible in one-to-one
negotiations between the customers and providers.
We present the operation and algorithms governing the marketmaker
agent, contrasting it with the matchmaker and broker agents. Through
a series of simulations in the task-oriented domain we compare the
operation of the three agent types. We find that the use of the
marketmaker agent leads to better performance in the allocation of
large tasks and a significant reduction of the messaging overhead.
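The messaging-overhead claim can be illustrated with a back-of-the-envelope count; the functions below are a hedged sketch, not the paper's algorithms, and the round counts are assumed:

```python
# Hedged sketch (not the paper's model): message-count comparison between
# direct one-to-one negotiation and a marketmaker that buys in bulk and resells.

def messages_direct(customers: int, providers: int, rounds: int = 2) -> int:
    # Every customer exchanges `rounds` request/offer message pairs
    # with every provider: O(customers * providers).
    return customers * providers * 2 * rounds

def messages_marketmaker(customers: int, providers: int, rounds: int = 2) -> int:
    # The marketmaker negotiates once with each provider (investment
    # process) and once with each customer (resale process):
    # O(customers + providers).
    return (customers + providers) * 2 * rounds

print(messages_direct(100, 50))      # 20000
print(messages_marketmaker(100, 50)) # 600
```

The asymptotic gap, multiplicative versus additive in the number of participants, is what makes the middle agent attractive as the grid grows.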
Abstract: This paper presents a method for determining the
uniaxial tensile properties such as Young's modulus, yield strength
and the flow behaviour of a material in a virtually non-destructive
manner. To achieve this, a new dumb-bell shaped miniature
specimen has been designed. This helps in avoiding the removal of
large size material samples from the in-service component for the
evaluation of current material properties. The proposed miniature
specimen has an advantage in finite element modelling with respect
to computational time and memory space. Test fixtures have been
developed to enable the tension tests on the miniature specimen in a
testing machine. The studies have been conducted on a chromium
(H11) steel and an aluminum alloy (AR66). The output from the
miniature test viz. load-elongation diagram is obtained and the finite
element simulation of the test is carried out using a 2D plane stress
analysis. The results are compared with the experimental results. It
is observed that the results from the finite element simulation
agree well with the miniature test results. The approach seems
to have potential to predict the mechanical properties of the
materials, which could be used in remaining life estimation of the
various in-service structures.
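Extracting Young's modulus from the load-elongation output follows E = (F/A)/(ΔL/L0). A minimal sketch with hypothetical specimen dimensions (not the paper's measured data):

```python
# Illustrative recovery of Young's modulus from one point on the elastic
# part of a load-elongation curve; all specimen values are assumed.

def youngs_modulus(force_N, elongation_m, area_m2, gauge_length_m):
    """E = stress / strain, valid only in the linear-elastic region."""
    stress = force_N / area_m2              # Pa
    strain = elongation_m / gauge_length_m  # dimensionless
    return stress / strain

# Hypothetical miniature-specimen values (illustrative, not from the paper):
E = youngs_modulus(force_N=500.0, elongation_m=5e-6,
                   area_m2=4e-6, gauge_length_m=8e-3)
print(E / 1e9)  # modulus in GPa (≈ 200 for these assumed values)
```

In practice the modulus would be fitted over the whole elastic segment of the measured curve rather than taken from a single point.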
Abstract: Model-based approaches have been applied successfully
to a wide range of tasks such as specification, simulation, testing, and
diagnosis. But one bottleneck often prevents the introduction of these
ideas: Manual modeling is a non-trivial, time-consuming task.
Automatically deriving models by observing and analyzing running
systems is one possible way to alleviate this bottleneck. To
derive a model automatically, some a-priori knowledge about the
model structure, i.e. about the system, must exist. Such a model
formalism would be used as follows: (i) by observing the network
traffic, a model of the long-term system behavior could be generated
automatically; (ii) test vectors could be generated from the model;
(iii) while the system is running, the model could be used to
diagnose abnormal system behavior.
The main contribution of this paper is the introduction of a model
formalism called 'probabilistic regression automaton' suitable for the
tasks mentioned above.
Abstract: During recent years wind turbine technology has
undergone rapid developments. Growth in size and the optimization
of wind turbines has enabled wind energy to become increasingly
competitive with conventional energy sources. As a result, today's
wind turbines participate actively in the power production of several
countries around the world. These developments raise a number of
challenges to be dealt with now and in the future. The penetration of
wind energy in the grid raises questions about the compatibility of the
wind turbine power production with the grid. In particular, the
contribution to grid stability, power quality and behavior during
fault situations plays as important a role as reliability. In the
present work, we address two fault situations that have shown their
influence on the generator; the behavior of the wind turbine under
these faults is briefly discussed based on simulation results.
Abstract: The article investigates how 14- to 15-year-olds build informal conceptions of inferential statistics as they engage in a modelling process and build their own computer simulations with dynamic statistical software. This study proposes four primary phases of informal inferential reasoning for the students in the statistical modelling and simulation process. Findings show shifts in the conceptual structures across the four phases and point to the potential of all of these phases for fostering the development of students' robust knowledge of the logic of inference when using computer-based simulations to model and investigate statistical questions.
Abstract: The paper proposes a novel design of a 3T XOR gate combining complementary CMOS with pass-transistor logic. The design has been compared with earlier proposed 4T and 6T XOR gates, and a significant improvement in silicon area and power-delay product has been obtained. An eight-transistor full adder has been designed using the proposed three-transistor XOR gate, and its performance has been investigated using 0.15 µm and 0.35 µm technologies. Compared to the earlier 10-transistor full adder, the proposed adder shows a significant improvement in silicon area and power-delay product. The whole simulation has been carried out using HSPICE.
Abstract: This article describes the design of an 8-bit asynchronous
microcontroller simulation model in VHDL. The model is created in
the ISE Foundation design tool and simulated in the ModelSim tool.
This model is a simple application example of asynchronous systems
designed in synchronous design tools. The design process of creating
an asynchronous system with a 4-phase bundled-data protocol and with
matched delays is described in the article. The model is described
at the gate level of abstraction.
The result of this article is the simulation waveform of the
functional design. The described work covers only the simulation
model; the next step would be creating a synthesizable model for an
FPGA.
Abstract: The aim of this paper is to emphasize and alleviate the effect of phase noise due to imperfect local oscillators on the performance of a Multi-Carrier CDMA system. After the cancellation of the Common Phase Error (CPE), an iterative approach is introduced which iteratively estimates Inter-Carrier Interference (ICI) components in the frequency domain and cancels their contribution in the time domain. Simulations are conducted in order to investigate the achievable performance for several parameters, such as the spreading factor, the modulation order, the phase noise power and the transmission Signal-to-Noise Ratio.
Abstract: An electronic portal image device (EPID) has become
a method of patient-specific IMRT dose verification for radiotherapy.
Research studies have focused on pre and post-treatment verification,
however, there are currently no interventional procedures using EPID
dosimetry that measure the dose in real time as a mechanism to
ensure that overdoses do not occur and underdoses are detected as
soon as is practically possible. As a result, an EPID-based real time
dose verification system for dynamic IMRT was developed and was
implemented with MATLAB/Simulink. The EPID image acquisition
was set to continuous acquisition mode at 1.4 images per second. The
system defined the time constraint gap, or execution gap, at the
image acquisition time, so that every calculation must be completed
before the next image capture is completed.
Abstract: The objective of this paper is to compare the
time-response performance of the conventional PID controller and the
modern SMC controller for an inverted pendulum system. The goal is
to determine which control strategy delivers better performance with
respect to the pendulum's angle and the cart's position. The
inverted pendulum represents a challenging control problem, as it
continually moves toward an uncontrolled state. Two controllers are
presented, Sliding Mode Control (SMC) and Proportional-
Integral-Derivative (PID), for controlling the highly nonlinear
inverted pendulum model. A simulation study conducted in the MATLAB
M-file and Simulink environment shows that both controllers are
capable of controlling the multi-output inverted pendulum system
successfully. The results show that Sliding Mode Control (SMC)
produces a better response than the PID control strategy, and the
responses are presented in the time domain with detailed analysis.
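The PID side of such a comparison can be sketched on a linearized pendulum model, θ'' = (g/L)θ + u; the gains and parameters below are assumed for illustration and are not the paper's tuned values:

```python
# Illustrative PID stabilization of a linearized inverted pendulum,
# theta'' = (g/L)*theta + u; all gains and parameters are assumed.

def simulate_pid(theta0=0.2, dt=0.001, steps=20000, g=9.81, L=0.5,
                 kp=40.0, ki=5.0, kd=10.0):
    """Euler simulation; returns the final pendulum angle (rad)."""
    theta, omega, integral = theta0, 0.0, 0.0
    for _ in range(steps):
        integral += theta * dt
        u = -(kp * theta + ki * integral + kd * omega)  # PID control law
        alpha = (g / L) * theta + u                     # angular acceleration
        omega += alpha * dt
        theta += omega * dt
    return theta

print(abs(simulate_pid()))  # small residual angle: pendulum held upright
```

With kp greater than g/L the proportional term overcomes the pendulum's unstable gravity torque, the derivative term damps the oscillation, and the integral term removes the steady-state offset; SMC would replace this control law with a switching function of a sliding surface.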
Abstract: This paper presents a design for transcutaneous inductive
power links. The design is used to transfer power and data to
implanted devices, such as implanted microsystems that stimulate and
monitor nerves and muscles. The system operates at the low-band
frequency of 13.56 MHz, within the industrial-scientific-medical
(ISM) band, to avoid tissue heating. For the external part, the
modulation index is 13% and the modulation rate is 7.3%, with a data
rate of 1 Mbit/s assuming Tbit = 1 µs. The system has been designed
using 0.35-µm CMOS fabrication technology. The mathematical model is
given and the design is simulated using the OrCAD PSpice 16.2
software tool; for real-time simulation, the Electronics Workbench
MULTISIM 11 has been used. The novel circular planar (pancake) coils
were simulated using ANSOFT HFSS software.
Abstract: In this paper we present a technique to speed up
ICA based on the idea of reducing the dimensionality of the data
set while preserving the quality of the results. In particular we
refer to the FastICA algorithm, which uses kurtosis as the
statistical property to be maximized. By performing a particular
Johnson-Lindenstrauss-like projection of the data set, we find the
minimum dimensionality reduction rate ρ, defined as the ratio
between the size k of the reduced space and the original dimension
d, which guarantees a narrow confidence interval for this estimator
with a high confidence level. The derived dimensionality reduction
rate depends on a system control parameter β easily computed a
priori on the basis of the observations only.
Extensive simulations have been done on different sets of real world
signals. They show that the achievable dimensionality reduction is
in fact very high, that it preserves the quality of the
decomposition, and that it impressively speeds up FastICA. On the
other hand, a set of signals for which the estimated reduction rate
is greater than 1 exhibits bad decomposition results if reduced,
thus validating the reliability of the parameter β.
We are confident that our method will lead to a better approach to
real time applications.
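The kind of Johnson-Lindenstrauss-style projection underlying the method can be sketched as follows; the data dimensions and the choice k = 50 are illustrative, not the rate ρ derived from the paper's parameter β:

```python
# Sketch of a Johnson-Lindenstrauss-style random projection (pure Python,
# illustrative only): pairwise distances survive the reduction d -> k.
import math, random

def jl_project(vectors, k, seed=0):
    """Project d-dimensional vectors to k dimensions with a Gaussian matrix."""
    rng = random.Random(seed)
    d = len(vectors[0])
    # Entries scaled by 1/sqrt(k) so squared norms are preserved in expectation.
    R = [[rng.gauss(0.0, 1.0) / math.sqrt(k) for _ in range(d)]
         for _ in range(k)]
    return [[sum(r[j] * v[j] for j in range(d)) for r in R] for v in vectors]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

random.seed(1)
data = [[random.gauss(0, 1) for _ in range(200)] for _ in range(5)]
proj = jl_project(data, k=50)
# Pairwise distances are approximately preserved after the projection:
print(dist(data[0], data[1]), dist(proj[0], proj[1]))
```

Running the more expensive kurtosis-maximizing FastICA iterations in the k-dimensional space instead of the original d-dimensional one is the source of the reported speed-up.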
Abstract: This paper proposes a three-phase four-wire
current-controlled Voltage Source Inverter (CC-VSI) for both power
quality
improvement and PV energy extraction. For power quality
improvement, the CC-VSI works as a grid current-controlling shunt
active power filter to compensate for harmonic and reactive power of
loads. Then, the PV array is coupled to the DC bus of the CC-VSI
and supplies active power to the grid. The MPPT controller employs
the particle swarm optimization technique. The output of the MPPT
controller is a DC voltage that determines the DC-bus voltage
according to PV maximum power. The PSO method is simple and
effective, especially for a partially shaded PV array. Computer
simulation results show that the grid currents are sinusoidal and in
phase with the grid voltages, while the maximum PV active power is
delivered to the loads.
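A minimal PSO sketch on an assumed two-peak P-V curve (mimicking partial shading) illustrates why the method copes with multiple local maxima; the curve, swarm size, and coefficients are all hypothetical, not the paper's controller:

```python
# Hedged sketch of PSO maximum-power-point tracking on an assumed
# two-peak P-V curve; every parameter here is illustrative.
import math, random

def pv_power(v):
    # Assumed two-peak P-V curve, as under partial shading.
    return (80 * math.exp(-((v - 12) / 4) ** 2)
            + 100 * math.exp(-((v - 30) / 3) ** 2))

def pso_mppt(n=30, iters=100, vmin=0.0, vmax=40.0,
             w=0.6, c1=1.5, c2=1.5, seed=3):
    rng = random.Random(seed)
    pos = [rng.uniform(vmin, vmax) for _ in range(n)]
    vel = [0.0] * n
    pbest = pos[:]                   # each particle's best position so far
    gbest = max(pos, key=pv_power)   # swarm's best position so far
    for _ in range(iters):
        for i in range(n):
            vel[i] = (w * vel[i]
                      + c1 * rng.random() * (pbest[i] - pos[i])
                      + c2 * rng.random() * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], vmin), vmax)
            if pv_power(pos[i]) > pv_power(pbest[i]):
                pbest[i] = pos[i]
        gbest = max(pbest, key=pv_power)
    return gbest

v_mpp = pso_mppt()
print(v_mpp, pv_power(v_mpp))  # should settle near one of the power peaks
```

Unlike hill-climbing perturb-and-observe trackers, the swarm explores the whole voltage range, which is what lets it escape the lower local peak created by partial shading.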
Abstract: One of the disadvantages of using OFDM is the large
peak-to-average power ratio (PAPR) of its time-domain signal. A
signal with a large PAPR causes severe degradation of the bit error
rate (BER) performance due to inter-modulation noise in the
non-linear channel. This paper proposes an improved DSI (Dummy
Sequence Insertion) method, which achieves better PAPR and BER
performance. The feature of the proposed method is to optimize the
phase of each dummy sub-carrier so as to reduce the PAPR by changing
all predetermined phase coefficients in the time-domain signal,
which is calculated for the data sub-carriers and the dummy
sub-carriers separately. To achieve better PAPR performance, this
paper also proposes to employ a time-frequency domain swapping
algorithm for fine adjustment of the phase coefficients of the dummy
sub-carriers, which achieves lower processing complexity and better
PAPR and BER performance than the conventional DSI method. This
paper presents various computer simulation results to verify the
effectiveness of the proposed method in comparison with conventional
methods in the non-linear channel.
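The core idea of choosing a dummy sub-carrier phase to lower the PAPR can be sketched with a brute-force phase search; this illustrates the principle only and is not the paper's optimized DSI algorithm (subcarrier counts and the 16-phase grid are assumed):

```python
# Illustrative dummy-phase search for PAPR reduction: one dummy subcarrier
# is appended to the data subcarriers and its phase is chosen to minimize
# the PAPR of the time-domain (inverse-DFT) signal.
import cmath, math

def ifft_signal(X):
    """Inverse DFT: time-domain OFDM samples from subcarrier symbols X."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N)
                for k in range(N)) / N
            for n in range(N)]

def papr_db(x):
    """Peak-to-average power ratio of a sample sequence, in dB."""
    powers = [abs(s) ** 2 for s in x]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

data = [1, -1, 1, 1, -1, 1, -1]           # 7 BPSK data subcarriers (assumed)
base = papr_db(ifft_signal(data + [0]))   # no dummy subcarrier at all
cands = [papr_db(ifft_signal(data + [cmath.exp(2j * math.pi * p / 16)]))
         for p in range(16)]              # 16 candidate dummy phases
best = min(cands)
print(base, best)
```

The proposed method avoids this exhaustive search by adjusting the dummy phases from separately computed time-domain components, which is where its complexity advantage over conventional DSI comes from.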
Abstract: A mobile ad hoc network is an autonomous system of
mobile nodes connected by multi-hop wireless links without
centralized infrastructure support. As mobile communication gains
popularity, the need for suitable ad hoc routing protocols will
continue to grow. Efficient dynamic routing is an important research
challenge in such a network. Bandwidth-constrained mobile devices
use an on-demand approach in their routing protocols because of its
effectiveness and efficiency. Many researchers have conducted
numerous simulations comparing the performance of these protocols
under varying conditions and constraints. Most of these studies,
however, do not consider the MAC protocol, which impacts the
relative performance of the routing protocols in different network
scenarios. In this paper we investigate how the choice of MAC
protocol affects the relative performance of ad hoc routing
protocols under different scenarios. We have evaluated the
performance of these protocols using NS2 simulations. Our results
show that the performance of ad hoc routing protocols suffers when
they run over different MAC layer protocols.
Abstract: Video sensor networks operate on stringent requirements
of latency. Packets have a deadline within which they have
to be delivered. Violation of the deadline causes a packet to be
treated as lost and the loss of packets ultimately affects the quality
of the application. Network latency is typically a function of many
interacting components. In this paper, we propose ways of reducing
the forwarding latency of a packet at intermediate nodes. The
forwarding latency is caused by a combination of processing delay
and queueing delay. The former is incurred in order to determine the
next hop in dynamic routing. We show that unless link failures occur
in a very specific and unlikely pattern, a vast majority of these
lookups are redundant. To counter this we propose source routing as
the routing strategy. However, source routing suffers from issues
related to scalability and insensitivity to network dynamics. We
propose solutions to counter these and show that source routing is
definitely a viable option in practically sized video networks. We
also propose a
fast and fair packet scheduling algorithm that reduces queueing delay
at the nodes. We support our claims through extensive simulation on
realistic topologies with practical traffic loads and failure patterns.
Abstract: In this paper, a fuzzy controller is designed for
stabilization of the Lorenz chaotic equations. A simple Mamdani
inference method is used for this purpose. This method is very
simple, applicable to complex chaotic systems, and can be
implemented easily. The stability of the closed-loop system is
investigated using the Lyapunov stability criterion. A Lyapunov
function is
introduced and the global stability is proven. Finally, the
effectiveness of this method is illustrated by simulation results and it
is shown that the performance of the system is improved.
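For illustration, the Lorenz equations can be stabilized in simulation with a simple nonlinear state feedback; this sketch is not the paper's Mamdani fuzzy controller, and the gain k and step sizes are assumed:

```python
# Illustrative stabilization of the Lorenz equations by simple nonlinear
# state feedback (NOT the Mamdani fuzzy controller of the paper).

def stabilize_lorenz(x=1.0, y=1.0, z=1.0, sigma=10.0, rho=28.0, beta=8/3,
                     k=5.0, dt=0.001, steps=10000):
    """Euler integration of the controlled Lorenz system."""
    for _ in range(steps):
        u1 = -k * x           # extra damping on the x dynamics
        u2 = -x * (rho - z)   # cancels the destabilizing cross term in y'
        dx = sigma * (y - x) + u1
        dy = x * (rho - z) - y + u2
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
    return x, y, z

print(stabilize_lorenz())  # all three states decay toward the origin
```

With the cross term canceled, y decays exponentially, x follows it under the added damping, and z relaxes at rate beta, so the origin becomes globally attracting; a fuzzy controller achieves a comparable effect through inference rules rather than explicit term cancellation.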