Abstract: In this paper, the decomposition-aggregation method is used to derive connective stability criteria for a general linear composite system via aggregation. The large-scale system is decomposed into a number of subsystems. By associating directed graphs with dynamic systems in an essential way, we define the relation between system structure and stability in the sense of Lyapunov. The stability criteria are then expressed in terms of the stability and system matrices of the subsystems, as well as the interconnection terms among subsystems, using the concepts of vector differential inequalities and vector Lyapunov functions. We then show that the stability of each subsystem together with the stability of the aggregate model implies connective stability of the overall system. An example is reported, showing the efficiency of the proposed technique.
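To make the aggregation step concrete, the following is a minimal sketch of the standard vector Lyapunov comparison system used in connective stability analysis (generic notation, not necessarily the paper's exact formulation): v_i is the Lyapunov function of the i-th subsystem, sigma_i its decay rate, xi_ij a bound on the interconnection strength, and e_ij the binary elements of the interconnection (structure) matrix.

```latex
% Aggregate (comparison) model: v = (v_1, ..., v_s)^T collects the subsystem
% Lyapunov functions; along solutions of the interconnected system
\dot{v} \le \bar{W} v, \qquad
\bar{w}_{ii} = -\sigma_i, \qquad
\bar{w}_{ij} = \bar{e}_{ij}\,\xi_{ij} \quad (i \ne j).
% Connective stability follows if -\bar{W} is an M-matrix, i.e. all leading
% principal minors of -\bar{W} are positive for the worst-case structure
% \bar{e}_{ij} = 1 (self-interconnections absorbed into \sigma_i).
```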
Abstract: The rapid urbanization of cities has a bane in the form of road accidents that cause extensive damage to life and limb. A number of location-based factors are enablers of road accidents in a city, and the speed of travel of vehicles is non-uniform across locations within a city. In this study, the perception of vehicle users regarding the degree of variation in speed of travel at chosen locations in the city is captured on a 10-point rating scale. The average rating is used to cluster locations using fuzzy c-means clustering and to classify them as low, moderate, and high speed-of-travel locations. The high speed-of-travel locations can be identified proactively to ensure that accidents do not occur due to the speeding of vehicles at such locations. The advantage of fuzzy c-means clustering is that a location may belong to more than one cluster to a varying degree, which gives a better picture of the location with respect to the characteristic (speed of travel) being studied.
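A minimal sketch of how the clustering step might be carried out; the ratings, location count, and parameter values below are illustrative placeholders, not the study's data.

```python
import numpy as np

def fuzzy_c_means(x, c=3, m=2.0, max_iter=200, tol=1e-6, seed=0):
    """Plain fuzzy c-means on 1-D data (here, average speed-variation ratings)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(x)))
    u /= u.sum(axis=0)                                   # membership columns sum to 1
    for _ in range(max_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)              # fuzzy-weighted cluster centres
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u_new = 1.0 / d ** (2.0 / (m - 1.0))
        u_new /= u_new.sum(axis=0)
        if np.abs(u_new - u).max() < tol:
            return centers, u_new
        u = u_new
    return centers, u

# Hypothetical average ratings (10-point scale) for ten locations in the city.
ratings = np.array([2.1, 3.0, 4.8, 5.2, 6.9, 7.4, 8.8, 9.1, 3.7, 6.1])
centers, u = fuzzy_c_means(ratings, c=3)
names = np.array(["low", "moderate", "high"])[np.argsort(np.argsort(centers))]
for i, r in enumerate(ratings):
    k = u[:, i].argmax()
    print(f"location {i}: rating {r:.1f} -> {names[k]} (membership {u[k, i]:.2f})")
```

Because the memberships are fuzzy, a location with, say, membership 0.55 in the "high" cluster and 0.40 in "moderate" is flagged differently from one that belongs to "high" almost exclusively, which is the advantage highlighted in the abstract.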
Abstract: In current research, salient regions are usually defined as the regions that present the main meaningful or semantic content of an image. However, there is no uniform saliency metric that describes the saliency of implicit image regions. Most common metrics treat as salient those regions that exhibit many abrupt changes or unpredictable characteristics, but such metrics fail to detect salient regions with flat textures. In fact, according to human semantic perception, color and texture distinctions are the main characteristics that distinguish different regions. Thus, we present a novel saliency metric that couples color and texture features, together with the corresponding salient region extraction method. To evaluate the saliency values of implicit regions in an image, three main colors and multi-resolution Gabor features are used as the color and texture features, respectively. The saliency value of each region is computed as the sum of its Euclidean distances to the other regions in the color and texture spaces. A specially synthesized image and several practical images with clear salient regions are used to evaluate the performance of the proposed saliency metric against several other common metrics, i.e., scale saliency, wavelet transform modulus maxima point density, and importance-index-based metrics. Experimental results verify that the proposed saliency metric achieves more robust performance than these common saliency metrics.
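A minimal sketch of the distance-sum idea described above. The region features and the equal color/texture weighting are assumptions for illustration; the paper's actual feature extraction (three main colors, multi-resolution Gabor responses) and weighting may differ.

```python
import numpy as np

def region_saliency(color_feats, texture_feats, w_color=0.5, w_texture=0.5):
    """Saliency of each region = weighted sum of its Euclidean distances
    to all other regions in the color and texture feature spaces."""
    def pairwise_dist_sum(f):
        d = np.linalg.norm(f[:, None, :] - f[None, :, :], axis=-1)  # (n, n) distances
        return d.sum(axis=1)
    s = w_color * pairwise_dist_sum(color_feats) + w_texture * pairwise_dist_sum(texture_feats)
    return s / s.max()                                   # normalise to [0, 1]

# Hypothetical features: 4 regions, three dominant colors (9 values) and
# 8 Gabor energies per region.
rng = np.random.default_rng(1)
colors = rng.random((4, 9))
gabor = rng.random((4, 8))
print(region_saliency(colors, gabor))
```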
Abstract: This paper presents a reliability-based approach to select appropriate wind turbine types for a wind farm considering site-specific wind speed patterns. An actual wind farm in the northern region of Iran, with one year of registered wind speed data, is studied. An analytic approach based on the total probability theorem is utilized to model the probabilistic behavior of both turbine availability and wind speed. Well-known probabilistic reliability indices such as loss of load expectation (LOLE), expected energy not supplied (EENS), and incremental peak load carrying capability (IPLCC) for wind power integration in the Roy Billinton Test System (RBTS) are examined. The turbine type achieving the highest reliability level is chosen as the most appropriate for the studied wind farm.
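For reference, a minimal sketch of how LOLE and EENS follow from their standard definitions. The three-state capacity table and the load samples below are illustrative stand-ins, not the RBTS study data, and a full study would also convolve in the conventional generation outages.

```python
import numpy as np

cap_states = np.array([20.0, 10.0, 0.0])        # available wind-farm capacity per state (MW)
probs      = np.array([0.70, 0.20, 0.10])       # probability of each state (sums to 1)
load       = np.array([12.0, 15.0, 18.0, 9.0])  # hourly load samples (MW)

shortfall = np.maximum(load[None, :] - cap_states[:, None], 0.0)   # (state, hour) deficit
lole = (probs[:, None] * (shortfall > 0)).sum()      # expected hours of load loss
eens = (probs[:, None] * shortfall).sum()            # expected energy not supplied (MWh)
print(f"LOLE = {lole:.2f} h, EENS = {eens:.2f} MWh")
```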
Abstract: The world's population continues to grow at a quarter of a million people per day, increasing energy consumption and confronting the world with an energy crisis. In response, the principles of renewable energy have gained popularity, and much advancement has been made in developing wind and solar energy farms across the world. These energy farms, however, are not enough to meet the world's energy requirements, which has attracted investors to procure new substitute sources of energy. Among these sources, the extraction of energy from ocean waves is considered the best option: the world's oceans contain enough energy to meet global demand, and significant advancements in design and technology are being made to turn waves into a continuous source of energy. One major hurdle in launching wave energy devices in a developing country like Pakistan is the initial cost; a simple, reliable, and cost-effective wave energy converter (WEC) is required to meet the nation's energy needs. This paper presents a novel design, proposed by team SAS, for harnessing wave energy, and has three major sections. The first section gives a brief and concise view of ocean wave creation, propagation, and the energy carried by waves. The second section explains the design of SAS-2, in which a gear chain mechanism is used to transfer energy from the buoy to a rotary generator. The third section explains the manufacturing of a scaled-down model of SAS-2; many modifications were made during the troubleshooting stage. The design of SAS-2 is simple and requires very little maintenance. SAS-2 is producing electricity at Clifton, and its initial cost is very low. This has proved SAS-2 to be a cost-effective and reliable means of harnessing wave energy for developing countries.
Abstract: The goal of this project is to design a system to recognize voice commands. Most voice recognition systems contain two main modules: feature extraction and feature matching. In this project, the MFCC algorithm is used to implement the feature extraction module; using this algorithm, the cepstral coefficients are calculated on the mel frequency scale. Vector quantization (VQ) is used to reduce the amount of data and decrease the computation time. In the feature matching stage, the Euclidean distance is applied as the similarity criterion. Because of the high accuracy of the algorithms used, the accuracy of this voice command system is high: with at least five repetitions of each command in a single training session, and two repetitions in each testing session, a zero error rate in command recognition is achieved.
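A minimal sketch of the MFCC + VQ + Euclidean-distance pipeline described above. The file names, sampling rate, MFCC order, and codebook size are placeholders, and k-means is used here as a stand-in codebook trainer (the project may use a different VQ training method such as LBG).

```python
import numpy as np
import librosa                      # assumed available for MFCC extraction
from sklearn.cluster import KMeans  # stand-in VQ codebook trainer

def mfcc_features(path, sr=16000, n_mfcc=13):
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # (frames, coefficients)

def train_codebook(training_paths, codebook_size=16):
    """One VQ codebook per command, built from all of its training utterances."""
    feats = np.vstack([mfcc_features(p) for p in training_paths])
    return KMeans(n_clusters=codebook_size, n_init=10, random_state=0).fit(feats).cluster_centers_

def match(path, codebooks):
    """Classify an utterance as the command whose codebook gives the
    smallest average Euclidean distortion over all frames."""
    f = mfcc_features(path)
    def distortion(cb):
        d = np.linalg.norm(f[:, None, :] - cb[None, :, :], axis=-1)  # frame-to-codeword distances
        return d.min(axis=1).mean()
    return min(codebooks, key=lambda cmd: distortion(codebooks[cmd]))

# Hypothetical usage:
# codebooks = {"start": train_codebook(["start_1.wav", "start_2.wav"]),
#              "stop":  train_codebook(["stop_1.wav", "stop_2.wav"])}
# print(match("unknown.wav", codebooks))
```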
Abstract: Investment in a constructed facility represents a cost in the short term that returns benefits only over the long-term use of the facility. Thus, the costs occur earlier than the benefits, and the owners of facilities must obtain the capital resources to finance the costs of construction. A project cannot proceed without adequate financing, and the cost of providing adequate financing can be quite large. For these reasons, attention to project finance is an important aspect of project management. Finance is also a concern to the other organizations involved in a project, such as the general contractor and material suppliers: unless an owner immediately and completely covers the costs incurred by each participant, these organizations face financing problems of their own. At a more general level, project finance is only one aspect of the general problem of corporate finance; if numerous projects are considered and financed together, then the net cash flow requirements constitute the corporate financing problem for capital investment. Whether project finance is handled at the project or at the corporate level does not alter the basic financing problem. In this paper, we first consider facility financing from the owner's perspective, with due consideration for its interaction with the other organizations involved in a project. Later, we discuss the problems of construction financing, which are crucial to the profitability and solvency of construction contractors. The objective of this paper is to present the steps used to determine the best combination for minimum project financing. The proposed model considers financing, schedule, and maximum net area, and is called Project Financing and Schedule Integration using Genetic Algorithms (PFSIGA). The model is intended to determine more steps (maximum net area) for any project with subprojects. An illustrative example demonstrates the features of this technique, and model verification and testing are also addressed.
Abstract: The turbulent mixing of coolant streams of different temperature and density can cause severe temperature fluctuations in piping systems in nuclear reactors. In certain periodic contraction cycles, these conditions lead to thermal fatigue. The resulting aging effect prompts investigation into how the mixing of flows over a sharp temperature/density interface evolves. To study the fundamental turbulent mixing phenomena in the presence of density gradients, isokinetic (shear-free) mixing experiments are performed in a square channel with Reynolds numbers ranging from 2,500 to 60,000. Sucrose is used to create the density difference. A wire mesh sensor (WMS) is used to determine the concentration map of the flow in the cross section. The mean interface width as a function of velocity, density difference, and distance from the mixing point is analyzed using traditional methods originally developed for atmospheric/oceanic stratification analyses. A definition of the mixing layer thickness more appropriate to thermal fatigue, based on mixedness, is devised. This definition shows that the thermal fatigue risk assessed using simple mixing layer growth can be misleading and why an approach that separates the effects of large-scale (turbulent) and small-scale (molecular) mixing is necessary.
Abstract: This work focuses on the study of unburned carbon in ash from coal (and waste) combustion in eight combustion tests at three fluidised-bed power stations, in the co-combustion of coal and wastes (also in fluidised beds), and in a bench-scale unit simulating coal combustion in small domestic furnaces. Attention is paid to the unburned carbon content in the bottom ash and fly ash from these eight combustion tests and to the morphology of the unburned carbon. The specific surface areas of the coals, unburned carbons, and ashes, as well as the relation between the specific surface area of unburned carbon and the content of volatile combustibles in coal, were studied as well.
Abstract: In-core memory requirement is a bottleneck in solving large three-dimensional Navier-Stokes finite element formulations using sparse direct solvers. An out-of-core solution strategy is a viable alternative for reducing the in-core memory requirement while solving large-scale problems. This study evaluates the performance of various out-of-core sequential solvers based on multifrontal or supernodal techniques in the context of finite element formulations for three-dimensional problems on a Windows platform. Three different solvers, HSL_MA78, MUMPS, and PARDISO, are compared. Their performance is evaluated on a 64-bit machine with 16 GB RAM for a finite element formulation of flow through a rectangular channel. It is observed that, using the out-of-core PARDISO solver, relatively large problems can be solved. The implementation of the Newton and modified Newton iterations is also discussed.
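For readers unfamiliar with the distinction, the following is a small self-contained sketch of Newton versus modified Newton iteration on a toy nonlinear system; it is not the paper's Navier-Stokes formulation, and the dense solve below merely stands in for the out-of-core direct solvers (HSL_MA78, MUMPS, PARDISO) discussed above.

```python
import numpy as np

def newton(F, J, u0, tol=1e-10, max_iter=50, modified=False):
    """Full Newton re-evaluates the Jacobian every iteration; modified Newton
    assembles (and would factorise) it once and reuses it, which is cheaper
    per step for large sparse systems at the cost of slower convergence."""
    u = u0.copy()
    Jmat = J(u)                         # assembled once for modified Newton
    for _ in range(max_iter):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        if not modified:
            Jmat = J(u)                 # full Newton: fresh Jacobian each step
        u -= np.linalg.solve(Jmat, r)   # direct solve (stand-in for the sparse solver)
    return u

# Tiny illustrative system: F(u) = [u0 + 0.1*u1^2 - 1, u1 + 0.1*u0^2 - 2].
F = lambda u: np.array([u[0] + 0.1 * u[1] ** 2 - 1, u[1] + 0.1 * u[0] ** 2 - 2])
J = lambda u: np.array([[1.0, 0.2 * u[1]], [0.2 * u[0], 1.0]])
print(newton(F, J, np.array([0.0, 0.0])))
print(newton(F, J, np.array([0.0, 0.0]), modified=True))
```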
Abstract: The dynamics of User Datagram Protocol (UDP) traffic over Ethernet between two computers are analyzed using nonlinear dynamics, which shows that there are two clear regimes in the data flow: free flow and saturated. The two most important variables affecting this are the packet size and the packet flow rate. However, the transition between the regimes is due to a transcritical bifurcation rather than a phase transition, as found in models of vehicle traffic or of theorized large-scale computer network congestion. It is hoped this model will help lay the groundwork for further research on the dynamics of networks, especially computer networks.
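For reference, the textbook normal form of a transcritical bifurcation is sketched below; this is the generic form of the bifurcation type named in the abstract, not the paper's fitted traffic model, and the reading of the parameter as "offered load" is only an assumption for intuition.

```latex
% Transcritical bifurcation normal form: the equilibria x^* = 0 and x^* = r
% exchange stability as the control parameter r (loosely, the offered load)
% crosses zero.
\dot{x} = r x - x^{2}
```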
Abstract: Network layer multicast, i.e. IP multicast, even after many years of research, development, and standardization, is not deployed on a large scale due to both technical (e.g. upgrading of routers) and political (e.g. policy making and negotiation) issues. Researchers have looked for alternatives and proposed application/overlay multicast, where multicast functions are handled by end hosts rather than network layer routers. Member hosts wishing to receive multicast data form a multicast delivery tree, and the intermediate hosts in the tree also act as routers, i.e. they forward data to the hosts below them. Unlike IP multicast, where a router cannot leave the tree until all members below it leave, in overlay multicast any member can leave the tree at any time, thus partitioning the tree and disrupting the data dissemination, and all the disrupted hosts have to rejoin the tree. This characteristic of overlay multicast causes multicast tree instability, data loss, and rejoin overhead. In this paper, we propose that each node set its leaving time from the tree and send its join request to a number of nodes in the tree. The nodes in the tree reject the request if their leaving time is earlier than that of the requesting node; otherwise they accept it, and the node can join at one of the accepting nodes. This makes the tree more stable, as the nodes join the tree according to their leaving time, with the earliest-leaving nodes placed at the leaves of the tree. Some intermediate nodes may not follow their declared leaving time and may leave earlier, thus disrupting the tree; for this case, we propose a proactive recovery mechanism so that disrupted nodes can rejoin the tree at predetermined nodes immediately. We show by simulation that the overhead of joining the multicast tree is lower and the recovery time of the disrupted nodes is much shorter than in previous works.
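A minimal sketch of the accept/reject rule described above. Choosing the acceptor with the latest leaving time as the parent is an assumption for illustration; the paper only states that the requester joins at one of the accepting nodes.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Overlay multicast member; leaving_time is the departure time it announces."""
    name: str
    leaving_time: float
    children: list = field(default_factory=list)

def accepts(member: Node, requester: Node) -> bool:
    # A node rejects a join request if it plans to leave before the requester,
    # so longer-staying nodes end up closer to the root of the delivery tree.
    return member.leaving_time >= requester.leaving_time

def join(tree_nodes, requester):
    """Send the join request to candidate nodes and attach to one acceptor."""
    acceptors = [n for n in tree_nodes if accepts(n, requester)]
    if not acceptors:
        return None                          # fall back to the recovery path
    parent = max(acceptors, key=lambda n: n.leaving_time)
    parent.children.append(requester)
    return parent

# Hypothetical usage: leaving times announced in seconds from now.
root = Node("root", leaving_time=3600)
a, b = Node("A", 1800), Node("B", 600)
join([root], a); join([root, a], b)
print(join([root, a, b], Node("C", 1200)).name)  # C attaches to a node leaving no earlier than it
```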
Abstract: By introducing the concept of an Oracle, we propose an approach for improving the performance of genetic algorithms on large-scale asymmetric Traveling Salesman Problems. The results show that the proposed approach overcomes some traditional obstacles to creating efficient genetic algorithms.
Abstract: This paper presents a mathematical model and a
methodology to analyze the losses in transmission expansion
planning (TEP) under uncertainty in demand. The methodology is
based on discrete particle swarm optimization (DPSO). DPSO is a
useful and powerful stochastic evolutionary algorithm to solve the
large-scale, discrete and nonlinear optimization problems like TEP.
The effectiveness of the proposed idea is tested on an actual
transmission network of the Azerbaijan regional electric company,
Iran. The simulation results show that considering losses, even in the transmission expansion planning of a network with low load growth, decreases operational costs considerably and enables the network to deliver electric power to load centers more reliably.
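For orientation, a generic binary (discrete) PSO skeleton is sketched below. It follows the standard sigmoid-based binary PSO update and uses a toy cost function; the paper's DPSO variant, the TEP objective with losses, and the network constraints are not reproduced here.

```python
import numpy as np

def binary_pso(cost, n_bits, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic binary PSO: each bit of a particle (e.g. 'build candidate line k
    or not') is sampled from a sigmoid of its velocity."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=(n_particles, n_bits))
    v = np.zeros((n_particles, n_bits))
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(v.shape), rng.random(v.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = (rng.random(v.shape) < 1.0 / (1.0 + np.exp(-v))).astype(int)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        g = pbest[pbest_cost.argmin()].copy()
    return g, pbest_cost.min()

# Toy stand-in objective: investment cost of selected lines plus a penalty when
# fewer than three lines are built (a real TEP cost would add expected losses
# and load-flow constraints).
line_cost = np.array([4, 6, 3, 5, 7, 2])
toy_cost = lambda plan: plan @ line_cost + 50 * max(0, 3 - plan.sum())
print(binary_pso(toy_cost, n_bits=6))
```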
Abstract: In most cases, natural disasters lead to the necessity of evacuating people. The quality of evacuation management is dramatically improved by the use of information provided by decision support systems, which become indispensable in large-scale evacuation operations. This paper presents a best-practice case study. In November 2007, officers from the Emergency Situations Inspectorate "Crisana" of Bihor County, Romania, participated in a cross-border evacuation exercise in which 700 people were evacuated from the Netherlands to Belgium. One of the main objectives of the exercise was to test four different decision support systems. Afterwards, based on that experience, a software system called TEVAC (Trans-Border Evacuation) was developed in-house by the experts of this institution. This original software system was successfully tested in September 2008 during the international exercise EU-HUROMEX 2008, whose scenario involved the real evacuation of 200 persons from Hungary to Romania. Based on the lessons learned and the results, since April 2009 the TEVAC software has been used by all Emergency Situations Inspectorates across Romania.
Abstract: In the present study, heterogeneous and homogeneous gas flow dispersion models were developed for the simulation and optimisation of a large-scale catalytic slurry reactor for the direct synthesis of dimethyl ether (DME) from syngas and CO2, operating in the churn-turbulent regime. In the heterogeneous gas flow model, the gas phase was distributed into two bubble phases, small and large, whereas in the homogeneous model the gas phase was distributed into a single large bubble phase. The results indicated that the heterogeneous gas flow model agreed better with the experimental pilot plant data than the homogeneous one.
Abstract: The purpose of this study was to develop a "teachers' self-efficacy scale for high school physical education teachers (TSES-HSPET)" in Taiwan. The scale is based on the self-efficacy theory of Bandura [1], [2]. This study used exploratory and confirmatory factor analyses to test reliability and validity. The participants were high school physical education teachers in Taiwan, sampled using both stratified random sampling and cluster sampling. In the first stage, 350 teachers were sampled and 234 valid scales (133 male, 101 female) were returned. In the second stage, 350 teachers were sampled and 257 valid scales (143 male, 110 female, 4 did not indicate gender) were returned. Exploratory factor analysis was used in the first stage and accounted for 60.77% of the total variance, supporting construct validity. The Cronbach's alpha coefficient of internal consistency was 0.91 for the total scale, and 0.84 and 0.90 for the subscales. In the second stage, confirmatory factor analysis was used to test construct validity. The results showed that the fit indices were acceptable (χ2 (75) = 167.94, p
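For reference, the internal-consistency coefficient reported above can be computed from the standard Cronbach's alpha formula; the small response matrix below is purely illustrative, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (respondents, items) matrix of item scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical responses from six teachers on four items of a subscale.
demo = np.array([[4, 5, 4, 5], [3, 3, 4, 3], [5, 5, 5, 4],
                 [2, 3, 2, 3], [4, 4, 5, 4], [3, 4, 3, 3]])
print(round(cronbach_alpha(demo), 2))
```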
Abstract: Proper orthogonal decomposition (POD) is used to reconstruct spatio-temporal data of a fully developed turbulent channel flow with density variation at a Reynolds number of 150, based on the friction velocity and the channel half-width, and a Prandtl number of 0.71. To apply POD to this flow, the flow field (velocities, density, and temperature) is scaled by the corresponding root mean square (rms) values so that it becomes dimensionless. A five-vector POD problem is solved numerically. The second-order moments of velocity, temperature, and density reconstructed from the POD eigenfunctions compare favorably with the original direct numerical simulation (DNS) data.
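A minimal sketch of a generic snapshot-POD reconstruction via the SVD; the random data stand in for the rms-scaled DNS snapshots, and this is not the paper's specific five-vector eigenvalue formulation.

```python
import numpy as np

def snapshot_pod(snapshots, n_modes):
    """snapshots: (n_points, n_snapshots) matrix whose columns stack the
    rms-scaled flow variables (velocities, density, temperature) per snapshot."""
    mean = snapshots.mean(axis=1, keepdims=True)
    fluct = snapshots - mean
    # Thin SVD: columns of U are the POD modes, ordered by decreasing energy.
    U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
    coeffs = np.diag(s[:n_modes]) @ Vt[:n_modes]          # temporal coefficients
    recon = mean + U[:, :n_modes] @ coeffs                # low-order reconstruction
    return U[:, :n_modes], coeffs, recon

# Hypothetical small data set: 200 spatial points, 50 snapshots.
rng = np.random.default_rng(0)
data = rng.standard_normal((200, 50))
modes, a, recon = snapshot_pod(data, n_modes=10)
print(modes.shape, a.shape, np.linalg.norm(data - recon) / np.linalg.norm(data))
```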
Abstract: Understanding the cell's large-scale organization is an interesting task in computational biology, and protein-protein interactions can reveal important aspects of the organization and function of the cell. Here, we investigated the correspondence between protein interactions and function in yeast. We obtained the correlations among the set of proteins and then clustered these correlations using both hierarchical and biclustering methods. Detailed analyses of the proteins in each cluster were carried out using their functional annotations. As a result, we found that some functional classes appear together in almost all biclusters, whereas in hierarchical clustering the dominance of one functional class is observed. In brief, moving from interaction data to function, some correlated results are observed concerning the relationship between interaction and function, which might give clues about the organization of the proteins.
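A minimal sketch of the hierarchical-clustering step applied to a protein correlation matrix; the matrix below is a made-up example, and the choice of average linkage and the correlation-to-distance conversion are assumptions, not the study's settings.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical correlation matrix among five proteins (the study derives its
# correlations from protein-protein interaction data).
corr = np.array([[1.0, 0.9, 0.2, 0.1, 0.3],
                 [0.9, 1.0, 0.3, 0.2, 0.2],
                 [0.2, 0.3, 1.0, 0.8, 0.7],
                 [0.1, 0.2, 0.8, 1.0, 0.9],
                 [0.3, 0.2, 0.7, 0.9, 1.0]])
dist = 1.0 - corr                           # convert correlation to a dissimilarity
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist), method="average")
print(fcluster(Z, t=2, criterion="maxclust"))   # two clusters of correlated proteins
```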
Abstract: In image processing and visualization, two bitmapped images are compared by matching them pixel by pixel, which takes a lot of computational time, whereas the comparison of two vector-based images is significantly faster. Raster graphics images can sometimes be approximately converted into vector-based images by various techniques; after conversion, the problem of comparing two raster graphics images can be reduced to the problem of comparing vector graphics images, and hence the problem of pixel-by-pixel comparison can be reduced to the problem of polynomial comparison. In computer aided geometric design (CAGD), vector graphics images are compositions of curves and surfaces, and curves are defined by a sequence of control points and their polynomials. In this paper, the control points are used to compare curves: curves that have been relocated or rotated are treated as equivalent, while curves that differ only in scale are considered similar. This paper proposes an algorithm for comparing polynomial curves by using their control points for equivalence and similarity. In addition, the geometric object-oriented database used to keep the curve information has been defined in XML format for further use in curve comparisons.
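A minimal sketch of how equivalence (up to relocation/rotation) and similarity (up to uniform scaling) could be tested on matched control-point sequences. The Procrustes-style alignment below is a stand-in for the paper's algorithm: it assumes the control points of the two curves are already in corresponding order and does not handle reflections.

```python
import numpy as np

def align(P, Q, allow_scaling=False):
    """Residual after the best rigid (optionally uniformly scaled) alignment of
    control-point set P onto Q, via the Kabsch/Procrustes construction."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)              # remove translation
    U, s, Vt = np.linalg.svd(Pc.T @ Qc)
    R = U @ Vt                                         # optimal rotation
    scale = s.sum() / (Pc ** 2).sum() if allow_scaling else 1.0
    return np.linalg.norm(scale * Pc @ R - Qc)

def classify(P, Q, tol=1e-8):
    if align(P, Q) < tol:
        return "equivalent"        # same curve up to relocation/rotation
    if align(P, Q, allow_scaling=True) < tol:
        return "similar"           # same curve up to uniform scaling as well
    return "different"

# Hypothetical cubic Bezier control points and a rotated, translated, scaled copy.
P = np.array([[0, 0], [1, 2], [3, 3], [4, 0]], float)
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
print(classify(P, P @ R.T + 5))            # -> equivalent
print(classify(P, 2.0 * (P @ R.T) + 5))    # -> similar
```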