Abstract: This study estimates the seismic demands of tall
buildings with centrally symmetric setbacks using nonlinear time
history analysis. Three setback structures, each 60 stories high
with setbacks at three levels, are used for the evaluation. The
effects of the irregularities introduced by the setbacks are
evaluated through the global drift, story displacements, and story
drifts. Story displacements are normalized by the roof and
first-story displacements, and story drifts are normalized by the
global drift. All results are calculated at the center of mass in
the x and y directions, and the absolute values of these quantities
are also determined. The results show that increasing vertical
irregularity increases the global drift of the structure and
amplifies the deformations over its height. It is also observed that
geometric irregularity affects the seismic deformations of setback
structures more strongly than mass irregularity.
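The drift post-processing described above is stated only qualitatively; one plausible reading (uniform story height assumed, names illustrative) can be sketched as:

```python
def drift_profiles(displacements, story_height):
    """Global drift and normalized story drifts from a lateral
    displacement profile (base to roof).  Story drift ratios are
    divided by the global drift, as the abstract describes.
    Assumes a uniform story height (an illustrative assumption)."""
    n_stories = len(displacements) - 1
    global_drift = displacements[-1] / (n_stories * story_height)
    story_drifts = [(displacements[i] - displacements[i - 1]) / story_height
                    for i in range(1, n_stories + 1)]
    return global_drift, [d / global_drift for d in story_drifts]
```

For a perfectly linear profile every normalized story drift equals 1, so deviations from unity highlight where the setbacks concentrate deformation.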
Abstract: The aim of this paper is to select the most accurate
forecasting method for predicting the future values of the
unemployment rate in selected European countries. To this end,
several forecasting techniques suitable for time series with a trend
component were selected, namely double exponential smoothing (also
known as Holt's method) and the Holt-Winters method, which accounts
for both trend and seasonality. The results of the empirical
analysis showed that the optimal model for forecasting the
unemployment rate in Greece was the additive Holt-Winters method. In
the case of Spain, according to MAPE, the optimal model was double
exponential smoothing. For Croatia and Italy, the best forecasting
model was the multiplicative Holt-Winters method, whereas for
Portugal the best model was again double exponential smoothing. Our
findings are in line with European Commission unemployment rate
estimates.
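As an illustration of the first technique, a minimal sketch of Holt's double exponential smoothing, together with the MAPE criterion used for model selection, might look as follows; the smoothing constants and initialization are illustrative assumptions, not the values used in the paper:

```python
def holt_forecast(y, alpha, beta, horizon):
    """Holt's double exponential smoothing (additive trend).
    alpha/beta are the level/trend smoothing constants."""
    level, trend = y[0], y[1] - y[0]          # simple initialization
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + h * trend for h in range(1, horizon + 1)]

def mape(actual, forecast):
    """Mean absolute percentage error, a common selection criterion."""
    return 100 * sum(abs((a - f) / a)
                     for a, f in zip(actual, forecast)) / len(actual)
```

On a perfectly linear series the method reproduces the trend exactly; on real unemployment series, alpha and beta would be tuned to minimize MAPE over a hold-out period.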
Abstract: The elastic period plays a primary role in the seismic
assessment of buildings. Reliable calculations and/or estimates of
the fundamental frequency of a building and its site are essential
during the analysis and design process. Various code formulas based
on empirical data are generally used to estimate the fundamental
frequency of a structure. For existing structures, in addition to
code formulas and available analytical tools such as modal analysis,
various testing methods, including ambient and forced vibration
testing procedures, may be used to determine dynamic characteristics.
In this study, the dynamic properties of 32 buildings located in
Madinah, Saudi Arabia were identified using ambient motions
recorded at several spatially distributed locations within each
building. The ambient vibration measurements were analyzed, and the
fundamental longitudinal and transverse periods of all tested
buildings are presented. The fundamental modes of vibration are
compared in plots with code formulas (Saudi Building Code, EC8, and
UBC 1997). The results indicate that the measured periods of
existing buildings are shorter than those given by most empirical
code formulas. Recommendations are given based on common design and
construction practice in Madinah city.
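The code formulas referred to above typically take the form T = Ct · H^0.75 with H the building height in metres. The sketch below uses the common metric coefficients associated with EC8 and UBC 1997 Method A; the exact coefficient for a given structural system should be taken from the governing code, so treat these values as illustrative:

```python
# Illustrative coefficients for T = Ct * H**0.75 (H in metres);
# check the governing code before using them in design.
CT = {
    "rc_moment_frame": 0.075,     # reinforced-concrete moment frames
    "steel_moment_frame": 0.085,  # steel moment-resisting frames
    "other": 0.050,               # e.g. shear-wall buildings
}

def fundamental_period(height_m, system="other"):
    """Estimate the fundamental period (s) of a building from its
    height using the empirical power-law code formula."""
    return CT[system] * height_m ** 0.75
```

Comparing such estimates with measured ambient-vibration periods is exactly the kind of check the study performs.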
Abstract: Construction cost estimation is one of the most
important aspects of construction project design. For generations,
the process of cost estimating has been manual, time-consuming, and
error-prone. This has contributed to cost estimates that are often
unclear and riddled with inaccuracies, at times leading to over- or
underestimation of construction cost. The development of standard
sets of measurement rules understandable by all those involved in a
construction project has not fully resolved these challenges.
Emerging Building Information Modelling (BIM) technologies can
exploit standard measurement methods to automate the cost estimation
process and improve accuracy. This requires standard measurement
methods to be structured in an ontological, machine-readable format
so that BIM software packages can easily read them. Most standard
measurement methods are still text-based in textbooks and require
manual transfer into tables or spreadsheets during cost estimation.
The aim of this study is to explore the development of an ontology
based on the New Rules of Measurement (NRM), commonly used in the UK
for cost estimation. The methodology adopted is Methontology, one of
the most widely used ontology engineering methodologies. The
challenges encountered in this exploratory study are also reported
and recommendations for future studies proposed.
Abstract: The shortfall of electrical energy in Pakistan is a
challenge adversely affecting its industrial output and social
growth. As elsewhere, Pakistan derives its electrical energy from a
number of conventional sources. The exhaustion of petroleum and
other conventional resources and their rising costs, coupled with
extremely adverse climatic effects, are taking their toll,
especially on under-developed countries such as Pakistan. Renewable
energy sources such as hydropower, solar, wind, and even bio-energy,
or a mix of some or all of them, could provide a credible
alternative to conventional energy resources that would be not only
cleaner but also sustainable. As a model, a solar-energy-based power
grid for the near future is proposed to offset the energy
shortfalls, in combination with our existing sustainable natural
energy resources. An assessment of the solar energy potential for
electricity generation is presented for meeting energy demands with
a higher level of reliability and sustainability. This model is
based on the premise that the solar energy potential of Pakistan is
not only reliable but also sustainable. This research estimates the
present and future renewable energy resources, in particular the
impact of a solar-energy-based power grid in mitigating the energy
shortage in Pakistan.
Abstract: Factors affecting construction unit cost vary
depending on a country's political, economic, social, and
technological inclinations, and they have been studied from various
perspectives. Analysis of cost factors requires an appreciation of a
country's practices, and the identified factors provide an
indication of a country's construction economic strata. The purpose
of this paper is to identify the essential factors that affect unit
cost estimation and their breakdown using artificial neural
networks. Twenty-five identified cost factors in road construction
were subjected to a questionnaire survey and, using SPSS factor
analysis, reduced to eight. These eight factors were analysed using
a neural network (NN) to determine the proportionate breakdown of
the cost factors in a given construction unit rate. The NN predicted
that the political environment accounted for 44% of the unit rate,
followed by contractor capacity at 22%, and financial delays,
project feasibility, and overheads and profit at 11% each. Project
location, material availability, and the corruption perception index
had minimal impact on the unit cost in the training data provided.
Quantified cost factors can be incorporated into unit cost
estimation models (UCEM) to produce more accurate estimates. This
can improve the cost estimation of infrastructure projects and
establish a benchmark standard to assist in aligning work practices
and training new staff, permitting the ongoing development of more
effective cost estimation practices.
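The paper does not state how the proportionate breakdown is extracted from the trained network, but a common approach for a single-hidden-layer NN is Garson's weight-partitioning algorithm, sketched here under that assumption:

```python
def garson_importance(w_ih, w_ho):
    """Garson's algorithm: relative importance of each input to a
    single-output MLP, computed from the absolute input-hidden and
    hidden-output weights.  w_ih[i][j] connects input i to hidden
    unit j; w_ho[j] connects hidden unit j to the output.  Assumes
    each hidden unit has at least one nonzero input weight."""
    n_in, n_hid = len(w_ih), len(w_ho)
    contrib = [0.0] * n_in
    for j in range(n_hid):
        col = sum(abs(w_ih[i][j]) for i in range(n_in))
        for i in range(n_in):
            contrib[i] += (abs(w_ih[i][j]) / col) * abs(w_ho[j])
    total = sum(contrib)
    return [c / total for c in contrib]  # fractions summing to 1
```

The returned fractions play the role of the percentage breakdown (44%, 22%, 11%, ...) reported in the abstract.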
Abstract: The objective of meta-analysis is to combine results
from several independent studies in order to generalize and provide
an evidence base for decision making. However, recent studies show
that the magnitude of effect-size estimates reported in many areas
of research changes significantly over time, which can impair the
results and conclusions of a meta-analysis. A number of sequential
methods have been proposed for monitoring effect-size estimates in
meta-analysis, but they are based on statistical theory applicable
only to the fixed-effect model (FEM). For the random-effects model
(REM), the analysis incorporates the heterogeneity variance τ²,
whose estimation creates complications. In this paper we study the
use of a truncated CUSUM-type test with asymptotically valid
critical values for sequential monitoring under the REM. Simulation
results show that the test does not control the Type I error well
and is not recommended. Further work is required to derive an
appropriate test in this important area of application.
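The flavor of a CUSUM-type monitor for a stream of effect estimates can be illustrated as follows; the function, its scaling, and the threshold are illustrative assumptions (the paper derives its critical values asymptotically):

```python
import math

def cusum_monitor(effects, variances, baseline, threshold):
    """Illustrative CUSUM-type monitor for a stream of effect-size
    estimates.  `effects`/`variances`: per-study estimates and their
    variances, in publication order; `baseline` is the reference
    (e.g. pooled) effect.  Signals True for each step at which the
    standardized cumulative deviation exceeds `threshold`."""
    cusum, signals = 0.0, []
    for k, (y, v) in enumerate(zip(effects, variances), start=1):
        cusum += (y - baseline) / math.sqrt(v)  # standardized deviation
        stat = abs(cusum) / math.sqrt(k)        # scale by sqrt(#studies)
        signals.append(stat > threshold)
    return signals
```

A stable stream never signals, while a drift in the reported effect sizes (the phenomenon the abstract describes) pushes the statistic across the threshold.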
Abstract: Currently, seismic probabilistic risk assessments
(SPRA) for nuclear facilities use In-Structure Response Spectra
(ISRS) in the calculation of fragilities for systems and components.
ISRS are calculated via dynamic analyses of the host building
subjected to two orthogonal components of horizontal ground motion,
each defined as the median motion in any horizontal direction.
Structural engineers apply the components along selected X and Y
Cartesian axes, and the ISRS at different locations in the building
are likewise calculated in the X and Y directions. The directions of
X and Y are not specified by the ground motion model with respect to
geographic coordinates, but are instead selected somewhat
arbitrarily by the structural engineer. Normally, X and Y coincide
with the “principal” axes of the building, in the understanding that
this practice is generally conservative. For SPRA purposes, however,
it is desirable to remove any conservatism from the estimates of
median ISRS. This paper examines the effects of the direction of
horizontal seismic motion on the ISRS of a typical nuclear
structure. We also evaluate the variability of ISRS calculated along
different horizontal directions. Our results indicate that some
central measures of the ISRS provide robust estimates that are
practically independent of the selection of the directions of the
horizontal Cartesian axes.
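The paper's own procedure is not given here, but the idea of a direction-independent central measure can be illustrated with a RotD-style statistic: rotate the horizontal component pair through a sweep of azimuths, record the peak of each rotated trace, and take a percentile over azimuths. Everything below (function name, 50th-percentile default, angle count) is an illustrative assumption:

```python
import math

def rotd_percentile(ax, ay, pct=50, n_angles=180):
    """Orientation-independent intensity: rotate the two horizontal
    component time series through n_angles azimuths, take the peak
    absolute value of each rotated trace, and return the requested
    percentile of those peaks."""
    peaks = []
    for i in range(n_angles):
        th = math.pi * i / n_angles
        peaks.append(max(abs(x * math.cos(th) + y * math.sin(th))
                         for x, y in zip(ax, ay)))
    peaks.sort()
    idx = min(n_angles - 1, int(round(pct / 100 * (n_angles - 1))))
    return peaks[idx]
```

For purely circular motion every azimuth sees the same peak, so the statistic is exactly direction-independent, which is the robustness property the abstract reports for central measures of the ISRS.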
Abstract: The purpose of this study was to investigate
perceptions of climate change risk to forest ecosystems and
forest-based communities, as well as the perceived effectiveness of,
and challenges to, climate change adaptation strategies. Data were
gathered using a pre-tested semi-structured questionnaire with a
simple random sampling technique. For the majority of issues,
responses were obtained on multi-point Likert scales, and the scores
provided were in turn used to estimate means and other useful
statistics. A composite knowledge index developed from correct
responses to a set of self-rated statements was used to evaluate the
issues. The mean of the knowledge index was 0.64, and all
respondents recorded values above 0.25. Increased forest fire was
perceived by respondents as the greatest risk to forest ecosystems,
and decreased access to water supplies as the greatest risk to the
livelihoods of forest-based communities. The adaptation strategy
perceived as most effective against climate change risks to forest
ecosystems and forest-based community livelihoods in the Kathmandu
Valley of Nepal was reforestation and afforestation, while lack of
public awareness was perceived as the major limitation for climate
change adaptation. However, perceived risks and effective adaptation
strategies showed an inconsistent association with knowledge
indicators and socio-cultural variables. The results provide useful
information to any party involved with climate change issues in
Nepal, since such efforts would be more effective if people's
perceptions of these aspects were taken into account.
Abstract: At-site flood frequency analysis is used to estimate
flood quantiles when at-site record length is reasonably long. In
Australia, FLIKE software has been introduced for at-site flood
frequency analysis. The advantage of FLIKE is that, for a given
application, the user can compare a number of most commonly
adopted probability distributions and parameter estimation methods
relatively quickly using a Windows interface. The new version of
FLIKE incorporates the multiple Grubbs-Beck test, which can identify
multiple potentially influential low flows. This paper presents a
case study of six catchments in eastern Australia comparing two
outlier identification tests (the original Grubbs-Beck test and the
multiple Grubbs-Beck test) and two commonly applied probability
distributions (Generalized Extreme Value (GEV) and Log Pearson Type
3 (LP3)) using the FLIKE software. It has been found that the
multiple Grubbs-Beck test, when used with the LP3 distribution,
provides more accurate flood quantile estimates than the LP3
distribution with the original Grubbs-Beck test. Between these two
methods, the
differences in flood quantile estimates have been found to be up to
61% for the six study catchments. It has also been found that GEV
distribution (with L moments) and LP3 distribution with the multiple
Grubbs and Beck test provide quite similar results in most of the
cases; however, a difference up to 38% has been noted for flood
quantiles for annual exceedance probability (AEP) of 1 in 100 for one
catchment. This finding needs to be confirmed with a greater number
of stations across other Australian states.
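For readers unfamiliar with LP3 quantiles, the textbook frequency-factor route (Wilson-Hilferty approximation) can be sketched as below. This is not the Bayesian fitting performed by FLIKE, and the sample-skew correction used here is an illustrative choice:

```python
import math
import statistics
from statistics import NormalDist

def lp3_quantile(flows, aep):
    """Approximate Log Pearson Type 3 flood quantile for a given
    annual exceedance probability (e.g. aep=0.01 for the 1-in-100
    flood), via the Wilson-Hilferty frequency factor."""
    logs = [math.log10(q) for q in flows]
    n = len(logs)
    mean, sd = statistics.fmean(logs), statistics.stdev(logs)
    # sample skew of the log flows with small-sample correction
    skew = (n / ((n - 1) * (n - 2))) * sum(((x - mean) / sd) ** 3
                                           for x in logs)
    z = NormalDist().inv_cdf(1 - aep)    # standard normal quantile
    if abs(skew) < 1e-9:
        k = z                            # zero skew: lognormal case
    else:
        k = (2 / skew) * ((1 + skew * z / 6 - skew ** 2 / 36) ** 3 - 1)
    return 10 ** (mean + k * sd)
```

Removing low outliers flagged by a Grubbs-Beck-type test changes the mean, standard deviation, and especially the skew of the log flows, which is why the quantile differences reported above can reach tens of percent.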
Abstract: Ancillary services are support services essential for
maintaining and enhancing the reliability and security of the
electric power system. Reactive power ancillary service is one of
the important ancillary services in a restructured electricity
market; it determines the cost of supplying ancillary services and
how this cost changes with operating decisions. This paper presents
a new formulation that can be used to minimize the Independent
System Operator (ISO)'s total payment for the reactive power
ancillary service. A modified power flow tracing algorithm estimates
the reactive power reserve available for the ancillary service. To
find the optimum reactive power dispatch, a Biogeography-Based
Optimization (BBO) method is proposed. The Market Reactive Clearing
Price (MRCP) is then estimated, which encourages generator companies
(GENCOs) to participate in the ancillary service. Finally, the
optimal weighting factor and real-time utilization factor of
reactive power yield the minimum total payment for the ISO. The
effectiveness of the proposed design is verified using the IEEE
30-bus system.
Abstract: In a MANET, mobile nodes communicate with each other
over a wireless channel where transmission is subject to significant
interference. The wireless medium is a shared resource used by all
nodes in the network. Packet reserving is an important resource
management scheme that controls the allocation of bandwidth among
multiple flows through node cooperation. This paper proposes packet
reserving and congestion control via a Routing Aware Packet
Reserving (RAPR) framework for MANETs, focusing mainly on end-to-end
routing conditions with maximal throughput. RAPR is a complementary
scheme in which packet reserving utilizes the local routing
information available in each node. Path setup in RAPR estimates the
security level of the system and characterizes end-to-end routing by
controlling congestion. RAPR delivers packets to the destination
with a high delivery probability and minimal delay. Standard
performance measures such as network security level, communication
overhead, end-to-end throughput, resource utilization efficiency,
and delay are considered in this work. The results reveal that the
proposed RAPR framework outperforms existing methods with respect to
these measures.
Abstract: A total of 150 meat-type chickens, comprising 50 each
of the Arbor Acre, Marshall, and Ross strains, were used for this
study, which lasted 10 weeks at the Federal University of
Agriculture, Abeokuta, Nigeria. Growth performance data were
collected from the third week through week 10 and analysed using the
Generalized Linear Model procedure. Heritability estimates (h²) for
body dimensions in the chicken strains ranged from low to high. The
Marshall broiler strain had the highest h² for body weight
(0.46±0.04), followed by Arbor Acre and Ross with h² of 0.38±0.12
and 0.26±0.06, respectively. The repeatability estimates for body
weight in the three broiler strains were high, ranging from 0.70 at
week 4 to 0.88 at week 10. Relationships between body weight and
linear body measurements in the broiler strains were positive and
highly significant (p < 0.05).
Abstract: An analysis of the Australian Diabetes Screening
Study estimated the prevalence of undiagnosed diabetes mellitus (DM)
in a high-risk, general-practice-based cohort. DM prevalence varied
from 9.4% to 18.1% depending upon the diagnostic criteria utilised,
with age being a highly significant risk factor. Using the
gold-standard oral glucose tolerance test, the prevalence of DM was
22-23% in those aged >= 70 years and
Abstract: A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and AD level; in this situation, both should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the levels of data on overall meta-analysis estimates based on IPD only, AD only, and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), root mean square error (RMSE), and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as it significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not significantly change the accuracy of the estimates. Additionally, combining IPD and AD moderates the bias of the treatment-effect estimates, as IPD tends to overestimate the treatment effects while AD tends to produce underestimated effect estimates. These results may provide some guidance in deciding whether significant benefit is gained by pooling the two levels of data when conducting a meta-analysis.
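A minimal sketch of how IPD can be folded into a summary-level analysis, assuming continuous outcomes and fixed-effect inverse-variance pooling (the paper's simulation design is more elaborate):

```python
import statistics

def ipd_to_ad(treat, control):
    """Reduce one study's IPD (per-patient outcomes in two arms) to
    aggregate data: the mean difference and its variance."""
    md = statistics.fmean(treat) - statistics.fmean(control)
    var = (statistics.variance(treat) / len(treat)
           + statistics.variance(control) / len(control))
    return md, var

def pool_fixed(estimates):
    """Inverse-variance fixed-effect pooling of (effect, variance)
    pairs, whether they came from IPD reduction or published AD."""
    weights = [1 / v for _, v in estimates]
    pooled = sum(w * e for (e, _), w in zip(estimates, weights)) / sum(weights)
    return pooled, 1 / sum(weights)
```

Once each IPD study is reduced to an (effect, variance) pair it enters the pool on the same footing as AD studies, which is the mixed-data (MD) setting the abstract compares against IPD-only and AD-only analyses.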
Abstract: This paper describes a new, efficient blind source separation method that uses a non-uniform filter bank and a new structure with different sub-bands. The method provides reduced permutation and increased convergence speed compared with the full-band algorithm. Recently, several structures have been suggested to deal with two problems: reducing permutation and increasing the convergence speed of the adaptive algorithm for correlated input signals. The permutation problem is avoided by using adaptive filters of lower order than the full-band adaptive filter, operating at a sampling rate lower than that of the input signal. The signals decomposed by the analysis filter bank are less correlated in each sub-band than the full-band input signal, which promotes better convergence rates.
Abstract: A comprehensive sensitivity analysis of stresses in a concrete slab of a real rigid pavement made from recycled materials is performed. The computational model of the pavement is a spatial (3D) model based on a nonlinear variant of the finite element method; it respects the structural nonlinearity, enables different arrangements of joints to be modelled, and allows the entire model to be subjected to thermal load. The interaction of adjacent slabs at the joints and the contact between the slab and the underlying layer are modelled with the help of special contact elements. Four concrete slabs separated by transverse and longitudinal joints, together with the additional structural layers and the soil to a depth of about 3 m, are modelled. The thicknesses of the individual layers, the physical and mechanical properties of the materials, the characteristics of the joints, and the temperatures of the upper and lower surfaces of the slabs are treated as random variables. The modern simulation technique Updated Latin Hypercube Sampling, with 20 simulations, is used. For the sensitivity analysis, a sensitivity coefficient based on the Spearman rank correlation coefficient is utilized. As a result, estimates of the influence of the random variability of individual input variables on the random variability of the principal stresses σ1 and σ3 at 53 points on the upper and lower surfaces of the concrete slabs are obtained.
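The Spearman-based sensitivity coefficient amounts to rank-correlating each sampled input variable with the sampled output stress. A self-contained sketch (tie handling via average ranks; function names are illustrative):

```python
def ranks(values):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(values)), key=values.__getitem__)
    r, i = [0.0] * len(values), 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1            # average of tied positions
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Used as the sensitivity coefficient of input sample x w.r.t.
    output sample y (e.g. a principal stress at a monitored point)."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

With only 20 Latin Hypercube simulations per case, the rank-based coefficient is preferred over the ordinary correlation because it is robust to the nonlinear input-output relations of the FE model.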
Abstract: Two multisensor system architectures for navigation
and guidance of small Unmanned Aircraft (UA) are presented and
compared. The main objective of our research is to design a compact,
light and relatively inexpensive system capable of providing the
required navigation performance in all phases of flight of small UA,
with a special focus on precision approach and landing, where Vision
Based Navigation (VBN) techniques can be fully exploited in a
multisensor integrated architecture. Various existing techniques for
VBN are compared and the Appearance-Based Navigation (ABN)
approach is selected for implementation. Feature extraction and
optical flow techniques are employed to estimate flight parameters
such as roll angle, pitch angle, deviation from the runway centreline
and body rates. Additionally, we address the possible synergies of
VBN, Global Navigation Satellite System (GNSS) and MEMS-IMU
(Micro-Electromechanical System Inertial Measurement Unit)
sensors, and the use of Aircraft Dynamics Model (ADM) to provide
additional information suitable to compensate for the shortcomings of
VBN and MEMS-IMU sensors in high-dynamics attitude
determination tasks. An Extended Kalman Filter (EKF) is developed
to fuse the information provided by the different sensors and to
provide estimates of position, velocity and attitude of the UA
platform in real time. The key mathematical models describing the
two architectures, i.e., the VBN-IMU-GNSS (VIG) system and the
VIG-ADM (VIGA) system, are introduced. The first architecture uses
VBN and GNSS to augment the MEMS-IMU. The second architecture also
includes the ADM to augment the attitude channel.
Simulation of these two modes is carried out and the performances of
the two schemes are compared in a small UA integration scheme (i.e.,
AEROSONDE UA platform) exploring a representative cross-section
of this UA operational flight envelope, including high dynamics
manoeuvres and CAT-I to CAT-III precision approach tasks.
Simulation of the first system architecture (i.e., VIG system) shows
that the integrated system can reach position, velocity and attitude
accuracies compatible with the Required Navigation Performance
(RNP) requirements. Simulation of the VIGA system also shows
promising results since the achieved attitude accuracy is higher using
the VBN-IMU-ADM than using the VBN-IMU only. A comparison of the
VIG and VIGA systems is also performed; it shows that the position
and attitude accuracies of both proposed systems are compatible with
the RNP specified for the various UA flight phases, including
precision approach down to CAT-II.
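The EKF described above fuses multi-axis position, velocity, and attitude states; its predict/update core can be illustrated with a one-dimensional toy filter (an illustrative reduction, not the paper's filter), where IMU velocity drives the prediction and GNSS/vision position fixes drive the correction:

```python
def fuse_track(imu_vel, fix_pos, dt, q, r, x0=0.0, p0=1.0):
    """Toy 1-D Kalman predict/update cycle: dead-reckon position from
    IMU velocity (process noise variance q), then correct with each
    GNSS/vision position fix (measurement noise variance r).
    Returns the filtered position estimate at every step."""
    x, p, track = x0, p0, []
    for v, z in zip(imu_vel, fix_pos):
        x, p = x + v * dt, p + q        # predict: IMU propagation
        k = p / (p + r)                 # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p   # update: position fix
        track.append(x)
    return track
```

The same structure, with matrix-valued states, covariances, and linearized measurement models, underlies the VIG and VIGA architectures; the ADM simply adds another prediction source for the attitude channel.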
Abstract: In this paper, a feedforward controller is designed to eliminate the nonlinear hysteresis behavior of a piezoelectric stack actuator (PSA) driven system. The control design is based on an inverse Prandtl-Ishlinskii (P-I) hysteresis model identified using the particle swarm optimization (PSO) technique. Based on the identified P-I model, both the inverse P-I hysteresis model and the feedforward controller can be determined. Experimental results obtained using the inverse P-I feedforward control are compared with their counterparts using hysteresis estimates obtained from an identified Bouc-Wen model, and the effectiveness of the proposed feedforward control scheme is demonstrated. To further improve control performance, feedback compensation using a traditional PID scheme is integrated with the feedforward controller.
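The P-I model itself is standard: a weighted superposition of play (backlash) operators. A discrete-time sketch follows; the thresholds and weights are illustrative placeholders for the values the paper identifies with PSO:

```python
def play(x_seq, r, z0=0.0):
    """Discrete play (backlash) operator with threshold r: the output
    follows the input but lags by up to r in either direction."""
    z, out = z0, []
    for x in x_seq:
        z = max(x - r, min(x + r, z))
        out.append(z)
    return out

def pi_model(x_seq, thresholds, weights, z0=0.0):
    """Prandtl-Ishlinskii output: weighted sum of play operators with
    different thresholds, evaluated sample by sample."""
    ops = [play(x_seq, r, z0) for r in thresholds]
    return [sum(w * o[k] for w, o in zip(weights, ops))
            for k in range(len(x_seq))]
```

Because the P-I model is a superposition of these elementary operators, its inverse is again a P-I model with transformed thresholds and weights, which is what makes the feedforward inversion in the paper tractable.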
Abstract: The primary objective of this paper is to study the thermal effects of the electric arc on circuit-breaker contacts in order to predict and improve contact durability. We propose a model that takes into account the main factors influencing contact erosion. This phenomenon is complicated because the amount of ejected metal is not necessarily the whole molten metal bath; it depends on the balance of forces on the contact surface. Consequently, to calculate the metal ejection coefficient, we propose a method that compares the experimental results with the calculated ones. The proposed model estimates the mass lost by vaporization, by droplet ejection, and by the extraction of liquid or solid metal. In the one-dimensional geometry, to calculate the contact heating, we used a Green's function that expresses the point source and allows the transition to a surface source. For the two-dimensional model, we used explicit and implicit numerical methods. The results are similar to those found in Wilson's experiments.