Abstract: Superelastic Shape Memory Alloy (SMA) is widely accepted for use in connections in steel structures. The seismic behaviour of steel frames with SMA is assessed in this study. Three eight-storey steel frames with different SMA systems are proposed: the first is braced with a diagonal bracing system, the second with a knee bracing system, while in the third the SMA is used as connections at the plastic hinge regions of the beams.
Nonlinear time history analyses of steel frames with SMA subjected
to two different ground motion records have been performed using
SeismoStruct software. To evaluate the efficiency of the suggested
systems, the dynamic responses of the frames were compared. From
the comparison results, it can be concluded that using SMA element
is an effective way to improve the dynamic response of structures
subjected to earthquake excitations. Implementing the SMA braces
can lead to a reduction in residual roof displacement. The shape
memory alloy is effective in reducing the maximum displacement at
the frame top and it provides a large elastic deformation range. SMA
connections are very effective in dissipating energy and reducing the
total input energy of the whole frame under severe seismic ground
motion. Using the SMA connection system is more effective in controlling the reaction forces at the frame base than the other bracing systems, while using SMA as bracing is more effective in reducing the displacements. The efficiency of SMA depends on both the input ground motions and the construction system.
Abstract: In the present study the efficiency of Big Bang-Big
Crunch (BB-BC) algorithm is investigated in discrete structural
design optimization. It is shown that a standard version of the BB-BC
algorithm is sometimes unable to produce reasonable solutions to
problems from discrete structural design optimization. Two
reformulations of the algorithm, which are referred to as modified
BB-BC (MBB-BC) and exponential BB-BC (EBB-BC), are
introduced to enhance the capability of the standard algorithm in
locating good solutions for steel truss and frame type structures,
respectively. The performance of the proposed algorithms is evaluated and compared to that of the standard version, as well as some other algorithms, over several practical design examples. In these
examples, steel structures are sized for minimum weight subject to
stress, stability and displacement limitations according to the
provisions of AISC-ASD.
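The Big Bang-Big Crunch loop can be sketched as below: a minimal continuous toy version on the sphere function, assuming the standard formulation (random "Big Bang" scatter around a fitness-weighted "Big Crunch" center of mass with a contracting spread). The paper's discrete sizing variables and the MBB-BC/EBB-BC variants are not reproduced here.

```python
import random

def bb_bc(fitness, dim, bounds, n_pop=50, n_iter=100, seed=1):
    """Minimal standard Big Bang-Big Crunch loop (continuous toy version)."""
    rng = random.Random(seed)
    lo, hi = bounds
    center = [rng.uniform(lo, hi) for _ in range(dim)]
    best, best_f = list(center), fitness(center)
    for k in range(1, n_iter + 1):
        # Big Bang: scatter candidates around the current center of mass;
        # the spread shrinks as 1/k so the search contracts over time.
        pop = [[min(hi, max(lo, c + rng.gauss(0, 1) * (hi - lo) / k))
                for c in center] for _ in range(n_pop)]
        # Big Crunch: fitness-weighted center of mass (minimization, so
        # weight by 1/f with a small offset to avoid division by zero).
        weights = [1.0 / (1e-12 + fitness(x)) for x in pop]
        total = sum(weights)
        center = [sum(w * x[i] for w, x in zip(weights, pop)) / total
                  for i in range(dim)]
        for x in pop:
            f = fitness(x)
            if f < best_f:
                best, best_f = list(x), f
    return best, best_f

# Toy usage: minimize the sphere function on [-5, 5]^3 (optimum at the origin).
sphere = lambda x: sum(v * v for v in x)
best, best_f = bb_bc(sphere, dim=3, bounds=(-5.0, 5.0))
```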
Abstract: Enzymatic hydrolysis is one of the major steps involved in the conversion of sugarcane bagasse to ethanol. This process offers the potential for higher yields and selectivity, lower energy costs and milder operating conditions than chemical processes. However, factors such as lignin content, the degree of crystallinity of the cellulose, and particle size limit the digestibility of the cellulose present in lignocellulosic biomasses. Pretreatment aims to improve the access of the enzyme to the substrate. In this study, sugarcane bagasse was submitted to a chemical pretreatment consisting of two consecutive steps, the first with dilute sulfuric acid (1 % (v/v) H2SO4) and the second with alkaline solutions of different NaOH concentrations (1, 2, 3 and 4 % (w/v)). Thermal analysis (TG/DTG and DTA) was used to evaluate the hemicellulose, cellulose and lignin contents of the samples. Scanning Electron Microscopy (SEM) was used to evaluate the morphological structures of the in natura and chemically treated samples. Results showed that the pretreatments were effective in the chemical degradation of the lignocellulosic materials of the samples, and it was also possible to observe the morphological changes occurring in the biomasses after the pretreatments.
Abstract: The use of neural networks for recognition applications is generally constrained by the inflexibility of their parameters after the training phase. This means no adaptation is accommodated for input variations that have any influence on the network parameters. Attempts were made in this work to design a neural network that includes an additional mechanism adjusting the threshold values according to input pattern variations. The new approach is based on splitting the whole network into two subnets: a main traditional net and a supportive net. The first deals with the required output of trained patterns with predefined settings, while the second tolerates output generation dynamically, with tuning capability for any newly applied input. This tuning comes in the form of an adjustment to the threshold values. Two levels of supportive net were studied: one implements an extended additional layer with an adjustable neuronal threshold setting mechanism, while the second implements an auxiliary net with traditional architecture that performs dynamic adjustment of the threshold values of the main net, which is constructed in a dual-layer architecture. Experimental results and analysis of the proposed designs have given quite satisfactory outcomes. The supportive layer approach achieved over a 90% recognition rate, while the multiple-network technique shows a more effective and acceptable level of recognition. However, this is achieved at the price of network complexity and computation time. Recognition generalization may also be improved by combining all the innate structures with intelligent capabilities, at the cost of further advanced learning phases.
Abstract: This paper describes the design of a new method of propagation delay measurement in micro- and nanostructures during characterization of ASIC standard library cells. By providing more accurate timing information about library cells to the design team, we can improve the quality of timing analysis within the ASIC design flow. This information could also be very useful for the semiconductor foundry team in making corrections to the technology process. The method is based on comparing the propagation delay in the CMOS element with the result of analog SPICE simulation, and it was implemented as a digital IP core for a semiconductor manufacturing process. The specialized method makes it possible to observe the propagation time delay in one element of the standard-cell library with accuracy down to picoseconds and below. Thus, useful solutions for VLSI schematic parameter extraction, basic cell layout verification, and design simulation and verification are presented.
Abstract: This article reports on studies of porous GaN prepared by ultra-violet (UV) assisted electrochemical etching in a solution of 4:1:1 HF:CH3OH:H2O2 under illumination from a 500 W UV lamp for 10, 25 and 35 minutes. The optical properties of the porous GaN samples were compared to the corresponding as-grown GaN. Porosity-induced photoluminescence (PL) intensity enhancement was found in these samples. The resulting porous GaN displays blue-shifted PL spectra compared to the as-grown GaN. The appearance of the blue-shifted emission is correlated with the development of highly anisotropic structures in the morphology. An estimate of the size of the GaN nanostructures can be obtained with the help of quantized-state effective mass theory.
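The quantized-state effective-mass size estimate mentioned in the abstract can be illustrated with the standard strong-confinement relation (a textbook approximation; the exact model used in the paper is not stated):

```latex
\Delta E \;\approx\; \frac{\hbar^{2}\pi^{2}}{2\,d^{2}}
\left(\frac{1}{m_{e}^{*}} + \frac{1}{m_{h}^{*}}\right)
```

where \(\Delta E\) is the PL blue shift relative to the as-grown GaN, \(d\) the characteristic size of the nanostructure, and \(m_{e}^{*}\), \(m_{h}^{*}\) the electron and hole effective masses in GaN; inverting the relation for \(d\) yields the size estimate.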
Abstract: The DNA microarray technology concurrently monitors the expression levels of thousands of genes during significant biological processes and across related samples. A better understanding of functional genomics is obtained by extracting the patterns hidden in gene expression data. This is handled by clustering, which reveals natural structures and identifies interesting patterns in the underlying data. In the proposed work, clustering of gene expression data is done through an Advanced Nelder Mead (ANM) algorithm. The Nelder Mead (NM) method is a method designed for optimization. In the Nelder Mead method, the vertices of a triangle are considered as the solutions, and several operations are performed on this triangle to obtain a better result. In the proposed work, the reflection and expansion operations are eliminated and a new operation called spread-out is introduced. The spread-out operation increases the global search area and thus provides a better result in optimization. The spread-out operation gives three points, and the best among these three points is used to replace the worst point. The experimental results are analyzed with optimization benchmark test functions and gene expression benchmark datasets. The results show that ANM outperforms NM on both benchmarks.
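The spread-out step can be sketched from its description in the abstract (three trial points are generated, and the best of the three replaces the worst vertex). How the three trial points are constructed is not stated, so the scatter-around-the-centroid rule below is an assumption, not the paper's operator.

```python
import random

def spread_out(simplex, fitness, rng, scale=2.0):
    """Sketch of the ANM 'spread-out' step: generate three trial points
    scattered widely around the centroid of the better vertices (to enlarge
    the global search area), then let the best of the three replace the
    worst vertex if it improves on it. The trial-point construction here
    is a hypothetical choice."""
    simplex.sort(key=fitness)                  # best first, worst last
    worst = simplex[-1]
    n = len(worst)
    centroid = [sum(v[i] for v in simplex[:-1]) / (len(simplex) - 1)
                for i in range(n)]
    trials = [[c + rng.gauss(0, scale) for c in centroid] for _ in range(3)]
    candidate = min(trials, key=fitness)
    if fitness(candidate) < fitness(worst):
        simplex[-1] = candidate
    return simplex

# Toy usage: repeated spread-out steps on a 2-D quadratic with its
# minimum at (1, -2); the scatter scale is annealed over the iterations.
rng = random.Random(0)
f = lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2
simplex = [[4.0, 4.0], [-4.0, 3.0], [3.0, -4.0]]
for k in range(200):
    simplex = spread_out(simplex, f, rng, scale=2.0 * 0.98 ** k)
best = min(simplex, key=f)
```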
Abstract: We present a preliminary x-ray study on human-hair microstructures as a health-state indicator, in particular for cancer cases. As an uncomplicated and low-cost x-ray technique, the human-hair microstructure was analyzed by wide-angle x-ray diffraction (XRD) and small-angle x-ray scattering (SAXS). The XRD measurements exhibited simple reflections at d-spacings of 28 Å, 9.4 Å and 4.4 Å, corresponding to the periodic distance of the protein matrix of the human-hair macrofibrils and to the diameter and repeat spacing of the polypeptide alpha helices of the protofibrils of the human-hair microfibrils, respectively. Compared to the normal cases, the unhealthy cases, including the breast- and ovarian-cancer cases, gave higher normalized ratios of the x-ray diffraction peaks at 9.4 Å and 4.4 Å. This likely resulted from altered distributions of microstructures caused by molecular alteration. As an elemental analysis by x-ray fluorescence (XRF), the normalized quantitative ratios of zinc (Zn)/calcium (Ca) and iron (Fe)/calcium (Ca) were determined. Analogously, both the Zn/Ca and Fe/Ca ratios of the unhealthy cases were higher than those of the normal cases. Combining the structural analysis by XRD with the elemental analysis by XRF showed that the modified fibrous microstructures of the hair samples were related to their altered elemental compositions. Therefore, these microstructural and elemental analyses of hair samples may be beneficial in supporting the diagnosis of cancer and genetic diseases, and such early diagnosis could lower the risk posed by these diseases. However, a high-intensity x-ray source, a high-resolution x-ray detector, and more hair samples are needed to develop this x-ray technique further, and its efficiency could be enhanced by including skin and fingernail samples alongside the human-hair analysis.
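The reported d-spacings relate to diffraction angles through Bragg's law, n·λ = 2d·sin θ. As a small illustration, the helper below converts between the two; the Cu Kα wavelength is an assumption, since the abstract does not state the x-ray source wavelength.

```python
import math

def bragg_d(theta_deg, wavelength=1.5406, order=1):
    """d-spacing (Angstrom) from Bragg's law  n*lambda = 2*d*sin(theta).
    The default wavelength is Cu K-alpha (1.5406 A) -- an assumption."""
    return order * wavelength / (2.0 * math.sin(math.radians(theta_deg)))

def bragg_theta(d_angstrom, wavelength=1.5406, order=1):
    """Inverse: diffraction half-angle (degrees) for a given d-spacing."""
    return math.degrees(math.asin(order * wavelength / (2.0 * d_angstrom)))

# With Cu K-alpha, the 9.4 A and 4.4 A reflections from the abstract
# would appear at roughly these half-angles:
theta_94 = bragg_theta(9.4)   # ~4.7 degrees
theta_44 = bragg_theta(4.4)   # ~10.1 degrees
```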
Abstract: Nanofibers produced by electrospinning attract industrial and scientific attention due to their special characteristics, such as long length, small diameter and high surface area. Applications of electrospun structures in nanotechnology include tissue scaffolds, fibers for drug delivery, composite reinforcement, chemical sensing, enzyme immobilization, membrane-based filtration, protective clothing, catalysis, solar cells, electronic devices and others. Many polymer and ceramic precursor nanofibers have been successfully electrospun with diameters in the range from 1 nm to several microns. The process is complex, and fiber diameter is influenced by various material, design and operating parameters. The objective of this work is to apply a genetic algorithm to the electrospinning parameters that have the most significant effect on the nanofiber diameter, in order to determine the optimum parameter values before the experimental set-up. Effective factors including initial polymer concentration, initial jet radius, electrical potential, relaxation time, initial elongation, viscosity and the distance between nozzle and collector are considered to determine the finest diameter, as selected by the user.
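A genetic-algorithm search over process parameters, as described above, can be sketched as follows. The diameter model here is a deliberately simple placeholder surrogate (a made-up quadratic in two scaled parameters), standing in for the paper's actual diameter relation, which is not reproduced.

```python
import random

def diameter(params):
    """Hypothetical surrogate for fiber diameter (nm) as a function of two
    scaled parameters (concentration, voltage). Placeholder only -- not the
    paper's model; its minimum is 55 at (0, 3)."""
    c, v = params
    return 100.0 + 80.0 * c * c - 30.0 * v + 5.0 * v * v

def ga_minimize(obj, bounds, pop_size=40, gens=60, seed=2):
    """Tiny real-coded genetic algorithm: tournament selection, blend
    crossover and Gaussian mutation, clamped to the parameter box."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    def clamp(x):
        return [min(hi, max(lo, v)) for v, (lo, hi) in zip(x, bounds)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            a = min(rng.sample(pop, 3), key=obj)   # tournament parents
            b = min(rng.sample(pop, 3), key=obj)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # crossover
            if rng.random() < 0.2:                               # mutation
                i = rng.randrange(dim)
                child[i] += rng.gauss(0, 0.1 * (bounds[i][1] - bounds[i][0]))
            nxt.append(clamp(child))
        pop = nxt
    return min(pop, key=obj)

# Search the box [0, 1] x [0, 5] for parameters giving the finest diameter.
best = ga_minimize(diameter, [(0.0, 1.0), (0.0, 5.0)])
```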
Abstract: The interaction between respiration and low-frequency rhythms of the cardiovascular system is studied. The obtained results support the hypothesis that low-frequency rhythms in blood pressure and R-R intervals are generated in different central neural structures involved in the autonomic control of the cardiovascular system.
Abstract: Turbulence of the incoming wind field is of paramount
importance to the dynamic response of civil engineering structures. Hence reliable stochastic models of the turbulence should be available from which time series can be generated for dynamic response and
structural safety analysis. In the paper an empirical cross spectral
density function for the along-wind turbulence component over the wind field area is taken as the starting point. The spectrum is spatially
discretized in terms of a Hermitian cross-spectral density matrix for the turbulence state vector which turns out not to be positive
definite. Since the succeeding state space and ARMA modelling of the turbulence rely on the positive definiteness of the cross-spectral density matrix, the problem of the non-positive definiteness of such matrices is first addressed and suitable treatments are proposed. From the adjusted positive definite cross-spectral density
matrix a frequency response matrix is constructed which determines the turbulence vector as a linear filtration of Gaussian white noise.
Finally, an accurate state space modelling method is proposed which allows selection of an appropriate model order, and estimation of a state space model for the vector turbulence process incorporating its phase spectrum in one stage, and its results are compared with a conventional ARMA modelling method.
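The positive-definiteness issue can be made concrete with a small sketch: a Cholesky-based test (success if and only if the matrix is positive definite) plus simple diagonal loading as one possible remedy. The real symmetric case stands in for the Hermitian cross-spectral matrix, and diagonal loading is our assumed illustration, not necessarily the treatment proposed in the paper.

```python
def is_positive_definite(A):
    """Attempt a Cholesky factorization of a real symmetric matrix A
    (given as a list of row lists); succeeds iff A is positive definite."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = A[i][i] - s
                if d <= 0.0:
                    return False          # a non-positive pivot: not PD
                L[i][i] = d ** 0.5
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return True

def diagonal_loading(A, eps=1e-6):
    """One simple remedy: add a growing multiple of the identity until the
    matrix passes the Cholesky test (a crude stand-in for the spectral
    adjustments discussed in the paper)."""
    def shifted(shift):
        return [[A[i][j] + (shift if i == j else 0.0)
                 for j in range(len(A))] for i in range(len(A))]
    shift = 0.0
    while not is_positive_definite(shifted(shift)):
        shift = max(eps, 2.0 * shift)    # double the shift until PD
    return shifted(shift)

# A symmetric matrix that is NOT positive definite (eigenvalues 3 and -1):
A = [[1.0, 2.0], [2.0, 1.0]]
B = diagonal_loading(A)                  # B passes the Cholesky test
```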
Abstract: The technique of k-anonymization has been proposed to obfuscate private data by associating it with at least k identities. This paper investigates the basic tabular structures that underlie the notion of k-anonymization using cell suppression. These structures are studied under idealized conditions to identify the essential features of the k-anonymization notion. We optimize data k-anonymization by requiring a minimum number of anonymized values that are balanced over all columns and rows. We study the relationship between the sizes of the anonymized tables, the value k, and the number of attributes. This study has theoretical value in contributing to the development of a mathematical foundation for the k-anonymization concept. Its practical significance is still to be investigated.
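The k-anonymity property itself is easy to state in code: a table is k-anonymous when every record shares its quasi-identifier values with at least k−1 other records. The checker below is a minimal sketch of that definition (the suppression and balancing optimization studied in the paper is not reproduced); the toy table and column names are illustrative only.

```python
from collections import Counter

def k_anonymity(rows, quasi_ids):
    """Smallest equivalence-class size over the quasi-identifier columns.
    A table is k-anonymous iff this value is at least k. Suppressed cells
    are conventionally written as '*' patterns."""
    groups = Counter(tuple(row[c] for c in quasi_ids) for row in rows)
    return min(groups.values())

# Toy table: after suppressing the trailing age and ZIP digits, every
# quasi-identifier combination occurs twice, i.e. the table is 2-anonymous.
rows = [
    {"age": "3*", "zip": "130**", "disease": "flu"},
    {"age": "3*", "zip": "130**", "disease": "cold"},
    {"age": "4*", "zip": "148**", "disease": "flu"},
    {"age": "4*", "zip": "148**", "disease": "asthma"},
]
k = k_anonymity(rows, ["age", "zip"])  # -> 2
```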
Abstract: Higher education has an important role to play in
advocating environmentalism. Given this responsibility, the goal of
higher education should therefore be to develop graduates with the
knowledge, skills and values related to environmentalism. However,
research indicates that there is a lack of consciousness amongst
graduates on the need to be more environmentally aware, especially
when it comes to applying the appropriate knowledge and skills
related to environmentalism. Although institutions of higher learning
do include environmental parameters within their undergraduate and
postgraduate academic programme structures, the environmental
boundaries are usually confined to specific engineering majors within
an engineering programme. This makes environmental knowledge,
skills and values exclusive to certain quarters of the higher education
system. The incorporation of environmental literacy within higher
education institutions as a whole is of utmost pertinence if a nation-s
human capital is to be nurtured to become change agents for the
preservation of environment. This paper discusses approaches that
can be adapted by institutions of higher learning to include
environmental literacy within the graduate-s higher learning
experience.
Abstract: OLAP uses multidimensional structures to provide access to data for analysis. Traditionally, OLAP operations are more
focused on retrieving data from a single data mart. An exception is
the drill across operator. This, however, is restricted to retrieving
facts on common dimensions of the multiple data marts. Our concern
is to define further operations while retrieving data from multiple
data marts. Towards this, we have defined six operations which
coalesce data marts. While doing so we consider the common as well
as the non-common dimensions of the data marts.
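The baseline drill-across operator, which the abstract contrasts with its six coalescing operations, can be sketched as a join of two data marts on their common dimensions. The six proposed operations (which also handle non-common dimensions) are not reproduced here; the toy marts and column names are illustrative only.

```python
def drill_across(mart_a, mart_b, common_dims):
    """Combine facts from two data marts over their common dimensions,
    in the spirit of the classic drill-across operator. Each mart is a
    list of row dicts; fact columns are merged per shared dimension key."""
    def key(row):
        return tuple(row[d] for d in common_dims)
    b_index = {key(r): r for r in mart_b}
    merged = []
    for row in mart_a:
        match = b_index.get(key(row))
        if match is not None:            # keep only keys present in both
            combined = dict(row)
            combined.update(match)
            merged.append(combined)
    return merged

# Two toy data marts sharing the "month" dimension; only "Jan" is in both.
sales = [{"month": "Jan", "sales": 100}, {"month": "Feb", "sales": 120}]
costs = [{"month": "Jan", "cost": 80}, {"month": "Mar", "cost": 90}]
result = drill_across(sales, costs, ["month"])
```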
Abstract: The African Great Lakes Region refers to the zone
around lakes Victoria, Tanganyika, Albert, Edward, Kivu, and
Malawi. The main source of electricity in this region is hydropower
whose systems are generally characterized by relatively weak,
isolated power schemes, poor maintenance and technical deficiencies
with limited electricity infrastructures. Most of the hydro sources are
rain fed, and as such there is normally a deficiency of water during
the dry seasons and extended droughts. In such calamities fossil fuels
sources, in particular petroleum products and natural gas, are
normally used to rescue the situation, but apart from being non-renewable, they also release huge amounts of greenhouse gases into our environment, which in turn accelerate global warming, which has at present reached an alarming level. Wind power is an ample, renewable,
widely distributed, clean, and free energy source that does not
consume or pollute water. Wind-generated electricity is one of the most practical and commercially viable options for grid-quality and utility-scale electricity production. However, the main shortcoming
associated with electric wind power generation is fluctuation in its
output both in space and time. Before making a decision to establish
a wind park at a site, the wind speed features there should therefore
be known thoroughly as well as local demand or transmission
capacity. The main objective of this paper is to utilise monthly
average wind speed data collected from one prospective site within
the African Great Lakes Region to demonstrate that the available
wind power there is high enough to generate electricity. The mean
monthly values were calculated from records gathered on hourly
basis for a period of 5 years (2001 to 2005) from a site in Tanzania.
The records, collected at a height of 2 m, were projected to a height of 50 m, which is the standard hub height of wind turbines. The overall monthly average wind speed was found to be 12.11 m/s, whereas June to November was established to be the windy season, as the wind speed during this season is above the overall monthly average. The available wind power density
corresponding to the overall mean monthly wind speed was evaluated
to be 1072 W/m², a potential that is worth harvesting for the purpose of electricity generation.
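The two calculations behind the abstract can be sketched as below: a power-law (Hellmann) height extrapolation from the 2 m measurement height to the 50 m hub height, and the conversion of speed to available wind power density, P/A = ½ρv³. The shear exponent α and the air density are assumptions on our part; with the standard 1.225 kg/m³ the reported 12.11 m/s gives about 1088 W/m², close to the 1072 W/m² in the abstract, with the small difference presumably reflecting the local air density used.

```python
def extrapolate_speed(v_ref, h_ref, h_target, alpha=0.14):
    """Power-law wind profile: v(h) = v_ref * (h / h_ref) ** alpha.
    alpha = 0.14 (open terrain) is an assumed value."""
    return v_ref * (h_target / h_ref) ** alpha

def power_density(v, rho=1.225):
    """Available wind power per unit rotor area (W/m^2): 0.5 * rho * v**3.
    rho = 1.225 kg/m^3 is the standard sea-level density (assumed)."""
    return 0.5 * rho * v ** 3

v50 = extrapolate_speed(2.7, 2.0, 50.0)   # hypothetical 2 m reading
pd = power_density(12.11)                 # at the reported overall mean speed
```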
Abstract: When dealing with safety in structures, the connections between structural components play an important role. The robustness of a structure as a whole depends both on the load-bearing capacity of the structural components and on the structure's capacity to resist total failure even though a local failure occurs in a component or in a connection between components. To avoid progressive collapse, it is necessary to be able to carry out a design for connections. In structures built with prefabricated components, a connection may be executed with anchors to withstand local failure of the connection. For the design of these anchors, a model is developed for connections in structures made of prefabricated autoclaved aerated concrete components. The design model takes into account the effect of anchors placed close to the edge, which may result in splitting failure. Further, the model is developed to consider the effect of reinforcement diameter and anchor depth. The model is analytical and theoretically derived, assuming a static equilibrium stress distribution along the anchor. The theory is compared to laboratory tests covering the relevant parameters, and the model is refined and theoretically argued by analyzing the observed test results. The method presented can be used to improve safety in structures or even to optimize the design of the connections.
Abstract: A new approach to predicting the 3D structures of proteins by combining a knowledge-based method and Molecular Dynamics Simulation is presented for the chicken villin headpiece subdomain (HP-36). Comparative modeling is employed as the knowledge-based method to predict the core region (Ala9-Asn28) of the protein, while the remaining residues are built as extended regions (Met1-Lys8; Leu29-Phe36), which are then further refined using Molecular Dynamics Simulation for 120 ns. Since the core region is built based on a high sequence identity to the template (65%), resulting in an RMSD of 1.39 Å from the native structure, it is believed that this well-developed core region can act as a 'nucleation center' for subsequent rapid downhill folding. Results also demonstrate that the formation of non-native contacts, which tend to hamper the folding rate, can be avoided. The best 3D model, exhibiting most of the native characteristics, is identified using a clustering method and then further ranked based on conformational free energies. It is found that the backbone RMSD of the best model compared to the NMR-MDavg is 1.01 Å and 3.53 Å for the core region and the complete protein, respectively. In addition, the conformational free energy of the best model is lower by 5.85 kcal/mol compared to the NMR-MDavg. This structure prediction protocol is shown to be effective in predicting the 3D structure of small globular proteins with considerable accuracy in a much shorter time than conventional Molecular Dynamics simulation alone.
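The backbone RMSD values quoted above follow the standard root-mean-square-deviation definition, sketched below for already-superimposed coordinates. The optimal superposition step (e.g. the Kabsch algorithm) that precedes a real RMSD comparison is deliberately omitted, and the toy coordinates are illustrative only.

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two equal-length lists of
    (x, y, z) coordinates, assuming the structures are already optimally
    superimposed (the alignment step is not included in this sketch)."""
    if len(coords_a) != len(coords_b):
        raise ValueError("coordinate lists must have equal length")
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

# Toy usage: two 3-atom fragments, each atom shifted by 1 A along x.
a = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.0, 0.0)]
b = [(1.0, 0.0, 0.0), (2.5, 0.0, 0.0), (4.0, 0.0, 0.0)]
r = rmsd(a, b)  # -> 1.0
```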
Abstract: The characterization and modeling of the dynamic
behavior of many built-up structures under vibration conditions is still
a subject of current research. The present study emphasizes the
theoretical investigation of slip damping in layered and jointed
welded cantilever structures using finite element approach.
Application of the finite element method in damping analysis is relatively recent; as such, some problems, particularly slip damping analysis, have not received enough attention. To validate the finite element model
developed, experiments have been conducted on a number of mild
steel specimens under different initial conditions of vibration. The finite element model developed affirms that the damping capacity of such structures is influenced by a number of vital parameters, such as
pressure distribution, kinematic coefficient of friction and micro-slip
at the interfaces, amplitude, frequency of vibration, length and
thickness of the specimen. The finite element model developed can be
utilized effectively in the design of machine tools, automobiles,
aerodynamic and space structures, frames and machine members for
enhancing their damping capacity.
Abstract: The HIV-1 genome is highly heterogeneous. Due to this variation, the features of the HIV-1 genome vary over a wide range, and for this reason the virus's ability to establish infection changes depending on different chemokine receptors. From this point of view, R5 HIV viruses use the CCR5 coreceptor, X4 viruses use CXCR4, and R5X4 viruses can utilize both coreceptors. Recently, in bioinformatics, experiments on the HIV-1 genome have been used to classify R5X4 viruses.
In this study, R5X4-type HIV viruses were classified using an Auto Regressive (AR) model through Artificial Neural Networks (ANNs). The statistical data of R5X4, R5 and X4 viruses were analyzed using signal processing methods and ANNs. Accessible residues of these virus sequences were obtained and modeled by an AR model, since the numbers of residues are large and differ from each other. Finally, the pre-processed data were used to evolve various ANN structures for determining R5X4 viruses. Furthermore, ROC analysis was applied to the ANNs to show their real performance. The results indicate that R5X4 viruses are successfully classified, with high sensitivity and specificity values in the training and testing ROC analyses for the RBF network, which gives the best performance among the ANN structures.
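The AR feature-extraction step named above can be illustrated with a minimal least-squares AR(p) fit. The model order and estimator used in the paper are not specified, so the order-2 normal-equations fit below is only an assumed, self-contained illustration on a synthetic series.

```python
def ar_coefficients(x, order=2):
    """Least-squares fit of an AR(p) model x[t] ~ sum_k a[k] * x[t-k-1],
    solved via the normal equations with naive Gaussian elimination."""
    p, n = order, len(x)
    # Design matrix of lagged windows and the target vector.
    X = [[x[t - k - 1] for k in range(p)] for t in range(p, n)]
    y = [x[t] for t in range(p, n)]
    # Normal equations A a = b with A = X^T X, b = X^T y.
    A = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yt for r, yt in zip(X, y)) for i in range(p)]
    # Gaussian elimination with partial pivoting.
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    a = [0.0] * p
    for r in range(p - 1, -1, -1):
        a[r] = (b[r] - sum(A[r][c] * a[c] for c in range(r + 1, p))) / A[r][r]
    return a

# Recover the coefficients of a synthetic noise-free AR(2) series.
series = [1.0, 0.5]
for _ in range(50):
    series.append(0.6 * series[-1] - 0.3 * series[-2])
coeffs = ar_coefficients(series, order=2)  # ~ [0.6, -0.3]
```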
Abstract: The midpoint filter is quite effective in recovering images confounded by short-tailed (uniform) noise. It, however,
performs poorly in the presence of additive long-tailed (impulse)
noise and it does not preserve the edge structures of the image
signals. Median smoother discards outliers (impulses) effectively, but
it fails to provide adequate smoothing for images corrupted with nonimpulse
noise. In this paper, two nonlinear techniques for image
filtering, namely, New Filter I and New Filter II are proposed based
on a nonlinear high-pass filter algorithm. New Filter I is constructed
using a midpoint filter, a highpass filter and a combiner. It suppresses
uniform noise quite well. New Filter II is configured using an alpha
trimmed midpoint filter, a median smoother of window size 3x3, the
high pass filter and the combiner. It is robust against impulse noise
and attenuates uniform noise satisfactorily. Both the filters are shown
to exhibit good response at the image boundaries (edges). The
proposed filters are evaluated for their performance on a test image
and the results obtained are included.
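The two classical building blocks named in the abstract can be sketched in a few lines of pure Python; the proposed New Filter I/II combinations themselves are not reproduced here. Border pixels are simply copied through, which is one of several possible boundary conventions.

```python
def midpoint_filter(img, w=3):
    """Midpoint filter: each output pixel is (min + max) / 2 over the
    w x w neighbourhood -- effective against short-tailed (uniform) noise,
    but an impulse pulls the midpoint far from the true value."""
    h, wd, r = len(img), len(img[0]), w // 2
    out = [row[:] for row in img]            # borders copied unchanged
    for i in range(r, h - r):
        for j in range(r, wd - r):
            win = [img[i + di][j + dj]
                   for di in range(-r, r + 1) for dj in range(-r, r + 1)]
            out[i][j] = (min(win) + max(win)) / 2.0
    return out

def median_smoother(img, w=3):
    """Median smoother: each output pixel is the median of the w x w
    neighbourhood -- discards impulse outliers."""
    h, wd, r = len(img), len(img[0]), w // 2
    out = [row[:] for row in img]
    for i in range(r, h - r):
        for j in range(r, wd - r):
            win = sorted(img[i + di][j + dj]
                         for di in range(-r, r + 1) for dj in range(-r, r + 1))
            out[i][j] = win[len(win) // 2]
    return out

# A flat 5x5 patch of value 10 with one impulse at the centre: the median
# smoother removes the impulse, while the midpoint filter smears it.
img = [[10.0] * 5 for _ in range(5)]
img[2][2] = 255.0
med = median_smoother(img)   # centre -> 10.0
mid = midpoint_filter(img)   # centre -> (10 + 255) / 2 = 132.5
```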