Abstract: The strength of reinforced concrete depends on the member dimensions and material properties. The properties of concrete and steel are not constant but random variables. The variability of concrete strength is due to batching errors, variations in mixing, cement quality uncertainties, differences in the degree of compaction, and disparities in curing. Similarly, the variability of steel strength is attributed to the manufacturing process, rolling conditions, characteristics of the base material, uncertainties in chemical composition, and microstructure-property relationships. To account for such uncertainties, codes of practice for reinforced concrete design impose resistance factors to ensure structural reliability over the useful life of the structure. In this investigation, the effects on structural reliability of reductions in concrete and reinforcing steel strengths from the nominal values, beyond those accounted for in structural design codes, are assessed. The considered limit states are flexure, shear, and axial compression, based on the ACI 318-11 structural concrete building code. Structural safety is measured in terms of a reliability index. Probabilistic resistance and load models are compiled from the available literature. The study showed that there is a wide variation in the reliability index for reinforced concrete members designed for flexure, shear, or axial compression, especially when the live-to-dead load ratio is low. Furthermore, variations in concrete strength have a minor effect on the reliability of beams in flexure, a moderate effect on the reliability of beams in shear, and a severe effect on the reliability of columns in axial compression. On the other hand, changes in steel yield strength have a great effect on the reliability of beams in flexure, a moderate effect on the reliability of beams in shear, and a mild effect on the reliability of columns in axial compression.
Based on these outcomes, it can be concluded that the reliability of beams is sensitive to changes in the yield strength of the steel reinforcement, whereas the reliability of columns is sensitive to variations in the concrete strength. Since the target reliability embedded in structural design codes results in lower structural safety in beams than in columns, large reductions in material strengths compromise the structural safety of beams much more than that of columns.
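As a minimal illustration of the reliability index used above, a first-order second-moment (FOSM) computation for normally distributed resistance and load effect can be sketched as follows; the numeric values are hypothetical, not taken from the study:

```python
import math

def reliability_index(mu_r, cov_r, mu_q, cov_q):
    """First-order second-moment (FOSM) reliability index for a
    normally distributed resistance R and load effect Q."""
    sigma_r = mu_r * cov_r  # standard deviation from coefficient of variation
    sigma_q = mu_q * cov_q
    return (mu_r - mu_q) / math.sqrt(sigma_r ** 2 + sigma_q ** 2)

# Hypothetical values: mean resistance 1.2x the mean load effect,
# with 10% and 15% coefficients of variation respectively.
beta = reliability_index(1.2, 0.10, 1.0, 0.15)
```

Reducing the resistance mean (e.g. through a material strength shortfall) directly lowers the computed index.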
Abstract: To relieve the burden of reasoning on a point-by-point basis, in many domains there is a need to reduce large and noisy data sets into trends for qualitative reasoning. In this paper we propose and describe a new architectural design pattern called REDUCER for reducing large and noisy data sets, which can be tailored to particular situations. REDUCER consists of two consecutive processes: Filter, which takes the original data and removes outliers, inconsistencies, or noise; and Compression, which takes the filtered data and derives trends from it. In this seminal article we also show how REDUCER has been successfully applied in three different case studies.
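The two REDUCER stages can be sketched minimally as follows; the outlier rule (standard-deviation cutoff) and the window-averaging compression are illustrative assumptions, not the pattern's prescribed choices:

```python
def filter_stage(data, max_dev=2.0):
    """Filter: drop points more than max_dev standard deviations
    from the mean (a simple outlier/noise removal rule)."""
    n = len(data)
    mean = sum(data) / n
    std = (sum((x - mean) ** 2 for x in data) / n) ** 0.5
    return [x for x in data if abs(x - mean) <= max_dev * std]

def compression_stage(data, window=3):
    """Compression: reduce the filtered data to a coarse trend by
    averaging over fixed-size windows."""
    return [sum(data[i:i + window]) / len(data[i:i + window])
            for i in range(0, len(data), window)]

raw = [1.0, 1.1, 9.9, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7]  # 9.9 is noise
trend = compression_stage(filter_stage(raw))
```

The composition of the two stages yields a short trend sequence suitable for qualitative reasoning.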
Abstract: Steel bracing members are widely used in steel structures to reduce lateral displacement and dissipate energy during earthquake motions. Concentric steel bracing provides an excellent approach for strengthening and stiffening steel buildings. With such braces, however, the designer can hardly adjust stiffness together with ductility as needed, because the braces buckle in compression. In this study, the use of SMA bracing and steel (mega) bracing in steel frames is investigated. The effectiveness of these two systems in rehabilitating a mid-rise eight-storey steel frame was examined using nonlinear time-history analysis in the SeismoStruct software. Results show that both systems improve the strength and stiffness of the original structure, but owing to the excellent behavior of SMA in the nonlinear phase and under compressive forces, the SMA system performs much better than the mega bracing rehabilitation system.
Abstract: Recycling of aluminum beverage cans is an important issue due to its economic and environmental effects. One of the significant factors in the aluminum can recycling process is the cost of transportation from the landfill site. An automatic compression baler (ACB) machine has been designed and built to densify aluminum beverage cans. It has been constructed from numerous fabricated components. Two types of control methodology have been implemented in the ACB machine: the first is a semi-automatic system, and the second is a mechatronic system using a programmable logic controller (PLC). The effect of single and double pre-compression of the beverage cans has been evaluated using the PLC control. Comparisons have been performed between the two control methodologies by operating the ACB machine under different working conditions. The double pre-compression under PLC control improves the ACB performance by 133% relative to direct compression under semi-automatic control. In addition, the volume reduction ratio reaches 77%, and the compaction ratio reaches about four.
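The two reported figures are mutually consistent: a 77% volume reduction corresponds to a compaction ratio of roughly 4.3, i.e. "about four times". A minimal check, with hypothetical initial and final volumes:

```python
def volume_reduction(v_initial, v_final):
    """Percentage reduction in volume after compaction."""
    return 100.0 * (v_initial - v_final) / v_initial

def compaction_ratio(v_initial, v_final):
    """How many times smaller the compacted volume is."""
    return v_initial / v_final

# Hypothetical: 100 units of loose cans compacted to 23 units.
r = volume_reduction(100.0, 23.0)   # 77% reduction
c = compaction_ratio(100.0, 23.0)   # ~4.3, i.e. "about four times"
```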
Abstract: In this paper, we present a non-blind technique for adding a watermark to the Fourier spectral components of an audio signal such that the modified amplitude does not exceed the maximum amplitude spread (MAS). This MAS is due to the individual discrete Fourier transform (DFT) coefficients in the particular frame and is derived from the energy spreading function given by Schroeder. Using this technique one can store double the information within a given frame length, i.e., superimpose the watermark on a host of equal length with the least perceptual distortion. The watermark floats uniformly on the DFT components of the original signal. This helps in detecting any intentional manipulation of the watermarked audio. The scheme is also found to be robust to various signal processing attacks such as the presence of multiple watermarks, additive white Gaussian noise (AWGN), and MP3 compression.
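A minimal sketch of DFT-domain embedding under a per-coefficient amplitude allowance (the fixed `delta` below is a stand-in for the MAS bound; the frame, bits, and step size are hypothetical):

```python
import cmath

def dft(x):
    """Naive DFT of a real-valued frame (illustrative only)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def embed(coeffs, bits, delta):
    """Perturb each coefficient magnitude by +/-delta according to
    the watermark bit, keeping the change within the per-coefficient
    allowance (standing in for the MAS bound)."""
    out = []
    for c, b in zip(coeffs, bits):
        mag, phase = abs(c), cmath.phase(c)
        mag += delta if b else -delta
        out.append(cmath.rect(max(mag, 0.0), phase))
    return out

frame = [0.0, 1.0, 0.0, -1.0]            # toy audio frame
marked = embed(dft(frame), [1, 0, 1, 0], 0.1)
```

A detector comparing the marked magnitudes against the original spectrum recovers the bit pattern.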
Abstract: Time-cost optimization (TCO) is one of the greatest challenges in construction project planning and control, since optimizing either time or cost usually comes at the expense of the other. Because there is a hidden trade-off relationship between project time and cost, it may be difficult to predict whether the total cost will increase or decrease as a result of schedule compression. Recently, a third dimension, project quality, has been taken into consideration in the trade-off analysis. Few existing algorithms have been applied to construction projects with a three-dimensional trade-off analysis of time-cost-quality relationships. The objective of this paper is to present the development of a practical software system named the Automatic Multi-objective Typical Construction Resource Optimization System (AMTCROS). This system incorporates the basic concepts of the Line of Balance (LOB) and Critical Path Method (CPM) in a multi-objective genetic algorithm (GA) model. Its main objective is to provide practical support for typical construction planners who need to optimize resource utilization in order to minimize project cost and duration while simultaneously maximizing quality. The application of these research developments in planning typical construction projects holds strong promise to: 1) increase the efficiency of resource use in typical construction projects; 2) reduce the construction duration; 3) minimize construction cost (direct cost plus indirect cost); and 4) improve the quality of newly constructed projects. A general description of the proposed software for the time-cost-quality trade-off (TCQTO) is presented. The main inputs and outputs of the proposed software are outlined, the main subroutines and the inference engine are detailed, and the complexity analysis of the software is discussed. In addition, the proposed software is verified and tested using a real case study.
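The multi-objective selection at the heart of such a GA can be illustrated by a Pareto-dominance check on (time, cost, quality) tuples, where time and cost are minimized and quality is maximized; the candidate schedules below are hypothetical:

```python
def dominates(a, b):
    """Pareto dominance for (time, cost, quality) tuples: time and
    cost are minimized, quality is maximized."""
    no_worse = a[0] <= b[0] and a[1] <= b[1] and a[2] >= b[2]
    better = a[0] < b[0] or a[1] < b[1] or a[2] > b[2]
    return no_worse and better

def pareto_front(solutions):
    """Keep only the non-dominated schedules."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

# Hypothetical schedules: (duration in days, cost in $M, quality score)
candidates = [(120, 1.00, 0.90), (100, 1.20, 0.92), (130, 1.05, 0.85)]
front = pareto_front(candidates)
```

The GA evolves its population toward this non-dominated front rather than toward a single optimum.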
Abstract: Image compression can improve the performance of digital systems by reducing the time and cost of image storage and transmission without significant reduction of image quality. Furthermore, the discrete cosine transform has emerged as the state-of-the-art standard for image compression. In this paper, a hybrid image compression technique based on reversible blockade transform coding is proposed. The technique, implemented over regions of interest (ROIs), is based on selecting the coefficients that belong to different transforms, depending on the region of interest. This method allows: (1) codification with multiple kernels at various degrees of interest, (2) an arbitrarily shaped spectrum, and (3) flexible adjustment of the compression quality of the image and the background. No modification of the standard JPEG2000 decoder was required. The method was applied to different types of images. Results show better performance for the selected regions than when image coding methods were employed for the whole set of images. We believe that this method is an excellent tool for future image compression research, mainly on images where region-based coding is of interest, such as medical imaging modalities and several multimedia applications. Finally, a VLSI implementation of the proposed method is shown. It is also shown that the Hartley and cosine transform kernels give better performance than the other models.
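As a hedged illustration of the cosine kernel underlying block-based coding (not the paper's reversible blockade transform), a 1-D DCT-II can be sketched as:

```python
import math

def dct_ii(x):
    """1-D DCT-II, the kernel family used in JPEG-style block coding
    (unnormalized; illustrative only)."""
    n = len(x)
    return [sum(x[t] * math.cos(math.pi * k * (2 * t + 1) / (2 * n))
                for t in range(n)) for k in range(n)]

block = [1.0, 1.0, 1.0, 1.0]   # a flat block...
coeffs = dct_ii(block)         # ...whose energy concentrates in the DC term
```

Energy compaction of this kind is what makes per-region kernel selection worthwhile.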
Abstract: Image compression plays a vital role in today's communication. The limitation in allocated bandwidth leads to slower communication. To improve the rate of transmission within the limited bandwidth, the image data must be compressed before transmission. Basically there are two types of compression: 1) lossy compression and 2) lossless compression. Although lossy compression gives more compression than lossless compression, the accuracy of retrieval is lower for lossy compression than for lossless compression. The JPEG and JPEG2000 image compression systems follow Huffman coding for image compression. The JPEG2000 coding system uses the wavelet transform, which decomposes the image into different levels, where the coefficients in each sub-band are uncorrelated with the coefficients of other sub-bands. Embedded zerotree wavelet (EZW) coding exploits the multi-resolution properties of the wavelet transform to give a computationally simple algorithm with better performance than existing wavelet-based coders. For further improvement of compression applications, other coding methods have recently been suggested; an ANN-based approach is one such method. Artificial neural networks have been applied to many problems in image processing and have demonstrated their superiority over classical methods when dealing with noisy or incomplete data in image compression applications. A performance analysis over different images is presented, comparing the EZW coding system with an error backpropagation algorithm. The implementation and analysis show approximately 30% more accuracy in the retrieved image compared to the existing EZW coding system.
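A minimal sketch of the wavelet-then-threshold idea that zerotree coders build on (a single Haar level on a toy signal, not the EZW algorithm itself; the threshold is an illustrative assumption):

```python
def haar_step(x):
    """One level of the Haar wavelet transform: pairwise averages
    (approximation) followed by pairwise differences (detail)."""
    avg = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    det = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return avg + det

def threshold(coeffs, t):
    """Zero out small coefficients -- the basic lossy step that
    zerotree-style coders then encode compactly."""
    return [c if abs(c) >= t else 0.0 for c in coeffs]

signal = [9.0, 7.0, 3.0, 5.0]
compressed = threshold(haar_step(signal), 1.5)   # detail terms zeroed
```

The zeroed details are exactly what EZW's significance passes exploit for compact encoding.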
Abstract: In this study, a novel approach to image embedding is introduced. The proposed method consists of three main steps. First, the edges of the image are detected using Sobel mask filters. Second, the least significant bit (LSB) of each pixel is used. Finally, gray-level connectivity is applied using a fuzzy approach, and the ASCII code is used for information hiding. The bit preceding the LSB represents the edged image after gray-level connectivity, and the remaining six bits represent the original image with very little difference in contrast. The proposed method embeds three images in one image and includes, as a special case of data embedding, the hiding, identification, and authentication of text embedded within digital images. Image embedding is considered a good compression method in terms of conserving memory space. Moreover, information hiding within a digital image can be used for secure information transfer. The creation and extraction of the three embedded images, and the hiding of text information, are discussed and illustrated in the following sections.
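A minimal LSB-embedding sketch for ASCII text (omitting the edge-detection and fuzzy connectivity steps of the proposed method; the pixel values are hypothetical):

```python
def embed_bit(pixel, bit):
    """Replace the least significant bit of an 8-bit pixel."""
    return (pixel & 0xFE) | bit

def embed_text(pixels, text):
    """Hide the 8-bit ASCII codes of 'text' in the pixel LSBs,
    most significant bit first."""
    bits = [(ord(ch) >> i) & 1 for ch in text for i in range(7, -1, -1)]
    return [embed_bit(p, b) for p, b in zip(pixels, bits)]

def extract_text(pixels, n_chars):
    """Recover n_chars ASCII characters from the pixel LSBs."""
    bits = [p & 1 for p in pixels[:8 * n_chars]]
    return ''.join(chr(int(''.join(map(str, bits[i:i + 8])), 2))
                   for i in range(0, len(bits), 8))

stego = embed_text([100] * 16, "Hi")   # each pixel changes by at most 1
```

Because only the LSB changes, the visible contrast of the cover image is barely affected, matching the abstract's observation.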
Abstract: Masonry cavity walls are loaded by wind pressure and vertical load from upper floors. These loads result in bending moments and compression forces in the ties connecting the outer and inner walls of a cavity wall. Large cavity walls are furthermore loaded by differential movements from the temperature gradient between the outer and inner walls, which results in a critical increase of the bending moments in the ties. Since the ties are loaded by combined compression and moment forces, the load-bearing capacity is derived from instability equilibrium equations. Most such derivations are iterative, since exact instability solutions are complex to derive, not to mention the extra complexity introduced by dimensional instability from the temperature gradients. Using an inverse variable substitution and comparing an exact theory with an analytical instability solution, a method to design tie connectors in cavity walls was developed. The method takes into account constraint conditions limiting the free length of the wall tie, and the instability in the case of pure compression, which gives an optimal load-bearing capacity. The model is illustrated with examples from practice.
Abstract: Corrugated wire mesh laminates (CWML) are a class
of engineered open cell structures that have potential for applications
in many areas including aerospace and biomedical engineering. Two
different methods of fabricating corrugated wire mesh laminates from
stainless steel, one using a high temperature Lithobraze alloy and the
other using a low temperature Eutectic solder for joining the
corrugated wire meshes are described herein. Their implementation is
demonstrated by manufacturing CWML samples of 304 and 316
stainless steel (SST). Because wire meshes of different densities and
wire diameters can be employed, CWML laminates with a wide range
of effective densities can be created. The
fabricated laminates are tested under uniaxial compression. The
variation of the compressive yield strength with relative density of the
CWML is compared to the theory developed by Gibson and Ashby for
open cell structures [22]. It is shown that the compressive strength of
the corrugated wire mesh laminates can be described using the same
equations by using an appropriate value for the linear coefficient in the
Gibson-Ashby model.
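The Gibson-Ashby scaling for the plastic collapse strength of open-cell structures can be sketched as follows; the yield strength, relative density, and the usual open-cell coefficient C ~ 0.3 are illustrative assumptions (the paper fits its own linear coefficient for CWML):

```python
def gibson_ashby_strength(sigma_ys, rel_density, c=0.3):
    """Plastic collapse strength of an open-cell structure per the
    Gibson-Ashby scaling law:
        sigma* = C * sigma_ys * (rho*/rho_s)**1.5
    C ~ 0.3 is the commonly quoted open-cell coefficient; the CWML
    study calibrates an appropriate value instead."""
    return c * sigma_ys * rel_density ** 1.5

# Hypothetical stainless steel yield strength (MPa) and 10% relative density:
sigma_star = gibson_ashby_strength(215.0, 0.10)   # a few MPa
```

The power-law form means the compressive strength falls off steeply as the effective density of the laminate decreases.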
Abstract: In this work, we present a comparison between different techniques of image compression. First, the image is divided into blocks, which are organized according to a certain scan. Then, several compression techniques are applied, combined or alone. Such techniques include wavelets (the Haar basis) and the Karhunen-Loève transform. Simulations show that the combined versions are the best, with lower mean squared error (MSE), higher peak signal-to-noise ratio (PSNR), and better image quality, even in the presence of noise.
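The MSE and PSNR measures used in the comparison can be computed as follows (toy 4-pixel "images" stand in for real data):

```python
import math

def mse(a, b):
    """Mean squared error between two equal-size images (flattened)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    e = mse(a, b)
    return float('inf') if e == 0 else 10 * math.log10(peak ** 2 / e)

orig = [100, 110, 120, 130]
recon = [101, 109, 121, 129]   # reconstruction off by 1 per pixel
quality = psnr(orig, recon)    # around 48 dB
```

Lower MSE and higher PSNR together indicate a better reconstruction, which is the criterion applied to the combined techniques.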
Abstract: In this paper we investigate watermarking authentication applied to the medical imagery field. We first give an overview of watermarking technology, paying particular attention to fragile watermarking since it is the usual scheme for authentication. We then analyze the requirements for image authentication and integrity in medical imagery, and we finally show that invertible schemes are the best suited for this particular field. A well-known authentication method is studied. This technique is then adapted for interleaving patient information and a message authentication code with medical images in a reversible manner, that is, using lossless compression. The resulting scheme enables, on the one hand, the exact recovery of the original image, which can be unambiguously authenticated, and, on the other hand, the patient information to be saved or transmitted in a confidential way. To ensure greater security, the patient information is encrypted before being embedded into the images.
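A minimal sketch of the reversible, lossless-compression-based embedding idea: compress the LSB plane losslessly so its bit budget can also carry a payload, and invert exactly on extraction. The packing format and sizes here are illustrative assumptions, not the studied method:

```python
import zlib

def reversible_embed(lsb_plane, payload):
    """Losslessly compress the LSB plane and pack it, with the
    payload, into the same bit budget. Nothing is discarded, so
    extraction restores the original plane exactly."""
    packed = zlib.compress(bytes(lsb_plane))
    blob = len(packed).to_bytes(4, 'big') + packed + payload
    if len(blob) * 8 > len(lsb_plane):
        raise ValueError("compressed plane + payload do not fit")
    return blob

def reversible_extract(blob):
    """Recover the original LSB plane and the embedded payload."""
    n = int.from_bytes(blob[:4], 'big')
    return list(zlib.decompress(blob[4:4 + n])), blob[4 + n:]

# Toy example: a highly compressible LSB plane and a short payload.
plane, payload = reversible_extract(reversible_embed([0] * 4096, b"id-42"))
```

Exact recovery of the LSB plane is what makes the scheme invertible, the property the abstract argues is essential for medical imagery.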
Abstract: We examined, through MDCT images, whether children (<18 years old) are at risk of intra-thoracic trauma during 'one-handed' chest compressions. We measured the length of the lower half of the sternum (Stotal/2~X), the distance from the diaphragm to the midpoint of the sternum (Stotal/2~D), and half the width of an adult hand (Wtotal/2). All the 1-year-old children had Stotal/2~X and Stotal/2~D less than Wtotal/2. Among the children aged 2 years, 6 (60.0%) had Stotal/2~X and Stotal/2~D less than Wtotal/2; among those aged 3 years, 4 (26.7%); and among those aged 4 years, 2 (13.3%). However, Stotal/2~X and Stotal/2~D were greater than Wtotal/2 in all children aged 5 years or more. These results indicate that small children may be at an increased risk of intra-thoracic trauma during 'one-handed' chest compressions.
Abstract: The discrete wavelet transform (DWT) has proven far superior to the earlier discrete cosine transform (DCT) and standard JPEG in natural as well as medical image compression. Due to its localization properties in both the spatial and transform domains, the quantization error introduced by the DWT does not propagate globally as in the DCT. Moreover, the DWT is a global approach that avoids the block artifacts of JPEG. However, recent reports on natural image compression have shown the superior performance of the contourlet transform, a new extension of the wavelet transform to two dimensions using nonseparable and directional filter banks, compared to the DWT. This is mostly due to the optimality of the contourlet transform in representing edges that are smooth curves. In this work, we investigate this for medical images, especially CT images, for which it has not yet been reported. To do so, we propose a compression scheme in the transform domain and compare the performance of the DWT and the contourlet transform in terms of PSNR for different compression ratios (CRs) using this scheme. The results obtained using different types of computed tomography images show that the DWT still performs well at lower CRs, but the contourlet transform performs better at higher CRs.
Abstract: Data gathering is an essential operation in wireless sensor network applications, and it requires energy-efficient techniques to increase the lifetime of the network. Clustering is likewise an effective technique to improve the energy efficiency and network lifetime of wireless sensor networks. In this paper, an energy-efficient cluster formation protocol is proposed with the objective of achieving low energy dissipation and latency without sacrificing application-specific quality. The objective is achieved by applying randomized, adaptive, self-configuring cluster formation and localized control for data transfers. It involves application-specific data processing, such as data aggregation or compression. The cluster formation algorithm allows each node to make independent decisions, so as to generate good clusters in the end. Simulation results show that the proposed protocol requires minimum energy and latency for cluster formation, thereby reducing the overhead of the protocol.
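A minimal sketch of randomized cluster-head election with localized, independent decisions (a LEACH-style stand-in for the proposed protocol; the probability and the 1-D node positions are hypothetical):

```python
import random

def elect_cluster_heads(node_ids, p=0.1, seed=None):
    """Randomized, self-configuring election: each node independently
    volunteers as a cluster head with probability p."""
    rng = random.Random(seed)
    return [n for n in node_ids if rng.random() < p]

def assign_to_heads(node_ids, heads, positions):
    """Localized control: each non-head node joins the nearest head
    (1-D positions keep the example small)."""
    return {n: min(heads, key=lambda h: abs(positions[n] - positions[h]))
            for n in node_ids if n not in heads}

positions = {0: 0.0, 1: 1.0, 2: 5.0, 3: 6.0}
heads = [0, 2]                           # one possible election outcome
clusters = assign_to_heads(list(positions), heads, positions)
```

Because every decision uses only local information, no central coordinator (and hence no extra communication energy) is needed for cluster formation.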
Abstract: In the forming of ceramic materials, the plasticity concept is commonly used. This term refers to a particular mechanical behavior when clay is mixed with water. A plastic ceramic material shows a permanent strain without rupture when a compressive load produces a shear stress that exceeds the material's yield strength. A plastic ceramic body also exhibits measurable elastic behavior below the yield strength and when the applied load is removed. In this work, a mathematical model was developed from concepts of plasticity theory applied to the stress/strain diagram under compression.
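One simple form such a model can take is a bilinear stress/strain relation under compression: linear elastic up to yield, then a hardening slope. The modulus, yield strength, and hardening slope below are hypothetical values, not the paper's fitted model:

```python
def bilinear_stress(strain, e_mod, yield_stress, h=0.0):
    """Bilinear stress/strain model under compression: linear elastic
    up to the yield strength, then a hardening slope h (h = 0 gives
    ideal plasticity)."""
    yield_strain = yield_stress / e_mod
    if strain <= yield_strain:
        return e_mod * strain
    return yield_stress + h * (strain - yield_strain)

# Elastic below yield; plastic plateau above it (h = 0):
s1 = bilinear_stress(0.001, 50.0e3, 100.0)   # elastic branch
s2 = bilinear_stress(0.010, 50.0e3, 100.0)   # at the yield plateau
```

Unloading from the plateau along the elastic slope leaves the permanent strain the abstract describes.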
Abstract: Applying external loads to beams inevitably creates bending. In I-beams, this bending puts one flange in tension and the other in compression. As bending increases, the compression flange buckles and the beam twists out of its plane; this twisting is known as lateral torsional buckling. When the bending moment varies along the beam, the critical moment is greater than under pure bending; in other words, the bending gradient coefficient is always greater than unity. In this article, nearly 80 three-dimensional finite element models were developed in ANSYS 10.0 to analyze the lateral torsional buckling of beams and survey the influence of slenderness on the bending gradient coefficient. Results show that the Cb coefficient given by AISC is not correct for some beams, and its value is smaller than that proposed by AISC. Therefore, instead of a constant Cb for each loading case, a function with two criteria for calculating the Cb coefficient is proposed for some cases.
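For reference, the AISC moment gradient coefficient against which the finite element results are compared can be evaluated as follows (AISC 360, Eq. F1-1; the two loading cases shown are standard textbook checks):

```python
def cb_aisc(m_max, m_a, m_b, m_c):
    """AISC moment gradient coefficient
        Cb = 12.5*Mmax / (2.5*Mmax + 3*MA + 4*MB + 3*MC),
    where MA, MB, MC are the absolute moments at the quarter, mid,
    and three-quarter points of the unbraced segment."""
    return 12.5 * m_max / (2.5 * m_max + 3 * m_a + 4 * m_b + 3 * m_c)

cb_uniform = cb_aisc(1.0, 1.0, 1.0, 1.0)   # pure bending reference: Cb = 1
cb_udl = cb_aisc(1.0, 0.75, 1.0, 0.75)     # uniformly loaded simple beam: ~1.14
```

The article's finding is that for some beams the actual coefficient falls below these code values.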
Abstract: We present a theory for the optimal filtering of infinite sets of random signals. There are several new distinctive features of the proposed approach. First, we provide a single optimal filter for processing any signal from a given infinite signal set. Second, the filter is presented in the special form of a sum with p terms, where each term is represented as a combination of three operations. Each operation is a special stage of the filtering aimed at facilitating the associated numerical work. Third, an iterative scheme is implemented in the filter structure to provide an improvement in the filter performance at each step of the scheme. The final step of the scheme concerns signal compression and decompression; it is based on the solution of a new rank-constrained matrix approximation problem, whose solution is described in this paper. A rigorous error analysis is given for the new filter.
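The rank-constrained compression step can be illustrated, under the simplifying assumption of a rank-one constraint, by a power-iteration approximation (an Eckart-Young-style sketch, not the paper's more general solution):

```python
def rank1_approx(a, iters=50):
    """Best rank-1 approximation of a small matrix via power
    iteration: find the dominant singular triple (sigma, u, v)
    and return sigma * u * v^T."""
    m, n = len(a), len(a[0])
    v = [1.0] * n
    for _ in range(iters):
        u = [sum(a[i][j] * v[j] for j in range(n)) for i in range(m)]
        nu = sum(x * x for x in u) ** 0.5
        u = [x / nu for x in u]
        v = [sum(a[i][j] * u[i] for i in range(m)) for j in range(n)]
        nv = sum(x * x for x in v) ** 0.5
        v = [x / nv for x in v]
    sigma = sum(u[i] * a[i][j] * v[j] for i in range(m) for j in range(n))
    return [[sigma * u[i] * v[j] for j in range(n)] for i in range(m)]

a = [[2.0, 4.0], [1.0, 2.0]]   # already rank 1
approx = rank1_approx(a)       # recovers the matrix itself
```

Storing only (sigma, u, v) instead of the full matrix is the compression; reforming the outer product is the decompression.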
Abstract: We construct a method of noise reduction for JPEG-compressed images based on Bayesian inference using the maximizer of the posterior marginal (MPM) estimate. In this method, we apply the MPM estimate using two kinds of likelihood, both of which model grayscale images degraded by lossy JPEG compression. One is a deterministic model of the likelihood and the other is a probabilistic one expressed by a Gaussian distribution. Then, using Monte Carlo simulation for grayscale images, such as the 256-grayscale standard image "Lena" with 256 × 256 pixels, we examine the performance of the MPM estimate based on a performance measure using the mean squared error. We clarify that the MPM estimate with the Gaussian probabilistic model of the likelihood is effective for reducing noise, such as blocking artifacts and mosquito noise, if the parameters are set appropriately. On the other hand, we find that the MPM estimate with the deterministic model of the likelihood is not effective for noise reduction due to the low acceptance ratio of the Metropolis algorithm.
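The Metropolis acceptance rule whose low acceptance ratio is blamed for the deterministic model's failure can be sketched as follows; the quadratic energy and halving proposal are toy stand-ins for the actual posterior sampling:

```python
import math
import random

def metropolis_step(energy, propose, state, beta, rng):
    """One Metropolis update: always accept a downhill move, accept
    an uphill move with probability exp(-beta * dE). A chronically
    low acceptance ratio means the chain barely moves and the MPM
    estimate converges poorly."""
    cand = propose(state, rng)
    d_e = energy(cand) - energy(state)
    if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
        return cand, True
    return state, False

rng = random.Random(0)
# Downhill proposals on a quadratic energy are always accepted:
state, accepted = metropolis_step(lambda x: x * x,
                                  lambda s, r: s * 0.5, 4.0, 1.0, rng)
```

A hard (deterministic) likelihood makes almost every proposal uphill by a large amount, which is exactly the low-acceptance failure mode the abstract reports.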