Abstract: The bridge is an architectural symbol in Iran, as are the badgir (wind catcher), the fire temple, the arch and the vault. From very old ages, bridge construction in Iran has been intertwined with architecture, social customs, alms and charity, and holiness. From the times of the Medes, Achaemenids, Parthians and Sassanids, bridge construction held an inseparable relation with social life and architecture; on the basis of these ties, bridges and dams were given holy names, as with the Dokhtar castle and the Dokhtar bridges. This practice continued even after Islam: whenever Iranians were free of political strife and the security of the roads was established, bridge construction also prospered. In ancient times bridge construction passed through a process of growth and completion, reaching a peak of art and glory in the Sassanid era; after Islam, especially during the 4th century AH, it enjoyed another period of glory, and in the Safavid era it attained exceptional splendor and magnificence with the construction of the glorious bridges over the Zayandeh Roud River in Isfahan.
Combining several functions, and in some cases able to double as weirs, some of these bridges developed into magnificent constructions. These sustainable structures were built for various purposes: connecting the two sides of a river, storing water, controlling floods, harnessing water energy to drive watermills, channeling streams for farm use, and providing recreational places for people. Studies of these bridges reveal that their design and construction took many technological factors into account, such as flood discharge of the rivers, the hydraulics and hydrology of the rivers and bridges, geology, foundations, structure, construction materials, and the adoption of appropriate execution methods, all of which are analyzed in this article.
Abstract: The Discrete Wavelet Transform (DWT) has proved far superior to the earlier Discrete Cosine Transform (DCT) and standard JPEG in both natural and medical image compression. Due to its localization properties in both the spatial and the transform domain,
the quantization error introduced in DWT does not propagate
globally as in DCT. Moreover, the DWT is a global approach that avoids the block artifacts of JPEG. However, recent reports on natural
image compression have shown the superior performance of
contourlet transform, a new extension to the wavelet transform in two
dimensions using nonseparable and directional filter banks,
compared to DWT. This is mostly due to the optimality of contourlets in representing edges that are smooth curves. In this work, we
investigate this fact for medical images, especially for CT images,
which has not been reported yet. To do that, we propose a
compression scheme in the transform domain and compare the performance of the DWT and the contourlet transform in terms of PSNR for different compression ratios (CR) using this scheme. The results
obtained using different types of computed tomography images show that the DWT still performs well at lower CRs, but the contourlet transform performs better at higher CRs.
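The scheme this abstract describes (transform, keep the largest coefficients for a target compression ratio, reconstruct, report PSNR) can be illustrated with a minimal Python/numpy sketch. The orthonormal Haar DWT below is only a stand-in for the paper's actual wavelet and contourlet filter banks, and all function names are illustrative.

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D orthonormal Haar DWT."""
    a = (img[0::2] + img[1::2]) / np.sqrt(2)   # row pairs: lowpass
    d = (img[0::2] - img[1::2]) / np.sqrt(2)   # row pairs: highpass
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2] = (ll + lh) / np.sqrt(2)
    a[:, 1::2] = (ll - lh) / np.sqrt(2)
    d[:, 0::2] = (hl + hh) / np.sqrt(2)
    d[:, 1::2] = (hl - hh) / np.sqrt(2)
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2] = (a + d) / np.sqrt(2)
    img[1::2] = (a - d) / np.sqrt(2)
    return img

def compress_psnr(img, cr):
    """Keep the 1/cr largest-magnitude coefficients, zero the rest,
    reconstruct, and return the PSNR of the approximation in dB."""
    ll, lh, hl, hh = haar2d(img)
    coeffs = np.concatenate([c.ravel() for c in (ll, lh, hl, hh)])
    k = max(1, coeffs.size // cr)
    thresh = np.sort(np.abs(coeffs))[-k]
    bands = [np.where(np.abs(c) >= thresh, c, 0.0) for c in (ll, lh, hl, hh)]
    rec = ihaar2d(*bands)
    mse = np.mean((img - rec) ** 2)
    return 10 * np.log10(img.max() ** 2 / mse)
```

Because the transform is orthonormal, the reconstruction error energy equals the energy of the discarded coefficients, so PSNR decreases monotonically as CR grows, which is exactly the curve the paper compares across transforms.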
Abstract: This paper presents a new fingerprint coding technique
based on contourlet transform and multistage vector quantization.
Wavelets have shown their ability to represent natural images that contain smooth areas separated by edges. However, wavelets
cannot efficiently take advantage of the fact that the edges usually
found in fingerprints are smooth curves. This issue is addressed by
directional transforms, known as contourlets, which have the
property of preserving edges. The contourlet transform is a new
extension to the wavelet transform in two dimensions using
nonseparable and directional filter banks. The computation and
storage requirements are the major difficulty in implementing a
vector quantizer. In the full-search algorithm, the computation and storage complexity grows exponentially with the number of bits used to quantize each frame of spectral information. The storage
requirement in multistage vector quantization is less when compared
to full search vector quantization. The coefficients of contourlet
transform are quantized by multistage vector quantization. The
quantized coefficients are encoded by Huffman coding. The results
obtained are tabulated and compared with the existing wavelet-based ones.
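The storage argument above can be made concrete: coding each vector with b1 + b2 bits in two stages stores 2^b1 + 2^b2 codewords, instead of the 2^(b1+b2) a full-search quantizer needs. Below is a minimal two-stage VQ sketch in Python/numpy; the codebook trainer is plain k-means rather than the paper's design procedure, and all names are illustrative.

```python
import numpy as np

def train_codebook(data, size, iters=20, seed=0):
    """Tiny k-means codebook trainer (illustrative, not LBG splitting)."""
    rng = np.random.default_rng(seed)
    cb = data[rng.choice(len(data), size, replace=False)]
    for _ in range(iters):
        idx = np.argmin(((data[:, None] - cb[None]) ** 2).sum(-1), axis=1)
        for j in range(size):
            members = data[idx == j]
            if len(members):
                cb[j] = members.mean(0)   # move codeword to cluster mean
    return cb

def msvq_encode(data, cb1, cb2):
    """Two-stage VQ: quantize each vector, then quantize its residual."""
    i1 = np.argmin(((data[:, None] - cb1[None]) ** 2).sum(-1), axis=1)
    resid = data - cb1[i1]
    i2 = np.argmin(((resid[:, None] - cb2[None]) ** 2).sum(-1), axis=1)
    return i1, i2

def msvq_decode(i1, i2, cb1, cb2):
    """Reconstruction is the sum of the two stage codewords."""
    return cb1[i1] + cb2[i2]

# Demo: 500 random 4-D vectors, 4 + 4 bits per vector -> only
# 16 + 16 = 32 stored codewords instead of 2**8 = 256 for full search.
rng = np.random.default_rng(1)
vecs = rng.normal(size=(500, 4))
cb1 = train_codebook(vecs, 16)
i1 = np.argmin(((vecs[:, None] - cb1[None]) ** 2).sum(-1), axis=1)
resid = vecs - cb1[i1]
cb2 = train_codebook(resid, 16, seed=1)   # stage 2 trained on residuals
i1, i2 = msvq_encode(vecs, cb1, cb2)
rec = msvq_decode(i1, i2, cb1, cb2)
```

In the paper's pipeline, `vecs` would be blocks of contourlet coefficients and the index pairs `(i1, i2)` would then be entropy-coded with Huffman coding.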
Abstract: In this note, we investigate the blind source separability of linear FIR-MIMO systems. The concept of semi-reversibility of a system is introduced. It is shown that for a semi-reversible system, if the input signals belong to a binary alphabet, then the source data can be blindly separated. One sufficient condition for a system to be semi-reversible is obtained. It is also shown that the proposed criterion is weaker than those in the literature, which require the channel matrix to be irreducible/invertible or reversible.
Abstract: The challenge for software development houses in Bangladesh is to find a path that uses a minimal process rather than gigantic practice and process-area frameworks of the CMMI or ISO type. Small and medium-sized organizations in Bangladesh want to ensure minimal basic Software Process Improvement (SPI) in their day-to-day operational activities, and these basic practices may be enough to realize their companies' improvement goals. This paper focuses on the key issues in basic software practices for small and medium-sized software organizations that cannot afford CMMI, ISO, ITIL, etc. compliance certifications. This research also suggests a basic software process practice model for Bangladesh and shows the mapping of our suggestions to international best practice. In today's competitive IT world, small and medium-sized software companies require collaboration and strengthening to integrate their current practices into the global IT scenario. This research performed investigations and analysis of several projects' life cycles, current good practices, effective approaches, and the realities and pain points of practitioners. We carried out reasoning, root cause analysis, comparative analysis of various approaches, methods and practices, and a justification of CMMI against real life. We avoided reinventing the wheel; our focus is a minimal practice that will ensure satisfaction between organizations and software customers.
Abstract: Background noise is particularly damaging to speech intelligibility for people with hearing loss, especially sensorineural loss patients. Several investigations of speech intelligibility have
demonstrated that sensorineural loss patients need a 5-15 dB higher SNR than normal-hearing subjects. This paper describes a Discrete Cosine Transform Power-Normalized Least Mean Square (DCT-LMS) algorithm to improve the SNR and to speed up the convergence of the LMS for sensorineural loss patients. Since it requires only real arithmetic, it achieves a faster convergence rate than time-domain LMS, and the transformation also improves the eigenvalue distribution of the input autocorrelation matrix of the LMS filter.
The DCT has good orthonormal, separable, and energy-compaction properties. Although the DCT does not separate frequencies, it is a powerful signal decorrelator. It is real-valued and can thus be used effectively in real-time operation. The advantages of DCT-LMS compared with the standard LMS algorithm are shown via
SNR and eigenvalue-ratio computations. Exploiting the symmetry of the basis functions, the DCT transform matrix [AN] can be factored into a series of ±1 butterflies and rotation angles. This factorization yields one of the fastest DCT implementations. There are different ways to obtain such factorizations; this work uses the fast factored DCT algorithm developed by Chen et al. Computer simulation results show the superior convergence characteristics of the proposed algorithm: the SNR improves by at least 10 dB for input SNRs at or below 0 dB, with faster convergence and better time- and frequency-domain characteristics.
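The transform-domain idea can be sketched generically: apply a DCT to the tap-delay line, then run LMS with a per-bin power normalization so that all modes converge at a similar rate. This Python/numpy sketch assumes the usual DCT-LMS structure, not the authors' exact algorithm; the step size `mu`, smoothing factor `beta`, and all names are illustrative.

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix applied to the tap vector."""
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    T = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    T[0] /= np.sqrt(2)
    return T

def dct_lms(x, d, N=8, mu=0.05, beta=0.95, eps=1e-6):
    """Power-normalized LMS with a DCT pre-transform (DCT-LMS sketch).

    x: input signal, d: desired signal. Returns the error sequence."""
    T = dct_matrix(N)
    w = np.zeros(N)          # adaptive weights in the transform domain
    p = np.ones(N)           # running power estimate per DCT bin
    u = np.zeros(N)          # tap-delay line
    e = np.zeros(len(x))
    for i in range(len(x)):
        u = np.roll(u, 1)
        u[0] = x[i]
        v = T @ u                          # decorrelating DCT of the taps
        y = w @ v                          # filter output
        e[i] = d[i] - y
        p = beta * p + (1 - beta) * v * v  # per-bin power tracking
        # dividing by p equalizes the eigenvalue spread of E[v v^T]
        w = w + mu * e[i] * v / (p + eps)
    return e
```

Identifying a short FIR channel from white noise shows the expected behavior: the error starts near the desired-signal power and decays to a small steady-state value.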
Abstract: In this paper, we propose a novel time-frequency distribution (TFD) for the analysis of multi-component signals. In particular, we use synthetic as well as real-life speech signals to demonstrate the superiority of the proposed TFD over some existing ones. In the comparison, we consider cross-term suppression and the high concentration of signal energy around the instantaneous frequency (IF).
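The abstract does not specify the proposed TFD, so as a baseline illustration of the cross-term problem it targets, here is a minimal discrete Wigner-Ville distribution in Python/numpy. For a two-component signal, a spurious cross-term appears midway between the two auto-terms; the function name and signal parameters are illustrative.

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of a complex signal.

    Returns a (time x frequency) real array. For multi-component
    signals, cross-terms show up midway between the auto-terms,
    which is what smoothed/reduced-interference TFDs suppress."""
    x = np.asarray(x)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        taumax = min(n, N - 1 - n)            # lags that stay in range
        tau = np.arange(-taumax, taumax + 1)
        kern = np.zeros(N, dtype=complex)
        kern[tau % N] = x[n + tau] * np.conj(x[n - tau])  # bilinear kernel
        W[n] = np.real(np.fft.fft(kern))      # FFT over the lag variable
    return W
```

For two tones at normalized frequencies 16/128 and 32/128, the auto-terms sit at FFT bins 32 and 64 (the lag variable advances in steps of two samples, doubling the apparent frequency) and the oscillating cross-term sits at bin 48.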
Abstract: The aim of our work is to study phase composition,
particle size and magnetic response of Fe2O3/TiO2 nanocomposites
with respect to the final annealing temperature. These nanomaterials are considered smart catalysts, separable from a liquid or gaseous phase by an applied magnetic field. The starting product was obtained
by an ecologically acceptable route, based on heterogeneous
precipitation of TiO2 on modified γ-Fe2O3 nanocrystals dispersed in water. The precursor was subsequently annealed in air at temperatures ranging from 200 °C to 900 °C. The samples were
investigated by synchrotron X-ray powder diffraction (S-PXRD),
magnetic measurements and Mössbauer spectroscopy. As evidenced
by S-PXRD and Mössbauer spectroscopy, increasing the annealing
temperature causes evolution of the phase composition from
anatase/maghemite to rutile/hematite; finally, above 700 °C, pseudobrookite (Fe2TiO5) also forms. The apparent particle size of the various Fe2O3/TiO2 phases has been determined from the high-quality S-PXRD data using two different approaches: Rietveld refinement and the Debye method. The magnetic response of the samples is discussed in relation to the phase composition and the particle size.