Abstract: This article examines the actions of the Kazakh intelligentsia toward the formation of a national state, their attempts to restore national independence, and the path they built toward present-day independence, through a review of the history of the national ideology.
Abstract: In wavelet regression, choosing the threshold value is a crucial issue. Too large a value cuts too many coefficients, resulting in oversmoothing; conversely, too small a value allows many coefficients into the reconstruction, giving a wiggly estimate that results in undersmoothing. A proper choice of threshold strikes a careful balance between these extremes. This paper gives a brief introduction to several threshold selection methods: universal thresholding, SURE, EBayes, two-fold cross-validation, and level-dependent cross-validation. A simulation study over a variety of sample sizes, test functions, and signal-to-noise ratios is conducted to compare their numerical performance under three different noise structures. For Gaussian noise, EBayes outperforms the others in all cases for all test functions, while two-fold cross-validation provides the best results for long-tailed noise. For large signal-to-noise ratios, level-dependent cross-validation works well in the correlated-noise case. As expected, increasing either the sample size or the signal-to-noise ratio increases estimation efficiency.
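The balance described above can be illustrated with the classical universal (VisuShrink) threshold, λ = σ√(2 log n), applied by soft thresholding. The sketch below is only an illustration of that one method, not the paper's simulation setup; the function names and the toy coefficient vector are hypothetical.

```python
import numpy as np

def soft_threshold(coeffs, lam):
    """Soft-threshold coefficients: shrink toward zero by lam, zeroing small ones."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)

def universal_threshold(coeffs):
    """Universal (VisuShrink) threshold lam = sigma * sqrt(2 log n), with sigma
    estimated robustly from the median absolute deviation of the coefficients."""
    n = coeffs.size
    sigma = np.median(np.abs(coeffs)) / 0.6745  # robust noise-level estimate
    return sigma * np.sqrt(2.0 * np.log(n))

# toy example: 1024 coefficients, mostly pure noise, with a few large signal terms
rng = np.random.default_rng(0)
c = rng.normal(0.0, 1.0, 1024)
c[:5] += 10.0                      # five strong "signal" coefficients
lam = universal_threshold(c)
den = soft_threshold(c, lam)
print(np.count_nonzero(den))       # only the strong coefficients survive
```

A smaller λ would let many noise coefficients through (the wiggly, undersmoothed case); a larger one would also kill the signal terms (oversmoothing).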
Abstract: One of the main issues in Computer Vision is to extract the movement of one or several points or objects of interest in an image or video sequence to conduct any kind of study or control process. Different techniques to solve this problem have been applied in numerous areas such as surveillance systems, traffic analysis, motion capture, image compression, navigation systems and others, where the specific characteristics of each scenario determine the approach to the problem. This paper puts forward a Computer Vision based algorithm to analyze fish trajectories in high-turbulence conditions in artificial structures called vertical slot fishways, designed to allow the upstream migration of fish past obstructions in rivers. The suggested algorithm calculates the position of the fish at every instant starting from images recorded with a camera, using neural networks to perform fish detection on the images. Different laboratory tests have been carried out in a full-scale fishway model and with live fish, allowing the reconstruction of the fish trajectory and the measurement of the fish's velocities and accelerations. These data can provide useful information for designing more effective vertical slot fishways.
Abstract: A new estimator for the evolutionary spectrum (ES), based on the short-time Fourier transform (STFT) and the modified group delay function (MGDF) with signal decomposition (SD), is proposed. The STFT, due to its built-in averaging, suppresses cross terms, and the MGDF preserves the frequency resolution of the rectangular window while reducing the Gibbs ripple. The present work uses SD to overcome the magnitude distortion observed when the ES of multi-component non-stationary signals is estimated with the STFT and MGDF. The SD is achieved either through the discrete cosine transform based harmonic wavelet transform (DCTHWT) or through perfect reconstruction filter banks (PRFB). The MGDF also improves the signal-to-noise ratio by removing associated noise. The performance of the present method is illustrated for cross-chirp and frequency shift keying (FSK) signals, for which it performs better than STFT-MGDF (STFT-GD) alone; further, its noise immunity is better than that of the STFT. The SD-based methods, however, cannot clearly bring out the frequency transition path from band to band, as there will be a gap in the contour plot at the transition. The PRFB-based STFT-SD shows better performance than the DCTHWT decomposition method for STFT-GD.
Abstract: Matrix metalloproteinases (MMPs) are a class of structurally and functionally related enzymes involved in altering the natural elements of the extracellular matrix. Most MMP structures have been determined crystallographically and published in the Worldwide Protein Data Bank, either isolated, as full structures, or bound to natural or synthetic inhibitors. This study proposes an algorithm to replace missing crystallographic structures in the PDB database. We have compared the results of a chosen docking algorithm with a known crystallographic structure in order to validate the reconstruction of enzyme sites where crystallographic data are missing.
Abstract: An alternative to the use of the Discrete Fourier Transform (DFT) for Magnetic Resonance Imaging (MRI) reconstruction is the use of parametric modeling techniques. This method is suitable for problems in which the image can be modeled by explicit known source functions with a few adjustable parameters. Despite the success reported in the use of modeling as an alternative MRI reconstruction technique, two important problems remain challenges to its applicability: estimation of the model order and determination of the model coefficients. In this paper, five of the suggested methods for evaluating the model order are assessed: the Final Prediction Error (FPE), the Akaike Information Criterion (AIC), Residual Variance (RV), Minimum Description Length (MDL), and the Hannan and Quinn (HNQ) criterion. These criteria were evaluated on MRI data sets using the Transient Error Reconstruction Algorithm (TERA). The result for each criterion is compared to the result obtained with a fixed-order technique, and three measures of similarity were evaluated. The results show that MDL gives the highest measure of similarity to that obtained by the fixed-order technique.
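As a rough illustration of how such criteria trade residual variance against model order, the common textbook forms of AIC and MDL (exact forms vary slightly across references, and this is not the paper's TERA setup) can be compared on a hypothetical residual-variance curve:

```python
import numpy as np

def aic(n, k, rv):
    """Akaike Information Criterion: penalty of 2 per parameter."""
    return n * np.log(rv) + 2 * k

def mdl(n, k, rv):
    """Minimum Description Length: heavier penalty of ln(n) per parameter."""
    return n * np.log(rv) + k * np.log(n)

# hypothetical residual variance that drops sharply up to order 4, then plateaus
n = 256
orders = np.arange(1, 11)
rv = 1.0 / orders.clip(max=4) + 0.001 * orders

best_aic = orders[np.argmin([aic(n, k, v) for k, v in zip(orders, rv)])]
best_mdl = orders[np.argmin([mdl(n, k, v) for k, v in zip(orders, rv)])]
print(best_aic, best_mdl)  # both criteria pick order 4 on this toy curve
```

Because MDL's per-parameter penalty grows with the data length, it tends to select more parsimonious orders than AIC on real data, which is consistent with it agreeing most closely with a fixed (low) order.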
Abstract: A new automatic system for the recognition and reconstruction of rescaled and/or rotated partially occluded objects is presented. The objects to be recognized are described by 2D views, and each view is occluded by several half-planes. The whole object views and their visible parts (linear cuts) are then stored in a database. To establish whether a region R of an input image represents a possibly occluded object, the system generates a set of linear cuts of R and compares them with the elements in the database. Each linear cut of R is associated with the most similar database linear cut. R is recognized as an instance of the object O if the majority of the linear cuts of R are associated with linear cuts of views of O. In the case of recognition, the system reconstructs the occluded part of R and determines the scale factor and the orientation in the image plane of the recognized object view. The system has been tested on two different datasets of objects, showing good performance both in terms of recognition and reconstruction accuracy.
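The matching-and-voting step can be sketched as nearest-neighbour matching of cut descriptors followed by a majority vote. The descriptor dimension, the distance measure, and all names below are hypothetical, not taken from the paper:

```python
from collections import Counter
import numpy as np

def classify_region(region_cuts, db_cuts, db_labels):
    """Match each linear cut of the region to its nearest database cut
    (Euclidean distance between fixed-length descriptors) and label the
    region by majority vote over the matched cuts' object labels."""
    votes = []
    for cut in region_cuts:
        d = np.linalg.norm(db_cuts - cut, axis=1)  # distance to every stored cut
        votes.append(db_labels[int(np.argmin(d))])
    label, count = Counter(votes).most_common(1)[0]
    # accept only if a strict majority of the cuts agree on one object
    return label if count > len(votes) / 2 else None

# hypothetical 4-dimensional cut descriptors for two objects A and B
db = np.array([[0.0, 0, 0, 0], [1, 1, 1, 1], [5, 5, 5, 5], [6, 6, 6, 6]])
labels = ["A", "A", "B", "B"]
q = np.array([[0.2, 0, 0, 0], [0.9, 1, 1, 1], [5.1, 5, 5, 5]])  # occluded view
print(classify_region(q, db, labels))  # two of the three cuts vote for A
```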
Abstract: Although the level-crossing concept has been the subject of intensive investigation over the last few years, certain problems of great interest remain unsolved. One of these concerns the distribution of threshold levels. This paper presents new threshold level allocation schemes for level-crossing-based nonuniform sampling. Intuitively, it is more reasonable if the information-rich regions of the signal are sampled finely and those with sparse information are sampled coarsely. To achieve this objective, we propose non-linear quantization functions which dynamically assign the number of quantization levels depending on the importance of the given amplitude range. Two new approaches to determining the importance of a given amplitude segment are presented, based on exponential and logarithmic functions. Various aspects of the proposed techniques are discussed and experimentally validated, and their efficacy is investigated by comparison with uniform sampling.
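One way to realize such amplitude-dependent level allocation is to warp a uniform grid with a logarithmic (μ-law style) companding curve, packing thresholds densely where amplitudes are assumed information-rich. This is only an illustrative sketch; it is not the paper's actual allocation function:

```python
import numpy as np

def log_levels(n_levels, amax, mu=255.0):
    """Place level-crossing thresholds along a mu-law style curve:
    dense near zero amplitude, progressively sparser toward amax."""
    u = np.linspace(0.0, 1.0, n_levels)                 # uniform grid on [0, 1]
    return amax * (np.power(1.0 + mu, u) - 1.0) / mu    # warped, monotone levels

def crossings(x, levels):
    """Sample indices at which the signal crosses any allocated level."""
    hit = [np.any((levels > min(a, b)) & (levels < max(a, b)))
           for a, b in zip(x[:-1], x[1:])]
    return np.flatnonzero(hit)

t = np.linspace(0, 1, 500)
x = np.abs(np.sin(2 * np.pi * 3 * t))   # toy nonnegative test signal
lv = log_levels(8, x.max())
print(len(crossings(x, lv)))            # samples are emitted only at crossings
```

The spacing between consecutive levels grows geometrically, so low-amplitude excursions trigger many samples while large-amplitude plateaus trigger few.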
Abstract: Bursa, lying on important trade roads since the establishment of the Ottoman Empire and having accumulated capital as a result of silk production, was one of the first cities in which modernization activities were applied. Bursa maintained its importance during the Republican Period, became one of the most important cities of the country, and is today the fourth biggest and most industrialized city in Turkey. Social, political, economic and cultural changes began in the Ottoman Empire with the reforms starting with the 1839 Edict of Tanzimat, which aimed at modernizing the society and the government and centralizing political power. After the Tanzimat reforms, the transformation of the city changed and planning processes began in Bursa according to the vision of its governors. The thresholds of the city are very important data for sustainable planning by city planners. The main aim of this study is to investigate the changes and transformations of the city in relation to its socio-economic and cultural properties, for the benefit of city planners.
Abstract: Super resolution (SR) technologies are now being applied to video to improve resolution, and some TV sets are now equipped with SR functions. However, it is not known whether super resolution image reconstruction (SRR) really works for TV. Super resolution with non-linear signal processing (SRNL) has recently been proposed. SRR and SRNL are the only methods for processing video signals in real time. The results of subjective assessments of SRR and SRNL are described in this paper. SRR video was produced in simulations with quarter-precision motion vectors and 100 iterations, which are ideal conditions for SRR. We found that the image quality of SRNL is better than that of SRR even though SRR was processed under ideal conditions.
Abstract: In this work a dual laser triangulation system is presented for fast building of 2.5D textured models of objects within a production line. This scanner is designed to produce data suitable for 3D completeness inspection algorithms. For this purpose two laser projectors have been used in order to considerably reduce the problem of occlusions in the camera movement direction. Results of reconstruction of electronic boards are presented, together with a comparison with a commercial system.
Abstract: The goal of gene expression analysis is to understand the processes that underlie the regulatory networks and pathways controlling inter-cellular and intra-cellular activities. Microarray datasets are now extensively used for this purpose, and the scope of such analysis has broadened in recent times towards the reconstruction of gene networks and other holistic approaches of Systems Biology. Evolutionary methods are proving successful in such problems and a number of them have been proposed; however, all of these methods are based on processing genotypic information. There is therefore a need to develop evolutionary methods that address phenotypic interactions together with genotypic interactions. We present a novel evolutionary approach, called the Phenomic algorithm, in which the focus is on phenotypic interaction. We use the expression profiles of genes to model the interactions between them at the phenotypic level. We apply this algorithm to the yeast sporulation dataset and show that it can identify gene networks with relative ease.
Abstract: Due to the increased number of terrorist attacks in recent years, loads induced by explosions need to be incorporated into building designs. For the safer performance of a structure, its foundation should have sufficient strength and stability. Therefore, prior to any reconstruction or rehabilitation of a building subjected to blast, it is important to examine the adverse effects on the foundation caused by blast-induced ground shocks. This paper evaluates the effects of a buried explosion on a pile foundation. It treats the dynamic response of the pile in saturated sand, using the explicit dynamic nonlinear finite element software LS-DYNA. The blast-induced wave propagation in the soil and the horizontal deformation of the pile are presented and the results are discussed. Further, a parametric study is carried out to evaluate the effect of varying the explosive shape on the pile response. This information can be used to evaluate the vulnerability of piled foundations to credible blast events as well as to develop guidance for their design.
Abstract: This paper presents the results of enhancing the images of a left-right stereo pair in order to increase the resolution of a 3D representation of a scene generated from that same pair. A new neural network structure known as a Self Delaying Dynamic Network (SDN) has been used to perform the enhancement. The advantage of SDNs over existing techniques such as bicubic interpolation is their ability to cope with motion and noise effects. SDNs are used to generate two high resolution images, one based on frames taken from the left view of the subject, and one based on the frames from the right. This new high resolution stereo pair is then processed by a disparity map generator. The disparity map generated is compared to two other disparity maps generated from the same scene: the first generated from an original high resolution stereo pair, and the second generated using a stereo pair which has been enhanced with bicubic interpolation. The maps generated using the SDN-enhanced pairs match the target maps more closely. The addition of extra noise to the input images is less problematic for the SDN system, which is still able to outperform bicubic interpolation.
Abstract: As Computed Tomography (CT) normally requires hundreds of projections to reconstruct an image, patients are exposed to more X-ray energy, which may cause side effects such as cancer. Even when the variability of the particles in the object is very low, Computed Tomography requires many projections for good-quality reconstruction. In this paper, the low variability of the particles in an object is exploited to obtain good-quality reconstruction. Although the reconstructed image and the original image have the same projections, in general they need not be the same. If, in addition to the projections, a priori information about the image is known, it is possible to obtain a good-quality reconstructed image. In this paper, it is shown by experimental results why conventional algorithms fail to reconstruct from a few projections, and an efficient polynomial-time algorithm is given to reconstruct a bi-level image from its projections along rows and columns, together with a known sub-image of the unknown image and smoothness constraints, by reducing the reconstruction problem to an integral max-flow problem. The paper also discusses necessary and sufficient conditions for uniqueness, and the extension of 2D bi-level image reconstruction to 3D bi-level image reconstruction.
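The row/column-projection problem without the sub-image and smoothness constraints admits a much simpler greedy (Gale-Ryser style) construction than the max-flow reduction described above; the sketch below illustrates only that unconstrained case, and its names are illustrative.

```python
import numpy as np

def reconstruct_binary(row_sums, col_sums):
    """Greedy reconstruction of a 0/1 matrix from its row and column
    projections: each row places its ones in the columns with the largest
    remaining column sums. Returns None if the sums are inconsistent."""
    if sum(row_sums) != sum(col_sums):
        return None
    m, n = len(row_sums), len(col_sums)
    remaining = list(col_sums)
    A = np.zeros((m, n), dtype=int)
    for i, r in enumerate(row_sums):
        # pick the r columns with the highest remaining demand
        order = sorted(range(n), key=lambda j: -remaining[j])[:r]
        if any(remaining[j] == 0 for j in order):
            return None  # not enough column capacity left
        for j in order:
            A[i, j] = 1
            remaining[j] -= 1
    return A

A = reconstruct_binary([2, 1, 2], [2, 2, 1])
print(A.sum(axis=1), A.sum(axis=0))  # projections are recovered exactly
```

This also illustrates the non-uniqueness mentioned above: other 0/1 matrices share the same two projections, which is precisely why the a priori information (known sub-image, smoothness) is needed to pin down the solution.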
Abstract: Computed tomography and laminography are being heavily investigated in a compressive-sensing-based image reconstruction framework to reduce the dose to patients as well as to radiosensitive devices such as multilayer microelectronic circuit boards. Researchers are actively working on optimizing compressive-sensing-based iterative image reconstruction algorithms to obtain better-quality images. However, the effects of the sampled data's properties on the reconstructed image's quality, particularly under insufficiently sampled data conditions, have not been explored in computed laminography. In this paper, we investigated the effects of two data properties, sampling density and data incoherence, on the image reconstructed by conventional computed laminography and by a recently proposed method called the spherical sinusoidal scanning scheme. We found that in a compressive-sensing-based image reconstruction framework, the image quality depends mainly upon the data incoherence when the data are uniformly sampled.
Abstract: To meet the demands of wireless sensor networks (WSNs), where data are usually aggregated at a single source prior to transmission to any distant user, there is a need to establish a tree structure inside any given event region. In this paper, a novel technique to create one such tree is proposed. This tree preserves the energy and maximizes the lifetime of event sources while they are constantly transmitting for data aggregation. The term Decentralized Lifetime Maximizing Tree (DLMT) is used to denote this tree. In DLMT, nodes with higher energy tend to be chosen as data-aggregating parents, so that the time to detect the first broken tree link can be extended and less energy is involved in tree maintenance. By constructing the tree in this way, the protocol is able to reduce the frequency of tree reconstruction, minimize the amount of data loss, minimize the delay during data collection, and preserve energy.
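The parent-selection idea, with each node attaching to the highest-energy candidate parent, can be sketched as follows. This is a simplified, hypothetical rendering of the energy-aware rule, not the DLMT protocol itself:

```python
from collections import deque

def build_tree(energy, neighbors, root):
    """Grow an aggregation tree from the root breadth-first; each joining
    node picks, among its already-attached neighbors, the candidate
    parent with the highest residual energy."""
    parent = {root: None}
    frontier = deque([root])
    while frontier:
        u = frontier.popleft()
        for v in neighbors[u]:
            if v not in parent:
                # candidate parents: v's neighbors already in the tree
                cands = [w for w in neighbors[v] if w in parent]
                parent[v] = max(cands, key=lambda w: energy[w])
                frontier.append(v)
    return parent

# toy 4-node region: "s" is the aggregating source, "b" has the most energy
energy = {"s": 5.0, "a": 3.0, "b": 9.0, "c": 1.0}
nbrs = {"s": ["a", "b"], "a": ["s", "b", "c"],
        "b": ["s", "a", "c"], "c": ["a", "b"]}
tree = build_tree(energy, nbrs, "s")
print(tree["c"])  # "c" attaches to "b", its highest-energy neighbor
```

Routing the leaf through the high-energy node matches the stated goal: the weakest nodes avoid aggregation duty, delaying the first broken link.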
Abstract: Typhoon Morakot hit Taiwan in 2009 and caused severe damage. The government employed a compulsory relocation strategy for post-disaster reconstruction. This study analyzes the impact of this strategy on community solidarity. It employs a multiple-method approach for data collection, including semi-structured interviews, secondary data, and documentation. The results indicate that the government's strategy for distributing housing has led to conflicts within the communities. In addition, the relocation process has stimulated tensions between victims of the disaster and those residents whose lands were chosen as new sites for relocation. The government's strategy of "collective relocation" also worsened community integration. Moreover, the fact that a permanent housing community may accommodate people from different places also poses challenges for the development of new interpersonal relations in the communities. The study concludes by emphasizing the importance of bringing social, economic and cultural aspects into consideration for post-disaster relocation.
Abstract: In this work, we study the correspondence between the energy density of the new agegraphic model and the energy density of the Gauss-Bonnet model in a flat universe. We reconstruct Λ and ω_Λ for them with the scale factor a(t) = a₀tʰ.
Abstract: We study the problem of reconstructing three-dimensional binary matrices whose interiors are accessible only through a few projections. This question is prominently motivated by the demand in materials science for tools to reconstruct crystalline structures from images obtained by high-resolution transmission electron microscopy. Various approaches have been suggested to reconstruct a 3D object (crystalline structure) by reconstructing slices of it. To handle the ill-posedness of the problem, a priori information such as convexity, connectivity and periodicity is used to limit the number of possible solutions. Formally, a 3D object (crystalline structure) with a priori information is modeled by a class of 3D binary matrices satisfying that information. We consider 3D binary matrices with periodicity constraints, and we propose a polynomial-time algorithm to reconstruct them from two orthogonal projections.