Abstract: The 2008 Wenchuan earthquake caused severe damage to the rural areas of Chengdu, including the rupture of social networks, the stagnation of economic production, and the disruption of living space, making post-disaster reconstruction a sustainability issue. As an important link in maintaining the order of rural social development, the social network should be an important component of post-disaster reconstruction. This paper therefore takes rural reconstruction communities in the earthquake-stricken areas of Chengdu as its research object and adopts sociological methods such as field survey, observation and interview to understand the transformation of the rural social network under the influence of the earthquake and its impact on rural space. It finds that rural societies affected by the earthquake generally experienced three phases: the breaking of stable social relations, a transitional, temporary non-normal state, and the reorganization of social networks. The connotation of rural social relations changed accordingly in each phase: toward a new division of labor in the social dimension, toward capital flow and redistribution under a new production mode in the capital dimension, and toward relative decentralization after concentration in the spatial dimension. Along with these changes, social issues have emerged in rural areas, such as the alienation of competition in the new industrial division of labor, low social connection, significant redistribution of capital, and a lack of public space. Based on a comprehensive review of these issues, this paper proposes corresponding response mechanisms. First, a reasonable division of labor should be established within the villages to realize diversified commodity supply. Secondly, the villages should adjust their industrial types to promote the equitable participation of groups in capital allocation. Finally, external public spaces should be added to strengthen the field of social interaction within the communities.
Abstract: The use of biometric identifiers in information security, access control, and authentication in ATMs and banking, among other fields, raises serious concerns about the safety of biometric data. Eight vulnerabilities have been identified in the general architecture of a biometric system; six of them allow the minutiae template to be obtained in plain text. The main consequence of obtaining minutiae templates is the loss of the biometric identifier for life. To mitigate these vulnerabilities, several models to protect minutiae templates have been proposed, but vulnerabilities in the cryptographic security of these models still allow biometric data to be obtained in plain text. In order to increase cryptographic security and ease of reversibility, a minutiae template protection model is proposed. The model aims to provide cryptographic protection and to facilitate the reversibility of the data using two levels of security. The first level is the data transformation level, which generates data invariant to rotation and translation; this transformation is, moreover, irreversible. The second level is the evaluation level, where the encryption key is generated and the data are evaluated using a defined evaluation function. The model is aimed at mitigating the known vulnerabilities of previously proposed models, basing its security on the infeasibility of polynomial reconstruction.
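The abstract does not specify the first-level transformation; as a hedged illustration only, a common way to obtain rotation- and translation-invariant minutiae data is to replace absolute coordinates with pairwise relations (distance and relative orientation). The sketch below assumes minutiae given as (x, y, theta) triples and is not the paper's actual transform; in particular, unlike the paper's first level, this simple mapping is not designed to be irreversible.

```python
import numpy as np

def invariant_features(minutiae):
    """Map minutiae (x, y, theta) to rotation/translation-invariant
    pairwise features: Euclidean distance and relative orientation."""
    feats = []
    for i in range(len(minutiae)):
        xi, yi, ti = minutiae[i]
        for j in range(i + 1, len(minutiae)):
            xj, yj, tj = minutiae[j]
            d = np.hypot(xj - xi, yj - yi)    # unchanged by rigid motion
            dtheta = (tj - ti) % (2 * np.pi)  # relative angle: rotation-invariant
            feats.append((d, dtheta))
    return feats
```

Applying the same rigid motion (rotation plus translation) to every minutia leaves these features unchanged, which is the invariance property the first security level requires.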
Abstract: This paper focuses on the concept of social capital, especially for post-disaster housing reconstruction. The context of the study is Indonesia, with post-earthquake Yogyakarta (2006) as a case, but it is expected that the concept can be adopted in post-disaster reconstruction in general. The discussion begins by addressing issues of post-disaster house reconstruction in Indonesia and Yogyakarta; it then defines social capital as a concept for effective community-based management capacity, describes social capital after the Java earthquake as expressed in Gotong Royong—community mutual self-help—and outlines an approach and strategy towards community-based reconstruction.
Abstract: Reconstruction from sparse-view projections is one of the important problems in computed tomography (CT), where acquiring a large number of projections is often unavailable or infeasible. Traditionally, convex regularizers have been exploited to improve reconstruction quality in sparse-view CT, and the convex constraint in those problems leads to an easy optimization process. However, convex regularizers often result in a biased approximation and inaccurate reconstruction in CT problems. Here, we present a nonconvex, Lipschitz-continuous and non-smooth regularization model. The CT reconstruction is formulated as a nonconvex constrained L1 − L2 minimization problem and solved through a difference-of-convex algorithm and the alternating direction method of multipliers, which generates a better result than L0 or L1 regularizers in CT reconstruction. We compare our method with previously reported high-performance methods that use convex regularizers such as TV, wavelet, curvelet, and curvelet+TV (CTV) on test phantom images. The results show that there are benefits in using the nonconvex regularizer in sparse-view CT reconstruction.
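As a hedged sketch of the difference-of-convex idea only (on a toy denoising problem with an identity forward operator, not the paper's full CT model), each DCA iteration linearizes the concave part −λ‖x‖₂ of the L1 − L2 penalty; the remaining convex ℓ1 subproblem then has a closed-form soft-thresholding solution:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator, the prox of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l1_minus_l2_denoise(b, lam, iters=100):
    """DCA for min_x 0.5*||x - b||^2 + lam*(||x||_1 - ||x||_2).
    Each step replaces -lam*||x||_2 by its linearization at x_k,
    so the convex subproblem is solved exactly by soft-thresholding."""
    x = b.copy()
    for _ in range(iters):
        nrm = np.linalg.norm(x)
        u = x / nrm if nrm > 0 else np.zeros_like(x)  # subgradient of ||x||_2
        x = soft(b + lam * u, lam)
    return x
```

On a signal with one dominant entry, the nonconvex penalty keeps the large coefficient essentially unbiased while zeroing the small ones, which is the bias-reduction benefit the abstract claims over pure ℓ1.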
Abstract: Diffuse Optical Tomography (DOT) is a non-invasive imaging modality used in clinical diagnosis for the earlier detection of carcinoma cells in brain tissue. It is a form of optical tomography that produces a reconstructed image of human soft tissue using near-infrared light. It comprises two steps, the forward model and the inverse model. The forward model describes light propagation in a biological medium. The inverse model uses the scattered light to recover the optical parameters of the tissue. DOT suffers from severe ill-posedness due to its incomplete measurement data, so accurate analysis of this modality is very complicated. To overcome this problem, optical properties of the soft tissue, such as the absorption coefficient, scattering coefficient and optical flux, are processed by the standard regularization technique called Levenberg–Marquardt regularization. Reconstruction algorithms such as the Split Bregman and Gradient Projection for Sparse Reconstruction (GPSR) methods are used to reconstruct the image of human soft tissue for tumour detection. Among these algorithms, the Split Bregman method provides better performance than the GPSR algorithm. Parameters such as signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), relative error (RE) and CPU time for reconstructing images are analyzed to assess performance.
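The Levenberg–Marquardt step at the core of such a regularized inverse model can be sketched generically: each iteration solves the damped normal equations (JᵀJ + λI)δ = −Jᵀr. The toy exponential fit below stands in for the DOT forward model, which is an assumption for illustration only.

```python
import numpy as np

def lm_step(J, r, lam):
    """One Levenberg-Marquardt update: solve (J^T J + lam*I) d = -J^T r."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), -J.T @ r)

def fit_exponential(t, y, p0, lam=1e-3, iters=100):
    """Fit y ~ a*exp(-b*t) by LM; a stand-in for a DOT-style inverse problem."""
    a, b = p0
    for _ in range(iters):
        model = a * np.exp(-b * t)
        r = model - y                                  # residual
        J = np.column_stack([np.exp(-b * t),           # d(model)/da
                             -a * t * np.exp(-b * t)]) # d(model)/db
        d = lm_step(J, r, lam)
        a, b = a + d[0], b + d[1]
    return a, b
```

The damping term λI is what regularizes the ill-conditioned normal equations, trading a Gauss–Newton step (λ → 0) against a gradient-descent step (large λ).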
Abstract: Railway transport is considered one of the most environmentally friendly modes of transport. With freight transport predicted to increase, some lines face problems meeting the demanded capacity. Track capacity could be increased by constructive adjustments to the infrastructure. This contribution shows how travel time can be minimized and track capacity increased by changing some of the basic infrastructure and operation parameters, for example, the minimal curve radius of the track, the number of tracks, or the usable track length at stations. Calculation of the necessary parameter changes is based on the fundamental physical laws applied to train movement, while calculation of the occupation time depends on the changes in controlling the traffic between stations.
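A minimal sketch of the kind of physics-based calculation described (parameter values are illustrative assumptions; the contribution's actual model may differ): the curve radius bounds speed through the permitted lateral acceleration, v = √(a_lat · R), and segment travel time follows from a trapezoidal speed profile.

```python
import math

def curve_speed_limit(radius_m, lateral_acc=0.65):
    """Max speed (m/s) on a curve from permitted lateral acceleration:
    v = sqrt(a_lat * R) -- a common simplification."""
    return math.sqrt(lateral_acc * radius_m)

def segment_time(length_m, v_max, accel=0.5, decel=0.5):
    """Travel time for an accelerate -> cruise -> brake profile,
    starting and ending at rest."""
    d_acc = v_max**2 / (2 * accel)
    d_dec = v_max**2 / (2 * decel)
    if d_acc + d_dec <= length_m:              # trapezoid: v_max is reached
        cruise = (length_m - d_acc - d_dec) / v_max
        return v_max / accel + cruise + v_max / decel
    # triangle: segment too short, peak speed stays below v_max
    v_peak = math.sqrt(2 * accel * decel * length_m / (accel + decel))
    return v_peak / accel + v_peak / decel
```

Increasing the minimal curve radius raises v_max, which shortens segment_time and hence the occupation time of the line section.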
Abstract: In recent years, fire accidents have steadily increased, and the amount of property damage caused by them has gradually risen. By damaging the building structure, fire incidents bring about not only property damage but also strength degradation and member deformation, undermining the structural capacity of the building. Examining the degradation and the deformation is very important because reusing a building is more economical than reconstruction. Therefore, engineers need to investigate the strength degradation and member deformation thoroughly and make sure that the right rehabilitation methods are applied. This study aims at evaluating the deformation characteristics of fire-damaged and rehabilitated normal-strength concrete beams through both experiments and finite element analyses. For the experiments, control beams, fire-damaged beams and rehabilitated beams are tested to examine their deformation characteristics. Ten test beam specimens with a compressive strength of 21 MPa are fabricated, and the main test variables are cover thickness (40 mm or 50 mm) and fire exposure time (1 hour or 2 hours). After heating, the fire-damaged beams are air-cured for 2 months, and the rehabilitated beams are repaired with polymeric cement mortar after the fire-damaged concrete cover is removed. All beam specimens are tested under four-point loading. FE analyses are executed to investigate the effects of the main parameters of the experimental study. Test results show that both the maximum load and the stiffness of the rehabilitated beams are higher than those of the fire-damaged beams. In addition, the predicted structural behaviors from the analyses also show a good rehabilitation effect, and the predicted load-deflection curves are similar to the experimental results. Furthermore, the proposed analytical method can be used to predict the deformation characteristics of fire-damaged and rehabilitated concrete beams without the time and cost of the experimental process.
Abstract: Brownfields are one of the most important problems that today's cities must solve. The topic of this article is the comprehensive transformation of the post-industrial area of the former ironworks at Lower Vítkovice, a national cultural heritage site. The city of Ostrava used to be an industrial powerhouse of the Czechoslovak Republic, especially in coal mining and iron production; the decline of industrial production and mining in the 1980s left many unused areas of former factories, generally brownfields. Since the late 1990s, city officials and private entities have been seeking to remedy this situation. Regeneration of brownfields is a very expensive and long-term process. The area has now been rebuilt into an entertainment, cultural, and social center for tourists and residents of the city. It was necessary to reconstruct the industrial monuments, and equally important was the construction of new buildings, which enabled the reuse of the entire complex. This is a unique example of the transformation of technical monuments and the completion of the necessary new structures, so that the area could start working again and reintegrate into the urban system.
Abstract: In medical imaging, segmentation of different areas of the human body, such as bones, organs and tissues, is an important issue. Image segmentation allows isolating the object of interest for further processing, which can lead, for example, to 3D model reconstruction of whole organs. The difficulty of this procedure varies from trivial for bones to quite difficult for organs like the liver. The liver is considered one of the most difficult organs of the human body to segment, mainly because of its complexity, shape variability and proximity to other organs and tissues. Due to these facts, substantial user effort usually has to be applied to obtain satisfactory segmentation results, and the process then deteriorates from automatic or semi-automatic to a fairly manual one. In this paper, an overview of selected available software applications that can handle semi-automatic image segmentation with subsequent 3D volume reconstruction of the human liver is presented. The applications are evaluated based on the segmentation results of several consecutive DICOM images covering the abdominal area of the human body.
Abstract: This research proposes a novel reconstruction protocol for restoring missing surfaces and low-quality edges and shapes in photos of artifacts at historical sites. The protocol starts with the extraction of a cloud of points. This extraction process is based on four subordinate algorithms, which differ in their robustness, in the amount of resulting data, and in the different but complementary accuracy they bring to related features and to the way a quality mesh is built. The performance of our proposed protocol is compared with other state-of-the-art algorithms and toolkits. The statistical analysis shows that our algorithm significantly outperforms its rivals in the quality of the resulting object files used to reconstruct the desired model.
Abstract: An accurate study of blood flow is associated with an accurate vascular pattern and the geometrical properties of the organ of interest. Due to the complexity of vascular networks and poor accessibility in vivo, it is challenging to reconstruct the entire vasculature of any organ experimentally. The objective of this study is to introduce an innovative approach for the reconstruction of a full vascular tree from available morphometric data. Our method consists of implementing morphometric data on those parts of the vascular tree that are smaller than the resolution of medical imaging methods. This technique reconstructs the entire arterial tree down to the capillaries. Vessels greater than 2 mm are obtained from direct volume and surface analysis using contrast-enhanced computed tomography (CT). Vessels smaller than 2 mm are reconstructed from available morphometric and distensibility data and rearranged by applying Murray's law. Implementing morphometric data to reconstruct the branching pattern and applying Murray's law to every vessel bifurcation simultaneously leads to an accurate vascular tree reconstruction. The reconstruction algorithm generates the full arterial tree topography down to the first capillary bifurcation. The geometry of each order of the vascular tree is generated separately to minimize construction and simulation time. The node-to-node connectivity, along with the diameter and length of every vessel segment, is established, and order numbers according to the diameter-defined Strahler system are assigned. During the simulation, we used the averaged flow rate for each order to predict the pressure drop, and once the pressure drop is predicted, the flow rate is corrected to match the computed pressure drop for each vessel. The final results for 3 cardiac cycles are presented and compared to the clinical data.
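Murray's law constrains each bifurcation so that the cube of the parent radius equals the sum of the cubes of the daughter radii, r0³ = r1³ + r2³. A minimal sketch (the asymmetry-ratio parameterization of the bifurcation is an illustrative assumption, not necessarily the paper's):

```python
def murray_daughters(r_parent, ratio=1.0):
    """Daughter radii satisfying Murray's law r0^3 = r1^3 + r2^3,
    with r2 = ratio * r1 controlling bifurcation asymmetry."""
    r1 = r_parent / (1.0 + ratio**3) ** (1.0 / 3.0)
    return r1, ratio * r1

def check_murray(r0, r1, r2, tol=1e-9):
    """Verify that a bifurcation obeys Murray's law."""
    return abs(r0**3 - (r1**3 + r2**3)) < tol
```

Applying this rule at every bifurcation, from the smallest imaged vessels down to the first capillary bifurcation, is what keeps the sub-resolution portion of the reconstructed tree hemodynamically consistent.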
Abstract: Camera calibration is an important step in 3D reconstruction. Camera calibration methods may be classified into two major types, traditional calibration and self-calibration, while calibration using a checkerboard is intermediate between the two. In this paper, a self-calibration method based on a single square is proposed: with only a square in the planar template, camera self-calibration can be completed from a single view. In the proposed algorithm, a virtual circle and straight lines are established from the square on the planar template, and the circular points, the vanishing points of the straight lines, and the relations between them are used to obtain the image of the absolute conic (IAC) and thus the camera intrinsic parameters. The calibration template is simpler than that of Zhang Zhengyou's method. Real and simulated experiments show that the algorithm is feasible and has a certain precision and robustness.
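Once the IAC ω has been estimated from the circular points and vanishing points, the intrinsic matrix K follows from ω = (K Kᵀ)⁻¹ by a triangular factorization. The sketch below assumes ω is already known (estimating it is the paper's contribution and is not reproduced here):

```python
import numpy as np

def intrinsics_from_iac(omega):
    """Recover the upper-triangular K (with K[2,2] = 1) from the image
    of the absolute conic omega = (K K^T)^{-1}.
    Uses the exchange-matrix trick to get an upper-triangular Cholesky
    factor, since numpy's cholesky returns a lower-triangular one."""
    B = np.linalg.inv(omega)          # B = K K^T
    J = np.eye(3)[::-1]               # exchange (row-reversal) matrix, J J = I
    L = np.linalg.cholesky(J @ B @ J) # lower-triangular factor of J B J
    K = J @ L @ J                     # upper-triangular, K K^T = B
    return K / K[2, 2]
```

The round trip K → ω → K is exact up to floating-point error, which makes this step easy to unit-test independently of the geometric estimation of ω.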
Abstract: Given a large sparse signal, the goal is to reconstruct the signal precisely and accurately from as few measurements as possible. Although this seems possible in theory, the difficulty lies in building an algorithm that achieves both accuracy and efficiency in the reconstruction. This paper proposes a new, proven method to reconstruct sparse signals: it takes the Least Support Orthogonal Matching Pursuit (LS-OMP) method and merges it with the theory of Partially Known Support (PKS), giving a new method called Partially Known Least Support Orthogonal Matching Pursuit (PKLS-OMP).
The new method relies on a greedy algorithm to compute the support, whose cost depends on the number of iterations. To make it faster, PKLS-OMP adds the idea of partially known support to the algorithm. It recovers the original signal efficiently, simply, and accurately provided the sampling matrix satisfies the Restricted Isometry Property (RIP).
Simulation results also show that it outperforms many algorithms, especially for compressible signals.
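A hedged sketch of the underlying greedy recovery: plain OMP seeded with a partially known support, which is the simplified idea behind PKLS-OMP (the least-support selection rule of LS-OMP is not reproduced here).

```python
import numpy as np

def omp(A, y, k, known_support=()):
    """Orthogonal Matching Pursuit seeded with a partially known support.
    Recovers a k-sparse x from y = A x; seeding reduces the number of
    greedy iterations needed, which is the speed-up PKS provides."""
    S = list(known_support)
    if S:
        xs, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ xs
    else:
        r = y.copy()
    while len(S) < k:
        j = int(np.argmax(np.abs(A.T @ r)))   # most correlated column
        if j in S:
            break                             # residual carries no new info
        S.append(j)
        xs, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ xs                  # re-project: orthogonalize residual
    x = np.zeros(A.shape[1])
    xs, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    x[S] = xs
    return x
```

With a random Gaussian sampling matrix (which satisfies RIP with high probability), a 4-sparse signal is recovered exactly, with or without a seeded support.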
Abstract: The Beta-spline is built on G2 continuity, which guarantees the smoothness of curves and surfaces generated with it. This curve is usually preferred for object design rather than reconstruction. This study, however, employs the Beta-spline to reconstruct a 3-dimensional G2 image of the Stanford Rabbit. The original data consist of multi-slice binary images of the rabbit. The result is then compared with related works using other techniques.
Abstract: At the end of the 20th century, the development of transport corridors and the improvement of their technical parameters became topical. With this purpose, many countries, Georgia among them, undertook the construction of new highways and railways as well as the reconstruction and modernization of the existing transport infrastructure. It is necessary to survey the artificial structures (bridges and tunnels) on the existing tracks, as they are very old. This conference report covers the peculiarities of tunnel reconstruction, since this theme is important for the modernization of the existing road infrastructure. We must remark that the existing methods of determining mining pressure are worked out for the construction of new tunnels, whereas the additional mining pressure that forms during reconstruction must also be foreseen. This report gives methods for calculating the additional mining pressure arising during the reconstruction of tunnels; a computer program was developed, and it is determined that during reconstruction the additional mining pressure is one third of the main mining pressure.
Abstract: Since computed tomography (CT) normally requires hundreds of projections to reconstruct an image, patients are exposed to more X-ray energy, which may cause side effects such as cancer. Even when the variability of the particles in the object is very low, CT requires many projections for a good-quality reconstruction. In this paper, low variability of the particles in an object is exploited to obtain a good-quality reconstruction. Although the reconstructed image and the original image have the same projections, in general they need not be the same; if a priori information about the image is known in addition to the projections, a good-quality reconstructed image can be obtained. This paper shows by experimental results why conventional algorithms fail to reconstruct from a few projections, and gives an efficient polynomial-time algorithm that reconstructs a bi-level image from its row and column projections and a known sub-image of the unknown image, under smoothness constraints, by reducing the reconstruction problem to an integral max-flow problem. The paper also discusses necessary and sufficient conditions for uniqueness, and the extension of 2D bi-level image reconstruction to 3D bi-level image reconstruction.
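The paper reduces reconstruction to max-flow; a simpler illustration of the same feasibility question is the classical greedy Ryser construction of a binary matrix with prescribed row and column sums (a stand-in for intuition, not the paper's algorithm, and without its smoothness or sub-image constraints):

```python
def reconstruct_bilevel(row_sums, col_sums):
    """Build a 0/1 matrix with the given row and column projections
    using the greedy Ryser construction (feasibility per Gale-Ryser):
    fill rows in decreasing order of row sum, placing 1s in the columns
    with the largest remaining column sums. Returns None if infeasible."""
    n = len(col_sums)
    cols = list(col_sums)
    order = sorted(range(len(row_sums)), key=lambda i: -row_sums[i])
    rows_out = {}
    for i in order:
        picks = sorted(range(n), key=lambda j: -cols[j])[:row_sums[i]]
        if len(picks) < row_sums[i] or any(cols[j] <= 0 for j in picks):
            return None                      # projections inconsistent
        row = [0] * n
        for j in picks:
            row[j] = 1
            cols[j] -= 1
        rows_out[i] = row
    if any(c != 0 for c in cols):
        return None
    return [rows_out[i] for i in range(len(row_sums))]
```

As the abstract notes, such a solution is generally not unique; the max-flow formulation is what lets the paper add the extra constraints that restore uniqueness.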
Abstract: Typhoon Morakot hit Taiwan in 2009 and caused severe damage. The government employed a compulsory relocation strategy for post-disaster reconstruction. This study analyzes the impact of this strategy on community solidarity. It employs a multiple approach for data collection, including semi-structured interviews, secondary data, and documentation. The results indicate that the government's strategy for distributing housing has led to conflicts within the communities. In addition, the relocation process has stimulated tensions between victims of the disaster and those residents whose lands were chosen as new sites for relocation. The government's strategy of “collective relocation” also worsened community integration. Moreover, the fact that a permanent housing community may accommodate people from different places poses a challenge for the development of new interpersonal relations in the communities. This study concludes by emphasizing the importance of bringing social, economic and cultural aspects into consideration in post-disaster relocation.
Abstract: Imprecision is a long-standing problem in CAD design and high-accuracy image-based reconstruction applications. The visual hull, the closed silhouette-equivalent shape of the objects of interest, is an important concept in image-based reconstruction. We extend the domain-theoretic framework, a robust geometric model that captures imprecision, to analyze the imprecision in the output shape when the input vertices are given imprecisely. Under this framework, we show an efficient algorithm to generate the 2D partial visual hull, which represents the exact information of the visual hull under only basic imprecision assumptions. We also show how the visual-hull-from-polyhedra problem can be efficiently solved in the context of imprecise input.
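For intuition about the exact (non-imprecise) baseline: in 2D with orthographic cameras, a point belongs to the visual hull iff its projection falls inside the 1D silhouette interval of every view. A minimal sketch of that membership test (the domain-theoretic, imprecision-aware version is the paper's contribution and is not reproduced here):

```python
import numpy as np

def silhouette_interval(points, cam_dir):
    """1D silhouette of a 2D point set for an orthographic camera:
    the interval covered by projections onto the image axis,
    i.e. the axis perpendicular to the viewing direction."""
    axis = np.array([-cam_dir[1], cam_dir[0]])
    proj = points @ axis
    return proj.min(), proj.max()

def in_visual_hull(q, points, cam_dirs):
    """q is in the visual hull iff its projection lies inside the
    silhouette interval for every camera direction."""
    for d in cam_dirs:
        axis = np.array([-d[1], d[0]])
        lo, hi = silhouette_interval(points, d)
        if not (lo <= q @ axis <= hi):
            return False
    return True
```

Replacing each exact point with an imprecise region turns every interval endpoint into a pair of bounds, which is essentially the information the partial visual hull tracks.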
Abstract: Three-dimensional reconstruction of small objects has
been one of the most challenging problems over the last decade.
Computer graphics researchers and photography professionals have
been working on improving 3D reconstruction algorithms to fit the
high demands of various real life applications. Medical sciences,
animation industry, virtual reality, pattern recognition, tourism
industry, and reverse engineering are common fields where 3D
reconstruction of objects plays a vital role. Both lack of accuracy and
high computational cost are the major challenges facing successful
3D reconstruction. Fringe projection has emerged as a promising 3D reconstruction direction that combines low computational cost with both high precision and high resolution. It employs digital projection,
structured light systems and phase analysis on fringed pictures.
Research studies have shown that the system has acceptable
performance, and moreover it is insensitive to ambient light.
This paper presents an overview of fringe projection approaches. It
also presents an experimental study and implementation of a simple
fringe projection system. We tested our system using two objects
with different materials and levels of details. Experimental results
have shown that, while our system is simple, it produces acceptable
results.
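The phase-analysis step can be illustrated with the standard four-step phase-shifting formula (assuming π/2 shifts between the captured fringe images; the paper's exact analysis may differ):

```python
import numpy as np

def simulate_fringes(phi, A=0.5, B=0.4):
    """Four fringe images I_k = A + B*cos(phi + k*pi/2), k = 0..3."""
    return [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]

def wrapped_phase(i1, i2, i3, i4):
    """Wrapped phase from four pi/2-shifted fringe images:
    I4 - I2 = 2B*sin(phi), I1 - I3 = 2B*cos(phi)."""
    return np.arctan2(i4 - i2, i1 - i3)
```

The arctangent cancels both the ambient term A and the modulation B, which is one reason phase-shifting systems are reported to be insensitive to ambient light; the wrapped phase then needs unwrapping before it is converted to height.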
Abstract: For communication between humans and computers in interactive computing environments, gesture recognition has been studied vigorously, and many studies have proposed efficient recognition algorithms using images captured by 2D cameras. However, these methods have a limitation: the extracted features cannot fully represent the object in the real world. Although many studies have used 3D features instead of 2D features for more accurate gesture recognition, problems such as the processing time needed to generate 3D objects remain unsolved in related research. We therefore propose a method that extracts 3D features in combination with 3D object reconstruction. This method uses a modified GPU-based visual hull generation algorithm that disables unnecessary processes, such as texture calculation, to generate three kinds of 3D projection maps as the 3D feature: the nearest boundary, the farthest boundary, and the thickness of the object projected onto the base plane. In the experimental results section, we present the results of the proposed method on eight human postures: T shape, both hands up, right hand up, left hand up, hands front, stand, sit and bend, and compare the computational time of the proposed method with that of previous methods.
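The three projection maps can be sketched from a boolean voxel grid produced by the visual hull step. This is a hedged illustration: here thickness is taken as the occupied-voxel count per base-plane cell, which is an assumption, as the paper may instead define it as the distance between the two boundaries.

```python
import numpy as np

def projection_maps(voxels):
    """From a boolean voxel grid indexed (x, y, z), compute per
    base-plane cell: nearest boundary (min occupied z), farthest
    boundary (max occupied z), and thickness (occupied-voxel count).
    Empty cells get -1 for the boundary maps."""
    nz = voxels.shape[2]
    occupied = voxels.any(axis=2)
    nearest = np.where(occupied, np.argmax(voxels, axis=2), -1)
    # farthest: first occupied slice in the z-reversed grid, mapped back
    farthest = np.where(occupied,
                        nz - 1 - np.argmax(voxels[:, :, ::-1], axis=2),
                        -1)
    thickness = voxels.sum(axis=2)
    return nearest, farthest, thickness
```

All three maps are pure per-column reductions over z, which is why they parallelize well on a GPU once texture calculation is skipped.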