Abstract: This paper investigates civic representation in mid-century diplomatic buildings through the case of the U.S. Embassy in Karachi, Pakistan (1955-59), designed by the Austrian-American architect Richard Neutra (1892-1970) and the American architect Robert Alexander (1907-92). Texts, magazines, and oral histories of the period highlighted the need for a new postwar expression of American governmental architecture, leaning toward modernization, technology, and monumentality. Descriptive, structural, and historical analyses of the U.S. Embassy in Karachi reveal the emergence of a new prototypical solution for postwar diplomatic buildings: the combination of one main orthogonal block, seen as a modern-day corps de logis, and a flanking arcuated pavilion, often organized in one or two stories. Although the U.S. Embassy relied on highly industrialized techniques and abstract images of social progress, archival work in Neutra's archives at the University of California, Los Angeles, reveals that much of this project was adapted to vernacular elements and traditional forms, such as the intriguing use of reinforced concrete barrel vaults.
Abstract: A secret image sharing scheme is a way to protect images. The main idea is to disperse the secret image into numerous shadow images. A scheme that can withstand impersonation attacks and achieve the highly practical property of multi-use is more practical. Therefore, this paper proposes a verifiable and detectable secret image sharing scheme, called VDGMSISS, to resist impersonation attacks and to achieve properties such as encrypting multiple secret images at one time and multi-use. Moreover, our scheme can also be used for any general access structure.
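The core idea of dispersing a secret image into shadow images can be sketched with a minimal (n, n) XOR-based scheme. This is an illustrative simplification, not the VDGMSISS construction itself: any n-1 shadows reveal nothing about the secret, while XOR-ing all n shadows restores it exactly.

```python
import os

def share_secret(secret: bytes, n: int) -> list:
    """Disperse a secret byte string (e.g. raw image pixels) into n shadows."""
    shadows = [os.urandom(len(secret)) for _ in range(n - 1)]  # n-1 random shadows
    last = bytearray(secret)
    for s in shadows:                      # last shadow = secret XOR all others
        for i, b in enumerate(s):
            last[i] ^= b
    shadows.append(bytes(last))
    return shadows

def recover(shadows: list) -> bytes:
    """XOR all n shadows together to restore the secret."""
    out = bytearray(len(shadows[0]))
    for s in shadows:
        for i, b in enumerate(s):
            out[i] ^= b
    return bytes(out)
```

Real schemes such as VDGMSISS add verifiability and cheater detection on top of the sharing step; this sketch shows only the dispersal/recovery mechanics.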
Abstract: Since vision systems are in intense demand for autonomous applications in industrial environments, image recognition has become an important research topic. Here, a deep learning algorithm is employed in an imaging system to recognize industrial objects, and the system is integrated with a 7A6 Series manipulator for automatic object-gripping tasks. A PC and a Graphics Processing Unit (GPU) are chosen to construct the 3D vision recognition system. A depth camera (Intel RealSense SR300) is employed to capture images for object recognition and coordinate derivation. The YOLOv2 scheme is adopted within a convolutional neural network (CNN) structure for object classification and center-point prediction. Additionally, an image processing strategy is used to find the object contour for calculating the object's orientation angle. The specified object location and orientation information are then sent to the robotic controller. Finally, a six-axis manipulator can grasp the specified object in a random environment based on the user command and the extracted image information. The experimental results show that YOLOv2 successfully detects object location and category with confidence near 0.9 and 3D position error below 0.4 mm. This is useful for future intelligent robotic applications in an Industry 4.0 environment.
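The coordinate-derivation step (detected pixel center plus depth reading to a 3-D point the robot controller can use) is commonly done with a pinhole back-projection. A minimal sketch, assuming hypothetical camera intrinsics fx, fy, cx, cy (not the SR300's actual calibration values):

```python
import numpy as np

def pixel_to_camera(u, v, z, fx=615.0, fy=615.0, cx=320.0, cy=240.0):
    """Back-project a pixel (u, v) with depth z (meters) into camera
    coordinates using a pinhole model; intrinsics are illustrative."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```

A point imaged at the principal point (cx, cy) back-projects onto the optical axis; in practice the resulting camera-frame point would still be transformed into the manipulator's base frame with a hand-eye calibration.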
Abstract: Lung CT image segmentation is a prerequisite for lung CT image analysis. Most conventional methods need post-processing to deal with abnormal lung CT scans containing lung nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm directly compares the pixel values of two neighboring regions, which is inaccurate because such metrics are extremely sensitive to minor perturbations such as noise or other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric. The boundary penalty term in the graph cut algorithm is defined using patch-based similarity measurement instead of the simple intensity measurement of the standard method. The weights between each pixel and its neighboring pixels are derived from this new term, and the graph is then created using these weights between its nodes. Finally, the segmentation is completed with the minimum-cut/max-flow algorithm. Experimental results show that the proposed method is accurate and efficient, and can directly provide explicit lung regions without any post-processing operations, in contrast to the standard method.
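The patch-based boundary term can be sketched as follows: instead of comparing the two pixel intensities, the weight between neighboring pixels p and q uses the squared distance between their surrounding k-by-k patches. The Gaussian form and the parameters here are an assumed, typical choice, not necessarily the paper's exact formulation.

```python
import numpy as np

def patch(img, r, c, k=3):
    """Extract the k-by-k patch centered on (r, c); no border handling."""
    h = k // 2
    return img[r - h:r + h + 1, c - h:c + h + 1]

def boundary_weight(img, p, q, k=3, sigma=10.0):
    """Boundary penalty between neighbors p and q from patch distance:
    high weight (strong link) for similar patches, low across edges."""
    d2 = np.sum((patch(img, *p, k) - patch(img, *q, k)) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2))
```

Because the patch distance averages over a neighborhood, a single noisy pixel barely changes the weight, which is exactly the robustness argued for above; the resulting weights would then populate the n-links of the graph before running min-cut/max-flow.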
Abstract: The human middle ear (ME) is a delicate and vital organ. It has a complex structure that performs various functions, such as receiving sound pressure, producing vibrations of the eardrum, and propagating them to the inner ear. It consists of the tympanic membrane (TM), three auditory ossicles, and various ligament structures and muscles. Incidents such as trauma, infections, ossification of ossicular structures, and other pathologies may damage the ME organs. These conditions can be surgically treated by employing a prosthesis. However, the suitability of the prosthesis needs to be examined prior to surgery. A few decades ago, this issue was addressed by developing an equivalent representation, either as a spring-mass system, as an electrical R-L-C circuit, or as an approximated CAD model. Nowadays, however, a three-dimensional ME model can be constructed from micro X-ray computed tomography (μCT) scan data, and patient-specific concerns pertaining to the disease can be examined well in advance. The current research work develops the ME model from stacks of μCT images used as input to MIMICS Research 19.0 (Materialise Interactive Medical Image Control System) software. The stack of CT images is converted into a geometrical surface model to build an accurate morphology of the ME. The work is further extended to study the harmonic response of the stapes footplate and umbo for different sound pressure levels applied at the lateral side of the eardrum, using a finite element approach. The pathological condition of cholesteatoma of the ME is investigated to obtain the peak-to-peak displacement of the stapes footplate and umbo. Apart from this condition, other pathologies, mainly changes in the stiffness of the stapedial ligament, TM thickness, and ossicular chain separation and fixation, are also explored. The developed model of the ME with pathologies is validated by comparing the results with those available in the literature, and also against the results of a normal ME, to calculate the percentage loss in hearing capability.
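The notion of a harmonic response underlying both the early spring-mass representations and the finite element analysis above can be illustrated with a single-degree-of-freedom mass-spring-damper analogue. The parameter values below are purely illustrative, not the middle ear's actual mechanical properties.

```python
import numpy as np

def harmonic_amplitude(omega, m=1e-6, k=1e3, c=1e-3, F0=1.0):
    """Steady-state displacement amplitude of m*x'' + c*x' + k*x = F0*sin(wt):
    X(w) = F0 / sqrt((k - m*w^2)^2 + (c*w)^2)."""
    return F0 / np.sqrt((k - m * omega**2) ** 2 + (c * omega) ** 2)
```

The amplitude peaks near the natural frequency ω_n = sqrt(k/m); in the full FE model each node (e.g. the stapes footplate or umbo) has such a frequency response, computed across many coupled degrees of freedom rather than one.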
Abstract: One of the key problems in the analysis of computed tomography (CT) images is their poor contrast. Image enhancement can be used to improve the visual clarity and quality of the images or to provide a better transformed representation for further processing. Contrast enhancement is one of the accepted methods of image enhancement in various medical applications, and helps to visualize and extract details of brain infarctions, tumors, and cancers from CT images. This paper presents a comparative study of five contrast enhancement techniques suitable for CT images: power-law transformation, logarithmic transformation, histogram equalization, contrast stretching, and Laplacian transformation. These techniques are compared with each other to determine which provides the best contrast for CT images, using Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE) as the comparison parameters. Logarithmic transformation provided the clearest and best-quality image of all the techniques studied and achieved the highest PSNR. The comparison points to a better approach for future research, especially for mapping abnormalities in CT images resulting from brain injuries.
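Two of the compared transformations and the evaluation metrics have standard closed forms; a minimal sketch on an image normalized to [0, 1], with illustrative constants c and gamma:

```python
import numpy as np

def power_law(img, gamma=0.5, c=1.0):
    """Power-law (gamma) transformation: s = c * r^gamma."""
    return c * img ** gamma

def log_transform(img, c=1.0):
    """Logarithmic transformation s = c * log(1 + r), rescaled so 1 -> 1."""
    return c * np.log1p(img) / np.log(2.0)

def mse_psnr(orig, enhanced):
    """MSE and PSNR (dB) between two [0, 1] images."""
    mse = np.mean((orig - enhanced) ** 2)
    psnr = 10 * np.log10(1.0 / mse) if mse > 0 else float("inf")
    return mse, psnr
```

With gamma < 1 the power law brightens dark regions; the log transform compresses the dynamic range even more aggressively, which is consistent with its usefulness on low-contrast CT data.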
Abstract: Image denoising plays an extremely important role in digital image processing, and curvelet-based enhancement of clinical images has developed rapidly in recent years. In this paper, we present a contrast enhancement method for cone-beam CT (CBCT) images based on the fast discrete curvelet transform (FDCT) implemented via the Unequally Spaced Fast Fourier Transform (USFFT). This transform returns a table of curvelet coefficients indexed by a scale parameter, an orientation, and a spatial location. Accordingly, the coefficients obtained from FDCT-USFFT can be modified to enhance contrast in an image. Our proposed method first applies this two-dimensional transform to the input image and then applies thresholding to the curvelet coefficients to enhance the CBCT images. Using the unequally spaced fast Fourier transform leads to an accurate reconstruction of the image at high resolution. The experimental results indicate that the performance of the proposed method is superior to existing methods in terms of Peak Signal-to-Noise Ratio (PSNR) and Effective Measure of Enhancement (EME).
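The coefficient-modification step can be sketched generically. The function below implements soft thresholding, a common way to modify transform coefficients; it is shown on arbitrary coefficient arrays and does not reproduce the FDCT-USFFT itself, nor necessarily the exact thresholding rule of the paper.

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Shrink transform coefficients toward zero by t; coefficients with
    magnitude below t are zeroed, the rest keep their sign."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```

In a curvelet pipeline this would be applied per scale/orientation subband before inverting the transform to obtain the enhanced image.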
Abstract: Through the Fukaya conjecture and wrapped Floer cohomology, correspondences are demonstrated between paths in a loop space and states of a wrapping space of states in a Hamiltonian space (the ramification of the field in this case is the connection to the operator that goes from TM to T*M), where these latter states correspond to bosonic extensions of a spectrum of space-time, or the direct image of the functor Spec on space-time. This establishes a distinguished diffeomorphism defined by the mapping from the corresponding loop space to the wrapping category of the Floer cohomology complex, which furthermore relates, in a certain proportion, D-branes (certain D-modules) with strings. This also gives rise to a conjecture establishing equivalences between moduli spaces, which can be consigned in a moduli identity taking as space-time the Hitchin moduli space on G, whose dual can be expressed by a factor of a bosonic moduli space.
Abstract: The main principles of the X-ray Fourier interferometric holography method are discussed. The object image is reconstructed by the mathematical method of Fourier transformation. Three methods are presented: an approximation method, an iteration method, and a step-by-step method. As an example, the reconstruction of the complex amplitude transmission coefficient of a beryllium wire is considered. The results reconstructed by the three presented methods are compared; the best results are obtained with the step-by-step method.
Abstract: Quantitative measurement of myocardial perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of 99mTc-tetrofosmin in the liver may make it difficult to assess perfusion accurately in the inferior myocardium. Our idea is to reduce the high accumulation in the liver by using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semiconductor detector (Discovery NM 530c; GE Healthcare). Our system acquired list-mode raw data over 10 minutes for a typical patient. From these data, ten SPECT images were reconstructed, one for every minute of acquisition. Reconstruction with the semiconductor detector was based on an implementation of a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females). In each subject, 259 MBq of 99mTc-tetrofosmin was injected intravenously. We performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained by reconstructing an image from the early projections, during which the liver accumulation dominates (0.5~2.5-minute SPECT image minus 5~10-minute SPECT image). The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (5~10-minute SPECT image minus liver-only image). The time subtraction of the liver was possible in both the phantom and the clinical study, and visualization of the inferior myocardium was improved. In past reports, the myocardium was un-diagnosable where liver accumulation overlapped it. Using our time subtraction method, the image quality of the 99mTc-tetrofosmin myocardial SPECT image is considerably improved.
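The time-subtraction logic described above reduces to two image subtractions. A minimal sketch with arrays standing in for the reconstructed SPECT frames (the clipping to non-negative counts is an assumed implementation detail):

```python
import numpy as np

def time_subtract(early, late):
    """early: 0.5~2.5 min frame (liver dominates); late: 5~10 min frame
    (liver + myocardium). Returns the liver-only estimate and the
    liver-suppressed myocardial image."""
    liver_only = np.clip(early - late, 0, None)       # liver-only approximation
    corrected = np.clip(late - liver_only, 0, None)   # myocardium, liver removed
    return liver_only, corrected
```

In the toy example below, the first element models a liver voxel (high early uptake that washes out) and the second a myocardial voxel (uptake that grows over time); the corrected image retains only the myocardial signal.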
Abstract: A conventional optical coherence tomography (OCT) system has a limited imaging depth of 1-2 mm and suffers from unwanted noise such as speckle noise. A motorized-stage-based OCT system using a common-path Fourier-domain optical coherence tomography (CP-FD-OCT) configuration provides enhanced imaging depth and less noise, so these limitations can be overcome. Using this OCT system, OCT images were obtained from an onion, and its subsurface structure was observed. The images obtained using the developed motorized-stage-based system showed greater imaging depth than the conventional system, owing to real-time accurate depth tracking. Consequently, the developed CP-FD-OCT system and algorithms have good potential for the further development of endoscopic OCT for microsurgery.
Abstract: The purpose of this study is to develop a finite element model based on 3D bone structural images from micro-CT and to analyze the stress distribution in osteoporotic mouse femora. The results of the finite element analysis show that early osteoporosis in the mouse model decreased bone density in the trabecular region; however, bone density in the cortical region increased.
Abstract: In this paper, an approach for liver tumor detection in computed tomography (CT) images is presented. The detection process is based on classifying the features of target liver cells as either tumor or non-tumor. Fractional differential (FD) is applied to enhance the liver CT images, with the aim of enhancing texture and edge features. A fusion method is then applied to merge the various enhanced images and produce a variety of feature improvements, which increases the accuracy of classification. Each image is divided into N×N non-overlapping blocks in order to extract the desired features. A support vector machine (SVM) classifier is then trained on a supplied dataset different from the tested one. Finally, each block cell is identified as tumor or not. Our approach is validated on a group of patients' CT liver tumor datasets. The experimental results demonstrate the detection efficiency of the proposed technique.
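The block-division step above (splitting each image into N×N non-overlapping blocks before feature extraction) can be sketched with a pure reshape, assuming the image dimensions are divisible by N:

```python
import numpy as np

def to_blocks(img, n):
    """Divide a 2-D image into non-overlapping n-by-n blocks.
    Returns an array of shape (num_blocks, n, n)."""
    h, w = img.shape
    return (img.reshape(h // n, n, w // n, n)
               .swapaxes(1, 2)
               .reshape(-1, n, n))
```

Each returned block would then be fed to the feature extractor and the SVM; the reshape/swapaxes trick avoids any explicit Python loop over blocks.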
Abstract: Liver segmentation from medical images poses more challenges than analogous segmentations of other organs. This contribution introduces a liver segmentation method for a series of computed tomography images. Overall, we present a novel method for segmenting the liver by coupling density matching with shape priors. Density matching denotes a tracking method that operates by maximizing the Bhattacharyya similarity measure between the photometric distribution of an estimated image region and a model photometric distribution. Density matching controls the direction of the evolution process and slows down the evolving contour in regions with weak edges. The shape prior improves the robustness of density matching and discourages the evolving contour from exceeding the liver's boundaries in regions with weak boundaries. The model is implemented using a modified distance regularized level set (DRLS) model. The experimental results show that the method achieves satisfactory results. Compared with the original DRLS model, the proposed model is clearly more effective at addressing the over-segmentation problem. Finally, we gauge the performance of our model against metrics comprising accuracy, sensitivity, and specificity.
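The Bhattacharyya similarity measure at the heart of the density-matching step has a simple closed form for discrete photometric distributions (histograms): BC(p, q) = Σ_i sqrt(p_i * q_i), which is 1 for identical distributions and 0 for disjoint ones. A minimal sketch:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two histograms (normalized
    internally); 1.0 = identical distributions, 0.0 = no overlap."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))
```

During contour evolution, p would be the intensity histogram of the current estimated region and q the model liver histogram; the contour moves so as to maximize this coefficient.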
Abstract: One of the most challenging factors in medical images is noise. Image denoising refers to the restoration of a digital medical image that has been corrupted by Additive White Gaussian Noise (AWGN). A digital medical image or video can be affected by different types of noise: impulse noise, Poisson noise, and AWGN. Computed tomography (CT) images are subject to low quality due to noise. The quality of CT images depends directly on the dose absorbed by the patient, in the sense that increasing the absorbed radiation, and consequently the absorbed dose to the patient (ADP), enhances CT image quality. Accordingly, noise reduction techniques that enhance image quality without exposing patients to excess radiation are one of the challenging problems in CT image processing. In this work, noise reduction in CT images was performed using two different directional two-dimensional (2D) transformations, Curvelet and Contourlet, together with the Discrete Wavelet Transform (DWT) thresholding methods BayesShrink and AdaptShrink, compared with each other. We also propose a new threshold in the wavelet domain for noise reduction as well as edge retention; the proposed method retains the significant modified coefficients, which results in good visual quality. Evaluations were carried out using two criteria, namely peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
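The BayesShrink baseline mentioned above chooses a per-subband threshold T = σ_n²/σ_x, where the noise standard deviation σ_n is commonly estimated from the diagonal detail subband as median(|HH|)/0.6745 and σ_x is the estimated signal standard deviation. A sketch under those standard assumptions (not the paper's new threshold):

```python
import numpy as np

def bayes_shrink_threshold(subband, hh):
    """BayesShrink threshold for one wavelet detail subband.
    hh: finest diagonal detail subband, used for the robust noise estimate."""
    sigma_n = np.median(np.abs(hh)) / 0.6745          # noise std (robust MAD)
    sigma_y2 = np.mean(subband ** 2)                  # observed subband power
    sigma_x = np.sqrt(max(sigma_y2 - sigma_n ** 2, 0.0))  # signal std estimate
    if sigma_x == 0:
        return float(np.max(np.abs(subband)))         # noise-only: kill subband
    return sigma_n ** 2 / sigma_x
```

Subbands dominated by noise get a large threshold (heavy shrinkage), while subbands with strong signal energy get a small one, which is why BayesShrink preserves edges better than a single global threshold.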
Abstract: The aim of this work is to build a model based on tissue characterization that is able to discriminate pathological and non-pathological regions in three-phasic CT images. Based on feature selection in the different phases, we design a neural network system with an optimal number of neurons in the hidden layer. Our approach consists of three steps: feature selection, feature reduction, and classification. For each region of interest (ROI), six distinct sets of texture features are extracted: first-order histogram parameters, absolute gradient, run-length matrix, co-occurrence matrix, autoregressive model, and wavelet, for a total of 270 texture features. When analyzing multiple phases, we show that the injected liquid causes changes in the most relevant features in each region. Our results demonstrate that phase 3 is the best for detecting HCC tumors for most of the features we supply to the classification algorithm. The detection rate between pathological and healthy classes achieved by our method, using first-order histogram parameters, is 85% in phase 1, 95% in phase 2, and 95% in phase 3.
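The first-order histogram parameters named above (the feature set that achieved the reported accuracies) are moment statistics of the ROI intensities; the exact parameter list used in the paper is not specified here, so the four classic moments are shown as an illustrative subset:

```python
import numpy as np

def first_order_features(roi):
    """Mean, variance, skewness, and kurtosis of the ROI intensity
    distribution (first-order statistics: no spatial relationships)."""
    x = np.asarray(roi, float).ravel()
    mu = x.mean()
    var = x.var()
    sd = np.sqrt(var) if var > 0 else 1.0
    skew = np.mean(((x - mu) / sd) ** 3)
    kurt = np.mean(((x - mu) / sd) ** 4)
    return mu, var, skew, kurt
```

Unlike the co-occurrence or run-length features, these depend only on the histogram of intensities, which makes them cheap to compute per ROI and per contrast phase.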
Abstract: The aim of this study is to develop an anterior lumbar interbody fusion (ALIF) PEEK cage suitable for Korean people. In this study, CT images were obtained from a Korean male (173 cm, 71 kg), and 3D Korean lumbar models were reconstructed from the CT images to investigate anatomical characteristics. The major design parameters of the ALIF PEEK cage were selected using morphological measurements of the Korean lumbar models. Through finite element analysis and mechanical tests, the developed ALIF PEEK cage prototype was compared with the Fidji cage (Zimmer, Inc., USA), and the ALIF prototype was found to show similar or superior mechanical performance. Clinical validation of the ALIF PEEK cage prototype was also carried out to check for foreseeable problems in surgical operations. Finally, the convenience and stability of the prototype are considered to be clinically verified.
Abstract: In this paper, we present a robust algorithm to recognize text extracted from grocery product images captured by mobile phone cameras. Recognition of such text is challenging since text in grocery product images varies in size, orientation, style, and illumination, and can suffer from perspective distortion. Pre-processing is performed to make the characters scale- and rotation-invariant. Since text degradations cannot be appropriately described using well-known geometric transformations such as translation, rotation, affine transformation, and shearing, we use the whole character's black pixels as our feature vector. Classification is performed with a minimum distance classifier using the maximum likelihood criterion, which delivers a very promising Character Recognition Rate (CRR) of 89%. We achieve a considerably higher Word Recognition Rate (WRR) of 99% when using lower-level linguistic knowledge about product words during the recognition process.
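The minimum distance classifier over whole-character pixel feature vectors can be sketched as follows. The squared Euclidean distance and the tiny templates here are illustrative; the paper's actual templates and maximum-likelihood weighting are not reproduced.

```python
import numpy as np

def classify(char_pixels, templates, labels):
    """Assign the label of the template nearest (in squared Euclidean
    distance) to the flattened binary character image."""
    x = np.asarray(char_pixels, float).ravel()
    dists = [np.sum((x - np.asarray(t, float).ravel()) ** 2) for t in templates]
    return labels[int(np.argmin(dists))]
```

Because the feature vector is the raw pixel grid, the pre-processing step that normalizes scale and rotation is what makes this distance comparison meaningful across differently sized and oriented characters.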
Abstract: The use of anatomical landmarks as a basis for image-to-patient registration is appealing because the registration may be performed retrospectively. We have previously proposed the use of two anatomical soft-tissue landmarks of the head, the canthus (corner of the eye) and the tragus (a small, pointed, cartilaginous flap of the ear), as a registration basis for an automated CT image-to-patient registration system, and described their localization in patient space using close-range photogrammetry. In this paper, the automatic localization of these landmarks in CT images is described, based on their curvature saliency and using a rule-based system that incorporates prior knowledge of their characteristics. Existing approaches to landmark localization in CT images are predominantly semi-automatic and primarily localize internal landmarks. To validate our approach, the positions of the landmarks localized automatically and manually in near-isotropic CT images of 102 patients were compared. The average difference was 1.2 mm (std = 0.9 mm, max = 4.5 mm) for the medial canthus and 0.8 mm (std = 0.6 mm, max = 2.6 mm) for the tragus. The medial canthus and tragus can thus be automatically localized in CT images, with performance comparable to manual localization, using the approach presented.
Abstract: In this paper, the problem of edge detection in digital images is considered. Edge detection based on morphological operators was applied to two sets of CT images (brain and chest). Three methods of edge detection were used, applying line morphological filters with multiple structuring elements in different directions: a 3x3 filter for the first method, a 5x5 filter for the second, and a 7x7 filter for the third. We applied this algorithm to 13 images in the MATLAB environment. To evaluate the performance of these edge detection algorithms, standard deviation (SD) and peak signal-to-noise ratio (PSNR) were computed for all the different CT images. The objective evaluation and the comparison of the different edge detection methods show that high values of both standard deviation and PSNR were obtained for the edge-detected images.
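Morphological edge detection of the kind applied above can be sketched with the morphological gradient, edges = dilation(img) − erosion(img). A flat 3x3 structuring element is used here for simplicity; the paper's directional line-shaped multi-structure filters (and its 5x5 and 7x7 variants) are not reproduced.

```python
import numpy as np

def morph_gradient(img):
    """Morphological gradient with a flat 3x3 structuring element:
    per-pixel local max (dilation) minus local min (erosion)."""
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")   # replicate borders
    dil = np.zeros_like(img)
    ero = np.zeros_like(img)
    for r in range(h):
        for c in range(w):
            win = pad[r:r + 3, c:c + 3]
            dil[r, c] = win.max()
            ero[r, c] = win.min()
    return dil - ero
```

On a step edge the gradient is nonzero only in the two columns adjacent to the transition and zero in flat regions, which is the behavior the SD/PSNR evaluation rewards.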