Abstract: This paper describes a novel approach for deriving
modules from protein-protein interaction networks, which combines
functional information with topological properties of the network.
This approach is based on a weighted clustering coefficient, which
uses weights representing the functional similarities between the
proteins. These weights are calculated from the semantic
similarity between the proteins, based on their Gene
Ontology terms. We recently proposed an algorithm for the identification
of functional modules, called SWEMODE (Semantic WEights for
MODule Elucidation), that identifies dense sub-graphs containing
functionally similar proteins. The rationale underlying this approach is
that each module can be reduced to a set of triangles (protein triplets
connected to each other). Here, we propose considering semantic
similarity weights of all triangle-forming edges between proteins. We
also apply varying semantic similarity thresholds between
neighbours of each node that are not neighbours to each other (and
thereby do not form a triangle), to derive new potential triangles to
include in the module-defining procedure. The results show an
improvement over the purely topological approach, in terms of the number of
predicted modules that match known complexes.
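The triangle-based rationale can be made concrete with a weighted clustering coefficient. SWEMODE's exact formula is not reproduced here; as an illustrative stand-in, the sketch below uses the well-known Barrat et al. variant, with edge weights playing the role of the semantic similarity scores:

```python
def weighted_clustering(adj, v):
    """Barrat-style weighted clustering coefficient of node v.

    adj: dict mapping each node to a dict {neighbour: edge weight},
    where the weight stands in for the semantic similarity of the pair.
    """
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    s = sum(nbrs.values())                    # strength of v
    total = 0.0
    for j in nbrs:
        for h in nbrs:
            # count only neighbour pairs that close a triangle with v
            if j != h and h in adj[j]:
                total += (nbrs[j] + nbrs[h]) / 2.0
    return total / (s * (k - 1))
```

For a fully connected unit-weight triplet the coefficient is 1; for a node whose neighbours are mutually disconnected it is 0.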
Abstract: A prime cordial labeling of a graph G with vertex set V is a bijection f from V to {1, 2, ..., |V|} such that, when each edge uv is assigned the label 1 if gcd(f(u), f(v)) = 1 and 0 if gcd(f(u), f(v)) > 1, the number of edges labeled 0 and the number of edges labeled 1 differ by at most 1. In this paper we exhibit some characterization results and new constructions on prime cordial graphs.
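The definition above translates directly into a checker. The sketch below verifies whether a given labeling of a small graph is prime cordial (the graph and labeling in the usage note are illustrative):

```python
from math import gcd

def is_prime_cordial(vertices, edges, f):
    """Check whether f (a bijection vertex -> 1..|V|) is a prime
    cordial labeling: edge uv gets label 1 iff gcd(f(u), f(v)) = 1,
    and the counts of 0- and 1-labeled edges differ by at most 1."""
    assert sorted(f[v] for v in vertices) == list(range(1, len(vertices) + 1))
    ones = sum(1 for u, v in edges if gcd(f[u], f[v]) == 1)
    zeros = len(edges) - ones
    return abs(ones - zeros) <= 1
```

For the path a-b-c-d, the labeling f(a)=2, f(b)=4, f(c)=1, f(d)=3 yields edge labels 0, 1, 1, so the difference is 1 and the labeling is prime cordial.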
Abstract: In this paper, we present a novel approach to accurately
detect text regions, including shop names, in signboard images with
complex backgrounds for mobile system applications. The proposed
method is based on the combination of text detection using edge
profiles and region segmentation using the fuzzy c-means method. In the
first step, we apply an elaborate Canny edge operator to extract all
possible object edges. Then, edge profile analysis in the vertical and
horizontal directions is performed on these edge pixels to detect
potential text regions containing the shop name in a signboard. The edge
profile and geometrical characteristics of each object contour are
carefully examined to construct candidate text regions and separate the
main text region from the background. Finally, the fuzzy c-means
algorithm is applied to segment and binarize the detected text region.
Experimental results show that our proposed method is robust in text
detection with respect to different character sizes and colors and can
provide reliable text binarization results.
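The segmentation step relies on fuzzy c-means. As a minimal, hypothetical illustration of its alternating update rules (the paper applies FCM to image pixels; here the data are plain 1-D values, two clusters are used, and the centres start at the data extremes):

```python
def fuzzy_c_means_1d(data, m=2.0, iters=30):
    """Minimal 1-D fuzzy c-means with two clusters (illustrative only).

    Memberships u[k][i] and cluster centres are updated alternately:
    u[k][i] = 1 / sum_j (d_i / d_j)^(2/(m-1)), centres are the
    membership-weighted means of the data."""
    eps = 1e-12
    centres = [float(min(data)), float(max(data))]
    u = [[0.0, 0.0] for _ in data]
    for _ in range(iters):
        # membership update
        for k, x in enumerate(data):
            d = [abs(x - c) + eps for c in centres]
            for i in range(2):
                u[k][i] = 1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                    for j in range(2))
        # centre update
        for i in range(2):
            num = sum((u[k][i] ** m) * x for k, x in enumerate(data))
            den = sum(u[k][i] ** m for k in range(len(data)))
            centres[i] = num / den
    return centres, u
```

On two well-separated groups of values the centres converge near the group means, and the memberships give the soft assignment used for binarization.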
Abstract: Color image segmentation plays an important role in
computer vision and image processing areas. In this paper, the
features of Volterra filter are utilized for color image segmentation.
The discrete Volterra filter exhibits both linear and nonlinear
characteristics. The linear part smoothes the image features in
uniform gray zones and is used for getting a gross representation of
objects of interest. The nonlinear term compensates for the blurring
due to the linear term and preserves the edges which are mainly used
to distinguish the various objects. The truncated quadratic Volterra
filters are mainly used for edge preserving along with Gaussian noise
cancellation. In our approach, the segmentation is based on the K-means
clustering algorithm in HSI color space. Both the hue and the intensity
components are fully utilized. For hue clustering, the special cyclic
property of the hue component is taken into consideration. The
experimental results show that the proposed technique segments the
color image while preserving significant features and removing noise
effects.
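The special cyclic property of the hue component means that 350° and 10° are close, which plain Euclidean distance misses. Two helpers that a cyclic-hue K-means would need (illustrative code, not the paper's implementation):

```python
import math

def hue_distance(h1, h2):
    """Cyclic distance between two hue angles in degrees."""
    d = abs(h1 - h2) % 360.0
    return min(d, 360.0 - d)

def circular_mean(hues):
    """Mean hue that respects the wrap-around at 0/360 degrees,
    computed by averaging the hues as unit vectors."""
    x = sum(math.cos(math.radians(h)) for h in hues)
    y = sum(math.sin(math.radians(h)) for h in hues)
    return math.degrees(math.atan2(y, x)) % 360.0
```

For example, hue_distance(350, 10) is 20, and the circular mean of 350° and 10° is 0°, whereas an arithmetic mean would wrongly give 180°.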
Abstract: This paper presents a threshold voltage model of pocket implanted sub-100 nm n-MOSFETs incorporating the drain and substrate bias effects using two linear pocket profiles. Two linear equations are used to simulate the pocket profiles along the channel at the surface from the source and drain edges towards the center of the n-MOSFET. Then the effective doping concentration is derived and is used in the threshold voltage equation that is obtained by solving Poisson's equation in the depletion region at the surface. Simulated threshold voltages for various gate lengths fit well with the experimental data already published in the literature. The simulated result is compared with two other pocket profiles used to derive the threshold voltage models of n-MOSFETs. The comparison shows that the linear model has a simple compact form that can be utilized to study and characterize the pocket implanted advanced ULSI devices.
Abstract: Computer modeling has played a unique role in
understanding electrocardiography. Modeling and simulating cardiac
action potential propagation is suitable for studying normal and
pathological cardiac activation. This paper presents a 2-D Cellular
Automata model for simulating action potential propagation in
cardiac tissue. We demonstrate a novel algorithm that uses a
minimal number of neighbors. This algorithm uses the summation of the
excitability attributes of excited neighboring cells. We try to
eliminate flat edges in the resulting patterns by introducing probability into
the model. We also preserve the real shape of the action potential by
using linear curve fitting of a well-known electrophysiological
model.
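The neighbor-summation rule can be sketched as a toy excitable-medium automaton (the states, attributes and update rule below are simplified assumptions, not the paper's full model):

```python
def ca_step(state, excit, threshold=1.0):
    """One update of a toy excitable-tissue cellular automaton.

    state[i][j]: 0 resting, 1 excited, 2 refractory.
    excit[i][j]: excitability attribute contributed when cell (i,j) fires.
    A resting cell becomes excited when the summed excitability of its
    excited von Neumann neighbours reaches `threshold`."""
    n, m = len(state), len(state[0])
    new = [row[:] for row in state]
    for i in range(n):
        for j in range(m):
            if state[i][j] == 1:
                new[i][j] = 2                      # excited -> refractory
            elif state[i][j] == 2:
                new[i][j] = 0                      # refractory -> resting
            else:
                s = sum(excit[i + di][j + dj]
                        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= i + di < n and 0 <= j + dj < m
                        and state[i + di][j + dj] == 1)
                if s >= threshold:
                    new[i][j] = 1
    return new
```

Starting from a single excited cell, the excitation spreads to the four neighbours while the source becomes refractory; making `excit` or `threshold` stochastic is one way to break up the flat wavefront edges mentioned above.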
Abstract: In this paper, a robust watermarking algorithm using
the wavelet transform and edge detection is presented. The efficiency
of an image watermarking technique depends on the preservation of
visually significant information. This is attained by embedding the
watermark transparently with the maximum possible strength. The
watermark embedding process is carried out on the subband
coefficients that lie on edges, where distortions are less noticeable,
with a subband-level-dependent strength. The watermark is also
embedded in selected coefficients around edges, captured by a
morphological dilation operation, using a different scale
factor for the watermark strength. The experimental evaluation of the
proposed method shows very good results in terms of transparency and
robustness to various attacks such as median filtering, Gaussian
noise, JPEG compression and geometrical transformations.
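The two embedding strengths can be sketched as a simple additive rule over a 1-D list of subband coefficients (the masks, strength values and additive form are illustrative assumptions, not the paper's exact scheme):

```python
def embed_watermark(coeffs, on_edge, near_edge, wm, a_edge=0.2, a_near=0.1):
    """Edge-aware additive embedding sketch.

    Coefficients flagged on edges receive the larger strength a_edge;
    coefficients captured by a morphological dilation around edges
    (near_edge) receive the smaller a_near; all others are untouched."""
    out = []
    for c, e, n, w in zip(coeffs, on_edge, near_edge, wm):
        if e:
            out.append(c + a_edge * w)
        elif n:
            out.append(c + a_near * w)
        else:
            out.append(c)
    return out
```

Concentrating the stronger perturbation on edge coefficients is what keeps the watermark transparent: edge regions mask distortion far better than smooth regions.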
Abstract: A novel physico-chemical route to produce few layer graphene nanoribbons with atomically smooth edges is reported, via acid treatment (H2SO4:HNO3) followed by characteristic thermal shock processes involving extremely cold substances. Samples were studied by scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), Raman spectroscopy and X-ray photoelectron spectroscopy. This method demonstrates the importance of having the nanotubes open ended for an efficient uniform unzipping along the nanotube axis. These nanoribbons are on average approximately 210 nm wide and consist of a few layers, as observed by transmission electron microscopy. The produced nanoribbons exhibit different chiralities, as observed by high resolution transmission electron microscopy. This method is able to provide graphene nanoribbons with atomically smooth edges which could be used in various applications including sensors, gas adsorption materials and composite fillers, among others.
Abstract: Detecting deadlocks is one of the important
problems in distributed systems, and different solutions have been
proposed for it. Among the many deadlock detection algorithms,
edge-chasing has been the most widely used. In edge-chasing
algorithms, a special message called a probe is made and sent along
dependency edges. When the initiator of a probe receives the probe
back, the existence of a deadlock is revealed. But these algorithms are
not problem-free. One of the problems associated with them is that
they cannot detect some deadlocks, and they even identify false
deadlocks. A key point not mentioned in the literature is how a
process whose execution has been blocked while waiting to obtain its
required resources can actually respond to probe
messages in the system. Also, the question of which process should
be victimized in order to achieve better performance when multiple
cycles involve a single process in the system has received
little attention. In this paper, one of the basic concepts of the
operating system - the daemon - is used to solve the problems
mentioned. The proposed algorithm sends
probe messages to the mandatory daemons and collects enough
information to effectively identify and resolve multi-cycle deadlocks
in distributed systems.
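The basic probe mechanism can be simulated on a wait-for graph. The sketch below follows the classic edge-chasing idea of a probe (initiator, sender, receiver) travelling along dependency edges; the daemon-based extension of the paper is not modeled:

```python
def edge_chasing_detects(wfg, initiator):
    """Simulate edge-chasing deadlock detection on a wait-for graph.

    wfg: dict mapping each process to the set of processes it waits for.
    A probe (initiator, sender, receiver) is forwarded along dependency
    edges; deadlock is declared when the initiator receives its own
    probe back, i.e. the probe has traversed a cycle."""
    seen = set()
    queue = [(initiator, initiator, dep) for dep in wfg.get(initiator, ())]
    while queue:
        init, sender, receiver = queue.pop()
        if receiver == init:
            return True                      # probe returned: cycle found
        if receiver in seen:
            continue                         # already propagated from here
        seen.add(receiver)
        for dep in wfg.get(receiver, ()):
            queue.append((init, receiver, dep))
    return False
```

For the cycle A waits for B, B for C, C for A, a probe initiated by A comes back to A and a deadlock is reported; without the closing edge it is not.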
Abstract: In this paper we propose a new content-weighted
method for full-reference (FR) video quality assessment of deaf
people video communication, using a region of
interest (ROI) and two-component weighted metrics. In our approach, an image is
partitioned into a region of interest and a region "dry-as-dust"; the
region of interest is then partitioned into two parts: edges and background
(smooth regions), whereas other methods (metrics) combine and
weight three or more parts, such as edges, edge errors, texture, smooth
regions, blur, block distance, etc. The approach also exploits the idea
that different image regions in deaf people video communication
have different perceptual significance with respect to quality. Intensity
edges certainly contain considerable image information and are
perceptually significant.
Abstract: Program slicing is the task of finding all statements in a program that directly or indirectly influence the value of a variable occurrence. The set of statements that can affect the value of a variable at some point in a program is called a program slice. In several software engineering applications, such as program debugging and measuring program cohesion and parallelism, several slices are computed at different program points. In this paper, algorithms are introduced to compute all backward and forward static slices of a computer program by traversing the program representation graph once. The program representation graph used in this paper is called the Program Dependence Graph (PDG). We have conducted an experimental comparison study using 25 software modules to show the effectiveness of the introduced algorithm for computing all backward static slices over single-point slicing approaches in computing the parallelism and functional cohesion of program modules. The effectiveness of the algorithm is measured in terms of execution time and the number of traversed PDG edges. The comparison study results indicate that using the introduced algorithm considerably saves the slicing time and effort required to measure module parallelism and functional cohesion.
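The idea of computing every backward slice with a single traversal can be sketched as memoised reachability over dependence edges. This is a simplification that assumes an acyclic PDG (real PDGs contain cycles and need strongly-connected-component collapsing first); the paper's algorithm is more general:

```python
def all_backward_slices(pdg):
    """Compute the backward slice of every node in one memoised pass.

    pdg: dict mapping each node to the set of nodes it depends on
    (reversed dependence edges). Each node's slice is itself plus the
    union of the slices of its dependences; memoisation ensures every
    edge is traversed only once overall."""
    slices = {}

    def slice_of(n):
        if n not in slices:
            s = {n}
            for dep in pdg.get(n, ()):
                s |= slice_of(dep)
            slices[n] = s
        return slices[n]

    for n in pdg:
        slice_of(n)
    return slices
```

This is what makes whole-program measures like cohesion and parallelism cheap: all slices come out of one traversal instead of one reachability query per program point.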
Abstract: Shadows add a great amount of realism to a scene, and
many algorithms exist to generate shadows. Recently, shadow
volumes (SVs) have earned a valuable
position in the gaming industry. In light of this, we concentrate on
simple but valuable initial partial steps for further optimization in SV
generation, i.e., model simplification and silhouette edge detection
and tracking. Shadow volume generation usually takes time in producing the
boundary silhouettes of the object, and if the object is complex then
the generation of edges becomes much harder and slower.
The challenge gets stiffer when real-time shadow generation and
rendering are demanded. We investigated a way to use a real-time
silhouette edge detection method, which takes advantage of
spatial and temporal coherence, and to exploit the level-of-detail
(LOD) technique for reducing the silhouette edges of the model, using
the simplified version of the model for shadow generation and speeding
up the running time. These steps greatly reduce the execution time of
shadow volume generation in real time and are easily adaptable to any
of the recently proposed SV techniques. Our main focus is to exploit
the LOD and silhouette edge detection techniques, adapting them to
further enhance shadow volume generation for real-time
rendering.
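The silhouette test itself is simple: an edge lies on the silhouette when its two adjacent triangles face opposite ways with respect to the light. A minimal sketch for a directional light follows (the coherence-based tracking described above is not shown):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def silhouette_edges(verts, tris, light_dir):
    """Return edges shared by one light-facing and one back-facing triangle.

    verts: list of 3-D points; tris: index triples with consistent winding;
    light_dir: direction of a directional light."""
    facing = {}
    for t, (i, j, k) in enumerate(tris):
        n = cross(sub(verts[j], verts[i]), sub(verts[k], verts[i]))
        facing[t] = dot(n, light_dir) > 0
    shared = {}
    for t, (i, j, k) in enumerate(tris):
        for e in ((i, j), (j, k), (k, i)):
            shared.setdefault(frozenset(e), []).append(t)
    return [tuple(sorted(e)) for e, ts in shared.items()
            if len(ts) == 2 and facing[ts[0]] != facing[ts[1]]]
```

Model simplification pays off directly here: fewer triangles means fewer shared edges to test and shorter silhouette loops to extrude into shadow volumes.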
Abstract: In this paper, based on a novel synthesis, a set of new simplified circuit designs to implement the linguistic-hedge operations for adjusting the fuzzy membership function set is presented. The circuits work in current mode and employ floating-gate MOS (FGMOS) transistors that operate in the weak inversion region. Compared to other proposed circuits, these circuits feature a severe reduction in the number of elements, a low supply voltage (0.7 V), and low power consumption (60dB). In this paper, a set of fuzzy linguistic-hedge circuits, including absolutely, very, much more, more, plus, minus, more or less and slightly, has been implemented in a 0.18 μm CMOS process. HSPICE simulation results confirm the validity of the proposed design technique and show the high performance of the circuits.
Abstract: Object localization is one of the major
challenges in creating intelligent transportation. Unfortunately, in
densely built-up urban areas, localization based on GPS alone
produces a large error, or simply becomes impossible. New
opportunities for localization arise from the rapidly emerging
concept of the wireless ad-hoc network. Such a network allows
estimating the distance between objects by measuring the
received signal level, and constructing a graph of distances in which
nodes are the objects to localize and edges are estimates of the
distances between pairs of nodes. Given the known coordinates of
individual nodes (anchors), it is possible to determine the location of
all (or part) of the remaining nodes of the graph. Moreover, a road
map available in digital format can provide localization routines
with valuable additional information to narrow the node location search.
However, despite the abundance of well-known algorithms for solving
the localization problem and significant research efforts, there are
still many issues that are currently addressed only partially. In this
paper, we propose a localization approach based on mapping the
distance graph onto digital road map data. In effect, the problem is
reduced to embedding the distance graph into a graph representing the
area's geolocation data. This makes it possible to localize objects, in some cases
even if only one reference point is available. We propose a simple
embedding algorithm and a sample implementation as spatial queries
over sensor network data stored in a spatial database, which allows
effective use of spatial indexing, optimized spatial search
routines and geometry functions.
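Turning received signal level into a distance estimate is commonly done with the log-distance path-loss model. The sketch below uses illustrative parameter values (reference power, reference distance and path-loss exponent are assumptions, not figures from the paper):

```python
def rssi_to_distance(rssi_dbm, rssi_at_d0=-40.0, d0=1.0, path_loss_exp=2.7):
    """Estimate distance from received signal strength.

    Log-distance path-loss model: RSSI(d) = RSSI(d0) - 10*n*log10(d/d0),
    inverted for d. rssi_at_d0 is the power measured at reference
    distance d0; path_loss_exp (n) is typically 2.7-4 in urban areas."""
    return d0 * 10 ** ((rssi_at_d0 - rssi_dbm) / (10 * path_loss_exp))
```

These per-pair estimates form the edge weights of the distance graph that is then embedded into the road-map graph.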
Abstract: At very high speeds, bubbles form in underwater vehicles because of sharp trailing edges or at places where the local pressure is lower than the vapor pressure. These bubbles are called cavities, and the size of the cavities grows as the velocity increases. A properly designed cavitator can induce the formation of a single big cavity all over the vehicle. Such a vehicle travelling in the vaporous cavity is called a supercavitating vehicle, and the present research work mainly focuses on the dynamic modeling of such vehicles. Cavitation of the fins is also accounted for, and its effect on the trajectory is well explained. The entire dynamics has been developed using the state space approach, and emphasis is given to the effect of the size and angle of attack of the cavitator. A control law has been established for the motion of the vehicle using Non-linear Dynamic Inversion (NDI) with the cavitator as the control surface.
Abstract: Image coding based on clustering provides immediate
access to targeted features of interest in a high quality decoded
image. This approach is useful for intelligent devices, as well as for
multimedia content-based description standards. The result of image
clustering cannot be precise in some positions, especially on pixels
with edge information, which produces ambiguity among the clusters.
Even with a good enhancement operator based on PDE, the quality of
the decoded image will highly depend on the clustering process. In
this paper, we introduce an ambiguity cluster in image coding to
represent pixels with vagueness properties. The presence of such a
cluster allows preserving some details inherent to edges as well as for
uncertain pixels. It will also be very useful during the decoding phase,
in which an anisotropic diffusion operator, such as Perona-Malik,
enhances the quality of the restored image. This work also offers a
comparative study to demonstrate the effectiveness of a fuzzy
clustering technique in detecting the ambiguity cluster without losing
much of the essential image information. Several experiments have been
carried out to demonstrate the usefulness of the ambiguity concept in
image compression. The coding results and the performance of the
proposed algorithms are discussed in terms of the peak signal-to-noise
ratio and the quantity of ambiguous pixels.
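The peak signal-to-noise ratio used for evaluation is standard and can be computed as follows (images are given here as flat pixel lists for brevity):

```python
import math

def psnr(original, decoded, max_val=255):
    """Peak signal-to-noise ratio between two equally sized images.

    PSNR = 10 * log10(max_val^2 / MSE); identical images give +inf."""
    mse = sum((a - b) ** 2 for a, b in zip(original, decoded)) / len(original)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)
```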
Abstract: Liver segmentation is the first significant process for
liver diagnosis from computed tomography. It segments the liver
structure from the other abdominal organs. Sophisticated filtering techniques
are indispensable for a proper segmentation. In this paper, we
employ 3D anisotropic diffusion as a preprocessing step. While
removing image noise, this technique preserves the significant parts
of the image, typically edges, lines or other details that are important
for the interpretation of the image. The segmentation task is done
by using thresholding with automatic threshold value selection, and
finally the false liver regions are eliminated using 3D connected components.
The results show that by employing 3D anisotropic filtering,
better liver segmentation results can be achieved even though a simple
segmentation technique is used.
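One explicit iteration of Perona-Malik-style anisotropic diffusion, shown here in 2-D as an analogue of the 3-D preprocessing step (the diffusivity function and parameter values are illustrative assumptions):

```python
import math

def perona_malik_step(img, kappa=15.0, dt=0.2):
    """One explicit anisotropic diffusion iteration on a 2-D image.

    img: list of lists of pixel values. The diffusivity
    g(grad) = exp(-(grad/kappa)^2) damps smoothing across strong
    gradients, so edges survive while noise in flat regions is removed.
    Border pixels are left unchanged for simplicity."""
    n, m = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            flux = 0.0
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                grad = img[i + di][j + dj] - img[i][j]
                g = math.exp(-(grad / kappa) ** 2)
                flux += g * grad
            out[i][j] = img[i][j] + dt * flux
    return out
```

A constant region is left untouched (zero gradients give zero flux), which is exactly the edge-preserving behaviour that makes the later thresholding reliable.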
Abstract: In the context of channel coding, Generalized Belief Propagation (GBP) is an iterative algorithm used to recover the transmitted bits sent through a noisy channel. To ensure a reliable transmission, we apply a mapping to the bits, called a code. This code induces artificial correlations between the bits to send, and it can be modeled by a graph whose nodes are the bits and whose edges are the correlations. This graph, called the Tanner graph, is used for most decoding algorithms such as Belief Propagation or Gallager-B. The GBP is based on a non-unique transformation of the Tanner graph into a so-called region graph. A clear advantage of the GBP over the other algorithms is the freedom in the construction of this graph. In this article, we explain a particular construction for specific graph topologies that yields relevant performance of the GBP. Moreover, we investigate the behavior of the GBP considered as a dynamic system in order to understand the way it evolves in terms of time and of the noise power of the channel. To this end we make use of classical measures and we introduce a new measure called the hyperspheres method that enables estimating the size of the attractors.
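The Tanner graph mentioned above is simply the bipartite graph of a binary parity-check matrix: one node per check (row), one per bit (column), with an edge wherever the matrix has a 1. A minimal construction:

```python
def tanner_graph(H):
    """Build the Tanner graph of a binary parity-check matrix H.

    Returns two adjacency maps of the bipartite graph:
    checks: check node index -> list of connected bit nodes,
    bits:   bit node index   -> list of connected check nodes."""
    checks = {c: [b for b, v in enumerate(row) if v]
              for c, row in enumerate(H)}
    bits = {}
    for c, bs in checks.items():
        for b in bs:
            bits.setdefault(b, []).append(c)
    return checks, bits
```

Message-passing decoders such as Belief Propagation alternate messages along exactly these edges; GBP instead groups them into regions, which is the non-unique transformation discussed above.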
Abstract: This paper proposes new enhancement models to the
methods of nonlinear anisotropic diffusion to greatly reduce speckle
and preserve image features in medical ultrasound images. By
incorporating local physical characteristics of the image, in this case
scatterer density, in addition to the gradient, into existing
tensor-based image diffusion methods, we were able to greatly improve the
performance of the existing filtering methods, namely edge
enhancing (EE) and coherence enhancing (CE) diffusion. The new
enhancement methods were tested using various ultrasound images,
including phantom and some clinical images, to determine the
amount of speckle reduction, edge, and coherence enhancements.
Scatterer density weighted nonlinear anisotropic diffusion
(SDWNAD) for ultrasound images consistently outperformed its
traditional tensor-based counterparts that use gradient only to weight
the diffusivity function. SDWNAD is shown to greatly reduce
speckle noise while preserving image features such as edges, orientation
coherence, and scatterer density. SDWNAD's superior performance
over nonlinear coherent diffusion (NCD), speckle reducing
anisotropic diffusion (SRAD), the adaptive weighted median filter
(AWMF), wavelet shrinkage (WS), and wavelet shrinkage with
contrast enhancement (WSCE) makes it an ideal
preprocessing step for automatic segmentation in ultrasound
imaging.
Abstract: The similarity comparison of RNA secondary
structures is important in studying the functions of RNAs. In recent
years, most existing tools represent the secondary structures by a
tree-based representation and calculate the similarity by tree alignment
distance. Different from previous approaches, we propose a new method
based on a maximum clique detection algorithm to extract the maximum
common structural elements of the compared RNA secondary structures.
A new graph-based similarity measurement and maximum common
subgraph detection procedure for comparing pure RNA secondary
structures is introduced. Given two RNA secondary structures, the
proposed algorithm consists of a process to determine the score of the
structural similarity by comparing the vertex labels, the
labelled edges and the exact degree of each vertex. The proposed
algorithm also includes a process to extract the common structural
elements of the compared secondary structures based on a
maximum clique formulation of the problem. This graph-based model
can also work with the NC-IUB code to perform pattern-based
searching. Therefore, it can be used to identify functional RNA motifs
in databases or to extract common substructures from complex
RNA secondary structures. We have demonstrated the performance of the
proposed algorithm through experimental results. It provides a new idea for
comparing RNA secondary structures. This tool is helpful to those
who are interested in structural bioinformatics.
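The maximum clique search at the core of the method can be sketched with a basic Bron-Kerbosch enumeration (without pivoting; the construction of the product graph from the two RNA structures is omitted):

```python
def max_clique(adj):
    """Return a largest clique of an undirected graph.

    adj: dict mapping each node to the set of its neighbours.
    Basic Bron-Kerbosch recursion: R is the clique being grown,
    P the candidates that extend it, X the already-explored nodes."""
    best = set()

    def bk(r, p, x):
        nonlocal best
        if not p and not x:
            if len(r) > len(best):
                best = set(r)
            return
        for v in list(p):
            bk(r | {v}, p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)

    bk(set(), set(adj), set())
    return best
```

In the intended use, the input graph would be a compatibility (product) graph of the two structures' labelled vertices and edges, so its maximum clique corresponds to their maximum common structural element.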