Abstract: Star graphs are Cayley graphs of symmetric groups of permutations, with transpositions as the generating sets. A star graph is preferred over a hypercube as an interconnection network topology for its ability to connect a greater number of nodes with lower degree. However, an attractive property of the hypercube is that it has a Hamiltonian decomposition, i.e. its edges can be partitioned into disjoint Hamiltonian cycles, so a simple routing can be found in the case of an edge failure. The existence of Hamiltonian cycles in Cayley graphs has been known for some time. So far, there are no published results on the much stronger condition of the existence of Hamiltonian decompositions. In this paper, we give a construction of a Hamiltonian decomposition of the star graph 5-star, which is of degree 4, by defining an automorphism of 5-star and a Hamiltonian cycle that is edge-disjoint from its image under the automorphism.
Abstract: A combination of image fusion and the quad tree decomposition method is used to detect the sunspot trajectories in each month and to compute the latitudes of these trajectories in each solar hemisphere. Daily solar images taken by the SOHO satellite are fused for each month, and the fused image is decomposed with the quad tree decomposition method in order to classify the sunspot trajectories and obtain precise information about their latitudes. The fusion also yields some remarkable physical conclusions about the behavior of solar magnetic fields. Using quad tree decomposition, we give information about the region on the solar surface and the space angle through which tremendous flares and hot plasma gases permeate interplanetary space and strike satellites and human technical systems. Here, sunspot images from June, July and August 2001 are used, and a method is given to compute the latitude of the sunspot trajectories in each month from sunspot images.
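As an illustration of the quad tree step (not code from the paper), a minimal sketch in Python, assuming a square power-of-two image and using the intensity range as the homogeneity test:

```python
import numpy as np

def quadtree(block, thresh, x0=0, y0=0, min_size=2):
    """Recursively split a square 2-D block while its intensity range
    exceeds thresh.  Returns a list of (row, col, size) leaf tiles."""
    h = block.shape[0]
    if h <= min_size or block.max() - block.min() <= thresh:
        return [(x0, y0, h)]
    m = h // 2
    leaves = []
    for dx in (0, m):
        for dy in (0, m):
            leaves += quadtree(block[dx:dx+m, dy:dy+m], thresh,
                               x0 + dx, y0 + dy, min_size)
    return leaves

# Toy "solar disc": uniform background with one bright square region.
img = np.zeros((16, 16))
img[4:8, 8:12] = 1.0
leaves = quadtree(img, thresh=0.5)
# Homogeneous regions stay as large tiles; only the quadrant containing
# the bright square is subdivided further.
```

Leaf tile positions and sizes then localize the inhomogeneous (sunspot-bearing) regions, from which latitudes could be read off once the disc geometry is known.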
Abstract: Multi-dimensional principal component analysis (PCA) is the extension of PCA, which is widely used as a dimensionality reduction technique in multivariate data analysis, to handle multi-dimensional data. To calculate the PCA, the singular value decomposition (SVD) is commonly employed because of its numerical stability. The multi-dimensional PCA can be calculated by using the higher-order SVD (HOSVD), proposed by Lathauwer et al., analogously to the ordinary PCA. In this paper, we apply the multi-dimensional PCA to multi-dimensional medical data including the functional independence measure (FIM) score, and describe the results of the experimental analysis.
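For reference, the ordinary PCA-via-SVD route mentioned above can be sketched in a few lines; this is a generic illustration, not the paper's multi-dimensional (HOSVD) computation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))          # 100 samples, 5 variables
Xc = X - X.mean(axis=0)                # center each variable

# SVD route: Xc = U S Vt.  Rows of Vt are the principal axes,
# and S**2 / (n - 1) are the component variances.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                 # project onto the top-2 components

# Same variances via the covariance-eigendecomposition route, descending.
evals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]
```

The HOSVD generalizes this by applying an SVD to each mode-n unfolding of the data tensor.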
Abstract: We address the balancing problem of transfer lines in this paper, seeking the optimal line balancing that minimizes the non-productive time. We focus on the tool change time and the face orientation change time, both of which influence the makespan. We consider machine capacity limitations and technological constraints associated with the manufacturing process of auto cylinder heads. The problem is represented by a mixed integer programming model that aims at distributing the design features to workstations and sequencing the machining processes at minimum non-productive time. The proposed model is solved by an algorithm built on linearization schemes and the Benders decomposition approach. The experiments show the efficiency of the algorithm in reaching the exact solution of small and medium problem instances in reasonable time.
Abstract: In this paper we introduce an efficient solution method for the eigen-decomposition of bisymmetric and persymmetric matrices of symmetric structures. Here we decompose the adjacency and Laplacian matrices of symmetric structures into submatrices of low dimension for fast and easy calculation of the eigenvalues and eigenvectors. Examples are included to show the efficiency of the method.
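A minimal sketch of this kind of decomposition for the even-order bisymmetric case; the block splitting below is the standard centrosymmetric result (eigenvalues of M are those of the half-size blocks A + BJ and A - BJ), assumed here for illustration rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4
J = np.eye(m)[::-1]                     # m x m exchange (flip) matrix
Jn = np.eye(2 * m)[::-1]                # full-size exchange matrix

# Build a random bisymmetric matrix: symmetric (M = M^T) and
# centrosymmetric (Jn M Jn = M), hence also persymmetric.
R = rng.normal(size=(2 * m, 2 * m))
M = (R + R.T) / 2
M = (M + Jn @ M @ Jn) / 2

# Partition M = [[A, B], [J B J, J A J]].  The orthogonal transform
# Q = (1/sqrt 2) [[I, I], [J, -J]] block-diagonalizes M into the two
# half-size symmetric blocks A + B J and A - B J.
A, B = M[:m, :m], M[:m, m:]
eig_small = np.concatenate([np.linalg.eigvalsh(A + B @ J),
                            np.linalg.eigvalsh(A - B @ J)])
eig_full = np.linalg.eigvalsh(M)
```

Two eigenproblems of half the dimension replace one full-size eigenproblem, which is the source of the speedup for symmetric structures.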
Abstract: Every 2-3 years the influenza B virus causes epidemics. Neuraminidase (NA) is an important target for influenza drug design. Although oseltamivir, an oral neuraminidase inhibitor, has shown good inhibitory efficiency against the wild-type influenza B virus, lower susceptibility of the R152K mutant has been reported. A better understanding of oseltamivir efficiency toward the influenza B NA wild-type, and of the resistance of the R152K mutant, could be useful for rational drug design. Here, two complex systems, wild-type and R152K NAs with oseltamivir bound, were studied using molecular dynamics (MD) simulations. Based on 5-ns MD simulations, the loss of a notable hydrogen bond and the decrease in the per-residue decomposition energy contributed to the drug by the mutated residue K152, compared with those of R152 in the wild type, were found to be a primary source of the high level of oseltamivir resistance due to the R152K mutation.
Abstract: Many scientific and engineering problems require the solution of large systems of linear equations of the form Ax = b in an effective manner. LU-decomposition offers good choices for solving this problem. Our approach is to find the lower bound on the number of processing elements needed for this purpose. Here the so-called Omega (Ω) calculus is used as a computational method for solving problems via their corresponding Diophantine relation. From the corresponding algorithm, a system of linear Diophantine equalities is formed using the domain of computation, which is given by the set of lattice points inside the polyhedron. Then the Mathematica program DiophantineGF.m is run. This program calculates the generating function from which it is possible to find the number of solutions of the system of Diophantine equalities, which in fact gives the lower bound on the number of processors needed for the corresponding algorithm. A mathematical explanation of the problem is given as well. Keywords—generating function, lattice points in polyhedron, lower bound of processor elements, system of Diophantine equations, Ω calculus.
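For reference, the LU factorization whose processor bound is being studied can be sketched as plain Doolittle elimination without pivoting; this is a generic illustration, not the paper's Ω-calculus computation:

```python
import numpy as np

def lu_doolittle(A):
    """Doolittle LU factorization without pivoting: A = L U,
    with L unit lower triangular and U upper triangular."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros_like(A, dtype=float)
    for k in range(n):
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]
        L[k+1:, k] = (A[k+1:, k] - L[k+1:, :k] @ U[:k, k]) / U[k, k]
    return L, U

A = np.array([[4., 3., 0.], [6., 3., 1.], [0., 1., 2.]])
b = np.array([1., 2., 3.])
L, U = lu_doolittle(A)
# Solve A x = b by forward then back substitution on the triangular factors.
y = np.linalg.solve(L, b)
x = np.linalg.solve(U, y)
```

The nested update pattern over k is precisely the iteration space whose lattice points the Diophantine analysis counts.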
Abstract: Their batch nature limits standard kernel principal component analysis (KPCA) methods in numerous applications, especially for dynamic or large-scale data. In this paper, an efficient adaptive approach is presented for the online extraction of kernel principal components (KPC). The contribution of this paper may be divided into two parts. First, the kernel covariance matrix is correctly updated to adapt to the changing characteristics of the data. Second, the KPC are recursively formulated to overcome the batch nature of standard KPCA. This formulation is derived from the recursive eigen-decomposition of the kernel covariance matrix and indicates the KPC variation caused by the new data. The proposed method not only alleviates the sub-optimality of the KPCA method for non-stationary data, but also maintains constant update speed and memory usage as the data size increases. Experiments on simulated data and real applications demonstrate that our approach yields improvements in terms of both computational speed and approximation accuracy.
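The batch KPCA that the recursive method is designed to avoid can be sketched as follows (a generic illustration with an assumed Gaussian kernel, not the paper's adaptive update):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 3))

# Gaussian (RBF) kernel Gram matrix.
sq = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
K = np.exp(-sq / 2.0)

# Double-center the Gram matrix (= centering in feature space).
n = len(X)
One = np.ones((n, n)) / n
Kc = K - One @ K - K @ One + One @ K @ One

# Kernel principal components: top eigenvectors of Kc, normalized so
# that each feature-space eigenvector has unit length.
w, V = np.linalg.eigh(Kc)
w, V = w[::-1], V[:, ::-1]             # sort descending
alphas = V[:, :2] / np.sqrt(w[:2])     # coefficients of the top-2 KPC
projections = Kc @ alphas              # KPC scores of the training data
```

The O(n^2) Gram matrix and full eigendecomposition at every update are exactly the costs the recursive formulation removes.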
Abstract: Process-oriented software development is a new software development paradigm in which software design is modeled by a business process, which is in turn translated into a process execution language for execution. The building blocks of this paradigm are software units that are composed to work according to the flow of the business process. This new paradigm still exhibits the characteristics of applications built with traditional software component technology. This paper discusses an approach that applies a traditional technique for software component fabrication to the design of process-oriented software units, called process components. These process components result from decomposing a business process of a particular application domain into subprocesses, and they can be reused to design the business processes of other application domains. The decomposition considers five managerial goals, namely cost effectiveness, ease of assembly, customization, reusability, and maintainability. The paper presents how to design or decompose process components from a business process model and how to measure technical features of the design that would affect the managerial goals. A comparison between the measurement values of different designs can tell which process component design is more appropriate for the managerial goals that have been set. The proposed approach can be applied in a Web Services environment, which accommodates process-oriented software development.
Abstract: In this paper, the decomposition-aggregation method is used to derive connective stability criteria for general linear composite systems via aggregation. The large-scale system is decomposed into a number of subsystems. By associating directed graphs with dynamic systems in an essential way, we define the relation between system structure and stability in the sense of Lyapunov. The stability criteria are then expressed in terms of the stability and system matrices of the subsystems, as well as the interconnection terms among subsystems, using the concepts of vector differential inequalities and vector Lyapunov functions. Then, we show that the stability of each subsystem and the stability of the aggregate model imply connective stability of the overall system. An example is reported, showing the efficiency of the proposed technique.
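A minimal sketch of the aggregation step, with illustrative numbers: subsystem stability margins go on the diagonal of a comparison matrix, bounds on the interconnection gains go off the diagonal, and stability of the aggregate model is checked with a Hurwitz/M-matrix test. The margins-as-eigenvalue-real-parts proxy and the gain values are assumptions for illustration, not the paper's example:

```python
import numpy as np

# Two stable subsystems with weak interconnections (illustrative values).
A1 = np.array([[-2.0, 1.0], [0.0, -3.0]])
A2 = np.array([[-4.0, 0.0], [1.0, -2.0]])
e12, e21 = 0.3, 0.2          # assumed bounds on the interconnection gains

# Subsystem stability margins: s_i = -max Re(eig(A_i)) > 0 means stable.
s1 = -np.linalg.eigvals(A1).real.max()
s2 = -np.linalg.eigvals(A2).real.max()

# Aggregate comparison matrix W = [[-s1, e12], [e21, -s2]].
# Connective stability holds if W is Hurwitz, equivalently if -W is an
# M-matrix, i.e. all leading principal minors of -W are positive.
W = np.array([[-s1, e12], [e21, -s2]])
minors_ok = (s1 > 0) and (s1 * s2 - e12 * e21 > 0)
hurwitz = np.linalg.eigvals(W).real.max() < 0
```

With these numbers the diagonal margins dominate the interconnection terms, so both tests agree that the aggregate model (and hence the composite system) is stable.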
Abstract: As a result of urbanization, the unpredictable growth of industry and transport, the production of chemicals, military activities, etc., the concentration of anthropogenic toxicants spread in nature exceeds all permissible standards. The most dangerous among these contaminants are organic compounds with great persistence, bioaccumulation, and toxicity, combined with their prominent occurrence in the environment and the food chain. Among natural ecological tools, plants, which still occupy above 40% of the world's land, were until recently considered organisms with only a limited ecological potential, merely accumulating contaminants of different structures in plant biomass and partially volatilizing them. However, analysis of the experimental data of the last two decades has revealed the essential role of plants in environmental remediation, due to their ability to carry out intracellular degradation processes leading to partial or complete decomposition of the carbon skeleton of contaminants of different structures. Though phytoremediation technologies are still in research and development, various applications have already been used successfully. The paper aims to analyze the mechanisms of organic contaminant uptake and detoxification in plants, the less studied issue in the evaluation and exploration of the potential of plants for environmental remediation.
Abstract: In this paper, we propose a Perceptually Optimized Foveation based Embedded ZeroTree Image Coder (POEFIC) that applies perceptual weighting to the wavelet coefficients prior to SPIHT encoding, in order to reach a targeted bit rate with improved perceptual quality around a fixation point, which determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS); this metric plays an important role in our POEFIC quality assessment. Our POEFIC coder is based on a vision model that incorporates various masking effects of HVS perception. Thus, our coder weights the wavelet coefficients according to that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) foveation masking, to remove or reduce considerable high frequencies in peripheral regions; 2) luminance and contrast masking; and 3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique. The experimental results show that our coder achieves very good performance in terms of quality measurement.
Abstract: The main goal of this paper is to show how elliptic boundary value problems arising in 2D linear elasticity can be solved numerically by using the fictitious domain method (FDM) and the Total-FETI domain decomposition method. We briefly mention the theoretical background of these methods and demonstrate their performance on a benchmark.
Abstract: The major objective of this paper is to introduce a new method to select genes from DNA microarray data. As the criterion to select genes, we suggest measuring the local changes in the correlation graph of each gene and selecting those genes whose local changes are largest. More precisely, we calculate correlation networks from DNA microarray data of cervical cancer, where each network represents a tissue of a certain tumor stage and each node in a network represents a gene. From these networks we extract one tree for each gene by a local decomposition of the correlation network. The interpretation of a tree is that it represents the n-nearest-neighbor genes on the n-th level of the tree, measured by the Dijkstra distance, and hence gives the local embedding of a gene within the correlation network. For the obtained trees we measure the pairwise similarity between trees rooted at the same gene from normal to cancerous tissues. This evaluates the modification of the tree topology due to tumor progression. Finally, we rank the obtained similarity values from all tissue comparisons and select the top-ranked genes. For these genes the local neighborhood in the correlation networks changes most between normal and cancerous tissues. As a result we find that the top-ranked genes are candidates suspected to be involved in tumor growth. This indicates that our method captures essential information from the underlying DNA microarray data of cervical cancer.
Abstract: Proper orthogonal decomposition (POD) is used to reconstruct spatio-temporal data of a fully developed turbulent channel flow with density variation at a Reynolds number of 150, based on the friction velocity and the channel half-width, and a Prandtl number of 0.71. To apply POD to the fully developed turbulent channel flow with density variation, the flow field (velocities, density, and temperature) is scaled by the corresponding root mean square (rms) values so that the flow field becomes dimensionless. A five-vector POD problem is solved numerically. The reconstructed second-order moments of velocity, temperature, and density from the POD eigenfunctions compare favorably with the original Direct Numerical Simulation (DNS) data.
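The snapshot-POD computation can be sketched via the SVD on synthetic data (a generic single-variable illustration, not the paper's five-vector, rms-scaled formulation):

```python
import numpy as np

# Synthetic snapshot matrix: each column is one "flow field" sample in time,
# built from two spatial modes plus small noise.
rng = np.random.default_rng(3)
t = np.linspace(0, 2 * np.pi, 50)
x = np.linspace(0, 1, 40)
U_field = (np.outer(np.sin(np.pi * x), np.sin(t))
           + 0.3 * np.outer(np.sin(2 * np.pi * x), np.cos(2 * t))
           + 0.01 * rng.normal(size=(40, 50)))

mean = U_field.mean(axis=1, keepdims=True)
F = U_field - mean                      # fluctuations about the mean field

# POD modes = left singular vectors; modal energies = squared singular values.
Phi, S, Vt = np.linalg.svd(F, full_matrices=False)
r = 2
F_r = (Phi[:, :r] * S[:r]) @ Vt[:r]     # rank-2 reconstruction
rel_err = np.linalg.norm(F - F_r) / np.linalg.norm(F)
```

Because the synthetic field is essentially rank-2, two POD modes capture nearly all of the fluctuation energy, mirroring how low-order POD reconstructions recover the second-order moments of the DNS data.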
Abstract: This paper presents a new spread-spectrum watermarking algorithm for digital images in the discrete wavelet transform (DWT) domain. The algorithm is applied to embed watermarks, such as patient identification/source identification or a doctor's signature in binary image format, into a host digital radiological image for potential telemedicine applications. The performance of the algorithm is analysed by varying the gain factor, the subband decomposition levels, and the size of the watermark. Simulation results show that the proposed method achieves higher watermarking capacity.
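A minimal sketch of spread-spectrum embedding in one DWT subband, using a hand-rolled one-level Haar transform and non-blind extraction; the gain factor and sizes are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar DWT: returns (LL, (LH, HL, HH))."""
    a = (img[0::2] + img[1::2]) / 2        # row averages
    d = (img[0::2] - img[1::2]) / 2        # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, (LH, HL, HH)

def ihaar2d(LL, bands):
    """Exact inverse of haar2d."""
    LH, HL, HH = bands
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2], img[1::2] = a + d, a - d
    return img

rng = np.random.default_rng(4)
host = rng.uniform(0, 255, size=(64, 64))
wm = rng.choice([-1.0, 1.0], size=(32, 32))   # bipolar spread-spectrum mark
gain = 2.0                                    # illustrative gain factor

LL, (LH, HL, HH) = haar2d(host)
marked = ihaar2d(LL, (LH + gain * wm, HL, HH))  # embed in one detail subband

# Non-blind extraction: compare the marked subband with the original one.
_, (LH2, _, _) = haar2d(marked)
wm_hat = np.sign(LH2 - LH)
```

Varying `gain` and the decomposition depth trades imperceptibility against robustness, which is the parameter study the abstract describes.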
Abstract: In this paper, we present a framework to determine Haar wavelet solutions of Bratu-type equations, which are widely applicable in the fuel ignition problems of combustion theory and in heat transfer. The method proceeds by expanding the highest derivative in a Haar series and integrating the series. Several examples are given to confirm the efficiency and accuracy of the proposed algorithm. The results show that the proposed approach agrees closely with the exact solution.
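For comparison with such solvers, the standard Bratu problem u'' + λe^u = 0, u(0) = u(1) = 0 can be solved by plain finite differences with Newton iteration (a baseline sketch, not the paper's Haar method):

```python
import numpy as np

# Bratu problem u'' + lam*exp(u) = 0, u(0) = u(1) = 0, second-order
# central differences on a uniform grid, Newton's method on the residual.
lam, n = 1.0, 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u = np.zeros(n)                          # zero guess -> lower solution branch

for _ in range(20):
    F = np.zeros(n)
    F[1:-1] = (u[:-2] - 2*u[1:-1] + u[2:]) / h**2 + lam * np.exp(u[1:-1])
    # Jacobian of the interior equations (tridiagonal).
    Jac = np.zeros((n, n))
    idx = np.arange(1, n - 1)
    Jac[idx, idx] = -2 / h**2 + lam * np.exp(u[idx])
    Jac[idx, idx - 1] = 1 / h**2
    Jac[idx, idx + 1] = 1 / h**2
    Jac[0, 0] = Jac[-1, -1] = 1.0        # Dirichlet rows: correction is 0
    du = np.linalg.solve(Jac, -F)
    u += du
    if np.abs(du).max() < 1e-12:
        break
```

For λ = 1 the lower-branch solution has u(0.5) ≈ 0.1405, a common reference value for checking Bratu solvers.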
Abstract: Sparse representation has long been studied, and several dictionary learning methods have been proposed. Dictionary learning methods are widely used because they are adaptive. In this paper, a new dictionary learning method for audio is proposed. Signals are first decomposed into Intrinsic Mode Functions (IMF) of different degrees using the Empirical Mode Decomposition (EMD) technique. These IMFs then form a learned dictionary. To reduce the size of the dictionary, the K-means method is applied to it to generate a K-EMD dictionary. Compared to the K-SVD algorithm, the K-EMD dictionary decomposes audio signals into structured components: the sparsity of the representation is increased by 34.4% and the SNR of the recovered audio signals is increased by 20.9%.
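The dictionary-reduction step can be sketched with plain Lloyd's K-means; the windowed oscillatory atoms below are stand-ins for EMD-derived IMF segments, not actual IMFs:

```python
import numpy as np

rng = np.random.default_rng(5)

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm; rows of X are points."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None] - C[None])**2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                C[j] = X[labels == j].mean(0)
    return C

# Stand-in for an EMD-derived dictionary: many windowed oscillatory atoms.
n, n_atoms = 64, 200
t = np.arange(n)
freqs = rng.uniform(0.02, 0.4, n_atoms)
atoms = np.sin(2 * np.pi * freqs[:, None] * t) * np.hanning(n)
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)

# Reduce the 200 atoms to a 20-atom "K-EMD"-style dictionary and renormalize.
D = kmeans(atoms, 20)
D /= np.linalg.norm(D, axis=1, keepdims=True)
```

Clustering merges near-duplicate atoms, so the reduced dictionary stays representative while sparse coding over it becomes much cheaper.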
Abstract: Nano-MgO has been synthesized by a hydration and dehydration method that modifies commercial MgO. The prepared MgO was investigated as a heterogeneous base catalyst in the transesterification process for biodiesel production using palm oil. The TGA, FT-IR and XRD results obtained in this study agree with each other and confirm the formation of nano-MgO from the decomposition of Mg(OH)2. This study showed that the prepared nano-MgO is a better base transesterification catalyst than commercial MgO. The nano-MgO calcined at 600ºC gave the highest conversion of palm oil to biodiesel, 51.3%.
Abstract: In this paper, we study a numerical method for solving second-order fuzzy differential equations using the Adomian decomposition method under strongly generalized differentiability. We present an example with an initial condition having four different solutions to illustrate the efficiency of the proposed method under strongly generalized differentiability.
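As a crisp (non-fuzzy) illustration of the Adomian decomposition idea, the linear problem u'' = -u, u(0) = 0, u'(0) = 1 can be solved by repeated double integration of polynomial terms; the fuzzy version applies the same recursion to the endpoint functions of each α-cut:

```python
import numpy as np

def double_integral(c):
    """Twice-iterated integral from 0 of a polynomial with coefficients
    c[k] for x**k (both constants of integration are zero)."""
    c1 = np.concatenate([[0.0], c / (np.arange(len(c)) + 1)])
    return np.concatenate([[0.0], c1 / (np.arange(len(c1)) + 1)])

# ADM recursion for u'' = -u, u(0) = 0, u'(0) = 1:
#   u0(x) = x (from the initial conditions),  u_{n+1} = -I2[u_n].
terms = [np.array([0.0, 1.0])]
for _ in range(8):
    terms.append(-double_integral(terms[-1]))

# Sum the series terms (pad coefficient arrays to a common length).
L = max(len(c) for c in terms)
series = sum(np.pad(c, (0, L - len(c))) for c in terms)

x = np.linspace(0, 1, 11)
approx = sum(series[k] * x**k for k in range(L))
```

The partial sums reproduce the Taylor series of sin(x), the exact solution, so a handful of Adomian terms already matches it to high accuracy on [0, 1].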