Abstract: The Chinese Postman Problem (CPP) is one of the
classical problems in graph theory and is applicable in a wide range
of fields. With the rapid development of hybrid systems and model-based
testing, the Chinese Postman Problem with Time-Dependent Travel Times
(CPPTDT) has become more realistic than its classical counterpart. In
previous work, we proposed the first integer programming formulation
for the CPPTDT, namely the circuit formulation, based on which several
polyhedral results were investigated and a cutting-plane algorithm was
designed. That formulation, however, has a major drawback: it can only
solve special instances in which all circuits pass through the origin.
This paper therefore proposes a new integer programming formulation
that solves all general instances of the CPPTDT. Moreover, the new
formulation dramatically reduces the size of the circuit formulation,
which makes it possible to design more efficient algorithms for the
CPPTDT in future research.
Abstract: As Computed Tomography (CT) normally requires hundreds
of projections to reconstruct an image, patients are exposed to more
X-ray energy, which may cause side effects such as cancer. Even when
the variability of the particles in the object is very low, computed
tomography requires many projections for good-quality reconstruction.
In this paper, the low variability of the particles in an object is
exploited to obtain a good-quality reconstruction. Although the
reconstructed image and the original image have the same projections,
in general they need not be identical. If a priori information about
the image is known in addition to its projections, a good-quality
reconstruction becomes possible. This paper shows experimentally why
conventional algorithms fail to reconstruct from a few projections,
and gives an efficient polynomial-time algorithm that reconstructs a
bi-level image from its row and column projections, a known sub-image
of the unknown image, and smoothness constraints, by reducing the
reconstruction problem to an integral max-flow problem. The paper also
discusses necessary and sufficient conditions for uniqueness, and the
extension of 2D bi-level image reconstruction to 3D.
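The reduction at the heart of this abstract, reconstructing a binary matrix from its row and column projections via an integral max flow, can be sketched as follows. This is a minimal illustration of the reduction only: the paper's full algorithm additionally handles a known sub-image and smoothness constraints, which are omitted here, and the function names are hypothetical.

```python
from collections import deque

def reconstruct(row_sums, col_sums):
    """Reconstruct a binary matrix with the given row/column sums via an
    integral max-flow, or return None if no such matrix exists."""
    m, n = len(row_sums), len(col_sums)
    if sum(row_sums) != sum(col_sums):
        return None
    S, T = m + n, m + n + 1                    # source and sink node ids
    cap = [[0] * (m + n + 2) for _ in range(m + n + 2)]
    for i in range(m):
        cap[S][i] = row_sums[i]                # source -> row i
        for j in range(n):
            cap[i][m + j] = 1                  # each cell used at most once
    for j in range(n):
        cap[m + j][T] = col_sums[j]            # column j -> sink

    def bfs():                                 # shortest augmenting path
        parent = {S: None}
        q = deque([S])
        while q:
            u = q.popleft()
            if u == T:
                return parent
            for v in range(m + n + 2):
                if v not in parent and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        return None

    flow = 0
    while True:
        parent = bfs()
        if parent is None:
            break
        path, v = [], T
        while v != S:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
        flow += bottleneck
    if flow != sum(row_sums):
        return None
    # residual capacity on the reverse arc marks a filled cell
    return [[cap[m + j][i] for j in range(n)] for i in range(m)]
```

Because all capacities are integers, the max flow is integral, so each cell arc carries flow 0 or 1 and the flow directly encodes a binary matrix.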
Abstract: This paper presents a genetic algorithm based
approach for solving security constrained optimal power flow
problem (SCOPF) including FACTS devices. The optimal locations of
FACTS devices are identified using an index called the overload index,
and their optimal values are obtained using an enhanced genetic
algorithm. The allocation produced by the proposed method optimizes
the investment, taking into account its effects on security in terms
of the alleviation of line overloads. The proposed approach has been
tested on the IEEE 30-bus system to demonstrate the effectiveness of
the proposed algorithm in solving the SCOPF problem.
Abstract: Discrete particle swarm optimization (DPSO) is a
powerful stochastic evolutionary algorithm that is used to solve the
large-scale, discrete, and nonlinear optimization problems. However,
it has been observed that the standard DPSO algorithm suffers from
premature convergence when solving a complex optimization problem
such as transmission expansion planning (TEP). To resolve this
problem, an advanced discrete particle swarm optimization (ADPSO) is
proposed in this paper. Simulation results show that ADPSO optimizes
line loading in transmission expansion planning more precisely than
DPSO.
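The ADPSO modifications are not given in this abstract, so as a reference point here is a minimal sketch of the standard binary DPSO baseline (the sigmoid-velocity variant of Kennedy and Eberhart) on a toy one-max objective; all names and parameter values are illustrative.

```python
import math
import random

def binary_dpso(fitness, n_bits, n_particles=20, iters=60, seed=1):
    """Minimal binary (discrete) PSO: velocities stay real-valued and a
    sigmoid maps each velocity to the probability that the bit is 1."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5              # inertia, cognitive, social weights
    X = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    V = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [fitness(x) for x in X]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                # sigmoid turns the velocity into a probability of "1"
                X[i][d] = 1 if rng.random() < 1 / (1 + math.exp(-V[i][d])) else 0
            f = fitness(X[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = X[i][:], f
    return gbest, gbest_f
```

Premature convergence shows up when all particles collapse onto `gbest` early; the advanced variants counter this by reinjecting diversity into the swarm.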
Abstract: This paper is concerned with the application of a vision control algorithm to a robot's point-placement task along a discontinuous trajectory caused by an obstacle. The presented vision control algorithm consists of four models: the robot kinematic model, the vision system model, the parameter estimation model, and the robot joint angle estimation model. When the robot moves toward a target along a discontinuous trajectory, several types of obstacles appear in two obstacle regions. This study investigates how these changes affect the presented vision control algorithm. The practicality of the vision control algorithm is then demonstrated experimentally by performing the robot's point-placement task along a discontinuous trajectory caused by an obstacle.
Abstract: This is a study of the numerical simulation of convection-diffusion transport of a chemical species in steady flow through a small-diameter tube, which is lined with a very thin layer of retentive and absorptive material. The species may undergo a first-order kinetic reversible phase exchange with the wall material and irreversible absorption into the tube wall. Owing to the velocity shear across the tube section, the chemical species may spread axially along the tube at a rate much larger than that given by molecular diffusion; this process is known as dispersion. While the long-time dispersion behavior, well described by the Taylor model, has been studied extensively in the literature, the early development of the dispersion process is by contrast much less investigated. By early development we mean a span of time, after the release of the chemical into the flow, that is shorter than or comparable to the diffusion time scale across the tube section. To understand this early development, the governing equations along with the reactive boundary conditions are solved numerically using the Flux-Corrected Transport Algorithm (FCTA). The computation enables us to investigate the combined effects of the reversible and irreversible wall reactions on the early development of the dispersion coefficient. One notable result is that the dispersion coefficient may approach its steady-state limit in a short time under the following conditions: (i) a high value of the Damköhler number (say Da ≥ 10); and (ii) a small but non-zero value of the absorption rate (say Γ* ≤ 0.5).
Abstract: This paper presents an algorithm that extends the rapidly-exploring random tree (RRT) framework to deal with changes in the task environment. The algorithm, called the Retrieval RRT Strategy (RRS), combines a support vector machine (SVM) with RRT and plans robot motion in the presence of changes in the surrounding environment. The algorithm consists of two levels. At the first level, the SVM is built and selects a proper path from a bank of RRTs for a given environment. At the second level, a real path is planned by the RRT planners for the given environment. The suggested method is applied to the control of a KUKA™ commercial 6-DOF robot manipulator, and its feasibility and efficiency are demonstrated via co-simulation in MatLab™ and RecurDyn™.
Abstract: Computed tomography and laminography are being heavily investigated within a compressive-sensing-based image reconstruction framework, to reduce the dose both to patients and to radiosensitive devices such as multilayer microelectronic circuit boards. Researchers are currently working to optimize compressive-sensing-based iterative image reconstruction algorithms to obtain better-quality images. However, the effects of the sampled data's properties on the quality of the reconstructed image, particularly under insufficiently sampled data conditions, have not been explored in computed laminography. In this paper, we investigate the effects of two data properties, sampling density and data incoherence, on images reconstructed by conventional computed laminography and by a recently proposed method called the spherical sinusoidal scanning scheme. We find that, in a compressive-sensing-based image reconstruction framework, image quality depends mainly on data incoherence when the data are uniformly sampled.
Abstract: Digital watermarking is the process of embedding
information into a digital signal, and it can be used in digital
rights management (DRM) systems. A visible watermark (often called a
logo), frequently seen in TV programs, can indicate the owner of the
copyright and protects the copyright in an active way. However, most
schemes do not consider the process of removing the visible watermark.
To solve this problem, a visible watermarking scheme with both
embedding and removal processes is proposed, under the control of a
secure template. The template generates different versions of the
watermark that appear visually identical to different users. Users
with the right key can completely remove the watermark and recover the
original image, while unauthorized users are prevented from removing
it. Experimental results show that our watermarking algorithm achieves
good visual quality and is hard for illegitimate users to remove.
Additionally, authorized users can completely remove the visible
watermark and recover the original image with good quality.
Abstract: Among all engine control variables, the automotive engine
air-ratio plays an important role in reducing emissions and fuel
consumption while maintaining satisfactory engine power. In order to
control the air-ratio effectively, this paper presents a model
predictive fuzzy control algorithm based on online least-squares
support vector machines prediction model and fuzzy logic optimizer.
The proposed control algorithm was also implemented on a real car for
testing and the results are highly satisfactory. Experimental results
show that the proposed control algorithm can regulate the engine
air-ratio to the stoichiometric value, 1.0, under external disturbance
with less than 5% tolerance.
Abstract: In this paper, we propose a morphing method by which face color images can be freely transformed. The main focus of this work is the transformation of one face image into another. The method is fully automatic in that it can morph two face images by automatically detecting all the control points necessary to perform the morph. A face-detection neural network, edge detection, and median filters are employed to detect the face position and features. Five control points, for both the source and target images, are then extracted based on the facial features. A triangulation method is then used to match and warp the source image to the target image using the control points. Finally, color interpolation is performed using a color Gaussian model that calculates the color for each particular frame depending on the number of frames used. A real-coded genetic algorithm is used in both the image warping and color blending steps to assist in step-size decisions and to speed up the morphing. This method results in very smooth morphs and is fast to process.
Abstract: Accurate timing alignment and stability are important
for maximizing the true counts and minimizing the random counts in
positron emission tomography. The signals output from the detectors
must therefore be centered with respect to the two isotopes before
operation and fed into four pulse-processing units, each of which can
accept up to eight inputs. The dual-source computed tomography setup
consists of two units on the left for the 15 detector signals of the
Cs-137 isotope and two units on the right for the 15 detector signals
of the Co-60 isotope. The gamma spectrum consists of either a single
photopeak or multiple photopeaks. This allows energy-discrimination
hardware associated with the data acquisition system to acquire
photon-count data at a specific energy, even if detectors with poor
energy resolution are used. It also helps avoid counting
Compton-scatter events, especially when a single discrete gamma
photopeak is emitted by the source, as in the case of Cs-137. In this
study, the polyenergetic version of the alternating minimization
algorithm is applied to the dual-energy gamma computed tomography
problem.
Abstract: A robust AUSM+ upwind discretisation scheme has been developed to simulate multiphase flow using consistent spatial discretisation schemes and a modified low-Mach-number diffusion term. The impact of the selection of an interfacial pressure model has also been investigated. Three representative test cases have been simulated to evaluate the accuracy of the commonly used stiffened-gas equation of state with respect to the IAPWS-IF97 equation of state for water. The algorithm demonstrates a combination of robustness and accuracy over a range of flow conditions, with the stiffened-gas equation tending to overestimate liquid temperature and density profiles.
Abstract: In this work we present, to the best of our knowledge,
the first efficient digital watermarking scheme for MPEG audio layer 3
files that operates directly in the compressed-data domain, while
manipulating the time and subband/channel domains. In addition, it
does not need the original signal to detect the watermark. Our scheme
was implemented with special care for the efficient usage of two
limited computing resources: time and space. It offers the industrial
user watermark embedding and detection in time comparable to the real
playing time of the original audio file, depending on the MPEG
compression, while the end user/audience hears no artifacts or delays
in the watermarked audio file. Furthermore, it overcomes the
vulnerability to compression/recompression attacks of algorithms
operating in the PCM data domain, as it places the watermark in the
scale-factor domain rather than in the digitized sound data. The
strength of our scheme, which allows it to be used successfully for
both authentication and copyright protection, relies on the fact that
users establish ownership of the audio file not simply by detecting
the bit pattern that comprises the watermark itself, but by showing
that the legal owner knows a hard-to-compute property of the
watermark.
Abstract: A social network is a set of people, organizations, or other social entities connected by some form of relationship. Social network analysis broadly covers the visual and mathematical representation of those relationships. The Web can also be considered a social network. This paper presents an innovative approach to analyzing a social network using a variant of the existing ant colony optimization algorithm, called the Clever Ant Colony Metaphor. Experiments have been performed, and interesting findings and observations are inferred from the proposed model.
Abstract: By taking advantage of both k-NN, which is highly
accurate, and K-means clustering, which is able to reduce
classification time, we introduce Cluster-k-Nearest-Neighbor: a
"variable-k" NN that works with the centroids (mean points) of the
subclasses generated by a clustering algorithm. In general, the
K-means algorithm is not stable in terms of accuracy; for that reason,
we develop another algorithm for clustering our space that gives
higher accuracy than K-means, fewer subclasses, stability, and a
bounded classification time with respect to the data size. We obtain
between 96% and 99.7% accuracy in the classification of six different
types of time series using the K-means clustering algorithm, and 99.7%
using the new clustering algorithm.
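The general idea, replacing a full k-NN search with a nearest-centroid search over per-class K-means subclasses, can be sketched as follows. This uses plain Lloyd's K-means rather than the paper's improved clustering algorithm, which the abstract does not specify; all names are illustrative.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's K-means; returns k centroids."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda c: dist2(p, centers[c]))].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old center if the cluster emptied
                centers[j] = [sum(xs) / len(cl) for xs in zip(*cl)]
    return centers

class ClusterKNN:
    """Classify a query against per-class K-means centroids instead of
    every training point, trading a little accuracy for faster queries."""
    def __init__(self, labelled_data, k=2):
        # labelled_data: {label: [point, ...]}
        self.centroids = [(label, c)
                          for label, pts in labelled_data.items()
                          for c in kmeans(pts, min(k, len(pts)))]

    def predict(self, p):
        label, _ = min(self.centroids, key=lambda lc: dist2(p, lc[1]))
        return label
```

A query now costs one distance per subclass centroid rather than one per training point, which is where the bounded classification time comes from.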
Abstract: In the context of channel coding, Generalized Belief Propagation (GBP) is an iterative algorithm used to recover the bits sent through a noisy channel. To ensure reliable transmission, we apply a map to the bits, called a code. This code induces artificial correlations between the bits to be sent, and it can be modeled by a graph whose nodes are the bits and whose edges are the correlations. This graph, called the Tanner graph, is used by most decoding algorithms, such as Belief Propagation and Gallager-B. GBP is based on a non-unique transformation of the Tanner graph into a so-called region graph. A clear advantage of GBP over the other algorithms is the freedom in the construction of this graph. In this article, we describe a particular construction for specific graph topologies that yields strong GBP performance. Moreover, we investigate the behavior of GBP considered as a dynamical system, in order to understand how it evolves over time and with the noise power of the channel. To this end, we use classical measures and introduce a new measure, the hypersphere method, which makes it possible to estimate the size of the attractors.
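For a concrete feel of iterative decoding on a Tanner graph, here is a minimal hard-decision bit-flipping decoder, a simplified relative of Gallager-B; full GBP on a region graph is considerably more involved and is not sketched here. The parity-check matrix and function names are illustrative.

```python
def bit_flip_decode(H, word, max_iters=20):
    """Hard-decision bit-flipping on the Tanner graph of parity-check
    matrix H: repeatedly flip the bit involved in the largest number of
    unsatisfied checks until the syndrome is zero."""
    w = list(word)
    n = len(w)
    for _ in range(max_iters):
        # indices of parity checks the current word violates
        unsat = [i for i, row in enumerate(H)
                 if sum(r * b for r, b in zip(row, w)) % 2]
        if not unsat:
            return w                          # valid codeword reached
        # count, per bit, how many unsatisfied checks it participates in
        votes = [sum(H[i][j] for i in unsat) for j in range(n)]
        w[votes.index(max(votes))] ^= 1       # flip the worst offender
    return None                               # failed to converge
```

On the (7,4) Hamming code this corrects any single-bit error; on long sparse codes the same message-counting idea, with soft probabilities instead of hard votes, becomes Belief Propagation.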
Abstract: Lossless compression schemes with secure transmission
play a key role in telemedicine applications, where they help in
accurate diagnosis and research. Traditional cryptographic algorithms
for data security are not fast enough to process vast amounts of data.
Hence, the novel secured lossless compression approach proposed in
this paper is based on a reversible integer wavelet transform, the
EZW algorithm, a new modified run-length coding for character
representation, and selective bit scrambling. The use of the lifting
scheme allows the generation of truly lossless integer-to-integer
wavelet transforms. Images are compressed and decompressed with the
well-known EZW algorithm. The proposed modified run-length coding
greatly improves compression performance and also increases the
security level. The scrambling method employed is fast, simple to
implement, and provides security. The lossless compression ratios and
distortion performance of the proposed method are found to be better
than those of other lossless techniques.
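The paper's modified run-length coding is not specified in the abstract; for orientation, classic run-length coding, which it builds on, can be sketched as follows (names illustrative).

```python
def rle_encode(data):
    """Classic run-length encoding: emit (symbol, run length) pairs."""
    out = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1                     # extend the run of data[i]
        out.append((data[i], j - i))
        i = j
    return out

def rle_decode(pairs):
    """Inverse of rle_encode: expand each (symbol, count) pair."""
    return "".join(sym * n for sym, n in pairs)
```

The roundtrip is exact, which is the property a lossless pipeline needs; the "modified" variants typically change how the pairs are represented to gain compression or mix in a secret permutation for security.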
Abstract: This paper presents a highly efficient algorithm for detecting and tracking humans and objects in video surveillance sequences. Mean-shift clustering is applied to background-differenced image sequences. For efficiency, all calculations are performed on integral images. Novel corresponding exponential integral kernels are introduced to allow the application of non-uniform kernels for clustering, which dramatically increases robustness without giving up the efficiency of the integral data structures. Experimental results demonstrating the power of this approach are presented.
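The integral-image trick this abstract relies on, constant-time rectangle sums after one linear-time pass, can be sketched as follows; this is the generic data structure, not the paper's exponential kernels.

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y-1][0..x-1], with an
    extra zero row and column so lookups need no boundary checks."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]                 # running sum of this row
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of img[y0..y1][x0..x1] in O(1) from the integral image."""
    return (ii[y1 + 1][x1 + 1] - ii[y0][x1 + 1]
            - ii[y1 + 1][x0] + ii[y0][x0])
```

Each mean-shift iteration then costs a handful of table lookups per window instead of a full pass over the window's pixels.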
Abstract: Minimization methods for training feed-forward networks with backpropagation are compared. Feed-forward network training is a special case of functional minimization, where no explicit model of the data is assumed. Because of the high dimensionality of the data, linearization of the training problem through the use of orthogonal basis functions is not desirable; the focus is therefore functional minimization on any basis. A number of methods based on local gradient and Hessian matrices are discussed, and modifications of many first- and second-order training methods are considered. Using share-rate data, experiments show that conjugate gradient and quasi-Newton methods outperform the gradient descent methods. The Levenberg-Marquardt algorithm is of special interest in financial forecasting.
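To make the first- versus second-order contrast concrete, the following toy comparison minimizes an ill-conditioned quadratic (a stand-in; the paper's experiments use share-rate data, which are not reproduced here): fixed-step gradient descent needs many iterations, while a single exact Newton step using the Hessian lands on the minimum. All values are illustrative.

```python
def grad(A, b, x):
    """Gradient of f(x) = 0.5 * x^T A x - b^T x, i.e. A x - b."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) - b[i]
            for i in range(len(x))]

# Ill-conditioned 2-D quadratic with Hessian A; minimum at x* = A^{-1} b = (1, 1)
A = [[10.0, 0.0], [0.0, 1.0]]
b = [10.0, 1.0]

# First-order method: fixed-step gradient descent, many small steps
x = [0.0, 0.0]
for _ in range(200):
    g = grad(A, b, x)
    x = [xi - 0.05 * gi for xi, gi in zip(x, g)]

# Second-order method: one exact Newton step x - A^{-1} grad(x)
# (A is diagonal here, so the inverse is elementwise)
g0 = grad(A, b, [0.0, 0.0])
newton = [0.0 - g0[0] / A[0][0], 0.0 - g0[1] / A[1][1]]
```

The gradient-descent step size is capped by the largest curvature (here 10), so progress along the flat direction is slow; curvature-aware methods like quasi-Newton and Levenberg-Marquardt rescale each direction, which is why they win on badly conditioned problems.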