Abstract: One of the ubiquitous routines in medical practice is searching through voluminous piles of clinical documents. In this paper we introduce a distributed system to search and exchange clinical documents. Clinical documents are distributed peer-to-peer. Relevant information is found in multiple iterations of cross-searches between the clinical text and its domain encyclopedia.
Abstract: The objective of this paper is to design a pattern classification model based on the back-propagation (BP) algorithm for a decision support system. The standard BP model fully connects every node in each layer to the next, from the input layer to the output layer. It therefore requires considerable computing time and many training iterations to achieve good performance and an acceptable error rate when generating patterns or training the network. The model proposed here instead uses exclusive connections between the hidden-layer nodes and the output nodes. Its advantages are a smaller number of iterations and better performance compared with the standard back-propagation model. We simulated several classification cases under different network settings (e.g., the number of hidden layers and nodes, and the number of classes and iterations). In most of the simulated cases, the BP model with exclusive connections gave satisfactory results compared to standard BP. We expect the algorithm to be applicable to face identification, data analysis, and mapping between environmental data and information.
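The exclusive hidden-to-output connectivity described above can be sketched by masking the hidden-to-output weight matrix of an ordinary back-propagation network. This is a minimal illustrative reconstruction, not the paper's implementation; the network sizes, toy data, and learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 6, 2            # 3 hidden nodes per output unit

# exclusive-connection mask: each output connects only to its own
# group of hidden nodes
mask = np.zeros((n_hidden, n_out))
for o in range(n_out):
    mask[o * 3:(o + 1) * 3, o] = 1.0

W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
W2 = rng.normal(0.0, 0.5, (n_hidden, n_out)) * mask

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy two-class data with one-hot targets
X = rng.normal(size=(20, n_in))
T = (X[:, :1] + X[:, 1:2] > 0).astype(float)
T = np.hstack([T, 1.0 - T])

lr = 0.5
losses = []
for _ in range(200):
    H = sigmoid(X @ W1)                    # forward pass
    Y = sigmoid(H @ W2)
    losses.append(np.mean((Y - T) ** 2))
    dY = (Y - T) * Y * (1.0 - Y)           # output-layer delta
    dH = (dY @ W2.T) * H * (1.0 - H)       # back-propagated hidden delta
    W2 -= lr * (H.T @ dY) / len(X) * mask  # mask keeps connections exclusive
    W1 -= lr * (X.T @ dH) / len(X)
```

Because the masked entries start at zero and every update is multiplied by the mask, the cross-group connections stay exactly zero throughout training.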
Abstract: This paper proposes a re-modification of the minimum moment approach to resource leveling, i.e., a modified version of the traditional minimum moment method by Harris. The method is based on the critical path method. The approaches differ in the criterion used to select the activity to be shifted when leveling the resource histogram. In the traditional method, the improvement factor is computed first, for each activity and each possible day of shifting, in order to select the activity. In the modified method, the maximum value of the product of resource rate and free float is found first, and the improvement factor is then calculated only for the activity selected for shifting. In the proposed method, the activity to be shifted first is the one with the largest resource rate. The process is repeated over all remaining activities and their possible shifts to obtain the updated histogram. The proposed method significantly reduces the number of iterations and is easier to apply in manual computations.
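A hedged sketch of the leveling idea: the histogram "moment" is taken as the sum of squared daily resource totals, activities are considered in descending resource-rate order (the proposed selection criterion), and each is shifted within its free float to the position of least moment. The activity data are hypothetical, and the sketch ignores the network-logic updates (successor floats) a real CPM implementation would need.

```python
# (name, start_day, duration, resource_rate, free_float) -- hypothetical
activities = [
    ("A", 0, 3, 4, 2),
    ("B", 0, 2, 6, 3),
    ("C", 2, 4, 2, 1),
]
horizon = 10

def histogram(acts):
    h = [0] * horizon
    for _, s, d, r, _ in acts:
        for day in range(s, s + d):
            h[day] += r
    return h

def moment(acts):
    # minimum-moment criterion: sum of squared daily resource totals
    return sum(y * y for y in histogram(acts))

acts = list(activities)
# proposed selection criterion: consider activities in descending
# resource-rate order, shifting each within its free float to the
# position of least moment
for i, _ in sorted(enumerate(acts), key=lambda t: -t[1][3]):
    name, s, d, r, ff = acts[i]
    best = min(range(ff + 1),
               key=lambda sh: moment(acts[:i]
                                     + [(name, s + sh, d, r, ff)]
                                     + acts[i + 1:]))
    acts[i] = (name, s + best, d, r, ff)
```

Since a zero shift is always among the candidates, the moment of the updated histogram never exceeds that of the original schedule.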
Abstract: In this paper we introduce a novel kernel classifier based on an iterative shrinkage algorithm developed for compressive sensing. We adopt Bregman iteration with soft and hard shrinkage functions and a generalized hinge loss to solve the l1-norm minimization problem for classification. Our experiments on face recognition and digit classification, with SVM as the benchmark, show that our method achieves an error rate close to that of SVM but does not outperform it. We also found that soft shrinkage gives higher accuracy, and in some cases more sparsity, than hard shrinkage.
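The soft and hard shrinkage functions, and an iterative shrinkage loop of the kind the paper builds on, can be sketched as follows. This is a generic ISTA-style l1 solver on synthetic data, not the paper's Bregman/hinge-loss classifier; the problem size, regularization weight, and iteration count are assumptions.

```python
import numpy as np

def soft_shrink(x, lam):
    # soft thresholding: the proximal operator of lam*||x||_1
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_shrink(x, lam):
    # hard thresholding: zero out entries with magnitude <= lam
    return np.where(np.abs(x) > lam, x, 0.0)

# generic iterative-shrinkage (ISTA-style) loop for
# min 0.5*||Ax - b||^2 + lam*||x||_1 on a synthetic sparse problem
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 60))
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
b = A @ x_true

t = 1.0 / np.linalg.norm(A, 2) ** 2      # step size 1/||A||_2^2
lam = 0.1
x = np.zeros(60)
for _ in range(300):
    x = soft_shrink(x + t * (A.T @ (b - A @ x)), t * lam)
```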
Abstract: This paper presents the usefulness of the quasi-Newton iteration procedure within the BHHH algorithm for estimating the parameters of the conditional variance equation. Analytical maximization of the likelihood function using first and second derivatives is too complex when the variance is time-varying. The advantage of the BHHH algorithm over other optimization algorithms is that it requires no third derivatives yet has assured convergence. To simplify the optimization procedure, the BHHH algorithm approximates the matrix of second derivatives using the information identity. However, parameter estimation in a/symmetric GARCH(1,1) models assuming normally distributed returns is not straightforward, i.e., it is difficult to solve analytically. The maximum of the likelihood function can be found by iterating until no further increase is achieved. Because the solutions of numerical optimization are very sensitive to the initial values, starting parameters for the GARCH(1,1) model are defined; the number of iterations can be reduced by using starting values close to the global maximum. The optimization procedure is illustrated by modeling the daily volatility of the most liquid stocks on the Croatian capital market: Podravka (food industry), Petrokemija (fertilizer industry) and Ericsson Nikola Tesla (information and communications industry).
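A minimal sketch of the two ingredients above, the GARCH(1,1) Gaussian log-likelihood and a BHHH search direction built from per-observation scores, is given below. The starting values, simulated data, and finite-difference scores are illustrative assumptions; a real implementation would use analytical scores and a line search.

```python
import numpy as np

def garch11_loglik(params, r):
    """Per-observation Gaussian log-likelihoods of a GARCH(1,1):
    sigma2[t] = omega + alpha*r[t-1]**2 + beta*sigma2[t-1]."""
    omega, alpha, beta = params
    sigma2 = np.empty_like(r)
    sigma2[0] = np.var(r)                        # common starting value
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t-1] ** 2 + beta * sigma2[t-1]
    return -0.5 * (np.log(2.0 * np.pi * sigma2) + r ** 2 / sigma2)

def bhhh_direction(params, r, eps=1e-6):
    """One BHHH search direction: per the information identity, the
    Hessian is approximated by the sum of outer products of the
    per-observation scores (taken here by finite differences)."""
    p = np.asarray(params, dtype=float)
    base = garch11_loglik(p, r)
    G = np.empty((len(r), len(p)))
    for j in range(len(p)):
        q = p.copy()
        q[j] += eps
        G[:, j] = (garch11_loglik(q, r) - base) / eps
    g = G.sum(axis=0)                            # total score
    return np.linalg.solve(G.T @ G, g)           # (sum g_t g_t')^{-1} g

rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.01, 500)                   # stand-in for daily returns
step = bhhh_direction([1e-5, 0.1, 0.8], r)
```

The iteration would repeat such steps, with a step-length rule, until the likelihood no longer increases.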
Abstract: In this paper, a new pseudo affine projection (AP)
algorithm based on Gauss-Seidel (GS) iterations is proposed for
acoustic echo cancellation (AEC). It is shown that the algorithm is
robust against near-end signal variations (including double-talk).
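The Gauss-Seidel iteration underlying the algorithm can be sketched as a standard linear-system solver; this is plain GS on a generic system, not the pseudo-AP update itself.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, iters=50):
    # classic Gauss-Seidel sweep: each component update immediately
    # uses the components already updated in the current sweep
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]        # off-diagonal contribution
            x[i] = (b[i] - s) / A[i, i]
    return x
```

Convergence is guaranteed, for example, when A is strictly diagonally dominant or symmetric positive definite.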
Abstract: This paper presents an iterative algorithm for finding an inverse kinematic solution of a 5-DOF robot, designed to minimize the number of iterations. Since a 5-DOF robot cannot realize a full tool orientation, only the z-direction of the tool is satisfied, while the tool rotation is determined by the kinematic constraint. This work therefore describes how to specify the tool direction while leaving the tool rotation free. Simulation results show that the algorithm works effectively: using the proposed iteration algorithm, the inverse kinematics error converged to zero rapidly, within 5 iterations. The algorithm was applied to a real welding robot and verified through various practical tasks.
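A generic iterative (damped least-squares) inverse-kinematics loop of the kind described, shown for a planar two-link arm rather than the paper's 5-DOF welding robot; the link lengths, damping factor, and target are assumptions.

```python
import numpy as np

L1, L2 = 1.0, 1.0                       # link lengths (assumed)

def fk(q):
    # forward kinematics of a planar two-link arm
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def ik(target, q0, iters=20, damping=1e-3):
    # damped least-squares iteration: dq = (J'J + mu*I)^{-1} J' e
    target = np.asarray(target, dtype=float)
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        e = target - fk(q)
        if np.linalg.norm(e) < 1e-8:
            break
        J = jacobian(q)
        q += np.linalg.solve(J.T @ J + damping * np.eye(2), J.T @ e)
    return q
```

The damping term keeps the step bounded near kinematic singularities at a small cost in convergence speed.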
Abstract: The focal spot of a high-intensity focused ultrasound transducer is small, so heating a large target volume requires multiple treatment spots. If the power of each treatment spot is fixed, thermal diffusion can cause insufficient heating of the initial spots and over-heating of later ones. Hence, to produce a uniformly heated volume, the energy delivered to each treatment spot should be properly adjusted. In this study, we propose an iterative extrapolation technique to adjust the ultrasound energy required at each treatment spot. Three different scanning pathways were used to evaluate the performance of the technique. The results indicate that the proposed technique yields a uniformly heated volume.
Abstract: This paper describes the results of an extensive study and comparison of the popular hash functions SHA-1, SHA-256, RIPEMD-160 and RIPEMD-320 with JERIM-320, a 320-bit hash function. The compression functions of hash functions such as SHA-1 and SHA-256 are designed using serial successive iteration, whereas those of RIPEMD-160 and RIPEMD-320 use two parallel lines of message processing. JERIM-320 uses four parallel lines of message processing, resulting in a higher level of security than the other hash functions at comparable speed and memory requirements. The methods were evaluated both through practical implementation and through step computation methods. JERIM-320 proves to be secure and ensures message integrity to a higher degree. The focus of this work is to establish JERIM-320 as an alternative to present-day hash functions for fast-growing internet applications.
Abstract: A novel calibration approach that reduces ASM2d parameter subsets and decreases model complexity is presented. The approach does not require high computational effort: it reduces the number of modeling parameters needed for ASM calibration by employing a combined sensitivity and iteration methodology. Parameter sensitivity is the crucial factor, and the iteration methodology enables refinement of the simulated parameter values. During the iteration process, parameter values are determined in descending order of their sensitivities, and the number of iterations required equals the number of model parameters in the sensitivity ranking. The approach was applied to the ASM2d model to evaluate EBPR phosphorus removal, and it was successful. The simulation yielded the calibrated parameters YPAO, YPO4, YPHA, qPHA, qPP, μPAO, bPAO, bPP, bPHA, KPS, YA, μAUT, bAUT, KO2 AUT, and KNH4 AUT, whose values corresponded to the available experimental data.
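The sensitivity-then-iteration procedure can be sketched generically: parameters are ranked by finite-difference sensitivity and then refined one per iteration in descending order. The toy model, target data, and grid search below are illustrative stand-ins for an ASM2d simulation.

```python
import numpy as np

def model(p):
    # toy stand-in for an ASM2d simulation output
    return p[0] ** 2 + 10.0 * p[1] + 0.1 * np.sin(p[2])

target = model(np.array([1.0, 2.0, 0.5]))       # "measured" data

p = np.array([0.5, 1.0, 0.0])                   # initial parameter guess
eps = 1e-4
sens = np.abs([(model(p + eps * np.eye(3)[i]) - model(p)) / eps
               for i in range(3)])
order = np.argsort(-sens)                       # descending sensitivity

for i in order:                                 # one iteration per parameter
    grid = p[i] + np.linspace(-1.0, 1.0, 201)
    errs = []
    for g in grid:
        q = p.copy()
        q[i] = g
        errs.append(abs(model(q) - target))
    p[i] = grid[int(np.argmin(errs))]
```

Refining the most sensitive parameter first removes most of the misfit early, so the later, less sensitive parameters only make small corrections.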
Abstract: The load flow study is of paramount importance in a power system. It reveals the electrical performance and the real and reactive power flows for a specified condition when the system is operating in steady state. This paper gives an overview of the different techniques used for load flow studies under different specified conditions.
Abstract: Recently, the MEG iterative scheme has been shown to accelerate the convergence rate in solving systems of linear equations generated from approximation equations of boundary value problems. Based on the same scheme, this paper investigates the capability of a family of four-point block iterative methods with a weighted parameter ω, namely 4 Point-EGSOR, 4 Point-EDGSOR, and 4 Point-MEGSOR, in solving two-dimensional elliptic partial differential equations using a second-order finite difference approximation. The formulation and implementation of the three four-point block iterative methods are also presented. Finally, the experimental results show that the 4 Point-MEGSOR iterative scheme is superior to the existing four-point block schemes.
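For illustration, a plain point-SOR sweep with weight ω on the 5-point second-order finite-difference Poisson problem is sketched below; the block EGSOR/EDGSOR/MEGSOR variants in the paper update groups of four points at a time, which this simple stand-in does not do. The grid size, weight, and iteration count are assumptions.

```python
import numpy as np

def sor_poisson(f, omega=1.8, iters=500):
    """Point-SOR with weight omega for -(u_xx + u_yy) = f on the unit
    square, zero Dirichlet boundaries, 5-point second-order finite
    differences on an n-by-n interior grid."""
    n = f.shape[0]
    h = 1.0 / (n + 1)
    u = np.zeros((n + 2, n + 2))
    for _ in range(iters):
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                             + h * h * f[i-1, j-1])   # Gauss-Seidel value
                u[i, j] += omega * (gs - u[i, j])     # weighted correction
    return u

u = sor_poisson(np.ones((15, 15)))
```

With 0 < ω < 2 the sweep converges for this problem; a well-chosen ω sharply reduces the iteration count, which is the role the weighted parameter plays in the block schemes as well.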
Abstract: An edge-based local search algorithm, called ELS, is proposed for the maximum clique problem (MCP), a well-known combinatorial optimization problem. ELS is a two-phase local search method that effectively finds near-optimal solutions for the MCP. A vertex parameter 'support', defined in ELS, greatly reduces the number of random selections among vertices as well as the number of iterations and the running time. Computational results on the BHOSLIB and DIMACS benchmark graphs indicate that ELS achieves state-of-the-art performance for the maximum clique problem with reasonable average running times.
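As a loose illustration of support-guided selection (not the paper's ELS), a basic greedy clique expansion that always adds the candidate adjacent to the most remaining candidates; the graph below is hypothetical.

```python
# basic greedy clique expansion: repeatedly add the candidate vertex
# adjacent to the most remaining candidates (a crude analogue of a
# vertex 'support' measure)
def greedy_clique(adj, start):
    clique = {start}
    cand = set(adj[start])
    while cand:
        v = max(cand, key=lambda u: len(cand & adj[u]))
        clique.add(v)
        cand &= adj[v]          # keep only vertices adjacent to all chosen
    return clique

# hypothetical undirected graph as an adjacency dict
adj = {0: {1, 2, 3, 4}, 1: {0, 2, 3}, 2: {0, 1, 3},
       3: {0, 1, 2}, 4: {0}}
```

A local search such as ELS would go further, swapping and dropping vertices to escape the local optima where greedy expansion stops.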
Abstract: In this paper, we present a generalized alternating two-stage method in which the inner iterations are carried out by a generalized alternating method, and we give convergence results for solving nonsingular linear systems whose coefficient matrix is a monotone matrix or an H-matrix.
Abstract: A parallel block method based on Backward Differentiation Formulas (BDF) is developed for the parallel solution of stiff Ordinary Differential Equations (ODEs). Most common methods for solving stiff systems of ODEs are based on implicit formulae and use Newton iteration, which requires the repeated solution of systems of linear equations with coefficient matrix I - hβJ, where J is the Jacobian matrix of the problem. In this paper, the matrix operations are parallelized in order to reduce the cost of the iterations. Numerical results compare the speedup and efficiency of the parallel algorithm with those of the sequential algorithm.
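The Newton iteration at the heart of such methods can be sketched with backward Euler, the simplest BDF, where each Newton step solves a linear system with matrix I - hJ (i.e., β = 1); the stiff test problem is an assumption.

```python
import numpy as np

def implicit_euler(f, jac, y0, h, steps):
    """Backward Euler (the simplest BDF) solved by Newton iteration;
    each Newton step solves a linear system with matrix I - h*J
    (the I - h*beta*J above with beta = 1)."""
    y = np.array(y0, dtype=float)
    out = [y.copy()]
    for _ in range(steps):
        ynew = y.copy()
        for _ in range(10):                        # Newton iterations
            F = ynew - y - h * f(ynew)
            J = np.eye(len(y)) - h * jac(ynew)     # I - h*J
            delta = np.linalg.solve(J, -F)
            ynew = ynew + delta
            if np.linalg.norm(delta) < 1e-12:
                break
        y = ynew
        out.append(y.copy())
    return np.array(out)

# stiff scalar test problem y' = -50*y, y(0) = 1: backward Euler stays
# stable even though h*|lambda| = 5 would blow up explicit Euler
f = lambda y: -50.0 * y
jac = lambda y: np.array([[-50.0]])
ys = implicit_euler(f, jac, [1.0], h=0.1, steps=10)
```

It is these repeated linear solves, with a large Jacobian in realistic systems, that the paper parallelizes.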
Abstract: Flow movement in unsaturated soil can be expressed by a partial differential equation known as the Richards equation. The objective of this study is to find an appropriate implicit numerical solution for the head-based Richards equation. Several well-known finite difference schemes (fully implicit, Crank-Nicolson and Runge-Kutta) are utilized in this study. In addition, the effects of different approximations of the moisture capacity function, convergence criteria and time-stepping methods are evaluated. Two different infiltration problems, involving vertical water flow in a wet and in a very dry soil, were solved to investigate the performance of the schemes. The numerical solutions of the two problems were compared using four evaluation criteria, and the comparisons showed that the fully implicit scheme outperforms the other schemes. Furthermore, using the standard chord-slope approximation of the moisture capacity function, automatic time stepping, and the difference between two successive iterations as the convergence criterion in the fully implicit scheme leads to better and more reliable simulations of fluid movement in different unsaturated soils.
Abstract: This paper presents a novel approach for tuning a unified power flow controller (UPFC) based damping controller in order to enhance the damping of low-frequency power system oscillations. The damping controller design is formulated as an optimization problem with an eigenvalue-based objective function, which is solved using iteration particle swarm optimization (IPSO). The effectiveness of the proposed controller is demonstrated through eigenvalue analysis and nonlinear time-domain simulation studies under a wide range of loading conditions. The simulation study shows that the controller designed by IPSO performs better than one designed by CPSO in finding the solution. Moreover, system performance analysis under different operating conditions shows that the δE-based controller is superior to the mB-based controller.
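For reference, a minimal standard particle swarm optimization loop of the kind IPSO extends, minimizing a toy quadratic in place of the eigenvalue-based objective; the swarm size and coefficients are conventional assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    # toy quadratic stand-in for the eigenvalue-based objective function
    return np.sum((x - 1.5) ** 2, axis=-1)

n, dim, iters = 20, 2, 100
pos = rng.uniform(-5.0, 5.0, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()                         # personal best positions
pbest_val = objective(pos)
gbest = pbest[np.argmin(pbest_val)].copy() # global best position

w, c1, c2 = 0.7, 1.5, 1.5                  # inertia and acceleration terms
for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = objective(pos)
    better = val < pbest_val
    pbest[better] = pos[better]
    pbest_val[better] = val[better]
    gbest = pbest[np.argmin(pbest_val)].copy()
```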
Abstract: Software effort estimation is the process of predicting the most realistic amount of effort required to develop or maintain software, based on incomplete, uncertain and/or noisy input. Effort estimates may be used as input to project plans, iteration plans and budgets. Various models, such as the Halstead, Walston-Felix, Bailey-Basili, Doty and GA-based models, have already been used to estimate software effort for projects. In this study, statistical models, a Fuzzy-GA model and Neuro-Fuzzy (NF) inference systems are applied to estimate software effort. The performance of the developed models was tested on NASA software project datasets, and the results were compared with the Halstead, Walston-Felix, Bailey-Basili, Doty and genetic-algorithm-based models reported in the literature. The results show that the NF model achieves the lowest MMRE and RMSE values, outperforming the Fuzzy-GA hybrid inference system and the other existing models used for effort prediction.
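The classical size-based models used as baselines above can be written down directly, along with the MMRE accuracy measure; the coefficients are the commonly cited ones (effort in person-months, size in KLOC) and may vary slightly across sources.

```python
# classical size-based effort models (effort in person-months, size in
# KLOC); coefficients are the commonly cited ones and may vary slightly
# across sources
def halstead(kloc):       return 0.7 * kloc ** 1.50
def walston_felix(kloc):  return 5.2 * kloc ** 0.91
def bailey_basili(kloc):  return 5.5 + 0.73 * kloc ** 1.16
def doty(kloc):           return 5.288 * kloc ** 1.047   # for KLOC > 9

def mmre(actual, predicted):
    # Mean Magnitude of Relative Error, one of the accuracy measures used
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)
```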
Abstract: The performance of adaptive beamforming degrades substantially in the presence of steering vector mismatches. This degradation is especially severe in the near field, because the three-dimensional source location is more difficult to estimate than the two-dimensional direction of arrival in the far-field case. As a solution, a novel approach to near-field robust adaptive beamforming (RABF) is proposed in this paper. It is a natural extension of traditional far-field RABF and belongs to the class of diagonal loading approaches, with the loading level determined by worst-case performance optimization. However, unlike methods that solve for the optimal loading by iteration, it offers a simple closed-form solution after some approximations, so that the optimal weight vector can also be expressed in closed form. Besides its simplicity and low computational cost, the proposed approach reveals how different factors affect the optimal loading and the weight vector. Its excellent performance in the near field is confirmed through a number of numerical examples.
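The diagonal-loading step itself is simple to sketch in numpy; here the loading level is a fixed illustrative constant, whereas the paper determines it in closed form from worst-case performance optimization.

```python
import numpy as np

def dl_beamformer(R, a, loading):
    # diagonal loading: regularize the sample covariance, then form the
    # MVDR-style weight with a distortionless response toward steering
    # vector a
    Rl = R + loading * np.eye(R.shape[0])
    w = np.linalg.solve(Rl, a)
    return w / (a.conj() @ w)

# toy demo: white-noise covariance and a unit-gain steering vector
R = np.eye(4)
a = np.ones(4)
w = dl_beamformer(R, a, loading=0.1)
```

The normalization enforces the distortionless constraint a^H w = 1 regardless of the loading level.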
Abstract: Iterative learning control (ILC) aims to achieve zero tracking error of a specific command. This is accomplished by iteratively adjusting the command given to a feedback control system, based on the tracking error observed in the previous iteration. One would like the iterations to converge to zero tracking error in spite of any error present in the model used to design the learning law. First, this need for stability robustness is discussed, followed by the need for robustness of the property that the transients are well behaved. Methods of producing the needed robustness to parameter variations and to singular perturbations are presented. Then a method involving reverse-time runs is given that lets real-world behavior produce the ILC gains in such a way as to eliminate the need for a mathematical model. Since the real world is producing the gains, there is no issue of model error. Provided the world behaves linearly, the approach gives an ILC law with both stability robustness and good transient robustness, without the need to generate a model.
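The basic ILC update the abstract builds on, u_{k+1} = u_k + L e_k, can be sketched on an illustrative first-order discrete plant; the plant, learning gain, and reference are assumptions, not the paper's model-free scheme.

```python
import numpy as np

def run_plant(u, a=0.5, b=1.0):
    # illustrative first-order discrete plant x[t+1] = a*x[t] + b*u[t]
    y = np.zeros_like(u)
    x = 0.0
    for t in range(len(u)):
        x = a * x + b * u[t]
        y[t] = x
    return y

ref = np.sin(np.linspace(0.0, 2.0 * np.pi, 50))   # command to be tracked
u = np.zeros_like(ref)
L = 0.5                                           # learning gain
errs = []
for _ in range(30):                               # learning iterations
    e = ref - run_plant(u)
    errs.append(np.max(np.abs(e)))
    u = u + L * e                                 # u_{k+1} = u_k + L*e_k
```

Each iteration reruns the same task with an updated command, so the tracking error shrinks from repetition to repetition rather than within a single run.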