Abstract: This paper presents a heuristic to solve the large-size 0-1 multi-constrained knapsack problem (01MKP), which is NP-hard. Many researchers have used heuristic operators to identify the redundant constraints of a linear programming problem before applying the regular procedure to solve it. We use the intercept matrix to identify the zero-valued variables of the 01MKP, which are known as redundant variables. In this heuristic, first the dominance property of the intercept matrix of constraints is exploited to reduce the search space for finding optimal or near-optimal solutions of the 01MKP; second, the solution is improved by using the pseudo-utility ratio based on the surrogate constraint of the 01MKP. The heuristic is tested on benchmark problems of sizes up to 2500 taken from the literature, and the results are compared with the optimum solutions. The space and computational complexity of solving the 01MKP using this approach are also presented. The encouraging results, especially for relatively large test problems, indicate that this heuristic can successfully be used for finding good solutions to highly constrained NP-hard problems.
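The pseudo-utility idea mentioned above can be illustrated with a minimal greedy sketch. The surrogate multipliers chosen here (the inverse capacities) are an illustrative assumption, and the sketch omits the paper's intercept-matrix reduction step entirely.

```python
# Hedged sketch of a greedy 0-1 multi-constrained knapsack heuristic
# driven by pseudo-utility ratios from a surrogate constraint.
# Assumption: surrogate multipliers are taken as 1/b_i (illustrative,
# not the multipliers used in the paper).

def greedy_01mkp(profits, weights, capacities):
    """profits: n profits; weights: m x n matrix; capacities: m capacities."""
    n, m = len(profits), len(capacities)
    mult = [1.0 / b for b in capacities]  # surrogate multiplier per constraint

    def ratio(j):
        # pseudo-utility: profit over surrogate-weighted resource usage
        denom = sum(mult[i] * weights[i][j] for i in range(m))
        return profits[j] / denom if denom > 0 else float("inf")

    order = sorted(range(n), key=ratio, reverse=True)
    used = [0.0] * m
    chosen = []
    for j in order:
        # accept item j only if it fits in every constraint
        if all(used[i] + weights[i][j] <= capacities[i] for i in range(m)):
            for i in range(m):
                used[i] += weights[i][j]
            chosen.append(j)
    return sorted(chosen), sum(profits[j] for j in chosen)
```

Sorting items once by pseudo-utility makes the selection loop O(n·m) after the O(n log n) sort, which is what keeps such heuristics usable at the problem sizes quoted above.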
Abstract: In this work, the location of the sliding-mesh interface in a stirred vessel with a concave impeller is studied by computational fluid dynamics (CFD). To model the rotating impeller, the sliding mesh (SM) technique was used, and the standard k-ε model was selected for turbulence closure. The mean tangential, radial, and axial velocities, as well as the turbulent kinetic energy (k) and turbulent dissipation rate (ε), were investigated at various points of the tank. The results show the sensitivity of the simulation to the location of the interface: an interface radius of 7 to 10 cm in a vessel with the given characteristics increases the accuracy of the simulation.
Abstract: An iterative definition of any n-variable mean function is given in this article, which iteratively uses the two-variable form of the corresponding mean function. This extension method avoids recursion, which is an important improvement over certain recursive formulas given earlier by Ando-Li-Mathias and Petz-Temesi. Furthermore, it is conjectured here that this iterative algorithm coincides with the solution of the Riemannian centroid minimization problem. Simulations are presented to compare the convergence rates of the different algorithms in the literature, namely the gradient and Newton methods for the Riemannian centroid computation.
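The iterative pairwise-extension idea can be illustrated in its simplest scalar form: repeatedly replacing cyclically adjacent pairs by their two-variable mean drives all n values to the common n-variable mean. This is a toy sketch for scalar means only, not the matrix-mean algorithm the abstract concerns.

```python
# Hedged scalar illustration: extending a two-variable mean to n
# variables by iterated pairwise application. Each sweep replaces
# x_k with mean(x_k, x_{k+1}) cyclically; the sum (or, for the
# geometric mean, the product) is preserved, so the iteration
# converges to the n-variable mean.

def iterative_mean(values, two_mean=lambda a, b: (a + b) / 2.0, tol=1e-12):
    xs = list(values)
    n = len(xs)
    while max(xs) - min(xs) > tol:
        # apply the two-variable mean to each cyclically adjacent pair
        xs = [two_mean(xs[k], xs[(k + 1) % n]) for k in range(n)]
    return xs[0]
```

With the arithmetic two-variable mean the limit is the arithmetic mean; substituting `lambda a, b: (a * b) ** 0.5` yields the geometric mean of positive inputs, since taking logarithms reduces it to the arithmetic case.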
Abstract: The present paper considers the steady free convection boundary layer flow of a viscoelastic fluid over a solid sphere with Newtonian heating. The boundary layer equations are of an order higher than those for a Newtonian (viscous) fluid, and the adherence boundary conditions are insufficient to determine the solution of these equations completely. Thus, an extra boundary condition must be augmented to perform the numerical computation. The governing boundary layer equations are first transformed into non-dimensional form by using a suitable dimensionless group and then solved by an implicit finite difference scheme. The results are displayed graphically to illustrate the influence of the viscoelastic parameter K and the Prandtl number Pr on the skin friction, heat transfer, velocity profiles, and temperature profiles. The present results are compared with published results and found to agree very well.
Abstract: This paper presents an effective method for recognizing traffic lights in the daytime. First, the Potential Traffic Lights Detector (PTLD) uses the full color information of the YCbCr image and produces a binary image for each of the green and red traffic lights. After the PTLD step, a Shape Filter (SF) is used to remove noise such as traffic signs, street trees, vehicles, and buildings. The noise-removal criteria are based on properties of the blobs in the binary image: length, area, bounding-box area, etc. Finally, after an intermediate association step whose goal is to define relevant candidate regions from the previously detected traffic lights, an Adaptive Multi-class Classifier (AMC) is executed. The classification method uses Haar-like features and the AdaBoost algorithm. For the experiments, the method was implemented on an Intel Core CPU at 2.80 GHz with 4 GB RAM and tested on urban and rural roads. In these tests, our method was compared with standard object-recognition learning processes and reached a detection rate of up to 94%, which is better than the results achieved with cascade classifiers. The computation time of the proposed method is 15 ms.
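The color-thresholding stage can be sketched as follows: convert RGB pixels to YCbCr and build binary masks for red and green lights. The BT.601 conversion is standard, but the threshold values below are illustrative assumptions, not the ones used in the paper.

```python
# Hedged sketch of a YCbCr color-thresholding detector stage.
# Assumption: the chroma thresholds (cr > 160 for red, cb and cr
# below 110 for green, luma y > 60 for brightness) are illustrative.

def rgb_to_ycbcr(r, g, b):
    # ITU-R BT.601 full-range RGB -> YCbCr conversion
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def light_masks(image):
    """image: rows of (r, g, b) tuples -> (red_mask, green_mask) of 0/1."""
    red, green = [], []
    for row in image:
        red_row, green_row = [], []
        for (r, g, b) in row:
            y, cb, cr = rgb_to_ycbcr(r, g, b)
            red_row.append(1 if cr > 160 and y > 60 else 0)
            green_row.append(1 if cb < 110 and cr < 110 and y > 60 else 0)
        red.append(red_row)
        green.append(green_row)
    return red, green
```

The binary masks produced here are what the subsequent shape filter would prune by blob length, area, and bounding-box area.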
Abstract: This paper presents the modeling and optimization of two NP-hard problems in flexible manufacturing systems (FMS): the part type selection problem and the loading problem. Due to the complexity and extent of the problems, the work was split into two parts. The first part discussed the modeling of the problems and showed how real-coded genetic algorithms (RCGA) can be applied to solve them. This second part discusses the effectiveness of the RCGA, which uses an array of real numbers as the chromosome representation. The novel proposed chromosome representation produces only feasible solutions, which minimizes the computational time otherwise needed by a GA to push its population toward the feasible search space or to repair infeasible chromosomes. The proposed RCGA improves FMS performance by considering two objectives: maximizing system throughput and maintaining the balance of the system (minimizing system unbalance). The resulting objective values are compared to the optimum values produced by the branch-and-bound method. The experiments show that the proposed RCGA reaches near-optimum solutions in a reasonable amount of time.
Abstract: Smart Dust particles are small smart materials used for generating weather maps. We investigate the question of the optimal number of Smart Dust particles necessary for generating precise, computationally feasible, and cost-effective 3-D weather maps. We also give an optimal matching algorithm for the generalized scenario in which there are N Smart Dust particles and M ground receivers.
Abstract: A comparative analysis of Wald's and Bayes-type sequential methods for testing hypotheses is offered. The merits of the new sequential test are: universality, which consists in optimality (under the given criteria) and uniformity of the decision-making regions for any number of hypotheses; simplicity, convenience, and uniformity of the algorithms realizing them; reliability of the obtained results; and the possibility of keeping the error probabilities at desired values. Computation results for concrete examples are given which confirm the above-stated characteristics of the new method and characterize the considered methods relative to each other.
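The classical baseline in such comparisons, Wald's sequential probability ratio test (SPRT), can be sketched briefly for two simple hypotheses on a Bernoulli parameter. This is the textbook procedure, not the new Bayes-type test the abstract proposes.

```python
import math

# Hedged sketch of Wald's SPRT for H0: p = p0 vs H1: p = p1 on
# Bernoulli observations, with Wald's approximate stopping bounds
# log(beta/(1-alpha)) and log((1-beta)/alpha).

def sprt(observations, p0, p1, alpha=0.05, beta=0.05):
    """Return 'H0', 'H1', or 'continue' after processing the observations."""
    a = math.log(beta / (1 - alpha))   # accept-H0 boundary
    b = math.log((1 - beta) / alpha)   # accept-H1 boundary
    llr = 0.0                          # cumulative log-likelihood ratio
    for x in observations:
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr <= a:
            return "H0"
        if llr >= b:
            return "H1"
    return "continue"
```

The defining property, and the point of comparison for any newer sequential test, is that the decision is reached with a data-dependent (and on average smaller) sample size rather than a fixed one.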
Abstract: In many buildings we rely on large footings to offer
structural stability. Designers often compensate for the lack of
knowledge available with regard to foundation-soil interaction by
furnishing structures with overly large footings. This may lead to a
significant increase in building expenditures if many large
foundations are present. This paper describes the interface material
law that governs the behavior along the contact surface of adjacent
materials, and the behavior of a large foundation under ultimate limit
loading. A case study is chosen that represents a common
foundation-soil system frequently used in general practice and
therefore relevant to other structures. Investigations include compressing versus uplifting wind forces, alterations to the foundation size and subgrade composition, the role of slab stiffness and presence, and the effect of commonly used structural joints and connections. These investigations aim to provide the reader with an objective design approach that efficiently prevents structural instability.
Abstract: In this paper, we propose a Haar wavelet quasi-linearization method to solve the well-known Blasius equation. The method is based on the uniform Haar wavelet operational matrix defined over the interval [0, 1]. We propose a transformation that converts the problem to a fixed computational domain. The Blasius equation arises in various boundary layer problems of hydrodynamics and in the fluid mechanics of laminar viscous flows. Quasi-linearization is an iterative process, but the proposed technique gives excellent numerical results for nonlinear differential equations without any iteration in selecting the collocation points of the Haar wavelets. We have solved the Blasius equation for 1 ≤ α ≤ 2, and the numerical results are compared with the results available in the literature. We conclude that the proposed method is a promising tool for solving the well-known nonlinear Blasius equation.
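The conventional reference solution against which Blasius results are usually checked is a shooting method. The following is a hedged sketch of that classical baseline for the normalization f''' + (1/2) f f'' = 0 with f(0) = f'(0) = 0 and f'(∞) = 1, truncating the semi-infinite domain at an assumed η = 10; it is not the Haar wavelet quasi-linearization method of the paper.

```python
# Hedged sketch: shooting-method baseline for the Blasius equation
# f''' + 0.5 * f * f'' = 0, bisecting on the wall shear f''(0) so
# that f'(eta_max) = 1. Known reference value: f''(0) ~ 0.33206.

def blasius_shoot(s, eta_max=10.0, h=0.01):
    """Integrate with classical RK4 from f''(0) = s; return f'(eta_max)."""
    def rhs(y):
        f, fp, fpp = y
        return (fp, fpp, -0.5 * f * fpp)
    y = (0.0, 0.0, s)
    for _ in range(int(eta_max / h)):
        k1 = rhs(y)
        k2 = rhs(tuple(y[i] + 0.5 * h * k1[i] for i in range(3)))
        k3 = rhs(tuple(y[i] + 0.5 * h * k2[i] for i in range(3)))
        k4 = rhs(tuple(y[i] + h * k3[i] for i in range(3)))
        y = tuple(y[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                  for i in range(3))
    return y[1]  # f'(eta_max)

def blasius_wall_shear(lo=0.1, hi=1.0, tol=1e-6):
    """Bisect on f''(0): f'(eta_max) is monotone increasing in f''(0)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if blasius_shoot(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Monotonicity of f'(∞) in f''(0) follows from the similarity scaling f(η) → a·f(aη), which is what makes simple bisection sufficient here.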
Abstract: A numerical analysis of the aerodynamic characteristics of a WIG (wing-in-ground effect) craft with a highly cambered wing of aspect ratio one is performed to predict the ground effect for the cases with and without a lower-extension endplate. The analysis covers angles of attack from 0 to 10 deg and ground clearances from 5% of the chord to 50%. Due to the ground effect, the lift is increased by the rise in pressure on the lower surface, and the influence of the wing-tip vortices is decreased. These two significant effects improve the lift-drag ratio. In addition, the endplate prevents the high-pressure air from escaping from the air cushion at the wing tip, which increases the lift and the lift-drag ratio further. It is found from the visualization of the computational results that two wing-tip vortices are generated from each surface of the wing tip; their strength is weak, and they diminish rapidly. Irodov's criteria are also evaluated to investigate static height stability. The comparison of Irodov's criteria shows that the endplate improves the variation of static height stability with respect to pitch angles and heights. As a result, the endplate can simultaneously improve the aerodynamic characteristics and the static height stability of wings in ground effect.
Abstract: A data cutting and sorting method (DCSM) is proposed to optimize the performance of data mining. DCSM reduces the calculation time by discarding redundant data during the data mining process. In addition, DCSM reduces the number of computational units by splitting the database and sorting the data by support counts. When searching for relationships between metabolic syndrome and lifestyle in the health examination database of an electronics manufacturing company, DCSM demonstrates higher search efficiency than the traditional Apriori algorithm in tests with different support counts.
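The Apriori baseline that DCSM is compared against can be sketched in its textbook form: generate candidate itemsets level by level and keep those meeting the minimum support count. This sketch omits the subset-pruning refinement and is not the proposed DCSM.

```python
from itertools import combinations

# Hedged sketch of classical Apriori frequent-itemset mining.
# Candidates at level k+1 are joined from frequent k-itemsets; the
# support-counting pass over all transactions is the cost that
# methods such as DCSM aim to reduce.

def apriori(transactions, min_support):
    tx = [frozenset(t) for t in transactions]
    items = sorted({i for t in tx for i in t})
    frequent = {}
    level = [frozenset([i]) for i in items]
    k = 1
    while level:
        # count support of each candidate and prune infrequent ones
        counts = {c: sum(1 for t in tx if c <= t) for c in level}
        kept = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(kept)
        k += 1
        # join step: build (k+1)-itemset candidates from survivors
        level = list({a | b for a, b in combinations(list(kept), 2)
                      if len(a | b) == k})
    return frequent
```

Each level requires a full scan of the transaction list, which is why splitting the database and ordering items by support count can shrink the work substantially.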
Abstract: In this paper, we present a comparative study of two computer vision systems for object recognition and tracking. The algorithms follow two different approaches based on regions, constituted by sets of pixels, which parameterize objects in shot sequences. For image segmentation and object detection, the FCM technique is used; the overlap between cluster distributions is minimized by the use of a suitable color space (other than RGB). The first technique takes into account a priori probabilities governing the computation of the various clusters to track objects. A Parzen kernel method is described that allows identifying the players in each frame; we also show the importance of the search for the standard deviation value of the Gaussian probability density function. Region matching is carried out by an algorithm that operates on the Mahalanobis distance between region descriptors in two subsequent frames and uses singular value decomposition to compute a set of correspondences satisfying both the principle of proximity and the principle of exclusion.
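The Mahalanobis matching criterion between region descriptors of subsequent frames can be sketched minimally. A diagonal covariance is assumed for simplicity, and the greedy matching below enforces only proximity; the SVD-based correspondence step that also enforces exclusion is not reproduced here.

```python
import math

# Hedged sketch of region matching by Mahalanobis distance between
# region descriptors of two subsequent frames. Assumption: the
# descriptor covariance is diagonal, given as per-feature variances.

def mahalanobis(d1, d2, variances):
    return math.sqrt(sum((a - b) ** 2 / v
                         for a, b, v in zip(d1, d2, variances)))

def match_regions(frame_a, frame_b, variances):
    """For each region of frame_a, index of the nearest region in frame_b
    (greedy: proximity only, no exclusion constraint)."""
    return [min(range(len(frame_b)),
                key=lambda j: mahalanobis(d, frame_b[j], variances))
            for d in frame_a]
```

Normalizing each feature by its variance is what makes descriptors with differently scaled components (position, area, color moments) comparable in one distance.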
Abstract: When reconstructing a scenario, it is necessary to know the structure of the elements present in the scene in order to interpret it. In this work we link 3D scene reconstruction to evolutionary algorithms through stereo vision theory. We consider stereo vision as a method that reconstructs a scene using only a pair of images of the scene and some computation. Through several images of a scene, captured from different positions, stereo vision can give us an idea of the three-dimensional characteristics of the world. Stereo vision usually requires two cameras, in analogy to the mammalian vision system. In this work we employ only one camera, which is translated along a path, capturing images at fixed intervals. As we cannot perform all the computations required for an exhaustive reconstruction, we employ an evolutionary algorithm to partially reconstruct the scene in real time. The algorithm employed is the fly algorithm, which uses "flies" to reconstruct the principal characteristics of the world following certain evolutionary rules.
Abstract: A supersonic hydrogen-air cylindrical mixing layer is numerically analyzed to investigate the effect of inlet swirl on the ignition time delay in scramjets. Combustion is treated using detailed chemical kinetics. The one-equation turbulence model of Spalart and Allmaras is chosen to study the problem, and the advection upstream splitting method is used as the computational scheme. The results show that swirling both the fuel and oxidizer streams may drastically decrease the ignition distance in supersonic combustion, unlike swirl in the fuel stream alone, which has no helpful effect.
Abstract: The paper presents a simple and accurate formula developed for the conduction angle (δ) of a single-phase half-wave or full-wave controlled rectifier with an RL load. The formula can also be used for calculating the conduction angle (δ) of an A.C. voltage regulator with an inductive load under discontinuous current mode. The simulation results show that the conduction angle calculated from the developed formula agrees very well with that obtained from the exact solution arrived at by the iterative method. Applying the developed formula reduces the computational time as well as the time for manual classroom calculation. In addition, the proposed formula is attractive for real-time implementations.
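The exact iterative reference solution that such a closed-form formula is checked against solves the standard transcendental extinction-angle equation sin(β − φ) = sin(α − φ)·e^(−(β−α)/tan φ), with conduction angle δ = β − α. The following is a hedged sketch for the discontinuous-conduction case (firing angle α greater than the load impedance angle φ), using plain bisection.

```python
import math

# Hedged sketch of the iterative (exact) conduction-angle solution
# for a single-phase controlled rectifier with an RL load, valid for
# discontinuous conduction (alpha > phi, angles in radians). The
# left side of f() is proportional to the load current at angle beta.

def conduction_angle(alpha, phi, tol=1e-10):
    """alpha: firing angle (rad); phi: load angle atan(wL/R) (rad)."""
    tan_phi = math.tan(phi)

    def f(beta):
        return (math.sin(beta - phi)
                - math.sin(alpha - phi) * math.exp(-(beta - alpha) / tan_phi))

    # current is positive just after alpha and negative by alpha + pi
    lo, hi = alpha + 1e-9, alpha + math.pi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) - alpha  # delta = beta - alpha
```

For a purely resistive load (φ → 0) the exponential term vanishes almost immediately and δ approaches π − α, which is a convenient sanity check for any approximate formula.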
Abstract: Quantum computation using qubits made of two-component Bose-Einstein condensates (BECs) is analyzed. We construct a general framework for quantum algorithms to be executed using the collective states of the BECs. The use of BECs allows for an increase of energy scales via bosonic enhancement, resulting in two-qubit gate operations that can be performed in a time reduced by a factor of N, where N is the number of bosons per qubit. We illustrate the scheme with an application to Deutsch's and Grover's algorithms, and discuss possible experimental implementations. Decoherence effects are analyzed both under general conditions and for the proposed experimental implementation.
Abstract: This framework describes a computationally more efficient and adaptive threshold estimation method for image denoising in the wavelet domain, based on Generalized Gaussian Distribution (GGD) modeling of subband coefficients. In the proposed method, the threshold is estimated by analysing statistical parameters of the wavelet subband coefficients such as the standard deviation, arithmetic mean, and geometric mean. The noisy image is first decomposed into several levels to obtain the different frequency bands. Then a soft thresholding method is used to remove the noisy coefficients, with the optimum threshold value fixed by the proposed method. Experimental results on several test images show that the method yields significantly superior image quality and a better Peak Signal-to-Noise Ratio (PSNR). To demonstrate the efficiency of the method in image denoising, we compare it with various denoising methods such as the Wiener filter, the average filter, VisuShrink, and BayesShrink.
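The soft-thresholding rule itself is standard and can be stated in two lines: each wavelet detail coefficient is shrunk toward zero by the threshold T, and coefficients below T are zeroed. The adaptive choice of T from subband statistics is the contribution of the paper and is not reproduced in this sketch.

```python
# Hedged sketch of soft thresholding of wavelet detail coefficients:
# sign(c) * max(|c| - T, 0). The threshold T is assumed given (e.g.
# by an estimator such as VisuShrink's sigma * sqrt(2 log n)).

def soft_threshold(coeffs, t):
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]
```

Unlike hard thresholding, the surviving coefficients are also shrunk by T, which avoids the discontinuity at |c| = T and tends to produce smoother reconstructions.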
Abstract: In this paper, we consider the problem of logic simplification for a special class of logic functions, namely complementary Boolean functions (CBF), targeting low-power implementation in a static CMOS logic style. The functions are uniquely characterized by the presence of terms where, for a canonical binary 2-tuple, D(mj) ∩ D(mk) = { } and therefore |D(mj) ∩ D(mk)| = 0 [19]. Similarly, D(Mj) ∩ D(Mk) = { } and hence |D(Mj) ∩ D(Mk)| = 0. Here, mk and Mk represent a minterm and a maxterm, respectively. We compare the circuits minimized with our proposed method with those corresponding to the factored Reed-Muller (f-RM) form, the factored Pseudo Kronecker Reed-Muller (f-PKRM) form, and the factored Generalized Reed-Muller (f-GRM) form. We have opted for algebraic factorization of the Reed-Muller (RM) form and its different variants, using the factorization rules of [1], as it is simple and requires much less CPU execution time than Boolean factorization. This technique has enabled us to greatly reduce the literal count as well as the gate count needed for such RM realizations, which are generally prone to consuming more cells and, subsequently, more power. However, this leads to a drawback in terms of the design-for-test attribute associated with the various RM forms. Though we still preserve the definition of those forms, viz. realizing the functionality with only selected types of logic gates (AND and XOR gates), the structural integrity of the logic levels is not preserved. This consequently alters the testability properties of such circuits, i.e. it may increase, decrease, or maintain the number of test input vectors needed for their exhaustive testing, subsequently affecting their generalized test vector computation.
We do not consider the issue of design-for-testability here but instead focus on the power consumption of the final logic implementation, after realization with a conventional CMOS process technology (the 0.35 micron TSMC process). The quality of the resulting circuits, evaluated on the basis of an established cost metric, viz. power consumption, demonstrates average savings of 26.79% for the samples considered in this work, besides reductions in the number of gates and input literals of 39.66% and 12.98%, respectively, in comparison with the other factored RM forms.
Abstract: Selection of the best possible set of suppliers has a significant impact on the overall profitability and success of any business. For this reason, it is usually necessary to optimize all business processes and to make use of cost-effective alternatives for additional savings. This paper proposes a new, efficient, context-aware supplier selection model that takes into account possible changes in the environment while significantly reducing selection costs. The proposed model is based on data clustering techniques and draws on certain principles of online algorithms for an optimal selection of suppliers. Unlike common selection models, which re-run the selection algorithm from scratch over the whole environment for every decision-making sub-period, our model considers only the changes and superimposes them on the previously determined best set of suppliers to obtain a new best set. Any re-computation of unchanged elements of the environment is thus avoided, and selection costs are consequently reduced significantly. A numerical evaluation confirms the applicability of the model and shows that it outperforms common static selection models in this field.
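The incremental principle can be illustrated with a toy sketch: cache an (assumed expensive) per-supplier evaluation, and on each sub-period re-evaluate only the suppliers whose context changed before re-ranking. The cost-based scoring and the `evaluate` function are hypothetical stand-ins for the paper's clustering-based model.

```python
# Hedged toy sketch of incremental supplier selection: the expensive
# work (evaluate) is done once per supplier and then redone only for
# changed suppliers, instead of re-running selection from scratch
# over the whole environment each sub-period.

def evaluate(supplier, env):
    """Hypothetical expensive scoring of one supplier; lower is better."""
    return env[supplier]

class IncrementalSelector:
    def __init__(self, env, k):
        self.k = k
        self.scores = {s: evaluate(s, env) for s in env}  # one full pass

    def apply_changes(self, env, changed):
        for s in changed:                 # re-score changed suppliers only
            self.scores[s] = evaluate(s, env)

    def best(self):
        return sorted(self.scores, key=self.scores.get)[:self.k]
```

When the per-supplier evaluation dominates the cost (as with clustering over supplier attributes), the per-period work drops from O(n) evaluations to O(|changes|), which is the saving the abstract describes.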