Abstract: Existing image coding standards generally degrade at low bit-rates because of the underlying block-based Discrete Cosine Transform scheme. Over the past decade, the success of wavelets in solving many different problems has contributed to their unprecedented popularity. Due to implementation constraints, scalar wavelets do not simultaneously possess all the properties, such as orthogonality, short support, linear-phase symmetry, and a high order of approximation through vanishing moments, that are essential for signal processing. A new class of wavelets called multiwavelets, which possess more than one scaling function, overcomes this problem. This paper presents a new image coding scheme based on nonlinear approximation of multiwavelet coefficients along with multistage vector quantization. The performance of the proposed scheme is compared with the results obtained from scalar wavelets.
Abstract: A new topology of unified power quality conditioner (UPQC) is proposed for power quality (PQ) improvement in a three-phase four-wire (3P-4W) distribution system. For neutral current mitigation, a star-hexagon transformer is connected in shunt near the load along with a three-leg voltage source inverter (VSI) based UPQC. For the mitigation of source neutral current, the use of passive elements is advantageous over active compensation due to their ruggedness and simpler control. In addition, connecting a star-hexagon transformer for neutral current mitigation reduces the overall rating of the UPQC. The performance of the proposed 3P-4W UPQC topology is evaluated for power-factor correction, load balancing, neutral current mitigation, and mitigation of voltage and current harmonics. A simple control algorithm based on the Unit Vector Template (UVT) technique is used as the control strategy of the UPQC for mitigating different PQ problems. In this control scheme, the current/voltage control is applied over the fundamental supply currents/voltages instead of the fast-changing APF currents/voltages, thereby reducing the computational delay. Moreover, no extra control is required for source neutral current compensation; hence the number of current sensors is reduced. The performance of the proposed UPQC topology is analyzed through simulation results using MATLAB software with its Simulink and Power System Blockset toolboxes.
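The unit vector template idea underlying the control scheme can be sketched generically (an illustrative reconstruction, not the paper's exact control law): the supply voltage phase angle yields unit-amplitude in-phase templates for the three phases, which scale a desired peak current to form balanced sinusoidal reference supply currents.

```python
import math

def unit_vector_templates(theta):
    """In-phase unit vector templates for a balanced three-phase set,
    derived from the phase angle theta (in practice obtained via a PLL)."""
    return (
        math.sin(theta),
        math.sin(theta - 2.0 * math.pi / 3.0),
        math.sin(theta + 2.0 * math.pi / 3.0),
    )

def reference_currents(theta, peak):
    """Sinusoidal reference supply currents: the desired peak current
    scaled by the unit templates, so the source draws balanced,
    in-phase currents."""
    return tuple(peak * u for u in unit_vector_templates(theta))
```

Because the templates are derived from the fundamental supply voltage rather than the fast-changing compensator quantities, the references vary slowly, which is the source of the reduced computational delay noted above.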
Abstract: Globalization, and the consequent tightening of competition among companies, has increased the importance of well-timed decision making. Strategies that are flexible and adaptive to a changing market stand a greater chance of being effective in the long term. At the same time, a clear focus on managing the entire product lifecycle has emerged as a critical area for investment. Well-organized tools that apply past experience to new cases therefore help in making proper managerial decisions. Case-based reasoning (CBR) solves a new problem by using or adapting solutions to old problems. In this paper, a CBR model adapted with the k-nearest neighbor (k-NN) method is employed to provide suggestions for better decision making for a given product in the middle-of-life phase. The set of solutions is weighted by CBR on the principle of group decision making. A wrapper approach with a genetic algorithm is employed to generate optimal feature subsets. A department-store dataset covering various products collected over two years has been used. A k-fold approach is used to evaluate the classification accuracy rate. Empirical results are compared with the classical case-based reasoning algorithm, which has no special process for feature selection; a CBR-PCA algorithm based on filter-approach feature selection; and an artificial neural network. The results indicate that, compared with the two CBR algorithms, the predictive performance of the model in the specific case is more effective.
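The k-NN retrieval step at the core of the adapted CBR model can be sketched as follows (the feature vectors, labels, and k value are illustrative assumptions, not the paper's data):

```python
import math
from collections import Counter

def knn_classify(cases, query, k=3):
    """Retrieve the k past cases nearest to the query (Euclidean
    distance over the feature vector) and return the majority-vote
    solution class, as in basic CBR retrieval."""
    ranked = sorted(cases, key=lambda case: math.dist(case[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical case base: (feature vector, past decision).
cases = [
    ((1.0, 1.0), "promote"),
    ((1.2, 0.9), "promote"),
    ((5.0, 5.0), "discount"),
]
```

In a wrapper feature-selection scheme such as the one described, a genetic algorithm would repeatedly call a classifier like this on candidate feature subsets and keep the subset with the best k-fold accuracy.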
Abstract: The success of IT projects concerning the implementation of business application software depends strongly on the application of efficient requirements management, in order to understand the business requirements and to realize them in IT. In fact, however, the potential of requirements management is not fully exploited by small and medium-sized enterprises (SMEs) of the IT sector. To work out recommendations for action and, furthermore, a possible solution allowing better exploitation of this potential, a scientific research project shall examine which problems occur and from which causes. At the same time, the storage of knowledge from requirements management and its later reuse are important for achieving sustainable improvements in the competitiveness of IT SMEs. Requirements engineering is one of the most important topics in product management for software, towards the goal of optimizing the success of the software product.
Abstract: Flow through micro- and mini-channels requires relatively high driving pressure due to the large fluid pressure drop through these channels. Consequently, the forces acting on the walls of the channel due to the fluid pressure are also large. These forces set up displacement fields in the solid substrate containing the channels. If the movement of the substrate is constrained at some points, then stress fields are established in the substrate. On the other hand, if the deformation of the channel shape is sufficiently large, then its effect on the fluid flow becomes important and must be calculated. Such coupled fluid-solid systems form a class of problems known as fluid-structure interactions. In the present work a co-located finite volume discretization procedure on unstructured meshes is described for solving fluid-structure interaction problems. A linear elastic solid is assumed, for which the effect of the channel deformation on the flow is neglected. Thus the governing equations for the fluid and the solid are decoupled and are solved separately. The procedure is validated by solving two benchmark problems, one from fluid mechanics and another from solid mechanics. A fluid-structure interaction problem of flow through a U-shaped channel embedded in a plate is then solved.
Abstract: The Artificial Immune System has been applied as a heuristic algorithm for decades. Nevertheless, many of these applications take advantage of the algorithm but seldom propose approaches for enhancing its efficiency. In this paper, a self-evolving Artificial Immune System is proposed by developing the T and B cells of the immune system and building a self-evolving mechanism adapted to the complexity of different problems. This research focuses on enhancing the efficiency of clonal selection, which is responsible for producing high-affinity antibodies to resist invading antigens. T and B cells are the main mechanisms by which clonal selection produces different combinations of antibodies; their development therefore influences the efficiency of clonal selection in searching for better solutions. Furthermore, for better cooperation between the two cells, a co-evolutionary strategy is applied to coordinate them for more effective production of antibodies. Flow-shop scheduling instances from the OR-Library are finally adopted to validate the proposed algorithm.
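The abstract does not give the proposed algorithm's details; a minimal sketch of the basic clonal selection loop it builds on (CLONALG-style, over real-valued antibodies, with hypothetical parameters) is:

```python
import random

def clonal_selection(affinity, init_pop, generations=50, clones=5, seed=0):
    """Basic clonal selection: clone each antibody, mutate the clones
    with a rate that grows for lower-affinity antibodies, and keep the
    best survivors. Maximizes `affinity`."""
    rng = random.Random(seed)
    pop = [list(ab) for ab in init_pop]
    n = len(pop)
    for _ in range(generations):
        pool = list(pop)  # keep parents (elitism)
        ranked = sorted(pop, key=affinity, reverse=True)
        for rank, ab in enumerate(ranked):
            rate = 0.1 * (rank + 1)  # worse antibodies mutate more
            for _ in range(clones):
                pool.append([x + rng.gauss(0.0, rate) for x in ab])
        pop = sorted(pool, key=affinity, reverse=True)[:n]
    return pop[0]
```

For flow-shop scheduling the antibodies would instead encode job permutations and mutation would swap jobs, but the clone-mutate-select structure is the same.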
Abstract: This paper investigates the problem of enhancing the recovery of light crude oil-water emulsions in an oil field of the Algerian Sahara. Measurements were taken in experiments using a RheoStress RS600 rheometer. The effects of shear rate, temperature, and light-oil concentration on viscosity behavior were considered. Experimental measurements were performed in terms of shear stress versus shear rate, yield stress, and flow index for light crude oil-water mixtures. The rheological behavior of the emulsions was non-Newtonian shear-thinning (Herschel-Bulkley). The laboratory experiments showed that some water-in-light-crude-oil emulsions remain stable during the enhanced oil recovery process. Breaking the emulsions with additives may involve higher cost and could be very expensive; therefore, further research should be directed towards solving these problems.
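The Herschel-Bulkley model named above relates shear stress to shear rate as tau = tau_y + K * gamma_dot**n; a minimal sketch (with hypothetical parameter values, not the measured ones) is:

```python
def herschel_bulkley_stress(shear_rate, tau_y, K, n):
    """Shear stress of a Herschel-Bulkley fluid:
    tau = tau_y + K * shear_rate**n, where tau_y is the yield stress,
    K the consistency index, and n the flow index."""
    return tau_y + K * shear_rate ** n

def apparent_viscosity(shear_rate, tau_y, K, n):
    """Apparent viscosity mu = tau / shear_rate; for n < 1 this
    decreases with shear rate, i.e. shear-thinning behavior."""
    return herschel_bulkley_stress(shear_rate, tau_y, K, n) / shear_rate
```

With a flow index n below one, the apparent viscosity falls as the shear rate rises, which is the shear-thinning behavior the measurements exhibit.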
Abstract: In an era of knowledge explosion, the volume of data grows rapidly day by day. Since data storage is a limited resource, how to reduce the space data occupies becomes a challenging issue. Data compression provides a good solution that can lower the required space. Data mining has had many useful applications in recent years because it can help users discover interesting knowledge in large databases. However, existing compression algorithms are not appropriate for data mining. In [1, 2], two different approaches were proposed to compress databases and then perform the data mining process. However, they both lack the ability to decompress the data to their original state and to improve data mining performance. In this research a new approach called Mining Merged Transactions with the Quantification Table (M2TQT) is proposed to solve these problems. M2TQT uses the relationships among transactions to merge related transactions, and builds a quantification table to prune candidate itemsets that cannot become frequent, in order to improve the performance of mining association rules. The experiments show that M2TQT performs better than existing approaches.
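M2TQT's exact merging and quantification-table rules are not given in the abstract; the following is only a generic sketch of the underlying idea, namely storing repeated transactions once with a multiplicity so that candidate support can be tallied without rescanning duplicates:

```python
from collections import Counter
from itertools import combinations

def merge_transactions(transactions):
    """Merge identical transactions into (itemset, count) pairs, so
    repeated transactions are stored once with a multiplicity."""
    return Counter(frozenset(t) for t in transactions)

def frequent_itemsets(transactions, min_support, k):
    """Count the support of all k-itemsets over the merged
    representation and keep only those meeting min_support."""
    merged = merge_transactions(transactions)
    support = Counter()
    for itemset, count in merged.items():
        for cand in combinations(sorted(itemset), k):
            support[cand] += count
    return {c: s for c, s in support.items() if s >= min_support}
```

A quantification table in the spirit of the abstract would additionally record per-item counts so that candidates containing an infrequent item are pruned before this counting pass.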
Abstract: Today, building automation is advancing from simple monitoring and control tasks for lighting and heating towards more and more complex applications that require a dynamic perception and interpretation of the different scenes occurring in a building. Current approaches cannot handle these newly emerging demands. In this article, a bionically inspired approach for multimodal, dynamic scene perception and interpretation is presented, based on neuroscientific and neuropsychological research findings about the perceptual system of the human brain. This approach relies on data from diverse sensory modalities being processed in a so-called neuro-symbolic network. With its parallel structure, and with its basic elements acting as information processing and storage units at the same time, it provides a very efficient method for scene perception that overcomes the problems and bottlenecks of classical dynamic scene interpretation systems.
Abstract: In this paper, we propose a novel algorithm for delineating the endocardial wall from a human heart ultrasound scan. We assume that the gray levels in the ultrasound images are independent and identically distributed random variables with different Rician Inverse Gaussian (RiIG) distributions. Both synthetic and real clinical data will be used for testing the algorithm. Algorithm performance will be evaluated using, first, an expert radiologist's evaluation of a soft copy of an ultrasound scan during the scanning process and, second, the doctor's conclusion after reviewing a printed copy of the same scan. Successful implementation of this algorithm should make it possible to differentiate normal from abnormal soft tissue and help identify the disease, its stage, and how best to treat the patient. We hope that an automated system using this algorithm will be useful in public hospitals, especially in Third World countries where shortages of skilled radiologists and of ultrasound machines are common. These public hospitals are usually the first and last stop for most patients in those countries.
Abstract: Abrasive waterjet is a novel machining process capable of processing a wide range of hard-to-machine materials. This research addresses modeling and optimization of the process parameters for this machining technique. To model the process, a set of experimental data has been used to evaluate the effects of various parameter settings in cutting 6063-T6 aluminum alloy. The process variables considered here include nozzle diameter, jet traverse rate, jet pressure, and abrasive flow rate. Depth of cut, as one of the most important output characteristics, has been evaluated for different parameter settings. The Taguchi method and regression modeling are used to establish the relationships between input and output parameters. The adequacy of the model is evaluated using the analysis of variance (ANOVA) technique. The pairwise effects of process parameter settings on the process response outputs are also shown graphically. The proposed model is then embedded into a simulated annealing algorithm to optimize the process parameters. The optimization is carried out for any desired value of depth of cut; the objective is to determine the proper levels of the process parameters required to obtain a given depth of cut. Computational results demonstrate that the proposed solution procedure is quite effective in solving such multi-variable problems.
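The abstract does not state which Taguchi quality criterion was applied; for an output like depth of cut that one typically wants to maximize, the standard larger-the-better signal-to-noise ratio can be sketched as follows (the replicate values are made up for illustration):

```python
import math

def sn_larger_the_better(values):
    """Taguchi larger-the-better signal-to-noise ratio:
    S/N = -10 * log10(mean(1 / y_i**2)). Higher S/N means a larger,
    more consistent response across replicates."""
    mean_inv_sq = sum(1.0 / (y * y) for y in values) / len(values)
    return -10.0 * math.log10(mean_inv_sq)
```

In a Taguchi study the S/N ratio is computed for each row of the orthogonal array, and the factor levels with the highest mean S/N are selected.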
Abstract: The paper presents a method developed to assess the rating points of objects with qualitative indexes. The novelty of the method lies in the authors' use of linguistic scales, which allow the values of the indexes to be formalized with the help of fuzzy sets. As a result, it is possible to operate correctly with dissimilar indexes on a unified basis and to obtain stable final results. The resulting rating points are used in decision making based on fuzzy expert opinions.
Abstract: Assembly line balancing is a very important issue in mass production systems because of its impact on production cost. Although many studies have been done on this topic, assembly line balancing problems are so complex that they are categorized as NP-hard, and researchers strongly recommend using heuristic methods. This paper presents a new heuristic approach called the critical task method (CTM) for solving U-shaped assembly line balancing problems. The performance of the proposed heuristic is tested by solving a number of test problems and comparing the results with 12 other heuristics available in the literature, confirming its superior performance. Furthermore, to prove the efficiency of the proposed CTM, the objectives are extended to minimizing the number of workstations (or, equivalently, maximizing line efficiency) and minimizing the smoothness index. Finally, it is shown that the proposed heuristic is more efficient than the others at solving the U-shaped assembly line balancing problem.
Abstract: Applications involving numbers are so widespread that there is much scope for study. Written numerals are diverse, coming in a variety of forms, sizes, and fonts, and the identification of Indian language scripts is a challenging problem. In Optical Character Recognition (OCR), machine-printed or handwritten characters/numerals are recognized. There are many approaches to the detection of numerals/characters, depending on the type of feature extracted and the way of extracting it. This paper proposes a recognition scheme for handwritten Hindi (Devanagari) numerals, the most popular numerals in the Indian subcontinent. Our work focuses on a local feature-extraction technique, a method using the 16-segment display concept, in which features are extracted from halftoned and binary images of isolated numerals. These feature vectors are fed to a neural classifier model trained to recognize Hindi numerals. The prototype system has been tested on a variety of numeral images. Experimental results show a recognition rate of 98% for halftoned images, compared with 95% for binary images.
Abstract: The hydrodynamic pressures acting on the upstream face of concrete dams during an earthquake are an important factor in designing and assessing the safety of these structures in earthquake regions. Due to inherent complexities, exact assessment of hydrodynamic pressure is feasible only for problems with simple geometry. In this research, the governing equation of concrete gravity dam reservoirs including the effect of fluid viscosity is solved in the frequency domain and then compared with the case in which viscosity is assumed to be zero. The results show that viscosity influences the reservoir's natural frequencies: for excitation frequencies near the reservoir's natural frequencies, the hydrodynamic pressure differs considerably from the results for a non-viscous fluid.
Abstract: High voltage generators are being subjected to higher voltage ratings and are being designed to operate in harsh conditions. Stator windings are the main component of generators, in which electrical, magnetic, and thermal stresses remain the major causes of accelerated insulation degradation and aging. A large number of generators have failed due to stator winding problems, mainly insulation deterioration. Insulation degradation assessment plays a vital role in asset life management. Stator failure is mostly catastrophic, causing significant damage to the plant; besides the loss of generation, it involves heavy repair or replacement costs. Electro-thermal analysis is the main tool for improving the design of stator slot insulation. Dielectric parameters such as insulation thickness, spacing, material types, and the geometry of the winding and slot are major design considerations. A very powerful method available for analyzing electro-thermal performance is the Finite Element Method (FEM), which is used in this paper. Various stator coil and slot configurations are analyzed to design a better dielectric system that reduces electrical and thermal stresses, in order to increase the power of the generator within the same core volume. This paper describes the process used to perform classical design and improvement analysis of stator slot insulation.
Abstract: In this article the homotopy continuation method (HCM) is used to solve the forward kinematics problem of the 3-PRS parallel manipulator. Since there are many difficulties in solving the system of nonlinear equations arising in manipulator kinematics, numerical methods such as Newton-Raphson are inevitably used. Any numerical method, however, faces two troublesome problems: good initial guesses are not easy to find, and the method may not converge to useful solutions. The results of this paper reveal that the homotopy continuation method can alleviate these drawbacks of traditional numerical techniques.
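The continuation idea can be illustrated on a single nonlinear equation (the manipulator's actual kinematic system is not given in the abstract; the cubic below and the step counts are illustrative assumptions). The Newton homotopy H(x, t) = f(x) - (1 - t) f(x0) is tracked from t = 0, where x0 is an exact root of H, to t = 1, where H reduces to the original f:

```python
def homotopy_continuation(f, df, x0, steps=100):
    """Solve f(x) = 0 by tracking the Newton homotopy
    H(x, t) = f(x) - (1 - t) * f(x0) from t = 0 (x0 is an exact root
    of H there) to t = 1 (the original problem), applying a few Newton
    corrections at each step so the iterate follows the solution path."""
    x = x0
    fx0 = f(x0)
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(5):  # Newton corrections on H(., t)
            h = f(x) - (1.0 - t) * fx0
            x -= h / df(x)
    return x
```

Because each step starts from a point already close to the path, the method is far less sensitive to the initial guess than plain Newton-Raphson, which is exactly the drawback the abstract highlights.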
Abstract: Several methods are available for the weight and shape optimization of structures, among which Evolutionary Structural Optimization (ESO) is one of the most widely used. In ESO, however, the optimization criterion is completely case-dependent, and only improving solutions are accepted during the search. In this paper a simulated annealing (SA) algorithm is used for a structural optimization problem. This algorithm differs from other random search methods in that it also accepts non-improving solutions. The SA algorithm is implemented so as to reduce the number of finite element analyses (function evaluations). Computational results show that SA can efficiently and effectively solve such optimization problems within a short search time.
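The defining feature mentioned above, acceptance of non-improving solutions, follows the Metropolis rule: a worse candidate is accepted with probability exp(-delta / T), where T is cooled over time. A minimal sketch (the cost function, neighborhood move, and cooling parameters are illustrative assumptions; in the paper each cost evaluation would be a finite element analysis):

```python
import math
import random

def simulated_annealing(cost, x0, neighbor, t0=1.0, cooling=0.995,
                        iters=5000, seed=0):
    """Minimize `cost` by simulated annealing: improving moves are
    always accepted, non-improving moves with probability
    exp(-delta / T), and the temperature T is cooled geometrically."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    T = t0
    for _ in range(iters):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= cooling
    return best, fbest
```

Early in the schedule the high temperature lets the search escape local minima; as T shrinks, the acceptance of worse moves dies out and the search settles into a basin.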
Abstract: In this paper we outline how mobile Business Intelligence (m-BI) can help businesses work smarter and improve their agility. When we analyze the industry from the usage perspective, i.e. how interaction with the enterprise BI system happens via mobile devices, we can distinguish two major types of mobile BI: passive and active. Active mobile BI allows users to interact with the BI systems on the fly, and often works as a combination of both "push" and "pull" techniques. Mistakes made so far in the progress of mobile technologies and mobile BI, as well as problems that still have to be resolved, are discussed rather broadly in the paper.
Abstract: Several numerical schemes utilizing central difference approximations have been developed to solve the Goursat problem. In recent years, however, compact discretization methods, which lead to high-order finite difference schemes, have been used, since they are capable of achieving better accuracy as well as preserving certain features of the equation, e.g. linearity. The basic idea of the new scheme is to find compact approximations to the derivative terms by differentiating the governing equations centrally. Our primary interest is to study the performance of the new scheme when applied to two Goursat partial differential equations, in comparison with the traditional finite difference scheme.
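The compact scheme itself is not reproduced in the abstract; as the baseline it is compared against, the traditional central-difference scheme for the linear Goursat problem u_xy = f(x, y), with data prescribed on the characteristics x = 0 and y = 0, can be sketched as follows (the grid size and test problem are illustrative assumptions):

```python
def solve_goursat(f, g_x, g_y, X, Y, n):
    """Traditional finite-difference marching scheme for u_xy = f(x, y)
    with Goursat data u(x, 0) = g_x(x) and u(0, y) = g_y(y):
    u[i][j] = u[i-1][j] + u[i][j-1] - u[i-1][j-1] + h*k*f(midpoint),
    where h, k are the grid spacings in x and y."""
    h, k = X / n, Y / n
    u = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        u[i][0] = g_x(i * h)       # data on the characteristic y = 0
    for j in range(n + 1):
        u[0][j] = g_y(j * k)       # data on the characteristic x = 0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            fm = f((i - 0.5) * h, (j - 0.5) * k)  # f at the cell midpoint
            u[i][j] = u[i - 1][j] + u[i][j - 1] - u[i - 1][j - 1] + h * k * fm
    return u
```

For the test problem u_xy = 1 with u(x, 0) = x and u(0, y) = y, the exact solution is u = x + y + xy, which this scheme reproduces exactly since f is constant; compact schemes aim at higher-order accuracy for non-trivial f.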