Abstract: The purpose of this paper is to develop a multi-product economic production quantity (EPQ) model under a vendor-managed inventory policy with several restrictions: limited warehouse space, budget, number of orders, average shortage time, and maximum permissible shortage. Since the costs cannot be predicted with certainty, the data are assumed to behave under an uncertain environment. The problem is first formulated as a bi-objective multi-product EPQ model and then solved with three multi-objective decision-making (MODM) methods. The three methods are compared on the optimal values of the two objective functions and on central processing unit (CPU) time, using statistical analysis and multi-attribute decision-making (MADM). The results demonstrate that the augmented ε-constraint method performs better than the global criteria and goal programming methods in terms of the optimal values of the two objective functions and CPU time. A sensitivity analysis illustrates the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in developing a multi-product EPQ model under a vendor-managed inventory policy with several constraints.
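For reference, the classical single-product EPQ lot size that the multi-product model generalizes is the textbook expression below; the paper's own bi-objective formulation with shortage and vendor-managed inventory terms is not given in the abstract:

\[
Q^{*} = \sqrt{\frac{2KD}{h\,(1 - D/P)}},
\]

where D is the demand rate, P > D the production rate, K the setup cost, and h the unit holding cost.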
Abstract: This paper presents a highly secure data hiding technique using image cropping and Least Significant Bit (LSB) steganography. Crops at predefined secret coordinates are extracted from the cover image, and the secret text message is divided into as many sections as there are crops. Each section of the secret message is embedded into an image crop, in a secret sequence, using the LSB technique on the cover image's color channels. The stego image is produced by reassembling the stego crops with the rest of the image. The technique is compared with other state-of-the-art techniques. Evaluation is based on visual inspection for any degradation of the stego image, the difficulty of extracting the embedded data by an unauthorized viewer, the Peak Signal-to-Noise Ratio (PSNR) of the stego image, and the CPU time of the embedding algorithm. Experimental results confirm that the proposed technique is more secure than the traditional techniques.
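As an illustration of the per-crop embedding step described above, here is a minimal Python sketch assuming 8-bit color crops held as NumPy arrays; the secret crop coordinates and embedding sequence, which carry the security of the scheme, are deliberately not modeled, and the function name is illustrative:

```python
import numpy as np

def embed_section(crop: np.ndarray, section: bytes) -> np.ndarray:
    # Unpack the message section into individual bits (MSB first).
    bits = np.unpackbits(np.frombuffer(section, dtype=np.uint8))
    flat = crop.reshape(-1).copy()
    if bits.size > flat.size:
        raise ValueError("message section too large for this crop")
    # Clear each carrier byte's LSB and write one message bit into it.
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(crop.shape)
```

Extraction would reverse this by reading the carrier bytes' LSBs (flat & 1) and repacking them with np.packbits.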
Abstract: In this paper, a motion generation algorithm for a six Degrees of Freedom (DoF) robotic hand in a static environment is presented. The purpose of developing this method is to generate the end-effector path for edge finishing and inspection processes by utilizing the CAD model of the considered workpiece. Moreover, the proposed algorithm may be extended to other similar manufacturing processes. A software package programmed in the application programming interface (API) of SolidWorks generates the tool path data for the robot. The proposed method significantly simplifies the given problem, reducing the CPU time needed to generate the path, and offers an efficient overall solution. The ABB IRB2000 robot is chosen for executing the generated tool path.
Abstract: Diffuse Optical Tomography (DOT) is a non-invasive imaging modality used in clinical diagnosis for early detection of carcinoma cells in brain tissue. It is a form of optical tomography that reconstructs an image of human soft tissue using near-infrared light. It comprises two steps, a forward model and an inverse model. The forward model describes light propagation in a biological medium; the inverse model uses the scattered light to recover the optical parameters of the tissue. DOT suffers from severe ill-posedness due to its incomplete measurement data, so accurate analysis of this modality is very complicated. To overcome this problem, optical properties of the soft tissue such as the absorption coefficient, scattering coefficient, and optical flux are processed with the standard regularization technique known as Levenberg-Marquardt regularization. Reconstruction algorithms such as the Split Bregman and Gradient Projection for Sparse Reconstruction (GPSR) methods are used to reconstruct the image of the soft tissue for tumour detection. Among these algorithms, the Split Bregman method provides better performance than GPSR. Parameters such as signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), relative error (RE), and the CPU time for reconstructing the images are analyzed to assess performance.
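The Levenberg-Marquardt regularization mentioned above takes its standard form for a generic forward operator F with Jacobian J_k and measured data y (the paper's specific parameterization is not reproduced here):

\[
\mu_{k+1} = \mu_k - \left(J_k^{\top} J_k + \lambda I\right)^{-1} J_k^{\top} \bigl(F(\mu_k) - y\bigr),
\]

where \lambda > 0 is the damping parameter that trades stability against resolution.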
Abstract: The conjugate gradient method has been used extensively to solve large-scale unconstrained optimization problems owing to its favorable iteration count, memory requirements, CPU time, and convergence properties. In this paper we present a new class of nonlinear conjugate gradient coefficients whose global convergence is proved under exact line search. The numerical results for our new β_k are good when compared with well-known formulas.
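Since the abstract does not state the new β_k, the following Python sketch shows the generic nonlinear conjugate gradient scheme such a coefficient plugs into, with Fletcher-Reeves as an explicit placeholder and Armijo backtracking standing in for the exact line search assumed in the convergence proof:

```python
import numpy as np

def backtracking(f, g, x, d, alpha=1.0, rho=0.5, c=1e-4):
    # Simple Armijo backtracking; a stand-in for exact line search.
    while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
        alpha *= rho
    return alpha

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = backtracking(f, g, x, d)
        x_new = x + alpha * d
        g_new = grad(x_new)
        # Fletcher-Reeves coefficient, used purely as a placeholder
        # for the paper's new beta_k, which the abstract does not define.
        beta = (g_new @ g_new) / (g @ g)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```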
Abstract: Accurate modeling of high-speed RLC interconnects has become a necessity to address signal integrity issues in current VLSI design. To accurately model a dispersive system of interconnects at higher frequencies, a full-wave analysis is required. However, conventional circuit simulation of interconnects with full-wave models is extremely CPU-expensive. We present an algorithm for reducing large VLSI circuits to much smaller ones with similar input-output behavior. A key feature of our method, called the Frequency Shift Technique, is that it is capable of reducing linear time-varying systems. This enables it to capture frequency-translation and sampling behavior, which is important in communication subsystems such as mixers, RF components, and switched-capacitor filters. Reduction is obtained by projecting the original system, described by linear differential equations, into a lower dimension. Experiments carried out using the Cadence Design Simulator indicate that the proposed technique achieves a greater percentage reduction with less CPU time than other model order reduction techniques in the literature. We also present applications to RF circuit subsystems, obtaining size reductions and evaluation speedups of orders of magnitude with insignificant loss of accuracy.
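The projection step described above can be sketched generically as follows; this is a textbook projection of a time-invariant system, not the Frequency Shift Technique itself, whose handling of time-varying systems is not detailed in the abstract:

```python
import numpy as np

def reduce_lti(A, B, C, V):
    # With V (n x r, r << n) having orthonormal columns, the projected
    # system approximates the input-output behavior of
    # dx/dt = A x + B u, y = C x in an r-dimensional state space.
    Ar = V.T @ A @ V
    Br = V.T @ B
    Cr = C @ V
    return Ar, Br, Cr
```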
Abstract: Today, design requirements are extending more and more from electronic (analogue and digital) design to multidisciplinary design. These needs call for methodologies that make the CAD product reliable, in order to improve time to market, study costs, and the reusability and reliability of the design process. This paper proposes a high-level design approach for the characterization and optimization of Switched-Current Sigma-Delta Modulators. It uses the new hardware description language VHDL-AMS to help designers optimize the characteristics of the modulator at a high level, with considerably reduced CPU time, before passing to a transistor-level characterization.
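For orientation only, a first-order sigma-delta modulator can be modeled behaviorally in a few lines; this plain-Python sketch stands in for the high-level VHDL-AMS models the paper uses and ignores all switched-current circuit details:

```python
def sigma_delta_first_order(samples):
    # Discrete-time behavioral model: an integrator followed by a 1-bit
    # quantizer whose output is fed back and subtracted from the input.
    v, out = 0.0, []
    for x in samples:          # input assumed normalized to [-1, 1]
        y = 1.0 if v >= 0.0 else -1.0
        out.append(y)
        v += x - y             # integrator accumulates the error
    return out
```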
Abstract: Skin color can provide a useful and robust cue for human-related image analysis, such as face detection, pornographic image filtering, hand detection and tracking, and people retrieval in databases and on the Internet. The major problem with such skin color detection algorithms is that they are time-consuming and hence cannot be applied in a real-time system. To overcome this problem, we introduce a new fast technique for skin detection that can be applied in a real-time system. In this technique, instead of testing each image pixel to label it as skin or non-skin (as in classic techniques), we skip a set of pixels. The rationale for the skipping process is the high probability that neighbors of skin color pixels are also skin pixels, especially in adult images, and vice versa. The proposed method can rapidly detect skin and non-skin color pixels, which in turn dramatically reduces the CPU time required for the protection process. Since many fast detection techniques are based on image resizing, we combine our proposed pixel skipping technique with image resizing to obtain better results. The performance of the proposed skipping and hybrid techniques is evaluated in terms of measured CPU time. Experimental results demonstrate that the proposed methods achieve better results than the relevant classic method.
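The skipping idea can be sketched as follows, assuming any per-pixel color classifier is_skin (for example, a Cb/Cr threshold rule); the block size step and the propagation of one sample's label to its whole block are illustrative choices, not the paper's exact scheme:

```python
import numpy as np

def detect_skin_skipping(img, is_skin, step=4):
    # Classify one pixel per step x step block and propagate its label
    # to the whole block, since neighbors of skin pixels are very
    # likely skin themselves (and likewise for non-skin pixels).
    h, w = img.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h, step):
        for x in range(0, w, step):
            if is_skin(img[y, x]):
                mask[y:y + step, x:x + step] = True
    return mask
```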
Abstract: The volume of XML data exchange is increasing explosively, and the need for efficient mechanisms of XML data management is vital. Many XML storage models have been proposed for storing DTD-independent XML documents in relational database systems. Benchmarking is the best way to highlight the pros and cons of different approaches. In this study, we use a common benchmarking scheme, known as XMark, to compare the most cited and newly proposed DTD-independent methods in terms of logical reads, physical I/O, CPU time, and duration. We show the effect of the Label Path, of extracting values and storing them in a separate table, and of the type of join needed for each method's query answering.
Abstract: In many turbulent flows, a major part of the flow field involves no complicated turbulent behavior. In this research work, in order to reduce the required memory and CPU time, the flow field was decomposed into several blocks, each block including its own turbulence. A two-dimensional backward-facing step was considered here, and four combinations of the Prandtl mixing length and standard k-ε models were implemented. Computer memory and CPU time consumption, in addition to numerical convergence and the accuracy of the obtained results, were investigated. Observations showed that a suitable combination of turbulence models in different blocks led to results with the same accuracy as using the high-order turbulence model for all of the blocks, in addition to reductions in memory and CPU time consumption.
Abstract: Mapping between local and global coordinates is an important issue in the finite element method, as all calculations are performed in local coordinates. The concern arises when sub-parametric elements are used, in which the shape functions of the field variable and of the element geometry are not the same. This is particularly the case for C* elements, in which the extra degrees of freedom added to the nodes make the elements sub-parametric. In the present work, the transformation matrix for the C1* element (an 8-noded hexahedron element with 12 degrees of freedom at each node) is obtained using equivalent C0 elements (with the same number of degrees of freedom). The convergence rate of the 8-noded C1* element is nearly equal to that of its equivalent C0 element, while it consumes less CPU time than the C0 element. The existence of derivative degrees of freedom at the nodes of the C1* element, along with its excellent convergence, makes it superior to its equivalent C0 element.
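A transformation matrix of this kind is typically applied as a congruence transformation; the relation below is generic, assuming T maps the C1* nodal degrees of freedom to those of the equivalent C0 element (the paper's specific construction of T is not given in the abstract):

\[
\mathbf{d}_{C^0} = \mathbf{T}\,\mathbf{d}_{C^{1*}}, \qquad \mathbf{K}_{C^{1*}} = \mathbf{T}^{\top}\mathbf{K}_{C^0}\,\mathbf{T},
\]

so that the stiffness matrix assembled for the equivalent C0 element can be reused for the C1* element.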
Abstract: In this paper, processes involving large deformations of a rubber with hyperelastic material behavior are simulated by the RKPM method. Due to the loss of the Kronecker delta property in meshless shape functions, the imposition of essential boundary conditions consumes significant CPU time in meshfree computations. In this work, the transformation method is used for the imposition of essential boundary conditions. An RKPM material shape function is used in this analysis. The support of the material shape functions covers the same set of particles throughout the material deformation, and hence the transformation matrix is formed only once, at the initial stage. A computer program for the simulations is developed in MATLAB.
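The transformation method referred to above can be summarized as follows; this is a standard statement of the method, assuming shape functions Ψ_J and particles x_I, and does not reproduce the paper's implementation details. Since the approximation u^h(x) = Σ_J Ψ_J(x) d_J does not interpolate, the nodal values are related to the generalized coefficients by

\[
u_I = \sum_{J} \Psi_J(x_I)\, d_J, \qquad \mathbf{u} = \boldsymbol{\Lambda}\,\mathbf{d}, \quad \Lambda_{IJ} = \Psi_J(x_I),
\]

and essential boundary conditions are imposed directly on the transformed unknowns u = Λd. Because the shape function supports cover a fixed set of particles, Λ needs to be formed (and factorized) only once.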
Abstract: This work aims to test the application of computational fluid dynamics (CFD) modeling to fixed bed catalytic cracking reactors. CFD studies of fixed bed designs commonly use a regular packing with N=2 to define the bed geometry. CFD allows us to obtain a more accurate view of the fluid flow and heat transfer mechanisms present in fixed bed equipment. Naphtha was used as the feedstock, and the reactor length was 80 cm. The reactor is divided into three sections, with the catalyst bed packed in the middle section. The reaction scheme involved one primary reaction and 24 secondary reactions. Because of the high CPU times of these simulations, parallel processing was used. In this study, the coke formation process in the fixed bed reactor and in an empty tube reactor was simulated, and the coke formed in the two reactors is compared. In addition, the effect of the steam ratio and feed flow rate on coke formation was investigated.
Abstract: The development of many measurement and inspection systems for products based on real-time image processing cannot be carried out entirely in a laboratory, due to the size or the temperature of the manufactured products. Such systems must be developed in successive phases. First, the system is installed in the production line with only an operational service to acquire images of the products and other complementary signals. Next, a recording service for the images and signals must be developed and integrated into the system. Only after a large set of product images is available can the development of the real-time image processing algorithms for measurement or inspection of the products be accomplished under realistic conditions. Finally, the recording service is turned off or eliminated, and the system operates only with the real-time services for the acquisition and processing of the images. This article presents a systematic performance evaluation of the image compression algorithms currently available to implement a real-time recording service. The results allow establishing a trade-off between the reduction (compression) of the image size and the CPU time required to reach that compression level.
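A benchmark of the kind described can be sketched in a few lines; zlib is used here only as a stand-in for the image compression algorithms actually evaluated, and time.process_time() measures the CPU time of each run:

```python
import time
import zlib

def benchmark_levels(raw: bytes, levels=range(1, 10)):
    # For each compression level, record the compression ratio achieved
    # and the CPU time spent, exposing the trade-off studied in the paper.
    results = []
    for level in levels:
        t0 = time.process_time()
        packed = zlib.compress(raw, level)
        cpu = time.process_time() - t0
        results.append((level, len(raw) / len(packed), cpu))
    return results
```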
Abstract: This paper considers the problem of scheduling maintenance actions for identical aircraft gas turbine engines. Each of the turbines consists of parts which frequently require replacement. A finite inventory of spare parts is available, and all parts are ready for replacement at any time. The inventory consists of both new and refurbished parts, and hence these parts have different field lives. The goal is to find a replacement part sequencing that maximizes the time the aircraft will keep functioning before the inventory is replenished. The problem is formulated as an identical parallel machine scheduling problem in which the minimum completion time has to be maximized. Two models have been developed. The first is an optimization model based on a 0-1 linear programming formulation, while the second is an approximate procedure which consists of decomposing the problem into several two-machine subproblems. Each subproblem is optimally solved using the first model. Both models have been implemented using Lingo and have been tested on two sets of randomly generated data with up to 150 parts and 10 turbines. Experimental results show that the optimization model is able to solve only instances with no more than 4 turbines, while the decomposition procedure often provides near-optimal solutions within a maximum CPU time of 3 seconds.
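One standard way to write such a 0-1 model is the machine-covering formulation below, assuming part j has field life p_j and x_{ij} = 1 when part j is assigned to turbine i; the paper's exact formulation may carry additional constraints:

\[
\max\; t \quad \text{s.t.} \quad \sum_{i=1}^{m} x_{ij} = 1 \;\;\forall j, \qquad \sum_{j=1}^{n} p_j\, x_{ij} \ge t \;\;\forall i, \qquad x_{ij} \in \{0,1\}.
\]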
Abstract: Animation is simply defined as the sequencing of a series of static images to generate the illusion of movement. Most people believe that the actual drawing or creation of the individual images is the animation, when in actuality it is the arrangement of those static images that conveys the motion. To become an animator, it is often assumed that one needs the ability to quickly design masterpiece after masterpiece. Although some semblance of artistic skill is a necessity for the job, the real key to becoming a great animator is the comprehension of timing. This paper uses a combination of sprite animation, frame animation, and some other techniques to cause a group of multi-colored static images to slither around in a bounded area. In addition to slithering, the images also change the color of different parts of their bodies, much like the real-world creatures that have the amazing ability to change the colors of their bodies. This work was implemented using Java 2 Standard Edition (J2SE).
It is both time-consuming and expensive to create animations, regardless of whether they are created by hand or with motion-capture equipment. If animators could reuse old animations and even blend different animations together, a lot of work would be saved. The main objective of this paper is to examine a method for blending several animations together in real time. The paper presents and analyses a solution using Weighted Skeleton Animation (WSA), resulting in limited CPU time and memory waste, as well as saving time for the animators. The idea presented is described in detail and implemented; text animation, vertex animation, sprite part animation, and whole sprite animation were tested.
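The core of such weighted blending can be sketched as follows, assuming each pose is an array of per-joint parameters; a production implementation would interpolate joint rotations with quaternion slerp rather than this linear average:

```python
import numpy as np

def blend_poses(poses, weights):
    # Normalize the blend weights, then take the weighted sum of the
    # per-joint parameters of each input pose.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    poses = [np.asarray(p, dtype=float) for p in poses]
    return sum(wi * p for wi, p in zip(w, poses))
```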
In this research, the resolution, smoothness, and movement of the animated images are evaluated using parameters obtained from the experiments carried out in implementing this work.