Abstract: Background: The facility layout problem (FLP) is an NP-complete (non-deterministic polynomial-time) problem for which it is hard to obtain an exact optimal solution. FLP has been widely studied in various limited spaces and workflows. For example, cafeterias with many types of equipment for troops cause chaotic processes when dining. Objective: This article aims to optimize the layout of a troops' cafeteria and to improve the overall efficiency of the dining process. Methods: First, the original cafeteria layout design scheme was analyzed from an ergonomic perspective and two new design schemes were generated. Next, three facility layout models were designed, and simulation was applied to compare the total time and the density of troops under each scheme. Last, an experiment on the dining process with video observation and analysis verified the simulation results. Results: In simulation, the dining time under the second new layout is shortened by 2.25% and 1.89% (p
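As a purely illustrative aside (not the authors' model), the kind of layout comparison described above can be sketched as a toy queueing simulation in Python; the diner counts, line counts and service times below are all assumed:

    # Toy comparison of two hypothetical serving-line layouts: diners
    # arrive at random and join the earliest-free line; we measure the
    # time until the last diner is served.
    import random

    def total_dining_time(n_diners, n_lines, mean_service, seed=0):
        rng = random.Random(seed)
        free_at = [0.0] * n_lines            # when each line becomes free
        arrival = finish = 0.0
        for _ in range(n_diners):
            arrival += rng.expovariate(1.0)  # assumed arrival rate
            line = min(range(n_lines), key=lambda i: free_at[i])
            start = max(arrival, free_at[line])
            free_at[line] = start + rng.expovariate(1.0 / mean_service)
            finish = max(finish, free_at[line])
        return finish

    # e.g. an original layout with 2 lines vs. a new layout with 3 lines
    print(total_dining_time(300, 2, 1.5), total_dining_time(300, 3, 1.5))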
Abstract: Electricity markets throughout the world have undergone
substantial changes. Accurate, reliable, clear and comprehensible
modeling and forecasting of different variables (loads and prices
in the first instance) have become increasingly important. In this
paper, we describe the current state of the art, focusing on
reg-SARMA methods, which have proven flexible enough to accommodate
electricity price/load behavior satisfactorily. More specifically,
we discuss: 1) the dichotomy between point and interval forecasts;
2) the difficult choice between stochastic (e.g. climatic variation)
and non-deterministic predictors (e.g. calendar variables); 3) the
confrontation between modelling a single aggregate time series and
creating separate, potentially different models of sub-series. The
noteworthy point we would like to bring out is that prices and loads
require different approaches that appear irreconcilable, even though
they must be reconciled for the interests and activities of energy
companies.
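To make the reg-SARMA setup concrete, here is a minimal sketch of a regression model with SARMA errors in Python, using statsmodels' SARIMAX; the data file, column names and model orders are assumptions, not taken from the paper:

    # Hypothetical hourly load data with calendar and weather regressors
    # (regression with seasonal ARMA errors, producing both point and
    # interval forecasts).
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    df = pd.read_csv("load.csv", parse_dates=["timestamp"],
                     index_col="timestamp")          # assumed file layout
    exog = pd.DataFrame({
        "is_weekend": (df.index.dayofweek >= 5).astype(int),  # calendar predictor
        "temperature": df["temperature"].to_numpy(),          # climatic predictor
    }, index=df.index)
    model = SARIMAX(df["load"], exog=exog,
                    order=(1, 0, 1),                 # short-run ARMA errors
                    seasonal_order=(1, 0, 1, 24))    # daily cycle, hourly data
    fit = model.fit(disp=False)
    # Point forecast plus 95% interval forecast for the next 24 hours;
    # future regressors are reused from the last day purely for illustration.
    pred = fit.get_forecast(steps=24, exog=exog.iloc[-24:])
    print(pred.predicted_mean)
    print(pred.conf_int(alpha=0.05))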
Abstract: Wireless sensor networks have attracted much of the spotlight from researchers all around the world, owing to their extensive applicability in agricultural, industrial and military fields. Energy-conserving node deployment strategies play a notable role in the effective implementation of wireless sensor networks. Clustering is an approach in wireless sensor networks that improves energy efficiency in the network. A clustering algorithm needs an optimum size and number of clusters, as clustering, if not implemented properly, cannot effectively increase the life of the network. In this paper, an algorithm is proposed to address connectivity issues with the aim of ensuring uniform energy consumption of nodes in every part of the network. Simulation results show that the proposed algorithm has an edge over existing algorithms in terms of throughput and network lifetime.
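For illustration only, a LEACH-style sketch of energy-aware cluster-head rotation follows; it is an assumed stand-in, not the proposed algorithm, and all node counts and energy costs are invented:

    import random

    random.seed(1)
    nodes = [{"x": random.random(), "y": random.random(), "energy": 1.0}
             for _ in range(100)]

    def run_round(nodes, p_head=0.05, e_tx=0.05):
        alive = [n for n in nodes if n["energy"] > 0]
        mean_e = sum(n["energy"] for n in alive) / len(alive)
        # residual-energy-weighted head election keeps consumption uniform
        heads = [n for n in alive
                 if random.random() < p_head * n["energy"] / mean_e]
        if not heads:
            heads = [max(alive, key=lambda n: n["energy"])]
        for n in alive:
            h = min(heads, key=lambda c: (c["x"] - n["x"]) ** 2
                                         + (c["y"] - n["y"]) ** 2)
            d2 = (h["x"] - n["x"]) ** 2 + (h["y"] - n["y"]) ** 2
            n["energy"] -= e_tx * d2     # member-to-head transmission cost
        for h in heads:
            h["energy"] -= e_tx          # assumed head-to-sink cost
        return len(alive)

    rounds = 0
    while run_round(nodes) > 50:         # run until half the nodes die
        rounds += 1
    print("network half-life (rounds):", rounds)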
Abstract: The use of optical technologies in telecommunications has
been increasing due to their ability to transmit large amounts of
data over long distances. However, as in all data transmission
systems, optical communication channels suffer from undesirable and
non-deterministic effects, making it essential to understand them.
Thus, this research enables the assessment of these effects, as well
as their characterization, and explores their beneficial uses.
Abstract: The expanded Invasive Weed Optimization algorithm (exIWO) is an optimization metaheuristic modelled on the original IWO version created by researchers from the University of Tehran. The authors of the present paper have extended the exIWO algorithm by introducing a set of both deterministic and non-deterministic strategies for the selection of individuals. The goal of the project was to evaluate exIWO by testing its usefulness in solving test instances of the traveling salesman problem (TSP) taken from the TSPLIB collection, which allows the experimental results to be compared with known optimal values.
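A rough Python sketch of how an IWO-style metaheuristic can be adapted to TSP tours follows; it is an assumed simplification for illustration, not the exIWO algorithm itself:

    # Each "weed" spreads mutated copies of its tour; better tours spread
    # more seeds, and competitive exclusion truncates the population.
    import random

    def tour_len(tour, dist):
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
                   for i in range(len(tour)))

    def mutate(tour):                    # dispersal = random 2-opt reversal
        a, b = sorted(random.sample(range(len(tour)), 2))
        return tour[:a] + tour[a:b + 1][::-1] + tour[b + 1:]

    def iwo_tsp(dist, pop=10, max_pop=30, smin=1, smax=5, iters=500):
        n = len(dist)
        weeds = [random.sample(range(n), n) for _ in range(pop)]
        for _ in range(iters):
            ranked = sorted(weeds, key=lambda t: tour_len(t, dist))
            best = tour_len(ranked[0], dist)
            worst = tour_len(ranked[-1], dist)
            offspring = []
            for w in ranked:
                # seed count grows linearly with relative fitness
                f = (worst - tour_len(w, dist)) / (worst - best + 1e-9)
                for _ in range(smin + int(f * (smax - smin))):
                    offspring.append(mutate(w))
            weeds = sorted(weeds + offspring,
                           key=lambda t: tour_len(t, dist))[:max_pop]
        return weeds[0]

    random.seed(0)
    cities = [(random.random(), random.random()) for _ in range(15)]
    dist = [[((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
             for (x2, y2) in cities] for (x1, y1) in cities]
    print(tour_len(iwo_tsp(dist), dist))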
Abstract: A procedure commonly used in the Job Shop Scheduling Problem (JSSP) to evaluate the neighborhood functions employed by non-deterministic algorithms is the calculation of the critical path in a digraph. This paper presents an experimental study of the computational cost incurred when the critical path is calculated in solutions of large JSSP instances. The results indicate that if the critical path is used to generate neighborhoods in the metaheuristics applied to the JSSP, a high computational cost is incurred, despite the fact that calculating the critical path in any digraph has polynomial complexity.
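The polynomial-time step the abstract refers to, computing the critical (longest) path in a DAG via topological order, can be sketched as follows; the example graph and durations are illustrative, not a JSSP instance:

    from collections import deque

    def critical_path(succ, dur):
        """succ: node -> list of successors; dur: node -> processing time."""
        indeg = {v: 0 for v in succ}
        for v in succ:
            for w in succ[v]:
                indeg[w] += 1
        dist = {v: dur[v] for v in succ}  # longest distance ending at v
        q = deque(v for v in succ if indeg[v] == 0)
        while q:
            v = q.popleft()
            for w in succ[v]:
                dist[w] = max(dist[w], dist[v] + dur[w])
                indeg[w] -= 1
                if indeg[w] == 0:
                    q.append(w)
        return max(dist.values())         # makespan = critical path length

    succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
    dur = {"a": 3, "b": 2, "c": 5, "d": 1}
    print(critical_path(succ, dur))       # 3 + 5 + 1 = 9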
Abstract: Information hiding for authenticating and verifying the content integrity of multimedia has been exploited extensively in the last decade. We propose the idea of using a genetic algorithm and non-deterministic dependence, involving the un-watermarkable coefficients, for digital image authentication. The genetic algorithm is used to intelligently select coefficients for watermarking in a DCT-based image authentication scheme, which also implicitly watermarks all the un-watermarkable coefficients in order to thwart different attacks. Experimental results show that such intelligent selection improves the imperceptibility of the watermarked image, and implicit watermarking of all the coefficients improves security against attacks such as cover-up, vector quantization and transplantation.
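To illustrate the flavor of GA-driven coefficient selection (a hedged sketch, not the paper's scheme), the following toy GA picks mid-band DCT coefficients of a single 8x8 block, scoring candidates by a pixel-domain imperceptibility proxy:

    import random
    import numpy as np
    from scipy.fft import dctn, idctn

    random.seed(0); np.random.seed(0)
    block = np.random.rand(8, 8)                     # stand-in image block
    coeffs = dctn(block, norm="ortho")
    positions = [(u, v) for u in range(8) for v in range(8)
                 if 3 <= u + v <= 6]                 # mid-band candidates

    def fitness(mask, delta=0.05, min_bits=8):
        if sum(mask) < min_bits:                     # must embed enough bits
            return -1e9
        marked = coeffs.copy()
        for keep, (u, v) in zip(mask, positions):
            if keep:
                marked[u, v] += delta                # toy embedding step
        # imperceptibility proxy: pixel-domain MSE after inverse DCT
        return -np.mean((idctn(marked, norm="ortho") - block) ** 2)

    pop = [[random.randint(0, 1) for _ in positions] for _ in range(20)]
    for _ in range(50):
        pop.sort(key=fitness, reverse=True)
        parents, children = pop[:10], []
        for _ in range(10):
            a, b = random.sample(parents, 2)
            cut = random.randrange(len(positions))
            child = a[:cut] + b[cut:]                # one-point crossover
            child[random.randrange(len(positions))] ^= 1   # mutation
            children.append(child)
        pop = parents + children
    print("best fitness:", fitness(max(pop, key=fitness)))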
Abstract: Colored Petri Nets (CPN) are a well-known kind of
high-level Petri nets. With its sound and complete semantics,
rewriting logic is a powerful logic for the description and
verification of non-deterministic concurrent systems. Recently, CPN
semantics have been defined in terms of rewriting logic, allowing
models to be built by formal reasoning. In this paper, we propose an
automatic translation of CPN into the rewriting logic language
Maude, implemented in a tool that supports graphical editing and
simulation of CPN. The tool allows the user to draw a CPN
graphically and automatically translates the drawn graphical
representation into a Maude specification. The Maude language is
then used to simulate the resulting specification. It is the first
rewriting-logic-based environment for this category of Petri nets.
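A toy Python sketch of the translation idea follows; the multiset encoding of markings is an assumption chosen for brevity and is far simpler than the paper's tool:

    # Each transition becomes a Maude rewrite rule over a multiset of
    # (place, token) pairs.
    def cpn_to_maude(name, transitions):
        """transitions: list of (label, consumed, produced), where consumed
        and produced are lists of (place, token) pairs."""
        places = {p for _, c, pr in transitions for p, _ in c + pr}
        tokens = {t for _, c, pr in transitions for _, t in c + pr}
        lines = [f"mod {name} is",
                 "  sorts Place Tok Marking .",
                 f"  ops {' '.join(sorted(places))} : -> Place .",
                 f"  ops {' '.join(sorted(tokens))} : -> Tok .",
                 "  op <_,_> : Place Tok -> Marking .  *** token in a place",
                 "  op empty : -> Marking .",
                 "  op __ : Marking Marking -> Marking [assoc comm id: empty] ."]
        for label, consumed, produced in transitions:
            lhs = " ".join(f"< {p}, {t} >" for p, t in consumed) or "empty"
            rhs = " ".join(f"< {p}, {t} >" for p, t in produced) or "empty"
            lines.append(f"  rl [{label}] : {lhs} => {rhs} .")
        lines.append("endm")
        return "\n".join(lines)

    # Hypothetical net: transition t1 moves token a from place p1 to p2.
    print(cpn_to_maude("NET", [("t1", [("p1", "a")], [("p2", "a")])]))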
Abstract: In cryptography, confusion and diffusion are very
important for achieving the confidentiality and privacy of messages
in block ciphers and stream ciphers. Two types of network provide
the confusion and diffusion properties of messages in block ciphers:
the substitution-permutation network (S-P network) and the Feistel
network. NLFS (non-linear feedback stream cipher) is a fast and
secure stream cipher for software applications. NLFS has two modes:
a basic (synchronous) mode and a self-synchronous mode. Real random
numbers are non-deterministic. The R-box (random box) is based on
dynamic properties and performs a stochastic transformation of data
that can be used to effectively meet the challenge of protecting
information from intentional destructive impacts. In this paper, a
new implementation of stochastic transformation is proposed.
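Purely to illustrate what a stochastic (non-deterministic) transformation might look like, here is an assumed toy design, not the paper's R-box, and deliberately insecure (the box index travels in the clear):

    # Each byte passes through one of several keyed S-boxes; the box index
    # is drawn per byte from a non-deterministic source, so re-encrypting
    # the same plaintext yields a different ciphertext.
    import secrets
    import random

    def make_sboxes(key, count=4):
        rng = random.Random(key)             # keyed, reproducible S-boxes
        boxes = []
        for _ in range(count):
            perm = list(range(256))
            rng.shuffle(perm)
            boxes.append(perm)
        return boxes

    def r_box_encrypt(data, boxes):
        out = bytearray()
        for b in data:
            i = secrets.randbelow(len(boxes))    # non-deterministic choice
            out.append(boxes[i][b])
            out.append(i)                        # index sent alongside (toy!)
        return bytes(out)

    def r_box_decrypt(blob, boxes):
        inv = [[0] * 256 for _ in boxes]
        for i, box in enumerate(boxes):
            for x, y in enumerate(box):
                inv[i][y] = x
        return bytes(inv[blob[k + 1]][blob[k]] for k in range(0, len(blob), 2))

    boxes = make_sboxes(key=12345)
    ct = r_box_encrypt(b"attack at dawn", boxes)
    assert r_box_decrypt(ct, boxes) == b"attack at dawn"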
Abstract: This work explores blind image deconvolution by recursive function approximation based on supervised learning of neural networks, under the assumption that a degraded image is the linear convolution of an original source image with a linear shift-invariant (LSI) blurring matrix. Supervised learning of radial basis function (RBF) neural networks is employed to construct an embedded recursive function within a blurred image, extract non-deterministic components of the original source image, and use them to estimate the hyperparameters of a linear image degradation model. Based on the estimated blurring matrix, reconstruction of the original source image from the blurred image is then resolved by an annealed Hopfield neural network. Numerical simulations show that the proposed method is effective for faithful estimation of an unknown blurring matrix and restoration of the original source image.
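A minimal sketch of the RBF building block, reduced to one dimension for brevity (an assumed setup, not the paper's image pipeline): Gaussian basis functions with fixed centers, whose output weights are fit by least squares:

    import numpy as np

    def rbf_design(x, centers, width):
        # Gaussian basis matrix: one column per center
        return np.exp(-((x[:, None] - centers[None, :]) ** 2)
                      / (2 * width ** 2))

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 200)
    y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)  # noisy signal

    centers = np.linspace(0, 1, 12)
    Phi = rbf_design(x, centers, width=0.08)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # supervised learning step
    y_hat = Phi @ w
    print("train MSE:", float(np.mean((y_hat - y) ** 2)))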
Abstract: Time series analysis often requires data that represent
the evolution of an observed variable in equidistant time steps. In
order to collect such data, sampling is applied. While continuous
signals may be sampled, analyzed and reconstructed by applying
Shannon's sampling theorem, time-discrete signals have to be dealt
with differently. In this article we consider the discrete-event
simulation (DES) of job-shop systems and study the effects of
different sampling rates on data quality, regarding completeness and
accuracy of reconstructed inventory evolutions. In doing so, we
discuss deterministic as well as non-deterministic behavior of
system variables. Error curves are deployed to illustrate and
discuss the sampling rate's impact and to derive recommendations for
its well-founded choice.
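The sampling question can be illustrated with a small sketch: a piecewise-constant inventory level, recorded as events, is reconstructed by zero-order-hold sampling at different rates (all data below are invented):

    import bisect

    # (time, level) event records of a piecewise-constant inventory
    events = [(0.0, 5), (1.3, 7), (2.1, 4), (3.0, 12),
              (3.2, 4), (4.6, 9), (7.2, 3)]
    times = [t for t, _ in events]

    def sample(dt, horizon=8.0):
        out, t = [], 0.0
        while t <= horizon:
            i = bisect.bisect_right(times, t) - 1   # last event at or before t
            out.append((t, events[i][1]))
            t += dt
        return out

    for dt in (0.5, 1.0, 2.0):
        s = sample(dt)
        captured = {lvl for _, lvl in s}
        missed = len({lvl for _, lvl in events} - captured)
        print(f"dt={dt}: {len(s)} samples, distinct levels missed: {missed}")

With the coarsest rate, the short-lived spike to level 12 is lost entirely, the completeness issue the abstract raises.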
Abstract: Due to important issues such as deadlock, starvation,
communication, non-deterministic behavior and synchronization,
concurrent systems are very complex, sensitive, and error-prone.
Ensuring the reliability and accuracy of these systems is therefore
essential, and there has been considerable interest in the formal
specification of concurrent programs in recent years. Nevertheless,
some features of concurrent systems, such as dynamic process
creation, scheduling and starvation, have not yet been specified
formally. Other features have been specified only partially, or have
been described using a combination of several different formalisms
and methods whose integration requires too much effort. In other
words, a comprehensive and integrated specification that covers all
aspects of concurrent systems has not been provided yet. This paper
therefore makes two major contributions: first, it provides a
comprehensive formal framework to specify all well-known features of
concurrent systems; second, it provides an integrated specification
of these features using just a single formal notation, i.e., the Z
language.
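For flavor, a tiny Z-style fragment (typeset with the zed/fuzz LaTeX conventions, and invented here rather than taken from the paper) shows how state and an operation of a shared bounded buffer can be captured in a single notation:

    % Assumed illustration in Z notation, not the paper's specification:
    % a bounded buffer shared by concurrent producers.
    \begin{schema}{Buffer}
      items : \seq MSG \\
      cap : \nat
    \where
      \# items \leq cap
    \end{schema}

    \begin{schema}{Put}
      \Delta Buffer \\
      m? : MSG
    \where
      \# items < cap \\
      items' = items \cat \langle m? \rangle \\
      cap' = cap
    \end{schema}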
Abstract: Designing modern machine tools is a complex task. A
simulation tool to aid the design work, a virtual machine, has
therefore been developed in earlier work. The virtual machine
considers the interaction between the mechanics of the machine
(including structural flexibility) and the control system. This
paper exemplifies the usefulness of the virtual machine as a tool
for product development. An optimisation study is conducted, aiming
at improving the existing design of a machine tool with regard to
weight and manufacturing accuracy at maintained manufacturing speed.
The problem can be categorised as constrained multidisciplinary
multi-objective multivariable optimisation. Parameters of the
control system and geometric quantities of the machine are used as
design variables. This results in a mix of continuous and discrete
variables, and an optimisation approach using a genetic algorithm is
therefore deployed. The accuracy objective is evaluated according to
international standards. The complete system model shows
non-deterministic behaviour; a strategy to handle this, based on
statistical analysis, is suggested. The weight of the main moving
parts is reduced by more than 30 per cent and the manufacturing
accuracy is improved by more than 60 per cent compared to the
original design, with no reduction in manufacturing speed. It is
also shown that interaction effects exist between the mechanics and
the control, i.e. this improvement would most likely not have been
possible with a conventional sequential design approach within the
same time, cost and general resource frame. This indicates the
potential of the virtual machine concept for contributing to
improved efficiency of both complex products and the development
process for such products. Companies incorporating such advanced
simulation tools in their product development could thus improve
their own competitiveness as well as contribute to improved resource
efficiency of society at large.
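Conceptually, the combination of mixed variables and a non-deterministic objective can be sketched as below; the design variables, objective and noise model are all assumptions, with repeated evaluations averaged as a simple statistical strategy:

    # GA over one continuous variable (a wall thickness) and one discrete
    # variable (a control-gain choice); each candidate is simulated
    # several times and the scores averaged to cope with non-determinism.
    import random

    random.seed(0)
    GAINS = [10, 20, 40, 80]                  # discrete control choices

    def evaluate(thickness, gain_idx, reps=5):
        scores = []
        for _ in range(reps):
            noise = random.gauss(0, 0.05)     # models non-determinism
            weight = thickness                # stand-in weight objective
            accuracy_err = 1.0 / (thickness * GAINS[gain_idx]) + noise
            scores.append(weight + 100 * max(accuracy_err, 0))
        return sum(scores) / len(scores)      # average over repeated runs

    def mutate(ind):
        t, g = ind
        t = min(max(t + random.gauss(0, 0.2), 0.5), 5.0)  # continuous var
        if random.random() < 0.3:
            g = random.randrange(len(GAINS))              # discrete var
        return (t, g)

    pop = [(random.uniform(0.5, 5.0), random.randrange(len(GAINS)))
           for _ in range(20)]
    for _ in range(40):
        pop.sort(key=lambda ind: evaluate(*ind))
        pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
    print("best design:", min(pop, key=lambda ind: evaluate(*ind)))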