Abstract: In 2002 an amendment to SOLAS permitted lightweight material constructions in vessels, provided the same fire safety as in steel constructions could be demonstrated. FISPAT (FIreSPread Analysis Tool) is a computer application that simulates fire spread and fault injection in cruise vessels and identifies fire-sensitive areas. It was developed to analyze cruise vessel designs and provides a method to evaluate the network layout and safety of cruise vessels. It allows fast, reliable and deterministic exhaustive simulations and presents the results in a graphical vessel model. By performing the analysis iteratively while altering the cruise vessel design, it can be used along with fire chamber experiments to show that a lightweight design can be as safe as a steel construction and that SOLAS regulations are fulfilled.
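The deterministic exhaustive simulation idea above can be illustrated with a small sketch (this is not FISPAT itself; the graph, delays and sensitivity measure are illustrative assumptions): compartments become nodes of a graph, fire spreads along edges with fixed delays, earliest ignition times are computed with Dijkstra's algorithm, and a compartment's fire sensitivity is approximated by its mean ignition time over all possible fire origins.

```python
import heapq

def ignition_times(graph, origin):
    """Earliest ignition time of every compartment for a fire starting at
    `origin`, where graph[u] = {v: spread_delay} (Dijkstra's algorithm)."""
    times = {origin: 0.0}
    heap = [(0.0, origin)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > times.get(u, float("inf")):
            continue
        for v, delay in graph[u].items():
            nt = t + delay
            if nt < times.get(v, float("inf")):
                times[v] = nt
                heapq.heappush(heap, (nt, v))
    return times

def sensitivity_ranking(graph):
    """Rank compartments by mean ignition time over all fire origins
    (exhaustive simulation); a lower mean marks a more fire-sensitive area."""
    nodes = list(graph)
    mean = {v: sum(ignition_times(graph, o)[v] for o in nodes) / len(nodes)
            for v in nodes}
    return sorted(nodes, key=mean.get)

# three compartments with symmetric spread delays (hypothetical values)
graph = {"A": {"B": 1.0, "C": 4.0},
         "B": {"A": 1.0, "C": 2.0},
         "C": {"A": 4.0, "B": 2.0}}
```

Running every origin exhaustively, as the tool does, is what makes the analysis deterministic and repeatable across design iterations.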
Abstract: Mathematical programming has been applied to a wide variety of problems. For many practical problems, however, the assumption that the parameters involved are deterministic, known data is often unjustified. In such cases the data contain uncertainty and are thus represented as random variables, since they carry information about the future. Decision-making under uncertainty involves potential risk, and stochastic programming is a commonly used method for optimization under uncertainty. A stochastic programming problem with recourse is referred to as a two-stage stochastic problem. In this study, we consider a stochastic programming problem with simple integer recourse, in which the value of the recourse variable is restricted to nonnegative integer multiples. A dynamic slope scaling procedure for solving this problem is developed using a property of the expected recourse function. Numerical experiments demonstrate that the proposed algorithm is quite efficient. The stochastic programming model defined in this paper is useful for a variety of design and operational problems.
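As a minimal sketch of the expected simple integer recourse function (not the paper's dynamic slope scaling procedure), the shortfall below a first-stage decision x must be covered in whole units; with discrete equiprobable scenarios the expectation and a brute-force first stage are a few lines. The cost and scenario values are hypothetical.

```python
import math

def expected_recourse(x, scenarios, q=1.0):
    """Q(x) = E[q * ceil(max(xi - x, 0))]: any shortfall must be covered in
    whole units, the hallmark of simple integer recourse."""
    return q * sum(math.ceil(max(s - x, 0.0)) for s in scenarios) / len(scenarios)

def solve(c, scenarios, grid):
    """Brute-force first stage: minimize c*x + Q(x) over a grid of x values."""
    return min(grid, key=lambda x: c * x + expected_recourse(x, scenarios))

scenarios = [1.3, 2.7, 4.1]          # equiprobable demand scenarios (hypothetical)
```

Because Q(x) is a nonincreasing step-like function of x, methods such as slope scaling can exploit its structure instead of enumerating a grid as done here.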
Abstract: The security of power systems against malicious cyber-physical data attacks has become an important issue. An adversary attempts to manipulate the information structure of the power system and inject malicious data in order to bias state variables while evading existing detection techniques based on the residual test. The solutions proposed in the literature are capable of immunizing the power system against false data injection, but they can be too costly and physically impractical in an expansive distribution network. To this end, we define an algebraic condition under which a trustworthy power system evades malicious data injection. The proposed protection scheme secures the power system by deterministically reconfiguring the information structure and the corresponding residual test. More importantly, it requires no physical effort at either the microgrid or the network level. An identification scheme for locating the attacked meters is proposed as well. Finally, the well-known IEEE 30-bus system is adopted to demonstrate the effectiveness of the proposed schemes.
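A minimal sketch of why a residual test can be evaded: in linear (DC) state estimation with measurements z ≈ h·x, any injection lying in the column space of the measurement model shifts the estimate but leaves the residual unchanged. The single-state, four-meter example below is illustrative, not the paper's system model.

```python
def estimate(z, h):
    """Least-squares estimate of the single state x from z ≈ h*x."""
    return sum(zi * hi for zi, hi in zip(z, h)) / sum(hi * hi for hi in h)

def residual_norm(z, h):
    """Norm of the measurement residual used by the detection test."""
    x_hat = estimate(z, h)
    return sum((zi - hi * x_hat) ** 2 for zi, hi in zip(z, h)) ** 0.5

h = [1.0, 0.5, 2.0, 1.5]                   # measurement model (4 meters, 1 state)
z = [hi * 3.0 + e for hi, e in zip(h, [0.02, -0.01, 0.03, 0.0])]  # true x = 3

c = 1.7                                     # attacker's desired state deviation
z_attacked = [zi + hi * c for zi, hi in zip(z, h)]  # injection a = h*c lies in col(h)
# the estimate shifts by c, yet the residual norm is unchanged
```

Reconfiguring the information structure, as the abstract proposes, changes which injections lie in that column space and hence which attacks stay stealthy.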
Abstract: Cryptography, image watermarking and e-banking are filled with apparent oxymora and paradoxes. Random sequences are used as keys to encrypt information, to serve as the watermark during embedding, and to extract the watermark during detection. The keys are also heavily utilized in 24x7x365 banking operations. A deterministic random sequence is therefore very useful for online applications: to obtain the same random sequence, we need only supply the same seed to the generator. Many researchers have used Deterministic Random Number Generators (DRNGs) for cryptographic applications and Pseudo Noise (PN) random sequences for watermarking. Although PN sequences have known weaknesses under attack, the research community has used them mostly in digital watermarking. On the other hand, DRNGs have not been widely used in online watermarking due to their computational complexity and lack of robustness. We therefore propose a new design for generating a DRNG using a Pi series, making it useful for online cryptographic, digital watermarking and banking applications.
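The reproducibility property the abstract relies on, that the same seed regenerates the same sequence, can be sketched with a generic seeded construction. This is only an illustration using SHA-256 in counter mode; the paper's Pi-series construction is not reproduced here.

```python
import hashlib

def drng(seed: bytes, n: int):
    """Deterministic pseudo-random bytes: first byte of SHA-256 over
    seed||counter. Illustrative only, not the paper's Pi-series DRNG."""
    return [hashlib.sha256(seed + i.to_bytes(8, "big")).digest()[0]
            for i in range(n)]
```

A watermark detector holding the same seed can regenerate the exact key sequence used at embedding time, which is what makes deterministic generators attractive for online use.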
Abstract: The quality of a machined surface is becoming more and more important in order to satisfy the increasing demands on sophisticated component performance, longevity, and reliability. Usually, any machining operation leaves its own characteristic evidence on the machined surface in the form of finely spaced micro-irregularities (surface roughness) left by the indeterministic characteristics of the different elements of the system: tool, machine, workpart and cutting parameters. However, one of the most influential sources affecting surface roughness in machining is the instantaneous state of the tool edge. The main objective of the current work is to relate the in-process immeasurable cutting edge deformation and surface roughness to more reliable, easy-to-measure force signals using robust non-linear time-dependent regression modeling techniques. Time-dependent modeling is beneficial when modern machining systems, such as adaptive control techniques, are considered, where the state of the machined surface and the health of the cutting edge are monitored, assessed and controlled online using real-time information provided by the variability encountered in the measured force signals. Correlation between wear propagation and roughness variation is developed throughout the different edge lifetimes. The surface roughness is further evaluated in the light of the variation in both the static and the dynamic force signals. Consistent correlation is found between surface roughness variation and tool wear progress within its initial and constant regions. During the first few seconds of cutting, the expected and well-known trend of the effect of the cutting parameters is observed: surface roughness is positively influenced by the level of the feed rate and negatively by the cutting speed. As cutting continues, roughness is affected, to different extents, by the rather localized wear modes on either the tool nose or its flank areas.
Moreover, it appears that roughness varies as the wear attitude transfers from one mode to another; in general, it is shown to improve as wear increases, but with possible corresponding workpart dimensional inaccuracy. The dynamic force signals are found to be reasonably sensitive to both the progressive and the random modes of tool edge deformation. While the frictional force components, feeding and radial, are found informative regarding progressive wear modes, the vertical (power) component is found to be a more representative carrier of the system instability resulting from the edge's random deformation.
Abstract: Time series analysis often requires data that represent the evolution of an observed variable in equidistant time steps. In order to collect such data, sampling is applied. While continuous signals may be sampled, analyzed and reconstructed by applying Shannon's sampling theorem, time-discrete signals have to be dealt with differently. In this article we consider the discrete-event simulation (DES) of job-shop systems and study the effects of different sampling rates on data quality, regarding the completeness and accuracy of reconstructed inventory evolutions. In doing so, we discuss deterministic as well as non-deterministic behavior of system variables. Error curves are deployed to illustrate and discuss the sampling rate's impact and to derive recommendations for its well-founded choice.
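The completeness effect the article studies can be sketched as follows: a discrete-event inventory trace is piecewise constant, and equidistant zero-order-hold sampling can miss entire excursions when the sampling interval exceeds the gap between events. The event trace below is a hypothetical example, not data from the article.

```python
def level_at(events, t):
    """Inventory level at time t, given a time-ordered event list [(time, level), ...]."""
    level = 0
    for et, el in events:
        if et <= t:
            level = el
        else:
            break
    return level

def sample(events, dt, horizon):
    """Equidistant zero-order-hold sampling of the inventory evolution."""
    steps = int(horizon / dt)
    return [level_at(events, k * dt) for k in range(steps + 1)]

# hypothetical job-shop inventory trace: (event time, new level)
events = [(0.0, 0), (1.2, 3), (2.5, 1), (4.1, 4)]
```

Sampling at dt = 1.0 records every level that occurred (with lag), whereas dt = 2.5 misses the excursion to level 3 entirely, which is exactly the completeness loss that error curves over the sampling rate make visible.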
Abstract: When a learning theory tries to borrow from science a framework on which to found its method, it exhibits paradoxes and paralysing contradictions. These result, on one hand, from adopting a learning/teaching model as if it were a mere "transfer of data" (the mechanical learning approach), and on the other hand from borrowing complexity theory (an indeterministic and non-linear model), which risks nullifying every educational effort. This work aims to describe the existing criticism and to unveil the antinomic nature of such paradoxes, focusing on a view in which neither the mechanical learning perspective nor the chaotic and non-linear model can threaten and jeopardize the educational work. The author retraces the steps that led to these paradoxes and unveils their antinomic nature. This could serve to explain some current misunderstandings about the real usefulness of ICT within young people's learning process and growth.
Abstract: Due to important issues such as deadlock, starvation, communication, non-deterministic behavior and synchronization, concurrent systems are very complex, sensitive, and error-prone. Ensuring the reliability and accuracy of these systems is therefore essential, and there has been considerable interest in the formal specification of concurrent programs in recent years. Nevertheless, some features of concurrent systems, such as dynamic process creation, scheduling and starvation, have not yet been specified formally. Other features have been specified only partially, or have been described using a combination of several different formalisms and methods whose integration requires considerable effort. In other words, a comprehensive and integrated specification covering all aspects of concurrent systems has not yet been provided. This paper therefore makes two major contributions: firstly, it provides a comprehensive formal framework to specify all well-known features of concurrent systems; secondly, it provides an integrated specification of these features using just a single formal notation, namely the Z language.
Abstract: In this paper, some practical solid transportation models are formulated considering the per-trip capacity of each type of conveyance, with crisp and rough unit transportation costs. This is applicable to systems in which full vehicles, e.g. trucks or rail coaches, are booked for the transportation of products, so that the transportation cost is determined on the basis of full conveyance loads. The models with unit transportation costs as rough variables are transformed into deterministic forms using rough chance-constrained programming with the help of the trust measure. Numerical examples are provided to illustrate the proposed models in a crisp environment as well as with unit transportation costs as rough variables.
Abstract: Maintenance costs incurred on buildings differ. The differences can result from the type, function, age, building health index, size, form, height, location and complexity of the building. These factors contribute to the difficulty of developing a deterministic maintenance cost model. This paper reports preliminary findings on the creation of building maintenance cost distributions for universities in Malaysia. The study is triggered by the need to provide guidance on maintenance cost distributions for decision making. For this purpose, a questionnaire survey was conducted to investigate the distribution of maintenance costs in the universities. Altogether, responses were received from twenty universities, comprising both private and publicly owned institutions. The research found that engineering services, roofing and finishes were the elements contributing the largest share of the maintenance costs. Furthermore, the study indicates the significance of maintenance cost distributions as a decision-making tool for maintenance management.
Abstract: This paper proposes new optimization techniques for a gas processing plant with uncertain feed and product flows. The problem is first formulated using a continuous linear deterministic approach. Subsequently, single and joint chance-constrained models for a steady-state process with time-dependent uncertainties are developed. The solution approach is based on converting the probabilistic problems into their equivalent deterministic forms and solving them at different confidence levels. A case study of a real plant operation has been used to effectively implement the proposed model. The optimization results indicate that prior decisions have to be made for an operating plant under uncertain feed and product flows by satisfying all the constraints at a 95% confidence level for the single chance-constrained case and an 85% confidence level for the joint chance-constrained case.
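The plant model itself is not reproduced here, but the kind of conversion the abstract describes can be sketched for a single chance constraint with a normally distributed right-hand side: the probabilistic constraint collapses to a deterministic linear one via the normal quantile. The numbers below are hypothetical.

```python
from statistics import NormalDist

def deterministic_bound(mu, sigma, alpha):
    """Deterministic equivalent of the single chance constraint
    P(a@x <= b) >= alpha with b ~ N(mu, sigma^2): it holds iff
    a@x <= mu + sigma * Phi^{-1}(1 - alpha)."""
    return mu + sigma * NormalDist().inv_cdf(1.0 - alpha)

b95 = deterministic_bound(100.0, 10.0, 0.95)   # ~83.55: tighter than the mean
b85 = deterministic_bound(100.0, 10.0, 0.85)   # ~89.64: looser at lower confidence
```

Raising the confidence level tightens the deterministic bound, which is why the 95% single-constraint case and the 85% joint case in the abstract trade feasibility against reliability.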
Abstract: With the increasing complexity of engineering problems, traditional single-objective, deterministic optimization methods cannot meet practical requirements. In this paper, a multi-objective fuzzy optimization model of resource input is built for the M chlor-alkali chemical eco-industrial park. First, the model is transformed, using fuzzy theory, into a form that can be solved by a genetic algorithm. A fitness function is then constructed for the genetic algorithm. Finally, a numerical example is presented to show that the method is more practical and efficient than the traditional single-objective optimization method.
Abstract: The analysis of the electromagnetic environment using deterministic mathematical models is hampered by the impossibility of analyzing a large number of interacting network stations with a priori unknown parameters; this is characteristic, for example, of mobile wireless communication networks. One of the tasks of the tools used in the design, planning and optimization of mobile wireless networks is to simulate the electromagnetic environment using mathematical modelling methods, including computer experiments, and to estimate its effect on radio communication devices. This paper proposes the development of a statistical model of the electromagnetic environment of a mobile wireless communication network by describing the parameters and factors affecting it, including the propagation channel, together with their statistical models.
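A standard statistical propagation-channel ingredient of such models can be sketched as log-distance path loss with log-normal shadowing; the exponent and shadowing deviation below are illustrative textbook values, not parameters from the paper.

```python
import math
import random

def path_loss_db(d, d0=1.0, pl0=40.0, n=3.5, sigma=8.0):
    """Log-distance path loss with log-normal shadowing (dB):
    PL(d) = PL0 + 10*n*log10(d/d0) + X,  X ~ N(0, sigma^2).
    Exponent n and shadowing sigma are illustrative, not from the paper."""
    return pl0 + 10.0 * n * math.log10(d / d0) + random.gauss(0.0, sigma)

random.seed(1)
samples = [path_loss_db(200.0) for _ in range(20000)]
mean_pl = sum(samples) / len(samples)
# the sample mean approaches the deterministic part 40 + 35*log10(200) ≈ 120.5 dB
```

Replacing a deterministic loss value with a distribution like this is what lets a computer experiment estimate interference statistics over many stations with unknown parameters.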
Abstract: The primary objective of this paper is to propose a new method for solving the assignment problem under uncertainty. In the classical assignment problem (AP), zpq denotes the cost of assigning the qth job to the pth person, and it is deterministic in nature. Here, under uncertainty, we assign a cost in the form of a composite relative degree Fpq instead of zpq, and this replacement cost is in maximization form. The problem is solved and validated by two proposed algorithms: a new mathematical formulation of the IVIF assignment problem is presented, where the cost is considered to be an IVIFN and the membership of elements in the set can be explained by positive and negative evidence. To determine the composite relative degree of similarity of IVIFSs, the concepts of similarity measure and score function are used to validate the solution obtained by the composite relative similarity degree method. Further, a hypothetical numerical illustration is conducted to clarify the effectiveness and feasibility of the method developed in the study. Finally, conclusions and suggestions for future work are given.
Abstract: This paper discusses the applicability of the Data Distribution Service (DDS) to the development of automated and modular manufacturing systems, which require a flexible and robust communication infrastructure. DDS is an emergent standard for data-centric publish/subscribe middleware that provides an infrastructure for platform-independent, many-to-many communication. It particularly addresses the needs of real-time systems that require deterministic data transfer, have low memory footprints, and have high robustness requirements. After an overview of the standard, several aspects of DDS are related to current challenges in the development of modern manufacturing systems with distributed architectures. Finally, an example application based on a modular active fixturing system is presented to illustrate the described aspects.
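The many-to-many decoupling at the heart of the publish/subscribe pattern can be sketched in a few lines. This is deliberately not the DDS API (vendor APIs differ, and real DDS adds QoS policies, discovery and deterministic transport); the topic name and sample below are hypothetical.

```python
from collections import defaultdict
from typing import Any, Callable

class Bus:
    """Minimal topic-based publish/subscribe dispatcher. Real DDS layers QoS
    policies, discovery and deterministic transport on top of this pattern."""
    def __init__(self) -> None:
        self._subs: dict = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, sample: Any) -> None:
        for handler in self._subs[topic]:
            handler(sample)

bus = Bus()
received = []
bus.subscribe("fixture/state", received.append)   # hypothetical topic name
bus.subscribe("fixture/state", received.append)   # many-to-many: a second reader
bus.publish("fixture/state", {"clamp": 2, "closed": True})
```

Because publishers address topics rather than peers, modules of a modular fixturing system can be added or swapped without reconfiguring the others, which is the flexibility argument the abstract makes.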
Abstract: In this paper, bi-annual time series data on unemployment rates (from the Labour Force Survey) are expanded to quarterly rates and linked to quarterly unemployment rates (from the Quarterly Labour Force Survey). The resulting linked series and the consumer price index (CPI) series are examined using Johansen's cointegration approach and vector error correction modeling. The study finds that both series are integrated of order one and are cointegrated: a statistically significant cointegrating relationship exists between the time series of unemployment rates and the CPI. The study models this relationship using Vector Error Correction Models (VECM), one with a restriction on the deterministic term and the other with no restriction.
A formal statistical confirmation of the existence of a unique linear and lagged relationship between inflation and unemployment for the period between September 2000 and June 2011 is presented. For the given period, the CPI was found to be an unbiased predictor of the unemployment rate. This relationship can be explored further for the development of appropriate forecasting models incorporating other study variables.
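The error-correction logic behind such models can be sketched in miniature. This is a simplified Engle-Granger-style two-step on synthetic data, not the Johansen procedure or the study's series: estimate the cointegrating coefficient by OLS, then regress the differenced series on the lagged error-correction term, whose coefficient (the adjustment speed) should be negative.

```python
import random

random.seed(0)
# synthetic cointegrated pair: x is a random walk, y = 2*x + stationary noise
x = [0.0]
for _ in range(499):
    x.append(x[-1] + random.gauss(0, 1))
y = [2.0 * xi + random.gauss(0, 0.5) for xi in x]

def ols_slope(u, v):
    """Slope of the simple linear regression of v on u."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return (sum((a - mu) * (b - mv) for a, b in zip(u, v))
            / sum((a - mu) ** 2 for a in u))

beta = ols_slope(x, y)                           # cointegrating coefficient, near 2
ect = [yi - beta * xi for xi, yi in zip(x, y)]   # error-correction term
dy = [y[t] - y[t - 1] for t in range(1, len(y))]
gamma = ols_slope(ect[:-1], dy)                  # adjustment speed, expected negative
```

A negative adjustment coefficient means deviations from the long-run relationship are corrected over time, which is the property the VECM in the study exploits for forecasting.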
Abstract: Designing modern machine tools is a complex task. A
simulation tool to aid the design work, a virtual machine, has
therefore been developed in earlier work. The virtual machine
considers the interaction between the mechanics of the machine
(including structural flexibility) and the control system. This paper
exemplifies the usefulness of the virtual machine as a tool for product
development. An optimisation study is conducted aiming at
improving the existing design of a machine tool regarding weight and
manufacturing accuracy at maintained manufacturing speed. The
problem can be categorised as constrained multidisciplinary multi-objective
multivariable optimisation. Parameters of the control system and
geometric quantities of the machine are used as design variables. This
results in a mix of continuous and discrete variables and an
optimisation approach using a genetic algorithm is therefore
deployed. The accuracy objective is evaluated according to
international standards. The complete system model shows
non-deterministic behaviour. A strategy to handle this, based on statistical
analysis is suggested. The weight of the main moving parts is reduced
by more than 30 per cent and the manufacturing accuracy is
improved by more than 60 per cent compared to the original
design, with no reduction in manufacturing speed. It is also shown
that interaction effects exist between the mechanics and the control,
i.e. this improvement would most likely not have been possible with a
conventional sequential design approach within the same time, cost
and general resource frame. This indicates the potential of the virtual
machine concept for contributing to improved efficiency of both
complex products and the development process for such products.
Companies incorporating such advanced simulation tools in their
product development could thus improve their own competitiveness as
well as contribute to improved resource efficiency of society at large.
Abstract: In this paper the design of maximally flat linear-phase finite impulse response (FIR) filters is considered. The problem is handled with two entirely different approaches. The first is a completely deterministic numerical approach in which the problem is formulated as a Linear Complementarity Problem (LCP). The other is based on a combination of a Markov Random Field (MRF) approach with a messy genetic algorithm (MGA). MRFs are a class of probabilistic models that have been applied for many years to the analysis of visual patterns and textures. Our objective is to establish MRFs as an interesting approach to modeling messy genetic algorithms. We establish a theoretical result that every genetic algorithm problem can be characterized in terms of an MRF model. This allows us to construct an explicit probabilistic model of the MGA fitness function and to introduce the Ising MGA. Experiments with the Ising MGA are less costly than those with the standard MGA since far fewer computations are involved; the LCP requires the fewest computations of all. Results of the LCP, random search, random seeded search, MGA, and Ising MGA are discussed.
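For context on the target filter class (this is the standard binomial construction, not the paper's LCP or MGA methods): taps proportional to binomial coefficients give a linear-phase lowpass whose frequency response is cos(w/2)**n, maximally flat at w = 0 and w = pi.

```python
import math

def maxflat_fir(n):
    """Binomial-coefficient lowpass FIR of length n+1: h[k] = C(n, k) / 2**n.
    Frequency response cos(w/2)**n is maximally flat at w = 0 and w = pi."""
    return [math.comb(n, k) / 2 ** n for k in range(n + 1)]

h = maxflat_fir(6)
# symmetric taps give exactly linear phase; unity DC gain; a true zero at Nyquist
nyquist_gain = sum(c * (-1) ** k for k, c in enumerate(h))
```

The tap symmetry is what guarantees exactly linear phase, and the maximal flatness conditions are what the LCP formulation encodes as complementarity constraints.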
Abstract: In this article an evolutionary technique is used to solve nonlinear Riccati differential equations of fractional order. In this method, a genetic algorithm is used as a tool for competent global search, hybridized with an active-set algorithm for efficient local search. The proposed method has been successfully applied to solve different forms of Riccati differential equations. The strength of the proposed method lies in its equal applicability to the integer-order case as well as the fractional-order case. The method has been compared with standard numerical techniques as well as analytic solutions. It is found that the designed method can provide solutions with better accuracy than its deterministic counterparts. Another advantage of the given approach is that it provides results on the entire finite continuous domain, unlike other numerical methods, which provide solutions only on a discrete grid of points.
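The basic idea of evolving a trial solution that minimizes the equation residual can be illustrated on the integer-order Riccati equation y' = 1 - y^2, y(0) = 0 (exact solution tanh t). The toy below uses a (1+1) evolution strategy on a cubic trial solution, far simpler than the paper's GA hybridized with an active-set method; the grid, mutation scale and trial form are all illustrative choices.

```python
import random

ts = [i / 20 for i in range(21)]                 # grid on [0, 1]

def fitness(c):
    """Squared residual of y' = 1 - y^2 on the grid, plus a penalty for the
    initial condition y(0) = 0, for the cubic trial y(t) = c0+c1*t+c2*t^2+c3*t^3."""
    err = c[0] ** 2
    for t in ts:
        y = c[0] + c[1] * t + c[2] * t * t + c[3] * t ** 3
        dy = c[1] + 2 * c[2] * t + 3 * c[3] * t * t
        err += (dy - (1 - y * y)) ** 2
    return err

random.seed(3)
best = [0.0, 0.0, 0.0, 0.0]
start_f = best_f = fitness(best)                 # 21.0 at the zero vector
for _ in range(2000):                            # simple (1+1) evolution strategy
    cand = [b + random.gauss(0, 0.05) for b in best]
    f = fitness(cand)
    if f < best_f:
        best, best_f = cand, f
```

Note that the evolved coefficients define y(t) on the whole interval, not just at grid points, which mirrors the continuous-domain advantage claimed in the abstract.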
Abstract: Perishable goods constitute a large portion of retailer inventory and lose value with time due to deterioration and/or obsolescence. Retailers dealing with such goods are required to consider the factors of short shelf life and the dependency of sales on displayed inventory in determining an optimal procurement policy. Many retailers follow the practice of using two bins: a primary bin that sells fresh items at a list price and a secondary bin that sells unsold items, transferred from the primary bin on attaining a certain age, at a discounted price. In this paper, mathematical models are developed for the primary bin and for the secondary bin that maximize profit, with decision variables of order quantities, optimal review period and optimal selling price at the secondary bin. The demand rates in the two bins are assumed to be deterministic and dependent on displayed inventory level, price and age, but independent of each other. The validity of the model is shown by solving an example, and a sensitivity analysis of the model is also reported.
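A toy version of one decision in such a model, choosing the secondary-bin discount price under deterministic, price- and displayed-stock-dependent demand, can be sketched by grid search. The linear demand form and every number below are hypothetical, not the paper's model.

```python
def profit(price, stock, cost, a=100.0, b=4.0, c=0.5):
    """One-period secondary-bin profit with displayed-stock- and price-dependent
    demand D = a - b*price + c*stock, capped by the stock on hand (illustrative)."""
    demand = max(0.0, a - b * price + c * stock)
    return (price - cost) * min(stock, demand)

def best_price(stock, cost, grid):
    """Grid search for the discount price that maximizes secondary-bin profit."""
    return max(grid, key=lambda p: profit(p, stock, cost))

grid = [10.0 + 0.25 * k for k in range(60)]       # candidate discount prices
p_star = best_price(60, 5.0, grid)                # optimal price on this grid
```

Below a threshold price all stock sells and profit rises with price; above it, demand becomes binding and profit turns concave, so an interior optimum appears, which is the trade-off the secondary-bin pricing variable captures.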