Abstract: This paper develops a robust deadlock control technique, based on structural analysis and colored Petri nets, for shared and unreliable resources in automated manufacturing systems (AMSs). The technique consists of three steps. The first step uses strict minimal siphon control to create a live (deadlock-free) system without considering resource failures. The second step applies a colored-Petri-net-based approach in which all monitors designed in the first step are merged into a single monitor. The third step addresses the deadlock control problems caused by resource failures: for all resource failures in the Petri net model, a common recovery subnet based on colored Petri nets is proposed and added to the system obtained in the second step to make it reliable. The proposed approach is evaluated using an AMS from the literature. The results show that it can be applied to an unreliable complex Petri net model, has a simpler structure and lower computational complexity, and yields one common recovery subnet that models all resource failures.
Abstract: As part of the development of a 4D autopilot system for unmanned aerial vehicles (UAVs), i.e. a time-dependent robust trajectory generation and control algorithm, this work addresses the problem of optimal path control based on flight sensor data output that may be unreliable due to noise in data acquisition and/or transmission under certain circumstances. Although several filtering methods are available, such as the Kalman-Bucy filter or Linear Quadratic Gaussian/Loop Transfer Recovery control (LQG/LTR), the sheer complexity of the control system, together with the robustness and reliability required of such a system on a UAV for airworthiness-certifiable autonomous flight, called for the development of a dedicated robust filter for a nonlinear system, as a way of further mitigating error propagation to the control system and improving its performance. As such, a nonlinear algorithm based upon the LQG/LTR, validated through computational simulation testing, is proposed in this paper.
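The filtering idea underlying the abstract above can be sketched in its simplest, scalar form: a Kalman filter that blends a model prediction with noisy sensor readings. This is a minimal stand-in, not the paper's nonlinear LQG/LTR design; all parameter values (process noise `q`, measurement noise `r`) are illustrative.

```python
import random

def kalman_1d(measurements, q=1e-3, r=0.5 ** 2, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a slowly varying signal observed in noise.
    q: process-noise variance, r: measurement-noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: random-walk model, state unchanged, uncertainty grows.
        p += q
        # Update: blend prediction with measurement via the Kalman gain.
        k = p / (p + r)
        x += k * (z - x)
        p *= 1 - k
        estimates.append(x)
    return estimates

# Noisy readings of a constant true value, as from an unreliable sensor.
random.seed(42)
true_value = 10.0
zs = [true_value + random.gauss(0, 0.5) for _ in range(200)]
est = kalman_1d(zs)
```

The filtered estimates settle far closer to the true value than the raw readings, which is the error-mitigation effect the abstract aims at.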
Abstract: Building systems are highly vulnerable to different kinds of faults and failures. In fact, various faults, failures and human behaviors can affect building performance. This paper tackles the detection of unreliable sensors in buildings. Different literature surveys on diagnosis techniques for sensor grids in buildings have been published, but all of them treat only bias and outliers; occurrences of data gaps have not been given an adequate span of attention in academia. The proposed methodology comprises automatic thresholding for data gap detection for a set of heterogeneous sensors in instrumented buildings. Sensor measurements are considered to be regular time series; in reality, however, sensor values are not uniformly sampled. The issue to solve is therefore: beyond which delay should a sensor be considered faulty? Time series analysis is required to detect abnormalities in the delays. The efficiency of the method is evaluated on measurements obtained from a real platform: an office at Grenoble Institute of Technology equipped with 30 sensors.
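The delay-thresholding step can be sketched as follows: compute the delays between consecutive samples and flag any delay that exceeds an automatically derived threshold. The median + k·MAD rule here is an illustrative stand-in; the paper's actual thresholding procedure may differ.

```python
import statistics

def detect_gaps(timestamps, k=3.0):
    """Flag abnormal inter-sample delays as data gaps.
    Threshold = median delay + k * MAD (illustrative automatic rule)."""
    delays = [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]
    med = statistics.median(delays)
    mad = statistics.median(abs(d - med) for d in delays)
    threshold = med + k * max(mad, 1e-9)  # guard against zero MAD
    # Return (index of the late sample, observed delay) for each gap.
    return [(i + 1, d) for i, d in enumerate(delays) if d > threshold]

# A sensor sampled roughly every time unit, with one long outage.
timestamps = [0, 1, 2, 3, 10, 11, 12]
gaps = detect_gaps(timestamps)
```

For this trace the method flags the single 7-unit gap while treating the regular 1-unit spacing as normal.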
Abstract: Renewable energy sources and distributed power generation units already play an important role in electrical power generation. A mixture of different technologies penetrating the electrical grid adds complexity to the management of distribution networks. High penetration of distributed power generation units creates node over-voltages, large power losses, unreliable power management, reverse power flow and congestion. This paper presents an optimization algorithm capable of reducing congestion and power losses, both combined in a weighted-sum objective. Two factors that describe congestion are proposed. An upgraded selective particle swarm optimization (SPSO) algorithm is used as a solution tool focusing on the technique of network reconfiguration. The upgrade to the SPSO algorithm is achieved by adding a heuristic algorithm specializing in the reduction of power losses, and several scenarios are tested. Results show significant improvement in the minimization of losses and congestion while achieving very small calculation times.
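The weighted-sum objective mentioned above can be sketched as a single scalar score that the optimizer minimizes over candidate network configurations. The weights, the congestion factors, and the candidate data below are all hypothetical; the paper's actual factors are not reproduced here.

```python
def weighted_objective(p_loss, congestion_factors, w_loss=0.6, w_cong=0.4):
    """Weighted sum of normalized power losses and average congestion
    (illustrative weights; the paper defines its own congestion factors)."""
    congestion = sum(congestion_factors) / len(congestion_factors)
    return w_loss * p_loss + w_cong * congestion

# Hypothetical switch configurations: (normalized losses, congestion factors).
candidates = {
    "config_A": (0.30, [0.7, 0.9]),
    "config_B": (0.18, [0.5, 0.6]),
    "config_C": (0.22, [0.3, 0.4]),
}
best = min(candidates, key=lambda c: weighted_objective(*candidates[c]))
```

In the full method an SPSO swarm would search the (much larger) space of reconfiguration options for the configuration minimizing this score; here a direct comparison over three candidates illustrates the objective.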
Abstract: The demand for smart visual thing recognition in various devices has increased rapidly in recent years for daily smart production, living and learning systems. This paper proposes a visual thing recognition system that combines binary scale-invariant feature transform (SIFT), the bag-of-words (BoW) model, and a support vector machine (SVM) using color information. Traditional SIFT features and SVM classifiers use only grayscale information, yet color remains an important feature for visual thing recognition. With color-based SIFT features and an SVM, we can discard unreliable matching pairs and increase the robustness of matching tasks. The experimental results show that the proposed object recognition system with the color-assisted SIFT SVM classifier achieves a higher recognition rate than the traditional gray SIFT and SVM classification in various situations.
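The color check used to discard unreliable matching pairs can be illustrated with coarse color histograms: a keypoint match is kept only if the colors around the two keypoints agree. This is a simplified stand-in for the paper's color-based SIFT features; the bin count and tolerance are assumptions.

```python
def color_hist(pixels, bins=4):
    """Coarse normalized RGB histogram; pixels are (r, g, b) tuples in 0..255."""
    h = [0] * (bins ** 3)
    for r, g, b in pixels:
        h[(r * bins // 256) * bins * bins + (g * bins // 256) * bins
          + (b * bins // 256)] += 1
    return [c / len(pixels) for c in h]

def reliable_match(hist_a, hist_b, tol=0.5):
    """Keep a keypoint match only if the local color histograms agree
    (L1 distance below tol) - the kind of check that filters out
    grayscale-only SIFT matches between differently colored regions."""
    return sum(abs(a - b) for a, b in zip(hist_a, hist_b)) < tol

red_patch = color_hist([(250, 10, 10)] * 16)
blue_patch = color_hist([(10, 10, 250)] * 16)
```

A red-to-red match passes the check, while a red-to-blue match (which grayscale SIFT could easily accept, since the intensities are similar) is rejected.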
Abstract: Geosynthetics play an important role in the construction of highways with no additive layers, such as asphalt concrete or cement concrete, or in a subgrade layer, which affects the bearing capacity of unbound layers. This laboratory experimental study was carried out to evaluate changes in the load bearing capacity of soil reinforced with these materials in a highway roadbed, with regard to geotextile properties. California Bearing Ratio (CBR) test samples were prepared with two types of soil, clayey and sandy, in both non-reinforced and reinforced forms. The samples comprised three types of geotextiles with different characteristics (150, 200, 300 g/m2) and depths (H = 5, 10, 20, 30, 50, 100 mm), and were grouped into two forms, one-layered and two-layered, based on the sample materials, in order to perform the defined tests. The results showed that the soil bearing characteristics increased when one layer of geotextile was used in clayey and sandy samples. However, placing the geotextile layer at a depth of more than 30 mm had no remarkable effect on the bearing capacity of the soil. Furthermore, when the two-layered geotextile was applied, although it increased the soil resistance, increasing the number or weight of geotextile layers changed the natural composition of the soil and made the results unreliable.
Abstract: Surrogate models have received increasing attention for detecting damage in structures based on vibration modal parameters. However, uncertainties in the measured vibration data may lead to false or unreliable output from such models. In this study, an efficient approach based on Monte Carlo simulation is proposed to take into account the effect of uncertainties in developing a surrogate model. The probability of damage existence (PDE) is calculated based on the probability density functions of the undamaged and damaged states. The kriging technique allows one to genuinely quantify the surrogate error, and it is therefore chosen as the metamodeling technique. An enhanced version of the ideal gas molecular movement (EIGMM) algorithm is used as the main algorithm for model updating. The developed approach is applied to detect simulated damage in numerical models of a 72-bar space truss and a 120-bar dome truss. The simulation results show that the proposed method performs well in probability-based damage detection of structures, with less computational effort than the direct finite element model.
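The Monte Carlo flavor of the PDE computation can be sketched as follows: draw samples from the damaged-state and undamaged-state distributions and estimate the probability that a damaged-state draw exceeds an undamaged-state one. The definition and the Gaussian distributions below are illustrative assumptions, not the paper's exact formulation.

```python
import random

def probability_of_damage(sample_damaged, sample_undamaged, n=10000, seed=1):
    """Monte Carlo estimate of the probability of damage existence (PDE):
    the chance a draw from the damaged-state distribution of a damage index
    exceeds a draw from the undamaged-state one (illustrative definition)."""
    rng = random.Random(seed)
    hits = sum(sample_damaged(rng) > sample_undamaged(rng) for _ in range(n))
    return hits / n

# Hypothetical damage-index distributions with overlapping uncertainty bands.
pde = probability_of_damage(lambda rng: rng.gauss(1.0, 0.2),
                            lambda rng: rng.gauss(0.5, 0.2))
```

With these toy distributions the estimate lands near the analytic value of about 0.96, showing how measurement uncertainty turns a yes/no damage verdict into a probability.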
Abstract: A digital signature is an electronic signature form used by an original signer to sign a specific document. When the original signer is away from the office or traveling, he/she delegates his/her signing capability to a proxy signer, who then generates a signed message on behalf of the original signer. The two parties must be able to authenticate one another and agree on a secret encryption key in order to communicate securely over an unreliable public network. Authenticated key agreement protocols play an important role in building a secure communications channel between the two parties. In this paper, we present a secure proxy signature scheme over an efficient and secure authenticated key agreement protocol based on the factoring and discrete logarithm problems.
Abstract: A proxy signature scheme permits an original signer to delegate his/her signing capability to a proxy signer, who then generates a signed message on behalf of the original signer. The two parties must be able to authenticate one another and agree on a secret encryption key in order to communicate securely over an unreliable public network. Authenticated key agreement protocols play an important role in building a secure communications network between the two parties. In this paper, we present a secure proxy signature scheme over an efficient and secure authenticated key agreement protocol based on the discrete logarithm problem.
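The discrete-logarithm setting underlying key agreement protocols of this kind can be illustrated with textbook Diffie-Hellman in Z_p*. This sketch is not the paper's authenticated protocol (it omits the authentication step entirely) and uses toy parameters; real deployments need primes of at least 2048 bits.

```python
import random

# Toy Diffie-Hellman key agreement in the discrete-logarithm setting.
p = 2 ** 64 - 59   # a 64-bit prime (toy size, illustration only)
g = 2

rng = random.Random(7)
a = rng.randrange(2, p - 1)   # original signer's secret exponent
b = rng.randrange(2, p - 1)   # proxy signer's secret exponent

A = pow(g, a, p)              # public values exchanged over the open network
B = pow(g, b, p)

k_signer = pow(B, a, p)       # each party computes g^(a*b) mod p locally
k_proxy = pow(A, b, p)
```

Both parties derive the same shared secret without it ever crossing the network; an eavesdropper would have to solve a discrete logarithm to recover it. The paper's contribution layers authentication on top of this exchange to defeat man-in-the-middle attacks.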
Abstract: In Pakistan, the major demand for fuel wood and timber is met by agroforestry. However, information regarding the economic significance of agroforestry and its productivity in Pakistan is still insufficient and unreliable. Surveying field conditions to examine agroforestry status at the local level helps reveal future trends and informs policies for a sustainable wood supply. The objectives of this research were to examine the actual status and potential of agroforestry and to point out the barriers that farmers face in adopting agroforestry. The research was carried out in Chiniot district, Pakistan, a city famous for its furniture industry, which is largely dependent on farm trees. A detailed survey of district Chiniot was carried out among 150 randomly selected farmer respondents using a multi-objective, pre-tested questionnaire. It was found that the linear tree planting method was adopted more widely (45%) than linear + interplanting (42%) and/or compact planting (12.6%). Chi-square values at P-value
Abstract: Connected vehicles are one of the promising technologies for future Intelligent Transportation Systems (ITS). A connected vehicle system is essentially a set of vehicles communicating through a network to exchange information with each other and the infrastructure. Although this interconnection of vehicles can potentially help create an efficient, sustainable, and green transportation system, a set of safety and reliability challenges accompanies this technology. The first challenge arises from information loss due to an unreliable communication network, which affects the control/management of the individual vehicles and the overall system. Such a scenario may lead to degraded or even unsafe operation, which could be potentially catastrophic. Secondly, faulty sensors and actuators can affect an individual vehicle's safe operation and in turn create a potentially unsafe node in the vehicular network. Furthermore, propagating that faulty sensor information to other vehicles, together with actuator failures, may significantly affect the safe operation of the overall vehicular network. It is therefore of utmost importance to take these issues into consideration while designing the control/management algorithms of individual vehicles as part of a connected vehicle system. In this paper, we consider a connected vehicle system under Co-operative Adaptive Cruise Control (CACC) and propose a fault diagnosis scheme that deals with these challenges. Specifically, the conventional CACC algorithm is modified by adding a Kalman filter-based estimation algorithm to suppress the effect of lost information under an unreliable network. Furthermore, a sliding mode observer-based algorithm is used to improve sensor reliability under faults. The effectiveness of the overall diagnostic scheme is verified via simulation studies.
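The way a Kalman filter suppresses the effect of lost packets can be sketched in scalar form: when a measurement is missing, the filter simply skips the update step and holds the model prediction. This is a minimal illustration of the mechanism, not the paper's CACC estimator; all parameters are illustrative.

```python
def kalman_with_loss(measurements, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter that tolerates packet loss: when a measurement
    is None (lost), only the prediction step runs and uncertainty grows."""
    x, p, out = x0, p0, []
    for z in measurements:
        p += q                    # predict (random-walk model)
        if z is not None:         # update only when a packet arrived
            k = p / (p + r)
            x += k * (z - x)
            p *= 1 - k
        out.append(x)
    return out

# A preceding vehicle broadcasting a steady speed, with packets 10-14 lost.
readings = [5.0] * 30
for i in range(10, 15):
    readings[i] = None
filtered = kalman_with_loss(readings)
```

During the outage the estimate coasts on the last prediction instead of dropping to zero or jumping, which is exactly the degraded-but-safe behavior the modified CACC aims for.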
Abstract: Opportunistic Routing (OR) increases transmission reliability and network throughput. Traditional routing protocols preselect one or more predetermined nodes before transmission starts and use a predetermined neighbor to forward a packet in each hop. Opportunistic routing overcomes the drawback of unreliable wireless transmission by broadcasting, so that one transmission can be overheard by multiple neighbors. The first cooperation-optimal protocol for Multirate OR (COMO) is used to achieve social efficiency and prevent selfish behavior of the nodes. The novel link-correlation-aware OR improves performance by exploiting the diverse, low-correlated forward links. Context-aware Adaptive OR (CAOR) uses an active suppression mechanism to reduce packet duplication. Context-aware OR (COR) can provide efficient routing in mobile networks. Cooperative Opportunistic Routing in Mobile Ad hoc Networks (CORMAN) tackles the problem of opportunistic data transfer. Comparing all these protocols, COMO is the best, as it achieves social efficiency and prevents selfish behavior of the nodes.
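The basic gain that all the OR variants above exploit can be quantified: with independent per-link delivery probabilities, the chance that at least one neighbor in the forwarder set overhears a single broadcast exceeds that of any single preselected link. A small sketch, assuming independent links (real links are correlated, which is exactly what link-correlation-aware OR addresses):

```python
def anypath_delivery(link_probs):
    """Probability that at least one candidate forwarder overhears one
    broadcast, given independent per-link delivery probabilities."""
    miss = 1.0
    for p in link_probs:
        miss *= 1.0 - p   # all candidates miss the packet
    return 1.0 - miss

# Three lossy links at 50% each: the broadcast gets through 87.5% of the
# time, versus 50% for the best single predetermined next hop.
gain = anypath_delivery([0.5, 0.5, 0.5])
```

This is why OR tolerates unreliable wireless links: the forwarder set acts as built-in diversity, at the cost of needing duplicate-suppression and forwarder-coordination mechanisms such as those in CAOR and COMO.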
Abstract: This research studies the joint production, maintenance and subcontracting control policy for an unreliable deteriorating manufacturing system. Production activities are controlled by a derivation of the Hedging Point Policy, and since the system is subject to deterioration, its capacity to satisfy product demand is progressively reduced. Multiple deterioration effects are considered, reflected mainly in the quality of the parts produced and the reliability of the machine. Subcontracting is available as support to satisfy product demand; also, overhaul maintenance can be conducted to reduce the effects of deterioration. The main objective of the research is to determine simultaneously the production, maintenance and subcontracting rates that minimize the total incurred cost. A stochastic dynamic programming model is developed and solved through a simulation-based approach composed of statistical analysis and optimization with the response surface methodology. The obtained results highlight the strong interactions between production, deterioration and quality, which justify the development of an integrated model. A numerical example and a sensitivity analysis are presented to validate our results.
Abstract: This paper investigates the benefits of deliberately unbalancing both operation time means (MTs) and unreliability (failure and repair rates) for non-automated production lines. The lines were simulated with various line lengths, buffer capacities, degrees of imbalance and patterns of MT and unreliability imbalance. Data on two performance measures, namely throughput (TR) and average buffer level (ABL), were gathered, analyzed and compared to a balanced line counterpart. A number of conclusions were drawn with respect to the ranking of configurations, as well as to the relationships among the independent design parameters and the dependent variables. It was found that the best configurations are a balanced line arrangement and a monotone decreasing MT order, coupled with either a decreasing or a bowl unreliability configuration, with the first generally resulting in a reduced TR and the second leading to a lower ABL than those of a balanced line.
Abstract: Different tools and technologies have been implemented for Crisis Response and Management (CRM), and they generally use the available network infrastructure for information exchange. Depending on the type of disaster or crisis, the network infrastructure may be affected and unable to provide reliable connectivity; any tool or technology that depends on that connectivity then cannot fulfill its functions. As a solution, a new message exchange framework has been developed. The framework provides an offline/online information exchange platform for CRM Information Systems (CRMIS); it uses XML compression and packet prioritization algorithms and is based on open source web technologies. By introducing offline capabilities to web technologies, the framework can perform message exchange over unreliable networks. Experiments in a simulation environment show promising results on low-bandwidth networks (56 kbps and 28.8 kbps) with up to 50% packet loss, and the solution successfully transfers all the information over these low-quality networks where traditional 2- and 3-tier applications failed.
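The payoff of XML compression on a low-bandwidth link can be illustrated with generic `zlib` compression standing in for the framework's algorithm (which is not specified here); the CRMIS message content below is hypothetical. XML's verbose, repetitive tags make it compress very well.

```python
import zlib

# A hypothetical CRMIS status report: many similarly structured elements.
message = ("<crm>" + "".join(
    f"<msg id='{i}' priority='high'>sector status update</msg>"
    for i in range(20)) + "</crm>").encode()

packed = zlib.compress(message, level=9)   # shrink before sending
restored = zlib.decompress(packed)         # receiver recovers the XML exactly
```

The round trip is lossless and, for this repetitive payload, the compressed form is a small fraction of the original size, which matters directly on 28.8-56 kbps links.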
Abstract: In the past, the most widely adopted light source was the incandescent bulb, but with the appearance of LED light sources, traditional light sources have gradually been replaced by LEDs because of their numerous superior characteristics. However, many of the existing standards do not apply to LEDs, as the two light sources are characterized differently. This intensifies the significance of studies on LEDs. As a Kansei design study investigating the visual glare produced by traffic arrows implemented with LEDs, this study conducted a semantic analysis of the styles of traffic arrows used in domestic and international settings. The results can help reduce drivers’ misrecognition, which results in unsuccessful arrival at the destination or in traffic accidents. This study started with a literature review and surveyed the status quo before conducting experiments divided into two parts. The first part involved a screening experiment of arrow samples, where cluster analysis was conducted to choose five representative samples of LED displays. The second part was a semantic experiment on the display of arrows using LEDs, incorporating the five representative samples and ten selected adjectives. Analyzing the results with Quantification Theory Type I, it was found that among the components of the arrows, fletching was the most significant factor influencing the adjectives. A “no fletching” design was more abstract and vague: it lacked the ability to convey the intended message and might bear negative psychological connotations including “dangerous,” “forbidden,” and “unreliable.” The arrow design consisting of “> shaped fletching” was found to be more concrete and definite, showing positive connotations including “safe,” “cautious,” and “reliable.” When a stimulus was placed at a farther distance, the glare could be significantly reduced and the visual evaluation scores were higher. On the contrary, if the fletching and the shaft had a similar proportion, the stimuli received higher evaluations at a closer distance. These results can be applied to the design of traffic arrows that convey information clearly and rapidly. In addition, drivers’ safety could be enhanced by understanding the cause of glare and improving visual recognizability.
Abstract: Recent experimental evidence has shown that, thanks to fast convergence and good accuracy, neural network training via the extended Kalman filter (EKF) method is widely applied. However, under uncertainty in the system dynamics or modeling error, the performance of the method is unreliable. To overcome this problem, in this paper a new finite impulse response (FIR) filter-based learning algorithm is proposed to train radial basis function networks (RBFN) for nonlinear function approximation. Compared to the EKF training method, the proposed FIR filter training method is more robust to such conditions. Furthermore, the number of centers is also considered, since it affects the approximation performance.
Abstract: Estimating the reliability of a computer network has been a subject of great interest. It is a well-known fact that this problem is NP-hard. In this paper we present a very efficient combinatorial approach for Monte Carlo reliability estimation of a network with unreliable nodes and unreliable edges. Its core is the computation of certain combinatorial invariants of the network. These invariants, once computed, directly provide a simple framework for the computation of network reliability. As a specific case of this approach we obtain tight lower and upper bounds for distributed network reliability (the so-called residual connectedness reliability). We also present some simulation results.
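The baseline that such combinatorial approaches improve on is crude Monte Carlo: repeatedly sample which nodes and edges survive, and check whether the surviving nodes form a connected subgraph (residual connectedness). A sketch, assuming independent failures; this is not the paper's invariant-based estimator.

```python
import random

def connected(nodes, edges):
    """Is the graph on `nodes` with `edges` connected? (DFS)"""
    adj = {v: [] for v in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == set(nodes)

def mc_reliability(nodes, edges, p_node, p_edge, trials=20000, seed=3):
    """Crude Monte Carlo estimate of residual connectedness reliability
    with independently failing nodes and edges."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        up_nodes = {v for v in nodes if rng.random() < p_node}
        up_edges = [(u, v) for u, v in edges
                    if u in up_nodes and v in up_nodes
                    and rng.random() < p_edge]
        if up_nodes and connected(up_nodes, up_edges):
            hits += 1
    return hits / trials

# Triangle network, each node and edge up with probability 0.9.
r = mc_reliability({1, 2, 3}, [(1, 2), (2, 3), (1, 3)], 0.9, 0.9)
```

For this triangle the exact value is about 0.95; crude sampling needs many trials to reach tight confidence intervals, which is what the invariant-based framework and the derived lower/upper bounds avoid.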
Abstract: Many factors affect the success of Machine Learning (ML) on a given task. The representation and quality of the instance data is first and foremost. If there is much irrelevant and redundant information present, or noisy and unreliable data, then knowledge discovery during the training phase is more difficult. It is well known that data preparation and filtering steps take a considerable amount of processing time in ML problems. Data pre-processing includes data cleaning, normalization, transformation, feature extraction and selection, etc. The product of data pre-processing is the final training set. It would be convenient if a single sequence of data pre-processing algorithms had the best performance on every data set, but this does not happen. Thus, we present the most well-known algorithms for each step of data pre-processing so that one can achieve the best performance for a given data set.
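Two of the steps named above, cleaning and normalization, can be sketched as a tiny pipeline. This is one possible sequence among many; as the abstract stresses, no single sequence is best for every data set.

```python
def clean(values):
    """Data cleaning in its simplest form: drop missing entries (None)."""
    return [v for v in values if v is not None]

def min_max(values):
    """Normalization: rescale values to the [0, 1] range."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0] * len(values)   # constant feature: nothing to scale
    return [(v - lo) / (hi - lo) for v in values]

def preprocess(raw):
    """One illustrative pre-processing sequence: clean, then normalize.
    The output is (part of) the final training set."""
    return min_max(clean(raw))
```

For example, `preprocess([4, None, 8, 6])` drops the missing entry and maps the rest onto [0, 1]. Reordering or swapping these steps (e.g. imputing instead of dropping, or standardizing instead of min-max scaling) gives a different pipeline whose suitability depends on the data.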
Abstract: This paper presents a model for an unreliable production line, which is operated according to demand with constant work-in-process (CONWIP). A simulation model is developed based on the discrete model, and several case problems are analyzed using it. The model is utilized to optimize storage space capacities at intermediate stages and the number of kanbans at the last stage, which is used to trigger production at the first stage. Furthermore, the effects of several line parameters on the production rate are analyzed using design of experiments.
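The CONWIP mechanism can be sketched with a toy discrete-time simulation: each station completes a part per step with some probability (modeling its unreliability), and a new part is released into the line only while total work-in-process is below a fixed cap. This is a minimal illustration, not the paper's simulation model; the station probabilities and cap are arbitrary.

```python
import random

def conwip_line(station_p, wip_cap, steps, seed=5):
    """Toy discrete-time CONWIP line. station_p[i] is the per-step
    completion probability of station i (its unreliability); a raw part
    enters station 0 only when total WIP is below wip_cap."""
    rng = random.Random(seed)
    buffers = [0] * len(station_p)
    finished = 0
    for _ in range(steps):
        # Move parts downstream first; the last station ships finished goods.
        for i in reversed(range(len(station_p))):
            if buffers[i] and rng.random() < station_p[i]:
                buffers[i] -= 1
                if i + 1 < len(buffers):
                    buffers[i + 1] += 1
                else:
                    finished += 1
        # CONWIP release rule: admit new work only while WIP is under the cap.
        if sum(buffers) < wip_cap:
            buffers[0] += 1
    return finished, finished / steps

finished, rate = conwip_line([0.9, 0.8, 0.9], wip_cap=6, steps=5000)
```

The long-run production rate is bounded by the bottleneck station (0.8 here) and further shaped by the WIP cap, which is the kind of trade-off the paper optimizes via buffer capacities and kanban counts.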