Abstract: This paper is motivated by the importance of multi-sensor image fusion, with specific focus on Infrared (IR) and Visible image (VI) fusion for various applications including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can come from different modalities, such as a visible camera and an IR thermal imager. While visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal (IR) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image and a thermal IR camera acquires the thermal source image. In this paper, image fusion algorithms based upon a Multi-Scale Transform (MST) and a region-based selection rule with consistency verification are proposed and presented. This research includes implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of MST levels and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the validity of the suggested method. Experiments show that the proposed approach is capable of producing good fusion results. While deploying our image fusion approaches, we observed several challenges common to popular image fusion methods: although the high computational cost and complex processing steps of many algorithms yield accurate fused results, they also make those algorithms hard to deploy in systems and applications that require real-time operation, high flexibility, and low computational capability. The methods presented in this paper therefore offer good results with minimal time complexity.
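As an illustration of the MST-plus-region-rule pipeline described above, the following Python/OpenCV sketch fuses two registered grayscale images with a Laplacian pyramid, a local-energy maximum selection rule, and a simple majority-vote consistency check. It is a minimal sketch under assumed parameters (pyramid depth, 3x3 windows), not the paper's MATLAB implementation or its exact fusion rule.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    """Build a Laplacian pyramid (one choice of multi-scale transform)."""
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = []
    for i in range(levels):
        up = cv2.pyrUp(gauss[i + 1], dstsize=gauss[i].shape[1::-1])
        lap.append(gauss[i] - up)
    lap.append(gauss[-1])            # coarsest approximation level
    return lap

def fuse_level(a, b, win=3):
    """Region-energy max rule with majority-vote consistency verification."""
    k = np.ones((win, win), np.float32)
    ea = cv2.filter2D(a * a, -1, k)  # local energy of each source band
    eb = cv2.filter2D(b * b, -1, k)
    decision = (ea >= eb).astype(np.float32)
    # consistency verification: each coefficient follows its neighbourhood majority
    decision = (cv2.filter2D(decision, -1, k / k.sum()) >= 0.5).astype(np.float32)
    return decision * a + (1 - decision) * b

def fuse(ir, vis, levels=4):
    """Fuse registered IR and visible images of identical size."""
    la, lb = laplacian_pyramid(ir, levels), laplacian_pyramid(vis, levels)
    fused = [fuse_level(x, y) for x, y in zip(la[:-1], lb[:-1])]
    fused.append(0.5 * (la[-1] + lb[-1]))      # average the base level
    out = fused[-1]
    for lvl in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=lvl.shape[1::-1]) + lvl
    return np.clip(out, 0, 255).astype(np.uint8)

# example usage (hypothetical file names):
# fused = fuse(cv2.imread("ir.png", 0), cv2.imread("vis.png", 0))
```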
Abstract: This paper advances the conception that complex problems do not necessarily need similarly complex solutions in order to cope with the complexity; rather, a simple solution based on established methods can provide a sufficient way of dealing with it. To verify this conception, the paper focuses on the field of change management as part of the new product development process in the automotive sector. In complexity management, dealing with increasing complexity is essential, yet only inflexible, rigid processes that are not designed to handle complexity are available. The methodology of this paper can be divided into four main sections: 1) analyzing the complexity of change management, 2) a literature review to identify potential solutions and methods, 3) capturing and implementing the expertise of change management experts from an automobile manufacturing company, and 4) a systematic comparison of the methods identified in the literature, connecting them with the defined requirements of change management complexity in order to develop a solution. As a practical outcome, this paper provides a method to capture the complexity of engineering changes (EC) and include it within the EC evaluation process, followed by case-related process guidance to cope with the complexity. Furthermore, this approach supports the conception that dealing with complexity is possible while utilizing rather simple and established methods by combining them into a powerful tool.
Abstract: In recent years, maintenance optimization has attracted special attention due to the growth of industrial systems complexity. Maintenance costs are high for many systems, and preventive maintenance is effective when it increases operations' reliability and safety at a reduced cost. The novelty of this research is to consider general repair in the modeling of multi-unit series systems and solve the maintenance problem for such systems using the semi-Markov decision process (SMDP) framework. We propose an opportunistic maintenance policy for a series system composed of two main units. Unit 1, which is more expensive than unit 2, is subjected to condition monitoring, and its deterioration is modeled using a gamma process. Unit 1 hazard rate is estimated by the proportional hazards model (PHM), and two hazard rate control limits are considered as the thresholds of maintenance interventions for unit 1. Maintenance is performed on unit 2, considering an age control limit. The objective is to find the optimal control limits and minimize the long-run expected average cost per unit time. The proposed algorithm is applied to a numerical example to compare the effectiveness of the proposed policy (policy Ⅰ) with policy Ⅱ, which is similar to policy Ⅰ, but instead of general repair, replacement is performed. Results show that policy Ⅰ leads to lower average cost compared with policy Ⅱ.
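The following Python sketch illustrates the kind of policy described above: unit 1 degradation is simulated with a gamma process, its hazard rate is computed with a proportional hazards model, and two hazard-rate control limits trigger general repair or replacement, while unit 2 is maintained opportunistically at an age limit. All parameter values, the imperfect-repair rule, and the thresholds are illustrative assumptions, not the paper's calibrated SMDP model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not taken from the paper)
ALPHA, BETA = 0.8, 1.5                   # gamma-process shape and scale per step
WEIBULL_BETA, WEIBULL_ETA = 2.0, 40.0    # baseline Weibull hazard of unit 1
GAMMA_PHM = 0.05                         # covariate coefficient in the PHM
H_REPAIR, H_REPLACE = 0.08, 0.20         # the two hazard-rate control limits
AGE_LIMIT_U2 = 30                        # age control limit for unit 2

def phm_hazard(t, z):
    """Weibull baseline hazard scaled by the monitored degradation z(t)."""
    base = (WEIBULL_BETA / WEIBULL_ETA) * (t / WEIBULL_ETA) ** (WEIBULL_BETA - 1)
    return base * np.exp(GAMMA_PHM * z)

def simulate(horizon=200):
    z, t1, t2 = 0.0, 0, 0                # degradation and ages of units 1 and 2
    actions = []
    for step in range(horizon):
        z += rng.gamma(ALPHA, BETA)      # stationary gamma degradation increment
        t1 += 1
        t2 += 1
        h = phm_hazard(t1, z)
        if h >= H_REPLACE:
            actions.append((step, "replace unit 1"))
            z, t1 = 0.0, 0
        elif h >= H_REPAIR:
            actions.append((step, "general repair unit 1"))
            z *= 0.5                     # assumed imperfect repair effect
            t1 = int(0.5 * t1)
        if t2 >= AGE_LIMIT_U2:
            # opportunistic: also service unit 2 when it reaches its age limit
            actions.append((step, "preventive maintenance unit 2"))
            t2 = 0
    return actions

print(simulate()[:5])
```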
Abstract: The goal of option pricing theory is to help investors manage their money, enhance returns and control their financial future by theoretically valuing their options. However, most option pricing models have no analytical solution. Furthermore, not all numerical methods are efficient for solving these models, because the models have non-smooth payoffs or discontinuous derivatives at the exercise price. In this paper, we solve the American option under jump diffusion models by using efficient time-dependent numerical methods. Several techniques are integrated to reduce the computational complexity. The Fast Fourier Transform (FFT) algorithm is used as a matrix-vector multiplication solver, which reduces the complexity from O(M^2) to O(M log M). The partial fraction decomposition technique is applied to rational approximation schemes to overcome the complexity of inverting matrix polynomials. The proposed method is easy to implement in serial or parallel form. Numerical results are presented to demonstrate the accuracy and efficiency of the proposed method.
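The jump (integral) term of a jump-diffusion pricing equation typically discretizes into a dense Toeplitz matrix, and the O(M log M) matrix-vector product mentioned above is obtained by embedding that matrix in a circulant one and multiplying via FFTs. The Python sketch below shows this generic trick and checks it against the dense O(M^2) product; it is not the paper's specific discretization or rational approximation scheme.

```python
import numpy as np

def toeplitz_matvec_fft(first_col, first_row, x):
    """Multiply a Toeplitz matrix (given by its first column/row) by x
    in O(M log M) time by embedding it in a 2M x 2M circulant matrix."""
    m = len(x)
    # first column of the circulant embedding: [col, 0, reversed row tail]
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(m)])))
    return y[:m].real

# quick check against the dense O(M^2) product
m = 512
col = np.random.rand(m)
row = np.concatenate([[col[0]], np.random.rand(m - 1)])
T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(m)]
              for i in range(m)])
x = np.random.rand(m)
assert np.allclose(T @ x, toeplitz_matvec_fft(col, row, x))
```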
Abstract: A Finite Element (FE) based scheme is presented for quantifying guided wave interaction with Localised Nonlinear Structural Damage (LNSD) within structures of arbitrary layering and geometric complexity. The through-thickness mode-shape of the structure is obtained through a Wave and Finite Element (WFE) method. This is applied in a time domain FE simulation in order to generate time-harmonic excitation for a specific wave mode. Interaction of the wave with LNSD within the system is computed through an element activation and deactivation iteration. The scheme is validated against experimental measurements and a WFE-FE methodology for calculating wave interaction with damage. Case studies of guided wave interaction with a crack and a delamination are presented to verify the robustness of the proposed method in classifying and identifying damage.
Abstract: The development of current civilization requires suitable models of its pervasive, massive phenomena. One such phenomenon is the digital transformation, which has a substantial number of disciplinary, methodical interpretations forming a diversified reflection. This reflection can be understood pragmatically as the current temporal, local differential state of knowledge. The model of the discursive space is proposed for the analysis and description of this knowledge. Discursive space is understood as an autonomous multidimensional space in which separate discourses traverse specific trajectories, which can be presented in a multidimensional parallel coordinate system. Discursive space built on the world of facts preserves the complex character of that world. Digital transformation as a discursive space has a relativistic character, meaning that it is simultaneously created by the dynamic discourses while these discourses are molded by the shape of this space.
Abstract: Problem-based learning (PBL) is a student-centered pedagogy that originated in the medical field and has also been used extensively in other knowledge disciplines with recognized advantages and limitations. PBL has been used in various undergraduate engineering programs with mixed outcomes. The current fourth industrial revolution (digital era or Industry 4.0) has made it essential for many science and engineering students to receive effective training in advanced courses such as industrial automation and robotics. This paper presents a case study at Assumption University of Thailand, where a PBL-like approach was used to teach some aspects of automation and robotics to selected groups of undergraduate engineering students. These students were given basic-level training in automation prior to participating in a subsequent training session in which they solved technical problems of increased complexity. The participating students’ evaluation of the training sessions in terms of learning effectiveness, skills enhancement, and incremental knowledge following the problem-solving session was captured through a follow-up survey consisting of 14 questions and a 5-point scoring system. From the most recent training event, 70% of the respondents overall indicated that their skill levels were enhanced to a much greater level than before the training, whereas 60.4% of the respondents from the same event indicated that their incremental knowledge following the session was much greater than what they had prior to the training. The instructor-facilitator involved in the training events suggested that this method of learning was more suitable for senior/advanced-level students than those at the freshman level, as certain skills needed to participate effectively in such problem-solving sessions are acquired over a period of time, not instantly.
Abstract: Biclustering is a method of two-dimensional data analysis. In recent years it has become possible to express this problem in terms of Boolean reasoning, for processing continuous, discrete and binary data. The mathematical background of this approach — the proven ability to induce exact and inclusion-maximal biclusters fulfilling assumed criteria — is a strong advantage of the method. Unfortunately, the core of the method has quite high computational complexity. In this paper the basics of the Boolean reasoning approach to biclustering are presented, and in this context the problems of parallelizing the computation are raised.
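To make the target objects concrete, the Python sketch below enumerates the inclusion-maximal all-ones biclusters of a small binary matrix by brute force. The Boolean reasoning approach referred to above induces the same kind of biclusters via prime implicants of a Boolean function; the naive enumeration here is only for illustration and has exponential cost.

```python
from itertools import combinations
import numpy as np

def all_ones_biclusters(B):
    """Naively enumerate inclusion-maximal all-ones biclusters of a binary matrix.
    (The Boolean-reasoning method obtains the same objects via prime implicants.)"""
    n_rows, n_cols = B.shape
    found = []
    for r in range(1, n_rows + 1):
        for rows in combinations(range(n_rows), r):
            cols = tuple(j for j in range(n_cols) if all(B[i, j] for i in rows))
            if cols:
                found.append((frozenset(rows), frozenset(cols)))
    # keep only inclusion-maximal (row-set, column-set) pairs
    maximal = [
        (R, C) for R, C in found
        if not any((R < R2 and C <= C2) or (R <= R2 and C < C2)
                   for R2, C2 in found if (R, C) != (R2, C2))
    ]
    return set(maximal)

B = np.array([[1, 1, 0, 1],
              [1, 1, 0, 0],
              [0, 1, 1, 1]])
for rows, cols in sorted(all_ones_biclusters(B),
                         key=lambda rc: (sorted(rc[0]), sorted(rc[1]))):
    print(sorted(rows), sorted(cols))
```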
Abstract: Building systems are highly vulnerable to different kinds of faults and human misbehavior. Energy efficiency and user comfort are directly affected by abnormalities in building operation. The available fault diagnosis tools and methodologies rely primarily on rules or purely model-based approaches, assuming that a model- or rule-based test can be applied to any situation without taking the actual testing context into account. Contextual tests with a validity domain could greatly simplify the design of detection tests. The main objective of this paper is to consider test validity when validating the test model, accounting for non-modeled events such as occupancy, weather conditions, door and window openings, and the integration of expert knowledge on the state of the system. The concept of heterogeneous tests is combined with test validity to generate fault diagnoses. A combination of rule-, range- and model-based tests, known as heterogeneous tests, is proposed to reduce the modeling complexity. The calculation of logical diagnoses, drawn from artificial intelligence, provides a global explanation consistent with the test results. An application example, an office setting at Grenoble Institute of Technology, shows the efficiency of the proposed technique.
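A toy Python sketch of the idea of contextual tests with validity domains: each heterogeneous test (rule- or range-based here) carries a predicate over non-modeled context such as occupancy and window state, and only tests that are valid in the current context contribute suspected components to the diagnosis. The test names, thresholds and suspected components are invented for the example.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Test:
    name: str
    check: Callable[[Dict], bool]         # True = test passes (no fault signature)
    valid_in: Callable[[Dict], bool]      # validity domain over the context
    suspects: set = field(default_factory=set)  # components blamed when it fails

def diagnose(tests, measurements, context):
    """Run only the tests whose validity domain covers the current context,
    and return the union of suspected components from the failed tests."""
    suspects = set()
    for t in tests:
        if not t.valid_in(context):
            continue                      # outside its validity domain: ignored
        if not t.check(measurements):
            suspects |= t.suspects
    return suspects

tests = [
    # rule-based test, valid only when the room is occupied and windows are closed
    Test("comfort_rule",
         check=lambda m: 19.0 <= m["T_room"] <= 26.0,
         valid_in=lambda c: c["occupied"] and not c["window_open"],
         suspects={"heating_coil", "room_sensor"}),
    # range-based test, always valid
    Test("sensor_range",
         check=lambda m: -10.0 <= m["T_room"] <= 50.0,
         valid_in=lambda c: True,
         suspects={"room_sensor"}),
]

print(diagnose(tests,
               measurements={"T_room": 17.2},
               context={"occupied": True, "window_open": False}))
```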
Abstract: Digital Do It Yourself (DIY) is a contemporary socio-technological phenomenon enabled by technological tools. The nature and potential long-term effects of this phenomenon have been widely studied within the framework of the EU-funded project ‘Digital Do It Yourself’, in which the authors created and experimented with a specific Digital Do It Yourself (DiDIY) co-design process. The phenomenon was first studied through literature research to understand its multiple dimensions and complexity. Subsequently, co-design workshops were used to investigate the phenomenon by involving people, so as to achieve a complete understanding of DiDIY practices and their enabling factors. These analyses allowed the definition of the fundamental DiDIY factors, which were then translated into a design tool. The objective of the tool is to shape design concepts by transferring these factors into different environments to achieve innovation. The aim of this paper is to present the ‘DiDIY Factor Stimuli’ tool, describing the research path and the findings behind it.
Abstract: Newton-Lagrange interpolations are widely used in numerical analysis. However, their construction requires quadratic computational time. In computer aided geometric design (CAGD), there are polynomial curves — Wang-Ball, DP and Dejdumrong curves — that have linear-time algorithms. Thus, the computational time for Newton-Lagrange interpolations can be reduced by applying the algorithms of the Wang-Ball, DP and Dejdumrong curves. In order to use the Wang-Ball, DP and Dejdumrong algorithms, it is first necessary to convert Newton-Lagrange polynomials into Wang-Ball, DP or Dejdumrong polynomials. In this work, the algorithms for converting both uniform and non-uniform Newton-Lagrange polynomials into Wang-Ball, DP and Dejdumrong polynomials are investigated. Thus, the computational time for representing Newton-Lagrange polynomials can be reduced to linear complexity. In addition, other CAGD curve techniques can be applied to modify the Newton-Lagrange curves.
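The cost contrast discussed above can be seen in the Newton form itself: building the divided-difference coefficients costs O(n^2), while a Horner-like evaluation of the resulting polynomial is O(n) per point. The Python sketch below shows both steps; the conversion matrices to the Wang-Ball, DP and Dejdumrong bases investigated in the paper are not reproduced here.

```python
import numpy as np

def newton_divided_differences(x, y):
    """Compute Newton-form coefficients; the nested update costs O(n^2)."""
    coef = np.array(y, dtype=float)
    n = len(x)
    for j in range(1, n):
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (x[j:] - x[:n - j])
    return coef

def newton_eval(coef, x_nodes, t):
    """Evaluate the Newton form with a Horner-like scheme in O(n) per point."""
    result = coef[-1]
    for c, xk in zip(coef[-2::-1], x_nodes[-2::-1]):
        result = result * (t - xk) + c
    return result

x = np.array([0.0, 1.0, 2.0, 4.0])
y = np.array([1.0, 3.0, 2.0, 5.0])
coef = newton_divided_differences(x, y)
print([newton_eval(coef, x, xi) for xi in x])   # reproduces y at the nodes
```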
Abstract: The increasing costs of healthcare on one hand, and the rise in the aging population and associated chronic disease on the other, are putting an increasing burden on the current healthcare systems of many Western countries. For instance, chronic kidney disease (CKD) is a common disease, and in Europe the cost of renal replacement therapy (RRT) represents a very significant share of the total healthcare cost. However, recent advancements in healthcare technology provide the opportunity to treat patients at home in their own comfort. Home healthcare evidently offers numerous advantages, notably lower costs and higher patient quality of life. Despite these advantages, the uptake of home hemodialysis (HHD) therapy is still low, particularly in Germany. Many factors account for this low uptake; this paper focuses on the patient-safety-related factors of current HHD practices in Germany. The aim of this paper is to analyze current HHD practices in Germany and to identify risk-related factors, if any exist. A case study has been conducted in a dialysis provider that operates four dialysis centers in the south of Germany. In total, these dialysis centers have 350 chronic dialysis patients, of which four patients are on HHD. The centers have 126 staff, including six nephrologists and 120 other staff, i.e. nurses and administration. The results of the study revealed several risk-related factors. Most importantly, these centers do not offer allied health services at the pre-dialysis stage, and the HHD training did not have an established curriculum; however, they have just recently developed a first version. Only a soft copy of the machine manual is offered to patients. Surprisingly, the management was not aware of any available standard for home assessment and installation. The home assessment is done by a third party (i.e. the machine and equipment provider), which may not consider the hygienic quality of the patient’s home. The type of machine provided to patients at home is the same as the one in the center. This model may not be suitable for home use because of its size and complexity, even though portable hemodialysis machines specially designed for home use, such as the NxStage series, are available on the market. Besides the type of machine, no assistance is offered for space management at home, in particular for placing the machine. Moreover, the centers do not offer remote assistance to patients and their carers at home, although telephonic assistance is available. Furthermore, no alternative is offered if a carer is not available. In addition, the centers lack medical staff, including nephrologists and renal nurses.
Abstract: Machine learning and data mining are two important tools for extracting useful information and knowledge from large datasets. In machine learning, classification is a widely used technique to predict qualitative variables and is generally preferred over regression from an operational point of view. Due to the enormous increase in air pollution in various countries, especially China, air quality classification has become one of the most important topics in air quality research and modelling. This study introduces a hybrid classification model based on information theory and the Support Vector Machine (SVM), using the air quality data of four cities in China, namely Beijing, Guangzhou, Shanghai and Tianjin, from Jan 1, 2014 to April 30, 2016. China's Ministry of Environmental Protection classifies daily air quality into six levels, namely Serious Pollution, Severe Pollution, Moderate Pollution, Light Pollution, Good and Excellent, based on the respective Air Quality Index (AQI) values. Using information theory, the information gain (IG) is calculated and feature selection is performed for both categorical features and continuous numeric features. Then the SVM machine learning algorithm is applied to the selected features with cross-validation. The final evaluation reveals that the IG and SVM hybrid model performs better than the SVM (alone), Artificial Neural Network (ANN) and K-Nearest Neighbours (KNN) models in terms of accuracy as well as complexity.
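A minimal scikit-learn sketch of the IG-plus-SVM pipeline described above, using mutual information as the information-gain criterion for feature selection followed by an RBF-kernel SVM with cross-validation. The synthetic feature matrix, the six-class labels and the choice of k are placeholders standing in for the Chinese air-quality data and are not the paper's actual setup.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data standing in for pollutant / meteorological features
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 12))
y = np.digitize(X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.3, size=1000),
                bins=[-2, -1, 0, 1, 2])          # 6 AQI-like classes

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=6),       # keep the 6 most informative features
    SVC(kernel="rbf", C=10, gamma="scale"),
)
scores = cross_val_score(clf, X, y, cv=5)
print("mean CV accuracy: %.3f" % scores.mean())
```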
Abstract: In this paper, we study the factors that determine the capacity of a Convolutional Neural Network (CNN) model and propose ways to evaluate and adjust the capacity of a CNN model to best match a specific pattern recognition task. Firstly, a scheme is proposed to adjust the number of independent functional units within a CNN model so that it is better fitted to a task. Secondly, the number of independent functional units in the capsule network is adjusted to fit it to the training dataset. Thirdly, a method based on Bayesian GAN is proposed to enrich the variances in the current dataset and thereby increase its complexity. Experimental results on the PASCAL VOC 2010 Person Part dataset and the MNIST dataset show that, in both conventional CNN models and capsule networks, the number of independent functional units is an important factor that determines the capacity of a network model. By adjusting the number of functional units, the capacity of a model can better match the complexity of a dataset.
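One simple way to expose the "number of functional units" as a tunable capacity knob is a width multiplier on the convolutional filters, as in the PyTorch sketch below; the architecture, the multiplier values and the parameter-count comparison are illustrative assumptions, not the paper's unit-adjustment scheme or its Bayesian GAN augmentation.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """A toy CNN whose capacity is controlled by a width multiplier:
    the number of filters (functional units) per layer scales with `width`."""
    def __init__(self, num_classes=10, width=1.0):
        super().__init__()
        c1, c2 = int(16 * width), int(32 * width)
        self.features = nn.Sequential(
            nn.Conv2d(1, c1, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(c1, c2, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(c2 * 7 * 7, num_classes)  # for 28x28 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

for width in (0.5, 1.0, 2.0):
    model = SmallCNN(width=width)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"width={width}: {n_params} parameters")
```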
Abstract: Green microgrids, which use mostly renewable energy (RE) for generation, are complex systems with inherent nonlinear dynamics. Among a variety of optimization tools, there are only a few that adequately consider this complexity. This paper evaluates the applicability of two somewhat similar optimization tools tailored for standalone RE microgrids and also assesses a machine learning tool for performance prediction that can enhance the reliability of any chosen optimization tool. It shows that one of these microgrid optimization tools has certain advantages over the other and presents a detailed routine for preparing input data to simulate RE microgrid behavior. The paper also shows how neural-network-based predictive modeling can be used to validate and forecast solar power generation from weather time series data, which improves the overall quality of standalone RE microgrid analysis.
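A minimal sketch of the neural-network-based predictive modeling mentioned above, using a scikit-learn multilayer perceptron to map weather features to solar power output. The synthetic irradiance/temperature/cloud-cover data and the assumed output relationship are placeholders for real weather time series, not the tools or data evaluated in the paper.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for hourly weather features: irradiance, temperature, cloud cover
rng = np.random.default_rng(7)
n = 2000
irradiance = np.clip(rng.normal(500, 200, n), 0, None)
temperature = rng.normal(20, 8, n)
cloud_cover = rng.uniform(0, 1, n)
X = np.column_stack([irradiance, temperature, cloud_cover])
# assumed relationship: PV output mainly follows irradiance, reduced by clouds
y = 0.002 * irradiance * (1 - 0.7 * cloud_cover) + rng.normal(0, 0.05, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print("MAE on held-out data:", mean_absolute_error(y_test, model.predict(X_test)))
```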
Abstract: Mass customization production increases the difficulty of production line layout planning. The material distribution process for a wide variety of parts is very complex, which greatly increases the cost of material handling and logistics. In response to this problem, this paper presents an approach to production line layout planning based on complexity measurement. Firstly, by analyzing the factors influencing equipment layout, a complexity model of the production line is established using information entropy theory. Then, the cost of part logistics is derived, considering the different varieties of parts. Furthermore, an optimization function with two objectives, the lowest cost and the least configuration complexity, is built. Finally, the validity of the function is verified in a case study. The results show that the proposed approach can find the layout scheme with the lowest logistics cost and the least complexity. Optimized production line layout planning can effectively improve production efficiency and equipment utilization with the lowest cost and complexity.
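A small Python sketch of the two ingredients combined in the optimization function above: an information-entropy measure of configuration complexity computed from the part-flow distribution, and a flow-times-distance logistics cost, merged into a weighted bi-objective that can be compared across candidate layouts. The flow matrices, distances and weights are invented for illustration and do not reproduce the paper's entropy model.

```python
import numpy as np

def flow_entropy(flow_matrix):
    """Shannon entropy of the normalized part-flow distribution between stations;
    a more even spread of flows gives a higher configuration-complexity value."""
    flows = np.asarray(flow_matrix, dtype=float).ravel()
    p = flows[flows > 0] / flows.sum()
    return -np.sum(p * np.log2(p))

def logistics_cost(flow_matrix, distance_matrix, unit_cost=1.0):
    """Transport cost: sum of (flow x distance) over all station pairs."""
    return unit_cost * np.sum(np.asarray(flow_matrix) * np.asarray(distance_matrix))

def objective(flow, dist, w_cost=1.0, w_cplx=10.0):
    """Weighted bi-objective: lowest logistics cost and least complexity."""
    return w_cost * logistics_cost(flow, dist) + w_cplx * flow_entropy(flow)

# Two candidate layouts: different station spacing and part-flow assignments
layout_a = ([[0, 12, 3], [0, 0, 8], [0, 0, 0]], [[0, 5, 9], [5, 0, 4], [9, 4, 0]])
layout_b = ([[0, 9, 6], [0, 0, 8], [0, 0, 0]], [[0, 3, 12], [3, 0, 7], [12, 7, 0]])

for name, (flow, dist) in (("layout A", layout_a), ("layout B", layout_b)):
    print(f"{name}: cost={logistics_cost(flow, dist):.1f}, "
          f"complexity={flow_entropy(flow):.3f}, objective={objective(flow, dist):.1f}")
```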
Abstract: The principle of sustainability has been studied by different sciences with the purpose of formulating clear and concrete models. Much has been discussed about sustainability, and several points of view have been used to try to explain it; environmental science emerges from the various environmental discourses that seek to establish a new concept for understanding this complexity. We therefore focus on the activity of ecotourism as a way to integrate the sustainable practices proposed by environmental science and thus make it possible to create a new perspective on nature for eco-tourists and the managers of tourist destinations. The aim of this study was to suggest a direction for environmental awareness, based on environmental science, to change the eco-tourist's view of nature on ecotourism tours. The methodology was based on a case study of the Jalapão State Park (JSP), located in the State of Tocantins, Northern Brazil, drawing on discussions, theoretical studies, bibliographical research and on-site research. We have identified that, to raise tourists’ awareness, they need to visit nature in order to understand its environmental problems and promote actions for its preservation. We highlight in this study actions to guide human perception through environmental science, so that ecotourism itinerary tours to the JSP promote a balance between the natural environment and the tourist, making them, in this way, environmental tourists.
Abstract: The Internet of Things (IoT) is an advanced information technology that has drawn society's attention. Sensors and actuators are usually recognized as the smart devices of our environment. At the same time, IoT security raises new issues: internet connectivity and the possibility of interacting with smart devices cause those devices to become more involved in human life, so safety is a fundamental requirement in IoT design. The IoT has three remarkable features: overall perception, reliable transmission, and intelligent processing. Because of the IoT's span, the security of the conveyed data is an essential factor for system security. Hybrid encryption is a model that can be used in the IoT; this type of encryption provides strong security with low computational cost. In this paper, we propose a hybrid encryption algorithm designed to reduce security risks while increasing encryption speed and lowering computational complexity. The purpose of this hybrid algorithm is to provide information integrity, confidentiality, and non-repudiation in data exchange for the IoT. Finally, the suggested encryption algorithm was simulated in MATLAB, and its speed and security were evaluated in comparison with conventional encryption algorithms.
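The generic hybrid pattern referred to above — a fast symmetric cipher for the payload, an asymmetric cipher to wrap the session key, and a signature for non-repudiation — can be sketched in Python with the `cryptography` package as below. This standard RSA-OAEP/Fernet/RSA-PSS combination is given only as an assumed illustration; it is not the algorithm proposed and simulated in MATLAB in the paper.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.fernet import Fernet

# Receiver's long-term RSA key pair (asymmetric half of the hybrid scheme)
receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
# Sender's signing key pair, used for non-repudiation
sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def hybrid_encrypt(plaintext: bytes):
    session_key = Fernet.generate_key()              # fast symmetric session key
    ciphertext = Fernet(session_key).encrypt(plaintext)
    wrapped_key = receiver_key.public_key().encrypt( # RSA-OAEP wraps the session key
        session_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    signature = sender_key.sign(                     # signature gives non-repudiation
        ciphertext,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256())
    return ciphertext, wrapped_key, signature

def hybrid_decrypt(ciphertext, wrapped_key, signature):
    sender_key.public_key().verify(                  # raises if tampered
        signature, ciphertext,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256())
    session_key = receiver_key.decrypt(
        wrapped_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    return Fernet(session_key).decrypt(ciphertext)

msg = b"sensor reading: 21.5 C"
print(hybrid_decrypt(*hybrid_encrypt(msg)) == msg)
```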
Abstract: With the exponential growth of cellular users, a new generation of cellular networks is needed to enhance the required peak data rates. The co-channel interference between neighboring base stations inhibits peak data rate increase. To overcome this interference, multi-cell cooperation known as coordinated multipoint transmission is proposed. Such a solution makes use of multiple-input-multiple-output (MIMO) systems under two different structures: Micro- and macro-diversity. In this paper, we study the capacity and bit error rate in cellular networks using MIMO technology. We analyse both micro- and macro-diversity schemes and develop a hybrid model that switches between macro- and micro-diversity in the case of hard handoff based on a cut-off range of signal-to-noise ratio values. We conclude that our hybrid switched micro-macro MIMO system outperforms classical MIMO systems at the cost of increased hardware and software complexity.
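A Python sketch of the switching idea described above: the ergodic MIMO capacity C = E[log2 det(I + (SNR/Nt) H H^H)] is estimated by Monte Carlo over Rayleigh channels, and an SNR cutoff decides between macro-diversity (two cooperating base stations, modeled here as a 4x2 channel) and micro-diversity (a single 2x2 link). The antenna numbers and the 5 dB cutoff are illustrative assumptions, not the paper's system parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def mimo_capacity(snr_linear, n_tx, n_rx, trials=2000):
    """Ergodic capacity (bit/s/Hz) of an i.i.d. Rayleigh MIMO channel:
    C = E[ log2 det(I + (SNR/Nt) * H H^H) ]."""
    caps = []
    for _ in range(trials):
        H = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)
        M = np.eye(n_rx) + (snr_linear / n_tx) * H @ H.conj().T
        caps.append(np.log2(np.linalg.det(M).real))
    return float(np.mean(caps))

def hybrid_capacity(snr_db, cutoff_db=5.0):
    """Below the SNR cutoff (cell edge), switch to macro-diversity with two
    cooperating 2-antenna base stations (4x2 effective MIMO); above it, a single 2x2 link."""
    snr = 10 ** (snr_db / 10)
    if snr_db < cutoff_db:
        return mimo_capacity(snr, n_tx=4, n_rx=2), "macro"
    return mimo_capacity(snr, n_tx=2, n_rx=2), "micro"

for snr_db in (0, 5, 10, 15):
    c, mode = hybrid_capacity(snr_db)
    print(f"SNR {snr_db:2d} dB -> {mode}: {c:.2f} bit/s/Hz")
```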
Abstract: Arithmetic operations over GF(2^m) have been extensively used in error correcting codes and public-key cryptography schemes. Finite field arithmetic includes addition, multiplication, division and inversion operations. Addition is very simple and can be implemented with an extremely simple circuit, while the other operations are much more complex. Multiplication is the most important operation for cryptosystems such as the elliptic curve cryptosystem, since exponentiation, division, and the multiplicative inverse can all be computed by iterated multiplication. In this paper, we present a parallel computation algorithm that performs Montgomery multiplication over the finite field using a redundant basis. Based on this multiplication algorithm, we also present an efficient semi-systolic multiplier over the finite field. The multiplier has lower space and time complexity than related multipliers: compared with the corresponding existing structures, it saves at least 5% area, 50% time, and 53% area-time (AT) complexity. Accordingly, it is well suited for VLSI implementation and can easily be applied as a basic component for computing complex operations over the finite field, such as inversion and division.
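For context, the Python sketch below shows bit-level polynomial-basis arithmetic in GF(2^8) with the AES reduction polynomial, including inversion by repeated multiplication, which is why the multiplier dominates the cost of inversion and division. The redundant-basis Montgomery formulation and the semi-systolic architecture proposed in the paper are not reproduced here.

```python
# Bit-level multiplication in GF(2^m) with a polynomial basis.
# Example field: GF(2^8) with the AES reduction polynomial x^8 + x^4 + x^3 + x + 1.
M = 8
REDUCTION = 0b100011011        # x^8 + x^4 + x^3 + x + 1

def gf_mul(a: int, b: int) -> int:
    """Shift-and-add (carry-less) multiplication with interleaved reduction."""
    result = 0
    while b:
        if b & 1:
            result ^= a         # addition in GF(2^m) is XOR
        b >>= 1
        a <<= 1
        if a >> M:              # degree reached m: reduce modulo the field polynomial
            a ^= REDUCTION
    return result

def gf_inv(a: int) -> int:
    """Multiplicative inverse via a^(2^m - 2), i.e. repeated multiplications."""
    result, base, exponent = 1, a, (1 << M) - 2
    while exponent:
        if exponent & 1:
            result = gf_mul(result, base)
        base = gf_mul(base, base)
        exponent >>= 1
    return result

x = 0x57
print(hex(gf_mul(x, gf_inv(x))))   # prints 0x1: x * x^{-1} = 1
```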