Abstract: Diagrams and drawings are important means of communicating and reproducing architectural design. With the development of information and communication technology, professional thinking in architecture and interior design is also changing rapidly, and diagrams have always played a very important role in the design process. Based on diagram theories, this study observes and records the interactions between people and objects, objects and space, and space and time in a modern nuclear family. It constructs a diagrammatic method to systematically and visually describe the space plan of a modern nuclear family for intelligent design, helping designers retrieve information and review past and present event patterns.
Abstract: This work proposes data-driven, multiscale quantitative measures that reveal the underlying complexity of the electroencephalogram (EEG), applied to a rodent model of hypoxic-ischemic brain injury and recovery. Because real EEG recordings are nonlinear and non-stationary across different frequencies or scales, an approach more suitable than conventional single-scale tools is needed for analyzing EEG data. Here, we present a new framework of complexity measures that captures changing dynamics over multiple oscillatory scales. The proposed multiscale complexity is obtained by calculating entropies of the probability distributions of the intrinsic mode functions extracted by empirical mode decomposition (EMD) of the EEG. To quantify EEG recordings of a rat model of hypoxic-ischemic brain injury following cardiac arrest, the multiscale version of Tsallis entropy is examined. To validate the proposed complexity measure, actual EEG recordings from rats (n=9) experiencing 7 min of cardiac arrest followed by resuscitation were analyzed. Experimental results demonstrate that the multiscale Tsallis entropy leads to better discrimination of injury levels and improved correlation with the neurological deficit evaluation 72 hours after cardiac arrest, suggesting an effective metric for use as a prognostic tool.
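The entropy step of the pipeline above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the EMD stage is assumed to have been performed elsewhere (e.g., by an EMD library), and the histogram bin count and entropic index q are arbitrary illustrative choices.

```python
import numpy as np

def tsallis_entropy(signal, q=2.0, bins=32):
    """Tsallis entropy S_q = (1 - sum(p_i^q)) / (q - 1) of the amplitude
    distribution, estimated from a normalised histogram of the signal."""
    counts, _ = np.histogram(signal, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                       # drop empty bins (0^q contributes 0)
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def multiscale_tsallis(imfs, q=2.0):
    """One Tsallis entropy per intrinsic mode function, i.e., per scale."""
    return [tsallis_entropy(imf, q) for imf in imfs]

# Toy stand-in for EMD output: two 'modes' at different time scales.
t = np.linspace(0.0, 1.0, 1024)
imfs = [np.sin(2 * np.pi * 40 * t), np.sin(2 * np.pi * 3 * t)]
print(multiscale_tsallis(imfs))
```

For q = 2 the measure reduces to 1 minus the sum of squared bin probabilities, so each value lies in [0, 1); in the limit q → 1 the Tsallis entropy recovers the Shannon entropy.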
Abstract: Hydrological modelling plays a crucial role in the planning and management of water resources, especially in water-stressed regions where the need to effectively manage the available water resources is of critical importance. However, due to the complex, nonlinear and dynamic behaviour of hydro-climatic interactions, achieving reliable modelling of water resource systems and accurate projection of hydrological parameters is extremely challenging. Although a significant number of modelling techniques (process-based and data-driven) have been developed and adopted in that regard, the field of hydrological modelling is still considered one that has progressed sluggishly over the past decades. This is largely a result of the degree of uncertainty identified in the methodologies and results of the techniques adopted. In recent times, evolutionary computation (EC) techniques have been developed and introduced in response to the search for efficient and reliable means of providing accurate solutions to hydrology-related problems. This paper presents a comprehensive review of the underlying principles, methodological needs and applications of a promising evolutionary computation modelling technique – genetic programming (GP). It examines the specific characteristics of the technique that make it suitable for solving hydrological modelling problems. It discusses the opportunities inherent in the application of GP in water-related studies such as rainfall estimation, rainfall-runoff modelling, streamflow forecasting, sediment transport modelling, water quality modelling and groundwater modelling, among others. Furthermore, the means by which such opportunities could be harnessed in the near future are discussed.
In all, a case is made for full embracement of GP and its variants in hydrological modelling studies, so as to put in place strategies that would translate into meaningful progress in the modelling of water resource systems, and also positively influence decision-making by relevant stakeholders.
Abstract: This paper proposes a data-driven, biology-inspired neural segmentation method for 3D Drosophila Brainbow images. We use the Bayesian Sequential Partitioning algorithm for probabilistic modeling, which can be used to detect somas and to eliminate crosstalk effects. This work attempts to develop an automatic methodology for neuron image segmentation, which still lacks a complete solution due to the complexity of the images. The proposed method does not need any predetermined, risk-prone thresholds, since biological information is inherently included in the image processing procedure. Therefore, it is less sensitive to variations in neuron morphology; meanwhile, its flexibility is beneficial for tracing the intertwining structure of neurons.
Abstract: Real-time or in-line process monitoring frameworks are designed to give early warnings of a fault along with meaningful identification of its assignable causes. In the artificial intelligence and machine learning fields of pattern recognition, various promising approaches have been proposed, such as kernel-based nonlinear machine learning techniques. This work presents a kernel-based empirical monitoring scheme for batch-type production processes with the small-sample-size problem of partially unbalanced data: measurement data from normal operations are easy to collect, whereas data on special events or faults are difficult to collect. In such situations, noise filtering techniques can help enhance process monitoring performance. Furthermore, preprocessing of the raw process data is used to remove unwanted variation. The performance of the monitoring scheme is demonstrated using three-dimensional batch data. The results show that the monitoring performance improved significantly in terms of the fault detection success rate.
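The setting described (plentiful normal data, scarce fault data) lends itself to one-class novelty checking against a normal-operation reference set. The sketch below illustrates that idea with an RBF kernel; the kernel choice, bandwidth, and threshold are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    """Gaussian (RBF) kernel similarity between two sample vectors."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def is_fault(sample, normal_data, gamma=0.5, threshold=0.1):
    """Flag a fault when the sample resembles no normal-operation sample."""
    sims = [rbf_kernel(sample, x, gamma) for x in normal_data]
    return max(sims) < threshold

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.2, size=(50, 3))          # in-control measurements
print(is_fault(np.array([0.1, 0.0, -0.1]), normal))  # near the normal cluster
print(is_fault(np.array([5.0, 5.0, 5.0]), normal))   # far outside: fault
```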
Abstract: Fault detection determines fault existence and detection time. This paper discusses two-layered fault detection methods to enhance reliability and safety. The two-layered fault detection methods consist of fault detection at component-level controllers and at system-level controllers. Component-level controllers detect faults by limit checking, model-based detection, and data-driven detection, while system-level controllers perform detection through stability analysis, which can detect unknown changes. System-level controllers compare the stability-based detection results with the fault signals from the lower-level controllers. This paper addresses stability-based fault detection methods and suggests fault detection criteria for nonlinear systems. The fault detection method is applied to the hybrid control unit of a military hybrid electric vehicle so that the hybrid control unit can detect faults in the traction motor.
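As a toy illustration of the component-level limit checking mentioned above (the signal name and limit values are hypothetical, not taken from the paper):

```python
from dataclasses import dataclass

@dataclass
class LimitChecker:
    """Component-level limit check: fault when the signal leaves its range."""
    low: float
    high: float

    def fault(self, value: float) -> bool:
        return not (self.low <= value <= self.high)

# Hypothetical traction-motor temperature limits, in degrees Celsius.
motor_temp = LimitChecker(low=-20.0, high=120.0)
print(motor_temp.fault(65.0))   # -> False (within limits)
print(motor_temp.fault(135.0))  # -> True  (over-temperature fault)
```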
Abstract: With the rapid growth of 3D graphics technology over the last few years, people desire to see more flexible reactive motions of bipeds in animations. In particular, it is impossible to anticipate in advance all of a biped's reactions to a perturbation. In this paper, we propose a three-level tracking method for animating a 3D humanoid character. First, we take the laws of physics into account to attach physical attributes, such as mass, gravity, friction, collision, contact, and torque, to the bones and joints of a character. The next step is to employ a PD controller to follow a reference motion as closely as possible. When the character cannot withstand a strong perturbation and is about to fall, we track a desirable falling-down action to avoid inaccurate falling behavior. The experimental results demonstrate the effectiveness and flexibility of the proposed method in comparison with conventional data-driven approaches.
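The PD tracking step can be sketched for a single joint. This is a minimal sketch under simplifying assumptions (unit inertia, no gravity, semi-implicit Euler integration), with gains chosen for illustration rather than taken from the paper:

```python
import numpy as np

def pd_track(theta_ref, kp=400.0, kd=40.0, dt=0.001, inertia=1.0):
    """Simulate one joint driven by the PD law tau = kp*(ref - theta) - kd*omega."""
    theta, omega = 0.0, 0.0
    out = []
    for ref in theta_ref:
        tau = kp * (ref - theta) - kd * omega   # PD torque toward the reference
        omega += (tau / inertia) * dt           # semi-implicit Euler step
        theta += omega * dt
        out.append(theta)
    return np.array(out)

ref = np.full(5000, 1.0)           # step reference of 1 rad, 5 s at 1 kHz
traj = pd_track(ref)
print(abs(traj[-1] - 1.0) < 1e-2)  # joint settles near the reference
```

With kp = 400 and kd = 40 on unit inertia, the closed loop is critically damped (natural frequency 20 rad/s), so the joint converges to the reference without oscillation.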
Abstract: This paper focuses on the data-driven generation of fuzzy IF...THEN rules. The resulting fuzzy rule base can be applied to build a classifier, a model used for prediction, or a decision support system. Among the wide range of possible approaches, decision tree and association rule based algorithms are reviewed, and two new approaches are presented based on a priori fuzzy-clustering-based partitioning of the continuous input variables. An application study is also presented, in which the developed methods are tested on the well-known Wisconsin Breast Cancer classification problem.
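For illustration, here is how a single fuzzy IF...THEN rule is evaluated with triangular membership functions and the min t-norm. The variable names and breakpoints are hypothetical, not the partitions the paper learns from the Wisconsin data:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

# Hypothetical rule: IF cell_size IS large AND texture IS coarse THEN malignant
def rule_strength(cell_size, texture):
    mu_large = tri(cell_size, 5.0, 8.0, 11.0)   # membership in 'large'
    mu_coarse = tri(texture, 4.0, 7.0, 10.0)    # membership in 'coarse'
    return min(mu_large, mu_coarse)             # min t-norm implements AND

print(rule_strength(8.0, 7.0))   # -> 1.0 (both antecedents fully satisfied)
print(rule_strength(6.5, 7.0))   # -> 0.5 (cell_size only half 'large')
```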
Abstract: The use of neural networks is popular in various building applications, such as prediction of heating load, ventilation rate, and indoor temperature. Notably, only a few papers deal with prediction of indoor carbon dioxide (CO2), which is a very good indicator of indoor air quality (IAQ). In this study, a data-driven modelling method based on a multilayer perceptron network is developed for indoor carbon dioxide in an apartment building. Temperature and humidity measurements are used as input variables to the network. The motivation for this study derives from the following issues: first, measuring carbon dioxide is expensive, and second, the power consumption of CO2 sensors is high, which leads to short operating times of battery-powered sensors. The results show that predicting CO2 concentration from relative humidity and temperature measurements is difficult; therefore, additional information is needed.
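The architecture described (two measured inputs, one predicted CO2 output) can be sketched as a single-hidden-layer perceptron forward pass. The weights below are random placeholders rather than a trained model, and the hidden-layer size is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, w1, b1, w2, b2):
    """One hidden layer with tanh activation, linear output (regression)."""
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

# 2 inputs (temperature, relative humidity) -> 8 hidden units -> 1 output (CO2)
w1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
w2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

x = np.array([[21.5, 0.45]])    # degC, RH fraction (illustrative values)
print(mlp_forward(x, w1, b1, w2, b2).shape)  # -> (1, 1)
```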
Abstract: The self-organizing map (SOM) is a well-known neural network model with a wide range of applications. The main characteristics of the SOM are two-fold, namely dimension reduction and topology preservation. Using a SOM, a high-dimensional data space is mapped to a low-dimensional space while the topological relations among the data are preserved. With such characteristics, the SOM has usually been applied to data clustering and visualization tasks. However, the SOM has the main disadvantage that the number and structure of neurons must be known prior to training, and these are difficult to determine. Several schemes have been proposed to tackle this deficiency, for example growing/expandable SOMs, hierarchical SOMs, and growing hierarchical SOMs. These schemes can dynamically expand the map, and even generate hierarchical maps, during training, and encouraging results have been reported. Basically, these schemes adapt the size and structure of the map according to the distribution of the training data; that is, they are data-driven or data-oriented SOM schemes. In this work, a topic-oriented SOM scheme suitable for document clustering and organization is developed. The proposed SOM automatically adapts the number as well as the structure of the map according to identified topics. Unlike other data-oriented SOMs, our approach expands the map and generates the hierarchies according to both the topics and the characteristics of the neurons. Preliminary experiments give promising results and demonstrate the plausibility of the method.
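For reference, the classic fixed-grid SOM that the growing variants extend looks like the sketch below; note that the map size (4x4 here) must be chosen before training, which is exactly the limitation the abstract discusses. All hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_som(data, grid=(4, 4), epochs=50, alpha=0.5, sigma=1.0):
    """Minimal 2-D SOM on a fixed grid (the classic, non-growing variant)."""
    rows, cols = grid
    w = rng.normal(size=(rows * cols, data.shape[1]))      # neuron weights
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for epoch in range(epochs):
        a = alpha * (1 - epoch / epochs)                   # decaying learning rate
        for x in data:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))  # best-matching unit
            # Gaussian neighbourhood on the grid preserves topology.
            h = np.exp(-np.sum((coords - coords[bmu]) ** 2, axis=1)
                       / (2 * sigma ** 2))
            w += a * h[:, None] * (x - w)                  # pull neurons toward x
    return w

data = rng.normal(size=(100, 3))   # toy high-dimensional data
w = train_som(data)
print(w.shape)                     # -> (16, 3): 16 neurons in 3-D data space
```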
Abstract: This paper describes a multichannel ASIC for capacitive (up to 30 pF) sensors and its implementation in a 0.18 μm CMOS process. The main design aim was to study an analog data-driven architecture. The design implements an analog derandomizing function with a 128-to-16 structure: the ASIC provides parallel front-end readout of 128 analog sensor input signals and, after fast commutation with appropriate arbitration logic, processes them through 16 output chains, including analog-to-digital conversion. The principal feature of the ASIC is its low power consumption, within 2 mW/channel (including a 9-bit 20 MS/s ADC), at a maximum average channel hit rate of not less than 150 kHz.
Abstract: As the data-driven economy grows faster than ever and the demand for energy is spurred, we face unprecedented challenges in improving the energy efficiency of data centers. Effectively maximising energy efficiency and minimising the cooling energy demand are becoming pervasive concerns for data centers. This paper investigates the overall energy consumption and the energy efficiency of the cooling system of a data center in Finland as a case study. The power, cooling, and energy consumption characteristics and the operating conditions of the facilities are examined and analysed. Potential energy and cooling saving opportunities are identified, and further suggestions for improving the performance of the cooling system are put forward. The results are presented as a comprehensive evaluation of both the energy performance and good practices for energy-efficient cooling operation of the data center, and the utilization of an energy recovery concept for the cooling system is proposed. The conclusion we can draw is that even though the analysed data center demonstrated relatively high energy efficiency, based on its power usage effectiveness value, there is still significant potential for energy saving in its cooling systems.
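The power usage effectiveness (PUE) figure referred to above is the standard ratio of total facility energy to IT-equipment energy; the numbers below are illustrative, not the Finnish data center's actual values:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy (>= 1.0;
    lower is better, 1.0 meaning all energy goes to IT equipment)."""
    return total_facility_kwh / it_equipment_kwh

print(pue(1300.0, 1000.0))  # -> 1.3 (illustrative figures)
```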
Abstract: In this paper, a new time-delay estimation technique based on the cross ΨB-energy operator [5] is introduced. This quadratic energy detector measures how much of one signal is present in another. The location of the peak of the energy operator, corresponding to the maximum interaction between the two signals, is the estimate of the delay. The method is a fully data-driven approach. The discrete version of the continuous-time form of the cross ΨB-energy operator is presented for its implementation. The effectiveness of the proposed method is demonstrated on real underwater acoustic signals arriving from targets, and the results are compared with the cross-correlation method.
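The delay-by-energy-peak idea can be sketched with a simplified symmetric cross energy operator, using central-difference derivatives. This illustrates the principle of scanning candidate lags for the maximum of the total cross energy; it is not the paper's exact discrete operator, and the Gaussian pulse is a synthetic stand-in for the underwater acoustic signals:

```python
import numpy as np

def cross_energy(x, y):
    """Symmetric cross energy operator for real signals:
    Psi(x, y) = x'y' - 0.5*(x*y'' + y*x''), derivatives by central differences."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return dx * dy - 0.5 * (x * ddy + y * ddx)

def estimate_delay(x, y, max_lag):
    """Slide y over x and return the lag maximising the total cross energy."""
    lags = np.arange(-max_lag, max_lag + 1)
    scores = [np.sum(cross_energy(x, np.roll(y, -l))) for l in lags]
    return int(lags[int(np.argmax(scores))])

# Synthetic check: a Gaussian pulse and a copy delayed by 7 samples.
n = np.arange(256)
x = np.exp(-(n - 100.0) ** 2 / 200.0)
y = np.roll(x, 7)                 # y[n] = x[n - 7]
print(estimate_delay(x, y, 20))   # -> 7
```

The total cross energy as a function of lag has a non-negative spectrum, so it is maximal exactly where the two signals are aligned, which is what makes the peak location a delay estimate.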