Abstract: Supply Chain Resilience has been studied broadly during the last decade, with research focusing on many aspects of Supply Chain performance. Consequently, different definitions of Supply Chain Resilience have been developed by the research community, drawing inspiration from other fields of study such as ecology, sociology, psychology, and economics. The definitions developed in the extant literature are therefore very heterogeneous, and many authors have pointed out a lack of consensus in this field of analysis. The aim of this research is to find common points among these definitions through the development of a framework of study: the Resilience Triangle. The Resilience Triangle is a tool developed in the field of civil engineering to model the loss of resilience of a given structure during and after a disruption such as an earthquake. It is a simple yet powerful tool: in our opinion, it can summarize all the features that authors have captured in Supply Chain Resilience definitions over the years, and this research intends to recapitulate these heterogeneities within that framework. After collecting a number of Supply Chain Resilience definitions from the extant literature, the methodology provides a taxonomy step for organizing and analyzing the gathered data. The next step compares the resulting data with a plotted disruption profile in order to contextualize the Resilience Triangle in the Supply Chain setting. The tool and the results developed in this research lay the foundation for future Supply Chain Resilience modeling and measurement work.
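In the civil engineering formulation, the Resilience Triangle is usually quantified as the area between full functionality and the quality-of-service curve Q(t) over the disruption-recovery window, R = ∫[100 − Q(t)] dt. A minimal sketch, using a hypothetical disruption profile and a simple rectangle-rule integration:

```python
def resilience_loss(quality, dt=1.0):
    """Approximate the Resilience Triangle area: cumulative loss of
    functionality Q(t) (in percent) between the disruption and full
    recovery, following R = integral of [100 - Q(t)] dt."""
    return sum((100.0 - q) * dt for q in quality)

# Hypothetical disruption profile: full service, a drop to 40%,
# then linear recovery back to 100%.
profile = [100, 40, 55, 70, 85, 100]
loss = resilience_loss(profile)  # area of the "triangle"
```

A smaller area means a shallower drop or a faster recovery, which is how the framework lets heterogeneous resilience definitions be compared on one plot.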
Abstract: One of the main drawbacks of Modal Pushover Analysis (MPA) is the need to perform nonlinear time-history analysis, which increases the complexity and computational cost of the method. A simplified version of the MPA has been proposed based on the concept of the inelastic deformation ratio. Furthermore, the effect of the higher modes of vibration is considered by assuming linearly elastic responses, which enables the use of standard elastic response spectrum analysis. In this thesis, the simplified MPA (SMPA) method is applied to determine the target global drift and the inter-story drifts of a steel frame building, with the effect of the higher vibration modes considered within the SMPA framework. A comprehensive survey of the inelastic deformation ratio is presented; a suitable expression from the literature is then selected and implemented in the SMPA. The seismic demands estimated using the SMPA, such as target drift, base shear, and inter-story drifts, are compared with the seismic responses determined by applying the standard MPA. The accuracy of the estimated seismic demands is validated by comparison with the results obtained from nonlinear time-history analysis using real earthquake records.
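For illustration only, one widely used closed-form expression for the inelastic deformation ratio is the FEMA 440 C1 factor; the expression actually adopted in the thesis is the one selected from its literature survey, so treat this as a generic sketch of how the ratio scales an elastic demand:

```python
def inelastic_deformation_ratio(R, T, a=90.0):
    """FEMA 440-style C1 factor: ratio of peak inelastic to peak elastic
    displacement for strength ratio R and period T (s). The site-class
    constant `a` is commonly 130, 90, or 60 for site classes B, C, D."""
    return 1.0 + (R - 1.0) / (a * T * T)

def target_displacement(Sd_elastic, R, T):
    """Scale the elastic spectral displacement (m) by the ratio to get
    an estimate of the inelastic target displacement."""
    return inelastic_deformation_ratio(R, T) * Sd_elastic
```

For an elastic system (R = 1) the ratio is 1, and it grows for short-period, low-strength systems, which is why the choice of expression matters for the SMPA drift estimates.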
Abstract: Agility in Knowledge Management (AKM) tries to capture agility requirements and their respective answers within the framework of knowledge and learning for organizations. Since it is a rather new construct, it is difficult to claim that it has been sufficiently discussed and analyzed in practical and theoretical realms. Like the term ‘agile learning’, it is commonly addressed in the software development and information technology fields and across the related areas where those technologies can be applied. The organizational perspective on AKM seems to need some more time to become scholarly mature. Nevertheless, one occasionally comes across implicit usages of this term in the literature. This research aims to explore the conceptual background of agility in KM, re-conceptualize it, and extend it to business applications with a special focus on e-business.
Abstract: The reliability of the power grid depends on the successful operation of thousands of protective relays. The failure of one relay to operate as intended may lead the entire power grid to blackout. In fact, major power system failures during transient disturbances may be caused by unnecessary protective relay tripping rather than by the failure of a relay to operate. Adequate relay testing provides a first defense against false trips of the relay and hence improves power grid stability and prevents catastrophic bulk power system failures. The goal of this research project is to design and enhance a relay tester using a Field Programmable Gate Array (FPGA) card (NI 7851). A PC-based tester framework has been developed, using a Simulink power system model to generate signals under different conditions (faults or transient disturbances) and LabVIEW to build the graphical user interface and configure the FPGA. In addition, an interface system has been developed to output and amplify the signals without distortion; these signals should resemble those generated by the real power system and be large enough for testing the relay’s functionality. The generated signals, as displayed on the oscilloscope, are satisfactory. Furthermore, the proposed testing system can be used to improve the performance of protective relays.
Abstract: Visually impaired people face struggles and spatial barriers in their daily lives because the built environment is often designed with an extreme focus on the visual element, causing what is called architectural visual bias, or ocularcentrism. The aim of the study is to holistically understand the world of the visually impaired in order to extract the qualities of space that accommodate their needs, and to show the importance of multi-sensory, holistic designs for the blind. Within the framework of existential phenomenology, common themes are reached through "intersubjectivity": experience descriptions by blind people and blind architects, observation of how blind children learn to perceive their surrounding environment, and a personal lived blindfolded experience are analyzed. The extracted themes show how visually impaired people filter out and prioritize tactile (active, passive, and dynamic touch), acoustic, and olfactory spatial qualities, respectively, and how this occurred during the personal lived blindfolded experience. The themes clarify that haptic and aural inclusive designs are essential to create environments suitable for the visually impaired and to empower them towards an independent, safe, and efficient life.
Abstract: In this paper, we consider the parametrization of discrete-time systems without the unit-delay element within the framework of the factorization approach, and we investigate the number of parameters the parametrization requires. We restrict attention to single-input single-output systems. The investigation reveals three cases for discrete-time systems without the unit-delay element: (1) there exist plants that require only one parameter, (2) there exist plants that require two parameters, and (3) the number of required parameters is at most three.
Abstract: The global coverage of broadband multimedia and internet-based services in terrestrial-satellite networks is of particular interest to satellite providers seeking to deliver services with low latency and high signal quality to diverse users. In particular, the delay of on-board processing is an inherent source of latency in satellite communication that is sometimes neglected in the end-to-end delay of the satellite link. The framework for this paper includes modelling an on-orbit satellite payload using an agent model that can reproduce the properties of processing delays. In essence, a comparison of different spatial interpolation methods is carried out to evaluate physical data obtained from a GEO satellite in order to define a discretization function for determining that delay. Furthermore, the performance of the proposed agent and the developed delay discretization function are validated together by simulating a hybrid satellite and terrestrial network. Simulation results show high accuracy with respect to the characteristics of the initial processing-delay data points for Ku bands.
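As a sketch of the kind of spatial interpolation such a comparison might include, here is inverse-distance weighting over scattered delay samples; the sample locations and delay values below are hypothetical, not measured payload data:

```python
import math

def idw(points, values, query, power=2.0):
    """Inverse-distance-weighted interpolation of scattered samples.
    `points` are (x, y) sample locations, `values` the measured
    processing delays, `query` the location to estimate."""
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d = math.hypot(query[0] - x, query[1] - y)
        if d == 0.0:
            return v  # query coincides with a sample point
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# Hypothetical processing-delay samples (ms) at four grid points
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
vals = [2.0, 2.4, 2.2, 2.6]
estimate = idw(pts, vals, (0.5, 0.5))
```

Comparing methods like this against kriging or spline interpolants is one way to choose a discretization function for the delay surface.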
Abstract: Instance selection (IS) techniques are used to reduce data size and thereby improve the performance of data mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule, we propose an instance selection method for large data sets built on the MapReduce framework. Besides ensuring prediction accuracy and reduction rate, it has two desirable properties: first, it reduces the workload in the aggregation node; second, and most important, it produces the same result as the sequential version, which other parallel methods cannot achieve. We evaluate the performance of FCNN-MR on one small data set and two large data sets. The experimental results show that it is effective and practical.
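The divide-and-conquer idea can be sketched as follows; this is a toy illustration, not the authors' FCNN-MR implementation, and it uses a simplified condensed-nearest-neighbor pass in place of the full FCNN rule. Each disjoint partition is condensed independently (map phase), and the union of the partial prototype sets is condensed once more at the aggregation node (reduce phase):

```python
import math

def nearest(prototypes, item):
    """Return the prototype whose point is closest to the item's point."""
    return min(prototypes, key=lambda p: math.dist(p[0], item[0]))

def condense(subset):
    """Simplified condensed-nearest-neighbor reduction of a labeled
    subset: keep adding items misclassified by the current prototypes."""
    prototypes = [subset[0]]
    changed = True
    while changed:
        changed = False
        for item in subset:
            if nearest(prototypes, item)[1] != item[1]:
                prototypes.append(item)
                changed = True
    return prototypes

def map_reduce_select(training_set, n_partitions=2):
    """Map: condense each disjoint partition independently.
    Reduce: condense the union of the partial prototype sets."""
    parts = [training_set[i::n_partitions] for i in range(n_partitions)]
    mapped = [condense(p) for p in parts]               # map phase
    merged = [item for part in mapped for item in part]
    return condense(merged)                             # reduce phase

# Tiny two-class example: items are ((x, y), label) pairs
data = [((0, 0), "a"), ((0, 1), "a"), ((5, 5), "b"), ((5, 6), "b")]
selected = map_reduce_select(data)
```

The reduce step only sees the already-condensed partials, which is why the aggregation workload shrinks; the paper's contribution is making this pipeline yield exactly the sequential FCNN result.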
Abstract: Sport performance analysis is a technique that is becoming more important every year for athletes of every level. Many techniques have been developed to measure and analyse the performance of athletes efficiently in some sports, but in combat sports these techniques often reach their limits due to the high interaction between the two opponents during the competition. In this paper the problem is framed, the physical performance measurement problem is analysed, and three different techniques to manage it are presented. All the techniques have been used to analyse the performance of 22 high-level Judo athletes.
Abstract: Following the E-Commerce era, M-Commerce is the next big phase in technological involvement and advancement. This paper intends to explore how Indian consumers are influenced to adopt M-Commerce. A revised Technology Acceptance Model (TAM) is presented, based on the most dominant factors that affect the adoption of M-Commerce in the Indian scenario. Furthermore, an analytical questionnaire approach was carried out to collect data from Indian consumers, and the collected data were used to validate the presented model. Findings indicate that customization, convenience, instant connectivity, compatibility, security, and download speed in M-Commerce affect adoption behavior. Furthermore, the findings suggest that perceived usefulness and attitude towards M-Commerce are positively influenced by a number of M-Commerce drivers (i.e., download speed, compatibility, convenience, security, customization, connectivity, and input mechanism).
Abstract: Psychographics is the psychological study of values, attitudes, and interests, used mostly in prediction, opinion research, and social research. This study predicts the influence of performance expectancy, effort expectancy, social influence, and facilitating conditions on e-government acceptance among Malaysian citizens. The survey responses of 543 e-government users were validated and analyzed by means of covariance-based Structural Equation Modeling. The findings indicate that e-government acceptance among Malaysian citizens is mainly influenced by performance expectancy (β = 0.66, t = 11.53, p < 0.01) and social influence (β = 0.20, t = 4.23, p < 0.01). Surprisingly, there is no significant effect of facilitating conditions and effort expectancy on e-government continuance intention (β = 0.01, t = 0.27, p > 0.05; β = -0.01, t = -0.40, p > 0.05). This study offers governments and vendors a frame of reference for analyzing citizens’ situations before initiating new innovations. In the case of Malaysian e-government technology, adoption strategies should be built around fostering citizens’ technological expectations and the social influence on e-government usage.
Abstract: In the present research, various formulations of the wavelet transform are applied to the acceleration time history of an earthquake. These transforms decompose the strong ground motion into low- and high-frequency parts. Since the high-frequency portion of the strong ground motion has a minor effect on the dynamic response of structures, the structure is excited by the low-frequency part only. Consequently, the seismic response of the structure is predicted in about half the computational time of a conventional time history analysis. To reduce the computational effort needed in the seismic optimization of structures, the seismic optimization of a shear frame structure is conducted by applying various forms of the mentioned transformation within a genetic algorithm.
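The decomposition step can be illustrated with one level of the Haar discrete wavelet transform, which halves the record length while separating low- and high-frequency content; the acceleration record below is made up for the example:

```python
def haar_decompose(signal):
    """One level of the Haar discrete wavelet transform: split a record
    into low-frequency (approximation) and high-frequency (detail)
    coefficients, each half the length of the input."""
    s = 2 ** -0.5
    pairs = list(zip(signal[0::2], signal[1::2]))
    approx = [(a + b) * s for a, b in pairs]  # low-frequency part
    detail = [(a - b) * s for a, b in pairs]  # high-frequency part
    return approx, detail

# Hypothetical ground-acceleration samples (g)
record = [0.0, 0.1, 0.3, 0.2, -0.1, -0.2, 0.0, 0.1]
approx, detail = haar_decompose(record)
```

Exciting the structure with the half-length approximation instead of the full record is what roughly halves the time-history analysis cost; the transform is invertible, so no information is lost in the decomposition itself.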
Abstract: Emissions are a consequence of electricity generation. As a major option for low-carbon generation, local energy systems (LES) featuring Combined Heat and Power with solar PV (CHPV) have significant potential to increase energy performance, increase resilience, and offer greater control of local energy prices while complementing the UK’s emissions standards and targets. Recent advances in dynamic modelling and simulation of buildings and clusters of buildings using the IDEAS framework have successfully validated a novel multi-vector approach (simultaneous control of both heat and electricity) to integrating the wide range of primary and secondary plant typical of local energy system designs, including CHP, solar PV, gas boilers, absorption chillers, and thermal energy storage, together with the associated electrical and hot water networks, all operating under a single unified control strategy. Results from this work indicate through simulation that integrated control of thermal storage can play a pivotal role in optimizing system performance well beyond present expectations. Environmental impact analysis and reporting for all energy systems, including CHPV LES, presently employ a static annual average carbon emissions intensity for grid-supplied electricity. This paper focuses on establishing and validating CHPV environmental performance against conventional emissions values and assessment benchmarks, analyzing emissions performance with and without an active thermal store in a notional group of non-domestic buildings. Results of this analysis are presented and discussed in the context of performance validation and of quantifying the reduced environmental impact of CHPV systems with active energy storage in comparison with conventional LES designs.
Abstract: The United Arab Emirates is clearly facing a multitude of challenges in curbing its greenhouse gas emissions to meet its Kyoto Protocol framework and COP21 targets, owing to its drive for modernization, industrialization, and infrastructure growth, its soaring population, and its oil and gas activity. In this work, we focus on the market penetration of bona fide zero-emission electric vehicles in the country’s transport industry as a means of emission reduction. We study global electric vehicle market trends, the complementary battery technologies and manufacturer trends, emission standards across borders, and the prioritized advancements that will ultimately dictate future conditions for the United Arab Emirates’ transport industry. Based on our findings and analysis at every stage of current viability and the state of transport affairs, we postulate policy recommendations to local governmental entities from a supply and demand perspective, covering aspects of technology, infrastructure requirements, changes in power dynamics, end-user incentive programs, market regulator behavior, and communications amongst key stakeholders.
Abstract: This paper presents an adaptive framework for
modelling financial markets using equity risk premiums, risk free
rates and volatilities. The recorded economic factors are initially
used to train four adaptive filters for a certain limited period of time
in the past. Once the systems are trained, the adjusted coefficients
are used for modelling and prediction of an important financial
market index. Two different approaches based on least mean squares
(LMS) and recursive least squares (RLS) algorithms are investigated.
Performance analysis of each method in terms of the mean squared
error (MSE) is presented and the results are discussed. Computer
simulations carried out using recorded data show MSEs of 4% and
3.4% for the next month prediction using LMS and RLS adaptive
algorithms, respectively. For twelve-month prediction, the RLS
method shows better trend estimation than the LMS algorithm.
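The LMS variant can be sketched in a few lines: the tap weights are nudged along the instantaneous error gradient at each step. The variable names, three-tap configuration, and constant training data below are illustrative, not taken from the paper:

```python
def lms_train(inputs, targets, n_taps=3, mu=0.05):
    """Least-mean-squares adaptive filter: update the tap weights by
    stochastic gradient descent on the instantaneous squared error.
    `inputs` could hold lagged economic factors, `targets` the index."""
    w = [0.0] * n_taps
    for t in range(n_taps, len(inputs)):
        x = inputs[t - n_taps:t]                     # recent samples
        y = sum(wi * xi for wi, xi in zip(w, x))     # filter output
        e = targets[t] - y                           # prediction error
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
    return w

def lms_predict(w, recent):
    """Predict the next target from the most recent n_taps inputs."""
    return sum(wi * xi for wi, xi in zip(w, recent))

# Toy convergence check: learn to map a constant input to a constant target
weights = lms_train([1.0] * 100, [2.0] * 100)
```

RLS replaces the fixed step `mu` with a recursively updated inverse-correlation matrix, which converges faster on correlated inputs at a higher per-step cost, consistent with the MSE comparison reported above.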
Abstract: Digital systems are in the Cognitive wave of the eTransformations and are now extensively aimed at meeting individuals’ demands, both those of customers requiring services and those of service providers. It is also apparent that successful future systems will not simply open doors for traditional owners/users to offer and receive services, as Uber, for example, does today, but will require more customized and cognitively enabled infrastructures that are responsive to the system user’s needs. To identify what is required for such systems, this research reviews the historical and current effects of the eTransformation process by studying: 1. eTransitions of company websites and mobile applications, 2. the emergence of new shared-economy business models such as Uber, and 3. new requirements for demand-driven, cognitive systems capable of learning and just-in-time decision-making. Based on this analysis, the study proposes a Cognitive eTransformation Framework capable of guiding implementations of new responsive and user-aware systems.
Abstract: The paper traces the birth and evolution of the British precedent Rylands v. Fletcher which, once adopted on the other side of the ocean (in the United States), gave rise to a general clause of liability for abnormally dangerous activities, recognized by §20 of the American Restatement of the Law Third, Liability for Physical and Emotional Harm. The main goal of the paper is to analyze the development of the legal doctrine and case law subsequent to the precedent, together with the intent of the British judicature to leap from the traditional rule contained in Rylands v. Fletcher to a general clause similar to that introduced in the United States and, recently, also at the European level. As is well known, within the scope of tort law two different initiatives compete in the aim of harmonizing European laws: the European Group on Tort Law with its Principles of European Tort Law (hereinafter PETL), in which article 5:101 sets forth a general clause of strict liability for abnormally dangerous activities, and the Study Group on a European Civil Code with its Common Frame of Reference (CFR), which instead promotes an ad hoc model listing determined cases of strict liability. The very narrow scope of application of art. 5:101 PETL, restricted only to abnormally dangerous activities, stands in opposition to the very broad spectrum of strict liability cases governed by the CFR. The former is a perfect example of a general clause that offers a minimum and basic standard, possibly acceptable even in those countries in which, as in the United Kingdom, this regime of liability is completely marginalized.
Abstract: This paper is a qualitative case study analysis of the development of a fully online learning community of graduate students through arts-based community building activities. With increasing numbers and types of online learning spaces, it is incumbent upon educators to continue to push the edge of what best practices look like in digital learning environments. In digital learning spaces, instructors can no longer be seen as purveyors of content knowledge to be examined at the end of a set course by a final test or exam. The rapid and fluid dissemination of information via Web 3.0 demands that we reshape our approach to teaching and learning, from one that is content-focused to one that is process-driven. Rather than having instructors as formal leaders, today’s digital learning environments require us to share expertise, as it is the collective experiences and knowledge of all students, together with the instructors, that help to create a very different kind of learning community. This paper focuses on innovations pursued in a 36-hour, 12-week graduate course in higher education entitled “Critical and Reflective Practice”. The authors chronicle their journey to developing a fully online learning community (FOLC) by emphasizing the elements of social, cognitive, emotional, and digital spaces that form a moving interplay through the community. In this way, students embrace anywhere, anytime learning and often take the learning, as well as the relationships they build and the skills they acquire, beyond the digital class into real-world situations. We argue that in order to increase student online engagement, pedagogical approaches need to stem from two primary elements, creativity and critical reflection, which are essential pillars upon which instructors can co-design learning environments with students.
The theoretical framework for the paper is based on the interaction and interdependence of Creativity, Intuition, Critical Reflection, Social Constructivism and FOLCs. By leveraging students’ embedded familiarity with a wide variety of technologies, this case study of a graduate-level course on critical reflection in education examines how relationships, the quality of work produced, and student engagement can improve through creative and imaginative pedagogical strategies. The authors examine their professional pedagogical strategies through the lens of the teacher as facilitator, guide, and co-designer. In a world where students can easily search for and organize information as self-directed processes, creativity and connection can at times be lost in the digitized course environment. The paper concludes by posing further questions as to how institutions of higher education may be challenged to restructure their credit-granting courses into more flexible modules, and how students need to be considered an important part of assessment and evaluation strategies. By introducing creativity and critical reflection as central features of digital learning spaces, notions of best practices in digital teaching and learning emerge.
Abstract: This paper aims to analyse how Ian Hacking states the theoretical basis of his research on the classification of people. Although all his early philosophical education was based on Foucault, it is also true that Erving Goffman’s perspective provided him with epistemological and methodological tools for understanding face-to-face relationships. Hence, all his works must be read as social science texts that combine research on how individuals are constituted ‘top-down’ (as in Foucault) with inquiry into how people renegotiate ‘bottom-up’ the classifications about them. Thus, Hacking’s proposal constitutes a middle ground between the French philosopher and the American sociologist. Placing himself between both authors allows Hacking to build a frame that is expected to fit the main particularity of the social sciences: the fact that they study interactive kinds. These are kinds of people, which implies that those who are classified can change in ways that prompt the need to change the previous classifications themselves. It is all about the interaction between the labelling of people and the people who are classified. Consequently, understanding the way in which Hacking uses Foucault’s and Goffman’s theories is essential to fully comprehend the social dynamic between individuals and concepts, what Bert Hansen called dialectical realism. His theoretical proposal, therefore, is valuable not only because it combines diverse perspectives, but also because it constitutes an utterly original and relevant framework for sociological theory and particularly for criminology.
Abstract: Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracies and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties and are sometimes conflicting. In this study, we present a framework that exploits the capability of Evidence Theory (a.k.a. the Dempster-Shafer and Dezert-Smarandache Theories) to represent and manage uncertainty and conflict, for fast change detection and for dealing effectively with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we apply the method to estimate the minimum number of sensors needed for combination, so that computational efficiency can be improved. A cumulative sum test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
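The fusion step can be illustrated with Dempster's rule of combination over a two-hypothesis frame {change, no_change}; the per-sensor mass assignments below are hypothetical, standing in for masses derived from the Kullback-Leibler distances:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions defined over focal
    sets (frozensets of hypotheses), normalizing out the mass assigned
    to conflicting (empty-intersection) pairs."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # contradictory evidence
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}

C = frozenset({"change"})
N = frozenset({"no_change"})
U = C | N  # total ignorance: mass not committed to either hypothesis

sensor1 = {C: 0.6, U: 0.4}           # hypothetical per-sensor masses
sensor2 = {C: 0.5, N: 0.2, U: 0.3}
fused = dempster_combine(sensor1, sensor2)
```

The fused mass on {change} exceeds either sensor's individual commitment, while the residual mass on the full frame captures the remaining uncertainty that the pignistic transformation later redistributes before the CUSUM decision.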