Abstract: In the construction industry, reinforced concrete (RC) slabs
represent fundamental elements of buildings and bridges. Different
methods are available for analysing the structural behaviour of
slabs. In the early years of the last century, the yield-line method
was proposed to tackle this problem. Simple-geometry
problems could easily be solved by using traditional hand analyses
which include plasticity theories. Nowadays, advanced finite element
(FE) analyses have mainly found their way into applications of
many engineering fields due to the wide range of geometries to
which they can be applied. In such cases, the application of an
elastic or a plastic constitutive model would completely change the
approach of the analysis itself. Elastic methods are popular due to
their easy applicability to automated computations. However, elastic
analyses are limited, since they do not consider any aspect of the
material behaviour beyond its yield limit, which turns out to be an
essential aspect of RC structural performance. Non-linear analyses
that model plastic behaviour, on the other hand, give very reliable
results. However, this type of analysis is computationally quite
expensive, i.e. not well suited for solving
daily engineering problems. In the past years, many researchers have
worked on filling this gap between easy-to-implement elastic methods
and computationally complex plastic analyses. This paper proposes
a numerical procedure through which a pseudo-lower-bound solution,
which does not violate the yield criterion, is achieved. The
advantages of moment distribution are taken into account, hence the
increase in strength provided by plastic behaviour is considered. The
lower-bound solution is improved by detecting over-yielded moments,
which are used to guide the redistribution of moments among the
remaining non-yielded elements. The proposed technique obeys
Nielsen’s yield criterion. The outcome of this analysis is a
simple, accurate, and fast tool for predicting the lower-bound
collapse load of RC slabs. Using this method, structural engineers
can find the fracture patterns and the ultimate load-bearing
capacity. The collapse-triggering mechanism is
found by detecting yield-lines. An application to the simple case of
a square clamped slab is shown, and good agreement is found with
the exact value of the collapse load.
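For reference, the yield condition commonly attributed to Nielsen for orthotropically reinforced slab elements can be written as below; the notation (sagging resistances m_px, m_py, hogging resistances m'_px, m'_py) is assumed here and may differ from the paper's own symbols. A lower-bound moment field must satisfy both inequalities at every point of the slab.

```latex
% Nielsen's yield criterion for a slab element (sketch, assumed notation):
% m_x, m_y, m_xy are the acting bending and twisting moments per unit width.
(m_{px} - m_x)(m_{py} - m_y) \ge m_{xy}^{2} \quad \text{(sagging yield not violated)}
(m'_{px} + m_x)(m'_{py} + m_y) \ge m_{xy}^{2} \quad \text{(hogging yield not violated)}
```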
Abstract: The linear programming model is sometimes difficult to apply in real business situations due to its assumption of proportionality. This paper shows an example of how to use the De Novo programming approach instead of linear programming. In De Novo programming, resources are not fixed as in linear programming; resource quantities depend only on the available budget. The budget is a new, important element of the De Novo approach. Two different production situations are presented: increasing costs and quantity discounts of raw materials. The focus of this paper is on the advantages of the De Novo approach in optimizing the production plan of a company which produces souvenirs made from the famous stone of the island of Brac, one of the largest islands in Croatia.
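As a hedged illustration of the shift from fixed resources to a budget (assuming the standard single-objective De Novo formulation due to Zeleny, which may differ in detail from the model used in the paper):

```latex
% Classical LP: the resource vector b is fixed.
\max\; z = c^{\top}x \quad \text{s.t.} \quad Ax \le b,\; x \ge 0
% De Novo programming: resource levels b become decision variables,
% purchased at unit prices p within a total budget B.
\max\; z = c^{\top}x \quad \text{s.t.} \quad Ax - b \le 0,\;\; p^{\top}b \le B,\;\; x,\, b \ge 0
% Eliminating b yields a single knapsack-type budget constraint:
\max\; z = c^{\top}x \quad \text{s.t.} \quad (p^{\top}A)\,x \le B,\; x \ge 0
```

Increasing costs or quantity discounts then enter through a unit-price vector p that depends on the purchased quantities.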
Abstract: Introduction: Currently, there has been increasing concern about the provision of palliative care in non-oncological patients in both the professional literature and clinical practice. However, there is not much scientific information on how to provide neurological and palliative care together. The main objective of the project is to create and verify a concept of neuro-palliative and rehabilitative care for patients with selected neurological diseases in an advanced stage of the disease, and also to evaluate the bio-psychosocial and spiritual needs of these patients and their caregivers related to quality of life, using newly created standardized tools. Methodology: Triangulation of research methods (qualitative and quantitative) will be used. A concept of care and assessment tools will be developed by analyzing interviews and focus groups. Qualitative data will be analyzed using grounded theory. The concept of care will be tested in the context of an intervention study. Using quantitative analysis, we will assess the effect of the intervention provided on the saturation of needs, quality of life, and quality of care. The research sample will be made up of patients with selected neurological diseases (Parkinson's syndrome, motor neuron disease, multiple sclerosis, Huntington's disease), together with patients' family members. Based on the results, educational materials and a certified course for health care professionals will be created. Findings: Based on the qualitative data analysis, we will propose a concept of an integrated care model combining neurological, rehabilitative and specialist palliative care for patients with selected neurological diseases in different settings of care and services. Patients' needs related to quality of life will be described by newly created and validated measuring tools before the start of the intervention (application of the neuro-palliative and palliative approach) and then at a later time interval. Conclusion: Based on the results, educational materials and a certified course for doctors and health care professionals will be created.
Abstract: In an energy-intensive world, minimizing energy consumption is paramount to saving costs and reducing the carbon footprint. Improving mixture procedures by utilizing the warm-mix additive Fischer-Tropsch (FT) wax in ethylene vinyl acetate (EVA) modified bitumen highlights a greener and more sustainable approach to modified bitumen. In this study, the impact of FT wax on optimized EVA/waste crumb rubber modified bitumen is assayed with a maximum loading of 2.5%. The rationale of the FT wax loading is to maintain the original maximum loading of EVA in the optimized mixture. The phase-change abilities of FT wax enable EVA co-crystallization with the support of the elastomeric backbone of crumb rubber. Less than 1% loading of FT wax worked in the EVA/crumb rubber modified bitumen energy-sustainability nexus. A response surface methodology approach to the mixture design is implemented across the different loadings of FT wax and EVA for a consistent amount of crumb rubber and bitumen. Rheological parameters (complex shear modulus, phase angle and rutting parameter) were used as performance indicators of the different optimized mixtures. The low-temperature chemistry of the optimized mixtures is analyzed using elementary beam theory and the elastic-viscoelastic correspondence principle. Master curves and black space diagrams are developed and used to predict age-induced cracking of the different long-term aged mixtures. Modified binder rheology reveals that the strain response is not linear and that there is substantial re-arrangement of polymer chains as stress is increased; this depends on the age state of the mixture and on the FT wax and EVA loadings. Dominance of individual effects over synergistic effects is evident in the co-interaction of EVA and FT wax. All-inclusive FT wax and EVA formulations were best optimized in mixture 4, with mixture 7 reflecting an increase in ease of workability. Findings show that the interaction chemistry of bitumen, crumb rubber, EVA, and FT wax is of first and second order in all cases involving individual contributions and co-interaction amongst the components of the mixture.
Abstract: This paper presents Monte Carlo (MC) method-based dose distributions on a lung tumor for a 6 MV photon beam to improve the dosimetric accuracy of cancer treatment. Polystyrene, which is a tissue-equivalent material matching the lung tumor density, is used in this research. In the empirical calculations, the TRS-398 formalism of the IAEA has been used, and the setup was made according to the ICRU recommendations. The research outcomes were compared with state-of-the-art experimental results. From the experimental results, it is observed that the proposed approach provides more accurate results and improves the accuracy compared with the existing approaches. The average %variation between measured and TPS-simulated values was 1.337±0.531, which shows a substantial improvement compared with the state-of-the-art technology.
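The abstract does not spell out how the percentage variation is defined; a common definition, assumed here, is the point-wise relative deviation between measured and treatment-planning-system (TPS) doses:

```latex
% Assumed definition (not stated in the abstract):
\%\,\text{variation} = \frac{\lvert D_{\text{meas}} - D_{\text{TPS}} \rvert}{D_{\text{meas}}} \times 100
% The reported 1.337 \pm 0.531 would then be the mean and standard deviation
% of this quantity over the measurement points.
```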
Abstract: Crop yield prediction is a paramount issue in
agriculture. The main idea of this paper is to find an efficient
way to predict corn yield based on meteorological records.
The prediction models used in this paper can be classified into
model-driven approaches and data-driven approaches, according to
the different modeling methodologies. The model-driven approaches are based on crop mechanistic
modeling. They describe crop growth in interaction with their
environment as dynamical systems. However, calibrating such a
dynamical system is difficult, because it amounts to a
multidimensional non-convex optimization problem.
An original contribution of this paper is to propose a statistical
methodology, Multi-Scenarios Parameters Estimation (MSPE), for the
parametrization of potentially complex mechanistic models from a
new type of dataset (climatic data, final yield in many situations).
It is tested with CORNFLO, a crop model for maize growth. The data-driven approach to yield prediction,
on the other hand, does not rely on complex biophysical processes,
but it places strict requirements on the dataset.
A second contribution of the paper is the comparison of these
model-driven methods with classical data-driven methods. For this
purpose, we consider two classes of regression methods, methods
derived from linear regression (Ridge and Lasso Regression, Principal
Components Regression or Partial Least Squares Regression) and
machine learning methods (Random Forest, k-Nearest Neighbor,
Artificial Neural Network and SVM regression).
The dataset consists of 720 records of corn yield at county scale
provided by the United States Department of Agriculture (USDA) and
the associated climatic data. A 5-fold cross-validation process and
two accuracy metrics, the root mean square error of prediction (RMSEP)
and the mean absolute error of prediction (MAEP), were used to
evaluate the crop prediction capacity.
The results show that among the data-driven approaches, Random
Forest is the most robust and generally achieves the best prediction
error (MAEP 4.27%). It also outperforms our model-driven approach
(MAEP 6.11%). However, the method to calibrate the mechanistic
model from dataset easy to access offers several side-perspectives.
The mechanistic model can potentially help to underline the stresses
suffered by the crop or to identify the biological parameters of interest
for breeding purposes. For this reason, an interesting perspective is
to combine these two types of approaches.
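To illustrate the data-driven evaluation protocol, a minimal sketch of 5-fold cross-validation of a Random Forest regressor scored with RMSEP and MAEP is given below; the feature matrix, target vector and hyper-parameters are hypothetical placeholders, not the USDA/climatic data or settings used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

# Hypothetical data: X holds climatic features per county-year, y the corn yield.
rng = np.random.default_rng(0)
X = rng.normal(size=(720, 20))
y = rng.normal(loc=10.0, scale=2.0, size=720)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
rmsep, maep = [], []
for train_idx, test_idx in kf.split(X):
    model = RandomForestRegressor(n_estimators=500, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    err = model.predict(X[test_idx]) - y[test_idx]
    rmsep.append(np.sqrt(np.mean(err ** 2)))   # root mean square error of prediction
    maep.append(np.mean(np.abs(err)))          # mean absolute error of prediction

print(f"RMSEP: {np.mean(rmsep):.3f}, MAEP: {np.mean(maep):.3f}")
```

The paper reports MAEP as a percentage; dividing the absolute errors by the observed yields before averaging would give that relative form.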
Abstract: In this paper, we evaluate the resilience of the smart grid system in the presence of multiple cyber-physical attacks on its distinct functional components. We discuss attack-defense scenarios and their effect on smart grid resilience. Through contingency simulations in the Network and PowerWorld Simulator, we analyze multiple cyber-physical attacks that propagate from the cyber domain to power systems and discuss how such attacks destabilize the underlying power grid. The analysis of such simulations helps system administrators develop more resilient systems and improves the response of the system in the presence of cyber-physical attacks.
Abstract: The transition of a student with a disability from school to work is the most crucial phase of moving from adolescence into early adulthood. In this process, young individuals face various difficulties and challenges in order to accomplish the next venture of life successfully. In this respect, this paper aims to examine the challenges encountered by individuals with intellectual disabilities in the transition to work in Saudi Arabia. For this purpose, the study has undertaken a qualitative research methodology, in which an interpretivist philosophy has been followed along with an inductive approach and an exploratory research design. The data for the research have been gathered with the help of semi-structured interviews, whose findings are analysed with the help of thematic analysis. Semi-structured interviews were conducted with parents of persons with intellectual disabilities; officials, supervisors and specialists of two vocational rehabilitation centres providing training to intellectually disabled students; and directors of companies and websites involved in hiring those individuals. The total number of respondents for the interviews was 15. The purposive sampling method was used to select the respondents. This sampling method is a non-probability sampling method which draws respondents from a known population and allows flexibility and suitability in selecting the participants for the study. The findings gathered from the interviews revealed that a lack of awareness among parents regarding the rights of their intellectually disabled children, a lack of adequate communication and coordination between various entities, and concerns regarding training and subsequent employment are the key difficulties experienced by individuals with intellectual disabilities. Programmes such as bookbinding, carpentry, computing, agriculture, electricity and telephone exchange operations were the key training programmes involved. The findings of this study also revealed that information technology and media are playing a significant role in smoothing the transition to employment of individuals with intellectual disabilities. Furthermore, religious and cultural attitudes were identified as restricting people with such disabilities from taking advantage of job opportunities. On the basis of these findings, the information gathered through this study should be highly beneficial for Saudi Arabian schools and rehabilitation centres for individuals with intellectual disabilities, helping them overcome the problems encountered during the transition to work.
Abstract: The physical effects of upstream flow obstructions such
as vegetation on cross-ventilation phenomena of a building are
important for issues such as indoor thermal comfort. Modelling such
effects in Computational Fluid Dynamics simulations may also be
challenging. The aim of this work is to establish the cross-ventilation
jet behaviour in such complex terrain conditions as well as to provide
guidelines on the implementation of CFD numerical simulations in
order to model complex terrain features such as vegetation in an
efficient manner. The methodology consists of onsite measurements
on a test cell coupled with numerical simulations. It was found
that the cross-ventilation flow is highly turbulent despite the very
low velocities encountered internally within the test cells. While no
direct measurement of the jet direction was made, the measurements
indicate that flow tends to be reversed from the leeward to the
windward side. Modelling such a phenomenon proves challenging
and is strongly influenced by how vegetation is modelled. A solid
vegetation model tends to predict the direction and magnitude of the
flow better than a porous vegetation approach. A simplified terrain
model was also shown to provide good agreement with the observations.
The findings have important implications for the study of
cross-ventilation in complex terrain conditions, since the flow
direction is no longer trivial, as it is in the traditional
isolated-building case.
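For context, one common way of representing vegetation as a porous zone in CFD (not necessarily the exact model used in this work) is to add a canopy drag source term to the momentum equations, whereas the solid-vegetation alternative simply treats the canopy cells as blocked:

```latex
% Assumed porous-canopy momentum sink per unit volume:
S_i = -\,\rho\, C_d\, \mathrm{LAD}\, \lvert \mathbf{u} \rvert\, u_i
% C_d: canopy drag coefficient, LAD: leaf area density (m^2/m^3),
% |u|: local wind speed, u_i: velocity component.
```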
Abstract: In impedance spectroscopy (IS), the response of a photo-active device is analysed as a function of an ac bias. It is widely applied to a broad class of material systems and devices and gives access to fundamental mechanisms of operation of solar cells. We have implemented a variant of IS in which we modulate the light instead of the bias. This scheme allows us to analyze not only the carrier dynamics but also the impedance of the device locally. Using this scheme, we have measured the frequency-dependent photocurrent response of thin-film planar and nano-textured Si solar cells. The photocurrent response is measured in the range of 50 Hz to 50 kHz. Bode and Nyquist plots are used to determine the characteristic lifetime of both cells. Interestingly, the carrier lifetime of both the planar and nano-textured solar cells depends on the back and front contact positions. This is due either to heterogeneity of the device or to contacts that are not optimized. The estimated average lifetime is found to be shorter for the nano-textured cell, which could be due to the influence of the textured interface on the carrier relaxation dynamics.
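A hedged note on how the characteristic lifetime is commonly extracted from such plots (the exact procedure used in the paper may differ): the characteristic frequency f_c at the apex of the Nyquist semicircle, or equivalently at the peak of the imaginary response in the Bode representation, gives

```latex
\tau = \frac{1}{2\pi f_{c}}
% f_c: characteristic frequency at the apex of the Nyquist semicircle
% (peak of the imaginary part of the response in the Bode plot).
```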
Abstract: With the increasing dependency on our computer
devices, we face the necessity of adequate, efficient and effective
mechanisms for protecting our network. There are two main
problems that Intrusion Detection Systems (IDS) attempt to solve:
1) detecting the attack by analyzing the incoming traffic and inspecting
the network (intrusion detection), and 2) producing a prompt response
when the attack occurs (intrusion prevention). It is critical to create an
intrusion detection model that will detect a breach in the system on
time, and it is also challenging to make it provide an automatic
response with an acceptable delay at every stage of the monitoring
process. We cannot afford to adopt security measures that demand high
computational power, nor can we accept a mechanism that reacts with
a delay. In this paper, we
propose an intrusion response mechanism that is based on artificial
intelligence, and more precisely, reinforcement learning techniques
(RLT). The RLT will help us create a decision agent which will
control the process of interacting with the undetermined environment.
The goal is to find an optimal policy that represents the
intrusion response, i.e. to solve the reinforcement learning
problem using a Q-learning approach. Our agent will produce an
optimal immediate response while evaluating the network
traffic. This Q-learning approach will establish the balance between
exploration and exploitation and provide a unique, self-learning and
strategic artificial intelligence response mechanism for IDS.
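As a minimal sketch of the Q-learning update behind such an agent (the state space, action set and reward signal here are illustrative placeholders, not the ones defined in the paper):

```python
import random

# Illustrative state/action spaces for an intrusion-response agent (hypothetical).
states = ["normal", "suspicious", "under_attack"]
actions = ["allow", "rate_limit", "block_source", "alert_admin"]

alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate
Q = {(s, a): 0.0 for s in states for a in actions}

def choose_action(state):
    # Epsilon-greedy policy: balances exploration and exploitation.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # Standard Q-learning update rule.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# One illustrative interaction step; in practice the environment (the network
# monitor) supplies the observed transition and a reward reflecting response quality.
s = "suspicious"
a = choose_action(s)
update(s, a, reward=-1.0, next_state="under_attack")
```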
Abstract: Delays in the construction industry are a global phenomenon. Many construction projects experience extensive delays, exceeding the initially estimated completion time. The main purpose of this study is to identify typical behaviors of construction projects in order to develop a prognosis and management tool. Knowing a construction project's schedule tendency will enable evidence-based decision-making so that resolutions can be made before delays occur. This study presents an innovative approach that uses the Cluster Analysis Method to support predictions during Earned Value Analyses. A clustering analysis was used to predict the future behavior of scheduling, Earned Value Management (EVM), and Earned Schedule (ES) principal indexes in construction projects. The analysis was made using a database of 90 different construction projects. It was validated with additional data extracted from the literature and with another 15 contrasting projects. For all projects, planned and executed schedules were collected, and the EVM and ES principal indexes were calculated. A complete linkage classification method was used. In this way, the cluster analysis considers that the distance (or similarity) between two clusters must be measured by their most disparate elements, i.e. that the distance is given by the maximum span among their components. Finally, through the use of EVM and ES indexes and Tukey and Fisher pairwise comparisons, the statistical dissimilarity was verified and four clusters were obtained. It can be said that construction projects show an average delay of 35% of their planned completion time. Furthermore, four typical behaviors were found, and for each of the obtained clusters the interim milestones and the necessary rhythms of construction were identified. In general, the detected typical behaviors are: (1) projects that perform 5% of the work in the first two tenths and maintain a constant rhythm until completion (greater than 10% for each remaining tenth), being able to finish in the initially estimated time; (2) projects that start with an adequate construction rate but suffer minor delays, culminating in a total delay of almost 27% of the planned time; (3) projects which start with a performance below the planned rate and end up with an average delay of 64%; and (4) projects that begin with a poor performance, suffer great delays and end up with an average delay of 120% of the planned completion time. The obtained clusters compose a tool to identify the behavior of new construction projects by comparing their current work performance to the validated database, thus allowing the correction of initial estimations towards more accurate completion schedules.
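To illustrate the complete-linkage step, a minimal sketch is given below; the feature vectors are hypothetical EVM/ES index series per project, not the actual 90-project database.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical feature matrix: one row per project, columns are schedule/EVM/ES
# indexes sampled at each tenth of the planned duration.
rng = np.random.default_rng(1)
projects = rng.normal(size=(90, 10))

# Complete linkage: the distance between two clusters is the distance between
# their most disparate members (the maximum pairwise distance).
Z = linkage(projects, method="complete", metric="euclidean")
labels = fcluster(Z, t=4, criterion="maxclust")   # cut the dendrogram into 4 clusters

for k in range(1, 5):
    print(f"cluster {k}: {np.sum(labels == k)} projects")
```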
Abstract: In recent decades, there have been significant developments in the European Union in the field of collective consumer redress. The South East European (SEE) countries covered by this paper, in line with their EU accession priorities and duties under the Stabilisation and Association Agreements, have to harmonize their national laws with the relevant EU acquis for consumer protection (Chapter 28: Consumer and Health Protection). In these countries, only minimal compliance has been achieved. SEE countries have introduced rudimentary collective redress mechanisms, with modest enforcement of collective redress and little case law. This paper is based on comprehensive interdisciplinary research conducted for SEE countries on common principles for injunctive and compensatory collective redress mechanisms, emphasizing cross-national comparisons and underlining issues of commonality and difference, with the aim of developing recommendations for adequate enforcement of collective redress. SEE countries are characterized by a sectoral approach to regulating collective redress, contrary to the majority of EU Member States, which have adopted a horizontal approach. In most SEE countries, the laws recognize only injunctive, not compensatory, collective redress in consumer protection. All stakeholders responsible for the implementation of collective redress in SEE countries lack information and awareness of collective redress mechanisms and the way they function in practice. Therefore, specific actions are needed in these countries to make the whole system of collective redress for consumer protection operational and efficient. Taking into consideration the various designated stakeholders in collective redress in each SEE country, there is a need for their mutual coordination and cooperation in order to develop consumer protection systems and policies. By putting the national collective redress mechanisms into practice, effective access to justice for all consumers and the principle of the rule of law will be secured, and appropriate procedural guarantees to avoid abusive litigation will be ensured.
Abstract: The aim of the paper is to explore the role of social marketing in changing consumer behavior regarding road safety, and to identify the critical aspects and priority needs which impede the implementation of the road safety program in Georgia. Given the goals of the study, a quantitative method was used to carry out interviews for primary data collection. This research identified the level of awareness of road safety, the legislative base, and the marketing interventions used to change the behavior of drivers and pedestrians. For several years, the non-governmental sector, together with the local authorities and the media, has been working very intensively on the road safety issue in Georgia, but only the seat-belt campaign can be considered reasonably successful. Despite achievements in this field, the efficiency of road safety programs is far from satisfactory and needs strong reinforcement.
Abstract: Studying the DNA (deoxyribonucleic acid) sequence is useful in biological processes and is applied in fields such as diagnostic and forensic research. DNA carries the hereditary information in humans and almost all other organisms and is passed on to their offspring. Early-stage detection of a defective DNA sequence may lead to many developments in the field of bioinformatics. Nowadays, various tedious techniques are used to identify defective DNA. The proposed work is to analyze and identify the cancer-causing DNA motif in a given sequence. Initially, the human DNA sequence is separated into k-mers using the k-mer separation rule. The separated k-mers are clustered using a Self Organizing Map (SOM). Using the Levenshtein distance measure, the cancer-associated DNA motif is identified from the k-mer clusters. Experimental results of this work indicate the presence or absence of the cancer-causing DNA motif. If the cancer-associated DNA motif is found in the DNA, it is declared a cancer-causing DNA sequence; otherwise, the input human DNA is declared a normal sequence. Finally, the elapsed time for finding the cancer-causing DNA motif using cluster formation is calculated and compared with the normal process of finding the motif; locating the cancer-associated motif is easier in the cluster formation process. The proposed work will be an initial aid for research into finding genetic diseases.
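A minimal sketch of two of the described steps, k-mer separation and Levenshtein-distance matching against a motif, is given below; the sequence, motif and k are placeholders, and the SOM clustering stage is omitted.

```python
def kmers(seq, k):
    """Separate a DNA sequence into overlapping k-mers."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def levenshtein(a, b):
    """Dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Placeholder inputs: a short DNA fragment and a hypothetical motif of interest.
sequence = "ATCGGATTACAGGATTACC"
motif = "GGATTAC"
hits = [(km, levenshtein(km, motif)) for km in kmers(sequence, len(motif))]
print(min(hits, key=lambda t: t[1]))   # closest k-mer to the motif and its distance
```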
Abstract: Advancements in various concrete ingredients such as plasticizers, additives and fibers have enabled concrete technologists to develop many viable varieties of special concretes in recent decades. These varieties of concrete offer significant enhancements in both the green and hardened properties of concrete. A prudent selection of the appropriate type of concrete can resolve many design and application issues in construction projects. This paper focuses on the usage of self-compacting concrete, high early strength concrete, structural lightweight concrete, fiber reinforced concrete, high performance concrete and ultra-high strength concrete in structures. The modified properties of strength at various ages, flowability, porosity, equilibrium density, flexural strength, elasticity, permeability, etc. need to be carefully studied and incorporated into the design of the structures. The paper demonstrates various mixture combinations and the concrete properties that can be leveraged. The selection of such products based on the end use of the structures is proposed in order to efficiently utilize the modified characteristics of these concrete varieties. The study involves mapping the characteristics to benefits and savings for the structure from a design perspective. Self-compacting concrete in a structure is characterized by high shuttering loads, a better finish, and the feasibility of closer reinforcement spacing. The structural design procedures can be modified to specify higher formwork strength, the height of vertical members, cover reduction and increased ductility. The transverse reinforcement can be spaced at closer intervals compared to regular structural concrete. Structural lightweight concrete allows structures to be designed for reduced dead load and increased insulation properties. Member dimensions and steel requirements can be reduced in proportion to the roughly 25 to 35 percent reduction in dead load due to the self-weight of concrete. Steel fiber reinforced concrete can be used to design grade slabs without primary reinforcement because of its 70 to 100 percent higher tensile strength. The design procedures incorporate reductions in thickness and joint spacing. High performance concrete increases the life of structures by improving paste characteristics and durability through the incorporation of supplementary cementitious materials. Often, these concretes are also designed for slower heat generation in the initial phase of hydration. The structural designer can incorporate the slow development of strength in the design and specify 56- or 90-day strength requirements. For designing high-rise building structures, the creep and elasticity properties of such concrete also need to be considered. Lastly, certain structures require performance under loading conditions much earlier than the final maturity of concrete. High early strength concrete has been designed to cater to a variety of usages at ages as early as 8 to 12 hours. Therefore, an understanding of concrete performance specifications for special concretes is a definite gateway to a superior structural design approach.
Abstract: Latent heat thermal energy storage systems are a
thrust area of research due to their immense thermal energy storage
potential. The thermal performance of a phase change material (PCM)
is significantly augmented by the installation of
high-thermal-conductivity fins. The objective of the present study is
to obtain the optimum size and location of the fins to enhance
diffusion heat transfer without altering the overall melting time.
Hence, constructal theory is employed to eliminate, resize, and
re-position the fins. A numerical code based on a conjugate heat
transfer coupled enthalpy-porosity approach is developed to solve the
Navier-Stokes and energy equations. The numerical results show
that the constructal fin design enhances the thermal performance
along with an increase in the overall volume of PCM when compared to
the conventional design. The overall volume of PCM is found to
increase by half of the total volume of the fins. Eliminating fins in
regions of low temperature gradient and re-positioning them in regions
of high temperature gradient is found to be vital.
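For reference, the enthalpy-porosity technique commonly damps the velocity in the mushy zone with a Carman-Kozeny-type momentum sink; the sketch below uses standard notation and typical constants, which are assumptions rather than values from this study.

```latex
% Momentum sink in the partially solidified (mushy) region:
S_i = -\,\frac{C\,(1-\beta)^{2}}{\beta^{3} + \epsilon}\, u_i
% beta: local liquid fraction (0 = solid, 1 = liquid); C: mushy-zone constant
% (typically 1e5 - 1e7); epsilon: small number (~1e-3) avoiding division by zero.
% The liquid fraction also carries the latent heat L in the energy equation
% through \Delta H = \beta L.
```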
Abstract: In this paper, an (irregular) case relating to the base circle, root circle, and pressure angle has been discussed, and a computer programme has been developed to simulate and plot the spur gear tooth profile, including the involute and trochoid curves, based on the formulation of the rack cutter using different values of pressure angle and profile shift factor; the programme also gives the values of all important geometric parameters. The results showed the flexibility of this approach and the versatility of the programme in drawing many different cases of spur gear teeth of any module, pressure angle, profile shift factor, number of teeth and rack cutter tip radius. The procedure developed can be extended to produce finite element models of heretofore intractable geometrical forms and to explore the fabrication of nonstandard tooth forms. Finite element models of these irregular cases have been built using the above programme, modal analysis has been done using ANSYS software, and the natural frequencies of these selected cases have been obtained and discussed.
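A minimal sketch of generating the involute portion of such a tooth profile is given below, using the standard involute-of-the-base-circle parametrization; the module, tooth number and pressure angle are placeholder values, not the paper's cases.

```python
import numpy as np

# Placeholder gear data: module m (mm), number of teeth z, pressure angle (deg).
m, z, alpha_deg = 2.0, 20, 20.0
alpha = np.radians(alpha_deg)

r_pitch = m * z / 2.0               # pitch circle radius
r_base = r_pitch * np.cos(alpha)    # base circle radius

# Involute of the base circle: x = r_b (cos t + t sin t), y = r_b (sin t - t cos t)
t = np.linspace(0.0, 0.6, 100)      # roll angle in radians, range chosen for illustration
x = r_base * (np.cos(t) + t * np.sin(t))
y = r_base * (np.sin(t) - t * np.cos(t))

print(f"base radius = {r_base:.3f} mm, involute from ({x[0]:.3f}, {y[0]:.3f}) "
      f"to ({x[-1]:.3f}, {y[-1]:.3f})")
```

The trochoid (root fillet) curve generated by the rack cutter tip would be added analogously from the cutter geometry.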
Abstract: The purpose of the present research is to equate two
test forms as part of a study to evaluate the educational effectiveness
of the ARTé: Mecenas art history learning game. The researcher
applied Item Response Theory (IRT) procedures to calculate item,
test, and mean-sigma equating parameters. With the sample size
n=134, test parameters indicated “good” model fit but low Test
Information Functions and more acute than expected equating
parameters. Therefore, the researcher applied equipercentile equating
and linear equating to raw scores and compared the equated form
parameters and effect sizes from each method. Item scaling in IRT
enables the researcher to select a subset of well-discriminating items.
The mean-sigma step produces a mean-slope adjustment from the
anchor items, which was used to scale the score on the new form
(Form R) to the reference form (Form Q) scale. In equipercentile
equating, scores are adjusted to align the proportion of scores in each
quintile segment. Linear equating produces a mean-slope adjustment,
which was applied to all core items on the new form. The study
followed a quasi-experimental design with purposeful sampling of
students enrolled in a college level art history course (n=134) and
counterbalancing design to distribute both forms on the pre- and posttests.
The Experimental Group (n=82) was asked to play ARTé:
Mecenas online and complete Level 4 of the game within a two-week
period; 37 participants completed Level 4. Over the same period, the
Control Group (n=52) did not play the game. The researcher
examined between-group differences in post-test scores on test
Form Q and Form R by a full-factorial two-way ANOVA. The raw
score analysis indicated a 1.29% direct effect of form, which was
statistically non-significant but may be practically significant. The
researcher repeated the between-group differences analysis with all
three equating methods. For the IRT mean-sigma adjusted scores,
form had a direct effect of 8.39%. Mean-sigma equating with a small
sample may have resulted in inaccurate equating parameters.
Equipercentile equating aligned test means and standard deviations,
but resultant skewness and kurtosis worsened compared to raw score
parameters. Form had a 3.18% direct effect. Linear equating
produced the lowest Form effect, approaching 0%. Using linearly
equated scores, the researcher conducted an ANCOVA to examine
the effect size in terms of prior knowledge. The between-group effect
size for the Control Group versus Experimental Group participants
who completed the game was 14.39% with a 4.77% effect size
attributed to pre-test score. Playing and completing the game
increased art history knowledge, and individuals with low prior
knowledge tended to gain more from pre- to post-test. Ultimately,
researchers should approach test equating based on their theoretical
stance on Classical Test Theory and IRT and the respective assumptions. Regardless of the approach or method, test equating
requires a representative sample of sufficient size. With small sample
sizes, the application of a range of equating approaches can expose
item and test features for review, inform interpretation, and identify
paths for improving instruments for future study.
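For readers unfamiliar with the adjustments compared above, a brief sketch in standard notation (assumed; the paper's own symbols may differ):

```latex
% IRT mean-sigma transformation of the new form (R) onto the reference form (Q)
% scale, computed from the anchor-item difficulty estimates b:
A = \frac{\sigma(b_{Q})}{\sigma(b_{R})}, \qquad
B = \mu(b_{Q}) - A\,\mu(b_{R}), \qquad
\theta_{Q} = A\,\theta_{R} + B
% Classical linear equating of a raw score x on Form R onto the Form Q scale:
y(x) = \frac{\sigma_{Q}}{\sigma_{R}}\,(x - \mu_{R}) + \mu_{Q}
% Equipercentile equating instead maps each Form R score to the Form Q score
% having the same percentile rank.
```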
Abstract: The world-wide population of people over 60 years
of age is growing rapidly. The explosion is placing increasingly
onerous demands on individual families, multiple industries and
entire countries. Current human-intensive approaches to eldercare
are not sustainable, but IoT and AI technologies can help. The
Knowledge Reactor (KR) is a contextual, data fusion engine built to
address this and other similar problems. It fuses and centralizes IoT
and System of Record/Engagement data into a reactive knowledge
graph. Cognitive applications and services are constructed with its
multiagent architecture. The KR can scale up and scale down, because
it exploits container-based, horizontally scalable services for graph
store (JanusGraph) and pub-sub (Kafka) technologies. While the KR
can be applied to many domains that require IoT and AI technologies,
this paper describes how the KR specifically supports the challenging
domain of cognitive eldercare. Rule- and machine learning-based
analytics infer activities of daily living from IoT sensor readings. KR
scalability, adaptability, flexibility and usability are demonstrated.
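As a hedged illustration of the pub-sub side of such a pipeline, the sketch below consumes IoT sensor readings with the kafka-python client and applies a single toy rule; the topic name, message schema and the rule itself are hypothetical and not taken from the KR implementation.

```python
import json
from kafka import KafkaConsumer

# Hypothetical topic carrying IoT sensor readings as JSON, e.g.
# {"resident": "r1", "sensor": "kitchen_motion", "value": 1, "ts": 1690000000}
consumer = KafkaConsumer(
    "sensor-readings",                      # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

def infer_adl(reading):
    """Toy rule: motion in the kitchen suggests the 'meal preparation' activity."""
    if reading["sensor"] == "kitchen_motion" and reading["value"] == 1:
        return "meal_preparation"
    return None

for msg in consumer:
    activity = infer_adl(msg.value)
    if activity:
        # In the KR, an inferred activity would be written into the knowledge graph.
        print(f"{msg.value['resident']}: inferred ADL -> {activity}")
```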