Attitudes of Gratitude: An Analysis of 30 Cancer Narratives Published by Leading U.S. Cancer Care Centers

This study examines the ways in which cancer patient narratives are portrayed and framed on the websites of three leading U.S. cancer care centers: The University of Texas MD Anderson Cancer Center in Houston, Memorial Sloan Kettering Cancer Center in New York, and the Seattle Cancer Care Alliance. Thirty patient stories, 10 from each cancer center's website blog, were analyzed using qualitative and quantitative textual analysis of unstructured data, documenting common themes and other elements of story structure and content. Patient narratives were coded using grounded theory as the basis for conducting emergent qualitative research. As part of a systematic, inductive approach to collecting and analyzing data, recurrent and unique themes were examined and compared in terms of positive and negative framing, patient agency, and institutional praise. All three cancer care centers are university-affiliated teaching hospitals that emphasize an evidence-based scientific approach to treatment utilizing the latest research and cutting-edge techniques and technology. Yet the featured cancer stories suggest positive outcomes based on anecdotal narratives rather than on the science-based treatment models the centers employ. The analysis of the 30 sample stories found a skewed representation of the "cancer experience" that emphasizes positive outcomes while minimizing or excluding the more negative realities of cancer diagnosis and treatment. The stories also deemphasize patient agency, focusing instead on deference and gratitude toward the cancer care centers, which are cast in the role of savior.

Depth Camera Aided Dead-Reckoning Localization of Autonomous Mobile Robots in Unstructured Global Navigation Satellite System Denied Environments

In global navigation satellite system (GNSS) denied settings, such as indoor environments, autonomous mobile robots are often limited to dead-reckoning navigation techniques to determine their position, velocity, and attitude (PVA). Localization is typically accomplished by employing an inertial measurement unit (IMU), which, while precise in nature, accumulates errors rapidly and severely degrades the localization solution. Standard sensor fusion methods, such as Kalman filtering, aim to fuse precise IMU measurements with accurate aiding sensors to establish a solution that is both precise and accurate. In indoor environments where GNSS is unavailable and no other a priori information about the environment is known, effective sensor fusion is difficult to achieve, as accurate aiding sensor choices are sparse. However, an opportunity arises from employing a depth camera in the indoor environment. A depth camera can capture point clouds of the surrounding floors and walls, and the attitude extracted from these surfaces can serve as an accurate aiding source that directly combats errors arising from gyroscope imperfections. This sensor fusion configuration leads to a dramatic reduction of PVA error compared to traditional aiding sensor configurations. This paper provides the theoretical basis for the depth camera aiding method, initial expectations of the performance benefit via simulation, and a hardware implementation verifying its validity. The hardware implementation uses the Quanser Qbot 2™ mobile robot, a VectorNav VN-200™ IMU, and a Microsoft Kinect™ camera.
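The abstract does not detail the attitude-extraction step; a minimal sketch of one plausible version is shown below, in which roll and pitch are recovered from a least-squares plane fit to floor points. The Euler-angle convention and the synthetic data are illustrative assumptions, not the paper's method.

```python
import numpy as np

def plane_normal(points):
    """Least-squares plane fit: the normal is the singular vector of the
    centered point cloud with the smallest singular value."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    # Orient the floor normal to point "up" (+z in the camera frame).
    return normal if normal[2] > 0 else -normal

def roll_pitch_from_floor(points):
    """Roll/pitch (rad) aligning the sensor z-axis with the floor normal;
    yaw is unobservable from a horizontal plane alone."""
    n = plane_normal(points)
    roll = np.arctan2(n[1], n[2])
    pitch = -np.arcsin(np.clip(n[0], -1.0, 1.0))
    return roll, pitch

# Example: a slightly tilted, noisy synthetic floor patch.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(500, 2))
z = 0.05 * xy[:, 0] + 0.01 * rng.standard_normal(500)
roll, pitch = roll_pitch_from_floor(np.column_stack([xy, z]))
print(f"roll={np.degrees(roll):.2f} deg, pitch={np.degrees(pitch):.2f} deg")
```

In a full system, the recovered roll/pitch would enter the Kalman filter as an attitude measurement, directly bounding the drift caused by gyroscope bias.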

Tailormade Geometric Properties of Chitosan by Gamma Irradiation

Chitosans (CSs) in solution are increasingly used, across a range of geometric properties, in various academic and industrial sectors, especially in pharmaceutical and biomedical engineering. To provide users with a guide for tailoring CSs, gamma (γ)-irradiation technology and simple viscosity measurements were used in this study. CS solid discs (0.5 cm thickness, 2.5 cm diameter) were exposed in air to Cobalt-60 γ-radiation at room temperature and a constant 50 kGy dose for different exposure times (tγ). Dilute solutions of native and irradiated CSs were then prepared by dissolving 1.25 mg cm^-3 of each polymer in 0.1 M NaCl/0.2 M CH3COOH. Single-concentration relative viscosity (ηr) measurements were employed to obtain the intrinsic viscosity ([η]) values and interrelated parameters: the molar mass (Mη), hydrodynamic radius (RH,η), radius of gyration (RG,η), and second virial coefficient (A2,η) of the CSs in solution. The results show an exponential decrease of ηr, [η], Mη, RH,η, and RG,η with increasing tγ, suggesting random chain scission of the CS glycosidic bonds with rate constant kr ≈ 0.017 min^-1 and corresponding lifetime τr = kr^-1 ≈ 57.14 min. The results also show an exponential decrease of A2,η with increasing tγ, which can be attributed to the growth of the excluded-volume effect in CS segments with tγ and, hence, better solution quality. The results are summarized in the following scaling laws as a tailoring guide: RH,η = 6.98 × 10^-3 Mr^0.65; RG,η = 7.09 × 10^-4 Mr^0.83; A2,η = 121.03 Mη,r^-0.19.
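The abstract does not name the single-point estimator; one widely used option is the Solomon-Ciuta relation, [η] ≈ (2(ηsp − ln ηr))^1/2 / c. The sketch below assumes that relation and adds a Mark-Houwink conversion to molar mass whose K and a constants are placeholders, not the paper's fitted values.

```python
import numpy as np

def intrinsic_viscosity_solomon_ciuta(eta_r, c):
    """Single-concentration estimate of [eta] (cm^3/g) from the
    relative viscosity eta_r measured at concentration c (g/cm^3)."""
    eta_sp = eta_r - 1.0                      # specific viscosity
    return np.sqrt(2.0 * (eta_sp - np.log(eta_r))) / c

def molar_mass_mark_houwink(intrinsic_visc, K=1.81e-3, a=0.93):
    """Viscosity-average molar mass from [eta] = K * M^a.
    K and a are placeholders; real values depend on the solvent
    system and the degree of deacetylation."""
    return (intrinsic_visc / K) ** (1.0 / a)

c = 1.25e-3            # g/cm^3, as in the abstract (1.25 mg/cm^3)
eta_r = 1.8            # illustrative measured relative viscosity
iv = intrinsic_viscosity_solomon_ciuta(eta_r, c)
print(f"[eta] ~ {iv:.0f} cm^3/g, M_eta ~ {molar_mass_mark_houwink(iv):.2e} g/mol")
```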

U-Turn on the Bridge to Freedom: An Interaction Process Analysis of Task and Relational Messages in Totalistic Organization Exit Conversations on Online Discussion Boards

Totalistic organizations are those that play a prominent role in the lives of their members by embedding values and practices. The Church of Scientology (CoS) is an example of a religious totalistic organization and has recently garnered attention because of the questionable treatment of members by those with authority, particularly when members try to leave the Church. The purpose of this study was to analyze exit communication and evaluate the task and relational messages discussed on online discussion boards by individuals with a previous or current connection to the totalistic CoS. Using organizational exit phases and interaction process analysis (IPA), researchers coded 30 boards consisting of 14,179 thought units from the Exscn.net website. Findings show that all stages of exit were present, with post-exit surfacing most often. Posts contained more task than relational messages, with individuals mainly providing orientation/information. After a discussion of the study's contributions, limitations and directions for future research are presented.

Biomarkers in a Post-Stroke Population: Allied to Health Care in Brazil

Stroke affects not only the individual but also the social and family context. It is therefore necessary to know the peculiarities of each region in order to contribute effectively to regional public health policies. The present study discusses biomarkers in a post-stroke population admitted to a reference stroke unit (U-stroke) in the southern region of Brazil. Biomarkers such as age, length of stay, mortality rate, survival time, risk factors, and family history of stroke were analyzed in patients after ischemic stroke. In the studied population, men were more affected than women, while affected women were older on average, had the highest mortality rate, and had the shortest hospital stays. The risk factors identified were consistent with the global scenario, with systemic arterial hypertension (SAH) being the most frequent overall and those associated with a sedentary lifestyle (dyslipidemia, heart disease, and obesity) the most frequent among women. In view of this, the importance of studies that characterize populations regionally is evident, strengthening the strategic planning of health care policies.

Early Age Behavior of Wind Turbine Gravity Foundations

Wind turbine gravity foundations are designed to resist overturning failure through the gravitational forces resulting from their masses. Owing to the relatively high volume of cementitious material present, the foundations tend to suffer thermal strains and internal cracking due to high temperatures and temperature gradients, depending on factors such as geometry, mix design, and level of restraint. This is a result of a fully coupled mechanism commonly known as THMC (Thermo-Hygro-Mechanical-Chemical) coupling, whose kinetics peak during the early age of concrete. The focus of this paper is therefore to present and discuss the temperature and humidity evolutions occurring in mass pours such as wind turbine gravity foundations, based on sensor results obtained from the monitoring of an actual wind turbine foundation. To predict these evolutions, the formulation of a 3D Thermo-Hydro-Chemical (THC) model derived mainly from classical fundamental physical laws is also presented and discussed. The THC model can be fully coupled mathematically in finite element analyses. In the current study, COMSOL Multiphysics software was used to simulate the 3D THC coupling in the monitored wind turbine foundation and to predict the temperature evolution at five different points within the foundation from the time of casting.
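The full THC formulation is not reproduced in the abstract; as a rough illustration of its dominant thermal ingredient, the sketch below integrates a 1D heat equation with an exponentially decaying hydration heat source. All material parameters are illustrative placeholders, not the paper's calibrated values.

```python
import numpy as np

# Illustrative placeholders, not the paper's calibrated values.
L, nx = 3.0, 61                 # slab thickness (m), grid points
dx = L / (nx - 1)
alpha = 7e-7                    # thermal diffusivity (m^2/s)
rho_c = 2.4e6                   # volumetric heat capacity (J/m^3/K)
Q_tot, tau = 8.0e7, 36 * 3600   # hydration heat (J/m^3), time constant (s)
T_amb, T0 = 15.0, 20.0          # ambient and placement temperature (degC)

dt = 0.4 * dx**2 / alpha        # explicit-scheme stability limit
T = np.full(nx, T0)
t, peak = 0.0, T0
while t < 7 * 86400:            # one week of curing
    q = (Q_tot / tau) * np.exp(-t / tau)          # heat release rate (W/m^3)
    lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
    T[1:-1] += dt * (alpha * lap[1:-1] + q / rho_c)
    T[0] = T[-1] = T_amb                          # faces idealized as fixed
    peak = max(peak, T[nx // 2])
    t += dt

print(f"peak core temperature ~ {peak:.1f} degC")
```

The same structure extends to the coupled 3D case solved in COMSOL, where moisture transport and the chemical affinity of hydration replace the simple exponential source used here.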

Analysis of Incidences of Collapsed Buildings in the City of Douala, Cameroon from 2011-2020

This study focuses on the problem of collapsed buildings within the city of Douala over the past ten years, more precisely within the period from 2011 to 2020. It was carried out in a bid to ascertain the real causes of this phenomenon, which has become recurrent in the leading economic city of Cameroon. To achieve this, it was first necessary to review works dealing with construction materials and technology as well as case histories of structural collapse within the city. Thereafter, a statistical study was carried out on the results obtained. It was found that the causes of building collapse in the city of Douala are: neglect of administrative procedures, use of poor-quality materials, poor concrete mix proportioning and production, lack of geotechnical studies, lack of structural analysis and design, corrosion of the reinforcement bars, poor building maintenance, and other causes. Of the 46 recorded cases of building failure and collapse within the city of Douala, 7 (15.22%) were identified as having had no geotechnical study carried out, and 6 (13.04%) were the result of a lack of proper structural analysis and design. Recommendations and suggestions are then made, placing particular emphasis on the choice of materials, the production and casting of concrete, and the placement of the required reinforcement, all of which help guarantee the stability of a building.

Structural-Geotechnical Effects of the Foundation of a Medium-Height Structure

The interaction effects between the existing soil and the substructure of a 5-story building with one basement level were evaluated, and the structural-geotechnical concepts were validated through the impedance-factor method in a finite element program. The continuous wall-type foundation had a constant thickness and followed inclined and orthogonal directions, while the ground had homogeneous, medium-type characteristics. The soil considered was type C according to the Ecuadorian Construction Standard (NEC); the corresponding foundation had a depth of 4.00 meters and a basement wall thickness of 40 centimeters. The project is part of a mid-rise building in the city of Azogues (Ecuador). The hypotheses raised responded to the objectives: the model implemented with springs varied with respect to the embedded-base model, yielding conservative results.
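The abstract does not state which impedance expressions were applied; one widely used set is the closed-form static stiffnesses of Gazetas (1991) for rigid foundations on a homogeneous half-space. The sketch below evaluates the vertical spring under that assumption, with placeholder dimensions and soil properties rather than the project's values.

```python
def vertical_static_stiffness(G, nu, B, L):
    """Gazetas (1991) vertical static stiffness of a rigid rectangular
    foundation (2B x 2L, with L >= B) on a homogeneous elastic
    half-space of shear modulus G and Poisson ratio nu."""
    chi = (4 * B * L) / (4 * L**2)   # Ab / (4 L^2), reduces to B / L
    return (2 * G * L / (1 - nu)) * (0.73 + 1.54 * chi**0.75)

# Placeholder medium-stiff soil (broadly NEC type C-like) and geometry.
G = 60e6          # shear modulus (Pa)
nu = 0.3          # Poisson ratio
B, L = 5.0, 10.0  # half-width and half-length (m)

Kz = vertical_static_stiffness(G, nu, B, L)
print(f"Kz ~ {Kz / 1e9:.2f} GN/m")   # vertical spring for the FE model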

Construction Noise Management: Hong Kong Reviews and International Best Practices

Hong Kong is known worldwide for high-density living and the ability to thrive under trying circumstances. The 7.5 million residents of this busy metropolis live primarily in high-rise buildings, which are built and demolished incessantly, so Hong Kong residents are continuously affected by numerous construction activities. In 2020, the Hong Kong Environmental Protection Department (EPD) commissioned a feasibility study on the management of construction noise, including noise associated with the renovation of domestic premises. A key component of the study was a review of practices for managing and controlling construction noise in major cities in other parts of the world. To benefit from international best practices, this extensive review aimed to identify possible areas of improvement in Hong Kong. The study first referred to the United Nations "The World's Cities in 2016" report and examined the top 100 cities therein. The 20 most suitable cities were chosen for further review; upon further screening, 12 cities with more relevant management practices were selected for further scrutiny. These 12 cities are: Asia – Tokyo, Seoul, Taipei, Guangzhou, Singapore; Europe – City of Westminster (London), Berlin; North America – Toronto, New York City, San Francisco; Oceania – Sydney, Melbourne. Subsequently, three cities, namely Sydney, City of Westminster, and New York City, were selected for in-depth review, chosen primarily because of the maturity, success, and effectiveness of their construction noise management and control measures, as well as their similarity to Hong Kong in certain key aspects. One of the more important findings of the review is the importance of an early focus on potential noise issues, with the objective of designing the noise away wherever practicable; the study examined the similar yet different early-focus mechanisms of these three cities. This paper describes this landmark, extensive worldwide review of international best construction noise management and control practices at the source, along the noise transmission path, and at the receiver end. The methodology, approach, and key findings are presented succinctly. By sharing the findings with acoustics professionals worldwide, it is hoped that more advanced and mature construction noise management practices can be developed to attain urban sustainability.

Disparities versus Similarities: WHO GPPQCL and ISO/IEC 17025:2017 International Standards for Quality Management Systems in Pharmaceutical Laboratories

Medicines regulatory authorities expect pharmaceutical companies and contract research organizations to seek ways to certify that their laboratory control measurements are reliable. Establishing and maintaining laboratory quality standards are essential to ensuring the accuracy of test results. 'ISO/IEC 17025:2017' and the 'WHO Good Practices for Pharmaceutical Quality Control Laboratories (GPPQCL)' are two quality standards commonly employed in developing laboratory quality systems. A review of the two standards was conducted to elaborate on their areas of convergence and divergence. The goal was to understand how differences in each standard's requirements may influence laboratories' choices as to which document is easier to adopt for quality systems. A qualitative review method compared similar items in the two standards while mapping out areas where their requirements specifically differ. The review also provided a detailed description of the clauses and parts covering management and technical requirements in these laboratory standards. The review showed that both documents share requirements for over ten critical areas covering objectives, infrastructure, management systems, and laboratory processes. There were, however, differences in expectations: GPPQCL emphasizes system procedures for planning and future budgets to ensure continuity, whereas ISO 17025 focuses more on a risk-management approach to establishing laboratory quality systems. Elements in the two documents form common standard requirements to assure the validity of laboratory test results and promote mutual recognition. The ISO standard currently enjoys wider global adoption than GPPQCL.

Decision-Making Strategies on Smart Dairy Farms: A Review

Farm management and operations will change drastically with access to real-time data, real-time forecasting, and the tracking of physical items, in combination with Internet of Things (IoT) developments that further automate farm operations. Dairy farms have embraced technological innovations and acquired vast amounts of continuous data streams during the past decade; however, an integration of this information that improves the whole-farm decision-making process does not yet exist. It is now imperative to develop a system that can collect, integrate, manage, and analyze on-farm and off-farm data in real time for practical and relevant environmental and economic actions. The systems being developed, based on machine learning and artificial intelligence, need to be connected to produce useful output and a better understanding of the whole farming operation and its environmental impact. Evolutionary Computing (EC) can be very effective in finding optimal combinations among large sets of options and, ultimately, in determining strategy. The system of the future should be able to manage a dairy farm as competently as an experienced farm manager supported by a team of the best agricultural advisors. All these changes should bring resilience and sustainability to dairy farming while improving and maintaining good animal welfare and the quality of dairy products. This review aims to provide insight into the state of the art of big data applications and EC in relation to smart dairy farming and to identify the most important research and development challenges to be addressed in the future. Smart dairy farming influences every area of management, and its uptake has become a continuing trend.
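As a toy illustration of how EC can search such combinatorial management spaces, the sketch below runs a minimal genetic algorithm that selects a subset of candidate farm interventions under a budget constraint. The fitness function and all numbers are hypothetical, not drawn from the review.

```python
import random

random.seed(42)
N = 20                                   # candidate interventions
benefit = [random.uniform(1, 10) for _ in range(N)]
cost = [random.uniform(1, 5) for _ in range(N)]
BUDGET = 30.0

def fitness(genome):
    """Total benefit of the selected interventions; genomes that
    exceed the budget are penalized proportionally."""
    b = sum(bi for g, bi in zip(genome, benefit) if g)
    c = sum(ci for g, ci in zip(genome, cost) if g)
    return b if c <= BUDGET else b - 10 * (c - BUDGET)

def evolve(pop_size=60, generations=100, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N)          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print("selected:", [i for i, g in enumerate(best) if g],
      "fitness:", round(fitness(best), 2))
```

In practice, the genome could encode feeding schedules, breeding decisions, or grazing rotations, with fitness evaluated by a farm simulation model rather than a static benefit table.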

A Commercial Building Plug Load Management System That Uses Internet of Things Technology to Automatically Identify Plugged-In Devices and Their Locations

Plug and process loads (PPLs) account for a large portion of U.S. commercial building energy use. There is significant potential to reduce whole-building consumption by targeting PPLs for energy-saving measures or implementing some form of plug load management (PLM). Despite this potential, there has yet to be a widely adopted commercial PLM technology. This paper describes the Automatic Type and Location Identification System (ATLIS), a PLM system framework with automatic and dynamic load detection (ADLD). ADLD gives PLM systems the ability to automatically identify devices as they are plugged into the outlets of a building. The ATLIS framework takes advantage of smart, connected devices to identify device locations in a building, meter and control their power, and communicate this information to a central database. ATLIS includes five primary capabilities: location identification, communication, control, energy metering, and data storage. A laboratory proof of concept (PoC) demonstrated all but the energy metering capability, and these capabilities were validated using a series of system tests. The PoC was able to identify when a device was plugged into an outlet and where in the building the device was located; when a device was moved, the PoC's dashboard and database were automatically updated with the new location. The PoC also implemented control of devices from the system dashboard so that devices maintained correct schedules regardless of where they were plugged in within the building. ATLIS's primary technology application is improved PLM, but other applications include asset management, energy audits, and interoperability for grid-interactive efficient buildings. An ATLIS-based system could also be used to direct power to critical devices, such as ventilators, during a brownout or blackout. Such a framework is an opportunity to make PLM more widespread and to reduce the energy consumed by PPLs in current and future commercial buildings.
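ATLIS's communication layer is not specified in the abstract; the sketch below shows one hypothetical way a smart outlet could report a plug-in event to a central database over MQTT. The broker address, topic names, and payload fields are invented for illustration.

```python
import json
import time
import paho.mqtt.client as mqtt

# Hypothetical identifiers; ATLIS's actual schema is not public.
BROKER = "broker.example.local"
OUTLET_ID = "bldg1-fl2-rm204-outlet3"

def publish_plug_event(device_id, device_type):
    """Report that a device was just plugged into this outlet."""
    payload = {
        "outlet": OUTLET_ID,
        "device_id": device_id,
        "device_type": device_type,
        "event": "plug_in",
        "timestamp": time.time(),
    }
    # paho-mqtt 1.x style; 2.x requires
    # mqtt.Client(mqtt.CallbackAPIVersion.VERSION2).
    client = mqtt.Client()
    client.connect(BROKER, 1883)
    # A central service subscribing to atlis/events/# would update the
    # location database and dashboard on each message.
    client.publish("atlis/events/plug_in", json.dumps(payload), qos=1)
    client.disconnect()

publish_plug_event("laptop-7f3a", "laptop")
```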

An Overview of Technology Availability to Support Remote Decentralized Clinical Trials

Developing new medicines and health solutions and improving patient health currently rely on the successful execution of clinical trials, which generate the relevant safety and efficacy data. Recruitment and retention of participants are among the most challenging aspects of their success and of protocol adherence. Main barriers include: i) lack of awareness of clinical trials; ii) long distances from the clinical site; iii) the burden on participants, including the duration and number of clinical visits; and iv) high dropout rates. Most of these aspects could be addressed with a new paradigm, namely Remote Decentralized Clinical Trials (RDCTs). Furthermore, the COVID-19 pandemic has highlighted additional advantages and challenges for RDCTs in practice, such as allowing participants to join trials from home without depending on site visits. Nevertheless, RDCTs should follow the processes and quality assurance of conventional clinical trials, which involve several processes. For each part of the trial (the Building Blocks), existing software and technologies were assessed through a systematic search. The technology needed to perform RDCTs is widely available and validated, but it remains segmented and developed in silos, as different software solutions address different parts of the trial at various levels. The current paper analyzes the availability of technology to perform RDCTs, identifies gaps, and provides an overview of the Basic Building Blocks and functionalities that need to be covered to support the described processes.

A Numerical Study of the Interaction between Residual Stress Profiles Induced by Quasi-Static Plastification

One of the most relevant phenomena in manufacturing is the development of the residual stress state through the manufacturing chain. In most cases, residual stresses originate in the heterogeneous plastification produced by the processes. Although a few manufacturing processes have been successfully approached by numerical modeling, there is still a lack of understanding of how the interactions between these processes affect the final stress state. The objective of this work is to analyze the effect of a grinding procedure on the residual stress state generated by quasi-static indentation. The model consists of a simplified approach to shot peening, modeling four cases with variations in indenter size and force. The model was validated through topography measured by optical 3D focus variation. The indentation model configured with two loads was then subjected to two grinding procedures and the results were analyzed. It was observed that the grinding procedure has a significant effect on the stress state.

Machine Learning Methods for Flood Hazard Mapping

This paper proposes a neural network approach to flood hazard mapping. The core of the model is a machine learning component fed by frequency ratios, namely statistical correlations between flood event occurrences and a selected number of topographic properties. The classification capability was compared with the flood hazard maps of the River Basin Plans (Piani di Assetto Idrogeologico, abbreviated as PAI) produced by the Italian Institute for Environmental Protection and Research, ISPRA (Istituto Superiore per la Protezione e la Ricerca Ambientale), which encode four increasing flood hazard levels. The study area of Piemonte, an Italian region, was considered without loss of generality. The frequency ratios may be used as a standalone block to model flood hazard; nevertheless, combining them with a neural network improves the classification power by several percentage points, and the combination may be proposed as a basic tool for modeling flood hazard maps in a wider scope.
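The abstract does not spell out the frequency ratio computation; a common definition is the share of flood occurrences falling in a class of a topographic factor divided by that class's share of the study area. The sketch below implements that definition and feeds the resulting feature to a small neural network, using synthetic data and scikit-learn; it is an illustration of the general scheme, not the paper's pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def frequency_ratio(factor_class, flooded, n_classes):
    """FR per class: (% of flood cells in class) / (% of area in class)."""
    fr = np.ones(n_classes)
    for k in range(n_classes):
        in_class = factor_class == k
        pct_floods = flooded[in_class].sum() / max(flooded.sum(), 1)
        pct_area = in_class.mean()
        if pct_area > 0:
            fr[k] = pct_floods / pct_area
    return fr

# Synthetic raster: one topographic factor (binned slope) and flood labels,
# with flooding more likely on gentle slopes.
n, n_classes = 5000, 8
slope_class = rng.integers(0, n_classes, size=n)
flooded = (rng.random(n) < 0.05 * (n_classes - slope_class)).astype(int)

fr = frequency_ratio(slope_class, flooded, n_classes)
X = fr[slope_class].reshape(-1, 1)        # FR value as the input feature
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X, flooded)
print(f"training accuracy: {clf.score(X, flooded):.3f}")
```

A real application would stack one FR feature per topographic factor (slope, elevation, distance to river, and so on) and validate against held-out basins.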

Influence of Laser Treatment on the Growth of Sprouts of Different Wheat Varieties

Cereals are considered a strategic product in human life, and demand for them is increasing with the growth of the world population. Increasing wheat production is important for Georgia. One way to address the problem is to develop and implement new, environmentally and economically acceptable technologies. Such technologies include pre-sowing treatment of seed with a laser and with the associative nitrogen-fixing bacterium Azospirillum brasilense. Dika and Lomtagora are among the most common wheat varieties in Georgia. Dika is a frost-resistant wheat with a high ability to adapt to the environment; it is resistant to lodging and is sown in highlands. Lomtagora 126 is distinguished by its winter hardiness and drought resistance and has a great ability to germinate. Lomtagora is characterized by a strong root system and a high tillering capacity. It is an early, lodging-resistant variety that is easy to thresh and suitable for mechanized harvesting, with large red grains. This paper presents preliminary experimental results in which a continuous CO2 laser with a power of 25-40 W was used to irradiate grains at beam speeds of 10 and 15 cm/sec. The treatment was carried out on grains of Triticum aestivum L. var. Lutescens (local variety name Lomtagora 126) and Triticum carthlicum Nevski (local variety name Dika). The grains were also treated with an A. brasilense isolate (10^8-10^9 CFU/ml) obtained from the rhizosphere of wheat. Germination of the wheat was not significantly influenced by laser or bacterial treatment alone; however, the combined treatment with laser and A. brasilense significantly influenced growth. For the Lomtagora 126 variety, exposure of grains to the beam at 10 cm/sec only slightly improved the growth of 38-day seedlings, while exposure at 15 cm/sec improved it by 23%. Treatment of seeds with A. brasilense, in both exposed and non-exposed variants, improved seedling growth: by 22% with A. brasilense alone and by 29% with the combined treatment. For the Dika variety, exposure alone improved growth by 8-9% and the combined treatment by 10-15%, compared with the control. The superior effect on seedling growth across varieties was achieved by combining laser treatment of grains at a beam speed of 15 cm/sec (radiation power 30-40 W) with the addition of the nitrogen-fixing bacterium A. brasilense. This is therefore a promising application: A. brasilense can serve as the active agent of bacterial fertilizers owing to its ability to fix molecular nitrogen in cereals, since a properly chosen strain colonizes the roots of agricultural crops well, provides high nitrogen-fixing ability, and mobilizes soil phosphorus, while laser treatment stimulates natural processes occurring in plant cells, increasing the yield.

Automated 3D Segmentation System for Detecting Tumor and Its Heterogeneity in Patients with High Grade Ovarian Epithelial Cancer

High-grade ovarian epithelial cancer (OEC) is the most lethal gynecological cancer, and the poor prognosis of this entity is closely related to its considerable intratumoral genetic heterogeneity. By examining imaging data, it is possible to assess the heterogeneity of tumorous tissue. This study presents a methodology for aligning, segmenting, and visualizing information from various magnetic resonance imaging series in order to construct 3D heterogeneity maps of the same tumor in OEC patients. The proposed system may be used as an adjunct digital tool by health professionals for personalized medicine, as it allows an easy visual assessment of the heterogeneity of the examined tumor.
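The alignment method is not named in the abstract; a common starting point for multi-series MRI alignment is rigid registration with a mutual-information metric, for example via SimpleITK. A minimal sketch under that assumption follows; the file names are placeholders.

```python
import SimpleITK as sitk

# Placeholder file names; any two series of the same tumor apply.
fixed = sitk.ReadImage("t1_series.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("t2_series.nii.gz", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.2)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY),
    inPlace=False)

transform = reg.Execute(fixed, moving)
# Resample the moving series onto the fixed grid so voxels correspond,
# enabling voxelwise heterogeneity mapping across series.
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(aligned, "t2_aligned.nii.gz")
```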

Methane versus Carbon Dioxide: Mitigation Prospects

Atmospheric carbon dioxide (CO2) has dominated the discussion around the causes of climate change. This reflects the 100-year time horizon for all greenhouse gases that has become the norm. The 100-year time horizon is much too long, and yet almost all mitigation efforts, including those set in the near-term frame of the next 30 years, are still geared toward it. In this paper, we show that over a 30-year time horizon, methane (CH4) is the greenhouse gas whose radiative forcing exceeds that of CO2. Our analysis uses the radiative forcing of greenhouse gases in the atmosphere, because it directly affects the rise in temperature on Earth. We found that in 2019, the radiative forcing (RF) of methane was ~2.5 W/m2 and that of carbon dioxide was ~2.1 W/m2. Under a business-as-usual (BAU) scenario until 2050, these forcings would be ~2.8 W/m2 and ~3.1 W/m2, respectively. There is a substantial spread in the data for anthropogenic and natural methane (CH4) emissions, along with leakages of natural gas (which is primarily CH4) from industrial production through consumption. For this reason, we estimate the minimum and maximum effects of a reduction of these leakages, assuming an effective immediate reduction of 80%. Such action may reduce the annual radiative forcing of all CH4 emissions by ~15% to ~30%. This translates into a reduction of RF by 2050 from ~2.8 W/m2 to ~2.5 W/m2 for the minimum expected effect, and to ~2.15 W/m2 for the maximum effort to reduce methane leakages. Under BAU, we find that the RF of CO2 will increase from ~2.1 W/m2 now to ~3.1 W/m2 by 2050. Assuming a linear 50% reduction in anthropogenic emissions over the next 30 years, the radiative forcing of CO2 would fall from ~3.1 W/m2 to ~2.9 W/m2. In the case of "net zero," the remaining 50% of anthropogenic CO2 emissions would have to be captured, either at the sources of emission or directly from the atmosphere; in this instance, the total reduction would be from ~3.1 W/m2 to ~2.7 W/m2, or ~0.4 W/m2. To achieve the same radiative forcing as in the scenario of maximum reduction of methane leakages (~2.15 W/m2), an additional reduction of the radiative forcing of CO2 of approximately 2.7 − 2.15 = 0.55 W/m2 would be required. In total, one would need to remove ~660 Gt of CO2 from the atmosphere to match the maximum reduction of current methane leakages, plus ~270 Gt of CO2 from emitting sources to reach "negative emissions": over 900 Gt of CO2 in all.
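For readability, the chain of forcing values discussed above can be summarized in one place; all numbers are taken directly from the abstract.

```latex
\begin{align*}
\text{CH}_4\ \text{(BAU, 2050)}:&\quad \mathrm{RF} \approx 2.8\ \mathrm{W/m^2} \\
\text{CH}_4\ \text{(max.\ leak reduction)}:&\quad \mathrm{RF} \approx 2.15\ \mathrm{W/m^2} \\
\text{CO}_2\ \text{(BAU, 2050)}:&\quad \mathrm{RF} \approx 3.1\ \mathrm{W/m^2} \\
\text{CO}_2\ \text{(net zero, 2050)}:&\quad \mathrm{RF} \approx 2.7\ \mathrm{W/m^2} \\
\text{extra CO}_2\ \text{reduction to match CH}_4\ \text{case}:&\quad
  \Delta\mathrm{RF} \approx 2.7 - 2.15 = 0.55\ \mathrm{W/m^2}
\end{align*}
```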

Investigation of Tbilisi City Atmospheric Air Pollution with PM in Usual and Emergency Situations Using the Observational and Numerical Modeling Data

Pollution of Tbilisi's atmospheric air with PM2.5 and PM10 in usual and pandemic situations is investigated using data from five stationary observation points. The values of the statistical parameters characterizing PM in the atmosphere of Tbilisi are analyzed and trend graphs are constructed. By comparing pollution levels in the quarantine and usual periods, the contribution of vehicle traffic to the city's pollution is estimated. Experimental measurements of PM2.5 and PM10 in the atmosphere were carried out in different districts of the city, and maps of the distribution of their concentrations were constructed. Maximum pollution values are recorded in the city center and along major motorways. The average monthly concentrations vary in the range of 0.6-1.6 times the Maximum Permissible Concentration (MPC), and average daily concentrations vary at 2-4 day intervals. The distribution of PM10 generated by traffic is numerically modeled, and the modeling results are compared with the observational data.
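The abstract does not specify the numerical model used for traffic-generated PM10; as an illustration of the simplest class of such models, the sketch below evaluates a ground-level Gaussian line-source (road) dispersion estimate. The emission rate, wind speed, and dispersion coefficients are placeholders, not the study's values.

```python
import numpy as np

def sigma_z(x, a=0.14, b=0.73):
    """Vertical dispersion coefficient (m); a and b are placeholder
    Pasquill-Gifford-style constants for a neutral stability class."""
    return a * x**b

def line_source_concentration(x, Q_l=2e-4, u=2.0):
    """Ground-level concentration (g/m^3) at downwind distance x (m)
    from an infinite ground-level line source (road) with emission
    rate Q_l (g/m/s) and wind speed u (m/s) perpendicular to the road."""
    return 2.0 * Q_l / (np.sqrt(2.0 * np.pi) * sigma_z(x) * u)

for x in (20, 50, 100, 200):                  # downwind distance (m)
    c = line_source_concentration(x) * 1e6    # convert to micrograms/m^3
    print(f"x = {x:4d} m : PM10 ~ {c:7.1f} ug/m^3")
```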

A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle’s Intended Target Using a Sparse Camera Network

We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, and weather, are integrated and processed in real time to infer a suspect's intended destination from a list of pre-determined high-value targets. Previously, we presented our work on the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on the fusion and processing of that information to predict a suspect's behavior. The network of cameras is represented by a directed graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of the Caltrans "Performance Measurement System" (PeMS) dataset. We propose a Bayesian approach in which a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of 'soft interventions', inspired by the field of Causal Inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect's movements; rather, a soft intervention may induce the suspect into making a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect's current location, which may require the suspect to change course. The objective of these interventions is to gain the maximum amount of information about the suspect's intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode in which, at each step, a set of recommendations is presented to the operator to aid decision-making. In principle, the system could operate autonomously, prompting the operator only for critical decisions, allowing it to scale up significantly to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities for further action. Other recommendations include a selection of road closures, i.e., soft interventions, or continued monitoring. We evaluate the performance of the proposed system using simulated scenarios in which the suspect, starting at random locations, takes a noisy shortest path to their intended target. In all scenarios, the suspect's intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect's intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach, which motivate a machine learning approach based on reinforcement learning to relax some of the current limiting assumptions.
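The abstract outlines, but does not specify, the posterior update; the sketch below shows one plausible formulation in which a detection's likelihood under each candidate target grows with the progress the observed move makes toward that target. The toy graph, the beta parameter, and the likelihood model are illustrative assumptions, not the paper's.

```python
import networkx as nx
import numpy as np

# Toy road graph; edge weights = average travel time (s).
G = nx.DiGraph()
G.add_weighted_edges_from([("A", "B", 60), ("B", "C", 45), ("B", "D", 90),
                           ("C", "E", 30), ("D", "F", 60)])

targets = ["E", "F"]
posterior = {t: 1.0 / len(targets) for t in targets}   # uniform prior

def remaining(node, target):
    """Shortest remaining travel time, infinite if unreachable."""
    try:
        return nx.shortest_path_length(G, node, target, weight="weight")
    except nx.NetworkXNoPath:
        return float("inf")

def likelihood(node, prev_node, target, beta=0.02):
    """P(detection | target): rewards moves that shorten the trip."""
    d_prev, d_now = remaining(prev_node, target), remaining(node, target)
    if np.isinf(d_now):
        return 1e-6               # target no longer reachable
    progress = (d_prev - d_now) if np.isfinite(d_prev) else 0.0
    return np.exp(beta * progress)

def update(node, prev_node):
    """Bayes rule over targets after a camera detection at `node`."""
    for t in targets:
        posterior[t] *= likelihood(node, prev_node, t)
    z = sum(posterior.values())
    for t in targets:
        posterior[t] /= z

# Detections: suspect seen moving A -> B -> C.
for prev, cur in [("A", "B"), ("B", "C")]:
    update(cur, prev)
print(posterior)   # mass shifts toward the target consistent with the track
```

In this formulation, a soft intervention corresponds to deleting an edge and re-evaluating the remaining travel times, so the suspect's rerouting decision itself becomes an informative observation.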