      Toward practical causal epidemiology


          Abstract

          Population attributable fraction (PAF), probability of causation, burden of disease, and related quantities derived from relative risk ratios are widely used in applied epidemiology and health risk analysis to quantify the extent to which reducing or eliminating exposures would reduce disease risks. This causal interpretation conflates association with causation and has sometimes led to demonstrably mistaken predictions and ineffective risk-management recommendations. Causal artificial intelligence (CAI) methods, developed at the intersection of many scientific disciplines over the past century, instead use quantitative high-level descriptions of networks of causal mechanisms (typically represented by conditional probability tables or structural equations) to predict the effects caused by interventions. We summarize these developments and discuss how CAI methods can be applied to realistically imperfect data and knowledge – e.g., with unobserved (latent) variables, missing data, measurement errors, interindividual heterogeneity in exposure-response functions, and model uncertainty. We conclude that CAI methods can improve the conceptual foundations and practical value of epidemiological calculations by replacing association-based attributions of risk to exposures or other risk factors with causal predictions of the changes in health effects caused by interventions.
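The PAF the abstract critiques is conventionally computed from an exposure prevalence and a relative risk via Levin's formula, PAF = p(RR − 1) / (1 + p(RR − 1)) — an associational quantity, which is exactly why the abstract warns against reading it as an interventional prediction. A minimal sketch (the numbers are illustrative, not taken from the article):

```python
def paf_levin(prevalence: float, relative_risk: float) -> float:
    """Levin's formula: population attributable fraction computed from
    exposure prevalence p and relative risk RR.

    Note: this is an association-based quantity; as the article argues,
    it need not equal the change in risk caused by removing the exposure.
    """
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Illustrative values: 30% of the population exposed, RR = 2.0
paf = paf_levin(0.30, 2.0)
print(round(paf, 4))  # 0.3 / 1.3 ≈ 0.2308
```

Under a causal reading this would say ~23% of cases are "attributable" to the exposure; the article's point is that this attribution only predicts the effect of an intervention under strong, usually unstated, causal assumptions.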


          Most cited references (66)


          Investigating Causal Relations by Econometric Models and Cross-spectral Methods


            Robust causal inference using directed acyclic graphs: the R package ‘dagitty’

            Directed acyclic graphs (DAGs), which offer systematic representations of causal relationships, have become an established framework for the analysis of causal inference in epidemiology, often being used to determine covariate adjustment sets for minimizing confounding bias. DAGitty is a popular web application for drawing and analysing DAGs. Here we introduce the R package 'dagitty', which provides access to all of the capabilities of the DAGitty web application within the R platform for statistical computing, and also offers several new functions. We describe how the R package 'dagitty' can be used to: evaluate whether a DAG is consistent with the dataset it is intended to represent; enumerate 'statistically equivalent' but causally different DAGs; and identify exposure-outcome adjustment sets that are valid for causally different but statistically equivalent DAGs. This functionality enables epidemiologists to detect causal misspecifications in DAGs and make robust inferences that remain valid for a range of different DAGs. The R package 'dagitty' is available through the Comprehensive R Archive Network (CRAN) at [https://cran.r-project.org/web/packages/dagitty/]. The source code is available on GitHub at [https://github.com/jtextor/dagitty]. The web application 'DAGitty' is free software, licensed under the GNU General Public License (GPL) version 2, and is available at [http://dagitty.net/].
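The adjustment-set logic that dagitty automates is Pearl's backdoor criterion. dagitty itself is an R package; as a language-neutral sketch of the idea (not code from the package), here is a toy Python check of the criterion on a hypothetical three-node DAG — Z confounds the effect of X on Y — with all variable names purely illustrative:

```python
# Toy DAG (hypothetical, for illustration): Z -> X, Z -> Y, X -> Y.
edges = {("Z", "X"), ("Z", "Y"), ("X", "Y")}

def parents(v):
    return {a for a, b in edges if b == v}

def descendants(v):
    out, frontier = set(), {v}
    while frontier:
        frontier = {b for a, b in edges if a in frontier} - out
        out |= frontier
    return out

def undirected_paths(src, dst, path=None):
    """All simple paths between src and dst in the undirected skeleton."""
    path = path or [src]
    if src == dst:
        yield path
        return
    nbrs = {b for a, b in edges if a == src} | {a for a, b in edges if b == src}
    for n in nbrs - set(path):
        yield from undirected_paths(n, dst, path + [n])

def blocked(path, adjust):
    """Is this path d-blocked given the adjustment set?"""
    for i in range(1, len(path) - 1):
        prev, node, nxt = path[i - 1], path[i], path[i + 1]
        collider = (prev, node) in edges and (nxt, node) in edges
        if collider:
            if node not in adjust and not (descendants(node) & adjust):
                return True   # unconditioned collider blocks the path
        elif node in adjust:
            return True       # conditioning on a chain/fork node blocks it
    return False

def satisfies_backdoor(x, y, adjust):
    """Backdoor criterion: adjust contains no descendant of x and blocks
    every path from x to y whose first edge points into x."""
    if descendants(x) & adjust:
        return False
    back = [p for p in undirected_paths(x, y) if p[1] in parents(x)]
    return all(blocked(p, adjust) for p in back)

print(satisfies_backdoor("X", "Y", set()))   # False: X <- Z -> Y is open
print(satisfies_backdoor("X", "Y", {"Z"}))   # True: conditioning on Z blocks it
```

This hand-rolled check only covers tiny graphs; for real analyses the dagitty package (or its web application) enumerates valid adjustment sets automatically.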
              Bookmark
              • Record: found
              • Abstract: found
              • Article: not found

              Measuring information transfer

              An information theoretic measure is derived that quantifies the statistical coherence between systems evolving in time. The standard time delayed mutual information fails to distinguish information that is actually exchanged from shared information due to common history and input signals. In our new approach, these influences are excluded by appropriate conditioning of transition probabilities. The resulting transfer entropy is able to distinguish effectively driving and responding elements and to detect asymmetry in the interaction of subsystems.
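With history length 1, the transfer entropy described above is T(X→Y) = Σ p(y_{t+1}, y_t, x_t) · log₂[ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ], and for discrete series it can be estimated by plug-in counts. A minimal sketch on synthetic binary data (variable names and data are illustrative, not from the paper):

```python
import random
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in transfer entropy T(X -> Y) for discrete series, history length 1:
    sum over (y_next, y_now, x_now) of
    p(...) * log2[ p(y_next | y_now, x_now) / p(y_next | y_now) ]."""
    triples = list(zip(y[1:], y[:-1], x[:-1]))           # (y_next, y_now, x_now)
    n = len(triples)
    c3 = Counter(triples)
    c_src = Counter((yp, xp) for _, yp, xp in triples)   # (y_now, x_now)
    c_pair = Counter((yn, yp) for yn, yp, _ in triples)  # (y_next, y_now)
    c_y = Counter(yp for _, yp, _ in triples)            # y_now
    te = 0.0
    for (yn, yp, xp), c in c3.items():
        p_full = c / c_src[(yp, xp)]          # p(y_next | y_now, x_now)
        p_self = c_pair[(yn, yp)] / c_y[yp]   # p(y_next | y_now)
        te += (c / n) * log2(p_full / p_self)
    return te

# Synthetic example: y copies x with a one-step lag, so x drives y.
random.seed(0)
x = [random.randint(0, 1) for _ in range(5000)]
y = [0] + x[:-1]
print(transfer_entropy(x, y) > transfer_entropy(y, x))  # True: asymmetry detected
```

The asymmetry T(X→Y) ≫ T(Y→X) is exactly the property the abstract highlights: unlike time-delayed mutual information, transfer entropy distinguishes the driving series from the responding one.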

                Author and article information

                Journal
                Global Epidemiology (Glob Epidemiol)
                Elsevier
                ISSN 2590-1133
                21 October 2021
                November 2021
                Volume 3, article 100065
                Affiliations
                University of Colorado School of Business and Cox Associates, 503 N. Franklin Street, Denver, CO 80218, USA
                Author notes
                * Corresponding author. tcoxdenver@aol.com
                Article
                PII: S2590-1133(21)00019-5
                DOI: 10.1016/j.gloepi.2021.100065
                10446107
                35ff9049-5ead-48ce-add9-c002f639e264
                © 2021 The Author

                This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

                History: 1 September 2021; 17 October 2021; 18 October 2021
                Categories
                Commentary

                Keywords: causality, causal artificial intelligence, population attributable fraction, probability of causation, risk analysis, statistical methods
