Hartung, Thomas
Publication search results
Deconvoluting gene and environment interactions to develop an “epigenetic score meter” of disease
2023-08-04, Butera, Alessio, Smirnova, Lena, Ferrando-May, Elisa, Hartung, Thomas, Brunner, Thomas, Leist, Marcel, Amelio, Ivano
Human health is determined by both genetics (G) and environment (E). This is clearly illustrated by groups of individuals who are exposed to the same environmental factor yet show differential responses. A quantitative measure of gene–environment interaction (GxE) effects has not been developed, and in some instances a clear consensus on the concept has not even been reached; for example, whether cancer emerges predominantly from “bad luck” or “bad lifestyle” is still debated. In this article, we provide a panel of examples of GxE interactions as drivers of pathogenesis. We highlight how epigenetic regulation can represent a common connecting aspect of their molecular bases. Our argument converges on the concept that GxE is recorded in the cellular epigenome, which might represent the key to deconvoluting these multidimensional, intricate layers of regulation. Developing a key to decode this epigenetic information would provide quantitative measures of disease risk. Analogously to the epigenetic clock introduced to estimate biological age, we provocatively propose the theoretical concept of an “epigenetic score meter” to estimate disease risk.
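As an illustration only: the “epigenetic score meter” proposed above is analogous to epigenetic clocks, which are typically linear combinations of DNA methylation levels at selected CpG sites. The sketch below shows such a weighted-sum score in Python; the CpG identifiers, weights, and intercept are hypothetical placeholders, not values from the article.

```python
# Minimal sketch of a clock-style epigenetic score: a weighted sum of DNA
# methylation beta values (0..1) at selected CpG sites plus an intercept.
# All CpG identifiers, weights, and the intercept are hypothetical.
from typing import Dict

WEIGHTS: Dict[str, float] = {
    "cg0000001": 0.8,   # hypothetical CpG site and weight
    "cg0000002": -0.5,
    "cg0000003": 1.2,
}
INTERCEPT = 0.1

def epigenetic_score(beta_values: Dict[str, float]) -> float:
    """Return the linear score for one individual's methylation profile."""
    return INTERCEPT + sum(
        weight * beta_values.get(cpg, 0.0) for cpg, weight in WEIGHTS.items()
    )

# Example profile: methylation beta values at the scored sites
profile = {"cg0000001": 0.9, "cg0000002": 0.2, "cg0000003": 0.4}
print(f"epigenetic score: {epigenetic_score(profile):.2f}")
```

A real score meter would be trained on methylation data against disease outcomes, in the same way epigenetic clocks are trained against chronological age.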
First Organoid Intelligence (OI) workshop to form an OI community
2023-02-28, Morales Pantoja, Itzy E., Smirnova, Lena, Muotri, Alysson R., Wahlin, Karl J., Kahn, Jeffrey, Boyd, J. Lomax, Gracias, David H., Harris, Timothy D., Herrmann, Kathrin, Hartung, Thomas
The brain is arguably the most powerful computation system known. It is extremely efficient in processing large amounts of information and can discern signals from noise, adapt, and filter faulty information, all while running on only 20 watts of power. The human brain’s processing efficiency, progressive learning, and plasticity are unmatched by any computer system. Recent advances in stem cell technology have elevated the field of cell culture to higher levels of complexity, such as the development of three-dimensional (3D) brain organoids that recapitulate human brain functionality better than traditional monolayer cell systems. Organoid Intelligence (OI) aims to harness the innate biological capabilities of brain organoids for biocomputing and synthetic intelligence by interfacing them with computer technology. With the latest strides in stem cell technology, bioengineering, and machine learning, we can explore the ability of brain organoids to compute and store given information (input) and execute a task (output), and study how this affects the structural and functional connections in the organoids themselves. Furthermore, understanding how learning generates and changes patterns of connectivity in organoids can shed light on the early stages of cognition in the human brain. Investigating and understanding these concepts is an enormous, multidisciplinary endeavor that necessitates the engagement of both the scientific community and the public. Thus, on February 22–24, 2022, Johns Hopkins University held the first Organoid Intelligence Workshop to form an OI community and to lay the groundwork for the establishment of OI as a new scientific discipline. The potential of OI to revolutionize computing, neurological research, and drug development was discussed, along with a vision and roadmap for its development over the coming decade.
REACH out-numbered! : The future of REACH and animal numbers
2023, Rovida, Costanza, Busquet, Francois, Leist, Marcel, Hartung, Thomas
The EU’s REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals) Regulation requires animal testing only as a last resort. However, our study (Knight et al., 2023) in this issue reveals that approximately 2.9 million animals have been used for REACH testing for reproductive toxicity, developmental toxicity, and repeated-dose toxicity alone as of December 2022. Currently, additional tests requiring about 1.3 million more animals are in the works. As compliance checks continue, more animal tests are anticipated. According to the European Chemicals Agency (ECHA), 75% of read-across methods have been rejected during compliance checks. Here, we estimate that 0.6 to 3.2 million animals have been used for other endpoints, likely at the lower end of this range. The ongoing discussion about the grouping of 4,500 registered petrochemicals can still have a major impact on these numbers. The 2022 amendment of REACH is estimated to add 3.6 to 7.0 million animals. This information comes as the European Parliament is set to consider changes to REACH that could further increase animal testing. Two proposals currently under discussion would likely necessitate new animal testing: extending the requirement for a chemical safety assessment (CSA) to Annex VII substances could add 1.6 to 2.6 million animals, and the registration of polymers adds a challenge comparable to the petrochemical discussion. These findings highlight the importance of understanding the current state of REACH animal testing for the upcoming debate on REACH revisions as an opportunity to focus on reducing animal use.
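For orientation only, the figures quoted in this abstract can be combined into a rough overall range. The sketch below simply adds the stated estimates (in millions of animals); the grouping and variable names are mine, and adding the scenarios in this way is an editorial illustration, not a calculation from the article.

```python
# Illustrative tally of the figures quoted in the abstract above, in millions
# of animals. Ranges are (low, high) estimates; combining them by simple
# addition is an editorial illustration, not the authors' calculation.
used_to_date = 2.9            # reproductive, developmental, repeated-dose tests (Dec 2022)
pending = 1.3                 # additional tests already required but not yet completed
other_endpoints = (0.6, 3.2)  # other endpoints, likely toward the lower end
amendment_2022 = (3.6, 7.0)   # estimated effect of the 2022 REACH amendment
csa_annex_vii = (1.6, 2.6)    # extending the CSA requirement to Annex VII substances

low = used_to_date + pending + other_endpoints[0] + amendment_2022[0] + csa_annex_vii[0]
high = used_to_date + pending + other_endpoints[1] + amendment_2022[1] + csa_annex_vii[1]
print(f"combined estimate: {low:.1f} to {high:.1f} million animals")
# -> 10.0 to 17.0 million, before polymer registration is considered
```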
New approach methodologies in human regulatory toxicology : Not if, but how and when!
2023-08, Schmeisser, Sebastian, Miccoli, Andrea, von Bergen, Martin, Berggren, Elisabet, Braeuning, Albert, Busch, Wibke, Desaintes, Christian, Gourmelon, Anne, Hartung, Thomas, Leist, Marcel
The predominantly animal-centric approach of chemical safety assessment has increasingly come under pressure. Society is questioning the overall performance, sustainability, continued relevance for human health risk assessment, and ethics of this system, demanding a change of paradigm. At the same time, the scientific toolbox used for risk assessment is continuously enriched by the development of “New Approach Methodologies” (NAMs). While this term does not define the age or the state of readiness of the innovation, it covers a wide range of methods, including quantitative structure–activity relationship (QSAR) predictions, high-throughput screening (HTS) bioassays, omics applications, cell cultures, organoids, microphysiological systems (MPS), machine learning models, and artificial intelligence (AI). In addition to promising faster and more efficient toxicity testing, NAMs have the potential to fundamentally transform today’s regulatory work by allowing more human-relevant decision-making in terms of both hazard and exposure assessment. Yet, several obstacles hamper a broader application of NAMs in current regulatory risk assessment. Constraints in addressing repeated-dose toxicity, with particular reference to chronic toxicity, and hesitance from relevant stakeholders are major challenges for the implementation of NAMs in a broader context. Moreover, issues regarding predictivity, reproducibility, and quantification need to be addressed, and regulatory and legislative frameworks need to be adapted to NAMs. The conceptual perspective presented here focuses on hazard assessment and is grounded in the main findings and conclusions of a symposium and workshop held in Berlin in November 2021. It intends to provide further insights into how NAMs can be gradually integrated into chemical risk assessment aimed at protecting human health, until eventually the current paradigm is replaced by an animal-free “Next Generation Risk Assessment” (NGRA).
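As a concrete, hedged example of one NAM class named above (machine learning models), the sketch below trains a random-forest hazard classifier on placeholder descriptor data with scikit-learn. The descriptors and labels are random stand-ins; a real NAM would use curated chemical structures, bioassay readouts, and rigorous validation.

```python
# Minimal sketch of a machine-learning NAM: a random-forest classifier that
# predicts a binary hazard label from chemical descriptors. Descriptors and
# labels below are random placeholders for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))        # 200 chemicals x 8 hypothetical descriptors
y = rng.integers(0, 2, size=200)     # placeholder labels: 0 = inactive, 1 = active

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="balanced_accuracy")
print(f"cross-validated balanced accuracy: {scores.mean():.2f}")  # ~0.5 on random data
```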
Human brain microphysiological systems in the study of neuroinfectious disorders
2023, Barreras, Paula, Pamies, David, Hartung, Thomas, Pardo, Carlos A.
Microphysiological systems (MPS) are 2D or 3D multicellular constructs able to mimic tissue microenvironments. The latest models encompass a range of techniques, including co-culturing of various cell types, the use of scaffolds and extracellular matrix materials, perfusion systems, 3D culture methods, 3D bioprinting, organ-on-a-chip technology, and examination of tissue structures. Several human brain 3D cultures, or brain MPS (BMPS), have emerged in the last decade. These organoids or spheroids are 3D culture systems derived from induced pluripotent stem cells or embryonic stem cells that contain neuronal and glial populations and recapitulate structural and physiological aspects of the human brain. BMPS have recently been introduced into the study and modeling of neuroinfectious diseases and have proven useful for establishing the neurotropism of viral infections, characterizing the cell-pathogen interactions needed for infection, assessing cytopathological effects and genomic and proteomic profiles, and screening therapeutic compounds. Here we review the different organoid methodologies used in neuroinfectious disease research, including spheroids, guided and unguided protocols, and models containing microglia or a blood-brain barrier, as well as their specific applications and limitations. The review provides an overview of existing models for specific infections, including Zika, dengue, JC virus, Japanese encephalitis, measles, herpes, SARS-CoV-2, and influenza viruses, among others, and provides useful concepts for disease modeling and antiviral agent screening.
Organoid intelligence (OI) : The ultimate functionality of a brain microphysiological system
2023, Smirnova, Lena, Morales Pantoja, Itzy E., Hartung, Thomas
Understanding brain function remains challenging as work with human and animal models is complicated by compensatory mechanisms, while in vitro models have until now been too simple. With the advent of human stem cells and the bioengineering of brain microphysiological systems (MPS), understanding how both cognition and long-term memory arise is now coming into reach. We suggest combining cutting-edge AI with MPS research to spearhead organoid intelligence (OI) as synthetic biological intelligence. The vision is to realize cognitive functions in brain MPS and scale them to achieve relevant short- and long-term memory capabilities and basic information processing, as the ultimate functional experimental models for neurodevelopment and neurological function and as cell-based assays for drug and chemical testing. By advancing the frontiers of biological computing, we aim to (a) create models of intelligence-in-a-dish to study the basis of human cognitive functions, (b) provide models to advance the search for toxicants contributing to neurological diseases and identify remedies for neurological maladies, and (c) achieve relevant biological computational capacities to complement traditional computing. Increased understanding of brain functionality, in some respects still superior to today’s supercomputers, may allow us to imitate it in neuromorphic computer architectures or might even open up biological computing to complement silicon computers. At the same time, this raises ethical questions, such as where sentience and consciousness start and what the relationship is between a stem cell donor and the respective OI system. Such ethical discussions will be critical for the socially acceptable advance of brain organoid models of cognition.
G × E interactions as a basis for toxicological uncertainty
2023-06-01, Suciu, Ilinca, Pamies, David, Peruzzo, Roberta, Wirtz, Petra H., Pallocca, Giorgia, Hauck, Christof R., Brunner, Thomas, Hartung, Thomas, Amelio, Ivano, Leist, Marcel
To transfer toxicological findings from model systems, e.g. animals, to humans, standardized safety factors are applied to account for intra-species and inter-species variabilities. An alternative approach would be to measure and model the actual compound-specific uncertainties. This biological concept assumes that all observed toxicities depend not only on the exposure situation (environment = E), but also on the genetic (G) background of the model (G × E). As a quantitative discipline, toxicology needs to move beyond merely qualitative G × E concepts. Research programs are required that determine the major biological variabilities affecting toxicity and categorize their relative weights and contributions. In a complementary approach, detailed case studies need to explore the role of genetic backgrounds in the adverse effects of defined chemicals. In addition, the current understanding of the selection and propagation of adverse outcome pathways (AOP) in different biological environments is very limited. To improve understanding, a particular focus is required on modulatory and counter-regulatory steps. For quantitative approaches to address uncertainties, the concept of “genetic” influence needs a more precise definition. What is usually meant by this term in the context of G × E are the protein functions encoded by the genes. Besides the gene sequence, the regulation of gene expression and function should also be accounted for. The widened concept of past and present “gene expression” influences is summarized here as Ge. Also, the concept of “environment” needs some reconsideration in situations where exposure timing (Et) is pivotal: prolonged or repeated exposure to the insult (chemical, physical, lifestyle) affects Ge. This implies that it changes the model system. The interaction of Ge with Et might be denoted as Ge × Et. We provide here general explanations and specific examples for this concept and show how it could be applied in the context of New Approach Methodologies (NAM).
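To make the quantitative G × E (and Ge × Et) idea concrete, the sketch below fits a linear model with an interaction term to simulated data: response = baseline + genetic effect + exposure effect + interaction. The data, effect sizes, and variable names are simulated placeholders, not results from the article.

```python
# Hedged sketch of a quantitative G x E model: toxicity response modeled as
# baseline + genetic effect + exposure effect + G x E interaction, fit by
# ordinary least squares. All data and effect sizes are simulated.
import numpy as np

rng = np.random.default_rng(1)
n = 500
G = rng.integers(0, 2, size=n)        # genetic background: 0 = reference, 1 = variant (Ge)
E = rng.uniform(0.0, 1.0, size=n)     # exposure level or timing surrogate (Et)

# Simulated "true" response with an interaction term plus noise
y = 0.2 + 0.3 * G + 1.0 * E + 1.5 * G * E + rng.normal(0.0, 0.2, size=n)

# Design matrix: intercept, G, E, G*E
X = np.column_stack([np.ones(n), G, E, G * E])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(["intercept", "G", "E", "G x E"], beta):
    print(f"{name:>9}: {b:+.2f}")
# A non-zero G x E coefficient quantifies how strongly the exposure effect
# depends on the genetic background, i.e., the interaction described above.
```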
4.2 million and counting… The animal toll for REACH systemic toxicity studies
2023, Knight, Jean, Hartung, Thomas, Rovida, Costanza
The EU’s chemicals regulation, REACH, requires that most chemicals in the EU be evaluated for human health and ecosystem risks, with a mandate to minimize the use of animal tests for these evaluations. The REACH process has been ongoing since about 2008, but a calculation of the resulting animal use is not publicly available. For this reason, we have undertaken a count of animals used for REACH. With EU legislators set to consider REACH revisions that could expand animal testing, we are releasing results for the test categories counted to date: reproductive toxicity tests, developmental toxicity tests, and repeated-dose toxicity tests for human health. The total animal count as of December 2022 for these categories is about 2.9 million. Additional tests involving about 1.3 million animals are currently required by a final proposal authorization or compliance check but not yet completed. The total, 4.2 million, for just these three test categories exceeds the original European Commission forecast of 2.6 million for all REACH tests. The difference arises primarily because the European Commission estimate excluded offspring, which account for most of the animals used for REACH. Other reasons for the difference are extra animals included in tests to ensure that enough survive to meet the minimum test requirement; dose range-finding tests; extra test animal groups, e.g., for recovery analysis; and a high rejection rate of read-across studies. Given higher than forecast animal use, the upcoming debate on proposed REACH revisions is an opportunity to refocus on reducing animal numbers in keeping with the REACH mandate.
The Good, The Bad, and The Perplexing : Structural Alerts and Read-Across for Predicting Skin Sensitization Using Human Data
2023, Golden, Emily, Ukaegbu, Daniel C., Ranslow, Peter, Brown, Robert H., Hartung, Thomas, Maertens, Alexandra
In our earlier work (Golden et al., 2021), we showed 70–80% accuracies for several skin sensitization computational tools using human data. Here, we expanded the data set using the NICEATM human skin sensitization database to create a final data set of 1355 discrete chemicals (largely negative, ∼70%). Using this expanded data set, we analyzed model performance and evaluated mispredictions using Toxtree (v 3.1.0), OECD QSAR Toolbox (v 4.5), VEGA’s (1.2.0 BETA) CAESAR (v 2.1.7), and a k-nearest-neighbor (kNN) classification approach. We show that the accuracy on this data set was lower than previous estimates, with balanced accuracies being 63% and 65% for Toxtree and OECD QSAR Toolbox, respectively, 46% for VEGA, and 59% for a kNN approach, with the lower accuracy likely due to the higher percentage of nonsensitizing chemicals. Two hundred eighty-seven chemicals were mispredicted by both Toxtree and OECD QSAR Toolbox, which was approximately 20% of the entire data set, and 84% of these were false positives. The absence or presence of metabolic simulation in OECD QSAR Toolbox made no overall difference. While Toxtree is known for overpredicting, 60% of the chemicals in the data set had no alert for skin sensitization, and a substantial number of these chemicals were in fact sensitizers, pointing to sensitization mechanisms not recognized by Toxtree. Interestingly, we observed that chemicals with more than one Toxtree alert were more likely to be nonsensitizers. Finally, a kNN approach tended to mispredict different chemicals than either OECD QSAR Toolbox or Toxtree, suggesting that there was additional information to be garnered from a kNN approach. Overall, the results demonstrate that while there is merit in structural alerts as well as QSAR or read-across approaches (perhaps even more so in their combination), additional improvement will require a more nuanced understanding of mechanisms of skin sensitization.
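Because the expanded data set described above is ~70% non-sensitizers, raw accuracy can look respectable even for a useless model, which is why the study reports balanced accuracy. The sketch below illustrates the difference with invented counts; only the class proportion mirrors the abstract.

```python
# Illustration of raw vs. balanced accuracy on an imbalanced data set
# (~70% negatives, as described above). Counts are invented for illustration.
def balanced_accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return (sensitivity + specificity) / 2

# A trivial model that labels every chemical a non-sensitizer on a
# hypothetical 1000-chemical set with 700 negatives and 300 positives:
tp, tn, fp, fn = 0, 700, 0, 300
raw = (tp + tn) / (tp + tn + fp + fn)
print(f"raw accuracy:      {raw:.2f}")                                # 0.70, looks decent
print(f"balanced accuracy: {balanced_accuracy(tp, tn, fp, fn):.2f}")  # 0.50, chance level
```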