UK FSA COT: Paving the way for a UK Roadmap - Development, Validation and Acceptance of New Approach Methodologies. Workshop summary (2021)

Session II

Last updated: 14 February 2024

The value of data and the right data

Dr Frederic Bois (Certara) presented on the “Replacement of animal experiments with a combination of innovative mathematical modelling and in vitro assays”.

43.             Previously, animal models have been used, with extrapolations to humans that are not always well defined. Today, animal models are still considered the gold standard because there are a lot of data from, and experience with, them, and sometimes there are no human data. It is therefore often asked that a physiologically based pharmacokinetic (PBPK) modelling and simulation model be validated using animal data, for example by checking fit and extrapolation across three animal species, which creates a lot of work.

44.             There is now a consensus that, to better predict toxicity in humans, human cells should be used. In this respect, more sophisticated in vitro systems (that use human cells) are being developed. However, because data provided by in vitro models cannot directly predict in vivo effects, models that can simulate toxicokinetics (TK) and toxicodynamics (TD) are also needed. PBPK/TD models, which link PBPK to quantitative adverse outcome pathway (AOP) or systems toxicology models, can be used in this instance, as they effectively describe the relationship between the concentration in vitro and the concentration in cells in vivo, and the subsequent effects. PBPK/TD models are mechanistic and can therefore be parameterised with in vitro data. Such models also have better extrapolation power than more empirical models. In vitro to in vivo extrapolation (IVIVE) works quite well in this context.
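
A minimal sketch of the kind of reverse-dosimetry IVIVE calculation implied here, assuming a simple steady-state model with hepatic and renal clearance; the function name, parameter values and the assumed molar mass are illustrative only, not taken from the presentation.

```python
# Minimal IVIVE reverse-dosimetry sketch (illustrative values only).
# Idea: scale an in vitro active concentration to an external dose by
# estimating the steady-state plasma concentration per unit daily dose.

def steady_state_conc_per_dose(cl_int_hep_l_per_h, gfr_l_per_h, fub, body_weight_kg):
    """Steady-state plasma concentration (uM) per 1 mg/kg/day oral dose,
    using a simple hepatic intrinsic clearance + renal filtration model."""
    cl_hep = fub * cl_int_hep_l_per_h                 # hepatic clearance, L/h
    cl_renal = fub * gfr_l_per_h                      # renal clearance ~ fub x GFR, L/h
    cl_total = cl_hep + cl_renal
    dose_mg_per_h = 1.0 * body_weight_kg / 24.0       # 1 mg/kg/day
    mw = 300.0                                        # assumed molar mass, g/mol
    dose_umol_per_h = dose_mg_per_h / mw * 1000.0
    return dose_umol_per_h / cl_total                 # uM per mg/kg/day

# In vitro point of departure, e.g. an AC50 from a cell-based assay (uM)
ac50_uM = 5.0
css_per_dose = steady_state_conc_per_dose(
    cl_int_hep_l_per_h=30.0, gfr_l_per_h=6.5, fub=0.1, body_weight_kg=70.0)
oral_equivalent_dose = ac50_uM / css_per_dose         # mg/kg/day
print(f"Oral equivalent dose ~ {oral_equivalent_dose:.2f} mg/kg/day")
```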

45.             Hence there is a need to understand and model TK, TD, and their interplay. It is important to model TD because toxicodynamic effects can alter TK through feedback, for example in the liver, where cytotoxicity can lead to reduced metabolism. However, there are several challenges: there may be no obvious targets, and a chemical might have a mixed and complex mode of action. Cell co-culture assays and human-on-chip systems may help, and there is growing recognition that interactions between cells are very important. Although there is a lot of enthusiasm about this, it should be remembered that these are still in vitro systems, which have limitations. Complex in vitro systems may also pose ethical problems, such as determining when consciousness might emerge. Another challenge, which is starting to be seriously tackled, is the complexity of metabolic processes; however, understanding the fate and effects of metabolites even in vitro requires large analytical chemistry resources and assay development time.
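
A toy example of the TK/TD interplay described above, assuming hepatic clearance scales with the viable cell fraction so that cytotoxicity feeds back on kinetics; the model structure and parameter values are illustrative, not the presenter's.

```python
# Toy coupled TK/TD model: hepatic clearance scales with the viable cell
# fraction, so toxicity feeds back on kinetics (illustrative parameters).
import numpy as np
from scipy.integrate import solve_ivp

k_abs, cl_max, v_d = 1.0, 5.0, 40.0   # 1/h, L/h, L
k_kill, k_regen = 0.02, 0.01          # toxicity and regeneration rate constants

def rhs(t, y):
    a_gut, c_plasma, viable = y       # amount in gut (mg), plasma conc (mg/L), viable fraction
    cl = cl_max * viable              # clearance drops as cells die
    dA = -k_abs * a_gut
    dC = (k_abs * a_gut - cl * c_plasma) / v_d
    dV = -k_kill * c_plasma * viable + k_regen * (1.0 - viable)
    return [dA, dC, dV]

sol = solve_ivp(rhs, (0.0, 48.0), [100.0, 0.0, 1.0], t_eval=np.linspace(0, 48, 97))
print(f"Plasma conc at 48 h: {sol.y[1, -1]:.3f} mg/L, viable fraction: {sol.y[2, -1]:.2f}")
```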

46.             Building generic PBPK/TD models for predicting toxicity faces other challenges: i) even if the AOP and mechanism of action (MoA) have been defined, it may be difficult to model these mathematically and to integrate the large amounts of omics data using current statistical approaches; and ii) understanding the details of TK and toxic effects requires many data and implies significant costs and logistics to organise.

47.             In addition, regulatory agencies are starting to ask how to address variability in humans, instead of applying blanket safety factors. Assessing this variability is becoming possible with high-throughput in vitro systems, but it requires proper statistical analysis, and it is a challenge to design an in vitro model which captures human variability. Relatively simple approaches, such as read-across, QSAR, or small quantitative AOP models, may also be useful: within a properly defined domain of validity, they can be used to extrapolate measurements made in cells to humans, and eventually to assess variability.
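
A sketch of how human variability might be propagated statistically rather than covered by a blanket factor, assuming illustrative population distributions for intrinsic clearance, plasma binding and glomerular filtration; none of the numbers are real measurements.

```python
# Sketch: propagate human variability in clearance and plasma binding
# through a steady-state TK relationship by Monte Carlo sampling.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
# Illustrative log-normal population distributions (not measured values)
cl_int = rng.lognormal(mean=np.log(30.0), sigma=0.5, size=n)   # L/h
fub = rng.lognormal(mean=np.log(0.1), sigma=0.3, size=n)       # unbound fraction
gfr = rng.normal(loc=6.5, scale=1.0, size=n).clip(min=1.0)     # L/h

cl_total = fub * (cl_int + gfr)                                # L/h
dose_umol_per_h = (1.0 * 70.0 / 24.0) / 300.0 * 1000.0         # 1 mg/kg/day, MW 300
css = dose_umol_per_h / cl_total                               # uM

p50, p95 = np.percentile(css, [50, 95])
print(f"Median Css {p50:.2f} uM; 95th percentile {p95:.2f} uM "
      f"(ratio {p95 / p50:.1f}, a data-driven alternative to a default factor)")
```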

48.             How can omics and bioinformatics data be integrated? Systems biology models may be a viable answer to that challenge. They can model biochemical reactions, organelle or cell responses, up to tissue effects. Virtual organ models have been developed, such as the cardiac simulator developed by the US EPA, and these can be integrated into a complete virtual body model. There is an ongoing virtual human (VPH) project funded by the Dutch government. This requires strong interactions between physiologists, biochemists, bioinformaticians and mathematicians, communities that tend to work in isolation. The good news, however, is that computational models are being integrated earlier in the design of research projects and can even be at the core of research and development projects.
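
A generic example of the kind of biochemical-reaction module that such systems biology models compose into organ- and body-level simulations; the reaction and parameter values are placeholders.

```python
# A generic biochemical building block: substrate S converted to product P
# by an enzyme with Michaelis-Menten kinetics, one module of the kind that
# systems biology models chain together up to tissue-level behaviour.
from scipy.integrate import solve_ivp

v_max, k_m = 2.0, 5.0   # umol/h, uM (illustrative)

def module(t, y):
    s, p = y
    rate = v_max * s / (k_m + s)
    return [-rate, rate]

sol = solve_ivp(module, (0.0, 10.0), [20.0, 0.0])
print(f"Substrate remaining after 10 h: {sol.y[0, -1]:.2f} uM")
```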

49.             What roadmap steps are needed to gain regulatory acceptance as the scientific evidence emerges? Dr Bois noted that regulatory agencies are duly cautious and do not want to miss unforeseen targets: all possible mechanisms of action should be investigated for any chemical. New methods and models can only partly address that need, in specific areas. For now, a mix of standard screening tools, statistical or empirical models, and new approaches is needed for tiered data integration and analysis. As new methods are used in tandem with standard ones, their pros and cons can be understood, and confidence in the best ones should increase. There is a need not only for tiered risk assessment but also for tiered model building and tiered data development and integration.

Dr Costanza Rovida (CAAT Europe) presented on “Internationalization of read-across as a validated new approach method (NAM) for regulatory toxicology”.

50.             A workshop report, “Internationalization of read-across as a validated NAM for regulatory toxicology”, was published in 2018 (Rovida et al., 2018); it benefitted from the participation of many people with different expertise and covered a wide variety of issues.

51.             For confidence building, the following will be needed: chemical and biological starting information for similarity assessment; NAMs and AOPs; Absorption, Distribution, Metabolism and Excretion (ADME) information; the applicability domain of the read-across (RAx) workflow; RAx for non-classified substances; and hazard characterisation and potency.

52.          For good read-across there needs to be an unambiguous algorithm: what is needed should be properly defined and independent. There should be learning from the OECD principles established for the validation of QSAR. There also need to be a defined domain of applicability, Good Laboratory Practice (GLP) principles, and a mechanistic interpretation that is justified.
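
A toy illustration of what an unambiguous read-across algorithm with a defined applicability domain could look like: predict from the most similar analogue only if similarity exceeds a fixed threshold. The features, analogues and endpoint values are hypothetical.

```python
# Toy read-across: predict an endpoint for a target chemical from its most
# similar analogue, but only inside a defined applicability domain
# (Tanimoto similarity above a fixed threshold). All data are hypothetical.

def tanimoto(features_a, features_b):
    """Jaccard/Tanimoto similarity between two sets of structural features."""
    return len(features_a & features_b) / len(features_a | features_b)

def read_across(target, analogues, threshold=0.7):
    """Return (prediction, analogue name, similarity), or None if outside the domain."""
    best = max(analogues, key=lambda a: tanimoto(target, a["features"]))
    sim = tanimoto(target, best["features"])
    if sim < threshold:
        return None                       # outside the applicability domain
    return best["noael_mg_per_kg"], best["name"], sim

analogues = [
    {"name": "analogue A", "features": {"ester", "aromatic_ring", "chloro"},
     "noael_mg_per_kg": 50.0},
    {"name": "analogue B", "features": {"ester", "aromatic_ring", "methyl"},
     "noael_mg_per_kg": 120.0},
]
target_features = {"ester", "aromatic_ring", "methyl", "hydroxyl"}
print(read_across(target_features, analogues))
```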

53.          From an international perspective on risk assessment, there needs to be awareness that there are different approaches worldwide, and the risk assessment needs to be as reproducible as possible. An example of this is RiskHunt3R, a group that can support industry in understanding risk assessment approaches.

54.          There needs to be good communication and dialogue between regulators and industry; both share the aim of making the world and the environment safe. To reach that goal, the dialogue requires open-mindedness to new ideas, eagerness to acquire new skills, and direction. Education is not just for industry and regulators: teaching of NAMs should start at university and even at secondary school.

Professor Thomas Hartung (Johns Hopkins University) presented on “Toward a paradigm shift in toxicity testing to improve public health”.

55.          There is mounting pressure to move away from animal testing, and the discussion needs to move from ethics to the reproducibility and quality of the science. It is possible to adapt, but how will the change be made? What animal tests can do is still strongly overestimated, and there is little appetite to talk about their shortcomings because these tests have been used for a long time. There needs to be an incentive to objectively assess what these methods can do.

56.          Therefore, it is now key to try to explain what the new methodologies can and cannot do, and to accept that they can outperform animal methods. Algorithms have only recently become powerful enough to handle data of these types and sizes. Cell culture has advanced greatly in the last 10 years, for example with microphysiological systems.

57.          These new alternatives are not just for regulatory use: they enable front-loading of safety testing in pharma and support green toxicology and green chemistry testing strategies.

58.          The big challenge is the need for good-quality reporting and validation of results; how data are handled and reviewed is also important. This will be fundamental to evidence integration and to defining each approach.

59.          The roadblocks to many of these processes are often economic and legal, alongside issues around validation. However, most of the change will likely come from politics.

Session II Roundtable discussion

60.          A constant issue raised is the lack of 'human data'. What can, and needs to, be done to improve the quality and availability of human data? The toxicological and clinical communities need to work together on this.

61.          There are ongoing conversations around the word "validation" and what it means for NAMs in the regulatory space. Could synthesising the evidence, and providing guidance on that, be an alternative way forward?

62.          It was considered that people's ability to understand, and feel confident in accepting, more sophisticated methods and complex data should not be underestimated. An example is benchmark dose modelling approaches, which have been around and accepted for more than 20 years.
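
A minimal benchmark dose (BMD) sketch, assuming a Hill-type model fitted to made-up continuous data and a 10% benchmark response; real BMD analyses use dedicated software, model averaging and confidence limits (the BMDL).

```python
# Minimal benchmark dose (BMD) sketch: fit a Hill-type dose-response to
# illustrative continuous data and solve for the dose giving a 10% change
# from the modelled control response (BMR = 10%). Data are made up.
import numpy as np
from scipy.optimize import curve_fit, brentq

doses = np.array([0.0, 1.0, 3.0, 10.0, 30.0, 100.0])
response = np.array([1.00, 1.02, 1.10, 1.35, 1.72, 1.95])   # e.g. relative organ weight

def hill(d, bottom, top, ed50, n):
    return bottom + (top - bottom) * d**n / (ed50**n + d**n)

params, _ = curve_fit(hill, doses, response, p0=[1.0, 2.0, 10.0, 1.0],
                      bounds=([0.5, 1.0, 0.1, 0.2], [1.5, 3.0, 300.0, 5.0]))
target = params[0] * 1.10                                    # 10% above modelled control

bmd = brentq(lambda d: hill(d, *params) - target, 1e-6, doses.max())
print(f"Benchmark dose (BMR = 10%): {bmd:.1f} (same units as dose)")
```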

63.          Some participants discussed whether the classification of methods as NAMs is too broad for acceptance by regulators and by those not familiar with them. Should methods instead be subdivided and adopted individually?

64.          Suggestions around what might invalidate the data were also put forward. There will be a need to integrate these and consider them as uncertainties.

65.          There are good scientific tools that provide useful information, but they are expensive; why should regulators only rely on freeware? Discussions noted that, currently, most regulators do not have the IT infrastructure to accept and handle the data. How much data do applicants have to share with regulators? For example, when submitting PBPK data and models, does the raw code need to be provided? Do contract research organisations (CROs) need to work with regulators to be able to reproduce the data? Regulators need to be ready to receive the data straight away and be able to respond.