UK FSA COT: Paving the way for a UK Roadmap - Development, Validation and Acceptance of New Approach Methodologies - Workshop summary (2021)

Session I

Last updated: 14 February 2024

Drivers vs Challenges: Formulate the Problem Space


Professor Alan Boobis (Imperial College London) presented on “A framework for new approach methodologies for human health safety assessment”.

22.             It was stated that there has been increasing demand for non-animal testing methods and that new in vitro approaches began to be developed around 20 years ago, with a clear vision of the future. NAMs consider biological key events and their mechanistic underpinning. These initial methods were followed by in silico/in vitro approaches that study intermediate biochemical key events (tracing the toxicological potential of chemicals) to support the accumulated knowledge of in vivo effects.

23.             The challenge is to keep up with technology and innovation. There is a plethora of new methodologies published; however, there are still no specific criteria for establishing the reliability of these new methods, and the uncertainty around them is less well characterised than it would need to be for use in a complex risk assessment. Specific criteria are needed for establishing/verifying fitness for purpose and method performance.

24.             NAMs must be assessed against specific goals, and guidelines for decision-making need to be provided/developed following three major steps:

(1)  Problem formulation: context of use.

(2)  Core criteria to be met: accuracy, transparency.

(3)  Specific criteria for methodologies (chemical domain, base mechanism).

25.             The general consensus is on the need to gain confidence that NAMs can predict reliably, with accuracy and transparency. There were discussions around the Parish et al. (2020) paper and how to integrate NAMs into a framework such as Integrated Approaches to Testing and Assessment (IATA).

Professor Mark Cronin (Liverpool John Moores University) presented on the "Development of read-across approaches that are acceptable for regulatory purposes".

26.             Read-across is the process of filling data gaps for a data-poor (target) substance with information from similar, data-rich (source) substance(s). It is often termed an analogue approach when a one-to-one read-across is performed, or a category approach when data from many source substances are read across to the target. A variety of frameworks to perform read-across have been reported, with a harmonised framework presented by Patlewicz et al. (2017). The harmonised framework intends to make read-across suitable for regulatory purposes and is based around seven steps that are common to all frameworks.

27.             The seven steps in the harmonised framework can be further simplified into the need for:

  • Problem formulation. This should define the purpose of the read-across (relating to regulatory requirements) and acceptable levels of uncertainty for the intended purpose. Preliminary knowledge to guide the read-across, e.g., the similarity hypothesis, should be identified, acknowledging that read-across will be specific to the substance and endpoint. There are many sources of guidance, e.g., from ECHA, the OECD and ECETOC, to assist the user.
  • Use of an appropriate similarity hypothesis. A justifiable similarity hypothesis is vital to a strong read-across argument. Frequently used similarity approaches include the use of structural, mode or mechanism of action-based analogues, common degradants or metabolites, measures of chemical similarity based on, e.g., Tanimoto indices derived from molecular fingerprints (a worked sketch follows this list), or biological similarity.
  • Identification of suitable analogues and data. Computational tools such as the OECD Quantitative Structure-Activity Relationship (QSAR) Toolbox, AMBIT, ToxRead, GenRA and ChemTunes.ToxGPS will assist in identifying analogues, particularly those with potentially high-quality data.
  • Assessment of the read-across including uncertainties. The read-across needs to be evaluated to ensure its robustness and justification. The European Chemicals Agency (ECHA) Read-Across Assessment Framework (RAAF) provides expert guidance on the assessment of a read-across for regulatory use. Uncertainties in read-across can be evaluated (e.g. Schultz et al., 2019) and have been shown to be reduced by the inclusion of lines of evidence drawn from NAM data (e.g. Pestana et al., 2021).
  • Appropriate documentation. For regulatory use, the read-across must be fully justified and described, requiring clear documentation. The documentation must be fit-for-purpose and is often based around suitable reporting templates, which include a description of the molecules (target/source), their properties, associated NAM data and other relevant information. The documentation should include a narrative justification of the read-across, including an assessment of the similarity hypothesis and an evaluation of the data and relevant uncertainties. An example of how to perform and report read-across, suitable for regulatory purposes, is provided by ECHA.
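To make the similarity hypothesis and analogue identification steps above more concrete, the sketch below shows one common way of ranking candidate source substances against a data-poor target using the Tanimoto similarity of Morgan (circular) fingerprints, computed with the open-source RDKit library in Python. It is a minimal illustration only, not a prescribed regulatory workflow: the substances, the fingerprint settings and the 0.7 similarity cut-off are illustrative assumptions, and in practice structural similarity would be only one line of evidence alongside mechanistic, metabolic and biological considerations.

    # Minimal illustrative sketch: rank candidate source substances against a
    # data-poor target by Tanimoto similarity of Morgan fingerprints (RDKit).
    # The substances, fingerprint settings and 0.7 cut-off are assumptions
    # chosen for illustration, not regulatory recommendations.
    from rdkit import Chem
    from rdkit.Chem import AllChem, DataStructs

    def morgan_fp(smiles, radius=2, n_bits=2048):
        """Return a Morgan fingerprint bit vector for a SMILES string."""
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            raise ValueError(f"Could not parse SMILES: {smiles}")
        return AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)

    # Hypothetical data-poor target and data-rich source substances.
    target_smiles = "CCOC(=O)c1ccccc1"  # ethyl benzoate (illustrative target)
    sources = {
        "methyl benzoate": "COC(=O)c1ccccc1",
        "benzoic acid": "OC(=O)c1ccccc1",
        "ethyl acetate": "CCOC(C)=O",
    }

    # Rank source substances by decreasing Tanimoto similarity to the target.
    target_fp = morgan_fp(target_smiles)
    ranked = sorted(
        ((name, DataStructs.TanimotoSimilarity(target_fp, morgan_fp(smi)))
         for name, smi in sources.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

    for name, similarity in ranked:
        note = "candidate analogue" if similarity >= 0.7 else "weaker structural match"
        print(f"{name:16s}  Tanimoto = {similarity:.2f}  ({note})")

Even where such a ranking suggests close structural analogues, the read-across would still require the justification, uncertainty assessment and documentation steps described above before it could support a regulatory decision.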

Session I Roundtable discussion

28.             Participants had heard about the drivers and aspirations to replace animal testing, and were asked what the objective in the UK is, whether there was, or should be, a target date, and how such a date would be met in terms of method development. Participants did not know whether it would be a hard deadline.

29.             The science has matured rapidly over the last 10-15 years. The science around NAMs is driven by the need to understand links between exposures and hazards. Fitting this in with regulation will require acceptance and compromise. There needs to be better dissemination of the science but also some enticement from regulators and industry to allow that to happen. Barriers need to be identified and addressed. There needs to be a better definition of NAMs, and there should be acknowledgement that the expectation can change. Do NAMs need to be predictive, or can they be indicative?

30.             It was questioned whether better problem formulation was needed. We are faced with a range of problems. The question could be which of a number of congeners is the least toxic. In the absence of data for a chemical, can we at least understand whether it is likely to be of major concern or of lower concern?

31.          The ultimate aim is to protect the consumer from chemical hazards. If new data are available, they should be incorporated to improve the understanding of the biology. Up to this point, there has been a reliance on animal tests but there is a driver to change. Is it possible to get a better description of human biology than, for example, a rat model?

32.          Participants went on to discuss how risk assessors and policy makers can be convinced that a new method is fit for purpose and provides value. Method development is funded; however, method verification and validation are not. The next step needs investment. The National Centre for the 3Rs (NC3Rs) does some good work trying to bridge the gap. The German government has funded method verification in a couple of cases.

33.          In order to verify a method, it needs to be applied to large numbers of chemicals to establish what is missing. A deadline needs to be set, then milestones set over the next ten years against defined goals. What needs to be done should be determined and costed, and the investment made.

34.          Case studies were highlighted as being important. The most convincing case studies are prospective, but these are difficult to do.

35.          The OECD has considerable interest in NAMs: it approved a skin sensitisation method this year which includes a NAM; there is the adverse outcome pathway (AOP) framework; and there is also a framework for recording omics data.

36.          It was agreed that major funding was required and that regulators need to get together with scientists and innovators to discuss what tools they need.

37.          It was pointed out that when there is the need to move on from in silico and in vitro methods to animal testing, more animal testing tends to be required. For example, based on genotoxicity testing results, regulators may then ask for another assay, or another tissue to be studied.

38.          Public engagement is needed. The public tend to be against animal testing but also expect very high standards of consumer safety and thorough testing. The issue of uncertainty was raised, and how expectations are set. Case studies may show if the uncertainty is as amenable to rigour as we think it is.

39.          If there are case studies, what is going to be the measure of success? There are not currently good benchmarks for many chemicals. Generally, we are looking for the results to not be greatly different from animal models; we want the models to be predictive of humans, but we do not have the data to say when we have been successful.

40.          The question of trade-off was raised. What is the economic and public health benefit of making better decisions on more chemicals? Organisational inertia was also raised, and the risk averse nature of both regulators and scientific advisory panels/committees.

41.          Training needs were discussed, especially at a UK level. There are lots of specialisms within NAMs, and it is difficult to have all of these fully represented within a regulatory body. One suggestion was to have a separate academic unit/centre, funded by government, to go into depth on all these methods and be available to be called upon by scientific advisory committees. Another participant agreed that there needs to be a group in the UK focussed on translational applied research.

42.          It was noted that the UK is still taking part in EU Horizon programmes and training people, though this needs to be better consolidated in the UK. One observation was that next-generation toxicologists in training tend to be very mechanistic but not so able to conduct risk assessments; therefore, bridging this gap is key. One participant was surprised that there is no strategic priorities funding in this area.