Future Steps - 2020 Workshop Report
Driving future research on points of departure (PODs)
At the end of the workshop, a collective roundtable discussion was held: Directing future research – determining PODs using non-animal methods and their use in assessing chemical safety.
The following is a summary of the discussion points on how expertise, themes and knowledge gaps can be brought together in a multidisciplinary setting to drive priorities forward.
- There is a need for pragmatic guidelines on how to develop and implement quality-assured NAMs for safety evaluations. These new methods generate complex new types of data, and there are always likely to be gaps in understanding. The scientific uncertainties of NAMs can and should be described alongside the data. Confidence in the predictions from new methods and models needs to be increased; this does not necessarily require full validation of a NAM approach against the outcomes of animal data, because the aim is to use the data afresh, in a different way, to decide whether or not there is a risk of an adverse outcome in humans. A decision needs to be reached based on scientific evidence, and the uncertainty around the prediction needs to be explained as clearly and rigorously as possible, so that decision-makers can judge whether predictions from NAMs indicate that the risk is acceptable. This requires a complete framework to be developed, which could be tiered in terms of data generation and requirements.
- Probabilistic (statistical) approaches, and in particular Bayesian approaches in machine learning, could be used to explore uncertainty when combining data types. In Bayesian learning everything is a distribution, so a mean and a variance are produced; such algorithms can provide calibrated uncertainties for every case, giving an output and the probability of that output being correct. It is also possible to compare the distribution before and after generating a new piece of data, that is, the prior and the posterior; a minimal sketch of such an update appears after this list.
- There was discussion of benchmarking the outputs of NAMs against PODs estimated from in vivo animal data in risk assessment.
- Toxicodynamic modelling verification: Test the impacts on potency of receptor-based mechanisms in AOPs by using PBPK models.
- Consumer-facing engagement on new approach methods: planning is needed to take NAMs forward by integrating social science and technical research, so that the public can be confident that NAMs keep them as safe as traditional methods do.
- Case studies can be used to evaluate NAMs and how they perform in safety decision-making when assessing the risks of contaminants in food. The FSA needs to define scenarios and substances (through case studies) to be evaluated, and to examine the outputs. It would also be useful if the FSA could gather and articulate current measurement-science issues in toxicology, such as Limit of Detection (LoD) / Limit of Quantitation (LoQ), the treatment of uncertainty, and reference materials and methods: are measurements reliable and validated? Reference methods, reference materials and LoQs should be validated along the way.
- A challenge-led approach should be defined, with case studies, and the models and their underlying basis should be evaluated. Straightforward case studies could be used as an initial step in the process. It was suggested that the FSA should define the technical challenges where solutions are needed.
- Validation and acceptance: use the coumarin in cosmetics case study to show how NAMs could, in principle, be used in safety evaluation for low-level exposures.
- Provocative questions were put forward, such as: how are animal models relevant to humans? When did we decide that animals were good models for humans, and that we were happy with the data? It has become a matter of social acceptance that using data from animal models in our traditional methods is protective of human consumers.
- Exposomics, and exposomics data used alongside both untargeted and targeted metabolomics profiling, may generate useful information on the kinetic behaviour in the body of chemicals already in use in products, helping to improve human exposure modelling.
- Computational methods such as QSAR and molecular docking could be used for potency estimation where known molecular targets can be placed in a dose-response context (see the dose-response sketch after this list). However, absolute potency needs to be evaluated objectively, to understand the relationship between potential activity at a molecular target and the in vivo response in a range of organisms with differing pharmacokinetic attributes.
- Chemicals are processed in the body by bacteria as well as by our own cells and tissues. Can the microbiome be incorporated into models using in vitro methods to reflect physiology? Learnings from the pharmaceutical industry could guide the food industry, although this is likely to be extremely challenging.
- With regard to PBPK modelling, the WHO has developed guidance on how to build a scientifically robust model, and the onus is on the modeller to assess the validity of the model for regulatory acceptance against the WHO criteria. Further guidance on validation of models (for a given purpose) has also recently been published (Parish et al., 2020). Questions arose such as: are there circumstances where simpler in silico compartmental models can be used instead of full PBPK models? A one-compartment sketch follows this list.
- There are generally no United Kingdom (UK) biomonitoring data for chemical exposures in human populations (akin to those from the National Health and Nutrition Examination Survey (NHANES) programme in the USA or the Human Biomonitoring for Europe (HBM4EU) programme). It would be helpful to have UK human data for priority chemicals of interest, or to understand how and when EU data could be interpreted as relevant, or not, to the UK population. It may then be possible to develop human-relevant PBPK models for some classes of chemicals, using human data to learn more about human kinetics.
- Human clinical metabolomics could be used to relate in vitro metabolite signatures to those observed in vivo, leveraging human metabolomic data and human exposure assessment to evaluate the relevance of dose metrics in in vitro systems.
- How can data from new technologies be used and combined going forward, integrating in silico, in vitro and human clinical data types into the risk assessment process to arrive at probabilistic rather than deterministic conclusions? Integration of multiple data types within clear risk-based frameworks will be key (see the Monte Carlo sketch below).
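To illustrate the Bayesian point above, the following is a minimal sketch of a conjugate normal-normal update on a log10(POD) estimate, showing the distribution before and after a new piece of data. The function name and all numerical values are hypothetical illustrations, not workshop outputs.

```python
# Minimal sketch of a conjugate normal-normal Bayesian update, assuming a
# log10(POD) estimate with known observation noise. All numbers are
# illustrative placeholders, not real toxicological values.

def bayes_update(mu_prior: float, var_prior: float,
                 y: float, var_obs: float) -> tuple[float, float]:
    """Posterior mean and variance after one new observation y."""
    precision = 1.0 / var_prior + 1.0 / var_obs
    var_post = 1.0 / precision
    mu_post = var_post * (mu_prior / var_prior + y / var_obs)
    return mu_post, var_post

# Prior belief about log10(POD, mg/kg bw/day), then updated with one new
# in vitro result: the "distribution before and after" a new piece of data.
mu, var = 0.5, 1.0                                  # broad prior
mu, var = bayes_update(mu, var, y=-0.2, var_obs=0.25)
print(f"posterior: mean={mu:.2f}, sd={var**0.5:.2f}")
```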
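As an illustration of placing a molecular target in a dose-response context for potency estimation, the sketch below fits a Hill model to invented in vitro concentration-response data; the AC50 and its crude standard deviation from the fit covariance are illustrative only, and a real analysis would use measured assay responses.

```python
# A minimal sketch of potency (AC50) estimation by fitting a Hill model
# to hypothetical in vitro concentration-response data.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ac50, n):
    """Fractional response at concentration `conc` (same units as ac50)."""
    return top * conc**n / (ac50**n + conc**n)

conc = np.array([0.01, 0.1, 1.0, 10.0, 100.0])   # uM, invented
resp = np.array([0.02, 0.10, 0.45, 0.80, 0.95])  # normalised response

params, cov = curve_fit(hill, conc, resp, p0=[1.0, 1.0, 1.0])
top, ac50, n = params
ac50_sd = np.sqrt(cov[1, 1])   # crude uncertainty on the potency estimate
print(f"AC50 = {ac50:.2f} uM (sd {ac50_sd:.2f}), Hill slope = {n:.2f}")
```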
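On the question of simpler compartmental models versus PBPK, the sketch below implements the classical one-compartment oral-dose (Bateman) equation; all parameter values are hypothetical and are chosen only to show how little is needed for a first-pass kinetic estimate when a full PBPK model may not be warranted.

```python
# A minimal sketch of a one-compartment oral-dose model (Bateman equation).
# Parameters are hypothetical; assumes first-order absorption/elimination
# and ka != ke.
import numpy as np

def conc_oral(t, dose_mg, F, ka, ke, vd_l):
    """Plasma concentration (mg/L) at time t (h) after a single oral dose."""
    return (F * dose_mg * ka) / (vd_l * (ka - ke)) * (
        np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0, 24, 9)                        # hours post-dose
c = conc_oral(t, dose_mg=100, F=0.8, ka=1.0, ke=0.1, vd_l=40)
print(np.round(c, 3))                            # mg/L at each time point
```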
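Finally, as a minimal illustration of a probabilistic rather than deterministic conclusion, the Monte Carlo sketch below samples both exposure and a NAM-derived POD as distributions and reports the probability that exposure exceeds the POD; the distributions and their parameters are assumptions for illustration only.

```python
# A minimal Monte Carlo sketch of a probabilistic risk conclusion:
# sample exposure and the NAM-derived POD as distributions rather than
# point values. All distributional assumptions are illustrative.
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000
# log10 exposure and log10 POD, both in mg/kg bw/day (hypothetical)
exposure = rng.normal(loc=-3.0, scale=0.5, size=n)
pod      = rng.normal(loc=-1.0, scale=0.4, size=n)

prob_exceed = np.mean(exposure > pod)
margin = np.percentile(pod - exposure, 5)   # 5th percentile log10 margin
print(f"P(exposure > POD) = {prob_exceed:.4f}")
print(f"5th-percentile margin of exposure: 10^{margin:.2f}")
```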
Research priorities & recommendations:
1. Incorporate the microbiome in models using in vitro methods to reflect physiology.
2. Exploration of intracellular dosing. It is important to define/predict what entered the cell rather than what was added to the well; the objective is to approximate the free concentration in the tissue (see the sketch after this list).
3. Assay applicability: assay model validation and applicability for toxicity testing in a regulatory setting.
4. Explore the use of AI algorithms to quantify and propagate uncertainties throughout the process, step by step.
5. Exposure science needs to develop formal criteria and processes for validation.
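Relating to recommendation 2, the sketch below shows one very reduced way to move from the nominal concentration added to a well towards a free medium concentration, via an assumed unbound fraction; the binding constants and concentrations are hypothetical, and a real in vitro distribution model (e.g. of the Armitage type) would include many more terms.

```python
# A minimal sketch converting a nominal well concentration to an
# approximate free medium concentration via a simple binding mass-balance.
# All constants are hypothetical illustrations.

def free_fraction(k_protein: float, protein_gl: float,
                  k_lipid: float, lipid_gl: float) -> float:
    """Unbound fraction in medium given binding to serum protein and lipid."""
    return 1.0 / (1.0 + k_protein * protein_gl + k_lipid * lipid_gl)

c_nominal_um = 10.0                       # what was added to the well
fu = free_fraction(k_protein=0.05, protein_gl=4.0,
                   k_lipid=0.5, lipid_gl=0.2)
c_free_um = c_nominal_um * fu             # closer to the biologically active dose
print(f"fu = {fu:.2f}; free concentration ~ {c_free_um:.2f} uM")
```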