General

COT seminars and joint meetings

Papers for meetings and seminars that are not formal COT meetings but are sponsored by COT, either jointly or alone. Please note that links to the national archive may not be compatible with screen readers.

Last updated: 11 April 2024

Evolving our Assessment and Future Guiding Principles Workshop: 17th May 2023

The Committee on Toxicity of Chemicals in Food, Consumer Products and the Environment (COT) held a workshop to start work on updating their guidance on toxicity testing and its supporting principles. The starting point for the process is to use existing frameworks and guidance but with the aim of introducing innovative improvements where appropriate.  The workshop aimed to identify areas where guidance needs to evolve and included reviewing fundamental risk assessment principles, current guidance on risk assessment and what can be learned from it, integration of new approach methodologies, exploring hazard vs risk and weight of evidence. The overall objective was to discuss how the Committee moves forward in a new era of risk assessment.

Evolving Our Assessment & Future Guiding Principles Workshop Report (2023)

Opportunities and Outlook for UK Food and Chemicals Regulation Post EU Exit Workshop: 13th July 2022

The workshop took place on the 13th of July 2022 in Liverpool, UK. Participants were from industry, academia and regulatory agencies. The day was divided into three sessions:

  • The landscape of regulation post EU exit: UK stakeholder perspectives, International perspectives, opportunities and challenges for UK divergence;
  • Major drivers for change and potential impact on chemical regulation; and
  • Indirect Effects: food prices, food security, supply chain, fraud (Food regulation/human health).

Each of the sessions consisted of presentations followed by a roundtable discussion and included interactive sessions.

Opportunities and outlook for UK Food and Chemicals regulation post EU Exit Workshop Report 2022

COT FSA Physiologically Based Pharmacokinetic (PBPK) Modelling for Regulators Workshop Report (2021)

The UK FSA and the COT held a PBPK for Regulators workshop in a multidisciplinary setting with delegates from regulatory agencies, government bodies, academia, and industry. The workshop provided a platform to enable expert discussions and presentations on the application of physiologically based pharmacokinetic (PBPK) modelling to human health risk assessment in a regulatory context as well as potential future research.

COT FSA PBPK for Regulators Workshop Report 2021

Handbook - COT FSA PBPK for Regulators Workshop 2021

UK FSA COT Paving the way for a UK Roadmap-Development, Validation and Acceptance of New Approach Methodologies Workshop summary (2021)

The UK FSA and the COT held a workshop that took place online over two days in October 2021, with worldwide participation from industry, academia, and regulators. The aim of the workshop was to receive insights, comments, and input on the roadmap from a wide variety of stakeholders across industry, academia, and government, so that it could be developed into a useful and engaging document that is beneficial to more than just the FSA and COT. Contributors included scientists, policy specialists and lawyers, as well as those working in the international space and engaging with the public. Furthermore, there were discussions on a range of areas, including legal and economic issues, socio-technical barriers, and regulatory frameworks.

UK FSA COT Paving the way for a UK Roadmap-Development, Validation and Acceptance of New Approach Methodologies Workshop summary (2021)

New Approach Methodologies (NAMs) In Regulatory Risk Assessment Workshop - Exploring Dose Response: March 2020

The UK Food Standards Agency (FSA) and the Committee on Toxicity of Chemicals in Food, Consumer Products and the Environment (COT) held an “Exploring Dose Response” workshop in a multidisciplinary setting inviting regulatory agencies, government bodies, academia and industry. The workshop provided a platform from which to address and enable expert discussions on the latest in silico prediction models, new approach methodologies (NAMs), physiologically based pharmacokinetics (PBPK), future methodologies, integrated approaches to testing and assessment (IATA) as well as methodology validation.

Using a series of presentations from external experts and case-study discussions (plastic particles, polymers, tropane alkaloids, selective androgen receptor modulators), the workshop outlined and explored an approach that is fit for purpose for future human health risk assessment in the context of food safety. Furthermore, possible future research opportunities were explored to establish points of departure (PODs) using non-animal alternative models and to improve the use of exposure metrics in risk assessment.

New Approach Methodologies (NAMs) In Regulatory Risk Assessment Workshop Report 2020- Exploring Dose Response

COT Symposium, Wednesday 18th March 2015

The potential implications of obesity on the kinetics of persistent organic pollutants and possible ramifications for the risk assessment process, Wednesday 18th March 2015, Jury's Inn, Birmingham

Agenda and application form

External link to the national archive for the COT symposium agenda 18th March 2015

External link to the national archive for the application form to attend the COT symposium 18th March 2015

COT workshop on the evaluation and expression of uncertainties in risk assessment: 3rd February 2010

The COT held a one-day workshop on Wednesday 3rd February 2010 at Oulton Hall, Leeds.

The workshop brought together a number of internationally distinguished experts to evaluate, using four case studies, a framework that had been developed for the transparent evaluation and expression of uncertainty in hazard characterisation. The framework will be developed further before publication and will enable COT, and other committees that perform toxicological evaluations, to improve communication regarding the sources of uncertainty in their risk assessments.

COT workshop on 21st century toxicology: 11th February 2009

The COT held a one-day workshop in Meriden, West Midlands on Wednesday 11th February 2009

Internationally distinguished speakers described the activities underway for the refinement of experimental and risk assessment toxicology through the generation of improved understanding of mechanisms and the interplay of systems. The workshop addressed a recent US National Academy report called Toxicity Testing in the 21st Century: A Vision and a Strategy. The report called for accelerated development and adoption of human cell in vitro and in silico methods for the prediction of hazards, the determination of mechanistic information, and the integration of data. The aim is to facilitate predictions of human in vivo responses without animals, and in a high-throughput manner applicable to combinations of mixtures and historic compounds with incomplete safety data.

Introduction to the workshop from the COT Chair

The Committee on Toxicity (COT) evaluates chemicals for their potential to harm human health. Risk assessments are carried out at the request of the Food Standards Agency, Health Protection Agency and other Government Departments, including the Regulatory Authorities. They cover chemicals in food, consumer products and the environment.

The Committee also provides advice on important general principles of risk assessment for chemicals, and on new scientific discoveries in connection with toxic risks. An important part of the COT’s work is its annual ‘horizon scanning’ exercise, in which it considers emerging issues on which scientific advice or research may be needed.

This workshop focuses on questions that emerge from a report published by the United States National Academy in 2007 on "Toxicity Testing in the 21st Century: A Vision and a Strategy". The National Academy report sets out a 10-20 year strategy in which the goal would be to develop and validate toxicological protocols that move away from testing in animals through use of in vitro and computer-based assessments of toxicity and mechanisms.

The aim is to enable predictions of human in vivo responses to chemicals in a high-throughput and cost-effective manner, with less use of experimental animals. Among other things, this might facilitate toxicological assessment of combined exposure to multiple chemicals, which has been an area of increasing interest in recent years.

I would like to welcome all of our distinguished speakers and delegates and I hope that the presentations will stimulate lively discussions and encourage the audience to participate. I would also like to thank the Secretariat for organising this workshop.

Professor D Coggon (Chair)
OBE MA PhD DM FRCP FFOM FFPH FMedSci

The COT will discuss the workshop at its next meeting in April; this will be followed by a statement from the Committee.

Related content

External link to the national archive for the COT workshop on 21st century toxicology scientific programme

 

COT workshop on transgenerational epigenetics: 6th February 2008

The COT held a one-day workshop on transgenerational epigenetics in Prestbury, Cheshire, on Wednesday 6th February 2008.

Internationally distinguished speakers presented the latest hypotheses on the potential for induction of heritable epigenetic changes by environmental factors and the implication of such effects for chemical risk assessment.

See below for full details of the scientific programme.

Scientific programme

Time

Item

Speaker

10.00 Introduction from Chairman of COT Professor Ieuan Hughes
Department of Paediatrics University of Cambridge
10.05 Chromatin and epigenetics: from genotype to epigenotype to complex diseases? Dr Richard Meehan
MRC Human Genetics Unit, Edinburgh
10.40 Reviewing transgenerational inheritance Professor Lorraine Young
University of Nottingham
11.25 Coffee break
11.40 Searching for a heritable epigenetic code Professor Bryan Turner
University of Birmingham
12.15 Male-line transgenerational responses: A new aspect of human inheritance Professor Marcus Pembrey
Institute of Child Health, University College London; Avon Longitudinal Study of Parents and Children, Bristol University
13.00 Lunch
14.00 Current Testing: Repro-toxicology, Transgenerational studies. A science based regulatory perspective Dr Jenny Odum
Syngenta Central Toxicology Laboratory
14.35 The role of genetics in transgenerational epigenetic inheritance Dr Vardhman Rakyan
Barts and the London Queen Mary’s School of Medicine and Dentistry
15.20 Transgenerational epigenetics: What do we need to know before altering the risk assessment paradigm? Professor Jay Goodman
Michigan State University
16.05 Discussion and close of meeting (estimated close 16.30)

 

COT workshop on evolving approaches to chemical risk assessment: 7th February 2007

The Committee on Toxicity of Chemicals in Food, Consumer Products and the Environment (COT) held a one-day workshop on Wednesday 7th February 2007 at Bailbrook House, Bath

Internationally distinguished speakers presented the latest hypotheses on evolving approaches to risk assessment, with a focus on mathematical models. An abstract booklet is available to download.

Scientific programme

Time

Item

Speaker

10.00 Introduction from Chair of COT Professor Ieuan Hughes
Department of Paediatrics University of Cambridge
Session 1: Chair: Professor Wout Slob
10.10 The Benchmark Approach: A Demonstration of Software Dr Bas Bokkers
RIVM, Netherlands
10.40 Probabilistic Exposure Assessment Modelling Dr Andy Hart
Central Science Laboratory
Sand Hutton, York
11.25 Refreshments
Session 2: Chair: Dr Peter Jackson
11.40 Probabilistic Approaches to Hazard Characterisation and Integrated Risk Assessment Professor Wout Slob
RIVM, Netherlands
12.10 Exploring Uncertainty Using Sensitivity Analysis Dr Martin Spendiff
Health and Safety Laboratory Buxton, UK
12.40 Lunch
Session 3: Chair: Professor Ieuan Hughes
14.00 Framework Approaches in Risk Assessment and Weight of Evidence Considerations Professor Alan Boobis OBE
Division of Medicine
Imperial College, London, UK
14.30 Meta-analysis and the Combination of Epidemiological and Toxicological Evidence Professor David Jones
Department of Health Sciences
University of Leicester, UK
15.00 Discussion and close of meeting
(estimated close 16.00)
Professor Alan Boobis OBE

Related content

External link to the national archive for the workshop on the evolving approaches to chemical risk assessment abstract booklet

 

COT workshop on the development and function in adulthood of the male reproductive system: 15th February 2006

The COT held a workshop on the development and function in adulthood of the male reproductive system (potential chemical-induced effects) at the York Moat House Hotel, York, on Wednesday 15th February 2006.

Internationally distinguished speakers presented the latest hypotheses, together with experimental and epidemiological data, on potential chemical-induced effects on the development and subsequent function of the male reproductive system.

Scientific programme

Time

Item

Speaker

10.00 Introduction from Chair of COT Professor Ieuan Hughes
Department of Paediatrics University of Cambridge
Session 1: Chair: Professor Ieuan Hughes
10.10 Cross-sectional studies of semen quality Dr Stewart Irvine
Centre for Reproductive Biology, Queen's Medical Research Institute
Edinburgh
10.40 Male reproductive health in Europe – Cryptorchidism and hypospadias Dr Jorma Toppari
Dept of Physiology and Paediatrics
University of Turku, Finland
11.10 Refreshments
Session 2: Chair: Professor Ieuan Hughes
11.25 Testicular dysgenesis syndrome: a meeting of environmental and lifestyle effects? Professor Richard Sharpe
MRC Human Reproductive Sciences Unit
Queen's Medical Research Institute, Edinburgh
11.55 Genetics and the testicular dysgenesis syndrome Dr Mike Joffe
Department of Epidemiology and Public Health, Imperial College London
12.25 Discussion of human data
12.45 Lunch
Session 3: Chair: Dr Andreas Kortenkamp
13.45 Mixture effects of similarly acting anti-androgens in male rat offspring: Prediction and assessment of effects on anogenital distance and nipple retention Dr Ulla Hass
Danish Institute for Food and Veterinary Research, Copenhagen
14.15 Cumulative effects of in utero administration of mixtures of “antiandrogens” on male rat reproductive development Dr L Earl Gray Jnr.
National Health and Environmental Effects Research Laboratory
US EPA, North Carolina
15.00 Discussion and close of meeting
(16.00) Close of meeting
Chair: Professor Ieuan Hughes

 

Joint COT and CSM workshop on Diet and Drug interactions: 2nd February 2005

 

The COT and the Committee on Safety of Medicines (CSM) held a joint workshop on Diet and Drug interactions at The Copthorne Hotel, Effingham Park, Gatwick, on 2nd February 2005.

Scientific programme

Time

Item

Speaker

9.30 Registration  
Session 1: Chair: Professor John Caldwell
10.00 Introduction to session: Drug-food and food-drug interactions Professor John Caldwell
Dean of the Faculty of Medicine,
University of Liverpool
10.15 Overview of mechanisms of drug-food interactions Professor Kevin Chipman
Professor of Cell Toxicology
University of Birmingham
10.45 The epidemiology of drug-nutrient interactions Dr Corinne de Vries
Senior Lecturer in pharmacoepidemiology
University of Surrey
11.10 Refreshments
11.30 Practical consequences of drug-nutrient interactions Dr Catherine Duggan
Senior Clinical Lecturer
School of Pharmacy University of London
11.45 Regulatory aspects of drug-food interactions Dr Tim Berridge
Senior Scientific Assessor
Medicines and Healthcare Regulatory Agency
12.15 Panel Discussion: How serious an issue is food-drug interaction for the consumer or patient?
12.45 Lunch
Session 2: Session Chair: Professor Ian Rowland
13.45 Introduction to afternoon session Professor Ian Rowland
Professor of Human Nutrition and Director of the Northern Ireland Centre for Food and Health
University of Ulster
14.00 Obesity and its implications for pharmacology Professor Peter Kopelman
Vice Principal (NHS Liaison) and Deputy Warden, Barts and the London, Queen Mary’s School of Medicine and Dentistry
University of London
14.30 Drug-food interactions and vulnerable groups Professor Faith Williams
Professor of Toxicology
Medical School, University of Newcastle
15.00 Tea break
15.15 Herbal medicines – interaction with other medicines Professor Edzard Ernst
Director of Complementary Medicines
Peninsula Medical School,
Universities of Exeter and Plymouth
15.45 Panel discussion: What lesson can be drawn from food-drug interactions for food safety in general or for other mixtures and interactions?
16.15 Close of meeting

 

Workshop of the Working Group on Variability and Uncertainty in Toxicology: 3rd February 2004

A one-day workshop was held on Tuesday 3 February 2004 at the Marriott Hotel, Cardiff; it was the second meeting of the Working Group on Variability and Uncertainty in Toxicology (VUT) and was an open event.

To ensure that a full range of views and concerns were taken into account, the Working Group held a scientific symposium/workshop at which stakeholders gave presentations on issues relevant to the Group's work, followed by an open discussion.

Related pages

External link to the national archive for information on the WGVUT meeting: 3rd February 2004

See COT working groups under sub groups here for details

 

Open Seminar on Physiologically-based Pharmacokinetic (PBPK) Modelling: 12th February 2003

The replacement of default uncertainty factors with adjustment factors based on chemical-specific data is increasingly possible and there is an interest in using such data in chemical risk assessment. The use of PBPK modelling has been identified by the COT as a method that should be investigated further.

This meeting considered the use of PBPK in risk assessment, requirements for PBPK models, and how parameters from PBPK methods could be incorporated into risk assessments. It was hoped that the general discussion on the strengths and weaknesses of PBPK would help to identify if and when PBPK can usefully be integrated into COT risk assessments.
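As an illustrative aside rather than part of the seminar record, the arithmetic behind replacing a default uncertainty factor with a chemical-specific adjustment factor can be sketched as follows. The subfactor values follow the widely used IPCS subdivision of the 100-fold default; the chemical-specific kinetic value is an invented example of the kind of number a PBPK model might supply.

    # Illustrative sketch only: composite uncertainty factor when a chemical-specific
    # (e.g. PBPK-derived) value replaces one default subfactor. All values are
    # assumptions for illustration, not figures from any COT assessment.
    interspecies_tk = 4.0    # default animal-to-human toxicokinetic subfactor
    interspecies_td = 2.5    # default animal-to-human toxicodynamic subfactor
    intraspecies_tk = 3.16   # default human-variability toxicokinetic subfactor
    intraspecies_td = 3.16   # default human-variability toxicodynamic subfactor

    default_factor = interspecies_tk * interspecies_td * intraspecies_tk * intraspecies_td
    print(round(default_factor))   # ~100, the conventional default

    pbpk_interspecies_tk = 2.0     # hypothetical chemical-specific kinetic value
    adjusted_factor = pbpk_interspecies_tk * interspecies_td * intraspecies_tk * intraspecies_td
    print(round(adjusted_factor))  # ~50 for this hypothetical chemical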

Interested groups, organisations or individuals were invited to apply to attend the meeting to be held on Wednesday 12th February 2003 at the Oxford Hotel, Oxford.

 

COT open meeting on Working Group on Risk Assessment of Mixtures of Pesticides (WiGRAMP) held on 28th February 2002

The COT held an open meeting on 28 February 2002 as part of the consultation process, to enable interested parties to discuss and comment on the Working Group's draft report, which had been issued for public consultation on 15 February 2002.

Consumers, academics and industry representatives were among the 42 stakeholders who attended this, the Working Group's second open meeting on pesticides and veterinary medicines, at the De Vere Dunton Hall, Norwich. The Working Group was established to consider the risk assessment of mixtures of pesticides and veterinary medicines.

See COT working groups under sub groups here for details

 

Symposium on Genomics and Proteomics: 8th October 2001

The use of genomics and proteomics has been identified as a suitable topic for a joint meeting between the COT, COC and COM, because it is a rapidly growing area with important implications both for toxicological risk assessment and for regulation.

These technologies are already being used extensively and recent papers have highlighted examples of possible applications of this knowledge.

Meeting objectives

The objectives of the meeting are:

  • To provide advice to government departments and regulatory agencies on use of genomics and proteomics in toxicological risk assessment
  • To facilitate closer working and greater collaboration between the COT, COC and COM.

Strategy for Meeting

In order to help Members prepare, the secretariat has collated abstracts for presentations to the working groups, a bibliography and a glossary of useful terms. A number of selected references have been provided as background information.

  • Session 1. Introduction to the use of genomics/proteomics in toxicology. An overview presentation by Dr G Orphanides (Syngenta) will be followed by a brief introduction from each of the Working Groups. (Chaired by Professor Woods, Chair COT.)
  • Session 2. Members will attend one of three Working Groups. A number of questions have been suggested by the secretariat for each of the WGs (see papers below) in order to help stimulate discussion.
  • Session 3. Professor Blain (Chair COC) will chair a discussion of the outcome from each of the WGs (to be presented by the Facilitators). A number of overall conclusions will be drawn regarding the current and potential future uses of genomics and proteomics in toxicology and areas where further research and development are required.

Post meeting

The Secretariat will prepare a detailed report for consideration by the Committees. It is envisaged that the report will be published in a peer reviewed journal. A statement containing the main conclusions will also be published on the Committees' websites.

Seminar programme

Session 1

Time

Item

Speaker

10:30 - 10:40 Introduction Dr David Harper (Chief Scientist, Department of Health)
10:40 - 11:20 Development of techniques and historical aspects; objectives of the meeting Dr George Orphanides (Syngenta, Macclesfield)
11:20 - 11:40 Introduction to genomics Dr Valerie Baker (Unilever)
11:40 - 12:00 Introduction to proteomics Dr Cliff Elcombe (Dundee)
12:00 - 12:20 Introduction to use in risk assessment Dr Tim Gant (MRC, Leicester)
12:20 - 13:00 Lunch

Session 2

Time

COT/COM/COC Workgroups

Speaker

Facilitator

13:00 - 14:45
WG1 - Use of Genomics in Screening (Speaker: Dr Valerie Baker; Facilitator: Dr Philip Carthew, COT)
WG2 - Use of Proteomics in Screening (Speaker: Dr Cliff Elcombe; Facilitator: Dr Sandy Kennedy, COC)
WG3 - Use of Genomics/Proteomics in Risk Assessment (the problems) (Speaker: Dr Tim Gant; Facilitator: Dr Andy Smith, COT)

14:45 - 15:00 Tea

Session 3

15:00 - 15:45 Joint Discussion Led By The Three Facilitators

Working Group 1: Use of Genomics in Screening

Speaker: Dr Valerie Baker Facilitator: Dr Philip Carthew

Questions suggested by the Secretariat

1) To what applications can toxicogenomics currently be applied with regard to hazard identification or studies of toxicological mechanisms?

2) Could toxicogenomics be applied to specific toxicological endpoints such as immunotoxicity, neurotoxicity, DNA damage, carcinogenesis, and developmental effects?

3) What are the current difficulties regarding methods and data analysis (e.g. variation in hybridisation) which limit the potential for use in hazard identification?

4) What are the likely developments in technology and potential uses of toxicogenomics in the future? What further research and development is required?

The Application of Genomics in Screening for Target Organ Toxicity

Dr Valerie Baker, Safety and Environmental Assurance Centre, Unilever, Colworth House, Sharnbrook, Bedfordshire, MK44 1LQ, UK

The rapid progress in the development of genomic, transcriptomic and proteomic technologies has the potential to have a significant impact on our ability to identify toxic hazards. This could in turn form the basis of more predictive risk assessments whilst greatly improving our current understanding of the mechanisms of toxic processes.

The principles surrounding the application of global gene expression analysis (transcriptomics) in screening for target organ toxicity are based on the premise that gene expression changes will occur as a result of exposure to a toxic chemical. These changes in gene expression are often a more sensitive, characteristic and measurable (at sub-toxic doses) endpoint than the more usual indicators of toxicity (e.g. histopathology) and provide novel information to complement and refine established methods. The use of these technologies (e.g. DNA microarrays) to analyse global changes in gene expression, may permit the identification of diagnostic gene expression patterns which can be used to determine the toxic potential of agents (at sub-toxic doses and early exposure time points). In addition, they may provide new markers of toxicity and will allow enhanced extrapolation between experimental animals, humans and human in vitro models in the context of hazard identification.

The potential of DNA microarray technology to identify gene expression pattern changes associated with target organ toxicity (molecular fingerprints) has recently been the focus of several studies. In addition, there are collaborative efforts involving chemical, agrochemical and pharmaceutical industries, such as the International Life Sciences Institute (ILSI) Health and Environmental Sciences Institute (HESI) subcommittee (application of genomics and proteomics to mechanism based risk assessment) to generate gene expression profiles for 'reference toxicants' (e.g. similar toxic endpoint, mechanism etc) for different target organs. This could permit the identification of diagnostic gene expression patterns which, when established as a database, could be used to classify responses to new chemicals based on pattern recognition (examples of these studies will be discussed during the symposium).

Identification of specific gene responses can also provide insights that may lead to the identification of mechanisms of toxicity.

In short, these new technologies allow the visualisation of large-scale global changes in cells and tissues at the molecular level and will facilitate the development of new and refined approaches to hazard identification and safety evaluation based on the identification of biologically relevant markers of toxicity.
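A minimal, hypothetical sketch of the pattern-recognition idea described above: a new compound's expression profile is compared with reference "fingerprints" by correlation and assigned to the best-matching class. The class labels and fold-change values are invented for illustration and are not taken from the studies cited.

    # Hypothetical sketch: assign a new expression profile to the best-matching
    # reference fingerprint by Pearson correlation. All values are invented.
    from statistics import correlation  # available in Python 3.10+

    reference_fingerprints = {
        "peroxisome proliferation": [2.1, 0.2, 1.8, 0.1],
        "hypertrophy (enzyme induction)": [0.3, 2.4, 0.2, 1.9],
    }
    new_profile = [1.9, 0.4, 1.6, 0.2]  # fold changes for the same (hypothetical) genes

    def best_match(profile, references):
        # Return the reference class whose fingerprint correlates best with the profile.
        return max(references, key=lambda name: correlation(profile, references[name]))

    print(best_match(new_profile, reference_fingerprints))  # "peroxisome proliferation"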

References

Nadadur SS, Schladweiler MC and Kodavanti UP (2000). A pulmonary rat gene array for screening altered expression profiles in air pollutant-induced lung injury. Inhalation Toxicology, 12(12): 1239-1254.

Waring JF, Ciurlionis R, Jolly RA, Heindel M and Ulrich RG (2001). Microarray analysis of hepatotoxins in vitro reveals a correlation between gene expression profiles and mechanisms of toxicity. Toxicology Letters, 120, 359-368.

Waring JF, Jolly RA, Ciurlionis R, Lum PY, Praestgaard JT, Morfitt DC, Buratto B, Roberts C, Schadt E and Ulrich RG (2001). Clustering of hepatotoxins based on mechanism of toxicity using gene expression profiles. Toxicology and Applied Pharmacology, 175 (1), 28-42.

Possible use of Genomics in a Screening Context for Safety Assessment

Dr Phil Carthew, SEAC Toxicology Unit, Unilever

There are two major possibilities for the use of toxicogenomics/proteomics in assisting with regulatory decision making. These are the use of data from:

  • Global genomic/proteomic changes. Pattern recognition is potentially only partially interpretable with our current level of understanding; however, the development of databases of genomic changes for known toxins will aid the development of this area.
  • Mechanistic studies identifying restricted pattern recognition. These are used for understanding biological mechanisms and supporting hypothesis testing.

Example: Hepatomegaly (liver enlargement) in studies.

Liver enlargement is a common finding in studies submitted to the FSA supporting submissions for the approval of food additives. In many cases no other studies are submitted to explain the reasons for this observation, therefore the default position for a regulatory decision will be that the effect is an adverse one, in the absence of any mechanistic studies to indicate otherwise.

Common causes of liver enlargement:

  • Hypertrophy (phenobarbital, dioxins, PCBs)
  • Hyperplasia (chemicals inducing cell death)
  • Hydropic changes (acute toxic changes)
  • Peroxisome proliferation (hypolipidemics, phthalates)
  • Glycogen accumulation (artificial sweeteners, sugar replacements)
  • Fatty Liver (fats, toxins)
  • Angiectasis (polyhalogenated hydrocarbons in bioassays)
  • Early tumour (carcinogens)

Proposed toxicogenomic/proteomic approach to the problem, where increased liver weight is found at autopsy:

  • Freeze liver samples in liquid N2.
  • Examine for gene expression characteristic of the above pathologies (a genomic fingerprint (pattern recognition) of the pathology, or a specific proteomic profile).

Hypertrophy

Pathology - Liver cell enlargement, quantitated by measurement of nuclear profile density in zone 3 and comparison with zone 1 of the liver.
Genes - Microsomal enzymes induced (P450s)

Hyperplasia

Pathology - Mitoses and BrdU incorporation, PCNA expression.
Genes - Cell cycle associated genes, (Cyclins, c-fos cell death genes etc).

Hydropic changes

Pathology - Water uptake by hepatocytes.
Genes - Serum soluble IGF-II/M6P receptor, cell death genes, Ca-sensing receptor (CaR).

Peroxisome proliferation

Pathology - Quantitation of peroxisomes by EM.
Genes - PPAR associated and cell proliferation genes.

Glycogen accumulation

Pathology - Glycogen demonstrated histologically by PAS/Diastase method.
Genes - Glycogen synthase related genes.

Fatty Liver

Pathology - Fat deposition, histological demonstration using Oil red O stain.
Genes - Glucose, insulin and leptin regulation genes.

Angiectasis

Pathology - Sinusoidal dilation and blood lakes.
Genes - Associated with cell death, and endothelial cell function and blood clotting.

Early tumour

Pathology - Altered hepatic foci, focal or multifocal hyperplasia.
Genes - Rodent specific markers GST-P, GGT, alpha foetal protein, cell proliferation genes, DNA repair genes.

Although the genes expressed often overlap in the differing pathologies (e.g. cell death), the overall pattern of expression is likely to distinguish between adverse, as opposed to adaptive, changes in the liver. The advantages both to the decision-making process and to understanding the science behind it are clear. Similar target organ pattern recognition could be developed for other organs (and organ systems) where the pathologies are well defined and the biological and biochemical mechanisms are understood.

Adverse vs adaptive changes in organs.

The use of genomics in a target organ toxicity context will stimulate the ongoing discussion on whether particular genomic changes are adverse, in the same way that there has been much discussion about what constitutes an adaptive, as opposed to an adverse, change in terms of pathology endpoints.

There are currently no universally accepted definitions of NOEL and NOAEL, and it will be important to agree definitions of these terms in order to consider how margins of safety (MOS) or margins of exposure (MOE) will be derived in risk assessments using genomic/proteomic changes.

The US EPA definitions of the important terms currently used in safety assessment to derive margins of safety or margins of exposure are:

NOEL (US EPA)

Dose level (quantity) of a substance administered to a group of experimental animals which demonstrates the absence of adverse effects observed or measured at higher dose levels

NOAEL (US EPA)

An exposure level at which there are no statistically or biologically significant increases in the frequency or severity of adverse effects between the exposed population and its appropriate control. Some effects may be produced at this level, but they are not considered adverse, nor precursors to adverse effects.

Adverse effect (US EPA)

A biochemical change, functional impairment, or pathological lesion that either singly or in combination adversely affects the performance of the whole organism or reduces the organism's ability to respond to an additional environmental challenge.

Biologically significant effect (US EPA)

A response in an organism or other biological system that is considered to have substantial or noteworthy effect (positive or negative) on the well-being of the biological system.

These, or similar agreed definitions of the major terms used in risk assessment, will have to be discussed in the context of how genomic/proteomic data will integrate with the existing concepts in safety studies, augmenting the decision making processes leading to the derivation of MOS or MOEs.
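For illustration only, and with invented figures, the margin of exposure is simply the ratio of an agreed point of departure, such as a NOAEL, to the estimated human exposure; the result is then judged against the uncertainty factors considered appropriate.

    # Illustrative sketch only: margin of exposure (MOE) from a point of departure
    # and an exposure estimate. Both numbers are invented for the example.
    noael_mg_per_kg_bw_day = 10.0       # hypothetical NOAEL from an animal study
    exposure_mg_per_kg_bw_day = 0.02    # hypothetical estimated human exposure

    moe = noael_mg_per_kg_bw_day / exposure_mg_per_kg_bw_day
    print(f"MOE = {moe:.0f}")  # 500, to be compared with e.g. a 100-fold default factor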

Genomics, Transcriptomics and Proteomics: Glossary of Terms

A glossary for the terms used in this seminar.

Allele: alternative form of a gene, e.g. dominant (always expressed if present) or recessive (only expressed if no dominant allele is present).

Amplification: an increase in the number of copies of a specific DNA fragment.

Base pair (bp): two complementary nucleotide bases joined together by chemical bonds. The two strands of the DNA molecule are held together in the shape of a double helix by the bonds between base pairs. The base adenine pairs with thymine, and guanine pairs with cytosine.

Bioinformatics: the science of informatics as applied to biological research. Informatics is the management and analysis of data using advanced computing techniques. Bioinformatics is particularly important as an adjunct to genomics research, because of the large amount of complex data this research generates.

Biomarker: observable change (not necessarily pathological) in the function of an organism, related to a specific exposure or event.

Candidate Gene: A gene that has been implicated in causing or contributing to the development of a particular disease.

C. elegans: Caenorhabditis elegans, a nematode or roundworm, the first animal to have its genome completely sequenced and all its genes fully characterised.

Chromosome: The DNA in a cell is divided into structures called chromosomes. Chromosomes are large enough to be seen under a microscope. In humans, all cells other than germ cells usually contain 46 chromosomes: 22 pairs of autosomes and either a pair of X chromosomes (in females) or an X chromosome and a Y chromosome (in males). In each pair of chromosomes, one chromosome is inherited from an individual's father and one from his or her mother.

Clone: A term which is applied to genes, cells, or entire organisms which are derived from - and are genetically identical to - a single common ancestor gene, cell, or organism, respectively. Cloning of genes and cells to create many copies in the laboratory is a common procedure essential for biomedical research. Note that several processes which are commonly described as cell 'cloning' give rise to cells which are almost but not completely genetically identical to the ancestor cell. 'Cloning' of organisms from embryonic cells occurs naturally in nature (e.g. with the occurrence of identical twins). The laboratory cloning of a sheep ('Dolly') using the genetic material from a cell of an adult animal has recently been reported.

Cloning: the process of producing a genetically identical copy (clone).

Cloning vector: DNA molecule originating from a virus, a plasmid, or the cell of a higher organism into which another DNA fragment of appropriate size can be integrated without loss of the vector's capacity for self-replication; vectors introduce foreign DNA into host cells, where it can be reproduced in large quantities. Examples are plasmids, cosmids, and yeast artificial chromosomes; vectors are often recombinant molecules containing DNA sequences from several sources.

Coding regions: those parts of the DNA that contain the information needed to form proteins. Other parts of the DNA may have non-coding functions (e.g. start-stop, pointing or timer functions) or as yet unresolved functions or maybe even 'noise'.

Codon: a set of three nucleotide bases in a DNA or RNA sequence, which together code for a unique amino acid. For example, the set AUG (adenine, uracil, guanine) codes for the amino acid methionine.

Combinatorial Chemistry: A technique for rapidly and systematically assembling a variety of molecular entities, or building blocks, in many different combinations, to create tens of thousands of diverse compounds that can be tested in drug discovery screening assays to identify potentially useful candidates.

Complementary DNA (cDNA): cDNA is DNA that is synthesised in the lab from mRNA by reverse transcription. A cDNA is so-called because its sequence is the complement of the original mRNA sequence.

Deletion: in the process of DNA replication, a deletion occurs if a nucleotide or series of nucleotides is not copied. Such deletions may be harmless, may result in disease, or may in rare cases be beneficial.

Deoxyribose: A type of sugar which is a component of DNA (Deoxyribonucleic Acid). DNA is a molecule formed of two strands, each of which includes deoxyribose.

DNA (Deoxyribonucleic Acid): the molecule that encodes genetic information. DNA is a double-stranded helix held together by bonds between pairs of nucleotides. See base, base pair, and double helix.

DNA probe: a piece of single-stranded DNA, typically labelled so that it can be detected (for example, a radioactive or fluorescent label can be used), which can single out and bind with (and only with) another specific piece of DNA. DNA probes can be used to determine which sequences are present in a given length of DNA or which genes are present in a sample of DNA.

DNA repair genes: genes which code for proteins which correct 'mistakes' in DNA sequences. When these genes are altered, mutations may be able to accumulate in the genome, ultimately resulting in disease. See genetic mutation, p53 and suppressor gene.

DNA replication: the process of making copies of strands of DNA. Existing DNA is used as a template for synthesising the new strands.

Electrophoresis: A method of separating large molecules (such as DNA fragments or proteins) from a mixture of similar molecules. An electric current is passed through a medium containing the mixture, and each kind of molecule travels through the medium at a different rate, depending on its electrical charge and size. Separation is based on these differences. Agarose and acrylamide gels are the media commonly used for electrophoresis of proteins and nucleic acids.

Endonuclease: An enzyme that cleaves its nucleic acid substrate at internal sites in the nucleotide sequence.

Exogenous DNA: DNA which has been introduced into an organism but which originated outside that organism (e.g. material inserted into a cell by a virus).

Exon: exons are those portions of a gene which code for proteins.

Expressed sequence tag (EST): a short strand of DNA (approximately 200 base pairs long) which is part of a cDNA. Because an EST is usually unique to a particular cDNA, and because cDNAs correspond to a particular gene in the genome, ESTs can be used to help identify unknown genes and to map their position in the genome.

Full gene sequence: the complete order of bases in a gene. This order determines which protein a gene will produce.

Gene: a length of DNA which codes for a particular protein, or in certain cases a functional or structural RNA molecule.

Gene Expression: the process by which the information in a gene is used to create proteins.

Gene Families: Groups of closely related genes that make similar products.

Gene Library: A collection of cloned DNA fragments which, taken together, represent the entire genome of a specific organism. Such libraries or 'gene banks' are assembled so as to allow the isolation and study of individual genes. Gene libraries are produced by first breaking up or 'fractionating' an entire genome. This fractionation can be accomplished either by physical methods or by use of restriction enzymes. The genome fragments are then cloned (multiplied in number) and stored for later use.

Gene Product: the protein produced by a gene.

Genetic Code: the set of codons in DNA or mRNA. Each codon is made up of three nucleotides which call for a unique amino acid. For example, the set AUG (adenine, uracil, guanine) calls for the amino acid methionine. The sequence of codons along an mRNA molecule specifies the sequence of amino acids in a particular protein.

Genetic Engineering: altering the genetic material of cells or organisms in order to make them capable of making new substances or performing new functions.

Genetic Map: a map of a genome which shows the relative positions of the genes and/or markers on the chromosomes.

Genetic Mutation: a change in the nucleotide sequence of a DNA molecule. Genetic mutations are a kind of genetic polymorphism. The term 'mutation', as opposed to 'polymorphism', is generally used to refer to changes in DNA sequence which are not present in most individuals of a species and either have been associated with disease (or risk of disease) or have resulted from damage inflicted by external agents (such as viruses or radiation).

Genetic Polymorphism: a difference in DNA sequence among individuals, groups, or populations (e.g. a genetic polymorphism might give rise to blue eyes versus brown eyes, or straight hair versus curly hair). Genetic polymorphisms may be the result of chance processes, or may have been induced by external agents (such as viruses or radiation). If a difference in DNA sequence among individuals has been shown to be associated with disease, it will usually be called a genetic mutation. Changes in DNA sequence which have been confirmed to be caused by external agents are also generally called 'mutations' rather than 'polymorphisms'.

Genetic Predisposition: susceptibility to a disease which is related to a genetic mutation, which may or may not result in actual development of the disease.

Genomic DNA: The basic chromosome set consisting of a species-specific number of linkage groups and the genes contained therein.

Genome: all the genetic material in the chromosomes of a particular organism; its size is generally given as its total number of base pairs.

Genomic Library: A collection of clones made from a set of randomly generated overlapping DNA fragments representing the entire genome of an organism.

Genomics: the study of genes and their function. Recent advances in genomics are bringing about a revolution in our understanding of the molecular mechanisms of disease, including the complex interplay of genetic and environmental factors. Genomics is also stimulating the discovery of breakthrough healthcare products by revealing thousands of new biological targets for the development of drugs, and by giving scientists innovative ways to design new drugs, vaccines and DNA diagnostics. Genomics-based therapeutics include 'traditional' small chemical drugs, protein drugs, and potentially gene therapy.

Genotype: the particular genetic pattern seen in the DNA of an individual. 'Genotype' is usually used to refer to the particular pair of alleles that an individual possesses at a certain location in the genome. Compare this with phenotype.

Hepatocytes: liver cells.

Hepatotoxicity: toxicity to the liver.

Heterologous Expression Systems: systems that allow expression of a gene in a different organism.

Human Genome Project: an international research effort aimed at discovering the full sequence of bases in the human genome. Led in the United States by the National Institutes of Health and the Department of Energy.

Human Genome Initiative: Collective name for several projects begun in 1986 by DOE to (1) create an ordered set of DNA segments from known chromosomal locations, (2) develop new computational methods for analyzing genetic map and DNA sequence data, and (3) develop new techniques and instruments for detecting and analyzing DNA. This DOE initiative is now known as the Human Genome Program. The national effort, led by DOE and NIH, is known as the Human Genome Project

Hybridization: The process of joining two complementary strands of DNA or one each of DNA and RNA to form a double-stranded molecule.

Idiosyncrasy: specific (and usually unexplained) reaction of an individual to e.g. a chemical exposure to which most other individuals do not react at all. Examples: some people react to their first aspirin with a potentially fatal shock. General allergic reactions do not fall into this category.

In Situ Hybridization (ISH): Use of a DNA or RNA probe to detect the presence of the complementary DNA sequence in cloned bacterial or cultured eukaryotic cells.

Intron: a length of DNA which is interspersed among the protein-coding sequences (exons) in a gene. Introns are transcribed (see transcription) into mRNA but are then cut out of the mRNA sequence before protein synthesis occurs.

Kilobase (kb): a length of DNA equal to 1000 nucleotides.

Knockout Animals: genetically engineered animals in which one or more genes, usually present and active in the normal animal, are absent or inactive.

Library: a set of clones of DNA sequences from an organism's genome. A particular library might include, for example, clones of all of the DNA sequences expressed in a certain kind of cell, or in a certain organ of the body.

Marker: a sequence of bases at a unique physical location in the genome, which varies sufficiently between individuals that its pattern of inheritance can be tracked through families and/or it can be used to distinguish among cell types. A marker may or may not be part of a gene. Markers are essential for use in linkage studies and genetic maps to help scientists to narrow down the possible location of new genes, and to discover the associations between genetic mutations and disease.

Messenger RNA (mRNA): the DNA of a gene is transcribed (see transcription) into mRNA molecules, which then serve as a template for the synthesis of proteins.

Metabonome: constituent metabolites in a biological sample.

Metabonomics: techniques available to identify the presence and concentrations of metabolites in a biological sample.

Murine: of the mouse.

Mutation: A change, deletion, or rearrangement in the DNA sequence that may lead to the synthesis of an altered or inactive protein, or the loss of the ability to produce the protein. If a mutation occurs in a germ cell, then it is a heritable change in that it can be transmitted from generation to generation. Mutations may also occur in somatic cells; these are not heritable in the traditional sense of the word, but are transmitted to all daughter cells.

Nephrotoxicity: toxicity to the kidney.

NMR: Nuclear Magnetic Resonance, a technique to identify atoms in a sample by measuring the signal given off by the relaxation of e.g. protons previously aligned in a strong magnetic field.

Non-genotoxic Carcinogen: a substance that causes cancer, not by primarily damaging the genetic material, but by mechanisms that stimulate cell proliferation, thus increasing the chances for natural mutations to be reproduced, and/or selection of specific cell populations that may derange in a later stage.

Nucleic Acid: one of the family of molecules which includes the DNA and RNA molecules. Nucleic acids were so named because they were originally discovered within the nucleus of cells, but they have since been found to exist outside the nucleus as well.

Nucleotide: the 'building block' of nucleic acids, such as the DNA molecule. A nucleotide consists of one of four bases - adenine, guanine, cytosine, or thymine - attached to a phosphate-sugar group. In DNA the sugar group is deoxyribose, while in RNA (a DNA-related molecule which helps to translate genetic information into proteins), the sugar group is ribose, and the base uracil substitutes for thymine. Each group of three nucleotides in a gene is known as a codon. A nucleic acid is a long chain of nucleotides joined together, and therefore is sometimes referred to as a 'polynucleotide'.

Nucleus: the membrane bound structure containing a cell's central DNA found within all eukaryotic cells.

Null Allele: inactive form of a gene.

Oligonucleotide: A molecule made up of a small number of nucleotides, typically fewer than 25. These are frequently used as DNA synthesis primers.

Oncogene: a gene which is associated with the development of cancer.

Pharmacogenomics: The science of understanding the correlation between an individual patient's genetic make-up (genotype) and their response to drug treatment. Some drugs work well in some patient populations and not as well in others. Studying the genetic basis of patient response to therapeutics allows drug developers to more effectively design therapeutic treatments.

Phenotype: a set of observable physical characteristics of an individual organism. A single characteristic can be referred to as a 'trait', although a single trait is sometimes also called a phenotype. For example, blond hair could be called a trait or a phenotype, as could obesity. A phenotype can be the result of many factors, including an individual's genotype, environment, and lifestyle, and the interactions among these factors. The observed manifestation of a genotype. The phenotype may be expressed physically, biochemically, or physiologically.

Plasmid: A structure composed of DNA that is separate from the cell's genome. In bacteria, plasmids confer a variety of traits and can be exchanged between individuals - even those of different species. Plasmids can be manipulated in the laboratory to deliver specific genetic sequences into a cell.

Polymerase Chain Reaction (PCR): a method for creating millions of copies of a particular segment of DNA. If a scientist needs to detect the presence of a very small amount of a particular DNA sequence, PCR can be used to amplify the amount of that sequence until there are enough copies available to be detected.

Polymorphism: in this context, the existence of inter-individual differences in DNA sequences coding for one specific gene. The effects of such differences may vary dramatically, ranging from no effect at all to the building of inactive proteins.

Primer: Short pre-existing polynucleotide chain to which new deoxyribonucleotides can be added by DNA polymerase.

Probe: Single-stranded DNA or RNA molecules of specific base sequence, labelled either radioactively or immunologically, that are used to detect the complementary base sequence by hybridisation.

Promoter: a segment of DNA located at the 'front' end of a gene, which provides a site where the enzymes involved in the transcription process can bind to the DNA molecule and initiate transcription. Promoters are critically involved in the regulation of gene expression.

Proteome: total protein complement expressed by a cell, tissue or organism.

Proteomics: study of protein properties on a large scale to obtain a global, integrated view of cellular processes including expression levels, post translational modifications, interactions and location.

Recombinant DNA: DNA molecules that have been created by combining DNA from more than one source.

Regulatory Gene: a gene which controls the protein-synthesising activity of other genes.

Reverse Transcriptase: An enzyme used by retroviruses to form a complementary DNA sequence (cDNA) from an RNA template, usually the genome of the retrovirus. The enzyme then synthesises a strand complementary to the cDNA so that a double-stranded DNA molecule is formed. This double-stranded DNA molecule is then inserted into the chromosome of the host cell which has been infected by the retrovirus. Reverse transcriptase is one of the key components that HIV uses to mount its attack.

RNA (ribonucleic acid): a molecule similar to DNA, which helps in the process of decoding the genetic information carried by DNA.

Serum-responsiveness: cell proliferative reaction to the addition of serum to tissue culture medium after prior deprivation.

Sequencing: determining the order of nucleotides in a DNA or RNA molecule, or determining the order of amino acids in a protein.

Signature Sequencing: sequencing of a short stretch of cDNA close to the end of the complementary mRNA. Sequence stretches of some 20 nucleotides are sufficiently discriminative to identify the transcript of an individual gene in a mammalian tissue.

Single Nucleotide Polymorphism (SNP): Inter-individual variations in the genetic code at the level of one nucleotide.

Southern Blotting: Transfer by absorption of DNA fragments separated in electrophoretic gels to membrane filters for detection of specific base sequences by radiolabeled complementary probes.

Splicing: the removal of introns from the sequence of mRNA. When an mRNA molecule is synthesized from a DNA template, introns are transcribed (see transcription) along with exons. In the splicing process, this material is cut out and the exons are joined together to form a continuous coding sequence.

Suppressor Gene: a gene which helps to reverse the effects of damage to an individual's genetic material, typically effects which might lead to uncontrolled cell growth (as would occur in cancer). A suppressor gene may, for example, code for a protein which checks genes for misspellings, and/or which triggers a cell's self-destruction if too many genetic mutations have accumulated.

Toxicogenomics: a new scientific subdiscipline that combines the emerging technologies of genomics and bioinformatics to identify and characterize mechanisms of action of known and suspected toxicants. Currently, the premier toxicogenomic tools are the DNA microarray and the DNA chip, which are used for the simultaneous monitoring of expression levels of hundreds to thousands of genes.

Transcription: the process during which the information in a length of DNA is used to construct an mRNA molecule.

Transcriptomics: techniques available to identify mRNA from actively transcribed genes.

Transcriptome: mRNA from actively transcribed genes

Transcript Profiling: see transcriptomics

Transfer RNA (tRNA): RNA molecules which bond with amino acids and transfer them to ribosomes, where protein synthesis is completed.

Transformation: A process by which the genetic material carried by an individual cell is altered by incorporation of exogenous DNA into its genome.

Transgenic: An organism whose genome has been altered by the inclusion of foreign genetic material. This foreign genetic material may be derived from other individuals of the same species or from wholly different species. Genetic material may also be of an artificial nature. Foreign genetic information can be added to the organism during its early development and incorporated in cells of the entire organism. As an example, mice embryos have been given the gene for rat growth hormone allowing mice to grow into large adults. Genetic information can also be added later in development to selected portions of the organism. As an example, experimental genetic therapy to treat cystic fibrosis involves selective addition of genes responsible for lung function and is administered directly to the lung tissue of children and adults. Transgenic organisms have been produced that provide enhanced agricultural and pharmaceutical products. Insect resistant crops and cows that produce human hormones in their milk are just two examples.

Transgenic Organism: an organism whose genome has been altered by the incorporation of foreign, or exogenous DNA.

Translation: the process during which the information in mRNA molecules is used to construct proteins.

Vector: [1] An organism which serves to transfer a disease causing organism (pathogen) from one organism to another. [2] a mechanism whereby foreign gene(s) are moved into an organism and inserted into that organism's genome. Retroviruses such as HIV serve as vectors by inserting genetic information (DNA) into the genome of human cells. Bacteria can serve as vectors in plant populations.

Xenobiotic(s): substances not normally present in the reference organism

Session 1: Presentation

Dr George Orphanides of the Syngenta Central Toxicology Laboratory, Alderley Park, Macclesfield, Cheshire, on The Use of Genomics and Proteomics in Toxicology.

The large scale sequencing of the genomes of a number of species, and the identification of their entire complement of genes, will have a major impact on the toxicological sciences. Accompanying technological advances have led to the development of procedures that allow the expression levels of thousands of gene transcripts and proteins to be measured simultaneously. These techniques have been termed transcriptomics and proteomics, respectively. Analyses of mRNA and protein expression levels are helping to unravel the molecular bases for toxicity. A cell depends on a multitude of interacting regulatory pathways for its survival; therefore practically all mechanisms of toxicity are accompanied by altered gene and protein expression. With the advent of these new technologies, it is possible to identify rapidly and holistically the molecular alterations associated with adverse health effects. However, the increase in the rate at which these data can be generated has not been accompanied by corresponding advances in our ability to interpret them into biologically meaningful information. Therefore, with these technological breakthroughs comes the significant danger that data will be misinterpreted.

The power of transcriptomics and proteomics can be harnessed in either 'mechanistic' or 'predictive' modes of analysis. In the 'mechanistic' mode, these techniques are used to implicate specific genes or proteins in the mechanism of action. Such research leads are then subjected to further investigation using conventional techniques in order to clarify the roles, if any, they play in toxicity. Alternatively, the technologies can be utilised in a 'predictive' context, with the hope that biological responses induced by toxicants can be characterised by comparison of global patterns of gene and protein expression. In this way, the mode of action of a novel toxicant may be identified by comparing the expression pattern it elicits with established expression 'fingerprints' of reference toxicants whose mechanisms are understood. The application of these expression profiling methods in mechanistic studies has already met with some success. However, it remains to be seen whether the predictive capacity of these methods can enhance our ability to detect compounds with the inherent potential to induce adverse health effects.
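To make the 'predictive' mode concrete, the sketch below compares the expression profile elicited by an unknown compound against reference 'fingerprints' using Pearson correlation, and reports the best-matching class. It is a minimal illustration under stated assumptions: the compound classes, gene panel and values are invented, and correlation is only one of several similarity measures that could be used in practice.

```python
# Illustrative only: classify an unknown expression profile by its correlation
# with reference 'fingerprints'. All names and numbers below are hypothetical.
import numpy as np

# Reference fingerprints: log-ratio expression values over a shared gene panel.
reference_fingerprints = {
    "peroxisome_proliferator": np.array([2.1, 0.3, -1.2, 0.0, 1.8]),
    "genotoxin":               np.array([-0.4, 2.5, 0.1, 1.9, -0.2]),
    "oxidative_stressor":      np.array([0.9, -0.1, 2.2, -1.5, 0.4]),
}

def most_similar_mechanism(profile, references):
    """Return the reference class whose fingerprint best correlates with `profile`."""
    scores = {name: float(np.corrcoef(profile, fingerprint)[0, 1])
              for name, fingerprint in references.items()}
    return max(scores, key=scores.get), scores

# Profile elicited by a hypothetical novel compound, measured on the same panel.
unknown = np.array([1.9, 0.1, -0.9, 0.2, 1.5])
best_match, scores = most_similar_mechanism(unknown, reference_fingerprints)
print(best_match, scores)
```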

The challenge to toxicologists will be to correlate gene and protein expression profiles with phenotype. This will require that expression data be collected alongside classical toxicology information - including biochemical and pathological data - over a comprehensive range of doses and times of exposure. Experimental design and data interpretation must be carefully considered so that the relationships between molecular events and histopathological alterations can be defined. Some of these considerations are listed below.

  • Which experimental model should be used (in vivo, ex vivo or in vitro system)?
  • Which range of compound dose and exposure should be used?
  • Which analytical platform is most suitable (transcriptomics, proteomics or both)?
  • How can molecular biomarkers be identified?
  • Which types of 'classical' toxicology data should be collected together with expression data?
  • How can genes/proteins directly related to the mechanism of toxicity be distinguished from those that represent adaptive change?
  • How can experimental and biological noise be identified and filtered? (see the sketch after this list)
  • How can relationships between different data sets be identified, visualised and compared between different experimental platforms?
  • How can mechanistic data be used to extrapolate between species for human risk assessment?
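The sketch below illustrates the noise-filtering consideration above: genes are flagged only if their replicate measurements show both a minimum fold change and a statistically distinguishable difference between treated and control groups. The two-fold and p < 0.01 thresholds, and the simulated data, are arbitrary placeholders rather than recommended settings.

```python
# Illustrative filter separating candidate expression changes from replicate noise.
# Thresholds and simulated data are placeholders, not recommendations.
import numpy as np
from scipy.stats import ttest_ind

def flag_changed_genes(treated, control, min_log2_fold=1.0, alpha=0.01):
    """treated, control: (n_genes, n_replicates) arrays of log2 intensities.
    Returns a boolean mask of genes passing both fold-change and t-test filters."""
    log2_fold = treated.mean(axis=1) - control.mean(axis=1)
    _, p_values = ttest_ind(treated, control, axis=1)
    return (np.abs(log2_fold) >= min_log2_fold) & (p_values < alpha)

rng = np.random.default_rng(0)
control = rng.normal(8.0, 0.2, size=(100, 4))            # 100 genes, 4 replicates
treated = control + rng.normal(0.0, 0.2, size=(100, 4))  # noise only
treated[:5] += 2.0                                        # five genuinely induced genes
print(flag_changed_genes(treated, control).nonzero()[0])  # indices of flagged genes
```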

It is likely that transcriptomic and proteomic data will soon be used to accelerate and enhance the process of risk assessment. Mechanistic data generated using these techniques will facilitate extrapolation of risk from laboratory animals to humans. While these new technologies offer great potential, their use in risk assessment must be approached with caution. At present, in the absence of classical toxicological data, alterations in gene or protein expression cannot be taken as evidence of an adverse effect and, therefore, should not be used to set NOAELs. Instead, alterations in genes and proteins believed to be involved in toxic mechanisms can be used as the basis for further detailed investigative research using conventional approaches.

References

ECETOC Document No. 42 (2000) Genomics, Transcript Profiling, Proteomics and Metabonomics (GTPM) - An Introduction. Ed. Carpanini, F. M.

Pandey A and Mann M, (2000). Proteomics to study genes and genomes. Nature, 405, 837.

Pennie WD, Woodyatt NJ, Aldridge TC, Orphanides G (2001). Application of genomics to the definition of the molecular basis for toxicity. Toxicol. Lett. 120, 353-358.

Smith L.L, (2001) Key challenges for toxicologists in the 21st century. Trends in Pharm. Sci. 22, 281.

Working Group 2: Use of proteomics in screening

Speaker: Dr Cliff Elcombe Facilitator: Dr Sandy Kennedy

Questions Suggested by Secretariat

1) To what applications can toxicoproteomics currently be applied with regard to hazard identification or studies of toxicological mechanisms?

2) Could toxicoproteomics be applied to specific toxicological endpoints such as immunotoxicity, neurotoxicity, DNA damage, carcinogenesis, and developmental effects?

3) What are the current difficulties regarding methods and data analysis (e.g. identification and quantification of individual proteins) which limit its potential use in hazard identification?

4) What are the likely developments in technology and potential uses of toxicoproteomics in the future? What further research and development is required?

Applications of Proteomics in Toxicology

Dr Cliff Elcombe, Biomedical Research Centre, Ninewells Hospital and Medical School, University of Dundee, Dundee DD1 9SY

The word 'proteome' was first introduced to describe the total protein complement of a genome. In contrast to the genome which can, except for repair and replication processes, be considered as fairly constant, the proteome of a cell is dynamic and changes under various conditions of disease or stress.

To analyse proteomes from cells or tissues, newly developing technologies generically termed proteomics are available. Perhaps the most frequently used methodology is two-dimensional polyacrylamide gel electrophoresis (2D-PAGE). This involves analysis of cell or tissue extracts using isoelectric focussing (IEF) followed by SDS-PAGE.

The proteins are separated initially in the first dimension gel on the basis of their electric charge. This gel is then mounted onto a denaturing gel to separate proteins in the second dimension on the basis of their molecular mass. Staining of the gel yields a characteristic pattern of spots - the 'proteome profile'. An overview of changes in expression of nearly all cellular proteins under varying conditions can be obtained by comparison of the intensity of particular spots, e.g. of carcinogen-exposed versus non-exposed tissue. Qualitative characterisation of protein spots of interest is possible by comparison of the obtained 2-D gel pattern with World Wide Web-based protein and 2-D gel databases. Identification of proteins of interest can be achieved, following their excision from the gel and digestion, by mass spectrometry. Alternatively, selected proteins can be visualised after the gels are subjected to western blotting with specific antibodies. In addition, proteomics can reveal post-translational protein modifications such as glycosylation and phosphorylation.

Other technologies dispense with the need for 2D-PAGE and involve platforms such as Ciphergen's SELDI ProteinChip® system. With this technology, proteins are affinity-captured onto special chemical surfaces and then subjected to laser desorption/ionisation time-of-flight (TOF) mass analysis. This allows identification of the proteins by precise mass comparisons against databases.
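The final identification step can be pictured as a tolerance search: each observed mass from the TOF spectrum is compared against database entries and accepted if it falls within a stated mass tolerance. The sketch below is a toy version of that idea; the protein names, masses and 200 ppm tolerance are invented, and real identification relies on dedicated database-search software.

```python
# Toy illustration of identifying proteins by matching observed masses against a
# database within a ppm tolerance. All entries and values are hypothetical.
protein_mass_db = {          # average masses in daltons (invented values)
    "calbindin_28k": 28000.5,
    "albumin":       66430.3,
    "transferrin":   76000.1,
}

def match_peaks(observed_masses, database, tolerance_ppm=200.0):
    """For each observed mass, list database proteins within the ppm tolerance."""
    return {
        mass: [name for name, db_mass in database.items()
               if abs(mass - db_mass) / db_mass * 1e6 <= tolerance_ppm]
        for mass in observed_masses
    }

print(match_peaks([28003.1, 66441.0, 12000.0], protein_mass_db))
```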

Proteomics enables the quantitative and qualitative analysis of the expression of all (in theory) proteins existing within a cell. Hence, proteomics may be a better predictor of functional changes during (patho)physiological or toxicological processes than genomics (determining changes at the mRNA level), because the latter may not necessarily be translated into changes in the concentrations of functional proteins.

In recent years, proteomics has been used in a limited number of toxicological studies. For instance, changes in proteome profiles of rat liver following exposure to methapyrilene, a mitochondrial proliferator, have been described. In addition to the discovery of novel mitochondrial protein modifications as a result of drug treatment, quantitative changes in non-mitochondrial proteins were also observed. Other workers have shown that cyclosporin-A-mediated nephrotoxicity is due to down-regulation of calbindin, a 28 kDa protein involved in renal calcium metabolism. Quantitative and qualitative changes in proteome profiles have also been observed in various other systems in response to chemicals, including several peroxisome proliferators. Such examples illustrate that 2-D gel technology and proteomics can be used to document quantitative and qualitative changes in protein expression induced in response to toxic agents.

In summary, proteomic technologies are proving their potential for elucidating mechanisms of toxicity, identifying potential hazards and estimating risk.

References

Aicher L, Wahl D, Arce A, Grenet O, Steiner S, (1998). New insights into cyclosporine A nephrotoxicity by proteome analysis. Electrophoresis 19, 1998-2003.

Anderson L, Steele VK, Kelloff GJ, Sharma S, (1995) Effects of oltipraz and related chemoprevention compounds on gene expression in rat liver. J. Cell. Biochem. Suppl. 22, 108-116.

Steiner S and Anderson NL (2000). Expression profiling in toxicology - potentials and limitations. Toxicology Letters, 112-113: 467-471

Applications of Proteomics in Toxicology

Dr Sandy Kennedy, Oxford Glycosciences (UK) Ltd

Screening/predictive toxicology

The use of proteomics in screening and predictive toxicology has two principal applications: establishing relationships between toxic effects and protein molecular markers, i.e. identifying toxicological biomarkers; and recognising patterns, e.g. class effects and structure-activity relationships. In addition, proteomics offers several potential practical benefits. It should be possible to screen for toxic effects more rapidly with the advent of the newer proteomic methodologies (e.g. ICAT - isotope-coded affinity tags - and antibody chips) than with conventional methods. The highly sensitive analytical techniques used in proteomics can potentially detect toxic effects at lower doses than methods such as histology and clinical chemistry. Proteomics has already been applied in a variety of different settings.

Mechanistic toxicology

Proteomics, especially when combined with conventional methods, offers the prospect of new insights into toxic mechanisms. Such insights allow recognition of effects that may be species-specific, giving a more accurate assessment of likely human toxicity. Furthermore, understanding the mechanisms of toxicity of compounds may enable selection of derivatives with lower toxicity.

Non-invasive biomarker identification

A particular advantage of proteomics is that not only tissues but also body fluids can be assayed to investigate the molecular correlates of disease and toxicity. This is possible because many proteins, unlike mRNA, are secreted in profiles that vary predictably with physiological state. As a result, proteomic analysis can be carried out on large numbers of samples on the basis of simple blood or urine tests.

The proteomic evaluation of body fluids can be of particular value in the search for non-invasive biomarkers, because body fluids are representative of the final secreted proteins. This capability is enhanced by the ability to remove high-abundance proteins such as albumin, IgG, haptoglobulin and transferrin from a 2-D gel. An immunoaffinity-based enrichment technique reveals hundreds of proteins in the gel that would previously have been masked from detection. Body fluids are therefore a rich source of biomarkers of toxicity, drug efficacy or exposure to a xenobiotic in man or wildlife. Once a biomarker protein or group of proteins has been identified, standard methods such as immunoassays can be used for screening.

The techniques of proteomics will make a considerable contribution not only to research but also to regulatory toxicology. However, proteomics methods are likely to complement rather than replace older methods of testing for regulatory purposes in the short term. The great potential is that protein biomarkers will be identified that improve the predictivity of animal studies and, in particular, provide that valuable commodity, a bridge between animals and man, giving more assurance to the interpretation of data from animal studies and their predictivity for effects in man.

Until a greater body of toxicoproteomic data has been acquired, it is unwise to use such evaluations for the primary identification of target organ toxicity, although the identification of more sensitive biomarkers can be envisaged as an enhancement to clinical pathology tools in toxicity studies.

Working Group 3: Use of Genomics and Proteomics in Risk Assessment

Speaker - Dr Tim Gant Facilitator - Dr Andy Smith

Questions suggested by Secretariat

1) Can data from genomics/proteomics be applied to identification of NOAELs/LOAELs and be used for risk assessment purposes? (Issues to consider include dose-response analysis and species extrapolation)

2) What are the problems in data analysis (e.g. statistical evaluation of cluster responses) which impact on the use of these data in risk assessment?

3) Can toxicogenomic/proteomic technologies be applied to human epidemiology investigations?

4) What further research and development would assist in the application of these technologies to toxicological risk assessment?

Challenge and Potential of Genomics in Risk Assessment

Dr Tim Gant and Dr Andy Smith, MRC Toxicology Unit, University of Leicester

In the last few years a large number of gene probes have become available, either as Expressed Sequence Tags (ESTs), formed as a consequence of the automated sequencing of cDNA libraries, or as oligonucleotides. Binding these to solid substrates such as glass in an array format has produced a powerful methodology for measuring the simultaneous expression of thousands of genes in biological systems. So far the greatest published use of gene expression pattern recognition has been in the analysis of human tumours, leading to the identification of new pathological subtypes, crucial knowledge for chemotherapy. The use of this technology in toxicology started early in companies and institutes, with the rationale that products with unwanted side effects would give rise to recognisable gene expression patterns that would allow them to be removed early in product development. Additionally, genomics could potentially lead us to a much greater understanding of the mechanism of a toxicant, and therefore to a more mechanism-based assessment of its likely risk in man.

One of the challenges in genomics analysis is deciding the significance of gene expression changes with regard to NOEL and LOEL values. In some ways this is not dissimilar from current questions posed by clinical chemistry, hepatic drug metabolism enzymes and immunohistochemistry, but on a much greater and more sophisticated scale as a result of the quantity of data generated. Critical assessment of these data at the regulatory level will depend on assessors expanding their knowledge of biochemical function and bioinformatic analysis techniques. This will need to be underpinned by extensive experience of gene expression resulting from xenobiotic insult, adaptive pathological change and the influence of genetic differences between strains and species, including humans.

A critical knowledge of bioinformatics is essential because, though apparently simple, the quantitation of gene expression changes involves a number of mathematical and statistical steps, and the manner of their application can vary widely between investigators. Clustering techniques are vital for pattern recognition, but these bring their own problems, particularly relating to which clustering method is most appropriate. Another important development will be the international archiving of toxicological and related microarray data. However, for this to be truly useful as a data-mining resource some large challenges have to be overcome, particularly the development of an ontology.
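As a minimal illustration of the clustering step mentioned above, the sketch below groups hypothetical expression profiles by agglomerative hierarchical clustering. The distance metric and linkage method are exactly the analyst choices highlighted in the talk: changing either can change the clusters recovered, which is one reason the results need careful, informed interpretation.

```python
# Minimal sketch of hierarchical clustering of expression profiles. Data are
# hypothetical; metric and linkage method are analyst choices, not prescriptions.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Rows: samples (e.g. compound/dose/time combinations); columns: log-ratio genes.
profiles = np.array([
    [2.0, 1.8, -0.1, 0.2],   # compound A, high dose
    [1.7, 2.1,  0.0, 0.1],   # compound A, low dose
    [-0.2, 0.1, 2.2, 1.9],   # compound B, high dose
    [0.0, -0.1, 1.8, 2.3],   # compound B, low dose
])

distances = pdist(profiles, metric="correlation")   # 1 - Pearson correlation
tree = linkage(distances, method="average")         # UPGMA-style agglomeration
labels = fcluster(tree, t=2, criterion="maxclust")  # cut the tree into two clusters
print(labels)                                       # expected grouping: A vs B samples
```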

For organisations such as the FSA and advisory committees, it is often not the screening of new products that is the most important issue but the significance to humans of food contaminants and exposure to environmental agents. The assessment of gene expression profiles in human blood cells or from lung lavages will be possible with the development of reliable PCR amplification techniques. For other organs, however, it will be difficult to obtain samples, and human in vitro studies will have to be pursued with great attention paid to experimental design in order to avoid emphasising gene expression changes that have no relevance to in vivo exposure or adverse effects. Using genomics in experimental investigations of a toxicant to identify novel gene targets as biomarkers for further analysis using other techniques might be the best way forward, especially for epidemiological studies.

For risk assessment purposes it will be important to monitor the progress of the proposed UK Population Biomedical Collection looking at environmental and genetic factors in disease, although it is difficult at this stage to envisage direct interaction pertinent to the generation of gene expression data. One should not forget, however, that genomics does not measure protein levels. Proteomics is still in its infancy, and future progress in the resolution of proteins, perhaps by array-type technology, may well lead to fruitful analysis of archived material from large population collections.

References

Alizadeh AA, Eisen MB, Davis RE, Ma C, Lossos IS, Rosenwald A, Boldrick JG, Sabet H, Tran T, Yu X, Powell JI, Yang LM, Marti GE, Moore T, Hudson J, Lu LS, Lewis DB, Tibshirani R, Sherlock G, Chan WC, Greiner TC, Weisenburger DD, Armitage JO, Warnke R, Levy R, Wilson W, Grever MR, Byrd JC, Botstein D, Brown PO, Staudt LM. (2000) Distinct types of diffuse large B-cell lymphoma identified by gene expression profiling. Nature; 403:503-511.

Kerr and Churchill (2001), Experimental design for gene expression microarrays, Biostatistics, 2:183-201.

Lockhart DL, Winzeler E. (2000) Genomics, Gene expression and DNA arrays. Nature; 405:827-836.

Turton NJ, Judah DJ, Riley J, Davies R, Lipson D, Styles JA, Smith AG, Gant TW. (2001) Gene expression and amplification in breast carcinoma cells with intrinsic and acquired doxorubicin resistance. Oncogene; 20:1300-1306.

References

A set of references for the seminar documents.

Afshari CA, Nuwaysir EF and Barrett JC (1999). Application of complementary DNA microarray technology to carcinogen identification, toxicology, and drug safety evaluation. Cancer Research, 59, 4759-4760.

Aicher L, Wahl D, Arce A, Grenet O and Steiner S. (1998). New insights into cyclosporine A nephrotoxicity by proteome analysis. Electrophoresis 19, 1998-2003.

Alizadeh AA, Eisen MB, Davis RE, Ma C, Lossos IS, Rosenwald A, Boldrick JG, Sabet H, Tran T, Yu X, Powell JI, Yang LM, Marti GE, Moore T, Hudson J, Lu LS, Lewis DB, Tibshirani R, Sherlock G, Chan WC, Greiner TC, Weisenburger DD, Armitage JO, Warnke R, Levy R, Wilson W, Grever MR, Byrd JC, Botstein D, Brown PO, Staudt LM. Distinct types of diffuse large B-cell lymphoma identified by gene expression profiling. Nature 2000;403:503-511.

Anderson L, Steele VK, Kelloff GJ, Sharma S, (1995) Effects of oltipraz and related chemoprevention compounds on gene expression in rat liver. J. Cell. Biochem. Suppl. 22, 108-116.

Bartosiewicz M, Trounstine M, Barker D, Johnston R and Buckpitt A. (2000). Development of a toxicological gene arrays and quantitative assessment of this technology. Arch Biochem Biophys. 376 (1), 66-73.

Blanchard K, DiSorbo O, Burris R, Dunn R, Farr S and Stoll R. (2000). Toxicogenomics: Understanding the use of microarrays for toxicology studies in vivo. Toxicologist, 54, 195.

Bowtell, D. D. L. (1999) Options available - from start to finish - for obtaining expression data by microarray. Nat. Genet. Suppl. 21, 25.

Brown CS, Goodwin PC, Sorger PK. Image metrics in the statistical analysis of DNA microarray data. Proc Natl Acad Sci USA 2001;98:8944-8949.

Bulera SJ, Eddy SM, Ferguson E, Jatkoe TA, Reindel JF, Bleavins MR and De La Iglesia FA. (2001). RNA expression in the early characterization of hepatotoxicants in Wistar rats by high-density DNA microarrays. Hepatology, 33 (5), 1239-1258.

Burchiel SW et al. (2001). Analysis of genetic and epigenetic mechanisms of toxicity: Potential roles of toxicogenomics and proteomics in toxicology. Toxicological Sciences, 59, 193-195.

Burczynski ME et al. (2000). Toxicogenomics-based discrimination of toxic mechanism in HepG2 human hepatoma cells. Toxicological Sciences, 58 (2), 399-415.

Ciphergen Biosystems Ltd. and SmithKline Beecham (2001) Correlating ProteinChip® Protein Profiles with Drug Treatment: A Powerful New Strategy for Safety Assessment.

Corton JC and Stauber AJ. (2000). Toward construction of a transcript profile database predictive of chemical toxicity. Toxicological Sciences, 58, 217-219.

ECETOC Document No. 42 (2000) Genomics, Transcript Profiling, Proteomics and Metabonomics (GTPM) - An Introduction. Ed. Carpanini, F. M.

Farr S and Dunn RT (1999). Concise Review: Gene expression applied to toxicology. Toxicological Sciences, 50, 1-9.

Fielden MR and Zacharewski TR (2001). Challenges and Limitations of gene expression profiling in mechanistic and predictive toxicology. Toxicological Sciences, 60, 6-10.

Fountoulakis M, Berndt P, Boelsterli UA, Crameri F, Winter M, Albertini S and Suter L. (2000). Two-dimensional database of mouse liver proteins: Changes in hepatic protein levels following treatment with acetaminophen or its nontoxic regioisomer 3-acetamidophenol. Electrophoresis 21, 2148-2161.

GlaxoWellcome and Ciphergen Biosystems, Inc. (2000) ProteinChip® Case Study: Detection, Identification and Validation of a Protein Marker in Urine for TMPD-Induced Skeletal Muscle Toxicity.

Greene LA. (2001). New Centre a Stroke of Gene-ius. Environmental Health Perspectives, 109 (1), A22-A23.

Harries HM, Fletcher ST, Duggan CM and Baker VA. (2001). The use of genomics technology to investigate gene expression changes in cultured human liver cells. Toxicology In Vitro (in press).

Hughes, T. R., et al., (2000) Functional discovery via a compendium of expression profiles. Cell, 102. 109.

Iannaccone PM. (2001). Toxicogenomics: The call of the wild chip. Environmental Health Perspectives, 109 (1), A8-A11.

Kane MD, Jatkoe TA, Stumpf CR, Lu J, Thomas JD, Madore SJ. Assessment of the sensitivity and specificity of oligonucleotide (50mer) microarrays. Nucl Acids Res 2000;28:4552-4557.

Kennedy S. (2001). Proteomic profiling from human samples: the body fluid alternative. Toxicol Lett. 120, 379-384.

Kerr and Churchill (2001), Experimental design for gene expression microarrays, Biostatistics, 2:183-201.

Khan J, Wei JS, Ringner M, Saal LH, Ladanyi M, Westermann F, Berthold F, Antonescu CR, Peterson C, Meltzer PS. Classification and diagnostic prediction of cancers using gene expression profiling and artificial neural networks. Nature Medicine 2001;7:673-679

Lauerman JF. (2001). Arrays cast toxicology in a new light. Environmental Health Perspectives, 109 (1), A20-A22.

Lewis, T. S., Hunt, J. B., Aveline, L. D., Jonsher, K. R., Louie, D. F., Yeh, J. M., Nahreini, T. S., Resing, K. A. and Ahn, N. G. (2000) Identification of novel MAP kinase signalling targets by functional proteomics and mass spectrometry. Mol. Cell, 6, 1343.

Lobenhofer EK, Bushel PR, Afshari CA and Hamadeh HK (2001). Progress in the application of DNA microarrays. Environmental Health Perspectives, 109 (9), 881-892.

Lockhart DL, Winzeler E. Genomics, gene expression and DNA arrays. Nature 2000;405:827-836.

Lovett RA. (2000). Toxicologists brace for genomics revolution. Science, 389, 536-537.

Medlin JF. (1999). Timely toxicology. Environmental Health Perspect, 107(5), A256-8.

Moller A, Soldan M, Volker U and Maser E. (2001). Two-dimensional gel electrophoresis: a powerful method to elucidate cellular responses to toxic compounds. Toxicology 160, 129-138.

Nadadur SS, Schladweiler MC and Kodavanti UP (2000). A pulmonary rat gene array for screening altered expression profiles in air pollutant-induced lung injury. Inhalation Toxicology, 12(12): 1239-1254.

Newsholme S.J, Maleeff BF, Steiner S, Anderson NL and Schwartz LW. (2000). Two-dimensional electrophoresis of liver proteins: characterization of a drug-induced hepatomegaly in rats. Electrophoresis 21, 2122-2128.

Nuwaysir EF, Bittner M, Trent J, Barrett JC, Afshari CA.(1999). Microarrays and toxicology: the advent of toxicogenomics. Mol Carcinog, 24(3), 153-9.

Pandey, A. and Mann, M., (2000) Proteomics to study genes and genomes. Nature, 405, 837.

Pennie WD (2000). Use of cDNA microarrays to probe and understand the toxicological consequences of altered gene expression. Toxicology Letters, 112-113: 473-477.

Pennie WD, Tugwood JD, Oliver GJ and Kimber I. (2000). The Principles and Practice of Toxicogenomics; Applications and opportunities. Toxicological Sciences, 54, 277-283.

Pennie WD, Woodyatt NJ, Aldridge TC and Orphanides G (2001). Application of genomics to the definition of the molecular basis for toxicity. Toxicology Letters, 120, 353-358.

Rockett JC, Dix DJ. (1999). Application of DNA Arrays to Toxicology. Environ Health Perspect Aug;107(8), 681-685.

Rockett JC, Esdaile DJ, Gibson GG. (1999). Differential gene expression in drug metabolism and toxicology: practicalities, problems and potential. Xenobiotica Jul;29(7), 655-91

Rockett, J. C. and Dix D. J., (2000) DNA arrays: technology, options and toxicological applications. Xenobiot. 30, 155.

Rodi CP, Bunch RT, Curtiss SW, Kier LD, Cabonce MA, Davila JC, Mitchell MD, Alden CL, Morris DL. (1999). Revolution through genomics in investigative and discovery toxicology. Toxicol Pathol , Jan-Feb;27(1), 107-10.

Schulze, A. and Downward, J., (2001) Navigating gene expression using microarrays – a technology review. Nat. Cell Biol. 3, E190.

Smith, L. L., (2001) Key challenges for toxicologists in the 21st century. Trends, Pharm. Sci. 22, 281.

Steiner S and Anderson N L (2000). Expression profiling in toxicology – potentials and limitations. Toxicology Letters, 112-113: 467-471.

Turton NJ, Judah DJ, Riley J, Davies R, Lipson D, Styles JA, Smith AG, Gant TW. Gene expression and amplification in breast carcinoma cells with intrinsic and acquired doxorubicin resistance. Oncogene 2001;20:1300-1306.

Waring JF, Ciurlionis R, Jolly RA, Heindel M and Ulrich RG (2001). Microarray analysis of hepatotoxins in vitro reveals a correlation between gene expression profiles and mechanisms of toxicity. Toxicology Letters, 120, 359-368.

Waring JF, Jolly RA, Ciurlionis R, Lum PY, Praestgaard JT, Morfitt DC, Buratto B, Robers C, Schadt E and Ulrich RG. (2001). Clustering of hepatotoxins based on mechanism of toxicity using gene expression profiles. Toxicology and Applied Pharmacology, 175 (1), 28-42.

Wittes J, Friedman HP. Searching for evidence of altered gene expression: a comment on statistical analysis of microarray data. J Natl Cancer Institute 1999;91:400-401.

Joint statement on the symposium

External link to the national archive for the joint statement on a symposium held by the Committees on Toxicity, Mutagenicity and Carcinogenicity of Chemicals in Food, Consumer Products and the Environment on the use of genomics and proteomics in toxicological risk assessment.

Meeting summary

Joint meeting of the COT/COC/COM to discuss the use of genomics and proteomics in risk assessment.

Members of the three Committees, other independent advisory committees, independent scientists and interested stakeholders were among the 90 delegates who attended the open meeting on 8 October in Skipton House to discuss the use of proteomics and genomics in risk assessment. The meeting gave delegates the opportunity to hear presentations from leading UK authorities on these topics and to participate in a selection of working groups.

A full write-up is currently being drafted for publication in a peer-reviewed journal. Conclusions reached at the meeting, in particular on the significance of these new technologies for toxicological risk assessment, will also be published on the Committees' websites.