Read-across – State of the art and next level!
In light of the limitations of existing non-animal-based hazard assessment methodologies, particularly for systemic endpoints, read-across is currently considered the most applicable strategy in the short to medium term in support of innovation. The scientific prerequisite of an acceptable read-across approach is the transparent and reproducible establishment of chemical and biological similarity between the analogue and the target chemical. Great advances have been made lately in reducing the uncertainties inherent to any read-across strategy by strengthening the similarity justification with additional mechanism-based data that support not only chemical but also biological similarity.
Read-across is currently seen as the most actionable short- to medium-term strategy for assessing systemic toxicity endpoints following repeated exposure, including carcinogenicity and reproductive and developmental toxicity. Conceptually, read-across appears simple: it assumes that two chemicals are similar and, hence, that toxicological data available on a toxicological endpoint for one chemical (the 'source') can be used to predict the same endpoint for the other (the 'target'). In reality, the read-across process is much more complex and, to some extent, subjective, as the burden of proof lies in demonstrating transparently that the target chemical, for which a prediction is needed, is indeed similar to the source chemical (Ball et al., 2016).
The current state of the art is that the chemical, and thus toxicological, similarity of two chemicals or a category of chemicals is established by providing evidence of similarity across five parameters: chemical structure and functional groups; structural alerts and reactivity; physico-chemical properties; metabolites/breakdown products; and endpoint-specific properties. Accordingly, a systematic, expert-driven process for identifying and evaluating the suitability of analogues is applied. The approach categorises potential analogues by their degree of structural, reactivity, physico-chemical and toxicokinetic/metabolic similarity to the chemical with the missing data (Wu et al., 2010). It extends beyond structural similarity, differentiating analogues by chemical reactivity and addressing the possibility that an analogue and target show toxicologically significant metabolic convergence or divergence. In addition, it identifies differences in physico-chemical properties that could affect bioavailability and, consequently, the biological responses observed in vitro or in vivo. A stepwise decision tree categorises the suitability of analogues, qualitatively characterising the strength of the evidence supporting the similarity hypothesis and the level of uncertainty associated with their use for read-across.
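A common way to quantify the structural-similarity parameter is a Tanimoto (Jaccard) coefficient over structural fingerprints. The following minimal sketch illustrates the calculation in plain Python; the fragment sets are invented placeholders, whereas in practice they would come from a cheminformatics fingerprinting tool.

```python
# Illustrative sketch: Tanimoto (Jaccard) similarity between two chemicals
# represented as sets of structural fragments. Fragment names below are
# hypothetical; real fingerprints would be generated by cheminformatics
# software from the chemical structures.

def tanimoto(fragments_a: set, fragments_b: set) -> float:
    """Shared fragments divided by total distinct fragments (range 0.0-1.0)."""
    if not fragments_a and not fragments_b:
        return 0.0
    shared = len(fragments_a & fragments_b)
    total = len(fragments_a | fragments_b)
    return shared / total

# Hypothetical fragment sets for a source (analogue) and a target chemical.
source = {"benzene_ring", "hydroxyl", "ester", "methyl"}
target = {"benzene_ring", "hydroxyl", "ester", "ethyl"}

score = tanimoto(source, target)  # 3 shared / 5 distinct = 0.6
```

A high coefficient alone does not establish toxicological similarity; it only addresses the first of the five parameters and must be weighed together with reactivity, physico-chemical and metabolic evidence.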
A range of publicly available IT tools and databases supports assessors in identifying analogues and evaluating their suitability for read-across to the target chemical. It is, however, important to recognise that structural similarity and activity depend on the correct identification of the structural features that, in turn, determine chemical and biological reactivity. Hence, the ability to draw conclusions about the toxicity of an unknown chemical on the basis of an analogous chemical depends largely on, for example, the quality and applicability domains of the databases underlying the IT tools and, importantly, on professional judgement. It therefore requires substantial experience, on the part of the risk assessor, in all aspects critical to similarity. For each aspect critical to the final analogue suitability evaluation – be it chemical reactivity, activity or known/predicted metabolic pathways – criteria and rules should be established, and the assessments documented, to allow a consistent and reproducible evaluation of the proposed read-across case.
Despite the numerous scientific advances that have been made – particularly in cheminformatics – and the systematic way in which each aspect is evaluated to demonstrate toxicological similarity, scientifically supporting every statement made in the read-across justification remains one of the most challenging barriers to the acceptance of read-across. This is mainly because there are typically gaps in the supporting evidence, be it the mode of action, metabolism or other aspects of a substance that drive its toxicity. For many chemicals for which in vivo toxicity data are available, the exact mode of action has not been elucidated or established, nor has the predicted metabolism been experimentally confirmed.
Uncertainty goes hand in hand with the use of read-across and, in many cases, the degree of uncertainty must first be appraised and described before it can be reduced, wherever possible, in a second step. To date, published approaches for systematically evaluating uncertainty are scarce; of note are the qualitative frameworks for characterising read-across uncertainty of Blackburn and Stuart (2014) and Schultz et al. (2015). Applying these approaches reveals that many read-across cases are strong on establishing chemical similarity but often weak on establishing the biological similarity of the analogue and target chemical, which in turn creates uncertainty. Adding or generating biological data can strengthen the read-across and thereby reduce uncertainty.
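The logic of such qualitative evaluations can be pictured as a simple tally of which lines of similarity evidence a case actually supports. The sketch below is a deliberately simplified, hypothetical scheme (not the published Blackburn and Stuart or Schultz frameworks); the evidence categories mirror the five similarity parameters discussed earlier.

```python
# Hypothetical sketch of a qualitative uncertainty rating: count how many
# similarity lines of evidence a read-across case supports. This is an
# invented, simplified scheme for illustration only.

EVIDENCE_LINES = [
    "chemical structure/functional groups",
    "structural alerts/reactivity",
    "physico-chemical properties",
    "metabolism/breakdown products",
    "biological/mechanistic (endpoint-specific) data",
]

def uncertainty_rating(supported: set) -> str:
    """Fewer supported lines of evidence -> higher residual uncertainty."""
    n = sum(1 for line in EVIDENCE_LINES if line in supported)
    if n >= 5:
        return "low"
    if n >= 3:
        return "medium"
    return "high"

# A case strong on chemical similarity but lacking biological data:
case = {"chemical structure/functional groups",
        "structural alerts/reactivity",
        "physico-chemical properties"}
rating = uncertainty_rating(case)  # "medium": biological evidence missing
```

The point of the illustration is the pattern the frameworks reveal: adding biological/mechanistic evidence is what moves a typical case from a medium or high residual uncertainty towards a low one.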
Recent advances in understanding the sequence of events from chemical exposure and the molecular initiating event (i.e., the interaction of the toxic compound with a cellular target molecule) to adverse effects at the organ, organism and, finally, population level – the so-called Adverse Outcome Pathway (AOP) – allow read-across to be augmented by carrying out selected in chemico or in vitro tests along the AOP, with specific biomarkers enabling detection of the associated toxicities. For example, the key mechanistic events underpinning the AOP leading to skin sensitisation in humans were published by the OECD in 2012. Accordingly, uncertainty regarding the skin sensitisation endpoint in a read-across case can be reduced by in vitro approaches that address the key events of the AOP in an integrated testing manner. Biological similarity can be further established by using 'big data' for comparative profiling of chemicals. The term 'big data' refers here to the computer-assisted use of large data sets from high-throughput screening and research initiatives such as the US EPA's ToxCast and Tox21, provided that changes in the expression of different genes are selected as indicators or biomarkers of disruption of important cellular pathways.
Lastly, an often discussed but rarely consistently applied aspect of read-across is the toxicokinetic profile. Powerful computer models – toxicokinetic or physiologically based pharmacokinetic ('PBPK') models – can be used to simulate the movement, metabolism and excretion of the analogue compound relative to the target compound to establish similarity. These models can also be used to convert findings from the aforementioned in vitro models into concentrations present in the body, such as blood or liver concentrations. In other words, in the context of a read-across-based safety assessment enhanced with biological data, PBPK models may be used to reduce uncertainty by determining the amount of a chemical substance a person would need to apply to their skin, inhale or ingest in order to reach a toxic concentration in their blood or liver.
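The reverse-dosimetry idea in the last sentence can be sketched with a one-compartment toxicokinetic model, a deliberate simplification of a full PBPK model. At steady state under repeated oral dosing, C_ss = F × dose_rate / CL, so the external dose rate needed to reach a target blood concentration is dose_rate = C_ss × CL / F. All parameter values below are hypothetical.

```python
# Minimal reverse-dosimetry sketch with a one-compartment toxicokinetic
# model (a simplification of a full PBPK model). Hypothetical parameters:
#   C_ss : target steady-state blood concentration (mg/L)
#   CL   : total clearance (L/h)
#   F    : oral bioavailability (fraction absorbed, 0-1)

def dose_for_target_css(c_target_mg_per_l: float,
                        clearance_l_per_h: float,
                        bioavailability: float) -> float:
    """External dose rate (mg/h) needed to sustain the target steady-state
    blood concentration, assuming first-order one-compartment kinetics."""
    return c_target_mg_per_l * clearance_l_per_h / bioavailability

# Hypothetical inputs: target blood level 0.5 mg/L, clearance 10 L/h,
# oral bioavailability 40 %.
dose_rate = dose_for_target_css(0.5, 10.0, 0.4)  # 0.5 * 10 / 0.4 = 12.5 mg/h
```

A real PBPK model adds organ-specific compartments, blood flows and partition coefficients, but the direction of the calculation is the same: working backwards from an internal concentration of concern to an external exposure.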
In light of the limitations described above, read-across remains the most applicable strategy in the short to medium term – particularly for repeated-dose as well as developmental and reproductive toxicity – in support of innovation. The development of new chemicals used in consumer products is thereby supported in the wake of regulatory requirements aimed at reducing animal testing, or even following outright animal testing bans. Starting with in silico data and working up to in chemico or in vitro assays, combined with high-throughput and/or toxicokinetic data, a range of science-based approaches is already available to strengthen read-across in such a way as to reduce in vivo testing to an absolute minimum, if it is necessary at all.