The COVID-19 lead discovery project at PPSC started by identifying a well-established target that governs viral entry into host cells. We developed a novel screening assay in 1536-well format and optimized several parameters to enhance the signal window and to maximize the robustness of the assay. I admit that it is tempting to use conditions that result in a highly robust assay with a large signal window. However, relevant lead compounds might be missed if the conditions are not as close as possible to the in vivo situation. For example, the activity of our target depends on conditions such as salt concentration, pH, and temperature. Changing these parameters in the in vitro assay might result in irrelevant leads during the screen. Consequently, we did not screen COVID-19 under the conditions that gave the best assay performance, but under conditions that more closely mimic the in vivo situation, which should lead to more relevant lead compounds.
Figure 1. Formula to calculate Z’ based on the means of the positive (µp) and negative (µn) controls and their standard deviations (σp and σn): Z’ = 1 − 3(σp + σn) / |µp − µn|.
During our last drug discovery meeting, I presented an article by Bar and Zweifach, published in SLAS Discovery in 2020, that discusses whether robustness should be used as a strict cutoff to either reject an assay for screening or to optimize it further. In 1999, Zhang et al. introduced the assay quality metric Z’ as a measure of the separation between the positive and negative controls relative to their variability (Figure 1). If there were no variability at all, caused by, for example, manual handling, Z’ would reach its theoretical maximum of 1.0. In reality, external factors interfere with our assay, causing variability and leading to a lower robustness, Z’ < 1.0; this can even result in Z’-values below zero. It is generally accepted that an assay with Z’ below zero has too much overlap between the positive and negative controls and that screening such an assay is essentially impossible. Conversely, assays with Z’-values between 0.5 and 1.0 are considered excellent for screening, as there is a large separation band between the positive and negative controls.
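As a quick illustration, Z’ can be computed directly from the control readouts of a plate. The numbers below are made-up example data, not from our screen:

```python
import statistics

def z_prime(pos, neg):
    """Z' = 1 - 3*(sigma_p + sigma_n) / |mu_p - mu_n| (Zhang et al., 1999)."""
    mu_p, mu_n = statistics.mean(pos), statistics.mean(neg)
    sigma_p, sigma_n = statistics.stdev(pos), statistics.stdev(neg)
    return 1 - 3 * (sigma_p + sigma_n) / abs(mu_p - mu_n)

# Hypothetical control readouts from one plate
positive = [98, 102, 101, 99, 100, 103]
negative = [10, 12, 9, 11, 10, 13]
print(round(z_prime(positive, negative), 3))  # prints 0.888
```

With tight controls and a wide separation, as in this toy example, Z’ lands comfortably in the "excellent" band above 0.5; larger control scatter shrinks it toward zero.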
At PPSC, we were discussing a project based on an assay with a Z’ between 0 and 0.5. Can we still screen this assay, or should we try to optimize it to reach a Z’ above 0.5? Searching the literature, I could not find a clear answer to this question. Yet the cutoff of Z’ > 0.5 is often applied by scientists without further justification, either to reject an assay or to optimize it further before performing the screen. On the other hand, we have experienced at PPSC that assays can be screened successfully even without fulfilling the requirement of Z’ > 0.5 (read more about our dedicated assay development team).
Of course, assays with a Z’ > 0.5 have less variability and will yield more reliable hits than assays with a Z’ between 0 and 0.5. However, using this as a strict cutoff for our project might force us to screen under unrealistic conditions, or to reject a potentially valuable assay altogether. In the article by Bar and Zweifach, data derived from power analysis and computer simulations showed that useful hits can be identified with an assay of 0 < Z’ < 0.5, without finding too many false positives, provided the compound selection parameters are set appropriately. Although this was only shown by theoretical calculations, I agree with them that relevant assay conditions are more important for finding valuable leads than an optimized assay with the best signal window and performance. From a scientist’s perspective, isn’t it worth screening a less-than-optimal assay, even if it yields fewer actives or misses some, rather than not performing the experiment at all? In the end, we decided at PPSC to continue the screen for this project with a Z’ between 0 and 0.5. I am curious what the outcome will be!
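To get a feel for Bar and Zweifach’s argument, here is a minimal Monte Carlo sketch of a single-point screen at Z’ ≈ 0.3. This is not their actual simulation; the signal scale, number of actives, and the 3-SD hit cutoff are illustrative assumptions:

```python
import random
import statistics

random.seed(42)

def simulate_screen(n_compounds=10000, n_actives=50, z_prime_target=0.3,
                    hit_cutoff_sd=3.0):
    """Simulate a single-point screen at a given assay robustness.

    Inactive compounds read ~N(0, sigma); actives shift the mean to 100
    (full-effect scale). sigma is chosen so that equal control
    variability reproduces the requested Z'.
    """
    # From Z' = 1 - 3*(sigma_p + sigma_n)/|mu_p - mu_n| with sigma_p = sigma_n:
    sigma = (1 - z_prime_target) * 100 / 6
    actives = set(random.sample(range(n_compounds), n_actives))
    signals = [random.gauss(100 if i in actives else 0, sigma)
               for i in range(n_compounds)]
    # Call hits at mean + k*SD of the inactive population
    inactive = [s for i, s in enumerate(signals) if i not in actives]
    cutoff = (statistics.mean(inactive)
              + hit_cutoff_sd * statistics.stdev(inactive))
    hits = {i for i, s in enumerate(signals) if s > cutoff}
    return len(hits & actives), len(hits - actives)

tp, fp = simulate_screen()
print(f"true actives recovered: {tp}/50, false positives: {fp}")
```

Under these assumptions, essentially all strong actives clear a 3-SD cutoff even at Z’ = 0.3, while the false-positive count stays in the low dozens out of 10,000 compounds, which is consistent with their conclusion that a moderate Z’ need not make a screen useless.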