Jeff Elton (00:00):
Hi, this is Jeff Elton, CEO of ConcertAI. Welcome to the ConcertAI podcast series. Today I’m here with Dr. Andy Beck, the Co-founder and CEO of PathAI. PathAI has emerged rapidly as one of the leading digital pathology companies, one that also operates a digital pathology laboratory. Digital pathology allows you to take the H&E slides of traditional pathology, create digital representations of them, and bring in pathomics, AI-based model insights derived from the digital image.
This is a leading modality that’s changing the accessibility of state-of-the-art decision rules and decision augmentation, and it’s also starting to create translational assets that change how clinical development occurs. I have a unique pleasure today. I’m here with Andy Beck, who is the Co-founder and CEO of PathAI. PathAI itself is doing some really, no pun intended, path-breaking work. For me, I’d almost love to understand what the inspiration for the business was, but also what brought you to this sort of business?
Andy Beck (01:18):
Great. Thanks Jeff. And super excited to be here. So, the inspiration for PathAI comes from a few places. I personally trained as a pathologist after medical school. Pathology is the practice of medicine within the laboratory that’s tasked with making definitive diagnoses of diseases based on the analysis of tissue specimens. And while I was in my training, there were tremendous advances in many areas of pathology, particularly in areas like genomics, where you could begin to precisely, accurately, and scalably sequence genetic alterations, which increasingly were being linked to treatment decisions that could cure patients of serious diseases.
And that was all happening in the pathology lab. There were big advances in the analysis of things like RNA expression, using first gene expression microarrays and then RNA-Seq, where, again, you could transform these biological samples into precise, quantitative, scalable types of data. And it really seemed like those two types of data could help unlock identifying specific biologically relevant classes of disease and then linking those with specific treatments. But as I did my training, it also turned out that the vast majority of diagnostic information for many serious diseases, like cancer, inflammatory bowel disease, and different liver diseases like NASH or nonalcoholic steatohepatitis, is arrived at in a very different way: through the analysis of microscopic slides by pathologists looking through a microscope at very complex images.
At the time, and honestly even today, it’s very mysterious how they do that, in contrast to what you can do with DNA and RNA, where you essentially can read out a code. This is, in a way, a very old-fashioned way of training people, where someone says, “This looks like cancer. This doesn’t look like cancer. This looks like a really bad cancer. This looks bad, but it’s actually totally normal,” and they’re doing all of this manually, by eye. So, even at the time it was very clear to me that this is a super important data type, one that is very difficult and mysterious, and one where there could be tremendous advantages to making it scalable, quantitative, and really a key part of the future of precision medicine.
So, it was that experience a long time ago that got me interested in this area, and I’ve been working in various ways on this problem ever since. I did a PhD in this area, and I continued doing research after that on how we can better use things like image processing on pathology images. In terms of the immediate impetus for PathAI, it was a lot of the big advances that are just beginning to transform healthcare today: more availability of cloud computing and huge advances in artificial intelligence, particularly around deep learning for things like image processing.
And then there was the increasing availability of digital data, through being able to use not just microscopes but whole slide imaging systems. The technology started to come together. And when we formed PathAI, it was really with the thinking that the pathology lab 20 years from now will look very different from the one today, and that its bedrock foundation will be data and algorithms and a scalable way to provide patients with accurate, reproducible interpretation of what’s happening in histopathology, to then be able to link this data with, say, underlying molecular data, but also with the right treatments, to hopefully improve outcomes for patients. So, that’s altogether how we got interested in this field and really the origin of PathAI.
Jeff Elton (04:54):
As I’m listening to your description, you’re describing a field, anatomical pathology, kind of moving into some other ones, where there’s a mentoring process, and it’s through read after read after read of refinement that the mentoring process eventually says you’re released to become a fully practicing, independent pathologist. But what I was also hearing you say is that inside that mentoring and that repetition, I’m effectively turning that human into a pattern recognition model, and I’m forcing them through that learning process, and I’m using the word learning quite intentionally. How did you come away from that experience? You were obviously seeing the art of the possible. Where did your own thinking land on how technology is going to meet high expertise to potentially get more consistent, improved outcomes?
Andy Beck (05:48):
Sure. So, one, there is a total, and super interesting, parallel between how you train a pathologist and how you train a machine learning algorithm. You always hear about machine learning algorithms being black boxes. Well, humans are even more of a black box in terms of what’s actually going on and when someone is sufficiently trained to then be deployed in the world. Many people have seen this parallel for a long time. People have been trying to use computation in many areas that were historically done manually for many, many years, and in certain areas it really caught on quickly and those fields were transformed. By the time I was in training, things like email, a lot of financial services, and certainly the way DNA and RNA were being processed were largely computationally driven.
So, many people, including me, saw the power of the idea that if you make something machine-readable, you can do a ton more with it with machines. And I, like others, was asking, “Well, what’s the next frontier going to be in this area?” So, I really started looking into image processing as a tool. And when I started, I think the field was a little too early to be as impactful as it can be today, because it was just before the deep learning revolution, the cloud computing revolution, and the increasing availability of digital data sets. I thought this would be an exciting next frontier where you can make a big impact beyond what people were already doing with, say, DNA and RNA data, and that the next frontier is really going to be how cells and tissues interact.
And one thing I haven’t talked about, but your question suggested, is the problems with the manual way of doing it. With mentoring as the bedrock, it’s not very scalable, it’s extremely heterogeneous, it’s very hard to standardize, and you’re literally almost always starting from scratch. So, I could go through five years of pathology training, on top of, say, 16 or so years of education before that, and then I’m out in the field for a number of years, and my knowledge is not really being systematically transferred to anyone else. And then someone else starts, and they’re starting from scratch. Whereas we all know that when you have the best machine learning algorithm and it’s gathering data from all over the world, it’s constantly getting better and you’re never starting from scratch.
So, with a machine learning based system, you know it should be better two years from now than it is today. Whereas with the manual mentor/mentee relationship, you’re always starting from scratch with each new person. So, there were a lot of clear advantages if you could make this machine-readable and then utilize machine learning. Those kinds of principles had been seen in other fields, and here we were asking, well, can we apply them to this new field? And I would say early on, the answer was largely no. And I think today the answer is largely yes.
Jeff Elton (08:21):
That’s tremendous. So, accessing the images in a digital form such that the algorithmic approach and the learning based approach could actually take place. From some of what I’ve read in the literature, digital pathology itself, which requires its own unique infrastructure, has had a relatively low rate of deployment, et cetera. But I’m starting to take away from your comments that the digital infrastructure, which in fact may now be more broadly enabling, is starting to become a bit more ubiquitous out there. I was almost hearing in your statement that things rarely hit tipping points because of one factor; it’s usually a confluence of factors that can all come together.
Maybe you can give us a little bit of the history: what was digital, and why did it take so long for digital to start to become embedded in pathology clinical workflows, departments, and institutions? Where is it today from your perspective? And then we can come back to this pace of change, which it looks like you’re predicting is going to take off.
Andy Beck (09:23):
Yeah. No, great question. So, maybe starting specifically with pathology and why it’s not digital today, whereas other areas in some sense are natively digital, something like radiology, where to some degree it’s natively digital and it was in some way artificial to be, I don’t know, analyzing scans versus analyzing a digital image. Pathology is not natively digital; it’s glass slides and microscopes. So, the first thing to say is that microscopes work pretty darn well for an individual pathologist who’s well trained. If they have access to their microscope and live near their lab, any pathologist, having spent all their training using it, can get through slides very quickly.
It’s a great user interface with super high resolution images; they can zoom in and out, and it provides the visual imagery the human needs, very efficiently, to come to a diagnosis in many cases. So, to some degree the microscope itself, for a single pathologist signing out in isolation who has access to it and access to the lab, works quite well. And I think that’s the first thing to say: not everything that’s old needs to be changed. It takes a lot, more than just a point solution, to really change a workflow in a major way. So, one, you’re starting with the microscope. So, what would be the reasons to want to digitize, and what has changed?
So, one, you need good hardware for actually making the digitization happen at scale, fast, and relatively cost-efficiently, and I would say there have been tremendous advances there in the past 10 years. Whole slide imaging systems have been available for over a decade, but it’s really only been in the last five or six years that we’ve had access to relatively inexpensive, high throughput, very high quality scanners. So, one is the availability of hardware. The second is that cloud computing and network bandwidth and connectivity have increased tremendously in the past several years, such that a lot of the technical challenges can be abstracted to the cloud in a SaaS, software-as-a-service, based way.
Whereas before, you might think you’d have to do all of this on premises and have a whole team whose job it is to support that local hardware. And many healthcare institutions are now very comfortable making other types of data digital, so it makes sense to also be doing with pathology images what you’re doing with the EHR, radiology, and other aspects that have become digital. Those are really tailwinds that help the digitization of pathology. The third is that we all know pathologists don’t sign out in isolation. Even if they’re not using algorithms, they are part of a complex health ecosystem, and there can be value, because the data can be moved much more quickly digitally than it can by sending glass slides.
So, for example, during COVID, when many pathologists did not want to be going into the lab and there was an advantage to being able to sign out from home, that was a major impetus for many labs and academic medical centers to put in the workflows that enable the slide to be made locally in the lab while the image is digitized and can be sent literally anywhere around the world to a pathologist. So, this decoupling of the physical creation of the slide from the analytical interpretation of the slide is a powerful force in itself, and that’s only going to increase in the coming years. And then the biggest one, and the one where a ton of our attention is focused, is being able to augment the pathologist with algorithms and with artificial intelligence.
That’s something we’re only at the very beginning of, and it will just get more powerful with every new slide that’s analyzed and learned from, making the system better year over year. So, we’re only at the very, very beginning of that transformation. We think all of those factors are important, but the most transformative for patients, and for pathologists in their work, will ultimately be the algorithms on top of the digital images.
Jeff Elton (12:56):
So, I know you made an analogy to radiology, and as you described, there’s a digital presence now, and you could even have a remote read on a digital slide. Are you seeing that change in workflows start to emerge within the pathology community?
Andy Beck (13:12):
Yes, we really are.
Jeff Elton (13:14):
Okay.
Andy Beck (13:14):
Even faster, I’d say, in the past year than I would’ve expected. And the reason I say faster than I would’ve expected is because it had been relatively slow up to then. But even in our own work in our laboratory in Memphis, about 20 to 25% of the work is now being done on slides that are digitized, whereas it was far, far lower in the past. This is all due to these factors.
Jeff Elton (13:36):
So, oftentimes as new technologies and new approaches come in, there are non-inferiority studies that are done between, say, a direct physical inspection of the slide with the biomaterial, through standard optics, by the individual, versus a digital interpretation. Has the field done those comparability and non-inferiority studies?
Andy Beck (14:00):
Absolutely. So, there’s now a strong set of approvals or clearances for primary diagnosis being done digitally. The first in the field was by Philips. There have been several follow-ons, including one from PathAI, where we obtained 510(k) clearance for primary diagnosis on our AISight Dx platform. We obtained that last summer, and it was doing exactly what you said, Jeff: showing non-inferiority for primary diagnosis such that you can sign out digitally what used to have to be done manually. And that really forms the foundation and the basis for then augmenting human interpretation alone with pathologist interpretation plus an algorithm.
Jeff Elton (14:41):
I’ll use that phrase tipping point again, because it seems like it’s coming to bear as you start thinking about these pieces coming into place. So, you said something a moment ago about decision augmentation. I’ve always really liked that term as a transition point, where AI and different machine learning based approaches start to interplay with human interpretation. How do you see that occurring in the field of digital pathology now?
Andy Beck (15:08):
Yeah, I think that’s a super important concept to keep in mind, this idea of how we make pathologists provide even more impactful service for the physicians they’re supporting, but most importantly for patients at the end of the day. As we all know, there’s still a ton of work to do there, and I think pathology is a great place to think about it. The pathologist has historically been called the physician’s physician: less involved, particularly on the therapeutic side, in direct patient impact, in terms of directly consulting patients on the right therapy, but really the most trusted partner for the treating physician on the question of which characteristics of this patient’s disease are most important for helping to recommend the most effective therapy.
There’s a lot of judgment, it’s very consultative, it’s very multimodal. It very much should be integrating the best from science, the best from the literature, the best from this patient’s personal history and the medical record, even concepts around the benefits and risks of different therapeutic approaches that could be used for different diagnoses, plus of course the ground truth of what the diagnosis is. So, I see a real up-leveling, in a way, of the work of a pathologist, away from tasks that can be done just as well, if not better, by an algorithm, such as counting hundreds of thousands of cells per slide when you have to read hundreds of slides per day. That is a task that today we’re asking trained pathologists to do, but we know that we should be able to have validated machine learning products that can do these low-level tasks like cell counting, counting different types of cells, saying, “Are any of the 250,000 cells on this slide a cancer cell?”
That’s a good computer-type task, but the higher-level task requires judgment: “Well, what does that mean for this patient? What does that mean based on the medical literature? And how do I best advise this physician?” So, I think it’s going to be a shift from low-level, manual, tedious tasks that human judgment is not best equipped for, to higher-level judgment, integration of multimodal data types, and really being a therapeutic and diagnostic advisor for treating clinicians. That’s where we’ll see the field shift over the coming years.
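To make the “computer-type task” concrete, here is a minimal sketch in Python of the kind of per-slide cell tallying Andy describes. The cell classes, the confidence threshold, and the DetectedCell structure are hypothetical stand-ins for a real detection model’s output, not PathAI’s actual interface.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical output of a cell-detection model run on one whole-slide image.
# In a real system these records would come from a validated model.
@dataclass
class DetectedCell:
    x_um: float           # position in slide coordinates (microns)
    y_um: float
    cell_class: str       # e.g. "cancer_epithelial", "lymphocyte", "fibroblast"
    confidence: float     # model probability for the predicted class

def summarize_slide(cells: List[DetectedCell], min_confidence: float = 0.5) -> Dict:
    """Tally cell types across a slide: the tedious counting task that
    could be handed off to an algorithm rather than done by eye."""
    kept = [c for c in cells if c.confidence >= min_confidence]
    counts = Counter(c.cell_class for c in kept)
    total = sum(counts.values())
    cancer = counts.get("cancer_epithelial", 0)
    return {
        "total_cells": total,
        "counts_by_class": dict(counts),
        "any_cancer_cells": cancer > 0,
        "fraction_cancer": cancer / total if total else 0.0,
    }

# Tiny made-up example; a real slide could have hundreds of thousands of cells.
cells = [
    DetectedCell(10.0, 12.0, "lymphocyte", 0.97),
    DetectedCell(11.5, 40.2, "cancer_epithelial", 0.91),
    DetectedCell(80.3, 22.8, "fibroblast", 0.88),
]
print(summarize_slide(cells))
```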
Jeff Elton (17:22):
Fascinating description. I’m starting to get this layered architecture of the world around the pathologist, where the pathologist is an integration point for that multimodal data, but it’s also a contextualization of that data and then an interpretation of it, made very specific for the medical function they’re interacting with. A lot of the work that would’ve gone into the back-and-forth interplay, or into suspecting this is the right way to initiate care, now has a much more robust foundation for that decision process than would ever have been there before. It’s almost like clinical decision support and decision augmentation being integrated together. I’m always hesitant to use the phrase clinical decision support because of what it conjures up, but the way you laid it out, it’s really about narrowing down the area within which we should be focused for the treatment decision that ultimately has to be made.
Andy Beck (18:19):
Yep, absolutely. And then there’s the idea of democratizing this knowledge, because it’s a highly scalable platform. And not just democratizing it to physicians, but potentially opening up the data to patients, which is also a major push in laboratory medicine and diagnostics. Right now much of this data is just very opaque and not available. The glass slides are difficult to get, the images aren’t obtained. The reasoning that goes into a diagnosis is obviously difficult to obtain, and often it’s a report with a few words on it, and that’s not that useful for getting second and third opinions.
You have to figure out how to get the slides sent to the other institution. Whereas if this were digitized, and the reasoning was clearly laid out as to why the machine learning system came up with certain metrics and why that fed into the pathologist’s judgment, you could imagine distributing it for second and further opinions much more quickly. That’s very important to patients being diagnosed with serious diseases, where we know there’s quite a bit of inter- and intraobserver variability and you really have to get the diagnosis right as the bedrock for subsequent treatment decisions.
Jeff Elton (19:18):
I can definitely see you as the pathologist interacting with the rest of the downstream clinical communities around a patient. For a moment, and this is probably not a role that pathologists are always placed into, if you were speaking to a patient, and this is kind of in the backdrop of all of your statements, how would you explain to them the power that digital pathology really presents for them and their care?
Andy Beck (19:44):
Sure. One, I think the most important thing for every patient is to have the right diagnosis that they can rely on to help make the most informed treatment decision. I think every patient who’s undergone a biopsy and waited to get the result back knows that getting the right answer quickly, clearly communicated, is extremely important. And the potential to get multiple interpretations, if there’s any kind of ambiguity, is really important too. So, one, I think every patient can relate to that. And what may surprise patients is how different the types of results you get back from laboratory tests are. You get, say, a CBC count of your blood and it’s very precise and accurate, and you tend to get basically the same exact answer no matter where you send it.
Whereas with the interpretation of tissue biopsies, there’s often significant variability based on the observer, based on which pathologist it was sent to. So, I think, one, it’s about exposing that problem, that it really is a problem in many difficult areas, and then showing how, in the future, what we are building toward is a real technical solution, such that you can be confident that when you send your biopsy out, the pathologist is not just operating in isolation but with the support of a well-validated system, and the diagnosis you get back is the most accurate possible and the most likely to lead to the right treatment decisions.
It’s a message that really does resonate with patients, because no patient wants uncertainty about whether that was the right diagnosis, because they’re making big decisions on treatments. Do I need surgery? Do I need radiation? Do I need chemotherapy? Can I just do watchful waiting without much risk? These are incredibly critical decisions.
Jeff Elton (21:17):
They definitely are. ConcertAI does a lot of work with medical oncology in a clinical context, not just academic, but in the community context where 70% of patients receive their care. And as I’m listening to your description, you could bring a standard of interpretation that is very high, almost independent of where that patient actually lives, as long as the methodologies and approaches are rolled out. That is not something all patients can necessarily feel confident in today, in all locations in the country. So, it comes back to your point about democratization, but at a much higher level.
Andy Beck (21:56):
Yep, absolutely. This idea of standardization across community versus say academic medical center settings, but even globally long term.
Jeff Elton (22:04):
As I was listening to your comments, I was thinking about the body of knowledge that may be coming through learning algorithms and different approaches. Medicine is practiced through evidence. Evidence goes through peer-reviewed publications. Evidence is generated in registrational clinical trials. Evidence comes from a variety of areas, and it’s the evidence that ultimately starts to inform the treatment guidelines, et cetera. And behind all your comments was actually a different lens and a different mechanism for starting to generate the insights that will go through our process of validation to become that kind of evidence and start to change patterns of practice.
And I know that PathAI itself has really been pioneering certain approaches, which some of the team working on them have called pathomics, that almost become a language of pathology and a language around which some of this can be brought together. Can you explain a little bit about PathAI’s work in that area?
Andy Beck (23:07):
Absolutely. So, when a pathologist looks at an image, there might be a handful of features of that image that a pathologist can reliably extract, such as what the diagnosis potentially is, or whether it looks like there are many immune cells or few immune cells. But those features are very much semi-quantitative, there’s a very small number of them, they’re unstandardized, and they’re not reproducible even within one observer, let alone across many pathologists. Whereas if you think about exome data from the genome or transcriptome data from RNA, the fields of bioinformatics and genomics have worked hard on standardized gene names and standardized, normalized ways of defining features, and on how those can form a major input into the biological interpretation of disease.
Well, at PathAI we’ve aimed to put cellular and tissue phenotypes on a similar footing through a set of what we’re calling standardized human interpretable features, or HIFs, that characterize what we expect to be biologically important phenotypes. They explain, in a quantitative, reproducible, standardized way, how malignant cells are interacting with normal cells, how immune cells are interacting with tumor cells, and how those interactions differ based on region within the tumor. Is this at the epithelial-stromal interface, or at a different portion of the tumor? So this really can become a standard nomenclature to drive advances in exploratory research, clinical trials, and potentially future companion diagnostics, and to provide a new window into how we understand disease, beyond just DNA and RNA, supplementing those with human interpretable features that are cellular and tissue phenotypes.
And I think something even more exciting in the future is that once you have this really comprehensive, deep understanding of the biology, from DNA to RNA to how cells and tissues are interacting, then to really determine the signatures and the features that are most critical, it’s absolutely essential to link these multimodal data with longitudinal treatment and outcome data. Which really brings us to the potential power of the partnership between organizations like ConcertAI and PathAI: to bring all of those things together for really deep genotyping and phenotyping linked to treatment and outcome data, to better understand what’s driving a patient’s disease and which treatments may benefit them the most.
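As a rough illustration of what a quantitative, reproducible feature could look like, the sketch below computes a few HIF-style summary statistics from hypothetical cell detections and tissue-region labels. The feature names, the inputs, and the 30-micron interaction radius are assumptions made for the example; PathAI’s actual HIF definitions and pipeline are not shown here.

```python
import math
from typing import Dict, List, Tuple

# Hypothetical input: each cell is (x_um, y_um, cell_class, region), where
# region is the tissue region the cell falls in ("tumor" or "stroma").
Cell = Tuple[float, float, str, str]

def density(cells: List[Cell], cell_class: str, region: str, region_area_mm2: float) -> float:
    """Cells of a given class per mm^2 of a given tissue region."""
    n = sum(1 for _, _, c, r in cells if c == cell_class and r == region)
    return n / region_area_mm2 if region_area_mm2 > 0 else 0.0

def fraction_near(cells: List[Cell], source: str, target: str, radius_um: float = 30.0) -> float:
    """Fraction of `source` cells with at least one `target` cell within radius_um,
    a crude stand-in for an interaction-style feature (O(n^2), fine for a sketch)."""
    sources = [(x, y) for x, y, c, _ in cells if c == source]
    targets = [(x, y) for x, y, c, _ in cells if c == target]
    if not sources:
        return 0.0
    near = sum(
        1 for sx, sy in sources
        if any(math.hypot(sx - tx, sy - ty) <= radius_um for tx, ty in targets)
    )
    return near / len(sources)

def compute_hif_like_features(cells: List[Cell], areas_mm2: Dict[str, float]) -> Dict[str, float]:
    """A handful of illustrative, human-interpretable summary features."""
    return {
        "lymphocyte_density_in_tumor": density(cells, "lymphocyte", "tumor", areas_mm2["tumor"]),
        "lymphocyte_density_in_stroma": density(cells, "lymphocyte", "stroma", areas_mm2["stroma"]),
        "frac_cancer_cells_near_lymphocyte": fraction_near(cells, "cancer_epithelial", "lymphocyte"),
    }

# Tiny worked example with made-up cells and region areas.
cells = [
    (5.0, 5.0, "cancer_epithelial", "tumor"),
    (12.0, 8.0, "lymphocyte", "tumor"),
    (300.0, 90.0, "lymphocyte", "stroma"),
]
print(compute_hif_like_features(cells, {"tumor": 0.8, "stroma": 1.2}))
```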
Jeff Elton (25:29):
The ability to do it, but also to do it at scale. One of the things our two organizations have been able to look at is that there’s such a substantial and nice overlap between the relative data sets that it allows us to create multimodal data sets that represent these characteristics, but at a scale that really does facilitate research. Even when we drill down and select different inclusion/exclusion criteria, we still get to a very meaningful sample to drive those insights, which is super exciting and has not been done at scale before.
Andy Beck (26:03):
No, I totally agree, because every patient diagnosed with cancer has an H&E slide generated, and that’s really all we need. All we need as input is an H&E glass slide or digital image to then create this structured set of human interpretable features to link to ConcertAI data. So, I do think the existing ubiquity of the microscope and of H&E staining has really allowed us to create these very large data sets at scale, combined with the machine learning methods that we’re applying, which are trained on very large data sets and are also super scalable.
Jeff Elton (26:37):
You once made a comment to me about how even clinical trial activity could begin to take advantage of some of these same features. Now that you’ve got this map of features and characteristics, you can associate them with certain disease processes, or, I would also imagine, resistance to treatments and other characteristics. Let’s think about how that could be deployed, say, for clinical researchers. I’m selecting patients, and I may want to formally exclude those that I know can already benefit from current medicines and focus on the non-responders as the focal population. How do I think about bringing this more into real-time workflows for clinical trials and clinical development?
Andy Beck (27:25):
Absolutely. So, I think there are two big pieces, and it reminds me of the training phase and the deployment phase, which are both really powerful. One huge advantage we have with our technology is that it’s based on a data type that’s been ubiquitous for many, many years, for the whole 20th century. So, one, we can really look backwards at, say, every completed clinical trial, if you just picture this at scale, and access the H&E slide, the treatment and outcome data, and then the subsequent treatment and outcome data for each patient. So, you can really begin to use machine learning to gain insights into which human interpretable features, for example, from these H&E slides are predictive of response and resistance in an exploratory, retrospective setting.
So, the fact that we don’t need fresh tissue or a new biological sample, that this is entirely non-destructive and can be deployed on archival, historical samples, is incredibly powerful for the learning part. And then for the deployment part, what we’re working on tremendously at PathAI, and with partners, is how we get this technology widely distributed. If you imagine this were in every lab and hospital, starting in the US and then globally, then for each new patient sample that comes in in the real world, an H&E image is generated, we compute these human interpretable features, and we can make predictions, in a non-destructive way, of response or resistance to, say, different therapeutic modalities.
And then you could use that as the initial data type to decide which patients should be enrolled in which trials. In the example you gave, once this technology is deployed widely, you could potentially very quickly generate a cohort of patients who are predicted to be resistant to a given therapy, and then you could test, say, a new therapy in that subgroup of patients. So, there’s going to be tremendous power in, one, deploying this at scale retrospectively, and two, having it widely available to healthcare providers, such that for every new patient who comes in, even before the pathologist looks at the slide, the AI system is already analyzing, creating things like HIFs, identifying subsets of patients, and then feeding that information back to the pathologist.
So, you could imagine that even at the point of diagnosis, patients are already being recommended for certain clinical trials. Today those worlds are quite disconnected: the way pathology is practiced in the real world, which is very fragmented, and the way clinical trials are enrolled. No one has yet really brought those two things together, and we think this technology could really play a role there.
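A minimal sketch of the two phases described above, assuming HIF-style feature vectors have already been computed per patient: fit a simple classifier on retrospective cases with known response labels, then screen new patients and flag those predicted to be resistant as candidates for a trial. The synthetic data, the scikit-learn logistic regression, and the 0.3 threshold are illustrative assumptions, not PathAI’s method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Learning phase (retrospective): hypothetical HIF-style feature vectors per
# patient from completed trials, with a binary response label for the therapy
# of interest (1 = responded, 0 = did not). All values here are synthetic.
X_retro = rng.normal(size=(200, 3))
y_retro = (X_retro[:, 0] + 0.5 * X_retro[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X_retro, y_retro)

# Deployment phase (prospective): for each new real-world patient, the same
# features would be computed from the H&E image; here they are simulated.
X_new = rng.normal(size=(10, 3))
p_response = model.predict_proba(X_new)[:, 1]

# Flag the predicted-resistant subgroup as candidates for a trial of a new agent.
trial_candidates = np.where(p_response < 0.3)[0]
print("Candidate patient indices for the resistant-population trial:", trial_candidates)
```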
Jeff Elton (29:49):
I’m sure your own thought process is well beyond the spirit of this question, but you can almost imagine a HIF signature associated with, say, a new medicine coming through, where that signature identifies those who were the highest responders and the greatest beneficiaries. In a way, that could almost become its own digital biomarker, deployable in a standard of care setting going forward. Do you see some of these starting to become part of those decision augmentation layers that you mentioned at the outset?
Andy Beck (30:23):
Absolutely. We’re actively working on these areas. In particular, it’s now well known that tissue phenotypes, including phenotypes that we’re measuring, are critically important in areas like predicting response to antibody drug conjugates, where often knowing precisely how much of a target protein is expressed is very important. That’s the type of task machine learning is very good at, and we’re actively working in that area. There’s the entire area of immuno-oncology, where knowing precisely how the immune system is interacting with the tumor can very much predict which patients will benefit most from monotherapy or may benefit most from combination therapy.
And even in the setting of targeted small molecule therapies, we and others have published on the fact that you can use signatures of HIFs, or other components of tissue pathology, to predict which patients are most likely to harbor a rare genetic mutation and might really benefit from downstream confirmatory testing via NGS and subsequent treatment with a targeted therapy. So, I think because of the way therapeutics have evolved in oncology, where it’s a combination of small molecules, immuno-oncology approaches, and ADCs, all of these can benefit from single human interpretable features or certainly from signatures of human interpretable features.
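As a small variation on the same predict-then-act pattern, the sketch below shows how a model like the one Andy mentions might triage which patients receive confirmatory NGS when only a limited number of tests can be run. The synthetic mutation prevalence, the model choice, and the testing budget are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic cohort: HIF-style features and a rare-mutation label (~5% prevalence).
X = rng.normal(size=(1000, 4))
y = (X[:, 1] + rng.normal(size=1000) > 2.3).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

# Triage: with a budget of 100 confirmatory NGS tests, test the patients the
# model ranks as most likely to harbor the mutation, and compare the hit rate
# against the background prevalence.
budget = 100
scores = model.predict_proba(X_te)[:, 1]
top_k = np.argsort(scores)[-budget:]
print("Mutation prevalence overall:   ", round(float(y_te.mean()), 3))
print("Prevalence in model-ranked set:", round(float(y_te[top_k].mean()), 3))
```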
Jeff Elton (31:38):
So, what’s really exciting, if I go back to our collaboration and to your point about the digital pathology, the HIFs, the clinical data, and the longitudinal record, is that your last description could almost become the foundation of a translational medicine capability that informs the design of those first-in-human and follow-on studies that become part of a pan-tumor strategy. It could be part of a post-approval understanding that only 20% of the population is actually super responsive relative to the standard of care, and of how to expand the addressable population. It’s generating the insights that can guide those decisions. And back to your earlier point about precision medicine, this is where precision comes from, and this is how we can start bringing those tools into the research phase and have them follow on into the standard of care setting as well.
Andy Beck (32:28):
Yep, absolutely.
Jeff Elton (32:29):
So, Andy, you’ve been super generous with your time. This is a super exciting area, and you’re obviously in a field that’s moving quickly. I mean, even by your own description of what infrastructure has enabled, digital pathology can scale, cloud infrastructure is ubiquitous, and the models can be more effectively trained and moved forward. But if you were going to look out two years, and I won’t take you any further than two years, even two years feels like you’re racing toward it, what are you most excited about, and where do you see the most meaningful advances coming in the next couple of years?
Andy Beck (33:01):
What I’m most excited about, and I’ve seen this already just in the history of PathAI over the past five years, is the digitization of pathology going from being an afterthought, or not done at all, to being mission critical to clinical operations and clinical development, so that it’s just a standard part of every clinical trial. We really are starting to see this, because without it, data is simply lost. Digitization creates an inexhaustible resource that you can continue to extract more and more knowledge from over time. So, one is that this becomes absolutely routine: everyone running clinical trials will be generating large sets of digital images to continue to learn from and continue to provide benefit to everyone. So, one is just the increasing ubiquity of digital supplemented by AI, and I think that will lead to big new insights from large scale data that just weren’t available with smaller scale data.
And then, to build on that, what I’m really excited about is that we have really been leaders in creating a platform to enable the deployment of models prospectively in clinical trials, to not just learn from completed trials but to really improve the execution and the success rate of new prospective trials. That’s through our clinical trial services platform as well as through our end-to-end clinical development services offering from our biopharma lab in Memphis. So, I think that will continue to scale, and more and more folks will be using digital pathology and AI-powered pathology to improve patient enrollment, quality control, and assessment of histologic endpoints.
And in the areas where histologic endpoints are critical, areas like NASH and like neoadjuvant treatment in oncology, this will become very standard, because those endpoints are incredibly important for assessing as early as possible whether treatment is working, and they’re very, very difficult to do manually. You’re basically asking someone to look by eye and say how well they think the drug worked based on the proportion of viable tumor left, the amount of necrotic tissue, and the immune response to treatment. These are all things that are very, very difficult, if not impossible, to do manually, and that we are building algorithms to do well.
So, I think we’ll see increasing use of this technology for scoring primary endpoints and enrolling patients in areas where pathology is super important, like NASH and neoadjuvant treatment of cancer. Those are some of the things we’ll be seeing over the next two years.
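As a sketch of what scoring a histologic endpoint might look like downstream of a validated segmentation model, the function below computes viable tumor and necrosis as fractions of the former tumor bed from a hypothetical per-pixel tissue mask. The label encoding and class set are assumptions for the example; real neoadjuvant response scoring follows specific, validated criteria.

```python
import numpy as np

# Hypothetical per-pixel class mask from a tissue segmentation model.
# 0 = background, 1 = viable tumor, 2 = necrosis, 3 = other tissue (e.g. stroma).
VIABLE, NECROSIS, OTHER = 1, 2, 3

def residual_tumor_metrics(mask: np.ndarray) -> dict:
    """Summarize a segmentation mask into endpoint-style quantities:
    viable tumor and necrosis as fractions of the former tumor bed."""
    tumor_bed = int(np.isin(mask, [VIABLE, NECROSIS]).sum())
    tissue = int((mask > 0).sum())
    viable = int((mask == VIABLE).sum())
    necrosis = int((mask == NECROSIS).sum())
    return {
        "percent_viable_of_tumor_bed": 100.0 * viable / tumor_bed if tumor_bed else 0.0,
        "percent_necrosis_of_tumor_bed": 100.0 * necrosis / tumor_bed if tumor_bed else 0.0,
        "tumor_bed_fraction_of_tissue": tumor_bed / tissue if tissue else 0.0,
    }

# Tiny example mask; a real whole-slide mask would be tens of thousands of pixels per side.
mask = np.array([[0, 1, 1, 2],
                 [3, 1, 2, 2],
                 [3, 3, 1, 0]])
print(residual_tumor_metrics(mask))
```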
Jeff Elton (35:16):
So, multiple ideas are circulating in my mind. It sounds extraordinarily exciting. And Andy, I can’t thank you enough for being here today. It’s been great to get PathAI and ConcertAI, the two AI companies, here together. So, I really appreciate it, and I really appreciate all your insights and contributions.
Andy Beck (35:33):
Great. Thank you so much, Jeff. And really excited for the partnership between ConcertAI and PathAI.
Jeff Elton (35:40):
Hearing about the progress and the whole arc of digital pathology, how digital pathology itself is going to transform healthcare, how clinical interpretation is going to move far earlier, and how the level of data and insights available to the medical professionals administering treatment is going to be ever improving, that, combined with the implications for clinical development, really creates a sense of profound optimism for cancer patients everywhere. So, again, I want to thank all of our listeners, and wherever you are, good morning, good afternoon, and good night.