Gabriel Eichler: Welcome to Under the DataScope, examining how data and analytics are transforming medicine. This is a podcast by Harvard Business School’s Kraft Precision Medicine Accelerator. I’m your host Gabriel Eichler, joined here today by my cohost and the co-chair of the Kraft Precision Medicine Accelerator, Richard Hamermesh.
Gabriel Eichler: I can remember a morning in May 2014, when two enormous events coincided. The first was when the American Society of Clinical Oncology launched their CancerLinQ effort, and the second was the announcement that a then little-known company, Flatiron Health, had just raised $130 million from top-tier venture capital investors. These two organizations were setting themselves up to innovate in one of the most exciting and important developments in oncology this decade: the systematic collation and analysis of electronic medical records to significantly accelerate oncology R&D and patient care. Both ASCO’s CancerLinQ and Flatiron were focused on a challenge of tremendous unmet need: extracting, cleaning, and codifying the data stored in dozens of electronic medical record systems in an attempt to understand how patients experience cancer care. This critical gap in our understanding of how our medical system attempts to serve patients has led to wasteful treatments, unnecessary tests, and poor outcomes for many patients receiving cancer care. Understanding these care dynamics is critical to improving and maximizing the benefits of the treatments we already have.
Gabriel Eichler: If we fast forward a bit to February 2018, Flatiron was acquired by the global pharmaceutical giant Roche for nearly $2 billion. CancerLinQ, on the other hand, remained an independent entity with limited uptake and support. This all changed when ASCO decided to license their CancerLinQ asset to two for-profit companies: Tempus and ConcertAI. These two organizations have been tasked with bringing ASCO’s CancerLinQ to the forefront of this critical space and have bold visions for how to do that.
Gabriel Eichler: It’s with this background that I can say we’re extremely fortunate today to have the CEO of ConcertAI, Dr. Jeff Elton. Dr. Elton has over 25 years of experience as a global executive in health care and life sciences. Prior to ConcertAI, Dr. Elton was a Managing Director at Accenture Strategy for patient health. He’s a member of the board of the Massachusetts Biotechnology Council and has an MBA and a PhD from the University of Chicago’s Booth School of Business. Thanks for joining us today.
Jeff Elton: Thank you very much. Richard and Gabriel, it is a real pleasure to be here. And particularly given the mission and the mandate behind why this was set up, it’s both a pleasure and a privilege.
Gabriel Eichler: Richard, do you want to start us off with some background on the Kraft Precision Medicine Accelerator?
Richard H.: Sure, Gabriel. It’s a pleasure to be here, and it’s a pleasure to have Dr. Elton here with us as well and to learn more about ConcertAI. The Kraft Accelerator started four years ago with a gift from the Kraft family and a mission to “accelerate progress in precision medicine.” The family had been touched by the death of Myra Kraft from ovarian cancer. Towards the end of her treatment, she had been able to receive some targeted therapies. It was too little, too late. The family became aware that there were things, in addition to the science of developing molecules, that could help accelerate progress. Thus the gift to the Harvard Business School and the formation of the Kraft Accelerator, with myself and Kathy Giusti leading it.
Richard H.: We have focused our efforts in four areas: very briefly, direct to patient, investment, clinical trials, and data and analytics, which is what we’re talking about today. Precision medicine doesn’t move forward without data. That’s how you get precise, and that data needs to be analyzed. ConcertAI came up very high on the list, so this is a very fitting next step in what we’ve been doing.
Gabriel Eichler: Yeah, absolutely. ConcertAI has been on my radar for a number of years now ever since they licensed the ASCO CancerLinQ asset. When we did that comparison of these real-world data assets, indeed they ranked incredibly high in terms of the quantity and quality of the data that they’ve gathered together. Dr. Elton, tell us a bit about what is ConcertAI and what’s your vision for the company?
Jeff Elton: Okay. It comes directly to this notion of precision medicine. You’ll find we’re going to use the term precision evidence a lot, and we believe that the accelerated generation of precision evidence is what actually brings precision medicine to life in patient benefit. On January 9th, while we were presenting at the J.P. Morgan Healthcare Conference, the American Cancer Society published data indicating that there had been a 27% reduction in cancer deaths over the last 25 years. As we walked into that conference and thought about that number, we said to ourselves that one of the reasons we were there is that this is good, but not good enough. That’s not sufficient, and in fact we can do better.
Jeff Elton: Our mission, what we’re here to do, and the reason why we bring together what we term engineered real-world data products and technologies and human expertise, is to see if we can do substantially better. We’ve set a goal for ourselves to see if we could achieve a 25% reduction in 10 years, and we believe that’s going to come from a combination of technologies that are now possible to deploy at scale, combined with traditional therapeutic approaches in an integrated way.
Gabriel Eichler: That’s very bold. Reflecting on that for a second, you’re a technology and data company, which doesn’t touch patients and doesn’t have any drugs; you sit on the periphery of health care as a sort of insight-generation platform. How would you go about achieving a goal like that?
Jeff Elton: How you frame, define, and even design clinical studies can now be informed by real-world data and by tools. Artificial intelligence and machine learning tools can identify subpopulations that may not have been precedented in the literature and even help form new hypotheses that can advance different alternative treatment approaches. So I can find the subpopulations that may be super-responders. I can find the subpopulations that may not have been beneficiaries. Instead of looking at a normal distribution, I can use analytic tools to do that in a fraction of the time it would have taken with traditional research and bring that forward. So that’s part number one.
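To make that concrete, here is a minimal, illustrative sketch of the kind of subgroup discovery Dr. Elton describes. The data fields, thresholds, and model choice are hypothetical and are not ConcertAI’s actual pipeline.

```python
# Minimal sketch (hypothetical fields, not ConcertAI's actual method): fit a
# shallow decision tree on real-world-data features and read off the leaves
# whose response rates sit far from the overall average -- candidate
# super-responder or non-responder subpopulations.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical patient-level table derived from EMR data.
cohort = pd.DataFrame({
    "age": [54, 71, 63, 48, 69, 75, 58, 66],
    "biomarker_positive": [1, 0, 1, 1, 0, 0, 1, 0],
    "prior_lines_of_therapy": [1, 3, 2, 0, 2, 4, 1, 3],
    "durable_response": [1, 0, 1, 1, 0, 0, 1, 0],  # observed outcome
})

features = ["age", "biomarker_positive", "prior_lines_of_therapy"]
tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=2, random_state=0)
tree.fit(cohort[features], cohort["durable_response"])

# The printed rules describe candidate subgroups (e.g. biomarker-positive,
# lightly pretreated patients) that could seed a new study hypothesis.
print(export_text(tree, feature_names=features))
```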
Richard H.: Can you do that a priori? Before the trial begins and before the trial is designed?
Jeff Elton: Excellent question. What we are trying to bring to bear are tools that help inform what studies you want to do and why you want to do them. Some of these insights we can pull from the real-world data assets. And let me just take a step back. Real-world data and electronic medical record–derived data have been around for a long time, and we’ve been able to put them together. Other companies have come together around this as well; in fact, we consider ourselves a third generation in that particular field. The first generations gave you observations of what care looks like: what is the standard of care? Does it vary by practice setting? So it was really very descriptive. The second generation, though, recognized that there is research potential in data that was captured around the patient’s treatment. It wasn’t intended for research, but if I wanted to do research, I could improve the quality of those assets and start making them research grade.
Jeff Elton: The third generation now is that once I have that, and I can integrate it across multiple settings, I can integrate data that may not even sit inside the provider but may sit with the lab company, the payer, and other areas. I use the questions to drive what data I want to bring together and use those use cases to determine that data. I bring in data assets as appropriate for those questions, and I take structured and unstructured data and make them available. But instead of doing it on a per-study basis, I do it at massive scale. When I used to do that on a study basis, I had to have the question first. Now we want to make sure that we have all that precision and all that scale right up front, so that the data, with new analytic approaches, can inform the right questions. And by doing it at scale, we’re trying to take what used to take weeks to accomplish and get it done in a few hours.
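As a toy illustration of the “question drives the data” idea, the sketch below joins hypothetical EMR, lab, and claims extracts on a shared patient identifier and keeps only the fields a specific question needs. The table and column names are illustrative placeholders, not an actual ConcertAI schema.

```python
# Toy sketch: assemble a question-specific cohort by joining hypothetical EMR,
# lab, and claims extracts on a shared patient identifier.
import pandas as pd

emr = pd.DataFrame({"patient_id": [1, 2, 3],
                    "diagnosis": ["NSCLC", "NSCLC", "CRC"],
                    "line_of_therapy": [1, 2, 1]})
labs = pd.DataFrame({"patient_id": [1, 2, 3],
                     "egfr_mutation": [True, False, False]})
claims = pd.DataFrame({"patient_id": [1, 2, 3],
                       "total_90d_cost": [42000, 58000, 31000]})

# Question: treatment patterns and cost for first-line, EGFR-mutated NSCLC.
cohort = (emr.merge(labs, on="patient_id")
             .merge(claims, on="patient_id")
             .query("diagnosis == 'NSCLC' and egfr_mutation and line_of_therapy == 1"))
print(cohort)
```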
Gabriel Eichler: For our listeners, and for the sake of our conversation today in particular, can you help us get a sense of the different types of data resources that you believe you can pull into this type of an integrated system?
Jeff Elton: The electronic medical record is one source. Claims data, which are derived from the payers of healthcare, both public and private, are another source. Inside the electronic medical record there are often laboratory results; often those come from a third-party laboratory, and the raw values, or the more quantitative values of interest, may sit back at that laboratory entity. So laboratory values may be another. Image data are not in the EMR; what you tend to get in the electronic medical record is the interpretation. And then there are patient-centric sources, which can be patient-reported outcomes data and other features; ultimately, in different disease states, wearable data will be material to gaining an understanding of how treatment is proceeding in certain cases.
Jeff Elton: We bring together all of those, but what we think about is: what’s the problem you’re trying to solve? Am I a clinical development person who needs depth in the science and deep genomic data? In that case my stratification basis may be lab data, genomic data, and EMR data. At what depth, in what ways, and with what recency do I need it? And with what frequency of refresh do I need to see it? Let me bring that together in a way that helps their ability to develop next-generation clinical study concepts. If I’m a health economist trying to do market access and show value to a payer so that somebody gets reimbursed, that may be a different set of data. We may bring together claims data and electronic medical record data, but I don’t need all the depth of a scientist or a clinical developer.
Jeff Elton: So the questions of interest and the areas of interest are going to determine the data elements we try to bring together, which means we have to be super collaborative with everybody in the industry and across its borders. We almost don’t consider anybody a competitor. It’s more about what needs to come together to solve which problems at what scale.
Richard H.: Can you share a ground-level example where applying these techniques really made a difference, either in target populations or synthetic control arms? I always think actual examples help the audience a lot.
Jeff Elton: You used the term synthetic control arm, so let me define what a synthetic control arm is. When you’re running a clinical trial, the conundrum you now face is that you’re getting very small populations, and you’re trying a therapeutic approach being deployed in a cancer that really didn’t have very many alternatives; in fact, the standard of care had very poor outcomes. So the notion is that it may not be ethical, or even possible, to use a control of an actively treated population, because anybody who meets the requirements probably should receive treatment. In this case the active arm would be an experimental, pre-approval therapeutic, but I still need a control, and I need a statistical framework that gives me confidence like a randomized controlled trial, the gold standard, would have. And so the notion is that I can use prior data.
Jeff Elton: Some believe you can use prior data from real-world data sets, from electronic medical records, and some believe you can use controls from prior randomized controlled trials, so subjects who may have participated in other controlled trials, and you pool those together. But the notion is that they’re called synthetic because they’re not being recruited alongside that active arm. This is now being done; Roche has had therapeutics approved in Europe, in some very rare blood-based cancers, that went through this way.
Jeff Elton: The notion here is that these synthetic controls provide confidence in interpretation. They also can be completed in a fraction of the time. If I’m accruing subjects, I’m only accruing subjects to the active arm, so my accrual process is much more rapid as I’m selecting through. I get a shorter trial completion time, which both lowers resource intensity and gets it through faster, so new medicines get to patients faster.
Jeff Elton: One of the things you want to be able to do, when you’re looking for very unusual populations, is get into rare mutations. Just to give you an idea, we may start looking at a population of 200,000 patients, but when you’re looking at a design for a subpopulation, we may get down, in certain hematological malignancies, to 100 or 150. And we’re sometimes looking at something like close to 20% of the US population. So there is this notion that I can use these tools to model and understand the population, the settings where those patients are found, model the design, and then understand which clinicians and clinical sites are seeing them, and where I can even go to run the trial. These are things that even three years ago were simply not possible to do. Impossible.
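As a simple illustration of that narrowing and site-selection step, the sketch below applies hypothetical eligibility criteria and counts the remaining patients by treating site. Every field name and threshold here is a placeholder, not an actual feasibility model.

```python
# Illustrative only: apply narrow eligibility criteria, then count remaining
# patients by treating site to see where a trial could realistically recruit.
import pandas as pd

patients = pd.DataFrame({
    "patient_id": range(1, 9),
    "site": ["A", "A", "B", "B", "B", "C", "C", "D"],
    "rare_mutation": [1, 0, 1, 1, 0, 0, 1, 1],
    "ecog_status": [0, 1, 1, 0, 2, 1, 0, 3],
})

eligible = patients[(patients["rare_mutation"] == 1) & (patients["ecog_status"] <= 1)]
print(eligible.groupby("site")["patient_id"].count().sort_values(ascending=False))
```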
Richard H.: I think this is so important for the audience to underscore: typically, for a randomized controlled trial, half the people you recruit are in the control arm and are getting a placebo of some sort. So if you need N people to be treated, you need to recruit two times N, and this just eliminates that: the control, which would normally be a group of patients getting a placebo. You’re saying you can go to your database, apply your analytics, and come up with the natural history of what this would look like.
Jeff Elton: You can. And I want to emphasize there’s no lack of scientific rigor because even if you’re going to do a synthetic control, you’re applying the same statistical framework.
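One common statistical framework for building an external comparator of this kind is propensity score matching. Here is a generic, hedged sketch of that textbook approach on simulated data; it is not a description of any particular company’s or regulator’s methodology.

```python
# Generic sketch of an external ("synthetic") control built by propensity score
# matching: fit a model of trial membership on baseline covariates, then match
# each trial patient to the nearest real-world patient by score. Data are
# simulated; this is a textbook illustration, not a validated method.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
covariates = ["age", "stage"]

trial = pd.DataFrame({"age": rng.normal(62, 8, 30),
                      "stage": rng.integers(2, 5, 30),
                      "in_trial": 1})
rwd = pd.DataFrame({"age": rng.normal(68, 10, 300),
                    "stage": rng.integers(1, 5, 300),
                    "in_trial": 0})
pooled = pd.concat([trial, rwd], ignore_index=True)

# Propensity of being a trial patient given baseline covariates.
model = LogisticRegression().fit(pooled[covariates], pooled["in_trial"])
pooled["ps"] = model.predict_proba(pooled[covariates])[:, 1]

# Greedy 1:1 nearest-neighbour matching, without replacement.
controls = pooled[pooled["in_trial"] == 0].copy()
matched_ids = []
for ps in pooled.loc[pooled["in_trial"] == 1, "ps"]:
    idx = (controls["ps"] - ps).abs().idxmin()
    matched_ids.append(idx)
    controls = controls.drop(idx)

synthetic_control = pooled.loc[matched_ids]   # comparator arm drawn from real-world data
print(synthetic_control[covariates].describe())
```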
Gabriel Eichler: Interesting. I think of a lot of the applications of these data assets as being used in the medical affairs functions of biopharma and the commercial work of pharma companies. But increasingly as the data gets richer, higher fidelity, it can be used for R&D as well. So are you guys seeing that steady march forward in the drug development process?
Jeff Elton: Gabriel, one of the things I think you’ll see is this is a field where Dr. Gottlieb, who previously was the commissioner and head of the FDA, made a few challenges to industry and indicated, “We don’t think you’re being brave enough to push the models forward.” And this is a circumstance where a regulatory agency is now challenging the traditional notions about how research takes place.
Jeff Elton: I personally, and our firm, give quite a bit of credibility to what the FDA has been able to advance recently. A lot of this came throughout 2018; it really got its start at the end of 2016 with the 21st Century Cures Act. But in reality, most of the agency’s guidance about the use of real-world data, new data types, and even new statistical methods started appearing in 2018. One of the things they indicate, and this may sound like an obvious statement when I make it, is that a study design should be a set of questions that, once answered, have implications for a community practitioner and how they would actually practice medicine. That sounds like, “Why wouldn’t all clinical studies do that?” But in reality, regulatory endpoints were about meeting regulatory considerations and were not there to have sway over how people practice medicine. And I think the agency started saying, “No, we actually need study designs and study concepts to be more reflective of the real-world setting, and the results from a randomized controlled trial shouldn’t necessarily depart from what the real world looks like.”
Jeff Elton: And so a lot of guidance came out about changes in trial designs and trial parameters. In fact, as recently as March 13th of this year, not much more than a month and a half ago, additional guidance came out on something called predictive enrichment of populations: using analytic tools to select the patients going into a study design. So analytics, real-world data, whole new assets, and whole new capabilities are moving into the research and development side of very large biopharmaceutical companies, in part guided by the challenges and the fairway established by the regulatory authorities.
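To give a flavor of what predictive enrichment can look like in practice, here is a minimal, hypothetical sketch: a model trained on prior data scores screening candidates, and the study enrolls the tier predicted most likely to experience the endpoint event. The features, model, and cutoff are all assumptions, not the FDA’s or ConcertAI’s specific method.

```python
# Minimal, hypothetical sketch of predictive enrichment: train a risk model on
# prior (e.g. real-world) data, score new screening candidates, and enroll the
# highest-scoring tier so fewer subjects are needed to observe the endpoint.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)

# Prior data: simulated baseline features and whether the endpoint event occurred.
X_prior = rng.normal(size=(500, 4))
y_prior = (X_prior[:, 0] + 0.5 * X_prior[:, 2] + rng.normal(size=500)) > 1

risk_model = GradientBoostingClassifier().fit(X_prior, y_prior)

# Score screening candidates and enrich the study with the top 20.
X_candidates = rng.normal(size=(40, 4))
event_prob = risk_model.predict_proba(X_candidates)[:, 1]
enriched = np.argsort(event_prob)[::-1][:20]
print("Candidate indices selected for enrichment:", enriched)
```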
Richard H.: Are there other examples of questions that get asked where you can illustrate the power of what you’re doing?
Jeff Elton: We tend to move across a continuum of questions. The other thing we’re now doing is that we’ve found we can predict who is going to progress, meaning whose cancer will progress versus who will have a durable response. And if we can understand that, back to that opening statement, we can use that focal population as an insight into how to further refine treatment protocols and translate that into new study designs and concepts. So back to this notion of how we get to that 25% reduction in 10 years: can we do these things in a fraction of the time? These would normally have required studies, but we’re doing them with machines, in what I’ll call in silico labs as opposed to human wet labs and clinical studies.
Gabriel Eichler: I’ve long held the notion that existing therapies and tools in the fight against cancer are so inadequately deployed that even just an optimization of the current toolbox, if you will, could vastly improve outcomes and patient experiences.
Jeff Elton: You’re saying something very important. We work very collaboratively. We obviously have a relationship with the American Society of Clinical Oncology, and their CancerLinQ program is a focusing lens for us. We work with a lot of providers. When I travel I do my commercial work, but I also spend time with regional medical societies, oncologists, and oncology networks. We listen very carefully to their challenges in treating patients, and I think they’re equally trying to participate and are looking forward to these new paradigms becoming accessible to their practice, their communities, and their particular patients. Much of our work is in a research phase that’s pre-treatment, but with everything we do, we’re being mindful: are we building it in a way that could have utility in the clinic later? Some of that is as mundane as making sure every model, every component, is built with a FHIR interface and as something that could be deployed back into the informatics infrastructure of a provider setting.
Jeff Elton: So we’re trying to be mindful that the way you have impact is to answer the questions that are answerable today, but to do it so that, if you’re successful, that success can carry forward into clinical settings in the future, even if it’s not quite validated yet. And that also challenges you to make sure you’re working on the right questions. Will people care? Will they care enough to bring it into those clinical settings? So we try to bring that as well.
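The FHIR point above is about packaging: a model exposed through a FHIR interface can read its inputs as standard resources from a provider’s server. Below is a bare-bones, hypothetical sketch of that idea; the endpoint URL is a placeholder, and a real deployment would require authentication and a site-specific base URL.

```python
# Bare-bones illustration of reading standardized inputs over FHIR. The base
# URL is a hypothetical placeholder; real servers require authentication.
import requests

FHIR_BASE = "https://fhir.example-hospital.org/r4"

def fetch_condition_names(patient_id: str) -> list[str]:
    """Return display names of the FHIR Condition resources recorded for one patient."""
    resp = requests.get(f"{FHIR_BASE}/Condition", params={"patient": patient_id})
    resp.raise_for_status()
    bundle = resp.json()
    return [entry["resource"]["code"]["coding"][0].get("display", "")
            for entry in bundle.get("entry", [])]

# A model deployed back into a provider's informatics infrastructure would
# consume these standardized resources rather than a vendor-specific extract.
print(fetch_condition_names("example-patient-123"))
```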
Gabriel Eichler: Fascinating. It feels like the FDA has been very progressive, as you mentioned earlier, in its views about the importance of real-world evidence, and in appointing Amy Abernethy, the former chief medical officer and chief scientific officer of Flatiron, to the role of principal deputy commissioner and acting CIO. I’ve heard you speak recently about the guidance that the FDA has proposed, and I’m curious to hear your most recent thoughts on how productive that relationship will be and what else we need.
Jeff Elton: Even since Dr. Gottlieb left the agency, statements have come from the FDA to assure everyone that they plan on continuing that emphasis. In fact, comments have come from Alex Azar, the head of HHS, and, while I don’t think this came from the president directly, it was attributed to him as well. They’re all encouraging those policies and approaches, and what I’ll call more than a renovation: it’s actually a redirection of how certain portions of the agency would operate.
Jeff Elton: Now, Amy Abernethy herself has been a leader in this field for a long time, and I think that preceded her joining the startup team at Flatiron; Flatiron itself was a leader and helped establish and move the field forward considerably. I think she has clearly made the transition to a neutral role and will continue to bring that leadership. She’s been articulate about what our standards for data should be, and how we think data derived from the standard of practice could have the same veracity as data collected in a clinical study.
Jeff Elton: The agency has indicated that while you tend to think of randomized controlled trial data as the gold standard, there are imperfections and failure points in those data too. So it’s knowledge of those limitations, and how to improve on them, that is more important than discrediting one side and saying one is better than the other. In fact, I think that’s what’s going to bring the most benefit.
Jeff Elton: So our view is we don’t see that slowing down. We don’t see that stopping. There have been approvals of AI-based devices, built on retrospective data, under the software-as-a-medical-device framework. You probably saw that about six weeks ago, Paige.AI, a company that came out of Memorial Sloan Kettering, received breakthrough designation, an accelerated pathway, for diagnosis in prostate cancer using AI tools and digital pathology. The agency itself is building up a data science team to get comfortable using novel analytical approaches, and is even encouraging that in study design. This is a real change. It will change how medicine is practiced, and I think it sets a context that we’ll be operating in for some time.
Gabriel Eichler: One more question I have for you, which I think is a really critical one to talk about. There’s been an increasing backlash against the ownership and fluidity of data. We see folks like Facebook and Google getting hauled up before Congress and read the riot act about how they manage data, and there have been security breaches at many major organizations. I see tremendous promise and hope in the vision that you and ConcertAI are bringing to the market, and in your peers and the ecosystem supporting it. At the same time, I feel there’s a discussion around closing up data and making it much more difficult to share. We’ve seen in Europe that GDPR has changed the fluidity of data and the ownership model. I would be really curious to hear your viewpoint: how do we get this right and ensure that we don’t find too much interference with the productive path we’re on now and the potential value we see ahead of us?
Jeff Elton: This is highly relevant, and you may even find some polarized views on some of this, but let me advance one. If you asked anybody in my company, they would cite how often I come back to this: we are privileged stewards of the data we’ve been allowed to interact with, and that brings with it a responsibility and a set of obligations to use it only for the benefit of those who contributed that data. Let me go back to what that really means.
Jeff Elton: Now, there are health provider entities that provide services to those patients and are legally allowed, under the consent the patient signs, to utilize that data for patient care. Oftentimes those consents also say the data can be used for research purposes, and that is also to build future quality and future benefit into patient care.
Jeff Elton: We believe that very strongly, and it’s one of the reasons why even the legal structure of our agreements is the subscription agreement. It doesn’t imply ownership; it implies use, with limitations around what those uses are. So between a provider and a patient, you could argue the patient is the ultimate owner. In fact, I was out in Washington state last week with a large medical society and this came up as a topic with them. The answer can only be that these data really belong to the patient.
Jeff Elton: How you act on that is that one needs to be super transparent, at the point of consent and post-consent, about what data were captured and how those data will be used. In my personal experience, and some of this was pre-ConcertAI, we put together a very large learning health system program in neurodegenerative diseases that involved a biopharma. We were super transparent and created a document, available to patients, explaining how those data were going to be used, and fewer than 1% of patients, once they read it, decided to opt out of providing their data for that particular purpose.
Jeff Elton: So generally, when it’s explained with appropriate rationale, limitations, and boundaries, I’ve personally not seen patients withhold their data. If you say, “My intent here is to do research that brings patient benefit,” they almost always agree at a very, very high rate. Low single digits will opt out; almost all of them will provide it.
Jeff Elton: In fact, we’re thinking about creating an annual patient report on the types of contributions and the things we’ve accomplished with the data and the assets we’ve brought together. So I do think one has to have a high level of business integrity, a high level of use-case integrity, almost an ethical and intellectual integrity, and real transparency to the patient. The greater the transparency, the more engaged the patient is. The patient tends to be more generous with the allocation of their data when they know the intent than some assume when acting on the patient’s behalf, but it needs to be done intelligently and respectfully.
Gabriel Eichler: Dr. Elton, thank you so much for joining this edition of Under the DataScope. It’s been a phenomenal opportunity for us to learn a bit more about ConcertAI and to share that learning with our audience.
Gabriel Eichler: Richard, let me turn to you. What do you think? Anything surprising in here? Anything that you are pleased to hear?
Richard H.: Maybe I’m hearing this from my perspective as a co-chair of the Accelerator, but I was struck by how many of the comments dovetailed with what we’re trying to do. Dr. Elton said we can do better, and to me that’s the whole story in oncology. There are lots more drug approvals than before; we can do better. They’re being approved faster; we can do better. And we see this in clinical trials. We had the example of the synthetic control arms: the money that you save will allow more investment. These are all part of the virtuous circle that we’re trying to accelerate, and of course AI helps make that happen.
Richard H.: One thing I will mention is that part of what we’ve been doing at HBS is running an annual course called Accelerating Innovation in Precision Medicine, a three-day executive education course with all new cases every year. Dr. Elton spoke about asking the right question. The case we used last year was called The Answer Fund, about one of the disease nonprofits putting money out to find the best questions, and they surveyed not only leading oncologists but their patient population. You start with the right question, then you move forward. We’re trying to stay at the leading edge of this in our educational courses, and obviously we’re a school here that can disseminate knowledge, knowledge being developed both in universities and by great companies.
Gabriel Eichler: One more observation here, which is really fun to think about. For years, real-world data was in a sense playing second fiddle to the gold-standard medical evidence of clinical trials. Something I’ve observed here is that it’s crossing the chasm now, to the point of augmenting and almost replacing existing forms of medical evidence. And eventually, if we imagine, these types of systems could start building a true learning healthcare system. It’s the classic case of disruption, where at first the new technology plays catch-up, then eventually surpasses the incumbent, and then starts defining the segment it was once trying to join as a new entrant.
Gabriel Eichler: I think that is about all the time we have for Under the DataScope today. It’s been a tremendous opportunity to hear from Dr. Elton. Thank you. This is Gabriel Eichler and Richard Hamermesh signing off from the Harvard Business School. Thank you for joining us.
Gabriel Eichler: We’d like to thank our illustrious and dedicated colleagues and leaders for their hard work on making Under the DataScope possible. Thank you to Richard Hamermesh and Kathy Giusti, the faculty co-chairs of the Kraft Precision Medicine Accelerator, Krys Mroczkowski, executive producer, and of course the profound generosity of Robert and Jonathan Kraft and the entire Myra and Robert Kraft Family Foundation for making this program possible.