Third Plenary Session examined the AI revolution across cancer labs and clinics

Jakob Nikolas Kather, PhD, MSc

The third Annual Meeting Plenary Session, “AI Revolution in Cancer Research,” highlighted advances in artificial intelligence (AI) tools for cancer research and care.

The session opened with a quick review from Session Chair Jakob Nikolas Kather, PhD, MSc, of Technische Universität Dresden in Germany. Kather explained that in the 2000s, AI’s role in biomedical research focused on what are now termed classical machine-learning methods. These provided the basis for the deep-learning models of the 2010s, followed by the self-learning foundation models that emerged around 2020; the generalist large language models that burst onto the scene in 2023; and, finally, the new agentic models of 2025.

Agentic AI, said the session’s first speaker, Jure Leskovec, PhD, of Stanford University, now gives scientists the capacity for human-machine collaboration with “AI co-scientists.”

Jure Leskovec, PhD

A key problem with the practice of science itself, Leskovec said, is the intrinsic limit of human bandwidth. AI agents offer the fundamentally new possibility of automating not just data analysis but also literature review, hypothesis generation, software creation, experimental design, and even experimental execution. The last of these is made possible by already-existing robotic wet labs, which can perform protocols when instructed by either a human scientist or an agentic AI co-scientist.
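In broad strokes, an “agentic” system wraps a language model in a plan-act-observe loop over tools such as literature search and analysis pipelines. The sketch below is a minimal illustration of that loop only; the tool stubs and the llm_propose_step helper are hypothetical placeholders, not drawn from any system presented in the session.

```python
# Illustrative sketch of an agentic research loop; all tool names and the
# llm_propose_step helper are hypothetical placeholders.

TOOLS = {
    "search_literature": lambda query: f"(stub) papers matching {query!r}",
    "design_experiment": lambda hyp: f"(stub) protocol testing {hyp!r}",
    "run_analysis": lambda target: f"(stub) results of analyzing {target!r}",
}

def llm_propose_step(goal: str, history: list[str]) -> tuple[str, str]:
    """Placeholder for a language-model call that picks the next tool
    and its argument, given the goal and the observations so far."""
    if not history:
        return "search_literature", goal
    if len(history) == 1:
        return "design_experiment", f"hypothesis derived from {history[-1]}"
    return "run_analysis", history[-1]

def co_scientist(goal: str, max_steps: int = 3) -> list[str]:
    """Plan-act-observe loop: choose a tool, execute it, and feed the
    observation back into the next decision."""
    history: list[str] = []
    for _ in range(max_steps):
        tool, arg = llm_propose_step(goal, history)
        history.append(TOOLS[tool](arg))
    return history

for step in co_scientist("mechanisms of chemotherapy resistance"):
    print(step)
```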

Leskovec showcased an AI co-scientist named Biomni, which, in one example, allowed a student researcher to complete in 35 minutes an analysis that would otherwise have taken her about three weeks of work. By working alongside scientists and adapting to their research, AI agents like Biomni, he said, have the potential to be “the new operating system for biomedicine.”

Bo Wang, PhD, of the University of Toronto in Canada, spoke next about AI’s potential for next-level modeling. One of his lab’s technologies, called X-Cell, models cell biology in a virtual environment where scientists can test how cells react to various perturbations, including possible therapeutic interventions.

Wang and colleagues trained the model using a vast dataset they assembled called Pisces, comprising nearly 26 million single-cell transcriptomes of perturbed cells.
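In the abstract, a virtual cell model of this kind learns a mapping from a baseline cell state and a perturbation to a predicted post-perturbation expression profile. The toy sketch below illustrates only that interface; the additive “effect vector” update and every name in it are hypothetical stand-ins, not X-Cell’s architecture or the Pisces data.

```python
# Toy sketch of the interface a virtual cell model exposes:
# (baseline expression, perturbation) -> predicted expression.
# The additive update below is a hypothetical stand-in, not X-Cell itself.
import numpy as np

rng = np.random.default_rng(0)
N_GENES = 100

# Hypothetical learned effect vectors, one per perturbation.
PERTURBATION_EFFECTS = {
    "KO_TP53": rng.normal(0.0, 0.5, N_GENES),
    "drug_A": rng.normal(0.0, 0.2, N_GENES),
}

def predict_response(baseline: np.ndarray, perturbation: str) -> np.ndarray:
    """Predict a post-perturbation expression profile by applying a
    learned (here: random placeholder) shift to the baseline state."""
    return baseline + PERTURBATION_EFFECTS[perturbation]

baseline = rng.lognormal(0.0, 1.0, N_GENES)   # one cell's expression profile
predicted = predict_response(baseline, "KO_TP53")
print("most shifted genes:", np.argsort(np.abs(predicted - baseline))[-5:])
```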

Bo Wang, PhD

“We find that as we increase the number of screens and increase the number of contexts, the datasets start to recover lots of known biological protein-protein interactions,” he said, illustrating the power of AI to reason its way through fundamental biology. “This really gives us hope that with such high-quality, diverse datasets, our model will eventually learn some causal biology.”

Wang concluded with a preview of his lab’s next project: BioReason-Cell, a virtual cell model that he hopes will have the capability to reason in natural language, which would allow for even more accessible and expansive model-based analyses.

Turning to the clinic, Suchi Saria, PhD, of Johns Hopkins University, reviewed her work adapting AI for use in real-world health care settings, a proposition that, she said, requires a rigorous approach to building trust, given the high-stakes environment of direct patient treatment.

“Care is reactive by default today,” she said. “When a patient is entering a room, you’re doing the best you can given the little bit of time you have to gather information about this patient’s history and what you need to do. And as a result, you see a lot of variability.”

Suchi Saria, PhD

Saria developed an AI platform to address the shortcomings of existing patient-data-based warning systems. These systems, she said, not only generated counterproductively low signal-to-noise ratios for clinicians, making alerts something of a nuisance, but also tended to warn of emerging health issues too late or not at all.

Her AI platform integrated a slew of patient data—lab results, imaging data, vital signs, and general notes—to create a dependable system that would provide clinicians with proactive guidance and would “think like a clinical team,” she said.

Saria shared a case study in which the AI platform was used to identify sepsis. The platform achieved an 85% sensitivity for identifying developing sepsis cases while reducing the number of alerts sent to clinicians ten-fold. It also warned clinicians earlier than existing warning systems did.
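To unpack those two figures: sensitivity measures the fraction of true sepsis cases the system catches, while the ten-fold reduction concerns total alert volume, and the two can improve together when false alarms are cut. The counts in the snippet below are invented for illustration; only the 85% sensitivity and the ten-fold reduction come from the talk.

```python
# Worked illustration of 85% sensitivity alongside a ten-fold alert
# reduction. All counts are invented; only the 85% and "ten-fold"
# figures come from the talk.
true_sepsis_cases = 200
flagged_sepsis_cases = 170        # 170 of 200 real cases caught

legacy_alerts_per_week = 5000     # hypothetical legacy alarm volume
new_alerts_per_week = 500         # ten-fold fewer alerts overall

sensitivity = flagged_sepsis_cases / true_sepsis_cases
print(f"sensitivity: {sensitivity:.0%}")                                # 85%
print(f"alert reduction: {legacy_alerts_per_week / new_alerts_per_week:.0f}x")
```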

The tool, she said, works alongside clinicians to improve their ability to respond to imminent health issues while providing all-encompassing overviews of potentially actionable clinical data. Saria’s platform can also autonomously perform tasks like ordering lab tests and can automate some of the compliance paperwork required of clinicians, creating another incentive for adoption.

Faisal Mahmood, PhD

Faisal Mahmood, PhD, of Harvard Medical School, closed the session with a discussion of AI’s ability to enhance what scientists can glean from image-based pathology data as represented in high-resolution, whole-slide images.

Mahmood gave an overview of several AI-based tools that he and his team developed, including TITAN, a platform capable of analyzing histopathology slide data and generating text-based pathology reports.

But Mahmood wanted to go further than the slide. He aimed “to represent the entire patient,” he said. “Once we have this representation, we can look at risks, predictions, similarity searching, statistical analysis, [and] other kinds of treatment response predictions.”

To achieve this goal, Mahmood and his team built Apollo, a health-care-system-scale, whole-patient foundation model created to represent patients virtually across their treatment timelines.

Trained on data from millions of patients across several hospitals, Apollo, Mahmood said, achieved dependable predictive performance for patient categorization and outcomes.

Apollo also incorporated linguistic reasoning throughout, making the system searchable in natural language. For example, a clinician who wanted to identify patients similar to a case of their own could enter text details from the patient record or even one of the patient’s histopathology slide images.
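Searches of that kind are commonly built by embedding both the query (text or an image) and every patient record into a shared vector space, then ranking by similarity. The sketch below shows that generic retrieval pattern with cosine similarity; the placeholder embed function and the synthetic patient index are assumptions for illustration, not Apollo’s internals.

```python
# Generic embedding-based similarity search, as one might use to find
# "patients like this one." The hash-seeded embed() stands in for a
# trained multimodal encoder; nothing here reflects Apollo's internals.
import hashlib
import numpy as np

DIM = 64

def embed(record: str) -> np.ndarray:
    """Placeholder encoder: derive a deterministic unit vector from the
    record text. A real system would use a trained text/image encoder."""
    seed = int(hashlib.md5(record.encode()).hexdigest(), 16) % (2**32)
    v = np.random.default_rng(seed).normal(size=DIM)
    return v / np.linalg.norm(v)

# Hypothetical patient index: id -> embedding.
patients = {f"patient_{i}": embed(f"record text {i}") for i in range(1000)}

def most_similar(query: str, k: int = 5) -> list[tuple[str, float]]:
    """Rank indexed patients by cosine similarity to the embedded query."""
    q = embed(query)
    scores = {pid: float(q @ v) for pid, v in patients.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

print(most_similar("stage III lung adenocarcinoma, EGFR-mutant"))
```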

Mahmood concluded with optimism for what Apollo might enable clinically.

“You can study [risk] at the level of the patient, but you can also study this at the level of the large cohorts—so that can lead you to additional information, additional knowledge,” he said. “We will continue to work to validate this model and see what the capabilities are.”

The recording of the full session is available for registered Annual Meeting attendees through October 2026 on the virtual meeting platform.

More from the AACR Annual Meeting 2026 »

View a photo gallery of scenes from San Diego, join the conversation on social media using the hashtag #AACR26, and read more coverage in AACR Annual Meeting News and on Cancer Research Catalyst, the official blog of the AACR.

