Everybody wants to start a podcast these days – even AI. What does this mean for the future of science communication?

When we first heard that Google’s NotebookLM could generate AI podcasts from documents and websites, we felt a mix of curiosity and unease.

As researchers at the ANU Centre for the Public Awareness of Science, we wanted to see if the Gemini 1.5 language model behind Google’s new tool could pull off something interesting.

Could it take a dense academic paper and turn it into a podcast that people would want to listen to?

Testing the machine

So, Ehsan decided to run his own experiment. He chose one of his papers – on the contemporary history of water regulation in Iran – and fed it into the system. Within minutes, the AI generated a seven-minute audio segment.

The result? Surprisingly impressive.

The podcast featured two lifelike hosts having an engaging, almost natural conversation. The discussion had a coherent narrative flow and good use of metaphors. What stood out were the human-like qualities of the voices: subtle hesitations, exclamations, bursts of laughter and even pauses for breath as the hosts summarised the 28-page paper.

Except for a few moments, it was easy to forget everything was synthetic.

What surprised Ehsan most was how well the podcast distilled a paper full of academic jargon into a digestible episode, without sacrificing the core message. While it didn’t dive as deep into the topic as the paper or a real conversation would, it was more than adequate for the few clicks it took to produce.

The initial magic faded after we tested more papers.

Sometimes the storytelling was ineffective – unclear summaries, cringeworthy moments and inappropriate emotions. This was particularly the case when we gave it links to reports and articles in Persian and Chinese. While the translations were impressive, these awkward moments showed the AI still works better with English-language sources.

The good news

Over the past decade, podcasts have become a popular medium for breaking down complex ideas and discussing research findings. 

However, producing a podcast can feel overwhelming for researchers. It’s resource-intensive, time-consuming and demands a certain level of creative and technical expertise. AI podcasts offer an easy, appealing alternative and could be a game-changer for researchers in fields that rarely attract media attention. The platform’s translation capabilities could also open up more opportunities for knowledge sharing.

NotebookLM, though still in its experimental phase, can already translate documents in 53 languages into English-language podcasts. This could empower researchers from non-English speaking countries, including those from the Global South, to have a voice in global discussions that are often dominated by English speakers.

Should we be worried? Yes.

Trust is at the heart of science communication. There’s something unsettling about how easy it is to mimic, mass produce and disseminate synthetic podcasts.

The synthetic podcast hosts already sound legitimate, making them engaging and easy to trust.

Given the impressive progress we’ve seen in generative AI, especially in reproducing vocal tone and emotion, AI podcasts will only get better at mimicking existing podcasts. They may soon allow customisations such as using a researcher’s own voice or adjusting emotional tone and speaking speed. This would further blur the line between the real and the synthetic, making it harder to distinguish who’s behind the microphone.

This raises a whole new range of questions. Who controls the stories that shape our understanding of science? And how do we assign responsibility when these podcasts lead to societal harm? Just imagine if the technology had been available during the COVID-19 pandemic.

The ease with which we turned documents into engaging content means AI podcasts could contribute to widespread misinformation and disinformation. By mimicking the credibility of high-quality podcasts, they could lead listeners to mistake false content for legitimate information.

These platforms could be leveraged by nefarious actors to craft manipulated narratives in which science is wielded to serve agendas. With customisation features – like the voice cloning already exploited in scams – bad actors could make misleading content seem credible and spread it at a pace that outstrips any fact-checking, ultimately polluting the information landscape on a large scale.

What does this mean?

Australians trust science and scientists. Four in five people say they want to hear more from scientists about their work. AI podcasts could make this happen at scale.

This could be an opportunity for researchers to lead the use of AI to create accessible content from their research.

But to get there, public institutions, such as universities, also have a responsibility to co-design and direct the development of these technologies. They should advocate for AI systems that ensure scientists and researchers have greater agency over their work, so that AI complements, rather than compromises, the integrity of science communication.

Regulatory standards governing the use of AI-generated content must be established, and institutions should invest in organisational training. Collaborations with industry and the media will be key to managing the transformation of science communication in Australia. This includes formal programs, such as university-operated certification mechanisms, or informal university-media partnerships to sense-check AI content.

We also need appropriate funding for public education programs that improve literacy on how to critically engage with AI content.

Ultimately, safeguarding trust in science calls for a holistic approach. Appropriate policies, organisational capabilities and public awareness are required to identify and mitigate the risks of AI technologies – and to realise their opportunities. We also need adequate contingency plans in case negative consequences do eventuate.

Only then can we better ensure that society benefits from AI-supported science communication.

Want to hear how AI turned this article into a podcast? Listen to it here.

