Blake Shae Kos

I have to say, attending SXSW 2024 was a whirlwind of innovation, inspiration, and insight. Which session was most impactful for you, Ben?

Ben Batman

Blake, you're spot on about the whirlwind of ideas at SXSW. There were many amazing sessions, but one that definitely stood out to me was the chat between Gary Marcus, Steven Rosenbaum, and Jennifer Risi on AI and the Future of Truth. Their conversation spanned an array of topics, but I'm going to home in on not just what we might need to worry about with generative AI and truth, but how PBS and society at large can help preserve it for the future.

Ben Batman

To begin, we must ask ourselves the question: how do we protect the essence of truth in an era where altering reality becomes as easy as a click? The concept of truth isn't just an abstract idea; it's the very foundation on which our society operates: it establishes common ground for trust, enabling cooperation and effective communication. This makes its protection not just important but imperative, especially for those at the helm of AI development and policy-making. As Rosenbaum remarked, “While Truth was never simple, it’s clear that it was simpler than the amplified, re-mixed, digitized, deep-faked world of digitally manipulated Truth.” This comment aptly highlights the complexity introduced by digital innovations, setting the stage for an essential conversation.

This rapidly changing landscape, marked by the swift advancement of AI technologies, has outpaced the development of necessary regulatory frameworks surrounding the training, release, and output of AI tools. This gap has put into the hands of billions powerful AI tools capable of creating media that's indistinguishable from human-made content and of fluently producing misleading information. Despite these technologies being fundamentally rooted in mathematics and statistics, and not inherently malevolent, their potential for misuse, illustrated by incidents like the creation of AI-generated imagery of a Pentagon explosion or the proliferation of AI-driven robocalls, highlights the necessity for regulatory oversight.

The stakes are high, and the response needs to be robust. It's clear that we need a comprehensive framework to regulate genAI's use, aiming to preserve our collective grasp on what's real and what's not. This isn't just about setting rules; it's about cultivating a deep-rooted ethical commitment within the AI community, guided by the principle of “do no harm.”

One promising solution lies in the digital watermarking of content. Initiatives like Adobe's Content Authenticity Initiative (CAI) push for a world where every piece of media comes with a verifiable credential based on the Coalition for Content Provenance and Authenticity (C2PA) standard, ensuring its authenticity from source to screen. Imagine a world where every news clip, photo, or article from trusted sources like PBS comes with a seal of authenticity, a guarantee that what you're seeing is unaltered and real. Our colleagues Nick L. and Useff C. have been experimenting with C2PA, which, if rolled out industry-wide, would help people quickly tell that something has genuinely come from PBS and isn't a misleading piece of faked media made with generative AI. This approach would counter disinformation, AI-generated deepfakes, and other forms of fake or manipulated content, helping audiences distinguish genuine PBS content from imitations when they encounter it on external sites.
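The core idea behind provenance credentials can be illustrated in miniature. The sketch below is a hypothetical simplification, not the actual C2PA format (real C2PA binds a structured manifest to the media using X.509 certificates and COSE signatures): a publisher signs the media bytes, and anyone holding the credential can detect whether the bytes were altered afterward. All names and keys here are invented for illustration.

```python
import hashlib
import hmac

# Stand-in for a publisher's signing key; real C2PA uses a
# certificate chain, not a shared secret.
PUBLISHER_KEY = b"pbs-demo-signing-key"

def sign_media(media: bytes) -> str:
    """Produce a provenance 'credential' over the media bytes."""
    return hmac.new(PUBLISHER_KEY, media, hashlib.sha256).hexdigest()

def verify_media(media: bytes, credential: str) -> bool:
    """Check that the media has not changed since it was signed."""
    expected = sign_media(media)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, credential)

original = b"PBS NewsHour clip, 2024-03-12"
credential = sign_media(original)

print(verify_media(original, credential))                  # True: untampered
print(verify_media(original + b" [edited]", credential))   # False: altered
```

Even this toy version shows why the scheme scales: the credential travels with the content, so verification needs no callback to the original source, only the publisher's verification material.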

Now, I want to leave you all with a question to think about: will the line between AI-generated and human-created content even continue to matter? The fading significance of truth in a digital age, reminiscent of our growing indifference to photoshopped images, is a reality we must face today. In this light, I believe organizations like PBS have a pivotal role to play, not just in adopting these new standards, but in leading the discourse on the importance of authenticity in this digital age. As a publicly funded media producer and one of the most trusted news sources in America, PBS is perfectly placed to help lead the charge in preserving what we know as Truth.

Blake Shae Kos

I love that take-away question Ben! You really highlight the potential role of public media in championing truth through the use of C2PA technology amid the ever-evolving media and AI landscape. Along a similar vein, I believe that unlike the giants of Big Tech, PBS stands in a unique position, not constrained by the same financial pressures and instead driven by the needs of the public and local communities. Your points spark a profound realization for me: PBS could indeed become a beacon of trust in this rapidly changing media technology environment.


Ironically, my thoughts around C2PA implications weren't initially triggered by any of the sessions I attended but rather during a casual lunch conversation with Mikey Centrella, Director of Product on the PBS Innovation team. Over tacos at Veracruz, we delved into how PBS could leverage C2PA to embed provenance credentials in future content destined for spatial computing platforms, ensuring its authenticity and building trust with users—a concept inspired by my "Designing for New Realities" session.

This session, hosted by Daniel Marqusee of Bezi, featured visionaries like Keiichi Matsuda, Michelle Cortese, and Agatha Yu, who collectively explored the intersections of AI and spatial computing. A pivotal moment was the collective call-to-action for the design community to genuinely serve the needs of people amidst these technological advancements.

Reflecting on this, I initially doubted my ability to make a significant impact. However, I soon recognized my unique opportunity as a Designer on the PBS Innovation team to shape the future of AI and spatial computing. This epiphany stemmed from contemplating the loyalties of future AI agents and the nature of the content they promote, guided by the values deeply ingrained within PBS's culture. This line of thought was further fueled by the controversy surrounding an AI-created Under Armour commercial and the broader debate over Big Tech's profit-driven model, highlighted by Frances Haugen's revelations about Facebook.

The emerging narrative was clear: in a world increasingly mediated by AI and spatial computing, maintaining privacy and trust becomes even more crucial. The discussions at SXSW, especially in sessions like "Designing for New Realities," and the subsequent reflections with the Innovation team underscore the need to critically examine the fast-paced, impact-agnostic ethos of Silicon Valley.
It's evident that PBS, with its commitment to educational and trustworthy content, is perfectly positioned to guide the public through the maze of generative AI content. By adopting standards like C2PA to verify content authenticity, PBS not only upholds its reputation as a reliable source but also sets a benchmark for the industry at large. As we navigate this new era, it's clear that PBS's role is not just to adapt but to lead, ensuring that the future of media remains anchored in truth, education, and the public interest. The journey ahead is both exciting and challenging, and I'm eager to contribute to shaping a landscape where technology serves humanity, not the other way around.