Greetings from Juergen
Hi all,
This week feels like scanning the horizon for what's already happening. We're looking at how 87% of musicians now use AI in their workflow—not as a curiosity, but as infrastructure for everything from production to promotion. Adobe rolled out text-prompt video editing that lets you adjust pacing and fix frames without regenerating entire clips, while their Content Authenticity Initiative tags AI-generated content with visible metadata. And in what might be the biggest practical shift for creators, the U.S. Copyright Office ruled that purely AI-generated work—without meaningful human input—isn't eligible for copyright protection.
Meanwhile, the cultural responses are getting more interesting. Louis Bury's year-end survey argues that some of 2025's most inventive creative work happened outside traditional galleries—in Skibidi toilet videos, NFT collections, and TikTok lore he calls "digital folk art." Caroline Dewison's miniature dioramas are so skillfully crafted that people assume they're AI-generated, which says something about how our baseline for astonishment has shifted. And then there's Marco Rubio banning Calibri at the State Department for being too associated with DEI, replacing it with Times New Roman in the name of "professionalism"—because apparently font choices are now cultural battlegrounds.
What strikes me about these stories together is how they map the actual terrain of 2025: not hypothetical futures, but the practical questions creators are wrestling with right now about tools, ethics, authorship, and what counts as legitimate artistic practice.
Film & Video
Adobe's Firefly AI Now Lets You Edit Videos by Describing Changes
Adobe just rolled out major updates to Firefly AI that transform it from a simple video generator into a full editing workspace, as Manisha Priyadarshini reports for Digital Trends. The standout feature? Text-prompt editing that lets creators make precise tweaks—like adjusting pacing or fixing off-looking frames—without regenerating entire clips. Add timeline editing, transcript-based cutting (delete a line of dialogue, and Firefly trims the corresponding video), and integrations with Runway's Aleph model for precise changes and Topaz Labs' Astra for upscaling to 4K, and you've got Adobe's bid to stay competitive in an increasingly crowded AI video space.
This reminds me of Guillermo del Toro's blanket rejection of AI for his projects. But what's actually happening on the ground tells a more nuanced story. Filmmakers, editors, and production designers are incorporating AI-powered tools—not to generate entire scenes from prompts, but to make small edits, rearrange clips, and upscale low-quality footage. I personally use Topaz for photo enlargements, taking my iPhone photos and creating large-scale prints from them, even from cropped portions, and it does a remarkable job.
The reality is that this isn't a black-and-white question of rejecting or approving "AI" as a blanket term. The conversation is becoming more granular: which AI-powered toolsets are acceptable for artists to use in their own practice without substituting for the creative process or the entire work product.
Avoiding AI-powered toolsets is a lost cause at this point, so the conversation needs to shift toward the ethics of usage—preserving human connection, authenticity, intent, and impact rather than letting machines do all the work.
Societal Impact of Art and Tech
The Year in Digital Folk Art: Much of 2025's Creative Innovation Happened Outside the Art World
Louis Bury's year-end roundup in ARTnews surveys the thriving world of digital folk art—from Skibidi toilet videos to NFT collections, from AI-generated political satire to TikTok lore—making a compelling case that some of 2025's most inventive creative work happened outside traditional gallery walls. His list of ten examples spans everything from Dean Kissick's examination of "Vulgar Images" to Aidan Walker's brilliant analysis of meme culture's formal innovations.
It's an unusual perspective to think of NFTs or Skibidi toilet videos or 67 memes as "folk art." The label initially feels odd, almost too generous. But the more I sit with it, the more it makes sense. These works do share that devotional quality—creators making things for niche communities, often for little or no compensation, driven by the pure act of creation rather than art market validation.
What strikes me most is how this reframing challenges our assumptions about what counts as legitimate artistic practice. If folk art has always been work created outside official channels, then meme creators and TikTok artists might be the truest folk artists of our era.
Can the traditional art world learn something from digital folk art's economics of abundance rather than scarcity?
AI in Visual Arts
AI Art Just Caught a Cold — and It Might Be Contagious
The U.S. Copyright Office delivered what Art-Sheep calls a "juridical chill" to the AI art world: works generated entirely by AI, without meaningful human creative input, are not eligible for copyright protection. If your creative process ends at typing a prompt and clicking download, the legal system now treats that output as unprotectable. The guidance demands demonstrable human authorship—editing, arrangement, compositional choices—before copyright can be claimed.
This is one of the few small wins for artists in 2025. We've touched on this theme before—the distinction between AI-generated imagery and actual art. This ruling matters because it makes pure prompt-to-output work harder to monetize. Without copyright protection, those banking on the "prompt-to-profit fantasy" suddenly find themselves without the legal foundation needed to sell or license their outputs.
The ruling defends something deeper than revenue: it protects the cultural idea that creativity involves deliberation, revision, and judgment. AI can generate forms, but it can't shoulder the moral friction and intentional failure that make art meaningful.
Will this stop AI generation? Not remotely—but it forces a question about what we value when we talk about authorship.
Artificial Intelligence and Creativity
International Online Conference: Museums Between AI, Fakes and the Power of Knowledge
The Belvedere in Vienna is hosting its eighth annual conference on "The Art Museum in the Digital Age" this January, bringing together over 500 participants from around the world to tackle questions about AI, fake news, and what it means to be a trusted source of knowledge in an era of algorithmic bias and AI-generated content. Art Daily reports the five-day online event will feature sessions on everything from UNESCO frameworks for AI to the ethics of large language models in museum practice.
But here's what caught my attention in the conference description: the phrase "museums can rethink their role as trustworthy spaces for knowledge dissemination." That presupposes museums exist primarily as knowledge vessels. But is that really what art is about? Knowledge? Dissemination? I'm genuinely curious what roles we think museums should play beyond being trusted information sources.
Sadly, Vienna is a bit far for me to travel just to find the answer, but the question feels worth sitting with: when did we decide that a museum's relationship to truth and knowledge should define its purpose?
Maybe the real question isn't how museums maintain trust, but whether we've been asking them to do the wrong job all along.
Definitely Not AI
Fantasy Art by Caroline Dewison That Looks so Good That People Think It's AI
When you first see Caroline Dewison's miniature dioramas—cascading waterfalls, ethereal ghosts, lush forests all contained in tiny boxes—you might wonder if they're AI-generated. That's the strange compliment skilled analog artists receive now, as Shanilou Perera notes in this DeMilked feature. Dewison works with Jesmonite, acrylic paint, delicate brushwork, and even a 3D-printing pen to create these fantastical worlds.
I've heard "proudly analog" several times this week from people returning from Art Basel, where a rejection of the digital seemed to be a major theme. The irony hits hard: the extraordinary craftsmanship that goes into creating these miniature worlds means Caroline's work gets mistaken for AI at first glance.
It's sort of sad to think that the amazement and wonder such works would have inspired ten years ago has been replaced by amazement that they're not AI-generated.
Has our baseline for astonishment fundamentally shifted?
Design
Marco Rubio Bans Calibri Font at State Department for Being Too DEI
Secretary of State Marco Rubio has officially declared war on Calibri, banning the font at the State Department for being too associated with DEI initiatives. As TechCrunch's Julie Bort reports, the sans-serif typeface—adopted in 2023 specifically to make documents more accessible for people with dyslexia and vision impairments—has been replaced with Times New Roman in what Rubio calls an effort to "restore decorum and professionalism." Yes, you read that right: a font choice is now a political statement.
Look, I live in Florida, so watching our political leaders wage cultural wars against typography feels oddly on-brand. Part of me was genuinely hoping Rubio would've picked Comic Sans instead of Times New Roman. That would've been perfectly Florida man-like—chaotic, inexplicable, and guaranteed to make everyone equally uncomfortable.
The absurdity here isn't just that we're politicizing letterforms—it's that we're actively choosing to make government documents harder to read for people with disabilities, all in the name of "tradition." Meanwhile, even The Times of London, which commissioned the font in the 1930s, abandoned Times New Roman almost twenty years ago. What does it say about our priorities when accessibility improvements get branded as radical excess?
I wonder what font the next cultural panic will target?
Creator Platforms and Tools
Adobe MAX 2025: Promise and Protection in the Age of AI
Adobe's annual MAX conference in Los Angeles drew ten thousand attendees eager to witness the latest creative tools—from Generative Expand that lets you reshape image compositions to Harmonize, which seamlessly blends lighting across composite images. Aria Lee's coverage for Design Milk walks through six transformative features, but the real story isn't about what these AI-powered tools can do.
What impressed me about Adobe's approach is how they're handling the ethics around AI integration. They're not just throwing these capabilities at users and hoping for the best—they're building in accountability from the start.
Their Content Authenticity Initiative adds metadata tags to AI-generated content, making the provenance visible. As Deepa Subramaniam from Adobe put it, using AI in their applications is absolutely a choice, and they want that choice to be visible in the asset because knowledge is powerful. The fact that creators themselves are designing these features feels less like corporate disruption and more like a coalition for creation.
Can transparency and creator choice actually make AI integration feel less threatening to artists?

