Following and observing how AI tools have been slowly incorporated into our everyday existence has recently grown to occupy a non-insignificant patch of my personal mental real estate. Ever since the release of OpenAI’s ChatGPT in 2022, which shattered the illusion many people had held that the slowly creeping progress of technological advancement, automation and artificial intelligence would be a localized problem concerning blue-collar workers in faraway lands, the discussion has only increased in intensity. Now, AI has become a fully entrenched frontier in a tribal war waged both online and in real life, a potent cultural lightning rod and a new nexus of virtue signaling.

While I have written extensively on this (as I mentioned, this topic has been on my mind for a while now), I think we have arrived at a juncture where even the very idea of having a conversation about AI tools and automation is permanently tainted, and where your opinion on the subject, for or against, may lead your interlocutor to infer a whole bunch of things about your entire worldview. It is an honestly fascinating phenomenon.

Look, I’m not going to lie here: I have always been fond of new advancements, and I have been using AI tools in a wide variety of contexts. I use ChatGPT and Claude to proofread my writing, pressure-test my ideas, help me narrow down some of the weaknesses of my craft, and underline some of those darlings in need of euthanizing. I use these tools for research in conjunction with other sources and search engines, to generate images and graphics when I can’t find anything suitable in widely accessible libraries, and sometimes I just like having fun with them. Once, I converted a massively unwieldy work report into a 20-minute podcast that tried to summarize it and make it a bit more entertaining, which was hilarious and terrifying in equal measure, and it also saved me some precious time. On another occasion I generated a pop song about the importance of wearing personal protective equipment in the lab and shared it with my colleagues as a gauche way of raising awareness of health and safety issues. I giggled when I found a YouTube channel where someone posted a whole bunch of rock and metal classics covered in the style of yacht rock. ChatGPT organizes my workout splits and sometimes suggests cooking ideas when I don’t know what to do with a leftover courgette, some mushrooms, a can of tuna and three slices of bacon.

But in 2026 it seems we have moved way beyond that kind of gleeful acceptance of these new tools and apps. Just the fact that I have nothing against using an AI-powered image search to find out the name of a flower I found in the park might make some people think that I belong to an enemy tribe. I’m not sure I can pinpoint the exact enormity of this transgression just yet, but it is definitely somewhere between admitting that you don’t recycle and claiming that climate change is a leftie hoax; not quite yet the equivalent of tweeting that “all lives matter” or wearing a MAGA hat in public unironically, but it’s damn near close. And the slide towards extremes continues unabated.

We now hear filmmakers announce that generative AI was not used in the making of their movies, and we can surely expect this phrase to join the PETA-calming statements assuring us that no animals were harmed during production. Many artists openly diss AI adoption or boycott companies that choose to incorporate AI-derived solutions into their workflows. They do have a point. Ensuring that copyright holders are compensated when AI models trained on copyrighted data are used to generate new content is an ethical quagmire. This must happen, but nobody really has a good idea of how to bite into this problem, especially because it will most likely involve litigating against obscenely rich corporations. I would have no objections if I had to pay a subscription fee to use AI tools, which would then finance the process of reimbursing creators whose work contributed to model development, just as I have no qualms paying a fee to access streaming platforms. But this process must start at a structural level, led by national governments and supranational regulatory bodies.

However, we are now at a point where many people’s positions have hardened in a way similar to those held by climate activists or militant vegans. We are on a path away from peaceful coexistence between people who use technology and those who do not. In fact, AI has become an added wrinkle in the climate debate as well: you might sometimes hear that generating an image of Joaquin Phoenix walking a phoenix on a leash in the park consumes as much electricity as charging your phone battery from zero to full. That’s because this magical technology “lives” inside sophisticated chips grouped in data centers that consume energy and water (used as a coolant) like Galactus. It honestly sometimes feels like using AI for whatever reason makes you look like that guy who drives an overgrown diesel truck with single-digit gas mileage.

It’s not really a result of cultural envy, because everyone has ready access to these tools. Instead, it’s more likely a product of cultural threat. Typically, you’ll find that the most vocal opponents of the adoption and consumer use of AI tools are the people who feel the most threatened and potentially impacted by them. Frankly, this has been the biggest plot twist in this narrative, because we have been aware of progress in automation for decades now, and the main prescription coming from journalists and pundits for factory and shop workers at risk of being replaced by automated tools was to roll with the punches, retrain and… learn to code. Now the same people who had thought their comfortable middle-class jobs in marketing, accounting and journalism were safe are at risk of being affected too, and the irony seems completely lost on them.

Admittedly, the reshaping of the job market and the global economy as a result of the worldwide incorporation of artificial intelligence solutions by numerous companies is a multi-layered, multi-variate problem that requires legislative solutions and complex regulatory frameworks, and regardless of what happens, the world is just going to be different. But just as shouting “just stop oil” and throwing soup at works of art might have drawn attention to the issue of climate change while also galvanizing opposing voices, shouting “fuck AI” is not going to solve this problem. In fact, this is one of those problems that won’t go away just because many people don’t like it, much like climate change. We can modulate it to an extent, slow some things down, put pressure on our governments to adopt new regulations, but we won’t turn back time. Without significant technological breakthroughs, it will be impossible to extricate excess carbon dioxide from the atmosphere and ship it into outer space. And the AI genie won’t go back into the bottle. Anyone who thinks otherwise is simply dreaming.

At the same time, engaging in digital veganism is not going to affect anything or anyone either. Your personal decision to refrain from using ChatGPT or DALL-E is not going to turn the tide of technological progress. In fact, even if every consumer in the world did the same, nothing would change, because the biggest sway lies with massive companies, and when there’s money to be made, there’s nothing you can do to stop it… unless you are a government working hand in glove with other administrations to curb corporate ambitions. And to be perfectly honest, tech companies probably want you to reframe your opposition to AI as a new frontline in a culture war, because it effectively diffuses the responsibility for addressing the underlying issues, just like we were conned into believing that replacing plastic straws with paper ones, recycling plastics and going vegan would somehow fix the deteriorating global climate. It won’t. If we all stopped eating meat and recycled all our plastics and drove EVs, the world would be largely where it is now, because we’d have solved only a fraction of the underlying issue at a massive cost. Meanwhile, a single sortie of a B-52 bomber burns as much fossil fuel as your car does over a period of seven years. And the same logic applies here. If we all stopped making funny yacht rock songs and generating images for no other reason than that it’s a fun pastime, AI data centers would continue to be built at a similar pace, and the reinvention of the global economy would probably not be affected in any measurable way.

What this achieves, though, is deepening division along new lines. Virtue signaling to your personal echo chamber that you shun AI is not going to change the world. Tech companies will still go on their merry way doing what they think will generate the most money for their shareholders. But we might lose friends, allies and companions if we allow this issue to divide us the way other cultural battle lines did in the past. Whether we like it or not, the world is being reshaped, and many of us will be forced to reinvent ourselves on short notice. I don’t know what I will be doing three years from now. But my own personal Millennial experience, which included living through a string of theoretically once-in-a-lifetime calamities in quick succession, has taught me that rolling with the punches like Rocky Balboa and pushing through adversity like Sam Witwicky is the only solution.

The only way out is through. But it’s easier if we all do it together. Apes together strong and all that. So, the take-home lesson is to tone down this tribal rhetoric and focus on deliverable solutions instead of taking a performative stand that will amount to just as much as gluing yourself to the road. Cultural division has become the most reliable way of ensuring that nothing structurally changes.

