

Why is it important to push back when an institution we love and trust chooses to promote AI technology?
As our petition against the Calgary Library's AI Collaborative Artist Residency approaches two thousand signatures (wow!), it has become a place for the initial backlash to collect into a community. I don't know whether this petition will help Library leadership reconsider their programming, but I am doing everything in my power to keep engaging with that leadership and to give voice to the harms that AI adoption inflicts on artists and on the Library's own reputation.
Where this petition is most meaningful is in mobilizing a community reluctant to pay the human, environmental, and artistic costs for whatever meagre benefit AI companies claim to provide.
This fight matters because libraries are (or should be) the antithesis of AI: places where access to information is free, where diverse perspectives are shared, and where the human spirit is elevated. If the people leading the development of Artificial Intelligence wanted to improve the experience of living, that would be incredible, but at this juncture the primary feature of AI is to replace humans in the workforce, including creatives. That's not hyperbole or luddite mania: this week, Sam Altman, CEO of OpenAI (and a man who would absolutely use human beings as batteries for the Matrix), said that the purpose of strip-mining intellectual property for their datasets is so that they can sell intelligence back to us like a utility, "on a meter." The dream AI developers offer is a future where education and joie de vivre are luxuries, sequestering thought and creativity behind yet another subscription model.
The technology of AI does not have to be exploitative. It does not need to destroy livelihoods, the environment, or quality of life.
Of course there will be beneficial use cases, but...
Real harms are being caused by the mass roll-out of a disruptive technology, and there is no reason to trust that the people developing it have our best interests at heart. When I research Generative AI, it is clear that producing AI slop 'art' is not the future product. AI in art is the lure. The product is the data that users are feeding into the datasets, pouring their dreams, fears, and thoughts into Large Language Models. AI, more than anything, is facilitating the largest acquisition of private data at a scale previously unimaginable, posing immense risks to privacy - everything that gets entrusted to an AI chatbot or image generator in confidence becomes raw material for future outputs. Worst of all, it circumvents any legal accountability until the legal system catches up.
Just because this AI Residency is provided within current "legal" frameworks and operates within the Library's "internal policies" does not mean that the law has kept up with Artificial Intelligence use cases, or that those internal policies are sufficiently informed by community engagement. The concerns that come with AI are rapidly evolving, and I am equally concerned that the laws and policies the Library cites as guardrails are inadequate protections against the epistemic damage of promoting AI as a creative tool.
This isn't a fight where we can "choose" to use AI or not - to let whichever artists want to use it do so and, for the rest of us, to ignore it.
Artificial Intelligence has no "opt out" and implicates everyone in a future where this technology is everywhere. This is a non-consensual exploitation of personal data and intellectual property, and it is irresponsible for a public cultural centre to say, "oh, guess this isn't for you" - there NEEDS to be dialogue with the communities that this affects, IN SPITE of their moral resistance.
Thank you for reading this update - please continue to share this petition. In my next update I will lay out next steps for the future of this protest.
Have a lovely day!
Jordan
Reading List:
https://thehumanist.com/commentary/losing-our-minds-how-ai-is-erasing-the-space-of-thought/