The Bureaucracy of Probability
On February 1, 2026, the Stimuleringsfonds Creatieve Industrie brought its new Richtlijn gebruik Generatieve AI into force. The policy formally ends the era of "Don't Ask, Don't Tell" shadow IT in the Dutch cultural sector: it rightly shifts liability to the applicant and demands full transparency about the use of generative tools.
From a "Digital = Governance" perspective, this is necessary progress. But a close reading of the text reveals a fascinating asymmetry. While applicants are permitted to use AI (provided they check for bias and errors), advisors are explicitly forbidden from using these tools in their assessment process. Citing confidentiality obligations (Article 2:5 Awb) and the "High Risk" classification under the EU AI Act, the Fund has stripped committees of the very tools applicants are using to generate the content.
This creates a new reality: Applicants armed with probability engines will be pitching to human committees who must rely entirely on analogue intuition to detect the artificial.
The "Beige" Threat
We are not just governing for fraud or copyright; we are governing for cultural survival. Two recent studies confirm what many of us have intuitively felt:
Homogenisation: Researchers Doshi and Hauser (2025) found that while individual AI-assisted ideas often score high on quality, generative tools dramatically reduce the collective diversity of solutions. AI pulls everything toward the statistical mean.
The "Illusion of Creativity": A January 2026 study in Scientific Reports demonstrated that while LLMs can produce "divergent" ideas, they struggle with true conceptual leaps—the kind that define avant-garde culture.
If our governance structures (the advisors) are not trained to detect this "beiging"—this statistical smoothing of radical ideas—we risk funding a monoculture of highly competent, perfectly formatted, but creatively dead projects.
The Governance Gap
The current guidelines treat AI primarily as a legal and citation issue ("Did you credit the tool?"). We need to treat it as a Competence Issue. With advisors legally barred from inputting applications into AI tools to "check" them or analyze patterns across thousands of submissions, the committee's role changes. They cannot fight fire with fire. They must rely on a new kind of "Friction Test."
The Friction Test for 2026: If an application feels "frictionless"—if the logic flows too perfectly, the jargon is too standardised, and the risk assessment feels overly balanced—it is likely the product of a probability engine. Human creativity is messy. It has gaps. It has "friction."
A Proposal for Committees
We cannot ban the tools, but we can update the filter. Cultural governance bodies must move beyond the "checklist" assessment of feasibility and start indexing for idiosyncrasy.
Ask for the "Why," not just the "What": LLMs are terrible at explaining personal motivation without sounding generic.
Value the "Rough Edges": We need to stop penalising imperfect writing if the core artistic idea is radical. A polished proposal is no longer a proxy for competence; it is often just a proxy for a Pro subscription.
The Richtlijn is a good start for compliance. But to protect the soul of the sector, we need advisors who are brave enough to reject the "perfectly average" in favor of the "flawed but human."
References
Stimuleringsfonds Creatieve Industrie. (2026). Richtlijn gebruik generatieve artificial intelligence (GAI). Rotterdam: Stimuleringsfonds Creatieve Industrie. Effective February 1, 2026.
Doshi, A. R., & Hauser, O. (2025). "The homogenizing effect of large language models on creative diversity." ScienceDirect.
Koivisto, M., & Grassini, S. (2026). "Divergent creativity in humans and large language models." Scientific Reports, 16(1).
Colophon & Transparency Statement
In strict compliance with the 'Richtlijn gebruik GAI' (2026) regarding transparency:
Author & Final Liability: Jorge Alves Lino.
Generative Tool Used: Littlebird (System 2.2).
Scope of Assistance: Research verification (PDF analysis), source synthesis, and stylistic calibration.
Validation: The human author confirms that this text has been reviewed for bias and "hallucinations," and asserts that the Friction Test proposed herein was generated by human intuition, not a probability engine.
