
AI, watch out for Newspeak! Preserving the friction of intersubjectivity


Former Professor at Savoie Mont-Blanc University

Business Science Institute faculty member




The rise of generative artificial intelligence (Le Chat, ChatGPT, NotebookLM, Midjourney, etc.) is profoundly transforming our research, teaching, and communication practices. These tools, able to produce text, images, or analyses in record time, offer new opportunities: time savings, easier access to information, and the stimulation of creativity. Yet their widespread use raises a fundamental question: what happens to critical thinking when language itself is generated by algorithms optimized for fluidity, consensus, and efficiency rather than for rigor, nuance, and the confrontation of ideas?


1. The risk of an algorithmic "Newspeak"


Generative AI works by identifying and reproducing the most frequent patterns in the data on which it is trained. Its strength lies in its ability to imitate "average," predictable language, often devoid of the rough edges that enrich scientific and intellectual debate. As George Orwell pointed out with his "Newspeak," an impoverished, standardized language is also a language that limits thought.
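
To make this mechanism concrete, here is a purely illustrative sketch (a toy frequency table, not how Le Chat or ChatGPT actually work internally): when decoding always favors the most probable continuation, rarer and more distinctive wordings are filtered out by construction.

    # Toy illustration: choosing the next word from raw corpus frequencies.
    # Greedy decoding always returns the most frequent continuation, so the
    # less common (more distinctive) wordings never surface.
    from collections import Counter

    corpus = [
        "the results are significant",
        "the results are significant",
        "the results are striking",
        "the results are paradoxical",
    ]

    # Count which word follows the prefix "the results are" in the corpus.
    continuations = Counter(line.split()[-1] for line in corpus)

    word, count = continuations.most_common(1)[0]
    print(word)  # "significant": the "average" wording wins every time

Real language models are vastly more sophisticated, but the statistical pull toward the most typical formulation is the same.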


  • Standardization of discourse: Studies show that the use of these tools tends to reduce lexical and stylistic diversity, favoring generic, consensual formulations. For example, analyses of millions of scientific abstracts have revealed a gradual homogenization of vocabulary and sentence structure since the adoption of AI, to the detriment of original expressions and conceptual nuances (a minimal sketch of one such diversity measure follows this list).

  • Illusion of neutrality: AI is not neutral. It reproduces—and sometimes amplifies—the biases present in its training data, while giving the illusion of mechanical objectivity. The resulting text, smoothed and optimized, can mask contradictions, controversies, or blind spots that are nevertheless central to the scientific process.

  • Critical disengagement: The ease with which these tools produce "acceptable" texts risks accustoming us to frictionless language, where complex or subversive ideas are watered down and disagreements are erased in favor of smooth, inoffensive discourse.
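
As announced above, here is a minimal sketch of one common measure of lexical diversity, the type-token ratio. This is an assumed illustration, not the exact methodology of the studies mentioned:

    # Type-token ratio: distinct words divided by total words.
    # A crude proxy for lexical diversity; lower values indicate a more
    # repetitive vocabulary. (Illustrative only; published studies use
    # more refined measures.)
    def type_token_ratio(text: str) -> float:
        words = text.lower().split()
        return len(set(words)) / len(words) if words else 0.0

    varied = "the findings are equivocal, even contradictory, and invite controversy"
    smooth = "the findings are significant and the results are significant overall"
    print(type_token_ratio(varied))  # 1.0
    print(type_token_ratio(smooth))  # 0.7

A drop in such ratios across a large corpus is one signal of the homogenization described above.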



Newspeak


An artificial language created by the totalitarian regime of Oceania in 1984 (George Orwell, 1949), designed to reduce critical thinking by limiting vocabulary and removing nuances. Its purpose is to make it impossible to express subversive ideas (such as "freedom" or "revolt") by:


  • Simplifying grammar and vocabulary (e.g., "bad" becomes "ungood").

  • Reversing the meaning of words (e.g., "War = Peace," "Ignorance = Strength").

  • Eliminating words related to rebellion (e.g., "democracy" does not exist).


Orwell invented Newspeak to illustrate how a totalitarian power can control minds by controlling language. Inspired by Nazi and Stalinist propaganda, he shows that narrowing language narrows thought.


"Orwell's Nova language was a dystopian fiction in which power used language to restrict thought; generative AI, on the other hand, risks doing so without even intending to, by reducing our words to probabilities and our ideas to algorithms."



2. The friction of intersubjectivity: an issue to be preserved




Research and teaching are based on exchange, confrontation, and questioning. This friction of intersubjectivity allows for the construction of meaning, the questioning of certainties, and the emergence of critical thinking. Far from being an obstacle, it is the driving force behind mutual understanding, learning, and the transformation of ideas (inspired by Habermas, Bakhtin, Piaget, and Heidegger). It manifests itself in debates, misunderstandings, or confrontations of points of view, and reveals the living and dialectical dimension of language and thought.


  • It allows us to test the strength of arguments, refine hypotheses, and bring new ideas to the surface.

  • It forces us to clarify our thinking, justify our choices, and accept conflict as a driver of progress.

  • It is the very foundation of scientific ethics, which requires transparency, debate, and controversy.


The uncritical use of generative AI threatens this balance, at the risk of locking us into a solitary dialogue with the machine. By delegating part of our expression to algorithms, we risk losing what makes our profession valuable: the ability to exchange ideas, engage in dialogue, question, doubt, and confront different points of view.


3. Recommendations for responsible use



To avoid sliding into an algorithmic "Newspeak," here are some avenues to explore, individually and collectively:


Use AI as a tool, not as an author:


  • Reserve its use for secondary tasks (rewriting, bibliographic synthesis, generating raw ideas), and always read, rewrite, critique, and enrich the texts produced.

  • Require students to justify and discuss AI-generated content, rather than simply accepting it at face value.


Preserve spaces for unmediated debate:


  • Maintain face-to-face or videoconference discussion times, without systematically resorting to AI to prepare or summarize exchanges.

  • Encourage teaching formats that promote oral expression, improvisation, and controversy (debates, collective writing workshops, critical expression).


Teach critical thinking in relation to AI:


  • Beyond the art of prompting, develop AI literacy.

  • Incorporate modules on algorithmic bias, the limitations of language models, and the ethical issues surrounding their use into the curriculum.

  • Beyond fact-checking for hallucinations, teach students to identify the markers of a text that is "too smooth" (lack of specific sources, absence of contradictions, generic style), too superficial, or too flattering.


Document and problematize the use of AI:


  • In publications, explicitly mention the use of generative tools and explain how they were useful or limiting.

  • In assessments, emphasize the ability to go beyond what AI offers, rather than conforming to it.


Cultivate practices that resist standardization:


  • Prioritize qualitative methods (interviews, observations, verbatim transcripts) that anchor research in reality and its contradictions.

  • Learn to draw on personal experiences and their uniqueness.

  • Encourage handwriting, even if imperfect, as an exercise in thinking.


4. Conclusion: AI at the service of thought, not the other way around


Artificial intelligence is a powerful tool, but it must not become the arbiter of our language or our thinking. Our role as teachers and researchers is to train minds capable of resisting the temptation of algorithmic ease—minds that know how to question, discuss, and embrace the complexity of the world.


As Hannah Arendt said, "Thought itself is a form of dialogue." Let's not allow machines to reduce this dialogue to an optimized monologue. "Language is the house of Being," said Heidegger. Let's ensure that this house remains a place of life, not a catalog of prefabricated formulas.



Further reading:


  • Orwell, G. (1949). 1984. Gallimard. (On the dangers of Newspeak.)

  • Habermas, J. (1981). The Theory of Communicative Action. Fayard. (On the importance of intersubjectivity.)

  • Study on the impact of generative AI on scientific language: Forbes, 2025


(Written with the help of Mistral's Le Chat, prompted and revised by Jean Moscarola, November 2025.)



Jean Moscarola's interview during the Business Science Institute impact seminar



