Publication date: 16.03.2026 21:54:00
Researchers from the University of Southern California and other centers published a paper arguing that chatbots based on large language models (LLMs) overly standardize human ways of expressing thoughts.
According to author Jivara Surati, the uniqueness of human thinking is a precious resource. But when millions of people use the same models, their language and reasoning become uniform.
Diversity of language and perspectives is not just a cultural value. It is essential for creativity, innovation, and collective problem‑solving. Standardization threatens these processes.
The scale of use is striking: in 2025, one‑third of Americans had used ChatGPT, and among teenagers the figure was two‑thirds. Businesses are adopting AI just as actively: 78% of companies reported using it.
The authors note that AI differs from earlier technologies. The Internet accelerated the spread of new social norms and ideas about behavior, and GPS modestly weakened spatial thinking, but LLMs go further: they generate reasoning and wording on users' behalf.
This creates a “ready‑made way of thinking” imposed on millions of users simultaneously. Such an effect has never been observed with any previous technology.
The reason lies in model training: they rely on statistical patterns, which reinforce the dominance of certain languages and ideologies, narrowing the spectrum of human experience.
Loss of diversity leads to a decline in pluralism — the principle that different viewpoints make society resilient. Without it, collective intelligence and adaptability weaken.
The danger is not only that people write alike. AI can subtly change perceptions of what counts as “proper” speech or “correct” reasoning, forming new standards.
Even people who do not use chatbots feel the pressure: when most of those around them speak a "median language," social pressure toward uniformity emerges. This threatens individuality and degrades the quality of public dialogue, the researchers warn.