Creation Date: 12.05.2026

Language and Artificial Intelligence – Interactions and Ambivalences

Language competence is shifting from the ability to produce language to the ability to control and evaluate it

The widespread use of generative artificial intelligence (AI) applications, particularly those based on large language models, has fundamentally transformed human-machine interaction. The programming language of the present and future is no longer a cryptic sequence of code to be learnt, but our natural language. As a result, language competence is becoming a decisive factor for the effective use of AI applications, according to our “Steinwurf” author Dr. Michael Ortiz.

Language no longer functions merely as a medium of communication, but as the primary prerequisite for accessing technological resources and a central control mechanism for many AI applications. The success of these applications depends largely on the quality of the input, i.e. the precision of the prompting. Language proficiency here involves far more than correct spelling and grammar. On the one hand, it is about semantic accuracy: those who choose precise terms and can distinguish nuances minimise misinterpretations by the AI. Contextualisation is also crucial – that is, the ability to frame complex scenarios, problem situations and requirements linguistically in such a way that the AI understands the intention behind the task. The ability to structure language is also important here: a logical and nuanced structure of instructions correlates directly with the coherence of the AI output. Users with high formal language competence can therefore utilise AI tools significantly more effectively than users with lower language competence.
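The point about structure can be made concrete. The sketch below is purely illustrative: `build_prompt` and its field names are hypothetical, not part of any particular AI product. It shows how separating role, context, task and output format yields an instruction that is easier for a model (and a human reviewer) to parse than a single vague sentence.

```python
def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Assemble a structured prompt from clearly separated components."""
    return "\n".join([
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        f"Output format: {output_format}",
    ])

# A vague request leaves intent, audience and format implicit:
vague = "Write something about our quarterly results."

# A structured request makes each of them explicit:
structured = build_prompt(
    role="You are a financial analyst writing for non-specialists.",
    context="Q3 revenue grew 12% year on year; margins were flat.",
    task="Summarise the results and their main driver in plain language.",
    output_format="Three short paragraphs, no jargon.",
)
print(structured)
```

The same information is present in both requests' subject matter; only the structured version makes the intention behind the task explicit.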

Critical thinking and evaluation are gaining in importance

It is not only the generative dimension of language that is relevant for the successful use of AI applications, but also receptive language competence in the sense of critical thinking and validation. This involves critically examining the content produced to ensure its accuracy and appropriateness. Recognising ‘hallucinations’ generated by AI is also essential: only those with a deep understanding of language and specialist vocabulary can identify subtle errors in content or logical inconsistencies in AI-generated texts. Added to this is the stylistic evaluation of such texts: language competence enables the tone of AI-generated output to be assessed and adapted to the target audience, rather than generic results simply being accepted unfiltered.

Language also facilitates mastery of the feedback loops necessary when using AI applications. The use of AI applications is often a dialogue that requires iterative work. Those who are linguistically adept and flexible can refine results through targeted adjustments (iteration). Linguistic competence allows one to precisely articulate discrepancies between the actual and target results – a practice often described as dialogic guidance, complemented by techniques such as chain-of-thought prompting, in which the model is asked to reason through a task step by step.
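Such a dialogue loop can be sketched in a few lines. This is a minimal illustration, not a real API: `generate` stands in for any text-generation call, and the mock below simply echoes its last instruction. The point is structural: each articulated discrepancy is appended to the running dialogue, so every subsequent draft is conditioned on all previous corrections.

```python
from typing import Callable, List

def refine(generate: Callable[[List[str]], str],
           initial_request: str,
           corrections: List[str]) -> str:
    """Iteratively refine an output by feeding back target/actual discrepancies."""
    dialogue = [initial_request]
    draft = generate(dialogue)
    for correction in corrections:
        dialogue.append(f"Previous draft: {draft}")
        dialogue.append(f"Please adjust: {correction}")
        draft = generate(dialogue)
    return draft

# Stand-in model for illustration: echoes the last instruction it received.
mock_generate = lambda dialogue: f"draft responding to: {dialogue[-1]}"

result = refine(mock_generate, "Summarise the report.",
                ["Shorter, please.", "Use a more formal tone."])
print(result)
```

The linguistic skill the article describes lives in the `corrections` list: the more precisely a discrepancy is named, the more targeted the next iteration becomes.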

From a sociological perspective, the interaction between language and AI thus implies developments of both levelling and differentiation. Whilst AI applications assist people with lower linguistic competence and specialist knowledge in composing well-founded and professionally formulated texts, a new ‘digital divide’ or ‘AI gap’ is simultaneously emerging: users with high rhetorical and analytical linguistic ability achieve disproportionately better application results and thereby secure competitive advantages, particularly in knowledge-based professions. They use AI as a creative lever, whilst users with less linguistic proficiency often remain stuck with superficial results and use AI applications for trivial tasks.

Drawing on Pierre Bourdieu, linguistic competence can thus be regarded as embodied cultural and digital capital. Those with a nuanced style of expression can control AI applications more precisely and confidently, achieving superior results. This leads to a new form of social stratification: on the one hand, AI enables the levelling out of linguistic deficits (compensation); on the other hand, those with linguistic privilege gain a significant efficiency advantage through AI (augmentation). The compensatory effect applies primarily to written performance. AI applications enable people whose first language is not the working language, or who have lower formal written language proficiency, to produce texts at a professional level. This can lower the barriers to entry into prestige-bound discourse spaces.

Socio-economic hierarchies run the risk of being reinforced

There is also a certain devaluation of the individually acquired habitus: when the ability to compose a flawless and stylistically sophisticated text can be delegated to AI, this classic marker of education loses its function as a social filter. The attribution of this habitus shifts from form (grammar/style) towards conceptual mastery (idea/structure) in the use of language. The socio-economic status of users must also be considered: recent studies show that language models are often based on data that tends to reflect a more sophisticated sociolect. Users who employ simpler or dialectal language often receive less effective or lower-quality results, which can reinforce existing socio-economic hierarchies in the digital sphere as well. Current regulatory processes such as the EU AI Act or the national ‘Digital for All’ initiative therefore specifically promote digital participation as a central educational goal to counteract this reinforcement.

Within a language community, the widespread use of AI applications can bring about profound structural changes in language and communication. Users tend to unconsciously adopt the linguistic style of the AI models they use (imitative learning). Vocabulary favoured by AI is also increasingly finding its way into everyday language. Furthermore, a stylistic levelling occurs: AI tools generally promote a more formal tone, more precise grammar and often more complex sentence structures. Consequently, this standardisation can weaken the social function of language as a creator of identity.

On the other hand, AI applications also alter communication dynamics and act as ‘accelerators’ and ‘filters’ in human interaction: features such as ‘Smart Replies’ lead to faster communication, but also make it more uniform. AI-powered emotion filters can strip negative tones or emotions from messages to meet professional standards, which can, however, reduce the emotional authenticity of texts and language.

Human feedback contaminates AI data

However, the use of AI applications not only influences our language; through our language, we also influence the further development, behaviour and quality of AI applications. The most significant influence occurs via feedback loops. Every time a response is rated or edited, structured signals are generated. These ‘human correction signals’ are used to fine-tune models via ‘Reinforcement Learning from Human Feedback’ (RLHF). Through this, users train the AI to favour patterns that are perceived as useful or likeable.
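A minimal sketch of how such correction signals are commonly collected follows; the data layout mirrors the ‘preference pair’ format widely used in RLHF pipelines, but the class and function names here are illustrative, not taken from any specific framework. When a user edits an AI-generated response, the edited text is recorded as preferred over the original.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # the response the user rated higher or kept
    rejected: str  # the response the user rated lower or edited away

def record_feedback(dataset: List[PreferencePair],
                    prompt: str, shown: str, edited: str) -> None:
    """A user edit is a structured signal: the edited text is preferred."""
    dataset.append(PreferencePair(prompt=prompt, chosen=edited, rejected=shown))

pairs: List[PreferencePair] = []
record_feedback(pairs,
                prompt="Draft a polite reminder email.",
                shown="Pay the invoice now.",
                edited="Could you kindly settle the open invoice at your convenience?")
print(len(pairs), pairs[0].chosen)
```

A reward model trained on such pairs then steers the language model towards the patterns users implicitly endorse – which is precisely how everyday linguistic choices feed back into AI behaviour.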

This human influence on AI can lead to the ‘contamination’ of data. The massive production of AI-generated content presents a paradoxical challenge: as AI systems are increasingly trained on texts generated by other AIs (contaminated data), there is a risk of a decline in quality. Furthermore, high-quality, ‘genuine’, purely human-generated training data could soon be exhausted. Our authentic, creative language thus becomes a valuable ‘premium raw material’ for further AI development.

Ethical and cultural influences, such as societal values and prejudices, are also transferred (often unconsciously) to AI through our language. This can lead to the systematic reproduction of bias: as AI models recognise patterns in large volumes of text, they replicate the assumptions and prejudices present in our historical and contemporary language. Furthermore, the desire for safe and responsible language use – driven in part by regulation – leads developers to incorporate ‘guardrails’ into language models, based, amongst other things, on our current moral and ethical standards.

Furthermore, a trend towards the increasing anthropomorphisation of AI applications can be observed. As AI systems are humanised linguistically, the design of the interfaces is influenced. Developers respond to this human need for ‘brand humanity’ by equipping AI applications with characters and (simulated) emotions to build trust and enhance user-friendliness.

Linguistic performance and cognitive competence are decoupled

The most critical aspect of all these developments is the growing shift in cognitive authority. Until now, linguistic competence has been inextricably linked to individual knowledge. AI applications are decoupling linguistic performance (the result) from an individual’s cognitive competence (understanding). This leads to a crisis of authenticity: in social interaction, it is becoming increasingly difficult to judge whether an interlocutor’s linguistic competence is based on their intellectual capacity or on a skilful algorithmic synthesis.

The relationship between linguistic competence and AI reveals an ambivalent dynamic. On the one hand, linguistic barriers are being broken down; on the other, new, more subtle distinctions are emerging, based on the ability to linguistically dominate and control AI applications. Linguistic competence thus remains a key resource for social positioning even in the age of widespread AI applications, but is shifting from pure productive capacity towards the ability to control and exercise judgement. Consequently, the successful use of AI applications cannot substitute for human education; rather, it is that education’s consistent application within a new, enhanced digital environment.

Contact

Dr. Michael Ortiz (author)

Management
Steinbeis Beratungszentren GmbH (Stuttgart)
www.steinbeis.de
www.steinbeis.de/su/606
