As large language models (LLMs) become increasingly integrated into everyday workflows, a peculiar trend has emerged: the use of so-called caveman prompting. This style strips language down to minimal, imperative fragments - "summarize this", "fix code", "explain". While efficient on the surface, this approach raises an important question: is this actually the best way to interact with systems that are fundamentally trained on natural human language?
Two common motivations are often cited. First, cost minimization: many professional LLM services are billed per token, incentivizing shorter prompts. This mirrors the early era of telegrams, where brevity was economically rational - yet no one writes emails to colleagues in telegram style today, because clarity and nuance matter more than the marginal savings (even though telegram-style writing would be more energy efficient and, at scale, require less storage). Second, there is a deliberate attempt to maintain psychological distance: some users prefer to treat the system strictly as an abstract tool and adopt a terse, command-like tone to avoid anthropomorphizing it.
This short comment explores why, in the opinion of the author, more natural, polite, and complete language - full sentences, context, and even simple markers of courtesy like "please" and "thank you" - can lead not only to better results, despite the slightly higher energy consumption, but also to a better cognitive and professional experience.

Language Models Reflect the Data They Are Trained On
Modern LLMs are trained on vast corpora of human-generated text, including books, articles, discussions and collaborative problem-solving exchanges. These sources are not merely repositories of information; they encode patterns of reasoning, explanation and interaction.
Importantly, high-quality information is often embedded within high-quality communication. Technical explanations, scientific discourse and expert collaboration typically occur in structured, well-articulated and respectful language. In scientific writing in particular, the initial drafting phase is often followed by a much longer period of discussion among co-authors, focused almost entirely on refining phrasing to ensure that statements are precise, unambiguous and technically correct. By contrast, fragmented or hostile language tends to correlate with lower informational density and less cooperative intent.
When prompts mirror the structure and tone of high-quality language, they provide stronger signals to the model. A well-formed request implicitly encodes expectations about depth, clarity and completeness. In this sense, prompting is not only about what is asked, but how it is asked.
Prompting as Cognitive Extension
Interacting with an LLM is not just about issuing commands; it is a form of externalized thinking. For many users, prompts serve as a way to structure ideas, refine questions and iteratively develop understanding. In many cases, the act of carefully writing the prompt already solves a substantial portion of the problem, as it forces the user to clarify assumptions, constraints, and goals before any response is generated.
Reducing this interaction to minimal fragments can disrupt this process. Writing in full sentences supports internal coherence: it allows thoughts to unfold naturally, maintains context, and encourages precise formulation of questions. This is particularly relevant in technical or scientific domains, where nuance and assumptions matter. In this sense, prompting can serve a role similar to discussing a problem with colleagues: the primary benefit is often not the response itself, but the act of articulating and unfolding one's own reasoning.
From a cognitive perspective, "caveman prompting" may introduce unnecessary friction. It forces a translation step between natural thought and artificial expression, which can degrade clarity and slow down reasoning rather than improve efficiency.
Professional Spillover Effects
Communication styles are not isolated. The way one interacts with tools can influence broader communication habits, especially in environments where rapid context switching is common.
Adopting an overly terse, imperative style when working with LLMs may inadvertently carry over into human interactions. In collaborative settings - whether in engineering teams, research groups or operational environments - clarity and politeness are not merely social niceties; they are functional requirements for effective coordination.
Maintaining a consistent communication style across both human and machine interactions helps preserve professionalism and reduces the risk of unintended tone shifts. This is particularly relevant in high-interruption environments, where switching between contexts is frequent and often abrupt.
The Role of Politeness Markers
Words such as "please" and "thank you" do not significantly increase the token cost of a prompt, but they can serve as structural and contextual cues. They signal a complete request rather than a raw command and often lead to more natural phrasing overall.
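To put the cost point in perspective, a rough back-of-the-envelope sketch is shown below; the per-token price is an illustrative assumption, not a rate from any particular provider.

```python
# Rough estimate of the marginal cost of politeness markers in a prompt.
# The per-token price is an illustrative assumption, not an actual provider rate.
PRICE_PER_MILLION_INPUT_TOKENS = 3.00  # assumed USD per million input tokens
EXTRA_TOKENS = 4                       # "please" and "thank you" amount to a few tokens

extra_cost_per_prompt = EXTRA_TOKENS * PRICE_PER_MILLION_INPUT_TOKENS / 1_000_000
print(f"Extra cost per prompt: ${extra_cost_per_prompt:.6f}")  # about $0.000012

# Even at a thousand prompts per day, the courtesy overhead stays around a cent.
print(f"Daily overhead at 1000 prompts: ${1000 * extra_cost_per_prompt:.4f}")
```

Under these assumptions, the courtesy markers cost on the order of a hundredth of a cent per prompt, which supports treating the token argument as negligible for individual users.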
While LLMs may not possess emotions in a human sense (which is actually debatable), they are sensitive to patterns. Polite phrasing tends to co-occur with more detailed and context-rich requests in training data. As a result, including such markers may indirectly steer the model toward more comprehensive and cooperative responses.
More importantly, these markers reinforce the user's own communication habits. They preserve a mental model of interaction that aligns with real-world collaboration rather than command execution.
Efficiency vs. Effectiveness
A common argument for caveman prompting is efficiency: fewer words, faster input, quicker results, lower cost. However, this assumes that the first response is sufficient and requires minimal iteration.
In practice, underspecified prompts often lead to underspecified answers, necessitating follow-up clarifications. The time saved in typing may be offset by the time spent refining the output. A slightly longer, more structured prompt can reduce iteration cycles and improve overall throughput. Additionally, for many skilled users, input speed is not the limiting factor: at typing speeds approaching several hundred key presses per minute, the marginal cost of adding a few extra words - such as a brief clarification or a polite marker - is negligible. In most cases, the dominant factor is the cognitive effort required to formulate the problem, not the mechanical act of entering text.
From a systems perspective, this can be framed as an optimization problem: minimizing total interaction time rather than minimizing input length. In many cases, natural language prompts are closer to this optimum.
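One way to make this framing concrete is a toy model of total interaction time; all numbers below are illustrative assumptions rather than measurements.

```python
# Toy model: total interaction time = time to compose the prompt
# plus the expected number of rounds times the per-round wait-and-review time.
# All parameter values are illustrative assumptions, not measurements.

def total_time(compose_s: float, expected_rounds: float, round_s: float = 60.0) -> float:
    """Seconds spent composing the prompt plus waiting for and reviewing responses."""
    return compose_s + expected_rounds * round_s

# A terse prompt is fast to type but tends to need follow-up clarifications.
terse = total_time(compose_s=10, expected_rounds=3)     # 190 s under these assumptions

# A fuller, well-specified prompt takes longer to write but often converges in one round.
detailed = total_time(compose_s=90, expected_rounds=1)  # 150 s under these assumptions

print(terse, detailed)
```

The crossover point obviously depends on the task, but the sketch illustrates why minimizing input length is not the same as minimizing total interaction time.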
Conclusion
Caveman prompting is not inherently wrong, and there are contexts - quick lookups, trivial transformations - where it may be sufficient. However, for complex tasks, technical discussions, and sustained workflows, natural and polite language offers tangible advantages.
It aligns with the training distribution of language models, supports clearer thinking, preserves professional communication habits and can improve overall interaction efficiency. In this sense, using full sentences and simple markers of courtesy is not about formality - it is about leveraging the system in a way that is consistent with how it was designed and how humans think.
As LLMs continue to evolve, the boundary between tool and collaborator may blur further. Regardless of where one draws that line, maintaining high-quality communication remains a robust and future-proof strategy.