Sohan Basak

Supercharged, Not Superseded: The Unexpected Future of Developers in an LLM World

The Future of Software Development in an LLM-Driven World

Software development feels as if it has been around forever. In truth, programming is a young discipline: its origins trace back only to the middle of the last century, and it solidified into a mainstream career around the ’80s. Despite that short history, software engineering has revolutionized industries with innovations ranging from airplane autopilots to social media. Today, however, a new technology has emerged: Large Language Models (LLMs), which could disrupt the field even further.

LLMs can take instructions in natural language and generate code, which has raised questions: Will software development become an obsolete career? If LLMs can turn a few lines of description into executable code, could they replace software engineers altogether?


The Argument for LLM-Driven Development

Some argue that software engineering as a profession is indeed endangered by LLMs, and they make compelling points:

  • Availability and Efficiency: LLMs work around the clock, need no breaks, and can iterate on a problem far faster than any human team.
  • Reliability: Model capabilities improve steadily with each generation, and LLMs can already handle simple, well-specified coding tasks faster than human engineers.

But to assess whether LLMs will indeed replace software engineers, we need to examine the role of the human engineer and whether LLMs can truly fulfill it.


Will Programmers Share the Same Fate as Computers?

Historically, a “computer” was a job title: people employed to perform calculations, often aided by specialized devices. Those roles were eventually taken over by machines, which we now call computers. It seems plausible that “programmers” could face a similar fate, yet there are crucial differences that make a direct parallel misleading:

  1. Programming Isn’t Deterministic
    Unlike hand computation, software engineering has no single correct procedure. Product requirements are often fluid and imprecise, and it takes a skilled engineer’s interpretation and judgment to translate that ambiguity into precise, actionable steps.

  2. Domain Knowledge and Judgment
    A seasoned engineer must deeply understand both functional and non-functional requirements, especially in complex fields like stock trading. The initial “requirement gathering” phase requires human judgment, foresight, and dialogue with multiple stakeholders.

  3. Complexity Management
    Engineers are skilled at untangling “spaghetti code” and at choosing data structures and algorithms that keep software efficient, readable, and maintainable (see the sketch after this list).

  4. A Developed Cognitive Skillset
    Reading and comprehending complex code isn’t innate. It takes years for engineers to develop the cognitive ability to mentally model software systems accurately.

  5. Abstraction and Pattern Recognition
    Engineers don’t just write code — they also abstract, repeat, and evolve patterns that make future development faster and more reliable. This ongoing learning and skill acquisition is difficult for an LLM to replicate.
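
As a small illustration of point 3, consider removing duplicates from a list while preserving order. This is a hypothetical example (the function names are mine, not from any particular codebase): both versions are correct, but an engineer recognizes the quadratic scan in the first and swaps in a set for constant-time membership checks.

# A minimal illustration of complexity management (hypothetical example).
# Both functions remove duplicates while preserving order; only the
# choice of data structure differs.

def dedupe_naive(items):
    # O(n^2): each membership check scans the growing result list.
    result = []
    for item in items:
        if item not in result:
            result.append(item)
    return result

def dedupe_fast(items):
    # O(n): a set gives constant-time membership checks.
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

print(dedupe_fast([3, 1, 3, 2, 1]))  # [3, 1, 2]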

In short, LLMs may produce code, but they struggle with the broader demands of software engineering, which goes beyond simple syntax generation.


English as a Programming Language?

While LLMs can “speak” English, we should consider why programming languages exist in the first place. Programming languages are designed to be:

  • Readable and Writeable: Easily used and reviewed by humans.
  • Unambiguous: Ensuring that the same expression has a consistent meaning.
  • Efficiently Processable: By both humans and computers.

English, on the other hand, is notoriously ambiguous and context-dependent, making it a poor choice for specifying technical intent.

Consider this example:

Create a program that takes two inputs, sums them, and outputs the result.

On the surface, this seems clear, but in reality:

  • Input Ambiguity: Are the inputs integers? Floats? Strings?
  • Summation Details: Should the program handle non-numeric inputs? What about error handling?
  • Output Requirements: Where should the output go? Should it be formatted?

These details are easily encoded in a programming language but often overlooked in English. In fact, when an LLM interprets a prompt like this, it often fills in gaps without the user’s explicit guidance, leading to unpredictable assumptions.

Example

Let’s compare a human-written Python program to one generated by an LLM:

Human-Generated Code:

a = int(input("Enter number a: "))
b = int(input("Enter number b: "))

print(f"The sum of a and b is {a + b}")

LLM-Generated Code:

# Get input from the user
num1 = float(input("Enter the first number: "))
num2 = float(input("Enter the second number: "))

# Calculate the sum
sum_result = num1 + num2

# Print the result
print(f"The sum of {num1} and {num2} is: {sum_result}")

The LLM has introduced assumptions (floating-point inputs, a particular output format) and extra steps that may be irrelevant or even incorrect given the missing context. As such, English alone is ill-suited to encoding precise technical intent. For contrast, a version that pins down those details explicitly might look like the sketch below.
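
This is a hypothetical sketch that simply picks one answer to each of the open questions above: integer inputs, explicit handling of non-numeric input, and plain output to the console.

# Hypothetical sketch that resolves the prompt’s ambiguities explicitly:
# inputs are integers, non-numeric input is rejected with a message, and
# the result is printed to standard output.

def read_int(prompt: str) -> int:
    while True:
        raw = input(prompt)
        try:
            return int(raw)
        except ValueError:
            print(f"'{raw}' is not an integer, please try again.")

a = read_int("Enter number a: ")
b = read_int("Enter number b: ")
print(f"The sum of a and b is {a + b}")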


The Human Factor

Beyond coding, software engineering is a human-centric field. Several human factors, starting with the Dunning-Kruger effect, illustrate the dangers of over-reliance on LLMs:

  1. Dunning-Kruger Effect
    As engineers rely more on LLMs, they might spend less time directly coding, resulting in a decreased understanding of the underlying mechanics. This can create a false sense of confidence and control, as the LLM-generated code is increasingly used without a complete understanding of its assumptions.

  2. Cognitive Biases and Lack of Low-Level Knowledge
    Engineers build domain-specific instincts through years of hands-on practice. Delegating that practice to LLMs could erode those instincts, and with them the critical, seemingly tangential questions that catch major issues early.

  3. Lack of Contextual Accuracy
    LLMs do not yet have the nuanced understanding needed to navigate complex project environments: aligning stakeholders, uncovering hidden assumptions, and managing expectations. These skills are irreplaceable, especially in client-facing roles.

In summary, LLMs might excel at technical output but lack the depth of insight and judgment that human engineers bring.


The Problem of “Dead Reckoning” and Cumulative Drift

Aviation uses a technique called Dead Reckoning to navigate without landmarks, using specific headings and speeds. However, minor deviations can lead to cumulative drift, where small inaccuracies add up over time, requiring constant recalibration.

LLMs face a similar issue. Small errors in LLM-generated code compound across steps, leading to unanticipated bugs: even with near-perfect accuracy on each isolated element, the combined output can deviate significantly from the intended behavior. A back-of-the-envelope calculation makes this concrete.
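
As a rough, hypothetical model (it assumes errors are independent across steps, which real codebases don’t guarantee), suppose each generated change is correct with probability 0.99. The chance that a long chain of such changes is entirely correct decays quickly:

# Back-of-the-envelope model of cumulative drift (hypothetical numbers,
# assuming errors are independent across steps).
per_step_accuracy = 0.99

for steps in (1, 10, 50, 100, 500):
    p_all_correct = per_step_accuracy ** steps
    print(f"{steps:>4} steps -> P(everything correct) = {p_all_correct:.3f}")

# With 0.99 accuracy per step, 100 unreviewed steps are all correct only
# ~37% of the time; 500 steps, less than 1% of the time.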

Calibrating LLMs in Development

One possible answer is Human-in-the-Loop (HITL) systems, where a human reviews and corrects the model’s output at regular checkpoints, resetting accumulated drift much like a pilot recalibrating against a known landmark. This feedback loop could let LLMs handle bounded tasks reliably while being held to higher standards of performance and safety.
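
A minimal sketch of such a loop, with hypothetical function names standing in for a real model call and a real verification step:

# Minimal HITL sketch. `generate_patch` and `run_tests` are hypothetical
# stand-ins for a real model call and real automated checks.

def generate_patch(task: str) -> str:
    # Placeholder for an LLM call that returns proposed code.
    return f"# proposed code for: {task}"

def run_tests(patch: str) -> bool:
    # Placeholder for automated checks (tests, linters, type checkers).
    return True

def human_review(patch: str) -> bool:
    # The checkpoint that resets drift: a person approves or rejects.
    answer = input(f"Approve this patch?\n{patch}\n[y/n] ")
    return answer.strip().lower() == "y"

def hitl_step(task: str):
    patch = generate_patch(task)
    if run_tests(patch) and human_review(patch):
        return patch   # accepted: merged into the codebase
    return None        # rejected: the human recalibrates the task

hitl_step("sum two user-supplied integers")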


Concluding Thoughts

While LLMs are revolutionizing coding, they are far from fully replacing software engineers. At its core, software engineering requires:

  • Contextual understanding
  • Judgment and decision-making
  • Continuous learning and adaptation

Until LLMs can match human engineers in these areas, they’ll remain a powerful tool — but not a complete substitute — for software development. The future of engineering might see more collaborative efforts between humans and AI, rather than full automation.
