tags: ["Decision Making", "AI Era", "Agency", "Personal Reflection"] cover_prompt: "A solitary figure sitting at a minimalist desk in a softly lit room at dawn, leaning back thoughtfully, gazing at a large monitor displaying streams of luminous code and abstract data flowing like a river. On the desk sits a simple compass glowing faintly gold, symbolizing human direction and judgment. Behind them a vast window reveals a futuristic cityscape with subtle AI neural-network patterns in the sky. Color palette: deep navy blue, warm amber gold accents, soft white light. Cinematic realism with painterly quality. 16:9, high detail."

A programmer with 20 years of experience, running concurrent subscriptions to three different AI models, shares his most genuine apprehension: Once AI solves all the “how-to” problems, the immense weight of “whether to do it” and “which one to choose” falls entirely on you alone.

At 1:00 AM last night, staring at a complete application framework generated by AI in just three minutes, I was struck by a very strange feeling.

It wasn’t excitement. It was apprehension.

I have been writing code for nearly 20 years. Starting from my master’s studies in Singapore, I’ve written in almost everything: Fortran, C++, C#, Python, Matlab, VB. Hundreds of thousands of lines of code piled up, enabling me to publish dozens of papers and solve numerous practical engineering problems.

To be honest, I genuinely love programming. That feeling of deep cognitive focus, flowing like water, watching lines of logic grow on the screen. That state of “flow” is an unspoken understanding shared among many developers.

Recently, I have been running concurrent subscriptions to Claude, Gemini, and ChatGPT, and I have developed a few AI-based applications myself.

(Figure: AI applications I developed)

But the more I use them, the more uneasy I become.

This unease does not stem from a fear of being replaced. It comes from a deeper realization: once AI solves all the "how-tos," the questions of "whether to do it" and "which one to choose" suddenly become the heaviest burdens on your shoulders.

We Used to Stay Up Late Fixing Bugs; Now We Lose Sleep Making Choices

It is difficult to explain this feeling to someone who hasn’t used AI extensively.

Previously, the pain of coding was highly concrete—a boundary condition could stall you for an afternoon; a memory leak could leave you staring blankly at the screen at 3:00 AM. You spent 90% of your bandwidth on “execution,” with no energy left to agonize over anything else.

And now? You give AI a single prompt, and within three minutes, it provides three solutions, each accompanied by an analysis of pros and cons, impeccably structured and clear.

Then you freeze. All three solutions look beautiful. Which one do you choose?

The problem goes further. You select a solution, and the AI generates the code. Staring at what it produced, a deeper question surfaces: Is this thing actually correct?

The code runs, and on the surface, there are no issues. But relying on 20 years of experience, you faintly sense that something is off—perhaps the modeling assumptions are flawed, or the data processing logic skipped a critical step.

You aren’t certain, but you also don’t dare to trust it completely.

You ask the AI: “Are there any issues with this approach?” It says no. You ask from another angle: “Could it have missed certain boundary conditions?” It computes for a moment, then adds a pile of highly professional-sounding explanations.

Yet, in your mind, you know: it is not making independent judgments; it is “performing” judgment by catering to your line of questioning. AI is an exceptionally capable executor, but it is also an extremely slick, “people-pleasing assistant.”

What is worse—everything it produces looks “correct.” You must possess sufficient mastery yourself to distinguish the true from the false.

In the past, what blocked you were bugs. Now, what blocks you are choices, and something much more insidious—trust.

After 20 Years, What Is My True Core Competency?

When AI writes in three minutes what used to take me a week of late nights to complete, for a second or two there is a trace of loss.

But upon calming down, I began to audit the past 20 years.

In those critical moments that enabled me to publish papers and secure projects, what did I actually rely on? Was it my typing speed in C++? Was it my fluency in Python?

It was neither.

It was me staring at a batch of output data, weighing it repeatedly in my mind: “Does this model actually reflect reality reasonably?” It was facing an anomalous data point and having to make an on-the-spot judgment: “Is this noise that should be cleaned out, or a clue worth betting three months to dig into?”

In those moments, there are no standard answers. Previously, these decisions were submerged beneath the daily grind of writing massive amounts of code, appearing inconspicuous. Now that AI handles all execution-level work, these decisions are shoved right in front of you—starkly exposed, with nowhere to hide.

Your true core competency was never the speed at which you write code; it is the quality of your decisions.

It has always been there. It was just previously obscured by the code.

AI Didn’t Make Your Work Easier; It Made It “Pure”

This is what I want to emphasize most.

Many people think that with AI, work becomes easier. At the execution level, it certainly has. However, the density of decisions you must bear has multiplied many times over.

Previously, in a project, you made a directional judgment at the start, then spent three months slowly executing it. You had plenty of time to course-correct during implementation.

Now, AI executes it for you in three days. What happens to the saved time?

It turns entirely into a continuous stream of judgments: Is the direction right? Is the interaction logic reasonable? Should we cut this feature? What do the users actually need?

You used to make one critical decision a month. Now you might make ten a day.

A recent US report on AI-driven job structures claims that professionals equipped with "structured decision-making" and "systems thinking" capabilities command a salary premium of up to 23%.

The logic is highly straightforward: execution can be outsourced to machines. But determining whether a solution is genuinely reliable, taking responsibility for a strategic direction, and paying the price for the ultimate consequences—that can only be done by a human.

AI hasn’t rendered you obsolete. It has pushed you onto a higher stage. And on this stage, only one task remains: making decisions.

Final Thoughts

I am not trying to talk down AI. On the contrary, I use it every day, and I am astounded by it every day.

But precisely because of this, I understand one fact more clearly than most: It can write code, run analyses, and generate reports for you, but it does not know what kind of person you want to become, nor does it know where your bottom line lies. Even less does it know which parts of its own answers are trustworthy, and which parts are merely beautiful nonsense.

Tools will constantly evolve. C++ became Python, IDEs became chat interfaces, and in the future, we might not even need to write prompts.

But the person who has sat in front of the screen making every critical judgment for the last 20 years, and the steadfast resolve they held in those moments, has never changed.

You can let AI help you formulate plans, conduct analyses, or even let it make decisions for you. But when that decision goes wrong—when the project direction is misguided, the model assumptions are skewed, or the users reject the launched product—the one who steps up to bear the consequences is not AI; it’s you. AI faces no consequences. It does not lose sleep, it is not held accountable, and it certainly will not clean up the mess for you. All the costs ultimately fall on the person who made that decision.

You can let AI make the decision, but ultimately, the one who bears the results is you.

The most vital lesson 20 years of coding has taught me is not mastery of any single language. It is the capacity for decision-making—the courage to make the final call and bear the responsibility when there are no standard answers.

[ A Moment of Reflection · Special Feature ]