The Incomplete Script

Reflections on burnout, disillusionment, and questioning the stories we were told

A publication of first-person essays naming what work feels like — without hero framing. These are lived reflections, not advice.

What It Feels Like Competing With AI-Enhanced Colleagues

Quick Summary

  • Competing with AI-enhanced colleagues often feels less like direct rivalry and more like working under uneven conditions that no one fully names.
  • The real strain is not only speed. It is the feeling that effort, polish, and pace are being compared across different levels of tool leverage.
  • That comparison can quietly reshape confidence, motivation, and how fair collaboration or evaluation feels.
  • Workers are already more worried than hopeful about AI in the workplace, which helps explain why ordinary differences in output can start feeling personally destabilizing.
  • A healthier response requires more than telling people to adapt. It requires clearer expectations, cleaner norms, and more honest discussion of what still counts as human contribution.

I do not always experience this as competition in the obvious sense. No one is necessarily trying to beat me. No one is openly hostile. We may all still be polite, collaborative, even supportive. But something shifts once I realize that some of the people around me are producing work under conditions that no longer feel the same as mine.

That is what makes this hard to explain cleanly. On paper, we are still peers. We are still on the same team, still doing the same kind of work, still being judged inside roughly the same system. But once AI enters the workflow unevenly, the emotional experience of comparison changes. The output may look similar enough to be compared directly while the process behind it becomes much less comparable.

What does it feel like competing with AI-enhanced colleagues? It feels like trying to keep up in an environment where the visible results are easier to compare than the invisible conditions that produced them. Their draft comes back faster. Their summary looks cleaner. Their response sounds more complete on the first pass. And even when I understand intellectually that tools are involved, my nervous system still interprets the difference as information about my own adequacy.

That direct answer matters because the strain is not always about whether AI is “allowed.” It is about what happens when people working under different cognitive and workflow conditions are still read through a shared standard of speed, polish, and apparent effortlessness.

This is why the article belongs closely with what it feels like trying to keep up with AI at work and what it feels like when AI introduces unspoken expectations. The real pressure often starts before formal replacement and before anyone clearly states that the benchmark has changed. It begins when the surrounding pace and polish become harder to interpret without comparing myself to them.

Key Insight: The problem is not only that coworkers may produce faster with AI. It is that their faster output can start feeling like evidence about my own worth even when the conditions are no longer equivalent.

Why this starts feeling like competition even when nobody says it is

Most people do not need a formal contest to feel comparison pressure. Workplaces create comparison naturally. We notice who seems quick, who sounds sharp, who gets praised, who finishes first, who writes cleanly, who appears calm under load, who seems to generate ideas without visible strain. That was already true before AI entered the picture.

What changes with AI is that the signal becomes harder to read. A polished result used to reveal more about a person’s pacing, drafting process, revision time, and cognitive style. Now the same polish may reveal some of those things, but it may also reflect stronger prompting, faster iteration, AI-assisted structuring, or comfort using tools that make early drafts look more complete than they would have before.

This means the comparison becomes emotionally unstable. I still see the result. I still feel the gap. But I am less certain what the gap actually measures. Is it skill? Tool fluency? Better workflow design? Willingness to use AI more aggressively? A genuine difference in judgment? Some combination of all of them?

This is one reason the topic overlaps with how AI makes me doubt my existing skills. Once the meaning of strong output becomes blurrier, confidence has less firm ground to stand on. I start responding not only to the work itself, but to what I think the work may now imply about me.

A concise definition helps here. Competing with AI-enhanced colleagues means experiencing performance comparison in a workplace where some peers are using AI in ways that alter speed, polish, drafting, synthesis, or communication quality, while the social and evaluative systems around them still invite direct comparison.

The direct answer is that it starts feeling like competition because visible output stays legible while invisible working conditions become less equal.

  • I compare my slower draft to someone else’s faster polished version.
  • I compare my effortful thinking process to their apparently smoother output.
  • I start interpreting their speed as my inadequacy even when tools changed the workflow.
  • I feel pressure to match visible standards without always knowing what produced them.
  • I begin treating ordinary differences in output as possible evidence that I am already behind.

The hardest part is not only that other people move faster. It is that I cannot always tell how much of the difference I am supposed to experience as a reflection of me.

Why uneven tool use feels different from ordinary workplace inequality

Workplaces have never been perfectly equal. Some people are naturally faster writers. Some are better improvisers. Some have more experience, better context, stronger memory, or more confidence in the room. But AI changes the feel of inequality because it alters the relationship between visible output and visible effort.

That matters psychologically. If someone has more experience than I do, the comparison may sting, but it still feels somewhat interpretable. If someone is faster because they have been doing the job ten years longer, I may not like the gap, but I can understand it. With AI enhancement, the gap becomes stranger. A result may look like superior ease even when what I am actually seeing is partly a different tool environment.

This is why the topic also belongs beside why I feel behind even when I’m experienced and why I feel pressure to work faster because of AI tools. The destabilizing part is not just that I am comparing upward. It is that the comparison itself feels less stable while still remaining emotionally powerful.

The result is a strange double message. Intellectually, I may know the field has changed and that tools are now part of normal work for many people. Emotionally, I can still feel the older reflex: if someone else looks more capable, maybe I am less capable. If someone else sounds cleaner, maybe I am falling behind. If someone else generates more polished output with less visible strain, maybe my process is becoming harder to defend.

The Uneven Conditions Trap
A pattern where workers are still compared on visible output even though the conditions shaping that output have changed unevenly through AI use. The person recognizes that the comparison is not clean, but still feels its emotional force as if it were a direct measure of personal adequacy.

What happens to confidence when coworkers start looking machine-accelerated

Confidence often changes before anything official does. I may still have the same role, the same experience, the same domain knowledge, and the same responsibilities. But if the people around me begin sounding faster, cleaner, or more instantly articulate because AI has become part of their process, my own competence can start feeling slower in a newly exposed way.

That is what makes the comparison so corrosive. It does not have to end in job loss to start reshaping how I interpret myself. I may begin second-guessing my pace. I may start feeling embarrassed by how long careful thought takes. I may begin noticing each rough draft as if roughness itself has become a weakness rather than an ordinary stage of thinking.

This is exactly why the article belongs near what happens to motivation when AI feels smarter than me and why I worry that AI could replace more than my job. The threat often reaches beyond performance and into identity. Once colleagues start appearing AI-enhanced, it becomes easier to treat my own unassisted process as something not just slower, but faintly less legitimate.

The concise direct answer is that confidence weakens when visible comparison outruns my ability to interpret what is being compared fairly.

Key Insight: Confidence erodes fastest when I cannot tell whether I am losing to superior judgment, stronger tool use, different standards, or simply a workplace that now rewards a different kind of visible output.

What most discussions miss

What most discussions miss is that the emotional problem is not only “AI makes people faster.” The deeper problem is that faster output becomes socially meaningful. It starts affecting how capable someone appears, how current they seem, how much space they take in meetings, how often their drafts become the starting point, and how quietly other people begin orienting themselves around their pace.

That means the issue is not only technical. It is relational and evaluative. It changes who feels ahead. It changes whose timing becomes the informal benchmark. It changes which kinds of work start looking dated, too effortful, or insufficiently optimized. And all of that can happen before anyone says openly that the standard has changed.

This is why the stronger adjacent links are not only AI-fear pieces, but also evaluation and morale pieces like what it feels like when AI undermines team morale and why I feel less trusted when managers use AI for evaluation. The damage is not simply inside the worker. It is inside the social meaning of work once some outputs start feeling machine-supported and everyone still has to figure out what counts as fair comparison.

The deeper structural issue is that organizations often normalize AI use faster than they normalize new norms for interpreting output. So workers are left inside a comparison economy whose terms changed faster than its language did.

The strain comes not only from faster coworkers, but from being asked to live inside a comparison system whose rules changed without becoming fully discussable.

What the research helps clarify

The broader worker mood helps explain why this experience can feel so charged. In February 2025, Pew Research Center reported that 52% of U.S. workers said they were worried about the future impact of AI in the workplace, while only 36% said they felt hopeful. Pew also found that 33% felt overwhelmed and that 32% believed AI would lead to fewer job opportunities for them in the long run, compared with only 6% who thought it would create more. Pew’s 2025 survey on worker views of AI matters here because it shows that comparison with AI-enhanced coworkers is happening inside a broader emotional climate of uncertainty, not neutral experimentation.

The OECD’s workplace AI reporting adds another relevant layer. OECD notes that AI can improve performance and working conditions in some settings, but it also highlights concerns about worker trust, agency, and uneven outcomes when implementation is poorly governed. It further reports that training and worker consultation are associated with better worker outcomes. The OECD’s findings on employers and workers and its broader AI and work overview matter because they support a mixed picture: performance may improve while trust, fairness, and the worker experience remain unsettled.

The American Psychological Association’s 2025 Work in America survey is also relevant because 54% of workers said job insecurity had a significant impact on their stress levels at work. APA’s 2026 coverage of workplace uncertainty further emphasized that younger and mid-career workers were especially affected by that insecurity. APA’s 2025 Work in America report and its 2026 summary on work uncertainty help explain why a coworker’s visibly accelerated output can feel like more than a neutral process difference. It lands in a context where many workers already fear diminished security and shifting standards.

The research does not say that all AI-enhanced coworkers are threats. It supports a narrower and more grounded conclusion: when workers are already uneasy, unequal tool leverage and unclear comparison standards are likely to feel psychologically significant even if no formal harm has occurred yet.

Why this changes teamwork as much as it changes self-worth

It would be easier if the issue stayed private, but it usually does not. Once I start feeling the gap between my own pace and someone else’s AI-accelerated pace, the emotional tone of teamwork changes too. I become more self-conscious in meetings. I hesitate more before sharing half-formed ideas. I become more aware of how polished something sounds when it reaches the group. Collaboration starts feeling more evaluative and less relaxed.

That is why this topic connects directly to how AI changes relationships with my team and why transparency about AI use doesn’t always reduce anxiety. The problem is not only my private insecurity. It is that the group itself begins to reorganize around a new kind of output and a new kind of visible competence.

The more that happens, the harder it becomes to tell whether I am simply adapting more slowly, whether the norms are genuinely unfair, or whether everyone is privately struggling with the same comparison pressure while trying not to show it. That ambiguity is part of why the strain can feel so persistent. It is hard to respond proportionately when the conditions keep shifting faster than the workplace can explain them.

Why “just use AI too” is usually too shallow

A common response to this kind of discomfort is obvious: just use the tools too. Sometimes that is reasonable. In many settings, learning the tools is a practical necessity. Refusing to engage with them at all may create avoidable disadvantage.

But “just use AI too” is often too shallow because it treats the problem as purely instrumental. The problem is not only access. It is also meaning. Even if I start using the same tools, I may still feel unsettled by what the comparison culture is doing to the value of effort, rough thinking, and human pacing. I may still feel less sure about what counts as distinct contribution. I may still resent that visible competence is increasingly shaped by tool fluency in ways no one fully knows how to talk about honestly.

This is why the article also belongs near why fear of automation affects how I approach career planning and why transparency about AI use doesn’t always reduce anxiety. The issue is not only what I can technically do to keep up. It is also how much of myself I feel I have to reorganize around a changed environment before the environment becomes emotionally livable again.

Access to the tool does not automatically solve the deeper question of what the tool is doing to the meaning of skill, effort, and fairness around me.

A misunderstood dimension

A misunderstood dimension of this problem is that it is not simply envy. It can look like envy from the outside because another person is visibly faster or more polished. But what is often happening underneath is something more unstable: uncertainty about what is fair to compare, what counts as excellence now, and whether my own slower process is still a legitimate way to work.

That is a more serious problem than ordinary competitiveness. It means the worker is not only struggling with another person’s success. They are struggling with the breakdown of a stable frame for self-evaluation. Once that frame weakens, even ordinary coworker differences can start feeling more threatening than they objectively are.

The risk is not only discouragement. It is motivational distortion. I may begin working harder for worse emotional reasons: not out of curiosity, contribution, or mastery, but out of fear of looking increasingly obsolete next to outputs that are shaped by conditions I do not know how to interpret cleanly.

Key Insight: This often feels worse than normal competition because it is not only about losing. It is about losing clarity on what the competition is now measuring.

What steadier adaptation would actually require

I do not think the answer is pretending the pressure is imaginary. In many workplaces, the pressure is real. The pace is changing. Output norms are changing. Tool use is changing how work is seen and evaluated. Denying that would be shallow.

But steadier adaptation needs more than personal hustle. It requires clearer expectations around tool use, more honest conversation about comparison, and better boundaries around what counts as evaluation-worthy output. It requires leaders and teams to admit that if some people are working with stronger AI leverage than others, then visible results cannot always be interpreted as though nothing about the process changed.

It also requires a more believable answer to a human question: what still counts as contribution when polish, speed, and first-pass completeness are increasingly easier to generate? Until that question is addressed more directly, workers will keep absorbing these differences as private inadequacy instead of seeing them as partly structural changes in the conditions of comparison.

Because in the end, competing with AI-enhanced colleagues does not only feel hard because they move faster. It feels hard because the comparison keeps landing in the oldest, most vulnerable place: the part of me that still wants to believe my effort means something stable about my value. And once that meaning starts wobbling, even ordinary workplace differences can begin to feel much larger than they look from the outside.

Frequently Asked Questions

What does it mean to compete with AI-enhanced colleagues?

It means working in an environment where some peers are using AI in ways that affect speed, polish, drafting, idea generation, or communication quality, while everyone is still being socially or professionally compared through visible output.

The key issue is that the results remain easy to compare even when the conditions behind them are no longer equal in the same way they once were.

Why does this feel worse than ordinary workplace competition?

Because ordinary competition usually has more interpretable inputs. You can often understand differences in experience, role scope, or skill. AI changes the relationship between visible output and visible effort, which makes comparison harder to read fairly.

That uncertainty is psychologically destabilizing. You still feel the pressure of the gap, but you are less sure what the gap actually means about you.

Can AI-enhanced coworkers hurt confidence even if they are not trying to compete?

Yes. Intent is not the only factor. A coworker does not have to be hostile for their visibly accelerated output to affect how you interpret your own pace, rough drafts, or process.

Much of the strain comes from ambient comparison rather than open rivalry. The nervous system often responds to visible difference before it has a fully rational explanation for what the difference means.

Are workers broadly uneasy about AI at work?

Yes. Pew Research Center reported in February 2025 that 52% of U.S. workers felt worried about the future impact of AI in the workplace, while only 36% felt hopeful. A third also said they felt overwhelmed.

That matters because comparison with AI-enhanced coworkers is happening in a workforce that is already emotionally primed for uncertainty, not one approaching the issue with calm neutrality.

Does this mean using AI is unfair?

Not automatically. In many workplaces AI use is becoming a normal part of the job, and tools can genuinely improve efficiency or reduce repetitive effort. The problem is not necessarily the tool itself.

The bigger issue is whether organizations still interpret output fairly once tool use becomes uneven, and whether workers are left to absorb structural changes as personal failure instead of discussing them openly.

Why doesn’t “just use AI too” fully solve the problem?

Because the issue is not only access to the tool. It is also the changed meaning of effort, skill, pace, and legitimacy in the workplace. Using AI may reduce some disadvantage without resolving the emotional disruption caused by new comparison standards.

A person can adopt the tools and still feel uneasy about what those tools are doing to the culture of evaluation and to their own sense of distinct contribution.

What would make this feel less corrosive?

Clearer norms, better communication, and more honest acknowledgment that visible results are no longer always clean indicators of comparable process. Workers need a more believable explanation of how output will be interpreted and what still counts as distinct human contribution.

Without that, people are likely to keep translating structural changes into private inadequacy, which is one of the fastest ways this kind of pressure becomes emotionally corrosive.
