I talk about using AI in writing in a very public way. I believe that AI can make us better communicators if we know how to use it. But it’s not a risk-free experiment. There’s still a part of me that wonders: Will people think less of me for using AI? Will they stop taking me seriously? Am I risking my reputation as a thoughtful, careful writer?
Using AI in your professional life comes with very real stakes. Reputation is only part of it. A misstep doesn’t just affect you; it can affect your team, your organization, and the people who trust and rely on your work.
Some of these downsides are avoidable, but others aren’t. So, how do we respond? I don’t think dismissing AI entirely is the right approach. Instead, let’s think through some of the risks that come with using AI in this moment, as well as ways to minimize them.
Personal and professional risk: What this means for your credibility
If your job involves writing, your words support your credibility. Your finished product is what readers are evaluating. And now that “AI slop” is becoming so common, readers are scrutinizing text even more closely.
If you’re known for being a good writer, you don’t want your leadership or clients to think that your AI-supported work somehow seems…off. Like, more generic or rushed, or less creative than they expect from you.
There’s also the risk of small mistakes slipping through. AI is very confident, even when it’s wrong. It’s easy to miss errors you’d normally catch. If someone asks where a claim came from, or why something is phrased the way it is, “that’s what ChatGPT gave me” is not going to fly.
Even if your readers don’t call out these issues explicitly, they may start to doubt your judgment. They may see you as someone who let AI do thinking you should have done yourself.
None of this means you shouldn’t use AI. It means that you need to be aware that readers’ expectations haven’t changed. Personal risk comes from erosion of trust, expertise, and credibility. Your name is still on the work. Your judgment is still what people are evaluating.
Team and organizational risk: When there are no shared rules yet
At the level of teams and organizations, the risks of using AI are broader.
Many professional teams don’t roll out AI with a plan. Instead, people just start using it. The risk at the team level isn’t necessarily that someone uses AI “wrong”; it’s that everyone uses it differently and no one talks about it.
That lack of ground rules shows up as uneven quality and writing that can feel impersonal or off-brand. The content doesn’t reflect the organization’s voice or standards. Not knowing what was AI-generated also makes the writing harder to review internally, so when mistakes happen, they can surface publicly in client communications and deliverables.
Over time, these problems can add up to real exposure for the organization: reputational risk, legal risk, and credibility problems that leadership has to deal with later, often without having realized how the work was being produced.
None of this is calculated or malicious. It’s what happens when new technology arrives faster than guidance, policy, and expectations can keep up.
The risk you don’t see right away: Who else ends up carrying it
When teams talk about AI risk, they usually focus on the people doing the writing. That makes sense, because it’s where AI use is most obvious. But exposure to risk is uneven, and some of the biggest consequences affect people you may not be thinking about.
Organizational leadership
When something goes wrong, like a public mistake, a legal issue, or a partner asking uncomfortable questions, it often falls to leadership to explain what happened and why.
Leaders are accountable for the consequences of AI use, but they may not yet be fully aware of how it’s being used, or understand what it means for the work. By the time they’re looped in, the conversation has shifted from How should we be using this? to How do we respond?
Pro tip: Leaders do not appreciate this kind of surprise. It can degrade trust between leadership and staff. When AI use stays invisible, it’s hard for leaders to support and defend thoughtful use because they don’t know what’s happening.
Partners, clients, and collaborators
The people who pay you care about the outcome of the work, not the process. They want a quality product, and they compensate you for your skills and judgment. If AI use is disclosed late (or discovered indirectly), it can feel like a surprise, even if the work itself is solid. Questions about authorship, originality, and diligence arise quickly once trust is shaken.
In collaborative work, risk can cross boundaries. One person’s AI misstep can affect everyone attached to the project. That’s especially true in fields where credibility is shared, like research, health, policy, and advocacy. Now it’s not just your reputation that’s at risk, but theirs as well.
Customers and end users
When people read something your organization publishes, they’re doing so for a reason. They’re trying to understand something, make a decision, or change something. They assume what they’re reading is correct and relevant to them.
Problems arise when people act on information that sounds more certain than it is. AI is good at producing language that feels authoritative even when it’s inaccurate or missing nuance. Readers may rely on information that looks solid but hasn’t been fully checked.
AI tends to compress and generalize ideas, which can erase important distinctions. Guidance that works for a “typical” case may not fit a particular person’s situation at all. For some readers, this can lead to confusion, exclusion, or being misrepresented without realizing it.
Professional standards
The real danger isn’t that AI breaks your work overnight. It’s that it slowly changes what “good enough” feels like. Across professions, there may be a gradual lowering of standards, a growing tolerance for vagueness, and a normalization of shortcuts that erode quality.
But that’s not inevitable. If it’s inattention that lets standards drift, we can uphold quality by paying conscious attention to how we want AI to be part of our professional lives.
How to manage risk thoughtfully
After laying all of this out, it would be understandable to think, Maybe we just shouldn’t use AI at all. But for many teams, that ship has already sailed. People are using it, and there’s an increasing expectation that organizations will adopt it.
A more useful question is how to incorporate AI in ways that don’t undermine trust, quality, or judgment. Instead of trying to manage every possible risk, it helps to focus on a few concrete actions that minimize the downsides.
1. Keep AI in a supporting role, not a deciding one
If a piece of writing requires explanation or accountability, a specific person needs to be able to stand behind it. They should be able to explain where the language and information came from, and “this is what we got from ChatGPT” doesn’t count as an explanation.
2. Add a pause before anything goes out the door
Treat AI-assisted drafts as unfinished until someone has reviewed them for accuracy, tone, and audience fit. Rework anything that feels “too smooth” or vague.
3. Be careful about what you feed into AI tools
Legal, intellectual property, and trust issues often start with the input, not the output. Unless you’re working in a private, organization-controlled AI system, don’t paste in confidential or sensitive material. If you’d hesitate to forward it to the wrong person, don’t share it with a publicly available AI chatbot.
4. Be more conservative when trust is on the line
Not all writing carries the same weight. It’s reasonable to use AI more freely for internal drafts or early thinking. But you should be more cautious with high-stakes and nonpublic content (like client materials or unpublished work), where trust would be hard to rebuild.
5. Talk about AI use instead of hiding it
A lot of the risk around using AI comes from silence. Create some basic expectations, guidelines, or policies so people understand where it’s okay and where it’s not. Fewer secrets = fewer surprises = fewer crises.
Putting AI risk into perspective
Risk exists at every level: in the tools we adopt, the policies we write, and the decisions we make day to day. Most professionals and organizations already build safeguards to mitigate these risks. Their writing goes through editorial review before it’s published. They get legal sign-off on sensitive decisions. They require multi-factor authentication to access confidential systems.
AI isn’t fundamentally different. It’s another tool with real benefits and real risks. The goal isn’t to make it risk-free; that’s not possible. It’s to use the same care and accountability you apply everywhere else.
When you look at it that way, the response doesn’t have to be all-or-nothing. Yes, AI is moving quickly, and it can cause legitimate problems if misused. But alarm rarely leads to durable, helpful policies. Those come from clear thinking about where AI can make things better, where it shouldn’t be used, and who is accountable for the outcome.