Actually, it’s more like a herd of elephants in the room.

I’ve been struggling lately thinking about the moral and ethical issues surrounding AI use. Am I making things worse for the planet, society, and the economy by using ChatGPT every day? And if so…what should I do about it?
Using AI benefits me personally, but when millions of us do it, major problems emerge.
Here are a few of the biggest issues right now:
- Environmental impact: The data centers that power AI use large amounts of energy.
- Reinforcing corporate power: Using AI strengthens a handful of already dominant tech companies and makes billionaires even richer.
- Copyright and fairness: AI tools are trained on massive amounts of creative work, often without the creators’ permission.
- Privacy and data exposure: Your prompts, uploads, and interactions may be logged and analyzed without your knowledge or consent.
- Bias and stereotyping: AI can produce outputs that reinforce harmful assumptions about groups of people.
- Impact on other people’s work: Automation using AI tools can threaten the careers of writers, artists, translators, freelancers, and others.
AI use is an example of a classic problem economists call the “tragedy of the commons,” a situation in which everyone doing what’s best for themselves ends up hurting the group. My daily use of ChatGPT is like that: I’m doing something that’s good for me, but bad for all of us.
That sounds kind of evil when stated so plainly, but we live by this logic all the time. I drive a car, which gets me where I need to go but also contributes to environmental degradation. I shop on Amazon, which is convenient and often cheaper but makes Jeff Bezos even more obscenely rich.
The benefit I personally receive from these things is greater than the cost I personally take on. It’s not realistic to ask or expect people to stop using AI, just as it’s not realistic to ask them to stop driving cars. So now what?
Here are some questions I’ve been chewing on:
- If I use ChatGPT less, I miss out on productivity, opportunities, and growth. Is that fair to me and to the people who depend on me?
- If I tweak my personal AI use, will it actually change anything?
- We signal what matters by our choices (think: company boycotts). Should I treat my AI choices as a statement about my values?
- If we ask individuals to use AI less, are we letting tech companies and policymakers off the hook for creating system-level solutions?
I don’t think there are black-and-white answers. These issues are complex and changing quickly. But that doesn’t let us off the hook from thinking about them.
So, let’s get practical. Here’s a way to consider the ethics of AI in your own life:
- Learn about the environmental, social, and economic impacts of AI, and help others understand the impacts.
- Clarify your own values and what matters most to you (e.g., privacy, fairness, creativity, sustainability).
- Think about what to do when your AI use clashes with your values. Does it make sense to limit your use in certain ways? Choose one tool over another? Use your voice to advocate for change? What feels right to you?
We all need to speak up for better rules, tools, and practices that make AI fair and sustainable for everyone.
If you’re feeling inspired to learn more, may I suggest checking out:
- The AI Now Institute: An influential research group on AI’s social, political, and labor impacts
- The Algorithmic Justice League: An organization focused on addressing real-world harms and making AI more equitable and accountable
- The Markup: A nonprofit journalism organization that advocates for technology serving the public good
- The AI Ethics Brief: An accessible weekly digest from the Montreal AI Ethics Institute summarizing top research, developments, and debates in AI ethics
I’d love to hear your thoughts on the ethics of AI use and how you’re making your own decisions.