
Impact Reports: Biases in Generative AI + Climate and Environmental Impacts

Updated: Sep 18



Generative AI is transforming how nonprofits and advocacy organizations work—but it also raises urgent questions about bias, equity, and environmental impact. At a recent Center for Digital Strategy session, we brought together two leading voices to share new research and practical guidance:

  • Catharine Montgomery, Founder & CEO of Better Together Agency, who led the Biases in Generative AI impact report.

  • Jim Pugh, CEO of Share Progress and founder of AI Progress Hub, who presented new findings on AI’s Climate and Environmental Impacts.

Both reports were originally presented at the AI for Nonprofits & Advocacy Summit. We’re excited to share them more broadly as free, public sessions—and both Catharine and Jim will be joining us again as collaborators in the upcoming AI for Nonprofits & Advocacy Cohort launching this fall.


Biases in Generative AI: Why Nonprofits Must Pay Attention

Catharine Montgomery has been at the forefront of calling out bias in generative AI systems. What began with her noticing obvious issues—like image generators repeatedly misrepresenting cultural figures or producing narrow stereotypes—has grown into a multi-year survey of how these biases affect nonprofit work.

Key findings from the 2024 Biases in Generative AI study:

  • 33% of consumers say they will abandon AI tools they perceive as biased.

  • 92% expect organizations to address AI bias.

  • Excluding communities from AI-driven messaging undermines trust and damages your mission.


Catharine highlighted how nonprofits using AI for fundraising, communications, and community engagement risk amplifying harmful assumptions if they aren’t vigilant. For example, she shared how MidJourney repeatedly generated only images of Asian women when asked for “FinTech” visuals—showing how bias can creep into seemingly neutral prompts.

Practical steps for nonprofits:

  • Audit AI-driven donor segmentation and engagement tools.

  • Build diverse review panels for AI-generated content.

  • Develop bias reporting mechanisms for staff, volunteers, and community members.

  • Create checklists for responsible AI use across teams.

Catharine closed with a 5-step framework for addressing AI bias:

  1. Commit and set vision at the leadership level.

  2. Diversify data and teams to expand perspectives.

  3. Test, audit, and repeat outputs regularly (a minimal audit sketch follows this list).

  4. Implement real-time filters with clear checklists.

  5. Engage users and continuously improve based on community feedback.
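
For teams wondering what "test, audit, and repeat" can look like day to day, here is a minimal, hypothetical sketch in Python. The `generate()` function is a placeholder for whatever image- or text-generation tool your organization uses (the stub below returns canned captions so the script runs as written), and the descriptor list is illustrative only. The pattern is what matters: run the same neutral prompt repeatedly, tally the descriptors that come back, and flag skews for your review panel.

```python
from collections import Counter

# Placeholder for a real call to your AI tool of choice. The canned
# captions stand in for generated output so this sketch runs as written.
def generate(prompt: str, run: int) -> str:
    canned = [
        "Asian woman at a laptop reviewing stock charts",
        "Asian woman presenting a payments dashboard",
        "young white man in a suit demoing a banking app",
    ]
    return canned[run % len(canned)]

# Which descriptors to tally depends on your mission and the communities
# you serve; this short list is illustrative only.
DESCRIPTORS = ["woman", "man", "asian", "black", "latina", "white", "older", "young"]

def audit(prompt: str, runs: int = 30) -> Counter:
    """Run one neutral prompt many times and count demographic descriptors."""
    tally = Counter()
    for run in range(runs):
        # Token matching (not substring) so "woman" doesn't also count as "man".
        words = set(generate(prompt, run).lower().split())
        tally.update(d for d in DESCRIPTORS if d in words)
    return tally

if __name__ == "__main__":
    counts = audit("a FinTech professional at work")
    for descriptor, n in counts.most_common():
        print(f"{descriptor}: {n}/30 outputs")
    # A heavy skew (one descriptor in nearly every output) is the signal
    # to pause, adjust the prompt or tool, escalate to the review panel,
    # and then repeat the audit.
```

Even a rough tally like this, run on a schedule, turns "audit regularly" from an aspiration into a habit, and gives review panels something concrete to react to.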

Her takeaway: AI can be a powerful partner, but only if nonprofits train it responsibly and actively mitigate bias to maintain trust.



Climate and Environmental Impacts of AI

Jim Pugh turned to another critical issue: AI’s environmental footprint. While much of the public debate has focused on ChatGPT queries “using too much water or energy,” Jim emphasized the importance of context and systemic solutions.

The reality vs. the myths:

  • A single ChatGPT query uses roughly 0.3–3 watt-hours of energy, comparable to streaming video for about two minutes.

  • Watching one hour of Netflix uses as much energy and water as hundreds (or even thousands) of AI queries (see the rough sketch after this list).

  • Training and running large AI models does consume substantial power, but the bigger issue is the overall growth of data centers, which are projected to account for 20% of global electricity use by 2030.
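
To put numbers to the first two bullets, here is a rough back-of-the-envelope sketch. The per-query range comes from the report; the streaming figures are outside assumptions, since published estimates vary enormously (newer device-level estimates sit near 80 watt-hours per hour of video, while older network-inclusive estimates ran to several hundred).

```python
# Rough energy comparison: one hour of video streaming vs. ChatGPT queries.
# Per-query figures (0.3-3 Wh) are from the report; streaming figures are
# assumptions, since published estimates vary widely by device and network.
QUERY_WH = (0.3, 3.0)      # watt-hours per ChatGPT query (low, high)
STREAM_WH = (80.0, 800.0)  # watt-hours per hour of video (low, high estimate)

fewest = STREAM_WH[0] / QUERY_WH[1]  # cheap streaming, energy-hungry queries
most = STREAM_WH[1] / QUERY_WH[0]    # costly streaming, lightweight queries

print(f"1 hour of streaming ≈ {fewest:.0f} to {most:,.0f} AI queries")
# -> 1 hour of streaming ≈ 27 to 2,667 AI queries ("hundreds or thousands")
```

Even at the most AI-unfavorable end of those assumptions, the per-query footprint is small next to everyday digital habits, which is exactly why Jim steers the conversation toward data centers and systemic change rather than individual queries.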

Opportunities for climate-positive AI:

AI isn't only a drain; it can be part of the solution.

  • Energy forecasting: Google DeepMind's wind-power forecasting boosted the value of wind farm energy by roughly 20%.

  • Building energy management: AI-driven building controls have cut energy consumption by nearly 20%.

  • Traffic optimization: Google's Green Light program reduced emissions at intersections by 10%.

  • Eco-routing: Google Maps’ AI-powered directions prevented 2.4 million tons of CO₂ emissions.

Advocacy and policy priorities:

  • Push for clean energy powering data centers.

  • Ensure responsible water sourcing to avoid harming local communities.

  • Demand transparency from tech companies on environmental impact.

  • Focus advocacy on systemic change rather than individual opt-outs.

Jim’s message was clear: Avoiding AI use won’t solve the climate problem. Instead, nonprofits should engage, advocate for systemic reforms, and leverage AI to strengthen climate solutions.


Why This Matters

Both sessions underscore a core truth: AI is not neutral. It reflects biases in society and consumes real-world resources. But with intentional use, it can also expand nonprofit capacity, strengthen trust, and accelerate climate progress.

As Brad Caldana noted during the discussion, nonprofits are facing shrinking budgets and greater demands. Opting out of AI entirely may feel principled, but it risks leaving organizations underpowered in a deeply unequal system. Instead, the path forward is to use AI responsibly—mitigating harm, addressing bias, and pushing for systemic accountability.




Continue the Conversation

Both Catharine and Jim will be contributing to the AI for Nonprofits & Advocacy Cohort, a 12-week program launching this September. The cohort combines strategy, tactical application, and peer learning to help organizations adopt AI responsibly and effectively.






 
 
 
