Do AI’s promises justify Big Tech’s bad behavior?
Lizzie Irwin from the Center for Humane Technology explains why AI companies owe us transparency—and what we can do about it.
When tech companies race to build AI data centers, they rarely mention that doing so will raise your electric bill, strain the electric grid, deplete local water supplies, and increase carbon emissions. Lizzie Irwin, policy communications specialist at the Center for Humane Technology, is sounding the alarm on the environmental and economic toll that comes with rapid data center deployment.
“I candidly don’t think the promises of AI justify this bad behavior,” said Irwin.
In this episode of Plugged In: The Full Conversation, Kristina Zagame sits down with Irwin to discuss why the public is paying for AI through their utility bills, how AI's fast deployment mirrors social media's unchecked growth, and what policymakers—and regular people—can do to push back against an industry moving at full throttle with little regard for the consequences.
Kristina Zagame: Okay, so today we're really talking about AI, trying to understand data centers, energy use, electricity bills, how it all comes together. And where you work, the Center for Humane Technology, focuses on ensuring that technology like AI—like social media—really serves humanity.
And of course, AI has a ton of benefits. A lot of people are finding great use for it. But considering the toll AI data centers have alone—just like high CO2 emissions, power grid strain, high electric bills for consumers—would you say that this technology is truly serving us?
Lizzie Irwin: So I think it's first important to take a step back and think about this industry as a whole. As we know, it's an industry known for developing full speed ahead without much concern for the externalities. And in this case, that's coming at the expense of our communities, consumers, and the environment. But this is not, in some sense, unlike the way that social media was impacting our society, and there wasn't much regard for that until we took a big step back.
And what we know now is that these LLMs consume an enormous amount of energy, often drawing on nonrenewable resources. And it's also coming at the expense of communities who live near data centers and everyone who is seeing increases in their energy bills. And that certainly has a direct impact on individuals.
But I don't think it's that the tech itself is at fault. It's the paradigm we're in: the way things are being innovated right now has to consider environmental protection and sustainability in order to protect people long term. And you can think about this in a business planning sense too. The way things are operating right now threatens how we think about long-term planning, both for businesses and for our energy, our water, our land sustainability. And those are really open questions that we're certainly not getting answers to.
So in thinking about a better way forward, there's a real responsibility that policymakers and tech developers need to be taking on. And so people are raising this alarm: this is not the inevitable direction of things. It's not impossible to imagine this tech being developed in concert with human needs, but there needs to be a real shift in how it's being developed. It's clearly not operating like that on its own, you know?
Kristina Zagame: Just to get an idea—a lot of these LLMs, large language models like ChatGPT, like Grok, are becoming increasingly mainstream. People are using them in place of what they would have done with a Google search. So we can get a grasp on it, can you compare the amount of energy used in a ChatGPT query to something that we do daily, like boiling water or leaving your lights on? How does it compare?
Lizzie Irwin: It's difficult to say on the individual level—it's a bit of an apples-to-oranges comparison. That's partly because there's not a lot of good publicly available data, and the data we do have lags considerably behind the tech we're using today. But what we do know is that this ecosystem of tech use requires a lot of energy, and it's only increasing.
And thinking specifically about how these labs are hooked up to data centers—one thing we've seen is that the largest grid in the country says it's facing its biggest power shortage ever as a direct result of data center development. The Department of Energy projects that data centers could account for about 12% of electricity consumption by 2028, up from less than 4% in 2025.
And when we think about the bottom line, The New York Times reported that nationally, average electricity rates have gone up more than 30% since 2020, after years of only modest year-over-year increases. So while we can't say how much an individual ChatGPT query takes, we can see in the aggregate that the way this ecosystem operates is really stressing our resource supply.
And the tech is being developed and deployed at a rate that is really fast, and it's unfair and maybe unreasonable to ask people to simply adapt. I think a lot about the parallels to earlier climate activism, where we put a lot of scrutiny on people's individual choices—you should be recycling, you should go vegetarian—when we know that a greater impact on the system would come from the big movers and shakers in the industry, particularly those making the decisions.
How are they considering energy efficiency in their design plans, for the LLM itself as well as the wider infrastructure at play? Is the environmental concern making a measurable impact on their bottom line? And if not, how are we going to shift those incentives to make sure the industry is actually serving people's best interests? People shouldn't have to feel bad about their individual choices—these are the cards they're dealt—and that burden should really be lifted from the individual consumer.
Kristina Zagame: As you mentioned earlier, AI is going full throttle. And the technology companies promise a lot of benefits—revolutionizing healthcare, education, scientific research. I've even seen claims about helping with climate change. But weighing that against this massive energy and infrastructure strain, in your opinion, is it worth the trade-off? How do we even begin to weigh the competing factors?
Lizzie Irwin: We're kind of put in this false dichotomy in some sense that innovation has to happen in a certain way for things to turn out well and for us to really gain those benefits. But I candidly don't think that the promises justify this bad behavior. The environmental concerns are real, and people's pocketbooks—that is real.
And so we have to reject the premise the industry is selling us—that we can only achieve this one way. There are historical examples where what the industry provided us wasn't good and a real public reckoning was needed. And innovation doesn't have to come at the expense of safer, better technology—if anything, thinking about how this impacts everyday people, and those externalities I touched on earlier, can produce better technology.
Kristina Zagame: And when these companies are marketing their AI products, they usually don't talk about the energy cost. Do you feel like that should be part of the conversation?
Lizzie Irwin: I think we're definitely in an era where it would be helpful for something to be publicly made available. There are some concerns around, okay, what does it mean then? How does an everyday person make sense of what it means to be efficient, how it's being measured, and what sort of information are we actually being given access to?
And I think it really speaks to a greater need for transparency across the board in this industry. They're operating in a black-box manner, often hiding behind NDAs and corporate secrecy to protect their proprietary information. But I do think some sort of certification or public scorecard could be a useful tool for shifting industry incentives—applying public pressure so that companies would be compelled to innovate toward being the most efficient, or at least the least detrimental to the environment. That's a real option on the table.
Kristina Zagame: People can choose whether or not to use these AI tools, but they can't opt out of being part of the grid infrastructure—especially people who live near these data centers. And we're all subject to the electric bill increases. What do you think about that dynamic? For a lot of us, we're paying for something we might not even be using in our day-to-day lives.
Lizzie Irwin: Yeah, I think this is really the opportunity for policymakers to come in and have a real show of force. If we take a step back, voters obviously are concerned about the cost of living. That was clear over the past two election cycles, that people's bottom line is top of mind, and it is only likely to continue to be an election issue if things don't change.
So I think, by making these connections between the AI products themselves and their impacts on the environment, it's slowly becoming a kitchen table issue, and people are starting to make that connection. In doing so, it raises the issue's profile, so that when politicians are making decisions on AI policy in another domain—whether they're thinking about new builds or about incentivizing research some other way—they're weighing the cost-of-living crisis and the environmental impacts as competing interests of the same caliber.
Kristina Zagame: Do you think that we need this much? I mean, ChatGPT, AI image generators—I feel like I see it being used for a lot of just silly laugh content. But is all of that worth destabilizing the electric grid?
Lizzie Irwin: It's hard to say if we need it or not. I think we have to somewhat accept the reality that this is the technology that exists, and kind of that the cat is out of the bag. And it shows a real need for us to be thinking about how we're innovating responsibly. So that's both of the product itself, but then, like I said, thinking more—how do we make these physical infrastructure ecosystems more efficient? And how can we compel the market to make this a priority?
But also, generally, I think it's a good rule of thumb not to turn to AI for everything. There are all these other social costs that come with AI. So, like I said, we don't want to get bogged down in individual use cases—it's better to think about this at the ecosystem level. But yeah, a general rule of thumb is not to use AI for everything.
Kristina Zagame: You kind of touched on this earlier, but CHT has been warning about social media's harms for years. Drawing a parallel between that and this new age with AI—are you worried that we could be making the same sort of mistakes with deploying it first, having it go mainstream, and then asking these questions kind of after?
Lizzie Irwin: Yeah, I think we're seeing the consequences of this playing out in real time—not just the environmental impacts, but also the social costs of rapid deployment without guardrails, without clear lines of accountability for who's liable when these things run amok. With social media, it took us some time as a public to fully grasp what the impacts were.
But given the rate at which AI is developing, we're seeing these impacts almost in real time, at scale. And so I'm worried we'll repeat the mistake we made with social media, where it took maybe a decade or so to look back and see how things had been disrupted. But there's an opportunity right now, as we watch things develop in real time, to say, wait a minute—how can we course correct, and avoid calcifying the path we're on?
And that means bringing in policy guardrails to set the rules of the road and incentivizing tech developers to think a little differently about how they shape their products. Because it is possible to course correct—we're not too far into the game.
Kristina Zagame: Right. And it's especially important because, coupling AI and social media together, there's so much fake content out there that can obviously be extremely harmful. You touched on this earlier, but the federal government especially is really pushing fast AI data center deployment—compressed timelines for data center approvals. What do you think are the benefits and risks of trying to do this as quickly as possible?
Lizzie Irwin: There's a real gold-rush mentality among the big tech companies and developers that's leading them to act this way, without really considering the effects on the environment, on communities, and on costs. So I think it's important to take a step back and involve more players in this conversation.
It's up to policymakers to decide how we're going to redirect, but also how we're going to incorporate constituents' concerns. They can listen to the urgency coming from the industry side—go, go, go—but they can also be saying, hey, wait a second: how is this going to impact my community? The physical community—how is it going to look? How is it going to sound? What will it do to our resources? But also, how is this going to affect how the community feels? How will people be able to contribute if a lot of their money is going to their bills, or they have to contend with the air pollution and the noise pollution? How is this going to disrupt everyday people's lives? So I think there's a real need to recalibrate who we're listening to in this conversation.
Kristina Zagame: What do you feel like is the right balance between the industry input and the public interest when it comes to AI infrastructure policy?
Lizzie Irwin: Right now, there's definitely an asymmetry, and it tilts toward industry because of the money and access they have in politics—and they're really leveraging it to their benefit. The public is no match for that, really.
So I'd like to see that rebalanced, with the public interest on equal footing, especially as local communities are being impacted. And I think we're starting to see some success on that front. Before this big confluence of AI and data centers, building data centers was a more nascent project, right? There wasn't much public awareness of how it was impacting things, so data centers were being developed without taking public concerns to heart.
We saw something like Virginia's Data Center Alley get developed and go underway, and the public couldn't really stop it. But now that permitting proposal has been almost completely overturned because of constituent advocacy—just this summer, judges decided that the development project was void. And a recent report from Heatmap found that in the last year, 25 data center plans were canceled as a direct result of opposition from local people.
So there's certainly a growing rebalancing of people's input, elevated in some ways by more and more policymakers taking these concerns seriously in a bipartisan manner. We're seeing people from Bernie Sanders to Ron DeSantis saying, “Hey, wait a second. The way things are going now is not how we want to be doing it anymore.” And that works to the benefit of this movement, because the issue no longer breaks down along traditional party lines. So there's a real opportunity—some glimmers of hope—for bipartisan consensus to come together and make progress that actively counteracts the industry's outsized role in the conversation.
Kristina Zagame: And what do you feel like is the first step or something tangible that people can do if they want to be a part of the conversation about AI infrastructure in their communities?
Lizzie Irwin: What's in some ways a benefit is that this is a hyperlocal issue. While there are definitely things to be done on the federal scale, you can also engage at a level as local as community organizing or town councils, to weigh in on the implementation of development that's already been proposed—or to get involved and see what's being projected at the state level, let's say.
Engaging that way is a very good step toward gathering like-minded neighbors, who might not be in the conversation for an environmental reason—they might be in it for a cost reason, or even out of vanity in some ways. This is the sort of issue that works well for coalition building, and it's really grown to bring together a lot of different parties, which works to the community's benefit, I'd say.
Kristina Zagame: Okay, Lizzie, I think you've answered all of my questions. Is there anything we might have missed, or anything about AI or moral and ethical responsibility that you wanted to add to the conversation?
Lizzie Irwin: Yeah, when I think about tangible policy things that could help us, I would start with something as easy—let's say it's easy—as getting transparency from these companies, around both the models and products themselves and how this infrastructure ecosystem exists.
And I think transparency unlocks a really big door for things downstream of that—developing standards for how we expect things to be built across the industry. There should also be clarity about who is responsible in this value chain. For tangible products, it isn't clear which player—which developer—ultimately holds the bag when things go wrong. And there's a similar need to establish who is responsible when resources are depleted or there's been some contamination in a community. I'd like that to be clearer for people, so they understand that there is someone to be held accountable at the end of the day.
Kristina Zagame: Do you think that it kind of muddies the waters or makes it more difficult because there are so many different companies trying to promote their AI products? It's just like everywhere.
Lizzie Irwin: There are a lot of companies who say they're using AI, but oftentimes they're building on a baseline product developed by one of a handful of bigger companies, and then just tinkering on top of it. So I think there needs to be a clear delineation: if you're taking a product off the shelf, or only modifying it a little, from one of these leading developers—a Gemini, a ChatGPT, that sort of thing—there should be clarity around responsibility. To what extent do you have to tinker with something for it to be considered your own? What responsibility do you have once you ship that product in your own business? And what responsibility do the bigger players carry in this ecosystem? How much documentation are they giving the businesses licensing their products, so those businesses know the risks before deploying? What is the level of responsibility at each point in that value chain?
And as this technology rapidly develops and deploys, that has been an open question for businesses. Now they're taking a look back and saying, wait a second—I don't know if I want to take on the risks I'm now seeing in the news. How do I know what these risks are before I ship this out and let it loose in my business? I don't want to be the one who has to deal with that, because my bottom line is at stake.