The headline is bogus and most of the comments are responding to the headline. Google's emissions increased 13% since last year, "primarily due to increases in data center energy consumption and supply chain emissions." It's unclear how much is due to AI. The supposed surge is a 48% increase compared to *2019*, consisting of moderate increases every year since 2020, not a nearly 50% surge due to AI.
Your comment is a little dismissive. Firstly, the headline doesn't mention "versus last year", and the 48% figure comes straight from the document you mentioned, where it is repeated often. It's directly from Google, and is significant. Moreover, that document itself states (page 31) "As we further integrate AI into our products, reducing emissions may be challenging due to increasing energy demands". It's clear that AI advancements are the biggest hurdle in terms of energy use, and therefore CO2 emissions, for Google.
The article does a good job of clearly stating that the 48% is compared to 2019, but words like "surge" and "spike" do not closely match that factual basis and imho are misleading.
Attributing the 48% to AI specifically is largely baseless though. From my skim, the Google report does not make any specific claims about the increase in datacenter emissions coming from AI. The closest claims are on page 12, where "the rapid advancement of AI has brought necessary increased attention to its energy consumption and resource demands" is juxtaposed with "total data center electricity consumption grew 17%".
In particular the third "key point" seems highly misleading.
> The company attributed the emissions spike to an increase in data center energy consumption and supply chain emissions driven by rapid advancements in and demand for AI.
The word "spike" does not occur in the document and the 48% number is never close to a mention of AI. While the 17% "spike" may have been attributed to AI by Google, I think it is clear the document does not attribute the 48% to AI.
Claiming that emissions surged 50% due exclusively to AI (as in the headline) is unsupported by the Google report, which is the article's singular source.
Again, looking at the graph of emissions at Google on pg 31, it's clear that the increase is linear after a dip from 2019 to 2020, and the report identifies supply chain issues as a major source of emissions in addition to datacenter electricity costs - again, notably not specifically calling out their AI training/inference costs as a reason for the increase. The report does identify AI as a challenge going forward; however, that's not the same as saying it's exclusively responsible for the last 5 years of emissions.
It's indeed pretty spun. Some quick googling shows that real GDP is up 11.5% from 2019-2024, which makes up a bunch of the "effect" right there (obviously datacenter power isn't directly tied to economic activity as a first principles thing, but like everything else at scale it's going to be pretty darn close).
Of the remainder, I get an average yearly increase of... 5.8% over the time period in question. I mean, that's an effect, but it's certainly not a big one.
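For anyone who wants to check that arithmetic, here's a minimal sketch in Python (the 48%, 11.5%, and five-year figures are the ones quoted above; the decomposition itself is just my reading of the parent's method):

    # Decompose the 48% emissions growth since 2019 into a GDP-tracking
    # component and a residual average annual growth rate.
    total_growth = 1.48   # +48% vs. the 2019 base year
    gdp_growth = 1.115    # +11.5% real GDP over the same period
    years = 5

    residual = total_growth / gdp_growth   # growth not "explained" by GDP
    annual = residual ** (1 / years) - 1   # average compound annual rate

    print(f"residual growth: {residual:.3f}x")       # ~1.327x
    print(f"average yearly increase: {annual:.1%}")  # ~5.8%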
> The supposed surge is a 48% increase compared to *2019*
That's a smoking gun, alright. If you compare today to just before the pandemic, you're going to get all kinds of weird results that can mostly be attributed to the world working in emergency mode for 2020-2022.
Let's not forget what happened in 2020: the world went into lockdown over COVID-19. Everyone was trying to do emergency gear-shift into remote work, and the demand for online collaboration services - and supporting infrastructure - exploded.
That IMO can easily account for the majority of the increase since 2019, and it seems that this is another example of why you have to be suspicious of any average that includes pandemic years in its range. 2020-2022 was a unique period in all kinds of ways.
EDIT: Google's document even acknowledges that, in footnote 103 linked to from page 32. Quoting:
Although 2020 was the most recent emissions inventory available at the time the target was set, 2020 was deemed to not be representative of a typical year, because operations were impacted by the COVID-19 pandemic. The next most recent year with representative data, 2019, was selected as the base year.
"In 2023, our total GHG emissions were 14.3 million tCO2e [metric tons of carbon dioxide], representing a 13% year-over-year increase and a 48% increase compared to our 2019 target base year. This result was primarily due to increases in data center energy consumption and supply chain emissions. As we further integrate AI into our products, reducing emissions may be challenging due to increasing energy demands from the greater intensity of AI compute, and the emissions associated with the expected increases in our technical infrastructure investment."
"...reducing emissions may be challenging..."
The supposed justification for this level of pollution and water use [see page 42] is debatable. The expected profits are highly speculative.
What seems more clear is that unless things change, it is going to be nigh impossible for young people to rise to the challenges of climate change while simultaneously being addicted to mobile/WiFi internet access. Disconnecting a kid today from the internet is like separating a junkie from their drugs.
Gebru's estimates of AI energy consumption were adopted from Strubell. Strubell's estimates of AI energy consumption are ridiculous and, according to David Patterson, range from 88x to 10,000x too high. My humble opinion on the affair is that the conflict between Gebru and the rest of the company arose from Gebru's refusal to use the best and most comprehensive information on AI energy consumption, namely what they had firsthand access to as a privilege of being a Google insider.
I see the current gold rush of AI the same way I saw cryptocurrencies. Even if there were originally people who believed in the concept, it became just a snake-oil sellers' business.
The parallel is made even more apt by AI's hungry use of electricity.
There is a future for AI, but it is not what we see companies developing right now. Chatbots are more dystopian and problematic than useful. AI's future (and present) is in analyzing big chunks of data about chemical bonds, traffic flow, astronomical observations, etc.
But all that really useful AI is not attracting the kind of investment that flashy consumer-oriented chatbots are getting.
I spent most of the weekend playing with my kids in the pool. Three years ago this would have taken more than a week of my time and I wouldn't have done much else.
OP does downplay them (“more dystopian and problematic than useful”). For one, it’s a nonsensical statement because dystopia and problems are not comparable to utility. A generous read of what they meant might be: LLMs are not very useful, and are dystopian and problematic in some vague way.
Do you believe parent thinks LLMs are very useful? They clearly think the real value of AI comes from limited applications of ML to science, simulations, optimizations, etc instead of general-purpose automation and capabilities that LLMs provide.
It was a glib comment, but I fundamentally reject the premise that we'll have more personal time because of ChatGPT.
It's like saying that buying a faster computer that compiles quicker means you can spend more time with your family. Theoretically true, but in practice people who are that way inclined can find myriad other ways to use the new technology to eat up time.
The core of your attitude comes through here - that productivity is pointless, computers are worthless and coding is a waste of time. Anyone dismissing LLMs comes from a similar value system of hating technology, as far as I’m concerned.
Let's unravel what you've said here. You're saying there's no way I'm not a bad parent, because I would have found a way to ignore my family regardless of any time-saving device.
Glib? That's not glib. That's just being a jerk. And fueled by what, your personal distaste for LLMs?
I don't think you're a bad parent, but I do think you were using hyperbole when describing how much time ChatGPT saved you. I also don't believe that you would spend a week away from your kids to make a 1,200-line app that just explains how simple bash commands work. https://explainshell.com/ already does a pretty great job of that.
I don't have a personal distaste for LLMs; I use ChatGPT weekly and I work for a company that ships LLM-powered functionality that I use daily. But your comment implied that LLMs can today provide a big productivity improvement for developers, and that's not something I think there's any concrete evidence for.
I do find the hype immensely distasteful, especially in response to an article pointing to the environmental downsides of that excitement.
What I made is not like explainshell.com at all. It is a GUI for writing command pipelines. Updates to the prompt are parsed into an AST and used to update the GUI. Updates in the GUI are used to build an AST to update the prompt.
I wrote it so I didn't have to keep writing long, fiddly pipelines into the line editor by hand.
It's a small program but it's not a trivial task to get the UX working in a productive manner. I took plenty of the same wrong turns that I would have if I was writing it by hand but was able to iterate much more quickly.
It's more exploratory in nature, seeing how text input and GUI input can complement each other in a novel manner. Sure, I could do this by embedding a SQL query in an R script or something but there's just something that rubs me the wrong way about embedding one language in another instead of having them work side by side.
It's not "just explaining how simple bash commands work".
So thank you for calling me a shitty parent and devaluing my work. You seem awesome.
For what it's worth, all of that really useful AI you mention (and then some!) is attracting the kind of investment you want; you just don't hear about it because it's not interesting to the average person.
There is reason to be cautious about every kind of endeavor where the potential for enrichment can cloud judgment. However, technologies this radical, in the magnitude of the improvements they make to specific fundamental societal functions, open up enormous possibilities; for that reason they are on balance massively beneficial, and that includes chat bots.
> However, technologies this radical, in the magnitude of the improvements they make to specific fundamental societal functions, open up enormous possibilities; for that reason they are on balance massively beneficial, and that includes chat bots.
The improvements companies can dream of making to their bottom line by replacing expensive labour are massive; that's why US$1tn of capex has been poured into it, as noted in another article from Goldman Sachs shared today on HN.
It doesn't mean it's massively beneficial to societal functions. A bigger case for that would be a clean and massively cheap energy source (if fusion turned out to be viable and scalable, for example), but we don't see US$1tn poured into that pursuit because it does not promise to improve companies' bottom lines in a short time frame.
AI can definitely turn out to be a panacea, but more and more grifters are attaching themselves to it, and the companies churning out most of its advances are not really invested in improving society as a whole. That's the marketing speak used, but we in tech have seen those empty promises of utopian change turn out to be just smoke and mirrors for the surveillance-capitalism age to set in.
Reducing the need to expend expensive labor to produce goods/services is massively beneficial. It is the basis of all economic development and wage growth.
This can also describe the great enshittification race of every good/product/service. The 'wage growth' part does not always follow. Middle-skilled laborers are still being pushed out of the economy. [1]
Wages, after adjusting for inflation, are more than 20x greater today than they were 200 years ago. That's why infant mortality has declined so much.
We still have a lot of room for improvement, with the masses still forced to subsist on sub-par products (e.g. ultra-processed foods, products that poison them with microplastics, etc) and long work hours just to afford basics (housing, healthcare, childcare, etc), and the only way to get there is continued economic development, which literally amounts to automation of labor-intensive production processes.
Wages strongly track per capita GDP. The natural instinct to be cynical is misguided when it comes to expectations regarding the impact of automation.
> Wages, after adjusting for inflation, are more than 20x greater today than they were 200 years ago.
Agree. Though, the time periods are not all that comparable and I'm really referring to the last 20 to 40 years.
200 years ago, agriculture was dominant, things indeed have changed a lot since then.
To another extent, I think this is kinda like looking at stock charts of something like AAPL. Yeah - it is up by like a million percent since the 1990s, but if you are an investor in the last year or two, you're not up by that much per se, and with the right timing, possibly down (AAPL is not the best example, but the point is hopefully conveyed).
Mass automation with computers did not start until the last 50 years. It's potentially of a different nature. I hear you on the skepticism; people thought ATMs were going to put all bank tellers out of a job. In a way they did, but they also displaced workers into new kinds of work.
> That's why infant mortality has declined so much.
Wage growth was not the only factor. Vaccines against childhood sicknesses that previously routinely killed children before they were 3 years old were a huge factor as well. The drop in childhood deaths largely explains the rise in life expectancy - it's kind of an averaging effect. 200 years ago, if you lived to be 25, you had a good chance to live into your 40s and 50s and later. With that said, better overall incomes, better sanitation, better knowledge of things like infections - there are a number of key factors (wage growth included). I just want to point out that it was not just wage growth.
> We still have a lot of room for improvement, with the masses still forced to subsist on sub-par products (e.g. ultra-processed foods, products that poison them with microplastics, etc)
In a way this was more my point. Craft goods replaced by mass produced goods. It's not all bad for sure, but it is what it is (not all good either).
(Though, regarding market capture and monopolies: IMO mass-produced goods, notably tech products, are arguably getting enshittified at a faster rate than anything that happened in the 1900s. Market forces indicate that these companies don't need to put out better products; they can do shrinkflation and price gouging, and generally lower the quality of already cheap stuff simply because they can and profit motive dictates that they should [profit motive IMO is a good thing, but profit motive without competition is not].)
> and long work hours just to afford basics (housing, healthcare, childcare, etc), and the only way to get there is continued economic development, which literally amounts to automation of labor-intensive production processes.
This is getting to my point as well. 50 years ago, a high school degree was plenty to have a middle-class lifestyle. Over the last 50 years, the stratification between 'lower' and 'higher' has increased. The middle that was the backbone of the US from the mid-1900s is shrinking.
> Wages strongly track per capita GDP. The natural instinct to be cynical is misguided when it comes to expectations regarding the impact of automation.
Some automation is good, some is not. I abhor Google's new automated AI summaries: they're wrong so often, and absolutely no better than the existing site summaries they had previously. We can think of a lot of really not-fun software to use - things like JIRA replacing a good old whiteboard. Sometimes the automated tool is awful, but sometimes it's great and saves a gob of time too.
My fundamental point, though, is that the automation of the last 50 years, really of the last 30, is of a very different character from that of the last 200 - in a way that puts a lot more pressure on 'middle' jobs. It is yet to be seen how that will play out. Income inequality today is essentially unparalleled in history. Monopolies have not been broken up in a very long time (AT&T is probably the last big breakup, but that has re-formed), and regulation to avoid capture of markets is essentially non-existent. I do believe in free markets, but at the same time I believe that over time 'winners' can arise - and that destroys those free markets. I don't know of any similar time in history that matches the last 40 years with respect to these parameters: runaway income inequality, automation eliminating "middle" jobs and stratifying the workforce, and unfettered monopolistic powers (eg: tech companies, food companies, meat producers; these are all places where a few companies control 90% of all activity).
While this response was long, I hope it was interesting and furthers the dialog a bit. Thank you for your original response, I appreciate the dialog & exchange of ideas.
>Wage growth was not the only factor. Vaccines against childhood sicknesses, that previously routinely killed children before they were 3 years old was a huge factor as well.
You're right; however, much of the death from disease was ultimately due to poverty. This is evident from the fact that many diseases have much higher fatality rates in the developing world than in the developed world, absent any vaccination. Lack of nutrition and access to healthcare makes a significant impact.
Also, the second-order effects of more economic development include amenities like good sanitation, proper insulation, and non-polluting heating sources, which also reduce infant mortality.
>I just want to point out that it was not just wage growth.
You're right. I would also add that vaccination programs themselves develop and advance more quickly in a world with more economic output, both in terms of total output and per capita output.
>Craft goods replaced by mass produced goods.
I see your point. I would add that AI can give us much more affordable craft goods.
> (Though, regarding market capture and monopolies: IMO mass-produced goods, notably tech products, are arguably getting enshittified at a faster rate than anything that happened in the 1900s. Market forces indicate that these companies don't need to put out better products; they can do shrinkflation and price gouging, and generally lower the quality of already cheap stuff simply because they can and profit motive dictates that they should [profit motive IMO is a good thing, but profit motive without competition is not].)
This is a common trope, and at least in automobiles, one comment I read on Reddit convinced me of its inaccuracy.
It may be that in some product categories, people are buying lower quality products on average today, but my guess is that the reason for it is consumers making a conscious choice to buy disposable products, because they perceive the trade-off that they provide as being better than that offered by more expensive, higher quality products.
>This is getting to my point as well. 50 years ago - a high school degree was plenty to have a middle class lifestyle. Since the last 50 years, the stratification of 'lower' and 'higher' has increased.
2nd reply, another perspective, perhaps summarizing. I think you're saying, essentially, that a rising tide lifts all boats, and that lifted boats reduce essentially all problems. To some extent that is true: on average, poor people are better off than poor people 70 years ago. At the same time, the "great American middle class" is a lot smaller and is shrinking. Also, at the same time, social mobility in the US is falling. Wealth is now a greater predictor of success than ability, education, or anything else [1][2]. To that extent, there is now less social mobility in the US than in Europe [2].
Also to consider, comparing 70 years ago to today hides that there was a ton of progress from the 1950s to 1970s, but since then the trajectory has changed [2].
"The correlation between parents' income and their children's income in the United States is estimated between .4 and .6"
"Several studies have found that inter-generational mobility is lower in the US than in some European countries, in particular the Nordic countries.[4][5] The US ranked 27th in the world in the 2020 Global Social Mobility Index.[6]"
"Social mobility in the US has either remained unchanged or decreased since the 1970s."
That provides significant evidence challenging this idea of a shrinking middle class, in my opinion.
The rate of progress has slowed since 1970, and that has correlated with a slowdown in per capita GDP growth, which goes back to my point that the overriding determinant of quality of life improvement is productivity growth.
As for housing, it comes solely down to the rate of construction of houses being artificially limited by regulations.
There is a woeful shortfall in construction of new housing in North America.
In the US at least, there is a 40-year record number of apartment buildings under construction.
Even now, adjusted for population, it is still well below the 1970s peak.
Restrictions on housing construction have mounted in the US since the 1960s, especially in the large coastal metropolises with the greatest productivity, as one study of the period details.
This, combined with rapid population growth, has created some of the highest rental rates in the world in San Francisco.
This is also relevant to income inequality: rising rent is the primary cause of capital's share of income growing at the expense of labor's, and not any of the other usual suspects (e.g. tax cuts, IP law, technological disruption, regulatory barriers to competition, corporate consolidation, etc.) (see Figure 3).
What the US and its major cities need to do is straightforward, but hard: speed up permitting, upzone and remove rent control that discourages building rentals.
This strategy is extremely effective, as best exemplified by Chongqing, China, where construction has kept rent to $75 a month.
We disagree on a number of things. AI-generated content is not "craft" IMO at all; it's almost the definition of mass produced. EG: AI artwork - you can get a lot of it, but if you want a Mona Lisa, you're going to need a human.
With regard to the common trope - you seem to imply that shrinkflation is not a thing? There is a reasonable argument that most of the inflation of the last couple of years is simply price gouging; witness the record profits. The common trope that I'm getting at is that unregulated capitalism falls apart because winners emerge. To some extent, this has happened. EG: Facebook buying up Instagram, conglomerations, actual price cartels (eg: airlines). Many examples. We are in an era of a great squeeze for profits (not just generic economic growth, but instead a phase of growth achieved through squeezing).
> And in the US, this was mostly not the case either:
I am speaking from a US based perspective. I don't think I agree that the data you pointed to shows the same thing. While wage growth can still occur, the acceleration of wage growth has changed and is becoming more stratified. The source I linked to earlier showed that.
> In the US, the biggest problem is housing: cities have put in place barriers to building houses, and that has increased the financial burden of rent
I agree regarding the problem. Though, I would state more specifically that the issue is the price of housing relative to wages. That speaks to the trend of people moving away from cities due to pricing and gentrification.
Though, let's not also forget that the median family income is still below $50k, and child poverty in the US affects an astonishing 11 out of 74 million children [1]. Which is to say, I think we often bias toward an urban perspective. There are a WHOLE lot of poor people in America. Even in places with affordable housing, people are still not getting by. The significance of 1 in 7 children living in poverty in the US is hard to overstate.
> As for computer-based automation, I personally see it as a huge benefit in my life. I even like Google's new AI generated responses.
Sure. Though, per the other resources I linked, the trend in the last 30 years is to squeeze out middle jobs. I want to emphasize it is uncharted territory, there is no full historical analog to it. We have yet to see how it plays out.
> AI can definitely turn out to be a panacea but more and more grifters are attaching themselves to it, and the companies churning out the most advances of it are not really invested in improving society as a whole
I came to see the "AI hype" not as reminiscent of crypto hype, but of the dot-com boom. Lots of grifters attached themselves to this new Internet thing. Lots of bullshit products were made. Lots of people lost their shirts. None of that had any bearing on how useful the Internet was, and turned out to be. And so today, we also have a lot of AI grifters, and this likewise doesn't mean the underlying tech is bogus.
Not sure if energy use is a sufficient criterion for dismissing a technology. This universe progresses by turning low-entropy energy into high-entropy energy, which is exactly what using energy does. So the two seem to be closely tied.
Making processes more energy-efficient is only really a good thing as long as it retains its effectiveness.
> Not sure if energy use is a sufficient criterion for dismissing a technology. This universe progresses by turning low-entropy energy into high-entropy energy.
What is the connection between those two sentences?
I think that things like intelligence and productivity are directly related to an increase in entropy. In this universe, we have a budget of entropy, and when it is used up, we're done for. We have to make our mark before that is the case, or the program that runs our simulation will just terminate with no result.
It is a very pseudoscientific belief of mine, I guess.
I'm demanding more/better code-completion style AI for sure. It's an invaluable part of my workflow now and I'm happy paying a monthly fee for the service.
Sure, but you are paying for that. It also has nothing to do with the bizarre and untrustworthy AI answers being shoved into Google and Bing search results, or Facebook's terrible AI search bar.
It's become a really indispensable productivity boosting tool. Code, translation, broad research, summarization, writing drafts, finding that word you know but just can't remember, making up terrible puns... the list goes on.
Hours of work compressed into a minute or two at most.
Exactly. I was the first to adopt nearly every new technology in my family growing up. With AI, I query Claude like twice a week for things that a search engine never gives me a straight answer on - "How long do I cook a rack of ribs?" - but it's not something I'm about to pay $20/mo for, which is the only way that AI is sustainable. I use it sparingly, and I only use it because it's free. There's just not much appeal to something as unreliable as AI.
If you don't know how long to cook a rack of ribs for - how will you know if the AI gave you a correct answer?
> There's just not much appeal to something as unreliable as AI.
Even your trivial example I would question regarding the unreliability.
AI is probably only good for qualitative things, like touching up photos or identifying data patterns for humans to then investigate. In places where 1% inaccuracy leads to a bad-tasting rack of ribs, or to outright incorrect answers, the fact that LLMs are able to produce reasonable-sounding answers is amazing, but not necessarily useful.
Consumers are definitely demanding something like ChatGPT. Other companies are trying to ride the wave, and their attempts often feel forced as they’re trying to inject LLMs where they don’t really belong.
People also said the same thing about NFTs, and they were correct.
I think “AI” will eventually have more consumer-facing uses, but the current crop is incapable of telling truth from fiction, which severely limits the appeal to Google's main user base. OpenAI is riding on hope, and it feels like Google is chasing after that application to avoid looking like it's behind, but it's not clear that actual customer demand is looking for Google to offer such half-baked technology.
> A winner of the Nobel Prize in Economics, Paul Krugman wrote in 1998, “The growth of the Internet will slow drastically, as the flaw in ‘Metcalfe’s law’—which states that the number of potential connections in a network is proportional to the square of the number of participants—becomes apparent: most people have nothing to say to each other! By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.”
I'm all for doing down Nobel prize winning Paul Nobel prize winning Krugman and his Nobel prize (did you know he got a Nobel prize?) but as hideously as most of that has aged the bit about most people having nothing to say to each other was spot on.
And that's still somewhat true. I'm up at our cottage right now, where we have 300 Mbps cable internet. The upstream is paltry (10 Mbps), but I haven't really felt it impacting my work (and my kids are still able to stream everything they'd like, too). Sure, I miss the 5 Gbps symmetrical we have at home, but in reality there's not a huge real impact.
... and they were right, and there was a massive destruction of capital invested in companies that installed dark fiber and went bankrupt. The difference was that the dark fiber then got lit later on, because it was useful physical infrastructure.
The same cannot be said of the present AI boondoggles.
How would you describe the behavior of Character.AI's millions of users if not demand? Who's paying OpenAI their billions of revenue? You can argue about how durable the demand for AI is, but I don't see how it's tenable anymore to claim it doesn't exist.
You ask a good question. Who is paying for AI, and is OpenAI profitable? A lot of the demand is there when the cost is subsidized by investors -- will it continue when the price actually must bear costs and profits?
Businesses and wealthy people want AI so they can lay people off and hold onto their money, and build stuff faster to capture more marketshare / power.
There wasn't demand for computers either. Or for the industrial revolution.
AI is a competitive advantage for those that can leverage it.
Consumers ask for AI all the time. Their requests sound like this:
- "Google search is drowned by SEO, I can't find anything anymore"
- "Wow all these ads suck"
- "Ugh my social media is full of hucksters and bots"
- "My job applications all go into a black hole"
- "LoL gmail can't even find emails I know are there"
- "Ffs stop autocorrecting to duck"
- "Oh wow we can't even say 'vagina' anymore because The Algorithm"
- "Support doesn't even read my message, why do I bother"
- "I can't keep up with all my inboxes"
- "This meeting should've been an email"
- "The press release makes no sense, can someone translate corporatespeak?"
- "Ecomm sites are awful why can't I just find what I'm looking for?"
etc.
We already use AI for all sorts of things. This will only accelerate. Half the things that used to be AI aren't even called AI anymore. That's because "AI" is a marketing label. When a technology becomes mundane, it ceases to be called AI and we find something new.
Consumers were not demanding the internet or computers either. But it turns out they were useful for consumers and especially for the profits of capitalists. People may not like it but AI will be an important part of our lives soon.
The key driver for the CO2 growth is "scope 2" emissions, mainly electricity demand from data centers. See pages 34-38 in the PDF for the definition of scope 2 emissions and overall progress on running data centers with carbon free energy. They're currently at 63% CFE, the same as in 2022, but absolute growth in electricity consumption also meant absolute growth in emissions from the other 37%.
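To make that last sentence concrete, here's a quick back-of-the-envelope sketch (the 63% CFE share is from the report as noted above, the 17% electricity-consumption growth is the report figure quoted elsewhere in this thread, and holding CFE flat is the simplifying assumption):

    # With the carbon-free-energy (CFE) share flat, market-based emissions
    # scale with the electricity drawn from the remaining non-CFE fraction.
    cfe = 0.63
    consumption_growth = 1.17  # +17% data center electricity consumption

    dirty_before = 1.0 * (1 - cfe)  # arbitrary baseline units
    dirty_after = consumption_growth * (1 - cfe)
    print(f"non-CFE electricity grew {dirty_after / dirty_before - 1:.0%}")  # 17%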
It seems clear that the article is not about arbitrary data center usage, it's about a large increase in data center usage at Google, attributable to AI.
The initial premise of the thread was "Why isn't this about (arbitrary data center other than Google)?"
Isn't that easily answered by: because it's about the delta, and the context of AI, where Google is perceived as behind, and Satya famously pointed out that the energy demands combined with their margins put them in a strategic situation with only bad choices?
With that context, do you believe this is attributable to someone picking what to write articles about based on how much they'll be able to influence them to change their behavior?
The NSA's data center is a drop in the bucket compared to the hundreds of similar data centers operated by Google, Amazon, Microsoft, Facebook, et al. And I would not call increasing your power consumption YoY by 13% or whatever it was to be "trying". To the extent they are trying, it's only to squeeze the value out of their real estate and to avoid blowing their transformers. They can spin that as "saving energy" until something like LLMs with their GPU requirements comes along.
I don't think it's right to characterize this as a hit piece. They're reporting on a document Google published, using Google's numbers and framing. If Google didn't want people to talk about their carbon emissions, they would presumably not have published this report.
It's because the NSA's "massive" datacenter is actually tiny, cannot contain the things that paranoid people think it contains, cannot do the things weirdos imagine it does.
And once again a big tech company proves that as soon as their "principles" are put to the test, they will abandon them almost instantly.
When buying some wind energy was easy and wasn't really a trade-off, they were quick to use big phrases like being "committed to the planet" and such. Commitments seem to be rather "flexible" with these companies.
I mean, they're just doing what the oil companies did to great success for decades: the bullshit about buying a hybrid or electric car, or recycling^1 your plastic. Why reckon with the impact of your global industry on the planet and have to answer hard questions when you can just make it the consumer's fault and guilt powerless people into using a canvas bag^2 instead of a plastic one, despite the fact that the only reason all the fucking plastic bags exist is that your company is making them by the billion?
1: If you want to get angry, look into what an utter farce plastic recycling is.
2: Not that reusable bags are a bad thing in the slightest, I just don't like being condescended to for using a free resource provided at point of sale by a multi-national conglomerate that's burning the world and then turning around to chide me for having a slightly older car than I otherwise could.
In my imagination, I picture a long-time Google employee who spent his/her entire career dutifully poring over every minutia of their search infrastructure to squeeze the most performance out of every watt, in a quest to make Google a better, more environmentally friendly company. And then comes AI...
That should be weighed against how much CO2 is saved by LLMs due to increased efficiency. A person living large in a big house, eating well, flying overseas for vacations, with 2-3 cars in the garage, emits much more CO2 than a GPU that replaces him.
Meaningless stat in the grand scheme of things. Eventually, it will all transition to nuclear/solar. The question is whether it moves the needle in a meaningful way right now. So much hand-wringing about carbon emissions, only for Germany to end up burning coal again.
The Germany meme is nice, but it is probably just a dent in global coal use, which breaks a new record almost every year. The transition to nuclear/solar is still a fantasy.
Apparently, instead of batteries, the original idea was to have them be part of a brain-based neural net, which makes a lot more sense than batteries. But this was felt to be too complicated.
Exactly. Why they gave up on the human collective neural net movie plot and replaced it with the human batteries one is beyond me. It makes no sense, humans make terrible batteries.
If the machines from the Matrix needed to use some mammals as batteries, why not just use cows instead of humans? They'd be much more complacent with living in the Matrix, grazing on virtual fields, and less likely to wise up and try to escape and rebel against the machines since they're not very intelligent and the machines would need a lot less processing power to simulate a realistic cow world than the world in 1999 for humans.
To wit, if someone has access to a lot of GPU compute, could you please use gen-AI to re-make the Matrix but with the characters as cows? I'd pay to watch it. Would definitely beat Matrix Resurrections or whatever that cash-grab turd was called.
It's an easy analogy to say in <10 seconds that non-technical people can understand. Had they not mentioned "bioelectricity" at all, it might have been harder to nitpick.
I came up with this explanation to make me enjoy the movie more: The world outside the matrix is not the real world, but just another layer by the machines for obfuscation. In the real real world humans are actually used as processing units but since it is easier for the humans to accept and not ask further questions, the intermediate level with humans as batteries is introduced by the machines.
Wow, it would make so much more sense. Humans could be doing processing for AI when they are asleep in virtual world and what they consider their waking hours with free will would be actually a dream so they can regenerate. AI would be syncing their dreams and providing structure to maintain illusion that it's a real world.
Isn't there a long-standing myth that humans only use 10% of their brainpower? They could've used that to explain that the machines were stealing the other 90%. Anyone can understand that in 5 seconds.
Of course that means anyone unplugged from the Matrix would immediately become an S-tier genius. And how the hell do you write that?
Google's document is at: https://www.gstatic.com/gumdrop/sustainability/google-2024-e... See pdf page 8 / document page 7 for details, as well as the graph on page 32/31.