


  • I’d argue that that directly opposes this statement. Those activities actively engage both the AI and the human, so however much freshwater it would take for the human to do independent research (or whatever is appropriate) on the topics they’re asking the AI about is still being used by that human, while the AI’s water use occurs in addition to it.

    Practical guidance queries should be compared against searching for practical guidance, yes? So if you would otherwise be searching 4-5 times, the AI has cut that time from the process. Especially so if you find a guide that lacks one extra bit of context: AI lets you ask the follow-up and get an answer in the same format and context, while search would require re-reading what you already know and cross-checking it. If the knowledge you need isn’t written in a convenient place/format, and you would have asked a person, then the AI has successfully cut the humans in the loop in half.

    Same goes for “seeking information”, the second most common use case. This one, I suppose, comes down to how the AI is being used. If someone asks the AI a question, takes the response at face value, and does nothing further, they will invariably spend less time than doing independent research; however, the quality of that result is roughly equivalent to typing the question into a search engine and trusting whatever the top result is, which also takes very little time. In either case, the human is engaged during the whole process, so the AI is adding additional water usage.

    I know fewer people who do this intentionally (see involuntary AI use below). Those I do know are using it for Stack Exchange-style questions, where the information is highly context specific, probably only present in a few forums, and would require a lot of effort to get a precise search result (lots of ANDs and NOTs and site filtering). I think these difficult searches are probably not what ‘seeking information’ usually means, and I would agree this use is not great.

    Is reading AI generated fiction really any better than reading a book? Because reading a book is certainly going to consume less water than having the AI write that fiction.

    This one depends a lot on book maintenance, construction, and availability. Note libraries, bookstores, and ebook hosting take labor, power, and water too.

    Most generated fiction is in niche genres, I think, so the cost of getting a human to write it would be astronomically worse. And while I am just as happy reading the original Dracula instead of an ultra-specific Undertale fanfic, I have a hard time telling someone else that they are literally interchangeable.

    If you also consider all of the “involuntary” AI use - for example, AI generated entries at the top of search results when none were requested or wanted - there’s a quantity of resources - not only water, but power, as well, which I think is the bigger concern overall, particularly in the US right now - being spent for zero benefit.

    Yeah, I do not endorse these uses as efficient. They are bad/stupid/silly. I’ve disabled them where possible, and welcome others to do the same. That said, this water waste is likely small compared to other (equally terrible) industrial practices (we don’t need to triple wash every carrot, and power washing various vehicles and surfaces is often not efficient or needed).

    One issue with AI generated recipes that I will point out is that the AI doesn’t actually know how to make that thing, it’s just compiling what it thinks is a reasonable recipe based on the recipes it has been trained with. Even if we assume that the ingredient quantities make sense for what you’re making, chances are the food will taste better - particularly for complex dishes - if you’re using a recipe curated by humans rather than an AI approximation.

    Yeah, I’ve just had a terrible time finding actual humans providing recipes on the internet. I am entirely prepared to believe this is a skill issue on my part. YouTube has helped somewhat, but now we’re comparing an LLM to video hosting + processing and ~5 minutes of taking careful notes along the way.


  • If I were to try and play up his argument, I might appeal to ‘we can shorten the dark times’, Asimov’s Foundation style. But I admit my heart’s not in it. Things will very likely get worse before they get better, partially because I don’t particularly trust anyone with the ability to influence things just a bit to actually use that influence productively.

    I do think this oligarchy has very different tools than those of old: far fewer mercenary assassinations of labor leaders, a very different and weirdly shaped stranglehold on media, and I put lower odds on a hot conflict with strikers.

    I don’t know the history of hubris from oligarchs; were the Tsars or Barons also excited about any (absurd and silly) infrastructure projects explicitly for the masses? I guess there were the Ford towns in the Amazon?


  • I am primarily trying to restate or interpret Schneier’s argument, to bring the linked article into the comments. I’m not sure I’m very good at it.

    He points out a problem which is more or less exactly as you describe it. AI is on a fast track to be exploited by oligarchs and tyrants. He then makes an appeal: we should not let this technology, which is a tool just as you say, be defined by the evil it does. His fear is: “that those with the potential to guide the development of AI and steer its influence on society will view it as a lost cause and sit out that process.”

    That’s the argument afaict. I think the “so what” is something like: scientists will do experiments and analysis and write papers which inform policy, inspire subversive use, and otherwise use the advantages of the quick to make gains against the strong. See the 4 action items that they call for.




  • Success would lead to AI use that properly accounted for its environmental impact and had to justify its costs. That likely means much AI use stopping, and broader reuse of models that we’ve already invested in (less competition in the space, please).

    The main suggestion in the article is regulation, so I don’t feel particularly understood atm. The practical problem is that, like oil, LLM use can be done locally at a variety of scales. It also provides something that some people want a lot:

    • Additional (poorly done) labor. Sometimes that’s all you need for a project
    • Emulation of proof of work to existing infrastructure (eg, job apps)
    • Translation and communication customization

    It’s thus extremely difficult to regulate into non-existence globally (and would probably be bad if we did). So effective regulation must include persuasion and support for the folks who would most benefit from using it (or you need a huge enforcement effort, which I think has its own downsides).

    The problem is that even if everyone else leaves the hole, there will still be these users. Just like with drug use, piracy, or gambling, it’s easier to regulate when we make a central, easy-to-access service and do harm reduction. To do this you need a product that meets the needs and mitigates the harms.

    Persuading me I’m directionally wrong would require such evidence as:

    • Everyone does want to leave the hole (hard; I know people who don’t, and anti-AI messaging thus far has been more about signaling than persuasion)
    • That LLMs really can’t be run locally, or can be made difficult to run locally (hard; the Internet gives too much data, and making computing time expensive has a lot of downsides)
    • Proposed regulation that would actually be enforceable at reasonable cost (haven’t thought hard about it, maybe this is easy?)


  • I strongly agree. But I also see the pragmatics: we have already spent the billions, there is (anti-labor, anti-equality) demand for AI, and bad actors will spam any system that takes novel text generation as proof of humanity.

    So yes, we need a positive vision for AI so we can deal with these problems. For the record, AI has applications in healthcare accessibility. Translation and navigation of bureaucracy (including automating the absurd hoops insurance companies insist on; make insurance companies deal with the slop) come immediately to mind.



  • The argument I was responding to was that using AI for these tasks represents a water savings, not that it represents a time savings or efficiency gain.

    If that’s no longer your argument, maybe it would help if you re-stated your revised position, so we’re both arguing from the same starting point?

    Took a day to think it through carefully. My position is that for many AI use cases, it is more efficient to spend the freshwater having the AI do a task than having a person do that same task. Equivalently, the total freshwater spent on entities doing the tasks will be lower if the AI does them than if we have people do them.

    I believe you’ve convincingly argued that this means more things will get done, not less water will be spent. I think that’s consistent with my current position, and I agree the OP does not make this distinction very well. My particular question was asking how I should update the OP to respond to your argument, as a way to check that I comprehended correctly.

    Your most recent post makes the point that the tasks AI is often used for are small, easily done even more cheaply, or poor fits for the technology. I think there’s probably some useful discussion to be had here, though I might suggest we focus on what people seem to actually be doing (instead of my vibes from personal interactions). I recently learned about this paper OpenAI posted. What I extracted on a quick skim:

    • Mostly non-work chats (70% of use)
    • Most common use: Practical guidance (defined as how-to’s, tutoring, creative ideation, or self-care)
    • Next: seeking information (as a search engine)
    • Next: writing (editing provided text, helping to write personal communication, translation, fiction generation)
    • All other use cases constitute less than half the use rate of the above three, which are pretty close to each other (see Figure 7)

    Speaking from experience, getting tone + politeness + clarity in business writing is hard for me (as you can probably tell). There are plenty of emails where what needs to be said is simple, obvious, and short, but I will spend 3 hours agonizing over the wording. Rejection sampling what an LLM produces is likely faster for such a socially anxious person, and (thus) more water efficient for that task. I think this is a pretty solid and common AI use case.
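    In case “rejection sampling” reads as jargon here: all I mean is generating drafts and throwing them away until one reads acceptably. A minimal sketch of that loop, assuming some text-in/text-out model call (the `generate` callable below is a placeholder, not any particular provider’s API):

    ```python
    # Rough sketch of "rejection sampling" an email draft: generate candidates,
    # show each one, and reject until one reads acceptably.
    # `generate` is a placeholder for any text-in/text-out LLM call.

    def draft_email(points, generate, max_tries=5):
        """Ask the model for drafts covering `points`; let the human accept or reject."""
        prompt = (
            "Write a short, polite business email that covers these points:\n- "
            + "\n- ".join(points)
        )
        for attempt in range(max_tries):
            draft = generate(prompt)              # one sample from the model
            print(f"--- draft {attempt + 1} ---\n{draft}\n")
            if input("Use this one? [y/N] ").strip().lower() == "y":
                return draft                      # accepted: stop sampling
        return None                               # all rejected: write it by hand after all
    ```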

    While there are many tutorials, guides, and discussions on how to do various things (like recipes), I think crawling through SEO text can be hard and frustrating for some people. I want a simple peanut sauce recipe for the ratios. You can either query the LLM and get an answer in 10 seconds, or you can search and weed through 4 much longer, worse written, and likely also LLM-generated slogs for the same information (Why 4? Because each one will be focusing on some sponsored extra ingredient that probably throws off the ratio). I think this is also both a common AI use and more efficient if you are not already excellent at managing SEO text.

    (Should it matter, I’m dramatizing the previous paragraphs a fair bit. I’ve used AI for ~3 emails and ~2 recipes, in both cases only after trying the usual way and getting terrible results.)



  • I think it’s worth getting a bit of context; here’s Our World in Data on extreme poverty. Most people on earth get by on about $10 per day per person. America, while definitely worse for the bottom quarter of the population than ever before, is relatively far from this. Here’s a chart on what portion of people in the US are living on (order of magnitude) this amount of money over time.

    I think this gives a sense of what survivable looks like. For example, ~50% of the US population in 1970 lived on less than $40/day (inflation adjusted). Today we are probably somewhere around ~30%. Mortality was higher in 1970, but I don’t think it was quite that drastically worse. There are new rent-seeking and monopolistic practices that make dollars worth less than they used to be in hard-for-inflation-to-detect ways; it probably requires an expert to weed through that.

    You can look at other related data; we have built up quite a buffer compared to 50 years ago. This means many more systems have to fail/degrade than back then before we see mass deaths (at least for economic reasons). You can also look at the living conditions of exploited people historically and across the globe; they do tend to survive. Exploiters are incentivized to keep people around to exploit.

    Perhaps you are asking about thriving/living/having any semblance of quality of life. I don’t know anyone really predicting these will improve, except perhaps AI-salvationists. Many, many systems that kept life improving for so long are being degraded or destroyed, and it seems unlikely (at the moment) that we will rebuild them intelligently.


  • (I think you’ve also done something sneaky mathematically; the units of your numerator are ‘change in freshwater use from leaving the human alive’, but the units of your denominator are ‘change in work from the human not existing at all’. I think the two sets of units should align: either both assume the human not existing at all, or both assume the human exists. I’ve been taking the first set of units; the second set would compare 0/0 with Y/(whatever the human does instead of Z), which seems less insightful.)


  • Thank you, I understand your argument. I think we should complicate the model ever so slightly, because the human will exist regardless and does something with that extra time. Suppose there are two tasks, instead of just one. The first task is as we’ve described; the second task is something the human would prefer to do, but cannot do until the first task is done. Let’s say the tasks are comparable; both contribute Z to work done (in general we would have Z and Z’).

    Without AI, the water use / work done is X/Z.

    With AI, the water use / work done is (X+Y)/(2Z).

    The second ratio is smaller whenever Y < X, thus in this case the AI has made our freshwater use more efficient.
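
    Spelling out the algebra behind that claim (with X, Y, Z all positive, so multiplying both sides by 2Z is safe):

    $$\frac{X+Y}{2Z} < \frac{X}{Z} \iff X + Y < 2X \iff Y < X$$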

    We can certainly discuss which model is more accurate/typical; I would welcome that. Do you feel this model of ‘total water use / total work done’ is fair? Generally, I put a lot of value on work that people want to do, and not all that much value on work that people would rather give to AI, so usually Z << Z’, and I think the efficiency gain is rather large (this includes things we don’t normally call work, like self care).




  • You are right I’ve been feeling defensive. But I also don’t understand why you say discussion with me is pointless. In response to earlier, entirely correct, comments I’ve edited the OP to remove a bad argument that I had made. I removed it because it wasn’t honest or correct. Is that also defensive behavior?

    If I extend this argument about subjectivity, I think the strongest thing it could argue is that “it doesn’t matter that people believe false things about the magnitude of AI water use”. Would you say that’s correct?