Talk:Utility monster

Wiki Education Foundation-supported course assignment[edit]

This article is or was the subject of a Wiki Education Foundation-supported course assignment. Further details are available on the course page. Student editor(s): Petra Sen. Peer reviewers: Tyler Chinappi.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 12:12, 17 January 2022 (UTC)[reply]

Discussion[edit]

I have a problem with using this utility monster to criticize utility theory:

"If the utility monster can get so much pleasure from each unit of resources, It follows from utilitarianism, that the distribution of resources should acknowledge this. If the utility monster existed, it would justify the mistreatment and perhaps annihilation of everyone else, according to the doctrine utilitarianism."

No; the law of diminishing returns would guarantee the monster's efficiency would go down, and conversely everyone else's efficiency would go up as resources are consumed. Turidoth 20:23, 23 May 2007 (UTC)[reply]

If everyone else's pleasure gained goes up whilst resources go down, wouldn't the monster also appreciate the diminishing resources as well? This is a horrible misapplication of the law of diminishing returns; you realise it isn't something that can be applied everywhere?

In response to the comment posted from IP 58.165.246.52: I think you are misunderstanding either my application of the Law, or the Law itself. As the monster gathers more and more resources, each additional unit of resource will result in less satisfaction for it. For example, the first french fry you eat will give you more utility than the next fry, and the third fry will have even less utility than the second, and so on. At the same time, your friends will be willing to pay more for this dwindling (assuming limited) resource. In short, no, you will not appreciate paying more for each fry while receiving less utility. Please, think before criticizing others. Turidoth 14:31, 28 June 2007 (UTC)[reply]
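
(To make the Law concrete for later readers, here is a minimal Python sketch of diminishing marginal utility. The square-root utility function is an illustrative assumption of mine, not anything from Nozick or the article.)

    import math

    # Assumed concave utility function u(n) = sqrt(n); purely illustrative.
    def utility(fries: int) -> float:
        """Total utility from eating `fries` french fries."""
        return math.sqrt(fries)

    # Each successive fry adds less utility than the one before:
    # 1.000, 0.414, 0.318, 0.268, 0.236, ...
    for n in range(1, 6):
        marginal = utility(n) - utility(n - 1)
        print(f"fry {n}: marginal utility = {marginal:.3f}")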

I deleted the following:

However, for the monster to be able to measure utility on the same scale as us it must intuitively be similar to us. In which case, it would seem to be virtually impossible for the monster to use all the available resources better than the human population would. So, while they may take a large amount of resources they still would not lead to a particularly terrible result. Unless there were many such monsters, in which case it would seem to be more a case of natural selection. Not a particularly distressing outcome, and not one that would cause the denial of utilitarianism.

The above is nonsense because the concept of the utility monster has nothing to do with using resources "better." What is better? By Robert Nozick's definition, the Utility Monster gets more utility from resources, and we are only considering utility, so it is impossible for the human population to use the resources "better." Furthermore, natural selection is a distressing outcome. Something being natural does not mean that it is good or better. —Preceding unsigned comment added by Estlacertosus (talkcontribs) 09:40, 24 October 2007 (UTC)[reply]

Let's say I use one lemon and make one glass of lemonade. If I sell many glasses, I could buy a house. If the utility monster is very small and likes to live in lemons, I'd say that in a real-estate context the monster gets a better return on lemons. This is only a problem if there are many monsters and I rely on lemons for my livelihood; the monster is then just a pest. If, however, I make one glass out of one lemon, and the monster makes 100 glasses out of an identical lemon, isn't the most utilitarian next step to enslave the monster and force it to make lemonade for me? I'd say that's more distressing than natural selection. 99.135.90.249 (talk) 20:04, 30 December 2008 (UTC)[reply]

I think this shows a flaw in dropping "for the greatest number" from the equation of utilitarianism. If the monster's number is one, it simply needs 1/100 of the resources to be equally happy.
In any case, discussion pages are supposed to be about improving the article. Any third party criticisms of this thought experiment out there that we can cite? -Yamara 20:14, 15 January 2009 (UTC)[reply]

This problem ignores Utilitarianisms in which there is an upper bound to mind size. Basically, if mind size is bounded by some form of computational power, and thus so is experience, eventually the Utility Monster will reach its maximum and cap out if it exists in a finite universe. If there are infinite resources with which to construct the utility monster's brain, then there are infinite resources for everyone else, too. 75.187.205.74 (talk) 00:46, 22 November 2011 (UTC)[reply]

This isn't a blog. Editors' concerns and arguments aren't relevant to Wikipedia. -- Jibal (talk) 07:09, 26 October 2022 (UTC)[reply]

Worthy of an article?[edit]

This has no citations, no indications of notability... It could be merged with either Nozick or one of the utilitarianism articles. I would probably favour expanding the section in Nozick's article and having it redirect to there (as with e.g. non-overlapping magisteria). Richard001 (talk) 11:41, 13 May 2008 (UTC)[reply]

fair. no objections :) Spencerk (talk) 06:07, 14 May 2008 (UTC)[reply]

I wouldn't like to click a link about a problem while reading a major article like Utilitarianism and land on a biography page or a section of another article. Not classy. And I don't know how significant this problem is in philosophy, but at least for me it was very interesting, and (according to utilitarianism) that IS important, so I think the article should stay. Tiredtime (talk) 07:40, 14 May 2009 (UTC)[reply]

I would like to re-raise this question. The sources cited, besides the author's original text, seem problematic. The cited book by Frederick Kennard appears to have been independently published, and I cannot find an existing copy to verify its content. Parfit's article does indeed mention the concept, but only as "Nozick's" (p. 148). And the BBC article is a think-piece. Unclassy as it seems, this does seem to fit best in a biography rather than its own article. Hdehacks (talk) 22:30, 29 October 2022 (UTC)[reply]

No original research and no opinions please![edit]

"Nozick's criticism of utilitarianism relies on a cardinal rather than ordinal definition of utility. Cardinal utility, which is quantifiable and comparable interpersonally as proposed originally by Jeremy Bentham, has been rejected by most utilitarians since John Stuart Mill. Ordinal utilitarianism would hold the interpersonal, or in this case intermonstropersonal, comparison needed to justify "that we all be sacrificed in the monster's maw" as invalid."

Who says that? Please cite a quality source for that statement. And don't use Wikipedia to express your own opinion in an article! 188.23.64.27 (talk) 22:17, 3 May 2009 (UTC)[reply]

Why is the 'paperclip maximizer' even there?[edit]

The notion that mere carelessness by programmers would lead to the creation of a well-grounded concept of a real-world paperclip (and a well-defined count of 'real-world paperclips'), resistant to any potential reinterpretation by a superhuman intelligence, has nothing to do with the utility monster. It is also a fringe view coming from people who know very little about AI programming. 78.60.253.249 (talk) 12:05, 25 November 2012 (UTC)[reply]

I don't particularly see it as a fringe view, given that it's discussed—in conjunction with Parfit—in academic publications. I guess I understand your objection to the scenario, but I think the hypothetical is clearly relevant: that a powerful intelligence without the flexibility of human moral reasoning might prioritize some outcome that most humans would consider horrifying. Do you really think the article is better off with this section excluded? I'd like to restore it, but I'll wait a little while for other comments. groupuscule (talk) 23:57, 26 November 2012 (UTC)[reply]
Well, the primary issue is that it has nothing to do with the "utility monster". The two people cited are, moreover, both soliciting donations for their work on preventing the destruction of the world due to the aforementioned carelessness. I think it should be treated the same as commercial linking irrelevant to the subject of the article in general. 78.60.253.249 (talk) 16:00, 28 November 2012 (UTC)[reply]
What is the relevance of "Sara Goldsmith, "The Perfection of the Paper Clip", Slate, 22 May 2012"? 78.60.253.249 (talk) 03:06, 29 November 2012 (UTC)[reply]
To help establish outside notability for the "paperclip monster" concept. I understand your concern about Yudkowsky and Bostrom, but ultimately I think this is an ad hominem argument that could be leveled at many scholars. I don't see how it diminishes the relevance of the thought experiment. groupuscule (talk) 03:57, 29 November 2012 (UTC)[reply]
Its relevance to the article is zero. One naturally asks why irrelevant, non-notable (one minor reference in Slate?) things are added to the article. Typically there is a financial incentive. Indeed, there is: the only two people you are linking to who actually discuss it are taking donations for this. That's clearly not neutral coverage. Normally, for a notable concept you could e.g. find a philosopher who is not soliciting donations (let alone soliciting donations via fear-mongering), and link that. 78.60.253.249 (talk) 08:33, 30 November 2012 (UTC)[reply]
It looks like the paperclip maximizer has been removed, which is probably for the best. I strongly disagree that the paperclip monster is a niche concept; it's the canonical example in CS ethics of the effects of building an unsupervised general AI that optimizes for some outcome orthogonal or secondary to the outcomes humans actually want. Yes, we want our paperclip AI to build as many paperclips as it reasonably can, but not at the cost of accumulating all the processing power and wealth in the world, effectively enslaving and harvesting the human race, and later eliminating life on earth to secure more resources for paperclip production.

It's a similar style of argument to the utility monster, but I don't actually think it's the same: the whole point is that the paperclip AI generates a massive amount of negative utility because it has no mechanism to calculate utility, only to calculate the rate of paperclip production. If the paperclip AI could calculate utility correctly, it should stop producing paperclips when the marginal utility to the population of a new paperclip reaches equilibrium with the marginal disutility of the actions the AI must take to produce it (see the sketch below). It is less an argument about ethical frameworks and more an argument that, under any reasonable ethical framework, a sufficiently advanced algorithm would still need to be supervised by humans.

It's certainly not nonsense, though. While we don't yet have general AI advanced enough to enslave the human race and end the world, we certainly have an increasing number of unsupervised algorithms that create large amounts of disutility because they optimize for their objective function without consideration of their effects on society. The algorithms Facebook used to determine which posts appeared on your 'wall' generated so much free publicity for sensational and false news articles that Facebook started hiring manual fact-checkers to flag them. It turns out it's easier to write interesting news articles that people want to click on, read, and share if you can make up whatever you like (and whatever most appeals to your audience) instead of reporting the truth, and the Facebook algorithms optimize for popularity, not truth. --24.108.52.222 (talk) 21:30, 29 September 2018 (UTC)[reply]
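
(The stopping condition described above can be made concrete. A minimal Python sketch follows; both marginal curves are assumed purely for illustration and do not model any real AI objective.)

    # A utility-aware maximizer makes paperclips only while the marginal
    # utility of one more clip exceeds the marginal disutility of
    # producing it. Both curves below are illustrative assumptions.
    def marginal_benefit(clips: int) -> float:
        return 1.0 / (1 + clips)      # assumed diminishing utility per clip

    def marginal_cost(clips: int) -> float:
        return 0.001 * clips          # assumed rising social disutility

    clips = 0
    while marginal_benefit(clips) > marginal_cost(clips):
        clips += 1

    print(f"utility-aware maximizer stops at {clips} clips")
    # A pure paperclip maximizer has no marginal_cost term, so its loop
    # never terminates -- that asymmetry is the comment's point.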

Increasing marginal returns[edit]

There is no source cited for the claim that a utility monster "receives as much or more utility from each additional unit of a resource he consumes as the first unit he consumes." Nor would this imply that all resources should be given to the utility monster, since it says nothing about how the utility the monster gets from each resource compares to the utility anyone else gets from the resource. The quote from Nozick establishes a completely different definition: that a utility monster gets more utility from each unit of resource than others would get from the same resource. As far as I've been able to find, Nozick never says anything about the monster not having diminishing marginal returns. If no one can find a source for the claim about constant or increasing marginal returns, it should be changed to the definition quoted by Nozick. 50.0.142.172 (talk) 18:12, 19 August 2013 (UTC)[reply]
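
(A minimal Python sketch of the distinction drawn above, with both utility functions assumed for illustration only: the monster below satisfies Nozick's quoted definition, gaining more utility from every unit than an ordinary person does, while still exhibiting diminishing marginal returns. This is why the two definitions are not equivalent.)

    import math

    def ordinary_utility(units: int) -> float:
        return math.sqrt(units)           # diminishing returns

    def monster_utility(units: int) -> float:
        return 100 * math.sqrt(units)     # also diminishing, but always larger

    # The monster's marginal gain shrinks with each unit (diminishing
    # returns), yet every unit still yields far more utility to it than
    # to an ordinary person -- Nozick's definition needs only the latter.
    for n in range(1, 4):
        person = ordinary_utility(n) - ordinary_utility(n - 1)
        monster = monster_utility(n) - monster_utility(n - 1)
        print(f"unit {n}: person gains {person:.3f}, monster gains {monster:.1f}")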

I have now removed the mention of marginal returns, and replaced it with Nozick's original definition. [Edit: I am the same person who brought up the objection originally.] 24.7.24.119 (talk) 22:55, 15 January 2014 (UTC)[reply]

Link to chaospet web comic[edit]

As I see it, the link adds no value to the article and should be removed. --Benjamin Schwarz (talk) 12:05, 4 May 2014 (UTC)[reply]

@Benjamin Schwarz: I agree. I've removed the webcomic link from the article. —Mark Bao (talk) 02:31, 8 May 2014 (UTC)[reply]

Revisions and Additions to Bibliography and Content[edit]

To the article Utility monster, I would like to make the following corrections:

- Add a link to the Wikipedia "Utilitarianism" page where it is mentioned.
- Cite Frederick Kennard's quotes throughout the article, where citations are currently marked as needed.
- More background or history of the concept would be extremely beneficial, so readers can better grasp the subject and its relevance to thought today because they understand where the idea arose.
- This article needs more material that is not quoted or closely summarized.

In conclusion, I would add a history of the subject by explaining Nozick's approach to ethics, and refine what is already written to include fewer quotes and less close paraphrasing.

Bibliography of corrections and additions (there are not many reputable sources published on the topic, so I would dig deeper into the first two sources here, which are already referenced on the Wikipedia page but only minimally, and add information from the last source about Nozick's approach):

"Utilitarianism". Wikipedia. 2016-10-14.

Kennard, Frederick (March 20, 2015). Thought Experiments: Popular Thought Experiments in Philosophy, Physics, Ethics, Computer Science & Mathematics (First ed.). AMF. p. 322. ISBN 9781329003422.

"Nozick, Robert | Internet Encyclopedia of Philosophy". www.iep.utm.edu. Retrieved 2016-10-18.

Petra Sen (talk) 23:25, 18 October 2016 (UTC)[reply]

Peer Review[edit]

I thought you did a great job adding to the article Utility monster. At the top of the page, it stated that the article was an "orphan" and needed links, and you addressed that need right away by adding the link to the Wikipedia Utilitarianism page. Additionally, you added two citations where sources were requested. The sources look good and credible according to Wiki guidelines. The only suggestion I have is that the one line, "Any other people who not in the said group or not the individual are left with less happy units to be split among them", does not really flow off the tongue easily. I had to reread it a few times to understand. If it is not a direct quote, which I believe it is paraphrased, maybe you could reword it a bit so it is more easily understood. I understand what it is saying, but I think the sentence could be structured a bit better; something about "people who not in the said group" probably needs rewording. I learned something today about the utility monster because of you. I had no idea. I want lots of happy units!!! I think you made a nice addition to the article. Tyler Chinappi (talk) 03:43, 10 November 2016 (UTC)[reply]

History / Relevance / Social implications[edit]

This needs to be tidied up. Does anyone have any suggestions on how to merge the History, Relevance, and Social implications sections into fewer sections without eliminating the useful context that each of these provide? ... JimsMaher (talk) 01:02, 5 January 2017 (UTC)[reply]

Every optimization function has a monster?[edit]

The article currently states that "It can be shown that all consequentialist systems based on maximizing a global function are subject to utility monsters". But the claim seems similar to the observation that any estimator with a low breakdown point can be thrown off by a single arbitrarily large outlier. I don't have access to the source. Does it take analogs of robust statistical measures into account? If it does, perhaps the article should say so. 46.66.178.159 (talk) 13:40, 20 September 2019 (UTC)[reply]
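
(For readers without a statistics background, here is a minimal Python sketch of the analogy, using toy "utility" numbers assumed purely for illustration: the mean has breakdown point 0, so one arbitrarily large outlier, the monster, dominates it, while a robust aggregate like the median barely moves.)

    from statistics import mean, median

    population = [5.0, 6.0, 5.5, 6.2, 5.8]       # ordinary agents' utilities (toy data)
    with_monster = population + [1_000_000.0]    # add one utility monster

    # The mean is hijacked by the single outlier; the median barely moves.
    print(f"mean:   {mean(population):.2f} -> {mean(with_monster):.2f}")
    print(f"median: {median(population):.2f} -> {median(with_monster):.2f}")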

I wish I'd seen this earlier... see my comment under "Citogenesis". The only source for this claim is a book consisting of articles copied from Wikipedia. I removed the claim. Cholling (talk) 12:40, 5 December 2022 (UTC)[reply]

Citogenesis[edit]

Removed this bold claim: "It can be shown that all consequentialist systems based on maximizing a global function are subject to utility monsters." First of all, "can be shown" is almost certainly "has been claimed". Second, the "source" that was cited, Frederick Kennard's Thought Experiments: Popular Thought Experiments in Philosophy, Physics, Ethics, Computer Science & Mathematics, appears to be one of those self-published books consisting solely of articles copied from Wikipedia. Just another example of citogenesis. I'm interested in this claim, but only if a real, non-Wikipedia source can be found. Cholling (talk) 12:38, 5 December 2022 (UTC)[reply]

...damn, that is bold... and it has some intuitive force, although I'm not really sure why. It's not really a subject for Wikipedia, though, not without pre-existing secondary sources anyway. A shame that the original source is essentially a content thief.
I'll consider looking into this, as I think the claim may have some merit / has been discussed in this way before. The utility monster (in Nozick's original iteration at least) can only really refute classical utilitarianism, and does pretty much nothing to, say, J.S. Mill's account, which also works alongside the harm principle. But if the utility monster could be extended to critique J.S. Mill in some meaningful way... well, that's kind of a big deal. Meikkon (talk) 21:42, 6 March 2024 (UTC)[reply]

on animal rights / adding this to watch list[edit]

Nozick does not once say the word "cookie" in ASU, last I checked. Nor does he say "happiness units" or "happiness hogs". By this I mean to say that the whole page discusses this thought experiment with very little reference to the original text. The main references are from secondary sources; only one reference is actually Nozick himself, and it is a single quote from ASU.

These references shouldn't be removed, but they need to be clarified. Kennard's discussion /is/ interesting, but that's Kennard's work, not Nozick's. The original text uses the thought experiment to expose the holes in utilitarian thought through its inconsistencies regarding animal rights. This is crucial to interpreting Nozick's work correctly.

That's not to say, of course, that contemporary popularisations of the thought experiment (i.e. Kennard and the like) also refer to animal rights. That is obviously not the case, and the discussion has largely been divorced from it over time.

It is important to reiterate here, I think: Nozick did not spend long on the utility monster at all. He may have been the first to bring it up, but he did not develop it in great depth (outside his discussion of animal rights). That credit ought to be more clearly attributed to other scholars, so that people can find relevant sources more easily (and better understand ASU as a text).

I'll overhaul this page at a later date, but will need to do some further reading outside of Nozick's work to do a good job of that. Any help in the meantime is appreciated. Thx y'all. Meikkon (talk) 17:39, 12 December 2023 (UTC)[reply]