Talk:Superintelligence/Archive 1

From Wikipedia, the free encyclopedia
Archive 1

Where to put Super-Intelligence

From what I've heard, Super Intelligence has been trying to find a home. I have currently shifted my focus to the artificial intelligence article, and from what I've seen, this article (Super-Intelligence) would be the perfect fit. Thank you, and if I could get a prompt response and/or course of action, that would be very much appreciated. Chucks91 (talk) 18:59, 19 November 2009 (UTC)

Speculations or facts

The article as it now stands does not adequately distinguish between verifiable, published facts and the author's own speculations. Statements about the hypothetical capabilities of some future entity, if they cannot be supported by reliable sources, should be deleted. --Russ (talk) 13:46, 25 March 2008 (UTC)

I have added references - chief among these are several articles by Oxford-based philosopher N. Bostrom, Ray Kurzweil, and others. Pictures are my own - fully based on others' work.

Note - I'm happy for many things to be deleted - my aim is for this article to serve as a 'base' for others to develop. There's nothing in the article that has not been speculated on in the public domain... Kind regards Wjwillia (talk) 19:11, 29 March 2008 (UTC)

It just does not hold. OK, there was a published work that stated "SI may be able to X", and an expert understands that in this field pretty much everything is some kind of speculation. HOWEVER, in Wikipedia most cited sources are NOT speculations. Hence a Wikipedia article cannot just keep such a tone and repeat "SI may be able to X" after the source, when a few links away you've got another article that says "a 32-bit CPU may be able to address 4 GiB of RAM". This is plainly inappropriate. Any speculations, now being 95% of the article, should be very clearly marked as such, and still sourced as a widely recognized point of view.
Another remark. Too many of the citations are from primary sources. This is against the Wikipedia guideline WP:RS. --Kubanczyk (talk) 18:45, 8 April 2008 (UTC)
Agreed. There is a great deal of speculation in this article, which, though supported by references, does not interpret those sources accurately. There is a great deal of uncertainty regarding what the capabilities of a superintelligence may be. Any claims, even cited, should be expressed as "person x has made the claim that SI may be capable of y", which is not the same as "SI may be capable of y". Many speculations on the page also make assumptions, such as that an SI would have massive storage capabilities. An SI would be software. Storage capacity and processing capabilities are likely to be dependent on the hardware, which is a separate issue. A non-SI but massive worldwide computing network would have many of the capabilities ascribed here to an SI. The article assumes a certain implementation of an SI. For the most part it strikes me as being akin to 1920s expectations of "the future". 216.36.186.2 (talk) 14:10, 15 May 2008 (UTC)
This article needs to be heavily edited to trim down the speculation to the minimum. That'll probably reduce it to a paragraph or two, but that's better than what's there at present. Autarch (talk) 17:49, 1 July 2008 (UTC)

... What the hell is this? All the naming and classification systems (in the graphs) seem arbitrarily constructed and are effectively semantics. This article does not at all present a two-sided analysis of ANY issue. --Alex Rohde April 28th 2008. —Preceding unsigned comment added by 199.111.71.84 (talk) 20:18, 28 April 2008 (UTC)

This article seems to have a strong techno-utopian bias. —Preceding unsigned comment added by 70.1.206.244 (talk) 13:12, 14 May 2008 (UTC)

The whole concept suffers from the difficulty of defining "intelligence" in the first place. It would be an improvement to note this fact in the first or second sentence. — Wegesrand (talk) 09:16, 4 March 2013 (UTC)

Embarrassing Article

This article is a little embarrassing to read. Where did all the article text come from, and who decided this was a good place to put it? I was going to try to hit the article with a {fact} hammer, but I gave up a third of the way through. The article may need all but a total rewrite. FFLaguna (talk) 00:27, 26 May 2008 (UTC)

Don't bother adding templates, most of the content is surely unsalvageable. I've trimmed the most egregious parts already, and when I have a bit more time intend to go at the parts that look like they might contain a few worthwhile sentences or sources. ~~ N (t/c) 01:42, 26 May 2008 (UTC)
Thank you, this page is in need of serious cleanup. It is an interesting topic but a lot of what is here is original research and there is significant overlap with other pages such as technological singularity. 216.36.188.184 (talk) 01:48, 5 June 2008 (UTC)
This reads like a piece of science fiction for the most part. It's a fascinating thought experiment to be sure, but for all the reasons listed by others before me it really doesn't belong on Wikipedia in anywhere near its current form. —Preceding unsigned comment added by 72.77.26.34 (talk) 14:35, 26 June 2008 (UTC)

I agree, there is significant overlap with article on technological singularity.

But that's because technological singularity is an inextricably related concept and by far the more developed article on the English Wikipedia; the concept of superintelligence would be better discussed there, together with the "Ultraintelligent Machine", an early concept that led to both the Singularity and superintelligence. #Merge with technological singularity :—-— .:Seth_Nimbosa:. (talkcontribs) 11:56, 28 October 2009 (UTC)

Total rewrite

This article was, IMO, not salvageable, as well as a disgrace to what is supposed to be an encyclopedia. What to delete and not delete was extremely hard to determine because most of it seemed to be a synthesis of cites, making it impossible to determine which part of the synthesis contained the kernel of truth. Thus, I've reduced it back to a stub to start fresh.--Hypergeometric2F1(a,b,c,x) (talk) 08:21, 8 July 2008 (UTC)

Thank you for taking this on. Excellent work, and I agree completely. ---- CharlesGillingham (talk) 23:59, 8 July 2008 (UTC)

Merge with technological singularity

Technological singularity is an inextricably related concept and a more developed article on the English Wikipedia; the concept of superintelligence would be better discussed there, together with the "Ultraintelligent Machine", an early concept that led to both the Singularity and superintelligence:

"The Singularity represents an "event horizon" in the predictability of human technological development past which present models of the future may cease to give reliable answers, following the creation of strong AI or the enhancement of human intelligence...
"A number of noted scientists and technologists have predicted that after the Singularity, humans as we exist presently will no longer be driving technological progress, with models of change based on past trends in human behavior becoming obsolete.
"In 1965, statistician I.J. Good described a concept similar to today's meaning of the Singularity, in Speculations Concerning the First Ultraintelligent Machine.."

In short, I am asking for your support for a MERGE, together with the citations, etc. :—-— .:Seth_Nimbosa:. (talkcontribs) 11:56, 28 October 2009 (UTC)

True superintelligence is the zeazrovneba

Philosopher Oleg Starchen(or Oleg Olden) says:"True superintelligence is the zeazrovneba". Zeazrovneba is a Georgian term. —Preceding unsigned comment added by 217.147.230.82 (talk) 12:58, 18 May 2011 (UTC)

Dubious et al. in-line tags

Hello! Well, everything is grammatically correct now, but I was surprised by the amount of weasel words and unsupported attributions that were present. Virtually nothing is cited, even statements like "There are no serious ongoing projects which aim at creating superintelligence." Really?? I don't know how you can say that after Watson won first prize and $1,000,000 on Jeopardy! in 2011. That's just one example of unsupported statements. There are plenty of "There is fear...", "Skeptics claim...", "Critics argue...", and such. I'm actually interested in this topic, so I'll do what I can to introduce citations and expand things. We could start by adding more in-line citations to the four sources that are already present. Regards. Braincricket (talk)

Delete entire Criticism section

There is not a single reference in the criticism section. Unless some editor comes up with a reference, this section will need to be deleted. Trade2tradewell (talk) 12:12, 15 October 2012 (UTC)

Still in need of much work

I see from this talk page that this article has been off to a bad start and has had to be blown up and started over at least once before. It would be helpful to look up better sources. I've put together a bibliography of Intelligence Citations, posted for the use of all Wikipedians who have occasion to edit articles on human intelligence or psychology and related issues. I happen to have circulating access to a huge academic research library at a university with an active research program in these issues (and to another library that is one of the ten largest public library systems in the United States) and have been researching these issues since 1989. You are welcome to use these citations for your own research. You can help other Wikipedians by suggesting new sources through comments on that page. It will be extremely helpful for articles on psychology to edit them according to the Wikipedia standards for reliable sources for medicine-related articles, as it is important to verify articles on these issues as well as possible. -- WeijiBaikeBianji (talk, how I edit) 23:01, 1 December 2014 (UTC)


Biological superintelligence

"Selective breeding and genetic engineering could improve human intelligence more rapidly." This statement is said with a moderate degree of conviction but has nothing to back it up. Is this a hypothesis or is this actually happening? — Preceding unsigned comment added by Xkit (talkcontribs) 08:36, 24 December 2014 (UTC)

Introduction is just garbage

"Experts in AI and biotechnology do not expect any of these technologies to produce a superintelligence in the very near future." ummm... yea, this is just something the author pulled out of his butt — Preceding unsigned comment added by 174.116.111.232 (talk) 13:01, 30 April 2015 (UTC)

Do you have any reliable sources to suggest for improving this article? -- WeijiBaikeBianji (talk, how I edit) 19:49, 30 April 2015 (UTC)

"Sentient machines" is not "Superintelligent machines"

A sentence in the fourth paragraph of the lede glosses over an issue that is important in the philosophy of artificial intelligence. It refers to "sentient machines", making the assumption that "super intelligence" and "sentience" are somehow the same thing.

(For background, please see Artificial intelligence#Philosophy and ethics, Philosophy of artificial intelligence, Chinese room, Turing test#Weaknesses, Hard problem of consciousness, Mary's room, Computationalism, Functionalism, Philosophy of mind etc.)

Sentience is the ability to "feel" and have "experiences", i.e. it is a rudimentary form of "consciousness". The term refers to both people and animals, and specifically to the strong intuition that, even if these animals are not intelligent, they still "feel" and thus are entitled to certain rights.

Intelligence (even "super intelligence") narrowly refers to the ability to solve extremely difficult problems. It's possible that consciousness is required to solve difficult problems. It's also possible that consciousness is not necessary to solve difficult problems. We don't know.

We must be careful to avoid the mistake of conflating "intelligence" and "personhood". In science fiction, it's necessary for the writer and the reader to have a way to make it clear what is a "character" (i.e. a "person") and what is a machine. Science fiction thus uses a number of terms to make this distinction: "self-awareness", "sentience", "consciousness", etc. When a machine makes the transition into a character, the writer usually adds a vaguely spiritual moment where the machine "comes alive" or "wakes up". This is a literary device and is not an accurate presentation of the modern philosophical understanding of the subject. Indeed, it is a throwback to Cartesian dualism, a discredited approach to the philosophy of mind. ----CharlesGillingham (talk) 18:38, 19 February 2018 (UTC)

"Wisdom"

Referring to the quote "[...] an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills."

I know Bostrom uses the term wisdom here, but it still should not appear in the article. The source is from 1998, whereas the book is from 2004 and defines Superintelligence as "[...] intellects that greatly outperform the best current human minds across many very general cognitive domains" (page 63). Bostrom also defines the orthogonality thesis (p130) like so "Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal" and he later explicitly says (page 141): "Second, the orthogonality thesis suggests that we cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans [...]".

It's fairly clear that saying 'general wisdom' was an unlucky choice of words. It flies in the face of the entire alignment problem insofar as it's associated with ethics, and insofar as it isn't, it's clearly misleading, and unnecessarily so. --109.192.165.115 (talk) 19:03, 2 March 2018 (UTC)

I changed it to his 2014 definition, rather than the previous definition cited to Bostrom's work from 1998. Rolf H Nelson (talk) 00:23, 4 March 2018 (UTC)

"copy edit"

"(mean 2168, st. dev. 342 years)" surely this is wrong? Should be "2068, st. dev. 3.42 years"? — Preceding unsigned comment added by 41.50.83.197 (talk) 08:22, 23 January 2019 (UTC)

Prediction markets

Prediction markets are hardly superhuman in "virtually all domains of interest" and are not even reliably better than the best individual human predictors. Anyone care to defend this before I remove it? WeyerStudentOfAgrippa (talk) 17:42, 27 January 2020 (UTC)

What do we do with Kshitij Gautam's (08:18, 5 May 2021‎ Motocrox) edit?

He added his name despite being pretty much unknown (Google Scholar shows 2 people with this name, and neither has achieved much). I read his entire article (he is the only author), and it looks more like a show-off to his friends/girlfriend. I am not a superintelligence scientist, but the article looks like garbage to me. I will not undo his edit myself, as I am waiting on other people's thoughts. — Preceding unsigned comment added by Callmesolis (talkcontribs) 15:59, 5 May 2021 (UTC)

I assume the article and edit were in good faith, but there are thousands of papers on what AGI algorithms will and should look like, and if we were going to discuss that topic, this one is not high on the list of articles/ideas we should discuss; it's also not a reliable source per WP:RS, I think. I suggest undoing the edit. --Steve (talk) 12:53, 6 May 2021 (UTC)
Well, I have waited a week. I guess the article is not very popular. I am undoing it. Callmesolis (talk) 21:12, 11 May 2021 (UTC)

Misleading thing about poll and not reliable sources

In the section "Forecasts" there is a misleading sentence: "In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines...". The response rate was 29 out of 100 and there is no guarentee that those who said "it will never occur" are within those 29. Like Hubert Dreyfus who said: “I wouldn’t think of responding to such a biased questionnaire. … I think any discussion of imminent super–intelligence is misguided. It shows no understanding of the failure of all work in AI. Even just formulating such a questionnaire is biased and is a waste of time.” Additionally my opinion: It is not a good response rate and the biased questions there were already a problem from social science view since they all made an assumption that superintelligence is possible (and HMLI also, in this article sort of HMLI) and that HMLI will lead to superintelligence and they only gave them two options to choose how much percent they think it will appear within a) 2 years; b) within 30 years. — Preceding unsigned comment added by Wholesomist (talkcontribs) 21:49, 5 March 2022 (UTC)

existential nihilism

"However, it is also possible that any such intelligence would conclude that existential nihilism is correct and immediately destroy itself, making any kind of superintelligence inherently unstable."

I pulled the above from the article, as it is pure supposition with no reference, citation, or reliable source. It follows that, since we do not consider ourselves to be superintelligent, we would not know the boundaries of any conclusion a superintelligence might reach. And, thanks to our (very limited) understanding of causality, it is extremely unlikely that a superintelligence would conclude that existential nihilism is correct. Without any reading whatsoever, I would suggest that "analysis paralysis" is a far more likely outcome. 20040302 (talk) 15:32, 30 May 2023 (UTC)