Talk:Superintelligence


Too many lists, other types of superintelligence, definition


Lists can be good for understanding. But this article probably has too many numbered or bulleted lists (currently 10, which is more than in any article I have seen). We might consider reducing that number by half.

One other potential issue is that there is a section about biological superintelligence in the middle of sections about artificial superintelligence. ASI clearly deserves more content than biological superintelligence, but the placement might confuse readers. Perhaps we could have a section "Other forms of superintelligence", covering biological superintelligence and other speculative paths like networks, organizations, and brain-computer interfaces.

I think the first sentence was better before. I believe this clause could be removed, for accuracy and simplicity: ", ranging from marginally smarter than the upper limits of human-level intelligence to vastly exceeding human cognitive capabilities" (see this discussion). Alenoach (talk) 01:35, 12 September 2024 (UTC)

Biased graph


Hi, the graph shown here (File:Test scores of AI systems on various capabilities relative to human performance - Our World in Data.png) can't be generalized the way it appears to suggest. There are numerous sources cited at [1] that show otherwise. Yann (talk) 18:12, 24 September 2024 (UTC)

I made this alternative version that specifies the benchmark names, so that it's clear what the results represent. Is it ok to replace the old image with it? Alternatively, there is also this one from the 2024 AI Index, which gives very similar data to the one from Our World in Data but presents fewer benchmarks.
Otherwise, if you disagree with the concept of showing benchmark results, I would say that benchmarks are not perfect, but they offer a factual metric. As you showed, there are articles that claim that AI systems just don't understand anything and are mere stochastic parrots, or that show examples of where AIs fail. But there are also many other researchers (most, I would guess) who simply consider that AI understands, to some degree.[2] And when AI starts to surpass humans on every objective test for understanding (including tests whose data is private and unavailable on the internet), can we still contend that it doesn't understand? Alenoach (talk) 19:54, 24 September 2024 (UTC)