Discussion about this post

Fin Moorhouse

Thanks for writing this! Lots of interesting points. A few thoughts while reading:

>What happens if superintelligence discovers that our fundamental assumptions about causality, consciousness, or even logic are wrong?

I'm actually not sure this is worth worrying about. Our understanding of causality and consciousness is indeed changing and highly disputed, but in most contexts (e.g. understanding US politics) this isn't very relevant. And I don't know what it would even look like to discover that our fundamental assumptions about logic are wrong (some have argued against obvious-seeming logical axioms, e.g. dialetheism, but those people live their lives much like the rest of us).

>How do you select for “truthfulness” when the nature of truth itself is being revised monthly?

Similarly, I'm not entirely sure what this means, but it sounds a bit too dramatic to me. Consider again that people trying to figure things out in other epistemic domains rarely stop to ask which theory of truth is correct.

>I think there’s a much stronger case for automated forecasting working, but it too has a critical weakness: trust […] if people's entire worldviews are crumbling monthly, why would they trust anything, even an AI with a perfect prediction record?

Here I'm just not sure I see the positive case that you and I will lose all trust in every source of information. Why would you personally decide not to listen to "an AI with a perfect prediction record"? Another angle on this: it will always be possible (if painstaking) to scrutinise the reasoning and sources behind your favoured source and check whether they seem sensible. If they seem like bullshit, you can tell others and that source will fall out of favour, and vice versa.

>They [non-enhanced people] would functionally become children (or, even newborns if the intelligence explosion gets really crazy) in a world run by incomprehensible adults.

I do think this is a good and worrying point. But a couple of thoughts. One is that, already, some people in the world are in a far stronger epistemic position than others. Some are lucky enough to have learned a lot about the world, to know how to competently access credible sources, and so on. Some, as you point out, have crazy views about the world (e.g. young Earth creationism). Why isn't this already a disastrous situation? I think one reason is that we're most free to form crazy (wrong) views on issues which don't materially affect our lives. Our beliefs about the age of the Earth don't matter much for how our lives go; our beliefs about, say, which side of the road people drive on do matter, so we get them right more often. (And those few people who cannot work out which side of the road to drive on are typically not frequent drivers; there is often a happy coincidence between epistemic competence and the consequences of making errors.)

A second thought is that all of us defer by default on a huge range of questions. I have not personally recreated the experiments to verify whether the Earth is flat, or whether it revolves around the Sun, but I trust the community that figured these things out, scrutinised them, and disseminated the results.

Incidentally, I really recommend Dan Williams's Substack, which shaped my views on a lot of these questions: https://www.conspicuouscognition.com/

Thanks again!

A. Jacobs

A fascinating perspective on the social consequences of superintelligence. If intelligence accelerates faster than our capacity to comprehend it, reality drift may well emerge as a defining feature of the transition.
