I dropped a link in the Zoom chat to this chart to illustrate a point, but I never found an opportunity to change the topic
This is one of my all-time favorite graphs: evidence vs. confidence. The weight of evidence in bits is on the x-axis, and the corresponding Bayesian credence is on the y-axis. If you have no evidence, or if the evidence in favor is exactly counter-balanced by evidence against, then the confidence in the claim is 0.5, perfectly agnostic. The interesting aspect is that it takes only a few bits of evidence (like 5) in order to be very confident (>95%) in a claim.
I wanted to use this as an analogy to Chapman's critique. It is as if he is pointing out that you can never be 100% confident in a claim because that would require infinite bits of evidence, which is impossible. That is technically true but pragmatically irrelevant, since 5 bits would suffice for most cases and 10 bits is good enough for 99.9% confidence. If you require military-grade, bet-the-lives-of-your-children confidence, then you might invest in acquiring 30 bits (wrong about 1 time in a billion).
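For anyone who wants to reproduce the curve, here is a minimal sketch (my own, not from the chart's source), assuming "bits" means the base-2 log of the likelihood ratio applied to even prior odds:

```python
# Hypothetical sketch: confidence as a function of bits of evidence,
# where bits = log2(likelihood ratio) and the prior is 50/50.
def confidence(bits, prior=0.5):
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * 2 ** bits
    return posterior_odds / (1 + posterior_odds)

for b in (0, 1, 5, 10, 30):
    print(f"{b:>2} bits -> {confidence(b):.10f}")
# 0 bits -> 0.5, 5 bits -> ~0.970, 10 bits -> ~0.999, 30 bits -> ~0.9999999991
```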
After some initial small talk about the ethics of keeping pets, we dove in and spent a long while arguing about the meaning of "impossible" in this claim:
Assigning a consistent set of numbers to diverse statements seems impossible.
While we were waiting for others to join, we started off on an interesting tangent about MKULTRA and all its downstream effects (Charles Manson, The Unabomber, Ken Kesey, The Grateful Dead, Apple Computer, etc.)
This led to friend-of-the-Stoa Erik Davis and his book
Apparently Davis knew PKD, and I mentioned I'm a member of the Exegesis II project
A discussion of meta-models led to the work of David Wolpert:
As a preliminary warmup, we discussed why different time zones persist. (No good reason; we should switch to UTC.)
I suggested that a universal object ID registry might be theoretically possible by considering all possible patterns in the binary expansion of the reals. This led to a tangent on levels of infinity and the continuum hypothesis.
Maybe our universe corresponds to a single transcendental number...
I took issue with Chapman's narrow notion of objects as being physical, at least in all the examples in this article. My concept of an object as a set of related properties is no doubt informed by decades of practice in object-oriented programming.
This led to a discussion of mathematical objects such as circles, and their ontological status.
I suggested that an object exists if it can be described (as a set of related properties), but it is only "real" if it is "realized" physically in mass-energy in space-time.
Apparently some mathematicians deny the existence of very large natural numbers:
We agreed more or less that objects are reified for a purpose which necessarily brings in the notions of agents, values, and consciousness.
TIL the concept of "moral patient"
Philosophers distinguish between moral agents, entities whose actions are eligible for moral consideration and moral patients, entities that themselves are eligible for moral consideration. Moral agency - Wikipedia
Can there be value without consciousness? Was there any value in the universe a million years after the big bang when presumably there were no conscious agents? A related quote was offered:
Let us imagine one world exceedingly beautiful... And then imagine the ugliest world you can possibly conceive. Imagine it simply one heap of filth, containing everything that is most disgusting to us, for whatever reason, and the whole, as far as may be, without one redeeming feature. The only thing we are not entitled to imagine is that any human being ever has or ever, by any possibility, can, live in either, can ever see and enjoy the beauty of the one or hate the foulness of the other... [S]till, is it irrational to hold that it is better that the beautiful world should exist than the one which is ugly? Would it not be well, in any case, to do what we could to produce it rather than the other? Certainly I cannot help thinking that it would; and I hope that some may agree with me in this extreme instance. - G.E. Moore, Principia Ethica
The consensus was that this thought experiment was incoherent: you have to imagine inserting an observer into the world to evaluate its beauty, but that possibility is excluded by the experiment. Maybe we're missing something?
Kicked it off by objecting to the first paragraph:
The correspondence theory of truth does not include a causal explanation of how the correspondence between beliefs and reality comes about. Unfortunately, there are no correspondence fairies to do that job for us. Perception can do at least part of the work.
I suggested perception is only half the story. The other half is action. Internal models are built from perception (sensory inputs inform the models). The models are used to inform action. Actions are bets. Agents invest time and energy to perform actions in a bet to increase value. If the bets pay off, then that is good evidence that the models are true.
This led to a discussion of counterfactuals and causality
We had an interesting discussion on inference vs. rationality, rational vs. irrational vs. arational, and the reasonableness of animals. David Friedman has written on "rationality without mind" in his book on Price Theory
I do like the idea of CEV but I assume Yudkowsky has repudiated it along with all of his earlier work (which ironically leads me to discount everything he says now because I assume he will repudiate it later).
I've had this on my to-read list for too long; thanks for the recommendation:
I mentioned that I encountered someone who claimed that they knew how to program an AGI but chose not to (presumably to save the world, or at least to postpone the end). Roger Williams is the author of MOPI:
@Sahil and I noted that the more Chapman makes definitive, strong claims, the more we tend to disagree with him, and that was certainly the case for this chapter.
Of course, I had to object to the initial claim again:
Philosophers use the word "proposition" to designate whatever is the sort of thing one believes or disbelieves, or that could be true or false. They can't say what sort of thing that is, though, or how one would work.
A proposition is a model of a condition. A model is a representation. A condition is an abstract pattern that is used to match other patterns, abstract and concrete. A condition is true to the extent that its pattern matches the pattern of the world model. In other words, correspondence is inferred from coherence.
A belief is a model of conditional behavior assigned to an agent. Beliefs are instrumental in that they are used to explain past behavior or predict future behavior.
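To make that concrete, here is a toy sketch (entirely my own gloss, not anything from Chapman) of a condition as a pattern matched against a world model; all the names are invented for illustration:

```python
# Toy gloss: a proposition as a condition (pattern) matched against a world model.
world_model = {
    "river": {"width_m": 30, "frozen": False},
    "bridge": {"intact": True},
}

def holds(condition, world):
    """A condition maps objects to required property values; it is judged true
    when the world model matches the pattern (correspondence via coherence)."""
    return all(
        world.get(obj, {}).get(prop) == value
        for obj, props in condition.items()
        for prop, value in props.items()
    )

can_cross_safely = {"bridge": {"intact": True}}
print(holds(can_cross_safely, world_model))  # True: the pattern matches the model
```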
@Sahil noted that the Good Regulator Theorem implied that it was not possible for an agent to act in the world without a model:
We discussed whether System 1 and 2 were appropriate models in this context
Christian quoted Dawkins as saying we don't have a good theory of creativity, which I found ironic considering my belief that creativity is necessarily an evolutionary process: variation and selection.
I usually look for an excuse to bring the Many Worlds Interpretation of QM into the discussion. This time it was to pitch my idea that it solves the problem of why there is something rather than nothing. If you go back far enough in the Everettian timelines, there is one with something, which is the origin of our universe, and another one with nothing. Sahil objected, saying (something along the lines of) that the Schrödinger wave equation only makes sense in our universe, so I tentatively conceded it was more of a Tegmarkian claim than a QM one.
Obligatory related LW articles courtesy of @Sahil:
And another from Scott Aaronson, "Why Philosophers Should Care About Computational Complexity"
Just as a side note, some friends and I formed a club called The Daemon Maxwell Group way back in the late 20th century that produced Robin Hanson's first prediction market.
We took turns trying to explain Bayes to Christian. I led with "Bayes is a simple formula for turning observations into knowledge."
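For Christian's benefit (and anyone else), here is the "simple formula" as a runnable sketch, with numbers invented purely for illustration:

```python
# Bayes' rule: turn an observation into an updated degree of belief.
def bayes_update(prior, p_obs_given_h, p_obs_given_not_h):
    evidence = p_obs_given_h * prior + p_obs_given_not_h * (1 - prior)
    return p_obs_given_h * prior / evidence

# e.g. a hypothesis we initially give 10% credence, and an observation that is
# 90% likely if the hypothesis is true but only 5% likely if it is false:
print(bayes_update(0.10, 0.90, 0.05))  # ~0.667 after one observation
```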
There was general agreement at the end that Chapman's critiques of Rationalism don't really apply to LW-style Rationalists, but he isn't attacking a strawman either, because there do exist Rationalists of the type Chapman criticizes. (Daniel should note I used an existential quantifier there.) Instead, Chapman is using a weak man argument:
I confessed to being one of the people Chapman was talking about in the first footnote
Some rationalists simply define "rational" as "conforming to decision theory," in which case probabilistic rationality is a complete and correct theory of rationality by definition.
My defense was along the lines of pragmatism. When choosing a system for a situation, whether it is rationalist, meta-rationalist, or something else, unless you choose randomly there has to be some standard, and for the pragmatist that standard is "whatever works". But what does that mean? I interpret it to mean you get a good ROI, return on investment. Every choice is a bet (the investment), and the goal is that it hopefully pays off more often than not, at least when it matters. Given this view, we necessarily come back to probability theory, decision theory, and (I should have mentioned) game theory.
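Here is the kind of back-of-the-envelope calculation I have in mind when I say every choice is a bet (the option names and numbers are illustrative only):

```python
# Pragmatism as ROI: each option is a bet with a cost, a possible payoff,
# and a probability of paying off; "whatever works" = best expected return.
def expected_return(p_success, payoff, cost):
    return p_success * payoff - cost

bets = {
    "safe option": expected_return(0.95, 10, 5),    # 4.5
    "risky option": expected_return(0.20, 100, 5),  # 15.0
}
best = max(bets, key=bets.get)
print(best, bets[best])  # the pragmatist backs whichever bet works on average
```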
We discussed what it means to understand something ("if I don't know how to program it, I don't understand it"). Or maybe there are levels of understanding, and possessing a causally correlated model is what yields varying degrees of understanding.
We'll defer discussion of Chapman's Probability theory does not extend logic until next week when others can join. Also, we forgot to read it.
We led with a short discussion of the Waking Up conversation between Evan Thompson and Sam Harris (linked in a previous message). It was interesting in that it involved a contextualizer and a decoupler taking the positions you would expect the other to take under most circumstances.
Some topics that came up...
Reconsidering the merits of ritual
The glass bead game
Our shared history with game theory including Dawkins and JvN
I confessed to feeling personally attacked by Chapman in this chapter because I've identified with the game theory "ideology" for so long. Of course I recognize that isn't particularly rational, and I'm open to being shown the error of my ways in future installments (though I find it difficult to imagine at the moment how you can do better than game theory). @Sahil was sympathetic. (At one point I said "we're the same!" which he heard as "we're the sane!" lol)
What does it mean to "be present"?
Can't have an Eggplant meeting without linking LW and SSC, right?
TIL there is a name for the viewpoint that game theory is universally applicable: Nassim Taleb calls it the ludic fallacy. I should say TI(re)L because I did read The Black Swan a long time ago.
Chapman really seemed to miss the mark in this one when he attempted to show that it is difficult to get the probabilities of all possible outcomes to add up to 1. Trivially, in his example you either cross the river or you don't. If the first outcome is assigned 0.9, then the other outcome is necessarily 0.1, no matter how many ways there are to not cross the river.
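A quick sketch of the point (my own toy numbers): however you carve up the ways of not crossing, their probabilities are jointly pinned to the complement of P(cross):

```python
# However many ways there are "not to cross", they must share 1 - P(cross).
p_cross = 0.9
ways_not_to_cross = {"turn back": 0.06, "swim instead": 0.03, "bridge collapses": 0.01}
assert abs(sum(ways_not_to_cross.values()) - (1 - p_cross)) < 1e-9
print(p_cross + sum(ways_not_to_cross.values()))  # 1.0 (up to float rounding)
```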
On a related note, I was surprised to learn the chance of a coin flip landing on its edge is about 1/6000 (American nickel)
The second part about how to interpret the observation of there being a high probability of green cheese on the moon was more interesting, but again it wasn't clear that good old rationality didn't address this problem adequately.
In mostly unrelated news I mentioned my new tech project, a system for establishing ownership of random numbers that can be used as pure indexicals, code-named Metatron
My first objection was to the opening paragraph yet again:
Rationalisms are mainly concerned with thinking correctly. However, they are often also concerned with acting, and try to provide correctness or optimality guarantees for action as well.
I suggested that thinking and acting were not separate, that thinking was just one form of action. The others (Evan, Sahil, Dan) agreed, but still thought it was a useful polarity even if it is a false dichotomy. Fair.
Another objection was to the criticism of closed-world idealizations:
To apply a rational action theory, you have to make a closed world idealization and ignore all but a few possibilities.
I don't think any possibilities are ignored when the analysis considers whether relevant conditions are true (or not). For example, if you're interested in whether you can cross the river safely, the other possibilities (not crossing the river safely) are all taken into account. I'm not sure I was able to convince anyone else in the group that this was a valid objection.
... decision "theory" is stated in terms of a non-detectable metaphysical substance, "utility," that almost certainly doesn't exist.
Evan suggested that "utility," like "fire," is not a natural category. Relevant Chapman reference:
Since people are obviously mostly not rational, descriptive rationalism has mainly retreated to the claim that part of your brain is properly rational, and part of it isn't.
Are people mostly not rational? I was thinking in terms of everyday choices made by most people. There are literally millions of irrational actions they could take (contrary to their implicit beliefs and interests), and yet they make the rational choice. Maybe they didn't use System 2 or conscious Bayesian inference, yet the choices come out the same. This led to a good discussion of equivocation, and Evan introduced (as far as I know) the concept of the "motte and bailey" trap. Worth exploring in the future.
Evan made an interesting analogy: the relation between reasonableness and rationality is like that between lower- and higher-level programming languages, like assembly or C underlying a higher-level scripting language like Python. The rationalists are like naïve Pythonistas claiming that Python is all you ever need for any project. But that isn't true: if you want to write an operating system you need to access lower levels like hardware drivers, or at least use Python libraries that wrap the lower-level access.
I wondered if there is always a backdoor for rationality when it comes to evaluating reasonableness. This doesn't really fit into the programming-language analogy; it would be like C programs requiring a Python script to evaluate whether they met the requirements. If rationality mostly evolved for purposes of evaluation (i.e., testing solutions against criteria), I can see how that could be misconstrued as a panacea, in the same sense that early AI went down a path of seeing every problem as a search problem. Like, technically true, you can in principle view every problem that way, but it turns out to be impractical.
Some topics that came up...
Rationalists tend to view AGI as an existential risk. It is unclear whether most reasonable people share that concern, at least since the first Terminator movie (1984) or Colossus: The Forbin Project (1970).
When people thought the Earth was flat, they were wrong. When people thought the Earth was spherical, they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together. Wronger than wrong - Wikipedia
The rationalists have an admonition for noobs: "don't try to be clever". Evan suggested that Epictetus was offering a similar warning to would-be philosophers with his quote on the Stoa site:
This led to a fruitful discussion on the efficacy of âpointing methodsâ like the Socratic method and zen koans, and how they become less effective when the student understands how they are intended to work, which tends to create a tradition of secrecy and an inevitable arms race: