Started with some small talk about the current political situation in the US, with big tech bringing the ban hammer to Trump in particular and the alt-right in general. We tied this in to the current topic noting that hyperobjects are not reducible…
A discussion on whether even mathematics has leaky abstractions led to the history of imaginary numbers…
Forrest Landry’s Immanent Metaphysics came up again
It was suggested that Chapman’s project is deconstructive in the Derrida sense
In robbing a hotel room, people see ‘doors’ and ‘locks’ and ‘walls’, but really, they are just made out of atoms arranged in a particular order, and you can move some atoms around more easily than others; instead of going through a ‘door’ you can just cut a hole in the wall (or ceiling) and obtain access to a space. At Los Alamos, Richard Feynman, among other tactics, obtained classified papers by reaching in underneath drawers, ignoring the locks entirely.
I dropped a link in the zoom chat to this chart to illustrate a point, but I never found an opportunity to change the topic
This is one of my all-time favorite graphs: evidence vs. confidence. The weight of evidence in bits is on the x-axis, and the corresponding Bayesian credence is on the y-axis. If you have no evidence, or if the evidence in favor is exactly counterbalanced by evidence against, then your confidence in the claim is 0.5, perfectly agnostic. The interesting aspect is that it takes only a few bits of evidence (about 5) to be very confident (>95%) in a claim.
I wanted to use this as an analogy to Chapman’s critique. It is as if he is pointing out that you can never be 100% confident in a claim because that would require infinite bits of evidence, which is impossible. That’s technically true but pragmatically irrelevant when 5 bits suffice for most cases and 10 bits is good enough for 99.9% confidence. If you require military-grade, bet-the-lives-of-your-children confidence, then you might invest in acquiring 30 bits (wrong 1 time in a billion).
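The bits-to-confidence curve from the graph is easy to sketch. Treating the weight of evidence as log-odds in base 2 (with a 50/50 starting point, as in the graph), the posterior is a one-liner; the helper name here is my own:

```python
def confidence(bits: float) -> float:
    """Posterior probability of a claim given a net weight of evidence
    in bits (log2 of the odds ratio), starting from even odds."""
    return 1.0 / (1.0 + 2.0 ** -bits)

print(confidence(0))   # 0.5 -- no net evidence, perfectly agnostic
print(confidence(5))   # ~0.97, already past the 95% mark
print(confidence(10))  # ~0.999
print(confidence(30))  # wrong roughly 1 time in a billion
```

This is just the inverse of the log-odds transform; note the curve saturates quickly, which is the pragmatic point above.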
After some initial small talk about the ethics of keeping pets we dove in and spent a long while arguing about the meaning of “impossible” in this claim:
Assigning a consistent set of numbers to diverse statements seems impossible.
While we were waiting for others to join we started off on an interesting tangent about MKULTRA and all its downstream effects (Charles Manson, The Unabomber, Ken Kesey, The Grateful Dead, Apple Computer, etc)
This led to friend-of-the-Stoa Erik Davis and his book
Apparently Davis knew PKD and I mentioned I’m a member of the Exegesis II project
A discussion of meta-models led to the work of David Wolpert:
As a preliminary warmup, we discussed why different timezones persist. (No good reason; we should switch to UTC.)
I suggested that a universal object id registry might be theoretically possible by considering all possible patterns in the binary expansion of the reals. This led to a tangent on levels of infinity and the continuum hypothesis.
Maybe our universe corresponds to a single transcendental number…
I took issue with Chapman’s narrow notion of objects as being physical, at least all the examples in this article. My concept of an object as a set of related properties is no doubt informed by decades of practice in object-oriented programming.
This led to a discussion of mathematical objects such as circles, and their ontological status.
I suggested that an object exists if it can be described (as a set of related properties), but it is only “real” if it is “realized” physically in mass-energy in space-time.
Apparently some mathematicians deny the existence of very large natural numbers:
We agreed more or less that objects are reified for a purpose which necessarily brings in the notions of agents, values, and consciousness.
TIL the concept of “moral patient”
Philosophers distinguish between moral agents, entities whose actions are eligible for moral consideration, and moral patients, entities that themselves are eligible for moral consideration. - Moral agency, Wikipedia
Can there be value without consciousness? Was there any value in the universe a million years after the big bang when presumably there were no conscious agents? A related quote was offered:
Let us imagine one world exceedingly beautiful…And then imagine the ugliest world you can possibly conceive. Imagine it simply one heap of filth, containing everything that is most disgusting to us, for whatever reason, and the whole, as far as may be, without one redeeming feature. The only thing we are not entitled to imagine is that any human being ever has or ever, by any possibility, can, live in either, can ever see and enjoy the beauty of the one or hate the foulness of the other… [S]till, is it irrational to hold that it is better that the beautiful world should exist than the one which is ugly? Would it not be well, in any case, to do what we could to produce it rather than the other? Certainly I cannot help thinking that it would; and I hope that some may agree with me in this extreme instance. - G.E. Moore, Principia Ethica
The consensus was that this thought experiment was incoherent: to evaluate a world’s beauty you have to imagine inserting an observer into it, but that possibility is excluded by the experiment. Maybe we’re missing something?
Kicked it off by objecting to the first paragraph:
The correspondence theory of truth does not include a causal explanation of how the correspondence between beliefs and reality comes about. Unfortunately, there are no correspondence fairies to do that job for us. Perception can do at least part of the work.
I suggested perception is only half the story. The other half is action. Internal models are built from perception (sensory inputs inform the models). The models are used to inform action. Actions are bets. Agents invest time and energy to perform actions in a bet to increase value. If the bets pay off, then that is good evidence that the models are true.
This led to a discussion of counterfactuals and causality
We had an interesting discussion on inference vs. rationality, rational vs. irrational vs. arational, and the reasonableness of animals. David Friedman has written on “rationality without mind” in his book on Price Theory
I do like the idea of CEV but I assume Yudkowsky has repudiated it along with all of his earlier work (which ironically leads me to discount everything he says now because I assume he will repudiate it later).
I’ve had this on my to-read list for too long, thanks for the recommendation:
I mentioned that I encountered someone who claimed that they knew how to program an AGI but chose not to (presumably to save the world, or at least to postpone the end). Roger Williams is the author of MOPI (The Metamorphosis of Prime Intellect):
@Sahil and I noted that the more Chapman made definitive strong claims, the more we tended to disagree with him, and that was certainly the case for this chapter.
Of course, I had to object to the initial claim again:
Philosophers use the word “proposition” to designate whatever is the sort of thing one believes or disbelieves, or that could be true or false. They can’t say what sort of thing that is, though, or how one would work.
A proposition is a model of a condition. A model is a representation. A condition is an abstract pattern that is used to match other patterns, abstract and concrete. A condition is true to the extent that its pattern matches the pattern of the world model. In other words correspondence is inferred from coherence.
A belief is a model of conditional behavior assigned to an agent. Beliefs are instrumental in that they are used to explain past behavior or predict future behavior.
@Sahil noted that the Good Regulator Theorem implied that it was not possible for an agent to act in the world without a model:
We discussed whether System 1 and 2 were appropriate models in this context
Christian quoted Dawkins as saying we don’t have a good theory of creativity, which I found ironic considering my belief that creativity is necessarily an evolutionary process: variation and selection.
I usually look for an excuse to bring the Many Worlds Interpretation of QM into the discussion. This time it was to pitch my idea that it solves the problem of why there is something rather than nothing. If you go back far enough in the Everettian timelines there is one with something, which is the origin of our universe, and another one with nothing. Sahil objected, saying (something along the lines of) that the Schrödinger wave equation only makes sense in our universe, so I tentatively conceded it was more of a Tegmarkian claim than a QM one.
Obligatory related LW articles courtesy of @Sahil:
And another from Scott Aaronson “Why Philosophers Should Care About Computational Complexity”
Just as a side note, some friends and I formed a club called The Daemon Maxwell Group way back in the late 20th century that produced Robin Hanson’s first prediction market.
We took turns trying to explain Bayes to Christian. I led with “Bayes is a simple formula for turning observations into knowledge.”
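The one-line summary above can be made concrete with a minimal Bayes update; the scenario and all numbers here are made up for illustration (a test with 90% sensitivity and a 5% false-positive rate, applied to a 1% prior):

```python
def bayes(prior: float, likelihood: float, false_positive_rate: float) -> float:
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]"""
    numerator = likelihood * prior
    return numerator / (numerator + false_positive_rate * (1.0 - prior))

# One observation turned into (a little) knowledge:
p = bayes(prior=0.01, likelihood=0.9, false_positive_rate=0.05)
print(p)  # ~0.154 -- a positive result lifts 1% to about 15%
```

The surprise for newcomers is usually the base-rate effect: even strong evidence leaves the posterior modest when the prior is low.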
There was general agreement at the end that Chapman’s critiques of Rationalism don’t really apply to LW-style Rationalists, but he isn’t attacking a strawman either, because there do exist Rationalists of the type Chapman criticizes. (Daniel should note I used an existential quantifier there.) Instead, Chapman is using a weak man argument:
I confessed to being one of the people Chapman was talking about in the first footnote
Some rationalists simply define “rational” as “conforming to decision theory,” in which case probabilistic rationality is a complete and correct theory of rationality by definition.
My defense was along the lines of pragmatism. Whatever the ultimate criterion for choosing a system in a situation, whether rationalist, meta-rationalist, or something else, unless you choose randomly there has to be some standard, and for the pragmatist that standard is “whatever works”. But what does that mean? I interpret it to mean you get a good ROI, return on investment. Every choice is a bet (the investment), and the hope is that it pays off more often than not, at least when it matters. Given this view, we necessarily come back to probability theory, decision theory, and (I should have mentioned) game theory.
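The “choices are bets” framing reduces to expected value. A minimal sketch, with entirely hypothetical numbers and a helper name of my own invention:

```python
def expected_roi(p_win: float, payoff: float, stake: float) -> float:
    """Expected return on investment for a simple bet: gain `payoff`
    with probability p_win, lose the `stake` otherwise."""
    return (p_win * payoff - (1.0 - p_win) * stake) / stake

# A system "works", for the pragmatist, if the bets it recommends
# are positive-EV on average:
print(expected_roi(p_win=0.6, payoff=10.0, stake=5.0))  # 0.8 -- a good bet
print(expected_roi(p_win=0.1, payoff=10.0, stake=5.0))  # negative -- a bad bet
```

This is exactly the point where probability theory and decision theory meet: the probabilities come from the model, the payoffs from the values.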
We discussed what it means to understand something (“if I don’t know how to program it I don’t understand it”). Or maybe there are levels of understanding, and possessing a causally correlated model is what yields varying degrees of understanding.
We’ll defer discussion of Chapman’s “Probability theory does not extend logic” until next week when others can join. Also, we forgot to read it.
We led with a short discussion of the Waking Up conversation between Evan Thompson and Sam Harris (linked in previous message). It was interesting in that it involved a contextualizer and a decoupler taking the positions you would expect the other to take under most circumstances.
Some topics that came up…
Reconsidering the merits of ritual
The glass bead game
Our shared history with game theory including Dawkins and JvN
I confessed to feeling personally attacked by Chapman in this chapter because I’ve identified with the game theory “ideology” for so long. Of course I recognize that isn’t particularly rational, and I’m open to being shown the error of my ways in future installments (though I find it difficult to imagine at the moment how you can do better than game theory). @Sahil was sympathetic. (At one point I said “we’re the same!” which he heard as “we’re the sane!” lol)
What does it mean to “be present”?
Can’t have an Eggplant meeting without linking LW and SSC, right?
TIL there is a name for the viewpoint that game theory is universally applicable: Nassim Taleb calls it the ludic fallacy. I should say TI(re)L, because I did read The Black Swan a long time ago
Chapman really seemed to miss the mark in this one when he attempted to show that it is difficult to get the probabilities of all possible outcomes to add up to 1. Trivially, in his example you either cross the river or you don’t. If the first outcome is assigned probability 0.9, then the complement is necessarily 0.1, no matter how many ways there are to not cross the river.
On a related note I was surprised to learn the chance of a coin flip landing on its edge is about 1/6000 (for an American nickel)
The second part about how to interpret the observation of there being a high probability of green cheese on the moon was more interesting, but again it wasn’t clear that good old rationality didn’t address this problem adequately.
In mostly unrelated news I mentioned my new tech project, a system for establishing ownership of random numbers that can be used as pure indexicals, code-named Metatron