In the Cells of the Eggplant

Dec 19 meeting…

Some topics that came up:

Unpacking The Meaning Crisis (Vervaeke and Chapman exchange)

Roam to discuss aboutness, minds, and a billion tiny spooks on

Roam for my own publications at Metamind

We decided next meeting on Dec 26 is an optional hangout to reflect on The Eggplant so far.

Jan 2 meeting

I mentioned a book I read on the topic of truth that I found really influential. Turns out I was conflating 2 books:



Eliezer on truth (and Chapman’s reply)

the unavoidable truth of hierarchy

how LW-style rationalism is closer to empiricism


Eliezer’s toolbox vs law essay

Christian recommends Jeffrey Martin

and SSC’s response

classic Chapman

Evan recommends Blindsight

Jan 9

Started with some small talk about the current political situation in the US, with big tech bringing the ban hammer to Trump in particular and the alt-right in general. We tied this in to the current topic noting that hyperobjects are not reducible…

A discussion on whether even mathematics has leaky abstractions led to the history of imaginary numbers…

Forrest Landry’s Immanent Metaphysics came up again

It was suggested that Chapman’s project is deconstructive in the Derrida sense

Seems we are all fans of Gwern

In robbing a hotel room, people see ‘doors’ and ‘locks’ and ‘walls’, but really, they are just made out of atoms arranged in a particular order, and you can move some atoms around more easily than others, and instead of going through a ‘door’ you can just cut a hole in the wall (or ceiling) and obtain access to a space. At Los Alamos, Richard Feynman, among other tactics, obtained classified papers by reaching in underneath drawers and ignored the locks entirely.

Jan 16

A lot of the discussion revolved around where to draw the boundaries…

Aesthetics/values: science (rationalist) vs mysterian (romantic)

Fredkin’s paradox came up in the context of a major life decision

I dropped a link in the zoom chat to this chart to illustrate a point, but I never found an opportunity to change the topic


This is one of my all time favorite graphs: evidence vs. confidence. The weight of evidence in bits is on the x-axis, and the corresponding Bayesian credence is on the y-axis. If you have no evidence, or if the evidence in favor is exactly counter-balanced by evidence against, then the confidence in the claim is 0.5, perfectly agnostic. The interesting aspect is that it takes only a few bits of evidence (like 5) in order to be very confident (>95%) in a claim.

I wanted to use this as an analogy to Chapman’s critique. It is like he is pointing out that you can never be 100% confident in a claim because that would require infinite bits of evidence, which is impossible. That is technically true but pragmatically irrelevant when 5 bits would suffice for most cases and 10 bits is good enough for 99.9% confidence. If you require military-grade, bet-the-lives-of-your-children confidence, then you might invest in acquiring 30 bits (wrong about 1 time in a billion).
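The bits-to-credence relationship in the graph is easy to compute directly. Here is a minimal sketch; the `credence` helper is mine, and it assumes an even (0.5) prior, matching the graph’s starting point:

```python
def credence(bits: float, prior: float = 0.5) -> float:
    """Posterior credence after `bits` bits of net evidence in favor.
    Each bit of evidence doubles the odds in favor of the claim."""
    odds = (prior / (1 - prior)) * 2 ** bits
    return odds / (1 + odds)

# 5 bits already gives >95% credence; 10 bits gives >99.9%.
for b in (0, 1, 5, 10, 30):
    print(f"{b:>2} bits -> {credence(b):.10f}")
```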

Jan 23 #12

After some initial smalltalk about the ethics of keeping pets we dove in and spent a long while arguing about the meaning of “impossible” in this claim:

Assigning a consistent set of numbers to diverse statements seems impossible.

I was reminded of a good read

This led to many other interesting topics:

Sarah Perry

Acting crazy for rational reasons

Or, in layman’s terms, sometimes you have to be a crazy bastard so people won’t walk all over you.

Unpacking The Meaning Crisis


Jan 30 #13

While we were waiting for others to join we started off on an interesting tangent about MKULTRA and all its downstream effects (Charles Manson, The Unabomber, Ken Kesey, The Grateful Dead, Apple Computer, etc)

This led to friend-of-the-Stoa Erik Davis and his book

Apparently Davis knew PKD and I mentioned I’m a member of the Exegesis II project

A discussion of meta-models led to the work of David Wolpert:

and a short short story by Borges

Moldbug on Hanson:

Feb 6 #14

Started with a nice segue into the topic of reference with the question, What is Game B?

“synarchy” has some shadowy historical connotations…

holoarchy vs. holocracy

I suggested that reference comes from the process of interpretation, using the definition from computer science to illustrate

@Evan_McMullen interpreted it to mean it was necessarily participatory, invoking Vervaeke’s 4Ps:

and G.I. Gurdjieff

The Sapir-Whorf hypothesis has a discredited strong version (language determines thought) and a weaker version (language influences thought):

An analogy was made between the Stoic “live in accordance with nature” and the LW-Rationalist “coherent extrapolated volition”

Peter Watts always seems to come up…

Gnosticism vs Agnosticism as an aesthetic choice…

We agreed we were all ultimately pragmatists, shifting between gnosticism and agnosticism, whatever works

We considered the I Ching as a gnostic practice

and the irony of receiving hex 23 as the answer to “Should I trust the I Ching?”

Feb 13 #15

As a preliminary warmup, discussed why different timezones persist. (No good reason, we should switch to UTC)

I suggested that a universal object id registry might be theoretically possible by considering all possible patterns in the binary expansion of the reals. This led to a tangent on levels of infinity and the continuum hypothesis.

Maybe our universe corresponds to a single transcendental number…

It seems we all enjoyed Tegmark’s book

Though it is not without critics

On the subject of a context-free truth I expressed skepticism:

even when a rational system has found a universal truth that is true regardless of context

… invoking Quine and model theory:

A discussion about bootstrapping semantics from syntax (my ex post facto characterization) led to


Chris Langan’s CTMU and his
seems at least superficially similar to my bitstring theory

We finished off talking about the NYT article about SSC and the Rationalist community…

Feb 20 #16

I took issue with Chapman’s narrow notion of objects as being physical, at least in all the examples in this article. My concept of an object as a set of related properties is no doubt informed by decades of practice in object-oriented programming.

This led to a discussion of mathematical objects such as circles, and their ontological status.

I suggested that an object exists if it can be described (as a set of related properties), but it is only “real” if it is “realized” physically in mass-energy in space-time.

Apparently some mathematicians deny the existence of very large natural numbers:

We agreed more or less that objects are reified for a purpose which necessarily brings in the notions of agents, values, and consciousness.

TIL the concept of “moral patient”

Philosophers distinguish between moral agents, entities whose actions are eligible for moral consideration and moral patients, entities that themselves are eligible for moral consideration.
Moral agency - Wikipedia

Can there be value without consciousness? Was there any value in the universe a million years after the big bang when presumably there were no conscious agents? A related quote was offered:

Let us imagine one world exceedingly beautiful…And then imagine the ugliest world you can possibly conceive. Imagine it simply one heap of filth, containing everything that is most disgusting to us, for whatever reason, and the whole, as far as may be, without one redeeming feature. The only thing we are not entitled to imagine is that any human being ever has or ever, by any possibility, can, live in either, can ever see and enjoy the beauty of the one or hate the foulness of the other… [S]till, is it irrational to hold that it is better that the beautiful world should exist than the one which is ugly? Would it not be well, in any case, to do what we could to produce it rather than the other? Certainly I cannot help thinking that it would; and I hope that some may agree with me in this extreme instance. - G.E. Moore, Principia Ethica

The consensus was that this thought experiment was incoherent: you have to imagine inserting an observer into the world to evaluate its beauty, but that possibility is excluded by the experiment. Maybe we’re missing something?

Feb 27 #17

Kicked it off by objecting to the first paragraph:

The correspondence theory of truth does not include a causal explanation of how the correspondence between beliefs and reality comes about. Unfortunately, there are no correspondence fairies to do that job for us. Perception can do at least part of the work.

I suggested perception is only half the story. The other half is action. Internal models are built from perception (sensory inputs inform the models). The models are used to inform action. Actions are bets. Agents invest time and energy to perform actions in a bet to increase value. If the bets pay off, then that is good evidence that the models are true.

This led to a discussion of counterfactuals and causality

We had an interesting discussion on inference vs rationality, rational vs. irrational vs arational, and the reasonableness of animals. David Friedman has written on “rationality without mind” in his book on Price Theory

I do like the idea of CEV but I assume Yudkowsky has repudiated it along with all of his earlier work (which ironically leads me to discount everything he says now because I assume he will repudiate it later).

I’ve had this on my to-read list for too long, thanks for the recommendation:

I mentioned that I encountered someone who claimed that they knew how to program an AGI but chose not to (presumably to save the world, or at least to postpone the end). Roger Williams is the author of MOPI:

@Sahil and I noted that the more Chapman made definitive strong claims, the more we tend to disagree with him, and that was certainly the case for this chapter.

Mar 6 #18

Of course, I had to object to the initial claim again:

Philosophers use the word “proposition” to designate whatever is the sort of thing one believes or disbelieves, or that could be true or false. They can’t say what sort of thing that is, though, or how one would work.

A proposition is a model of a condition. A model is a representation. A condition is an abstract pattern that is used to match other patterns, abstract and concrete. A condition is true to the extent that its pattern matches the pattern of the world model. In other words, correspondence is inferred from coherence.

A belief is a model of conditional behavior assigned to an agent. Beliefs are instrumental in that they are used to explain past behavior or predict future behavior.

@Sahil noted that the Good Regulator Theorem implied that it was not possible for an agent to act in the world without a model:

We discussed whether System 1 and 2 were appropriate models in this context

Christian quoted Dawkins as saying we don’t have a good theory of creativity which I found ironic considering my belief that creativity is necessarily an evolutionary process, variation and selection.

I usually look for an excuse to bring the Many Worlds Interpretation of QM into the discussion. This time it was to pitch my idea that it solves the problem of why there is something rather than nothing. If you go back far enough in the Everettian timelines there is one with something, which is the origin of our universe, and another one with nothing. Sahil objected, saying (something along the lines) that the Schrödinger wave equation only makes sense in our universe, so I tentatively conceded it was more of a Tegmarkian claim than a QM one.

Obligatory related LW articles courtesy of @Sahil:

And another from Scott Aaronson “Why Philosophers Should Care About Computational Complexity”

Mar 13 #19

In this session we proved there is no correlation between article length and meeting length. Some topics that came up:

An LW answer to Chapman…

The relation between information and thermodynamics…

History of AI…

Phlogiston, aether, etc…

What is noise? What is randomness?

Dennett on Real Patterns


Just as a side note, some friends and I formed a club called The Daemon Maxwell Group way back in the late 20th century that produced Robin Hanson’s first prediction market.

Mar 20 #20

Some notes…

We spent a good portion of the meeting discussing whether brains can (or should) be said to implement algorithms.

Friston’s active inference

We took turns trying to explain Bayes to Christian. I led with “Bayes is a simple formula for turning observations into knowledge.”
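That one-liner can be unpacked as a toy update. The function and the medical-test numbers below are my own illustration, not something from the meeting:

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence

# Hypothetical test: 90% sensitive, 5% false-positive rate, 1% base rate.
# One positive observation moves the credence from 1% to about 15%.
posterior = bayes_update(prior=0.01, p_e_given_h=0.90, p_e_given_not_h=0.05)
print(f"{posterior:.3f}")  # ~0.154
```

The surprise for most newcomers is in the comment: a single “90% accurate” observation against a low base rate yields far less than 90% confidence, which is why the prior matters.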

There was general agreement at the end that Chapman’s critiques of Rationalism don’t really apply to LW-style Rationalists, but he isn’t attacking a strawman either because there do exist Rationalists of the type Chapman criticizes. (Daniel should note I used an existential quantifier there :slight_smile: ) Instead, Chapman is using a weak man argument:

Mar 28 #21

We agreed we will read for next week

I confessed to being one of the people Chapman was talking about in the first footnote

Some rationalists simply define “rational” as “conforming to decision theory,” in which case probabilistic rationality is a complete and correct theory of rationality by definition.

My defense was along the lines of pragmatism. When choosing a system for a situation, whether it is rationalist, meta-rationalist, or something else, unless you choose randomly there has to be some standard, and for the pragmatist that is “whatever works”. But what does that mean? I interpret it to mean you get a good ROI, return on investment. Every choice is a bet (the investment) and the goal, hopefully, is that it pays off more often than not, at least when it matters. Given this view, we necessarily come back to probability theory, and decision theory, and (I should have mentioned) game theory.

Some topics that came up…

We discussed what it means to understand something (“if I don’t know how to program it I don’t understand it”). Or maybe there are levels of understanding, and possessing a causally correlated model is what yields varying degrees of understanding.

On a related note I heard about Simon DeDeo’s From Probability to Consilience: How Explanatory Values Implement Bayesian Reasoning
on the Jim Rutt show

TIL Constructor Theory (h/t @Sahil)

@Sahil recommends

Apr 3 #22

We’ll defer discussion of Chapman’s Probability theory does not extend logic until next week when others can join. Also we forgot to read it :slight_smile:

We led with a short discussion of the Waking Up conversation between Evan Thompson and Sam Harris (linked in previous message). It was interesting in that it involved a contextualizer and a decoupler taking the positions you would expect the other to take under most circumstances.

Some topics that came up…

Reconsidering the merits of ritual

The glass bead game

Our shared history with game theory including Dawkins and JvN

I confessed to feeling personally attacked by Chapman in this chapter because I’ve identified with the game theory “ideology” for so long. Of course I recognize that isn’t particularly rational, and I’m open to being shown the error of my ways in future installments (though I find it difficult to imagine at the moment how you can do better than game theory). @Sahil was sympathetic. (At one point I said “we’re the same!” which he heard as “we’re the sane!” lol)

What does it mean to “be present”?

Can’t have an Eggplant meeting without linking LW and SSC, right?

To (re-)read for next week:

Apr 10 #23

TIL there is a name for the viewpoint that game theory is universally applicable: Nassim Taleb calls it the ludic fallacy. I should say TI(re)L because I did read The Black Swan a long time ago :slight_smile:

Some topics that came up…

C.S. Peirce on abduction

Blindsight keeps coming up. At least I have a copy now…

I thought Turchin was the one associated with the Principia Cybernetica Project but that turns out to be Peter’s father…

Apr 17 #24

Chapman really seemed to miss the mark in this one when he attempted to show that it was difficult to get the probabilities of all possible outcomes to add up to 1. Trivially in his example you either cross the river or you don’t. If the first outcome is assigned 0.9 then the other outcome is necessarily 0.1 no matter how many ways there are to not cross the river.
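The complement rule makes the arithmetic explicit. The breakdown of “ways to not cross” and its numbers below are hypothetical, just to show that any subdivision must still sum to 1 − P(cross):

```python
# However finely the "not cross" outcomes are subdivided,
# their probabilities must sum to 1 - P(cross).
p_cross = 0.9
not_cross_ways = {"turn back": 0.06, "fall in the river": 0.03, "stay put": 0.01}

assert abs(sum(not_cross_ways.values()) - (1 - p_cross)) < 1e-12
print(f"{p_cross + sum(not_cross_ways.values()):.1f}")  # prints 1.0
```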

On a related note I was surprised to learn the chance of a coin flip landing on its edge is about 1/6000 (American nickel)

The second part about how to interpret the observation of there being a high probability of green cheese on the moon was more interesting, but again it wasn’t clear that good old rationality didn’t address this problem adequately.

Other topics that came up…

The 2-4-6 task

CFAR and Julia Galef’s new book

The vibe difference between Yudkowsky and Chapman

Forrest Landry’s An Immanent Metaphysics

Apr 24 #25

In mostly unrelated news I mentioned my new tech project, a system for establishing ownership of random numbers that can be used as pure indexicals, code-named Metatron