In the Cells of the Eggplant

I dropped a link to this chart in the Zoom chat to illustrate a point, but I never found an opportunity to steer the discussion to it

[image: evidence vs. confidence chart]

This is one of my all-time favorite graphs: evidence vs. confidence. The weight of evidence in bits is on the x-axis, and the corresponding Bayesian credence is on the y-axis. If you have no evidence, or if the evidence in favor is exactly counterbalanced by evidence against, then the confidence in the claim is 0.5, perfectly agnostic. The interesting aspect is that it takes only a few bits of evidence (like 5) to be very confident (>95%) in a claim.

I wanted to use this as an analogy to Chapman’s critique. It is as if he is pointing out that you can never be 100% confident in a claim because that would require infinite bits of evidence, which is impossible. That is technically true but pragmatically irrelevant when 5 bits suffice for most cases and 10 bits are good enough for 99.9% confidence. If you require military-grade, bet-the-lives-of-your-children confidence, then you might invest in acquiring 30 bits (wrong 1 time in a billion).
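For concreteness, here is the curve as code (a minimal sketch of the standard log-odds bookkeeping; the function name is mine):

```python
# b bits of net evidence = log2 of the odds, so the odds are 2**b : 1
# and the credence is odds / (odds + 1).
def credence(bits: float) -> float:
    odds = 2.0 ** bits
    return odds / (odds + 1.0)

for b in [0, 5, 10, 30]:
    print(f"{b:>2} bits -> credence {credence(b):.10f}")
# 0 bits -> 0.5 (agnostic), 5 -> ~0.970, 10 -> ~0.999, 30 -> wrong ~1 in 10**9
```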

Jan 23 #12

After some initial small talk about the ethics of keeping pets, we dove in and spent a long while arguing about the meaning of “impossible” in this claim:

Assigning a consistent set of numbers to diverse statements seems impossible.

I was reminded of a good read

This led to many other interesting topics:

Sarah Perry


Acting crazy for rational reasons

Or, in layman’s terms, sometimes you have to be a crazy bastard so people won’t walk all over you.
https://astralcodexten.substack.com/p/still-alive


Unpacking The Meaning Crisis, by John Vervaeke & David Chapman

Calibration


Jan 30 #13

While we were waiting for others to join, we started off on an interesting tangent about MKULTRA and all its downstream effects (Charles Manson, The Unabomber, Ken Kesey, The Grateful Dead, Apple Computer, etc.).

This led to friend-of-the-Stoa Erik Davis and his book

Apparently Davis knew PKD, and I mentioned I’m a member of the Exegesis II project

A discussion of meta-models led to the work of David Wolpert:

and a short short story by Borges

Moldbug on Hanson:
https://www.unqualified-reservations.org/2009/05/futarchy-considered-retarded/

Feb 6 #14

Started with a nice segue into the topic of reference with the question, “What is Game~B?”

“synarchy” has some shadowy historical connotations


holarchy vs. holacracy

I suggested that reference comes from the process of interpretation, using the definition from computer science to illustrate
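To make that concrete, here is a minimal sketch of what I mean (my toy example, nothing from the discussion): in an interpreter, a symbol has no reference by itself; the environment supplied at interpretation time is what makes it refer to something.

```python
# A tiny interpreter: symbols get their reference from the environment,
# literals denote themselves, and ("add", a, b) composes references.
def interpret(expr, env):
    if isinstance(expr, str):            # a symbol refers via the env
        return env[expr]
    if isinstance(expr, (int, float)):   # a literal denotes itself
        return expr
    op, a, b = expr
    if op == "add":
        return interpret(a, env) + interpret(b, env)
    raise ValueError(f"unknown operation: {op}")

print(interpret(("add", "x", 2), {"x": 40}))  # 42: "x" only refers given the env
```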

@Evan_McMullen interpreted it to mean it was necessarily participatory, invoking Vervaeke’s 4Ps:

and G.I. Gurdjieff

The Sapir-Whorf hypothesis has a discredited strong version (language determines thought) and a weaker version (language influences thought):

An analogy was made between the Stoic “live in accordance with nature” and the LW-Rationalist “coherent extrapolated volition”

Peter Watts always seems to come up


Gnosticism vs Agnosticism as an aesthetic choice

https://iep.utm.edu/gnostic/

We agreed we were all ultimately pragmatists, shifting between gnosticism and agnosticism: whatever works
https://plato.stanford.edu/entries/pragmatism/

We considered the I Ching as a gnostic practice

and the irony of receiving hex 23 as the answer to “Should I trust the I Ching?”

Feb 13 #15

As a preliminary warmup, we discussed why different time zones persist. (No good reason; we should switch to UTC.)

I suggested that a universal object id registry might be theoretically possible by considering all possible patterns in the binary expansion of the reals. This led to a tangent on levels of infinity and the continuum hypothesis.
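A minimal sketch of the countable part of that idea (illustrative only; the name `registry` and the enumeration order are my choices): every finite binary pattern can be given a unique natural-number id, while the full binary expansions of the reals remain uncountable, which is what triggered the continuum hypothesis tangent.

```python
from itertools import count, product

def finite_bitstrings():
    """Yield '', '0', '1', '00', '01', ... in length-then-lexicographic order."""
    for n in count(0):
        for bits in product("01", repeat=n):
            yield "".join(bits)

# Assign the first ten patterns their registry ids.
registry = {pattern: object_id
            for object_id, pattern in zip(range(10), finite_bitstrings())}
print(registry)  # {'': 0, '0': 1, '1': 2, '00': 3, '01': 4, ...}
```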

Maybe our universe corresponds to a single transcendental number


It seems we all enjoyed Tegmark’s book

Though it is not without critics

On the subject of a context-free truth I expressed skepticism:

even when a rational system has found a universal truth that is true regardless of context


invoking Quine and model theory:

A discussion about bootstrapping semantics from syntax (my ex post facto characterization) led to

and

Chris Langan’s CTMU and his paper
https://www.cosmosandhistory.org/index.php/journal/article/view/867
seem at least superficially similar to my bitstring theory

We finished off talking about the NYT article about SSC and the Rationalist community


Feb 20 #16

I took issue with Chapman’s narrow notion of objects as physical things; at least, all the examples in this article are physical. My concept of an object as a set of related properties is no doubt informed by decades of practice in object-oriented programming.

This led to a discussion of mathematical objects such as circles, and their ontological status.

I suggested that an object exists if it can be described (as a set of related properties), but it is only “real” if it is “realized” physically in mass-energy in space-time.
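In OOP terms (a toy rendering of my suggestion, not Chapman’s usage), the description is the set of related properties, and physical realization is a separate question:

```python
from dataclasses import dataclass

@dataclass
class Circle:
    """An object as a set of related properties: it 'exists' as a description."""
    center: tuple[float, float]
    radius: float

unit_circle = Circle(center=(0.0, 0.0), radius=1.0)
# The description exists; whether anything in mass-energy realizes a perfect
# circle, and is therefore "real" in my sense, is a different claim.
```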

Apparently some mathematicians deny the existence of very large natural numbers:

We agreed more or less that objects are reified for a purpose, which necessarily brings in the notions of agents, values, and consciousness.

TIL the concept of “moral patient”

Philosophers distinguish between moral agents, entities whose actions are eligible for moral consideration, and moral patients, entities that themselves are eligible for moral consideration.
Moral agency - Wikipedia

https://plato.stanford.edu/entries/repugnant-conclusion/

Can there be value without consciousness? Was there any value in the universe a million years after the big bang when presumably there were no conscious agents? A related quote was offered:

Let us imagine one world exceedingly beautiful… And then imagine the ugliest world you can possibly conceive. Imagine it simply one heap of filth, containing everything that is most disgusting to us, for whatever reason, and the whole, as far as may be, without one redeeming feature. The only thing we are not entitled to imagine is that any human being ever has or ever, by any possibility, can, live in either, can ever see and enjoy the beauty of the one or hate the foulness of the other… [S]till, is it irrational to hold that it is better that the beautiful world should exist than the one which is ugly? Would it not be well, in any case, to do what we could to produce it rather than the other? Certainly I cannot help thinking that it would; and I hope that some may agree with me in this extreme instance. - G.E. Moore, Principia Ethica

The consensus was that this thought experiment is incoherent: you have to imagine inserting an observer into the world to evaluate its beauty, but that possibility is excluded by the experiment. Maybe we’re missing something?

Feb 27 #17

Kicked it off by objecting to the first paragraph:

The correspondence theory of truth does not include a causal explanation of how the correspondence between beliefs and reality comes about. Unfortunately, there are no correspondence fairies to do that job for us. Perception can do at least part of the work.

I suggested perception is only half the story. The other half is action. Internal models are built from perception (sensory inputs inform the models). The models are used to inform action. Actions are bets. Agents invest time and energy to perform actions in a bet to increase value. If the bets pay off, then that is good evidence that the models are true.
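As a minimal sketch of that argument (my toy numbers, nothing from the discussion): treat each action as a bet whose outcome is evidence about the model, and update by Bayes’ rule.

```python
def update(prior, p_payoff_if_true, p_payoff_if_false, paid_off):
    """Posterior credence that the model is true, given one bet's outcome."""
    l_true = p_payoff_if_true if paid_off else 1 - p_payoff_if_true
    l_false = p_payoff_if_false if paid_off else 1 - p_payoff_if_false
    return prior * l_true / (prior * l_true + (1 - prior) * l_false)

# Suppose bets pay off 90% of the time if the model is true, 20% if false.
credence = 0.5  # start agnostic
for paid_off in [True, True, False, True, True]:
    credence = update(credence, 0.9, 0.2, paid_off)
    print(f"paid_off={paid_off}  credence={credence:.3f}")
# Mostly-winning bets drive credence toward 1: payoffs are evidence of truth.
```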

This led to a discussion of counterfactuals and causality

We had an interesting discussion on inference vs. rationality, rational vs. irrational vs. arational, and the reasonableness of animals. David Friedman has written on “rationality without mind” in his book on Price Theory.

I do like the idea of CEV but I assume Yudkowsky has repudiated it along with all of his earlier work (which ironically leads me to discount everything he says now because I assume he will repudiate it later).

I’ve had this on my to-read list for too long, thanks for the recommendation:

I mentioned that I encountered someone who claimed that they knew how to program an AGI but chose not to (presumably to save the world, or at least to postpone the end). Roger Williams is the author of MOPI:

@Sahil and I noted that the more Chapman made definitive, strong claims, the more we tended to disagree with him, and that was certainly the case for this chapter.

Mar 6 #18

Of course, I had to object to the initial claim again:

Philosophers use the word “proposition” to designate whatever is the sort of thing one believes or disbelieves, or that could be true or false. They can’t say what sort of thing that is, though, or how one would work.

A proposition is a model of a condition. A model is a representation. A condition is an abstract pattern that is used to match other patterns, abstract and concrete. A condition is true to the extent that its pattern matches the pattern of the world model. In other words, correspondence is inferred from coherence.

A belief is a model of conditional behavior assigned to an agent. Beliefs are instrumental in that they are used to explain past behavior or predict future behavior.
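A toy rendering of the pattern-matching part of this (my illustration, not Chapman’s): a condition specifies a partial pattern, and it is “true” to the extent that it matches the agent’s world model.

```python
world_model = {"sky": "blue", "river": "crossable", "moon_cheese": False}

def matches(condition: dict, model: dict) -> bool:
    """A condition holds iff every property it specifies agrees with the model."""
    return all(model.get(key) == value for key, value in condition.items())

print(matches({"river": "crossable"}, world_model))  # True: coheres with model
print(matches({"moon_cheese": True}, world_model))   # False: fails to match
```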

@Sahil noted that the Good Regulator Theorem implied that it was not possible for an agent to act in the world without a model:

We discussed whether Systems 1 and 2 were appropriate models in this context.

Christian quoted Dawkins as saying we don’t have a good theory of creativity, which I found ironic considering my belief that creativity is necessarily an evolutionary process: variation and selection.

I usually look for an excuse to bring the Many Worlds Interpretation of QM into the discussion. This time it was to pitch my idea that it solves the problem of why there is something rather than nothing. If you go back far enough in the Everettian timelines, there is one with something, which is the origin of our universe, and another one with nothing. Sahil objected, saying (something along the lines of) that the Schrödinger wave equation only makes sense in our universe, so I tentatively conceded it was more of a Tegmarkian claim than a QM one.

Obligatory related LW articles courtesy of @Sahil:

And another from Scott Aaronson “Why Philosophers Should Care About Computational Complexity”

Mar 13 #19

In this session we proved there is no correlation between article length and meeting length. Some topics that came up:

An LW answer to Chapman


https://www.readthesequences.com/Einsteins-Arrogance

The relation between information and thermodynamics


History of AI


http://www.incompleteideas.net/IncIdeas/BitterLesson.html

Phlogiston, aether, etc


What is noise? What is randomness?

Dennett on Real Patterns

Illusionism

Just as a side note, some friends and I formed a club way back in the late 20th century called The Daemon Maxwell Group, which produced Robin Hanson’s first prediction market.

Mar 20 #20

Some notes


We spent a good portion of the meeting discussing whether brains can (or should) be said to implement algorithms.

Friston’s active inference

We took turns trying to explain Bayes to Christian. I led with “Bayes is a simple formula for turning observations into knowledge.”
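For reference, the formula in question (just standard Bayes’ rule, nothing exotic):

$$P(H \mid D) \;=\; \frac{P(D \mid H)\,P(H)}{P(D \mid H)\,P(H) + P(D \mid \neg H)\,P(\neg H)}$$

The prior $P(H)$ is what you believed before the observation $D$; the posterior $P(H \mid D)$ is the “knowledge” the observation buys you.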

There was general agreement at the end that Chapman’s critiques of Rationalism don’t really apply to LW-style Rationalists, but he isn’t attacking a strawman either, because there do exist Rationalists of the type Chapman criticizes. (Daniel should note I used an existential quantifier there :slight_smile: ) Instead, Chapman is using a weak man argument:

Mar 28 #21

We agreed we will read for next week

I confessed to being one of the people Chapman was talking about in the first footnote

Some rationalists simply define “rational” as “conforming to decision theory,” in which case probabilistic rationality is a complete and correct theory of rationality by definition.

My defense was along the lines of pragmatism. Whatever the ultimate criterion for choosing a system in a situation, whether it is rationalist, meta-rationalist, or something else, unless you choose randomly there has to be some standard, and for the pragmatist that is “whatever works”. But what does that mean? I interpret it to mean you get a good ROI, return on investment. Every choice is a bet (the investment), and the hope is that it pays off more often than not, at least when it matters. Given this view, we necessarily come back to probability theory, and decision theory, and (I should have mentioned) game theory.
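A toy calculation of “whatever works” as ROI (my illustrative numbers only):

```python
def expected_roi(p_payoff: float, gain: float, cost: float) -> float:
    """Expected return on a bet that costs `cost` and pays `gain` with prob p_payoff."""
    return p_payoff * gain - cost

# Staking 1 unit for a chance at 3 units "works" (positive EV) iff p > 1/3.
print(expected_roi(0.5, 3.0, 1.0))  #  0.5 -> positive, the bet is worth taking
print(expected_roi(0.2, 3.0, 1.0))  # -0.4 -> negative, pass
```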

Some topics that came up


https://plato.stanford.edu/entries/platonism-mathematics/

We discussed what it means to understand something (“if I don’t know how to program it, I don’t understand it”). Or maybe there are levels of understanding, with the degree of understanding determined by how causally correlated one’s model is.

On a related note, I heard about Simon DeDeo’s From Probability to Consilience: How Explanatory Values Implement Bayesian Reasoning on The Jim Rutt Show

TIL Constructor Theory (h/t @Sahil)

@Sahil recommends

Apr 3 #22

We’ll defer discussion of Chapman’s Probability theory does not extend logic until next week when others can join. Also, we forgot to read it :slight_smile:

We led with a short discussion of the Waking Up conversation between Evan Thompson and Sam Harris (linked in the previous message). It was interesting in that it involved a contextualizer and a decoupler taking the positions you would expect the other to take under most circumstances.

Some topics that came up


Reconsidering the merits of ritual

The glass bead game

Our shared history with game theory including Dawkins and JvN

I confessed to feeling personally attacked by Chapman in this chapter because I’ve identified with the game theory “ideology” for so long. Of course I recognize that isn’t particularly rational, and I’m open to being shown the error of my ways in future installments (though I find it difficult to imagine at the moment how you can do better than game theory). @Sahil was sympathetic. (At one point I said “we’re the same!” which he heard as “we’re the sane!” lol)

What does it mean to “be present”?

Can’t have an Eggplant meeting without linking LW and SSC, right?

To (re-)read for next week:

Apr 10 #23

TIL there is a name for the viewpoint that game theory is universally applicable: Nassim Taleb calls it the ludic fallacy. I should say TI(re)L because I did read The Black Swan a long time ago :slight_smile:

Some topics that came up


C.S. Peirce on abduction

Blindsight keeps coming up. At least I have a copy now


I thought Turchin was the one associated with the Principia Cybernetica Project, but that turns out to be Peter’s father, Valentin.


https://meltingasphalt.com/crony-beliefs/

Apr 17 #24

Chapman really seemed to miss the mark in this one when he attempted to show that it is difficult to get the probabilities of all possible outcomes to add up to 1. Trivially, in his example you either cross the river or you don’t; if the first outcome is assigned 0.9, then the other outcome is necessarily 0.1, no matter how many ways there are to not cross the river.
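In symbols, using the numbers from the example, the complement rule does the bookkeeping no matter how finely the failure modes are subdivided:

$$P(\text{not cross}) = 1 - P(\text{cross}) = 1 - 0.9 = 0.1$$

and the probabilities of all the individual ways of not crossing simply have to sum to that 0.1.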

On a related note, I was surprised to learn the chance of a coin flip landing on its edge is about 1 in 6000 (for an American nickel).

The second part, about how to interpret the observation that there is a high probability of green cheese on the moon, was more interesting, but again it wasn’t clear that good old rationality didn’t already address this problem adequately.

Other topics that came up


The 2-4-6 task

CFAR and Julia Galef’s new book

The vibe difference between Yudkowsky and Chapman

Forrest Landry’s An Immanent Metaphysics

Apr 24 #25

https://plato.stanford.edu/entries/closure-epistemic/

In mostly unrelated news, I mentioned my new tech project, a system for establishing ownership of random numbers that can be used as pure indexicals, code-named Metatron.

May 1 #26

My first objection was to the opening paragraph yet again :slight_smile:

Rationalisms are mainly concerned with thinking correctly. However, they are often also concerned with acting, and try to provide correctness or optimality guarantees for action as well.

I suggested that thinking and acting were not separate, that thinking was just one form of action. The others (Evan, Sahil, Dan) agreed, but still thought it was a useful polarity even if it is a false dichotomy. Fair.

Another objection was to the criticism of closed-world idealizations:

To apply a rational action theory, you have to make a closed world idealization and ignore all but a few possibilities.

I don’t think any possibilities are ignored when the analysis considers whether relevant conditions are true (or not). For example, if you’re interested in whether you can cross the river safely, the other possibilities (not crossing the river safely) are all taken into account. I’m not sure I was able to convince anyone else in the group that this was a valid objection.

Some topics that came up:

May 8 #27

For this special bridge session crossing over to Part 2, we reflected on what we took away from Part 1.

Good news: we all agreed that, though we may have some specific disagreements with Chapman, we find much to value in The Eggplant.

Some topics that came up


Samo Burja’s Live Players vs. Dead Players

Yudkowsky’s Timeless Decision Theory

We took a fun tangent on what would happen if we got Yudkowsky and Chapman in a safe room and fed them enough MDMA to have a cordial conversation. :slight_smile:

https://www.overcomingbias.com/2010/06/non-conformists-conform.html

The Gateless Gate


May 15 #28

Another excellent, wide-ranging discussion thanks to @Evan_McMullen , @dglickman and @Sahil


decision “theory” is stated in terms of a non-detectable metaphysical substance, “utility,” that almost certainly doesn’t exist.

Evan suggested that “utility,” like “fire,” is not a natural category. Relevant Chapman reference:

Since people are obviously mostly not rational, descriptive rationalism has mainly retreated to the claim that part of your brain is properly rational, and part of it isn’t.

Are people mostly not rational? I was thinking in terms of everyday choices made by most people. There are literally millions of irrational actions they could take (contrary to their implicit beliefs and interests), and yet they make the rational choice. Maybe they didn’t use System 2 or conscious Bayesian inference, yet the choices come out the same. This led to a good discussion of equivocation, and Evan introduced (as far as I know) the concept of the “motte and bailey” trap. Worth exploring in the future.

Some topics that came up


May 22 #29

Thanks to @Evan_McMullen @Sahil @dglickman and Christian for excellent contributions to this discussion.

Evan made an interesting analogy: the relation between reasonableness and rationality is like that between lower- and higher-level programming languages, like assembly or C underlying a higher-level scripting language like Python. The rationalists are like naïve Pythonistas claiming that Python is all you’ll ever need for any project. But that isn’t true: if you want to write an operating system you need to access lower levels like hardware drivers, or at least use Python libraries that wrap the lower-level access.

I wondered if there was always a backdoor for rationality when it comes to evaluating reasonableness. This doesn’t really fit into the programming language analogy; it would be like C programs requiring a Python script to evaluate whether they met the requirements. If rationality mostly evolved for purposes of evaluation (i.e. testing solutions against criteria), I can see how that could be misconstrued as a panacea, in the same sense that early AI went down the path of seeing every problem as a search problem. It’s technically true that you can in principle view every problem that way, but it turns out to be impractical.

Some topics that came up


Rationalists tend to view AGI as an existential risk. It is unclear whether most reasonable people share that concern, at least since the first Terminator movie (1984) or Colossus: The Forbin Project (1970).

When people thought the Earth was flat, they were wrong. When people thought the Earth was spherical, they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together.
Wronger than wrong - Wikipedia

Speaking of Monty Hall




The rationalists have an admonition for noobs: “don’t try to be clever.” Evan suggested that Epictetus was offering a similar warning to would-be philosophers with his quote on the Stoa site:

This led to a fruitful discussion on the efficacy of “pointing methods” like the Socratic method and zen koans, and how they become less effective when the student understands how they are intended to work, which tends to create a tradition of secrecy and an inevitable arms race:


credit: @Evan_McMullen