In the Cells of the Eggplant

Jun 26 #34

We started with a deep dive on what we mean when we talk about “thoughts”. Not surprisingly, it turned out we had quite different views, though with some commonality. There seemed to be a core of language-oriented, stream-of-consciousness thoughts we all agreed on, but once we strayed into the territory of memories, perceptions, sensations, and non-symbolic or subconscious thoughts, there was less consensus on whether these counted. Freud’s massive influence was acknowledged but not at all venerated, quite the opposite.

I recounted a story I heard Nick Chater tell on a podcast (Jim Rutt, not Sam Harris)

Chapman noted in footnote 3:

This is also the easiest aspect of human active vision to study scientifically, because eye tracking apparatus can determine where you are looking, with high precision, as you move your eyes around.

Chater described an experiment that used eye-tracking: the subject was shown a screen of text, but only the part of the screen the subject was looking at contained the actual text; the rest of the screen had constantly changing text. To the subject it looked like a normal page, but to anyone else it was a constantly shifting page of gibberish.

@Evan_McMullen expressed strong skepticism that the mind is in fact flat. As far as I recall, Chater would say the depth of the mind is an illusion, kind of like how our field of view has the illusion of detail only because it is detailed wherever we happen to be looking. The mind appears deep only because it has depth whenever we look for depth, if that makes sense.

@Sahil was surprised I didn’t object to Chapman’s objection to objective perception:

“Objective” would mean that it is independent of your theories, of your projects, and of anything that cannot be sensed at this moment, such as recent events. We saw that, for several in-principle reasons, this seems impossible.

Even though I would have defended an objective reality a few years ago, I’ve changed my mind(!) since researching QM and reading Hoffman, and almost certainly would not defend objective perception as that sounds incoherent to me (like objective value is incoherent, another tangent). I’m not sure where our wires got crossed.

Mandatory Joscha Bach reference…

A discussion on why some philosophers make a career around incoherent thought experiments like p-zombies cough Chalmers cough led to Graeber’s concept of BS jobs:

We agreed the difference between Dennett and Chalmers, between good faith and bad faith philosophy, would be very difficult to distinguish as an outsider.

I mentioned that I had met both Dennett and Chalmers who were a gentleman and an asshole respectively. To be fair I met Chalmers a very long time ago at the Santa Fe Institute when he was still a grad student so he may have changed a lot since then.

As always, much appreciation to the crew for another stimulating discussion!

Jul 3 #35

As is tradition, I objected to Chapman’s opening statement:

The typical rationalist view is that the purpose of language is to state facts and theories.

It may be a small difference, but I suggested that the rationalist view is that the purpose of language is symbolic modeling, and that stating facts and theories is derivative. Sahil pointed out that Chapman’s statement could be interpreted the same way, and doesn’t necessarily imply communication.

We fairly quickly returned to the perennial question of whether rationality is an ideal, even while we all agreed that it is impossible for finite, bounded agents to achieve. Evan caught the rest of us off guard by arguing that ideals in general are harmful, in the sense that Platonism is considered harmful. We spent most of the rest of the session unpacking this bold claim.

Some topics that came up:

Forrest Landry’s ethics was discussed:

I confessed I’m still trying to figure out how to reconcile physics and choice, i.e. what does it mean for a physical system like a biological organism to make choices.

Jul 10 #36

We discussed computational metaphors for references, the ubiquitous “pointer”, and took turns trying to explain programmatic pointers to Christian. I tried to tie it back to the Eggplant chapter by noting that in natural language, words are essentially pointers, pointing to concepts, and that ability gives humans great power in thinking and communicating.
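
For these notes, here’s a minimal sketch of the analogy (my illustration, not something we typed out in the session): in Python, a name behaves like a pointer to an object, much the way a word points at a concept.

```python
# A minimal sketch of the words-as-pointers analogy (illustrative only):
# in Python, a name refers to an object rather than containing it,
# much as a word points at a concept.

concept = {"animal": "dog", "legs": 4}  # the "concept" itself

word_a = concept   # one "word" pointing at the concept
word_b = concept   # a synonym: a second pointer to the same concept

word_a["legs"] = 3          # revise the concept through one word...
print(word_b["legs"])       # 3 -- ...and the other word "sees" it
print(word_a is word_b)     # True: two names, one referent
```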

TIL it was a Latin translation of Harry Potter that enabled Evan to become truly fluent

In the context of learning to code, Christian mentioned an interesting new standard:

Sahil replied with the standard retort on new standards:

AI is coming to coding:

As is tradition I took exception to one of Chapman’s main claims:

Referring is accomplished by whatever means is available, and improvised methods are unenumerable, so there can’t be any systematic theory or rational taxonomy of reference, only an unsystematic catalog of special cases.

I suggested there are many classes that contain unenumerable (infinite) instances that are nevertheless amenable to systematic theory and/or rational taxonomy, namely real numbers, possible functions, possible programs, and more prosaic classes like possible human experiences, possible books, etc.
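
To make the first example concrete (my gloss, standard set theory rather than anything said in the session): the reals are unenumerable in the strongest sense, yet admit a compact systematic theory.

```latex
% Unenumerable yet systematically theorizable: the reals.
% Cantor's diagonal argument shows there is no surjection
% f : \mathbb{N} \to \mathbb{R}, so
\[
  |\mathbb{R}| = 2^{\aleph_0} > \aleph_0 = |\mathbb{N}| ,
\]
% and yet a short axiom list -- "Dedekind-complete ordered field" --
% characterizes \mathbb{R} uniquely up to isomorphism.
```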

Do infinities actually exist? Is the concept misleading? Joscha Bach argues that only the computable is real (and by extension infinities are not):

On understanding different kinds of infinities Evan recommends

I recommend

Me and Eliezer on discovering digits of Pi

We discussed scientism as eternalist monist physicalist religion, and Evan took the opportunity to analyze it in terms of Landry’s IDM: the immanent, the omniscient, and the transcendent.

Jul 17 #37

Before we started on today’s reading we revisited some of the themes of the past couple weeks.

What does Chapman mean by “unenumerable”? We’re leaning toward not literally infinite, but practically intractable, sort of like the “ten thousand things” in Daoism

We briefly went down the rabbit hole of whether math is part of natural philosophy. What is math really?

In revisiting the harm in neoplatonism, Evan brought up the concept (TIL) of archetype entanglement

When we finally started on the topic (40 minutes into the session) I suggested that Chapman, in describing various senses of how the word “belief” is used reasonably, missed an important underlying pattern. Me on beliefs

A belief is really a model of conditional behavior. When we say that an agent holds a belief we are claiming that the agent will tend to behave in certain ways under various conditions. That doesn’t mean that the agent represents the belief per se anywhere in its brain (or whatever hardware it uses to think). It does mean that the belief is represented in the agent doing the ascribing. Beliefs are in the eye of the beholder. Beliefs are mental models that agents use to model the agency of agents. A belief requires two agents, one to assign the belief to the other, unless the agent is assigning the belief to itself.
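
A toy sketch of that framing (the dog and its trials are invented for illustration): the “belief” exists only in the observer’s test, not inside the agent.

```python
# Toy rendering of "belief as a model of conditional behavior" (my framing):
# the belief lives in the observer's model of the agent, not in the agent.

def behaves_as_if(agent, trials):
    """True if the agent's condition -> behavior pattern matches the
    pattern an observer would predict by ascribing it the belief."""
    return all(agent(cond) == expected for cond, expected in trials)

# The "agent" is just a stimulus -> response function; nothing inside it
# explicitly represents the proposition "rain is unpleasant".
def dog(condition):
    return "shelter" if condition == "rain" else "play"

# The observer ascribes the belief because the conditional pattern holds.
print(behaves_as_if(dog, [("rain", "shelter"), ("sun", "play")]))  # True
```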

We discussed how drugs and meditation have similar effects using the framing of raising the temperature of the connectome before neural annealing settles into new patterns of thought.

Evan was able to confirm my suspicion that Evolving Ground (founded by friend of the Stoa Jared Janes, and Chapman’s partner, Charlie Awbery) is inspired in part by Whitehead’s process philosophy

Evan recommends Michael Ashcroft’s course on the Alexander Technique.

We ended with a very fun group dunk on post-rats, suggesting that one of the risks of leaving the path and bushwhacking was falling off unmapped cliffs, metaphorically speaking.

h/t Valeria for mentioning this classic:

Jul 24 #38

First I want to note an odd coincidence. In the Discord channel I posted a photo of a page from a book that had just been delivered and challenged the readers to guess which book.

@Evan_McMullen correctly noted it is a page from Chapman’s PhD thesis book:

What I didn’t anticipate is the chapter we covered today would mention it explicitly:

This chapter recapitulates artificial intelligence research from my PhD thesis, which I quote from below. I wrote a program, Sonja, that took instructions while playing a video game; it illustrated most of the points covered here.

Maybe a small coincidence but I found it remarkable.

Also remarkable that I made it all the way to paragraph 7 before objecting to one of Chapman’s claims:

This is obvious, but bears emphasizing due to a potential rationalist misunderstanding. Often programming is introduced as “giving the computer instructions,” and programs are likened to recipes. This is probably helpful for novices, but potentially misleading in that a program does have to specify what will be done in complete detail. Then the computer does exactly and only what the program says.

I would say that computer programs are a lot more like recipes than Chapman allows, in that a great deal of information is left unspecified and left up to the computer to interpret. As a group we discussed there being a spectrum here, with some kinds of instructions (recipes) being less formal than others (computer programs), and something like legal language in between, in terms of how much interpretation the instructions require of the receiver.
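
A tiny illustration of where I’d put programs on that spectrum (mine, not discussed verbatim): even a “complete” one-line program leaves an enormous amount to the receiver.

```python
# Even this "complete" program leaves a lot to the computer to interpret:
# which sorting algorithm runs (Python guarantees stability, not a method),
# how the list is laid out in memory, when garbage collection happens, how
# the OS schedules the process. The recipe names the dish; the kitchen
# fills in the rest.
print(sorted([3, 1, 2]))  # [1, 2, 3]
```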

Reading today’s chapter led to a new (for me) association: meta-rationality (the recognition of nebulosity, with corresponding limitations put on rationality) is like getting greypilled in the vgr/ribbonfarm sense.

Another recommendation from @Sahil to read Scott Aaronson’s Why Philosophers Should Care About Computational Complexity

The rabbit hole of the week was about the reality of formal systems in particular, and abstractions in general. I commented that it is seldom useful to ask whether something is real; rather, ask in what sense it is real. Like: in what sense is the game of chess real, and the (Judeo-Christian) god not? (assuming it is not :slight_smile: )

TIL philosophical nominalism h/t @Sahil

Bookmarked for another time: in what sense is (spiritual) enlightenment real?

Quotable from @dglickman:

Conceptual analysis is a dangerous activity

Next week (Jul 31) we’ll cover the introduction to Part 3:

Jul 31 #39

We started with a discussion about Chapman’s use of the term “circumrationality”. Evan associated it with “circumambulation” in the sense of walking around a garden, and tending to its edges. I yes/anded that, noting that it could be generalized to systems at many scales: cells, organisms, and organizations all tend to maintain their borders with the environment.

The only real (and relatively slight) disagreement with this week’s reading was raised by Valeria:

“But I don’t get it,” the student struggling with high school algebra protests. “What does ‘x’ mean?” It doesn’t mean anything. That is the whole point. That meaninglessness is why formalism works. You cannot get this by learning facts, nor procedures, though both are necessary.

We agreed that even in this context the variable x is not completely meaningless. It serves some function which gives it at least some meaning.

@dglickman referenced a relevant text:

The rabbit hole of the week was an exploration of rhetoric

I find it interesting how the Sophists were once respected teachers, but now “sophistry” is an accusation of using fallacious arguments, especially with the intention of deceiving. Similarly, the everyday usages of “stoic”, “cynic”, and “epicurean” have been debased over time.

Evan mentioned a Latin diss track. I was amused by the thought of a devout Christian monk transcribing it for posterity.

Before I had to leave we were talking about “envisioning” in the context of programming computers. I mentioned how I usually envision the components of a program as agents, calling on each other, passing messages back and forth, depending on the knowledge and capabilities of each other, like an organization of people working together.

Aug 7 #40

The discussion this week was mainly around the role of analogical reasoning and whether it is necessary for learning and understanding. We agreed probably not for learning, since animals learn but don’t seem to understand by analogy. Sahil didn’t see a reason to distinguish between map->territory type maps and map->map type maps, the latter presumably associated with analogies. Evan suggested that non-human animals have a hard limit on recursion levels when it comes to mental maps, which prevents them from engaging in the kind of analogical learning under discussion.

Sahil recommended:

I asked if it was possible to understand anything new without relying on analogies. That immediately brought up the (in)famous Mary’s Room thought experiment:

Evan recommended Hofstadter for a compelling extended argument on why analogies are fundamental to human cognition:

I asked if long-term memory required changing neural connections, and it was suggested that no, it just requires adjusting ANN-style weights between neurons. No one present knew what that might entail at the biological level. I was reminded that Hawkins claims to have made great strides in this area in recent decades, as described in his new book:

I also recommended Clark in the chat:

We discussed recent discoveries that upended long-held assumptions about (lack of) neuroplasticity:

Since we’re running out of Eggplant material, Evan (I think) humorously suggested that we join the Evolving Ground community to hassle Chapman to write more

We pivoted to a discussion on the history of Buddhism and the relations between various lineages and traditions. Evan compared Rime to Common Lisp, as a movement to bring together the various extant strains.

Valeria mentioned this classic:

Without looking Evan thought it was either that one or this one:

More book recos:

next week:

Aug 14 #41

Valeria started by noting two areas seemingly actively undergoing ontological remodeling these days: the LGBTQIA2S++ community of ever-expanding inclusion, and the realm of psychological disorders.

This enabled @Evan_McMullen to get our mandatory SSC/ACX refs out of the way right at the start :slight_smile:

I drew attention to a possible connection between Ontological Remodeling and Ontological Design as articulated by Fraga and Cox on the Stoa:

The first rabbit hole we went down was about education, and in particular meta-education, i.e. the education of teachers and the state of the art, and why it seems to be so bad. We converged on a few related pessimistic takes: education majors statistically have the lowest educational standards in university, and that tends to be self-reinforcing as the ones that stay on to do research and teach are maybe not the best to make progress in the field. Add to that the fact that teachers today are relatively poorly paid civil servants, making it difficult to imagine how the field can improve in the near term. The bureaucracy inevitably leads to a coalition of blankfaces, a concept from Scott Aaronson’s blog:

https://www.scottaaronson.com/blog/?p=5675

Though the word was new to me, I mentioned that I had read complaints about these types of personalities dating back 2200 years to the Qin Dynasty

Evan said he wouldn’t be surprised if there were much older written complaints on Sumerian tablets.

A side discussion in the chat touched on Nozick’s utility monster:

After reading this passage…

For instance, eating a cookie might bring only one unit of pleasure to an ordinary person but could bring 100 units of pleasure to a utility monster.

…I will always picture the Utility Monster like this

Somehow it escaped me that we have an actual utilitarian in our midst (Sahil). I suggested this impossibility theorem might be bad news

Evan boldly came out against thought experiments, saying

they’re basically fancy tautologies, i think, and are used to smuggle in questionable metaphysical assumptions

I mentioned that I just finished reading a wonderful introduction to Constructor Theory which introduces counterfactuals into fundamental physics:

I was surprised to learn that Daniel had started the book but wasn’t impressed so far. YMMV. Perhaps for David Deutsch fans only?

We ended with a discussion on how most of us (all besides me?) subscribe to Buddhism in one form or another.

Worth revisiting The Bridge sessions:

We should try to employ the rationalist double crux more, because we tend to be so close in our stances that it is more interesting to discover where we differ:

Welcome @red_leaf !

Next week:

Aug 21 #42

Sahil kicked off the discussion with a good question:

Chapman says “Those are designed to minimize nebulosity.” in the Fitting the Territory to the Map section. How literally does he mean this? Can you do that, or only squirrel away nebulosity for a while?

The rabbit hole of the week was an exploration of a potentially new idea brought up by Evan and named by me, a “conservation of nebulosity”. I suggested there might be a relation to entropy there, considering that it is physically impossible to decrease entropy in a system without increasing entropy in its environment.
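
For the record, the thermodynamic bookkeeping behind my suggestion (the second law is standard; the nebulosity analogy is pure conjecture):

```latex
% Second law: total entropy of system plus environment cannot decrease,
\[
  \Delta S_{\mathrm{sys}} + \Delta S_{\mathrm{env}} \ge 0
  \quad\Longrightarrow\quad
  \Delta S_{\mathrm{sys}} < 0 \;\text{ only if }\;
  \Delta S_{\mathrm{env}} \ge -\Delta S_{\mathrm{sys}} > 0 .
\]
% The conjectured analogy: squeezing nebulosity out of a system displaces
% it into the surrounding context rather than destroying it.
```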

Some topics that came up:

TIL “chthonic”

Teilhard de Chardin and the Noosphere

Christopher quoted:

“Does this matter? Everyone does understand that “the map is not the territory” is only a metaphor. But metaphors shape thought. The phrase imports a mass of implicit, embodied experiences of using maps, and additional associated concepts and practices that “representation” and “model” don’t. That unhelpfully directs attention away from ways maps are atypical.”

…and asked: how much does this matter? I responded with another question: “How much does choice of programming language matter?” I hoped my question implied that in one sense it doesn’t matter, since all (Turing complete) languages are equivalent, but at the same time it matters a great deal because choice of language impacts actual software development in a myriad of ways.

Sahil supported my point by referencing:

ob. LW ref, the 12th virtue…

The good regulator theorem makes another appearance. Control systems require a model of what they are controlling.

The image is not the imagined

maps as hash tables

maps as functions
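
A minimal sketch of those two framings side by side (my illustration; the elevation data is invented):

```python
# Two ways to be a "map" in the mathematical sense (illustrative only):

# Extensional map: an explicit table pairing territory points with values,
# like a hash table -- it can only answer about points it enumerates.
elevation_table = {("47N", "122W"): 520, ("48N", "122W"): 230}

# Intensional map: a rule that produces a value for any query, even ones
# never observed -- which is exactly where a model can be wrong.
def elevation_model(lat, lon):
    return 500  # a deliberately crude model

print(elevation_table[("47N", "122W")])   # 520: looked up
print(elevation_model("46N", "122W"))     # 500: computed, never measured
```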

A related concept attributed to SF author Neal Stephenson:

full house!

Aug 28 #43

Before we got into today’s topic we discussed Christopher’s skepticism that those interested in Kegan stages would be mostly stage 5, considering that less than 1% of the population is, by Kegan’s estimate. The way I think of it is that people tend to behave at different stages at different times; if you imagine taking a sample over a period of time (say the past year), you get a histogram of their stages, which can be interpreted as a profile. People interested in Kegan stages would likely be weighted toward the right side, but few would be entirely stage 5.

Somehow that led to us agreeing that children were primarily for psychological experiments

and Valeria was quick with the ob. xkcd

On the main topic of ethnomethodology, Christopher referenced a meta-study:

Evan lamented that Chapman had not taken the opportunity to mention that Kary Mullis, who got a Nobel Prize for inventing the polymerase chain reaction (PCR) method, claimed that LSD had “helped him develop the polymerase chain reaction that helps amplify specific DNA sequences”.

Chapman, somewhat surprisingly, never mentions psychedelics, though the character based on him allegedly was not unfamiliar with them in Ken Wilber’s book:

Turns out Evan and I were both exposed to weird books from our dads’ bookshelves when we were young, including this one in common:

I recommended watching Vervaeke on Peterson’s podcast, though I had to admit that JBP would do better by letting his guests talk more:

Two Stephenson novels got high recommendations:

We heard that Evan and Josh are discussing a version of Landry’s Immanent Metaphysics inspired by the AI book in The Diamond Age. I suggested a Roam book would be a good start.

We introduced Roam to Christopher

Valeria recommends a free alternative:

Maybe the first Roam book:

I’m working on my own, Metamind

Some other topics that came up:

I brought up real estate agents in the context of the agent/principal problem:

Christopher referenced A methodological systematic review of meta-ethnography conduct to articulate the complex analytical phases

Next week:

Sep 4 #44

The general consensus was that Chapman was largely on point in his criticisms of AI research over its history. The only mitigating factor might be that AI is a relatively young discipline, so maybe it shouldn’t be surprising that so many rookie errors were made along the way.

Chapman links the paperclip maximizer meme to a lesswrong article that attributes it to Bostrom. I added a comment that traces the probable origin to a post by Yudkowsky to the extropians mailing list, which I was hosting at the time (from 1996 to 2003).

Christopher was unfamiliar with the Extropians but discovered a related article.

The extropians mailing list was quite influential. Robin Hanson met Eliezer Yudkowsky and started the Overcoming Bias blog, where the LessWrong sequences were originally published, leading to the Rationalist movement. Hal Finney, Nick Szabo, and Wei Dai were all regular contributors and instrumental in the blockchain and cryptocurrency movement. Maybe Satoshi was a subscriber too. :wink:

I mentioned I am still working with one of the Extropians co-founders, Tom Bell, on his open source legal system Ulex

Christopher referenced an article he published while working on a PhD… ShapeShift: A Projector-Guided Sculpture System

I mentioned that I started my research in AI around the same time that GOFAI was collapsing in the early 90s. My dept at the U of Calgary was all working in GOFAI except for me, the outcast working with neural nets and genetic algorithms. I was mainly influenced by Randall Beer’s Intelligence as Adaptive Behavior and David Goldberg’s Genetic Algorithms in Search, Optimization and Machine Learning.

I mentioned on Discord that I was delighted to discover that Joscha Bach and I shared an AI prof, Ian Witten. It was in Ian’s intro course that I recapitulated the blocks-world toy AI of Winograd’s SHRDLU, which exemplifies impressive results through mostly smoke and mirrors.

Some other topics that came up:

Sahil referenced

I wonder if he noticed that SciAm article was written by Ed Regis, the same author of the Extropians article in Wired linked above? If not, that’s a pretty odd coincidence!

Interesting that Valeria’s boyfriend worked for Ben Goertzel at WebMind. Goertzel was also a regular on the extropians list. Another connection is that I worked with Shane Legg at AGI startup a2i2 after he worked for WebMind, and before Shane went on to co-found DeepMind. Shane did his PhD under Marcus Hutter on models of superhuman intelligence including AIXI.

I described my Turing Test variant, the Pike test

I noted that Yudkowsky seems as pessimistic as ever on the alignment problem…

Christopher referenced:

next week:

Sep 11 #45

We started with a meta-level discussion of how Chapman is obviously somewhat bitter that cognitive science is only recently rediscovering the advances he and his colleagues made in the mid- to late-80s. I wondered if that kind of thing happens all the time in science: some group forms around a memeplex that gets forgotten and rediscovered some time later. The rationalist community in particular seems prone to starting over from scratch, perhaps because they tend to be smart contrarians who prefer first-principles thinking…

The rabbit hole du jour was about the problem of social media algorithms optimizing engagement by generating outrage. Evan was in favor of some kind of ban, though perhaps through private guilds rather than state enforced. I suggested that this was just the most recent iteration of a moral panic around a new form of media destroying society by making people engage in the wrong kind of behavior (by previous standards), same as what happened previously with novels, radio, movies, and TV. We discussed whether it was really different this time.

We eventually returned to the article with a discussion about what Chapman might have meant by “not computational”:

We will not be surprised if the mind is similarly made of a stuff that is not computational, though it emerges from a computational medium.

Some topics that came up along the way…

ob. SSC

next week:

Sep 18 #46

We started with a discussion of how Kegan stages 3, 4, and 5 relate to the major categories of rationalism critiques: ignorant, irrelevant, and inscrutable respectively. I don’t recall the word “scrutable” being used unironically before in a conversation.

Dunning-Kruger not reproducible?

I was surprised to learn that others here (Valeria and Evan) pine for Orkut

ob. SSC clocking in at the 38 minute mark, perhaps a new record

We went down a bit of a rabbit hole on the nature of causality

quotable from @Valeria

correlation does not imply causation, but correlation is correlated to causation

I illustrated my contention with a new meme

Evan referenced

h/t Valeria for

Daniel pointed us to

and

@red_leaf brought us back to the main topic, leading a discussion on how we can address Chapman’s claim:

Meta-rationalism is inscrutable because we meta-rationalists have failed to explain our claims in a way anyone can understand.

We discussed some funny reactions to Evan’s thread on the meaning crisis:

Always appreciate these summaries @davidmc


Sep 25 #47

We led with a discussion of whether cargo cults are more in line with Kegan stage 3 or 4. The original cargo cults were clearly from early (<4) stage cultures, but much of the article argued that stage 4 science is largely cargo cultish.

Evan pointed out footnote 21 mentioned that stages explicitly:

Another way of putting this, in the language of adult developmental theory, is that airport operations require stage 4 (systematic) cognition, but scientific innovation requires stage 5 (meta-systematic) cognition.

We spent most of the session on a deep dive discussing whether modern democracies are cargo cults as Evan contends, mere simulacra of more authentic democracies of the past (a reference, I presume, to Baudrillard). Are we just going through the motions? Does the appearance of a democracy provide cover for an elite class systematically domesticating and “farming” the population like so much livestock?

We talked about the relatively recent and largely unexamined history of the nation state, in the context of egregores and memeplexes.

Some topics that came up along the way:

(tl;dr the US went off the gold standard in 1971, a favourite topic of bitcoin maxis)

There’s no escaping the Egregores! (even if you think you have selected one, it is only because it has possessed you)

ob. SSC

double feature

The Iron Law of Bureaucracy is due to SF writer Jerry Pournelle

Plugging my own take on egregores as memetic subcultures

Later in the session we pivoted to a discussion (maybe a lament) on how wokism has taken over the DNC and many other formerly fine institutions (e.g. SciAm and the ACLU), and an examination of elite counter-signaling behavior.

Oct 2 #48

Written in 2015, this article predates and to a large extent presages The Eggplant. If this curriculum actually became a course, what would that entail? We agreed: almost certainly more than a single university course, and perhaps less than a full degree. Probably more like a Masters degree with several courses, assuming the student starts with a STEM undergrad degree.

Some topics that came up along the way…

Apparently Douglas Lenat’s Cyc project is still going…

In his interview with Lex, Knuth recounts the origins of his Surreal Numbers book in a napkin scrawled by John Conway in Calgary

We pivoted to a discussion of Vervaeke’s AFTMC series:

I still have to read Vervaeke’s and Chapman’s exchange on letter.wiki Unpacking The Meaning Crisis

I had a hard stop after only one hour, but at the time I left we were discussing whether rationality requires language, and what constitutes a language in this context. I wanted to point out that even honey bees have a language:

Oct 9 #49

Just @dglickman and me this week, staring into the abyss. We had quite a wide-ranging conversation, starting with the varieties of modern Buddhism (inspired by a recent Stoa session) before circling back to Nihilism.

We speculated on whether wars between branches of the same religion were primarily a European phenomenon (Google suggests probably not), whether Confucianism counts as a religion, and why there is no good English word for traditions like Stoicism and Confucianism that occupy religion-like territory without being religions in the same sense as the typical world religions.

We joked about training GPT-3 on the Chapman corpus to get some additional insights, but the more we talked about it, the more it sounded like a good idea for a future project.

I offered my own attempt at defining the meaning of meaning in a way that relates the various senses of the term, from the meaning of a sentence to the meaning of life.

The point is that [[truth]] always depends on the [[meaning]] which is created by agents through the process of [[interpretation]]. The necessary implication is [[truth]] cannot be [[objective]]. The postmodernists were right.

So the meaning of something is how it is interpreted by an interpreter. The meaning necessarily depends on the interpreter. For the record, @dglickman seemed skeptical that my definition captured the essence of the concept. :slight_smile: We tried to nail it down by looking for extreme examples at the simple end of the spectrum: worms and thermostats. Does it make sense to say the thermostat interprets the value of the thermometer as an ambient temperature? I conceded probably not; it has no concept of ambient temperature. Maybe the difference between the temperature value and its set point can be interpreted as pain or discomfort? We went off on a tangent discussing the difference between pain and suffering without arriving at any conclusions, which led to another discussion about aporia.
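
For concreteness, the entire “inner life” of the thermostat we were debating amounts to something like this (a schematic sketch); whether the comparison below counts as “interpretation” was exactly the question.

```python
# A schematic thermostat: all it does is compare a number to a set point.
# "Ambient temperature", let alone "discomfort", exists only for us.
def thermostat(reading, set_point=20.0, hysteresis=0.5):
    if reading < set_point - hysteresis:
        return "heat on"
    if reading > set_point + hysteresis:
        return "heat off"
    return "no change"

print(thermostat(18.0))  # heat on
print(thermostat(21.0))  # heat off
```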

I mentioned I had recently seen a picture on twitter depicting Nihilism as a whirlpool endangering a nearby ship, and how it was a good metaphor for navigating the space between stages 4 and 5 while avoiding the nihilistic abyss of stage 4.5. Found it:

I mentioned my loss of faith in objective meaning (Eternalism) was similar to my loss of faith in objective truth and objective value, for similar reasons. We discussed the nature of objectivity which turns out to be a fairly tricky concept involving counterfactuals (theoretical observers rather than the lack of any observers). For example, what would the moon have looked like from the surface of the Earth one billion years ago, before anything had evolved eyes to look? Still depends on who you imagine is looking.

We discussed the possible futures of Landry’s Immanent Metaphysics. Will it one day be regarded as a great work? Very difficult to assess from this vantage point, but we agreed it was possible. And since it is possible, it will definitely happen in some fraction of future timelines, from my Everettian many-worlds perspective. We wrapped up by talking about how measure theory applies to infinite timelines and probability, suggesting an appropriate article for next week if we want to continue the discussion along those lines:

Oct 16 #50

Back to a full house, we started with a discussion of the relation between probability theory and rationality. Evan brought up Peirce’s abduction as an area that doesn’t get sufficient attention:

I characterized it as “inference to the best explanation” as explicated by David Deutsch, extending Popper’s theory of epistemology.

I was reminded of Jake’s thread

Evan recommends

The rabbit hole of the week was Isaac Newton and magic. He spent the latter half of his life obsessed with alchemy, numerology, bible codes, and gematria:

From one perspective it looks like he went crazy, but from his own perspective this was probably a reasonable pivot. I suggested that his earlier work developing the mathematics of science would certainly look like magic to a pre-scientific culture; after all, he was manipulating symbols in a ritualistic way to gain real control and prediction, traditionally the domains of magic.

We got onto the topic of the definition of risk, with a bit of disagreement over whether Taleb’s definition is substantially different from “a value attached to a probability”.

Another plug for this conversation where Taleb goes into some detail:

Apparently Joscha Bach really likes squirrels?

ob. xkcd h/t @Valeria

ob. LW h/t @red_leaf recommends

Evan recommends

We ended with a discussion about where to take this salon next, after the Chapman material. There was some appetite for taking on Dreyfus if not Heidegger for source material.

We also spent some time discussing Vervaeke and Awakening From The Meaning Crisis. Perhaps we should consider a pivot?

Oct 23 #51

After warming up with a discussion about the new Dune movie, we discussed how the article (which set up the serial publication of The Eggplant in general) made lots of great points, but we agreed with the top comments from the rationalists defending LW: there was nothing in the article they would disagree with. Namely, Bayesianism is a necessary but not sufficient condition for rationality.

I conceded that Bayesianism can be very seductive insofar as it seems simple and universal in a sense, once you see belief in terms of credences. There was definitely a Bayes phase at LW when it was treated like a zen philosophy or a martial art or even a secret society, the so-called “Bayesian Illuminati”. I confessed I was the originator of that last one. :blush:

But no one uses the equation daily as far as we know. The last time I used it explicitly was in trying to update my credence on the covid lab leak hypothesis.

For the record, from another discussion thread:

According to wikipedia there are 10,000 cities in the world and 50 BSL-4 labs. The probability that covid originated in a city that coincidentally has a BSL-4 lab is 0.5%. What am I missing here? Bayesian priors. Brett Weinstein correctly points out in the Rebel Wisdom video that most novel viruses have a zoonotic origin so you have to take that into account. Let’s say for the sake of argument that if you didn’t know what city the virus originated in you think there’s a 99% chance it has a zoonotic origin. Then conditioned on the evidence of it originating in a city with a BSL-4 lab, the odds go from 99:1 against to 99:200 against, or approx 2:1 for a lab origin, ~67%. And that’s before taking into account the recent Fauci email leaks about funding gain of function research in Wuhan.
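
The same update worked through numerically (just reproducing the arithmetic in the quote; treating the evidence as certain under the lab hypothesis is the simplifying assumption):

```python
# Back-of-envelope Bayes update from the quote above.
prior_lab = 0.01                          # 99% prior for zoonotic origin
prior_zoonotic = 1 - prior_lab

p_evidence_given_lab = 1.0                # assume lab origin implies a BSL-4 city
p_evidence_given_zoonotic = 50 / 10_000   # 50 labs among ~10,000 cities

odds_lab = (prior_lab * p_evidence_given_lab) / (
    prior_zoonotic * p_evidence_given_zoonotic)

print(f"odds for lab origin: {odds_lab:.2f} : 1")       # ~2.02 : 1
print(f"probability: {odds_lab / (1 + odds_lab):.0%}")  # ~67%
```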

Can a rationalist have anti-Bayesian priors? There may in fact be good reason to think there is a non-trivial correlation between orthodox Bayesians (as exemplified by MIRI and CFAR) and psychological problems:

https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe

We discussed how all models could be wrong (not sure if Valeria and Daniel reached agreement on that), the difference between the universe and the observable universe (I changed my mind and agreed they are distinct and it was important to qualify the latter), and different definitions of “truth” (as demonstrated by Jordan Peterson and Sam Harris in their now infamous argument).

On the topic of reasonable theists, @Valeria recommended two videos with Jonathan Pageau:

We didn’t come to any conclusions but made some progress on what sorts of entities can be ascribed agency, e.g. the environment, egregores, corporations, thermostats, etc. Do they require thoughts, consciousness, beliefs, goals, sensations? I’m leaning towards an instrumental definition: ascribe agency when it is useful to do so, with an eye to accuracy and prediction (not just for entertainment purposes).

After a lengthy digression on the travesties of the current justice system we turned an eye toward the future.

The next reading for the grand finale of the first year, meeting #52, of the Eggplant book club will be the essay that gave The Bridge discord community its name:

For the 2nd year we discussed selecting readings from Chapman’s source material, and related subjects from (for example) LessWrong, SSC/ACX, and Vervaeke’s Awakening From the Meaning Crisis. e.g. Peirce’s How To Make Ideas Clear and Quine’s Two Dogmas of Empiricism

To reflect this expanded direction I propose we rename our group to the Aubergine Society.

Oct 30 #52

We had a full house for the final session of year 1 of the Eggplant book club, starting with a discussion about the differences in style between Vervaeke and Chapman. While we all agreed Vervaeke’s pedagogical style was very academic (literally a series of lectures), we disagreed on whether his conversational style should be interpreted as angry or passionate. :slight_smile:

Apparently there are some good conversations happening on the Evolving Ground slack with Chapman participating. Evan mentioned they are in the process of moving to Discord so they can keep their history, and TIL the notion of thread necromancy (resurrecting a long dead thread).

We spent most of the session discussing the two bridges, from 3 to 4 and from 4 to 5. Chapman, of course, focuses on the 2nd bridge from 4 to 5, as does Evan in his Bridge series. The question came up of whether the first bridge is more important with respect to avoiding civilizational collapse. Quite possibly it is, but there are already plenty of bridge builders there, and that raised the question of whether someone at stage 4 or stage 5 would be a better teacher for bringing someone to stage 4. This turned into the rabbit hole of the week, but I ended up agreeing with Evan, even coming up with the same analogy about teaching math. Who is better placed to teach a grade school student about, say, calculus: someone who has taken calculus at the university level (analogous to stage 4 here), or someone who has a PhD in math (stage 5)? Clearly (we suggest), the stage 4 teacher is in a much better position to relate to the student and craft the presentation so they can better learn the material.

We turned to talking about where Kegan got it wrong, or perhaps is outdated considering The Evolving Self was published in 1982:

One possibility is that Kegan severely underestimates the number of people currently at stage 5, mostly (I guess) by ignoring ones trained in the Eastern traditions. Curiously we always end up talking about Buddhism (not really, considering our membership :slight_smile: ), and that led to a conversation about the possibility of stages higher than 5. Daniel dropped a link to a pdf written by one of Kegan’s students that postulates 9 levels. (offline now, maybe it will come back)

On a recent Stoa, Evan interviewed Leigh Brasington, author of Right Concentration

Next week we will discuss the letter.wiki exchange between Vervaeke and Chapman: Unpacking The Meaning Crisis