In the Cells of the Eggplant

May 29 #30

Thanks to @dglickman, @Sahil, and newcomer Valeria for a fun discussion.

We spent most of the time trying to infer the meanings of reasonable, rational, and meta-rational from the table. Somewhat surprisingly, Chapman confirmed in the comments that the columns correspond to Kegan stages:

Yes, you’ve got that pretty much right! I’m taking 3=reasonable, 4=rational, 5=meta-rational. Kegan stage 3 isn’t capable of dealing with complex formal systems.

It was difficult to resist ascribing the label “intuitive” to the reasonable column, but Chapman disclaimed that interpretation explicitly (h/t Sahil):

Reasonableness does not show most characteristics typically ascribed to the non-rational cluster. It is not irrational, emotional, intuitive, creative, superstitious, religious, fantasy-prone, self-deceptive, unconscious, or subjective. The Eggplant doesn’t discuss any of these categories.

I mentioned that the stages reminded me of the midwit meme… this is the one I had in mind (and yes, I’m still somewhat identifying as the midwit here :slight_smile: )

Also of note:

Interestingly, the System 1/2 terminology originated with Keith Stanovich. He subsequently made the point that “System 1” is misleadingly heterogeneous. He also introduced a “tri-process theory” in which one of the three is explicitly meta-rational. In cognitive science, meta-rational operations are often described as “reflective,” and Stanovich’s third process is a “reflection” that judges when it’s worth applying “algorithmic” rationality. “Distinguishing the reflective, algorithmic and autonomous minds: Is it time for a tri-process theory?” In J. St. B. T. Evans & K. Frankish (Eds.), In two minds: Dual processes and beyond, 2009, pp. 55–88.

Jun 5 #31

Perhaps the largest turnout yet, thanks to @Sahil, @dglickman, @Evan_McMullen, and Valeria :slight_smile:

We started with a discussion of the relation between Kegan stages and reasonableness/rationality, trying to make sense of Chapman’s comment:

I’m taking 3=reasonable, 4=rational, 5=meta-rational. Kegan stage 3 isn’t capable of dealing with complex formal systems.

If that is the case, wouldn’t that imply that rationality transcends and includes reasonableness in the same way stage 4 transcends and includes stage 3? But that seems inconsistent with what Chapman has written elsewhere. More investigation required; perhaps someone should ask Chapman directly in a reply in the comments.

The latter half of the session was spent discussing definitions of “understanding”. Searle’s Chinese Room was referenced as a useful metaphor, though I think we all agreed that his argument fails to show that AIs cannot understand in principle.

A real-life example of a system that fakes understanding is the natural language generator GPT-3.

In my view, understanding is on a continuum. A system can be said to understand something to the extent that its internal models are accurately predictive. That necessarily implies that greater understanding requires more sophisticated models in general, but complex models don’t necessarily imply greater understanding because the models can just be wrong.
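To make that concrete, here’s a toy sketch of scoring “understanding” as predictive accuracy (the function and numbers are entirely made up for illustration):

```python
import math

def understanding_score(predict, observations):
    """Score a model's 'understanding' of a process as its average predictive
    accuracy: the geometric-mean probability it assigned to what actually
    happened, so one confidently wrong prediction hurts a lot."""
    total_log = 0.0
    for context, outcome in observations:
        probs = predict(context)                    # model's distribution over outcomes
        total_log += math.log(probs.get(outcome, 1e-9))
    return math.exp(total_log / len(observations))  # in (0, 1]

# A crude model that predicts well shows some degree of understanding;
# a sophisticated model that is simply wrong scores worse, not better.
always_sunny = lambda day: {"sun": 0.9, "rain": 0.1}
observations = [("mon", "rain"), ("tue", "rain"), ("wed", "sun")]
print(round(understanding_score(always_sunny, observations), 3))  # ~0.208
```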

Evan took a different view, reserving the word “understanding” for a minimum level of model sophistication that included a world-model, a self-model, and a model of the process of understanding itself. In Evan’s view, true understanding is relatively rare among humans (present company excluded!). To get a better sense of how Evan understands understanding, I asked how far back in history you would have to go to find the first person who understood anything; Evan said he wouldn’t be surprised if that event predates history by a fair amount.

Evan introduced his ideas in terms of attractors in phase space, while Sahil discussed them in terms of a phase change, like the singularity but on a personal level. The technological singularity as a fundamental phase change does not really fit the definition of singularity from math or physics. Blame Vernor Vinge.

Related topics:

Jun 12 #32

Catala is a domain-specific programming language designed for deriving correct-by-construction implementations from legislative texts.

The Moral Landscape was criticized by many for ignoring the Is-Ought problem

My take is that the criticism is misplaced because Harris is very explicit about conditioning his ethics on the assumption that the purpose of morality is to improve human well-being. Predicated on that, the rest follows.

We turned to a lengthy discussion on what it means to have “clear thinking”. One suggested example was the Alexander Technique as taught by friend of the Stoa, Michael Ashcroft

I learned that there is a difference of opinion on whether it is possible to think without concepts. I had an underlying assumption that concepts were a necessary condition, while Evan (and presumably others) demurred. I wouldn’t mind exploring this further next time.

The Repugnant Conclusion came up again in the context of criticisms of EA. Evan objected to the inclination towards wide-scale control; I objected to the moral weight they attach to future (almost certainly far wealthier) generations.

Sahil recommends Beckstead’s On The Overwhelming Importance of Shaping the Far Future

I don’t recall how we got on the topic of atrocities but there was general agreement that they always seem to involve a rationalization of dehumanization, and therefore rationality was to blame, while reasonableness pointed in the opposite direction.

Mandatory SSC reference courtesy of Sahil:

I mentioned I was confused by the common criticism of Bayesian thinking that determining priors is left as an exercise for the reader. I suggested that you can always start from hypothetical complete ignorance and use priors of 0.5, then take into account any evidence you do have.

More specifically, if two people are genuine Bayesian rationalists with common priors, and if they each have common knowledge of their individual posterior probabilities, then their posteriors must be equal.

Evan resolved an additional confusion here. The “common priors” in this context doesn’t mean that they start with the same credences; rather, that they start with the same possibilities.
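For the record, here’s a toy sketch of the “start at 0.5 and update on whatever evidence you have” suggestion (the likelihood numbers are invented purely for illustration):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """One application of Bayes' rule: P(H|E) = P(E|H)P(H) / P(E)."""
    numerator = p_evidence_if_true * prior
    evidence = numerator + p_evidence_if_false * (1 - prior)
    return numerator / evidence

# Start from hypothetical complete ignorance: P(H) = 0.5.
credence = 0.5
for p_if_true, p_if_false in [(0.8, 0.3), (0.7, 0.4)]:  # two pieces of evidence
    credence = bayes_update(credence, p_if_true, p_if_false)
print(round(credence, 3))  # ~0.823
```

Two agents who start from the same prior and see the same evidence end up with the same posterior, which is the intuition behind the agreement theorem quoted above.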


Thanks to @Evan_McMullen @dglickman @Sahil and Valeria for a stimulating discussion, and apologies for ending the session accidentally when I left!

Jun 19 #33

Some topics that came up…

AIs, unsupervised learning, DeepMind’s AlphaZero

Valeria recommends:

https://www.sciencedirect.com/topics/neuroscience/somatic-marker-hypothesis

I mentioned I have not signed up to get my brain and spinal cord frozen upon death, but only because I live in Canada and the logistics are too tricky. I am a long-time member of Alcor, led by my good friend Max More.

Thanks to @dglickman, @Sahil, and Valeria for another good round of discussion.

Jun 26 #34

We started with a deep dive on what we mean when talking about “thoughts”. It turns out, not surprisingly, that we had quite different views, though with some commonality. There seemed to be a core of language-oriented stream-of-consciousness thoughts we all agreed on, but once we strayed into the territory of memories, perceptions, sensations, and non-symbolic or subconscious thoughts, there was much less consensus on whether these counted. Freud’s massive influence was acknowledged but not at all venerated, quite the opposite.

I recounted a story I heard Nick Chater tell on a podcast (Jim Rutt not Sam Harris)

Chapman noted in footnote 3:

This is also the easiest aspect of human active vision to study scientifically, because eye tracking apparatus can determine where you are looking, with high precision, as you move your eyes around.

Chater described an experiment that used eye tracking to show the subject a screen of text, but only the part of the screen the subject was looking at contained the actual text; the rest of the screen had constantly changing text. To the subject it looked like a normal page, but to anyone else it was a constantly shifting page of gibberish.

@Evan_McMullen expressed strong skepticism about the claim that the mind is flat. As far as I recall, Chater would say the depth of the mind is an illusion, kind of like how our field of view has the illusion of detail only because it is detailed wherever we happen to be looking. The mind appears deep only because it has depth whenever we look for depth, if that makes sense.

@Sahil was surprised I didn’t object to Chapman’s objection to objective perception:

“Objective” would mean that it is independent of your theories, of your projects, and of anything that cannot be sensed at this moment, such as recent events. We saw that, for several in-principle reasons, this seems impossible.

Even though I would have defended an objective reality a few years ago, I’ve changed my mind(!) since researching QM and reading Hoffman, and almost certainly would not defend objective perception, as that sounds incoherent to me (like objective value is incoherent, another tangent). I’m not sure where our wires got crossed.

Mandatory Joscha Bach reference…

A discussion on why some philosophers make a career around incoherent thought experiments like p-zombies cough Chalmers cough led to Graeber’s concept of BS jobs:

We agreed the difference between Dennett and Chalmers, between good faith and bad faith philosophy, would be very difficult to distinguish as an outsider.

I mentioned that I had met both Dennett and Chalmers, who were a gentleman and an asshole, respectively. To be fair, I met Chalmers a very long time ago at the Santa Fe Institute when he was still a grad student, so he may have changed a lot since then.

As always, much appreciation to the crew for another stimulating discussion!

Jul 3 #35

As is tradition, I objected to Chapman’s opening statement:

The typical rationalist view is that the purpose of language is to state facts and theories.

It may be a small difference, but I suggested that the rationalist view is that the purpose of language is symbolic modeling, and that stating facts and theories is derivative. Sahil pointed out that Chapman’s statement could be interpreted the same way, and doesn’t necessarily imply communication.

We fairly quickly returned to the perennial question on whether rationality is an ideal, even while we all agreed that it is impossible to achieve for finite, bounded agents. Evan caught the rest of us off guard by arguing that ideals in general were harmful in the sense that Platonism is considered harmful. We spent most of the rest of the session unpacking this bold claim.

Some topics that came up:

Forrest Landry’s ethics was discussed:

I confessed I’m still trying to figure out how to reconcile physics and choice, i.e., what it means for a physical system like a biological organism to make choices.

Jul 10 #36

We discussed computational metaphors for references, the ubiquitous “pointer”, and took turns trying to explain programmatic pointers to Christian. I tried to tie it back to the Eggplant chapter by noting that in natural language, words are essentially pointers, pointing to concepts, and that ability gives humans great power in thinking and communicating.
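For anyone reading along, here’s roughly the kind of toy example we were reaching for (Python references standing in for pointers; the “concept” is obviously a stand-in):

```python
# Two names (words) can point at the same underlying object (concept).
concept = {"kind": "animal", "legs": 4, "says": "woof"}

dog = concept     # "dog" points at the concept; nothing is copied
perro = concept   # a word in another language can point at the same concept

perro["says"] = "guau"   # updating the concept through one name...
print(dog["says"])       # ...is visible through the other: guau
print(dog is perro)      # True: both names refer to the same object
```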

TIL it was a Latin translation of Harry Potter that enabled Evan to become truly fluent

In the context of learning to code, Christian mentioned an interesting new standard:

Sahil replied with the standard retort on new standards:

AI is coming to coding:

As is tradition I took exception to one of Chapman’s main claims:

Referring is accomplished by whatever means is available, and improvised methods are unenumerable, so there can’t be any systematic theory or rational taxonomy of reference, only an unsystematic catalog of special cases.

I suggested there are many classes that contain unenumerable (infinite) instances that are nevertheless amenable to systematic theory and/or rational taxonomy, namely real numbers, possible functions, possible programs, and more prosaic classes like possible human experiences, possible books, etc.

Do infinities actually exist? Is the concept misleading? Joscha Bach argues that only the computable is real (and by extension infinities are not):

On understanding different kinds of infinities Evan recommends

I recommend

Me and Eliezer on discovering digits of Pi

We discussed scientism as eternalist monist physicalist religion, and Evan took the opportunity to analyze it in terms of Landry’s IDM: the immanent, the omniscient, and the transcendent.

Jul 17 #37

Before we started on today’s reading we revisited some of the themes of the past couple weeks.

What does Chapman mean by “unenumerable”? We’re leaning toward not literally infinite, but practically intractable, sort of like the “ten thousand things” in Daoism

We briefly went down the rabbit hole of whether math is part of natural philosophy. What is math really?

In revisiting the harm in neoplatonism, Evan brought up the concept (TIL) of archetype entanglement

When we finally started on the topic (40 minutes into the session) I suggested that Chapman, in describing various senses of how the word “belief” is used reasonably, missed an important underlying pattern. Me on beliefs

A belief is really a model of conditional behavior. When we say that an agent holds a belief we are claiming that the agent will tend to behave in certain ways under various conditions. That doesn’t mean that the agent represents the belief per se anywhere in its brain (or whatever hardware it uses to think). It does mean that the belief is represented in the agent assigning the belief to the other agent. Beliefs are in the eye of the beholder. Beliefs are mental models that agents use to model the agency of agents. A belief requires two agents, one to assign the belief to the other, unless the agent is assigning the belief to itself.
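A toy sketch of what I mean, with everything invented for illustration (the point being that the belief lives in the observer’s model of the agent, not necessarily anywhere inside the agent):

```python
def ascribe_belief(agent, predicted_behavior, conditions):
    """The observer ascribes the belief: it holds iff the agent tends to behave,
    across the tested conditions, the way the belief predicts it would."""
    return all(agent(c) == predicted_behavior(c) for c in conditions)

# An agent that grabs an umbrella whenever the forecast says rain...
agent = lambda forecast: "umbrella" if forecast == "rain" else "no umbrella"

# ...can be ascribed the belief "the forecast is reliable" by an observer,
# even though nothing labelled "belief" is represented inside the agent itself.
belief_forecast_is_reliable = lambda forecast: "umbrella" if forecast == "rain" else "no umbrella"

print(ascribe_belief(agent, belief_forecast_is_reliable, ["rain", "sun", "rain"]))  # True
```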

We discussed how drugs and meditation have similar effects using the framing of raising the temperature of the connectome before neural annealing settles into new patterns of thought.

Evan was able to confirm my suspicion that Evolving Ground (founded by friend of the Stoa, Jared Jaynes, and Chapman’s partner, Charlie Awbrey) is inspired in part by Whitehead’s process philosophy

Evan recommends Michael Ashcroft’s course on the Alexander Technique.

We ended with a very fun group dunk on post-rats, suggesting that one of the risks of leaving the path and bushwhacking was falling off unmapped cliffs, metaphorically speaking.

h/t Valeria for mentioning this classic:

Jul 24 #38

First I want to note an odd coincidence. In the Discord channel I posted a photo of a page from a book that had just been delivered and challenged the readers to guess which book.

@Evan_McMullen correctly noted it is a page from Chapman’s PhD thesis book:

What I didn’t anticipate is the chapter we covered today would mention it explicitly:

This chapter recapitulates artificial intelligence research from my PhD thesis, which I quote from below. I wrote a program, Sonja, that took instructions while playing a video game; it illustrated most of the points covered here.

Maybe a small coincidence but I found it remarkable.

Also remarkable that I made it all the way to paragraph 7 before objecting to one of Chapman’s claims:

This is obvious, but bears emphasizing due to a potential rationalist misunderstanding. Often programming is introduced as “giving the computer instructions,” and programs are likened to recipes. This is probably helpful for novices, but potentially misleading in that a program does have to specify what will be done in complete detail. Then the computer does exactly and only what the program says.

I would say that computer programs are a lot more like recipes in that a great deal of information is left unspecified, and left up to the computer to interpret. As a group we discussed there being a spectrum here, with some kinds of instructions (recipes) being less formal than others (computer programs), and something like legal language in between in terms of how narrowly the instructions must be interpreted by the receiver.
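To illustrate my side of it (an example I’m making up here, not one from the chapter), even a complete, fully formal program leaves a huge amount for the platform to fill in, recipe-style:

```python
# A complete, runnable program that nevertheless specifies almost nothing
# about how the work actually gets done:
names = ["eggplant", "aubergine", "brinjal"]
print(sorted(names))

# Left entirely to the interpreter, OS, and hardware to decide:
#  - which sorting algorithm is used and how strings are compared
#  - how the list and strings are laid out in memory, and when they are freed
#  - how "print" reaches a terminal, a pipe, or a file, and in what encoding
#  - when the process runs and on which CPU core
```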

I was reminded that reading today’s chapter led to a new (for me) association: meta-rationality (the recognition of nebulosity, with corresponding limitations put on rationality) was like getting greypilled in the vgr/ribbonfarm sense.

Another recommendation from @Sahil to read Scott Aaronson’s Why Philosophers Should Care About Computational Complexity

The rabbit hole of the week was about the reality of formal systems in particular, and abstractions in general. I commented that it is seldom useful to ask whether something is real; rather, ask in what sense it is real. For example, in what sense is the game of chess real while the (Judeo-Christian) god is not (assuming it is not :slight_smile: )?

TIL philosophical nominalism h/t @Sahil

Bookmarked for another time: in what sense is (spiritual) enlightenment real?

Quotable from @dglickman:

Conceptual analysis is a dangerous activity

Next week (Jul 31) we’ll cover the introduction to Part 3:

Jul 31 #39

We started with a discussion about Chapman’s use of the term “circumrationality”. Evan associated it with “circumambulation” in the sense of walking around a garden, and tending to its edges. I yes/anded that, noting that it could be generalized to systems at many scales: cells, organisms, and organizations all tend to maintain their borders with the environment.

The only real (and relatively slight) disagreement with this week’s reading was raised by Valeria:

“But I don’t get it,” the student struggling with high school algebra protests. “What does ‘x’ mean?” It doesn’t mean anything. That is the whole point. That meaninglessness is why formalism works. You cannot get this by learning facts, nor procedures, though both are necessary.

We agreed that even in this context the variable x is not completely meaningless. It serves some function which gives it at least some meaning.

@dglickman referenced a relevant text:

The rabbit hole of the week was an exploration of rhetoric

I find it interesting that the Sophists were once respected teachers, but now “sophistry” is an allegation of using fallacious arguments, especially with the intention of deceiving. Similarly, the everyday usages of “stoic”, “cynic”, and “epicurean” have been debased over time.

Evan mentioned a Latin diss track. I was amused by the thought of a devout Christian monk transcribing it for posterity.

Before I had to leave we were talking about “envisioning” in the context of programming computers. I mentioned how I usually envision the components of a program as agents, calling on each other, passing messages back and forth, depending on the knowledge and capabilities of each other, like an organization of people working together.

Aug 7 #40

The discussion this week was mainly around the role of analogical reasoning and whether it is necessary for learning and understanding. We agreed probably not for learning, since animals learn but don’t seem to understand by analogy. Sahil didn’t see a reason to distinguish between map->territory type maps and map->map type maps, the latter presumably associated with analogies. Evan suggested that non-human animals have a hard limit on recursion levels when it comes to mental maps, which prevents them from the kind of analogical learning under discussion.

Sahil recommended:

I asked if it was possible to understand anything new without relying on analogies. That immediately brought up the (in)famous Mary’s Room thought experiment:

Evan recommended Hofstadter for a compelling extended argument on why analogies are fundamental to human cognition:

I asked if long-term memory requires changing neural connections, and it was suggested that it does not; it just requires adjusting ANN-style weights between neurons. No one present knew what that might entail at the biological level. I was reminded that Hawkins claims to have made great strides in this area in recent decades, as described in his new book:

I also recommended Clark in the chat:

We discussed recent discoveries that upended long-held assumptions about (lack of) neuroplasticity:

Since we’re running out of Eggplant material, Evan (I think) humorously suggested that we join the Evolving Ground community to hassle Chapman into writing more

We pivoted to a discussion on the history of Buddhism and the relations between various lineages and traditions. Evan compared Rime to Common Lisp, as a movement to bring together the various extant strains.

Valeria mentioned this classic:

Without looking Evan thought it was either that one or this one:

More book recos:

next week:

Aug 14 #41

Valeria started by noting two areas seemingly actively undergoing ontological remodeling these days: the LGBTQIA2S++ community of ever-expanding inclusion, and the realm of psychological disorders.

This enabled @Evan_McMullen to get our mandatory SSC/ACX refs out of the way right at the start :slight_smile:

I drew attention to a possible connection between Ontological Remodeling and Ontological Design as articulated by Fraga and Cox on the Stoa:

The first rabbit hole we went down was about education, and in particular meta-education, i.e., the education of teachers and the state of the art, and why it seems to be so bad. We converged on a few related pessimistic takes: education majors statistically have the lowest educational standards in university, and that tends to be self-reinforcing, as the ones who stay on to do research and teach are maybe not the best placed to make progress in the field. Add to that the fact that teachers today are relatively poorly paid civil servants, making it difficult to imagine how the field can improve in the near term. The bureaucracy inevitably leads to a coalition of blankfaces, a concept from Scott Aaronson’s blog:

https://www.scottaaronson.com/blog/?p=5675

Though the word was new to me, I mentioned that I had read complaints about these types of personalities dating back 2200 years to the Qin Dynasty

Evan said he wouldn’t be surprised if there were much older written complaints on Sumerian tablets.

A side discussion in the chat touched on Nozick’s utility monster:

After reading this passage…

For instance, eating a cookie might bring only one unit of pleasure to an ordinary person but could bring 100 units of pleasure to a utility monster.

…I will always picture the Utility Monster like this

Somehow it escaped me that we have an actual utilitarian in our midst (Sahil). I suggested this impossibility theorem might be bad news

Evan boldly came out against thought experiments, saying

they’re basically fancy tautologies, i think, and are used to smuggle in questionable metaphysical assumptions

I mentioned that I just finished reading a wonderful introduction to Constructor Theory which introduces counterfactuals into fundamental physics:

I was surprised to learn that Daniel had started the book but wasn’t impressed so far. YMMV. Perhaps for David Deutsch fans only?

We ended with a discussion on how most of us (all besides me?) subscribe to Buddhism in one form or another.

Worth revisiting The Bridge sessions:

We should try to employ the rationalist double crux more; because we tend to be so close in our stances, it is more interesting to discover where we differ:

Welcome @red_leaf !

Next week:

Aug 21 #42

Sahil kicked off the discussion with a good question:

Chapman says “Those are designed to minimize nebulosity.” in the Fitting the Territory to the Map section. How literally does he mean this? Can you do that, or only squirrel away nebulosity for a while?

The rabbit hole of the week was an exploration of a potentially new idea brought up by Evan and named by me, a “conservation of nebulosity”. I suggested there might be a relation to entropy there, considering that it is physically impossible to decrease entropy in a system without increasing entropy in its environment.

Some topics that came up:

TIL “chthonic”

Teilhard de Chardin and the Noosphere

Christopher quoted:

“Does this matter? Everyone does understand that “the map is not the territory” is only a metaphor. But metaphors shape thought. The phrase imports a mass of implicit, embodied experiences of using maps, and additional associated concepts and practices that “representation” and “model” don’t. That unhelpfully directs attention away from ways maps are atypical.”

…and asked how much this matters. I responded with another question: “How much does choice of programming language matter?” I hoped my question implied that in one sense it doesn’t matter, since all (Turing complete) languages are equivalent, but at the same time it matters a great deal because the choice of language impacts actual software development in myriad ways.

Sahil supported my point by referencing:

ob. LW ref, the 12th virtue…

The good regulator theorem makes another appearance. Control systems require a model of what they are controlling.

The image is not the imagined

maps as hash tables

maps as functions

A related concept attributed to SF author Neal Stephenson:

full house!

Aug 28 #43

Before we got into today’s topic we discussed Christopher’s skepticism that those interested in Kegan stages would be mostly stage 5, considering that less than 1% of the population is, by Kegan’s estimate. The way I think of it is that people tend to behave at different stages at different times, and if you imagine taking a sample over a period of time (say the past year) you get a histogram of their stages, which could be interpreted as a profile. The people interested in Kegan stages would likely be weighted toward the right side, but few would be entirely stage 5.
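A toy sketch of the histogram framing (the sampled stages are invented, just to make the idea concrete):

```python
from collections import Counter

# Hypothetical stage assessments of one person's behavior, sampled over a year.
samples = [3, 4, 4, 3, 4, 5, 4, 3, 4, 4, 5, 4]

profile = Counter(samples)
for stage in sorted(profile):
    print(f"stage {stage}: {profile[stage] / len(samples):.0%}")
# Someone interested in Kegan stages might be weighted toward 4 and 5,
# but almost nobody's profile is 100% stage 5.
```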

Somehow that led to us agreeing that children were primarily for psychological experiments

and Valeria was quick with the ob. xkcd

On the main topic of ethnomethodologies, Christopher referenced a meta-study:

Evan lamented that Chapman had not taken the opportunity to mention that Kary Mullis, who got a Nobel prize for his invention of the polymerase chain reaction (PCR) method, claimed that LSD had “helped him develop the polymerase chain reaction that helps amplify specific DNA sequences”.

Chapman, somewhat surprisingly, never mentions psychedelics, though the character based on him allegedly was not unfamiliar with them in Ken Wilber’s book:

Turns out Evan and I were both exposed to weird books from our dads’ bookshelves when we were young, including this one in common:

I recommended watching Vervaeke on Peterson’s podcast, though I had to admit that JBP would do better by letting his guests talk more:

Two Stephenson novels got high recommendations:

We heard that Evan and Josh are discussing a version of Landry’s Immanent Metaphysics inspired by the AI book in The Diamond Age. I suggested a Roam book would be a good start.

We introduced Roam to Christopher

Valeria recommends a free alternative:

Maybe the first Roam book:

I’m working on my own, Metamind

Some other topics that came up:

I brought up real estate agents in the context of the agent/principal problem:

Christopher referenced A methodological systematic review of meta-ethnography conduct to articulate the complex analytical phases

Next week:

Sep 4 #44

The general consensus was Chapman was largely on point in his criticisms of AI research over its history. The only mitigating factor might be that AI is a relatively young discipline, so maybe it shouldn’t be surprising that so many rookie errors were made along the way.

Chapman links the paperclip maximizer meme to a LessWrong article that attributes it to Bostrom.
I added a comment tracing the probable origin to a post by Yudkowsky on the extropians mailing list, which I was hosting at the time (from 1996 to 2003).

Christopher was unfamiliar with the Extropians but discovered a related article.

The extropians mailing list was quite influential. Robin Hanson met Eliezer Yudkowsky and started the Overcoming Bias blog, where the LessWrong sequences were originally published, leading to the Rationalist movement. Hal Finney, Nick Szabo, and Wei Dai were all regular contributors and instrumental in the blockchain and cryptocurrency movement. Maybe Satoshi was a subscriber too. :wink:

I mentioned I am still working with one of the Extropians co-founders, Tom Bell, on his open source legal system Ulex

Christopher referenced an article he published while working on a PhD… ShapeShift: A Projector-Guided Sculpture System

I mentioned that I started my research in AI around the same time that GOFAI was collapsing in the early 90s. My dept at the U of Calgary was all working in GOFAI except for me, the outcast working with neural nets and genetic algorithms. I was mainly influenced by Randall Beer’s Intelligence as Adaptive Behavior and David Goldberg’s Genetic Algorithms in Search, Optimization and Machine Learning.

I mentioned on Discord that I was delighted to discover that Joscha Bach and I shared an AI prof, Ian Witten. It was in Ian’s intro course that I recapitulated the blocks-world toy AI of Winograd’s SHRDLU, which exemplifies impressive results through mostly smoke and mirrors.

Some other topics that came up:

Sahil referenced

I wonder if he noticed that the SciAm article was written by Ed Regis, the same author as the Extropians article in Wired linked above. If not, that’s a pretty odd coincidence!

Interesting that Valeria’s boyfriend worked for Ben Goertzel at WebMind. Goertzel was also a regular on the extropians list. Another connection is that I worked with Shane Legg at AGI startup a2i2 after he worked for WebMind, and before Shane went on to co-found DeepMind. Shane did his PhD under Marcus Hutter on models of superhuman intelligence including AIXI.

I described my Turing Test variant, the Pike test

I noted that Yudkowsky seems as pessimistic as ever on the alignment problem…

Christopher referenced:

next week:

Sep 11 #45

We started with a meta-level discussion of how Chapman is obviously somewhat bitter that cognitive science is only recently rediscovering the advances he and his colleagues made in the mid- to late-80s. I wondered if that kind of thing happens all the time in science: some group forms around a memeplex that gets forgotten and rediscovered some time later. The rationalist community in particular seems prone to starting over from scratch, perhaps because they tend to be smart contrarians who prefer first-principles thinking…

The rabbit hole du jour was about the problem of social media algorithms optimizing engagement by generating outrage. Evan was in favor of some kind of ban, though perhaps through private guilds rather than state enforcement. I suggested that this was just the most recent iteration of a moral panic around a new form of media destroying society by making people engage in the wrong kind of behavior (by previous standards), same as what happened previously with novels, radio, movies, and TV. We discussed whether it was really different this time.

We eventually returned to the article with a discussion about what Chapman might have meant by “not computational”:

We will not be surprised if the mind is similarly made of a stuff that is not computational, though it emerges from a computational medium.

Some topics that came up along the way…

ob. SSC

next week:

Sep 18 #46

We started with a discussion of how Kegan stages 3, 4, and 5 relate to the major categories of rationalism critiques: ignorant, irrelevant, and inscrutable respectively. I don’t recall the word “scrutable” being used unironically before in a conversation.

Dunning-Kruger not reproducible?

I was surprised to learn that others here (Valeria and Evan) pine for Orkut

ob. SSC clocking in at the 38 minute mark, perhaps a new record

We went down a bit of a rabbit hole on the nature of causality

quotable from @Valeria

correlation does not imply causation, but correlation is correlated to causation

I illustrated my contention with a new meme

Evan referenced

h/t Valeria for

Daniel pointed us to

and

@red_leaf brought us back to the main topic, leading a discussion on how we can address Chapman’s claim:

Meta-rationalism is inscrutable because we meta-rationalists have failed to explain our claims in a way anyone can understand.

We discussed some funny reactions to Evan’s thread on the meaning crisis:

Always appreciate these summaries @davidmc


Sep 25 #47

We led with a discussion of whether cargo cults are more in line with Kegan stage 3 or 4. The original cargo cults were clearly from early (<4) stage cultures, but much of the article argued that stage 4 science is largely cargo cultish.

Evan pointed out that footnote 21 mentions the stages explicitly:

Another way of putting this, in the language of adult developmental theory, is that airport operations require stage 4 (systematic) cognition, but scientific innovation requires stage 5 (meta-systematic) cognition.

We spent most of the session on a deep dive discussing whether modern democracies are cargo cults, as Evan contends: mere simulacra of more authentic democracies of the past (a reference, I presume, to Baudrillard). Are we just going through the motions? Does the appearance of a democracy provide cover for an elite class systematically domesticating and “farming” the population like so much livestock?

We talked about the relatively recent and largely unexamined history of the nation state, in the context of egregores and memeplexes.

Some topics that came up along the way:

(tl;dr the US went off the gold standard in 1971, a favourite topic of bitcoin maxis)

There’s no escaping the Egregores! (even if you think you have selected one, it is only because it has possessed you)

ob. SSC

double feature

The Iron Law of Bureaucracy is due to SF writer Jerry Pournelle

Plugging my own take on egregores as memetic subcultures

Later in the session we pivoted to a discussion (maybe a lament) on how wokism has taken over the DNC and many other formerly fine institutions (e.g. SciAm and the ACLU), and an examination of elite counter-signaling behavior.

Oct 2 #48

Written in 2015, this article predates and to a large extent presages The Eggplant. If this curriculum actually became a course, what would that entail? We agreed almost certainly more than a single university course, and perhaps less than a full degree. Probably more like a Master’s degree with several courses, assuming the student starts with a STEM undergrad degree.

Some topics that came up along the way…

Apparently Douglas Lenat’s Cyc project is still going…

In his interview with Lex, Knuth recounts the origins of his Surreal Numbers book on a napkin scrawled by John Conway in Calgary

We pivoted to a discussion of Vervaeke’s AFTMC series:

I still have to read Vervaeke’s and Chapman’s exchange on letter.wiki Unpacking The Meaning Crisis

I had a hard stop after only one hour, but at the time I left we were discussing whether rationality requires language, and what constitutes a language in this context. I wanted to point out that even honey bees have a language: